Nov 22 02:39:11 np0005532048 kernel: Linux version 5.14.0-639.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025
Nov 22 02:39:11 np0005532048 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 22 02:39:11 np0005532048 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 22 02:39:11 np0005532048 kernel: BIOS-provided physical RAM map:
Nov 22 02:39:11 np0005532048 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 22 02:39:11 np0005532048 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 22 02:39:11 np0005532048 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 22 02:39:11 np0005532048 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 22 02:39:11 np0005532048 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 22 02:39:11 np0005532048 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 22 02:39:11 np0005532048 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 22 02:39:11 np0005532048 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 22 02:39:11 np0005532048 kernel: NX (Execute Disable) protection: active
Nov 22 02:39:11 np0005532048 kernel: APIC: Static calls initialized
Nov 22 02:39:11 np0005532048 kernel: SMBIOS 2.8 present.
Nov 22 02:39:11 np0005532048 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 22 02:39:11 np0005532048 kernel: Hypervisor detected: KVM
Nov 22 02:39:11 np0005532048 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 22 02:39:11 np0005532048 kernel: kvm-clock: using sched offset of 8064981917 cycles
Nov 22 02:39:11 np0005532048 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 22 02:39:11 np0005532048 kernel: tsc: Detected 2799.998 MHz processor
Nov 22 02:39:11 np0005532048 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 22 02:39:11 np0005532048 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 22 02:39:11 np0005532048 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 22 02:39:11 np0005532048 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 22 02:39:11 np0005532048 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 22 02:39:11 np0005532048 kernel: Using GB pages for direct mapping
Nov 22 02:39:11 np0005532048 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 22 02:39:11 np0005532048 kernel: ACPI: Early table checksum verification disabled
Nov 22 02:39:11 np0005532048 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 22 02:39:11 np0005532048 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 02:39:11 np0005532048 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 02:39:11 np0005532048 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 02:39:11 np0005532048 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 22 02:39:11 np0005532048 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 02:39:11 np0005532048 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 02:39:11 np0005532048 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 22 02:39:11 np0005532048 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 22 02:39:11 np0005532048 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 22 02:39:11 np0005532048 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 22 02:39:11 np0005532048 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 22 02:39:11 np0005532048 kernel: No NUMA configuration found
Nov 22 02:39:11 np0005532048 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 22 02:39:11 np0005532048 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Nov 22 02:39:11 np0005532048 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 22 02:39:11 np0005532048 kernel: Zone ranges:
Nov 22 02:39:11 np0005532048 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 22 02:39:11 np0005532048 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 22 02:39:11 np0005532048 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 22 02:39:11 np0005532048 kernel:  Device   empty
Nov 22 02:39:11 np0005532048 kernel: Movable zone start for each node
Nov 22 02:39:11 np0005532048 kernel: Early memory node ranges
Nov 22 02:39:11 np0005532048 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 22 02:39:11 np0005532048 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 22 02:39:11 np0005532048 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 22 02:39:11 np0005532048 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 22 02:39:11 np0005532048 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 22 02:39:11 np0005532048 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 22 02:39:11 np0005532048 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 22 02:39:11 np0005532048 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 22 02:39:11 np0005532048 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 22 02:39:11 np0005532048 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 22 02:39:11 np0005532048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 22 02:39:11 np0005532048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 22 02:39:11 np0005532048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 22 02:39:11 np0005532048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 22 02:39:11 np0005532048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 22 02:39:11 np0005532048 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 22 02:39:11 np0005532048 kernel: TSC deadline timer available
Nov 22 02:39:11 np0005532048 kernel: CPU topo: Max. logical packages:   8
Nov 22 02:39:11 np0005532048 kernel: CPU topo: Max. logical dies:       8
Nov 22 02:39:11 np0005532048 kernel: CPU topo: Max. dies per package:   1
Nov 22 02:39:11 np0005532048 kernel: CPU topo: Max. threads per core:   1
Nov 22 02:39:11 np0005532048 kernel: CPU topo: Num. cores per package:     1
Nov 22 02:39:11 np0005532048 kernel: CPU topo: Num. threads per package:   1
Nov 22 02:39:11 np0005532048 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 22 02:39:11 np0005532048 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 22 02:39:11 np0005532048 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 22 02:39:11 np0005532048 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 22 02:39:11 np0005532048 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 22 02:39:11 np0005532048 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 22 02:39:11 np0005532048 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 22 02:39:11 np0005532048 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 22 02:39:11 np0005532048 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 22 02:39:11 np0005532048 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 22 02:39:11 np0005532048 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 22 02:39:11 np0005532048 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 22 02:39:11 np0005532048 kernel: Booting paravirtualized kernel on KVM
Nov 22 02:39:11 np0005532048 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 22 02:39:11 np0005532048 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 22 02:39:11 np0005532048 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 22 02:39:11 np0005532048 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 22 02:39:11 np0005532048 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 22 02:39:11 np0005532048 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64", will be passed to user space.
Nov 22 02:39:11 np0005532048 kernel: random: crng init done
Nov 22 02:39:11 np0005532048 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 22 02:39:11 np0005532048 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 22 02:39:11 np0005532048 kernel: Fallback order for Node 0: 0 
Nov 22 02:39:11 np0005532048 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 22 02:39:11 np0005532048 kernel: Policy zone: Normal
Nov 22 02:39:11 np0005532048 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 22 02:39:11 np0005532048 kernel: software IO TLB: area num 8.
Nov 22 02:39:11 np0005532048 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 22 02:39:11 np0005532048 kernel: ftrace: allocating 49298 entries in 193 pages
Nov 22 02:39:11 np0005532048 kernel: ftrace: allocated 193 pages with 3 groups
Nov 22 02:39:11 np0005532048 kernel: Dynamic Preempt: voluntary
Nov 22 02:39:11 np0005532048 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 22 02:39:11 np0005532048 kernel: rcu: 	RCU event tracing is enabled.
Nov 22 02:39:11 np0005532048 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 22 02:39:11 np0005532048 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 22 02:39:11 np0005532048 kernel: 	Rude variant of Tasks RCU enabled.
Nov 22 02:39:11 np0005532048 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 22 02:39:11 np0005532048 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 22 02:39:11 np0005532048 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 22 02:39:11 np0005532048 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 22 02:39:11 np0005532048 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 22 02:39:11 np0005532048 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 22 02:39:11 np0005532048 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 22 02:39:11 np0005532048 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 22 02:39:11 np0005532048 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 22 02:39:11 np0005532048 kernel: Console: colour VGA+ 80x25
Nov 22 02:39:11 np0005532048 kernel: printk: console [ttyS0] enabled
Nov 22 02:39:11 np0005532048 kernel: ACPI: Core revision 20230331
Nov 22 02:39:11 np0005532048 kernel: APIC: Switch to symmetric I/O mode setup
Nov 22 02:39:11 np0005532048 kernel: x2apic enabled
Nov 22 02:39:11 np0005532048 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 22 02:39:11 np0005532048 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 22 02:39:11 np0005532048 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 22 02:39:11 np0005532048 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 22 02:39:11 np0005532048 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 22 02:39:11 np0005532048 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 22 02:39:11 np0005532048 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 22 02:39:11 np0005532048 kernel: Spectre V2 : Mitigation: Retpolines
Nov 22 02:39:11 np0005532048 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 22 02:39:11 np0005532048 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 22 02:39:11 np0005532048 kernel: RETBleed: Mitigation: untrained return thunk
Nov 22 02:39:11 np0005532048 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 22 02:39:11 np0005532048 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 22 02:39:11 np0005532048 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 22 02:39:11 np0005532048 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 22 02:39:11 np0005532048 kernel: x86/bugs: return thunk changed
Nov 22 02:39:11 np0005532048 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 22 02:39:11 np0005532048 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 22 02:39:11 np0005532048 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 22 02:39:11 np0005532048 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 22 02:39:11 np0005532048 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 22 02:39:11 np0005532048 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 22 02:39:11 np0005532048 kernel: Freeing SMP alternatives memory: 40K
Nov 22 02:39:11 np0005532048 kernel: pid_max: default: 32768 minimum: 301
Nov 22 02:39:11 np0005532048 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 22 02:39:11 np0005532048 kernel: landlock: Up and running.
Nov 22 02:39:11 np0005532048 kernel: Yama: becoming mindful.
Nov 22 02:39:11 np0005532048 kernel: SELinux:  Initializing.
Nov 22 02:39:11 np0005532048 kernel: LSM support for eBPF active
Nov 22 02:39:11 np0005532048 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 22 02:39:11 np0005532048 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 22 02:39:11 np0005532048 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 22 02:39:11 np0005532048 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 22 02:39:11 np0005532048 kernel: ... version:                0
Nov 22 02:39:11 np0005532048 kernel: ... bit width:              48
Nov 22 02:39:11 np0005532048 kernel: ... generic registers:      6
Nov 22 02:39:11 np0005532048 kernel: ... value mask:             0000ffffffffffff
Nov 22 02:39:11 np0005532048 kernel: ... max period:             00007fffffffffff
Nov 22 02:39:11 np0005532048 kernel: ... fixed-purpose events:   0
Nov 22 02:39:11 np0005532048 kernel: ... event mask:             000000000000003f
Nov 22 02:39:11 np0005532048 kernel: signal: max sigframe size: 1776
Nov 22 02:39:11 np0005532048 kernel: rcu: Hierarchical SRCU implementation.
Nov 22 02:39:11 np0005532048 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 22 02:39:11 np0005532048 kernel: smp: Bringing up secondary CPUs ...
Nov 22 02:39:11 np0005532048 kernel: smpboot: x86: Booting SMP configuration:
Nov 22 02:39:11 np0005532048 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 22 02:39:11 np0005532048 kernel: smp: Brought up 1 node, 8 CPUs
Nov 22 02:39:11 np0005532048 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 22 02:39:11 np0005532048 kernel: node 0 deferred pages initialised in 7ms
Nov 22 02:39:11 np0005532048 kernel: Memory: 7765676K/8388068K available (16384K kernel code, 5786K rwdata, 13900K rodata, 4188K init, 7176K bss, 616276K reserved, 0K cma-reserved)
Nov 22 02:39:11 np0005532048 kernel: devtmpfs: initialized
Nov 22 02:39:11 np0005532048 kernel: x86/mm: Memory block size: 128MB
Nov 22 02:39:11 np0005532048 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 22 02:39:11 np0005532048 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 22 02:39:11 np0005532048 kernel: pinctrl core: initialized pinctrl subsystem
Nov 22 02:39:11 np0005532048 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 22 02:39:11 np0005532048 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 22 02:39:11 np0005532048 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 22 02:39:11 np0005532048 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 22 02:39:11 np0005532048 kernel: audit: initializing netlink subsys (disabled)
Nov 22 02:39:11 np0005532048 kernel: audit: type=2000 audit(1763797149.503:1): state=initialized audit_enabled=0 res=1
Nov 22 02:39:11 np0005532048 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 22 02:39:11 np0005532048 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 22 02:39:11 np0005532048 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 22 02:39:11 np0005532048 kernel: cpuidle: using governor menu
Nov 22 02:39:11 np0005532048 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 22 02:39:11 np0005532048 kernel: PCI: Using configuration type 1 for base access
Nov 22 02:39:11 np0005532048 kernel: PCI: Using configuration type 1 for extended access
Nov 22 02:39:11 np0005532048 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 22 02:39:11 np0005532048 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 22 02:39:11 np0005532048 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 22 02:39:11 np0005532048 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 22 02:39:11 np0005532048 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 22 02:39:11 np0005532048 kernel: Demotion targets for Node 0: null
Nov 22 02:39:11 np0005532048 kernel: cryptd: max_cpu_qlen set to 1000
Nov 22 02:39:11 np0005532048 kernel: ACPI: Added _OSI(Module Device)
Nov 22 02:39:11 np0005532048 kernel: ACPI: Added _OSI(Processor Device)
Nov 22 02:39:11 np0005532048 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 22 02:39:11 np0005532048 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 22 02:39:11 np0005532048 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 22 02:39:11 np0005532048 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 22 02:39:11 np0005532048 kernel: ACPI: Interpreter enabled
Nov 22 02:39:11 np0005532048 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 22 02:39:11 np0005532048 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 22 02:39:11 np0005532048 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 22 02:39:11 np0005532048 kernel: PCI: Using E820 reservations for host bridge windows
Nov 22 02:39:11 np0005532048 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 22 02:39:11 np0005532048 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 22 02:39:11 np0005532048 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [3] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [4] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [5] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [6] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [7] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [8] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [9] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [10] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [11] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [12] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [13] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [14] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [15] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [16] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [17] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [18] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [19] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [20] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [21] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [22] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [23] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [24] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [25] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [26] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [27] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [28] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [29] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [30] registered
Nov 22 02:39:11 np0005532048 kernel: acpiphp: Slot [31] registered
Nov 22 02:39:11 np0005532048 kernel: PCI host bridge to bus 0000:00
Nov 22 02:39:11 np0005532048 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 22 02:39:11 np0005532048 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 22 02:39:11 np0005532048 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 22 02:39:11 np0005532048 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 22 02:39:11 np0005532048 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 22 02:39:11 np0005532048 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 22 02:39:11 np0005532048 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 22 02:39:11 np0005532048 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 22 02:39:11 np0005532048 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 22 02:39:11 np0005532048 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 22 02:39:11 np0005532048 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 22 02:39:11 np0005532048 kernel: iommu: Default domain type: Translated
Nov 22 02:39:11 np0005532048 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 22 02:39:11 np0005532048 kernel: SCSI subsystem initialized
Nov 22 02:39:11 np0005532048 kernel: ACPI: bus type USB registered
Nov 22 02:39:11 np0005532048 kernel: usbcore: registered new interface driver usbfs
Nov 22 02:39:11 np0005532048 kernel: usbcore: registered new interface driver hub
Nov 22 02:39:11 np0005532048 kernel: usbcore: registered new device driver usb
Nov 22 02:39:11 np0005532048 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 22 02:39:11 np0005532048 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 22 02:39:11 np0005532048 kernel: PTP clock support registered
Nov 22 02:39:11 np0005532048 kernel: EDAC MC: Ver: 3.0.0
Nov 22 02:39:11 np0005532048 kernel: NetLabel: Initializing
Nov 22 02:39:11 np0005532048 kernel: NetLabel:  domain hash size = 128
Nov 22 02:39:11 np0005532048 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 22 02:39:11 np0005532048 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 22 02:39:11 np0005532048 kernel: PCI: Using ACPI for IRQ routing
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 22 02:39:11 np0005532048 kernel: vgaarb: loaded
Nov 22 02:39:11 np0005532048 kernel: clocksource: Switched to clocksource kvm-clock
Nov 22 02:39:11 np0005532048 kernel: VFS: Disk quotas dquot_6.6.0
Nov 22 02:39:11 np0005532048 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 22 02:39:11 np0005532048 kernel: pnp: PnP ACPI init
Nov 22 02:39:11 np0005532048 kernel: pnp: PnP ACPI: found 5 devices
Nov 22 02:39:11 np0005532048 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 22 02:39:11 np0005532048 kernel: NET: Registered PF_INET protocol family
Nov 22 02:39:11 np0005532048 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 22 02:39:11 np0005532048 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 22 02:39:11 np0005532048 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 22 02:39:11 np0005532048 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 22 02:39:11 np0005532048 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 22 02:39:11 np0005532048 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 22 02:39:11 np0005532048 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 22 02:39:11 np0005532048 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 22 02:39:11 np0005532048 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 22 02:39:11 np0005532048 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 22 02:39:11 np0005532048 kernel: NET: Registered PF_XDP protocol family
Nov 22 02:39:11 np0005532048 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 22 02:39:11 np0005532048 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 22 02:39:11 np0005532048 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 22 02:39:11 np0005532048 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 22 02:39:11 np0005532048 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 22 02:39:11 np0005532048 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 22 02:39:11 np0005532048 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 77429 usecs
Nov 22 02:39:11 np0005532048 kernel: PCI: CLS 0 bytes, default 64
Nov 22 02:39:11 np0005532048 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 22 02:39:11 np0005532048 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 22 02:39:11 np0005532048 kernel: ACPI: bus type thunderbolt registered
Nov 22 02:39:11 np0005532048 kernel: Trying to unpack rootfs image as initramfs...
Nov 22 02:39:11 np0005532048 kernel: Initialise system trusted keyrings
Nov 22 02:39:11 np0005532048 kernel: Key type blacklist registered
Nov 22 02:39:11 np0005532048 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 22 02:39:11 np0005532048 kernel: zbud: loaded
Nov 22 02:39:11 np0005532048 kernel: integrity: Platform Keyring initialized
Nov 22 02:39:11 np0005532048 kernel: integrity: Machine keyring initialized
Nov 22 02:39:11 np0005532048 kernel: Freeing initrd memory: 85868K
Nov 22 02:39:11 np0005532048 kernel: NET: Registered PF_ALG protocol family
Nov 22 02:39:11 np0005532048 kernel: xor: automatically using best checksumming function   avx       
Nov 22 02:39:11 np0005532048 kernel: Key type asymmetric registered
Nov 22 02:39:11 np0005532048 kernel: Asymmetric key parser 'x509' registered
Nov 22 02:39:11 np0005532048 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 22 02:39:11 np0005532048 kernel: io scheduler mq-deadline registered
Nov 22 02:39:11 np0005532048 kernel: io scheduler kyber registered
Nov 22 02:39:11 np0005532048 kernel: io scheduler bfq registered
Nov 22 02:39:11 np0005532048 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 22 02:39:11 np0005532048 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 22 02:39:11 np0005532048 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 22 02:39:11 np0005532048 kernel: ACPI: button: Power Button [PWRF]
Nov 22 02:39:11 np0005532048 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 22 02:39:11 np0005532048 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 22 02:39:11 np0005532048 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 22 02:39:11 np0005532048 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 22 02:39:11 np0005532048 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 22 02:39:11 np0005532048 kernel: Non-volatile memory driver v1.3
Nov 22 02:39:11 np0005532048 kernel: rdac: device handler registered
Nov 22 02:39:11 np0005532048 kernel: hp_sw: device handler registered
Nov 22 02:39:11 np0005532048 kernel: emc: device handler registered
Nov 22 02:39:11 np0005532048 kernel: alua: device handler registered
Nov 22 02:39:11 np0005532048 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 22 02:39:11 np0005532048 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 22 02:39:11 np0005532048 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 22 02:39:11 np0005532048 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 22 02:39:11 np0005532048 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 22 02:39:11 np0005532048 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 22 02:39:11 np0005532048 kernel: usb usb1: Product: UHCI Host Controller
Nov 22 02:39:11 np0005532048 kernel: usb usb1: Manufacturer: Linux 5.14.0-639.el9.x86_64 uhci_hcd
Nov 22 02:39:11 np0005532048 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 22 02:39:11 np0005532048 kernel: hub 1-0:1.0: USB hub found
Nov 22 02:39:11 np0005532048 kernel: hub 1-0:1.0: 2 ports detected
Nov 22 02:39:11 np0005532048 kernel: usbcore: registered new interface driver usbserial_generic
Nov 22 02:39:11 np0005532048 kernel: usbserial: USB Serial support registered for generic
Nov 22 02:39:11 np0005532048 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 22 02:39:11 np0005532048 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 22 02:39:11 np0005532048 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 22 02:39:11 np0005532048 kernel: mousedev: PS/2 mouse device common for all mice
Nov 22 02:39:11 np0005532048 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 22 02:39:11 np0005532048 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 22 02:39:11 np0005532048 kernel: rtc_cmos 00:04: registered as rtc0
Nov 22 02:39:11 np0005532048 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 22 02:39:11 np0005532048 kernel: rtc_cmos 00:04: setting system clock to 2025-11-22T07:39:10 UTC (1763797150)
Nov 22 02:39:11 np0005532048 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 22 02:39:11 np0005532048 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 22 02:39:11 np0005532048 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 22 02:39:11 np0005532048 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 22 02:39:11 np0005532048 kernel: usbcore: registered new interface driver usbhid
Nov 22 02:39:11 np0005532048 kernel: usbhid: USB HID core driver
Nov 22 02:39:11 np0005532048 kernel: drop_monitor: Initializing network drop monitor service
Nov 22 02:39:11 np0005532048 kernel: Initializing XFRM netlink socket
Nov 22 02:39:11 np0005532048 kernel: NET: Registered PF_INET6 protocol family
Nov 22 02:39:11 np0005532048 kernel: Segment Routing with IPv6
Nov 22 02:39:11 np0005532048 kernel: NET: Registered PF_PACKET protocol family
Nov 22 02:39:11 np0005532048 kernel: mpls_gso: MPLS GSO support
Nov 22 02:39:11 np0005532048 kernel: IPI shorthand broadcast: enabled
Nov 22 02:39:11 np0005532048 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 22 02:39:11 np0005532048 kernel: AES CTR mode by8 optimization enabled
Nov 22 02:39:11 np0005532048 kernel: sched_clock: Marking stable (1341004488, 174348135)->(1651944580, -136591957)
Nov 22 02:39:11 np0005532048 kernel: registered taskstats version 1
Nov 22 02:39:11 np0005532048 kernel: Loading compiled-in X.509 certificates
Nov 22 02:39:11 np0005532048 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 22 02:39:11 np0005532048 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 22 02:39:11 np0005532048 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 22 02:39:11 np0005532048 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 22 02:39:11 np0005532048 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 22 02:39:11 np0005532048 kernel: Demotion targets for Node 0: null
Nov 22 02:39:11 np0005532048 kernel: page_owner is disabled
Nov 22 02:39:11 np0005532048 kernel: Key type .fscrypt registered
Nov 22 02:39:11 np0005532048 kernel: Key type fscrypt-provisioning registered
Nov 22 02:39:11 np0005532048 kernel: Key type big_key registered
Nov 22 02:39:11 np0005532048 kernel: Key type encrypted registered
Nov 22 02:39:11 np0005532048 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 22 02:39:11 np0005532048 kernel: Loading compiled-in module X.509 certificates
Nov 22 02:39:11 np0005532048 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 22 02:39:11 np0005532048 kernel: ima: Allocated hash algorithm: sha256
Nov 22 02:39:11 np0005532048 kernel: ima: No architecture policies found
Nov 22 02:39:11 np0005532048 kernel: evm: Initialising EVM extended attributes:
Nov 22 02:39:11 np0005532048 kernel: evm: security.selinux
Nov 22 02:39:11 np0005532048 kernel: evm: security.SMACK64 (disabled)
Nov 22 02:39:11 np0005532048 kernel: evm: security.SMACK64EXEC (disabled)
Nov 22 02:39:11 np0005532048 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 22 02:39:11 np0005532048 kernel: evm: security.SMACK64MMAP (disabled)
Nov 22 02:39:11 np0005532048 kernel: evm: security.apparmor (disabled)
Nov 22 02:39:11 np0005532048 kernel: evm: security.ima
Nov 22 02:39:11 np0005532048 kernel: evm: security.capability
Nov 22 02:39:11 np0005532048 kernel: evm: HMAC attrs: 0x1
Nov 22 02:39:11 np0005532048 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 22 02:39:11 np0005532048 kernel: Running certificate verification RSA selftest
Nov 22 02:39:11 np0005532048 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 22 02:39:11 np0005532048 kernel: Running certificate verification ECDSA selftest
Nov 22 02:39:11 np0005532048 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 22 02:39:11 np0005532048 kernel: clk: Disabling unused clocks
Nov 22 02:39:11 np0005532048 kernel: Freeing unused decrypted memory: 2028K
Nov 22 02:39:11 np0005532048 kernel: Freeing unused kernel image (initmem) memory: 4188K
Nov 22 02:39:11 np0005532048 kernel: Write protecting the kernel read-only data: 30720k
Nov 22 02:39:11 np0005532048 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 22 02:39:11 np0005532048 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 22 02:39:11 np0005532048 kernel: Run /init as init process
Nov 22 02:39:11 np0005532048 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 22 02:39:11 np0005532048 systemd: Detected virtualization kvm.
Nov 22 02:39:11 np0005532048 systemd: Detected architecture x86-64.
Nov 22 02:39:11 np0005532048 systemd: Running in initrd.
Nov 22 02:39:11 np0005532048 systemd: No hostname configured, using default hostname.
Nov 22 02:39:11 np0005532048 systemd: Hostname set to <localhost>.
Nov 22 02:39:11 np0005532048 systemd: Initializing machine ID from VM UUID.
Nov 22 02:39:11 np0005532048 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 22 02:39:11 np0005532048 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 22 02:39:11 np0005532048 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 22 02:39:11 np0005532048 kernel: usb 1-1: Manufacturer: QEMU
Nov 22 02:39:11 np0005532048 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 22 02:39:11 np0005532048 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 22 02:39:11 np0005532048 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 22 02:39:11 np0005532048 systemd: Queued start job for default target Initrd Default Target.
Nov 22 02:39:11 np0005532048 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 22 02:39:11 np0005532048 systemd: Reached target Local Encrypted Volumes.
Nov 22 02:39:11 np0005532048 systemd: Reached target Initrd /usr File System.
Nov 22 02:39:11 np0005532048 systemd: Reached target Local File Systems.
Nov 22 02:39:11 np0005532048 systemd: Reached target Path Units.
Nov 22 02:39:11 np0005532048 systemd: Reached target Slice Units.
Nov 22 02:39:11 np0005532048 systemd: Reached target Swaps.
Nov 22 02:39:11 np0005532048 systemd: Reached target Timer Units.
Nov 22 02:39:11 np0005532048 systemd: Listening on D-Bus System Message Bus Socket.
Nov 22 02:39:11 np0005532048 systemd: Listening on Journal Socket (/dev/log).
Nov 22 02:39:11 np0005532048 systemd: Listening on Journal Socket.
Nov 22 02:39:11 np0005532048 systemd: Listening on udev Control Socket.
Nov 22 02:39:11 np0005532048 systemd: Listening on udev Kernel Socket.
Nov 22 02:39:11 np0005532048 systemd: Reached target Socket Units.
Nov 22 02:39:11 np0005532048 systemd: Starting Create List of Static Device Nodes...
Nov 22 02:39:11 np0005532048 systemd: Starting Journal Service...
Nov 22 02:39:11 np0005532048 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 22 02:39:11 np0005532048 systemd: Starting Apply Kernel Variables...
Nov 22 02:39:11 np0005532048 systemd: Starting Create System Users...
Nov 22 02:39:11 np0005532048 systemd: Starting Setup Virtual Console...
Nov 22 02:39:11 np0005532048 systemd: Finished Create List of Static Device Nodes.
Nov 22 02:39:11 np0005532048 systemd: Finished Apply Kernel Variables.
Nov 22 02:39:11 np0005532048 systemd: Finished Create System Users.
Nov 22 02:39:11 np0005532048 systemd-journald[304]: Journal started
Nov 22 02:39:11 np0005532048 systemd-journald[304]: Runtime Journal (/run/log/journal/02722e9f996f4a018f3068e10821087c) is 8.0M, max 153.6M, 145.6M free.
Nov 22 02:39:11 np0005532048 systemd-sysusers[309]: Creating group 'users' with GID 100.
Nov 22 02:39:11 np0005532048 systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Nov 22 02:39:11 np0005532048 systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 22 02:39:11 np0005532048 systemd: Started Journal Service.
Nov 22 02:39:11 np0005532048 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 22 02:39:11 np0005532048 systemd[1]: Starting Create Volatile Files and Directories...
Nov 22 02:39:11 np0005532048 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 22 02:39:11 np0005532048 systemd[1]: Finished Create Volatile Files and Directories.
Nov 22 02:39:11 np0005532048 systemd[1]: Finished Setup Virtual Console.
Nov 22 02:39:11 np0005532048 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 22 02:39:11 np0005532048 systemd[1]: Starting dracut cmdline hook...
Nov 22 02:39:11 np0005532048 dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Nov 22 02:39:11 np0005532048 dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 22 02:39:11 np0005532048 systemd[1]: Finished dracut cmdline hook.
Nov 22 02:39:11 np0005532048 systemd[1]: Starting dracut pre-udev hook...
Nov 22 02:39:11 np0005532048 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 22 02:39:11 np0005532048 kernel: device-mapper: uevent: version 1.0.3
Nov 22 02:39:11 np0005532048 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 22 02:39:11 np0005532048 kernel: RPC: Registered named UNIX socket transport module.
Nov 22 02:39:11 np0005532048 kernel: RPC: Registered udp transport module.
Nov 22 02:39:11 np0005532048 kernel: RPC: Registered tcp transport module.
Nov 22 02:39:11 np0005532048 kernel: RPC: Registered tcp-with-tls transport module.
Nov 22 02:39:11 np0005532048 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 22 02:39:11 np0005532048 rpc.statd[443]: Version 2.5.4 starting
Nov 22 02:39:11 np0005532048 rpc.statd[443]: Initializing NSM state
Nov 22 02:39:11 np0005532048 rpc.idmapd[448]: Setting log level to 0
Nov 22 02:39:11 np0005532048 systemd[1]: Finished dracut pre-udev hook.
Nov 22 02:39:11 np0005532048 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 22 02:39:11 np0005532048 systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Nov 22 02:39:11 np0005532048 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 22 02:39:11 np0005532048 systemd[1]: Starting dracut pre-trigger hook...
Nov 22 02:39:11 np0005532048 systemd[1]: Finished dracut pre-trigger hook.
Nov 22 02:39:11 np0005532048 systemd[1]: Starting Coldplug All udev Devices...
Nov 22 02:39:11 np0005532048 systemd[1]: Created slice Slice /system/modprobe.
Nov 22 02:39:11 np0005532048 systemd[1]: Starting Load Kernel Module configfs...
Nov 22 02:39:11 np0005532048 systemd[1]: Finished Coldplug All udev Devices.
Nov 22 02:39:11 np0005532048 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 22 02:39:11 np0005532048 systemd[1]: Finished Load Kernel Module configfs.
Nov 22 02:39:11 np0005532048 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 22 02:39:11 np0005532048 systemd[1]: Reached target Network.
Nov 22 02:39:11 np0005532048 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 22 02:39:11 np0005532048 systemd[1]: Starting dracut initqueue hook...
Nov 22 02:39:11 np0005532048 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 22 02:39:11 np0005532048 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 22 02:39:11 np0005532048 kernel: vda: vda1
Nov 22 02:39:11 np0005532048 systemd-udevd[462]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 02:39:11 np0005532048 kernel: scsi host0: ata_piix
Nov 22 02:39:11 np0005532048 kernel: scsi host1: ata_piix
Nov 22 02:39:11 np0005532048 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 22 02:39:11 np0005532048 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 22 02:39:12 np0005532048 systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 22 02:39:12 np0005532048 systemd[1]: Reached target Initrd Root Device.
Nov 22 02:39:12 np0005532048 systemd[1]: Mounting Kernel Configuration File System...
Nov 22 02:39:12 np0005532048 systemd[1]: Mounted Kernel Configuration File System.
Nov 22 02:39:12 np0005532048 systemd[1]: Reached target System Initialization.
Nov 22 02:39:12 np0005532048 systemd[1]: Reached target Basic System.
Nov 22 02:39:12 np0005532048 kernel: ata1: found unknown device (class 0)
Nov 22 02:39:12 np0005532048 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 22 02:39:12 np0005532048 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 22 02:39:12 np0005532048 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 22 02:39:12 np0005532048 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 22 02:39:12 np0005532048 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 22 02:39:12 np0005532048 systemd[1]: Finished dracut initqueue hook.
Nov 22 02:39:12 np0005532048 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 22 02:39:12 np0005532048 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 22 02:39:12 np0005532048 systemd[1]: Reached target Remote File Systems.
Nov 22 02:39:12 np0005532048 systemd[1]: Starting dracut pre-mount hook...
Nov 22 02:39:12 np0005532048 systemd[1]: Finished dracut pre-mount hook.
Nov 22 02:39:12 np0005532048 systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 22 02:39:12 np0005532048 systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Nov 22 02:39:12 np0005532048 systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 22 02:39:12 np0005532048 systemd[1]: Mounting /sysroot...
Nov 22 02:39:12 np0005532048 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 22 02:39:12 np0005532048 kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 22 02:39:13 np0005532048 kernel: XFS (vda1): Ending clean mount
Nov 22 02:39:13 np0005532048 systemd[1]: Mounted /sysroot.
Nov 22 02:39:13 np0005532048 systemd[1]: Reached target Initrd Root File System.
Nov 22 02:39:13 np0005532048 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 22 02:39:13 np0005532048 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 22 02:39:13 np0005532048 systemd[1]: Reached target Initrd File Systems.
Nov 22 02:39:13 np0005532048 systemd[1]: Reached target Initrd Default Target.
Nov 22 02:39:13 np0005532048 systemd[1]: Starting dracut mount hook...
Nov 22 02:39:13 np0005532048 systemd[1]: Finished dracut mount hook.
Nov 22 02:39:13 np0005532048 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 22 02:39:13 np0005532048 rpc.idmapd[448]: exiting on signal 15
Nov 22 02:39:13 np0005532048 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 22 02:39:13 np0005532048 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped target Network.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped target Timer Units.
Nov 22 02:39:13 np0005532048 systemd[1]: dbus.socket: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 22 02:39:13 np0005532048 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped target Initrd Default Target.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped target Basic System.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped target Initrd Root Device.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped target Initrd /usr File System.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped target Path Units.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped target Remote File Systems.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped target Slice Units.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped target Socket Units.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped target System Initialization.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped target Local File Systems.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped target Swaps.
Nov 22 02:39:13 np0005532048 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped dracut mount hook.
Nov 22 02:39:13 np0005532048 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped dracut pre-mount hook.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 22 02:39:13 np0005532048 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 22 02:39:13 np0005532048 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped dracut initqueue hook.
Nov 22 02:39:13 np0005532048 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped Apply Kernel Variables.
Nov 22 02:39:13 np0005532048 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 22 02:39:13 np0005532048 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped Coldplug All udev Devices.
Nov 22 02:39:13 np0005532048 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped dracut pre-trigger hook.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 22 02:39:13 np0005532048 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped Setup Virtual Console.
Nov 22 02:39:13 np0005532048 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 22 02:39:13 np0005532048 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 22 02:39:13 np0005532048 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Closed udev Control Socket.
Nov 22 02:39:13 np0005532048 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Closed udev Kernel Socket.
Nov 22 02:39:13 np0005532048 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped dracut pre-udev hook.
Nov 22 02:39:13 np0005532048 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped dracut cmdline hook.
Nov 22 02:39:13 np0005532048 systemd[1]: Starting Cleanup udev Database...
Nov 22 02:39:13 np0005532048 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 22 02:39:13 np0005532048 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 22 02:39:13 np0005532048 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Stopped Create System Users.
Nov 22 02:39:13 np0005532048 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 22 02:39:13 np0005532048 systemd[1]: Finished Cleanup udev Database.
Nov 22 02:39:13 np0005532048 systemd[1]: Reached target Switch Root.
Nov 22 02:39:13 np0005532048 systemd[1]: Starting Switch Root...
Nov 22 02:39:13 np0005532048 systemd[1]: Switching root.
Nov 22 02:39:13 np0005532048 systemd-journald[304]: Journal stopped
Nov 22 02:39:15 np0005532048 systemd-journald: Received SIGTERM from PID 1 (systemd).
Nov 22 02:39:15 np0005532048 kernel: audit: type=1404 audit(1763797153.933:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 22 02:39:15 np0005532048 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 02:39:15 np0005532048 kernel: SELinux:  policy capability open_perms=1
Nov 22 02:39:15 np0005532048 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 02:39:15 np0005532048 kernel: SELinux:  policy capability always_check_network=0
Nov 22 02:39:15 np0005532048 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 02:39:15 np0005532048 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 02:39:15 np0005532048 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 02:39:15 np0005532048 kernel: audit: type=1403 audit(1763797154.140:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 22 02:39:15 np0005532048 systemd: Successfully loaded SELinux policy in 212.204ms.
Nov 22 02:39:15 np0005532048 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 34.883ms.
Nov 22 02:39:15 np0005532048 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 22 02:39:15 np0005532048 systemd: Detected virtualization kvm.
Nov 22 02:39:15 np0005532048 systemd: Detected architecture x86-64.
Nov 22 02:39:15 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 02:39:15 np0005532048 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 22 02:39:15 np0005532048 systemd: Stopped Switch Root.
Nov 22 02:39:15 np0005532048 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 22 02:39:15 np0005532048 systemd: Created slice Slice /system/getty.
Nov 22 02:39:15 np0005532048 systemd: Created slice Slice /system/serial-getty.
Nov 22 02:39:15 np0005532048 systemd: Created slice Slice /system/sshd-keygen.
Nov 22 02:39:15 np0005532048 systemd: Created slice User and Session Slice.
Nov 22 02:39:15 np0005532048 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 22 02:39:15 np0005532048 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 22 02:39:15 np0005532048 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 22 02:39:15 np0005532048 systemd: Reached target Local Encrypted Volumes.
Nov 22 02:39:15 np0005532048 systemd: Stopped target Switch Root.
Nov 22 02:39:15 np0005532048 systemd: Stopped target Initrd File Systems.
Nov 22 02:39:15 np0005532048 systemd: Stopped target Initrd Root File System.
Nov 22 02:39:15 np0005532048 systemd: Reached target Local Integrity Protected Volumes.
Nov 22 02:39:15 np0005532048 systemd: Reached target Path Units.
Nov 22 02:39:15 np0005532048 systemd: Reached target rpc_pipefs.target.
Nov 22 02:39:15 np0005532048 systemd: Reached target Slice Units.
Nov 22 02:39:15 np0005532048 systemd: Reached target Swaps.
Nov 22 02:39:15 np0005532048 systemd: Reached target Local Verity Protected Volumes.
Nov 22 02:39:15 np0005532048 systemd: Listening on RPCbind Server Activation Socket.
Nov 22 02:39:15 np0005532048 systemd: Reached target RPC Port Mapper.
Nov 22 02:39:15 np0005532048 systemd: Listening on Process Core Dump Socket.
Nov 22 02:39:15 np0005532048 systemd: Listening on initctl Compatibility Named Pipe.
Nov 22 02:39:15 np0005532048 systemd: Listening on udev Control Socket.
Nov 22 02:39:15 np0005532048 systemd: Listening on udev Kernel Socket.
Nov 22 02:39:15 np0005532048 systemd: Mounting Huge Pages File System...
Nov 22 02:39:15 np0005532048 systemd: Mounting POSIX Message Queue File System...
Nov 22 02:39:15 np0005532048 systemd: Mounting Kernel Debug File System...
Nov 22 02:39:15 np0005532048 systemd: Mounting Kernel Trace File System...
Nov 22 02:39:15 np0005532048 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 22 02:39:15 np0005532048 systemd: Starting Create List of Static Device Nodes...
Nov 22 02:39:15 np0005532048 systemd: Starting Load Kernel Module configfs...
Nov 22 02:39:15 np0005532048 systemd: Starting Load Kernel Module drm...
Nov 22 02:39:15 np0005532048 systemd: Starting Load Kernel Module efi_pstore...
Nov 22 02:39:15 np0005532048 systemd: Starting Load Kernel Module fuse...
Nov 22 02:39:15 np0005532048 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 22 02:39:15 np0005532048 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 22 02:39:15 np0005532048 systemd: Stopped File System Check on Root Device.
Nov 22 02:39:15 np0005532048 systemd: Stopped Journal Service.
Nov 22 02:39:15 np0005532048 systemd: Starting Journal Service...
Nov 22 02:39:15 np0005532048 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 22 02:39:15 np0005532048 systemd: Starting Generate network units from Kernel command line...
Nov 22 02:39:15 np0005532048 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 22 02:39:15 np0005532048 systemd: Starting Remount Root and Kernel File Systems...
Nov 22 02:39:15 np0005532048 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 22 02:39:15 np0005532048 systemd: Starting Apply Kernel Variables...
Nov 22 02:39:15 np0005532048 systemd: Starting Coldplug All udev Devices...
Nov 22 02:39:15 np0005532048 kernel: fuse: init (API version 7.37)
Nov 22 02:39:15 np0005532048 systemd: Mounted Huge Pages File System.
Nov 22 02:39:15 np0005532048 systemd: Mounted POSIX Message Queue File System.
Nov 22 02:39:15 np0005532048 systemd: Mounted Kernel Debug File System.
Nov 22 02:39:15 np0005532048 systemd: Mounted Kernel Trace File System.
Nov 22 02:39:15 np0005532048 systemd: Finished Create List of Static Device Nodes.
Nov 22 02:39:15 np0005532048 systemd: modprobe@configfs.service: Deactivated successfully.
Nov 22 02:39:15 np0005532048 systemd: Finished Load Kernel Module configfs.
Nov 22 02:39:15 np0005532048 systemd: modprobe@efi_pstore.service: Deactivated successfully.
Nov 22 02:39:15 np0005532048 systemd: Finished Load Kernel Module efi_pstore.
Nov 22 02:39:15 np0005532048 systemd: modprobe@fuse.service: Deactivated successfully.
Nov 22 02:39:15 np0005532048 systemd: Finished Load Kernel Module fuse.
Nov 22 02:39:15 np0005532048 systemd: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 22 02:39:15 np0005532048 systemd: Finished Generate network units from Kernel command line.
Nov 22 02:39:15 np0005532048 systemd-journald[680]: Journal started
Nov 22 02:39:15 np0005532048 systemd-journald[680]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 22 02:39:15 np0005532048 systemd[1]: Queued start job for default target Multi-User System.
Nov 22 02:39:15 np0005532048 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 22 02:39:15 np0005532048 systemd: Started Journal Service.
Nov 22 02:39:15 np0005532048 kernel: ACPI: bus type drm_connector registered
Nov 22 02:39:15 np0005532048 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 22 02:39:15 np0005532048 systemd[1]: Finished Load Kernel Module drm.
Nov 22 02:39:15 np0005532048 systemd[1]: Mounting FUSE Control File System...
Nov 22 02:39:15 np0005532048 systemd[1]: Finished Apply Kernel Variables.
Nov 22 02:39:15 np0005532048 systemd[1]: Mounted FUSE Control File System.
Nov 22 02:39:15 np0005532048 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 22 02:39:15 np0005532048 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 22 02:39:15 np0005532048 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 22 02:39:15 np0005532048 systemd[1]: Starting Rebuild Hardware Database...
Nov 22 02:39:15 np0005532048 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 22 02:39:15 np0005532048 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 22 02:39:15 np0005532048 systemd[1]: Starting Load/Save OS Random Seed...
Nov 22 02:39:15 np0005532048 systemd[1]: Starting Create System Users...
Nov 22 02:39:15 np0005532048 systemd-journald[680]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 22 02:39:15 np0005532048 systemd-journald[680]: Received client request to flush runtime journal.
Nov 22 02:39:15 np0005532048 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 22 02:39:15 np0005532048 systemd[1]: Finished Load/Save OS Random Seed.
Nov 22 02:39:15 np0005532048 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 22 02:39:15 np0005532048 systemd[1]: Finished Coldplug All udev Devices.
Nov 22 02:39:15 np0005532048 systemd[1]: Finished Create System Users.
Nov 22 02:39:15 np0005532048 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 22 02:39:15 np0005532048 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 22 02:39:15 np0005532048 systemd[1]: Reached target Preparation for Local File Systems.
Nov 22 02:39:15 np0005532048 systemd[1]: Reached target Local File Systems.
Nov 22 02:39:15 np0005532048 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 22 02:39:15 np0005532048 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 22 02:39:15 np0005532048 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 22 02:39:15 np0005532048 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 22 02:39:15 np0005532048 systemd[1]: Starting Automatic Boot Loader Update...
Nov 22 02:39:15 np0005532048 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 22 02:39:15 np0005532048 systemd[1]: Starting Create Volatile Files and Directories...
Nov 22 02:39:15 np0005532048 bootctl[697]: Couldn't find EFI system partition, skipping.
Nov 22 02:39:15 np0005532048 systemd[1]: Finished Automatic Boot Loader Update.
Nov 22 02:39:15 np0005532048 systemd[1]: Finished Create Volatile Files and Directories.
Nov 22 02:39:15 np0005532048 systemd[1]: Starting Security Auditing Service...
Nov 22 02:39:15 np0005532048 systemd[1]: Starting RPC Bind...
Nov 22 02:39:15 np0005532048 systemd[1]: Starting Rebuild Journal Catalog...
Nov 22 02:39:15 np0005532048 auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 22 02:39:15 np0005532048 auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 22 02:39:15 np0005532048 systemd[1]: Finished Rebuild Journal Catalog.
Nov 22 02:39:15 np0005532048 systemd[1]: Started RPC Bind.
Nov 22 02:39:15 np0005532048 augenrules[708]: /sbin/augenrules: No change
Nov 22 02:39:15 np0005532048 augenrules[723]: No rules
Nov 22 02:39:15 np0005532048 augenrules[723]: enabled 1
Nov 22 02:39:15 np0005532048 augenrules[723]: failure 1
Nov 22 02:39:15 np0005532048 augenrules[723]: pid 703
Nov 22 02:39:15 np0005532048 augenrules[723]: rate_limit 0
Nov 22 02:39:15 np0005532048 augenrules[723]: backlog_limit 8192
Nov 22 02:39:15 np0005532048 augenrules[723]: lost 0
Nov 22 02:39:15 np0005532048 augenrules[723]: backlog 0
Nov 22 02:39:15 np0005532048 augenrules[723]: backlog_wait_time 60000
Nov 22 02:39:15 np0005532048 augenrules[723]: backlog_wait_time_actual 0
Nov 22 02:39:15 np0005532048 systemd[1]: Started Security Auditing Service.
Nov 22 02:39:15 np0005532048 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 22 02:39:15 np0005532048 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 22 02:39:16 np0005532048 systemd[1]: Finished Rebuild Hardware Database.
Nov 22 02:39:16 np0005532048 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 22 02:39:16 np0005532048 systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Nov 22 02:39:16 np0005532048 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 22 02:39:16 np0005532048 systemd[1]: Starting Load Kernel Module configfs...
Nov 22 02:39:16 np0005532048 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 22 02:39:16 np0005532048 systemd[1]: Finished Load Kernel Module configfs.
Nov 22 02:39:16 np0005532048 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 22 02:39:16 np0005532048 systemd-udevd[734]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 02:39:16 np0005532048 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 22 02:39:16 np0005532048 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 22 02:39:16 np0005532048 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 22 02:39:16 np0005532048 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 22 02:39:16 np0005532048 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 22 02:39:16 np0005532048 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 22 02:39:16 np0005532048 kernel: Console: switching to colour dummy device 80x25
Nov 22 02:39:16 np0005532048 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 22 02:39:16 np0005532048 kernel: [drm] features: -context_init
Nov 22 02:39:16 np0005532048 kernel: [drm] number of scanouts: 1
Nov 22 02:39:16 np0005532048 kernel: [drm] number of cap sets: 0
Nov 22 02:39:16 np0005532048 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 22 02:39:16 np0005532048 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 22 02:39:16 np0005532048 kernel: Console: switching to colour frame buffer device 128x48
Nov 22 02:39:16 np0005532048 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 22 02:39:16 np0005532048 kernel: kvm_amd: TSC scaling supported
Nov 22 02:39:16 np0005532048 kernel: kvm_amd: Nested Virtualization enabled
Nov 22 02:39:16 np0005532048 kernel: kvm_amd: Nested Paging enabled
Nov 22 02:39:16 np0005532048 kernel: kvm_amd: LBR virtualization supported
Nov 22 02:39:16 np0005532048 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 22 02:39:16 np0005532048 systemd[1]: Starting Update is Completed...
Nov 22 02:39:16 np0005532048 systemd[1]: Finished Update is Completed.
Nov 22 02:39:16 np0005532048 systemd[1]: Reached target System Initialization.
Nov 22 02:39:16 np0005532048 systemd[1]: Started dnf makecache --timer.
Nov 22 02:39:16 np0005532048 systemd[1]: Started Daily rotation of log files.
Nov 22 02:39:16 np0005532048 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 22 02:39:16 np0005532048 systemd[1]: Reached target Timer Units.
Nov 22 02:39:16 np0005532048 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 22 02:39:16 np0005532048 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 22 02:39:16 np0005532048 systemd[1]: Reached target Socket Units.
Nov 22 02:39:16 np0005532048 systemd[1]: Starting D-Bus System Message Bus...
Nov 22 02:39:16 np0005532048 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 22 02:39:16 np0005532048 systemd[1]: Started D-Bus System Message Bus.
Nov 22 02:39:16 np0005532048 systemd[1]: Reached target Basic System.
Nov 22 02:39:16 np0005532048 dbus-broker-lau[805]: Ready
Nov 22 02:39:16 np0005532048 systemd[1]: Starting NTP client/server...
Nov 22 02:39:16 np0005532048 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 22 02:39:16 np0005532048 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 22 02:39:16 np0005532048 systemd[1]: Starting IPv4 firewall with iptables...
Nov 22 02:39:16 np0005532048 systemd[1]: Started irqbalance daemon.
Nov 22 02:39:16 np0005532048 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 22 02:39:16 np0005532048 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 02:39:16 np0005532048 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 02:39:16 np0005532048 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 02:39:16 np0005532048 systemd[1]: Reached target sshd-keygen.target.
Nov 22 02:39:16 np0005532048 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 22 02:39:16 np0005532048 systemd[1]: Reached target User and Group Name Lookups.
Nov 22 02:39:16 np0005532048 systemd[1]: Starting User Login Management...
Nov 22 02:39:16 np0005532048 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 22 02:39:16 np0005532048 chronyd[831]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 22 02:39:16 np0005532048 chronyd[831]: Loaded 0 symmetric keys
Nov 22 02:39:16 np0005532048 chronyd[831]: Using right/UTC timezone to obtain leap second data
Nov 22 02:39:16 np0005532048 chronyd[831]: Loaded seccomp filter (level 2)
Nov 22 02:39:16 np0005532048 systemd[1]: Started NTP client/server.
Nov 22 02:39:16 np0005532048 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 22 02:39:16 np0005532048 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 22 02:39:16 np0005532048 systemd-logind[822]: New seat seat0.
Nov 22 02:39:16 np0005532048 systemd-logind[822]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 22 02:39:16 np0005532048 systemd-logind[822]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 22 02:39:16 np0005532048 systemd[1]: Started User Login Management.
Nov 22 02:39:16 np0005532048 iptables.init[817]: iptables: Applying firewall rules: [  OK  ]
Nov 22 02:39:16 np0005532048 systemd[1]: Finished IPv4 firewall with iptables.
Nov 22 02:39:18 np0005532048 cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 22 Nov 2025 07:39:18 +0000. Up 9.36 seconds.
Nov 22 02:39:18 np0005532048 systemd[1]: run-cloud\x2dinit-tmp-tmppsp_dqmd.mount: Deactivated successfully.
Nov 22 02:39:18 np0005532048 systemd[1]: Starting Hostname Service...
Nov 22 02:39:18 np0005532048 systemd[1]: Started Hostname Service.
Nov 22 02:39:18 np0005532048 systemd-hostnamed[854]: Hostname set to <np0005532048.novalocal> (static)
Nov 22 02:39:19 np0005532048 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 22 02:39:19 np0005532048 systemd[1]: Reached target Preparation for Network.
Nov 22 02:39:19 np0005532048 systemd[1]: Starting Network Manager...
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.1972] NetworkManager (version 1.54.1-1.el9) is starting... (boot:4e2dc5c3-ddd6-4720-b04d-99a9b483bca6)
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.1978] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2161] manager[0x55c3153c4080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2204] hostname: hostname: using hostnamed
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2205] hostname: static hostname changed from (none) to "np0005532048.novalocal"
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2208] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2323] manager[0x55c3153c4080]: rfkill: Wi-Fi hardware radio set enabled
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2324] manager[0x55c3153c4080]: rfkill: WWAN hardware radio set enabled
Nov 22 02:39:19 np0005532048 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2450] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2451] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2452] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2453] manager: Networking is enabled by state file
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2454] settings: Loaded settings plugin: keyfile (internal)
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2493] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2515] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2541] dhcp: init: Using DHCP client 'internal'
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2544] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2555] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2566] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2572] device (lo): Activation: starting connection 'lo' (07a5ca92-286d-43ed-b1fe-7289a1f61143)
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2580] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2582] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2606] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2609] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2611] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2613] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2615] device (eth0): carrier: link connected
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2618] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2632] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2637] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2640] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2641] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2643] manager: NetworkManager state is now CONNECTING
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2645] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2651] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2654] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 02:39:19 np0005532048 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 02:39:19 np0005532048 systemd[1]: Started Network Manager.
Nov 22 02:39:19 np0005532048 systemd[1]: Reached target Network.
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2707] dhcp4 (eth0): state changed new lease, address=38.129.56.62
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2714] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2731] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 02:39:19 np0005532048 systemd[1]: Starting Network Manager Wait Online...
Nov 22 02:39:19 np0005532048 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 22 02:39:19 np0005532048 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2898] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2900] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2902] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2909] device (lo): Activation: successful, device activated.
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2915] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2918] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2922] device (eth0): Activation: successful, device activated.
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2929] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 22 02:39:19 np0005532048 NetworkManager[858]: <info>  [1763797159.2932] manager: startup complete
Nov 22 02:39:19 np0005532048 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 22 02:39:19 np0005532048 systemd[1]: Finished Network Manager Wait Online.
Nov 22 02:39:19 np0005532048 systemd[1]: Starting Cloud-init: Network Stage...
Nov 22 02:39:19 np0005532048 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 22 02:39:19 np0005532048 systemd[1]: Reached target NFS client services.
Nov 22 02:39:19 np0005532048 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 22 02:39:19 np0005532048 systemd[1]: Reached target Remote File Systems.
Nov 22 02:39:19 np0005532048 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 22 02:39:19 np0005532048 cloud-init[920]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 22 Nov 2025 07:39:19 +0000. Up 10.39 seconds.
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: |  eth0  | True |         38.129.56.62         | 255.255.255.0 | global | fa:16:3e:44:e5:f5 |
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fe44:e5f5/64 |       .       |  link  | fa:16:3e:44:e5:f5 |
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 22 02:39:19 np0005532048 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 22 02:39:21 np0005532048 cloud-init[920]: Generating public/private rsa key pair.
Nov 22 02:39:21 np0005532048 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 22 02:39:21 np0005532048 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 22 02:39:21 np0005532048 cloud-init[920]: The key fingerprint is:
Nov 22 02:39:21 np0005532048 cloud-init[920]: SHA256:LaMGZQoHgnmB2vPnzofcxwsH831Ci8ya93SvR5qxc18 root@np0005532048.novalocal
Nov 22 02:39:21 np0005532048 cloud-init[920]: The key's randomart image is:
Nov 22 02:39:21 np0005532048 cloud-init[920]: +---[RSA 3072]----+
Nov 22 02:39:21 np0005532048 cloud-init[920]: |ooo.             |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |+...             |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |.o. . o          |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |. oo +   .       |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |   oo   S . .    |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |    .... O + .. .|
Nov 22 02:39:21 np0005532048 cloud-init[920]: |     +oo..* = o*E|
Nov 22 02:39:21 np0005532048 cloud-init[920]: |     o+ o=+. +=.+|
Nov 22 02:39:21 np0005532048 cloud-init[920]: |     .o.oo.o. .=+|
Nov 22 02:39:21 np0005532048 cloud-init[920]: +----[SHA256]-----+
Nov 22 02:39:21 np0005532048 cloud-init[920]: Generating public/private ecdsa key pair.
Nov 22 02:39:21 np0005532048 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 22 02:39:21 np0005532048 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 22 02:39:21 np0005532048 cloud-init[920]: The key fingerprint is:
Nov 22 02:39:21 np0005532048 cloud-init[920]: SHA256:33HjqdovVMCuq4Qc1G3TbIPLQTTRIcPy0yzezmmDwLA root@np0005532048.novalocal
Nov 22 02:39:21 np0005532048 cloud-init[920]: The key's randomart image is:
Nov 22 02:39:21 np0005532048 cloud-init[920]: +---[ECDSA 256]---+
Nov 22 02:39:21 np0005532048 cloud-init[920]: |         oB+..   |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |       ..o.B+    |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |      . .o*+=.   |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |     ..  o+=+..  |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |      .+S.o=..o  |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |     .Eoo.o.o+ o |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |      o ...*..o  |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |       .  o.B.   |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |        ...oo+.  |
Nov 22 02:39:21 np0005532048 cloud-init[920]: +----[SHA256]-----+
Nov 22 02:39:21 np0005532048 cloud-init[920]: Generating public/private ed25519 key pair.
Nov 22 02:39:21 np0005532048 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 22 02:39:21 np0005532048 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 22 02:39:21 np0005532048 cloud-init[920]: The key fingerprint is:
Nov 22 02:39:21 np0005532048 cloud-init[920]: SHA256:qeN+gmkcpyliVBZs7ZrcTQjDGq0PSo0rBDNmyNbqPKE root@np0005532048.novalocal
Nov 22 02:39:21 np0005532048 cloud-init[920]: The key's randomart image is:
Nov 22 02:39:21 np0005532048 cloud-init[920]: +--[ED25519 256]--+
Nov 22 02:39:21 np0005532048 cloud-init[920]: |o = .            |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |=* X .           |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |=oO = .          |
Nov 22 02:39:21 np0005532048 cloud-init[920]: | X + o . .       |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |B O + o S        |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |E* =...o         |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |o .. Bo          |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |... B....        |
Nov 22 02:39:21 np0005532048 cloud-init[920]: |.. o .oo         |
Nov 22 02:39:21 np0005532048 cloud-init[920]: +----[SHA256]-----+
Nov 22 02:39:21 np0005532048 systemd[1]: Finished Cloud-init: Network Stage.
Nov 22 02:39:21 np0005532048 systemd[1]: Reached target Cloud-config availability.
Nov 22 02:39:21 np0005532048 systemd[1]: Reached target Network is Online.
Nov 22 02:39:21 np0005532048 systemd[1]: Starting Cloud-init: Config Stage...
Nov 22 02:39:21 np0005532048 systemd[1]: Starting Crash recovery kernel arming...
Nov 22 02:39:21 np0005532048 systemd[1]: Starting Notify NFS peers of a restart...
Nov 22 02:39:21 np0005532048 systemd[1]: Starting System Logging Service...
Nov 22 02:39:21 np0005532048 sm-notify[1004]: Version 2.5.4 starting
Nov 22 02:39:21 np0005532048 systemd[1]: Starting OpenSSH server daemon...
Nov 22 02:39:21 np0005532048 systemd[1]: Starting Permit User Sessions...
Nov 22 02:39:21 np0005532048 systemd[1]: Started Notify NFS peers of a restart.
Nov 22 02:39:21 np0005532048 systemd[1]: Started OpenSSH server daemon.
Nov 22 02:39:21 np0005532048 systemd[1]: Finished Permit User Sessions.
Nov 22 02:39:21 np0005532048 systemd[1]: Started Command Scheduler.
Nov 22 02:39:21 np0005532048 systemd[1]: Started Getty on tty1.
Nov 22 02:39:21 np0005532048 systemd[1]: Started Serial Getty on ttyS0.
Nov 22 02:39:21 np0005532048 systemd[1]: Reached target Login Prompts.
Nov 22 02:39:21 np0005532048 rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Nov 22 02:39:21 np0005532048 rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 22 02:39:21 np0005532048 systemd[1]: Started System Logging Service.
Nov 22 02:39:21 np0005532048 systemd[1]: Reached target Multi-User System.
Nov 22 02:39:21 np0005532048 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 22 02:39:21 np0005532048 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 22 02:39:21 np0005532048 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 22 02:39:21 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 02:39:21 np0005532048 kdumpctl[1015]: kdump: No kdump initial ramdisk found.
Nov 22 02:39:21 np0005532048 kdumpctl[1015]: kdump: Rebuilding /boot/initramfs-5.14.0-639.el9.x86_64kdump.img
Nov 22 02:39:22 np0005532048 cloud-init[1128]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 22 Nov 2025 07:39:21 +0000. Up 12.77 seconds.
Nov 22 02:39:22 np0005532048 systemd[1]: Finished Cloud-init: Config Stage.
Nov 22 02:39:22 np0005532048 systemd[1]: Starting Cloud-init: Final Stage...
Nov 22 02:39:22 np0005532048 cloud-init[1272]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 22 Nov 2025 07:39:22 +0000. Up 13.20 seconds.
Nov 22 02:39:22 np0005532048 dracut[1276]: dracut-057-102.git20250818.el9
Nov 22 02:39:22 np0005532048 cloud-init[1303]: #############################################################
Nov 22 02:39:22 np0005532048 cloud-init[1305]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 22 02:39:22 np0005532048 cloud-init[1307]: 256 SHA256:33HjqdovVMCuq4Qc1G3TbIPLQTTRIcPy0yzezmmDwLA root@np0005532048.novalocal (ECDSA)
Nov 22 02:39:22 np0005532048 cloud-init[1309]: 256 SHA256:qeN+gmkcpyliVBZs7ZrcTQjDGq0PSo0rBDNmyNbqPKE root@np0005532048.novalocal (ED25519)
Nov 22 02:39:22 np0005532048 cloud-init[1311]: 3072 SHA256:LaMGZQoHgnmB2vPnzofcxwsH831Ci8ya93SvR5qxc18 root@np0005532048.novalocal (RSA)
Nov 22 02:39:22 np0005532048 cloud-init[1312]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 22 02:39:22 np0005532048 cloud-init[1313]: #############################################################
Nov 22 02:39:22 np0005532048 cloud-init[1272]: Cloud-init v. 24.4-7.el9 finished at Sat, 22 Nov 2025 07:39:22 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 13.42 seconds
Nov 22 02:39:22 np0005532048 systemd[1]: Finished Cloud-init: Final Stage.
Nov 22 02:39:22 np0005532048 systemd[1]: Reached target Cloud-init target.
Nov 22 02:39:22 np0005532048 dracut[1284]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-639.el9.x86_64kdump.img 5.14.0-639.el9.x86_64
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 22 02:39:23 np0005532048 dracut[1284]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: memstrack is not available
Nov 22 02:39:24 np0005532048 dracut[1284]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 22 02:39:24 np0005532048 chronyd[831]: Selected source 206.108.0.132 (2.centos.pool.ntp.org)
Nov 22 02:39:24 np0005532048 chronyd[831]: System clock TAI offset set to 37 seconds
Nov 22 02:39:24 np0005532048 dracut[1284]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 22 02:39:24 np0005532048 dracut[1284]: memstrack is not available
Nov 22 02:39:24 np0005532048 dracut[1284]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 22 02:39:24 np0005532048 dracut[1284]: *** Including module: systemd ***
Nov 22 02:39:24 np0005532048 dracut[1284]: *** Including module: fips ***
Nov 22 02:39:25 np0005532048 dracut[1284]: *** Including module: systemd-initrd ***
Nov 22 02:39:25 np0005532048 dracut[1284]: *** Including module: i18n ***
Nov 22 02:39:25 np0005532048 dracut[1284]: *** Including module: drm ***
Nov 22 02:39:25 np0005532048 dracut[1284]: *** Including module: prefixdevname ***
Nov 22 02:39:25 np0005532048 dracut[1284]: *** Including module: kernel-modules ***
Nov 22 02:39:26 np0005532048 kernel: block vda: the capability attribute has been deprecated.
Nov 22 02:39:26 np0005532048 dracut[1284]: *** Including module: kernel-modules-extra ***
Nov 22 02:39:26 np0005532048 dracut[1284]: *** Including module: qemu ***
Nov 22 02:39:26 np0005532048 dracut[1284]: *** Including module: fstab-sys ***
Nov 22 02:39:26 np0005532048 dracut[1284]: *** Including module: rootfs-block ***
Nov 22 02:39:26 np0005532048 dracut[1284]: *** Including module: terminfo ***
Nov 22 02:39:26 np0005532048 dracut[1284]: *** Including module: udev-rules ***
Nov 22 02:39:27 np0005532048 irqbalance[818]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 22 02:39:27 np0005532048 irqbalance[818]: IRQ 25 affinity is now unmanaged
Nov 22 02:39:27 np0005532048 irqbalance[818]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 22 02:39:27 np0005532048 irqbalance[818]: IRQ 31 affinity is now unmanaged
Nov 22 02:39:27 np0005532048 irqbalance[818]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 22 02:39:27 np0005532048 irqbalance[818]: IRQ 28 affinity is now unmanaged
Nov 22 02:39:27 np0005532048 irqbalance[818]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 22 02:39:27 np0005532048 irqbalance[818]: IRQ 32 affinity is now unmanaged
Nov 22 02:39:27 np0005532048 irqbalance[818]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 22 02:39:27 np0005532048 irqbalance[818]: IRQ 30 affinity is now unmanaged
Nov 22 02:39:27 np0005532048 irqbalance[818]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 22 02:39:27 np0005532048 irqbalance[818]: IRQ 29 affinity is now unmanaged
Nov 22 02:39:27 np0005532048 dracut[1284]: Skipping udev rule: 91-permissions.rules
Nov 22 02:39:27 np0005532048 dracut[1284]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 22 02:39:27 np0005532048 dracut[1284]: *** Including module: virtiofs ***
Nov 22 02:39:27 np0005532048 dracut[1284]: *** Including module: dracut-systemd ***
Nov 22 02:39:27 np0005532048 dracut[1284]: *** Including module: usrmount ***
Nov 22 02:39:27 np0005532048 dracut[1284]: *** Including module: base ***
Nov 22 02:39:27 np0005532048 dracut[1284]: *** Including module: fs-lib ***
Nov 22 02:39:27 np0005532048 dracut[1284]: *** Including module: kdumpbase ***
Nov 22 02:39:28 np0005532048 dracut[1284]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 22 02:39:28 np0005532048 dracut[1284]:  microcode_ctl module: mangling fw_dir
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: configuration "intel" is ignored
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 22 02:39:28 np0005532048 dracut[1284]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 22 02:39:28 np0005532048 dracut[1284]: *** Including module: openssl ***
Nov 22 02:39:28 np0005532048 dracut[1284]: *** Including module: shutdown ***
Nov 22 02:39:28 np0005532048 dracut[1284]: *** Including module: squash ***
Nov 22 02:39:28 np0005532048 dracut[1284]: *** Including modules done ***
Nov 22 02:39:28 np0005532048 dracut[1284]: *** Installing kernel module dependencies ***
Nov 22 02:39:29 np0005532048 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 02:39:29 np0005532048 dracut[1284]: *** Installing kernel module dependencies done ***
Nov 22 02:39:29 np0005532048 dracut[1284]: *** Resolving executable dependencies ***
Nov 22 02:39:31 np0005532048 dracut[1284]: *** Resolving executable dependencies done ***
Nov 22 02:39:31 np0005532048 dracut[1284]: *** Generating early-microcode cpio image ***
Nov 22 02:39:31 np0005532048 dracut[1284]: *** Store current command line parameters ***
Nov 22 02:39:31 np0005532048 dracut[1284]: Stored kernel commandline:
Nov 22 02:39:31 np0005532048 dracut[1284]: No dracut internal kernel commandline stored in the initramfs
Nov 22 02:39:31 np0005532048 dracut[1284]: *** Install squash loader ***
Nov 22 02:39:32 np0005532048 dracut[1284]: *** Squashing the files inside the initramfs ***
Nov 22 02:39:33 np0005532048 dracut[1284]: *** Squashing the files inside the initramfs done ***
Nov 22 02:39:33 np0005532048 dracut[1284]: *** Creating image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' ***
Nov 22 02:39:33 np0005532048 dracut[1284]: *** Hardlinking files ***
Nov 22 02:39:33 np0005532048 dracut[1284]: *** Hardlinking files done ***
Nov 22 02:39:34 np0005532048 dracut[1284]: *** Creating initramfs image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' done ***
Nov 22 02:39:35 np0005532048 kdumpctl[1015]: kdump: kexec: loaded kdump kernel
Nov 22 02:39:35 np0005532048 kdumpctl[1015]: kdump: Starting kdump: [OK]
Nov 22 02:39:35 np0005532048 systemd[1]: Finished Crash recovery kernel arming.
Nov 22 02:39:35 np0005532048 systemd[1]: Startup finished in 1.653s (kernel) + 3.062s (initrd) + 21.153s (userspace) = 25.869s.
Nov 22 02:39:49 np0005532048 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 02:40:30 np0005532048 chronyd[831]: Selected source 206.108.0.133 (2.centos.pool.ntp.org)
Nov 22 02:52:08 np0005532048 systemd[1]: Created slice User Slice of UID 1000.
Nov 22 02:52:08 np0005532048 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 22 02:52:08 np0005532048 systemd-logind[822]: New session 1 of user zuul.
Nov 22 02:52:08 np0005532048 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 22 02:52:08 np0005532048 systemd[1]: Starting User Manager for UID 1000...
Nov 22 02:52:08 np0005532048 systemd[4306]: Queued start job for default target Main User Target.
Nov 22 02:52:08 np0005532048 systemd[4306]: Created slice User Application Slice.
Nov 22 02:52:08 np0005532048 systemd[4306]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 22 02:52:08 np0005532048 systemd[4306]: Started Daily Cleanup of User's Temporary Directories.
Nov 22 02:52:08 np0005532048 systemd[4306]: Reached target Paths.
Nov 22 02:52:08 np0005532048 systemd[4306]: Reached target Timers.
Nov 22 02:52:08 np0005532048 systemd[4306]: Starting D-Bus User Message Bus Socket...
Nov 22 02:52:08 np0005532048 systemd[4306]: Starting Create User's Volatile Files and Directories...
Nov 22 02:52:08 np0005532048 systemd[4306]: Finished Create User's Volatile Files and Directories.
Nov 22 02:52:08 np0005532048 systemd[4306]: Listening on D-Bus User Message Bus Socket.
Nov 22 02:52:08 np0005532048 systemd[4306]: Reached target Sockets.
Nov 22 02:52:08 np0005532048 systemd[4306]: Reached target Basic System.
Nov 22 02:52:08 np0005532048 systemd[4306]: Reached target Main User Target.
Nov 22 02:52:08 np0005532048 systemd[4306]: Startup finished in 110ms.
Nov 22 02:52:08 np0005532048 systemd[1]: Started User Manager for UID 1000.
Nov 22 02:52:08 np0005532048 systemd[1]: Started Session 1 of User zuul.
Nov 22 02:52:08 np0005532048 python3[4390]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 02:52:11 np0005532048 python3[4418]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 02:52:17 np0005532048 python3[4476]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 02:52:18 np0005532048 python3[4516]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 22 02:52:20 np0005532048 python3[4542]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCs3UetWT4XiAexjidl31IpqzWiRUtq/0fyRRoxRzX73J1Agt8uTrzswf2FMvrZ3mFdKu0bgXYNLb6q/lrTmPGwbaNmitN3+d0ziOTZtTV5zDMzfvQXygH9Vrz9obgiL9A8BXFpuC0IPsKJSRYa7l/Kp/rytu80P0WE0Y2wWj8dFxOCeGRBBnnJ838qhuOWtagDXNjtB18rG3IULvBVFXvjDbqMQFd7jvleSzPZLsVOC55hT2G8cJ9NjdVTgmvq3Ce/aKLmUuySVgIvB6ccg2s/PcKroPucnFXzqE8SQnRpCeqvvYMEdPfKXBL/GZaA7Fygpo7GJ7VlNXcdHpdnVTtwIt4Qr5+AMFnpajeZL5CtlPSw60KXwipCgBJjZg1XgsFznUClozH0hFFDuox8L0+r2Q9IDBai7lXXW3HGCDXk1g7lfwPeM48NPDrEm/nLkicua33AgtPHh4v6bQhosTKibHC0EIMREOxBi5/Mt8f6VW3qXLa6cKX8idHayKGny9E= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:20 np0005532048 python3[4566]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:52:21 np0005532048 python3[4665]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:52:21 np0005532048 python3[4736]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763797941.0542533-207-175894770221387/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=8bb3345c360e453ebdb665d116f9d8c6_id_rsa follow=False checksum=f34157664854153d8d6d2faa8e95f2911808e759 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:52:22 np0005532048 python3[4859]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:52:22 np0005532048 python3[4930]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763797942.236605-240-108915195164747/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=8bb3345c360e453ebdb665d116f9d8c6_id_rsa.pub follow=False checksum=ee69e90dfcb83e6a65a497367ed9aad37b2eacfc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:52:24 np0005532048 python3[4978]: ansible-ping Invoked with data=pong
Nov 22 02:52:25 np0005532048 python3[5002]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 02:52:26 np0005532048 python3[5060]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 22 02:52:27 np0005532048 python3[5092]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:52:28 np0005532048 python3[5116]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:52:28 np0005532048 python3[5140]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:52:28 np0005532048 python3[5164]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:52:29 np0005532048 python3[5188]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:52:29 np0005532048 python3[5212]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:52:31 np0005532048 python3[5238]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:52:31 np0005532048 python3[5316]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:52:32 np0005532048 python3[5389]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763797951.2258298-21-197583865591390/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:52:32 np0005532048 python3[5437]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:33 np0005532048 python3[5461]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:33 np0005532048 python3[5485]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:33 np0005532048 python3[5509]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:33 np0005532048 python3[5533]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:34 np0005532048 python3[5557]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:34 np0005532048 python3[5581]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:34 np0005532048 python3[5605]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:34 np0005532048 python3[5629]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:35 np0005532048 python3[5653]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:35 np0005532048 python3[5677]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:35 np0005532048 python3[5701]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:36 np0005532048 python3[5725]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:36 np0005532048 python3[5749]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:36 np0005532048 python3[5773]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:36 np0005532048 python3[5797]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:37 np0005532048 python3[5821]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:37 np0005532048 python3[5845]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:37 np0005532048 python3[5869]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:38 np0005532048 python3[5893]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:38 np0005532048 python3[5917]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:38 np0005532048 python3[5941]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:38 np0005532048 python3[5965]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:39 np0005532048 python3[5989]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:39 np0005532048 python3[6013]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:39 np0005532048 python3[6037]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:52:42 np0005532048 python3[6063]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 22 02:52:42 np0005532048 systemd[1]: Starting Time & Date Service...
Nov 22 02:52:42 np0005532048 systemd[1]: Started Time & Date Service.
Nov 22 02:52:42 np0005532048 systemd-timedated[6065]: Changed time zone to 'UTC' (UTC).
Nov 22 02:52:44 np0005532048 python3[6094]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:52:44 np0005532048 python3[6170]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:52:45 np0005532048 python3[6241]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1763797964.6438055-153-164457739408449/source _original_basename=tmpqmqha7nj follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:52:45 np0005532048 python3[6341]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:52:45 np0005532048 python3[6412]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763797965.433796-183-253886714648908/source _original_basename=tmpg6xqpwwy follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:52:46 np0005532048 python3[6514]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:52:47 np0005532048 python3[6587]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763797966.482051-231-122882525349022/source _original_basename=tmpfeasr3qb follow=False checksum=97c00a37a257ec2c85ecfa45d56eec11e7fbbef3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:52:47 np0005532048 irqbalance[818]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 22 02:52:47 np0005532048 irqbalance[818]: IRQ 26 affinity is now unmanaged
Nov 22 02:52:47 np0005532048 python3[6635]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 02:52:47 np0005532048 python3[6661]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 02:52:48 np0005532048 python3[6741]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:52:48 np0005532048 python3[6814]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1763797968.0667927-273-203385662804478/source _original_basename=tmpxxy40kgg follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:52:49 np0005532048 python3[6865]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-baad-1820-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 02:52:49 np0005532048 python3[6893]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-baad-1820-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 22 02:52:51 np0005532048 python3[6922]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:53:12 np0005532048 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 22 02:53:16 np0005532048 python3[6951]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:53:50 np0005532048 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 22 02:53:50 np0005532048 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 22 02:53:50 np0005532048 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 22 02:53:50 np0005532048 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 22 02:53:50 np0005532048 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 22 02:53:50 np0005532048 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 22 02:53:50 np0005532048 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 22 02:53:50 np0005532048 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 22 02:53:50 np0005532048 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 22 02:53:50 np0005532048 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 22 02:53:50 np0005532048 NetworkManager[858]: <info>  [1763798030.4478] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 22 02:53:50 np0005532048 systemd-udevd[6953]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 02:53:50 np0005532048 NetworkManager[858]: <info>  [1763798030.4633] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 02:53:50 np0005532048 NetworkManager[858]: <info>  [1763798030.4658] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 22 02:53:50 np0005532048 NetworkManager[858]: <info>  [1763798030.4661] device (eth1): carrier: link connected
Nov 22 02:53:50 np0005532048 NetworkManager[858]: <info>  [1763798030.4663] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 22 02:53:50 np0005532048 NetworkManager[858]: <info>  [1763798030.4668] policy: auto-activating connection 'Wired connection 1' (08c94766-55bc-38ad-81bd-916fcb461dc3)
Nov 22 02:53:50 np0005532048 NetworkManager[858]: <info>  [1763798030.4671] device (eth1): Activation: starting connection 'Wired connection 1' (08c94766-55bc-38ad-81bd-916fcb461dc3)
Nov 22 02:53:50 np0005532048 NetworkManager[858]: <info>  [1763798030.4672] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 02:53:50 np0005532048 NetworkManager[858]: <info>  [1763798030.4675] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 02:53:50 np0005532048 NetworkManager[858]: <info>  [1763798030.4678] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 02:53:50 np0005532048 NetworkManager[858]: <info>  [1763798030.4682] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 22 02:53:51 np0005532048 python3[6980]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-a7af-b846-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 02:54:01 np0005532048 python3[7060]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:54:01 np0005532048 python3[7133]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763798041.0310328-102-36994374769185/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=8be629ff3eff46a278f4be8a08af6d66afb327cd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:54:02 np0005532048 python3[7183]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 02:54:02 np0005532048 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 22 02:54:02 np0005532048 systemd[1]: Stopped Network Manager Wait Online.
Nov 22 02:54:02 np0005532048 systemd[1]: Stopping Network Manager Wait Online...
Nov 22 02:54:02 np0005532048 systemd[1]: Stopping Network Manager...
Nov 22 02:54:02 np0005532048 NetworkManager[858]: <info>  [1763798042.5504] caught SIGTERM, shutting down normally.
Nov 22 02:54:02 np0005532048 NetworkManager[858]: <info>  [1763798042.5514] dhcp4 (eth0): canceled DHCP transaction
Nov 22 02:54:02 np0005532048 NetworkManager[858]: <info>  [1763798042.5515] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 02:54:02 np0005532048 NetworkManager[858]: <info>  [1763798042.5516] dhcp4 (eth0): state changed no lease
Nov 22 02:54:02 np0005532048 NetworkManager[858]: <info>  [1763798042.5518] manager: NetworkManager state is now CONNECTING
Nov 22 02:54:02 np0005532048 NetworkManager[858]: <info>  [1763798042.5604] dhcp4 (eth1): canceled DHCP transaction
Nov 22 02:54:02 np0005532048 NetworkManager[858]: <info>  [1763798042.5606] dhcp4 (eth1): state changed no lease
Nov 22 02:54:02 np0005532048 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 02:54:02 np0005532048 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 02:54:02 np0005532048 NetworkManager[858]: <info>  [1763798042.6074] exiting (success)
Nov 22 02:54:02 np0005532048 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 22 02:54:02 np0005532048 systemd[1]: Stopped Network Manager.
Nov 22 02:54:02 np0005532048 systemd[1]: NetworkManager.service: Consumed 6.321s CPU time, 10.0M memory peak.
Nov 22 02:54:02 np0005532048 systemd[1]: Starting Network Manager...
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.6710] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:4e2dc5c3-ddd6-4720-b04d-99a9b483bca6)
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.6712] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.6766] manager[0x562a30926070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 22 02:54:02 np0005532048 systemd[1]: Starting Hostname Service...
Nov 22 02:54:02 np0005532048 systemd[1]: Started Hostname Service.
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7519] hostname: hostname: using hostnamed
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7519] hostname: static hostname changed from (none) to "np0005532048.novalocal"
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7522] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7526] manager[0x562a30926070]: rfkill: Wi-Fi hardware radio set enabled
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7527] manager[0x562a30926070]: rfkill: WWAN hardware radio set enabled
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7548] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7548] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7549] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7549] manager: Networking is enabled by state file
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7552] settings: Loaded settings plugin: keyfile (internal)
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7555] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7576] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7582] dhcp: init: Using DHCP client 'internal'
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7585] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7588] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7592] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7598] device (lo): Activation: starting connection 'lo' (07a5ca92-286d-43ed-b1fe-7289a1f61143)
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7603] device (eth0): carrier: link connected
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7607] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7610] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7610] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7615] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7619] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7623] device (eth1): carrier: link connected
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7627] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7631] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (08c94766-55bc-38ad-81bd-916fcb461dc3) (indicated)
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7631] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7636] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7641] device (eth1): Activation: starting connection 'Wired connection 1' (08c94766-55bc-38ad-81bd-916fcb461dc3)
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7646] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 22 02:54:02 np0005532048 systemd[1]: Started Network Manager.
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7649] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7651] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7653] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7655] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7657] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7659] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7662] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7664] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7669] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7671] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7678] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7680] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7694] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7697] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7701] device (lo): Activation: successful, device activated.
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7707] dhcp4 (eth0): state changed new lease, address=38.129.56.62
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.7712] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 22 02:54:02 np0005532048 systemd[1]: Starting Network Manager Wait Online...
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.8618] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.8635] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.8638] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.8642] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.8646] device (eth0): Activation: successful, device activated.
Nov 22 02:54:02 np0005532048 NetworkManager[7200]: <info>  [1763798042.8651] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 22 02:54:03 np0005532048 python3[7268]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-a7af-b846-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 02:54:09 np0005532048 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 22 02:54:09 np0005532048 systemd[4306]: Starting Mark boot as successful...
Nov 22 02:54:09 np0005532048 systemd[4306]: Finished Mark boot as successful.
Nov 22 02:54:09 np0005532048 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 22 02:54:09 np0005532048 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 22 02:54:09 np0005532048 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 22 02:54:12 np0005532048 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 02:54:32 np0005532048 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.1963] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 02:54:48 np0005532048 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 02:54:48 np0005532048 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2251] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2254] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2266] device (eth1): Activation: successful, device activated.
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2274] manager: startup complete
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2276] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <warn>  [1763798088.2280] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2287] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 22 02:54:48 np0005532048 systemd[1]: Finished Network Manager Wait Online.
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2374] dhcp4 (eth1): canceled DHCP transaction
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2374] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2375] dhcp4 (eth1): state changed no lease
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2391] policy: auto-activating connection 'ci-private-network' (b5201727-8c94-54a5-91fa-803796ec41d6)
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2395] device (eth1): Activation: starting connection 'ci-private-network' (b5201727-8c94-54a5-91fa-803796ec41d6)
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2396] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2398] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2407] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2414] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2875] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2881] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 02:54:48 np0005532048 NetworkManager[7200]: <info>  [1763798088.2894] device (eth1): Activation: successful, device activated.
Nov 22 02:54:58 np0005532048 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 02:55:03 np0005532048 systemd-logind[822]: Session 1 logged out. Waiting for processes to exit.
Nov 22 02:55:05 np0005532048 systemd-logind[822]: New session 3 of user zuul.
Nov 22 02:55:05 np0005532048 systemd[1]: Started Session 3 of User zuul.
Nov 22 02:55:05 np0005532048 python3[7381]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:55:06 np0005532048 python3[7454]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763798105.3549316-267-230414542227228/source _original_basename=tmpfst4u5dl follow=False checksum=641e0d5781e0d7c0508632d81af9e2593412b086 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:55:08 np0005532048 systemd[1]: session-3.scope: Deactivated successfully.
Nov 22 02:55:08 np0005532048 systemd-logind[822]: Session 3 logged out. Waiting for processes to exit.
Nov 22 02:55:08 np0005532048 systemd-logind[822]: Removed session 3.
Nov 22 02:57:50 np0005532048 systemd[4306]: Created slice User Background Tasks Slice.
Nov 22 02:57:50 np0005532048 systemd[4306]: Starting Cleanup of User's Temporary Files and Directories...
Nov 22 02:57:50 np0005532048 systemd[4306]: Finished Cleanup of User's Temporary Files and Directories.
Nov 22 03:00:41 np0005532048 systemd-logind[822]: New session 4 of user zuul.
Nov 22 03:00:41 np0005532048 systemd[1]: Started Session 4 of User zuul.
Nov 22 03:00:41 np0005532048 python3[7513]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-ece7-fc42-000000001cba-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:00:41 np0005532048 python3[7542]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:00:42 np0005532048 python3[7568]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:00:42 np0005532048 python3[7594]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:00:42 np0005532048 python3[7620]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:00:43 np0005532048 python3[7646]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:00:43 np0005532048 python3[7724]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:00:43 np0005532048 python3[7797]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763798443.300582-466-438003874250/source _original_basename=tmpa0zqyk9q follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:00:44 np0005532048 python3[7847]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 03:00:44 np0005532048 systemd[1]: Reloading.
Nov 22 03:00:44 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:00:46 np0005532048 python3[7903]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 22 03:00:46 np0005532048 python3[7929]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:00:47 np0005532048 python3[7957]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:00:47 np0005532048 python3[7985]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:00:47 np0005532048 python3[8013]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:00:48 np0005532048 python3[8040]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-ece7-fc42-000000001cc1-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:00:48 np0005532048 python3[8070]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 03:00:50 np0005532048 systemd-logind[822]: Session 4 logged out. Waiting for processes to exit.
Nov 22 03:00:50 np0005532048 systemd[1]: session-4.scope: Deactivated successfully.
Nov 22 03:00:50 np0005532048 systemd[1]: session-4.scope: Consumed 4.093s CPU time.
Nov 22 03:00:50 np0005532048 systemd-logind[822]: Removed session 4.
Nov 22 03:00:52 np0005532048 systemd-logind[822]: New session 5 of user zuul.
Nov 22 03:00:52 np0005532048 systemd[1]: Started Session 5 of User zuul.
Nov 22 03:00:53 np0005532048 python3[8104]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 03:01:23 np0005532048 kernel: SELinux:  Converting 386 SID table entries...
Nov 22 03:01:23 np0005532048 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 03:01:23 np0005532048 kernel: SELinux:  policy capability open_perms=1
Nov 22 03:01:23 np0005532048 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 03:01:23 np0005532048 kernel: SELinux:  policy capability always_check_network=0
Nov 22 03:01:23 np0005532048 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 03:01:23 np0005532048 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 03:01:23 np0005532048 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 03:01:40 np0005532048 kernel: SELinux:  Converting 386 SID table entries...
Nov 22 03:01:40 np0005532048 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 03:01:40 np0005532048 kernel: SELinux:  policy capability open_perms=1
Nov 22 03:01:40 np0005532048 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 03:01:40 np0005532048 kernel: SELinux:  policy capability always_check_network=0
Nov 22 03:01:40 np0005532048 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 03:01:40 np0005532048 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 03:01:40 np0005532048 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 03:01:57 np0005532048 kernel: SELinux:  Converting 386 SID table entries...
Nov 22 03:01:57 np0005532048 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 03:01:57 np0005532048 kernel: SELinux:  policy capability open_perms=1
Nov 22 03:01:57 np0005532048 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 03:01:57 np0005532048 kernel: SELinux:  policy capability always_check_network=0
Nov 22 03:01:57 np0005532048 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 03:01:57 np0005532048 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 03:01:57 np0005532048 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 03:02:00 np0005532048 setsebool[8188]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 22 03:02:00 np0005532048 setsebool[8188]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 22 03:02:14 np0005532048 kernel: SELinux:  Converting 389 SID table entries...
Nov 22 03:02:14 np0005532048 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 03:02:14 np0005532048 kernel: SELinux:  policy capability open_perms=1
Nov 22 03:02:14 np0005532048 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 03:02:14 np0005532048 kernel: SELinux:  policy capability always_check_network=0
Nov 22 03:02:14 np0005532048 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 03:02:14 np0005532048 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 03:02:14 np0005532048 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 03:02:40 np0005532048 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 22 03:02:40 np0005532048 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 03:02:40 np0005532048 systemd[1]: Starting man-db-cache-update.service...
Nov 22 03:02:40 np0005532048 systemd[1]: Reloading.
Nov 22 03:02:40 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:02:40 np0005532048 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 03:02:50 np0005532048 python3[13053]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-402c-0dc4-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:02:51 np0005532048 kernel: evm: overlay not supported
Nov 22 03:02:52 np0005532048 systemd[4306]: Starting D-Bus User Message Bus...
Nov 22 03:02:52 np0005532048 dbus-broker-launch[13868]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 22 03:02:52 np0005532048 dbus-broker-launch[13868]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 22 03:02:52 np0005532048 systemd[4306]: Started D-Bus User Message Bus.
Nov 22 03:02:52 np0005532048 dbus-broker-lau[13868]: Ready
Nov 22 03:02:52 np0005532048 systemd[4306]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 22 03:02:52 np0005532048 systemd[4306]: Created slice Slice /user.
Nov 22 03:02:52 np0005532048 systemd[4306]: podman-13640.scope: unit configures an IP firewall, but not running as root.
Nov 22 03:02:52 np0005532048 systemd[4306]: (This warning is only shown for the first unit using IP firewalling.)
Nov 22 03:02:52 np0005532048 systemd[4306]: Started podman-13640.scope.
Nov 22 03:02:52 np0005532048 systemd[4306]: Started podman-pause-fd95ab19.scope.
Nov 22 03:02:53 np0005532048 python3[14012]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.212:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.212:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:02:53 np0005532048 python3[14012]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 22 03:02:53 np0005532048 systemd[1]: session-5.scope: Deactivated successfully.
Nov 22 03:02:53 np0005532048 systemd[1]: session-5.scope: Consumed 1min 5.297s CPU time.
Nov 22 03:02:53 np0005532048 systemd-logind[822]: Session 5 logged out. Waiting for processes to exit.
Nov 22 03:02:53 np0005532048 systemd-logind[822]: Removed session 5.
Nov 22 03:03:17 np0005532048 irqbalance[818]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 22 03:03:17 np0005532048 irqbalance[818]: IRQ 27 affinity is now unmanaged
Nov 22 03:03:23 np0005532048 systemd-logind[822]: New session 6 of user zuul.
Nov 22 03:03:23 np0005532048 systemd[1]: Started Session 6 of User zuul.
Nov 22 03:03:23 np0005532048 python3[22953]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJKl5kSeiCnX2hwjZgoyJcoBKVg4g9mojByqg+dpEMqpi3Akh3VYt2+EBmsuTq22j6hCe5qyN8ksWGqyA4OUtJs= zuul@np0005532043.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 03:03:24 np0005532048 python3[23126]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJKl5kSeiCnX2hwjZgoyJcoBKVg4g9mojByqg+dpEMqpi3Akh3VYt2+EBmsuTq22j6hCe5qyN8ksWGqyA4OUtJs= zuul@np0005532043.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 03:03:24 np0005532048 python3[23395]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005532048.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 22 03:03:30 np0005532048 python3[24139]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJKl5kSeiCnX2hwjZgoyJcoBKVg4g9mojByqg+dpEMqpi3Akh3VYt2+EBmsuTq22j6hCe5qyN8ksWGqyA4OUtJs= zuul@np0005532043.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 03:03:30 np0005532048 python3[24414]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:03:31 np0005532048 python3[24653]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763798610.5632887-135-69729302841955/source _original_basename=tmpdfhjqdg0 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:03:32 np0005532048 python3[24916]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 22 03:03:32 np0005532048 systemd[1]: Starting Hostname Service...
Nov 22 03:03:32 np0005532048 systemd[1]: Started Hostname Service.
Nov 22 03:03:32 np0005532048 systemd-hostnamed[25004]: Changed pretty hostname to 'compute-0'
Nov 22 03:03:32 np0005532048 systemd-hostnamed[25004]: Hostname set to <compute-0> (static)
Nov 22 03:03:32 np0005532048 NetworkManager[7200]: <info>  [1763798612.3944] hostname: static hostname changed from "np0005532048.novalocal" to "compute-0"
Nov 22 03:03:32 np0005532048 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 03:03:32 np0005532048 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 03:03:32 np0005532048 systemd[1]: session-6.scope: Deactivated successfully.
Nov 22 03:03:32 np0005532048 systemd[1]: session-6.scope: Consumed 2.299s CPU time.
Nov 22 03:03:32 np0005532048 systemd-logind[822]: Session 6 logged out. Waiting for processes to exit.
Nov 22 03:03:32 np0005532048 systemd-logind[822]: Removed session 6.
Nov 22 03:03:42 np0005532048 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 03:03:52 np0005532048 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:03:52 np0005532048 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:03:52 np0005532048 systemd[1]: man-db-cache-update.service: Consumed 55.062s CPU time.
Nov 22 03:03:52 np0005532048 systemd[1]: run-r57bd89d02fc642a9adeca819b75e590a.service: Deactivated successfully.
Nov 22 03:04:02 np0005532048 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 03:08:59 np0005532048 systemd-logind[822]: New session 7 of user zuul.
Nov 22 03:08:59 np0005532048 systemd[1]: Started Session 7 of User zuul.
Nov 22 03:09:00 np0005532048 python3[30024]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:09:01 np0005532048 python3[30140]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:09:02 np0005532048 python3[30213]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763798941.25606-33565-270576816327046/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:09:02 np0005532048 python3[30239]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:09:02 np0005532048 python3[30312]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763798941.25606-33565-270576816327046/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:09:03 np0005532048 python3[30338]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:09:03 np0005532048 python3[30411]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763798941.25606-33565-270576816327046/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:09:03 np0005532048 python3[30437]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:09:03 np0005532048 python3[30510]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763798941.25606-33565-270576816327046/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:09:04 np0005532048 python3[30536]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:09:04 np0005532048 python3[30609]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763798941.25606-33565-270576816327046/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:09:04 np0005532048 python3[30635]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:09:05 np0005532048 python3[30708]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763798941.25606-33565-270576816327046/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:09:05 np0005532048 python3[30734]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:09:05 np0005532048 python3[30807]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763798941.25606-33565-270576816327046/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:09:21 np0005532048 python3[30865]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:14:21 np0005532048 systemd[1]: session-7.scope: Deactivated successfully.
Nov 22 03:14:21 np0005532048 systemd[1]: session-7.scope: Consumed 4.811s CPU time.
Nov 22 03:14:21 np0005532048 systemd-logind[822]: Session 7 logged out. Waiting for processes to exit.
Nov 22 03:14:21 np0005532048 systemd-logind[822]: Removed session 7.
Nov 22 03:21:13 np0005532048 systemd-logind[822]: New session 8 of user zuul.
Nov 22 03:21:13 np0005532048 systemd[1]: Started Session 8 of User zuul.
Nov 22 03:21:14 np0005532048 python3.9[31023]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:21:15 np0005532048 python3.9[31204]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:21:24 np0005532048 systemd[1]: session-8.scope: Deactivated successfully.
Nov 22 03:21:24 np0005532048 systemd[1]: session-8.scope: Consumed 8.484s CPU time.
Nov 22 03:21:24 np0005532048 systemd-logind[822]: Session 8 logged out. Waiting for processes to exit.
Nov 22 03:21:24 np0005532048 systemd-logind[822]: Removed session 8.
Nov 22 03:21:39 np0005532048 systemd-logind[822]: New session 9 of user zuul.
Nov 22 03:21:39 np0005532048 systemd[1]: Started Session 9 of User zuul.
Nov 22 03:21:40 np0005532048 python3.9[31415]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 22 03:21:41 np0005532048 python3.9[31589]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:21:42 np0005532048 python3.9[31741]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:21:43 np0005532048 python3.9[31894]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:21:43 np0005532048 python3.9[32046]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:44 np0005532048 python3.9[32198]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:45 np0005532048 python3.9[32321]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799704.0822575-73-167317791139940/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:46 np0005532048 python3.9[32473]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:21:46 np0005532048 python3.9[32629]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:21:47 np0005532048 python3.9[32781]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:21:48 np0005532048 python3.9[32931]: ansible-ansible.builtin.service_facts Invoked
Nov 22 03:21:53 np0005532048 python3.9[33184]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:54 np0005532048 python3.9[33335]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:21:55 np0005532048 python3.9[33489]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:21:56 np0005532048 python3.9[33647]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:21:57 np0005532048 python3.9[33731]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:22:47 np0005532048 systemd[1]: Reloading.
Nov 22 03:22:47 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:22:47 np0005532048 systemd[1]: Starting dnf makecache...
Nov 22 03:22:47 np0005532048 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 22 03:22:47 np0005532048 dnf[33939]: Failed determining last makecache time.
Nov 22 03:22:47 np0005532048 dnf[33939]: delorean-openstack-barbican-42b4c41831408a8e323 106 kB/s | 3.0 kB     00:00
Nov 22 03:22:47 np0005532048 dnf[33939]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 130 kB/s | 3.0 kB     00:00
Nov 22 03:22:47 np0005532048 dnf[33939]: delorean-openstack-cinder-1c00d6490d88e436f26ef 122 kB/s | 3.0 kB     00:00
Nov 22 03:22:47 np0005532048 systemd[1]: Reloading.
Nov 22 03:22:47 np0005532048 dnf[33939]: delorean-python-stevedore-c4acc5639fd2329372142 105 kB/s | 3.0 kB     00:00
Nov 22 03:22:47 np0005532048 dnf[33939]: delorean-python-observabilityclient-2f31846d73c 113 kB/s | 3.0 kB     00:00
Nov 22 03:22:47 np0005532048 dnf[33939]: delorean-os-net-config-bbae2ed8a159b0435a473f38 134 kB/s | 3.0 kB     00:00
Nov 22 03:22:47 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:22:47 np0005532048 dnf[33939]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 111 kB/s | 3.0 kB     00:00
Nov 22 03:22:47 np0005532048 dnf[33939]: delorean-python-designate-tests-tempest-347fdbc 117 kB/s | 3.0 kB     00:00
Nov 22 03:22:47 np0005532048 dnf[33939]: delorean-openstack-glance-1fd12c29b339f30fe823e 145 kB/s | 3.0 kB     00:00
Nov 22 03:22:47 np0005532048 dnf[33939]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 134 kB/s | 3.0 kB     00:00
Nov 22 03:22:47 np0005532048 dnf[33939]: delorean-openstack-manila-3c01b7181572c95dac462 113 kB/s | 3.0 kB     00:00
Nov 22 03:22:48 np0005532048 dnf[33939]: delorean-python-whitebox-neutron-tests-tempest- 125 kB/s | 3.0 kB     00:00
Nov 22 03:22:48 np0005532048 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 22 03:22:48 np0005532048 dnf[33939]: delorean-openstack-octavia-ba397f07a7331190208c 145 kB/s | 3.0 kB     00:00
Nov 22 03:22:48 np0005532048 dnf[33939]: delorean-openstack-watcher-c014f81a8647287f6dcc 121 kB/s | 3.0 kB     00:00
Nov 22 03:22:48 np0005532048 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 22 03:22:48 np0005532048 dnf[33939]: delorean-python-tcib-1124124ec06aadbac34f0d340b 113 kB/s | 3.0 kB     00:00
Nov 22 03:22:48 np0005532048 systemd[1]: Reloading.
Nov 22 03:22:48 np0005532048 dnf[33939]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 139 kB/s | 3.0 kB     00:00
Nov 22 03:22:48 np0005532048 dnf[33939]: delorean-openstack-swift-dc98a8463506ac520c469a 125 kB/s | 3.0 kB     00:00
Nov 22 03:22:48 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:22:48 np0005532048 dnf[33939]: delorean-python-tempestconf-8515371b7cceebd4282 127 kB/s | 3.0 kB     00:00
Nov 22 03:22:48 np0005532048 dnf[33939]: delorean-openstack-heat-ui-013accbfd179753bc3f0 128 kB/s | 3.0 kB     00:00
Nov 22 03:22:48 np0005532048 dnf[33939]: CentOS Stream 9 - BaseOS                         72 kB/s | 7.3 kB     00:00
Nov 22 03:22:48 np0005532048 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 22 03:22:48 np0005532048 dnf[33939]: CentOS Stream 9 - AppStream                      47 kB/s | 7.4 kB     00:00
Nov 22 03:22:48 np0005532048 dnf[33939]: CentOS Stream 9 - CRB                            75 kB/s | 7.2 kB     00:00
Nov 22 03:22:48 np0005532048 dnf[33939]: CentOS Stream 9 - Extras packages                84 kB/s | 8.3 kB     00:00
Nov 22 03:22:48 np0005532048 dnf[33939]: dlrn-antelope-testing                           141 kB/s | 3.0 kB     00:00
Nov 22 03:22:48 np0005532048 dnf[33939]: dlrn-antelope-build-deps                        125 kB/s | 3.0 kB     00:00
Nov 22 03:22:48 np0005532048 dnf[33939]: centos9-rabbitmq                                119 kB/s | 3.0 kB     00:00
Nov 22 03:22:48 np0005532048 dnf[33939]: centos9-storage                                 124 kB/s | 3.0 kB     00:00
Nov 22 03:22:49 np0005532048 dnf[33939]: centos9-opstools                                117 kB/s | 3.0 kB     00:00
Nov 22 03:22:49 np0005532048 dnf[33939]: NFV SIG OpenvSwitch                             110 kB/s | 3.0 kB     00:00
Nov 22 03:22:49 np0005532048 dbus-broker-launch[805]: Noticed file-system modification, trigger reload.
Nov 22 03:22:49 np0005532048 dbus-broker-launch[805]: Noticed file-system modification, trigger reload.
Nov 22 03:22:49 np0005532048 dbus-broker-launch[805]: Noticed file-system modification, trigger reload.
Nov 22 03:22:49 np0005532048 dnf[33939]: repo-setup-centos-appstream                     164 kB/s | 4.4 kB     00:00
Nov 22 03:22:49 np0005532048 dnf[33939]: repo-setup-centos-baseos                        139 kB/s | 3.9 kB     00:00
Nov 22 03:22:49 np0005532048 dnf[33939]: repo-setup-centos-highavailability              146 kB/s | 3.9 kB     00:00
Nov 22 03:22:49 np0005532048 dnf[33939]: repo-setup-centos-powertools                    180 kB/s | 4.3 kB     00:00
Nov 22 03:22:49 np0005532048 dnf[33939]: Extra Packages for Enterprise Linux 9 - x86_64  264 kB/s |  32 kB     00:00
Nov 22 03:22:50 np0005532048 dnf[33939]: Metadata cache created.
Nov 22 03:22:50 np0005532048 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 22 03:22:50 np0005532048 systemd[1]: Finished dnf makecache.
Nov 22 03:22:50 np0005532048 systemd[1]: dnf-makecache.service: Consumed 2.086s CPU time.
Nov 22 03:24:21 np0005532048 kernel: SELinux:  Converting 2718 SID table entries...
Nov 22 03:24:21 np0005532048 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 03:24:21 np0005532048 kernel: SELinux:  policy capability open_perms=1
Nov 22 03:24:21 np0005532048 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 03:24:21 np0005532048 kernel: SELinux:  policy capability always_check_network=0
Nov 22 03:24:21 np0005532048 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 03:24:21 np0005532048 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 03:24:21 np0005532048 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 03:24:22 np0005532048 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 22 03:24:22 np0005532048 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 03:24:22 np0005532048 systemd[1]: Starting man-db-cache-update.service...
Nov 22 03:24:22 np0005532048 systemd[1]: Reloading.
Nov 22 03:24:22 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:24:22 np0005532048 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 03:24:24 np0005532048 python3.9[35287]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:24:25 np0005532048 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:24:25 np0005532048 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:24:25 np0005532048 systemd[1]: man-db-cache-update.service: Consumed 1.294s CPU time.
Nov 22 03:24:25 np0005532048 systemd[1]: run-r80d5210d66b54cc5a3334bdb79d707a1.service: Deactivated successfully.
Nov 22 03:24:28 np0005532048 python3.9[35570]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 22 03:24:28 np0005532048 python3.9[35722]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 22 03:24:32 np0005532048 python3.9[35875]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:24:33 np0005532048 python3.9[36027]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 22 03:24:34 np0005532048 python3.9[36179]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:24:40 np0005532048 python3.9[36331]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:24:41 np0005532048 python3.9[36455]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763799874.9183316-236-121560180502265/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7a47cfef5659f96e38749b52219b95e14d8a2625 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:24:42 np0005532048 python3.9[36607]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:24:42 np0005532048 python3.9[36759]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:24:43 np0005532048 python3.9[36912]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:24:44 np0005532048 python3.9[37064]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 22 03:24:44 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:24:45 np0005532048 python3.9[37218]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 03:24:46 np0005532048 python3.9[37376]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 03:24:46 np0005532048 python3.9[37536]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 22 03:24:47 np0005532048 python3.9[37689]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 03:24:48 np0005532048 python3.9[37847]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 22 03:24:49 np0005532048 python3.9[37999]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:24:53 np0005532048 python3.9[38152]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:24:54 np0005532048 python3.9[38304]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:24:55 np0005532048 python3.9[38427]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799893.9406748-355-101981231717853/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:24:56 np0005532048 python3.9[38579]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:24:56 np0005532048 systemd[1]: Starting Load Kernel Modules...
Nov 22 03:24:56 np0005532048 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 22 03:24:56 np0005532048 kernel: Bridge firewalling registered
Nov 22 03:24:56 np0005532048 systemd-modules-load[38583]: Inserted module 'br_netfilter'
Nov 22 03:24:56 np0005532048 systemd[1]: Finished Load Kernel Modules.
Nov 22 03:24:57 np0005532048 python3.9[38738]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:24:57 np0005532048 python3.9[38861]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763799896.7295916-378-271866891486177/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:24:58 np0005532048 python3.9[39013]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:25:04 np0005532048 dbus-broker-launch[805]: Noticed file-system modification, trigger reload.
Nov 22 03:25:04 np0005532048 dbus-broker-launch[805]: Noticed file-system modification, trigger reload.
Nov 22 03:25:05 np0005532048 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 03:25:05 np0005532048 systemd[1]: Starting man-db-cache-update.service...
Nov 22 03:25:05 np0005532048 systemd[1]: Reloading.
Nov 22 03:25:05 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:25:05 np0005532048 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 03:25:07 np0005532048 python3.9[40532]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:25:08 np0005532048 python3.9[41408]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 22 03:25:08 np0005532048 python3.9[42223]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:25:09 np0005532048 python3.9[42952]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:09 np0005532048 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 22 03:25:10 np0005532048 systemd[1]: Starting Authorization Manager...
Nov 22 03:25:10 np0005532048 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 22 03:25:10 np0005532048 polkitd[43402]: Started polkitd version 0.117
Nov 22 03:25:10 np0005532048 systemd[1]: Started Authorization Manager.
Nov 22 03:25:11 np0005532048 python3.9[43572]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:25:11 np0005532048 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 22 03:25:11 np0005532048 systemd[1]: tuned.service: Deactivated successfully.
Nov 22 03:25:11 np0005532048 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 22 03:25:11 np0005532048 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 22 03:25:11 np0005532048 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 22 03:25:12 np0005532048 python3.9[43734]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 22 03:25:14 np0005532048 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:25:14 np0005532048 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:25:14 np0005532048 systemd[1]: man-db-cache-update.service: Consumed 5.437s CPU time.
Nov 22 03:25:14 np0005532048 systemd[1]: run-rccd4ba90d21a4fa880f95784b9127be8.service: Deactivated successfully.
Nov 22 03:25:14 np0005532048 python3.9[43887]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:25:14 np0005532048 systemd[1]: Reloading.
Nov 22 03:25:14 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:25:15 np0005532048 python3.9[44075]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:25:15 np0005532048 systemd[1]: Reloading.
Nov 22 03:25:16 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:25:17 np0005532048 python3.9[44264]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:17 np0005532048 python3.9[44417]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:17 np0005532048 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 22 03:25:18 np0005532048 python3.9[44570]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:21 np0005532048 python3.9[44732]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:22 np0005532048 python3.9[44885]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:25:22 np0005532048 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 22 03:25:22 np0005532048 systemd[1]: Stopped Apply Kernel Variables.
Nov 22 03:25:22 np0005532048 systemd[1]: Stopping Apply Kernel Variables...
Nov 22 03:25:22 np0005532048 systemd[1]: Starting Apply Kernel Variables...
Nov 22 03:25:22 np0005532048 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 22 03:25:22 np0005532048 systemd[1]: Finished Apply Kernel Variables.
Nov 22 03:25:22 np0005532048 systemd[1]: session-9.scope: Deactivated successfully.
Nov 22 03:25:22 np0005532048 systemd[1]: session-9.scope: Consumed 2min 25.103s CPU time.
Nov 22 03:25:22 np0005532048 systemd-logind[822]: Session 9 logged out. Waiting for processes to exit.
Nov 22 03:25:22 np0005532048 systemd-logind[822]: Removed session 9.
Nov 22 03:25:27 np0005532048 systemd-logind[822]: New session 10 of user zuul.
Nov 22 03:25:27 np0005532048 systemd[1]: Started Session 10 of User zuul.
Nov 22 03:25:29 np0005532048 python3.9[45068]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:25:30 np0005532048 python3.9[45224]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 22 03:25:31 np0005532048 python3.9[45377]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 03:25:32 np0005532048 python3.9[45535]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 03:25:34 np0005532048 python3.9[45695]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:25:35 np0005532048 python3.9[45779]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 03:25:40 np0005532048 python3.9[45943]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:25:55 np0005532048 kernel: SELinux:  Converting 2730 SID table entries...
Nov 22 03:25:55 np0005532048 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 03:25:55 np0005532048 kernel: SELinux:  policy capability open_perms=1
Nov 22 03:25:55 np0005532048 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 03:25:55 np0005532048 kernel: SELinux:  policy capability always_check_network=0
Nov 22 03:25:55 np0005532048 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 03:25:55 np0005532048 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 03:25:55 np0005532048 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 03:25:56 np0005532048 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 22 03:25:56 np0005532048 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 22 03:25:58 np0005532048 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 03:25:58 np0005532048 systemd[1]: Starting man-db-cache-update.service...
Nov 22 03:25:58 np0005532048 systemd[1]: Reloading.
Nov 22 03:25:58 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:25:58 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:25:59 np0005532048 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 03:26:00 np0005532048 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:26:00 np0005532048 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:26:00 np0005532048 systemd[1]: run-r96da07b333694acc9172c4aa6bfbfc6a.service: Deactivated successfully.
Nov 22 03:26:01 np0005532048 python3.9[47041]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:26:01 np0005532048 systemd[1]: Reloading.
Nov 22 03:26:01 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:26:01 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:26:01 np0005532048 systemd[1]: Starting Open vSwitch Database Unit...
Nov 22 03:26:01 np0005532048 chown[47083]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 22 03:26:01 np0005532048 ovs-ctl[47088]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 22 03:26:02 np0005532048 ovs-ctl[47088]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 22 03:26:02 np0005532048 ovs-ctl[47088]: Starting ovsdb-server [  OK  ]
Nov 22 03:26:02 np0005532048 ovs-vsctl[47138]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 22 03:26:02 np0005532048 ovs-vsctl[47158]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"26987bf4-0c95-4db6-9113-da9e4051262c\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 22 03:26:02 np0005532048 ovs-ctl[47088]: Configuring Open vSwitch system IDs [  OK  ]
Nov 22 03:26:02 np0005532048 ovs-ctl[47088]: Enabling remote OVSDB managers [  OK  ]
Nov 22 03:26:02 np0005532048 systemd[1]: Started Open vSwitch Database Unit.
Nov 22 03:26:02 np0005532048 ovs-vsctl[47164]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 22 03:26:02 np0005532048 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 22 03:26:02 np0005532048 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 22 03:26:02 np0005532048 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 22 03:26:02 np0005532048 kernel: openvswitch: Open vSwitch switching datapath
Nov 22 03:26:02 np0005532048 ovs-ctl[47209]: Inserting openvswitch module [  OK  ]
Nov 22 03:26:02 np0005532048 ovs-ctl[47178]: Starting ovs-vswitchd [  OK  ]
Nov 22 03:26:02 np0005532048 ovs-vsctl[47226]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 22 03:26:02 np0005532048 ovs-ctl[47178]: Enabling remote OVSDB managers [  OK  ]
Nov 22 03:26:02 np0005532048 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 22 03:26:02 np0005532048 systemd[1]: Starting Open vSwitch...
Nov 22 03:26:02 np0005532048 systemd[1]: Finished Open vSwitch.
Nov 22 03:26:03 np0005532048 python3.9[47378]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:26:04 np0005532048 python3.9[47530]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 22 03:26:07 np0005532048 kernel: SELinux:  Converting 2744 SID table entries...
Nov 22 03:26:07 np0005532048 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 03:26:07 np0005532048 kernel: SELinux:  policy capability open_perms=1
Nov 22 03:26:07 np0005532048 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 03:26:07 np0005532048 kernel: SELinux:  policy capability always_check_network=0
Nov 22 03:26:07 np0005532048 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 03:26:07 np0005532048 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 03:26:07 np0005532048 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 03:26:08 np0005532048 python3.9[47685]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:26:08 np0005532048 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 22 03:26:09 np0005532048 python3.9[47843]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:26:11 np0005532048 python3.9[47996]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:13 np0005532048 python3.9[48283]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 03:26:13 np0005532048 python3.9[48433]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:26:14 np0005532048 python3.9[48587]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:26:17 np0005532048 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 03:26:17 np0005532048 systemd[1]: Starting man-db-cache-update.service...
Nov 22 03:26:17 np0005532048 systemd[1]: Reloading.
Nov 22 03:26:17 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:26:17 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:26:18 np0005532048 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 03:26:20 np0005532048 python3.9[48902]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:26:20 np0005532048 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 22 03:26:20 np0005532048 systemd[1]: Stopped Network Manager Wait Online.
Nov 22 03:26:20 np0005532048 systemd[1]: Stopping Network Manager Wait Online...
Nov 22 03:26:20 np0005532048 systemd[1]: Stopping Network Manager...
Nov 22 03:26:20 np0005532048 NetworkManager[7200]: <info>  [1763799980.3824] caught SIGTERM, shutting down normally.
Nov 22 03:26:20 np0005532048 NetworkManager[7200]: <info>  [1763799980.3846] dhcp4 (eth0): canceled DHCP transaction
Nov 22 03:26:20 np0005532048 NetworkManager[7200]: <info>  [1763799980.3847] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 03:26:20 np0005532048 NetworkManager[7200]: <info>  [1763799980.3847] dhcp4 (eth0): state changed no lease
Nov 22 03:26:20 np0005532048 NetworkManager[7200]: <info>  [1763799980.3849] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 03:26:20 np0005532048 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 03:26:20 np0005532048 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 03:26:20 np0005532048 NetworkManager[7200]: <info>  [1763799980.5427] exiting (success)
Nov 22 03:26:20 np0005532048 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 22 03:26:20 np0005532048 systemd[1]: Stopped Network Manager.
Nov 22 03:26:20 np0005532048 systemd[1]: NetworkManager.service: Consumed 16.795s CPU time, 4.1M memory peak, read 0B from disk, written 26.5K to disk.
Nov 22 03:26:20 np0005532048 systemd[1]: Starting Network Manager...
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.6078] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:4e2dc5c3-ddd6-4720-b04d-99a9b483bca6)
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.6079] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.6147] manager[0x55eee3122090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 22 03:26:20 np0005532048 systemd[1]: Starting Hostname Service...
Nov 22 03:26:20 np0005532048 systemd[1]: Started Hostname Service.
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7002] hostname: hostname: using hostnamed
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7002] hostname: static hostname changed from (none) to "compute-0"
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7009] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7015] manager[0x55eee3122090]: rfkill: Wi-Fi hardware radio set enabled
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7016] manager[0x55eee3122090]: rfkill: WWAN hardware radio set enabled
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7037] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7048] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7049] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7049] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7050] manager: Networking is enabled by state file
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7053] settings: Loaded settings plugin: keyfile (internal)
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7056] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7089] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7101] dhcp: init: Using DHCP client 'internal'
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7104] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7111] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7118] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7130] device (lo): Activation: starting connection 'lo' (07a5ca92-286d-43ed-b1fe-7289a1f61143)
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7137] device (eth0): carrier: link connected
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7141] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7147] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7148] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7157] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7165] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7171] device (eth1): carrier: link connected
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7176] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7182] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (b5201727-8c94-54a5-91fa-803796ec41d6) (indicated)
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7182] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7189] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7196] device (eth1): Activation: starting connection 'ci-private-network' (b5201727-8c94-54a5-91fa-803796ec41d6)
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7202] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 22 03:26:20 np0005532048 systemd[1]: Started Network Manager.
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7216] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7221] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7224] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7227] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7232] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7235] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7238] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7251] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7262] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7266] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7276] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7289] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7296] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7298] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7303] device (lo): Activation: successful, device activated.
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7324] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7327] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7331] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 22 03:26:20 np0005532048 NetworkManager[48920]: <info>  [1763799980.7350] device (eth1): Activation: successful, device activated.
Nov 22 03:26:20 np0005532048 systemd[1]: Starting Network Manager Wait Online...
Nov 22 03:26:21 np0005532048 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:26:21 np0005532048 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:26:21 np0005532048 systemd[1]: run-r76edce2a1184467aa5917ac1ada07318.service: Deactivated successfully.
Nov 22 03:26:21 np0005532048 python3.9[49110]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:26:22 np0005532048 NetworkManager[48920]: <info>  [1763799982.2396] dhcp4 (eth0): state changed new lease, address=38.129.56.62
Nov 22 03:26:22 np0005532048 NetworkManager[48920]: <info>  [1763799982.2407] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 22 03:26:22 np0005532048 NetworkManager[48920]: <info>  [1763799982.2696] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 03:26:22 np0005532048 NetworkManager[48920]: <info>  [1763799982.2740] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 03:26:22 np0005532048 NetworkManager[48920]: <info>  [1763799982.2742] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 03:26:22 np0005532048 NetworkManager[48920]: <info>  [1763799982.2747] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 03:26:22 np0005532048 NetworkManager[48920]: <info>  [1763799982.2751] device (eth0): Activation: successful, device activated.
Nov 22 03:26:22 np0005532048 NetworkManager[48920]: <info>  [1763799982.2757] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 22 03:26:22 np0005532048 NetworkManager[48920]: <info>  [1763799982.2760] manager: startup complete
Nov 22 03:26:22 np0005532048 systemd[1]: Finished Network Manager Wait Online.
Nov 22 03:26:30 np0005532048 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 03:26:30 np0005532048 systemd[1]: Starting man-db-cache-update.service...
Nov 22 03:26:30 np0005532048 systemd[1]: Reloading.
Nov 22 03:26:30 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:26:30 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:26:30 np0005532048 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 03:26:32 np0005532048 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 03:26:35 np0005532048 python3.9[49589]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:26:36 np0005532048 python3.9[49741]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:26:36 np0005532048 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:26:36 np0005532048 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:26:36 np0005532048 systemd[1]: run-r7f8bb14706654e6c9addca73432ca468.service: Deactivated successfully.
Nov 22 03:26:37 np0005532048 python3.9[49896]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:26:37 np0005532048 python3.9[50048]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:26:38 np0005532048 python3.9[50200]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:26:39 np0005532048 python3.9[50352]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:26:39 np0005532048 python3.9[50504]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:26:40 np0005532048 python3.9[50627]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763799999.323297-229-258195714937547/.source _original_basename=.z61oe3k8 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:26:41 np0005532048 python3.9[50779]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:26:42 np0005532048 python3.9[50931]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 22 03:26:42 np0005532048 python3.9[51083]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:26:44 np0005532048 python3.9[51510]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 22 03:26:46 np0005532048 ansible-async_wrapper.py[51685]: Invoked with j125255252547 300 /home/zuul/.ansible/tmp/ansible-tmp-1763800005.2065554-295-214826395354037/AnsiballZ_edpm_os_net_config.py _
Nov 22 03:26:46 np0005532048 ansible-async_wrapper.py[51688]: Starting module and watcher
Nov 22 03:26:46 np0005532048 ansible-async_wrapper.py[51688]: Start watching 51689 (300)
Nov 22 03:26:46 np0005532048 ansible-async_wrapper.py[51689]: Start module (51689)
Nov 22 03:26:46 np0005532048 ansible-async_wrapper.py[51685]: Return async_wrapper task started.
Nov 22 03:26:46 np0005532048 python3.9[51690]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 22 03:26:47 np0005532048 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 22 03:26:47 np0005532048 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 22 03:26:47 np0005532048 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 22 03:26:47 np0005532048 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 22 03:26:47 np0005532048 kernel: cfg80211: failed to load regulatory.db
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.3261] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.3277] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4011] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4013] audit: op="connection-add" uuid="827ece96-f987-4726-ae09-16a1e0b3dae3" name="br-ex-br" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4032] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4034] audit: op="connection-add" uuid="eca6794a-34c4-4c8e-a41e-cda401f81209" name="br-ex-port" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4050] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4052] audit: op="connection-add" uuid="905d00e5-ba83-4d6d-b8f9-4d519e33354c" name="eth1-port" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4065] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4067] audit: op="connection-add" uuid="1507c844-e8e3-4db4-93c8-41edb485baea" name="vlan20-port" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4080] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4082] audit: op="connection-add" uuid="746e293c-9465-488b-8d97-654f120220b4" name="vlan21-port" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4098] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4100] audit: op="connection-add" uuid="b9436095-6a8a-4d3b-b5e7-b35834063c27" name="vlan22-port" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4112] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4113] audit: op="connection-add" uuid="c2c6125a-b894-4ccf-85a7-29a87cd5e439" name="vlan23-port" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4138] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,802-3-ethernet.mtu,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4159] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4160] audit: op="connection-add" uuid="09d3bbe4-6a65-434f-8578-2638a965e688" name="br-ex-if" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4621] audit: op="connection-update" uuid="b5201727-8c94-54a5-91fa-803796ec41d6" name="ci-private-network" args="ovs-external-ids.data,connection.slave-type,connection.timestamp,connection.controller,connection.master,connection.port-type,ipv4.never-default,ipv4.method,ipv4.dns,ipv4.routing-rules,ipv4.addresses,ipv4.routes,ovs-interface.type,ipv6.routes,ipv6.method,ipv6.dns,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.addresses" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4650] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4652] audit: op="connection-add" uuid="3a135260-ceaa-4896-bd86-6e298b166897" name="vlan20-if" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4670] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4672] audit: op="connection-add" uuid="619bbfa3-84b1-4d9f-8ccd-c1d994ce0d71" name="vlan21-if" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4688] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4689] audit: op="connection-add" uuid="6aa6d4b4-9bc1-466d-9f93-27d44ec10be8" name="vlan22-if" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4705] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4708] audit: op="connection-add" uuid="9f4059d1-76e2-49ec-a730-88b72cf40b0a" name="vlan23-if" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4722] audit: op="connection-delete" uuid="08c94766-55bc-38ad-81bd-916fcb461dc3" name="Wired connection 1" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4737] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4748] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4752] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (827ece96-f987-4726-ae09-16a1e0b3dae3)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4752] audit: op="connection-activate" uuid="827ece96-f987-4726-ae09-16a1e0b3dae3" name="br-ex-br" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4755] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4762] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4766] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (eca6794a-34c4-4c8e-a41e-cda401f81209)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4768] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4774] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4778] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (905d00e5-ba83-4d6d-b8f9-4d519e33354c)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4780] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4787] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4791] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (1507c844-e8e3-4db4-93c8-41edb485baea)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4793] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4799] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4803] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (746e293c-9465-488b-8d97-654f120220b4)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4805] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4811] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4815] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (b9436095-6a8a-4d3b-b5e7-b35834063c27)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4817] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4823] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4828] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (c2c6125a-b894-4ccf-85a7-29a87cd5e439)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4829] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4831] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4833] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4839] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4844] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4848] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (09d3bbe4-6a65-434f-8578-2638a965e688)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4848] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4851] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4854] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4855] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4857] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4868] device (eth1): disconnecting for new activation request.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4869] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4921] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4922] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4924] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4927] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4933] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4937] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (3a135260-ceaa-4896-bd86-6e298b166897)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4938] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4942] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4944] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4945] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4949] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4953] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4957] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (619bbfa3-84b1-4d9f-8ccd-c1d994ce0d71)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4958] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4961] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4963] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4964] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4966] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4971] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4975] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (6aa6d4b4-9bc1-466d-9f93-27d44ec10be8)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4976] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4979] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4981] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.4982] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 22 03:26:48 np0005532048 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5000] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5012] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5018] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (9f4059d1-76e2-49ec-a730-88b72cf40b0a)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5019] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5025] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5028] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5031] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5034] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5051] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,802-3-ethernet.mtu,ipv6.method,ipv6.addr-gen-mode" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5055] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5059] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5062] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5074] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5079] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5084] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5089] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5091] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 kernel: ovs-system: entered promiscuous mode
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5096] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5101] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5106] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5111] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 kernel: Timeout policy base is empty
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5116] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5120] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 systemd-udevd[51695]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5124] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5126] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5131] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5136] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5139] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5140] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5146] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5153] dhcp4 (eth0): canceled DHCP transaction
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5154] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5154] dhcp4 (eth0): state changed no lease
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5157] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5167] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5171] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51691 uid=0 result="fail" reason="Device is not activated"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5181] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5285] device (eth1): Activation: starting connection 'ci-private-network' (b5201727-8c94-54a5-91fa-803796ec41d6)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5293] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5296] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5298] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5300] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5302] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5309] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5318] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5325] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5329] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5338] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5343] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5348] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5353] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5356] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5360] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5363] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5367] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5371] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5374] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5378] device (eth1): state change: config -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5380] device (eth1): released from controller device eth1
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5386] device (eth1): disconnecting for new activation request.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5388] audit: op="connection-activate" uuid="b5201727-8c94-54a5-91fa-803796ec41d6" name="ci-private-network" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5389] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5390] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5394] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5397] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5404] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5407] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5410] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5414] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5417] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5424] device (eth1): Activation: starting connection 'ci-private-network' (b5201727-8c94-54a5-91fa-803796ec41d6)
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5434] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5437] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5442] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51691 uid=0 result="success"
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5443] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 kernel: br-ex: entered promiscuous mode
Nov 22 03:26:48 np0005532048 kernel: vlan22: entered promiscuous mode
Nov 22 03:26:48 np0005532048 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 22 03:26:48 np0005532048 systemd-udevd[51697]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:26:48 np0005532048 kernel: vlan21: entered promiscuous mode
Nov 22 03:26:48 np0005532048 kernel: vlan20: entered promiscuous mode
Nov 22 03:26:48 np0005532048 systemd-udevd[51696]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5762] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5777] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5784] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 22 03:26:48 np0005532048 kernel: vlan23: entered promiscuous mode
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5820] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5842] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5855] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5862] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5870] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5879] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5884] device (eth1): Activation: successful, device activated.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5895] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5897] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5900] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5907] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5912] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5918] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5934] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5946] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5947] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5951] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5957] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.5971] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.6207] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.6208] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.6211] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.6216] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.6235] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.6476] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.6479] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.6488] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 03:26:48 np0005532048 NetworkManager[48920]: <info>  [1763800008.6855] dhcp4 (eth0): state changed new lease, address=38.129.56.62
Nov 22 03:26:49 np0005532048 NetworkManager[48920]: <info>  [1763800009.8559] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51691 uid=0 result="success"
Nov 22 03:26:49 np0005532048 python3.9[52054]: ansible-ansible.legacy.async_status Invoked with jid=j125255252547.51685 mode=status _async_dir=/root/.ansible_async
Nov 22 03:26:50 np0005532048 NetworkManager[48920]: <info>  [1763800010.0694] checkpoint[0x55eee30f8950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 22 03:26:50 np0005532048 NetworkManager[48920]: <info>  [1763800010.0696] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51691 uid=0 result="success"
Nov 22 03:26:50 np0005532048 NetworkManager[48920]: <info>  [1763800010.4040] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51691 uid=0 result="success"
Nov 22 03:26:50 np0005532048 NetworkManager[48920]: <info>  [1763800010.4055] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51691 uid=0 result="success"
Nov 22 03:26:50 np0005532048 NetworkManager[48920]: <info>  [1763800010.6965] audit: op="networking-control" arg="global-dns-configuration" pid=51691 uid=0 result="success"
Nov 22 03:26:50 np0005532048 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 03:26:50 np0005532048 NetworkManager[48920]: <info>  [1763800010.7621] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 22 03:26:50 np0005532048 NetworkManager[48920]: <info>  [1763800010.7919] audit: op="networking-control" arg="global-dns-configuration" pid=51691 uid=0 result="success"
Nov 22 03:26:50 np0005532048 NetworkManager[48920]: <info>  [1763800010.7955] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51691 uid=0 result="success"
Nov 22 03:26:50 np0005532048 NetworkManager[48920]: <info>  [1763800010.9615] checkpoint[0x55eee30f8a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 22 03:26:50 np0005532048 NetworkManager[48920]: <info>  [1763800010.9620] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51691 uid=0 result="success"
Nov 22 03:26:51 np0005532048 ansible-async_wrapper.py[51689]: Module complete (51689)
Nov 22 03:26:51 np0005532048 ansible-async_wrapper.py[51688]: Done in kid B.
Nov 22 03:26:53 np0005532048 python3.9[52163]: ansible-ansible.legacy.async_status Invoked with jid=j125255252547.51685 mode=status _async_dir=/root/.ansible_async
Nov 22 03:26:54 np0005532048 python3.9[52263]: ansible-ansible.legacy.async_status Invoked with jid=j125255252547.51685 mode=cleanup _async_dir=/root/.ansible_async
Nov 22 03:26:54 np0005532048 python3.9[52415]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:26:55 np0005532048 python3.9[52538]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763800014.2637944-322-1561775233753/.source.returncode _original_basename=.a6yexr7x follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:26:56 np0005532048 python3.9[52690]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:26:56 np0005532048 python3.9[52813]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763800015.571294-338-17416663126914/.source.cfg _original_basename=.7ifxe_gb follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:26:57 np0005532048 python3.9[52966]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:26:57 np0005532048 systemd[1]: Reloading Network Manager...
Nov 22 03:26:57 np0005532048 NetworkManager[48920]: <info>  [1763800017.5615] audit: op="reload" arg="0" pid=52970 uid=0 result="success"
Nov 22 03:26:57 np0005532048 NetworkManager[48920]: <info>  [1763800017.5626] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 22 03:26:57 np0005532048 systemd[1]: Reloaded Network Manager.
Nov 22 03:26:58 np0005532048 systemd[1]: session-10.scope: Deactivated successfully.
Nov 22 03:26:58 np0005532048 systemd[1]: session-10.scope: Consumed 54.320s CPU time.
Nov 22 03:26:58 np0005532048 systemd-logind[822]: Session 10 logged out. Waiting for processes to exit.
Nov 22 03:26:58 np0005532048 systemd-logind[822]: Removed session 10.
Nov 22 03:27:03 np0005532048 systemd-logind[822]: New session 11 of user zuul.
Nov 22 03:27:03 np0005532048 systemd[1]: Started Session 11 of User zuul.
Nov 22 03:27:04 np0005532048 python3.9[53154]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:27:06 np0005532048 python3.9[53308]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:27:07 np0005532048 python3.9[53502]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:27:07 np0005532048 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 03:27:08 np0005532048 systemd[1]: session-11.scope: Deactivated successfully.
Nov 22 03:27:08 np0005532048 systemd[1]: session-11.scope: Consumed 2.538s CPU time.
Nov 22 03:27:08 np0005532048 systemd-logind[822]: Session 11 logged out. Waiting for processes to exit.
Nov 22 03:27:08 np0005532048 systemd-logind[822]: Removed session 11.
Nov 22 03:27:13 np0005532048 systemd-logind[822]: New session 12 of user zuul.
Nov 22 03:27:13 np0005532048 systemd[1]: Started Session 12 of User zuul.
Nov 22 03:27:14 np0005532048 python3.9[53686]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:27:15 np0005532048 python3.9[53840]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:27:16 np0005532048 python3.9[53996]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:27:17 np0005532048 python3.9[54081]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:27:19 np0005532048 python3.9[54235]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:27:20 np0005532048 python3.9[54430]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:27:21 np0005532048 python3.9[54582]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:27:21 np0005532048 systemd[1]: var-lib-containers-storage-overlay-compat2498618744-merged.mount: Deactivated successfully.
Nov 22 03:27:21 np0005532048 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1891487074-merged.mount: Deactivated successfully.
Nov 22 03:27:22 np0005532048 podman[54583]: 2025-11-22 08:27:22.128824962 +0000 UTC m=+0.505883778 system refresh
Nov 22 03:27:22 np0005532048 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:27:22 np0005532048 python3.9[54744]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:27:23 np0005532048 python3.9[54867]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800042.3201113-79-7052304443423/.source.json follow=False _original_basename=podman_network_config.j2 checksum=d19a0e0e9084d45f15dcc9a2959c824f2fac880e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:27:24 np0005532048 python3.9[55019]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:27:24 np0005532048 python3.9[55142]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763800043.8878055-94-139105083528228/.source.conf follow=False _original_basename=registries.conf.j2 checksum=939a29a8a8b094e4c4032b270410461f3246b298 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:27:25 np0005532048 python3.9[55294]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:27:26 np0005532048 python3.9[55446]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:27:27 np0005532048 python3.9[55598]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:27:27 np0005532048 python3.9[55750]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:27:28 np0005532048 python3.9[55902]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:27:31 np0005532048 python3.9[56055]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:27:32 np0005532048 python3.9[56209]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:27:32 np0005532048 python3.9[56361]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:27:33 np0005532048 python3.9[56513]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:27:34 np0005532048 python3.9[56666]: ansible-service_facts Invoked
Nov 22 03:27:34 np0005532048 network[56683]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 03:27:34 np0005532048 network[56684]: 'network-scripts' will be removed from distribution in near future.
Nov 22 03:27:34 np0005532048 network[56685]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 03:27:40 np0005532048 python3.9[57137]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:27:42 np0005532048 python3.9[57290]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 22 03:27:44 np0005532048 python3.9[57442]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:27:44 np0005532048 python3.9[57567]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763800063.5798442-238-146628906681488/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:27:45 np0005532048 python3.9[57721]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:27:46 np0005532048 python3.9[57846]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763800065.065789-253-164586008570874/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:27:47 np0005532048 python3.9[58000]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:27:48 np0005532048 python3.9[58154]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:27:49 np0005532048 python3.9[58238]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:27:50 np0005532048 python3.9[58392]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:27:51 np0005532048 python3.9[58476]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:27:51 np0005532048 chronyd[831]: chronyd exiting
Nov 22 03:27:51 np0005532048 systemd[1]: Stopping NTP client/server...
Nov 22 03:27:51 np0005532048 systemd[1]: chronyd.service: Deactivated successfully.
Nov 22 03:27:51 np0005532048 systemd[1]: Stopped NTP client/server.
Nov 22 03:27:51 np0005532048 systemd[1]: Starting NTP client/server...
Nov 22 03:27:51 np0005532048 chronyd[58484]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 22 03:27:51 np0005532048 chronyd[58484]: Frequency -23.887 +/- 0.092 ppm read from /var/lib/chrony/drift
Nov 22 03:27:51 np0005532048 chronyd[58484]: Loaded seccomp filter (level 2)
Nov 22 03:27:51 np0005532048 systemd[1]: Started NTP client/server.
Nov 22 03:27:52 np0005532048 systemd-logind[822]: Session 12 logged out. Waiting for processes to exit.
Nov 22 03:27:52 np0005532048 systemd[1]: session-12.scope: Deactivated successfully.
Nov 22 03:27:52 np0005532048 systemd[1]: session-12.scope: Consumed 28.537s CPU time.
Nov 22 03:27:52 np0005532048 systemd-logind[822]: Removed session 12.
Nov 22 03:27:57 np0005532048 systemd-logind[822]: New session 13 of user zuul.
Nov 22 03:27:57 np0005532048 systemd[1]: Started Session 13 of User zuul.
Nov 22 03:27:58 np0005532048 python3.9[58665]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:27:59 np0005532048 python3.9[58817]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:27:59 np0005532048 python3.9[58940]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763800078.4509995-34-82647415775760/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:00 np0005532048 systemd[1]: session-13.scope: Deactivated successfully.
Nov 22 03:28:00 np0005532048 systemd-logind[822]: Session 13 logged out. Waiting for processes to exit.
Nov 22 03:28:00 np0005532048 systemd[1]: session-13.scope: Consumed 1.763s CPU time.
Nov 22 03:28:00 np0005532048 systemd-logind[822]: Removed session 13.
Nov 22 03:28:05 np0005532048 systemd-logind[822]: New session 14 of user zuul.
Nov 22 03:28:05 np0005532048 systemd[1]: Started Session 14 of User zuul.
Nov 22 03:28:06 np0005532048 python3.9[59118]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:28:07 np0005532048 python3.9[59274]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:08 np0005532048 python3.9[59449]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:09 np0005532048 python3.9[59572]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1763800087.9817822-41-211715603273076/.source.json _original_basename=.v7y6i4i9 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:10 np0005532048 python3.9[59724]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:10 np0005532048 python3.9[59847]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763800089.792102-64-186378973435708/.source _original_basename=.my4ucciu follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:11 np0005532048 python3.9[59999]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:28:12 np0005532048 python3.9[60151]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:13 np0005532048 python3.9[60274]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763800091.9423254-88-141020356046908/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:28:13 np0005532048 python3.9[60426]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:14 np0005532048 python3.9[60549]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763800093.3038979-88-12903316522680/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:28:15 np0005532048 python3.9[60701]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:15 np0005532048 python3.9[60853]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:16 np0005532048 python3.9[60976]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800095.3798764-125-77693505797134/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:17 np0005532048 python3.9[61128]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:17 np0005532048 python3.9[61251]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800096.7533607-140-125354679289987/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:19 np0005532048 python3.9[61403]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:28:19 np0005532048 systemd[1]: Reloading.
Nov 22 03:28:19 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:28:19 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:28:19 np0005532048 systemd[1]: Reloading.
Nov 22 03:28:19 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:28:19 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:28:19 np0005532048 systemd[1]: Starting EDPM Container Shutdown...
Nov 22 03:28:19 np0005532048 systemd[1]: Finished EDPM Container Shutdown.
Nov 22 03:28:20 np0005532048 python3.9[61632]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:21 np0005532048 python3.9[61755]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800099.952339-163-166280442587471/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:21 np0005532048 python3.9[61907]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:22 np0005532048 python3.9[62030]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800101.241551-178-51521658574906/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:23 np0005532048 python3.9[62182]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:28:23 np0005532048 systemd[1]: Reloading.
Nov 22 03:28:23 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:28:23 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:28:23 np0005532048 systemd[1]: Reloading.
Nov 22 03:28:23 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:28:23 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:28:23 np0005532048 systemd[1]: Starting Create netns directory...
Nov 22 03:28:23 np0005532048 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 03:28:23 np0005532048 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 03:28:23 np0005532048 systemd[1]: Finished Create netns directory.
Nov 22 03:28:24 np0005532048 python3.9[62407]: ansible-ansible.builtin.service_facts Invoked
Nov 22 03:28:24 np0005532048 network[62424]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 03:28:24 np0005532048 network[62425]: 'network-scripts' will be removed from distribution in near future.
Nov 22 03:28:24 np0005532048 network[62426]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 03:28:28 np0005532048 python3.9[62688]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:28:28 np0005532048 systemd[1]: Reloading.
Nov 22 03:28:28 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:28:28 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:28:29 np0005532048 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 22 03:28:29 np0005532048 iptables.init[62728]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 22 03:28:29 np0005532048 iptables.init[62728]: iptables: Flushing firewall rules: [  OK  ]
Nov 22 03:28:29 np0005532048 systemd[1]: iptables.service: Deactivated successfully.
Nov 22 03:28:29 np0005532048 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 22 03:28:30 np0005532048 python3.9[62925]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:28:31 np0005532048 python3.9[63079]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:28:31 np0005532048 systemd[1]: Reloading.
Nov 22 03:28:31 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:28:31 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:28:31 np0005532048 systemd[1]: Starting Netfilter Tables...
Nov 22 03:28:31 np0005532048 systemd[1]: Finished Netfilter Tables.
Nov 22 03:28:32 np0005532048 python3.9[63271]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:28:33 np0005532048 python3.9[63424]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:34 np0005532048 python3.9[63549]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763800112.9830267-247-102337997950901/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:35 np0005532048 python3.9[63702]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:28:35 np0005532048 systemd[1]: Reloading OpenSSH server daemon...
Nov 22 03:28:35 np0005532048 systemd[1]: Reloaded OpenSSH server daemon.
Nov 22 03:28:35 np0005532048 python3.9[63858]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:36 np0005532048 python3.9[64010]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:37 np0005532048 python3.9[64133]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800116.0926628-278-83904841334412/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:38 np0005532048 python3.9[64285]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 22 03:28:38 np0005532048 systemd[1]: Starting Time & Date Service...
Nov 22 03:28:38 np0005532048 systemd[1]: Started Time & Date Service.
Nov 22 03:28:39 np0005532048 python3.9[64441]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:40 np0005532048 python3.9[64593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:40 np0005532048 python3.9[64716]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763800119.4123526-313-2125442883962/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:41 np0005532048 python3.9[64868]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:41 np0005532048 python3.9[64991]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763800120.8044589-328-200644557795792/.source.yaml _original_basename=.t5pplyzm follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:42 np0005532048 python3.9[65143]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:43 np0005532048 python3.9[65266]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800122.099728-343-115855056187302/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:44 np0005532048 python3.9[65418]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:28:44 np0005532048 python3.9[65571]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:28:45 np0005532048 python3[65724]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 03:28:46 np0005532048 python3.9[65876]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:47 np0005532048 python3.9[65999]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800125.8979123-382-163531579657162/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:47 np0005532048 python3.9[66151]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:48 np0005532048 python3.9[66274]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800127.2659755-397-154615728666363/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:49 np0005532048 python3.9[66426]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:49 np0005532048 python3.9[66549]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800128.5113776-412-136988278141899/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:50 np0005532048 python3.9[66701]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:50 np0005532048 python3.9[66824]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800129.766016-427-212927638197424/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:51 np0005532048 python3.9[66976]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:28:52 np0005532048 python3.9[67099]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800131.0879984-442-107725719617102/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:53 np0005532048 python3.9[67251]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:53 np0005532048 python3.9[67403]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:28:54 np0005532048 python3.9[67562]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:55 np0005532048 python3.9[67715]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:56 np0005532048 python3.9[67867]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:57 np0005532048 python3.9[68019]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 22 03:28:57 np0005532048 python3.9[68172]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 22 03:28:58 np0005532048 systemd[1]: session-14.scope: Deactivated successfully.
Nov 22 03:28:58 np0005532048 systemd[1]: session-14.scope: Consumed 39.382s CPU time.
Nov 22 03:28:58 np0005532048 systemd-logind[822]: Session 14 logged out. Waiting for processes to exit.
Nov 22 03:28:58 np0005532048 systemd-logind[822]: Removed session 14.
Nov 22 03:29:02 np0005532048 systemd-logind[822]: New session 15 of user zuul.
Nov 22 03:29:02 np0005532048 systemd[1]: Started Session 15 of User zuul.
Nov 22 03:29:03 np0005532048 python3.9[68353]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 22 03:29:04 np0005532048 python3.9[68505]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:29:05 np0005532048 python3.9[68657]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:29:06 np0005532048 python3.9[68809]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClQbCrfYHYcZVrX6ClZPfju7WP41Laza3GwI9YrowKjBNGdI0nR82stmvdgQuiHajPLJ22WCr0F+kB1JrEL7C/e3dwMW71KNOV4t0/n8wi6dh7A0MAYMRWmAS4iTccodPhuAHSsL3Y2WJQ0gQWcs+D47d4S6UgUfY8McyIrSyku1RMrZvqD+Horky+VXJyMnsc6m32MTL8hw/XFttt8bUMFyhPzl8RPK57aM4xkoHnKhqMFgEdWDgJ/2bhleaNBLFcDwcYSBCIj0uOO1qOI9eWVZLBuU9MlaHzLpx44iPJwbG0fG/yd6h27j2o8Kd/RSp5wOPd86SbmEnv4yU4zFiF1jykKvvivEg0EFLYkokwg/5lFJAuf+pP/d7+yBlm5V+6NYATTJfKsY5cnPMzxllm+aANyAcNsBjnGMyWg9Ax0f+9bLnKdSWORGi7kuC4h8ELbfFKcpjWPGgHB85DCeVhTsWPVcO9FXThAzAWAixeeY0ZMKa50h9OmUwINRVu3yc=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA2hQQQ43NShzuaEL4Cp+20/r6q8pEcftymfrK4aUcAg#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIe46hlHz1zy6uhPDRcH4wsrfH7UfRZvDfoinfWFeiDxCLhKrGTSMkoLOX8bmMBaO1LfWPgU6AdevI9F65iL0cc=#012 create=True mode=0644 path=/tmp/ansible.c_p01uef state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:29:07 np0005532048 python3.9[68961]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.c_p01uef' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:29:08 np0005532048 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 22 03:29:08 np0005532048 python3.9[69115]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.c_p01uef state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:29:09 np0005532048 systemd[1]: session-15.scope: Deactivated successfully.
Nov 22 03:29:09 np0005532048 systemd[1]: session-15.scope: Consumed 3.669s CPU time.
Nov 22 03:29:09 np0005532048 systemd-logind[822]: Session 15 logged out. Waiting for processes to exit.
Nov 22 03:29:09 np0005532048 systemd-logind[822]: Removed session 15.
Nov 22 03:29:14 np0005532048 systemd-logind[822]: New session 16 of user zuul.
Nov 22 03:29:14 np0005532048 systemd[1]: Started Session 16 of User zuul.
Nov 22 03:29:15 np0005532048 python3.9[69295]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:29:16 np0005532048 python3.9[69451]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 22 03:29:17 np0005532048 python3.9[69605]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:29:18 np0005532048 python3.9[69758]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:29:19 np0005532048 python3.9[69911]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:29:20 np0005532048 python3.9[70065]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:29:20 np0005532048 python3.9[70220]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:29:21 np0005532048 systemd[1]: session-16.scope: Deactivated successfully.
Nov 22 03:29:21 np0005532048 systemd[1]: session-16.scope: Consumed 4.752s CPU time.
Nov 22 03:29:21 np0005532048 systemd-logind[822]: Session 16 logged out. Waiting for processes to exit.
Nov 22 03:29:21 np0005532048 systemd-logind[822]: Removed session 16.
Nov 22 03:29:26 np0005532048 systemd-logind[822]: New session 17 of user zuul.
Nov 22 03:29:26 np0005532048 systemd[1]: Started Session 17 of User zuul.
Nov 22 03:29:27 np0005532048 python3.9[70398]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:29:28 np0005532048 python3.9[70554]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:29:29 np0005532048 python3.9[70638]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 03:29:32 np0005532048 python3.9[70789]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:29:33 np0005532048 python3.9[70940]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 03:29:34 np0005532048 python3.9[71090]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:29:34 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:29:35 np0005532048 python3.9[71241]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:29:35 np0005532048 systemd[1]: session-17.scope: Deactivated successfully.
Nov 22 03:29:35 np0005532048 systemd[1]: session-17.scope: Consumed 6.352s CPU time.
Nov 22 03:29:35 np0005532048 systemd-logind[822]: Session 17 logged out. Waiting for processes to exit.
Nov 22 03:29:35 np0005532048 systemd-logind[822]: Removed session 17.
Nov 22 03:29:43 np0005532048 systemd-logind[822]: New session 18 of user zuul.
Nov 22 03:29:43 np0005532048 systemd[1]: Started Session 18 of User zuul.
Nov 22 03:29:50 np0005532048 python3[72007]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:29:51 np0005532048 python3[72102]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 03:29:53 np0005532048 python3[72129]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 03:29:54 np0005532048 python3[72155]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:29:54 np0005532048 kernel: loop: module loaded
Nov 22 03:29:54 np0005532048 kernel: loop3: detected capacity change from 0 to 41943040
Nov 22 03:29:54 np0005532048 python3[72190]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:29:54 np0005532048 lvm[72193]: PV /dev/loop3 not used.
Nov 22 03:29:54 np0005532048 lvm[72195]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 03:29:54 np0005532048 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 22 03:29:54 np0005532048 lvm[72201]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 22 03:29:54 np0005532048 lvm[72205]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 03:29:54 np0005532048 lvm[72205]: VG ceph_vg0 finished
Nov 22 03:29:54 np0005532048 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 22 03:29:55 np0005532048 python3[72283]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:29:57 np0005532048 python3[72356]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763800195.0847638-36197-228692081464803/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:29:57 np0005532048 python3[72406]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:29:58 np0005532048 systemd[1]: Reloading.
Nov 22 03:29:58 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:29:58 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:29:58 np0005532048 systemd[1]: Starting Ceph OSD losetup...
Nov 22 03:29:58 np0005532048 bash[72446]: /dev/loop3: [64513]:4328014 (/var/lib/ceph-osd-0.img)
Nov 22 03:29:58 np0005532048 systemd[1]: Finished Ceph OSD losetup.
Nov 22 03:29:58 np0005532048 lvm[72448]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 03:29:58 np0005532048 lvm[72448]: VG ceph_vg0 finished
Nov 22 03:29:58 np0005532048 python3[72474]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 03:30:01 np0005532048 python3[72501]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 03:30:01 np0005532048 python3[72527]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:30:01 np0005532048 kernel: loop4: detected capacity change from 0 to 41943040
Nov 22 03:30:01 np0005532048 python3[72559]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:30:02 np0005532048 chronyd[58484]: Selected source 167.160.187.179 (pool.ntp.org)
Nov 22 03:30:02 np0005532048 lvm[72562]: PV /dev/loop4 not used.
Nov 22 03:30:03 np0005532048 lvm[72564]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 03:30:03 np0005532048 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 22 03:30:03 np0005532048 lvm[72566]:  0 logical volume(s) in volume group "ceph_vg1" now active
Nov 22 03:30:03 np0005532048 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 22 03:30:03 np0005532048 lvm[72574]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 03:30:03 np0005532048 lvm[72574]: VG ceph_vg1 finished
Nov 22 03:30:03 np0005532048 python3[72652]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:30:04 np0005532048 python3[72725]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763800203.422457-36224-95923395526152/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:30:04 np0005532048 python3[72775]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:30:04 np0005532048 systemd[1]: Reloading.
Nov 22 03:30:04 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:30:04 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:30:04 np0005532048 systemd[1]: Starting Ceph OSD losetup...
Nov 22 03:30:04 np0005532048 bash[72816]: /dev/loop4: [64513]:4328051 (/var/lib/ceph-osd-1.img)
Nov 22 03:30:04 np0005532048 systemd[1]: Finished Ceph OSD losetup.
Nov 22 03:30:05 np0005532048 lvm[72818]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 03:30:05 np0005532048 lvm[72818]: VG ceph_vg1 finished
Nov 22 03:30:05 np0005532048 python3[72844]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 03:30:07 np0005532048 python3[72871]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 03:30:07 np0005532048 python3[72897]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:30:07 np0005532048 kernel: loop5: detected capacity change from 0 to 41943040
Nov 22 03:30:08 np0005532048 python3[72928]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:30:08 np0005532048 lvm[72932]: PV /dev/loop5 not used.
Nov 22 03:30:08 np0005532048 lvm[72934]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 03:30:08 np0005532048 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 22 03:30:08 np0005532048 lvm[72937]:  1 logical volume(s) in volume group "ceph_vg2" now active
Nov 22 03:30:08 np0005532048 lvm[72945]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 03:30:09 np0005532048 lvm[72945]: VG ceph_vg2 finished
Nov 22 03:30:09 np0005532048 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 22 03:30:09 np0005532048 python3[73023]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:30:09 np0005532048 python3[73096]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763800209.1986616-36251-104481772364430/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:30:10 np0005532048 python3[73146]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:30:10 np0005532048 systemd[1]: Reloading.
Nov 22 03:30:10 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:30:10 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:30:10 np0005532048 systemd[1]: Starting Ceph OSD losetup...
Nov 22 03:30:10 np0005532048 bash[73185]: /dev/loop5: [64513]:4328057 (/var/lib/ceph-osd-2.img)
Nov 22 03:30:10 np0005532048 systemd[1]: Finished Ceph OSD losetup.
Nov 22 03:30:10 np0005532048 lvm[73186]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 03:30:10 np0005532048 lvm[73186]: VG ceph_vg2 finished
Nov 22 03:30:13 np0005532048 python3[73210]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:30:15 np0005532048 python3[73303]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 03:30:17 np0005532048 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 03:30:17 np0005532048 systemd[1]: Starting man-db-cache-update.service...
Nov 22 03:30:18 np0005532048 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:30:18 np0005532048 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:30:18 np0005532048 python3[73413]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 03:30:18 np0005532048 systemd[1]: run-r91d52acb5d1b4b9f9de470ce8a8f3383.service: Deactivated successfully.
Nov 22 03:30:18 np0005532048 python3[73442]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:30:18 np0005532048 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:30:18 np0005532048 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:30:19 np0005532048 python3[73504]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:30:19 np0005532048 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:30:19 np0005532048 python3[73530]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:30:20 np0005532048 python3[73608]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:30:21 np0005532048 python3[73681]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763800220.350092-36398-277231376605355/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:30:22 np0005532048 python3[73783]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:30:22 np0005532048 python3[73856]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763800221.6890242-36416-117696264760251/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:30:23 np0005532048 python3[73906]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 03:30:23 np0005532048 python3[73934]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 03:30:24 np0005532048 python3[73962]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 03:30:24 np0005532048 python3[73990]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:30:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:30:24 np0005532048 systemd[1]: Created slice User Slice of UID 42477.
Nov 22 03:30:24 np0005532048 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 22 03:30:24 np0005532048 systemd-logind[822]: New session 19 of user ceph-admin.
Nov 22 03:30:24 np0005532048 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 22 03:30:24 np0005532048 systemd[1]: Starting User Manager for UID 42477...
Nov 22 03:30:24 np0005532048 systemd[74009]: Queued start job for default target Main User Target.
Nov 22 03:30:24 np0005532048 systemd[74009]: Created slice User Application Slice.
Nov 22 03:30:24 np0005532048 systemd[74009]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 22 03:30:24 np0005532048 systemd[74009]: Started Daily Cleanup of User's Temporary Directories.
Nov 22 03:30:24 np0005532048 systemd[74009]: Reached target Paths.
Nov 22 03:30:24 np0005532048 systemd[74009]: Reached target Timers.
Nov 22 03:30:24 np0005532048 systemd[74009]: Starting D-Bus User Message Bus Socket...
Nov 22 03:30:24 np0005532048 systemd[74009]: Starting Create User's Volatile Files and Directories...
Nov 22 03:30:24 np0005532048 systemd[74009]: Listening on D-Bus User Message Bus Socket.
Nov 22 03:30:24 np0005532048 systemd[74009]: Reached target Sockets.
Nov 22 03:30:24 np0005532048 systemd[74009]: Finished Create User's Volatile Files and Directories.
Nov 22 03:30:24 np0005532048 systemd[74009]: Reached target Basic System.
Nov 22 03:30:24 np0005532048 systemd[74009]: Reached target Main User Target.
Nov 22 03:30:24 np0005532048 systemd[74009]: Startup finished in 135ms.
Nov 22 03:30:24 np0005532048 systemd[1]: Started User Manager for UID 42477.
Nov 22 03:30:24 np0005532048 systemd[1]: Started Session 19 of User ceph-admin.
Nov 22 03:30:25 np0005532048 systemd[1]: session-19.scope: Deactivated successfully.
Nov 22 03:30:25 np0005532048 systemd-logind[822]: Session 19 logged out. Waiting for processes to exit.
Nov 22 03:30:25 np0005532048 systemd-logind[822]: Removed session 19.
Nov 22 03:30:30 np0005532048 systemd[1]: var-lib-containers-storage-overlay-compat2453528918-lower\x2dmapped.mount: Deactivated successfully.
Nov 22 03:30:35 np0005532048 systemd[1]: Stopping User Manager for UID 42477...
Nov 22 03:30:35 np0005532048 systemd[74009]: Activating special unit Exit the Session...
Nov 22 03:30:35 np0005532048 systemd[74009]: Stopped target Main User Target.
Nov 22 03:30:35 np0005532048 systemd[74009]: Stopped target Basic System.
Nov 22 03:30:35 np0005532048 systemd[74009]: Stopped target Paths.
Nov 22 03:30:35 np0005532048 systemd[74009]: Stopped target Sockets.
Nov 22 03:30:35 np0005532048 systemd[74009]: Stopped target Timers.
Nov 22 03:30:35 np0005532048 systemd[74009]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 22 03:30:35 np0005532048 systemd[74009]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 22 03:30:35 np0005532048 systemd[74009]: Closed D-Bus User Message Bus Socket.
Nov 22 03:30:35 np0005532048 systemd[74009]: Stopped Create User's Volatile Files and Directories.
Nov 22 03:30:35 np0005532048 systemd[74009]: Removed slice User Application Slice.
Nov 22 03:30:35 np0005532048 systemd[74009]: Reached target Shutdown.
Nov 22 03:30:35 np0005532048 systemd[74009]: Finished Exit the Session.
Nov 22 03:30:35 np0005532048 systemd[74009]: Reached target Exit the Session.
Nov 22 03:30:35 np0005532048 systemd[1]: user@42477.service: Deactivated successfully.
Nov 22 03:30:35 np0005532048 systemd[1]: Stopped User Manager for UID 42477.
Nov 22 03:30:35 np0005532048 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 22 03:30:35 np0005532048 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 22 03:30:35 np0005532048 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 22 03:30:35 np0005532048 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 22 03:30:35 np0005532048 systemd[1]: Removed slice User Slice of UID 42477.
Nov 22 03:30:48 np0005532048 podman[74062]: 2025-11-22 08:30:48.104821475 +0000 UTC m=+22.949136482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:30:48 np0005532048 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:30:48 np0005532048 podman[74131]: 2025-11-22 08:30:48.159665935 +0000 UTC m=+0.026722429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:30:48 np0005532048 podman[74131]: 2025-11-22 08:30:48.2603015 +0000 UTC m=+0.127357964 container create b2b3f25a6aff013cb65483255e7a57c4616878889e3b1134b088f483900c4588 (image=quay.io/ceph/ceph:v18, name=exciting_euler, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:30:48 np0005532048 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 22 03:30:48 np0005532048 systemd[1]: Started libpod-conmon-b2b3f25a6aff013cb65483255e7a57c4616878889e3b1134b088f483900c4588.scope.
Nov 22 03:30:48 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:30:48 np0005532048 podman[74131]: 2025-11-22 08:30:48.461726216 +0000 UTC m=+0.328782710 container init b2b3f25a6aff013cb65483255e7a57c4616878889e3b1134b088f483900c4588 (image=quay.io/ceph/ceph:v18, name=exciting_euler, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:30:48 np0005532048 podman[74131]: 2025-11-22 08:30:48.473120276 +0000 UTC m=+0.340176740 container start b2b3f25a6aff013cb65483255e7a57c4616878889e3b1134b088f483900c4588 (image=quay.io/ceph/ceph:v18, name=exciting_euler, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:30:48 np0005532048 podman[74131]: 2025-11-22 08:30:48.492178245 +0000 UTC m=+0.359234749 container attach b2b3f25a6aff013cb65483255e7a57c4616878889e3b1134b088f483900c4588 (image=quay.io/ceph/ceph:v18, name=exciting_euler, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:30:48 np0005532048 exciting_euler[74148]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 22 03:30:48 np0005532048 systemd[1]: libpod-b2b3f25a6aff013cb65483255e7a57c4616878889e3b1134b088f483900c4588.scope: Deactivated successfully.
Nov 22 03:30:48 np0005532048 podman[74131]: 2025-11-22 08:30:48.820682686 +0000 UTC m=+0.687739200 container died b2b3f25a6aff013cb65483255e7a57c4616878889e3b1134b088f483900c4588 (image=quay.io/ceph/ceph:v18, name=exciting_euler, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:30:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f4d17fde82b6761eaf8c628f39831b08a0ff25323ea8b4097f74eef95455d6a7-merged.mount: Deactivated successfully.
Nov 22 03:30:49 np0005532048 podman[74131]: 2025-11-22 08:30:49.411764347 +0000 UTC m=+1.278820821 container remove b2b3f25a6aff013cb65483255e7a57c4616878889e3b1134b088f483900c4588 (image=quay.io/ceph/ceph:v18, name=exciting_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:30:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:30:49 np0005532048 podman[74163]: 2025-11-22 08:30:49.453261058 +0000 UTC m=+0.020774792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:30:49 np0005532048 podman[74163]: 2025-11-22 08:30:49.617363035 +0000 UTC m=+0.184876779 container create 3132143ef88067093ef0e3461ece0e2a3af0a4a70648fd401df54d9195722563 (image=quay.io/ceph/ceph:v18, name=suspicious_noyce, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:30:49 np0005532048 systemd[1]: Started libpod-conmon-3132143ef88067093ef0e3461ece0e2a3af0a4a70648fd401df54d9195722563.scope.
Nov 22 03:30:49 np0005532048 systemd[1]: libpod-conmon-b2b3f25a6aff013cb65483255e7a57c4616878889e3b1134b088f483900c4588.scope: Deactivated successfully.
Nov 22 03:30:49 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:30:49 np0005532048 podman[74163]: 2025-11-22 08:30:49.865992692 +0000 UTC m=+0.433506496 container init 3132143ef88067093ef0e3461ece0e2a3af0a4a70648fd401df54d9195722563 (image=quay.io/ceph/ceph:v18, name=suspicious_noyce, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 03:30:49 np0005532048 podman[74163]: 2025-11-22 08:30:49.873718841 +0000 UTC m=+0.441232635 container start 3132143ef88067093ef0e3461ece0e2a3af0a4a70648fd401df54d9195722563 (image=quay.io/ceph/ceph:v18, name=suspicious_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:30:49 np0005532048 suspicious_noyce[74180]: 167 167
Nov 22 03:30:49 np0005532048 systemd[1]: libpod-3132143ef88067093ef0e3461ece0e2a3af0a4a70648fd401df54d9195722563.scope: Deactivated successfully.
Nov 22 03:30:49 np0005532048 podman[74163]: 2025-11-22 08:30:49.982942838 +0000 UTC m=+0.550456642 container attach 3132143ef88067093ef0e3461ece0e2a3af0a4a70648fd401df54d9195722563 (image=quay.io/ceph/ceph:v18, name=suspicious_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:30:49 np0005532048 podman[74163]: 2025-11-22 08:30:49.984563489 +0000 UTC m=+0.552077273 container died 3132143ef88067093ef0e3461ece0e2a3af0a4a70648fd401df54d9195722563 (image=quay.io/ceph/ceph:v18, name=suspicious_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 03:30:50 np0005532048 systemd[1]: var-lib-containers-storage-overlay-11b5a42e51022cdeb343a22644e33fe6ac1aa7d32779b8c10348188081f335bd-merged.mount: Deactivated successfully.
Nov 22 03:30:50 np0005532048 podman[74163]: 2025-11-22 08:30:50.339447498 +0000 UTC m=+0.906961242 container remove 3132143ef88067093ef0e3461ece0e2a3af0a4a70648fd401df54d9195722563 (image=quay.io/ceph/ceph:v18, name=suspicious_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:30:50 np0005532048 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:30:50 np0005532048 systemd[1]: libpod-conmon-3132143ef88067093ef0e3461ece0e2a3af0a4a70648fd401df54d9195722563.scope: Deactivated successfully.
Nov 22 03:30:50 np0005532048 podman[74199]: 2025-11-22 08:30:50.383664397 +0000 UTC m=+0.023122741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:30:50 np0005532048 podman[74199]: 2025-11-22 08:30:50.484071056 +0000 UTC m=+0.123529350 container create e360c05ae8b6e7e48563e0ccf070cf194d7cd72ff4a400ab2f17ebaff03c574f (image=quay.io/ceph/ceph:v18, name=recursing_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:30:50 np0005532048 systemd[1]: Started libpod-conmon-e360c05ae8b6e7e48563e0ccf070cf194d7cd72ff4a400ab2f17ebaff03c574f.scope.
Nov 22 03:30:50 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:30:50 np0005532048 podman[74199]: 2025-11-22 08:30:50.684197469 +0000 UTC m=+0.323655823 container init e360c05ae8b6e7e48563e0ccf070cf194d7cd72ff4a400ab2f17ebaff03c574f (image=quay.io/ceph/ceph:v18, name=recursing_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:30:50 np0005532048 podman[74199]: 2025-11-22 08:30:50.69111638 +0000 UTC m=+0.330574674 container start e360c05ae8b6e7e48563e0ccf070cf194d7cd72ff4a400ab2f17ebaff03c574f (image=quay.io/ceph/ceph:v18, name=recursing_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:30:50 np0005532048 recursing_jepsen[74215]: AQC6dCFpCp6CKxAASRdA/SQBiqg02qWDcmRy3A==
Nov 22 03:30:50 np0005532048 systemd[1]: libpod-e360c05ae8b6e7e48563e0ccf070cf194d7cd72ff4a400ab2f17ebaff03c574f.scope: Deactivated successfully.
Nov 22 03:30:50 np0005532048 podman[74199]: 2025-11-22 08:30:50.738571747 +0000 UTC m=+0.378030081 container attach e360c05ae8b6e7e48563e0ccf070cf194d7cd72ff4a400ab2f17ebaff03c574f (image=quay.io/ceph/ceph:v18, name=recursing_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:30:50 np0005532048 podman[74199]: 2025-11-22 08:30:50.73911009 +0000 UTC m=+0.378568444 container died e360c05ae8b6e7e48563e0ccf070cf194d7cd72ff4a400ab2f17ebaff03c574f (image=quay.io/ceph/ceph:v18, name=recursing_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:30:50 np0005532048 systemd[1]: var-lib-containers-storage-overlay-049f2d2c15754d4db2874f62ebd7a32b44c46a4694664807ae7f2ae3e805c621-merged.mount: Deactivated successfully.
Nov 22 03:30:51 np0005532048 podman[74199]: 2025-11-22 08:30:51.065516381 +0000 UTC m=+0.704974695 container remove e360c05ae8b6e7e48563e0ccf070cf194d7cd72ff4a400ab2f17ebaff03c574f (image=quay.io/ceph/ceph:v18, name=recursing_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:30:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:30:51 np0005532048 systemd[1]: libpod-conmon-e360c05ae8b6e7e48563e0ccf070cf194d7cd72ff4a400ab2f17ebaff03c574f.scope: Deactivated successfully.
Nov 22 03:30:51 np0005532048 podman[74234]: 2025-11-22 08:30:51.165169253 +0000 UTC m=+0.081256761 container create 5c833cde159f77888eb36c77ada86df1a699dfb78d895f81eb7eb63a35742cfd (image=quay.io/ceph/ceph:v18, name=stoic_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:30:51 np0005532048 podman[74234]: 2025-11-22 08:30:51.112758042 +0000 UTC m=+0.028845620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:30:51 np0005532048 systemd[1]: Started libpod-conmon-5c833cde159f77888eb36c77ada86df1a699dfb78d895f81eb7eb63a35742cfd.scope.
Nov 22 03:30:51 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:30:51 np0005532048 podman[74234]: 2025-11-22 08:30:51.34964969 +0000 UTC m=+0.265737218 container init 5c833cde159f77888eb36c77ada86df1a699dfb78d895f81eb7eb63a35742cfd (image=quay.io/ceph/ceph:v18, name=stoic_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 03:30:51 np0005532048 podman[74234]: 2025-11-22 08:30:51.357017372 +0000 UTC m=+0.273104870 container start 5c833cde159f77888eb36c77ada86df1a699dfb78d895f81eb7eb63a35742cfd (image=quay.io/ceph/ceph:v18, name=stoic_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:30:51 np0005532048 podman[74234]: 2025-11-22 08:30:51.388790423 +0000 UTC m=+0.304877951 container attach 5c833cde159f77888eb36c77ada86df1a699dfb78d895f81eb7eb63a35742cfd (image=quay.io/ceph/ceph:v18, name=stoic_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:30:51 np0005532048 stoic_wing[74250]: AQC7dCFpiV/xFxAAqxE0Xkg5doKW9kPoryR5jA==
Nov 22 03:30:51 np0005532048 systemd[1]: libpod-5c833cde159f77888eb36c77ada86df1a699dfb78d895f81eb7eb63a35742cfd.scope: Deactivated successfully.
Nov 22 03:30:51 np0005532048 podman[74234]: 2025-11-22 08:30:51.408266522 +0000 UTC m=+0.324354020 container died 5c833cde159f77888eb36c77ada86df1a699dfb78d895f81eb7eb63a35742cfd (image=quay.io/ceph/ceph:v18, name=stoic_wing, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:30:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d674d999ce9a31df30d325099c0e18f3d790cd492ae814a65989dcb6bf16ac36-merged.mount: Deactivated successfully.
Nov 22 03:30:51 np0005532048 podman[74234]: 2025-11-22 08:30:51.802190855 +0000 UTC m=+0.718278393 container remove 5c833cde159f77888eb36c77ada86df1a699dfb78d895f81eb7eb63a35742cfd (image=quay.io/ceph/ceph:v18, name=stoic_wing, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:30:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:30:51 np0005532048 systemd[1]: libpod-conmon-5c833cde159f77888eb36c77ada86df1a699dfb78d895f81eb7eb63a35742cfd.scope: Deactivated successfully.
Nov 22 03:30:51 np0005532048 podman[74270]: 2025-11-22 08:30:51.867349707 +0000 UTC m=+0.038095488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:30:51 np0005532048 podman[74270]: 2025-11-22 08:30:51.967028309 +0000 UTC m=+0.137774070 container create 79e4cff539129a31ee209cb9b426a4661a2c423a730555c798f2f94051926081 (image=quay.io/ceph/ceph:v18, name=practical_wilbur, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:30:52 np0005532048 systemd[1]: Started libpod-conmon-79e4cff539129a31ee209cb9b426a4661a2c423a730555c798f2f94051926081.scope.
Nov 22 03:30:52 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:30:52 np0005532048 podman[74270]: 2025-11-22 08:30:52.116147978 +0000 UTC m=+0.286893799 container init 79e4cff539129a31ee209cb9b426a4661a2c423a730555c798f2f94051926081 (image=quay.io/ceph/ceph:v18, name=practical_wilbur, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:30:52 np0005532048 podman[74270]: 2025-11-22 08:30:52.123941169 +0000 UTC m=+0.294686930 container start 79e4cff539129a31ee209cb9b426a4661a2c423a730555c798f2f94051926081 (image=quay.io/ceph/ceph:v18, name=practical_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:30:52 np0005532048 podman[74270]: 2025-11-22 08:30:52.129388483 +0000 UTC m=+0.300134334 container attach 79e4cff539129a31ee209cb9b426a4661a2c423a730555c798f2f94051926081 (image=quay.io/ceph/ceph:v18, name=practical_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:30:52 np0005532048 practical_wilbur[74288]: AQC8dCFpaP3eCBAABwcifD+d8WUG/JNktPJIQg==
Nov 22 03:30:52 np0005532048 systemd[1]: libpod-79e4cff539129a31ee209cb9b426a4661a2c423a730555c798f2f94051926081.scope: Deactivated successfully.
Nov 22 03:30:52 np0005532048 podman[74295]: 2025-11-22 08:30:52.20240399 +0000 UTC m=+0.030732927 container died 79e4cff539129a31ee209cb9b426a4661a2c423a730555c798f2f94051926081 (image=quay.io/ceph/ceph:v18, name=practical_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:30:52 np0005532048 podman[74295]: 2025-11-22 08:30:52.410710994 +0000 UTC m=+0.239039671 container remove 79e4cff539129a31ee209cb9b426a4661a2c423a730555c798f2f94051926081 (image=quay.io/ceph/ceph:v18, name=practical_wilbur, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:30:52 np0005532048 systemd[1]: libpod-conmon-79e4cff539129a31ee209cb9b426a4661a2c423a730555c798f2f94051926081.scope: Deactivated successfully.
Nov 22 03:30:52 np0005532048 podman[74311]: 2025-11-22 08:30:52.488301434 +0000 UTC m=+0.045040539 container create d11c43f478b5597798ca9bb9b968d515ef8857395d686827f93ef714cf4d133f (image=quay.io/ceph/ceph:v18, name=adoring_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:30:52 np0005532048 systemd[1]: Started libpod-conmon-d11c43f478b5597798ca9bb9b968d515ef8857395d686827f93ef714cf4d133f.scope.
Nov 22 03:30:52 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:30:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c7305b34bd4fb2881e6d87ca0d2df724b0c24c2b2b2bd1ef115ddf9fd868918/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:52 np0005532048 podman[74311]: 2025-11-22 08:30:52.46784774 +0000 UTC m=+0.024586865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:30:52 np0005532048 podman[74311]: 2025-11-22 08:30:52.568839514 +0000 UTC m=+0.125578639 container init d11c43f478b5597798ca9bb9b968d515ef8857395d686827f93ef714cf4d133f (image=quay.io/ceph/ceph:v18, name=adoring_pascal, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:30:52 np0005532048 podman[74311]: 2025-11-22 08:30:52.574246568 +0000 UTC m=+0.130985673 container start d11c43f478b5597798ca9bb9b968d515ef8857395d686827f93ef714cf4d133f (image=quay.io/ceph/ceph:v18, name=adoring_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:30:52 np0005532048 podman[74311]: 2025-11-22 08:30:52.579098058 +0000 UTC m=+0.135837183 container attach d11c43f478b5597798ca9bb9b968d515ef8857395d686827f93ef714cf4d133f (image=quay.io/ceph/ceph:v18, name=adoring_pascal, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:30:52 np0005532048 adoring_pascal[74327]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 22 03:30:52 np0005532048 adoring_pascal[74327]: setting min_mon_release = pacific
Nov 22 03:30:52 np0005532048 adoring_pascal[74327]: /usr/bin/monmaptool: set fsid to 34829716-a12c-57a6-8915-c1aa615c9d8a
Nov 22 03:30:52 np0005532048 adoring_pascal[74327]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 22 03:30:52 np0005532048 systemd[1]: libpod-d11c43f478b5597798ca9bb9b968d515ef8857395d686827f93ef714cf4d133f.scope: Deactivated successfully.
Nov 22 03:30:52 np0005532048 podman[74311]: 2025-11-22 08:30:52.606618084 +0000 UTC m=+0.163357199 container died d11c43f478b5597798ca9bb9b968d515ef8857395d686827f93ef714cf4d133f (image=quay.io/ceph/ceph:v18, name=adoring_pascal, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:30:52 np0005532048 systemd[1]: var-lib-containers-storage-overlay-3c7305b34bd4fb2881e6d87ca0d2df724b0c24c2b2b2bd1ef115ddf9fd868918-merged.mount: Deactivated successfully.
Nov 22 03:30:52 np0005532048 podman[74311]: 2025-11-22 08:30:52.655512177 +0000 UTC m=+0.212251282 container remove d11c43f478b5597798ca9bb9b968d515ef8857395d686827f93ef714cf4d133f (image=quay.io/ceph/ceph:v18, name=adoring_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:30:52 np0005532048 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:30:52 np0005532048 systemd[1]: libpod-conmon-d11c43f478b5597798ca9bb9b968d515ef8857395d686827f93ef714cf4d133f.scope: Deactivated successfully.
Nov 22 03:30:52 np0005532048 podman[74348]: 2025-11-22 08:30:52.72676084 +0000 UTC m=+0.049795097 container create 45581fd93a96d7c5a9c6f13302793b631611ea47cb86be1092ea505836a78c19 (image=quay.io/ceph/ceph:v18, name=gracious_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:30:52 np0005532048 systemd[1]: Started libpod-conmon-45581fd93a96d7c5a9c6f13302793b631611ea47cb86be1092ea505836a78c19.scope.
Nov 22 03:30:52 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:30:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30711a5211f8ebdefacffdb5eafde172f084e7e6d635eda7d2c3791bfdbd2eaa/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30711a5211f8ebdefacffdb5eafde172f084e7e6d635eda7d2c3791bfdbd2eaa/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30711a5211f8ebdefacffdb5eafde172f084e7e6d635eda7d2c3791bfdbd2eaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30711a5211f8ebdefacffdb5eafde172f084e7e6d635eda7d2c3791bfdbd2eaa/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:52 np0005532048 podman[74348]: 2025-11-22 08:30:52.703023386 +0000 UTC m=+0.026057453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:30:52 np0005532048 podman[74348]: 2025-11-22 08:30:52.813450442 +0000 UTC m=+0.136484529 container init 45581fd93a96d7c5a9c6f13302793b631611ea47cb86be1092ea505836a78c19 (image=quay.io/ceph/ceph:v18, name=gracious_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:30:52 np0005532048 podman[74348]: 2025-11-22 08:30:52.821109421 +0000 UTC m=+0.144143468 container start 45581fd93a96d7c5a9c6f13302793b631611ea47cb86be1092ea505836a78c19 (image=quay.io/ceph/ceph:v18, name=gracious_maxwell, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 03:30:52 np0005532048 podman[74348]: 2025-11-22 08:30:52.825306984 +0000 UTC m=+0.148341041 container attach 45581fd93a96d7c5a9c6f13302793b631611ea47cb86be1092ea505836a78c19 (image=quay.io/ceph/ceph:v18, name=gracious_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:30:52 np0005532048 systemd[1]: libpod-45581fd93a96d7c5a9c6f13302793b631611ea47cb86be1092ea505836a78c19.scope: Deactivated successfully.
Nov 22 03:30:52 np0005532048 podman[74348]: 2025-11-22 08:30:52.946060125 +0000 UTC m=+0.269094182 container died 45581fd93a96d7c5a9c6f13302793b631611ea47cb86be1092ea505836a78c19 (image=quay.io/ceph/ceph:v18, name=gracious_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 03:30:52 np0005532048 podman[74348]: 2025-11-22 08:30:52.989717049 +0000 UTC m=+0.312751096 container remove 45581fd93a96d7c5a9c6f13302793b631611ea47cb86be1092ea505836a78c19 (image=quay.io/ceph/ceph:v18, name=gracious_maxwell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:30:53 np0005532048 systemd[1]: libpod-conmon-45581fd93a96d7c5a9c6f13302793b631611ea47cb86be1092ea505836a78c19.scope: Deactivated successfully.
Nov 22 03:30:53 np0005532048 systemd[1]: Reloading.
Nov 22 03:30:53 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:30:53 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:30:53 np0005532048 systemd[1]: Reloading.
Nov 22 03:30:53 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:30:53 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:30:53 np0005532048 systemd[1]: Reached target All Ceph clusters and services.
Nov 22 03:30:53 np0005532048 systemd[1]: Reloading.
Nov 22 03:30:53 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:30:53 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:30:53 np0005532048 systemd[1]: Reached target Ceph cluster 34829716-a12c-57a6-8915-c1aa615c9d8a.
Nov 22 03:30:53 np0005532048 systemd[1]: Reloading.
Nov 22 03:30:53 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:30:53 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:30:54 np0005532048 systemd[1]: Reloading.
Nov 22 03:30:54 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:30:54 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:30:54 np0005532048 systemd[1]: Created slice Slice /system/ceph-34829716-a12c-57a6-8915-c1aa615c9d8a.
Nov 22 03:30:54 np0005532048 systemd[1]: Reached target System Time Set.
Nov 22 03:30:54 np0005532048 systemd[1]: Reached target System Time Synchronized.
Nov 22 03:30:54 np0005532048 systemd[1]: Starting Ceph mon.compute-0 for 34829716-a12c-57a6-8915-c1aa615c9d8a...
Nov 22 03:30:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:30:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:30:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:30:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:30:54 np0005532048 podman[74647]: 2025-11-22 08:30:54.650817603 +0000 UTC m=+0.060105040 container create 9cb938d24f1a05f869dace6a5e757e33903f96a4d50fa039b07e69b5120afb21 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:30:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5771eb16add4b9dbb39bbba8d804a56b6564a2b3bdd9c3dad27c41e049c81a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5771eb16add4b9dbb39bbba8d804a56b6564a2b3bdd9c3dad27c41e049c81a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5771eb16add4b9dbb39bbba8d804a56b6564a2b3bdd9c3dad27c41e049c81a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5771eb16add4b9dbb39bbba8d804a56b6564a2b3bdd9c3dad27c41e049c81a3/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:54 np0005532048 podman[74647]: 2025-11-22 08:30:54.620485048 +0000 UTC m=+0.029772525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:30:54 np0005532048 podman[74647]: 2025-11-22 08:30:54.717975825 +0000 UTC m=+0.127263242 container init 9cb938d24f1a05f869dace6a5e757e33903f96a4d50fa039b07e69b5120afb21 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:30:54 np0005532048 podman[74647]: 2025-11-22 08:30:54.727624043 +0000 UTC m=+0.136911440 container start 9cb938d24f1a05f869dace6a5e757e33903f96a4d50fa039b07e69b5120afb21 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 03:30:54 np0005532048 bash[74647]: 9cb938d24f1a05f869dace6a5e757e33903f96a4d50fa039b07e69b5120afb21
Nov 22 03:30:54 np0005532048 systemd[1]: Started Ceph mon.compute-0 for 34829716-a12c-57a6-8915-c1aa615c9d8a.
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: pidfile_write: ignore empty --pid-file
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: load: jerasure load: lrc 
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: RocksDB version: 7.9.2
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Git sha 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: DB SUMMARY
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: DB Session ID:  YZ3PNKZ4RIDZU9WJW92P
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: CURRENT file:  CURRENT
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                         Options.error_if_exists: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                       Options.create_if_missing: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                                     Options.env: 0x55fd64bb6c40
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                                Options.info_log: 0x55fd66630e80
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                              Options.statistics: (nil)
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                               Options.use_fsync: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                              Options.db_log_dir: 
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                                 Options.wal_dir: 
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                    Options.write_buffer_manager: 0x55fd66640b40
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                  Options.unordered_write: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                               Options.row_cache: None
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                              Options.wal_filter: None
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.two_write_queues: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.wal_compression: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.atomic_flush: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.max_background_jobs: 2
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.max_background_compactions: -1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.max_subcompactions: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.max_total_wal_size: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                          Options.max_open_files: -1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:       Options.compaction_readahead_size: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Compression algorithms supported:
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: #011kZSTD supported: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: #011kXpressCompression supported: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: #011kBZip2Compression supported: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: #011kLZ4Compression supported: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: #011kZlibCompression supported: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: #011kSnappyCompression supported: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:           Options.merge_operator: 
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:        Options.compaction_filter: None
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fd66630a80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fd666291f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:        Options.write_buffer_size: 33554432
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:  Options.max_write_buffer_number: 2
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:          Options.compression: NoCompression
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.num_levels: 7
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5b634893-b995-4ffc-939b-a63b09dd2eb8
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800254784649, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800254786853, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "YZ3PNKZ4RIDZU9WJW92P", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800254786994, "job": 1, "event": "recovery_finished"}
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fd66652e00
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: DB pointer 0x55fd6675c000
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fd666291f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 34829716-a12c-57a6-8915-c1aa615c9d8a
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@-1(???) e0 preinit fsid 34829716-a12c-57a6-8915-c1aa615c9d8a
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 22 03:30:54 np0005532048 podman[74667]: 2025-11-22 08:30:54.824010574 +0000 UTC m=+0.051222071 container create 1719aa020ff1aaa9103301b4537d0b21f402ef6967df980bf676279b8a2333f1 (image=quay.io/ceph/ceph:v18, name=romantic_nightingale, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-22T08:30:52.871138Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,os=Linux}
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).mds e1 new map
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: log_channel(cluster) log [DBG] : fsmap 
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mkfs 34829716-a12c-57a6-8915-c1aa615c9d8a
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 22 03:30:54 np0005532048 systemd[1]: Started libpod-conmon-1719aa020ff1aaa9103301b4537d0b21f402ef6967df980bf676279b8a2333f1.scope.
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 22 03:30:54 np0005532048 ceph-mon[74666]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 03:30:54 np0005532048 podman[74667]: 2025-11-22 08:30:54.804961005 +0000 UTC m=+0.032172522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:30:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:30:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b668932cdd17cf430a8f246525e3435599ba6f73fcc7499ce4406362be511cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b668932cdd17cf430a8f246525e3435599ba6f73fcc7499ce4406362be511cd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b668932cdd17cf430a8f246525e3435599ba6f73fcc7499ce4406362be511cd/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:54 np0005532048 podman[74667]: 2025-11-22 08:30:54.926241809 +0000 UTC m=+0.153453316 container init 1719aa020ff1aaa9103301b4537d0b21f402ef6967df980bf676279b8a2333f1 (image=quay.io/ceph/ceph:v18, name=romantic_nightingale, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:30:54 np0005532048 podman[74667]: 2025-11-22 08:30:54.936812618 +0000 UTC m=+0.164024115 container start 1719aa020ff1aaa9103301b4537d0b21f402ef6967df980bf676279b8a2333f1 (image=quay.io/ceph/ceph:v18, name=romantic_nightingale, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:30:54 np0005532048 podman[74667]: 2025-11-22 08:30:54.944074118 +0000 UTC m=+0.171285635 container attach 1719aa020ff1aaa9103301b4537d0b21f402ef6967df980bf676279b8a2333f1 (image=quay.io/ceph/ceph:v18, name=romantic_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 03:30:55 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 22 03:30:55 np0005532048 ceph-mon[74666]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1504785590' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 03:30:55 np0005532048 romantic_nightingale[74722]:  cluster:
Nov 22 03:30:55 np0005532048 romantic_nightingale[74722]:    id:     34829716-a12c-57a6-8915-c1aa615c9d8a
Nov 22 03:30:55 np0005532048 romantic_nightingale[74722]:    health: HEALTH_OK
Nov 22 03:30:55 np0005532048 romantic_nightingale[74722]: 
Nov 22 03:30:55 np0005532048 romantic_nightingale[74722]:  services:
Nov 22 03:30:55 np0005532048 romantic_nightingale[74722]:    mon: 1 daemons, quorum compute-0 (age 0.533815s)
Nov 22 03:30:55 np0005532048 romantic_nightingale[74722]:    mgr: no daemons active
Nov 22 03:30:55 np0005532048 romantic_nightingale[74722]:    osd: 0 osds: 0 up, 0 in
Nov 22 03:30:55 np0005532048 romantic_nightingale[74722]: 
Nov 22 03:30:55 np0005532048 romantic_nightingale[74722]:  data:
Nov 22 03:30:55 np0005532048 romantic_nightingale[74722]:    pools:   0 pools, 0 pgs
Nov 22 03:30:55 np0005532048 romantic_nightingale[74722]:    objects: 0 objects, 0 B
Nov 22 03:30:55 np0005532048 romantic_nightingale[74722]:    usage:   0 B used, 0 B / 0 B avail
Nov 22 03:30:55 np0005532048 romantic_nightingale[74722]:    pgs:     
Nov 22 03:30:55 np0005532048 romantic_nightingale[74722]: 
Nov 22 03:30:55 np0005532048 systemd[1]: libpod-1719aa020ff1aaa9103301b4537d0b21f402ef6967df980bf676279b8a2333f1.scope: Deactivated successfully.
Nov 22 03:30:55 np0005532048 conmon[74722]: conmon 1719aa020ff1aaa91033 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1719aa020ff1aaa9103301b4537d0b21f402ef6967df980bf676279b8a2333f1.scope/container/memory.events
Nov 22 03:30:55 np0005532048 podman[74667]: 2025-11-22 08:30:55.400696921 +0000 UTC m=+0.627908448 container died 1719aa020ff1aaa9103301b4537d0b21f402ef6967df980bf676279b8a2333f1 (image=quay.io/ceph/ceph:v18, name=romantic_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:30:55 np0005532048 podman[74667]: 2025-11-22 08:30:55.473535803 +0000 UTC m=+0.700747300 container remove 1719aa020ff1aaa9103301b4537d0b21f402ef6967df980bf676279b8a2333f1 (image=quay.io/ceph/ceph:v18, name=romantic_nightingale, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:30:55 np0005532048 systemd[1]: libpod-conmon-1719aa020ff1aaa9103301b4537d0b21f402ef6967df980bf676279b8a2333f1.scope: Deactivated successfully.
Nov 22 03:30:55 np0005532048 podman[74760]: 2025-11-22 08:30:55.538827939 +0000 UTC m=+0.044521726 container create 26bc334cc436d12c2468396446d38ea200d96ba1822f042ff163a42b3afbae23 (image=quay.io/ceph/ceph:v18, name=quizzical_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 03:30:55 np0005532048 systemd[1]: Started libpod-conmon-26bc334cc436d12c2468396446d38ea200d96ba1822f042ff163a42b3afbae23.scope.
Nov 22 03:30:55 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:30:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f6159af8443041b387009686115b0dc15993cfb84b2a34d6082e8c830ab5475/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f6159af8443041b387009686115b0dc15993cfb84b2a34d6082e8c830ab5475/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f6159af8443041b387009686115b0dc15993cfb84b2a34d6082e8c830ab5475/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f6159af8443041b387009686115b0dc15993cfb84b2a34d6082e8c830ab5475/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:55 np0005532048 podman[74760]: 2025-11-22 08:30:55.5205733 +0000 UTC m=+0.026267087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:30:55 np0005532048 podman[74760]: 2025-11-22 08:30:55.616765946 +0000 UTC m=+0.122459733 container init 26bc334cc436d12c2468396446d38ea200d96ba1822f042ff163a42b3afbae23 (image=quay.io/ceph/ceph:v18, name=quizzical_hellman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:30:55 np0005532048 podman[74760]: 2025-11-22 08:30:55.623136114 +0000 UTC m=+0.128829901 container start 26bc334cc436d12c2468396446d38ea200d96ba1822f042ff163a42b3afbae23 (image=quay.io/ceph/ceph:v18, name=quizzical_hellman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:30:55 np0005532048 podman[74760]: 2025-11-22 08:30:55.62830782 +0000 UTC m=+0.134001607 container attach 26bc334cc436d12c2468396446d38ea200d96ba1822f042ff163a42b3afbae23 (image=quay.io/ceph/ceph:v18, name=quizzical_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:30:55 np0005532048 ceph-mon[74666]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 03:30:56 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 22 03:30:56 np0005532048 ceph-mon[74666]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2825867640' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 03:30:56 np0005532048 ceph-mon[74666]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2825867640' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 22 03:30:56 np0005532048 quizzical_hellman[74776]: 
Nov 22 03:30:56 np0005532048 quizzical_hellman[74776]: [global]
Nov 22 03:30:56 np0005532048 quizzical_hellman[74776]: #011fsid = 34829716-a12c-57a6-8915-c1aa615c9d8a
Nov 22 03:30:56 np0005532048 quizzical_hellman[74776]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 22 03:30:56 np0005532048 quizzical_hellman[74776]: #011osd_crush_chooseleaf_type = 0
Nov 22 03:30:56 np0005532048 systemd[1]: libpod-26bc334cc436d12c2468396446d38ea200d96ba1822f042ff163a42b3afbae23.scope: Deactivated successfully.
Nov 22 03:30:56 np0005532048 podman[74760]: 2025-11-22 08:30:56.067021003 +0000 UTC m=+0.572714820 container died 26bc334cc436d12c2468396446d38ea200d96ba1822f042ff163a42b3afbae23 (image=quay.io/ceph/ceph:v18, name=quizzical_hellman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:30:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9f6159af8443041b387009686115b0dc15993cfb84b2a34d6082e8c830ab5475-merged.mount: Deactivated successfully.
Nov 22 03:30:56 np0005532048 podman[74760]: 2025-11-22 08:30:56.131540381 +0000 UTC m=+0.637234158 container remove 26bc334cc436d12c2468396446d38ea200d96ba1822f042ff163a42b3afbae23 (image=quay.io/ceph/ceph:v18, name=quizzical_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:30:56 np0005532048 systemd[1]: libpod-conmon-26bc334cc436d12c2468396446d38ea200d96ba1822f042ff163a42b3afbae23.scope: Deactivated successfully.
Nov 22 03:30:56 np0005532048 podman[74813]: 2025-11-22 08:30:56.208875413 +0000 UTC m=+0.053043426 container create 879f33e1b8aa874a10aa23402fee0005fcef700e79e87cc5ae74e825dfe65b1d (image=quay.io/ceph/ceph:v18, name=frosty_wing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:30:56 np0005532048 systemd[1]: Started libpod-conmon-879f33e1b8aa874a10aa23402fee0005fcef700e79e87cc5ae74e825dfe65b1d.scope.
Nov 22 03:30:56 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:30:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f37810201f5836936a286a039fd619eedd266ee0097c08dedea330ee99fd0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f37810201f5836936a286a039fd619eedd266ee0097c08dedea330ee99fd0b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f37810201f5836936a286a039fd619eedd266ee0097c08dedea330ee99fd0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f37810201f5836936a286a039fd619eedd266ee0097c08dedea330ee99fd0b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:56 np0005532048 podman[74813]: 2025-11-22 08:30:56.186222356 +0000 UTC m=+0.030390369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:30:56 np0005532048 podman[74813]: 2025-11-22 08:30:56.29652973 +0000 UTC m=+0.140697763 container init 879f33e1b8aa874a10aa23402fee0005fcef700e79e87cc5ae74e825dfe65b1d (image=quay.io/ceph/ceph:v18, name=frosty_wing, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:30:56 np0005532048 podman[74813]: 2025-11-22 08:30:56.301623244 +0000 UTC m=+0.145791257 container start 879f33e1b8aa874a10aa23402fee0005fcef700e79e87cc5ae74e825dfe65b1d (image=quay.io/ceph/ceph:v18, name=frosty_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 03:30:56 np0005532048 podman[74813]: 2025-11-22 08:30:56.306356451 +0000 UTC m=+0.150524464 container attach 879f33e1b8aa874a10aa23402fee0005fcef700e79e87cc5ae74e825dfe65b1d (image=quay.io/ceph/ceph:v18, name=frosty_wing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:30:56 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:30:56 np0005532048 ceph-mon[74666]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2826387648' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:30:56 np0005532048 systemd[1]: libpod-879f33e1b8aa874a10aa23402fee0005fcef700e79e87cc5ae74e825dfe65b1d.scope: Deactivated successfully.
Nov 22 03:30:56 np0005532048 podman[74813]: 2025-11-22 08:30:56.700114148 +0000 UTC m=+0.544282161 container died 879f33e1b8aa874a10aa23402fee0005fcef700e79e87cc5ae74e825dfe65b1d (image=quay.io/ceph/ceph:v18, name=frosty_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 22 03:30:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-71f37810201f5836936a286a039fd619eedd266ee0097c08dedea330ee99fd0b-merged.mount: Deactivated successfully.
Nov 22 03:30:56 np0005532048 podman[74813]: 2025-11-22 08:30:56.769864914 +0000 UTC m=+0.614032937 container remove 879f33e1b8aa874a10aa23402fee0005fcef700e79e87cc5ae74e825dfe65b1d (image=quay.io/ceph/ceph:v18, name=frosty_wing, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:30:56 np0005532048 systemd[1]: libpod-conmon-879f33e1b8aa874a10aa23402fee0005fcef700e79e87cc5ae74e825dfe65b1d.scope: Deactivated successfully.
Nov 22 03:30:56 np0005532048 systemd[1]: Stopping Ceph mon.compute-0 for 34829716-a12c-57a6-8915-c1aa615c9d8a...
Nov 22 03:30:56 np0005532048 ceph-mon[74666]: from='client.? 192.168.122.100:0/2825867640' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 03:30:56 np0005532048 ceph-mon[74666]: from='client.? 192.168.122.100:0/2825867640' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 22 03:30:56 np0005532048 ceph-mon[74666]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 22 03:30:56 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 22 03:30:56 np0005532048 ceph-mon[74666]: mon.compute-0@0(leader) e1 shutdown
Nov 22 03:30:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0[74662]: 2025-11-22T08:30:56.969+0000 7f675886a640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 22 03:30:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0[74662]: 2025-11-22T08:30:56.969+0000 7f675886a640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 22 03:30:56 np0005532048 ceph-mon[74666]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 22 03:30:56 np0005532048 ceph-mon[74666]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 22 03:30:57 np0005532048 podman[74897]: 2025-11-22 08:30:57.040958063 +0000 UTC m=+0.108071960 container died 9cb938d24f1a05f869dace6a5e757e33903f96a4d50fa039b07e69b5120afb21 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:30:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c5771eb16add4b9dbb39bbba8d804a56b6564a2b3bdd9c3dad27c41e049c81a3-merged.mount: Deactivated successfully.
Nov 22 03:30:57 np0005532048 podman[74897]: 2025-11-22 08:30:57.100841696 +0000 UTC m=+0.167955623 container remove 9cb938d24f1a05f869dace6a5e757e33903f96a4d50fa039b07e69b5120afb21 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:30:57 np0005532048 bash[74897]: ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0
Nov 22 03:30:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:30:57 np0005532048 systemd[1]: ceph-34829716-a12c-57a6-8915-c1aa615c9d8a@mon.compute-0.service: Deactivated successfully.
Nov 22 03:30:57 np0005532048 systemd[1]: Stopped Ceph mon.compute-0 for 34829716-a12c-57a6-8915-c1aa615c9d8a.
Nov 22 03:30:57 np0005532048 systemd[1]: Starting Ceph mon.compute-0 for 34829716-a12c-57a6-8915-c1aa615c9d8a...
Nov 22 03:30:57 np0005532048 podman[75001]: 2025-11-22 08:30:57.525720709 +0000 UTC m=+0.107797123 container create 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 03:30:57 np0005532048 podman[75001]: 2025-11-22 08:30:57.438210526 +0000 UTC m=+0.020286970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:30:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c77816da56a3f821f0702f04043a0047b192d1be54fabd34e3292bcb8b1ecf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c77816da56a3f821f0702f04043a0047b192d1be54fabd34e3292bcb8b1ecf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c77816da56a3f821f0702f04043a0047b192d1be54fabd34e3292bcb8b1ecf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c77816da56a3f821f0702f04043a0047b192d1be54fabd34e3292bcb8b1ecf/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:57 np0005532048 podman[75001]: 2025-11-22 08:30:57.599724639 +0000 UTC m=+0.181801073 container init 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:30:57 np0005532048 podman[75001]: 2025-11-22 08:30:57.605330497 +0000 UTC m=+0.187406921 container start 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 22 03:30:57 np0005532048 bash[75001]: 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297
Nov 22 03:30:57 np0005532048 systemd[1]: Started Ceph mon.compute-0 for 34829716-a12c-57a6-8915-c1aa615c9d8a.
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: pidfile_write: ignore empty --pid-file
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: load: jerasure load: lrc 
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: RocksDB version: 7.9.2
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Git sha 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: DB SUMMARY
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: DB Session ID:  P2L1E0L4EXLHZ7SVTM6H
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: CURRENT file:  CURRENT
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55680 ; 
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                         Options.error_if_exists: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                       Options.create_if_missing: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                                     Options.env: 0x55cbe3dd0c40
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                                Options.info_log: 0x55cbe48f9040
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                              Options.statistics: (nil)
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                               Options.use_fsync: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                              Options.db_log_dir: 
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                                 Options.wal_dir: 
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                    Options.write_buffer_manager: 0x55cbe4908b40
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                  Options.unordered_write: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                               Options.row_cache: None
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                              Options.wal_filter: None
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.two_write_queues: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.wal_compression: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.atomic_flush: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.max_background_jobs: 2
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.max_background_compactions: -1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.max_subcompactions: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.max_total_wal_size: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                          Options.max_open_files: -1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:       Options.compaction_readahead_size: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Compression algorithms supported:
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: #011kZSTD supported: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: #011kXpressCompression supported: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: #011kBZip2Compression supported: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: #011kLZ4Compression supported: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: #011kZlibCompression supported: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: #011kSnappyCompression supported: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:           Options.merge_operator: 
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:        Options.compaction_filter: None
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cbe48f8c40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cbe48f11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:        Options.write_buffer_size: 33554432
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:  Options.max_write_buffer_number: 2
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:          Options.compression: NoCompression
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.num_levels: 7
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5b634893-b995-4ffc-939b-a63b09dd2eb8
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800257648631, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800257651530, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53801, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51390, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800257, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800257651684, "job": 1, "event": "recovery_finished"}
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55cbe491ae00
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: DB pointer 0x55cbe49a4000
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   55.86 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      2/0   55.86 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 2.83 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 2.83 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 512.00 MB usage: 1.73 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 6.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(2,0.95 KB,0.000181794%)#012#012** File Read Latency Histogram By Level [default] **
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 34829716-a12c-57a6-8915-c1aa615c9d8a
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: mon.compute-0@-1(???) e1 preinit fsid 34829716-a12c-57a6-8915-c1aa615c9d8a
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: mon.compute-0@-1(???).mds e1 new map
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: mon.compute-0@-1(???).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : fsmap 
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 22 03:30:57 np0005532048 podman[75022]: 2025-11-22 08:30:57.696998702 +0000 UTC m=+0.053445755 container create 7ec3f2a49f1e0f00789ce2094c4572e0c826baf71925e622afda1b5ff9d48137 (image=quay.io/ceph/ceph:v18, name=sweet_chaplygin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 03:30:57 np0005532048 systemd[1]: Started libpod-conmon-7ec3f2a49f1e0f00789ce2094c4572e0c826baf71925e622afda1b5ff9d48137.scope.
Nov 22 03:30:57 np0005532048 ceph-mon[75021]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 03:30:57 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:30:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14dfbf827c6e5868dacae1316bb1cf3470386f52e596fd93c2b2d35d82de74b6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14dfbf827c6e5868dacae1316bb1cf3470386f52e596fd93c2b2d35d82de74b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14dfbf827c6e5868dacae1316bb1cf3470386f52e596fd93c2b2d35d82de74b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:57 np0005532048 podman[75022]: 2025-11-22 08:30:57.677233966 +0000 UTC m=+0.033681049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:30:57 np0005532048 podman[75022]: 2025-11-22 08:30:57.783708976 +0000 UTC m=+0.140156049 container init 7ec3f2a49f1e0f00789ce2094c4572e0c826baf71925e622afda1b5ff9d48137 (image=quay.io/ceph/ceph:v18, name=sweet_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 03:30:57 np0005532048 podman[75022]: 2025-11-22 08:30:57.791131378 +0000 UTC m=+0.147578431 container start 7ec3f2a49f1e0f00789ce2094c4572e0c826baf71925e622afda1b5ff9d48137 (image=quay.io/ceph/ceph:v18, name=sweet_chaplygin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:30:57 np0005532048 podman[75022]: 2025-11-22 08:30:57.798366966 +0000 UTC m=+0.154814049 container attach 7ec3f2a49f1e0f00789ce2094c4572e0c826baf71925e622afda1b5ff9d48137 (image=quay.io/ceph/ceph:v18, name=sweet_chaplygin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 03:30:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 22 03:30:58 np0005532048 systemd[1]: libpod-7ec3f2a49f1e0f00789ce2094c4572e0c826baf71925e622afda1b5ff9d48137.scope: Deactivated successfully.
Nov 22 03:30:58 np0005532048 podman[75022]: 2025-11-22 08:30:58.208435774 +0000 UTC m=+0.564882817 container died 7ec3f2a49f1e0f00789ce2094c4572e0c826baf71925e622afda1b5ff9d48137 (image=quay.io/ceph/ceph:v18, name=sweet_chaplygin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:30:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay-14dfbf827c6e5868dacae1316bb1cf3470386f52e596fd93c2b2d35d82de74b6-merged.mount: Deactivated successfully.
Nov 22 03:30:58 np0005532048 podman[75022]: 2025-11-22 08:30:58.641840457 +0000 UTC m=+0.998287530 container remove 7ec3f2a49f1e0f00789ce2094c4572e0c826baf71925e622afda1b5ff9d48137 (image=quay.io/ceph/ceph:v18, name=sweet_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:30:58 np0005532048 systemd[1]: libpod-conmon-7ec3f2a49f1e0f00789ce2094c4572e0c826baf71925e622afda1b5ff9d48137.scope: Deactivated successfully.
Nov 22 03:30:58 np0005532048 podman[75115]: 2025-11-22 08:30:58.72411997 +0000 UTC m=+0.063421711 container create f0c9a1d979b0670879f228b2f28eedae10f424cfdb40bccadfe90c188f565941 (image=quay.io/ceph/ceph:v18, name=bold_ardinghelli, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 03:30:58 np0005532048 podman[75115]: 2025-11-22 08:30:58.689017297 +0000 UTC m=+0.028319128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:30:58 np0005532048 systemd[1]: Started libpod-conmon-f0c9a1d979b0670879f228b2f28eedae10f424cfdb40bccadfe90c188f565941.scope.
Nov 22 03:30:58 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:30:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e79a1b1e33ea806c1574b469fe2d155a6a8689e9ac6e703afcc326c9200e245/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e79a1b1e33ea806c1574b469fe2d155a6a8689e9ac6e703afcc326c9200e245/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e79a1b1e33ea806c1574b469fe2d155a6a8689e9ac6e703afcc326c9200e245/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:58 np0005532048 podman[75115]: 2025-11-22 08:30:58.998538842 +0000 UTC m=+0.337840603 container init f0c9a1d979b0670879f228b2f28eedae10f424cfdb40bccadfe90c188f565941 (image=quay.io/ceph/ceph:v18, name=bold_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:30:59 np0005532048 podman[75115]: 2025-11-22 08:30:59.008902597 +0000 UTC m=+0.348204348 container start f0c9a1d979b0670879f228b2f28eedae10f424cfdb40bccadfe90c188f565941 (image=quay.io/ceph/ceph:v18, name=bold_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 03:30:59 np0005532048 podman[75115]: 2025-11-22 08:30:59.103489583 +0000 UTC m=+0.442791344 container attach f0c9a1d979b0670879f228b2f28eedae10f424cfdb40bccadfe90c188f565941 (image=quay.io/ceph/ceph:v18, name=bold_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:30:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 22 03:30:59 np0005532048 systemd[1]: libpod-f0c9a1d979b0670879f228b2f28eedae10f424cfdb40bccadfe90c188f565941.scope: Deactivated successfully.
Nov 22 03:30:59 np0005532048 podman[75115]: 2025-11-22 08:30:59.408810504 +0000 UTC m=+0.748112315 container died f0c9a1d979b0670879f228b2f28eedae10f424cfdb40bccadfe90c188f565941 (image=quay.io/ceph/ceph:v18, name=bold_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:30:59 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1e79a1b1e33ea806c1574b469fe2d155a6a8689e9ac6e703afcc326c9200e245-merged.mount: Deactivated successfully.
Nov 22 03:30:59 np0005532048 podman[75115]: 2025-11-22 08:30:59.56470762 +0000 UTC m=+0.904009351 container remove f0c9a1d979b0670879f228b2f28eedae10f424cfdb40bccadfe90c188f565941 (image=quay.io/ceph/ceph:v18, name=bold_ardinghelli, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 22 03:30:59 np0005532048 systemd[1]: libpod-conmon-f0c9a1d979b0670879f228b2f28eedae10f424cfdb40bccadfe90c188f565941.scope: Deactivated successfully.
Nov 22 03:30:59 np0005532048 systemd[1]: Reloading.
Nov 22 03:30:59 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:30:59 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:30:59 np0005532048 systemd[1]: Reloading.
Nov 22 03:31:00 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:31:00 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:31:00 np0005532048 systemd[1]: Starting Ceph mgr.compute-0.ldbkey for 34829716-a12c-57a6-8915-c1aa615c9d8a...
Nov 22 03:31:00 np0005532048 podman[75296]: 2025-11-22 08:31:00.461154504 +0000 UTC m=+0.052278557 container create bcf277aed6bbb24a8ff88fdb16370eaa70dca13b88b232b34974762f983f9d49 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:31:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa59d9c05d64d47a2e6bcebc755ba63e58373fcb4d5828e749ad79e6db28dede/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa59d9c05d64d47a2e6bcebc755ba63e58373fcb4d5828e749ad79e6db28dede/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa59d9c05d64d47a2e6bcebc755ba63e58373fcb4d5828e749ad79e6db28dede/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa59d9c05d64d47a2e6bcebc755ba63e58373fcb4d5828e749ad79e6db28dede/merged/var/lib/ceph/mgr/ceph-compute-0.ldbkey supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:00 np0005532048 podman[75296]: 2025-11-22 08:31:00.436256631 +0000 UTC m=+0.027380434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:31:00 np0005532048 podman[75296]: 2025-11-22 08:31:00.549172729 +0000 UTC m=+0.140296562 container init bcf277aed6bbb24a8ff88fdb16370eaa70dca13b88b232b34974762f983f9d49 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:31:00 np0005532048 podman[75296]: 2025-11-22 08:31:00.554780737 +0000 UTC m=+0.145904540 container start bcf277aed6bbb24a8ff88fdb16370eaa70dca13b88b232b34974762f983f9d49 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 22 03:31:00 np0005532048 bash[75296]: bcf277aed6bbb24a8ff88fdb16370eaa70dca13b88b232b34974762f983f9d49
Nov 22 03:31:00 np0005532048 systemd[1]: Started Ceph mgr.compute-0.ldbkey for 34829716-a12c-57a6-8915-c1aa615c9d8a.
Nov 22 03:31:00 np0005532048 ceph-mgr[75315]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:31:00 np0005532048 ceph-mgr[75315]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 22 03:31:00 np0005532048 ceph-mgr[75315]: pidfile_write: ignore empty --pid-file
Nov 22 03:31:00 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'alerts'
Nov 22 03:31:01 np0005532048 ceph-mgr[75315]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 03:31:01 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'balancer'
Nov 22 03:31:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:01.022+0000 7f0e8c37e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 03:31:01 np0005532048 podman[75340]: 2025-11-22 08:31:01.18048265 +0000 UTC m=+0.062343975 container create d44d84c1f2a8463686e497858b47b77285b1f2ecb35bd677027ca04a9578b892 (image=quay.io/ceph/ceph:v18, name=inspiring_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:31:01 np0005532048 systemd[1]: Started libpod-conmon-d44d84c1f2a8463686e497858b47b77285b1f2ecb35bd677027ca04a9578b892.scope.
Nov 22 03:31:01 np0005532048 podman[75340]: 2025-11-22 08:31:01.156427229 +0000 UTC m=+0.038288534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:31:01 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:31:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8db1378e8df6cb5e22ccac3c45da00817f228b3302291e8f746cbbb4450f85f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8db1378e8df6cb5e22ccac3c45da00817f228b3302291e8f746cbbb4450f85f0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8db1378e8df6cb5e22ccac3c45da00817f228b3302291e8f746cbbb4450f85f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:01 np0005532048 ceph-mgr[75315]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 03:31:01 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'cephadm'
Nov 22 03:31:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:01.284+0000 7f0e8c37e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 03:31:01 np0005532048 podman[75340]: 2025-11-22 08:31:01.290414524 +0000 UTC m=+0.172275829 container init d44d84c1f2a8463686e497858b47b77285b1f2ecb35bd677027ca04a9578b892 (image=quay.io/ceph/ceph:v18, name=inspiring_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:31:01 np0005532048 podman[75340]: 2025-11-22 08:31:01.299494368 +0000 UTC m=+0.181355653 container start d44d84c1f2a8463686e497858b47b77285b1f2ecb35bd677027ca04a9578b892 (image=quay.io/ceph/ceph:v18, name=inspiring_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 22 03:31:01 np0005532048 podman[75340]: 2025-11-22 08:31:01.304684726 +0000 UTC m=+0.186546011 container attach d44d84c1f2a8463686e497858b47b77285b1f2ecb35bd677027ca04a9578b892 (image=quay.io/ceph/ceph:v18, name=inspiring_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 03:31:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 03:31:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/628315512' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]: 
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]: {
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    "fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    "health": {
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "status": "HEALTH_OK",
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "checks": {},
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "mutes": []
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    },
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    "election_epoch": 5,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    "quorum": [
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        0
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    ],
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    "quorum_names": [
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "compute-0"
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    ],
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    "quorum_age": 4,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    "monmap": {
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "epoch": 1,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "min_mon_release_name": "reef",
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "num_mons": 1
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    },
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    "osdmap": {
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "epoch": 1,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "num_osds": 0,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "num_up_osds": 0,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "osd_up_since": 0,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "num_in_osds": 0,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "osd_in_since": 0,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "num_remapped_pgs": 0
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    },
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    "pgmap": {
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "pgs_by_state": [],
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "num_pgs": 0,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "num_pools": 0,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "num_objects": 0,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "data_bytes": 0,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "bytes_used": 0,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "bytes_avail": 0,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "bytes_total": 0
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    },
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    "fsmap": {
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "epoch": 1,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "by_rank": [],
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "up:standby": 0
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    },
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    "mgrmap": {
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "available": false,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "num_standbys": 0,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "modules": [
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:            "iostat",
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:            "nfs",
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:            "restful"
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        ],
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "services": {}
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    },
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    "servicemap": {
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "epoch": 1,
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "modified": "2025-11-22T08:30:54.849005+0000",
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:        "services": {}
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    },
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]:    "progress_events": {}
Nov 22 03:31:01 np0005532048 inspiring_liskov[75358]: }
Nov 22 03:31:01 np0005532048 systemd[1]: libpod-d44d84c1f2a8463686e497858b47b77285b1f2ecb35bd677027ca04a9578b892.scope: Deactivated successfully.
Nov 22 03:31:01 np0005532048 podman[75340]: 2025-11-22 08:31:01.770219748 +0000 UTC m=+0.652081033 container died d44d84c1f2a8463686e497858b47b77285b1f2ecb35bd677027ca04a9578b892 (image=quay.io/ceph/ceph:v18, name=inspiring_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:31:01 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8db1378e8df6cb5e22ccac3c45da00817f228b3302291e8f746cbbb4450f85f0-merged.mount: Deactivated successfully.
Nov 22 03:31:01 np0005532048 podman[75340]: 2025-11-22 08:31:01.833134446 +0000 UTC m=+0.714995731 container remove d44d84c1f2a8463686e497858b47b77285b1f2ecb35bd677027ca04a9578b892 (image=quay.io/ceph/ceph:v18, name=inspiring_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 03:31:01 np0005532048 systemd[1]: libpod-conmon-d44d84c1f2a8463686e497858b47b77285b1f2ecb35bd677027ca04a9578b892.scope: Deactivated successfully.
Nov 22 03:31:03 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'crash'
Nov 22 03:31:03 np0005532048 ceph-mgr[75315]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 03:31:03 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'dashboard'
Nov 22 03:31:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:03.701+0000 7f0e8c37e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 03:31:03 np0005532048 podman[75404]: 2025-11-22 08:31:03.874745492 +0000 UTC m=+0.021180612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:31:04 np0005532048 podman[75404]: 2025-11-22 08:31:04.097401969 +0000 UTC m=+0.243837059 container create 97e0c59aeb0803be381ec1b6a2f3e19e5030a85d53ae430030b69747d8884818 (image=quay.io/ceph/ceph:v18, name=eager_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:31:04 np0005532048 systemd[1]: Started libpod-conmon-97e0c59aeb0803be381ec1b6a2f3e19e5030a85d53ae430030b69747d8884818.scope.
Nov 22 03:31:04 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:31:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b1435ef2843579646b2835db48dd6bd5bec040394e8cf2a9b551ec5f7ad768/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b1435ef2843579646b2835db48dd6bd5bec040394e8cf2a9b551ec5f7ad768/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4b1435ef2843579646b2835db48dd6bd5bec040394e8cf2a9b551ec5f7ad768/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:04 np0005532048 podman[75404]: 2025-11-22 08:31:04.257120758 +0000 UTC m=+0.403555868 container init 97e0c59aeb0803be381ec1b6a2f3e19e5030a85d53ae430030b69747d8884818 (image=quay.io/ceph/ceph:v18, name=eager_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:31:04 np0005532048 podman[75404]: 2025-11-22 08:31:04.267380881 +0000 UTC m=+0.413815972 container start 97e0c59aeb0803be381ec1b6a2f3e19e5030a85d53ae430030b69747d8884818 (image=quay.io/ceph/ceph:v18, name=eager_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:31:04 np0005532048 podman[75404]: 2025-11-22 08:31:04.270696173 +0000 UTC m=+0.417131263 container attach 97e0c59aeb0803be381ec1b6a2f3e19e5030a85d53ae430030b69747d8884818 (image=quay.io/ceph/ceph:v18, name=eager_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 03:31:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 03:31:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1709890495' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]: 
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]: {
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    "fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    "health": {
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "status": "HEALTH_OK",
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "checks": {},
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "mutes": []
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    },
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    "election_epoch": 5,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    "quorum": [
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        0
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    ],
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    "quorum_names": [
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "compute-0"
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    ],
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    "quorum_age": 7,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    "monmap": {
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "epoch": 1,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "min_mon_release_name": "reef",
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "num_mons": 1
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    },
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    "osdmap": {
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "epoch": 1,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "num_osds": 0,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "num_up_osds": 0,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "osd_up_since": 0,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "num_in_osds": 0,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "osd_in_since": 0,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "num_remapped_pgs": 0
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    },
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    "pgmap": {
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "pgs_by_state": [],
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "num_pgs": 0,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "num_pools": 0,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "num_objects": 0,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "data_bytes": 0,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "bytes_used": 0,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "bytes_avail": 0,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "bytes_total": 0
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    },
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    "fsmap": {
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "epoch": 1,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "by_rank": [],
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "up:standby": 0
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    },
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    "mgrmap": {
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "available": false,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "num_standbys": 0,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "modules": [
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:            "iostat",
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:            "nfs",
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:            "restful"
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        ],
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "services": {}
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    },
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    "servicemap": {
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "epoch": 1,
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "modified": "2025-11-22T08:30:54.849005+0000",
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:        "services": {}
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    },
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]:    "progress_events": {}
Nov 22 03:31:04 np0005532048 eager_agnesi[75421]: }
Nov 22 03:31:04 np0005532048 systemd[1]: libpod-97e0c59aeb0803be381ec1b6a2f3e19e5030a85d53ae430030b69747d8884818.scope: Deactivated successfully.
Nov 22 03:31:04 np0005532048 podman[75404]: 2025-11-22 08:31:04.720526619 +0000 UTC m=+0.866961709 container died 97e0c59aeb0803be381ec1b6a2f3e19e5030a85d53ae430030b69747d8884818 (image=quay.io/ceph/ceph:v18, name=eager_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:31:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d4b1435ef2843579646b2835db48dd6bd5bec040394e8cf2a9b551ec5f7ad768-merged.mount: Deactivated successfully.
Nov 22 03:31:04 np0005532048 podman[75404]: 2025-11-22 08:31:04.769090724 +0000 UTC m=+0.915525834 container remove 97e0c59aeb0803be381ec1b6a2f3e19e5030a85d53ae430030b69747d8884818 (image=quay.io/ceph/ceph:v18, name=eager_agnesi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 03:31:04 np0005532048 systemd[1]: libpod-conmon-97e0c59aeb0803be381ec1b6a2f3e19e5030a85d53ae430030b69747d8884818.scope: Deactivated successfully.
Nov 22 03:31:05 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'devicehealth'
Nov 22 03:31:05 np0005532048 ceph-mgr[75315]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 03:31:05 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'diskprediction_local'
Nov 22 03:31:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:05.561+0000 7f0e8c37e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 03:31:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 22 03:31:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 22 03:31:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]:  from numpy import show_config as show_numpy_config
Nov 22 03:31:06 np0005532048 ceph-mgr[75315]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 03:31:06 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'influx'
Nov 22 03:31:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:06.161+0000 7f0e8c37e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 03:31:06 np0005532048 ceph-mgr[75315]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 03:31:06 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'insights'
Nov 22 03:31:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:06.417+0000 7f0e8c37e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 03:31:06 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'iostat'
Nov 22 03:31:06 np0005532048 podman[75459]: 2025-11-22 08:31:06.848433788 +0000 UTC m=+0.048373432 container create a129bfcfaf9f7bacc4c9d9d07dba1d7bb785958f77c7b464320d249aa9be5433 (image=quay.io/ceph/ceph:v18, name=wizardly_booth, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:31:06 np0005532048 systemd[1]: Started libpod-conmon-a129bfcfaf9f7bacc4c9d9d07dba1d7bb785958f77c7b464320d249aa9be5433.scope.
Nov 22 03:31:06 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:31:06 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cad310155b6646136276c27d33a623fd31f0159217d68db887be87c597691d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:06 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cad310155b6646136276c27d33a623fd31f0159217d68db887be87c597691d9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:06 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cad310155b6646136276c27d33a623fd31f0159217d68db887be87c597691d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:06 np0005532048 podman[75459]: 2025-11-22 08:31:06.824607162 +0000 UTC m=+0.024546836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:31:06 np0005532048 podman[75459]: 2025-11-22 08:31:06.924238563 +0000 UTC m=+0.124178217 container init a129bfcfaf9f7bacc4c9d9d07dba1d7bb785958f77c7b464320d249aa9be5433 (image=quay.io/ceph/ceph:v18, name=wizardly_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:31:06 np0005532048 podman[75459]: 2025-11-22 08:31:06.930711692 +0000 UTC m=+0.130651336 container start a129bfcfaf9f7bacc4c9d9d07dba1d7bb785958f77c7b464320d249aa9be5433 (image=quay.io/ceph/ceph:v18, name=wizardly_booth, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:31:06 np0005532048 podman[75459]: 2025-11-22 08:31:06.935294245 +0000 UTC m=+0.135233879 container attach a129bfcfaf9f7bacc4c9d9d07dba1d7bb785958f77c7b464320d249aa9be5433 (image=quay.io/ceph/ceph:v18, name=wizardly_booth, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:31:06 np0005532048 ceph-mgr[75315]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 03:31:06 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'k8sevents'
Nov 22 03:31:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:06.955+0000 7f0e8c37e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 03:31:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 03:31:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4033092723' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]: 
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]: {
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    "fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    "health": {
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "status": "HEALTH_OK",
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "checks": {},
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "mutes": []
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    },
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    "election_epoch": 5,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    "quorum": [
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        0
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    ],
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    "quorum_names": [
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "compute-0"
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    ],
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    "quorum_age": 9,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    "monmap": {
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "epoch": 1,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "min_mon_release_name": "reef",
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "num_mons": 1
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    },
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    "osdmap": {
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "epoch": 1,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "num_osds": 0,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "num_up_osds": 0,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "osd_up_since": 0,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "num_in_osds": 0,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "osd_in_since": 0,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "num_remapped_pgs": 0
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    },
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    "pgmap": {
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "pgs_by_state": [],
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "num_pgs": 0,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "num_pools": 0,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "num_objects": 0,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "data_bytes": 0,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "bytes_used": 0,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "bytes_avail": 0,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "bytes_total": 0
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    },
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    "fsmap": {
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "epoch": 1,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "by_rank": [],
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "up:standby": 0
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    },
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    "mgrmap": {
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "available": false,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "num_standbys": 0,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "modules": [
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:            "iostat",
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:            "nfs",
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:            "restful"
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        ],
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "services": {}
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    },
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    "servicemap": {
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "epoch": 1,
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "modified": "2025-11-22T08:30:54.849005+0000",
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:        "services": {}
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    },
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]:    "progress_events": {}
Nov 22 03:31:07 np0005532048 wizardly_booth[75475]: }
Nov 22 03:31:07 np0005532048 systemd[1]: libpod-a129bfcfaf9f7bacc4c9d9d07dba1d7bb785958f77c7b464320d249aa9be5433.scope: Deactivated successfully.
Nov 22 03:31:07 np0005532048 podman[75459]: 2025-11-22 08:31:07.391557572 +0000 UTC m=+0.591497206 container died a129bfcfaf9f7bacc4c9d9d07dba1d7bb785958f77c7b464320d249aa9be5433 (image=quay.io/ceph/ceph:v18, name=wizardly_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:31:07 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2cad310155b6646136276c27d33a623fd31f0159217d68db887be87c597691d9-merged.mount: Deactivated successfully.
Nov 22 03:31:07 np0005532048 podman[75459]: 2025-11-22 08:31:07.818965897 +0000 UTC m=+1.018905541 container remove a129bfcfaf9f7bacc4c9d9d07dba1d7bb785958f77c7b464320d249aa9be5433 (image=quay.io/ceph/ceph:v18, name=wizardly_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:31:07 np0005532048 systemd[1]: libpod-conmon-a129bfcfaf9f7bacc4c9d9d07dba1d7bb785958f77c7b464320d249aa9be5433.scope: Deactivated successfully.
Nov 22 03:31:09 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'localpool'
Nov 22 03:31:09 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'mds_autoscaler'
Nov 22 03:31:09 np0005532048 podman[75514]: 2025-11-22 08:31:09.863914908 +0000 UTC m=+0.023635435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:31:10 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'mirroring'
Nov 22 03:31:10 np0005532048 podman[75514]: 2025-11-22 08:31:10.476950246 +0000 UTC m=+0.636670753 container create 3102560c4ff1705f023d0057c529ffad8ba3828a3c73c75e62b0c3bb5215c169 (image=quay.io/ceph/ceph:v18, name=nervous_liskov, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:31:10 np0005532048 systemd[1]: Started libpod-conmon-3102560c4ff1705f023d0057c529ffad8ba3828a3c73c75e62b0c3bb5215c169.scope.
Nov 22 03:31:10 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:31:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206b146a40c74e149a4377f87402ca8a4c9db62c701abf78d591e124775793b9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206b146a40c74e149a4377f87402ca8a4c9db62c701abf78d591e124775793b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206b146a40c74e149a4377f87402ca8a4c9db62c701abf78d591e124775793b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:10 np0005532048 podman[75514]: 2025-11-22 08:31:10.569590385 +0000 UTC m=+0.729310922 container init 3102560c4ff1705f023d0057c529ffad8ba3828a3c73c75e62b0c3bb5215c169 (image=quay.io/ceph/ceph:v18, name=nervous_liskov, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:31:10 np0005532048 podman[75514]: 2025-11-22 08:31:10.578032653 +0000 UTC m=+0.737753160 container start 3102560c4ff1705f023d0057c529ffad8ba3828a3c73c75e62b0c3bb5215c169 (image=quay.io/ceph/ceph:v18, name=nervous_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:31:10 np0005532048 podman[75514]: 2025-11-22 08:31:10.597695429 +0000 UTC m=+0.757415966 container attach 3102560c4ff1705f023d0057c529ffad8ba3828a3c73c75e62b0c3bb5215c169 (image=quay.io/ceph/ceph:v18, name=nervous_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:31:10 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'nfs'
Nov 22 03:31:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 03:31:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2897503065' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]: 
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]: {
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    "fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    "health": {
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "status": "HEALTH_OK",
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "checks": {},
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "mutes": []
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    },
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    "election_epoch": 5,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    "quorum": [
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        0
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    ],
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    "quorum_names": [
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "compute-0"
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    ],
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    "quorum_age": 13,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    "monmap": {
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "epoch": 1,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "min_mon_release_name": "reef",
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "num_mons": 1
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    },
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    "osdmap": {
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "epoch": 1,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "num_osds": 0,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "num_up_osds": 0,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "osd_up_since": 0,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "num_in_osds": 0,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "osd_in_since": 0,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "num_remapped_pgs": 0
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    },
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    "pgmap": {
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "pgs_by_state": [],
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "num_pgs": 0,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "num_pools": 0,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "num_objects": 0,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "data_bytes": 0,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "bytes_used": 0,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "bytes_avail": 0,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "bytes_total": 0
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    },
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    "fsmap": {
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "epoch": 1,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "by_rank": [],
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "up:standby": 0
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    },
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    "mgrmap": {
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "available": false,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "num_standbys": 0,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "modules": [
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:            "iostat",
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:            "nfs",
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:            "restful"
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        ],
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "services": {}
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    },
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    "servicemap": {
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "epoch": 1,
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "modified": "2025-11-22T08:30:54.849005+0000",
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:        "services": {}
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    },
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]:    "progress_events": {}
Nov 22 03:31:11 np0005532048 nervous_liskov[75530]: }
Nov 22 03:31:11 np0005532048 systemd[1]: libpod-3102560c4ff1705f023d0057c529ffad8ba3828a3c73c75e62b0c3bb5215c169.scope: Deactivated successfully.
Nov 22 03:31:11 np0005532048 podman[75514]: 2025-11-22 08:31:11.054989611 +0000 UTC m=+1.214710118 container died 3102560c4ff1705f023d0057c529ffad8ba3828a3c73c75e62b0c3bb5215c169 (image=quay.io/ceph/ceph:v18, name=nervous_liskov, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:31:11 np0005532048 systemd[1]: var-lib-containers-storage-overlay-206b146a40c74e149a4377f87402ca8a4c9db62c701abf78d591e124775793b9-merged.mount: Deactivated successfully.
Nov 22 03:31:11 np0005532048 podman[75514]: 2025-11-22 08:31:11.25172938 +0000 UTC m=+1.411449887 container remove 3102560c4ff1705f023d0057c529ffad8ba3828a3c73c75e62b0c3bb5215c169 (image=quay.io/ceph/ceph:v18, name=nervous_liskov, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 03:31:11 np0005532048 systemd[1]: libpod-conmon-3102560c4ff1705f023d0057c529ffad8ba3828a3c73c75e62b0c3bb5215c169.scope: Deactivated successfully.
Nov 22 03:31:11 np0005532048 ceph-mgr[75315]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 22 03:31:11 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'orchestrator'
Nov 22 03:31:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:11.415+0000 7f0e8c37e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 22 03:31:12 np0005532048 ceph-mgr[75315]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 22 03:31:12 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'osd_perf_query'
Nov 22 03:31:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:12.237+0000 7f0e8c37e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 22 03:31:12 np0005532048 ceph-mgr[75315]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 22 03:31:12 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'osd_support'
Nov 22 03:31:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:12.611+0000 7f0e8c37e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 22 03:31:12 np0005532048 ceph-mgr[75315]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 22 03:31:12 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'pg_autoscaler'
Nov 22 03:31:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:12.920+0000 7f0e8c37e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 22 03:31:13 np0005532048 ceph-mgr[75315]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 22 03:31:13 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'progress'
Nov 22 03:31:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:13.294+0000 7f0e8c37e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 22 03:31:13 np0005532048 podman[75567]: 2025-11-22 08:31:13.302854234 +0000 UTC m=+0.027085461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:31:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:13.581+0000 7f0e8c37e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 22 03:31:13 np0005532048 ceph-mgr[75315]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 22 03:31:13 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'prometheus'
Nov 22 03:31:14 np0005532048 ceph-mgr[75315]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 22 03:31:14 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'rbd_support'
Nov 22 03:31:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:14.784+0000 7f0e8c37e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 22 03:31:15 np0005532048 ceph-mgr[75315]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 22 03:31:15 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'restful'
Nov 22 03:31:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:15.118+0000 7f0e8c37e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 22 03:31:15 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'rgw'
Nov 22 03:31:16 np0005532048 ceph-mgr[75315]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 22 03:31:16 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'rook'
Nov 22 03:31:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:16.738+0000 7f0e8c37e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 22 03:31:19 np0005532048 ceph-mgr[75315]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 22 03:31:19 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'selftest'
Nov 22 03:31:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:19.191+0000 7f0e8c37e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 22 03:31:19 np0005532048 ceph-mgr[75315]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 22 03:31:19 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'snap_schedule'
Nov 22 03:31:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:19.482+0000 7f0e8c37e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 22 03:31:19 np0005532048 ceph-mgr[75315]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 22 03:31:19 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'stats'
Nov 22 03:31:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:19.789+0000 7f0e8c37e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 22 03:31:20 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'status'
Nov 22 03:31:20 np0005532048 ceph-mgr[75315]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 22 03:31:20 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'telegraf'
Nov 22 03:31:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:20.430+0000 7f0e8c37e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 22 03:31:20 np0005532048 ceph-mgr[75315]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 22 03:31:20 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'telemetry'
Nov 22 03:31:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:20.710+0000 7f0e8c37e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 22 03:31:21 np0005532048 podman[75567]: 2025-11-22 08:31:21.21914799 +0000 UTC m=+7.943379207 container create a1f503f6c3e7cb89bbecac4ccf1bcf997230e0f28727de1ea3c7887a9d9dbe8b (image=quay.io/ceph/ceph:v18, name=pedantic_bartik, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:31:21 np0005532048 systemd[1]: Started libpod-conmon-a1f503f6c3e7cb89bbecac4ccf1bcf997230e0f28727de1ea3c7887a9d9dbe8b.scope.
Nov 22 03:31:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:31:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe3fe0d4105aa9e3fd849c83ebebc79a9694dbafdb7069d50ead44ea3333136/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe3fe0d4105aa9e3fd849c83ebebc79a9694dbafdb7069d50ead44ea3333136/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe3fe0d4105aa9e3fd849c83ebebc79a9694dbafdb7069d50ead44ea3333136/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:21 np0005532048 podman[75567]: 2025-11-22 08:31:21.325910546 +0000 UTC m=+8.050141763 container init a1f503f6c3e7cb89bbecac4ccf1bcf997230e0f28727de1ea3c7887a9d9dbe8b (image=quay.io/ceph/ceph:v18, name=pedantic_bartik, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 03:31:21 np0005532048 podman[75567]: 2025-11-22 08:31:21.333675907 +0000 UTC m=+8.057907114 container start a1f503f6c3e7cb89bbecac4ccf1bcf997230e0f28727de1ea3c7887a9d9dbe8b (image=quay.io/ceph/ceph:v18, name=pedantic_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:31:21 np0005532048 podman[75567]: 2025-11-22 08:31:21.350295918 +0000 UTC m=+8.074527165 container attach a1f503f6c3e7cb89bbecac4ccf1bcf997230e0f28727de1ea3c7887a9d9dbe8b (image=quay.io/ceph/ceph:v18, name=pedantic_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 03:31:21 np0005532048 ceph-mgr[75315]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 22 03:31:21 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'test_orchestrator'
Nov 22 03:31:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:21.486+0000 7f0e8c37e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 22 03:31:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 03:31:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3322587680' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]: 
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]: {
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    "fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    "health": {
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "status": "HEALTH_OK",
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "checks": {},
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "mutes": []
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    },
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    "election_epoch": 5,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    "quorum": [
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        0
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    ],
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    "quorum_names": [
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "compute-0"
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    ],
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    "quorum_age": 24,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    "monmap": {
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "epoch": 1,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "min_mon_release_name": "reef",
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "num_mons": 1
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    },
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    "osdmap": {
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "epoch": 1,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "num_osds": 0,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "num_up_osds": 0,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "osd_up_since": 0,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "num_in_osds": 0,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "osd_in_since": 0,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "num_remapped_pgs": 0
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    },
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    "pgmap": {
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "pgs_by_state": [],
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "num_pgs": 0,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "num_pools": 0,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "num_objects": 0,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "data_bytes": 0,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "bytes_used": 0,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "bytes_avail": 0,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "bytes_total": 0
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    },
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    "fsmap": {
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "epoch": 1,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "by_rank": [],
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "up:standby": 0
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    },
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    "mgrmap": {
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "available": false,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "num_standbys": 0,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "modules": [
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:            "iostat",
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:            "nfs",
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:            "restful"
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        ],
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "services": {}
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    },
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    "servicemap": {
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "epoch": 1,
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "modified": "2025-11-22T08:30:54.849005+0000",
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:        "services": {}
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    },
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]:    "progress_events": {}
Nov 22 03:31:21 np0005532048 pedantic_bartik[75583]: }
Nov 22 03:31:21 np0005532048 systemd[1]: libpod-a1f503f6c3e7cb89bbecac4ccf1bcf997230e0f28727de1ea3c7887a9d9dbe8b.scope: Deactivated successfully.
Nov 22 03:31:21 np0005532048 podman[75567]: 2025-11-22 08:31:21.814329038 +0000 UTC m=+8.538560245 container died a1f503f6c3e7cb89bbecac4ccf1bcf997230e0f28727de1ea3c7887a9d9dbe8b (image=quay.io/ceph/ceph:v18, name=pedantic_bartik, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:31:21 np0005532048 systemd[1]: var-lib-containers-storage-overlay-efe3fe0d4105aa9e3fd849c83ebebc79a9694dbafdb7069d50ead44ea3333136-merged.mount: Deactivated successfully.
Nov 22 03:31:21 np0005532048 podman[75567]: 2025-11-22 08:31:21.917555917 +0000 UTC m=+8.641787124 container remove a1f503f6c3e7cb89bbecac4ccf1bcf997230e0f28727de1ea3c7887a9d9dbe8b (image=quay.io/ceph/ceph:v18, name=pedantic_bartik, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:31:21 np0005532048 systemd[1]: libpod-conmon-a1f503f6c3e7cb89bbecac4ccf1bcf997230e0f28727de1ea3c7887a9d9dbe8b.scope: Deactivated successfully.
Nov 22 03:31:22 np0005532048 ceph-mgr[75315]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 22 03:31:22 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'volumes'
Nov 22 03:31:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:22.354+0000 7f0e8c37e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'zabbix'
Nov 22 03:31:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:23.243+0000 7f0e8c37e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 22 03:31:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:23.571+0000 7f0e8c37e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: ms_deliver_dispatch: unhandled message 0x561c6db751e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ldbkey
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr handle_mgr_map Activating!
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr handle_mgr_map I am now activating
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.ldbkey(active, starting, since 0.0164317s)
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4043095675' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).mds e1 all = 1
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4043095675' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4043095675' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4043095675' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ldbkey", "id": "compute-0.ldbkey"} v 0) v1
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4043095675' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ldbkey", "id": "compute-0.ldbkey"}]: dispatch
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: balancer
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [balancer INFO root] Starting
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: crash
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:31:23
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : Manager daemon compute-0.ldbkey is now available
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [balancer INFO root] No pools available
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: devicehealth
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [devicehealth INFO root] Starting
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: iostat
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: nfs
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: orchestrator
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: pg_autoscaler
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: progress
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [progress INFO root] Loading...
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [progress INFO root] No stored events to load
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [progress INFO root] Loaded [] historic events
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [progress INFO root] Loaded OSDMap, ready.
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] recovery thread starting
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] starting setup
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: rbd_support
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: restful
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [restful INFO root] server_addr: :: server_port: 8003
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ldbkey/mirror_snapshot_schedule"} v 0) v1
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4043095675' entity='mgr.compute-0.ldbkey' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ldbkey/mirror_snapshot_schedule"}]: dispatch
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [restful WARNING root] server not running: no certificate configured
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: status
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: telemetry
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] PerfHandler: starting
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TaskHandler: starting
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ldbkey/trash_purge_schedule"} v 0) v1
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4043095675' entity='mgr.compute-0.ldbkey' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ldbkey/trash_purge_schedule"}]: dispatch
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4043095675' entity='mgr.compute-0.ldbkey' 
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] setup complete
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4043095675' entity='mgr.compute-0.ldbkey' 
Nov 22 03:31:23 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: volumes
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: Activating manager daemon compute-0.ldbkey
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: Manager daemon compute-0.ldbkey is now available
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: from='mgr.14102 192.168.122.100:0/4043095675' entity='mgr.compute-0.ldbkey' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ldbkey/mirror_snapshot_schedule"}]: dispatch
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: from='mgr.14102 192.168.122.100:0/4043095675' entity='mgr.compute-0.ldbkey' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ldbkey/trash_purge_schedule"}]: dispatch
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: from='mgr.14102 192.168.122.100:0/4043095675' entity='mgr.compute-0.ldbkey' 
Nov 22 03:31:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4043095675' entity='mgr.compute-0.ldbkey' 
Nov 22 03:31:23 np0005532048 podman[75699]: 2025-11-22 08:31:23.982641154 +0000 UTC m=+0.043347671 container create 2f23e6888429707e77da2d9c974960a51f4c94928b40233a48556722f8f66c3f (image=quay.io/ceph/ceph:v18, name=competent_varahamihira, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:31:24 np0005532048 systemd[1]: Started libpod-conmon-2f23e6888429707e77da2d9c974960a51f4c94928b40233a48556722f8f66c3f.scope.
Nov 22 03:31:24 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:31:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a96e20280c57b6daced999a0099aacd3bac23db6052614ff14bd22be6e9c4a98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a96e20280c57b6daced999a0099aacd3bac23db6052614ff14bd22be6e9c4a98/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a96e20280c57b6daced999a0099aacd3bac23db6052614ff14bd22be6e9c4a98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:24 np0005532048 podman[75699]: 2025-11-22 08:31:23.96098302 +0000 UTC m=+0.021689557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:31:24 np0005532048 podman[75699]: 2025-11-22 08:31:24.096873956 +0000 UTC m=+0.157580473 container init 2f23e6888429707e77da2d9c974960a51f4c94928b40233a48556722f8f66c3f (image=quay.io/ceph/ceph:v18, name=competent_varahamihira, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:31:24 np0005532048 podman[75699]: 2025-11-22 08:31:24.106605736 +0000 UTC m=+0.167312253 container start 2f23e6888429707e77da2d9c974960a51f4c94928b40233a48556722f8f66c3f (image=quay.io/ceph/ceph:v18, name=competent_varahamihira, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:31:24 np0005532048 podman[75699]: 2025-11-22 08:31:24.118484509 +0000 UTC m=+0.179191026 container attach 2f23e6888429707e77da2d9c974960a51f4c94928b40233a48556722f8f66c3f (image=quay.io/ceph/ceph:v18, name=competent_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:31:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 03:31:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/663538349' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]: 
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]: {
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    "fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    "health": {
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "status": "HEALTH_OK",
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "checks": {},
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "mutes": []
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    },
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    "election_epoch": 5,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    "quorum": [
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        0
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    ],
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    "quorum_names": [
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "compute-0"
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    ],
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    "quorum_age": 26,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    "monmap": {
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "epoch": 1,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "min_mon_release_name": "reef",
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "num_mons": 1
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    },
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    "osdmap": {
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "epoch": 1,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "num_osds": 0,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "num_up_osds": 0,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "osd_up_since": 0,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "num_in_osds": 0,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "osd_in_since": 0,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "num_remapped_pgs": 0
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    },
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    "pgmap": {
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "pgs_by_state": [],
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "num_pgs": 0,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "num_pools": 0,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "num_objects": 0,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "data_bytes": 0,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "bytes_used": 0,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "bytes_avail": 0,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "bytes_total": 0
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    },
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    "fsmap": {
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "epoch": 1,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "by_rank": [],
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "up:standby": 0
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    },
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    "mgrmap": {
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "available": false,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "num_standbys": 0,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "modules": [
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:            "iostat",
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:            "nfs",
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:            "restful"
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        ],
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "services": {}
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    },
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    "servicemap": {
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "epoch": 1,
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "modified": "2025-11-22T08:30:54.849005+0000",
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:        "services": {}
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    },
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]:    "progress_events": {}
Nov 22 03:31:24 np0005532048 competent_varahamihira[75715]: }
Nov 22 03:31:24 np0005532048 systemd[1]: libpod-2f23e6888429707e77da2d9c974960a51f4c94928b40233a48556722f8f66c3f.scope: Deactivated successfully.
Nov 22 03:31:24 np0005532048 podman[75742]: 2025-11-22 08:31:24.571487476 +0000 UTC m=+0.024078895 container died 2f23e6888429707e77da2d9c974960a51f4c94928b40233a48556722f8f66c3f (image=quay.io/ceph/ceph:v18, name=competent_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:31:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.ldbkey(active, since 1.16173s)
Nov 22 03:31:24 np0005532048 ceph-mon[75021]: from='mgr.14102 192.168.122.100:0/4043095675' entity='mgr.compute-0.ldbkey' 
Nov 22 03:31:24 np0005532048 ceph-mon[75021]: from='mgr.14102 192.168.122.100:0/4043095675' entity='mgr.compute-0.ldbkey' 
Nov 22 03:31:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a96e20280c57b6daced999a0099aacd3bac23db6052614ff14bd22be6e9c4a98-merged.mount: Deactivated successfully.
Nov 22 03:31:25 np0005532048 podman[75742]: 2025-11-22 08:31:25.118481845 +0000 UTC m=+0.571073244 container remove 2f23e6888429707e77da2d9c974960a51f4c94928b40233a48556722f8f66c3f (image=quay.io/ceph/ceph:v18, name=competent_varahamihira, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:31:25 np0005532048 systemd[1]: libpod-conmon-2f23e6888429707e77da2d9c974960a51f4c94928b40233a48556722f8f66c3f.scope: Deactivated successfully.
Nov 22 03:31:25 np0005532048 ceph-mgr[75315]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:31:26 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.ldbkey(active, since 2s)
Nov 22 03:31:27 np0005532048 podman[75757]: 2025-11-22 08:31:27.199041516 +0000 UTC m=+0.050333074 container create 6eaf43c8800dcb3fc62de9a70254605b99ce78d656fb079e0e4dffa116f0acab (image=quay.io/ceph/ceph:v18, name=xenodochial_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:31:27 np0005532048 systemd[1]: Started libpod-conmon-6eaf43c8800dcb3fc62de9a70254605b99ce78d656fb079e0e4dffa116f0acab.scope.
Nov 22 03:31:27 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:31:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec985ef201bd59d6f5d88393fa183f4c900dd25cb1b6b8339008c36a793c66a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec985ef201bd59d6f5d88393fa183f4c900dd25cb1b6b8339008c36a793c66a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec985ef201bd59d6f5d88393fa183f4c900dd25cb1b6b8339008c36a793c66a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:27 np0005532048 podman[75757]: 2025-11-22 08:31:27.172673774 +0000 UTC m=+0.023965352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:31:27 np0005532048 podman[75757]: 2025-11-22 08:31:27.275414091 +0000 UTC m=+0.126705669 container init 6eaf43c8800dcb3fc62de9a70254605b99ce78d656fb079e0e4dffa116f0acab (image=quay.io/ceph/ceph:v18, name=xenodochial_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 03:31:27 np0005532048 podman[75757]: 2025-11-22 08:31:27.281722787 +0000 UTC m=+0.133014335 container start 6eaf43c8800dcb3fc62de9a70254605b99ce78d656fb079e0e4dffa116f0acab (image=quay.io/ceph/ceph:v18, name=xenodochial_mclaren, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:31:27 np0005532048 podman[75757]: 2025-11-22 08:31:27.292027201 +0000 UTC m=+0.143318759 container attach 6eaf43c8800dcb3fc62de9a70254605b99ce78d656fb079e0e4dffa116f0acab (image=quay.io/ceph/ceph:v18, name=xenodochial_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:31:27 np0005532048 ceph-mgr[75315]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:31:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 03:31:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/120548292' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]: 
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]: {
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    "fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    "health": {
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "status": "HEALTH_OK",
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "checks": {},
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "mutes": []
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    },
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    "election_epoch": 5,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    "quorum": [
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        0
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    ],
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    "quorum_names": [
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "compute-0"
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    ],
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    "quorum_age": 30,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    "monmap": {
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "epoch": 1,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "min_mon_release_name": "reef",
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "num_mons": 1
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    },
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    "osdmap": {
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "epoch": 1,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "num_osds": 0,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "num_up_osds": 0,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "osd_up_since": 0,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "num_in_osds": 0,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "osd_in_since": 0,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "num_remapped_pgs": 0
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    },
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    "pgmap": {
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "pgs_by_state": [],
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "num_pgs": 0,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "num_pools": 0,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "num_objects": 0,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "data_bytes": 0,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "bytes_used": 0,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "bytes_avail": 0,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "bytes_total": 0
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    },
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    "fsmap": {
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "epoch": 1,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "by_rank": [],
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "up:standby": 0
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    },
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    "mgrmap": {
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "available": true,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "num_standbys": 0,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "modules": [
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:            "iostat",
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:            "nfs",
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:            "restful"
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        ],
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "services": {}
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    },
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    "servicemap": {
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "epoch": 1,
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "modified": "2025-11-22T08:30:54.849005+0000",
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:        "services": {}
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    },
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]:    "progress_events": {}
Nov 22 03:31:27 np0005532048 xenodochial_mclaren[75773]: }
Nov 22 03:31:28 np0005532048 systemd[1]: libpod-6eaf43c8800dcb3fc62de9a70254605b99ce78d656fb079e0e4dffa116f0acab.scope: Deactivated successfully.
Nov 22 03:31:28 np0005532048 podman[75757]: 2025-11-22 08:31:28.008791772 +0000 UTC m=+0.860083330 container died 6eaf43c8800dcb3fc62de9a70254605b99ce78d656fb079e0e4dffa116f0acab (image=quay.io/ceph/ceph:v18, name=xenodochial_mclaren, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 03:31:28 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7ec985ef201bd59d6f5d88393fa183f4c900dd25cb1b6b8339008c36a793c66a-merged.mount: Deactivated successfully.
Nov 22 03:31:28 np0005532048 podman[75757]: 2025-11-22 08:31:28.177473768 +0000 UTC m=+1.028765326 container remove 6eaf43c8800dcb3fc62de9a70254605b99ce78d656fb079e0e4dffa116f0acab (image=quay.io/ceph/ceph:v18, name=xenodochial_mclaren, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 22 03:31:28 np0005532048 systemd[1]: libpod-conmon-6eaf43c8800dcb3fc62de9a70254605b99ce78d656fb079e0e4dffa116f0acab.scope: Deactivated successfully.
Nov 22 03:31:28 np0005532048 podman[75812]: 2025-11-22 08:31:28.219378532 +0000 UTC m=+0.023154662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:31:28 np0005532048 podman[75812]: 2025-11-22 08:31:28.524718423 +0000 UTC m=+0.328494533 container create bb02ffa5e9b4ee748f514a875e34ecc729afd2c8d336b403e460d0161040340c (image=quay.io/ceph/ceph:v18, name=dreamy_hamilton, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:31:28 np0005532048 systemd[1]: Started libpod-conmon-bb02ffa5e9b4ee748f514a875e34ecc729afd2c8d336b403e460d0161040340c.scope.
Nov 22 03:31:28 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:31:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06589edb77d5e5c6114b066f115657640c49e7eac6414508226a0a4a5eb4252e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06589edb77d5e5c6114b066f115657640c49e7eac6414508226a0a4a5eb4252e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06589edb77d5e5c6114b066f115657640c49e7eac6414508226a0a4a5eb4252e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06589edb77d5e5c6114b066f115657640c49e7eac6414508226a0a4a5eb4252e/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:28 np0005532048 podman[75812]: 2025-11-22 08:31:28.624907367 +0000 UTC m=+0.428683498 container init bb02ffa5e9b4ee748f514a875e34ecc729afd2c8d336b403e460d0161040340c (image=quay.io/ceph/ceph:v18, name=dreamy_hamilton, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:31:28 np0005532048 podman[75812]: 2025-11-22 08:31:28.629977783 +0000 UTC m=+0.433753893 container start bb02ffa5e9b4ee748f514a875e34ecc729afd2c8d336b403e460d0161040340c (image=quay.io/ceph/ceph:v18, name=dreamy_hamilton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:31:28 np0005532048 podman[75812]: 2025-11-22 08:31:28.659270017 +0000 UTC m=+0.463046167 container attach bb02ffa5e9b4ee748f514a875e34ecc729afd2c8d336b403e460d0161040340c (image=quay.io/ceph/ceph:v18, name=dreamy_hamilton, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:31:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 22 03:31:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1633436242' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 03:31:29 np0005532048 systemd[1]: libpod-bb02ffa5e9b4ee748f514a875e34ecc729afd2c8d336b403e460d0161040340c.scope: Deactivated successfully.
Nov 22 03:31:29 np0005532048 podman[75855]: 2025-11-22 08:31:29.255193612 +0000 UTC m=+0.023854230 container died bb02ffa5e9b4ee748f514a875e34ecc729afd2c8d336b403e460d0161040340c (image=quay.io/ceph/ceph:v18, name=dreamy_hamilton, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:31:29 np0005532048 systemd[1]: var-lib-containers-storage-overlay-06589edb77d5e5c6114b066f115657640c49e7eac6414508226a0a4a5eb4252e-merged.mount: Deactivated successfully.
Nov 22 03:31:29 np0005532048 podman[75855]: 2025-11-22 08:31:29.344857637 +0000 UTC m=+0.113518245 container remove bb02ffa5e9b4ee748f514a875e34ecc729afd2c8d336b403e460d0161040340c (image=quay.io/ceph/ceph:v18, name=dreamy_hamilton, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 03:31:29 np0005532048 systemd[1]: libpod-conmon-bb02ffa5e9b4ee748f514a875e34ecc729afd2c8d336b403e460d0161040340c.scope: Deactivated successfully.
Nov 22 03:31:29 np0005532048 podman[75870]: 2025-11-22 08:31:29.424160206 +0000 UTC m=+0.049142475 container create 07c4d4a1f7777712b35ca080b6ebfaf56cdd98000e0ce9f465a18964ba3fe436 (image=quay.io/ceph/ceph:v18, name=objective_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:31:29 np0005532048 systemd[1]: Started libpod-conmon-07c4d4a1f7777712b35ca080b6ebfaf56cdd98000e0ce9f465a18964ba3fe436.scope.
Nov 22 03:31:29 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:31:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce8e90dd0b3ab9f3dece6d2e6d2c8cf082210a3b2f7ad8d4be2751e8cf48fe0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce8e90dd0b3ab9f3dece6d2e6d2c8cf082210a3b2f7ad8d4be2751e8cf48fe0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce8e90dd0b3ab9f3dece6d2e6d2c8cf082210a3b2f7ad8d4be2751e8cf48fe0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:29 np0005532048 podman[75870]: 2025-11-22 08:31:29.398624935 +0000 UTC m=+0.023607234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:31:29 np0005532048 podman[75870]: 2025-11-22 08:31:29.497917446 +0000 UTC m=+0.122899735 container init 07c4d4a1f7777712b35ca080b6ebfaf56cdd98000e0ce9f465a18964ba3fe436 (image=quay.io/ceph/ceph:v18, name=objective_ride, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:31:29 np0005532048 podman[75870]: 2025-11-22 08:31:29.503224237 +0000 UTC m=+0.128206506 container start 07c4d4a1f7777712b35ca080b6ebfaf56cdd98000e0ce9f465a18964ba3fe436 (image=quay.io/ceph/ceph:v18, name=objective_ride, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 03:31:29 np0005532048 podman[75870]: 2025-11-22 08:31:29.523257333 +0000 UTC m=+0.148239602 container attach 07c4d4a1f7777712b35ca080b6ebfaf56cdd98000e0ce9f465a18964ba3fe436 (image=quay.io/ceph/ceph:v18, name=objective_ride, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:31:29 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1633436242' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 03:31:29 np0005532048 ceph-mgr[75315]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:31:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 22 03:31:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1755269788' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 22 03:31:30 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1755269788' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 22 03:31:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1755269788' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: mgr respawn  1: '-n'
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: mgr respawn  2: 'mgr.compute-0.ldbkey'
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: mgr respawn  3: '-f'
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: mgr respawn  4: '--setuser'
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: mgr respawn  5: 'ceph'
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: mgr respawn  6: '--setgroup'
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: mgr respawn  7: 'ceph'
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: mgr respawn  8: '--default-log-to-file=false'
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: mgr respawn  9: '--default-log-to-journald=true'
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: mgr respawn  exe_path /proc/self/exe
Nov 22 03:31:30 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.ldbkey(active, since 6s)
Nov 22 03:31:30 np0005532048 systemd[1]: libpod-07c4d4a1f7777712b35ca080b6ebfaf56cdd98000e0ce9f465a18964ba3fe436.scope: Deactivated successfully.
Nov 22 03:31:30 np0005532048 podman[75870]: 2025-11-22 08:31:30.578689566 +0000 UTC m=+1.203671835 container died 07c4d4a1f7777712b35ca080b6ebfaf56cdd98000e0ce9f465a18964ba3fe436 (image=quay.io/ceph/ceph:v18, name=objective_ride, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:31:30 np0005532048 systemd[1]: var-lib-containers-storage-overlay-bce8e90dd0b3ab9f3dece6d2e6d2c8cf082210a3b2f7ad8d4be2751e8cf48fe0-merged.mount: Deactivated successfully.
Nov 22 03:31:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: ignoring --setuser ceph since I am not root
Nov 22 03:31:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: ignoring --setgroup ceph since I am not root
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: pidfile_write: ignore empty --pid-file
Nov 22 03:31:30 np0005532048 podman[75870]: 2025-11-22 08:31:30.662543157 +0000 UTC m=+1.287525426 container remove 07c4d4a1f7777712b35ca080b6ebfaf56cdd98000e0ce9f465a18964ba3fe436 (image=quay.io/ceph/ceph:v18, name=objective_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:31:30 np0005532048 systemd[1]: libpod-conmon-07c4d4a1f7777712b35ca080b6ebfaf56cdd98000e0ce9f465a18964ba3fe436.scope: Deactivated successfully.
Nov 22 03:31:30 np0005532048 podman[75948]: 2025-11-22 08:31:30.726996049 +0000 UTC m=+0.047501544 container create d51fe8219ed6d083a9ac8e991570b81be4c0f8bcde58e9283d74638d5ed4ec03 (image=quay.io/ceph/ceph:v18, name=condescending_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:31:30 np0005532048 systemd[1]: Started libpod-conmon-d51fe8219ed6d083a9ac8e991570b81be4c0f8bcde58e9283d74638d5ed4ec03.scope.
Nov 22 03:31:30 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'alerts'
Nov 22 03:31:30 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:31:30 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415d748b7bfa506a64ab2275362aad3742169158f39471dc0e89bd02e638850b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:30 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415d748b7bfa506a64ab2275362aad3742169158f39471dc0e89bd02e638850b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:30 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415d748b7bfa506a64ab2275362aad3742169158f39471dc0e89bd02e638850b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:30 np0005532048 podman[75948]: 2025-11-22 08:31:30.702510825 +0000 UTC m=+0.023016330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:31:30 np0005532048 podman[75948]: 2025-11-22 08:31:30.816576941 +0000 UTC m=+0.137082436 container init d51fe8219ed6d083a9ac8e991570b81be4c0f8bcde58e9283d74638d5ed4ec03 (image=quay.io/ceph/ceph:v18, name=condescending_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 03:31:30 np0005532048 podman[75948]: 2025-11-22 08:31:30.823076452 +0000 UTC m=+0.143581927 container start d51fe8219ed6d083a9ac8e991570b81be4c0f8bcde58e9283d74638d5ed4ec03 (image=quay.io/ceph/ceph:v18, name=condescending_thompson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:31:30 np0005532048 podman[75948]: 2025-11-22 08:31:30.831291325 +0000 UTC m=+0.151796800 container attach d51fe8219ed6d083a9ac8e991570b81be4c0f8bcde58e9283d74638d5ed4ec03 (image=quay.io/ceph/ceph:v18, name=condescending_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:31:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:31.110+0000 7f9ec1ea9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 03:31:31 np0005532048 ceph-mgr[75315]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 03:31:31 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'balancer'
Nov 22 03:31:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:31.422+0000 7f9ec1ea9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 03:31:31 np0005532048 ceph-mgr[75315]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 03:31:31 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'cephadm'
Nov 22 03:31:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 22 03:31:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/874284238' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 22 03:31:31 np0005532048 condescending_thompson[75965]: {
Nov 22 03:31:31 np0005532048 condescending_thompson[75965]:    "epoch": 5,
Nov 22 03:31:31 np0005532048 condescending_thompson[75965]:    "available": true,
Nov 22 03:31:31 np0005532048 condescending_thompson[75965]:    "active_name": "compute-0.ldbkey",
Nov 22 03:31:31 np0005532048 condescending_thompson[75965]:    "num_standby": 0
Nov 22 03:31:31 np0005532048 condescending_thompson[75965]: }
Nov 22 03:31:31 np0005532048 systemd[1]: libpod-d51fe8219ed6d083a9ac8e991570b81be4c0f8bcde58e9283d74638d5ed4ec03.scope: Deactivated successfully.
Nov 22 03:31:31 np0005532048 podman[75948]: 2025-11-22 08:31:31.488883254 +0000 UTC m=+0.809388729 container died d51fe8219ed6d083a9ac8e991570b81be4c0f8bcde58e9283d74638d5ed4ec03 (image=quay.io/ceph/ceph:v18, name=condescending_thompson, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:31:31 np0005532048 systemd[1]: var-lib-containers-storage-overlay-415d748b7bfa506a64ab2275362aad3742169158f39471dc0e89bd02e638850b-merged.mount: Deactivated successfully.
Nov 22 03:31:31 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1755269788' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 22 03:31:31 np0005532048 podman[75948]: 2025-11-22 08:31:31.585940671 +0000 UTC m=+0.906446146 container remove d51fe8219ed6d083a9ac8e991570b81be4c0f8bcde58e9283d74638d5ed4ec03 (image=quay.io/ceph/ceph:v18, name=condescending_thompson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:31:31 np0005532048 systemd[1]: libpod-conmon-d51fe8219ed6d083a9ac8e991570b81be4c0f8bcde58e9283d74638d5ed4ec03.scope: Deactivated successfully.
Nov 22 03:31:31 np0005532048 podman[76004]: 2025-11-22 08:31:31.644340434 +0000 UTC m=+0.041183069 container create deb2d3649ff66e78cb41e5f834e392d40d1762d61e40a4aa5c00470828c1a0a2 (image=quay.io/ceph/ceph:v18, name=zen_austin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 03:31:31 np0005532048 systemd[1]: Started libpod-conmon-deb2d3649ff66e78cb41e5f834e392d40d1762d61e40a4aa5c00470828c1a0a2.scope.
Nov 22 03:31:31 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:31:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6a21ff65e4b01f6fb3fb01a577cae3436f1b8f4edc2a7763acd838e7c33460/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6a21ff65e4b01f6fb3fb01a577cae3436f1b8f4edc2a7763acd838e7c33460/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6a21ff65e4b01f6fb3fb01a577cae3436f1b8f4edc2a7763acd838e7c33460/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:31 np0005532048 podman[76004]: 2025-11-22 08:31:31.623459158 +0000 UTC m=+0.020301803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:31:31 np0005532048 podman[76004]: 2025-11-22 08:31:31.727405685 +0000 UTC m=+0.124248350 container init deb2d3649ff66e78cb41e5f834e392d40d1762d61e40a4aa5c00470828c1a0a2 (image=quay.io/ceph/ceph:v18, name=zen_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:31:31 np0005532048 podman[76004]: 2025-11-22 08:31:31.734295505 +0000 UTC m=+0.131138140 container start deb2d3649ff66e78cb41e5f834e392d40d1762d61e40a4aa5c00470828c1a0a2 (image=quay.io/ceph/ceph:v18, name=zen_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:31:31 np0005532048 podman[76004]: 2025-11-22 08:31:31.739557565 +0000 UTC m=+0.136400230 container attach deb2d3649ff66e78cb41e5f834e392d40d1762d61e40a4aa5c00470828c1a0a2 (image=quay.io/ceph/ceph:v18, name=zen_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:31:33 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'crash'
Nov 22 03:31:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:33.836+0000 7f9ec1ea9140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 03:31:33 np0005532048 ceph-mgr[75315]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 03:31:33 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'dashboard'
Nov 22 03:31:35 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'devicehealth'
Nov 22 03:31:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:35.747+0000 7f9ec1ea9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 03:31:35 np0005532048 ceph-mgr[75315]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 03:31:35 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'diskprediction_local'
Nov 22 03:31:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 22 03:31:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 22 03:31:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]:  from numpy import show_config as show_numpy_config
Nov 22 03:31:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:36.322+0000 7f9ec1ea9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 03:31:36 np0005532048 ceph-mgr[75315]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 03:31:36 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'influx'
Nov 22 03:31:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:36.580+0000 7f9ec1ea9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 03:31:36 np0005532048 ceph-mgr[75315]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 03:31:36 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'insights'
Nov 22 03:31:36 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'iostat'
Nov 22 03:31:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:37.094+0000 7f9ec1ea9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 03:31:37 np0005532048 ceph-mgr[75315]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 03:31:37 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'k8sevents'
Nov 22 03:31:38 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'localpool'
Nov 22 03:31:39 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'mds_autoscaler'
Nov 22 03:31:39 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'mirroring'
Nov 22 03:31:40 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'nfs'
Nov 22 03:31:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:40.873+0000 7f9ec1ea9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 22 03:31:40 np0005532048 ceph-mgr[75315]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 22 03:31:40 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'orchestrator'
Nov 22 03:31:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:41.583+0000 7f9ec1ea9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 22 03:31:41 np0005532048 ceph-mgr[75315]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 22 03:31:41 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'osd_perf_query'
Nov 22 03:31:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:41.838+0000 7f9ec1ea9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 22 03:31:41 np0005532048 ceph-mgr[75315]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 22 03:31:41 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'osd_support'
Nov 22 03:31:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:42.072+0000 7f9ec1ea9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 22 03:31:42 np0005532048 ceph-mgr[75315]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 22 03:31:42 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'pg_autoscaler'
Nov 22 03:31:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:42.337+0000 7f9ec1ea9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 22 03:31:42 np0005532048 ceph-mgr[75315]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 22 03:31:42 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'progress'
Nov 22 03:31:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:42.583+0000 7f9ec1ea9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 22 03:31:42 np0005532048 ceph-mgr[75315]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 22 03:31:42 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'prometheus'
Nov 22 03:31:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:43.607+0000 7f9ec1ea9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 22 03:31:43 np0005532048 ceph-mgr[75315]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 22 03:31:43 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'rbd_support'
Nov 22 03:31:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:43.963+0000 7f9ec1ea9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 22 03:31:43 np0005532048 ceph-mgr[75315]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 22 03:31:43 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'restful'
Nov 22 03:31:44 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'rgw'
Nov 22 03:31:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:45.455+0000 7f9ec1ea9140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 22 03:31:45 np0005532048 ceph-mgr[75315]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 22 03:31:45 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'rook'
Nov 22 03:31:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:47.623+0000 7f9ec1ea9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 22 03:31:47 np0005532048 ceph-mgr[75315]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 22 03:31:47 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'selftest'
Nov 22 03:31:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:47.885+0000 7f9ec1ea9140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 22 03:31:47 np0005532048 ceph-mgr[75315]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 22 03:31:47 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'snap_schedule'
Nov 22 03:31:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:48.206+0000 7f9ec1ea9140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 22 03:31:48 np0005532048 ceph-mgr[75315]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 22 03:31:48 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'stats'
Nov 22 03:31:48 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'status'
Nov 22 03:31:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:48.771+0000 7f9ec1ea9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 22 03:31:48 np0005532048 ceph-mgr[75315]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 22 03:31:48 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'telegraf'
Nov 22 03:31:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:49.058+0000 7f9ec1ea9140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 22 03:31:49 np0005532048 ceph-mgr[75315]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 22 03:31:49 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'telemetry'
Nov 22 03:31:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:49.737+0000 7f9ec1ea9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 22 03:31:49 np0005532048 ceph-mgr[75315]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 22 03:31:49 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'test_orchestrator'
Nov 22 03:31:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:50.473+0000 7f9ec1ea9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 22 03:31:50 np0005532048 ceph-mgr[75315]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 22 03:31:50 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'volumes'
Nov 22 03:31:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:51.320+0000 7f9ec1ea9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 22 03:31:51 np0005532048 ceph-mgr[75315]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 22 03:31:51 np0005532048 ceph-mgr[75315]: mgr[py] Loading python module 'zabbix'
Nov 22 03:31:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T08:31:51.574+0000 7f9ec1ea9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 22 03:31:51 np0005532048 ceph-mgr[75315]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 22 03:31:51 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ldbkey restarted
Nov 22 03:31:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 22 03:31:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:31:51 np0005532048 ceph-mgr[75315]: ms_deliver_dispatch: unhandled message 0x56310dc251e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 22 03:31:51 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ldbkey
Nov 22 03:31:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 22 03:31:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 22 03:31:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 22 03:31:51 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 22 03:31:51 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.ldbkey(active, starting, since 0.403131s)
Nov 22 03:31:51 np0005532048 ceph-mgr[75315]: mgr handle_mgr_map Activating!
Nov 22 03:31:51 np0005532048 ceph-mgr[75315]: mgr handle_mgr_map I am now activating
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ldbkey", "id": "compute-0.ldbkey"} v 0) v1
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ldbkey", "id": "compute-0.ldbkey"}]: dispatch
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).mds e1 all = 1
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: balancer
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : Manager daemon compute-0.ldbkey is now available
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Starting
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:31:52
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] No pools available
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: Active manager daemon compute-0.ldbkey restarted
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: Activating manager daemon compute-0.ldbkey
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: cephadm
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: crash
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: devicehealth
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [devicehealth INFO root] Starting
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: iostat
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: nfs
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: orchestrator
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: pg_autoscaler
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: progress
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [progress INFO root] Loading...
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [progress INFO root] No stored events to load
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [progress INFO root] Loaded [] historic events
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [progress INFO root] Loaded OSDMap, ready.
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] recovery thread starting
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] starting setup
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: rbd_support
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: restful
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [restful INFO root] server_addr: :: server_port: 8003
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [restful WARNING root] server not running: no certificate configured
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ldbkey/mirror_snapshot_schedule"} v 0) v1
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ldbkey/mirror_snapshot_schedule"}]: dispatch
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: status
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] PerfHandler: starting
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: telemetry
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TaskHandler: starting
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ldbkey/trash_purge_schedule"} v 0) v1
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ldbkey/trash_purge_schedule"}]: dispatch
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] setup complete
Nov 22 03:31:52 np0005532048 ceph-mgr[75315]: mgr load Constructed class from module: volumes
Nov 22 03:31:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019925987 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:31:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 22 03:31:53 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.ldbkey(active, since 1.74305s)
Nov 22 03:31:53 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14132 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 22 03:31:53 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14132 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 22 03:31:53 np0005532048 zen_austin[76020]: {
Nov 22 03:31:53 np0005532048 zen_austin[76020]:    "mgrmap_epoch": 7,
Nov 22 03:31:53 np0005532048 zen_austin[76020]:    "initialized": true
Nov 22 03:31:53 np0005532048 zen_austin[76020]: }
Nov 22 03:31:53 np0005532048 systemd[1]: libpod-deb2d3649ff66e78cb41e5f834e392d40d1762d61e40a4aa5c00470828c1a0a2.scope: Deactivated successfully.
Nov 22 03:31:53 np0005532048 podman[76004]: 2025-11-22 08:31:53.353774997 +0000 UTC m=+21.750617632 container died deb2d3649ff66e78cb41e5f834e392d40d1762d61e40a4aa5c00470828c1a0a2 (image=quay.io/ceph/ceph:v18, name=zen_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:31:53 np0005532048 ceph-mon[75021]: Manager daemon compute-0.ldbkey is now available
Nov 22 03:31:53 np0005532048 ceph-mon[75021]: Found migration_current of "None". Setting to last migration.
Nov 22 03:31:53 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:31:53 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:31:53 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ldbkey/mirror_snapshot_schedule"}]: dispatch
Nov 22 03:31:53 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ldbkey/trash_purge_schedule"}]: dispatch
Nov 22 03:31:54 np0005532048 ceph-mgr[75315]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:31:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:31:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 22 03:31:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:31:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7d6a21ff65e4b01f6fb3fb01a577cae3436f1b8f4edc2a7763acd838e7c33460-merged.mount: Deactivated successfully.
Nov 22 03:31:55 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.ldbkey(active, since 3s)
Nov 22 03:31:55 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:31:55 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:31:55 np0005532048 ceph-mgr[75315]: [cephadm INFO cherrypy.error] [22/Nov/2025:08:31:55] ENGINE Bus STARTING
Nov 22 03:31:55 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : [22/Nov/2025:08:31:55] ENGINE Bus STARTING
Nov 22 03:31:55 np0005532048 ceph-mgr[75315]: [cephadm INFO cherrypy.error] [22/Nov/2025:08:31:55] ENGINE Serving on http://192.168.122.100:8765
Nov 22 03:31:55 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : [22/Nov/2025:08:31:55] ENGINE Serving on http://192.168.122.100:8765
Nov 22 03:31:55 np0005532048 ceph-mgr[75315]: [cephadm INFO cherrypy.error] [22/Nov/2025:08:31:55] ENGINE Serving on https://192.168.122.100:7150
Nov 22 03:31:55 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : [22/Nov/2025:08:31:55] ENGINE Serving on https://192.168.122.100:7150
Nov 22 03:31:55 np0005532048 ceph-mgr[75315]: [cephadm INFO cherrypy.error] [22/Nov/2025:08:31:55] ENGINE Bus STARTED
Nov 22 03:31:55 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : [22/Nov/2025:08:31:55] ENGINE Bus STARTED
Nov 22 03:31:55 np0005532048 ceph-mgr[75315]: [cephadm INFO cherrypy.error] [22/Nov/2025:08:31:55] ENGINE Client ('192.168.122.100', 44404) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 22 03:31:55 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : [22/Nov/2025:08:31:55] ENGINE Client ('192.168.122.100', 44404) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 22 03:31:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 03:31:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:31:56 np0005532048 podman[76004]: 2025-11-22 08:31:56.011185393 +0000 UTC m=+24.408028058 container remove deb2d3649ff66e78cb41e5f834e392d40d1762d61e40a4aa5c00470828c1a0a2 (image=quay.io/ceph/ceph:v18, name=zen_austin, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:31:56 np0005532048 ceph-mgr[75315]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:31:56 np0005532048 systemd[1]: libpod-conmon-deb2d3649ff66e78cb41e5f834e392d40d1762d61e40a4aa5c00470828c1a0a2.scope: Deactivated successfully.
Nov 22 03:31:56 np0005532048 podman[76202]: 2025-11-22 08:31:56.065694759 +0000 UTC m=+0.025066580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:31:56 np0005532048 podman[76202]: 2025-11-22 08:31:56.229394301 +0000 UTC m=+0.188766092 container create 3982003b80c304a7ec3eba5df29e34fbefe520edd021c72753fd79687288103d (image=quay.io/ceph/ceph:v18, name=pedantic_newton, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 03:31:56 np0005532048 ceph-mon[75021]: [22/Nov/2025:08:31:55] ENGINE Bus STARTING
Nov 22 03:31:56 np0005532048 ceph-mon[75021]: [22/Nov/2025:08:31:55] ENGINE Serving on http://192.168.122.100:8765
Nov 22 03:31:56 np0005532048 systemd[1]: Started libpod-conmon-3982003b80c304a7ec3eba5df29e34fbefe520edd021c72753fd79687288103d.scope.
Nov 22 03:31:56 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:31:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67c5873d5f608b37ab031e9f17e9744dd12d0cd761b3e15d75e4a546a37ef951/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67c5873d5f608b37ab031e9f17e9744dd12d0cd761b3e15d75e4a546a37ef951/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67c5873d5f608b37ab031e9f17e9744dd12d0cd761b3e15d75e4a546a37ef951/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:56 np0005532048 podman[76202]: 2025-11-22 08:31:56.872072342 +0000 UTC m=+0.831444223 container init 3982003b80c304a7ec3eba5df29e34fbefe520edd021c72753fd79687288103d (image=quay.io/ceph/ceph:v18, name=pedantic_newton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:31:56 np0005532048 podman[76202]: 2025-11-22 08:31:56.877685252 +0000 UTC m=+0.837057043 container start 3982003b80c304a7ec3eba5df29e34fbefe520edd021c72753fd79687288103d (image=quay.io/ceph/ceph:v18, name=pedantic_newton, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:31:57 np0005532048 podman[76202]: 2025-11-22 08:31:57.153574585 +0000 UTC m=+1.112946606 container attach 3982003b80c304a7ec3eba5df29e34fbefe520edd021c72753fd79687288103d (image=quay.io/ceph/ceph:v18, name=pedantic_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:31:57 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:31:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 22 03:31:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053105 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:31:57 np0005532048 ceph-mon[75021]: [22/Nov/2025:08:31:55] ENGINE Serving on https://192.168.122.100:7150
Nov 22 03:31:57 np0005532048 ceph-mon[75021]: [22/Nov/2025:08:31:55] ENGINE Bus STARTED
Nov 22 03:31:57 np0005532048 ceph-mon[75021]: [22/Nov/2025:08:31:55] ENGINE Client ('192.168.122.100', 44404) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 22 03:31:58 np0005532048 ceph-mgr[75315]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:31:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:31:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 03:31:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:31:58 np0005532048 systemd[1]: libpod-3982003b80c304a7ec3eba5df29e34fbefe520edd021c72753fd79687288103d.scope: Deactivated successfully.
Nov 22 03:31:58 np0005532048 podman[76202]: 2025-11-22 08:31:58.265001151 +0000 UTC m=+2.224372942 container died 3982003b80c304a7ec3eba5df29e34fbefe520edd021c72753fd79687288103d (image=quay.io/ceph/ceph:v18, name=pedantic_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 03:31:59 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:31:59 np0005532048 systemd[1]: var-lib-containers-storage-overlay-67c5873d5f608b37ab031e9f17e9744dd12d0cd761b3e15d75e4a546a37ef951-merged.mount: Deactivated successfully.
Nov 22 03:32:00 np0005532048 ceph-mgr[75315]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:32:00 np0005532048 podman[76202]: 2025-11-22 08:32:00.609734735 +0000 UTC m=+4.569106526 container remove 3982003b80c304a7ec3eba5df29e34fbefe520edd021c72753fd79687288103d (image=quay.io/ceph/ceph:v18, name=pedantic_newton, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:32:00 np0005532048 systemd[1]: libpod-conmon-3982003b80c304a7ec3eba5df29e34fbefe520edd021c72753fd79687288103d.scope: Deactivated successfully.
Nov 22 03:32:00 np0005532048 podman[76256]: 2025-11-22 08:32:00.674017653 +0000 UTC m=+0.034075212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:00 np0005532048 podman[76256]: 2025-11-22 08:32:00.861300239 +0000 UTC m=+0.221357778 container create 4d4195cd54bc5add536ac983708732df4ed62b0bf3daee9b7b5fe68e6d0b867d (image=quay.io/ceph/ceph:v18, name=relaxed_moore, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:32:01 np0005532048 systemd[1]: Started libpod-conmon-4d4195cd54bc5add536ac983708732df4ed62b0bf3daee9b7b5fe68e6d0b867d.scope.
Nov 22 03:32:01 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ea83e1feeab24bea8cfac91b5fe03d6b149bae5b73fb4df8bd0245f5806afc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ea83e1feeab24bea8cfac91b5fe03d6b149bae5b73fb4df8bd0245f5806afc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ea83e1feeab24bea8cfac91b5fe03d6b149bae5b73fb4df8bd0245f5806afc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:01 np0005532048 podman[76256]: 2025-11-22 08:32:01.541380442 +0000 UTC m=+0.901438061 container init 4d4195cd54bc5add536ac983708732df4ed62b0bf3daee9b7b5fe68e6d0b867d (image=quay.io/ceph/ceph:v18, name=relaxed_moore, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:32:01 np0005532048 podman[76256]: 2025-11-22 08:32:01.551836341 +0000 UTC m=+0.911893880 container start 4d4195cd54bc5add536ac983708732df4ed62b0bf3daee9b7b5fe68e6d0b867d (image=quay.io/ceph/ceph:v18, name=relaxed_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:32:01 np0005532048 podman[76256]: 2025-11-22 08:32:01.787956282 +0000 UTC m=+1.148013821 container attach 4d4195cd54bc5add536ac983708732df4ed62b0bf3daee9b7b5fe68e6d0b867d (image=quay.io/ceph/ceph:v18, name=relaxed_moore, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:32:02 np0005532048 ceph-mgr[75315]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:32:02 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:32:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 22 03:32:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:02 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Set ssh ssh_user
Nov 22 03:32:02 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 22 03:32:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 22 03:32:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:02 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Set ssh ssh_config
Nov 22 03:32:02 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 22 03:32:02 np0005532048 ceph-mgr[75315]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 22 03:32:02 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 22 03:32:02 np0005532048 relaxed_moore[76272]: ssh user set to ceph-admin. sudo will be used
Nov 22 03:32:02 np0005532048 podman[76256]: 2025-11-22 08:32:02.633060842 +0000 UTC m=+1.993118381 container died 4d4195cd54bc5add536ac983708732df4ed62b0bf3daee9b7b5fe68e6d0b867d (image=quay.io/ceph/ceph:v18, name=relaxed_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:32:02 np0005532048 systemd[1]: libpod-4d4195cd54bc5add536ac983708732df4ed62b0bf3daee9b7b5fe68e6d0b867d.scope: Deactivated successfully.
Nov 22 03:32:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:03 np0005532048 systemd[1]: var-lib-containers-storage-overlay-25ea83e1feeab24bea8cfac91b5fe03d6b149bae5b73fb4df8bd0245f5806afc-merged.mount: Deactivated successfully.
Nov 22 03:32:03 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:03 np0005532048 ceph-mon[75021]: Set ssh ssh_user
Nov 22 03:32:03 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:03 np0005532048 ceph-mon[75021]: Set ssh ssh_config
Nov 22 03:32:03 np0005532048 ceph-mon[75021]: ssh user set to ceph-admin. sudo will be used
Nov 22 03:32:04 np0005532048 ceph-mgr[75315]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:32:04 np0005532048 podman[76256]: 2025-11-22 08:32:04.954799498 +0000 UTC m=+4.314857077 container remove 4d4195cd54bc5add536ac983708732df4ed62b0bf3daee9b7b5fe68e6d0b867d (image=quay.io/ceph/ceph:v18, name=relaxed_moore, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:32:04 np0005532048 systemd[1]: libpod-conmon-4d4195cd54bc5add536ac983708732df4ed62b0bf3daee9b7b5fe68e6d0b867d.scope: Deactivated successfully.
Nov 22 03:32:05 np0005532048 podman[76312]: 2025-11-22 08:32:05.002198008 +0000 UTC m=+0.024284430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:05 np0005532048 podman[76312]: 2025-11-22 08:32:05.352037578 +0000 UTC m=+0.374123990 container create 07ef930adcea1eb6f3efc2ae21b401b489f2105efbc61fd07750d952c8d42b8c (image=quay.io/ceph/ceph:v18, name=determined_mcnulty, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 22 03:32:05 np0005532048 systemd[1]: Started libpod-conmon-07ef930adcea1eb6f3efc2ae21b401b489f2105efbc61fd07750d952c8d42b8c.scope.
Nov 22 03:32:05 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63037d9dfc0c43e25da1005abde8aa3374ef508201ef2797af563184d8d1f3ee/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63037d9dfc0c43e25da1005abde8aa3374ef508201ef2797af563184d8d1f3ee/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63037d9dfc0c43e25da1005abde8aa3374ef508201ef2797af563184d8d1f3ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63037d9dfc0c43e25da1005abde8aa3374ef508201ef2797af563184d8d1f3ee/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63037d9dfc0c43e25da1005abde8aa3374ef508201ef2797af563184d8d1f3ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:05 np0005532048 podman[76312]: 2025-11-22 08:32:05.96883824 +0000 UTC m=+0.990924632 container init 07ef930adcea1eb6f3efc2ae21b401b489f2105efbc61fd07750d952c8d42b8c (image=quay.io/ceph/ceph:v18, name=determined_mcnulty, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:32:05 np0005532048 podman[76312]: 2025-11-22 08:32:05.974340765 +0000 UTC m=+0.996427137 container start 07ef930adcea1eb6f3efc2ae21b401b489f2105efbc61fd07750d952c8d42b8c (image=quay.io/ceph/ceph:v18, name=determined_mcnulty, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:32:06 np0005532048 ceph-mgr[75315]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:32:06 np0005532048 podman[76312]: 2025-11-22 08:32:06.274075348 +0000 UTC m=+1.296161720 container attach 07ef930adcea1eb6f3efc2ae21b401b489f2105efbc61fd07750d952c8d42b8c (image=quay.io/ceph/ceph:v18, name=determined_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 22 03:32:06 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:32:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 22 03:32:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:06 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 22 03:32:06 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 22 03:32:06 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Set ssh private key
Nov 22 03:32:06 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 22 03:32:06 np0005532048 systemd[1]: libpod-07ef930adcea1eb6f3efc2ae21b401b489f2105efbc61fd07750d952c8d42b8c.scope: Deactivated successfully.
Nov 22 03:32:06 np0005532048 podman[76312]: 2025-11-22 08:32:06.807666665 +0000 UTC m=+1.829753037 container died 07ef930adcea1eb6f3efc2ae21b401b489f2105efbc61fd07750d952c8d42b8c (image=quay.io/ceph/ceph:v18, name=determined_mcnulty, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:32:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:08 np0005532048 ceph-mgr[75315]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:32:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay-63037d9dfc0c43e25da1005abde8aa3374ef508201ef2797af563184d8d1f3ee-merged.mount: Deactivated successfully.
Nov 22 03:32:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:08 np0005532048 ceph-mon[75021]: Set ssh ssh_identity_key
Nov 22 03:32:08 np0005532048 ceph-mon[75021]: Set ssh private key
Nov 22 03:32:09 np0005532048 podman[76312]: 2025-11-22 08:32:09.413046185 +0000 UTC m=+4.435132577 container remove 07ef930adcea1eb6f3efc2ae21b401b489f2105efbc61fd07750d952c8d42b8c (image=quay.io/ceph/ceph:v18, name=determined_mcnulty, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:32:09 np0005532048 systemd[1]: libpod-conmon-07ef930adcea1eb6f3efc2ae21b401b489f2105efbc61fd07750d952c8d42b8c.scope: Deactivated successfully.
Nov 22 03:32:09 np0005532048 podman[76366]: 2025-11-22 08:32:09.456749945 +0000 UTC m=+0.022033026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:09 np0005532048 podman[76366]: 2025-11-22 08:32:09.611488247 +0000 UTC m=+0.176771308 container create e29b3223eabfbbf9e8fe0a881bd855887b0c9b8ff590abefdf3b9de70d981e45 (image=quay.io/ceph/ceph:v18, name=nifty_hertz, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:32:09 np0005532048 systemd[1]: Started libpod-conmon-e29b3223eabfbbf9e8fe0a881bd855887b0c9b8ff590abefdf3b9de70d981e45.scope.
Nov 22 03:32:09 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52fd5eb33f6865ef0eeadb6cf60e6a2211238ef6ac29b307b025b93fc515a731/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52fd5eb33f6865ef0eeadb6cf60e6a2211238ef6ac29b307b025b93fc515a731/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52fd5eb33f6865ef0eeadb6cf60e6a2211238ef6ac29b307b025b93fc515a731/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52fd5eb33f6865ef0eeadb6cf60e6a2211238ef6ac29b307b025b93fc515a731/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52fd5eb33f6865ef0eeadb6cf60e6a2211238ef6ac29b307b025b93fc515a731/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:09 np0005532048 podman[76366]: 2025-11-22 08:32:09.985498962 +0000 UTC m=+0.550782053 container init e29b3223eabfbbf9e8fe0a881bd855887b0c9b8ff590abefdf3b9de70d981e45 (image=quay.io/ceph/ceph:v18, name=nifty_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 03:32:09 np0005532048 podman[76366]: 2025-11-22 08:32:09.991498401 +0000 UTC m=+0.556781502 container start e29b3223eabfbbf9e8fe0a881bd855887b0c9b8ff590abefdf3b9de70d981e45 (image=quay.io/ceph/ceph:v18, name=nifty_hertz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:32:10 np0005532048 ceph-mgr[75315]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:32:10 np0005532048 podman[76366]: 2025-11-22 08:32:10.219951862 +0000 UTC m=+0.785234983 container attach e29b3223eabfbbf9e8fe0a881bd855887b0c9b8ff590abefdf3b9de70d981e45 (image=quay.io/ceph/ceph:v18, name=nifty_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 03:32:10 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:32:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 22 03:32:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:10 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 22 03:32:10 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 22 03:32:10 np0005532048 systemd[1]: libpod-e29b3223eabfbbf9e8fe0a881bd855887b0c9b8ff590abefdf3b9de70d981e45.scope: Deactivated successfully.
Nov 22 03:32:10 np0005532048 podman[76366]: 2025-11-22 08:32:10.736248712 +0000 UTC m=+1.301531773 container died e29b3223eabfbbf9e8fe0a881bd855887b0c9b8ff590abefdf3b9de70d981e45 (image=quay.io/ceph/ceph:v18, name=nifty_hertz, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 03:32:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:11 np0005532048 systemd[1]: var-lib-containers-storage-overlay-52fd5eb33f6865ef0eeadb6cf60e6a2211238ef6ac29b307b025b93fc515a731-merged.mount: Deactivated successfully.
Nov 22 03:32:12 np0005532048 ceph-mgr[75315]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 22 03:32:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:12 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 22 03:32:12 np0005532048 podman[76366]: 2025-11-22 08:32:12.218832119 +0000 UTC m=+2.784115210 container remove e29b3223eabfbbf9e8fe0a881bd855887b0c9b8ff590abefdf3b9de70d981e45 (image=quay.io/ceph/ceph:v18, name=nifty_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:32:12 np0005532048 ceph-mon[75021]: Set ssh ssh_identity_pub
Nov 22 03:32:12 np0005532048 ceph-mon[75021]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 22 03:32:12 np0005532048 podman[76422]: 2025-11-22 08:32:12.258246632 +0000 UTC m=+0.021637128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:12 np0005532048 podman[76422]: 2025-11-22 08:32:12.355942073 +0000 UTC m=+0.119332549 container create 9d3d7ffce7ef7298854828c99b262964f8a832c465ca73297d3da2fc838c1ad3 (image=quay.io/ceph/ceph:v18, name=frosty_feistel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 03:32:12 np0005532048 systemd[1]: Started libpod-conmon-9d3d7ffce7ef7298854828c99b262964f8a832c465ca73297d3da2fc838c1ad3.scope.
Nov 22 03:32:12 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03d134ae8b6100f2724c17424fed1d7e3f8e01b0c77d1ae97718a094ed2e1bda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03d134ae8b6100f2724c17424fed1d7e3f8e01b0c77d1ae97718a094ed2e1bda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03d134ae8b6100f2724c17424fed1d7e3f8e01b0c77d1ae97718a094ed2e1bda/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:12 np0005532048 podman[76422]: 2025-11-22 08:32:12.525889518 +0000 UTC m=+0.289280014 container init 9d3d7ffce7ef7298854828c99b262964f8a832c465ca73297d3da2fc838c1ad3 (image=quay.io/ceph/ceph:v18, name=frosty_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:32:12 np0005532048 podman[76422]: 2025-11-22 08:32:12.531542834 +0000 UTC m=+0.294933300 container start 9d3d7ffce7ef7298854828c99b262964f8a832c465ca73297d3da2fc838c1ad3 (image=quay.io/ceph/ceph:v18, name=frosty_feistel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 03:32:12 np0005532048 podman[76422]: 2025-11-22 08:32:12.641131028 +0000 UTC m=+0.404521524 container attach 9d3d7ffce7ef7298854828c99b262964f8a832c465ca73297d3da2fc838c1ad3 (image=quay.io/ceph/ceph:v18, name=frosty_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:32:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:12 np0005532048 systemd[1]: libpod-conmon-e29b3223eabfbbf9e8fe0a881bd855887b0c9b8ff590abefdf3b9de70d981e45.scope: Deactivated successfully.
Nov 22 03:32:13 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:32:13 np0005532048 frosty_feistel[76439]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2/iZ9B33TtrtTmTMVbRfH1BZKY4xK9eF7NXDiopbN9eOJLR1ooApQf05hMcgLbH2TcWehKkanXj9bDgVZ4o6AYUwmMoZFErUq1QpZNxUfc/LlQ8YQ2+/qrJAh1yB9U+KXDpIePg7EA4KjQmyN6Yg7q7X/3Fgj+ktNjHTTzeLBAGrZ/kKGU1gMpTveYr8G6+JTVQhtnwZdshY4ALDw+8a8ZSO/QEqu6oDdmPNLCyd4W81b67HmFT085ycbXM+zeLqsyK/Mm0bApU6sKdmspoUeLCJFjXutJUQK0n+fbhlX0pQ7izGoUoAZ7Hs9s2MleU/Dmuu4TJwv/qElCIHqGHLjN1WmqhJo084UDeG5pDavRgSNicAo2jdVWRQEaeFgO4kXy4l3GFj1bwTZt1dmPwaJVwQBuJmOrZGhJsR1hGxUZydo0Jlgn6yHJ50+X/JLJk3peZooc6hD2ETN9Z9qMnj4oJBaSSqaZ8IaMvWGbZXB8anWXMgahnvXoeXzB+pafWM= zuul@controller
Nov 22 03:32:13 np0005532048 systemd[1]: libpod-9d3d7ffce7ef7298854828c99b262964f8a832c465ca73297d3da2fc838c1ad3.scope: Deactivated successfully.
Nov 22 03:32:13 np0005532048 podman[76422]: 2025-11-22 08:32:13.12239381 +0000 UTC m=+0.885784276 container died 9d3d7ffce7ef7298854828c99b262964f8a832c465ca73297d3da2fc838c1ad3 (image=quay.io/ceph/ceph:v18, name=frosty_feistel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:32:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:14 np0005532048 systemd[1]: var-lib-containers-storage-overlay-03d134ae8b6100f2724c17424fed1d7e3f8e01b0c77d1ae97718a094ed2e1bda-merged.mount: Deactivated successfully.
Nov 22 03:32:15 np0005532048 podman[76422]: 2025-11-22 08:32:15.10590409 +0000 UTC m=+2.869294566 container remove 9d3d7ffce7ef7298854828c99b262964f8a832c465ca73297d3da2fc838c1ad3 (image=quay.io/ceph/ceph:v18, name=frosty_feistel, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:32:15 np0005532048 podman[76478]: 2025-11-22 08:32:15.150222392 +0000 UTC m=+0.023525687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:15 np0005532048 podman[76478]: 2025-11-22 08:32:15.624467368 +0000 UTC m=+0.497770653 container create 66f9fad817eaf30fdc02dc8c86e0baacb28ddd47b6902b404c54feb45795ec0c (image=quay.io/ceph/ceph:v18, name=relaxed_neumann, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:32:15 np0005532048 systemd[1]: Started libpod-conmon-66f9fad817eaf30fdc02dc8c86e0baacb28ddd47b6902b404c54feb45795ec0c.scope.
Nov 22 03:32:15 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb520d9090ab90c836b4a308f6aef202afe727e1648834393000cedf35de844a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb520d9090ab90c836b4a308f6aef202afe727e1648834393000cedf35de844a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb520d9090ab90c836b4a308f6aef202afe727e1648834393000cedf35de844a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:16 np0005532048 podman[76478]: 2025-11-22 08:32:16.165777822 +0000 UTC m=+1.039081117 container init 66f9fad817eaf30fdc02dc8c86e0baacb28ddd47b6902b404c54feb45795ec0c (image=quay.io/ceph/ceph:v18, name=relaxed_neumann, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:32:16 np0005532048 podman[76478]: 2025-11-22 08:32:16.170570646 +0000 UTC m=+1.043873921 container start 66f9fad817eaf30fdc02dc8c86e0baacb28ddd47b6902b404c54feb45795ec0c (image=quay.io/ceph/ceph:v18, name=relaxed_neumann, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 22 03:32:16 np0005532048 podman[76478]: 2025-11-22 08:32:16.326786573 +0000 UTC m=+1.200089848 container attach 66f9fad817eaf30fdc02dc8c86e0baacb28ddd47b6902b404c54feb45795ec0c (image=quay.io/ceph/ceph:v18, name=relaxed_neumann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 03:32:16 np0005532048 systemd[1]: libpod-conmon-9d3d7ffce7ef7298854828c99b262964f8a832c465ca73297d3da2fc838c1ad3.scope: Deactivated successfully.
Nov 22 03:32:16 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:32:16 np0005532048 systemd-logind[822]: New session 21 of user ceph-admin.
Nov 22 03:32:16 np0005532048 systemd[1]: Created slice User Slice of UID 42477.
Nov 22 03:32:16 np0005532048 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 22 03:32:16 np0005532048 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 22 03:32:16 np0005532048 systemd[1]: Starting User Manager for UID 42477...
Nov 22 03:32:17 np0005532048 systemd[76524]: Queued start job for default target Main User Target.
Nov 22 03:32:17 np0005532048 systemd[76524]: Created slice User Application Slice.
Nov 22 03:32:17 np0005532048 systemd[76524]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 22 03:32:17 np0005532048 systemd[76524]: Started Daily Cleanup of User's Temporary Directories.
Nov 22 03:32:17 np0005532048 systemd[76524]: Reached target Paths.
Nov 22 03:32:17 np0005532048 systemd[76524]: Reached target Timers.
Nov 22 03:32:17 np0005532048 systemd[76524]: Starting D-Bus User Message Bus Socket...
Nov 22 03:32:17 np0005532048 systemd[76524]: Starting Create User's Volatile Files and Directories...
Nov 22 03:32:17 np0005532048 systemd-logind[822]: New session 23 of user ceph-admin.
Nov 22 03:32:17 np0005532048 systemd[76524]: Listening on D-Bus User Message Bus Socket.
Nov 22 03:32:17 np0005532048 systemd[76524]: Finished Create User's Volatile Files and Directories.
Nov 22 03:32:17 np0005532048 systemd[76524]: Reached target Sockets.
Nov 22 03:32:17 np0005532048 systemd[76524]: Reached target Basic System.
Nov 22 03:32:17 np0005532048 systemd[76524]: Reached target Main User Target.
Nov 22 03:32:17 np0005532048 systemd[76524]: Startup finished in 133ms.
Nov 22 03:32:17 np0005532048 systemd[1]: Started User Manager for UID 42477.
Nov 22 03:32:17 np0005532048 systemd[1]: Started Session 21 of User ceph-admin.
Nov 22 03:32:17 np0005532048 systemd[1]: Started Session 23 of User ceph-admin.
Nov 22 03:32:17 np0005532048 systemd-logind[822]: New session 24 of user ceph-admin.
Nov 22 03:32:17 np0005532048 systemd[1]: Started Session 24 of User ceph-admin.
Nov 22 03:32:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:17 np0005532048 systemd-logind[822]: New session 25 of user ceph-admin.
Nov 22 03:32:17 np0005532048 systemd[1]: Started Session 25 of User ceph-admin.
Nov 22 03:32:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:18 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 22 03:32:18 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 22 03:32:18 np0005532048 systemd-logind[822]: New session 26 of user ceph-admin.
Nov 22 03:32:18 np0005532048 systemd[1]: Started Session 26 of User ceph-admin.
Nov 22 03:32:18 np0005532048 systemd-logind[822]: New session 27 of user ceph-admin.
Nov 22 03:32:18 np0005532048 systemd[1]: Started Session 27 of User ceph-admin.
Nov 22 03:32:19 np0005532048 systemd-logind[822]: New session 28 of user ceph-admin.
Nov 22 03:32:19 np0005532048 systemd[1]: Started Session 28 of User ceph-admin.
Nov 22 03:32:19 np0005532048 systemd-logind[822]: New session 29 of user ceph-admin.
Nov 22 03:32:19 np0005532048 systemd[1]: Started Session 29 of User ceph-admin.
Nov 22 03:32:19 np0005532048 systemd-logind[822]: New session 30 of user ceph-admin.
Nov 22 03:32:19 np0005532048 systemd[1]: Started Session 30 of User ceph-admin.
Nov 22 03:32:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:20 np0005532048 ceph-mon[75021]: Deploying cephadm binary to compute-0
Nov 22 03:32:20 np0005532048 systemd-logind[822]: New session 31 of user ceph-admin.
Nov 22 03:32:20 np0005532048 systemd[1]: Started Session 31 of User ceph-admin.
Nov 22 03:32:20 np0005532048 systemd-logind[822]: New session 32 of user ceph-admin.
Nov 22 03:32:20 np0005532048 systemd[1]: Started Session 32 of User ceph-admin.
Nov 22 03:32:21 np0005532048 systemd-logind[822]: New session 33 of user ceph-admin.
Nov 22 03:32:21 np0005532048 systemd[1]: Started Session 33 of User ceph-admin.
Nov 22 03:32:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 03:32:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:22 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Added host compute-0
Nov 22 03:32:22 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 22 03:32:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 03:32:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:32:22 np0005532048 relaxed_neumann[76494]: Added host 'compute-0' with addr '192.168.122.100'
Nov 22 03:32:22 np0005532048 systemd[1]: libpod-66f9fad817eaf30fdc02dc8c86e0baacb28ddd47b6902b404c54feb45795ec0c.scope: Deactivated successfully.
Nov 22 03:32:22 np0005532048 podman[76478]: 2025-11-22 08:32:22.292554319 +0000 UTC m=+7.165857624 container died 66f9fad817eaf30fdc02dc8c86e0baacb28ddd47b6902b404c54feb45795ec0c (image=quay.io/ceph/ceph:v18, name=relaxed_neumann, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:32:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:32:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:32:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:32:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:32:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:32:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:32:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:23 np0005532048 systemd[1]: var-lib-containers-storage-overlay-eb520d9090ab90c836b4a308f6aef202afe727e1648834393000cedf35de844a-merged.mount: Deactivated successfully.
Nov 22 03:32:23 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:23 np0005532048 ceph-mon[75021]: Added host compute-0
Nov 22 03:32:23 np0005532048 podman[76478]: 2025-11-22 08:32:23.930964967 +0000 UTC m=+8.804268242 container remove 66f9fad817eaf30fdc02dc8c86e0baacb28ddd47b6902b404c54feb45795ec0c (image=quay.io/ceph/ceph:v18, name=relaxed_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:32:23 np0005532048 systemd[1]: libpod-conmon-66f9fad817eaf30fdc02dc8c86e0baacb28ddd47b6902b404c54feb45795ec0c.scope: Deactivated successfully.
Nov 22 03:32:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:24 np0005532048 podman[77258]: 2025-11-22 08:32:23.974259651 +0000 UTC m=+0.024346666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:24 np0005532048 podman[77258]: 2025-11-22 08:32:24.228613195 +0000 UTC m=+0.278700190 container create 608db94a44a55a54a13517cab84a1a4902c329fe1274f304650e22c5336c4cec (image=quay.io/ceph/ceph:v18, name=brave_bhabha, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 03:32:24 np0005532048 systemd[1]: Started libpod-conmon-608db94a44a55a54a13517cab84a1a4902c329fe1274f304650e22c5336c4cec.scope.
Nov 22 03:32:24 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6216481e75e4e8488a4610261caec450bf7c4796fa9effdd62780594daffaa1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6216481e75e4e8488a4610261caec450bf7c4796fa9effdd62780594daffaa1b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6216481e75e4e8488a4610261caec450bf7c4796fa9effdd62780594daffaa1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:24 np0005532048 podman[77270]: 2025-11-22 08:32:24.447663527 +0000 UTC m=+0.456131240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:24 np0005532048 podman[77258]: 2025-11-22 08:32:24.722116582 +0000 UTC m=+0.772203607 container init 608db94a44a55a54a13517cab84a1a4902c329fe1274f304650e22c5336c4cec (image=quay.io/ceph/ceph:v18, name=brave_bhabha, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:32:24 np0005532048 podman[77258]: 2025-11-22 08:32:24.735039992 +0000 UTC m=+0.785126987 container start 608db94a44a55a54a13517cab84a1a4902c329fe1274f304650e22c5336c4cec (image=quay.io/ceph/ceph:v18, name=brave_bhabha, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:32:24 np0005532048 podman[77258]: 2025-11-22 08:32:24.941944632 +0000 UTC m=+0.992031647 container attach 608db94a44a55a54a13517cab84a1a4902c329fe1274f304650e22c5336c4cec (image=quay.io/ceph/ceph:v18, name=brave_bhabha, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:32:25 np0005532048 podman[77270]: 2025-11-22 08:32:25.404964971 +0000 UTC m=+1.413432684 container create 17c868c954548209d5f9b5f0309fb4fdbd933d0e4eb255d947cb69bdee66ee02 (image=quay.io/ceph/ceph:v18, name=festive_panini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 03:32:25 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:32:25 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 22 03:32:25 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 22 03:32:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 22 03:32:25 np0005532048 systemd[1]: Started libpod-conmon-17c868c954548209d5f9b5f0309fb4fdbd933d0e4eb255d947cb69bdee66ee02.scope.
Nov 22 03:32:25 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:25 np0005532048 brave_bhabha[77289]: Scheduled mon update...
Nov 22 03:32:25 np0005532048 systemd[1]: libpod-608db94a44a55a54a13517cab84a1a4902c329fe1274f304650e22c5336c4cec.scope: Deactivated successfully.
Nov 22 03:32:25 np0005532048 podman[77270]: 2025-11-22 08:32:25.84574228 +0000 UTC m=+1.854210033 container init 17c868c954548209d5f9b5f0309fb4fdbd933d0e4eb255d947cb69bdee66ee02 (image=quay.io/ceph/ceph:v18, name=festive_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:32:25 np0005532048 podman[77270]: 2025-11-22 08:32:25.853203381 +0000 UTC m=+1.861671084 container start 17c868c954548209d5f9b5f0309fb4fdbd933d0e4eb255d947cb69bdee66ee02 (image=quay.io/ceph/ceph:v18, name=festive_panini, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:32:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:26 np0005532048 festive_panini[77316]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 22 03:32:26 np0005532048 podman[77270]: 2025-11-22 08:32:26.153226216 +0000 UTC m=+2.161693949 container attach 17c868c954548209d5f9b5f0309fb4fdbd933d0e4eb255d947cb69bdee66ee02 (image=quay.io/ceph/ceph:v18, name=festive_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 03:32:26 np0005532048 podman[77258]: 2025-11-22 08:32:26.156043838 +0000 UTC m=+2.206130833 container died 608db94a44a55a54a13517cab84a1a4902c329fe1274f304650e22c5336c4cec (image=quay.io/ceph/ceph:v18, name=brave_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:32:26 np0005532048 systemd[1]: libpod-17c868c954548209d5f9b5f0309fb4fdbd933d0e4eb255d947cb69bdee66ee02.scope: Deactivated successfully.
Nov 22 03:32:26 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6216481e75e4e8488a4610261caec450bf7c4796fa9effdd62780594daffaa1b-merged.mount: Deactivated successfully.
Nov 22 03:32:27 np0005532048 ceph-mon[75021]: Saving service mon spec with placement count:5
Nov 22 03:32:27 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:27 np0005532048 podman[77270]: 2025-11-22 08:32:27.392236077 +0000 UTC m=+3.400703790 container died 17c868c954548209d5f9b5f0309fb4fdbd933d0e4eb255d947cb69bdee66ee02 (image=quay.io/ceph/ceph:v18, name=festive_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:32:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:27 np0005532048 systemd[1]: var-lib-containers-storage-overlay-61f89475874cd8e8816f4685b449510cdb553652ac81dc419dc10a863eacb1cf-merged.mount: Deactivated successfully.
Nov 22 03:32:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:28 np0005532048 podman[77270]: 2025-11-22 08:32:28.555857198 +0000 UTC m=+4.564324911 container remove 17c868c954548209d5f9b5f0309fb4fdbd933d0e4eb255d947cb69bdee66ee02 (image=quay.io/ceph/ceph:v18, name=festive_panini, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:32:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 22 03:32:28 np0005532048 podman[77258]: 2025-11-22 08:32:28.784013561 +0000 UTC m=+4.834100596 container remove 608db94a44a55a54a13517cab84a1a4902c329fe1274f304650e22c5336c4cec (image=quay.io/ceph/ceph:v18, name=brave_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:32:28 np0005532048 systemd[1]: libpod-conmon-608db94a44a55a54a13517cab84a1a4902c329fe1274f304650e22c5336c4cec.scope: Deactivated successfully.
Nov 22 03:32:28 np0005532048 systemd[1]: libpod-conmon-17c868c954548209d5f9b5f0309fb4fdbd933d0e4eb255d947cb69bdee66ee02.scope: Deactivated successfully.
Nov 22 03:32:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:28 np0005532048 podman[77349]: 2025-11-22 08:32:28.828783409 +0000 UTC m=+0.023584669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:29 np0005532048 podman[77349]: 2025-11-22 08:32:29.076300165 +0000 UTC m=+0.271101405 container create 355b418d94a2b6afd140ab835e0e6d7d7e1fd2d515045e320f7bf19ce5cf3242 (image=quay.io/ceph/ceph:v18, name=naughty_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:32:29 np0005532048 systemd[1]: Started libpod-conmon-355b418d94a2b6afd140ab835e0e6d7d7e1fd2d515045e320f7bf19ce5cf3242.scope.
Nov 22 03:32:29 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b6b514fa810ad69e4bc7c56d090a50085d1ddcfcaf73d49ed1e5e1921460d3e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b6b514fa810ad69e4bc7c56d090a50085d1ddcfcaf73d49ed1e5e1921460d3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b6b514fa810ad69e4bc7c56d090a50085d1ddcfcaf73d49ed1e5e1921460d3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:32:29 np0005532048 podman[77349]: 2025-11-22 08:32:29.708034895 +0000 UTC m=+0.902836145 container init 355b418d94a2b6afd140ab835e0e6d7d7e1fd2d515045e320f7bf19ce5cf3242 (image=quay.io/ceph/ceph:v18, name=naughty_dubinsky, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:32:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:29 np0005532048 podman[77349]: 2025-11-22 08:32:29.714293293 +0000 UTC m=+0.909094533 container start 355b418d94a2b6afd140ab835e0e6d7d7e1fd2d515045e320f7bf19ce5cf3242 (image=quay.io/ceph/ceph:v18, name=naughty_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:32:29 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:29 np0005532048 podman[77349]: 2025-11-22 08:32:29.932326157 +0000 UTC m=+1.127127397 container attach 355b418d94a2b6afd140ab835e0e6d7d7e1fd2d515045e320f7bf19ce5cf3242 (image=quay.io/ceph/ceph:v18, name=naughty_dubinsky, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:32:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:30 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:32:30 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 22 03:32:30 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 22 03:32:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 03:32:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:31 np0005532048 naughty_dubinsky[77479]: Scheduled mgr update...
Nov 22 03:32:31 np0005532048 systemd[1]: libpod-355b418d94a2b6afd140ab835e0e6d7d7e1fd2d515045e320f7bf19ce5cf3242.scope: Deactivated successfully.
Nov 22 03:32:31 np0005532048 podman[77677]: 2025-11-22 08:32:31.447764746 +0000 UTC m=+1.142517376 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 03:32:31 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:31 np0005532048 ceph-mon[75021]: Saving service mgr spec with placement count:2
Nov 22 03:32:31 np0005532048 podman[77677]: 2025-11-22 08:32:31.760506102 +0000 UTC m=+1.455258732 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:32:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:32 np0005532048 podman[77349]: 2025-11-22 08:32:32.101162683 +0000 UTC m=+3.295963923 container died 355b418d94a2b6afd140ab835e0e6d7d7e1fd2d515045e320f7bf19ce5cf3242 (image=quay.io/ceph/ceph:v18, name=naughty_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 03:32:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:32:32 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5b6b514fa810ad69e4bc7c56d090a50085d1ddcfcaf73d49ed1e5e1921460d3e-merged.mount: Deactivated successfully.
Nov 22 03:32:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:32 np0005532048 podman[77694]: 2025-11-22 08:32:32.358859159 +0000 UTC m=+1.175507664 container remove 355b418d94a2b6afd140ab835e0e6d7d7e1fd2d515045e320f7bf19ce5cf3242 (image=quay.io/ceph/ceph:v18, name=naughty_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:32:32 np0005532048 systemd[1]: libpod-conmon-355b418d94a2b6afd140ab835e0e6d7d7e1fd2d515045e320f7bf19ce5cf3242.scope: Deactivated successfully.
Nov 22 03:32:32 np0005532048 podman[77775]: 2025-11-22 08:32:32.46259008 +0000 UTC m=+0.082253361 container create d04ae2447ad32df1bbd0db5dcaddc9236d5d87a33be1d327eda4a229d592ec25 (image=quay.io/ceph/ceph:v18, name=elegant_hugle, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:32:32 np0005532048 podman[77775]: 2025-11-22 08:32:32.404798209 +0000 UTC m=+0.024461510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:32 np0005532048 systemd[1]: Started libpod-conmon-d04ae2447ad32df1bbd0db5dcaddc9236d5d87a33be1d327eda4a229d592ec25.scope.
Nov 22 03:32:32 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:32 np0005532048 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77870 (sysctl)
Nov 22 03:32:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5615b1ea53a24a7cb6e6c52abaf11ff316704e10aca8063ea4039dd7e00ff3b7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5615b1ea53a24a7cb6e6c52abaf11ff316704e10aca8063ea4039dd7e00ff3b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5615b1ea53a24a7cb6e6c52abaf11ff316704e10aca8063ea4039dd7e00ff3b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:32 np0005532048 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 22 03:32:32 np0005532048 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 22 03:32:33 np0005532048 podman[77775]: 2025-11-22 08:32:33.240859966 +0000 UTC m=+0.860523267 container init d04ae2447ad32df1bbd0db5dcaddc9236d5d87a33be1d327eda4a229d592ec25 (image=quay.io/ceph/ceph:v18, name=elegant_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 03:32:33 np0005532048 podman[77775]: 2025-11-22 08:32:33.250962674 +0000 UTC m=+0.870625955 container start d04ae2447ad32df1bbd0db5dcaddc9236d5d87a33be1d327eda4a229d592ec25 (image=quay.io/ceph/ceph:v18, name=elegant_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:32:33 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:33 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:33 np0005532048 podman[77775]: 2025-11-22 08:32:33.64597625 +0000 UTC m=+1.265639561 container attach d04ae2447ad32df1bbd0db5dcaddc9236d5d87a33be1d327eda4a229d592ec25 (image=quay.io/ceph/ceph:v18, name=elegant_hugle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 03:32:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:32:33 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:32:33 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Saving service crash spec with placement *
Nov 22 03:32:33 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 22 03:32:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 22 03:32:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:34 np0005532048 elegant_hugle[77871]: Scheduled crash update...
Nov 22 03:32:34 np0005532048 systemd[1]: libpod-d04ae2447ad32df1bbd0db5dcaddc9236d5d87a33be1d327eda4a229d592ec25.scope: Deactivated successfully.
Nov 22 03:32:34 np0005532048 podman[77775]: 2025-11-22 08:32:34.513039284 +0000 UTC m=+2.132702595 container died d04ae2447ad32df1bbd0db5dcaddc9236d5d87a33be1d327eda4a229d592ec25 (image=quay.io/ceph/ceph:v18, name=elegant_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:32:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:35 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5615b1ea53a24a7cb6e6c52abaf11ff316704e10aca8063ea4039dd7e00ff3b7-merged.mount: Deactivated successfully.
Nov 22 03:32:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:36 np0005532048 ceph-mon[75021]: Saving service crash spec with placement *
Nov 22 03:32:36 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:37 np0005532048 podman[77775]: 2025-11-22 08:32:37.057635614 +0000 UTC m=+4.677298905 container remove d04ae2447ad32df1bbd0db5dcaddc9236d5d87a33be1d327eda4a229d592ec25 (image=quay.io/ceph/ceph:v18, name=elegant_hugle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:32:37 np0005532048 systemd[1]: libpod-conmon-d04ae2447ad32df1bbd0db5dcaddc9236d5d87a33be1d327eda4a229d592ec25.scope: Deactivated successfully.
Nov 22 03:32:37 np0005532048 podman[78160]: 2025-11-22 08:32:37.10384694 +0000 UTC m=+0.019510941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:37 np0005532048 podman[78160]: 2025-11-22 08:32:37.41027312 +0000 UTC m=+0.325937101 container create f722aa9ddad793f68e5ce6e70996a0c732c222f4ecb61402b5954ccf3d9c6a19 (image=quay.io/ceph/ceph:v18, name=optimistic_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:32:37 np0005532048 systemd[1]: Started libpod-conmon-f722aa9ddad793f68e5ce6e70996a0c732c222f4ecb61402b5954ccf3d9c6a19.scope.
Nov 22 03:32:37 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ccee417e1d51f805fda2794a9c65ab0cf16e7476b2ea9307de6bc6c572bef02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ccee417e1d51f805fda2794a9c65ab0cf16e7476b2ea9307de6bc6c572bef02/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ccee417e1d51f805fda2794a9c65ab0cf16e7476b2ea9307de6bc6c572bef02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:38 np0005532048 podman[78160]: 2025-11-22 08:32:38.059042761 +0000 UTC m=+0.974706752 container init f722aa9ddad793f68e5ce6e70996a0c732c222f4ecb61402b5954ccf3d9c6a19 (image=quay.io/ceph/ceph:v18, name=optimistic_carson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:32:38 np0005532048 podman[78160]: 2025-11-22 08:32:38.066260488 +0000 UTC m=+0.981924469 container start f722aa9ddad793f68e5ce6e70996a0c732c222f4ecb61402b5954ccf3d9c6a19 (image=quay.io/ceph/ceph:v18, name=optimistic_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:32:38 np0005532048 podman[78160]: 2025-11-22 08:32:38.397099537 +0000 UTC m=+1.312763538 container attach f722aa9ddad793f68e5ce6e70996a0c732c222f4ecb61402b5954ccf3d9c6a19 (image=quay.io/ceph/ceph:v18, name=optimistic_carson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 22 03:32:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 22 03:32:38 np0005532048 podman[78229]: 2025-11-22 08:32:38.49475272 +0000 UTC m=+0.021163012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:32:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:40 np0005532048 podman[78229]: 2025-11-22 08:32:40.481714934 +0000 UTC m=+2.008125196 container create 866dbf8de4c79d30f88648ecc5cbfe65d3363f78b00d17c186e384c9e68eb997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_villani, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 03:32:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2176234180' entity='client.admin' 
Nov 22 03:32:40 np0005532048 systemd[1]: libpod-f722aa9ddad793f68e5ce6e70996a0c732c222f4ecb61402b5954ccf3d9c6a19.scope: Deactivated successfully.
Nov 22 03:32:41 np0005532048 systemd[1]: Started libpod-conmon-866dbf8de4c79d30f88648ecc5cbfe65d3363f78b00d17c186e384c9e68eb997.scope.
Nov 22 03:32:41 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:41 np0005532048 podman[78229]: 2025-11-22 08:32:41.489910558 +0000 UTC m=+3.016320820 container init 866dbf8de4c79d30f88648ecc5cbfe65d3363f78b00d17c186e384c9e68eb997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_villani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:32:41 np0005532048 podman[78229]: 2025-11-22 08:32:41.495841124 +0000 UTC m=+3.022251386 container start 866dbf8de4c79d30f88648ecc5cbfe65d3363f78b00d17c186e384c9e68eb997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:32:41 np0005532048 flamboyant_villani[78259]: 167 167
Nov 22 03:32:41 np0005532048 systemd[1]: libpod-866dbf8de4c79d30f88648ecc5cbfe65d3363f78b00d17c186e384c9e68eb997.scope: Deactivated successfully.
Nov 22 03:32:41 np0005532048 podman[78229]: 2025-11-22 08:32:41.585969542 +0000 UTC m=+3.112379834 container attach 866dbf8de4c79d30f88648ecc5cbfe65d3363f78b00d17c186e384c9e68eb997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_villani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 22 03:32:41 np0005532048 podman[78229]: 2025-11-22 08:32:41.586408022 +0000 UTC m=+3.112818304 container died 866dbf8de4c79d30f88648ecc5cbfe65d3363f78b00d17c186e384c9e68eb997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 03:32:42 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7256b0aacd68d81638d9cc5caed4575ba129c80554d55f8e48917a351eb6dd09-merged.mount: Deactivated successfully.
Nov 22 03:32:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:42 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/2176234180' entity='client.admin' 
Nov 22 03:32:42 np0005532048 podman[78229]: 2025-11-22 08:32:42.403926225 +0000 UTC m=+3.930336497 container remove 866dbf8de4c79d30f88648ecc5cbfe65d3363f78b00d17c186e384c9e68eb997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:32:42 np0005532048 podman[78160]: 2025-11-22 08:32:42.437185214 +0000 UTC m=+5.352849185 container died f722aa9ddad793f68e5ce6e70996a0c732c222f4ecb61402b5954ccf3d9c6a19 (image=quay.io/ceph/ceph:v18, name=optimistic_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:32:42 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2ccee417e1d51f805fda2794a9c65ab0cf16e7476b2ea9307de6bc6c572bef02-merged.mount: Deactivated successfully.
Nov 22 03:32:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:43 np0005532048 podman[78246]: 2025-11-22 08:32:43.045565561 +0000 UTC m=+2.524378967 container remove f722aa9ddad793f68e5ce6e70996a0c732c222f4ecb61402b5954ccf3d9c6a19 (image=quay.io/ceph/ceph:v18, name=optimistic_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:32:43 np0005532048 systemd[1]: libpod-conmon-f722aa9ddad793f68e5ce6e70996a0c732c222f4ecb61402b5954ccf3d9c6a19.scope: Deactivated successfully.
Nov 22 03:32:43 np0005532048 systemd[1]: libpod-conmon-866dbf8de4c79d30f88648ecc5cbfe65d3363f78b00d17c186e384c9e68eb997.scope: Deactivated successfully.
Nov 22 03:32:43 np0005532048 podman[78281]: 2025-11-22 08:32:43.093783007 +0000 UTC m=+0.024590555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:43 np0005532048 podman[78281]: 2025-11-22 08:32:43.335416022 +0000 UTC m=+0.266223570 container create cfa394a3d9c2c83fb9e13250e52c14ad1670eedbbddf07a6a675b102c4b105b2 (image=quay.io/ceph/ceph:v18, name=peaceful_pasteur, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:32:43 np0005532048 systemd[1]: Started libpod-conmon-cfa394a3d9c2c83fb9e13250e52c14ad1670eedbbddf07a6a675b102c4b105b2.scope.
Nov 22 03:32:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4933cd49e2313e1fac75bd75a7a5933213c152c8a35c91078b99e668eea75b82/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4933cd49e2313e1fac75bd75a7a5933213c152c8a35c91078b99e668eea75b82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4933cd49e2313e1fac75bd75a7a5933213c152c8a35c91078b99e668eea75b82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:43 np0005532048 podman[78281]: 2025-11-22 08:32:43.628827071 +0000 UTC m=+0.559634619 container init cfa394a3d9c2c83fb9e13250e52c14ad1670eedbbddf07a6a675b102c4b105b2 (image=quay.io/ceph/ceph:v18, name=peaceful_pasteur, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:32:43 np0005532048 podman[78281]: 2025-11-22 08:32:43.637085064 +0000 UTC m=+0.567892602 container start cfa394a3d9c2c83fb9e13250e52c14ad1670eedbbddf07a6a675b102c4b105b2 (image=quay.io/ceph/ceph:v18, name=peaceful_pasteur, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:32:43 np0005532048 podman[78281]: 2025-11-22 08:32:43.648817303 +0000 UTC m=+0.579624841 container attach cfa394a3d9c2c83fb9e13250e52c14ad1670eedbbddf07a6a675b102c4b105b2 (image=quay.io/ceph/ceph:v18, name=peaceful_pasteur, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:32:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:44 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:32:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 22 03:32:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:44 np0005532048 systemd[1]: libpod-cfa394a3d9c2c83fb9e13250e52c14ad1670eedbbddf07a6a675b102c4b105b2.scope: Deactivated successfully.
Nov 22 03:32:44 np0005532048 podman[78323]: 2025-11-22 08:32:44.425301206 +0000 UTC m=+0.021924130 container died cfa394a3d9c2c83fb9e13250e52c14ad1670eedbbddf07a6a675b102c4b105b2 (image=quay.io/ceph/ceph:v18, name=peaceful_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:32:45 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4933cd49e2313e1fac75bd75a7a5933213c152c8a35c91078b99e668eea75b82-merged.mount: Deactivated successfully.
Nov 22 03:32:45 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:45 np0005532048 podman[78323]: 2025-11-22 08:32:45.499790001 +0000 UTC m=+1.096412905 container remove cfa394a3d9c2c83fb9e13250e52c14ad1670eedbbddf07a6a675b102c4b105b2 (image=quay.io/ceph/ceph:v18, name=peaceful_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 03:32:45 np0005532048 systemd[1]: libpod-conmon-cfa394a3d9c2c83fb9e13250e52c14ad1670eedbbddf07a6a675b102c4b105b2.scope: Deactivated successfully.
Nov 22 03:32:45 np0005532048 podman[78337]: 2025-11-22 08:32:45.583251774 +0000 UTC m=+0.059791562 container create 11dd0cc41d54c4e0a0f15da36d61e1f2ad8dced5c5e6b83904361e88ce7fe286 (image=quay.io/ceph/ceph:v18, name=admiring_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:32:45 np0005532048 podman[78337]: 2025-11-22 08:32:45.541385644 +0000 UTC m=+0.017925422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:46 np0005532048 systemd[1]: Started libpod-conmon-11dd0cc41d54c4e0a0f15da36d61e1f2ad8dced5c5e6b83904361e88ce7fe286.scope.
Nov 22 03:32:46 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb00e74aec5bb9130568c8357530642c57e9a3e77de0f7497f696d7cc4de118/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb00e74aec5bb9130568c8357530642c57e9a3e77de0f7497f696d7cc4de118/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb00e74aec5bb9130568c8357530642c57e9a3e77de0f7497f696d7cc4de118/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:46 np0005532048 podman[78337]: 2025-11-22 08:32:46.230870067 +0000 UTC m=+0.707409835 container init 11dd0cc41d54c4e0a0f15da36d61e1f2ad8dced5c5e6b83904361e88ce7fe286 (image=quay.io/ceph/ceph:v18, name=admiring_satoshi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:32:46 np0005532048 podman[78337]: 2025-11-22 08:32:46.238384872 +0000 UTC m=+0.714924630 container start 11dd0cc41d54c4e0a0f15da36d61e1f2ad8dced5c5e6b83904361e88ce7fe286 (image=quay.io/ceph/ceph:v18, name=admiring_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:32:46 np0005532048 podman[78337]: 2025-11-22 08:32:46.409694207 +0000 UTC m=+0.886233955 container attach 11dd0cc41d54c4e0a0f15da36d61e1f2ad8dced5c5e6b83904361e88ce7fe286 (image=quay.io/ceph/ceph:v18, name=admiring_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:32:46 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:32:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 03:32:46 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:46 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Added label _admin to host compute-0
Nov 22 03:32:46 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 22 03:32:46 np0005532048 admiring_satoshi[78353]: Added label _admin to host compute-0
Nov 22 03:32:46 np0005532048 systemd[1]: libpod-11dd0cc41d54c4e0a0f15da36d61e1f2ad8dced5c5e6b83904361e88ce7fe286.scope: Deactivated successfully.
Nov 22 03:32:46 np0005532048 podman[78337]: 2025-11-22 08:32:46.965124132 +0000 UTC m=+1.441663890 container died 11dd0cc41d54c4e0a0f15da36d61e1f2ad8dced5c5e6b83904361e88ce7fe286 (image=quay.io/ceph/ceph:v18, name=admiring_satoshi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 03:32:47 np0005532048 systemd[1]: var-lib-containers-storage-overlay-beb00e74aec5bb9130568c8357530642c57e9a3e77de0f7497f696d7cc4de118-merged.mount: Deactivated successfully.
Nov 22 03:32:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:48 np0005532048 podman[78337]: 2025-11-22 08:32:48.711852627 +0000 UTC m=+3.188392375 container remove 11dd0cc41d54c4e0a0f15da36d61e1f2ad8dced5c5e6b83904361e88ce7fe286 (image=quay.io/ceph/ceph:v18, name=admiring_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:32:48 np0005532048 systemd[1]: libpod-conmon-11dd0cc41d54c4e0a0f15da36d61e1f2ad8dced5c5e6b83904361e88ce7fe286.scope: Deactivated successfully.
Nov 22 03:32:48 np0005532048 podman[78393]: 2025-11-22 08:32:48.804211749 +0000 UTC m=+0.073101419 container create ef6ffa496c67f2b21d721d293ac15c4e4f2abce8d73099c965496b57e3183fc0 (image=quay.io/ceph/ceph:v18, name=festive_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:32:48 np0005532048 podman[78393]: 2025-11-22 08:32:48.752915066 +0000 UTC m=+0.021804766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:48 np0005532048 systemd[1]: Started libpod-conmon-ef6ffa496c67f2b21d721d293ac15c4e4f2abce8d73099c965496b57e3183fc0.scope.
Nov 22 03:32:48 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29a278c405be066134f585d7b580f9a5bfbd0fc86ec9b33a39a11d5568565a49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29a278c405be066134f585d7b580f9a5bfbd0fc86ec9b33a39a11d5568565a49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29a278c405be066134f585d7b580f9a5bfbd0fc86ec9b33a39a11d5568565a49/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:49 np0005532048 podman[78393]: 2025-11-22 08:32:49.137065738 +0000 UTC m=+0.405955428 container init ef6ffa496c67f2b21d721d293ac15c4e4f2abce8d73099c965496b57e3183fc0 (image=quay.io/ceph/ceph:v18, name=festive_cannon, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:32:49 np0005532048 podman[78393]: 2025-11-22 08:32:49.14370627 +0000 UTC m=+0.412595940 container start ef6ffa496c67f2b21d721d293ac15c4e4f2abce8d73099c965496b57e3183fc0 (image=quay.io/ceph/ceph:v18, name=festive_cannon, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:32:49 np0005532048 podman[78393]: 2025-11-22 08:32:49.266799079 +0000 UTC m=+0.535688759 container attach ef6ffa496c67f2b21d721d293ac15c4e4f2abce8d73099c965496b57e3183fc0 (image=quay.io/ceph/ceph:v18, name=festive_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:32:49 np0005532048 ceph-mon[75021]: Added label _admin to host compute-0
Nov 22 03:32:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 22 03:32:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/588604517' entity='client.admin' 
Nov 22 03:32:50 np0005532048 systemd[1]: libpod-ef6ffa496c67f2b21d721d293ac15c4e4f2abce8d73099c965496b57e3183fc0.scope: Deactivated successfully.
Nov 22 03:32:50 np0005532048 podman[78435]: 2025-11-22 08:32:50.243669153 +0000 UTC m=+0.020221348 container died ef6ffa496c67f2b21d721d293ac15c4e4f2abce8d73099c965496b57e3183fc0 (image=quay.io/ceph/ceph:v18, name=festive_cannon, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:32:50 np0005532048 systemd[1]: var-lib-containers-storage-overlay-29a278c405be066134f585d7b580f9a5bfbd0fc86ec9b33a39a11d5568565a49-merged.mount: Deactivated successfully.
Nov 22 03:32:50 np0005532048 podman[78435]: 2025-11-22 08:32:50.672733548 +0000 UTC m=+0.449285753 container remove ef6ffa496c67f2b21d721d293ac15c4e4f2abce8d73099c965496b57e3183fc0 (image=quay.io/ceph/ceph:v18, name=festive_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:32:50 np0005532048 systemd[1]: libpod-conmon-ef6ffa496c67f2b21d721d293ac15c4e4f2abce8d73099c965496b57e3183fc0.scope: Deactivated successfully.
Nov 22 03:32:50 np0005532048 podman[78449]: 2025-11-22 08:32:50.772450602 +0000 UTC m=+0.074191986 container create 7d68aa313e8b42ab5a850bdc68b738debc5beda07a421544153f7acce508bf5b (image=quay.io/ceph/ceph:v18, name=gallant_taussig, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:32:50 np0005532048 podman[78449]: 2025-11-22 08:32:50.718979796 +0000 UTC m=+0.020721210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:50 np0005532048 systemd[1]: Started libpod-conmon-7d68aa313e8b42ab5a850bdc68b738debc5beda07a421544153f7acce508bf5b.scope.
Nov 22 03:32:50 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c528de37fca9ebef75e6cbfb7e0d897a6ea5f7fc8307a2f05ec4ddec241498/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c528de37fca9ebef75e6cbfb7e0d897a6ea5f7fc8307a2f05ec4ddec241498/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c528de37fca9ebef75e6cbfb7e0d897a6ea5f7fc8307a2f05ec4ddec241498/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:50 np0005532048 podman[78449]: 2025-11-22 08:32:50.854633603 +0000 UTC m=+0.156375017 container init 7d68aa313e8b42ab5a850bdc68b738debc5beda07a421544153f7acce508bf5b (image=quay.io/ceph/ceph:v18, name=gallant_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 03:32:50 np0005532048 podman[78449]: 2025-11-22 08:32:50.859580355 +0000 UTC m=+0.161321739 container start 7d68aa313e8b42ab5a850bdc68b738debc5beda07a421544153f7acce508bf5b (image=quay.io/ceph/ceph:v18, name=gallant_taussig, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:32:50 np0005532048 podman[78449]: 2025-11-22 08:32:50.880960971 +0000 UTC m=+0.182702385 container attach 7d68aa313e8b42ab5a850bdc68b738debc5beda07a421544153f7acce508bf5b (image=quay.io/ceph/ceph:v18, name=gallant_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:32:51 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/588604517' entity='client.admin' 
Nov 22 03:32:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 22 03:32:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2827650631' entity='client.admin' 
Nov 22 03:32:51 np0005532048 gallant_taussig[78465]: set mgr/dashboard/cluster/status
Nov 22 03:32:51 np0005532048 systemd[1]: libpod-7d68aa313e8b42ab5a850bdc68b738debc5beda07a421544153f7acce508bf5b.scope: Deactivated successfully.
Nov 22 03:32:51 np0005532048 podman[78449]: 2025-11-22 08:32:51.947786878 +0000 UTC m=+1.249528252 container died 7d68aa313e8b42ab5a850bdc68b738debc5beda07a421544153f7acce508bf5b (image=quay.io/ceph/ceph:v18, name=gallant_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 22 03:32:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:32:52
Nov 22 03:32:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:32:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:32:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] No pools available
Nov 22 03:32:52 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f5c528de37fca9ebef75e6cbfb7e0d897a6ea5f7fc8307a2f05ec4ddec241498-merged.mount: Deactivated successfully.
Nov 22 03:32:52 np0005532048 podman[78449]: 2025-11-22 08:32:52.364065169 +0000 UTC m=+1.665806553 container remove 7d68aa313e8b42ab5a850bdc68b738debc5beda07a421544153f7acce508bf5b (image=quay.io/ceph/ceph:v18, name=gallant_taussig, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 03:32:52 np0005532048 systemd[1]: libpod-conmon-7d68aa313e8b42ab5a850bdc68b738debc5beda07a421544153f7acce508bf5b.scope: Deactivated successfully.
Nov 22 03:32:52 np0005532048 podman[78510]: 2025-11-22 08:32:52.547678807 +0000 UTC m=+0.021396378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:32:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:32:52 np0005532048 podman[78510]: 2025-11-22 08:32:52.651406628 +0000 UTC m=+0.125124119 container create cf4c6b2836d99d88a936c96b0bd67a842060336157495b4e8e62999d7dd6d1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brattain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 22 03:32:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:32:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:32:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:32:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:32:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:32:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:32:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:32:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:32:52 np0005532048 systemd[1]: Started libpod-conmon-cf4c6b2836d99d88a936c96b0bd67a842060336157495b4e8e62999d7dd6d1df.scope.
Nov 22 03:32:52 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ca8f1fa6b2e4c21b72e59aaf08bb67ed5f4f9193dccc3bd0e303ebf664ac40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ca8f1fa6b2e4c21b72e59aaf08bb67ed5f4f9193dccc3bd0e303ebf664ac40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ca8f1fa6b2e4c21b72e59aaf08bb67ed5f4f9193dccc3bd0e303ebf664ac40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ca8f1fa6b2e4c21b72e59aaf08bb67ed5f4f9193dccc3bd0e303ebf664ac40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:52 np0005532048 podman[78510]: 2025-11-22 08:32:52.864379848 +0000 UTC m=+0.338097359 container init cf4c6b2836d99d88a936c96b0bd67a842060336157495b4e8e62999d7dd6d1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:32:52 np0005532048 podman[78510]: 2025-11-22 08:32:52.873696186 +0000 UTC m=+0.347413677 container start cf4c6b2836d99d88a936c96b0bd67a842060336157495b4e8e62999d7dd6d1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:32:52 np0005532048 podman[78510]: 2025-11-22 08:32:52.880409632 +0000 UTC m=+0.354127153 container attach cf4c6b2836d99d88a936c96b0bd67a842060336157495b4e8e62999d7dd6d1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brattain, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:32:52 np0005532048 python3[78549]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:32:52 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/2827650631' entity='client.admin' 
Nov 22 03:32:52 np0005532048 podman[78557]: 2025-11-22 08:32:52.965004274 +0000 UTC m=+0.044482316 container create baa01053815b3f43caac80c4f74559848e3cfe433374920a2b2ed04afbf9a2ff (image=quay.io/ceph/ceph:v18, name=magical_keller, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:32:53 np0005532048 podman[78557]: 2025-11-22 08:32:52.942746856 +0000 UTC m=+0.022224918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:53 np0005532048 systemd[1]: Started libpod-conmon-baa01053815b3f43caac80c4f74559848e3cfe433374920a2b2ed04afbf9a2ff.scope.
Nov 22 03:32:53 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:53 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49390ae58c55b7c2670c7ea9745e9697aaac7c5585b3839655da1673e30b7b40/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:53 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49390ae58c55b7c2670c7ea9745e9697aaac7c5585b3839655da1673e30b7b40/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:53 np0005532048 podman[78557]: 2025-11-22 08:32:53.081691994 +0000 UTC m=+0.161170066 container init baa01053815b3f43caac80c4f74559848e3cfe433374920a2b2ed04afbf9a2ff (image=quay.io/ceph/ceph:v18, name=magical_keller, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:32:53 np0005532048 podman[78557]: 2025-11-22 08:32:53.093396542 +0000 UTC m=+0.172874584 container start baa01053815b3f43caac80c4f74559848e3cfe433374920a2b2ed04afbf9a2ff (image=quay.io/ceph/ceph:v18, name=magical_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:32:53 np0005532048 podman[78557]: 2025-11-22 08:32:53.111883277 +0000 UTC m=+0.191361329 container attach baa01053815b3f43caac80c4f74559848e3cfe433374920a2b2ed04afbf9a2ff (image=quay.io/ceph/ceph:v18, name=magical_keller, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 03:32:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 22 03:32:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1659431629' entity='client.admin' 
Nov 22 03:32:53 np0005532048 systemd[1]: libpod-baa01053815b3f43caac80c4f74559848e3cfe433374920a2b2ed04afbf9a2ff.scope: Deactivated successfully.
Nov 22 03:32:53 np0005532048 conmon[78572]: conmon baa01053815b3f43caac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-baa01053815b3f43caac80c4f74559848e3cfe433374920a2b2ed04afbf9a2ff.scope/container/memory.events
Nov 22 03:32:53 np0005532048 podman[78557]: 2025-11-22 08:32:53.738747599 +0000 UTC m=+0.818225661 container died baa01053815b3f43caac80c4f74559848e3cfe433374920a2b2ed04afbf9a2ff (image=quay.io/ceph/ceph:v18, name=magical_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:32:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay-49390ae58c55b7c2670c7ea9745e9697aaac7c5585b3839655da1673e30b7b40-merged.mount: Deactivated successfully.
Nov 22 03:32:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:54 np0005532048 podman[78557]: 2025-11-22 08:32:54.039188641 +0000 UTC m=+1.118666683 container remove baa01053815b3f43caac80c4f74559848e3cfe433374920a2b2ed04afbf9a2ff (image=quay.io/ceph/ceph:v18, name=magical_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:32:54 np0005532048 systemd[1]: libpod-conmon-baa01053815b3f43caac80c4f74559848e3cfe433374920a2b2ed04afbf9a2ff.scope: Deactivated successfully.
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]: [
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:    {
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:        "available": false,
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:        "ceph_device": false,
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:        "lsm_data": {},
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:        "lvs": [],
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:        "path": "/dev/sr0",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:        "rejected_reasons": [
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "Has a FileSystem",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "Insufficient space (<5GB)"
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:        ],
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:        "sys_api": {
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "actuators": null,
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "device_nodes": "sr0",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "devname": "sr0",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "human_readable_size": "482.00 KB",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "id_bus": "ata",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "model": "QEMU DVD-ROM",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "nr_requests": "2",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "parent": "/dev/sr0",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "partitions": {},
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "path": "/dev/sr0",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "removable": "1",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "rev": "2.5+",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "ro": "0",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "rotational": "1",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "sas_address": "",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "sas_device_handle": "",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "scheduler_mode": "mq-deadline",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "sectors": 0,
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "sectorsize": "2048",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "size": 493568.0,
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "support_discard": "2048",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "type": "disk",
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:            "vendor": "QEMU"
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:        }
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]:    }
Nov 22 03:32:54 np0005532048 intelligent_brattain[78552]: ]
Nov 22 03:32:54 np0005532048 systemd[1]: libpod-cf4c6b2836d99d88a936c96b0bd67a842060336157495b4e8e62999d7dd6d1df.scope: Deactivated successfully.
Nov 22 03:32:54 np0005532048 podman[78510]: 2025-11-22 08:32:54.253712988 +0000 UTC m=+1.727430479 container died cf4c6b2836d99d88a936c96b0bd67a842060336157495b4e8e62999d7dd6d1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:32:54 np0005532048 systemd[1]: libpod-cf4c6b2836d99d88a936c96b0bd67a842060336157495b4e8e62999d7dd6d1df.scope: Consumed 1.376s CPU time.
Nov 22 03:32:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay-07ca8f1fa6b2e4c21b72e59aaf08bb67ed5f4f9193dccc3bd0e303ebf664ac40-merged.mount: Deactivated successfully.
Nov 22 03:32:54 np0005532048 podman[78510]: 2025-11-22 08:32:54.549615348 +0000 UTC m=+2.023332839 container remove cf4c6b2836d99d88a936c96b0bd67a842060336157495b4e8e62999d7dd6d1df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brattain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:32:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:32:54 np0005532048 systemd[1]: libpod-conmon-cf4c6b2836d99d88a936c96b0bd67a842060336157495b4e8e62999d7dd6d1df.scope: Deactivated successfully.
Nov 22 03:32:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:32:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:32:54 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1659431629' entity='client.admin' 
Nov 22 03:32:54 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:32:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 03:32:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:32:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:32:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:32:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:32:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:32:54 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 22 03:32:54 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 22 03:32:55 np0005532048 ansible-async_wrapper.py[80313]: Invoked with j123729699375 30 /home/zuul/.ansible/tmp/ansible-tmp-1763800374.38062-36457-183971235079756/AnsiballZ_command.py _
Nov 22 03:32:55 np0005532048 ansible-async_wrapper.py[80339]: Starting module and watcher
Nov 22 03:32:55 np0005532048 ansible-async_wrapper.py[80339]: Start watching 80340 (30)
Nov 22 03:32:55 np0005532048 ansible-async_wrapper.py[80340]: Start module (80340)
Nov 22 03:32:55 np0005532048 ansible-async_wrapper.py[80313]: Return async_wrapper task started.
Nov 22 03:32:55 np0005532048 python3[80341]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:32:55 np0005532048 podman[80412]: 2025-11-22 08:32:55.230388548 +0000 UTC m=+0.042454567 container create 707d865c4bceb44d26e1812ab096ce2595a10f7077a45b7d19707233dc60c1ea (image=quay.io/ceph/ceph:v18, name=elated_hugle, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:32:55 np0005532048 systemd[1]: Started libpod-conmon-707d865c4bceb44d26e1812ab096ce2595a10f7077a45b7d19707233dc60c1ea.scope.
Nov 22 03:32:55 np0005532048 podman[80412]: 2025-11-22 08:32:55.212561329 +0000 UTC m=+0.024627368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:55 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacdc268051793d80ee32415904b2fe93c3bbd4ff0b00f9d06698c9ba117cf65/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacdc268051793d80ee32415904b2fe93c3bbd4ff0b00f9d06698c9ba117cf65/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:55 np0005532048 podman[80412]: 2025-11-22 08:32:55.331374162 +0000 UTC m=+0.143440201 container init 707d865c4bceb44d26e1812ab096ce2595a10f7077a45b7d19707233dc60c1ea (image=quay.io/ceph/ceph:v18, name=elated_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:32:55 np0005532048 podman[80412]: 2025-11-22 08:32:55.339647226 +0000 UTC m=+0.151713245 container start 707d865c4bceb44d26e1812ab096ce2595a10f7077a45b7d19707233dc60c1ea (image=quay.io/ceph/ceph:v18, name=elated_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:32:55 np0005532048 podman[80412]: 2025-11-22 08:32:55.343448798 +0000 UTC m=+0.155514807 container attach 707d865c4bceb44d26e1812ab096ce2595a10f7077a45b7d19707233dc60c1ea (image=quay.io/ceph/ceph:v18, name=elated_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:32:55 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:55 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:55 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:55 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:32:55 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:32:55 np0005532048 ceph-mon[75021]: Updating compute-0:/etc/ceph/ceph.conf
Nov 22 03:32:55 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14170 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:32:55 np0005532048 elated_hugle[80458]: 
Nov 22 03:32:55 np0005532048 elated_hugle[80458]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 22 03:32:55 np0005532048 systemd[1]: libpod-707d865c4bceb44d26e1812ab096ce2595a10f7077a45b7d19707233dc60c1ea.scope: Deactivated successfully.
Nov 22 03:32:55 np0005532048 podman[80412]: 2025-11-22 08:32:55.940776034 +0000 UTC m=+0.752842073 container died 707d865c4bceb44d26e1812ab096ce2595a10f7077a45b7d19707233dc60c1ea (image=quay.io/ceph/ceph:v18, name=elated_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:32:55 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/config/ceph.conf
Nov 22 03:32:55 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/config/ceph.conf
Nov 22 03:32:55 np0005532048 systemd[1]: var-lib-containers-storage-overlay-eacdc268051793d80ee32415904b2fe93c3bbd4ff0b00f9d06698c9ba117cf65-merged.mount: Deactivated successfully.
Nov 22 03:32:56 np0005532048 podman[80412]: 2025-11-22 08:32:56.033374402 +0000 UTC m=+0.845440421 container remove 707d865c4bceb44d26e1812ab096ce2595a10f7077a45b7d19707233dc60c1ea (image=quay.io/ceph/ceph:v18, name=elated_hugle, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:32:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:56 np0005532048 ansible-async_wrapper.py[80340]: Module complete (80340)
Nov 22 03:32:56 np0005532048 systemd[1]: libpod-conmon-707d865c4bceb44d26e1812ab096ce2595a10f7077a45b7d19707233dc60c1ea.scope: Deactivated successfully.
Nov 22 03:32:56 np0005532048 python3[80992]: ansible-ansible.legacy.async_status Invoked with jid=j123729699375.80313 mode=status _async_dir=/root/.ansible_async
Nov 22 03:32:56 np0005532048 python3[81184]: ansible-ansible.legacy.async_status Invoked with jid=j123729699375.80313 mode=cleanup _async_dir=/root/.ansible_async
Nov 22 03:32:56 np0005532048 ceph-mon[75021]: Updating compute-0:/var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/config/ceph.conf
Nov 22 03:32:56 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 22 03:32:56 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 22 03:32:57 np0005532048 python3[81438]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 03:32:57 np0005532048 python3[81688]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:32:57 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/config/ceph.client.admin.keyring
Nov 22 03:32:57 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/config/ceph.client.admin.keyring
Nov 22 03:32:57 np0005532048 ceph-mon[75021]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 22 03:32:57 np0005532048 podman[81754]: 2025-11-22 08:32:57.877258576 +0000 UTC m=+0.088050767 container create feca06e8eef7a43e2fbe944d3662f74c4dce2fe7c5e1ff648ae130292b6f8f39 (image=quay.io/ceph/ceph:v18, name=recursing_engelbart, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:32:57 np0005532048 podman[81754]: 2025-11-22 08:32:57.81606979 +0000 UTC m=+0.026862011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:57 np0005532048 systemd[1]: Started libpod-conmon-feca06e8eef7a43e2fbe944d3662f74c4dce2fe7c5e1ff648ae130292b6f8f39.scope.
Nov 22 03:32:57 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436516f03587d7efda8e4cfd9bee2cc473287bccb64d8c191ca67b9ccc59c899/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436516f03587d7efda8e4cfd9bee2cc473287bccb64d8c191ca67b9ccc59c899/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436516f03587d7efda8e4cfd9bee2cc473287bccb64d8c191ca67b9ccc59c899/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:58 np0005532048 podman[81754]: 2025-11-22 08:32:58.02657415 +0000 UTC m=+0.237366341 container init feca06e8eef7a43e2fbe944d3662f74c4dce2fe7c5e1ff648ae130292b6f8f39 (image=quay.io/ceph/ceph:v18, name=recursing_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 03:32:58 np0005532048 podman[81754]: 2025-11-22 08:32:58.033534691 +0000 UTC m=+0.244326882 container start feca06e8eef7a43e2fbe944d3662f74c4dce2fe7c5e1ff648ae130292b6f8f39 (image=quay.io/ceph/ceph:v18, name=recursing_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:32:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:32:58 np0005532048 podman[81754]: 2025-11-22 08:32:58.072291394 +0000 UTC m=+0.283083595 container attach feca06e8eef7a43e2fbe944d3662f74c4dce2fe7c5e1ff648ae130292b6f8f39 (image=quay.io/ceph/ceph:v18, name=recursing_engelbart, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:32:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:58 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:32:58 np0005532048 recursing_engelbart[81846]: 
Nov 22 03:32:58 np0005532048 recursing_engelbart[81846]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 22 03:32:58 np0005532048 systemd[1]: libpod-feca06e8eef7a43e2fbe944d3662f74c4dce2fe7c5e1ff648ae130292b6f8f39.scope: Deactivated successfully.
Nov 22 03:32:58 np0005532048 podman[81754]: 2025-11-22 08:32:58.601148305 +0000 UTC m=+0.811940506 container died feca06e8eef7a43e2fbe944d3662f74c4dce2fe7c5e1ff648ae130292b6f8f39 (image=quay.io/ceph/ceph:v18, name=recursing_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:32:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay-436516f03587d7efda8e4cfd9bee2cc473287bccb64d8c191ca67b9ccc59c899-merged.mount: Deactivated successfully.
Nov 22 03:32:58 np0005532048 podman[81754]: 2025-11-22 08:32:58.833743918 +0000 UTC m=+1.044536149 container remove feca06e8eef7a43e2fbe944d3662f74c4dce2fe7c5e1ff648ae130292b6f8f39 (image=quay.io/ceph/ceph:v18, name=recursing_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:32:58 np0005532048 systemd[1]: libpod-conmon-feca06e8eef7a43e2fbe944d3662f74c4dce2fe7c5e1ff648ae130292b6f8f39.scope: Deactivated successfully.
Nov 22 03:32:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:32:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:32:59 np0005532048 ceph-mon[75021]: Updating compute-0:/var/lib/ceph/34829716-a12c-57a6-8915-c1aa615c9d8a/config/ceph.client.admin.keyring
Nov 22 03:32:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:32:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:32:59 np0005532048 ceph-mgr[75315]: [progress INFO root] update: starting ev 0eee9dca-3417-46cd-80bb-00d8535711d1 (Updating crash deployment (+1 -> 1))
Nov 22 03:32:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 22 03:32:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 22 03:32:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 22 03:32:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:32:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:32:59 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 22 03:32:59 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 22 03:32:59 np0005532048 python3[82292]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:32:59 np0005532048 podman[82325]: 2025-11-22 08:32:59.318708939 +0000 UTC m=+0.023875269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:32:59 np0005532048 podman[82325]: 2025-11-22 08:32:59.474608985 +0000 UTC m=+0.179775295 container create 92ce5c757c766fc64b2b3536cfb6c2f716a3e776da264459c7fbd7e6d0d9a07f (image=quay.io/ceph/ceph:v18, name=vigilant_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 03:32:59 np0005532048 systemd[1]: Started libpod-conmon-92ce5c757c766fc64b2b3536cfb6c2f716a3e776da264459c7fbd7e6d0d9a07f.scope.
Nov 22 03:32:59 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:32:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5a8e5525843eb8913eadf40a931bb1fe9be9454b19d730db89976d7284945de/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5a8e5525843eb8913eadf40a931bb1fe9be9454b19d730db89976d7284945de/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5a8e5525843eb8913eadf40a931bb1fe9be9454b19d730db89976d7284945de/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:59 np0005532048 podman[82325]: 2025-11-22 08:32:59.610441426 +0000 UTC m=+0.315607736 container init 92ce5c757c766fc64b2b3536cfb6c2f716a3e776da264459c7fbd7e6d0d9a07f (image=quay.io/ceph/ceph:v18, name=vigilant_varahamihira, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:32:59 np0005532048 podman[82325]: 2025-11-22 08:32:59.616011383 +0000 UTC m=+0.321177673 container start 92ce5c757c766fc64b2b3536cfb6c2f716a3e776da264459c7fbd7e6d0d9a07f (image=quay.io/ceph/ceph:v18, name=vigilant_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 03:32:59 np0005532048 podman[82325]: 2025-11-22 08:32:59.688960548 +0000 UTC m=+0.394126858 container attach 92ce5c757c766fc64b2b3536cfb6c2f716a3e776da264459c7fbd7e6d0d9a07f (image=quay.io/ceph/ceph:v18, name=vigilant_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:32:59 np0005532048 podman[82453]: 2025-11-22 08:32:59.851271651 +0000 UTC m=+0.102548203 container create c20b0ec05c0639dbef4dfefcb91f3606bc02c79808ae4147bbaec620effab2d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:32:59 np0005532048 podman[82453]: 2025-11-22 08:32:59.772861122 +0000 UTC m=+0.024137684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:00 np0005532048 ansible-async_wrapper.py[80339]: Done in kid B.
Nov 22 03:33:00 np0005532048 systemd[1]: Started libpod-conmon-c20b0ec05c0639dbef4dfefcb91f3606bc02c79808ae4147bbaec620effab2d5.scope.
Nov 22 03:33:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 22 03:33:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 22 03:33:00 np0005532048 ceph-mon[75021]: Deploying daemon crash.compute-0 on compute-0
Nov 22 03:33:00 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:00 np0005532048 podman[82453]: 2025-11-22 08:33:00.080631694 +0000 UTC m=+0.331908256 container init c20b0ec05c0639dbef4dfefcb91f3606bc02c79808ae4147bbaec620effab2d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mayer, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:33:00 np0005532048 podman[82453]: 2025-11-22 08:33:00.085722419 +0000 UTC m=+0.336998961 container start c20b0ec05c0639dbef4dfefcb91f3606bc02c79808ae4147bbaec620effab2d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 22 03:33:00 np0005532048 gracious_mayer[82489]: 167 167
Nov 22 03:33:00 np0005532048 systemd[1]: libpod-c20b0ec05c0639dbef4dfefcb91f3606bc02c79808ae4147bbaec620effab2d5.scope: Deactivated successfully.
Nov 22 03:33:00 np0005532048 podman[82453]: 2025-11-22 08:33:00.112438796 +0000 UTC m=+0.363715368 container attach c20b0ec05c0639dbef4dfefcb91f3606bc02c79808ae4147bbaec620effab2d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:33:00 np0005532048 podman[82453]: 2025-11-22 08:33:00.112745845 +0000 UTC m=+0.364022387 container died c20b0ec05c0639dbef4dfefcb91f3606bc02c79808ae4147bbaec620effab2d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:33:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 22 03:33:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3401524232' entity='client.admin' 
Nov 22 03:33:00 np0005532048 systemd[1]: libpod-92ce5c757c766fc64b2b3536cfb6c2f716a3e776da264459c7fbd7e6d0d9a07f.scope: Deactivated successfully.
Nov 22 03:33:00 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ddfec839b23369883e193873c5528a34fd2d75f4c370e17c4f09546b191fe2e3-merged.mount: Deactivated successfully.
Nov 22 03:33:00 np0005532048 podman[82453]: 2025-11-22 08:33:00.415035542 +0000 UTC m=+0.666312084 container remove c20b0ec05c0639dbef4dfefcb91f3606bc02c79808ae4147bbaec620effab2d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mayer, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:33:00 np0005532048 systemd[1]: libpod-conmon-c20b0ec05c0639dbef4dfefcb91f3606bc02c79808ae4147bbaec620effab2d5.scope: Deactivated successfully.
Nov 22 03:33:00 np0005532048 podman[82325]: 2025-11-22 08:33:00.451271222 +0000 UTC m=+1.156437512 container died 92ce5c757c766fc64b2b3536cfb6c2f716a3e776da264459c7fbd7e6d0d9a07f (image=quay.io/ceph/ceph:v18, name=vigilant_varahamihira, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:33:00 np0005532048 systemd[1]: Reloading.
Nov 22 03:33:00 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:33:00 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:33:00 np0005532048 podman[82510]: 2025-11-22 08:33:00.649014838 +0000 UTC m=+0.363532425 container remove 92ce5c757c766fc64b2b3536cfb6c2f716a3e776da264459c7fbd7e6d0d9a07f (image=quay.io/ceph/ceph:v18, name=vigilant_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:33:00 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f5a8e5525843eb8913eadf40a931bb1fe9be9454b19d730db89976d7284945de-merged.mount: Deactivated successfully.
Nov 22 03:33:00 np0005532048 systemd[1]: libpod-conmon-92ce5c757c766fc64b2b3536cfb6c2f716a3e776da264459c7fbd7e6d0d9a07f.scope: Deactivated successfully.
Nov 22 03:33:00 np0005532048 systemd[1]: Reloading.
Nov 22 03:33:00 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:33:00 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:33:00 np0005532048 python3[82588]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:33:01 np0005532048 podman[82627]: 2025-11-22 08:33:01.096607249 +0000 UTC m=+0.101044657 container create ce0fe546b8f233e37ce9bf6064107f754120ba9da9a06d3a09fdc7745c48c668 (image=quay.io/ceph/ceph:v18, name=interesting_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:33:01 np0005532048 podman[82627]: 2025-11-22 08:33:01.025157161 +0000 UTC m=+0.029594589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:33:01 np0005532048 systemd[1]: Starting Ceph crash.compute-0 for 34829716-a12c-57a6-8915-c1aa615c9d8a...
Nov 22 03:33:01 np0005532048 systemd[1]: Started libpod-conmon-ce0fe546b8f233e37ce9bf6064107f754120ba9da9a06d3a09fdc7745c48c668.scope.
Nov 22 03:33:01 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff70aee019217c594d6eb34975d6de2a66392165dc6eddf22cb43c785201d8b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff70aee019217c594d6eb34975d6de2a66392165dc6eddf22cb43c785201d8b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff70aee019217c594d6eb34975d6de2a66392165dc6eddf22cb43c785201d8b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:01 np0005532048 podman[82627]: 2025-11-22 08:33:01.227050808 +0000 UTC m=+0.231488236 container init ce0fe546b8f233e37ce9bf6064107f754120ba9da9a06d3a09fdc7745c48c668 (image=quay.io/ceph/ceph:v18, name=interesting_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 22 03:33:01 np0005532048 podman[82627]: 2025-11-22 08:33:01.239894805 +0000 UTC m=+0.244332223 container start ce0fe546b8f233e37ce9bf6064107f754120ba9da9a06d3a09fdc7745c48c668 (image=quay.io/ceph/ceph:v18, name=interesting_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:33:01 np0005532048 podman[82627]: 2025-11-22 08:33:01.246736683 +0000 UTC m=+0.251174091 container attach ce0fe546b8f233e37ce9bf6064107f754120ba9da9a06d3a09fdc7745c48c668 (image=quay.io/ceph/ceph:v18, name=interesting_bouman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:33:01 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/3401524232' entity='client.admin' 
Nov 22 03:33:01 np0005532048 podman[82696]: 2025-11-22 08:33:01.386419719 +0000 UTC m=+0.073942989 container create fe02bdcae5c39eb897765fe383513c3176cc0e827dfdee29d2ec78de654ff20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-crash-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:33:01 np0005532048 podman[82696]: 2025-11-22 08:33:01.334880472 +0000 UTC m=+0.022403762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400a98166734276d665307a05813fb57929022466536b6a3324ede8376f05ec1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400a98166734276d665307a05813fb57929022466536b6a3324ede8376f05ec1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400a98166734276d665307a05813fb57929022466536b6a3324ede8376f05ec1/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400a98166734276d665307a05813fb57929022466536b6a3324ede8376f05ec1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:01 np0005532048 podman[82696]: 2025-11-22 08:33:01.536346348 +0000 UTC m=+0.223869638 container init fe02bdcae5c39eb897765fe383513c3176cc0e827dfdee29d2ec78de654ff20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:33:01 np0005532048 podman[82696]: 2025-11-22 08:33:01.54292044 +0000 UTC m=+0.230443710 container start fe02bdcae5c39eb897765fe383513c3176cc0e827dfdee29d2ec78de654ff20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 03:33:01 np0005532048 bash[82696]: fe02bdcae5c39eb897765fe383513c3176cc0e827dfdee29d2ec78de654ff20a
Nov 22 03:33:01 np0005532048 systemd[1]: Started Ceph crash.compute-0 for 34829716-a12c-57a6-8915-c1aa615c9d8a.
Nov 22 03:33:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:33:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:33:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 22 03:33:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-crash-compute-0[82711]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 22 03:33:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:01 np0005532048 ceph-mgr[75315]: [progress INFO root] complete: finished ev 0eee9dca-3417-46cd-80bb-00d8535711d1 (Updating crash deployment (+1 -> 1))
Nov 22 03:33:01 np0005532048 ceph-mgr[75315]: [progress INFO root] Completed event 0eee9dca-3417-46cd-80bb-00d8535711d1 (Updating crash deployment (+1 -> 1)) in 3 seconds
Nov 22 03:33:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 22 03:33:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 22 03:33:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:01 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 3f31ae84-760a-4a9f-bc07-1a9516b2ed63 does not exist
Nov 22 03:33:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 22 03:33:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4136472994' entity='client.admin' 
Nov 22 03:33:01 np0005532048 systemd[1]: libpod-ce0fe546b8f233e37ce9bf6064107f754120ba9da9a06d3a09fdc7745c48c668.scope: Deactivated successfully.
Nov 22 03:33:01 np0005532048 podman[82627]: 2025-11-22 08:33:01.881410228 +0000 UTC m=+0.885847636 container died ce0fe546b8f233e37ce9bf6064107f754120ba9da9a06d3a09fdc7745c48c668 (image=quay.io/ceph/ceph:v18, name=interesting_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 22 03:33:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:01 np0005532048 ceph-mgr[75315]: [progress INFO root] update: starting ev 9b93a27b-089d-473e-9316-aa365681b743 (Updating mgr deployment (+1 -> 2))
Nov 22 03:33:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.xregww", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 22 03:33:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xregww", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 22 03:33:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-crash-compute-0[82711]: 2025-11-22T08:33:01.934+0000 7f2b284c8640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 22 03:33:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-crash-compute-0[82711]: 2025-11-22T08:33:01.934+0000 7f2b284c8640 -1 AuthRegistry(0x7f2b20067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 22 03:33:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-crash-compute-0[82711]: 2025-11-22T08:33:01.935+0000 7f2b284c8640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 22 03:33:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-crash-compute-0[82711]: 2025-11-22T08:33:01.935+0000 7f2b284c8640 -1 AuthRegistry(0x7f2b284c7000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 22 03:33:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-crash-compute-0[82711]: 2025-11-22T08:33:01.936+0000 7f2b2623d640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 22 03:33:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-crash-compute-0[82711]: 2025-11-22T08:33:01.936+0000 7f2b284c8640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 22 03:33:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-crash-compute-0[82711]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 22 03:33:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-crash-compute-0[82711]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 22 03:33:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xregww", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 22 03:33:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 03:33:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 03:33:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:33:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:33:02 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.xregww on compute-0
Nov 22 03:33:02 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.xregww on compute-0
Nov 22 03:33:02 np0005532048 systemd[1]: var-lib-containers-storage-overlay-eff70aee019217c594d6eb34975d6de2a66392165dc6eddf22cb43c785201d8b-merged.mount: Deactivated successfully.
Nov 22 03:33:02 np0005532048 ceph-mgr[75315]: [progress INFO root] Writing back 1 completed events
Nov 22 03:33:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 03:33:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:02 np0005532048 podman[82627]: 2025-11-22 08:33:02.708687241 +0000 UTC m=+1.713124649 container remove ce0fe546b8f233e37ce9bf6064107f754120ba9da9a06d3a09fdc7745c48c668 (image=quay.io/ceph/ceph:v18, name=interesting_bouman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:33:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:02 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/4136472994' entity='client.admin' 
Nov 22 03:33:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xregww", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 22 03:33:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xregww", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 22 03:33:02 np0005532048 ceph-mon[75021]: Deploying daemon mgr.compute-0.xregww on compute-0
Nov 22 03:33:02 np0005532048 systemd[1]: libpod-conmon-ce0fe546b8f233e37ce9bf6064107f754120ba9da9a06d3a09fdc7745c48c668.scope: Deactivated successfully.
Nov 22 03:33:02 np0005532048 podman[82900]: 2025-11-22 08:33:02.891093458 +0000 UTC m=+0.102424881 container create d0b02ab58807b9d0a2bbb22ffa3a7c2f6619d4a37619a13d8fc11d64edbf544c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_poitras, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:33:02 np0005532048 podman[82900]: 2025-11-22 08:33:02.813892459 +0000 UTC m=+0.025223902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:02 np0005532048 systemd[1]: Started libpod-conmon-d0b02ab58807b9d0a2bbb22ffa3a7c2f6619d4a37619a13d8fc11d64edbf544c.scope.
Nov 22 03:33:02 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:02 np0005532048 podman[82900]: 2025-11-22 08:33:02.968550483 +0000 UTC m=+0.179881916 container init d0b02ab58807b9d0a2bbb22ffa3a7c2f6619d4a37619a13d8fc11d64edbf544c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:33:02 np0005532048 podman[82900]: 2025-11-22 08:33:02.977000801 +0000 UTC m=+0.188332224 container start d0b02ab58807b9d0a2bbb22ffa3a7c2f6619d4a37619a13d8fc11d64edbf544c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_poitras, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:33:02 np0005532048 friendly_poitras[82941]: 167 167
Nov 22 03:33:02 np0005532048 systemd[1]: libpod-d0b02ab58807b9d0a2bbb22ffa3a7c2f6619d4a37619a13d8fc11d64edbf544c.scope: Deactivated successfully.
Nov 22 03:33:03 np0005532048 podman[82900]: 2025-11-22 08:33:03.013040728 +0000 UTC m=+0.224372171 container attach d0b02ab58807b9d0a2bbb22ffa3a7c2f6619d4a37619a13d8fc11d64edbf544c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_poitras, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:33:03 np0005532048 podman[82900]: 2025-11-22 08:33:03.013428477 +0000 UTC m=+0.224759900 container died d0b02ab58807b9d0a2bbb22ffa3a7c2f6619d4a37619a13d8fc11d64edbf544c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_poitras, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:33:03 np0005532048 systemd[1]: var-lib-containers-storage-overlay-000bc781e330f6f6b6e320ae41580ddb0879693c71a82e072b2ff0745888fa08-merged.mount: Deactivated successfully.
Nov 22 03:33:03 np0005532048 podman[82900]: 2025-11-22 08:33:03.077621077 +0000 UTC m=+0.288952500 container remove d0b02ab58807b9d0a2bbb22ffa3a7c2f6619d4a37619a13d8fc11d64edbf544c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_poitras, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:33:03 np0005532048 python3[82943]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:33:03 np0005532048 systemd[1]: libpod-conmon-d0b02ab58807b9d0a2bbb22ffa3a7c2f6619d4a37619a13d8fc11d64edbf544c.scope: Deactivated successfully.
Nov 22 03:33:03 np0005532048 systemd[1]: Reloading.
Nov 22 03:33:03 np0005532048 podman[82962]: 2025-11-22 08:33:03.147438495 +0000 UTC m=+0.041783840 container create 4e12aa3fc62c5784b77f05a9e3e9375419a39cd4df23b1c55946fa3e978c5dea (image=quay.io/ceph/ceph:v18, name=agitated_morse, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:33:03 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:33:03 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:33:03 np0005532048 podman[82962]: 2025-11-22 08:33:03.129432682 +0000 UTC m=+0.023777927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:33:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:03 np0005532048 systemd[1]: Started libpod-conmon-4e12aa3fc62c5784b77f05a9e3e9375419a39cd4df23b1c55946fa3e978c5dea.scope.
Nov 22 03:33:03 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68ab47db2c0a61ca61c2bae593ec071702a23027e1c691fbc4cfd2f0a8acd79/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68ab47db2c0a61ca61c2bae593ec071702a23027e1c691fbc4cfd2f0a8acd79/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68ab47db2c0a61ca61c2bae593ec071702a23027e1c691fbc4cfd2f0a8acd79/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:03 np0005532048 systemd[1]: Reloading.
Nov 22 03:33:03 np0005532048 podman[82962]: 2025-11-22 08:33:03.439610543 +0000 UTC m=+0.333955788 container init 4e12aa3fc62c5784b77f05a9e3e9375419a39cd4df23b1c55946fa3e978c5dea (image=quay.io/ceph/ceph:v18, name=agitated_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:33:03 np0005532048 podman[82962]: 2025-11-22 08:33:03.449370403 +0000 UTC m=+0.343715648 container start 4e12aa3fc62c5784b77f05a9e3e9375419a39cd4df23b1c55946fa3e978c5dea (image=quay.io/ceph/ceph:v18, name=agitated_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 03:33:03 np0005532048 podman[82962]: 2025-11-22 08:33:03.455555525 +0000 UTC m=+0.349900790 container attach 4e12aa3fc62c5784b77f05a9e3e9375419a39cd4df23b1c55946fa3e978c5dea (image=quay.io/ceph/ceph:v18, name=agitated_morse, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:33:03 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:33:03 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:33:03 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:03 np0005532048 systemd[1]: Starting Ceph mgr.compute-0.xregww for 34829716-a12c-57a6-8915-c1aa615c9d8a...
Nov 22 03:33:03 np0005532048 podman[83126]: 2025-11-22 08:33:03.954683755 +0000 UTC m=+0.042152309 container create db1c385b701d930fdec71800516100e773e29f4750ffa8b46bfa4aa3c9a48143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-xregww, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:33:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 22 03:33:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3288779532' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 22 03:33:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96105d84f659843530997f4cf53fd1d0f3baecba70d884c13167f216c1f7715c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96105d84f659843530997f4cf53fd1d0f3baecba70d884c13167f216c1f7715c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96105d84f659843530997f4cf53fd1d0f3baecba70d884c13167f216c1f7715c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96105d84f659843530997f4cf53fd1d0f3baecba70d884c13167f216c1f7715c/merged/var/lib/ceph/mgr/ceph-compute-0.xregww supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:04 np0005532048 podman[83126]: 2025-11-22 08:33:04.014607719 +0000 UTC m=+0.102076283 container init db1c385b701d930fdec71800516100e773e29f4750ffa8b46bfa4aa3c9a48143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-xregww, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:33:04 np0005532048 podman[83126]: 2025-11-22 08:33:04.020058833 +0000 UTC m=+0.107527387 container start db1c385b701d930fdec71800516100e773e29f4750ffa8b46bfa4aa3c9a48143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-xregww, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 03:33:04 np0005532048 bash[83126]: db1c385b701d930fdec71800516100e773e29f4750ffa8b46bfa4aa3c9a48143
Nov 22 03:33:04 np0005532048 podman[83126]: 2025-11-22 08:33:03.934936018 +0000 UTC m=+0.022404612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:04 np0005532048 systemd[1]: Started Ceph mgr.compute-0.xregww for 34829716-a12c-57a6-8915-c1aa615c9d8a.
Nov 22 03:33:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:04 np0005532048 ceph-mgr[83147]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:33:04 np0005532048 ceph-mgr[83147]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 22 03:33:04 np0005532048 ceph-mgr[83147]: pidfile_write: ignore empty --pid-file
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:04 np0005532048 ceph-mgr[75315]: [progress INFO root] complete: finished ev 9b93a27b-089d-473e-9316-aa365681b743 (Updating mgr deployment (+1 -> 2))
Nov 22 03:33:04 np0005532048 ceph-mgr[75315]: [progress INFO root] Completed event 9b93a27b-089d-473e-9316-aa365681b743 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:04 np0005532048 ceph-mgr[83147]: mgr[py] Loading python module 'alerts'
Nov 22 03:33:04 np0005532048 ceph-mgr[83147]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 03:33:04 np0005532048 ceph-mgr[83147]: mgr[py] Loading python module 'balancer'
Nov 22 03:33:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-xregww[83143]: 2025-11-22T08:33:04.525+0000 7f4c8bfe8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/3288779532' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3288779532' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 22 03:33:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 22 03:33:04 np0005532048 agitated_morse[83016]: set require_min_compat_client to mimic
Nov 22 03:33:04 np0005532048 systemd[1]: libpod-4e12aa3fc62c5784b77f05a9e3e9375419a39cd4df23b1c55946fa3e978c5dea.scope: Deactivated successfully.
Nov 22 03:33:04 np0005532048 podman[82962]: 2025-11-22 08:33:04.765547494 +0000 UTC m=+1.659892739 container died 4e12aa3fc62c5784b77f05a9e3e9375419a39cd4df23b1c55946fa3e978c5dea (image=quay.io/ceph/ceph:v18, name=agitated_morse, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:33:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d68ab47db2c0a61ca61c2bae593ec071702a23027e1c691fbc4cfd2f0a8acd79-merged.mount: Deactivated successfully.
Nov 22 03:33:04 np0005532048 podman[82962]: 2025-11-22 08:33:04.822181357 +0000 UTC m=+1.716526602 container remove 4e12aa3fc62c5784b77f05a9e3e9375419a39cd4df23b1c55946fa3e978c5dea (image=quay.io/ceph/ceph:v18, name=agitated_morse, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:33:04 np0005532048 ceph-mgr[83147]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 03:33:04 np0005532048 ceph-mgr[83147]: mgr[py] Loading python module 'cephadm'
Nov 22 03:33:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-xregww[83143]: 2025-11-22T08:33:04.822+0000 7f4c8bfe8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 03:33:04 np0005532048 systemd[1]: libpod-conmon-4e12aa3fc62c5784b77f05a9e3e9375419a39cd4df23b1c55946fa3e978c5dea.scope: Deactivated successfully.
Nov 22 03:33:04 np0005532048 podman[83406]: 2025-11-22 08:33:04.97265464 +0000 UTC m=+0.069577303 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:33:05 np0005532048 podman[83406]: 2025-11-22 08:33:05.061661369 +0000 UTC m=+0.158584022 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:05 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 24b69a6e-66e7-45e9-9e85-6fa1834c7970 does not exist
Nov 22 03:33:05 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev e1e0dde5-d27d-4aa3-8988-cd08f45e4627 does not exist
Nov 22 03:33:05 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev f06a6864-75b5-4f23-9398-17735775db0c does not exist
Nov 22 03:33:05 np0005532048 python3[83516]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 22 03:33:05 np0005532048 podman[83555]: 2025-11-22 08:33:05.534582974 +0000 UTC m=+0.056304596 container create da29e7e8713740f1d13fe85c6f2a53dfd50db25f81606c4b1a934bec3f15e902 (image=quay.io/ceph/ceph:v18, name=magical_ardinghelli, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:05 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 22 03:33:05 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:33:05 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 22 03:33:05 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 22 03:33:05 np0005532048 systemd[1]: Started libpod-conmon-da29e7e8713740f1d13fe85c6f2a53dfd50db25f81606c4b1a934bec3f15e902.scope.
Nov 22 03:33:05 np0005532048 podman[83555]: 2025-11-22 08:33:05.500766092 +0000 UTC m=+0.022487734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:33:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2d52046f501cab280e1519c7fe23a2faa8abf58ce8bb39893f48adb3ed251c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2d52046f501cab280e1519c7fe23a2faa8abf58ce8bb39893f48adb3ed251c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2d52046f501cab280e1519c7fe23a2faa8abf58ce8bb39893f48adb3ed251c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:05 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:05 np0005532048 podman[83555]: 2025-11-22 08:33:05.631303814 +0000 UTC m=+0.153025446 container init da29e7e8713740f1d13fe85c6f2a53dfd50db25f81606c4b1a934bec3f15e902 (image=quay.io/ceph/ceph:v18, name=magical_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:33:05 np0005532048 podman[83555]: 2025-11-22 08:33:05.645530763 +0000 UTC m=+0.167252385 container start da29e7e8713740f1d13fe85c6f2a53dfd50db25f81606c4b1a934bec3f15e902 (image=quay.io/ceph/ceph:v18, name=magical_ardinghelli, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:33:05 np0005532048 podman[83555]: 2025-11-22 08:33:05.649628924 +0000 UTC m=+0.171350576 container attach da29e7e8713740f1d13fe85c6f2a53dfd50db25f81606c4b1a934bec3f15e902 (image=quay.io/ceph/ceph:v18, name=magical_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/3288779532' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 22 03:33:05 np0005532048 ceph-mon[75021]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 22 03:33:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:06 np0005532048 podman[83721]: 2025-11-22 08:33:06.095622457 +0000 UTC m=+0.061027863 container create 1c62d978d3e73b818e668288c82a55d8221cb748a808c2747ced0e821fa4ce80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kare, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:33:06 np0005532048 systemd[1]: Started libpod-conmon-1c62d978d3e73b818e668288c82a55d8221cb748a808c2747ced0e821fa4ce80.scope.
Nov 22 03:33:06 np0005532048 podman[83721]: 2025-11-22 08:33:06.057973741 +0000 UTC m=+0.023379177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:06 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:06 np0005532048 podman[83721]: 2025-11-22 08:33:06.208487833 +0000 UTC m=+0.173893259 container init 1c62d978d3e73b818e668288c82a55d8221cb748a808c2747ced0e821fa4ce80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kare, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 22 03:33:06 np0005532048 podman[83721]: 2025-11-22 08:33:06.213893107 +0000 UTC m=+0.179298513 container start 1c62d978d3e73b818e668288c82a55d8221cb748a808c2747ced0e821fa4ce80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:33:06 np0005532048 elegant_kare[83738]: 167 167
Nov 22 03:33:06 np0005532048 systemd[1]: libpod-1c62d978d3e73b818e668288c82a55d8221cb748a808c2747ced0e821fa4ce80.scope: Deactivated successfully.
Nov 22 03:33:06 np0005532048 conmon[83738]: conmon 1c62d978d3e73b818e66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c62d978d3e73b818e668288c82a55d8221cb748a808c2747ced0e821fa4ce80.scope/container/memory.events
Nov 22 03:33:06 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:33:06 np0005532048 podman[83721]: 2025-11-22 08:33:06.334552795 +0000 UTC m=+0.299958201 container attach 1c62d978d3e73b818e668288c82a55d8221cb748a808c2747ced0e821fa4ce80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kare, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 03:33:06 np0005532048 podman[83721]: 2025-11-22 08:33:06.334890904 +0000 UTC m=+0.300296300 container died 1c62d978d3e73b818e668288c82a55d8221cb748a808c2747ced0e821fa4ce80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:33:06 np0005532048 systemd[1]: var-lib-containers-storage-overlay-990831a04e9fbebd05dbcd897e48048c95b5d512aa43c5eadad41251a0d6353e-merged.mount: Deactivated successfully.
Nov 22 03:33:06 np0005532048 podman[83721]: 2025-11-22 08:33:06.934828243 +0000 UTC m=+0.900233649 container remove 1c62d978d3e73b818e668288c82a55d8221cb748a808c2747ced0e821fa4ce80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:33:06 np0005532048 ceph-mgr[83147]: mgr[py] Loading python module 'crash'
Nov 22 03:33:07 np0005532048 systemd[1]: libpod-conmon-1c62d978d3e73b818e668288c82a55d8221cb748a808c2747ced0e821fa4ce80.scope: Deactivated successfully.
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:07 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.ldbkey (unknown last config time)...
Nov 22 03:33:07 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.ldbkey (unknown last config time)...
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ldbkey", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ldbkey", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:33:07 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.ldbkey on compute-0
Nov 22 03:33:07 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.ldbkey on compute-0
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 03:33:07 np0005532048 ceph-mgr[83147]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 03:33:07 np0005532048 ceph-mgr[83147]: mgr[py] Loading python module 'dashboard'
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-xregww[83143]: 2025-11-22T08:33:07.299+0000 7f4c8bfe8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:07 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Added host compute-0
Nov 22 03:33:07 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 22 03:33:07 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 22 03:33:07 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:07 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 22 03:33:07 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:07 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 22 03:33:07 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 22 03:33:07 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 22 03:33:07 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:07 np0005532048 magical_ardinghelli[83584]: Added host 'compute-0' with addr '192.168.122.100'
Nov 22 03:33:07 np0005532048 magical_ardinghelli[83584]: Scheduled mon update...
Nov 22 03:33:07 np0005532048 magical_ardinghelli[83584]: Scheduled mgr update...
Nov 22 03:33:07 np0005532048 magical_ardinghelli[83584]: Scheduled osd.default_drive_group update...
Nov 22 03:33:07 np0005532048 systemd[1]: libpod-da29e7e8713740f1d13fe85c6f2a53dfd50db25f81606c4b1a934bec3f15e902.scope: Deactivated successfully.
Nov 22 03:33:07 np0005532048 podman[83555]: 2025-11-22 08:33:07.547513257 +0000 UTC m=+2.069234889 container died da29e7e8713740f1d13fe85c6f2a53dfd50db25f81606c4b1a934bec3f15e902 (image=quay.io/ceph/ceph:v18, name=magical_ardinghelli, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:33:07 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7d2d52046f501cab280e1519c7fe23a2faa8abf58ce8bb39893f48adb3ed251c-merged.mount: Deactivated successfully.
Nov 22 03:33:07 np0005532048 ceph-mgr[75315]: [progress INFO root] Writing back 2 completed events
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 03:33:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:07 np0005532048 podman[83555]: 2025-11-22 08:33:07.82975746 +0000 UTC m=+2.351479082 container remove da29e7e8713740f1d13fe85c6f2a53dfd50db25f81606c4b1a934bec3f15e902 (image=quay.io/ceph/ceph:v18, name=magical_ardinghelli, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:33:07 np0005532048 systemd[1]: libpod-conmon-da29e7e8713740f1d13fe85c6f2a53dfd50db25f81606c4b1a934bec3f15e902.scope: Deactivated successfully.
Nov 22 03:33:07 np0005532048 podman[84019]: 2025-11-22 08:33:07.933000971 +0000 UTC m=+0.231936248 container create 586fab607bfaf522dbf6899db753188ca52f8e278598334bbb856e76463e51a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:33:07 np0005532048 podman[84019]: 2025-11-22 08:33:07.853763551 +0000 UTC m=+0.152698858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:07 np0005532048 systemd[1]: Started libpod-conmon-586fab607bfaf522dbf6899db753188ca52f8e278598334bbb856e76463e51a2.scope.
Nov 22 03:33:08 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:08 np0005532048 podman[84019]: 2025-11-22 08:33:08.0431006 +0000 UTC m=+0.342035917 container init 586fab607bfaf522dbf6899db753188ca52f8e278598334bbb856e76463e51a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 03:33:08 np0005532048 podman[84019]: 2025-11-22 08:33:08.050249046 +0000 UTC m=+0.349184333 container start 586fab607bfaf522dbf6899db753188ca52f8e278598334bbb856e76463e51a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:33:08 np0005532048 amazing_nash[84035]: 167 167
Nov 22 03:33:08 np0005532048 systemd[1]: libpod-586fab607bfaf522dbf6899db753188ca52f8e278598334bbb856e76463e51a2.scope: Deactivated successfully.
Nov 22 03:33:08 np0005532048 podman[84019]: 2025-11-22 08:33:08.059565364 +0000 UTC m=+0.358500671 container attach 586fab607bfaf522dbf6899db753188ca52f8e278598334bbb856e76463e51a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nash, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:33:08 np0005532048 podman[84019]: 2025-11-22 08:33:08.060004596 +0000 UTC m=+0.358939883 container died 586fab607bfaf522dbf6899db753188ca52f8e278598334bbb856e76463e51a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nash, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: Reconfiguring mgr.compute-0.ldbkey (unknown last config time)...
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ldbkey", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: Reconfiguring daemon mgr.compute-0.ldbkey on compute-0
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: Added host compute-0
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: Saving service mon spec with placement compute-0
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: Saving service mgr spec with placement compute-0
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: Saving service osd.default_drive_group spec with placement compute-0
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay-43aedb109757c8f4b53debd557f0c921d457e8ff3050b7687bd310d8e6e8a3c9-merged.mount: Deactivated successfully.
Nov 22 03:33:08 np0005532048 python3[84076]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:33:08 np0005532048 podman[84019]: 2025-11-22 08:33:08.467733657 +0000 UTC m=+0.766668944 container remove 586fab607bfaf522dbf6899db753188ca52f8e278598334bbb856e76463e51a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nash, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:33:08 np0005532048 systemd[1]: libpod-conmon-586fab607bfaf522dbf6899db753188ca52f8e278598334bbb856e76463e51a2.scope: Deactivated successfully.
Nov 22 03:33:08 np0005532048 podman[84079]: 2025-11-22 08:33:08.507773041 +0000 UTC m=+0.202108553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:33:08 np0005532048 podman[84079]: 2025-11-22 08:33:08.631978407 +0000 UTC m=+0.326313829 container create 0818fd8e75e86ef7e7110ffbd0ae5c56a76873e320886a3c1d97204bf5f7eac3 (image=quay.io/ceph/ceph:v18, name=magical_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:08 np0005532048 systemd[1]: Started libpod-conmon-0818fd8e75e86ef7e7110ffbd0ae5c56a76873e320886a3c1d97204bf5f7eac3.scope.
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:33:08 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/477a273e924b10d9df751124566c3e3a2b2eacce5acb235e02ec50469e849575/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/477a273e924b10d9df751124566c3e3a2b2eacce5acb235e02ec50469e849575/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/477a273e924b10d9df751124566c3e3a2b2eacce5acb235e02ec50469e849575/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:08 np0005532048 podman[84079]: 2025-11-22 08:33:08.83002696 +0000 UTC m=+0.524362402 container init 0818fd8e75e86ef7e7110ffbd0ae5c56a76873e320886a3c1d97204bf5f7eac3 (image=quay.io/ceph/ceph:v18, name=magical_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:33:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:08 np0005532048 podman[84079]: 2025-11-22 08:33:08.835369861 +0000 UTC m=+0.529705283 container start 0818fd8e75e86ef7e7110ffbd0ae5c56a76873e320886a3c1d97204bf5f7eac3 (image=quay.io/ceph/ceph:v18, name=magical_tu, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:33:08 np0005532048 podman[84079]: 2025-11-22 08:33:08.861641327 +0000 UTC m=+0.555976769 container attach 0818fd8e75e86ef7e7110ffbd0ae5c56a76873e320886a3c1d97204bf5f7eac3 (image=quay.io/ceph/ceph:v18, name=magical_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 03:33:08 np0005532048 ceph-mgr[83147]: mgr[py] Loading python module 'devicehealth'
Nov 22 03:33:09 np0005532048 ceph-mgr[83147]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 03:33:09 np0005532048 ceph-mgr[83147]: mgr[py] Loading python module 'diskprediction_local'
Nov 22 03:33:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-xregww[83143]: 2025-11-22T08:33:09.180+0000 7f4c8bfe8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 03:33:09 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:09 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 03:33:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2417848087' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 03:33:09 np0005532048 magical_tu[84098]: 
Nov 22 03:33:09 np0005532048 magical_tu[84098]: {"fsid":"34829716-a12c-57a6-8915-c1aa615c9d8a","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":131,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-22T08:32:54.036030+0000","services":{}},"progress_events":{}}
Nov 22 03:33:09 np0005532048 systemd[1]: libpod-0818fd8e75e86ef7e7110ffbd0ae5c56a76873e320886a3c1d97204bf5f7eac3.scope: Deactivated successfully.
Nov 22 03:33:09 np0005532048 podman[84290]: 2025-11-22 08:33:09.630683447 +0000 UTC m=+0.126543664 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:33:09 np0005532048 podman[84079]: 2025-11-22 08:33:09.683240361 +0000 UTC m=+1.377575783 container died 0818fd8e75e86ef7e7110ffbd0ae5c56a76873e320886a3c1d97204bf5f7eac3 (image=quay.io/ceph/ceph:v18, name=magical_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:33:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-xregww[83143]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 22 03:33:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-xregww[83143]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 22 03:33:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-xregww[83143]:  from numpy import show_config as show_numpy_config
Nov 22 03:33:09 np0005532048 ceph-mgr[83147]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 03:33:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-xregww[83143]: 2025-11-22T08:33:09.740+0000 7f4c8bfe8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 03:33:09 np0005532048 ceph-mgr[83147]: mgr[py] Loading python module 'influx'
Nov 22 03:33:09 np0005532048 systemd[1]: var-lib-containers-storage-overlay-477a273e924b10d9df751124566c3e3a2b2eacce5acb235e02ec50469e849575-merged.mount: Deactivated successfully.
Nov 22 03:33:09 np0005532048 ceph-mgr[83147]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 03:33:09 np0005532048 ceph-mgr[83147]: mgr[py] Loading python module 'insights'
Nov 22 03:33:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-xregww[83143]: 2025-11-22T08:33:09.993+0000 7f4c8bfe8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 03:33:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:10 np0005532048 podman[84304]: 2025-11-22 08:33:10.13886609 +0000 UTC m=+0.574645928 container remove 0818fd8e75e86ef7e7110ffbd0ae5c56a76873e320886a3c1d97204bf5f7eac3 (image=quay.io/ceph/ceph:v18, name=magical_tu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:33:10 np0005532048 systemd[1]: libpod-conmon-0818fd8e75e86ef7e7110ffbd0ae5c56a76873e320886a3c1d97204bf5f7eac3.scope: Deactivated successfully.
Nov 22 03:33:10 np0005532048 podman[84290]: 2025-11-22 08:33:10.163624729 +0000 UTC m=+0.659484946 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 03:33:10 np0005532048 ceph-mgr[83147]: mgr[py] Loading python module 'iostat'
Nov 22 03:33:10 np0005532048 ceph-mgr[83147]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 03:33:10 np0005532048 ceph-mgr[83147]: mgr[py] Loading python module 'k8sevents'
Nov 22 03:33:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-xregww[83143]: 2025-11-22T08:33:10.554+0000 7f4c8bfe8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 03:33:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:33:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:33:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:33:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:33:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:33:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:33:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:33:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:33:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:33:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:10 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ef4dd72d-d132-435e-9cf3-ead620c6c538 does not exist
Nov 22 03:33:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 22 03:33:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:10 np0005532048 ceph-mgr[75315]: [progress INFO root] update: starting ev bb24918a-9acb-4ac0-a59c-4197b860e7dd (Updating mgr deployment (-1 -> 1))
Nov 22 03:33:10 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.xregww from compute-0 -- ports [8765]
Nov 22 03:33:10 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.xregww from compute-0 -- ports [8765]
Nov 22 03:33:11 np0005532048 systemd[1]: Stopping Ceph mgr.compute-0.xregww for 34829716-a12c-57a6-8915-c1aa615c9d8a...
Nov 22 03:33:11 np0005532048 podman[84561]: 2025-11-22 08:33:11.538887274 +0000 UTC m=+0.090920208 container died db1c385b701d930fdec71800516100e773e29f4750ffa8b46bfa4aa3c9a48143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-xregww, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:33:11 np0005532048 systemd[1]: var-lib-containers-storage-overlay-96105d84f659843530997f4cf53fd1d0f3baecba70d884c13167f216c1f7715c-merged.mount: Deactivated successfully.
Nov 22 03:33:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:33:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:11 np0005532048 podman[84561]: 2025-11-22 08:33:11.65092525 +0000 UTC m=+0.202958184 container remove db1c385b701d930fdec71800516100e773e29f4750ffa8b46bfa4aa3c9a48143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-xregww, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 03:33:11 np0005532048 bash[84561]: ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-xregww
Nov 22 03:33:11 np0005532048 systemd[1]: ceph-34829716-a12c-57a6-8915-c1aa615c9d8a@mgr.compute-0.xregww.service: Main process exited, code=exited, status=143/n/a
Nov 22 03:33:11 np0005532048 systemd[1]: ceph-34829716-a12c-57a6-8915-c1aa615c9d8a@mgr.compute-0.xregww.service: Failed with result 'exit-code'.
Nov 22 03:33:11 np0005532048 systemd[1]: Stopped Ceph mgr.compute-0.xregww for 34829716-a12c-57a6-8915-c1aa615c9d8a.
Nov 22 03:33:11 np0005532048 systemd[1]: ceph-34829716-a12c-57a6-8915-c1aa615c9d8a@mgr.compute-0.xregww.service: Consumed 8.257s CPU time.
Nov 22 03:33:11 np0005532048 systemd[1]: Reloading.
Nov 22 03:33:11 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:33:11 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:33:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:12 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.xregww
Nov 22 03:33:12 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.xregww
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.xregww"} v 0) v1
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.xregww"}]: dispatch
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.xregww"}]': finished
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:12 np0005532048 ceph-mgr[75315]: [progress INFO root] complete: finished ev bb24918a-9acb-4ac0-a59c-4197b860e7dd (Updating mgr deployment (-1 -> 1))
Nov 22 03:33:12 np0005532048 ceph-mgr[75315]: [progress INFO root] Completed event bb24918a-9acb-4ac0-a59c-4197b860e7dd (Updating mgr deployment (-1 -> 1)) in 2 seconds
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:12 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev cb7e4bdf-b5a5-489c-81bd-f259af8f6060 does not exist
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:33:12 np0005532048 ceph-mgr[75315]: [progress INFO root] Writing back 3 completed events
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: Removing daemon mgr.compute-0.xregww from compute-0 -- ports [8765]
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.xregww"}]: dispatch
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.xregww"}]': finished
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:12 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:33:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:13 np0005532048 podman[84796]: 2025-11-22 08:33:13.157658629 +0000 UTC m=+0.022659589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:13 np0005532048 podman[84796]: 2025-11-22 08:33:13.263574605 +0000 UTC m=+0.128575545 container create e6b4501f2642da2b1bf7b2914d3e85a03b01c732493c4ac00ebeeb9769ce7d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:33:13 np0005532048 systemd[1]: Started libpod-conmon-e6b4501f2642da2b1bf7b2914d3e85a03b01c732493c4ac00ebeeb9769ce7d8b.scope.
Nov 22 03:33:13 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:13 np0005532048 podman[84796]: 2025-11-22 08:33:13.544046515 +0000 UTC m=+0.409047465 container init e6b4501f2642da2b1bf7b2914d3e85a03b01c732493c4ac00ebeeb9769ce7d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:33:13 np0005532048 podman[84796]: 2025-11-22 08:33:13.553240051 +0000 UTC m=+0.418240991 container start e6b4501f2642da2b1bf7b2914d3e85a03b01c732493c4ac00ebeeb9769ce7d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_moore, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:33:13 np0005532048 relaxed_moore[84809]: 167 167
Nov 22 03:33:13 np0005532048 systemd[1]: libpod-e6b4501f2642da2b1bf7b2914d3e85a03b01c732493c4ac00ebeeb9769ce7d8b.scope: Deactivated successfully.
Nov 22 03:33:13 np0005532048 podman[84796]: 2025-11-22 08:33:13.756925272 +0000 UTC m=+0.621926212 container attach e6b4501f2642da2b1bf7b2914d3e85a03b01c732493c4ac00ebeeb9769ce7d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:33:13 np0005532048 podman[84796]: 2025-11-22 08:33:13.757649011 +0000 UTC m=+0.622649941 container died e6b4501f2642da2b1bf7b2914d3e85a03b01c732493c4ac00ebeeb9769ce7d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_moore, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:33:13 np0005532048 systemd[1]: var-lib-containers-storage-overlay-dea00b449d616bee5ad8beb91b577aa51a30f2d833e0b304b35c1dbcd7d9a0f5-merged.mount: Deactivated successfully.
Nov 22 03:33:13 np0005532048 podman[84796]: 2025-11-22 08:33:13.831692892 +0000 UTC m=+0.696693832 container remove e6b4501f2642da2b1bf7b2914d3e85a03b01c732493c4ac00ebeeb9769ce7d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_moore, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 22 03:33:13 np0005532048 ceph-mon[75021]: Removing key for mgr.compute-0.xregww
Nov 22 03:33:13 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:13 np0005532048 systemd[1]: libpod-conmon-e6b4501f2642da2b1bf7b2914d3e85a03b01c732493c4ac00ebeeb9769ce7d8b.scope: Deactivated successfully.
Nov 22 03:33:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:14 np0005532048 podman[84834]: 2025-11-22 08:33:13.95559625 +0000 UTC m=+0.021723185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:14 np0005532048 podman[84834]: 2025-11-22 08:33:14.206353369 +0000 UTC m=+0.272480284 container create a283acd1c7a1d0925b6f71efca1d1b1c94e0ab6ea44e4faea03f89dc97117438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:33:14 np0005532048 systemd[1]: Started libpod-conmon-a283acd1c7a1d0925b6f71efca1d1b1c94e0ab6ea44e4faea03f89dc97117438.scope.
Nov 22 03:33:14 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c196ef577a4f944e2a410960bb8e4d6dd737cbf81307b51085e4d8f35aac26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c196ef577a4f944e2a410960bb8e4d6dd737cbf81307b51085e4d8f35aac26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c196ef577a4f944e2a410960bb8e4d6dd737cbf81307b51085e4d8f35aac26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c196ef577a4f944e2a410960bb8e4d6dd737cbf81307b51085e4d8f35aac26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c196ef577a4f944e2a410960bb8e4d6dd737cbf81307b51085e4d8f35aac26/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:14 np0005532048 podman[84834]: 2025-11-22 08:33:14.418429637 +0000 UTC m=+0.484556572 container init a283acd1c7a1d0925b6f71efca1d1b1c94e0ab6ea44e4faea03f89dc97117438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_cray, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:33:14 np0005532048 podman[84834]: 2025-11-22 08:33:14.423867551 +0000 UTC m=+0.489994476 container start a283acd1c7a1d0925b6f71efca1d1b1c94e0ab6ea44e4faea03f89dc97117438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_cray, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:33:14 np0005532048 podman[84834]: 2025-11-22 08:33:14.427724876 +0000 UTC m=+0.493851781 container attach a283acd1c7a1d0925b6f71efca1d1b1c94e0ab6ea44e4faea03f89dc97117438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_cray, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:33:15 np0005532048 zealous_cray[84850]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:33:15 np0005532048 zealous_cray[84850]: --> relative data size: 1.0
Nov 22 03:33:15 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 03:33:15 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 4cbd1f75-f268-432e-8433-131a982bebcd
Nov 22 03:33:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "4cbd1f75-f268-432e-8433-131a982bebcd"} v 0) v1
Nov 22 03:33:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2048826220' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4cbd1f75-f268-432e-8433-131a982bebcd"}]: dispatch
Nov 22 03:33:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 22 03:33:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:33:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2048826220' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4cbd1f75-f268-432e-8433-131a982bebcd"}]': finished
Nov 22 03:33:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 22 03:33:15 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 22 03:33:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:33:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:33:15 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:33:16 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/2048826220' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4cbd1f75-f268-432e-8433-131a982bebcd"}]: dispatch
Nov 22 03:33:16 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/2048826220' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4cbd1f75-f268-432e-8433-131a982bebcd"}]': finished
Nov 22 03:33:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:16 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 03:33:16 np0005532048 lvm[84912]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 03:33:16 np0005532048 lvm[84912]: VG ceph_vg0 finished
Nov 22 03:33:16 np0005532048 zealous_cray[84850]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 22 03:33:16 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 22 03:33:16 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 22 03:33:16 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 22 03:33:16 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 22 03:33:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 22 03:33:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/491005449' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 03:33:16 np0005532048 zealous_cray[84850]: stderr: got monmap epoch 1
Nov 22 03:33:16 np0005532048 zealous_cray[84850]: --> Creating keyring file for osd.0
Nov 22 03:33:16 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 22 03:33:16 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 22 03:33:16 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 4cbd1f75-f268-432e-8433-131a982bebcd --setuser ceph --setgroup ceph
Nov 22 03:33:17 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 22 03:33:17 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 22 03:33:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:18 np0005532048 ceph-mon[75021]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 22 03:33:18 np0005532048 ceph-mon[75021]: Cluster is now healthy
Nov 22 03:33:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:19 np0005532048 zealous_cray[84850]: stderr: 2025-11-22T08:33:16.709+0000 7fe483360740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:33:19 np0005532048 zealous_cray[84850]: stderr: 2025-11-22T08:33:16.709+0000 7fe483360740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:33:19 np0005532048 zealous_cray[84850]: stderr: 2025-11-22T08:33:16.709+0000 7fe483360740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:33:19 np0005532048 zealous_cray[84850]: stderr: 2025-11-22T08:33:16.709+0000 7fe483360740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 22 03:33:19 np0005532048 zealous_cray[84850]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 22 03:33:19 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 03:33:19 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 22 03:33:19 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 22 03:33:19 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 22 03:33:19 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 22 03:33:19 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 03:33:19 np0005532048 zealous_cray[84850]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 22 03:33:19 np0005532048 zealous_cray[84850]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 22 03:33:19 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 03:33:19 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 02f6ddd3-1c9e-4a14-b90d-23afe4793555
Nov 22 03:33:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555"} v 0) v1
Nov 22 03:33:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2061718782' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555"}]: dispatch
Nov 22 03:33:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 22 03:33:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:33:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2061718782' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555"}]': finished
Nov 22 03:33:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 22 03:33:20 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 22 03:33:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:33:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:33:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:33:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:33:20 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:33:20 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:33:20 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/2061718782' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555"}]: dispatch
Nov 22 03:33:20 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/2061718782' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555"}]': finished
Nov 22 03:33:20 np0005532048 lvm[85869]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 03:33:20 np0005532048 lvm[85869]: VG ceph_vg1 finished
Nov 22 03:33:20 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 03:33:20 np0005532048 zealous_cray[84850]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 22 03:33:20 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 22 03:33:20 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 22 03:33:20 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 22 03:33:20 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 22 03:33:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 22 03:33:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3449599639' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 03:33:21 np0005532048 zealous_cray[84850]: stderr: got monmap epoch 1
Nov 22 03:33:21 np0005532048 zealous_cray[84850]: --> Creating keyring file for osd.1
Nov 22 03:33:21 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 22 03:33:21 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 22 03:33:21 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 02f6ddd3-1c9e-4a14-b90d-23afe4793555 --setuser ceph --setgroup ceph
Nov 22 03:33:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:33:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:33:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:33:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:33:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:33:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:33:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:24 np0005532048 zealous_cray[84850]: stderr: 2025-11-22T08:33:21.140+0000 7f956b99b740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:33:24 np0005532048 zealous_cray[84850]: stderr: 2025-11-22T08:33:21.140+0000 7f956b99b740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:33:24 np0005532048 zealous_cray[84850]: stderr: 2025-11-22T08:33:21.140+0000 7f956b99b740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:33:24 np0005532048 zealous_cray[84850]: stderr: 2025-11-22T08:33:21.140+0000 7f956b99b740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 22 03:33:24 np0005532048 zealous_cray[84850]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 22 03:33:24 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 03:33:24 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 22 03:33:24 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 22 03:33:24 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 22 03:33:24 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 22 03:33:24 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 03:33:24 np0005532048 zealous_cray[84850]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 22 03:33:24 np0005532048 zealous_cray[84850]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 22 03:33:24 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 03:33:24 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 975ec419-dbeb-4688-a406-de4eff9337c5
Nov 22 03:33:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "975ec419-dbeb-4688-a406-de4eff9337c5"} v 0) v1
Nov 22 03:33:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4005919928' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "975ec419-dbeb-4688-a406-de4eff9337c5"}]: dispatch
Nov 22 03:33:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 22 03:33:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:33:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4005919928' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "975ec419-dbeb-4688-a406-de4eff9337c5"}]': finished
Nov 22 03:33:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 22 03:33:25 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 22 03:33:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:33:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:33:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:33:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:33:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:33:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:33:25 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:33:25 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:33:25 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:33:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:26 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 03:33:26 np0005532048 zealous_cray[84850]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 22 03:33:26 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 22 03:33:26 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 22 03:33:26 np0005532048 lvm[86829]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 03:33:26 np0005532048 lvm[86829]: VG ceph_vg2 finished
Nov 22 03:33:26 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 22 03:33:26 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 22 03:33:26 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/4005919928' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "975ec419-dbeb-4688-a406-de4eff9337c5"}]: dispatch
Nov 22 03:33:26 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/4005919928' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "975ec419-dbeb-4688-a406-de4eff9337c5"}]': finished
Nov 22 03:33:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 22 03:33:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225517385' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 03:33:26 np0005532048 zealous_cray[84850]: stderr: got monmap epoch 1
Nov 22 03:33:26 np0005532048 zealous_cray[84850]: --> Creating keyring file for osd.2
Nov 22 03:33:26 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 22 03:33:26 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 22 03:33:26 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 975ec419-dbeb-4688-a406-de4eff9337c5 --setuser ceph --setgroup ceph
Nov 22 03:33:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v50: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:38 np0005532048 zealous_cray[84850]: stderr: 2025-11-22T08:33:26.940+0000 7fd9496cb740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:33:38 np0005532048 zealous_cray[84850]: stderr: 2025-11-22T08:33:26.940+0000 7fd9496cb740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:33:38 np0005532048 zealous_cray[84850]: stderr: 2025-11-22T08:33:26.940+0000 7fd9496cb740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:33:38 np0005532048 zealous_cray[84850]: stderr: 2025-11-22T08:33:26.941+0000 7fd9496cb740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 22 03:33:38 np0005532048 zealous_cray[84850]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 22 03:33:38 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 03:33:38 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 22 03:33:38 np0005532048 zealous_cray[84850]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 22 03:33:38 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 22 03:33:38 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 22 03:33:38 np0005532048 zealous_cray[84850]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 03:33:38 np0005532048 zealous_cray[84850]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 22 03:33:38 np0005532048 zealous_cray[84850]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 22 03:33:38 np0005532048 systemd[1]: libpod-a283acd1c7a1d0925b6f71efca1d1b1c94e0ab6ea44e4faea03f89dc97117438.scope: Deactivated successfully.
Nov 22 03:33:38 np0005532048 systemd[1]: libpod-a283acd1c7a1d0925b6f71efca1d1b1c94e0ab6ea44e4faea03f89dc97117438.scope: Consumed 5.897s CPU time.
Nov 22 03:33:38 np0005532048 podman[87757]: 2025-11-22 08:33:38.788359363 +0000 UTC m=+0.039346507 container died a283acd1c7a1d0925b6f71efca1d1b1c94e0ab6ea44e4faea03f89dc97117438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_cray, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 03:33:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v51: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:40 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c0c196ef577a4f944e2a410960bb8e4d6dd737cbf81307b51085e4d8f35aac26-merged.mount: Deactivated successfully.
Nov 22 03:33:40 np0005532048 python3[87797]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:33:40 np0005532048 podman[87757]: 2025-11-22 08:33:40.796941716 +0000 UTC m=+2.047928860 container remove a283acd1c7a1d0925b6f71efca1d1b1c94e0ab6ea44e4faea03f89dc97117438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_cray, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 03:33:40 np0005532048 systemd[1]: libpod-conmon-a283acd1c7a1d0925b6f71efca1d1b1c94e0ab6ea44e4faea03f89dc97117438.scope: Deactivated successfully.
Nov 22 03:33:40 np0005532048 podman[87799]: 2025-11-22 08:33:40.848577611 +0000 UTC m=+0.208057217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:33:40 np0005532048 podman[87799]: 2025-11-22 08:33:40.997765356 +0000 UTC m=+0.357244942 container create 0a4de3542fb863ec1322bb753266f53bd7c17f5be1c19d5bfa4c84abc68ab323 (image=quay.io/ceph/ceph:v18, name=cool_proskuriakova, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 03:33:41 np0005532048 systemd[1]: Started libpod-conmon-0a4de3542fb863ec1322bb753266f53bd7c17f5be1c19d5bfa4c84abc68ab323.scope.
Nov 22 03:33:41 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def61b57fb74503d36640968da3754d9e6b8583d7df7dbb2a529be73bedbafd2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def61b57fb74503d36640968da3754d9e6b8583d7df7dbb2a529be73bedbafd2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def61b57fb74503d36640968da3754d9e6b8583d7df7dbb2a529be73bedbafd2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:41 np0005532048 podman[87799]: 2025-11-22 08:33:41.118711826 +0000 UTC m=+0.478191492 container init 0a4de3542fb863ec1322bb753266f53bd7c17f5be1c19d5bfa4c84abc68ab323 (image=quay.io/ceph/ceph:v18, name=cool_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 03:33:41 np0005532048 podman[87799]: 2025-11-22 08:33:41.128183585 +0000 UTC m=+0.487663171 container start 0a4de3542fb863ec1322bb753266f53bd7c17f5be1c19d5bfa4c84abc68ab323 (image=quay.io/ceph/ceph:v18, name=cool_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:33:41 np0005532048 podman[87799]: 2025-11-22 08:33:41.170612817 +0000 UTC m=+0.530092403 container attach 0a4de3542fb863ec1322bb753266f53bd7c17f5be1c19d5bfa4c84abc68ab323 (image=quay.io/ceph/ceph:v18, name=cool_proskuriakova, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:33:41 np0005532048 podman[87956]: 2025-11-22 08:33:41.420376067 +0000 UTC m=+0.050261033 container create edcb07d06da3b9da5b70acae71ac692ec1a72e9c525662b505e781f648dc918c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_buck, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:33:41 np0005532048 systemd[1]: Started libpod-conmon-edcb07d06da3b9da5b70acae71ac692ec1a72e9c525662b505e781f648dc918c.scope.
Nov 22 03:33:41 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:41 np0005532048 podman[87956]: 2025-11-22 08:33:41.494494298 +0000 UTC m=+0.124379284 container init edcb07d06da3b9da5b70acae71ac692ec1a72e9c525662b505e781f648dc918c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_buck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 03:33:41 np0005532048 podman[87956]: 2025-11-22 08:33:41.403731182 +0000 UTC m=+0.033616178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:41 np0005532048 podman[87956]: 2025-11-22 08:33:41.499827198 +0000 UTC m=+0.129712164 container start edcb07d06da3b9da5b70acae71ac692ec1a72e9c525662b505e781f648dc918c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:33:41 np0005532048 loving_buck[87991]: 167 167
Nov 22 03:33:41 np0005532048 podman[87956]: 2025-11-22 08:33:41.504613094 +0000 UTC m=+0.134498060 container attach edcb07d06da3b9da5b70acae71ac692ec1a72e9c525662b505e781f648dc918c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:33:41 np0005532048 systemd[1]: libpod-edcb07d06da3b9da5b70acae71ac692ec1a72e9c525662b505e781f648dc918c.scope: Deactivated successfully.
Nov 22 03:33:41 np0005532048 podman[87956]: 2025-11-22 08:33:41.505394473 +0000 UTC m=+0.135279439 container died edcb07d06da3b9da5b70acae71ac692ec1a72e9c525662b505e781f648dc918c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_buck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:33:41 np0005532048 systemd[1]: var-lib-containers-storage-overlay-fd103b4d4821e62b2b228d7797aedc7768f08621534ab94d6367623932211b7a-merged.mount: Deactivated successfully.
Nov 22 03:33:41 np0005532048 podman[87956]: 2025-11-22 08:33:41.549228698 +0000 UTC m=+0.179113664 container remove edcb07d06da3b9da5b70acae71ac692ec1a72e9c525662b505e781f648dc918c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_buck, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:33:41 np0005532048 systemd[1]: libpod-conmon-edcb07d06da3b9da5b70acae71ac692ec1a72e9c525662b505e781f648dc918c.scope: Deactivated successfully.
Nov 22 03:33:41 np0005532048 podman[88013]: 2025-11-22 08:33:41.748210914 +0000 UTC m=+0.063998836 container create dd442b2c72b593efe2eb24f675ef5d6e8c01638a977a6d52b1484d181fab7d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galileo, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 03:33:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 03:33:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2093435105' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 03:33:41 np0005532048 cool_proskuriakova[87891]: 
Nov 22 03:33:41 np0005532048 cool_proskuriakova[87891]: {"fsid":"34829716-a12c-57a6-8915-c1aa615c9d8a","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":164,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":6,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1763800405,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-22T08:32:54.036030+0000","services":{}},"progress_events":{}}
Nov 22 03:33:41 np0005532048 systemd[1]: libpod-0a4de3542fb863ec1322bb753266f53bd7c17f5be1c19d5bfa4c84abc68ab323.scope: Deactivated successfully.
Nov 22 03:33:41 np0005532048 podman[87799]: 2025-11-22 08:33:41.770594437 +0000 UTC m=+1.130074043 container died 0a4de3542fb863ec1322bb753266f53bd7c17f5be1c19d5bfa4c84abc68ab323 (image=quay.io/ceph/ceph:v18, name=cool_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:33:41 np0005532048 systemd[1]: Started libpod-conmon-dd442b2c72b593efe2eb24f675ef5d6e8c01638a977a6d52b1484d181fab7d68.scope.
Nov 22 03:33:41 np0005532048 podman[88013]: 2025-11-22 08:33:41.722817457 +0000 UTC m=+0.038605349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:41 np0005532048 systemd[1]: var-lib-containers-storage-overlay-def61b57fb74503d36640968da3754d9e6b8583d7df7dbb2a529be73bedbafd2-merged.mount: Deactivated successfully.
Nov 22 03:33:41 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1587db0e495fb03f3f2db74a555862236ab90a279163ceceef9561e5f01a46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1587db0e495fb03f3f2db74a555862236ab90a279163ceceef9561e5f01a46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1587db0e495fb03f3f2db74a555862236ab90a279163ceceef9561e5f01a46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1587db0e495fb03f3f2db74a555862236ab90a279163ceceef9561e5f01a46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:41 np0005532048 podman[87799]: 2025-11-22 08:33:41.852966519 +0000 UTC m=+1.212446085 container remove 0a4de3542fb863ec1322bb753266f53bd7c17f5be1c19d5bfa4c84abc68ab323 (image=quay.io/ceph/ceph:v18, name=cool_proskuriakova, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 03:33:41 np0005532048 podman[88013]: 2025-11-22 08:33:41.86079507 +0000 UTC m=+0.176582902 container init dd442b2c72b593efe2eb24f675ef5d6e8c01638a977a6d52b1484d181fab7d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galileo, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 03:33:41 np0005532048 systemd[1]: libpod-conmon-0a4de3542fb863ec1322bb753266f53bd7c17f5be1c19d5bfa4c84abc68ab323.scope: Deactivated successfully.
Nov 22 03:33:41 np0005532048 podman[88013]: 2025-11-22 08:33:41.872522035 +0000 UTC m=+0.188309857 container start dd442b2c72b593efe2eb24f675ef5d6e8c01638a977a6d52b1484d181fab7d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galileo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:33:41 np0005532048 podman[88013]: 2025-11-22 08:33:41.888265367 +0000 UTC m=+0.204053269 container attach dd442b2c72b593efe2eb24f675ef5d6e8c01638a977a6d52b1484d181fab7d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:33:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v52: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]: {
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:    "0": [
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:        {
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "devices": [
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "/dev/loop3"
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            ],
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "lv_name": "ceph_lv0",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "lv_size": "21470642176",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "name": "ceph_lv0",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "tags": {
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.cluster_name": "ceph",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.crush_device_class": "",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.encrypted": "0",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.osd_id": "0",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.type": "block",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.vdo": "0"
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            },
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "type": "block",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "vg_name": "ceph_vg0"
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:        }
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:    ],
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:    "1": [
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:        {
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "devices": [
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "/dev/loop4"
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            ],
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "lv_name": "ceph_lv1",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "lv_size": "21470642176",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "name": "ceph_lv1",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "tags": {
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.cluster_name": "ceph",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.crush_device_class": "",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.encrypted": "0",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.osd_id": "1",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.type": "block",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.vdo": "0"
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            },
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "type": "block",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "vg_name": "ceph_vg1"
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:        }
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:    ],
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:    "2": [
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:        {
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "devices": [
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "/dev/loop5"
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            ],
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "lv_name": "ceph_lv2",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "lv_size": "21470642176",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "name": "ceph_lv2",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "tags": {
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.cluster_name": "ceph",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.crush_device_class": "",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.encrypted": "0",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.osd_id": "2",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.type": "block",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:                "ceph.vdo": "0"
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            },
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "type": "block",
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:            "vg_name": "ceph_vg2"
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:        }
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]:    ]
Nov 22 03:33:42 np0005532048 relaxed_galileo[88043]: }
Nov 22 03:33:42 np0005532048 systemd[1]: libpod-dd442b2c72b593efe2eb24f675ef5d6e8c01638a977a6d52b1484d181fab7d68.scope: Deactivated successfully.
Nov 22 03:33:42 np0005532048 podman[88013]: 2025-11-22 08:33:42.651627548 +0000 UTC m=+0.967415360 container died dd442b2c72b593efe2eb24f675ef5d6e8c01638a977a6d52b1484d181fab7d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:33:42 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1e1587db0e495fb03f3f2db74a555862236ab90a279163ceceef9561e5f01a46-merged.mount: Deactivated successfully.
Nov 22 03:33:42 np0005532048 podman[88013]: 2025-11-22 08:33:42.726200291 +0000 UTC m=+1.041988093 container remove dd442b2c72b593efe2eb24f675ef5d6e8c01638a977a6d52b1484d181fab7d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galileo, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:33:42 np0005532048 systemd[1]: libpod-conmon-dd442b2c72b593efe2eb24f675ef5d6e8c01638a977a6d52b1484d181fab7d68.scope: Deactivated successfully.
Nov 22 03:33:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 22 03:33:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 22 03:33:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:33:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:33:42 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 22 03:33:42 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 22 03:33:42 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 22 03:33:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:43 np0005532048 podman[88208]: 2025-11-22 08:33:43.37387748 +0000 UTC m=+0.043640691 container create 6ed94a17f1ab3cc514c3c2995c49854192d3ddc5e8016836690340f716ff1fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:33:43 np0005532048 systemd[1]: Started libpod-conmon-6ed94a17f1ab3cc514c3c2995c49854192d3ddc5e8016836690340f716ff1fcd.scope.
Nov 22 03:33:43 np0005532048 podman[88208]: 2025-11-22 08:33:43.353452105 +0000 UTC m=+0.023215306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:43 np0005532048 podman[88208]: 2025-11-22 08:33:43.465213561 +0000 UTC m=+0.134976762 container init 6ed94a17f1ab3cc514c3c2995c49854192d3ddc5e8016836690340f716ff1fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:33:43 np0005532048 podman[88208]: 2025-11-22 08:33:43.473567224 +0000 UTC m=+0.143330405 container start 6ed94a17f1ab3cc514c3c2995c49854192d3ddc5e8016836690340f716ff1fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:33:43 np0005532048 podman[88208]: 2025-11-22 08:33:43.477397546 +0000 UTC m=+0.147160847 container attach 6ed94a17f1ab3cc514c3c2995c49854192d3ddc5e8016836690340f716ff1fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:33:43 np0005532048 pensive_hugle[88224]: 167 167
Nov 22 03:33:43 np0005532048 systemd[1]: libpod-6ed94a17f1ab3cc514c3c2995c49854192d3ddc5e8016836690340f716ff1fcd.scope: Deactivated successfully.
Nov 22 03:33:43 np0005532048 podman[88208]: 2025-11-22 08:33:43.479457327 +0000 UTC m=+0.149220508 container died 6ed94a17f1ab3cc514c3c2995c49854192d3ddc5e8016836690340f716ff1fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 22 03:33:43 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8859e0adf5bb8c9a7bcc934a1366575cd345c522e9507be74b2033b291ba8c4a-merged.mount: Deactivated successfully.
Nov 22 03:33:43 np0005532048 podman[88208]: 2025-11-22 08:33:43.524604134 +0000 UTC m=+0.194367315 container remove 6ed94a17f1ab3cc514c3c2995c49854192d3ddc5e8016836690340f716ff1fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:33:43 np0005532048 systemd[1]: libpod-conmon-6ed94a17f1ab3cc514c3c2995c49854192d3ddc5e8016836690340f716ff1fcd.scope: Deactivated successfully.
Nov 22 03:33:43 np0005532048 ceph-mon[75021]: Deploying daemon osd.0 on compute-0
Nov 22 03:33:43 np0005532048 podman[88258]: 2025-11-22 08:33:43.812943571 +0000 UTC m=+0.044430421 container create 646e12e7c7dd885ea18508f442db144f9ca65ce376996ade1f2eb126950ca527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate-test, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:33:43 np0005532048 systemd[1]: Started libpod-conmon-646e12e7c7dd885ea18508f442db144f9ca65ce376996ade1f2eb126950ca527.scope.
Nov 22 03:33:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21a1179330d91fb5dfb6f6bcd6e0e4453ef3c36775680e0062afaada97f4dda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:43 np0005532048 podman[88258]: 2025-11-22 08:33:43.795096257 +0000 UTC m=+0.026583107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21a1179330d91fb5dfb6f6bcd6e0e4453ef3c36775680e0062afaada97f4dda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21a1179330d91fb5dfb6f6bcd6e0e4453ef3c36775680e0062afaada97f4dda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21a1179330d91fb5dfb6f6bcd6e0e4453ef3c36775680e0062afaada97f4dda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21a1179330d91fb5dfb6f6bcd6e0e4453ef3c36775680e0062afaada97f4dda/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:43 np0005532048 podman[88258]: 2025-11-22 08:33:43.904179428 +0000 UTC m=+0.135666318 container init 646e12e7c7dd885ea18508f442db144f9ca65ce376996ade1f2eb126950ca527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:33:43 np0005532048 podman[88258]: 2025-11-22 08:33:43.912196743 +0000 UTC m=+0.143683633 container start 646e12e7c7dd885ea18508f442db144f9ca65ce376996ade1f2eb126950ca527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate-test, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:33:43 np0005532048 podman[88258]: 2025-11-22 08:33:43.917339918 +0000 UTC m=+0.148826808 container attach 646e12e7c7dd885ea18508f442db144f9ca65ce376996ade1f2eb126950ca527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:33:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v53: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate-test[88274]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 22 03:33:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate-test[88274]:                            [--no-systemd] [--no-tmpfs]
Nov 22 03:33:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate-test[88274]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 22 03:33:44 np0005532048 systemd[1]: libpod-646e12e7c7dd885ea18508f442db144f9ca65ce376996ade1f2eb126950ca527.scope: Deactivated successfully.
Nov 22 03:33:44 np0005532048 podman[88258]: 2025-11-22 08:33:44.586712046 +0000 UTC m=+0.818198886 container died 646e12e7c7dd885ea18508f442db144f9ca65ce376996ade1f2eb126950ca527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:33:44 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d21a1179330d91fb5dfb6f6bcd6e0e4453ef3c36775680e0062afaada97f4dda-merged.mount: Deactivated successfully.
Nov 22 03:33:44 np0005532048 podman[88258]: 2025-11-22 08:33:44.657838803 +0000 UTC m=+0.889325653 container remove 646e12e7c7dd885ea18508f442db144f9ca65ce376996ade1f2eb126950ca527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate-test, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:33:44 np0005532048 systemd[1]: libpod-conmon-646e12e7c7dd885ea18508f442db144f9ca65ce376996ade1f2eb126950ca527.scope: Deactivated successfully.
Nov 22 03:33:44 np0005532048 systemd[1]: Reloading.
Nov 22 03:33:44 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:33:44 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:33:45 np0005532048 systemd[1]: Reloading.
Nov 22 03:33:45 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:33:45 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:33:45 np0005532048 systemd[1]: Starting Ceph osd.0 for 34829716-a12c-57a6-8915-c1aa615c9d8a...
Nov 22 03:33:45 np0005532048 podman[88432]: 2025-11-22 08:33:45.65760479 +0000 UTC m=+0.060621004 container create 898ef7069aec5876cffdd6237052c5aa998830b59c537b4a6bd71aca6c579ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:33:45 np0005532048 podman[88432]: 2025-11-22 08:33:45.622524807 +0000 UTC m=+0.025541011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:45 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2d864bcb76a54ecdeefef40a4527989cf3f33153cfbd158817733da334379e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2d864bcb76a54ecdeefef40a4527989cf3f33153cfbd158817733da334379e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2d864bcb76a54ecdeefef40a4527989cf3f33153cfbd158817733da334379e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2d864bcb76a54ecdeefef40a4527989cf3f33153cfbd158817733da334379e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2d864bcb76a54ecdeefef40a4527989cf3f33153cfbd158817733da334379e/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:45 np0005532048 podman[88432]: 2025-11-22 08:33:45.805709679 +0000 UTC m=+0.208725873 container init 898ef7069aec5876cffdd6237052c5aa998830b59c537b4a6bd71aca6c579ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 22 03:33:45 np0005532048 podman[88432]: 2025-11-22 08:33:45.811215303 +0000 UTC m=+0.214231477 container start 898ef7069aec5876cffdd6237052c5aa998830b59c537b4a6bd71aca6c579ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:33:45 np0005532048 podman[88432]: 2025-11-22 08:33:45.841668103 +0000 UTC m=+0.244684367 container attach 898ef7069aec5876cffdd6237052c5aa998830b59c537b4a6bd71aca6c579ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 03:33:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v54: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate[88447]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 03:33:46 np0005532048 bash[88432]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 03:33:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate[88447]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 22 03:33:46 np0005532048 bash[88432]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 22 03:33:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate[88447]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 22 03:33:46 np0005532048 bash[88432]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 22 03:33:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate[88447]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 22 03:33:46 np0005532048 bash[88432]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 22 03:33:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate[88447]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 22 03:33:46 np0005532048 bash[88432]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 22 03:33:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate[88447]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 03:33:46 np0005532048 bash[88432]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 03:33:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate[88447]: --> ceph-volume raw activate successful for osd ID: 0
Nov 22 03:33:46 np0005532048 bash[88432]: --> ceph-volume raw activate successful for osd ID: 0
Nov 22 03:33:46 np0005532048 systemd[1]: libpod-898ef7069aec5876cffdd6237052c5aa998830b59c537b4a6bd71aca6c579ba3.scope: Deactivated successfully.
Nov 22 03:33:46 np0005532048 podman[88432]: 2025-11-22 08:33:46.86200966 +0000 UTC m=+1.265025834 container died 898ef7069aec5876cffdd6237052c5aa998830b59c537b4a6bd71aca6c579ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:33:46 np0005532048 systemd[1]: libpod-898ef7069aec5876cffdd6237052c5aa998830b59c537b4a6bd71aca6c579ba3.scope: Consumed 1.065s CPU time.
Nov 22 03:33:46 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8c2d864bcb76a54ecdeefef40a4527989cf3f33153cfbd158817733da334379e-merged.mount: Deactivated successfully.
Nov 22 03:33:46 np0005532048 podman[88432]: 2025-11-22 08:33:46.943977532 +0000 UTC m=+1.346993706 container remove 898ef7069aec5876cffdd6237052c5aa998830b59c537b4a6bd71aca6c579ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 03:33:47 np0005532048 podman[88636]: 2025-11-22 08:33:47.135119047 +0000 UTC m=+0.044669546 container create b5e05752dfef24c020dfb75abd72bd6cfd2fc209fc2ea82ffbec0a6f22692115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 03:33:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae99065882f690732c9f2818a411485cd3844f5da2fb5cf2364ae366c2bdf9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae99065882f690732c9f2818a411485cd3844f5da2fb5cf2364ae366c2bdf9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae99065882f690732c9f2818a411485cd3844f5da2fb5cf2364ae366c2bdf9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae99065882f690732c9f2818a411485cd3844f5da2fb5cf2364ae366c2bdf9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae99065882f690732c9f2818a411485cd3844f5da2fb5cf2364ae366c2bdf9f/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:47 np0005532048 podman[88636]: 2025-11-22 08:33:47.197548863 +0000 UTC m=+0.107099352 container init b5e05752dfef24c020dfb75abd72bd6cfd2fc209fc2ea82ffbec0a6f22692115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:33:47 np0005532048 podman[88636]: 2025-11-22 08:33:47.203596781 +0000 UTC m=+0.113147260 container start b5e05752dfef24c020dfb75abd72bd6cfd2fc209fc2ea82ffbec0a6f22692115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:33:47 np0005532048 bash[88636]: b5e05752dfef24c020dfb75abd72bd6cfd2fc209fc2ea82ffbec0a6f22692115
Nov 22 03:33:47 np0005532048 podman[88636]: 2025-11-22 08:33:47.114871195 +0000 UTC m=+0.024421664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:47 np0005532048 systemd[1]: Started Ceph osd.0 for 34829716-a12c-57a6-8915-c1aa615c9d8a.
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: pidfile_write: ignore empty --pid-file
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bdev(0x56207e2b1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bdev(0x56207e2b1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bdev(0x56207e2b1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bdev(0x56207e2b1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bdev(0x56207f0e9800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bdev(0x56207f0e9800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bdev(0x56207f0e9800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bdev(0x56207f0e9800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bdev(0x56207f0e9800 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 03:33:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:33:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:33:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 22 03:33:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 22 03:33:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:33:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:33:47 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 22 03:33:47 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bdev(0x56207e2b1800 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: load: jerasure load: lrc 
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bdev(0x56207f16ac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bdev(0x56207f16ac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bdev(0x56207f16ac00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bdev(0x56207f16ac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:33:47 np0005532048 ceph-osd[88656]: bdev(0x56207f16ac00 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 03:33:47 np0005532048 podman[88815]: 2025-11-22 08:33:47.862255568 +0000 UTC m=+0.044455712 container create 1b677846c5baf5e99435d4cbc27d9fbe6ef9802c809bd526d322c429dd6ff756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:33:47 np0005532048 systemd[1]: Started libpod-conmon-1b677846c5baf5e99435d4cbc27d9fbe6ef9802c809bd526d322c429dd6ff756.scope.
Nov 22 03:33:47 np0005532048 podman[88815]: 2025-11-22 08:33:47.840611002 +0000 UTC m=+0.022811176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:47 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:47 np0005532048 podman[88815]: 2025-11-22 08:33:47.962491734 +0000 UTC m=+0.144691898 container init 1b677846c5baf5e99435d4cbc27d9fbe6ef9802c809bd526d322c429dd6ff756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gagarin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:33:47 np0005532048 podman[88815]: 2025-11-22 08:33:47.973018439 +0000 UTC m=+0.155218583 container start 1b677846c5baf5e99435d4cbc27d9fbe6ef9802c809bd526d322c429dd6ff756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 03:33:47 np0005532048 podman[88815]: 2025-11-22 08:33:47.977204101 +0000 UTC m=+0.159404275 container attach 1b677846c5baf5e99435d4cbc27d9fbe6ef9802c809bd526d322c429dd6ff756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gagarin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:33:47 np0005532048 xenodochial_gagarin[88832]: 167 167
Nov 22 03:33:47 np0005532048 systemd[1]: libpod-1b677846c5baf5e99435d4cbc27d9fbe6ef9802c809bd526d322c429dd6ff756.scope: Deactivated successfully.
Nov 22 03:33:47 np0005532048 conmon[88832]: conmon 1b677846c5baf5e99435 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1b677846c5baf5e99435d4cbc27d9fbe6ef9802c809bd526d322c429dd6ff756.scope/container/memory.events
Nov 22 03:33:47 np0005532048 podman[88815]: 2025-11-22 08:33:47.982551801 +0000 UTC m=+0.164751945 container died 1b677846c5baf5e99435d4cbc27d9fbe6ef9802c809bd526d322c429dd6ff756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gagarin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:33:48 np0005532048 systemd[1]: var-lib-containers-storage-overlay-66c9c559478bc631c21c26f891a60ca3f343254d3bbcf2790fb5451ee55b92e8-merged.mount: Deactivated successfully.
Nov 22 03:33:48 np0005532048 podman[88815]: 2025-11-22 08:33:48.024770477 +0000 UTC m=+0.206970621 container remove 1b677846c5baf5e99435d4cbc27d9fbe6ef9802c809bd526d322c429dd6ff756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 03:33:48 np0005532048 systemd[1]: libpod-conmon-1b677846c5baf5e99435d4cbc27d9fbe6ef9802c809bd526d322c429dd6ff756.scope: Deactivated successfully.
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16ac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16ac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16ac00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16ac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16ac00 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 03:33:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v55: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16ac00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16ac00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16ac00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16ac00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16b400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16b400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16b400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16b400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluefs mount
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluefs mount shared_bdev_used = 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: RocksDB version: 7.9.2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Git sha 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: DB SUMMARY
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: DB Session ID:  T5OJ8Q5XUCNO35NHP6MW
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: CURRENT file:  CURRENT
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                         Options.error_if_exists: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.create_if_missing: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                                     Options.env: 0x56207f13bc70
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                                Options.info_log: 0x56207e3388a0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                              Options.statistics: (nil)
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.use_fsync: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                              Options.db_log_dir: 
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.write_buffer_manager: 0x56207f244460
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.unordered_write: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.row_cache: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                              Options.wal_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.two_write_queues: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.wal_compression: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.atomic_flush: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.max_background_jobs: 4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.max_background_compactions: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.max_subcompactions: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.max_open_files: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Compression algorithms supported:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: #011kZSTD supported: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: #011kXpressCompression supported: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: #011kBZip2Compression supported: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: #011kLZ4Compression supported: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: #011kZlibCompression supported: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: #011kSnappyCompression supported: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e3382c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56207e3251f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e3382c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56207e3251f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e3382c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56207e3251f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e3382c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56207e3251f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e3382c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56207e3251f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e3382c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56207e3251f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e3382c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56207e3251f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e338240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56207e325090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e338240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56207e325090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e338240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56207e325090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a1e48fec-7fb4-48e8-87bd-7de1a3ab1730
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800428343696, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800428343928, "job": 1, "event": "recovery_finished"}
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: freelist init
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: freelist _read_cfg
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluefs umount
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16b400 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 03:33:48 np0005532048 podman[88869]: 2025-11-22 08:33:48.287376499 +0000 UTC m=+0.024504967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 22 03:33:48 np0005532048 ceph-mon[75021]: Deploying daemon osd.1 on compute-0
Nov 22 03:33:48 np0005532048 podman[88869]: 2025-11-22 08:33:48.475223034 +0000 UTC m=+0.212351482 container create bdb9a8905e4ac1aeccf1ea185ad8a2dfea13a0fb212e9286c689609f25f697ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:33:48 np0005532048 systemd[1]: Started libpod-conmon-bdb9a8905e4ac1aeccf1ea185ad8a2dfea13a0fb212e9286c689609f25f697ab.scope.
Nov 22 03:33:48 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01d2582f39852f478e2893c28acc3030a65c73791b21c64b2ca1785ed99df96f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01d2582f39852f478e2893c28acc3030a65c73791b21c64b2ca1785ed99df96f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01d2582f39852f478e2893c28acc3030a65c73791b21c64b2ca1785ed99df96f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01d2582f39852f478e2893c28acc3030a65c73791b21c64b2ca1785ed99df96f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01d2582f39852f478e2893c28acc3030a65c73791b21c64b2ca1785ed99df96f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16b400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16b400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16b400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bdev(0x56207f16b400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluefs mount
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluefs mount shared_bdev_used = 4718592
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: RocksDB version: 7.9.2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Git sha 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: DB SUMMARY
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: DB Session ID:  T5OJ8Q5XUCNO35NHP6MX
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: CURRENT file:  CURRENT
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                         Options.error_if_exists: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.create_if_missing: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                                     Options.env: 0x56207f2ec380
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                                Options.info_log: 0x56207e32eb40
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                              Options.statistics: (nil)
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.use_fsync: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                              Options.db_log_dir: 
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.write_buffer_manager: 0x56207f2446e0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.unordered_write: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.row_cache: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                              Options.wal_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.two_write_queues: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.wal_compression: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.atomic_flush: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.max_background_jobs: 4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.max_background_compactions: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.max_subcompactions: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.max_open_files: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Compression algorithms supported:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: 	kZSTD supported: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: 	kXpressCompression supported: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: 	kBZip2Compression supported: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: 	kLZ4Compression supported: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: 	kZlibCompression supported: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: 	kSnappyCompression supported: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e32f220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56207e3251f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e32f220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56207e3251f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e32f220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56207e3251f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e32f220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56207e3251f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e32f220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56207e3251f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 podman[88869]: 2025-11-22 08:33:48.615903523 +0000 UTC m=+0.353031981 container init bdb9a8905e4ac1aeccf1ea185ad8a2dfea13a0fb212e9286c689609f25f697ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 03:33:48 np0005532048 podman[88869]: 2025-11-22 08:33:48.622831951 +0000 UTC m=+0.359960389 container start bdb9a8905e4ac1aeccf1ea185ad8a2dfea13a0fb212e9286c689609f25f697ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate-test, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 podman[88869]: 2025-11-22 08:33:48.626380178 +0000 UTC m=+0.363508656 container attach bdb9a8905e4ac1aeccf1ea185ad8a2dfea13a0fb212e9286c689609f25f697ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate-test, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e32f220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56207e3251f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e32f220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56207e3251f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e32f160)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56207e325090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e32f160)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56207e325090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56207e32f160)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56207e325090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a1e48fec-7fb4-48e8-87bd-7de1a3ab1730
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800428628858, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800428634053, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800428, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1e48fec-7fb4-48e8-87bd-7de1a3ab1730", "db_session_id": "T5OJ8Q5XUCNO35NHP6MX", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800428649719, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800428, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1e48fec-7fb4-48e8-87bd-7de1a3ab1730", "db_session_id": "T5OJ8Q5XUCNO35NHP6MX", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800428654125, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800428, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a1e48fec-7fb4-48e8-87bd-7de1a3ab1730", "db_session_id": "T5OJ8Q5XUCNO35NHP6MX", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800428656052, "job": 1, "event": "recovery_finished"}
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56207e493c00
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: DB pointer 0x56207f22da00
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 460.80 MB usag
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: _get_class not permitted to load lua
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: _get_class not permitted to load sdk
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: _get_class not permitted to load test_remote_reads
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: osd.0 0 load_pgs
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: osd.0 0 load_pgs opened 0 pgs
Nov 22 03:33:48 np0005532048 ceph-osd[88656]: osd.0 0 log_to_monitors true
Nov 22 03:33:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0[88652]: 2025-11-22T08:33:48.708+0000 7f5c6960f740 -1 osd.0 0 log_to_monitors true
Nov 22 03:33:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 22 03:33:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2651910451,v1:192.168.122.100:6803/2651910451]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 22 03:33:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate-test[89079]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 22 03:33:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate-test[89079]:                            [--no-systemd] [--no-tmpfs]
Nov 22 03:33:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate-test[89079]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 22 03:33:49 np0005532048 systemd[1]: libpod-bdb9a8905e4ac1aeccf1ea185ad8a2dfea13a0fb212e9286c689609f25f697ab.scope: Deactivated successfully.
Nov 22 03:33:49 np0005532048 conmon[89079]: conmon bdb9a8905e4ac1aeccf1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bdb9a8905e4ac1aeccf1ea185ad8a2dfea13a0fb212e9286c689609f25f697ab.scope/container/memory.events
Nov 22 03:33:49 np0005532048 podman[88869]: 2025-11-22 08:33:49.257883674 +0000 UTC m=+0.995012112 container died bdb9a8905e4ac1aeccf1ea185ad8a2dfea13a0fb212e9286c689609f25f697ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 03:33:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay-01d2582f39852f478e2893c28acc3030a65c73791b21c64b2ca1785ed99df96f-merged.mount: Deactivated successfully.
Nov 22 03:33:49 np0005532048 podman[88869]: 2025-11-22 08:33:49.331440642 +0000 UTC m=+1.068569080 container remove bdb9a8905e4ac1aeccf1ea185ad8a2dfea13a0fb212e9286c689609f25f697ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate-test, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:33:49 np0005532048 systemd[1]: libpod-conmon-bdb9a8905e4ac1aeccf1ea185ad8a2dfea13a0fb212e9286c689609f25f697ab.scope: Deactivated successfully.
Nov 22 03:33:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 22 03:33:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:33:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2651910451,v1:192.168.122.100:6803/2651910451]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 22 03:33:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 22 03:33:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 22 03:33:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 22 03:33:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2651910451,v1:192.168.122.100:6803/2651910451]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 03:33:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 22 03:33:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:33:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:33:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:33:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:33:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:33:49 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:33:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:33:49 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:33:49 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:33:49 np0005532048 ceph-mon[75021]: from='osd.0 [v2:192.168.122.100:6802/2651910451,v1:192.168.122.100:6803/2651910451]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 22 03:33:49 np0005532048 systemd[1]: Reloading.
Nov 22 03:33:49 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:33:49 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:33:49 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 22 03:33:49 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 22 03:33:49 np0005532048 systemd[1]: Reloading.
Nov 22 03:33:49 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:33:49 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:33:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v57: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:50 np0005532048 systemd[1]: Starting Ceph osd.1 for 34829716-a12c-57a6-8915-c1aa615c9d8a...
Nov 22 03:33:50 np0005532048 podman[89456]: 2025-11-22 08:33:50.355864267 +0000 UTC m=+0.033628768 container create e258b43b70fa1ecf57fa0794d44d7cacebc34904b27886d783af672c7ec9c2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 03:33:50 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25862fb0a9b08d9a26ff505ed317608e3f7efc5345549dae107e36062ad087a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25862fb0a9b08d9a26ff505ed317608e3f7efc5345549dae107e36062ad087a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25862fb0a9b08d9a26ff505ed317608e3f7efc5345549dae107e36062ad087a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25862fb0a9b08d9a26ff505ed317608e3f7efc5345549dae107e36062ad087a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25862fb0a9b08d9a26ff505ed317608e3f7efc5345549dae107e36062ad087a0/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:50 np0005532048 podman[89456]: 2025-11-22 08:33:50.425834668 +0000 UTC m=+0.103599169 container init e258b43b70fa1ecf57fa0794d44d7cacebc34904b27886d783af672c7ec9c2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:33:50 np0005532048 podman[89456]: 2025-11-22 08:33:50.432752726 +0000 UTC m=+0.110517227 container start e258b43b70fa1ecf57fa0794d44d7cacebc34904b27886d783af672c7ec9c2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 22 03:33:50 np0005532048 podman[89456]: 2025-11-22 08:33:50.341244833 +0000 UTC m=+0.019009354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:50 np0005532048 podman[89456]: 2025-11-22 08:33:50.438204269 +0000 UTC m=+0.115968770 container attach e258b43b70fa1ecf57fa0794d44d7cacebc34904b27886d783af672c7ec9c2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 03:33:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 22 03:33:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:33:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2651910451,v1:192.168.122.100:6803/2651910451]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 03:33:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 22 03:33:50 np0005532048 ceph-osd[88656]: osd.0 0 done with init, starting boot process
Nov 22 03:33:50 np0005532048 ceph-osd[88656]: osd.0 0 start_boot
Nov 22 03:33:50 np0005532048 ceph-osd[88656]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 22 03:33:50 np0005532048 ceph-osd[88656]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 22 03:33:50 np0005532048 ceph-osd[88656]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 22 03:33:50 np0005532048 ceph-osd[88656]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 22 03:33:50 np0005532048 ceph-osd[88656]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 22 03:33:50 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 22 03:33:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:33:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:33:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:33:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:33:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:33:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:33:50 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:33:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:33:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:33:50 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2651910451; not ready for session (expect reconnect)
Nov 22 03:33:50 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:33:50 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:33:50 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:33:50 np0005532048 ceph-mon[75021]: from='osd.0 [v2:192.168.122.100:6802/2651910451,v1:192.168.122.100:6803/2651910451]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 22 03:33:50 np0005532048 ceph-mon[75021]: from='osd.0 [v2:192.168.122.100:6802/2651910451,v1:192.168.122.100:6803/2651910451]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 03:33:50 np0005532048 ceph-mon[75021]: from='osd.0 [v2:192.168.122.100:6802/2651910451,v1:192.168.122.100:6803/2651910451]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 03:33:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate[89471]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 03:33:51 np0005532048 bash[89456]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 03:33:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate[89471]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 22 03:33:51 np0005532048 bash[89456]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 22 03:33:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate[89471]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 22 03:33:51 np0005532048 bash[89456]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 22 03:33:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate[89471]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 22 03:33:51 np0005532048 bash[89456]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 22 03:33:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate[89471]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 22 03:33:51 np0005532048 bash[89456]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 22 03:33:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate[89471]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 03:33:51 np0005532048 bash[89456]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 03:33:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate[89471]: --> ceph-volume raw activate successful for osd ID: 1
Nov 22 03:33:51 np0005532048 bash[89456]: --> ceph-volume raw activate successful for osd ID: 1
Nov 22 03:33:51 np0005532048 systemd[1]: libpod-e258b43b70fa1ecf57fa0794d44d7cacebc34904b27886d783af672c7ec9c2c8.scope: Deactivated successfully.
Nov 22 03:33:51 np0005532048 systemd[1]: libpod-e258b43b70fa1ecf57fa0794d44d7cacebc34904b27886d783af672c7ec9c2c8.scope: Consumed 1.047s CPU time.
Nov 22 03:33:51 np0005532048 podman[89456]: 2025-11-22 08:33:51.468064446 +0000 UTC m=+1.145828987 container died e258b43b70fa1ecf57fa0794d44d7cacebc34904b27886d783af672c7ec9c2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:33:51 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2651910451; not ready for session (expect reconnect)
Nov 22 03:33:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:33:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:33:51 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:33:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay-25862fb0a9b08d9a26ff505ed317608e3f7efc5345549dae107e36062ad087a0-merged.mount: Deactivated successfully.
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:33:52
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] No pools available
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v59: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:52 np0005532048 podman[89456]: 2025-11-22 08:33:52.242107567 +0000 UTC m=+1.919872068 container remove e258b43b70fa1ecf57fa0794d44d7cacebc34904b27886d783af672c7ec9c2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1-activate, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 03:33:52 np0005532048 podman[89659]: 2025-11-22 08:33:52.433261023 +0000 UTC m=+0.037763359 container create 74e38918ab8878e5143d19ff25351327e0731f05af530438ac8ec9f6dbb36f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:33:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649b73da077b6ba2b6be3eec03210e3ad909ef5fe8ba319039cc99e0bd3f4e8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649b73da077b6ba2b6be3eec03210e3ad909ef5fe8ba319039cc99e0bd3f4e8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649b73da077b6ba2b6be3eec03210e3ad909ef5fe8ba319039cc99e0bd3f4e8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649b73da077b6ba2b6be3eec03210e3ad909ef5fe8ba319039cc99e0bd3f4e8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649b73da077b6ba2b6be3eec03210e3ad909ef5fe8ba319039cc99e0bd3f4e8e/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:52 np0005532048 podman[89659]: 2025-11-22 08:33:52.41712943 +0000 UTC m=+0.021631786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2651910451; not ready for session (expect reconnect)
Nov 22 03:33:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:33:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:33:52 np0005532048 podman[89659]: 2025-11-22 08:33:52.556700083 +0000 UTC m=+0.161202449 container init 74e38918ab8878e5143d19ff25351327e0731f05af530438ac8ec9f6dbb36f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:33:52 np0005532048 podman[89659]: 2025-11-22 08:33:52.562173846 +0000 UTC m=+0.166676182 container start 74e38918ab8878e5143d19ff25351327e0731f05af530438ac8ec9f6dbb36f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:33:52 np0005532048 ceph-osd[89679]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:33:52 np0005532048 ceph-osd[89679]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 22 03:33:52 np0005532048 ceph-osd[89679]: pidfile_write: ignore empty --pid-file
Nov 22 03:33:52 np0005532048 ceph-osd[89679]: bdev(0x5613819bd800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 03:33:52 np0005532048 ceph-osd[89679]: bdev(0x5613819bd800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 03:33:52 np0005532048 ceph-osd[89679]: bdev(0x5613819bd800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:52 np0005532048 ceph-osd[89679]: bdev(0x5613819bd800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:52 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:33:52 np0005532048 ceph-osd[89679]: bdev(0x5613827f5800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 03:33:52 np0005532048 ceph-osd[89679]: bdev(0x5613827f5800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 03:33:52 np0005532048 ceph-osd[89679]: bdev(0x5613827f5800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:52 np0005532048 ceph-osd[89679]: bdev(0x5613827f5800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:52 np0005532048 ceph-osd[89679]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 22 03:33:52 np0005532048 ceph-osd[89679]: bdev(0x5613827f5800 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 03:33:52 np0005532048 bash[89659]: 74e38918ab8878e5143d19ff25351327e0731f05af530438ac8ec9f6dbb36f32
Nov 22 03:33:52 np0005532048 systemd[1]: Started Ceph osd.1 for 34829716-a12c-57a6-8915-c1aa615c9d8a.
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:33:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:33:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:33:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 22 03:33:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 22 03:33:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:33:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 22 03:33:52 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 22 03:33:52 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:52 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:52 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 22 03:33:52 np0005532048 ceph-osd[89679]: bdev(0x5613819bd800 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: load: jerasure load: lrc 
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382876c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382876c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382876c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382876c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382876c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 03:33:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382876c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382876c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382876c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382876c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382876c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 03:33:53 np0005532048 podman[89841]: 2025-11-22 08:33:53.44047299 +0000 UTC m=+0.085029778 container create 1e717fe622f5692fd19f79170f392779afa0cda0c1980834ca5c3e4fedd39cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:33:53 np0005532048 podman[89841]: 2025-11-22 08:33:53.386908368 +0000 UTC m=+0.031465186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:53 np0005532048 systemd[1]: Started libpod-conmon-1e717fe622f5692fd19f79170f392779afa0cda0c1980834ca5c3e4fedd39cf0.scope.
Nov 22 03:33:53 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2651910451; not ready for session (expect reconnect)
Nov 22 03:33:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:33:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:33:53 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:33:53 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:53 np0005532048 podman[89841]: 2025-11-22 08:33:53.561278025 +0000 UTC m=+0.205834843 container init 1e717fe622f5692fd19f79170f392779afa0cda0c1980834ca5c3e4fedd39cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:33:53 np0005532048 podman[89841]: 2025-11-22 08:33:53.570152942 +0000 UTC m=+0.214709730 container start 1e717fe622f5692fd19f79170f392779afa0cda0c1980834ca5c3e4fedd39cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 03:33:53 np0005532048 kind_brown[89861]: 167 167
Nov 22 03:33:53 np0005532048 systemd[1]: libpod-1e717fe622f5692fd19f79170f392779afa0cda0c1980834ca5c3e4fedd39cf0.scope: Deactivated successfully.
Nov 22 03:33:53 np0005532048 podman[89841]: 2025-11-22 08:33:53.605544262 +0000 UTC m=+0.250101070 container attach 1e717fe622f5692fd19f79170f392779afa0cda0c1980834ca5c3e4fedd39cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:33:53 np0005532048 podman[89841]: 2025-11-22 08:33:53.605985372 +0000 UTC m=+0.250542160 container died 1e717fe622f5692fd19f79170f392779afa0cda0c1980834ca5c3e4fedd39cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382876c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382876c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382876c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382876c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382877400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382877400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382877400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382877400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluefs mount
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluefs mount shared_bdev_used = 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: RocksDB version: 7.9.2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Git sha 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: DB SUMMARY
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: DB Session ID:  Y7OG3JMEOQD6FBCZOEZY
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: CURRENT file:  CURRENT
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                         Options.error_if_exists: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.create_if_missing: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                                     Options.env: 0x561382847c70
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                                Options.info_log: 0x561381a448a0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                              Options.statistics: (nil)
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.use_fsync: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                              Options.db_log_dir: 
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.write_buffer_manager: 0x561382950460
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.unordered_write: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.row_cache: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                              Options.wal_filter: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.two_write_queues: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.wal_compression: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.atomic_flush: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.max_background_jobs: 4
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.max_background_compactions: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.max_subcompactions: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.max_open_files: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Compression algorithms supported:
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: 	kZSTD supported: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: 	kXpressCompression supported: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: 	kBZip2Compression supported: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: 	kLZ4Compression supported: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: 	kZlibCompression supported: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: 	kSnappyCompression supported: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a442c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561381a311f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a442c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561381a311f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a442c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561381a311f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a442c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561381a311f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a442c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561381a311f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a442c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561381a311f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a442c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561381a311f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a44240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561381a31090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a44240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561381a31090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a44240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561381a31090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3dd87f4f-33ec-46bd-ab4e-8b40a0927633
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800433741053, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800433741271, "job": 1, "event": "recovery_finished"}
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: freelist init
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: freelist _read_cfg
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluefs umount
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382877400 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 03:33:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay-acfe7c892ce3b7f2d84639cd4946fcaf710943c16b2d09f521e66c1fa4712034-merged.mount: Deactivated successfully.
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382877400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382877400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382877400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bdev(0x561382877400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluefs mount
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluefs mount shared_bdev_used = 4718592
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: RocksDB version: 7.9.2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Git sha 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: DB SUMMARY
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: DB Session ID:  Y7OG3JMEOQD6FBCZOEZZ
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: CURRENT file:  CURRENT
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                         Options.error_if_exists: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.create_if_missing: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                                     Options.env: 0x5613829f8380
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                                Options.info_log: 0x561382843680
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                              Options.statistics: (nil)
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.use_fsync: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                              Options.db_log_dir: 
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.write_buffer_manager: 0x5613829506e0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.unordered_write: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.row_cache: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                              Options.wal_filter: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.two_write_queues: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.wal_compression: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.atomic_flush: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.max_background_jobs: 4
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.max_background_compactions: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.max_subcompactions: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.max_open_files: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Compression algorithms supported:
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: #011kZSTD supported: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: #011kXpressCompression supported: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: #011kBZip2Compression supported: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: #011kLZ4Compression supported: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: #011kZlibCompression supported: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: #011kSnappyCompression supported: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a3a400)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561381a311f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a3a400)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561381a311f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a3a400)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561381a311f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a3a400)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561381a311f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:53 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a3a400)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561381a311f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a3a400)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561381a311f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a3a400)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561381a311f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a3a3c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561381a31090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a3a3c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561381a31090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:           Options.merge_operator: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561381a3a3c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561381a31090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.compression: LZ4
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:             Options.num_levels: 7
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3dd87f4f-33ec-46bd-ab4e-8b40a0927633
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800433987278, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 03:33:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v60: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:54 np0005532048 ceph-mon[75021]: Deploying daemon osd.2 on compute-0
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800434315530, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800433, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3dd87f4f-33ec-46bd-ab4e-8b40a0927633", "db_session_id": "Y7OG3JMEOQD6FBCZOEZZ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:33:54 np0005532048 podman[89841]: 2025-11-22 08:33:54.316885029 +0000 UTC m=+0.961441817 container remove 1e717fe622f5692fd19f79170f392779afa0cda0c1980834ca5c3e4fedd39cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 03:33:54 np0005532048 systemd[1]: libpod-conmon-1e717fe622f5692fd19f79170f392779afa0cda0c1980834ca5c3e4fedd39cf0.scope: Deactivated successfully.
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800434352308, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800434, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3dd87f4f-33ec-46bd-ab4e-8b40a0927633", "db_session_id": "Y7OG3JMEOQD6FBCZOEZZ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800434356717, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800434, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3dd87f4f-33ec-46bd-ab4e-8b40a0927633", "db_session_id": "Y7OG3JMEOQD6FBCZOEZZ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800434358408, "job": 1, "event": "recovery_finished"}
Nov 22 03:33:54 np0005532048 ceph-osd[89679]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 22 03:33:55 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2651910451; not ready for session (expect reconnect)
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:33:55 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561382a05c00
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: rocksdb: DB pointer 0x561382939a00
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1.2 total, 1.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.2 total, 1.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.2 total, 1.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.2 total, 1.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 460.80 MB usag
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: _get_class not permitted to load lua
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: _get_class not permitted to load sdk
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: _get_class not permitted to load test_remote_reads
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: osd.1 0 load_pgs
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: osd.1 0 load_pgs opened 0 pgs
Nov 22 03:33:55 np0005532048 ceph-osd[89679]: osd.1 0 log_to_monitors true
Nov 22 03:33:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T08:33:55.149+0000 7fa69f2f8740 -1 osd.1 0 log_to_monitors true
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3001320351,v1:192.168.122.100:6807/3001320351]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 22 03:33:55 np0005532048 podman[90306]: 2025-11-22 08:33:55.299209182 +0000 UTC m=+0.063012942 container create 0985fd01a4e055c2fb2ada81e0951a70ecd430ceb71f15a34359fbef1c85057b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate-test, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:33:55 np0005532048 podman[90306]: 2025-11-22 08:33:55.269207852 +0000 UTC m=+0.033011612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:55 np0005532048 systemd[1]: Started libpod-conmon-0985fd01a4e055c2fb2ada81e0951a70ecd430ceb71f15a34359fbef1c85057b.scope.
Nov 22 03:33:55 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b55ca3af1c142983d53cbc4428eed712ffb99f512b155136480f16766f9944f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:33:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b55ca3af1c142983d53cbc4428eed712ffb99f512b155136480f16766f9944f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b55ca3af1c142983d53cbc4428eed712ffb99f512b155136480f16766f9944f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b55ca3af1c142983d53cbc4428eed712ffb99f512b155136480f16766f9944f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: from='osd.1 [v2:192.168.122.100:6806/3001320351,v1:192.168.122.100:6807/3001320351]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 22 03:33:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b55ca3af1c142983d53cbc4428eed712ffb99f512b155136480f16766f9944f/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3001320351,v1:192.168.122.100:6807/3001320351]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Nov 22 03:33:55 np0005532048 podman[90306]: 2025-11-22 08:33:55.445013545 +0000 UTC m=+0.208817325 container init 0985fd01a4e055c2fb2ada81e0951a70ecd430ceb71f15a34359fbef1c85057b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate-test, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3001320351,v1:192.168.122.100:6807/3001320351]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:33:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:33:55 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:33:55 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:33:55 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:33:55 np0005532048 podman[90306]: 2025-11-22 08:33:55.456108485 +0000 UTC m=+0.219912245 container start 0985fd01a4e055c2fb2ada81e0951a70ecd430ceb71f15a34359fbef1c85057b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate-test, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:33:55 np0005532048 podman[90306]: 2025-11-22 08:33:55.485119709 +0000 UTC m=+0.248923489 container attach 0985fd01a4e055c2fb2ada81e0951a70ecd430ceb71f15a34359fbef1c85057b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate-test, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:33:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v62: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate-test[90321]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 22 03:33:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate-test[90321]:                            [--no-systemd] [--no-tmpfs]
Nov 22 03:33:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate-test[90321]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 22 03:33:56 np0005532048 systemd[1]: libpod-0985fd01a4e055c2fb2ada81e0951a70ecd430ceb71f15a34359fbef1c85057b.scope: Deactivated successfully.
Nov 22 03:33:56 np0005532048 podman[90306]: 2025-11-22 08:33:56.107502105 +0000 UTC m=+0.871305865 container died 0985fd01a4e055c2fb2ada81e0951a70ecd430ceb71f15a34359fbef1c85057b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:33:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 22 03:33:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 22 03:33:56 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2651910451; not ready for session (expect reconnect)
Nov 22 03:33:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:33:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:33:56 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:33:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1b55ca3af1c142983d53cbc4428eed712ffb99f512b155136480f16766f9944f-merged.mount: Deactivated successfully.
Nov 22 03:33:56 np0005532048 podman[90306]: 2025-11-22 08:33:56.17642722 +0000 UTC m=+0.940230980 container remove 0985fd01a4e055c2fb2ada81e0951a70ecd430ceb71f15a34359fbef1c85057b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate-test, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:33:56 np0005532048 systemd[1]: libpod-conmon-0985fd01a4e055c2fb2ada81e0951a70ecd430ceb71f15a34359fbef1c85057b.scope: Deactivated successfully.
Nov 22 03:33:56 np0005532048 systemd[1]: Reloading.
Nov 22 03:33:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 22 03:33:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:33:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3001320351,v1:192.168.122.100:6807/3001320351]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 03:33:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e10 e10: 3 total, 0 up, 3 in
Nov 22 03:33:56 np0005532048 ceph-osd[89679]: osd.1 0 done with init, starting boot process
Nov 22 03:33:56 np0005532048 ceph-osd[89679]: osd.1 0 start_boot
Nov 22 03:33:56 np0005532048 ceph-osd[89679]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 22 03:33:56 np0005532048 ceph-osd[89679]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 22 03:33:56 np0005532048 ceph-osd[89679]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 22 03:33:56 np0005532048 ceph-osd[89679]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 22 03:33:56 np0005532048 ceph-osd[89679]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 22 03:33:56 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 0 up, 3 in
Nov 22 03:33:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:33:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:33:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:33:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:33:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:33:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:33:56 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:33:56 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:33:56 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:33:56 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3001320351; not ready for session (expect reconnect)
Nov 22 03:33:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:33:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:33:56 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:33:56 np0005532048 ceph-mon[75021]: from='osd.1 [v2:192.168.122.100:6806/3001320351,v1:192.168.122.100:6807/3001320351]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 22 03:33:56 np0005532048 ceph-mon[75021]: from='osd.1 [v2:192.168.122.100:6806/3001320351,v1:192.168.122.100:6807/3001320351]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 03:33:56 np0005532048 ceph-osd[88656]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 38.042 iops: 9738.744 elapsed_sec: 0.308
Nov 22 03:33:56 np0005532048 ceph-osd[88656]: log_channel(cluster) log [WRN] : OSD bench result of 9738.744038 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 03:33:56 np0005532048 ceph-osd[88656]: osd.0 0 waiting for initial osdmap
Nov 22 03:33:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0[88652]: 2025-11-22T08:33:56.487+0000 7f5c65da6640 -1 osd.0 0 waiting for initial osdmap
Nov 22 03:33:56 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:33:56 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:33:56 np0005532048 ceph-osd[88656]: osd.0 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 22 03:33:56 np0005532048 ceph-osd[88656]: osd.0 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 22 03:33:56 np0005532048 ceph-osd[88656]: osd.0 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 22 03:33:56 np0005532048 ceph-osd[88656]: osd.0 10 check_osdmap_features require_osd_release unknown -> reef
Nov 22 03:33:56 np0005532048 ceph-osd[88656]: osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 03:33:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-0[88652]: 2025-11-22T08:33:56.536+0000 7f5c60bb7640 -1 osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 03:33:56 np0005532048 ceph-osd[88656]: osd.0 10 set_numa_affinity not setting numa affinity
Nov 22 03:33:56 np0005532048 ceph-osd[88656]: osd.0 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 22 03:33:56 np0005532048 systemd[1]: Reloading.
Nov 22 03:33:56 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:33:56 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:33:56 np0005532048 systemd[1]: Starting Ceph osd.2 for 34829716-a12c-57a6-8915-c1aa615c9d8a...
Nov 22 03:33:57 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2651910451; not ready for session (expect reconnect)
Nov 22 03:33:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:33:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:33:57 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:33:57 np0005532048 podman[90486]: 2025-11-22 08:33:57.223743602 +0000 UTC m=+0.062283235 container create 6dda10b78879c278186f58acdfee0693833193211766b867e8dd7920df24b2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:33:57 np0005532048 podman[90486]: 2025-11-22 08:33:57.181820443 +0000 UTC m=+0.020360096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:57 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad65d731ddec65084c4d0329c88fb27fe5985fc268d0b4b1c906e81dc962e081/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad65d731ddec65084c4d0329c88fb27fe5985fc268d0b4b1c906e81dc962e081/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad65d731ddec65084c4d0329c88fb27fe5985fc268d0b4b1c906e81dc962e081/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad65d731ddec65084c4d0329c88fb27fe5985fc268d0b4b1c906e81dc962e081/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad65d731ddec65084c4d0329c88fb27fe5985fc268d0b4b1c906e81dc962e081/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:57 np0005532048 podman[90486]: 2025-11-22 08:33:57.299134664 +0000 UTC m=+0.137674327 container init 6dda10b78879c278186f58acdfee0693833193211766b867e8dd7920df24b2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:33:57 np0005532048 podman[90486]: 2025-11-22 08:33:57.306037362 +0000 UTC m=+0.144576995 container start 6dda10b78879c278186f58acdfee0693833193211766b867e8dd7920df24b2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 03:33:57 np0005532048 podman[90486]: 2025-11-22 08:33:57.332184357 +0000 UTC m=+0.170724070 container attach 6dda10b78879c278186f58acdfee0693833193211766b867e8dd7920df24b2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:33:57 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3001320351; not ready for session (expect reconnect)
Nov 22 03:33:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:33:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:33:57 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:33:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 22 03:33:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:33:57 np0005532048 ceph-mon[75021]: from='osd.1 [v2:192.168.122.100:6806/3001320351,v1:192.168.122.100:6807/3001320351]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 03:33:57 np0005532048 ceph-mon[75021]: OSD bench result of 9738.744038 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 03:33:57 np0005532048 ceph-osd[88656]: osd.0 10 tick checking mon for new map
Nov 22 03:33:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 22 03:33:57 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2651910451,v1:192.168.122.100:6803/2651910451] boot
Nov 22 03:33:57 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 22 03:33:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:33:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:33:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:33:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:33:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:33:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:33:57 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:33:57 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:33:57 np0005532048 ceph-osd[88656]: osd.0 11 state: booting -> active
Nov 22 03:33:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v65: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:33:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate[90502]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 03:33:58 np0005532048 bash[90486]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 03:33:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate[90502]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 22 03:33:58 np0005532048 bash[90486]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 22 03:33:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate[90502]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 22 03:33:58 np0005532048 bash[90486]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 22 03:33:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate[90502]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 22 03:33:58 np0005532048 bash[90486]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 22 03:33:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate[90502]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 22 03:33:58 np0005532048 bash[90486]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 22 03:33:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate[90502]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 03:33:58 np0005532048 bash[90486]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 03:33:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate[90502]: --> ceph-volume raw activate successful for osd ID: 2
Nov 22 03:33:58 np0005532048 bash[90486]: --> ceph-volume raw activate successful for osd ID: 2
Nov 22 03:33:58 np0005532048 systemd[1]: libpod-6dda10b78879c278186f58acdfee0693833193211766b867e8dd7920df24b2c8.scope: Deactivated successfully.
Nov 22 03:33:58 np0005532048 podman[90486]: 2025-11-22 08:33:58.365416347 +0000 UTC m=+1.203956000 container died 6dda10b78879c278186f58acdfee0693833193211766b867e8dd7920df24b2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:33:58 np0005532048 systemd[1]: libpod-6dda10b78879c278186f58acdfee0693833193211766b867e8dd7920df24b2c8.scope: Consumed 1.061s CPU time.
Nov 22 03:33:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ad65d731ddec65084c4d0329c88fb27fe5985fc268d0b4b1c906e81dc962e081-merged.mount: Deactivated successfully.
Nov 22 03:33:58 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3001320351; not ready for session (expect reconnect)
Nov 22 03:33:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:33:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:33:58 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:33:58 np0005532048 podman[90486]: 2025-11-22 08:33:58.51781345 +0000 UTC m=+1.356353083 container remove 6dda10b78879c278186f58acdfee0693833193211766b867e8dd7920df24b2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 03:33:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 22 03:33:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:33:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Nov 22 03:33:58 np0005532048 ceph-mon[75021]: osd.0 [v2:192.168.122.100:6802/2651910451,v1:192.168.122.100:6803/2651910451] boot
Nov 22 03:33:58 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Nov 22 03:33:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:33:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:33:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:33:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:33:58 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:33:58 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:33:58 np0005532048 ceph-mgr[75315]: [devicehealth INFO root] creating mgr pool
Nov 22 03:33:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 22 03:33:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 22 03:33:58 np0005532048 podman[90684]: 2025-11-22 08:33:58.75479949 +0000 UTC m=+0.049556746 container create 75945265ccc41a11be2e00483e19f9aa7d009a9ec60873c1f36e55e57cf9b201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:33:58 np0005532048 podman[90684]: 2025-11-22 08:33:58.730570601 +0000 UTC m=+0.025327897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee07fa9e64ab30ebefdaed3ee31e15c2bf695b70d0acdf8d66b7341c164eec3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee07fa9e64ab30ebefdaed3ee31e15c2bf695b70d0acdf8d66b7341c164eec3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee07fa9e64ab30ebefdaed3ee31e15c2bf695b70d0acdf8d66b7341c164eec3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee07fa9e64ab30ebefdaed3ee31e15c2bf695b70d0acdf8d66b7341c164eec3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee07fa9e64ab30ebefdaed3ee31e15c2bf695b70d0acdf8d66b7341c164eec3/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:58 np0005532048 podman[90684]: 2025-11-22 08:33:58.866919945 +0000 UTC m=+0.161677221 container init 75945265ccc41a11be2e00483e19f9aa7d009a9ec60873c1f36e55e57cf9b201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:33:58 np0005532048 podman[90684]: 2025-11-22 08:33:58.879261734 +0000 UTC m=+0.174018980 container start 75945265ccc41a11be2e00483e19f9aa7d009a9ec60873c1f36e55e57cf9b201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 03:33:58 np0005532048 bash[90684]: 75945265ccc41a11be2e00483e19f9aa7d009a9ec60873c1f36e55e57cf9b201
Nov 22 03:33:58 np0005532048 systemd[1]: Started Ceph osd.2 for 34829716-a12c-57a6-8915-c1aa615c9d8a.
Nov 22 03:33:58 np0005532048 ceph-osd[90703]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:33:58 np0005532048 ceph-osd[90703]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 22 03:33:58 np0005532048 ceph-osd[90703]: pidfile_write: ignore empty --pid-file
Nov 22 03:33:58 np0005532048 ceph-osd[90703]: bdev(0x55a6ca25d800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 03:33:58 np0005532048 ceph-osd[90703]: bdev(0x55a6ca25d800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 03:33:58 np0005532048 ceph-osd[90703]: bdev(0x55a6ca25d800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:58 np0005532048 ceph-osd[90703]: bdev(0x55a6ca25d800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:58 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:33:58 np0005532048 ceph-osd[90703]: bdev(0x55a6cb095800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 03:33:58 np0005532048 ceph-osd[90703]: bdev(0x55a6cb095800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 03:33:58 np0005532048 ceph-osd[90703]: bdev(0x55a6cb095800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:58 np0005532048 ceph-osd[90703]: bdev(0x55a6cb095800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:58 np0005532048 ceph-osd[90703]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 22 03:33:58 np0005532048 ceph-osd[90703]: bdev(0x55a6cb095800 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 03:33:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:59 np0005532048 ceph-osd[90703]: bdev(0x55a6ca25d800 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 03:33:59 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3001320351; not ready for session (expect reconnect)
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:33:59 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:33:59 np0005532048 ceph-osd[90703]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 22 03:33:59 np0005532048 ceph-osd[90703]: load: jerasure load: lrc 
Nov 22 03:33:59 np0005532048 ceph-osd[90703]: bdev(0x55a6cb116c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 03:33:59 np0005532048 ceph-osd[90703]: bdev(0x55a6cb116c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 03:33:59 np0005532048 ceph-osd[90703]: bdev(0x55a6cb116c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:59 np0005532048 ceph-osd[90703]: bdev(0x55a6cb116c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:59 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:33:59 np0005532048 ceph-osd[90703]: bdev(0x55a6cb116c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e12 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:33:59 np0005532048 podman[90860]: 2025-11-22 08:33:59.677090323 +0000 UTC m=+0.047586128 container create 361b92dcaca9e73f35577193037cdbaea0c6b209606701360951a87039b306b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e13 crush map has features 3314933000852226048, adjusting msgr requires
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:33:59 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:33:59 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 22 03:33:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 22 03:33:59 np0005532048 systemd[1]: Started libpod-conmon-361b92dcaca9e73f35577193037cdbaea0c6b209606701360951a87039b306b2.scope.
Nov 22 03:33:59 np0005532048 podman[90860]: 2025-11-22 08:33:59.650560418 +0000 UTC m=+0.021056253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:59 np0005532048 ceph-osd[90703]: bdev(0x55a6cb116c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 03:33:59 np0005532048 ceph-osd[90703]: bdev(0x55a6cb116c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 03:33:59 np0005532048 ceph-osd[90703]: bdev(0x55a6cb116c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:33:59 np0005532048 ceph-osd[90703]: bdev(0x55a6cb116c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:33:59 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:33:59 np0005532048 ceph-osd[90703]: bdev(0x55a6cb116c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 03:33:59 np0005532048 ceph-osd[88656]: osd.0 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 22 03:33:59 np0005532048 ceph-osd[88656]: osd.0 13 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 22 03:33:59 np0005532048 ceph-osd[88656]: osd.0 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 22 03:33:59 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:33:59 np0005532048 podman[90860]: 2025-11-22 08:33:59.816555802 +0000 UTC m=+0.187051637 container init 361b92dcaca9e73f35577193037cdbaea0c6b209606701360951a87039b306b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:33:59 np0005532048 podman[90860]: 2025-11-22 08:33:59.825439938 +0000 UTC m=+0.195935743 container start 361b92dcaca9e73f35577193037cdbaea0c6b209606701360951a87039b306b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:33:59 np0005532048 festive_feynman[90877]: 167 167
Nov 22 03:33:59 np0005532048 systemd[1]: libpod-361b92dcaca9e73f35577193037cdbaea0c6b209606701360951a87039b306b2.scope: Deactivated successfully.
Nov 22 03:33:59 np0005532048 podman[90860]: 2025-11-22 08:33:59.846232334 +0000 UTC m=+0.216728179 container attach 361b92dcaca9e73f35577193037cdbaea0c6b209606701360951a87039b306b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 22 03:33:59 np0005532048 podman[90860]: 2025-11-22 08:33:59.847652108 +0000 UTC m=+0.218147953 container died 361b92dcaca9e73f35577193037cdbaea0c6b209606701360951a87039b306b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:33:59 np0005532048 systemd[1]: var-lib-containers-storage-overlay-db8b08ba756714eb0ad4cb5d6fde31a3cabd6040facaf4d7b015dfad15b14b58-merged.mount: Deactivated successfully.
Nov 22 03:33:59 np0005532048 podman[90860]: 2025-11-22 08:33:59.974617864 +0000 UTC m=+0.345113669 container remove 361b92dcaca9e73f35577193037cdbaea0c6b209606701360951a87039b306b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:33:59 np0005532048 systemd[1]: libpod-conmon-361b92dcaca9e73f35577193037cdbaea0c6b209606701360951a87039b306b2.scope: Deactivated successfully.
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bdev(0x55a6cb116c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bdev(0x55a6cb116c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bdev(0x55a6cb116c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bdev(0x55a6cb116c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bdev(0x55a6cb117400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bdev(0x55a6cb117400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bdev(0x55a6cb117400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bdev(0x55a6cb117400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluefs mount
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluefs mount shared_bdev_used = 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: RocksDB version: 7.9.2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Git sha 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: DB SUMMARY
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: DB Session ID:  B7MN77L5TL5AM4U7DQJV
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: CURRENT file:  CURRENT
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                         Options.error_if_exists: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.create_if_missing: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                                     Options.env: 0x55a6cb0e7c70
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                                Options.info_log: 0x55a6ca2e48a0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                              Options.statistics: (nil)
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.use_fsync: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                              Options.db_log_dir: 
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.write_buffer_manager: 0x55a6cb1f0460
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.unordered_write: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.row_cache: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                              Options.wal_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.two_write_queues: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.wal_compression: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.atomic_flush: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.max_background_jobs: 4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.max_background_compactions: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.max_subcompactions: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.max_open_files: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Compression algorithms supported:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: 	kZSTD supported: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: 	kXpressCompression supported: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: 	kBZip2Compression supported: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: 	kLZ4Compression supported: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: 	kZlibCompression supported: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: 	kSnappyCompression supported: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca2e42c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a6ca2d11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca2e42c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a6ca2d11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca2e42c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a6ca2d11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca2e42c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a6ca2d11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v68: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca2e42c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a6ca2d11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca2e42c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a6ca2d11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca2e42c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a6ca2d11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca2e4240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a6ca2d1090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca2e4240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a6ca2d1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca2e4240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a6ca2d1090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0fd7e08b-911f-4c39-837a-8f88cfd19cc7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800440064926, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800440065122, "job": 1, "event": "recovery_finished"}
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: freelist init
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: freelist _read_cfg
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluefs umount
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bdev(0x55a6cb117400 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 03:34:00 np0005532048 podman[91100]: 2025-11-22 08:34:00.145445015 +0000 UTC m=+0.045662691 container create e324a960c693a5f33f2a2d0d47f98d4631c72c103c046a0cc3252aaa31355d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_easley, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:00 np0005532048 systemd[1]: Started libpod-conmon-e324a960c693a5f33f2a2d0d47f98d4631c72c103c046a0cc3252aaa31355d37.scope.
Nov 22 03:34:00 np0005532048 podman[91100]: 2025-11-22 08:34:00.123446481 +0000 UTC m=+0.023664167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:00 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55c209b2e2f14fd19af259f4e9f7e5702c3f3b40006b6619ba83a5e16c9770bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55c209b2e2f14fd19af259f4e9f7e5702c3f3b40006b6619ba83a5e16c9770bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55c209b2e2f14fd19af259f4e9f7e5702c3f3b40006b6619ba83a5e16c9770bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55c209b2e2f14fd19af259f4e9f7e5702c3f3b40006b6619ba83a5e16c9770bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:00 np0005532048 podman[91100]: 2025-11-22 08:34:00.265688368 +0000 UTC m=+0.165906064 container init e324a960c693a5f33f2a2d0d47f98d4631c72c103c046a0cc3252aaa31355d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 03:34:00 np0005532048 podman[91100]: 2025-11-22 08:34:00.273044716 +0000 UTC m=+0.173262382 container start e324a960c693a5f33f2a2d0d47f98d4631c72c103c046a0cc3252aaa31355d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_easley, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:00 np0005532048 podman[91100]: 2025-11-22 08:34:00.277123225 +0000 UTC m=+0.177340911 container attach e324a960c693a5f33f2a2d0d47f98d4631c72c103c046a0cc3252aaa31355d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_easley, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bdev(0x55a6cb117400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bdev(0x55a6cb117400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bdev(0x55a6cb117400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bdev(0x55a6cb117400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluefs mount
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluefs mount shared_bdev_used = 4718592
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: RocksDB version: 7.9.2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Git sha 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: DB SUMMARY
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: DB Session ID:  B7MN77L5TL5AM4U7DQJU
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: CURRENT file:  CURRENT
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                         Options.error_if_exists: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.create_if_missing: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                                     Options.env: 0x55a6cb298460
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                                Options.info_log: 0x55a6cb0e3be0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                              Options.statistics: (nil)
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.use_fsync: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                              Options.db_log_dir: 
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.write_buffer_manager: 0x55a6cb1f06e0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.unordered_write: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.row_cache: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                              Options.wal_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.two_write_queues: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.wal_compression: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.atomic_flush: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.max_background_jobs: 4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.max_background_compactions: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.max_subcompactions: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.max_open_files: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Compression algorithms supported:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: 	kZSTD supported: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: 	kXpressCompression supported: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: 	kBZip2Compression supported: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: 	kLZ4Compression supported: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: 	kZlibCompression supported: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: 	kSnappyCompression supported: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca5aae20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a6ca2d11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca5aae20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a6ca2d11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca5aae20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a6ca2d11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca5aae20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a6ca2d11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca5aae20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a6ca2d11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca5aae20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a6ca2d11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca5aae20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a6ca2d11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca5aad80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a6ca2d1090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca5aad80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a6ca2d1090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:           Options.merge_operator: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a6ca5aad80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a6ca2d1090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.compression: LZ4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.num_levels: 7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0fd7e08b-911f-4c39-837a-8f88cfd19cc7
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800440341160, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800440345081, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800440, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0fd7e08b-911f-4c39-837a-8f88cfd19cc7", "db_session_id": "B7MN77L5TL5AM4U7DQJU", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800440348250, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800440, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0fd7e08b-911f-4c39-837a-8f88cfd19cc7", "db_session_id": "B7MN77L5TL5AM4U7DQJU", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800440355186, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800440, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0fd7e08b-911f-4c39-837a-8f88cfd19cc7", "db_session_id": "B7MN77L5TL5AM4U7DQJU", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800440357203, "job": 1, "event": "recovery_finished"}
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a6cb2a5c00
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: DB pointer 0x55a6cb1d9a00
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 460.80 MB usag
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: _get_class not permitted to load lua
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: _get_class not permitted to load sdk
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: _get_class not permitted to load test_remote_reads
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: osd.2 0 load_pgs
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: osd.2 0 load_pgs opened 0 pgs
Nov 22 03:34:00 np0005532048 ceph-osd[90703]: osd.2 0 log_to_monitors true
Nov 22 03:34:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2[90699]: 2025-11-22T08:34:00.405+0000 7f26be7f9740 -1 osd.2 0 log_to_monitors true
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3650411801,v1:192.168.122.100:6811/3650411801]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 22 03:34:00 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3001320351; not ready for session (expect reconnect)
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:34:00 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:34:00 np0005532048 ceph-osd[89679]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 31.465 iops: 8054.986 elapsed_sec: 0.372
Nov 22 03:34:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : OSD bench result of 8054.985890 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 03:34:00 np0005532048 ceph-osd[89679]: osd.1 0 waiting for initial osdmap
Nov 22 03:34:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T08:34:00.661+0000 7fa69b278640 -1 osd.1 0 waiting for initial osdmap
Nov 22 03:34:00 np0005532048 ceph-osd[89679]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 22 03:34:00 np0005532048 ceph-osd[89679]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 22 03:34:00 np0005532048 ceph-osd[89679]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 22 03:34:00 np0005532048 ceph-osd[89679]: osd.1 13 check_osdmap_features require_osd_release unknown -> reef
Nov 22 03:34:00 np0005532048 ceph-osd[89679]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 03:34:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T08:34:00.691+0000 7fa6968a0640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 03:34:00 np0005532048 ceph-osd[89679]: osd.1 13 set_numa_affinity not setting numa affinity
Nov 22 03:34:00 np0005532048 ceph-osd[89679]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3650411801,v1:192.168.122.100:6811/3650411801]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/3001320351,v1:192.168.122.100:6807/3001320351] boot
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3650411801,v1:192.168.122.100:6811/3650411801]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e14 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:34:00 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:34:00 np0005532048 ceph-osd[89679]: osd.1 14 state: booting -> active
Nov 22 03:34:00 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[13,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 22 03:34:00 np0005532048 ceph-mon[75021]: from='osd.2 [v2:192.168.122.100:6810/3650411801,v1:192.168.122.100:6811/3650411801]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]: {
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:        "osd_id": 1,
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:        "type": "bluestore"
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:    },
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:        "osd_id": 0,
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:        "type": "bluestore"
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:    },
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:        "osd_id": 2,
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:        "type": "bluestore"
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]:    }
Nov 22 03:34:01 np0005532048 eloquent_easley[91117]: }
Nov 22 03:34:01 np0005532048 systemd[1]: libpod-e324a960c693a5f33f2a2d0d47f98d4631c72c103c046a0cc3252aaa31355d37.scope: Deactivated successfully.
Nov 22 03:34:01 np0005532048 podman[91100]: 2025-11-22 08:34:01.237412343 +0000 UTC m=+1.137630039 container died e324a960c693a5f33f2a2d0d47f98d4631c72c103c046a0cc3252aaa31355d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_easley, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 03:34:01 np0005532048 systemd[1]: var-lib-containers-storage-overlay-55c209b2e2f14fd19af259f4e9f7e5702c3f3b40006b6619ba83a5e16c9770bf-merged.mount: Deactivated successfully.
Nov 22 03:34:01 np0005532048 podman[91100]: 2025-11-22 08:34:01.298756654 +0000 UTC m=+1.198974330 container remove e324a960c693a5f33f2a2d0d47f98d4631c72c103c046a0cc3252aaa31355d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_easley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:01 np0005532048 systemd[1]: libpod-conmon-e324a960c693a5f33f2a2d0d47f98d4631c72c103c046a0cc3252aaa31355d37.scope: Deactivated successfully.
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:01 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 22 03:34:01 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3650411801,v1:192.168.122.100:6811/3650411801]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Nov 22 03:34:01 np0005532048 ceph-osd[90703]: osd.2 0 done with init, starting boot process
Nov 22 03:34:01 np0005532048 ceph-osd[90703]: osd.2 0 start_boot
Nov 22 03:34:01 np0005532048 ceph-osd[90703]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 22 03:34:01 np0005532048 ceph-osd[90703]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 22 03:34:01 np0005532048 ceph-osd[90703]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 22 03:34:01 np0005532048 ceph-osd[90703]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 22 03:34:01 np0005532048 ceph-osd[90703]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:34:01 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:34:01 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3650411801; not ready for session (expect reconnect)
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:34:01 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: OSD bench result of 8054.985890 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: from='osd.2 [v2:192.168.122.100:6810/3650411801,v1:192.168.122.100:6811/3650411801]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: osd.1 [v2:192.168.122.100:6806/3001320351,v1:192.168.122.100:6807/3001320351] boot
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: from='osd.2 [v2:192.168.122.100:6810/3650411801,v1:192.168.122.100:6811/3650411801]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:01 np0005532048 ceph-mon[75021]: from='osd.2 [v2:192.168.122.100:6810/3650411801,v1:192.168.122.100:6811/3650411801]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 03:34:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[13,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:01 np0005532048 ceph-mgr[75315]: [devicehealth INFO root] creating main.db for devicehealth
Nov 22 03:34:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v71: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 22 03:34:02 np0005532048 ceph-mgr[75315]: [devicehealth INFO root] Check health
Nov 22 03:34:02 np0005532048 ceph-mgr[75315]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 22 03:34:02 np0005532048 podman[91608]: 2025-11-22 08:34:02.173508511 +0000 UTC m=+0.066297712 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 03:34:02 np0005532048 podman[91608]: 2025-11-22 08:34:02.285701607 +0000 UTC m=+0.178490818 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:02 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3650411801; not ready for session (expect reconnect)
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:34:02 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:34:02 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:34:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:03 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3650411801; not ready for session (expect reconnect)
Nov 22 03:34:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:34:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:34:03 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:34:03 np0005532048 ceph-mon[75021]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 22 03:34:03 np0005532048 ceph-mon[75021]: Cluster is now healthy
Nov 22 03:34:03 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:03 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:03 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ldbkey(active, since 2m)
Nov 22 03:34:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v73: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 22 03:34:04 np0005532048 podman[92016]: 2025-11-22 08:34:04.114616175 +0000 UTC m=+0.071107719 container create d785039c84a1b2bafeb4677f5dfce30fc64b5d40795b66d0f77d19f5d1054130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 03:34:04 np0005532048 podman[92016]: 2025-11-22 08:34:04.064268471 +0000 UTC m=+0.020760035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:04 np0005532048 systemd[1]: Started libpod-conmon-d785039c84a1b2bafeb4677f5dfce30fc64b5d40795b66d0f77d19f5d1054130.scope.
Nov 22 03:34:04 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:04 np0005532048 podman[92016]: 2025-11-22 08:34:04.239302795 +0000 UTC m=+0.195794429 container init d785039c84a1b2bafeb4677f5dfce30fc64b5d40795b66d0f77d19f5d1054130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:34:04 np0005532048 podman[92016]: 2025-11-22 08:34:04.246398547 +0000 UTC m=+0.202890091 container start d785039c84a1b2bafeb4677f5dfce30fc64b5d40795b66d0f77d19f5d1054130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:04 np0005532048 sharp_bohr[92032]: 167 167
Nov 22 03:34:04 np0005532048 systemd[1]: libpod-d785039c84a1b2bafeb4677f5dfce30fc64b5d40795b66d0f77d19f5d1054130.scope: Deactivated successfully.
Nov 22 03:34:04 np0005532048 podman[92016]: 2025-11-22 08:34:04.275203197 +0000 UTC m=+0.231694771 container attach d785039c84a1b2bafeb4677f5dfce30fc64b5d40795b66d0f77d19f5d1054130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:04 np0005532048 podman[92016]: 2025-11-22 08:34:04.275919985 +0000 UTC m=+0.232411529 container died d785039c84a1b2bafeb4677f5dfce30fc64b5d40795b66d0f77d19f5d1054130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:34:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e19fee5b094bb06dcb41dc0ffa3b4aa721cc870e2ec36d4ae2ead7232993dd45-merged.mount: Deactivated successfully.
Nov 22 03:34:04 np0005532048 podman[92016]: 2025-11-22 08:34:04.394816084 +0000 UTC m=+0.351307648 container remove d785039c84a1b2bafeb4677f5dfce30fc64b5d40795b66d0f77d19f5d1054130 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:34:04 np0005532048 systemd[1]: libpod-conmon-d785039c84a1b2bafeb4677f5dfce30fc64b5d40795b66d0f77d19f5d1054130.scope: Deactivated successfully.
Nov 22 03:34:04 np0005532048 podman[92059]: 2025-11-22 08:34:04.547894734 +0000 UTC m=+0.043264233 container create 021b2cdd0f7ed38964e15a022ae1aefc7012259a79fa2934e3c45134f45983c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 03:34:04 np0005532048 podman[92059]: 2025-11-22 08:34:04.528021161 +0000 UTC m=+0.023390700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:04 np0005532048 systemd[1]: Started libpod-conmon-021b2cdd0f7ed38964e15a022ae1aefc7012259a79fa2934e3c45134f45983c3.scope.
Nov 22 03:34:04 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bf50433acadddf01e7da1d91e043d6dd7efc3dc2f257c9fed4e81a75404352/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bf50433acadddf01e7da1d91e043d6dd7efc3dc2f257c9fed4e81a75404352/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bf50433acadddf01e7da1d91e043d6dd7efc3dc2f257c9fed4e81a75404352/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bf50433acadddf01e7da1d91e043d6dd7efc3dc2f257c9fed4e81a75404352/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:04 np0005532048 podman[92059]: 2025-11-22 08:34:04.699727684 +0000 UTC m=+0.195097173 container init 021b2cdd0f7ed38964e15a022ae1aefc7012259a79fa2934e3c45134f45983c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:04 np0005532048 podman[92059]: 2025-11-22 08:34:04.707169555 +0000 UTC m=+0.202539014 container start 021b2cdd0f7ed38964e15a022ae1aefc7012259a79fa2934e3c45134f45983c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_curie, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:04 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3650411801; not ready for session (expect reconnect)
Nov 22 03:34:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:34:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:34:04 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:34:04 np0005532048 podman[92059]: 2025-11-22 08:34:04.736085198 +0000 UTC m=+0.231454687 container attach 021b2cdd0f7ed38964e15a022ae1aefc7012259a79fa2934e3c45134f45983c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_curie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:05 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3650411801; not ready for session (expect reconnect)
Nov 22 03:34:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:34:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:34:05 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]: [
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:    {
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:        "available": false,
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:        "ceph_device": false,
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:        "lsm_data": {},
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:        "lvs": [],
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:        "path": "/dev/sr0",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:        "rejected_reasons": [
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "Has a FileSystem",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "Insufficient space (<5GB)"
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:        ],
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:        "sys_api": {
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "actuators": null,
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "device_nodes": "sr0",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "devname": "sr0",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "human_readable_size": "482.00 KB",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "id_bus": "ata",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "model": "QEMU DVD-ROM",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "nr_requests": "2",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "parent": "/dev/sr0",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "partitions": {},
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "path": "/dev/sr0",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "removable": "1",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "rev": "2.5+",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "ro": "0",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "rotational": "1",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "sas_address": "",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "sas_device_handle": "",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "scheduler_mode": "mq-deadline",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "sectors": 0,
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "sectorsize": "2048",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "size": 493568.0,
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "support_discard": "2048",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "type": "disk",
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:            "vendor": "QEMU"
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:        }
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]:    }
Nov 22 03:34:06 np0005532048 ecstatic_curie[92075]: ]
Nov 22 03:34:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v74: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 22 03:34:06 np0005532048 ceph-osd[90703]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 28.393 iops: 7268.688 elapsed_sec: 0.413
Nov 22 03:34:06 np0005532048 ceph-osd[90703]: log_channel(cluster) log [WRN] : OSD bench result of 7268.688331 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 03:34:06 np0005532048 ceph-osd[90703]: osd.2 0 waiting for initial osdmap
Nov 22 03:34:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2[90699]: 2025-11-22T08:34:06.060+0000 7f26ba779640 -1 osd.2 0 waiting for initial osdmap
Nov 22 03:34:06 np0005532048 systemd[1]: libpod-021b2cdd0f7ed38964e15a022ae1aefc7012259a79fa2934e3c45134f45983c3.scope: Deactivated successfully.
Nov 22 03:34:06 np0005532048 podman[92059]: 2025-11-22 08:34:06.06694778 +0000 UTC m=+1.562317249 container died 021b2cdd0f7ed38964e15a022ae1aefc7012259a79fa2934e3c45134f45983c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_curie, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:06 np0005532048 systemd[1]: libpod-021b2cdd0f7ed38964e15a022ae1aefc7012259a79fa2934e3c45134f45983c3.scope: Consumed 1.372s CPU time.
Nov 22 03:34:06 np0005532048 ceph-osd[90703]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 22 03:34:06 np0005532048 ceph-osd[90703]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 22 03:34:06 np0005532048 ceph-osd[90703]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 22 03:34:06 np0005532048 ceph-osd[90703]: osd.2 16 check_osdmap_features require_osd_release unknown -> reef
Nov 22 03:34:06 np0005532048 ceph-osd[90703]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 03:34:06 np0005532048 ceph-osd[90703]: osd.2 16 set_numa_affinity not setting numa affinity
Nov 22 03:34:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-2[90699]: 2025-11-22T08:34:06.085+0000 7f26b5da1640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 03:34:06 np0005532048 ceph-osd[90703]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 22 03:34:06 np0005532048 systemd[1]: var-lib-containers-storage-overlay-64bf50433acadddf01e7da1d91e043d6dd7efc3dc2f257c9fed4e81a75404352-merged.mount: Deactivated successfully.
Nov 22 03:34:06 np0005532048 podman[92059]: 2025-11-22 08:34:06.126438906 +0000 UTC m=+1.621808375 container remove 021b2cdd0f7ed38964e15a022ae1aefc7012259a79fa2934e3c45134f45983c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_curie, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:34:06 np0005532048 systemd[1]: libpod-conmon-021b2cdd0f7ed38964e15a022ae1aefc7012259a79fa2934e3c45134f45983c3.scope: Deactivated successfully.
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 22 03:34:06 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43688k
Nov 22 03:34:06 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43688k
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 22 03:34:06 np0005532048 ceph-mgr[75315]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Nov 22 03:34:06 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev bb5a90c1-2bce-472e-a5aa-982f7773ca9f does not exist
Nov 22 03:34:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 0fb81afd-92eb-4b64-9992-4e0dcd6db16d does not exist
Nov 22 03:34:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev af612398-5f3a-4dcb-afd0-7f28871c2963 does not exist
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:34:06 np0005532048 podman[93995]: 2025-11-22 08:34:06.69693135 +0000 UTC m=+0.036683912 container create 5f968d45506995d0861d700c340e19d6075233558b9d2523535150cafb5d73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_engelbart, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:06 np0005532048 ceph-mgr[75315]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3650411801; not ready for session (expect reconnect)
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:34:06 np0005532048 ceph-mgr[75315]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:34:06 np0005532048 systemd[1]: Started libpod-conmon-5f968d45506995d0861d700c340e19d6075233558b9d2523535150cafb5d73c2.scope.
Nov 22 03:34:06 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:06 np0005532048 podman[93995]: 2025-11-22 08:34:06.681450724 +0000 UTC m=+0.021203306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:06 np0005532048 podman[93995]: 2025-11-22 08:34:06.780141532 +0000 UTC m=+0.119894094 container init 5f968d45506995d0861d700c340e19d6075233558b9d2523535150cafb5d73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:06 np0005532048 podman[93995]: 2025-11-22 08:34:06.787663715 +0000 UTC m=+0.127416277 container start 5f968d45506995d0861d700c340e19d6075233558b9d2523535150cafb5d73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_engelbart, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:34:06 np0005532048 podman[93995]: 2025-11-22 08:34:06.792499562 +0000 UTC m=+0.132252154 container attach 5f968d45506995d0861d700c340e19d6075233558b9d2523535150cafb5d73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:34:06 np0005532048 thirsty_engelbart[94012]: 167 167
Nov 22 03:34:06 np0005532048 systemd[1]: libpod-5f968d45506995d0861d700c340e19d6075233558b9d2523535150cafb5d73c2.scope: Deactivated successfully.
Nov 22 03:34:06 np0005532048 podman[93995]: 2025-11-22 08:34:06.797225608 +0000 UTC m=+0.136978170 container died 5f968d45506995d0861d700c340e19d6075233558b9d2523535150cafb5d73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:34:06 np0005532048 systemd[1]: var-lib-containers-storage-overlay-96356de3d0d9c725fe8bc4be2cce8decc5e4887100f3c59ee78487a7092a7836-merged.mount: Deactivated successfully.
Nov 22 03:34:06 np0005532048 podman[93995]: 2025-11-22 08:34:06.840167061 +0000 UTC m=+0.179919633 container remove 5f968d45506995d0861d700c340e19d6075233558b9d2523535150cafb5d73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 03:34:06 np0005532048 systemd[1]: libpod-conmon-5f968d45506995d0861d700c340e19d6075233558b9d2523535150cafb5d73c2.scope: Deactivated successfully.
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/3650411801,v1:192.168.122.100:6811/3650411801] boot
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:34:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:34:06 np0005532048 ceph-osd[90703]: osd.2 17 state: booting -> active
Nov 22 03:34:06 np0005532048 podman[94037]: 2025-11-22 08:34:06.997467924 +0000 UTC m=+0.049534365 container create e0bde73d76c4441ebdb8be4919d3c5d5222ef79c4d6d2ad7bb1a889e505f1e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:07 np0005532048 systemd[1]: Started libpod-conmon-e0bde73d76c4441ebdb8be4919d3c5d5222ef79c4d6d2ad7bb1a889e505f1e14.scope.
Nov 22 03:34:07 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425065025fdc22e84d4fd6f6fce41a303505a638c125b2278c2715fdd0d215fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425065025fdc22e84d4fd6f6fce41a303505a638c125b2278c2715fdd0d215fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425065025fdc22e84d4fd6f6fce41a303505a638c125b2278c2715fdd0d215fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425065025fdc22e84d4fd6f6fce41a303505a638c125b2278c2715fdd0d215fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425065025fdc22e84d4fd6f6fce41a303505a638c125b2278c2715fdd0d215fa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:07 np0005532048 podman[94037]: 2025-11-22 08:34:06.977077178 +0000 UTC m=+0.029143659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:07 np0005532048 podman[94037]: 2025-11-22 08:34:07.072048166 +0000 UTC m=+0.124114617 container init e0bde73d76c4441ebdb8be4919d3c5d5222ef79c4d6d2ad7bb1a889e505f1e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_robinson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:07 np0005532048 podman[94037]: 2025-11-22 08:34:07.080219185 +0000 UTC m=+0.132285636 container start e0bde73d76c4441ebdb8be4919d3c5d5222ef79c4d6d2ad7bb1a889e505f1e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_robinson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:07 np0005532048 podman[94037]: 2025-11-22 08:34:07.084265553 +0000 UTC m=+0.136332004 container attach e0bde73d76c4441ebdb8be4919d3c5d5222ef79c4d6d2ad7bb1a889e505f1e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_robinson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:07 np0005532048 ceph-mon[75021]: OSD bench result of 7268.688331 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 03:34:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:34:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 22 03:34:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 22 03:34:07 np0005532048 ceph-mon[75021]: Adjusting osd_memory_target on compute-0 to 43688k
Nov 22 03:34:07 np0005532048 ceph-mon[75021]: Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Nov 22 03:34:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:34:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:34:07 np0005532048 ceph-mon[75021]: osd.2 [v2:192.168.122.100:6810/3650411801,v1:192.168.122.100:6811/3650411801] boot
Nov 22 03:34:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Nov 22 03:34:08 np0005532048 vibrant_robinson[94051]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:34:08 np0005532048 vibrant_robinson[94051]: --> relative data size: 1.0
Nov 22 03:34:08 np0005532048 vibrant_robinson[94051]: --> All data devices are unavailable
Nov 22 03:34:08 np0005532048 systemd[1]: libpod-e0bde73d76c4441ebdb8be4919d3c5d5222ef79c4d6d2ad7bb1a889e505f1e14.scope: Deactivated successfully.
Nov 22 03:34:08 np0005532048 systemd[1]: libpod-e0bde73d76c4441ebdb8be4919d3c5d5222ef79c4d6d2ad7bb1a889e505f1e14.scope: Consumed 1.011s CPU time.
Nov 22 03:34:08 np0005532048 podman[94037]: 2025-11-22 08:34:08.145303408 +0000 UTC m=+1.197369859 container died e0bde73d76c4441ebdb8be4919d3c5d5222ef79c4d6d2ad7bb1a889e505f1e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_robinson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 22 03:34:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay-425065025fdc22e84d4fd6f6fce41a303505a638c125b2278c2715fdd0d215fa-merged.mount: Deactivated successfully.
Nov 22 03:34:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Nov 22 03:34:08 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Nov 22 03:34:08 np0005532048 podman[94037]: 2025-11-22 08:34:08.212987364 +0000 UTC m=+1.265053815 container remove e0bde73d76c4441ebdb8be4919d3c5d5222ef79c4d6d2ad7bb1a889e505f1e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_robinson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:08 np0005532048 systemd[1]: libpod-conmon-e0bde73d76c4441ebdb8be4919d3c5d5222ef79c4d6d2ad7bb1a889e505f1e14.scope: Deactivated successfully.
Nov 22 03:34:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:08 np0005532048 podman[94237]: 2025-11-22 08:34:08.818042138 +0000 UTC m=+0.049926465 container create 61437f8978ee73465cc4256a1bd5717b4c844cf72a71a960ecf25e5e424f8e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 03:34:08 np0005532048 systemd[1]: Started libpod-conmon-61437f8978ee73465cc4256a1bd5717b4c844cf72a71a960ecf25e5e424f8e44.scope.
Nov 22 03:34:08 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:08 np0005532048 podman[94237]: 2025-11-22 08:34:08.893777598 +0000 UTC m=+0.125661935 container init 61437f8978ee73465cc4256a1bd5717b4c844cf72a71a960ecf25e5e424f8e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:34:08 np0005532048 podman[94237]: 2025-11-22 08:34:08.800925292 +0000 UTC m=+0.032809639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:08 np0005532048 podman[94237]: 2025-11-22 08:34:08.902607592 +0000 UTC m=+0.134491909 container start 61437f8978ee73465cc4256a1bd5717b4c844cf72a71a960ecf25e5e424f8e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:08 np0005532048 confident_carver[94254]: 167 167
Nov 22 03:34:08 np0005532048 systemd[1]: libpod-61437f8978ee73465cc4256a1bd5717b4c844cf72a71a960ecf25e5e424f8e44.scope: Deactivated successfully.
Nov 22 03:34:08 np0005532048 podman[94237]: 2025-11-22 08:34:08.907247736 +0000 UTC m=+0.139132083 container attach 61437f8978ee73465cc4256a1bd5717b4c844cf72a71a960ecf25e5e424f8e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:08 np0005532048 podman[94237]: 2025-11-22 08:34:08.908437384 +0000 UTC m=+0.140321701 container died 61437f8978ee73465cc4256a1bd5717b4c844cf72a71a960ecf25e5e424f8e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_carver, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:34:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay-495c051613cf1d6b37cc94943c5c11631f51c38ec074ccafb4c9fb6610e5bdad-merged.mount: Deactivated successfully.
Nov 22 03:34:08 np0005532048 podman[94237]: 2025-11-22 08:34:08.951005579 +0000 UTC m=+0.182889896 container remove 61437f8978ee73465cc4256a1bd5717b4c844cf72a71a960ecf25e5e424f8e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_carver, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 22 03:34:08 np0005532048 systemd[1]: libpod-conmon-61437f8978ee73465cc4256a1bd5717b4c844cf72a71a960ecf25e5e424f8e44.scope: Deactivated successfully.
Nov 22 03:34:09 np0005532048 podman[94278]: 2025-11-22 08:34:09.104667723 +0000 UTC m=+0.049585356 container create c3ab66e6c6bd820b520a8672e0ec826275b0cef639afda8ff41a1cf916df0891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_poincare, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:34:09 np0005532048 systemd[1]: Started libpod-conmon-c3ab66e6c6bd820b520a8672e0ec826275b0cef639afda8ff41a1cf916df0891.scope.
Nov 22 03:34:09 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05257d1ba8b1db9ea6f93142416890cbe176f10696552f1f8893e88ec99ac058/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:09 np0005532048 podman[94278]: 2025-11-22 08:34:09.082929865 +0000 UTC m=+0.027847518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05257d1ba8b1db9ea6f93142416890cbe176f10696552f1f8893e88ec99ac058/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05257d1ba8b1db9ea6f93142416890cbe176f10696552f1f8893e88ec99ac058/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05257d1ba8b1db9ea6f93142416890cbe176f10696552f1f8893e88ec99ac058/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:09 np0005532048 podman[94278]: 2025-11-22 08:34:09.189853333 +0000 UTC m=+0.134770986 container init c3ab66e6c6bd820b520a8672e0ec826275b0cef639afda8ff41a1cf916df0891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:34:09 np0005532048 podman[94278]: 2025-11-22 08:34:09.199881327 +0000 UTC m=+0.144798960 container start c3ab66e6c6bd820b520a8672e0ec826275b0cef639afda8ff41a1cf916df0891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:09 np0005532048 podman[94278]: 2025-11-22 08:34:09.203495595 +0000 UTC m=+0.148413238 container attach c3ab66e6c6bd820b520a8672e0ec826275b0cef639afda8ff41a1cf916df0891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]: {
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:    "0": [
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:        {
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "devices": [
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "/dev/loop3"
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            ],
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "lv_name": "ceph_lv0",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "lv_size": "21470642176",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "name": "ceph_lv0",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "tags": {
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.cluster_name": "ceph",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.crush_device_class": "",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.encrypted": "0",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.osd_id": "0",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.type": "block",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.vdo": "0"
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            },
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "type": "block",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "vg_name": "ceph_vg0"
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:        }
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:    ],
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:    "1": [
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:        {
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "devices": [
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "/dev/loop4"
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            ],
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "lv_name": "ceph_lv1",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "lv_size": "21470642176",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "name": "ceph_lv1",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "tags": {
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.cluster_name": "ceph",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.crush_device_class": "",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.encrypted": "0",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.osd_id": "1",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.type": "block",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "ceph.vdo": "0"
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            },
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "type": "block",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "vg_name": "ceph_vg1"
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:        }
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:    ],
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:    "2": [
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:        {
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "devices": [
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:                "/dev/loop5"
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            ],
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "lv_name": "ceph_lv2",
Nov 22 03:34:09 np0005532048 romantic_poincare[94294]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:            "lv_size": "21470642176",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:            "name": "ceph_lv2",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:            "tags": {
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:                "ceph.cluster_name": "ceph",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:                "ceph.crush_device_class": "",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:                "ceph.encrypted": "0",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:                "ceph.osd_id": "2",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:                "ceph.type": "block",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:                "ceph.vdo": "0"
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:            },
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:            "type": "block",
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:            "vg_name": "ceph_vg2"
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:        }
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]:    ]
Nov 22 03:34:10 np0005532048 romantic_poincare[94294]: }
Nov 22 03:34:10 np0005532048 systemd[1]: libpod-c3ab66e6c6bd820b520a8672e0ec826275b0cef639afda8ff41a1cf916df0891.scope: Deactivated successfully.
Nov 22 03:34:10 np0005532048 podman[94278]: 2025-11-22 08:34:10.021811382 +0000 UTC m=+0.966729015 container died c3ab66e6c6bd820b520a8672e0ec826275b0cef639afda8ff41a1cf916df0891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 03:34:10 np0005532048 systemd[1]: var-lib-containers-storage-overlay-05257d1ba8b1db9ea6f93142416890cbe176f10696552f1f8893e88ec99ac058-merged.mount: Deactivated successfully.
Nov 22 03:34:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 1.2 GiB used, 59 GiB / 60 GiB avail
Nov 22 03:34:10 np0005532048 podman[94278]: 2025-11-22 08:34:10.080903978 +0000 UTC m=+1.025821611 container remove c3ab66e6c6bd820b520a8672e0ec826275b0cef639afda8ff41a1cf916df0891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_poincare, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:10 np0005532048 systemd[1]: libpod-conmon-c3ab66e6c6bd820b520a8672e0ec826275b0cef639afda8ff41a1cf916df0891.scope: Deactivated successfully.
Nov 22 03:34:10 np0005532048 podman[94454]: 2025-11-22 08:34:10.667852312 +0000 UTC m=+0.038239850 container create 10a589445afe8a46a4bed218f3ab963461cd0dc2ec154493272e24c7d84c6676 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_taussig, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 03:34:10 np0005532048 systemd[1]: Started libpod-conmon-10a589445afe8a46a4bed218f3ab963461cd0dc2ec154493272e24c7d84c6676.scope.
Nov 22 03:34:10 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:10 np0005532048 podman[94454]: 2025-11-22 08:34:10.651089934 +0000 UTC m=+0.021477512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:10 np0005532048 podman[94454]: 2025-11-22 08:34:10.747348924 +0000 UTC m=+0.117736502 container init 10a589445afe8a46a4bed218f3ab963461cd0dc2ec154493272e24c7d84c6676 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:34:10 np0005532048 podman[94454]: 2025-11-22 08:34:10.753492303 +0000 UTC m=+0.123879851 container start 10a589445afe8a46a4bed218f3ab963461cd0dc2ec154493272e24c7d84c6676 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 03:34:10 np0005532048 clever_taussig[94470]: 167 167
Nov 22 03:34:10 np0005532048 podman[94454]: 2025-11-22 08:34:10.757129972 +0000 UTC m=+0.127517540 container attach 10a589445afe8a46a4bed218f3ab963461cd0dc2ec154493272e24c7d84c6676 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:10 np0005532048 systemd[1]: libpod-10a589445afe8a46a4bed218f3ab963461cd0dc2ec154493272e24c7d84c6676.scope: Deactivated successfully.
Nov 22 03:34:10 np0005532048 podman[94454]: 2025-11-22 08:34:10.758115496 +0000 UTC m=+0.128503044 container died 10a589445afe8a46a4bed218f3ab963461cd0dc2ec154493272e24c7d84c6676 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 22 03:34:10 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1d083d2c1a5b970795fd3ac0ce21c9cbb5bd670fa5e0a485ffa6e97dc7e3f84a-merged.mount: Deactivated successfully.
Nov 22 03:34:10 np0005532048 podman[94454]: 2025-11-22 08:34:10.796411436 +0000 UTC m=+0.166798984 container remove 10a589445afe8a46a4bed218f3ab963461cd0dc2ec154493272e24c7d84c6676 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:10 np0005532048 systemd[1]: libpod-conmon-10a589445afe8a46a4bed218f3ab963461cd0dc2ec154493272e24c7d84c6676.scope: Deactivated successfully.
Nov 22 03:34:10 np0005532048 podman[94492]: 2025-11-22 08:34:10.93771093 +0000 UTC m=+0.040622628 container create a0dda31b458718f135c42f1245d04de419c6d5812c3e1f74500c2ba0e4a7fd62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:34:10 np0005532048 systemd[1]: Started libpod-conmon-a0dda31b458718f135c42f1245d04de419c6d5812c3e1f74500c2ba0e4a7fd62.scope.
Nov 22 03:34:10 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c27eee3712b0dd528ec2a848eab19bb25b251b059f0c294195ce6c84f21aa07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c27eee3712b0dd528ec2a848eab19bb25b251b059f0c294195ce6c84f21aa07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c27eee3712b0dd528ec2a848eab19bb25b251b059f0c294195ce6c84f21aa07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c27eee3712b0dd528ec2a848eab19bb25b251b059f0c294195ce6c84f21aa07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:11 np0005532048 podman[94492]: 2025-11-22 08:34:11.012876926 +0000 UTC m=+0.115788654 container init a0dda31b458718f135c42f1245d04de419c6d5812c3e1f74500c2ba0e4a7fd62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:34:11 np0005532048 podman[94492]: 2025-11-22 08:34:10.919391915 +0000 UTC m=+0.022303633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:11 np0005532048 podman[94492]: 2025-11-22 08:34:11.020447281 +0000 UTC m=+0.123358979 container start a0dda31b458718f135c42f1245d04de419c6d5812c3e1f74500c2ba0e4a7fd62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:34:11 np0005532048 podman[94492]: 2025-11-22 08:34:11.024771646 +0000 UTC m=+0.127683344 container attach a0dda31b458718f135c42f1245d04de419c6d5812c3e1f74500c2ba0e4a7fd62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]: {
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:        "osd_id": 1,
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:        "type": "bluestore"
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:    },
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:        "osd_id": 0,
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:        "type": "bluestore"
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:    },
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:        "osd_id": 2,
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:        "type": "bluestore"
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]:    }
Nov 22 03:34:11 np0005532048 frosty_boyd[94508]: }
Nov 22 03:34:12 np0005532048 systemd[1]: libpod-a0dda31b458718f135c42f1245d04de419c6d5812c3e1f74500c2ba0e4a7fd62.scope: Deactivated successfully.
Nov 22 03:34:12 np0005532048 systemd[1]: libpod-a0dda31b458718f135c42f1245d04de419c6d5812c3e1f74500c2ba0e4a7fd62.scope: Consumed 1.013s CPU time.
Nov 22 03:34:12 np0005532048 podman[94492]: 2025-11-22 08:34:12.029093383 +0000 UTC m=+1.132005081 container died a0dda31b458718f135c42f1245d04de419c6d5812c3e1f74500c2ba0e4a7fd62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Nov 22 03:34:12 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9c27eee3712b0dd528ec2a848eab19bb25b251b059f0c294195ce6c84f21aa07-merged.mount: Deactivated successfully.
Nov 22 03:34:12 np0005532048 podman[94492]: 2025-11-22 08:34:12.094582214 +0000 UTC m=+1.197493912 container remove a0dda31b458718f135c42f1245d04de419c6d5812c3e1f74500c2ba0e4a7fd62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 03:34:12 np0005532048 python3[94561]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:12 np0005532048 systemd[1]: libpod-conmon-a0dda31b458718f135c42f1245d04de419c6d5812c3e1f74500c2ba0e4a7fd62.scope: Deactivated successfully.
Nov 22 03:34:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:34:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:34:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:12 np0005532048 podman[94581]: 2025-11-22 08:34:12.156336585 +0000 UTC m=+0.043677222 container create eaba32fc444760288abe844da18752d56c1f4582236a55501b2cdd0047cf83d4 (image=quay.io/ceph/ceph:v18, name=stupefied_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:12 np0005532048 systemd[1]: Started libpod-conmon-eaba32fc444760288abe844da18752d56c1f4582236a55501b2cdd0047cf83d4.scope.
Nov 22 03:34:12 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a11216c1143a48ff7a4ef25b789564f60cf07591fd2eb4590b36f7376a9354/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a11216c1143a48ff7a4ef25b789564f60cf07591fd2eb4590b36f7376a9354/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a11216c1143a48ff7a4ef25b789564f60cf07591fd2eb4590b36f7376a9354/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:12 np0005532048 podman[94581]: 2025-11-22 08:34:12.229086683 +0000 UTC m=+0.116427320 container init eaba32fc444760288abe844da18752d56c1f4582236a55501b2cdd0047cf83d4 (image=quay.io/ceph/ceph:v18, name=stupefied_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:12 np0005532048 podman[94581]: 2025-11-22 08:34:12.134564197 +0000 UTC m=+0.021904854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:12 np0005532048 podman[94581]: 2025-11-22 08:34:12.236670888 +0000 UTC m=+0.124011505 container start eaba32fc444760288abe844da18752d56c1f4582236a55501b2cdd0047cf83d4 (image=quay.io/ceph/ceph:v18, name=stupefied_boyd, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:12 np0005532048 podman[94581]: 2025-11-22 08:34:12.240166373 +0000 UTC m=+0.127507020 container attach eaba32fc444760288abe844da18752d56c1f4582236a55501b2cdd0047cf83d4 (image=quay.io/ceph/ceph:v18, name=stupefied_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 03:34:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4112399694' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 03:34:12 np0005532048 stupefied_boyd[94620]: 
Nov 22 03:34:12 np0005532048 stupefied_boyd[94620]: {"fsid":"34829716-a12c-57a6-8915-c1aa615c9d8a","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":195,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":18,"num_osds":3,"num_up_osds":3,"osd_up_since":1763800446,"num_in_osds":3,"osd_in_since":1763800405,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":922169344,"bytes_avail":63489757184,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-22T08:32:54.036030+0000","services":{}},"progress_events":{}}
Nov 22 03:34:12 np0005532048 systemd[1]: libpod-eaba32fc444760288abe844da18752d56c1f4582236a55501b2cdd0047cf83d4.scope: Deactivated successfully.
Nov 22 03:34:12 np0005532048 podman[94581]: 2025-11-22 08:34:12.934282191 +0000 UTC m=+0.821622818 container died eaba32fc444760288abe844da18752d56c1f4582236a55501b2cdd0047cf83d4 (image=quay.io/ceph/ceph:v18, name=stupefied_boyd, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:34:12 np0005532048 podman[94844]: 2025-11-22 08:34:12.936040374 +0000 UTC m=+0.065598026 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:34:12 np0005532048 systemd[1]: var-lib-containers-storage-overlay-55a11216c1143a48ff7a4ef25b789564f60cf07591fd2eb4590b36f7376a9354-merged.mount: Deactivated successfully.
Nov 22 03:34:12 np0005532048 podman[94581]: 2025-11-22 08:34:12.991087511 +0000 UTC m=+0.878428138 container remove eaba32fc444760288abe844da18752d56c1f4582236a55501b2cdd0047cf83d4 (image=quay.io/ceph/ceph:v18, name=stupefied_boyd, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 03:34:13 np0005532048 systemd[1]: libpod-conmon-eaba32fc444760288abe844da18752d56c1f4582236a55501b2cdd0047cf83d4.scope: Deactivated successfully.
Nov 22 03:34:13 np0005532048 podman[94844]: 2025-11-22 08:34:13.043431154 +0000 UTC m=+0.172988786 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:13 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 2eaf287d-e505-4bab-89ea-0cbe84161c58 does not exist
Nov 22 03:34:13 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ede822a1-af50-471a-9dd7-2502c4bc6091 does not exist
Nov 22 03:34:13 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 13033481-224a-4fc1-b2bc-38341ce2ed23 does not exist
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:34:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:34:13 np0005532048 python3[94987]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:13 np0005532048 podman[95010]: 2025-11-22 08:34:13.551418459 +0000 UTC m=+0.042172267 container create d1339bb990e6d292cb91ad601b27c6a0e35376d98b815234fd180fcdd666c143 (image=quay.io/ceph/ceph:v18, name=intelligent_keldysh, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:13 np0005532048 systemd[1]: Started libpod-conmon-d1339bb990e6d292cb91ad601b27c6a0e35376d98b815234fd180fcdd666c143.scope.
Nov 22 03:34:13 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:13 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cffa3ad7cdeaa495167a0b74ed37854ac1dbd5603a394d0908f4fdf190f0e3d6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:13 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cffa3ad7cdeaa495167a0b74ed37854ac1dbd5603a394d0908f4fdf190f0e3d6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:13 np0005532048 podman[95010]: 2025-11-22 08:34:13.608370872 +0000 UTC m=+0.099124710 container init d1339bb990e6d292cb91ad601b27c6a0e35376d98b815234fd180fcdd666c143 (image=quay.io/ceph/ceph:v18, name=intelligent_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 03:34:13 np0005532048 podman[95010]: 2025-11-22 08:34:13.615168478 +0000 UTC m=+0.105922286 container start d1339bb990e6d292cb91ad601b27c6a0e35376d98b815234fd180fcdd666c143 (image=quay.io/ceph/ceph:v18, name=intelligent_keldysh, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 03:34:13 np0005532048 podman[95010]: 2025-11-22 08:34:13.619028482 +0000 UTC m=+0.109782310 container attach d1339bb990e6d292cb91ad601b27c6a0e35376d98b815234fd180fcdd666c143 (image=quay.io/ceph/ceph:v18, name=intelligent_keldysh, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 03:34:13 np0005532048 podman[95010]: 2025-11-22 08:34:13.534962799 +0000 UTC m=+0.025716627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:14 np0005532048 podman[95163]: 2025-11-22 08:34:14.015020915 +0000 UTC m=+0.050406095 container create f5d9d771eaaf6e485da1125163b5539b9683ae30b153563e8dfb5353c739fe54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Nov 22 03:34:14 np0005532048 systemd[1]: Started libpod-conmon-f5d9d771eaaf6e485da1125163b5539b9683ae30b153563e8dfb5353c739fe54.scope.
Nov 22 03:34:14 np0005532048 podman[95163]: 2025-11-22 08:34:13.986883222 +0000 UTC m=+0.022268422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:14 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:14 np0005532048 podman[95163]: 2025-11-22 08:34:14.095706046 +0000 UTC m=+0.131091236 container init f5d9d771eaaf6e485da1125163b5539b9683ae30b153563e8dfb5353c739fe54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:34:14 np0005532048 podman[95163]: 2025-11-22 08:34:14.101422114 +0000 UTC m=+0.136807294 container start f5d9d771eaaf6e485da1125163b5539b9683ae30b153563e8dfb5353c739fe54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carver, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 03:34:14 np0005532048 podman[95163]: 2025-11-22 08:34:14.105104004 +0000 UTC m=+0.140489184 container attach f5d9d771eaaf6e485da1125163b5539b9683ae30b153563e8dfb5353c739fe54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carver, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:34:14 np0005532048 goofy_carver[95199]: 167 167
Nov 22 03:34:14 np0005532048 systemd[1]: libpod-f5d9d771eaaf6e485da1125163b5539b9683ae30b153563e8dfb5353c739fe54.scope: Deactivated successfully.
Nov 22 03:34:14 np0005532048 podman[95163]: 2025-11-22 08:34:14.107429821 +0000 UTC m=+0.142815001 container died f5d9d771eaaf6e485da1125163b5539b9683ae30b153563e8dfb5353c739fe54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carver, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:34:14 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5288dffa29e02131022967840dfc923ce68b032c74f97c9c655d783d99c09e6a-merged.mount: Deactivated successfully.
Nov 22 03:34:14 np0005532048 podman[95163]: 2025-11-22 08:34:14.161346881 +0000 UTC m=+0.196732061 container remove f5d9d771eaaf6e485da1125163b5539b9683ae30b153563e8dfb5353c739fe54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carver, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:34:14 np0005532048 systemd[1]: libpod-conmon-f5d9d771eaaf6e485da1125163b5539b9683ae30b153563e8dfb5353c739fe54.scope: Deactivated successfully.
Nov 22 03:34:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 03:34:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/398275249' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:34:14 np0005532048 podman[95226]: 2025-11-22 08:34:14.314183495 +0000 UTC m=+0.037298887 container create 7a85fa597a1236c85f0c3b124f0d1a9607cd33880f55465a2835102e5e677f14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 22 03:34:14 np0005532048 systemd[1]: Started libpod-conmon-7a85fa597a1236c85f0c3b124f0d1a9607cd33880f55465a2835102e5e677f14.scope.
Nov 22 03:34:14 np0005532048 podman[95226]: 2025-11-22 08:34:14.298105605 +0000 UTC m=+0.021221027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:14 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c48ee526df71d5266d260922dc34be1b61609421ad34aa140c0cb5789396ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c48ee526df71d5266d260922dc34be1b61609421ad34aa140c0cb5789396ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c48ee526df71d5266d260922dc34be1b61609421ad34aa140c0cb5789396ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c48ee526df71d5266d260922dc34be1b61609421ad34aa140c0cb5789396ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c48ee526df71d5266d260922dc34be1b61609421ad34aa140c0cb5789396ea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:14 np0005532048 podman[95226]: 2025-11-22 08:34:14.413025568 +0000 UTC m=+0.136140970 container init 7a85fa597a1236c85f0c3b124f0d1a9607cd33880f55465a2835102e5e677f14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:14 np0005532048 podman[95226]: 2025-11-22 08:34:14.419260439 +0000 UTC m=+0.142375831 container start 7a85fa597a1236c85f0c3b124f0d1a9607cd33880f55465a2835102e5e677f14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:14 np0005532048 podman[95226]: 2025-11-22 08:34:14.422863717 +0000 UTC m=+0.145979119 container attach 7a85fa597a1236c85f0c3b124f0d1a9607cd33880f55465a2835102e5e677f14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:14 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:14 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:14 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:34:14 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:14 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:34:14 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/398275249' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:34:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 22 03:34:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/398275249' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:34:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 22 03:34:14 np0005532048 intelligent_keldysh[95067]: pool 'vms' created
Nov 22 03:34:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 22 03:34:14 np0005532048 systemd[1]: libpod-d1339bb990e6d292cb91ad601b27c6a0e35376d98b815234fd180fcdd666c143.scope: Deactivated successfully.
Nov 22 03:34:14 np0005532048 podman[95010]: 2025-11-22 08:34:14.503611959 +0000 UTC m=+0.994365767 container died d1339bb990e6d292cb91ad601b27c6a0e35376d98b815234fd180fcdd666c143 (image=quay.io/ceph/ceph:v18, name=intelligent_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:34:14 np0005532048 systemd[1]: var-lib-containers-storage-overlay-cffa3ad7cdeaa495167a0b74ed37854ac1dbd5603a394d0908f4fdf190f0e3d6-merged.mount: Deactivated successfully.
Nov 22 03:34:14 np0005532048 podman[95010]: 2025-11-22 08:34:14.638439726 +0000 UTC m=+1.129193534 container remove d1339bb990e6d292cb91ad601b27c6a0e35376d98b815234fd180fcdd666c143 (image=quay.io/ceph/ceph:v18, name=intelligent_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 03:34:14 np0005532048 systemd[1]: libpod-conmon-d1339bb990e6d292cb91ad601b27c6a0e35376d98b815234fd180fcdd666c143.scope: Deactivated successfully.
Nov 22 03:34:14 np0005532048 python3[95286]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:14 np0005532048 podman[95287]: 2025-11-22 08:34:14.963130386 +0000 UTC m=+0.044980354 container create 5effd71ba8edb18e2aebbfe20e2850f38b98431389e10fd3ab0f84ce7d9731f1 (image=quay.io/ceph/ceph:v18, name=competent_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 03:34:14 np0005532048 systemd[1]: Started libpod-conmon-5effd71ba8edb18e2aebbfe20e2850f38b98431389e10fd3ab0f84ce7d9731f1.scope.
Nov 22 03:34:15 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7964b5f4325c96009b81815d6fdb4ba02191ee7d39b4561e7d356a8ec54e324/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7964b5f4325c96009b81815d6fdb4ba02191ee7d39b4561e7d356a8ec54e324/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:15 np0005532048 podman[95287]: 2025-11-22 08:34:15.035082415 +0000 UTC m=+0.116932413 container init 5effd71ba8edb18e2aebbfe20e2850f38b98431389e10fd3ab0f84ce7d9731f1 (image=quay.io/ceph/ceph:v18, name=competent_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 22 03:34:15 np0005532048 podman[95287]: 2025-11-22 08:34:14.943466689 +0000 UTC m=+0.025316687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:15 np0005532048 podman[95287]: 2025-11-22 08:34:15.040623159 +0000 UTC m=+0.122473137 container start 5effd71ba8edb18e2aebbfe20e2850f38b98431389e10fd3ab0f84ce7d9731f1 (image=quay.io/ceph/ceph:v18, name=competent_bartik, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:34:15 np0005532048 podman[95287]: 2025-11-22 08:34:15.044713359 +0000 UTC m=+0.126563327 container attach 5effd71ba8edb18e2aebbfe20e2850f38b98431389e10fd3ab0f84ce7d9731f1 (image=quay.io/ceph/ceph:v18, name=competent_bartik, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:34:15 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:15 np0005532048 upbeat_raman[95243]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:34:15 np0005532048 upbeat_raman[95243]: --> relative data size: 1.0
Nov 22 03:34:15 np0005532048 upbeat_raman[95243]: --> All data devices are unavailable
Nov 22 03:34:15 np0005532048 systemd[1]: libpod-7a85fa597a1236c85f0c3b124f0d1a9607cd33880f55465a2835102e5e677f14.scope: Deactivated successfully.
Nov 22 03:34:15 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/398275249' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:34:15 np0005532048 podman[95350]: 2025-11-22 08:34:15.492086861 +0000 UTC m=+0.028639707 container died 7a85fa597a1236c85f0c3b124f0d1a9607cd33880f55465a2835102e5e677f14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:34:15 np0005532048 systemd[1]: var-lib-containers-storage-overlay-56c48ee526df71d5266d260922dc34be1b61609421ad34aa140c0cb5789396ea-merged.mount: Deactivated successfully.
Nov 22 03:34:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 22 03:34:15 np0005532048 podman[95350]: 2025-11-22 08:34:15.544272799 +0000 UTC m=+0.080825625 container remove 7a85fa597a1236c85f0c3b124f0d1a9607cd33880f55465a2835102e5e677f14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:34:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 22 03:34:15 np0005532048 systemd[1]: libpod-conmon-7a85fa597a1236c85f0c3b124f0d1a9607cd33880f55465a2835102e5e677f14.scope: Deactivated successfully.
Nov 22 03:34:15 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 22 03:34:15 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 03:34:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1122705449' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:34:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v83: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Nov 22 03:34:16 np0005532048 podman[95508]: 2025-11-22 08:34:16.151790713 +0000 UTC m=+0.083408468 container create d2ecf55143c59740aac043f22da2a23fe955e26bd20101784032a25661fbc5ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackburn, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 03:34:16 np0005532048 podman[95508]: 2025-11-22 08:34:16.089268354 +0000 UTC m=+0.020886129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:16 np0005532048 systemd[1]: Started libpod-conmon-d2ecf55143c59740aac043f22da2a23fe955e26bd20101784032a25661fbc5ce.scope.
Nov 22 03:34:16 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:16 np0005532048 podman[95508]: 2025-11-22 08:34:16.506475272 +0000 UTC m=+0.438093037 container init d2ecf55143c59740aac043f22da2a23fe955e26bd20101784032a25661fbc5ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:34:16 np0005532048 podman[95508]: 2025-11-22 08:34:16.514375795 +0000 UTC m=+0.445993560 container start d2ecf55143c59740aac043f22da2a23fe955e26bd20101784032a25661fbc5ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackburn, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:16 np0005532048 systemd[1]: libpod-d2ecf55143c59740aac043f22da2a23fe955e26bd20101784032a25661fbc5ce.scope: Deactivated successfully.
Nov 22 03:34:16 np0005532048 jovial_blackburn[95524]: 167 167
Nov 22 03:34:16 np0005532048 conmon[95524]: conmon d2ecf55143c59740aac0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d2ecf55143c59740aac043f22da2a23fe955e26bd20101784032a25661fbc5ce.scope/container/memory.events
Nov 22 03:34:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 22 03:34:16 np0005532048 podman[95508]: 2025-11-22 08:34:16.549634772 +0000 UTC m=+0.481252547 container attach d2ecf55143c59740aac043f22da2a23fe955e26bd20101784032a25661fbc5ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:16 np0005532048 podman[95508]: 2025-11-22 08:34:16.550031221 +0000 UTC m=+0.481648976 container died d2ecf55143c59740aac043f22da2a23fe955e26bd20101784032a25661fbc5ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackburn, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:34:16 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:34:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1122705449' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:34:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 22 03:34:16 np0005532048 competent_bartik[95303]: pool 'volumes' created
Nov 22 03:34:16 np0005532048 systemd[1]: libpod-5effd71ba8edb18e2aebbfe20e2850f38b98431389e10fd3ab0f84ce7d9731f1.scope: Deactivated successfully.
Nov 22 03:34:16 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 22 03:34:16 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1122705449' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:34:16 np0005532048 systemd[1]: var-lib-containers-storage-overlay-11d3835c287a498deccdfa0a73d0a4841342c8808ad37da0d8b90e90120b84d6-merged.mount: Deactivated successfully.
Nov 22 03:34:16 np0005532048 podman[95508]: 2025-11-22 08:34:16.75451014 +0000 UTC m=+0.686127895 container remove d2ecf55143c59740aac043f22da2a23fe955e26bd20101784032a25661fbc5ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:34:16 np0005532048 systemd[1]: libpod-conmon-d2ecf55143c59740aac043f22da2a23fe955e26bd20101784032a25661fbc5ce.scope: Deactivated successfully.
Nov 22 03:34:16 np0005532048 podman[95287]: 2025-11-22 08:34:16.796549462 +0000 UTC m=+1.878399460 container died 5effd71ba8edb18e2aebbfe20e2850f38b98431389e10fd3ab0f84ce7d9731f1 (image=quay.io/ceph/ceph:v18, name=competent_bartik, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 03:34:16 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b7964b5f4325c96009b81815d6fdb4ba02191ee7d39b4561e7d356a8ec54e324-merged.mount: Deactivated successfully.
Nov 22 03:34:16 np0005532048 podman[95542]: 2025-11-22 08:34:16.842963409 +0000 UTC m=+0.215668922 container remove 5effd71ba8edb18e2aebbfe20e2850f38b98431389e10fd3ab0f84ce7d9731f1 (image=quay.io/ceph/ceph:v18, name=competent_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:16 np0005532048 systemd[1]: libpod-conmon-5effd71ba8edb18e2aebbfe20e2850f38b98431389e10fd3ab0f84ce7d9731f1.scope: Deactivated successfully.
Nov 22 03:34:16 np0005532048 podman[95567]: 2025-11-22 08:34:16.899620977 +0000 UTC m=+0.038598310 container create a90e0c8b9a9e3c752badb67de185743f5e615d5fa3a16b1987d3c7559578486c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hopper, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 22 03:34:16 np0005532048 systemd[1]: Started libpod-conmon-a90e0c8b9a9e3c752badb67de185743f5e615d5fa3a16b1987d3c7559578486c.scope.
Nov 22 03:34:16 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855735d79420569adf6c4089a315925b151d29593cfff77f5bf5b88770d6fb1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855735d79420569adf6c4089a315925b151d29593cfff77f5bf5b88770d6fb1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855735d79420569adf6c4089a315925b151d29593cfff77f5bf5b88770d6fb1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855735d79420569adf6c4089a315925b151d29593cfff77f5bf5b88770d6fb1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:16 np0005532048 podman[95567]: 2025-11-22 08:34:16.97749581 +0000 UTC m=+0.116473173 container init a90e0c8b9a9e3c752badb67de185743f5e615d5fa3a16b1987d3c7559578486c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hopper, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 03:34:16 np0005532048 podman[95567]: 2025-11-22 08:34:16.882773647 +0000 UTC m=+0.021751030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:16 np0005532048 podman[95567]: 2025-11-22 08:34:16.98492302 +0000 UTC m=+0.123900363 container start a90e0c8b9a9e3c752badb67de185743f5e615d5fa3a16b1987d3c7559578486c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:16 np0005532048 podman[95567]: 2025-11-22 08:34:16.98822822 +0000 UTC m=+0.127205583 container attach a90e0c8b9a9e3c752badb67de185743f5e615d5fa3a16b1987d3c7559578486c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hopper, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:17 np0005532048 python3[95612]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:17 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:17 np0005532048 podman[95614]: 2025-11-22 08:34:17.174709572 +0000 UTC m=+0.045722542 container create d4142742cc60b839695b01e388e46dbdeb9dab153c0ac392107ac1850c485052 (image=quay.io/ceph/ceph:v18, name=zen_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:34:17 np0005532048 systemd[76524]: Starting Mark boot as successful...
Nov 22 03:34:17 np0005532048 systemd[76524]: Finished Mark boot as successful.
Nov 22 03:34:17 np0005532048 systemd[1]: Started libpod-conmon-d4142742cc60b839695b01e388e46dbdeb9dab153c0ac392107ac1850c485052.scope.
Nov 22 03:34:17 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:17 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf5afe7feea035088eaf2b82b0ef9b0e6ddd761148a52f8bda60bc3e65a9f97e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:17 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf5afe7feea035088eaf2b82b0ef9b0e6ddd761148a52f8bda60bc3e65a9f97e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:17 np0005532048 podman[95614]: 2025-11-22 08:34:17.23305361 +0000 UTC m=+0.104066600 container init d4142742cc60b839695b01e388e46dbdeb9dab153c0ac392107ac1850c485052 (image=quay.io/ceph/ceph:v18, name=zen_elion, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:17 np0005532048 podman[95614]: 2025-11-22 08:34:17.239955928 +0000 UTC m=+0.110968898 container start d4142742cc60b839695b01e388e46dbdeb9dab153c0ac392107ac1850c485052 (image=quay.io/ceph/ceph:v18, name=zen_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:17 np0005532048 podman[95614]: 2025-11-22 08:34:17.244552549 +0000 UTC m=+0.115565539 container attach d4142742cc60b839695b01e388e46dbdeb9dab153c0ac392107ac1850c485052 (image=quay.io/ceph/ceph:v18, name=zen_elion, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 03:34:17 np0005532048 podman[95614]: 2025-11-22 08:34:17.150283048 +0000 UTC m=+0.021296018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 22 03:34:17 np0005532048 ceph-mon[75021]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:34:17 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1122705449' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:34:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 22 03:34:17 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 22 03:34:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 03:34:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1582168542' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]: {
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:    "0": [
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:        {
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "devices": [
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "/dev/loop3"
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            ],
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "lv_name": "ceph_lv0",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "lv_size": "21470642176",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "name": "ceph_lv0",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "tags": {
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.cluster_name": "ceph",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.crush_device_class": "",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.encrypted": "0",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.osd_id": "0",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.type": "block",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.vdo": "0"
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            },
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "type": "block",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "vg_name": "ceph_vg0"
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:        }
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:    ],
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:    "1": [
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:        {
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "devices": [
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "/dev/loop4"
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            ],
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "lv_name": "ceph_lv1",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "lv_size": "21470642176",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "name": "ceph_lv1",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "tags": {
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.cluster_name": "ceph",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.crush_device_class": "",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.encrypted": "0",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.osd_id": "1",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.type": "block",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.vdo": "0"
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            },
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "type": "block",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "vg_name": "ceph_vg1"
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:        }
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:    ],
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:    "2": [
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:        {
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "devices": [
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "/dev/loop5"
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            ],
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "lv_name": "ceph_lv2",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "lv_size": "21470642176",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "name": "ceph_lv2",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "tags": {
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.cluster_name": "ceph",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.crush_device_class": "",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.encrypted": "0",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.osd_id": "2",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.type": "block",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:                "ceph.vdo": "0"
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            },
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "type": "block",
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:            "vg_name": "ceph_vg2"
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:        }
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]:    ]
Nov 22 03:34:17 np0005532048 gifted_hopper[95584]: }
Nov 22 03:34:17 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 22 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:17 np0005532048 systemd[1]: libpod-a90e0c8b9a9e3c752badb67de185743f5e615d5fa3a16b1987d3c7559578486c.scope: Deactivated successfully.
Nov 22 03:34:17 np0005532048 podman[95567]: 2025-11-22 08:34:17.813143637 +0000 UTC m=+0.952120990 container died a90e0c8b9a9e3c752badb67de185743f5e615d5fa3a16b1987d3c7559578486c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hopper, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:17 np0005532048 systemd[1]: var-lib-containers-storage-overlay-855735d79420569adf6c4089a315925b151d29593cfff77f5bf5b88770d6fb1d-merged.mount: Deactivated successfully.
Nov 22 03:34:17 np0005532048 podman[95567]: 2025-11-22 08:34:17.864627568 +0000 UTC m=+1.003604911 container remove a90e0c8b9a9e3c752badb67de185743f5e615d5fa3a16b1987d3c7559578486c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:17 np0005532048 systemd[1]: libpod-conmon-a90e0c8b9a9e3c752badb67de185743f5e615d5fa3a16b1987d3c7559578486c.scope: Deactivated successfully.
Nov 22 03:34:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v86: 3 pgs: 2 active+clean, 1 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:18 np0005532048 podman[95812]: 2025-11-22 08:34:18.443847385 +0000 UTC m=+0.040311291 container create a376442fd24d74570856fa020642e77e6cd1e98061222354883cb3585bf7d17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 22 03:34:18 np0005532048 systemd[1]: Started libpod-conmon-a376442fd24d74570856fa020642e77e6cd1e98061222354883cb3585bf7d17c.scope.
Nov 22 03:34:18 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:18 np0005532048 podman[95812]: 2025-11-22 08:34:18.511310734 +0000 UTC m=+0.107774660 container init a376442fd24d74570856fa020642e77e6cd1e98061222354883cb3585bf7d17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:18 np0005532048 podman[95812]: 2025-11-22 08:34:18.517021343 +0000 UTC m=+0.113485249 container start a376442fd24d74570856fa020642e77e6cd1e98061222354883cb3585bf7d17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:34:18 np0005532048 podman[95812]: 2025-11-22 08:34:18.520839526 +0000 UTC m=+0.117303462 container attach a376442fd24d74570856fa020642e77e6cd1e98061222354883cb3585bf7d17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cartwright, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 03:34:18 np0005532048 inspiring_cartwright[95829]: 167 167
Nov 22 03:34:18 np0005532048 podman[95812]: 2025-11-22 08:34:18.426823041 +0000 UTC m=+0.023286977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:18 np0005532048 systemd[1]: libpod-a376442fd24d74570856fa020642e77e6cd1e98061222354883cb3585bf7d17c.scope: Deactivated successfully.
Nov 22 03:34:18 np0005532048 conmon[95829]: conmon a376442fd24d74570856 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a376442fd24d74570856fa020642e77e6cd1e98061222354883cb3585bf7d17c.scope/container/memory.events
Nov 22 03:34:18 np0005532048 podman[95812]: 2025-11-22 08:34:18.524183857 +0000 UTC m=+0.120647763 container died a376442fd24d74570856fa020642e77e6cd1e98061222354883cb3585bf7d17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cartwright, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:18 np0005532048 systemd[1]: var-lib-containers-storage-overlay-04708b5fa01ec70a76659e330032268724bd19c1dc32a9f3ebcf91da48b35dba-merged.mount: Deactivated successfully.
Nov 22 03:34:18 np0005532048 podman[95812]: 2025-11-22 08:34:18.564875236 +0000 UTC m=+0.161339162 container remove a376442fd24d74570856fa020642e77e6cd1e98061222354883cb3585bf7d17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cartwright, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 03:34:18 np0005532048 systemd[1]: libpod-conmon-a376442fd24d74570856fa020642e77e6cd1e98061222354883cb3585bf7d17c.scope: Deactivated successfully.
Nov 22 03:34:18 np0005532048 podman[95853]: 2025-11-22 08:34:18.706836646 +0000 UTC m=+0.040107746 container create 091255aaf7770369c23ac279b5962bc165b18523953b55f83856cf8896b80390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_driscoll, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:18 np0005532048 systemd[1]: Started libpod-conmon-091255aaf7770369c23ac279b5962bc165b18523953b55f83856cf8896b80390.scope.
Nov 22 03:34:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 22 03:34:18 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1582168542' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:34:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7257f1b6b484f4244c351cc16783fb0049cc2243102b68a92e1e00d4b73d2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 22 03:34:18 np0005532048 zen_elion[95630]: pool 'backups' created
Nov 22 03:34:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7257f1b6b484f4244c351cc16783fb0049cc2243102b68a92e1e00d4b73d2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7257f1b6b484f4244c351cc16783fb0049cc2243102b68a92e1e00d4b73d2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:18 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1582168542' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:34:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f7257f1b6b484f4244c351cc16783fb0049cc2243102b68a92e1e00d4b73d2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:18 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 22 03:34:18 np0005532048 podman[95853]: 2025-11-22 08:34:18.781383068 +0000 UTC m=+0.114654178 container init 091255aaf7770369c23ac279b5962bc165b18523953b55f83856cf8896b80390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_driscoll, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:34:18 np0005532048 podman[95853]: 2025-11-22 08:34:18.690227602 +0000 UTC m=+0.023498722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:18 np0005532048 podman[95853]: 2025-11-22 08:34:18.789802012 +0000 UTC m=+0.123073112 container start 091255aaf7770369c23ac279b5962bc165b18523953b55f83856cf8896b80390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_driscoll, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:18 np0005532048 podman[95853]: 2025-11-22 08:34:18.793703497 +0000 UTC m=+0.126974617 container attach 091255aaf7770369c23ac279b5962bc165b18523953b55f83856cf8896b80390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:34:18 np0005532048 systemd[1]: libpod-d4142742cc60b839695b01e388e46dbdeb9dab153c0ac392107ac1850c485052.scope: Deactivated successfully.
Nov 22 03:34:18 np0005532048 podman[95614]: 2025-11-22 08:34:18.796941046 +0000 UTC m=+1.667954016 container died d4142742cc60b839695b01e388e46dbdeb9dab153c0ac392107ac1850c485052 (image=quay.io/ceph/ceph:v18, name=zen_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 22 03:34:18 np0005532048 systemd[1]: var-lib-containers-storage-overlay-cf5afe7feea035088eaf2b82b0ef9b0e6ddd761148a52f8bda60bc3e65a9f97e-merged.mount: Deactivated successfully.
Nov 22 03:34:18 np0005532048 podman[95614]: 2025-11-22 08:34:18.841372026 +0000 UTC m=+1.712384996 container remove d4142742cc60b839695b01e388e46dbdeb9dab153c0ac392107ac1850c485052 (image=quay.io/ceph/ceph:v18, name=zen_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:34:18 np0005532048 systemd[1]: libpod-conmon-d4142742cc60b839695b01e388e46dbdeb9dab153c0ac392107ac1850c485052.scope: Deactivated successfully.
Nov 22 03:34:18 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:19 np0005532048 python3[95911]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:19 np0005532048 podman[95912]: 2025-11-22 08:34:19.24028515 +0000 UTC m=+0.045186590 container create b544fa99401568e26a703aa9b38a1c186e1df315a454a88fd95b560fe0297a3e (image=quay.io/ceph/ceph:v18, name=sleepy_shaw, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:19 np0005532048 systemd[1]: Started libpod-conmon-b544fa99401568e26a703aa9b38a1c186e1df315a454a88fd95b560fe0297a3e.scope.
Nov 22 03:34:19 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6539da2259f78a18222b554eb6223b26e54c2a066bf787999d08dafc0026a5ce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:19 np0005532048 podman[95912]: 2025-11-22 08:34:19.219010832 +0000 UTC m=+0.023912292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6539da2259f78a18222b554eb6223b26e54c2a066bf787999d08dafc0026a5ce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:19 np0005532048 podman[95912]: 2025-11-22 08:34:19.33163226 +0000 UTC m=+0.136533720 container init b544fa99401568e26a703aa9b38a1c186e1df315a454a88fd95b560fe0297a3e (image=quay.io/ceph/ceph:v18, name=sleepy_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 03:34:19 np0005532048 podman[95912]: 2025-11-22 08:34:19.337684727 +0000 UTC m=+0.142586167 container start b544fa99401568e26a703aa9b38a1c186e1df315a454a88fd95b560fe0297a3e (image=quay.io/ceph/ceph:v18, name=sleepy_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:19 np0005532048 podman[95912]: 2025-11-22 08:34:19.341649983 +0000 UTC m=+0.146551443 container attach b544fa99401568e26a703aa9b38a1c186e1df315a454a88fd95b560fe0297a3e (image=quay.io/ceph/ceph:v18, name=sleepy_shaw, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]: {
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:        "osd_id": 1,
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:        "type": "bluestore"
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:    },
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:        "osd_id": 0,
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:        "type": "bluestore"
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:    },
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:        "osd_id": 2,
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:        "type": "bluestore"
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]:    }
Nov 22 03:34:19 np0005532048 beautiful_driscoll[95870]: }
Nov 22 03:34:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 22 03:34:19 np0005532048 systemd[1]: libpod-091255aaf7770369c23ac279b5962bc165b18523953b55f83856cf8896b80390.scope: Deactivated successfully.
Nov 22 03:34:19 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1582168542' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:34:19 np0005532048 podman[95853]: 2025-11-22 08:34:19.780369345 +0000 UTC m=+1.113640455 container died 091255aaf7770369c23ac279b5962bc165b18523953b55f83856cf8896b80390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_driscoll, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:34:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 22 03:34:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 22 03:34:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 24 pg[4.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:19 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2f7257f1b6b484f4244c351cc16783fb0049cc2243102b68a92e1e00d4b73d2a-merged.mount: Deactivated successfully.
Nov 22 03:34:19 np0005532048 podman[95853]: 2025-11-22 08:34:19.879263048 +0000 UTC m=+1.212534158 container remove 091255aaf7770369c23ac279b5962bc165b18523953b55f83856cf8896b80390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:34:19 np0005532048 systemd[1]: libpod-conmon-091255aaf7770369c23ac279b5962bc165b18523953b55f83856cf8896b80390.scope: Deactivated successfully.
Nov 22 03:34:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:34:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 03:34:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3570805776' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:34:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:34:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v89: 4 pgs: 2 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 22 03:34:20 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/3570805776' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:34:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3570805776' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:34:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 22 03:34:20 np0005532048 sleepy_shaw[95928]: pool 'images' created
Nov 22 03:34:20 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 22 03:34:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [2] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:20 np0005532048 systemd[1]: libpod-b544fa99401568e26a703aa9b38a1c186e1df315a454a88fd95b560fe0297a3e.scope: Deactivated successfully.
Nov 22 03:34:20 np0005532048 podman[95912]: 2025-11-22 08:34:20.852574791 +0000 UTC m=+1.657476241 container died b544fa99401568e26a703aa9b38a1c186e1df315a454a88fd95b560fe0297a3e (image=quay.io/ceph/ceph:v18, name=sleepy_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:20 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6539da2259f78a18222b554eb6223b26e54c2a066bf787999d08dafc0026a5ce-merged.mount: Deactivated successfully.
Nov 22 03:34:20 np0005532048 podman[95912]: 2025-11-22 08:34:20.896278984 +0000 UTC m=+1.701180424 container remove b544fa99401568e26a703aa9b38a1c186e1df315a454a88fd95b560fe0297a3e (image=quay.io/ceph/ceph:v18, name=sleepy_shaw, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:34:20 np0005532048 systemd[1]: libpod-conmon-b544fa99401568e26a703aa9b38a1c186e1df315a454a88fd95b560fe0297a3e.scope: Deactivated successfully.
Nov 22 03:34:21 np0005532048 python3[96085]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:21 np0005532048 podman[96086]: 2025-11-22 08:34:21.252418629 +0000 UTC m=+0.042677118 container create e600a579d0a219c2f3f3ca3bce0cce3cd642264c985b99c27e7aae9a40700d60 (image=quay.io/ceph/ceph:v18, name=crazy_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 03:34:21 np0005532048 systemd[1]: Started libpod-conmon-e600a579d0a219c2f3f3ca3bce0cce3cd642264c985b99c27e7aae9a40700d60.scope.
Nov 22 03:34:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ce4ec9bb785eba6fec6d5e0dfb32343e6dc7aa89117c96dc813a67467fc9a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48ce4ec9bb785eba6fec6d5e0dfb32343e6dc7aa89117c96dc813a67467fc9a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:21 np0005532048 podman[96086]: 2025-11-22 08:34:21.322176905 +0000 UTC m=+0.112435314 container init e600a579d0a219c2f3f3ca3bce0cce3cd642264c985b99c27e7aae9a40700d60 (image=quay.io/ceph/ceph:v18, name=crazy_kowalevski, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:34:21 np0005532048 podman[96086]: 2025-11-22 08:34:21.328023537 +0000 UTC m=+0.118281926 container start e600a579d0a219c2f3f3ca3bce0cce3cd642264c985b99c27e7aae9a40700d60 (image=quay.io/ceph/ceph:v18, name=crazy_kowalevski, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:34:21 np0005532048 podman[96086]: 2025-11-22 08:34:21.233401257 +0000 UTC m=+0.023659666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:21 np0005532048 podman[96086]: 2025-11-22 08:34:21.332948816 +0000 UTC m=+0.123207245 container attach e600a579d0a219c2f3f3ca3bce0cce3cd642264c985b99c27e7aae9a40700d60 (image=quay.io/ceph/ceph:v18, name=crazy_kowalevski, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 22 03:34:21 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/3570805776' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:34:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 03:34:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3908782653' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:34:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 22 03:34:22 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 22 03:34:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v92: 5 pgs: 3 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:22 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 26 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [2] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:34:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:34:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:34:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:34:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:34:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:34:22 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/3908782653' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:34:22 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:34:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 22 03:34:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3908782653' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:34:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 22 03:34:23 np0005532048 crazy_kowalevski[96101]: pool 'cephfs.cephfs.meta' created
Nov 22 03:34:23 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 22 03:34:23 np0005532048 systemd[1]: libpod-e600a579d0a219c2f3f3ca3bce0cce3cd642264c985b99c27e7aae9a40700d60.scope: Deactivated successfully.
Nov 22 03:34:23 np0005532048 podman[96086]: 2025-11-22 08:34:23.125622845 +0000 UTC m=+1.915881264 container died e600a579d0a219c2f3f3ca3bce0cce3cd642264c985b99c27e7aae9a40700d60 (image=quay.io/ceph/ceph:v18, name=crazy_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 03:34:23 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=0/0 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [0] r=0 lpr=27 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:23 np0005532048 systemd[1]: var-lib-containers-storage-overlay-48ce4ec9bb785eba6fec6d5e0dfb32343e6dc7aa89117c96dc813a67467fc9a3-merged.mount: Deactivated successfully.
Nov 22 03:34:23 np0005532048 podman[96086]: 2025-11-22 08:34:23.573817339 +0000 UTC m=+2.364075718 container remove e600a579d0a219c2f3f3ca3bce0cce3cd642264c985b99c27e7aae9a40700d60 (image=quay.io/ceph/ceph:v18, name=crazy_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:23 np0005532048 systemd[1]: libpod-conmon-e600a579d0a219c2f3f3ca3bce0cce3cd642264c985b99c27e7aae9a40700d60.scope: Deactivated successfully.
Nov 22 03:34:23 np0005532048 python3[96165]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:23 np0005532048 podman[96166]: 2025-11-22 08:34:23.89358421 +0000 UTC m=+0.022953276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v94: 6 pgs: 5 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 22 03:34:24 np0005532048 podman[96166]: 2025-11-22 08:34:24.210184887 +0000 UTC m=+0.339553973 container create 6085894a3751c77d5af8a20a855796b434861f518312f4f8dea9af4393aa1db9 (image=quay.io/ceph/ceph:v18, name=gallant_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:24 np0005532048 ceph-mon[75021]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:34:24 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/3908782653' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:34:24 np0005532048 systemd[1]: Started libpod-conmon-6085894a3751c77d5af8a20a855796b434861f518312f4f8dea9af4393aa1db9.scope.
Nov 22 03:34:24 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd2d6b86bc807ba1a84a5e20cf5a7015cd17c6edada4b87b61c7ca53cd86ac0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd2d6b86bc807ba1a84a5e20cf5a7015cd17c6edada4b87b61c7ca53cd86ac0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 22 03:34:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 22 03:34:24 np0005532048 podman[96166]: 2025-11-22 08:34:24.485344678 +0000 UTC m=+0.614713774 container init 6085894a3751c77d5af8a20a855796b434861f518312f4f8dea9af4393aa1db9 (image=quay.io/ceph/ceph:v18, name=gallant_keldysh, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:24 np0005532048 podman[96166]: 2025-11-22 08:34:24.491596916 +0000 UTC m=+0.620965982 container start 6085894a3751c77d5af8a20a855796b434861f518312f4f8dea9af4393aa1db9 (image=quay.io/ceph/ceph:v18, name=gallant_keldysh, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:24 np0005532048 podman[96166]: 2025-11-22 08:34:24.587849154 +0000 UTC m=+0.717218220 container attach 6085894a3751c77d5af8a20a855796b434861f518312f4f8dea9af4393aa1db9 (image=quay.io/ceph/ceph:v18, name=gallant_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:24 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 28 pg[6.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [0] r=0 lpr=27 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 03:34:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3012948199' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:34:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 22 03:34:25 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/3012948199' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:34:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3012948199' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:34:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 22 03:34:25 np0005532048 gallant_keldysh[96181]: pool 'cephfs.cephfs.data' created
Nov 22 03:34:25 np0005532048 systemd[1]: libpod-6085894a3751c77d5af8a20a855796b434861f518312f4f8dea9af4393aa1db9.scope: Deactivated successfully.
Nov 22 03:34:25 np0005532048 podman[96166]: 2025-11-22 08:34:25.598218914 +0000 UTC m=+1.727587980 container died 6085894a3751c77d5af8a20a855796b434861f518312f4f8dea9af4393aa1db9 (image=quay.io/ceph/ceph:v18, name=gallant_keldysh, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:25 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 22 03:34:25 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 29 pg[7.0( empty local-lis/les=0/0 n=0 ec=29/29 lis/c=0/0 les/c/f=0/0/0 sis=29) [1] r=0 lpr=29 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:25 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4cd2d6b86bc807ba1a84a5e20cf5a7015cd17c6edada4b87b61c7ca53cd86ac0-merged.mount: Deactivated successfully.
Nov 22 03:34:26 np0005532048 podman[96166]: 2025-11-22 08:34:26.011442857 +0000 UTC m=+2.140811923 container remove 6085894a3751c77d5af8a20a855796b434861f518312f4f8dea9af4393aa1db9 (image=quay.io/ceph/ceph:v18, name=gallant_keldysh, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v97: 7 pgs: 5 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:26 np0005532048 systemd[1]: libpod-conmon-6085894a3751c77d5af8a20a855796b434861f518312f4f8dea9af4393aa1db9.scope: Deactivated successfully.
Nov 22 03:34:26 np0005532048 python3[96246]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:26 np0005532048 podman[96247]: 2025-11-22 08:34:26.38433555 +0000 UTC m=+0.021170544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:26 np0005532048 podman[96247]: 2025-11-22 08:34:26.497833238 +0000 UTC m=+0.134668212 container create 3ab09409219173125bb37807fe72a2b9aecdd04546507283a9e025267444ab67 (image=quay.io/ceph/ceph:v18, name=jolly_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:26 np0005532048 systemd[1]: Started libpod-conmon-3ab09409219173125bb37807fe72a2b9aecdd04546507283a9e025267444ab67.scope.
Nov 22 03:34:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 22 03:34:26 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e008702611a71d5058593b6d809cc38f0de367b39eb34cb71b27e617037af8a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e008702611a71d5058593b6d809cc38f0de367b39eb34cb71b27e617037af8a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:26 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/3012948199' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:34:26 np0005532048 podman[96247]: 2025-11-22 08:34:26.805903512 +0000 UTC m=+0.442738526 container init 3ab09409219173125bb37807fe72a2b9aecdd04546507283a9e025267444ab67 (image=quay.io/ceph/ceph:v18, name=jolly_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 03:34:26 np0005532048 podman[96247]: 2025-11-22 08:34:26.81423565 +0000 UTC m=+0.451070624 container start 3ab09409219173125bb37807fe72a2b9aecdd04546507283a9e025267444ab67 (image=quay.io/ceph/ceph:v18, name=jolly_robinson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:26 np0005532048 podman[96247]: 2025-11-22 08:34:26.928285832 +0000 UTC m=+0.565120806 container attach 3ab09409219173125bb37807fe72a2b9aecdd04546507283a9e025267444ab67 (image=quay.io/ceph/ceph:v18, name=jolly_robinson, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 03:34:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 22 03:34:27 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 22 03:34:27 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 30 pg[7.0( empty local-lis/les=29/30 n=0 ec=29/29 lis/c=0/0 les/c/f=0/0/0 sis=29) [1] r=0 lpr=29 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 22 03:34:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4030583959' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 22 03:34:27 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/4030583959' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 22 03:34:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 22 03:34:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4030583959' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 22 03:34:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 22 03:34:28 np0005532048 jolly_robinson[96262]: enabled application 'rbd' on pool 'vms'
Nov 22 03:34:28 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 22 03:34:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v100: 7 pgs: 1 creating+peering, 5 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:28 np0005532048 systemd[1]: libpod-3ab09409219173125bb37807fe72a2b9aecdd04546507283a9e025267444ab67.scope: Deactivated successfully.
Nov 22 03:34:28 np0005532048 podman[96247]: 2025-11-22 08:34:28.06195068 +0000 UTC m=+1.698785674 container died 3ab09409219173125bb37807fe72a2b9aecdd04546507283a9e025267444ab67 (image=quay.io/ceph/ceph:v18, name=jolly_robinson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:28 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e008702611a71d5058593b6d809cc38f0de367b39eb34cb71b27e617037af8a2-merged.mount: Deactivated successfully.
Nov 22 03:34:28 np0005532048 podman[96247]: 2025-11-22 08:34:28.119285583 +0000 UTC m=+1.756120547 container remove 3ab09409219173125bb37807fe72a2b9aecdd04546507283a9e025267444ab67 (image=quay.io/ceph/ceph:v18, name=jolly_robinson, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:28 np0005532048 systemd[1]: libpod-conmon-3ab09409219173125bb37807fe72a2b9aecdd04546507283a9e025267444ab67.scope: Deactivated successfully.
Nov 22 03:34:28 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:34:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:28 np0005532048 python3[96325]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:28 np0005532048 podman[96326]: 2025-11-22 08:34:28.49396026 +0000 UTC m=+0.057261962 container create ea4323c7dcf3a49ad82616c76e1ffce1fa3430d6c03762be333011415032e282 (image=quay.io/ceph/ceph:v18, name=eloquent_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:28 np0005532048 systemd[1]: Started libpod-conmon-ea4323c7dcf3a49ad82616c76e1ffce1fa3430d6c03762be333011415032e282.scope.
Nov 22 03:34:28 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f53ef35510dedaaed9154aa68871de454ec422c368c7a9e07e2063a59a7580/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71f53ef35510dedaaed9154aa68871de454ec422c368c7a9e07e2063a59a7580/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:28 np0005532048 podman[96326]: 2025-11-22 08:34:28.472177612 +0000 UTC m=+0.035479334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:28 np0005532048 podman[96326]: 2025-11-22 08:34:28.578739595 +0000 UTC m=+0.142041317 container init ea4323c7dcf3a49ad82616c76e1ffce1fa3430d6c03762be333011415032e282 (image=quay.io/ceph/ceph:v18, name=eloquent_fermi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 22 03:34:28 np0005532048 podman[96326]: 2025-11-22 08:34:28.584265686 +0000 UTC m=+0.147567388 container start ea4323c7dcf3a49ad82616c76e1ffce1fa3430d6c03762be333011415032e282 (image=quay.io/ceph/ceph:v18, name=eloquent_fermi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:34:28 np0005532048 podman[96326]: 2025-11-22 08:34:28.592785819 +0000 UTC m=+0.156087551 container attach ea4323c7dcf3a49ad82616c76e1ffce1fa3430d6c03762be333011415032e282 (image=quay.io/ceph/ceph:v18, name=eloquent_fermi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:29 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/4030583959' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 22 03:34:29 np0005532048 ceph-mon[75021]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:34:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 22 03:34:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4265230860' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 22 03:34:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v101: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 22 03:34:30 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/4265230860' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 22 03:34:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4265230860' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 22 03:34:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 22 03:34:30 np0005532048 eloquent_fermi[96341]: enabled application 'rbd' on pool 'volumes'
Nov 22 03:34:30 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 22 03:34:30 np0005532048 systemd[1]: libpod-ea4323c7dcf3a49ad82616c76e1ffce1fa3430d6c03762be333011415032e282.scope: Deactivated successfully.
Nov 22 03:34:30 np0005532048 conmon[96341]: conmon ea4323c7dcf3a49ad826 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea4323c7dcf3a49ad82616c76e1ffce1fa3430d6c03762be333011415032e282.scope/container/memory.events
Nov 22 03:34:30 np0005532048 podman[96326]: 2025-11-22 08:34:30.144784033 +0000 UTC m=+1.708085745 container died ea4323c7dcf3a49ad82616c76e1ffce1fa3430d6c03762be333011415032e282 (image=quay.io/ceph/ceph:v18, name=eloquent_fermi, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:34:30 np0005532048 systemd[1]: var-lib-containers-storage-overlay-71f53ef35510dedaaed9154aa68871de454ec422c368c7a9e07e2063a59a7580-merged.mount: Deactivated successfully.
Nov 22 03:34:30 np0005532048 podman[96326]: 2025-11-22 08:34:30.217530522 +0000 UTC m=+1.780832214 container remove ea4323c7dcf3a49ad82616c76e1ffce1fa3430d6c03762be333011415032e282 (image=quay.io/ceph/ceph:v18, name=eloquent_fermi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:30 np0005532048 systemd[1]: libpod-conmon-ea4323c7dcf3a49ad82616c76e1ffce1fa3430d6c03762be333011415032e282.scope: Deactivated successfully.
Nov 22 03:34:30 np0005532048 python3[96403]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:30 np0005532048 podman[96404]: 2025-11-22 08:34:30.665558553 +0000 UTC m=+0.108544942 container create 5992443aeaa3e81489d2340f2f2d1126d8f160643c2c799d7a3f3c199c991ee7 (image=quay.io/ceph/ceph:v18, name=jolly_feistel, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:34:30 np0005532048 podman[96404]: 2025-11-22 08:34:30.586102804 +0000 UTC m=+0.029089213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:30 np0005532048 systemd[1]: Started libpod-conmon-5992443aeaa3e81489d2340f2f2d1126d8f160643c2c799d7a3f3c199c991ee7.scope.
Nov 22 03:34:30 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:30 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86912b4c748c74c0e066608a1a00e654a23b567828c99c3b60ba556729a0fdd8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:30 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86912b4c748c74c0e066608a1a00e654a23b567828c99c3b60ba556729a0fdd8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:30 np0005532048 podman[96404]: 2025-11-22 08:34:30.854011002 +0000 UTC m=+0.296997411 container init 5992443aeaa3e81489d2340f2f2d1126d8f160643c2c799d7a3f3c199c991ee7 (image=quay.io/ceph/ceph:v18, name=jolly_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:34:30 np0005532048 podman[96404]: 2025-11-22 08:34:30.858662694 +0000 UTC m=+0.301649093 container start 5992443aeaa3e81489d2340f2f2d1126d8f160643c2c799d7a3f3c199c991ee7 (image=quay.io/ceph/ceph:v18, name=jolly_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:30 np0005532048 podman[96404]: 2025-11-22 08:34:30.906583382 +0000 UTC m=+0.349569791 container attach 5992443aeaa3e81489d2340f2f2d1126d8f160643c2c799d7a3f3c199c991ee7 (image=quay.io/ceph/ceph:v18, name=jolly_feistel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 03:34:31 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/4265230860' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 22 03:34:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 22 03:34:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/35157547' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 22 03:34:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v103: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 22 03:34:32 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/35157547' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 22 03:34:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/35157547' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 22 03:34:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 22 03:34:32 np0005532048 jolly_feistel[96419]: enabled application 'rbd' on pool 'backups'
Nov 22 03:34:32 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 22 03:34:32 np0005532048 systemd[1]: libpod-5992443aeaa3e81489d2340f2f2d1126d8f160643c2c799d7a3f3c199c991ee7.scope: Deactivated successfully.
Nov 22 03:34:32 np0005532048 podman[96404]: 2025-11-22 08:34:32.497591483 +0000 UTC m=+1.940577932 container died 5992443aeaa3e81489d2340f2f2d1126d8f160643c2c799d7a3f3c199c991ee7 (image=quay.io/ceph/ceph:v18, name=jolly_feistel, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:32 np0005532048 systemd[1]: var-lib-containers-storage-overlay-86912b4c748c74c0e066608a1a00e654a23b567828c99c3b60ba556729a0fdd8-merged.mount: Deactivated successfully.
Nov 22 03:34:32 np0005532048 podman[96404]: 2025-11-22 08:34:32.94205202 +0000 UTC m=+2.385038409 container remove 5992443aeaa3e81489d2340f2f2d1126d8f160643c2c799d7a3f3c199c991ee7 (image=quay.io/ceph/ceph:v18, name=jolly_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:32 np0005532048 systemd[1]: libpod-conmon-5992443aeaa3e81489d2340f2f2d1126d8f160643c2c799d7a3f3c199c991ee7.scope: Deactivated successfully.
Nov 22 03:34:33 np0005532048 python3[96482]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:33 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:34:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:33 np0005532048 podman[96483]: 2025-11-22 08:34:33.335733908 +0000 UTC m=+0.112997898 container create a7764dc278869fe2023b81f3bec525a2f513550bce2c4e2f0e6b6200b860491a (image=quay.io/ceph/ceph:v18, name=fervent_booth, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:33 np0005532048 podman[96483]: 2025-11-22 08:34:33.250386179 +0000 UTC m=+0.027650169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:33 np0005532048 systemd[1]: Started libpod-conmon-a7764dc278869fe2023b81f3bec525a2f513550bce2c4e2f0e6b6200b860491a.scope.
Nov 22 03:34:33 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:33 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2764a0262221031accfe0a5e6b60828baf0ecf9b3978c48763fa611cdf0837bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:33 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2764a0262221031accfe0a5e6b60828baf0ecf9b3978c48763fa611cdf0837bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:33 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/35157547' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 22 03:34:33 np0005532048 ceph-mon[75021]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:34:33 np0005532048 podman[96483]: 2025-11-22 08:34:33.537854713 +0000 UTC m=+0.315118713 container init a7764dc278869fe2023b81f3bec525a2f513550bce2c4e2f0e6b6200b860491a (image=quay.io/ceph/ceph:v18, name=fervent_booth, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 03:34:33 np0005532048 podman[96483]: 2025-11-22 08:34:33.544203874 +0000 UTC m=+0.321467844 container start a7764dc278869fe2023b81f3bec525a2f513550bce2c4e2f0e6b6200b860491a (image=quay.io/ceph/ceph:v18, name=fervent_booth, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 03:34:33 np0005532048 podman[96483]: 2025-11-22 08:34:33.549249163 +0000 UTC m=+0.326513143 container attach a7764dc278869fe2023b81f3bec525a2f513550bce2c4e2f0e6b6200b860491a (image=quay.io/ceph/ceph:v18, name=fervent_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 03:34:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v105: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 22 03:34:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/801117129' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 22 03:34:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 22 03:34:34 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/801117129' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 22 03:34:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/801117129' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 22 03:34:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 22 03:34:34 np0005532048 fervent_booth[96498]: enabled application 'rbd' on pool 'images'
Nov 22 03:34:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 22 03:34:34 np0005532048 systemd[1]: libpod-a7764dc278869fe2023b81f3bec525a2f513550bce2c4e2f0e6b6200b860491a.scope: Deactivated successfully.
Nov 22 03:34:34 np0005532048 podman[96483]: 2025-11-22 08:34:34.57385005 +0000 UTC m=+1.351114050 container died a7764dc278869fe2023b81f3bec525a2f513550bce2c4e2f0e6b6200b860491a (image=quay.io/ceph/ceph:v18, name=fervent_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:34 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2764a0262221031accfe0a5e6b60828baf0ecf9b3978c48763fa611cdf0837bc-merged.mount: Deactivated successfully.
Nov 22 03:34:34 np0005532048 podman[96483]: 2025-11-22 08:34:34.623506911 +0000 UTC m=+1.400770881 container remove a7764dc278869fe2023b81f3bec525a2f513550bce2c4e2f0e6b6200b860491a (image=quay.io/ceph/ceph:v18, name=fervent_booth, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:34:34 np0005532048 systemd[1]: libpod-conmon-a7764dc278869fe2023b81f3bec525a2f513550bce2c4e2f0e6b6200b860491a.scope: Deactivated successfully.
Nov 22 03:34:34 np0005532048 python3[96560]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:35 np0005532048 podman[96561]: 2025-11-22 08:34:35.043075385 +0000 UTC m=+0.044248923 container create d93eb126be630d5c448f50d99c2991dfd64a1aaa0aa74b74e66b363aab2f2d80 (image=quay.io/ceph/ceph:v18, name=peaceful_swirles, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:34:35 np0005532048 systemd[1]: Started libpod-conmon-d93eb126be630d5c448f50d99c2991dfd64a1aaa0aa74b74e66b363aab2f2d80.scope.
Nov 22 03:34:35 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f342a9bc5539ebb9f515d3546ad1fc509fca27b68d739177190a3820acbc0a1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f342a9bc5539ebb9f515d3546ad1fc509fca27b68d739177190a3820acbc0a1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:35 np0005532048 podman[96561]: 2025-11-22 08:34:35.118885727 +0000 UTC m=+0.120059295 container init d93eb126be630d5c448f50d99c2991dfd64a1aaa0aa74b74e66b363aab2f2d80 (image=quay.io/ceph/ceph:v18, name=peaceful_swirles, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:35 np0005532048 podman[96561]: 2025-11-22 08:34:35.025813525 +0000 UTC m=+0.026987093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:35 np0005532048 podman[96561]: 2025-11-22 08:34:35.12781857 +0000 UTC m=+0.128992108 container start d93eb126be630d5c448f50d99c2991dfd64a1aaa0aa74b74e66b363aab2f2d80 (image=quay.io/ceph/ceph:v18, name=peaceful_swirles, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:35 np0005532048 podman[96561]: 2025-11-22 08:34:35.131762973 +0000 UTC m=+0.132936531 container attach d93eb126be630d5c448f50d99c2991dfd64a1aaa0aa74b74e66b363aab2f2d80 (image=quay.io/ceph/ceph:v18, name=peaceful_swirles, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:34:35 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/801117129' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 22 03:34:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 22 03:34:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2618281577' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 22 03:34:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v107: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 22 03:34:36 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/2618281577' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 22 03:34:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2618281577' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 22 03:34:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 22 03:34:36 np0005532048 peaceful_swirles[96576]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 22 03:34:36 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 22 03:34:36 np0005532048 systemd[1]: libpod-d93eb126be630d5c448f50d99c2991dfd64a1aaa0aa74b74e66b363aab2f2d80.scope: Deactivated successfully.
Nov 22 03:34:36 np0005532048 podman[96561]: 2025-11-22 08:34:36.761390461 +0000 UTC m=+1.762563999 container died d93eb126be630d5c448f50d99c2991dfd64a1aaa0aa74b74e66b363aab2f2d80 (image=quay.io/ceph/ceph:v18, name=peaceful_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 03:34:36 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1f342a9bc5539ebb9f515d3546ad1fc509fca27b68d739177190a3820acbc0a1-merged.mount: Deactivated successfully.
Nov 22 03:34:36 np0005532048 podman[96561]: 2025-11-22 08:34:36.974667632 +0000 UTC m=+1.975841170 container remove d93eb126be630d5c448f50d99c2991dfd64a1aaa0aa74b74e66b363aab2f2d80 (image=quay.io/ceph/ceph:v18, name=peaceful_swirles, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:36 np0005532048 systemd[1]: libpod-conmon-d93eb126be630d5c448f50d99c2991dfd64a1aaa0aa74b74e66b363aab2f2d80.scope: Deactivated successfully.
Nov 22 03:34:37 np0005532048 python3[96640]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:37 np0005532048 podman[96641]: 2025-11-22 08:34:37.284842465 +0000 UTC m=+0.025982728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:37 np0005532048 podman[96641]: 2025-11-22 08:34:37.437134595 +0000 UTC m=+0.178274778 container create 7fc2b840cf01c86127c54fde705b77f1144a668d65b32c00d7ee926cbf340726 (image=quay.io/ceph/ceph:v18, name=agitated_sammet, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:34:37 np0005532048 systemd[1]: Started libpod-conmon-7fc2b840cf01c86127c54fde705b77f1144a668d65b32c00d7ee926cbf340726.scope.
Nov 22 03:34:37 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4433ccf2574e6a9276da6e1b626817794f3844163ba660d7706bc375ce57691/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4433ccf2574e6a9276da6e1b626817794f3844163ba660d7706bc375ce57691/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:37 np0005532048 podman[96641]: 2025-11-22 08:34:37.561164824 +0000 UTC m=+0.302305017 container init 7fc2b840cf01c86127c54fde705b77f1144a668d65b32c00d7ee926cbf340726 (image=quay.io/ceph/ceph:v18, name=agitated_sammet, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 03:34:37 np0005532048 podman[96641]: 2025-11-22 08:34:37.567616657 +0000 UTC m=+0.308756840 container start 7fc2b840cf01c86127c54fde705b77f1144a668d65b32c00d7ee926cbf340726 (image=quay.io/ceph/ceph:v18, name=agitated_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:37 np0005532048 podman[96641]: 2025-11-22 08:34:37.581579859 +0000 UTC m=+0.322720072 container attach 7fc2b840cf01c86127c54fde705b77f1144a668d65b32c00d7ee926cbf340726 (image=quay.io/ceph/ceph:v18, name=agitated_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:37 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/2618281577' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 22 03:34:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v109: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 22 03:34:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/325903681' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 22 03:34:38 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:34:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 22 03:34:38 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/325903681' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 22 03:34:38 np0005532048 ceph-mon[75021]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:34:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/325903681' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 22 03:34:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 22 03:34:38 np0005532048 agitated_sammet[96658]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 22 03:34:38 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 22 03:34:38 np0005532048 systemd[1]: libpod-7fc2b840cf01c86127c54fde705b77f1144a668d65b32c00d7ee926cbf340726.scope: Deactivated successfully.
Nov 22 03:34:38 np0005532048 podman[96641]: 2025-11-22 08:34:38.739095506 +0000 UTC m=+1.480235709 container died 7fc2b840cf01c86127c54fde705b77f1144a668d65b32c00d7ee926cbf340726 (image=quay.io/ceph/ceph:v18, name=agitated_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:34:38 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e4433ccf2574e6a9276da6e1b626817794f3844163ba660d7706bc375ce57691-merged.mount: Deactivated successfully.
Nov 22 03:34:39 np0005532048 podman[96641]: 2025-11-22 08:34:39.065145666 +0000 UTC m=+1.806285869 container remove 7fc2b840cf01c86127c54fde705b77f1144a668d65b32c00d7ee926cbf340726 (image=quay.io/ceph/ceph:v18, name=agitated_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 22 03:34:39 np0005532048 systemd[1]: libpod-conmon-7fc2b840cf01c86127c54fde705b77f1144a668d65b32c00d7ee926cbf340726.scope: Deactivated successfully.
Nov 22 03:34:39 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/325903681' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 22 03:34:39 np0005532048 python3[96772]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:34:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v111: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:40 np0005532048 python3[96843]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763800479.7094333-36574-144486074888386/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:40 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 22 03:34:40 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 22 03:34:40 np0005532048 ceph-mon[75021]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 22 03:34:40 np0005532048 ceph-mon[75021]: Cluster is now healthy
Nov 22 03:34:40 np0005532048 python3[96945]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:34:41 np0005532048 python3[97020]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763800480.660297-36588-263031082599297/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=d6a21845cafd39be313c1101e13c44e667fae22d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:41 np0005532048 python3[97070]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:41 np0005532048 podman[97071]: 2025-11-22 08:34:41.800618974 +0000 UTC m=+0.049697163 container create f34b2229ca0cad002ad9e59a42f9aa28decf1edc44054d2deab1f14a0b1c1614 (image=quay.io/ceph/ceph:v18, name=eloquent_rubin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:41 np0005532048 systemd[1]: Started libpod-conmon-f34b2229ca0cad002ad9e59a42f9aa28decf1edc44054d2deab1f14a0b1c1614.scope.
Nov 22 03:34:41 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/990c7aed9756b5680e9cc38d920c51235d9b207d4cbdb291b101441e690a4d1e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/990c7aed9756b5680e9cc38d920c51235d9b207d4cbdb291b101441e690a4d1e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/990c7aed9756b5680e9cc38d920c51235d9b207d4cbdb291b101441e690a4d1e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:41 np0005532048 podman[97071]: 2025-11-22 08:34:41.776254804 +0000 UTC m=+0.025333013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:41 np0005532048 podman[97071]: 2025-11-22 08:34:41.890583372 +0000 UTC m=+0.139661591 container init f34b2229ca0cad002ad9e59a42f9aa28decf1edc44054d2deab1f14a0b1c1614 (image=quay.io/ceph/ceph:v18, name=eloquent_rubin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:41 np0005532048 podman[97071]: 2025-11-22 08:34:41.903594121 +0000 UTC m=+0.152672320 container start f34b2229ca0cad002ad9e59a42f9aa28decf1edc44054d2deab1f14a0b1c1614 (image=quay.io/ceph/ceph:v18, name=eloquent_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:41 np0005532048 podman[97071]: 2025-11-22 08:34:41.910170838 +0000 UTC m=+0.159249047 container attach f34b2229ca0cad002ad9e59a42f9aa28decf1edc44054d2deab1f14a0b1c1614 (image=quay.io/ceph/ceph:v18, name=eloquent_rubin, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v112: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 22 03:34:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/306995445' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 03:34:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/306995445' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 22 03:34:42 np0005532048 eloquent_rubin[97086]: 
Nov 22 03:34:42 np0005532048 eloquent_rubin[97086]: [global]
Nov 22 03:34:42 np0005532048 eloquent_rubin[97086]: 	fsid = 34829716-a12c-57a6-8915-c1aa615c9d8a
Nov 22 03:34:42 np0005532048 eloquent_rubin[97086]: 	mon_host = 192.168.122.100
Nov 22 03:34:42 np0005532048 systemd[1]: libpod-f34b2229ca0cad002ad9e59a42f9aa28decf1edc44054d2deab1f14a0b1c1614.scope: Deactivated successfully.
Nov 22 03:34:42 np0005532048 podman[97071]: 2025-11-22 08:34:42.461125894 +0000 UTC m=+0.710204083 container died f34b2229ca0cad002ad9e59a42f9aa28decf1edc44054d2deab1f14a0b1c1614 (image=quay.io/ceph/ceph:v18, name=eloquent_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Nov 22 03:34:42 np0005532048 systemd[1]: var-lib-containers-storage-overlay-990c7aed9756b5680e9cc38d920c51235d9b207d4cbdb291b101441e690a4d1e-merged.mount: Deactivated successfully.
Nov 22 03:34:42 np0005532048 podman[97071]: 2025-11-22 08:34:42.526645702 +0000 UTC m=+0.775723891 container remove f34b2229ca0cad002ad9e59a42f9aa28decf1edc44054d2deab1f14a0b1c1614 (image=quay.io/ceph/ceph:v18, name=eloquent_rubin, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:42 np0005532048 systemd[1]: libpod-conmon-f34b2229ca0cad002ad9e59a42f9aa28decf1edc44054d2deab1f14a0b1c1614.scope: Deactivated successfully.
Nov 22 03:34:42 np0005532048 python3[97246]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:42 np0005532048 podman[97262]: 2025-11-22 08:34:42.943237336 +0000 UTC m=+0.050978004 container create 458d994bc76d1ced6a7d99f6c357f82840cab6b896616f207fba6895b2db2e1c (image=quay.io/ceph/ceph:v18, name=intelligent_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:34:42 np0005532048 systemd[1]: Started libpod-conmon-458d994bc76d1ced6a7d99f6c357f82840cab6b896616f207fba6895b2db2e1c.scope.
Nov 22 03:34:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbf0b8a1c79a55ceb6950122acf36dbefdc26212e245f473fdffa867fcdb4ca/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbf0b8a1c79a55ceb6950122acf36dbefdc26212e245f473fdffa867fcdb4ca/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfbf0b8a1c79a55ceb6950122acf36dbefdc26212e245f473fdffa867fcdb4ca/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:43 np0005532048 podman[97262]: 2025-11-22 08:34:42.915101037 +0000 UTC m=+0.022841705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:43 np0005532048 podman[97262]: 2025-11-22 08:34:43.027091069 +0000 UTC m=+0.134831737 container init 458d994bc76d1ced6a7d99f6c357f82840cab6b896616f207fba6895b2db2e1c (image=quay.io/ceph/ceph:v18, name=intelligent_montalcini, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:43 np0005532048 podman[97262]: 2025-11-22 08:34:43.033496741 +0000 UTC m=+0.141237389 container start 458d994bc76d1ced6a7d99f6c357f82840cab6b896616f207fba6895b2db2e1c (image=quay.io/ceph/ceph:v18, name=intelligent_montalcini, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:43 np0005532048 podman[97262]: 2025-11-22 08:34:43.039707769 +0000 UTC m=+0.147448437 container attach 458d994bc76d1ced6a7d99f6c357f82840cab6b896616f207fba6895b2db2e1c (image=quay.io/ceph/ceph:v18, name=intelligent_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 03:34:43 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/306995445' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 03:34:43 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/306995445' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 22 03:34:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:43 np0005532048 podman[97342]: 2025-11-22 08:34:43.269787998 +0000 UTC m=+0.062741272 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 03:34:43 np0005532048 podman[97342]: 2025-11-22 08:34:43.391739077 +0000 UTC m=+0.184692311 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:34:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 22 03:34:43 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1919705596' entity='client.admin' 
Nov 22 03:34:43 np0005532048 intelligent_montalcini[97292]: set ssl_option
Nov 22 03:34:43 np0005532048 systemd[1]: libpod-458d994bc76d1ced6a7d99f6c357f82840cab6b896616f207fba6895b2db2e1c.scope: Deactivated successfully.
Nov 22 03:34:43 np0005532048 podman[97262]: 2025-11-22 08:34:43.789493722 +0000 UTC m=+0.897234390 container died 458d994bc76d1ced6a7d99f6c357f82840cab6b896616f207fba6895b2db2e1c (image=quay.io/ceph/ceph:v18, name=intelligent_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 03:34:43 np0005532048 systemd[1]: var-lib-containers-storage-overlay-dfbf0b8a1c79a55ceb6950122acf36dbefdc26212e245f473fdffa867fcdb4ca-merged.mount: Deactivated successfully.
Nov 22 03:34:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v113: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:44 np0005532048 podman[97262]: 2025-11-22 08:34:44.10110306 +0000 UTC m=+1.208843708 container remove 458d994bc76d1ced6a7d99f6c357f82840cab6b896616f207fba6895b2db2e1c (image=quay.io/ceph/ceph:v18, name=intelligent_montalcini, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 03:34:44 np0005532048 systemd[1]: libpod-conmon-458d994bc76d1ced6a7d99f6c357f82840cab6b896616f207fba6895b2db2e1c.scope: Deactivated successfully.
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:44 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 65c93f2b-8ba4-4631-9e6e-037f63a79834 does not exist
Nov 22 03:34:44 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 75789936-9424-40ac-83ba-38a0193f700d does not exist
Nov 22 03:34:44 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev f33687fb-784d-4b14-ab57-4c9a5b1b6629 does not exist
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:34:44 np0005532048 python3[97522]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:44 np0005532048 podman[97548]: 2025-11-22 08:34:44.517007647 +0000 UTC m=+0.021118693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:44 np0005532048 podman[97548]: 2025-11-22 08:34:44.631305374 +0000 UTC m=+0.135416430 container create 8f040ebb7f844656c75d5c6bd120ccbb01610712f705ad86256a5f170348a829 (image=quay.io/ceph/ceph:v18, name=upbeat_meitner, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:44 np0005532048 systemd[1]: Started libpod-conmon-8f040ebb7f844656c75d5c6bd120ccbb01610712f705ad86256a5f170348a829.scope.
Nov 22 03:34:44 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84983ef73da21708bcbee83cb9748f3d6ba741e54a728d9af962981fd68eee16/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84983ef73da21708bcbee83cb9748f3d6ba741e54a728d9af962981fd68eee16/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84983ef73da21708bcbee83cb9748f3d6ba741e54a728d9af962981fd68eee16/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:44 np0005532048 podman[97548]: 2025-11-22 08:34:44.841445039 +0000 UTC m=+0.345556095 container init 8f040ebb7f844656c75d5c6bd120ccbb01610712f705ad86256a5f170348a829 (image=quay.io/ceph/ceph:v18, name=upbeat_meitner, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 03:34:44 np0005532048 podman[97548]: 2025-11-22 08:34:44.849515691 +0000 UTC m=+0.353626717 container start 8f040ebb7f844656c75d5c6bd120ccbb01610712f705ad86256a5f170348a829 (image=quay.io/ceph/ceph:v18, name=upbeat_meitner, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1919705596' entity='client.admin' 
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:44 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:34:44 np0005532048 podman[97548]: 2025-11-22 08:34:44.882138796 +0000 UTC m=+0.386249842 container attach 8f040ebb7f844656c75d5c6bd120ccbb01610712f705ad86256a5f170348a829 (image=quay.io/ceph/ceph:v18, name=upbeat_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:34:45 np0005532048 podman[97680]: 2025-11-22 08:34:45.091226247 +0000 UTC m=+0.067934336 container create 86054c992d08da40573f3d90dbf1a21b1a5d2d78991520752766bae1f875fa90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:45 np0005532048 podman[97680]: 2025-11-22 08:34:45.042080859 +0000 UTC m=+0.018788978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:45 np0005532048 systemd[1]: Started libpod-conmon-86054c992d08da40573f3d90dbf1a21b1a5d2d78991520752766bae1f875fa90.scope.
Nov 22 03:34:45 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:45 np0005532048 podman[97680]: 2025-11-22 08:34:45.293896994 +0000 UTC m=+0.270605173 container init 86054c992d08da40573f3d90dbf1a21b1a5d2d78991520752766bae1f875fa90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 03:34:45 np0005532048 podman[97680]: 2025-11-22 08:34:45.300195855 +0000 UTC m=+0.276903954 container start 86054c992d08da40573f3d90dbf1a21b1a5d2d78991520752766bae1f875fa90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:34:45 np0005532048 nifty_zhukovsky[97715]: 167 167
Nov 22 03:34:45 np0005532048 systemd[1]: libpod-86054c992d08da40573f3d90dbf1a21b1a5d2d78991520752766bae1f875fa90.scope: Deactivated successfully.
Nov 22 03:34:45 np0005532048 conmon[97715]: conmon 86054c992d08da40573f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86054c992d08da40573f3d90dbf1a21b1a5d2d78991520752766bae1f875fa90.scope/container/memory.events
Nov 22 03:34:45 np0005532048 podman[97680]: 2025-11-22 08:34:45.370135737 +0000 UTC m=+0.346843846 container attach 86054c992d08da40573f3d90dbf1a21b1a5d2d78991520752766bae1f875fa90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 03:34:45 np0005532048 podman[97680]: 2025-11-22 08:34:45.371127431 +0000 UTC m=+0.347835520 container died 86054c992d08da40573f3d90dbf1a21b1a5d2d78991520752766bae1f875fa90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:34:45 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14242 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:34:45 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Nov 22 03:34:45 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 22 03:34:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 22 03:34:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:45 np0005532048 upbeat_meitner[97635]: Scheduled rgw.rgw update...
Nov 22 03:34:45 np0005532048 systemd[1]: libpod-8f040ebb7f844656c75d5c6bd120ccbb01610712f705ad86256a5f170348a829.scope: Deactivated successfully.
Nov 22 03:34:45 np0005532048 podman[97548]: 2025-11-22 08:34:45.507523083 +0000 UTC m=+1.011634109 container died 8f040ebb7f844656c75d5c6bd120ccbb01610712f705ad86256a5f170348a829 (image=quay.io/ceph/ceph:v18, name=upbeat_meitner, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:45 np0005532048 systemd[1]: var-lib-containers-storage-overlay-45b292b0c6c8bb3f47c57882afbbc7050c1c5bb5d6ae1434c05b2b2905d9c425-merged.mount: Deactivated successfully.
Nov 22 03:34:45 np0005532048 podman[97680]: 2025-11-22 08:34:45.630248001 +0000 UTC m=+0.606956110 container remove 86054c992d08da40573f3d90dbf1a21b1a5d2d78991520752766bae1f875fa90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:45 np0005532048 systemd[1]: var-lib-containers-storage-overlay-84983ef73da21708bcbee83cb9748f3d6ba741e54a728d9af962981fd68eee16-merged.mount: Deactivated successfully.
Nov 22 03:34:45 np0005532048 podman[97548]: 2025-11-22 08:34:45.680223878 +0000 UTC m=+1.184334904 container remove 8f040ebb7f844656c75d5c6bd120ccbb01610712f705ad86256a5f170348a829 (image=quay.io/ceph/ceph:v18, name=upbeat_meitner, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:45 np0005532048 systemd[1]: libpod-conmon-8f040ebb7f844656c75d5c6bd120ccbb01610712f705ad86256a5f170348a829.scope: Deactivated successfully.
Nov 22 03:34:45 np0005532048 systemd[1]: libpod-conmon-86054c992d08da40573f3d90dbf1a21b1a5d2d78991520752766bae1f875fa90.scope: Deactivated successfully.
Nov 22 03:34:45 np0005532048 podman[97757]: 2025-11-22 08:34:45.808854596 +0000 UTC m=+0.048934804 container create c44e3083e5700fa3259514983cf3bc54b6c04da747ee58f648b0cd63c12cd6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 03:34:45 np0005532048 systemd[1]: Started libpod-conmon-c44e3083e5700fa3259514983cf3bc54b6c04da747ee58f648b0cd63c12cd6de.scope.
Nov 22 03:34:45 np0005532048 podman[97757]: 2025-11-22 08:34:45.786883534 +0000 UTC m=+0.026963762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:45 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6324434d7a06e4b2344f83624799c78f842536525ed868e7d6dfc51860dd694e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6324434d7a06e4b2344f83624799c78f842536525ed868e7d6dfc51860dd694e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6324434d7a06e4b2344f83624799c78f842536525ed868e7d6dfc51860dd694e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6324434d7a06e4b2344f83624799c78f842536525ed868e7d6dfc51860dd694e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6324434d7a06e4b2344f83624799c78f842536525ed868e7d6dfc51860dd694e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:45 np0005532048 podman[97757]: 2025-11-22 08:34:45.91165543 +0000 UTC m=+0.151735638 container init c44e3083e5700fa3259514983cf3bc54b6c04da747ee58f648b0cd63c12cd6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:34:45 np0005532048 podman[97757]: 2025-11-22 08:34:45.919027975 +0000 UTC m=+0.159108213 container start c44e3083e5700fa3259514983cf3bc54b6c04da747ee58f648b0cd63c12cd6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:45 np0005532048 podman[97757]: 2025-11-22 08:34:45.923514262 +0000 UTC m=+0.163594470 container attach c44e3083e5700fa3259514983cf3bc54b6c04da747ee58f648b0cd63c12cd6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 03:34:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v114: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:46 np0005532048 ceph-mon[75021]: Saving service rgw.rgw spec with placement compute-0
Nov 22 03:34:46 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:46 np0005532048 python3[97855]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:34:46 np0005532048 sad_booth[97775]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:34:46 np0005532048 sad_booth[97775]: --> relative data size: 1.0
Nov 22 03:34:46 np0005532048 sad_booth[97775]: --> All data devices are unavailable
Nov 22 03:34:46 np0005532048 python3[97941]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763800486.3715353-36629-148464253551324/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:47 np0005532048 systemd[1]: libpod-c44e3083e5700fa3259514983cf3bc54b6c04da747ee58f648b0cd63c12cd6de.scope: Deactivated successfully.
Nov 22 03:34:47 np0005532048 systemd[1]: libpod-c44e3083e5700fa3259514983cf3bc54b6c04da747ee58f648b0cd63c12cd6de.scope: Consumed 1.012s CPU time.
Nov 22 03:34:47 np0005532048 podman[97757]: 2025-11-22 08:34:47.001889806 +0000 UTC m=+1.241970034 container died c44e3083e5700fa3259514983cf3bc54b6c04da747ee58f648b0cd63c12cd6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 03:34:47 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6324434d7a06e4b2344f83624799c78f842536525ed868e7d6dfc51860dd694e-merged.mount: Deactivated successfully.
Nov 22 03:34:47 np0005532048 podman[97757]: 2025-11-22 08:34:47.063493331 +0000 UTC m=+1.303573529 container remove c44e3083e5700fa3259514983cf3bc54b6c04da747ee58f648b0cd63c12cd6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:34:47 np0005532048 systemd[1]: libpod-conmon-c44e3083e5700fa3259514983cf3bc54b6c04da747ee58f648b0cd63c12cd6de.scope: Deactivated successfully.
Nov 22 03:34:47 np0005532048 python3[98109]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:47 np0005532048 podman[98125]: 2025-11-22 08:34:47.517663828 +0000 UTC m=+0.039376537 container create af9424d38dbaa7b269cd98b2b52a2b44f60d1c10a6cd77453adca0ef9c3df4e2 (image=quay.io/ceph/ceph:v18, name=stupefied_bell, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:47 np0005532048 systemd[1]: Started libpod-conmon-af9424d38dbaa7b269cd98b2b52a2b44f60d1c10a6cd77453adca0ef9c3df4e2.scope.
Nov 22 03:34:47 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac8003ce1b5426704bdb7ca71a69fcd7bcd754be127445200cbf3a29e28da7f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac8003ce1b5426704bdb7ca71a69fcd7bcd754be127445200cbf3a29e28da7f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac8003ce1b5426704bdb7ca71a69fcd7bcd754be127445200cbf3a29e28da7f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:47 np0005532048 podman[98125]: 2025-11-22 08:34:47.499100726 +0000 UTC m=+0.020813455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:47 np0005532048 podman[98125]: 2025-11-22 08:34:47.606237503 +0000 UTC m=+0.127950242 container init af9424d38dbaa7b269cd98b2b52a2b44f60d1c10a6cd77453adca0ef9c3df4e2 (image=quay.io/ceph/ceph:v18, name=stupefied_bell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:47 np0005532048 podman[98125]: 2025-11-22 08:34:47.61244102 +0000 UTC m=+0.134153729 container start af9424d38dbaa7b269cd98b2b52a2b44f60d1c10a6cd77453adca0ef9c3df4e2 (image=quay.io/ceph/ceph:v18, name=stupefied_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 03:34:47 np0005532048 podman[98125]: 2025-11-22 08:34:47.617359708 +0000 UTC m=+0.139072417 container attach af9424d38dbaa7b269cd98b2b52a2b44f60d1c10a6cd77453adca0ef9c3df4e2 (image=quay.io/ceph/ceph:v18, name=stupefied_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 22 03:34:47 np0005532048 podman[98172]: 2025-11-22 08:34:47.747670805 +0000 UTC m=+0.045931193 container create de7a1ccfe88fe02d82516ed0f461d149e18aefe10d95217548796a36162bd4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poitras, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 22 03:34:47 np0005532048 systemd[1]: Started libpod-conmon-de7a1ccfe88fe02d82516ed0f461d149e18aefe10d95217548796a36162bd4c1.scope.
Nov 22 03:34:47 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:47 np0005532048 podman[98172]: 2025-11-22 08:34:47.727850154 +0000 UTC m=+0.026110552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:47 np0005532048 podman[98172]: 2025-11-22 08:34:47.854619667 +0000 UTC m=+0.152880065 container init de7a1ccfe88fe02d82516ed0f461d149e18aefe10d95217548796a36162bd4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:47 np0005532048 podman[98172]: 2025-11-22 08:34:47.86646895 +0000 UTC m=+0.164729348 container start de7a1ccfe88fe02d82516ed0f461d149e18aefe10d95217548796a36162bd4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poitras, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 22 03:34:47 np0005532048 hungry_poitras[98189]: 167 167
Nov 22 03:34:47 np0005532048 systemd[1]: libpod-de7a1ccfe88fe02d82516ed0f461d149e18aefe10d95217548796a36162bd4c1.scope: Deactivated successfully.
Nov 22 03:34:47 np0005532048 podman[98172]: 2025-11-22 08:34:47.872267757 +0000 UTC m=+0.170528185 container attach de7a1ccfe88fe02d82516ed0f461d149e18aefe10d95217548796a36162bd4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poitras, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:47 np0005532048 podman[98172]: 2025-11-22 08:34:47.87281384 +0000 UTC m=+0.171074268 container died de7a1ccfe88fe02d82516ed0f461d149e18aefe10d95217548796a36162bd4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poitras, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:34:47 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8bc1d739bc8a202f647ca11b7741e464f7e0c3fbcbeeb5d0dabbe7595008161a-merged.mount: Deactivated successfully.
Nov 22 03:34:47 np0005532048 podman[98172]: 2025-11-22 08:34:47.929022197 +0000 UTC m=+0.227282585 container remove de7a1ccfe88fe02d82516ed0f461d149e18aefe10d95217548796a36162bd4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poitras, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:34:47 np0005532048 systemd[1]: libpod-conmon-de7a1ccfe88fe02d82516ed0f461d149e18aefe10d95217548796a36162bd4c1.scope: Deactivated successfully.
Nov 22 03:34:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v115: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:48 np0005532048 podman[98232]: 2025-11-22 08:34:48.082781151 +0000 UTC m=+0.045712267 container create eb4c322a33dab32480a5a6f9f8f4316060403cb305bb59afa221530df8f019c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bohr, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:48 np0005532048 systemd[1]: Started libpod-conmon-eb4c322a33dab32480a5a6f9f8f4316060403cb305bb59afa221530df8f019c7.scope.
Nov 22 03:34:48 np0005532048 podman[98232]: 2025-11-22 08:34:48.055750369 +0000 UTC m=+0.018681495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:48 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07cd5d60370879d7c5f1d7a65d1ceeccc9b476d21d58f0244a9bb9fe2bb04c1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07cd5d60370879d7c5f1d7a65d1ceeccc9b476d21d58f0244a9bb9fe2bb04c1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07cd5d60370879d7c5f1d7a65d1ceeccc9b476d21d58f0244a9bb9fe2bb04c1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07cd5d60370879d7c5f1d7a65d1ceeccc9b476d21d58f0244a9bb9fe2bb04c1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:48 np0005532048 podman[98232]: 2025-11-22 08:34:48.191437645 +0000 UTC m=+0.154368811 container init eb4c322a33dab32480a5a6f9f8f4316060403cb305bb59afa221530df8f019c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bohr, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 03:34:48 np0005532048 podman[98232]: 2025-11-22 08:34:48.198040162 +0000 UTC m=+0.160971278 container start eb4c322a33dab32480a5a6f9f8f4316060403cb305bb59afa221530df8f019c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:48 np0005532048 podman[98232]: 2025-11-22 08:34:48.205371346 +0000 UTC m=+0.168302502 container attach eb4c322a33dab32480a5a6f9f8f4316060403cb305bb59afa221530df8f019c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bohr, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:48 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:34:48 np0005532048 ceph-mgr[75315]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 22 03:34:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0[75017]: 2025-11-22T08:34:48.256+0000 7f8f0b5ab640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).mds e2 new map
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).mds e2 print_map
    e2
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  2
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-22T08:34:48.256238+0000
    modified  2025-11-22T08:34:48.256284+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
    max_mds  1
    in
    up  {}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:48 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 22 03:34:48 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:48 np0005532048 ceph-mgr[75315]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 22 03:34:48 np0005532048 podman[98125]: 2025-11-22 08:34:48.297828844 +0000 UTC m=+0.819541563 container died af9424d38dbaa7b269cd98b2b52a2b44f60d1c10a6cd77453adca0ef9c3df4e2 (image=quay.io/ceph/ceph:v18, name=stupefied_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 03:34:48 np0005532048 systemd[1]: libpod-af9424d38dbaa7b269cd98b2b52a2b44f60d1c10a6cd77453adca0ef9c3df4e2.scope: Deactivated successfully.
Nov 22 03:34:48 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5ac8003ce1b5426704bdb7ca71a69fcd7bcd754be127445200cbf3a29e28da7f-merged.mount: Deactivated successfully.
Nov 22 03:34:48 np0005532048 podman[98125]: 2025-11-22 08:34:48.368536674 +0000 UTC m=+0.890249383 container remove af9424d38dbaa7b269cd98b2b52a2b44f60d1c10a6cd77453adca0ef9c3df4e2 (image=quay.io/ceph/ceph:v18, name=stupefied_bell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:34:48 np0005532048 systemd[1]: libpod-conmon-af9424d38dbaa7b269cd98b2b52a2b44f60d1c10a6cd77453adca0ef9c3df4e2.scope: Deactivated successfully.
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 22 03:34:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
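The audit lines above show the mgr volumes module expanding a single `fs volume create` into three mon commands: create the metadata pool, create the data pool (flagged `bulk` for the pg autoscaler), then `fs new` wiring both into the filesystem. A minimal sketch reproducing those payloads — `volume_create_commands` is a hypothetical helper for illustration, not mgr code:

```python
import json

def volume_create_commands(vol_name: str) -> list[dict]:
    # Pool names follow the <vol>.<vol>.meta / <vol>.<vol>.data
    # convention visible in the audit log above.
    meta = f"{vol_name}.{vol_name}.meta"
    data = f"{vol_name}.{vol_name}.data"
    return [
        {"prefix": "osd pool create", "pool": meta},
        # The data pool is created with bulk=true, as in the log.
        {"bulk": True, "prefix": "osd pool create", "pool": data},
        {"prefix": "fs new", "fs_name": vol_name,
         "metadata": meta, "data": data},
    ]

for cmd in volume_create_commands("cephfs"):
    print(json.dumps(cmd, sort_keys=True))
```

The `MDS_ALL_DOWN` / `MDS_UP_LESS_THAN_MAX` warnings that follow are expected at this point: the filesystem exists before any MDS daemon has been scheduled for it.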
Nov 22 03:34:48 np0005532048 python3[98292]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
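The ansible task above bind-mounts `/tmp/ceph_mds.yml` into the container and feeds it to `ceph orch apply --in-file`. The file's contents are not captured in the log; a hypothetical reconstruction, with the service id and placement taken only from the "Saving service mds.cephfs spec with placement compute-0" audit lines that follow, might look like:

```yaml
# Hypothetical sketch of /tmp/ceph_mds.yml (actual contents not logged)
service_type: mds
service_id: cephfs
placement:
  hosts:
    - compute-0
```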
Nov 22 03:34:48 np0005532048 podman[98293]: 2025-11-22 08:34:48.777210889 +0000 UTC m=+0.048185456 container create 3a876fc45e147bb1503cc11658c1815535c2ef9d802662d38082cb543f05e577 (image=quay.io/ceph/ceph:v18, name=pensive_chaum, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:48 np0005532048 systemd[1]: Started libpod-conmon-3a876fc45e147bb1503cc11658c1815535c2ef9d802662d38082cb543f05e577.scope.
Nov 22 03:34:48 np0005532048 podman[98293]: 2025-11-22 08:34:48.754246134 +0000 UTC m=+0.025220721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:48 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f30feda6dd12255cf3c20ff2028c13e8bb30a658d5d42920daf9595634785bf5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f30feda6dd12255cf3c20ff2028c13e8bb30a658d5d42920daf9595634785bf5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f30feda6dd12255cf3c20ff2028c13e8bb30a658d5d42920daf9595634785bf5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:48 np0005532048 podman[98293]: 2025-11-22 08:34:48.907585138 +0000 UTC m=+0.178559725 container init 3a876fc45e147bb1503cc11658c1815535c2ef9d802662d38082cb543f05e577 (image=quay.io/ceph/ceph:v18, name=pensive_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 22 03:34:48 np0005532048 podman[98293]: 2025-11-22 08:34:48.91562658 +0000 UTC m=+0.186601147 container start 3a876fc45e147bb1503cc11658c1815535c2ef9d802662d38082cb543f05e577 (image=quay.io/ceph/ceph:v18, name=pensive_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:34:48 np0005532048 podman[98293]: 2025-11-22 08:34:48.985663925 +0000 UTC m=+0.256638552 container attach 3a876fc45e147bb1503cc11658c1815535c2ef9d802662d38082cb543f05e577 (image=quay.io/ceph/ceph:v18, name=pensive_chaum, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]: {
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:    "0": [
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:        {
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "devices": [
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "/dev/loop3"
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            ],
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "lv_name": "ceph_lv0",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "lv_size": "21470642176",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "name": "ceph_lv0",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "tags": {
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.cluster_name": "ceph",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.crush_device_class": "",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.encrypted": "0",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.osd_id": "0",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.type": "block",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.vdo": "0"
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            },
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "type": "block",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "vg_name": "ceph_vg0"
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:        }
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:    ],
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:    "1": [
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:        {
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "devices": [
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "/dev/loop4"
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            ],
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "lv_name": "ceph_lv1",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "lv_size": "21470642176",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "name": "ceph_lv1",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "tags": {
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.cluster_name": "ceph",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.crush_device_class": "",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.encrypted": "0",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.osd_id": "1",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.type": "block",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.vdo": "0"
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            },
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "type": "block",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "vg_name": "ceph_vg1"
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:        }
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:    ],
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:    "2": [
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:        {
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "devices": [
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "/dev/loop5"
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            ],
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "lv_name": "ceph_lv2",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "lv_size": "21470642176",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "name": "ceph_lv2",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "tags": {
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.cluster_name": "ceph",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.crush_device_class": "",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.encrypted": "0",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.osd_id": "2",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.type": "block",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:                "ceph.vdo": "0"
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            },
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "type": "block",
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:            "vg_name": "ceph_vg2"
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:        }
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]:    ]
Nov 22 03:34:49 np0005532048 admiring_bohr[98247]: }
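The JSON emitted by the `admiring_bohr` container has the shape of `ceph-volume lvm list --format json` output: a map of OSD id to a list of logical volumes, each carrying `ceph.*` LVM tags. A small sketch of pulling an OSD id → (lv_path, osd_fsid) map out of such a report; `SAMPLE` below is a trimmed stand-in for the output above:

```python
import json

# Trimmed stand-in for the ceph-volume-style report in the log.
SAMPLE = """
{
  "0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
         "tags": {"ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
                  "ceph.type": "block"}}]
}
"""

def osd_map(report: str) -> dict:
    """Map each OSD id to the path and fsid of its block LV."""
    out = {}
    for osd_id, lvs in json.loads(report).items():
        for lv in lvs:
            if lv["tags"].get("ceph.type") == "block":
                out[int(osd_id)] = (lv["lv_path"],
                                    lv["tags"]["ceph.osd_fsid"])
    return out

print(osd_map(SAMPLE))
```

In the log's report all three OSDs (0, 1, 2) carry `ceph.osdspec_affinity=default_drive_group`, i.e. they were created by the same cephadm drive-group spec, one per loop device.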
Nov 22 03:34:49 np0005532048 systemd[1]: libpod-eb4c322a33dab32480a5a6f9f8f4316060403cb305bb59afa221530df8f019c7.scope: Deactivated successfully.
Nov 22 03:34:49 np0005532048 podman[98232]: 2025-11-22 08:34:49.085000236 +0000 UTC m=+1.047931362 container died eb4c322a33dab32480a5a6f9f8f4316060403cb305bb59afa221530df8f019c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay-07cd5d60370879d7c5f1d7a65d1ceeccc9b476d21d58f0244a9bb9fe2bb04c1b-merged.mount: Deactivated successfully.
Nov 22 03:34:49 np0005532048 podman[98232]: 2025-11-22 08:34:49.389349272 +0000 UTC m=+1.352280398 container remove eb4c322a33dab32480a5a6f9f8f4316060403cb305bb59afa221530df8f019c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bohr, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:49 np0005532048 systemd[1]: libpod-conmon-eb4c322a33dab32480a5a6f9f8f4316060403cb305bb59afa221530df8f019c7.scope: Deactivated successfully.
Nov 22 03:34:49 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:34:49 np0005532048 ceph-mgr[75315]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 22 03:34:49 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 22 03:34:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 22 03:34:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:49 np0005532048 pensive_chaum[98308]: Scheduled mds.cephfs update...
Nov 22 03:34:49 np0005532048 ceph-mon[75021]: Saving service mds.cephfs spec with placement compute-0
Nov 22 03:34:49 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:49 np0005532048 systemd[1]: libpod-3a876fc45e147bb1503cc11658c1815535c2ef9d802662d38082cb543f05e577.scope: Deactivated successfully.
Nov 22 03:34:49 np0005532048 podman[98293]: 2025-11-22 08:34:49.54748658 +0000 UTC m=+0.818461157 container died 3a876fc45e147bb1503cc11658c1815535c2ef9d802662d38082cb543f05e577 (image=quay.io/ceph/ceph:v18, name=pensive_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:34:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f30feda6dd12255cf3c20ff2028c13e8bb30a658d5d42920daf9595634785bf5-merged.mount: Deactivated successfully.
Nov 22 03:34:49 np0005532048 podman[98293]: 2025-11-22 08:34:49.604793152 +0000 UTC m=+0.875767719 container remove 3a876fc45e147bb1503cc11658c1815535c2ef9d802662d38082cb543f05e577 (image=quay.io/ceph/ceph:v18, name=pensive_chaum, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 03:34:49 np0005532048 systemd[1]: libpod-conmon-3a876fc45e147bb1503cc11658c1815535c2ef9d802662d38082cb543f05e577.scope: Deactivated successfully.
Nov 22 03:34:50 np0005532048 podman[98505]: 2025-11-22 08:34:50.029648702 +0000 UTC m=+0.049631581 container create 049e48d0cbf7d5db81f08ac1613f5fef4e2ce93ce2f0cad5456693b3dfb5e49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_grothendieck, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v117: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:50 np0005532048 systemd[1]: Started libpod-conmon-049e48d0cbf7d5db81f08ac1613f5fef4e2ce93ce2f0cad5456693b3dfb5e49c.scope.
Nov 22 03:34:50 np0005532048 podman[98505]: 2025-11-22 08:34:50.001193506 +0000 UTC m=+0.021176415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:50 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:50 np0005532048 podman[98505]: 2025-11-22 08:34:50.135544769 +0000 UTC m=+0.155527698 container init 049e48d0cbf7d5db81f08ac1613f5fef4e2ce93ce2f0cad5456693b3dfb5e49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_grothendieck, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:50 np0005532048 podman[98505]: 2025-11-22 08:34:50.145110237 +0000 UTC m=+0.165093106 container start 049e48d0cbf7d5db81f08ac1613f5fef4e2ce93ce2f0cad5456693b3dfb5e49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_grothendieck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 03:34:50 np0005532048 admiring_grothendieck[98545]: 167 167
Nov 22 03:34:50 np0005532048 systemd[1]: libpod-049e48d0cbf7d5db81f08ac1613f5fef4e2ce93ce2f0cad5456693b3dfb5e49c.scope: Deactivated successfully.
Nov 22 03:34:50 np0005532048 podman[98505]: 2025-11-22 08:34:50.152430811 +0000 UTC m=+0.172413700 container attach 049e48d0cbf7d5db81f08ac1613f5fef4e2ce93ce2f0cad5456693b3dfb5e49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 03:34:50 np0005532048 podman[98505]: 2025-11-22 08:34:50.153326462 +0000 UTC m=+0.173309341 container died 049e48d0cbf7d5db81f08ac1613f5fef4e2ce93ce2f0cad5456693b3dfb5e49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_grothendieck, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:34:50 np0005532048 systemd[1]: var-lib-containers-storage-overlay-42599f865de43e8ed9f8a157c1d98f4aaa8a413bc8b71aafaa9e85b4e70463ca-merged.mount: Deactivated successfully.
Nov 22 03:34:50 np0005532048 podman[98505]: 2025-11-22 08:34:50.205148964 +0000 UTC m=+0.225131843 container remove 049e48d0cbf7d5db81f08ac1613f5fef4e2ce93ce2f0cad5456693b3dfb5e49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:50 np0005532048 systemd[1]: libpod-conmon-049e48d0cbf7d5db81f08ac1613f5fef4e2ce93ce2f0cad5456693b3dfb5e49c.scope: Deactivated successfully.
Nov 22 03:34:50 np0005532048 podman[98623]: 2025-11-22 08:34:50.35473206 +0000 UTC m=+0.045400980 container create f8b4792a1fbf2be02ae9c6975519dd8962f65a1a1ba8120711eb78af7abbc45d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:50 np0005532048 python3[98617]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:34:50 np0005532048 systemd[1]: Started libpod-conmon-f8b4792a1fbf2be02ae9c6975519dd8962f65a1a1ba8120711eb78af7abbc45d.scope.
Nov 22 03:34:50 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd72b589b15d578afd01b7ab4d03ba186019a6895c833d2f501d4ae1f6abd4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd72b589b15d578afd01b7ab4d03ba186019a6895c833d2f501d4ae1f6abd4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:50 np0005532048 podman[98623]: 2025-11-22 08:34:50.332943982 +0000 UTC m=+0.023612922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd72b589b15d578afd01b7ab4d03ba186019a6895c833d2f501d4ae1f6abd4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd72b589b15d578afd01b7ab4d03ba186019a6895c833d2f501d4ae1f6abd4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:50 np0005532048 podman[98623]: 2025-11-22 08:34:50.442780023 +0000 UTC m=+0.133448963 container init f8b4792a1fbf2be02ae9c6975519dd8962f65a1a1ba8120711eb78af7abbc45d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:50 np0005532048 podman[98623]: 2025-11-22 08:34:50.448829797 +0000 UTC m=+0.139498707 container start f8b4792a1fbf2be02ae9c6975519dd8962f65a1a1ba8120711eb78af7abbc45d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:50 np0005532048 podman[98623]: 2025-11-22 08:34:50.456351136 +0000 UTC m=+0.147020086 container attach f8b4792a1fbf2be02ae9c6975519dd8962f65a1a1ba8120711eb78af7abbc45d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:34:50 np0005532048 ceph-mon[75021]: Saving service mds.cephfs spec with placement compute-0
Nov 22 03:34:50 np0005532048 python3[98716]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763800490.0750706-36659-201655539840184/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=4960bd1f30f6819c36201db3694f6bf9dc55bf29 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:51 np0005532048 python3[98766]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:51 np0005532048 podman[98769]: 2025-11-22 08:34:51.305460661 +0000 UTC m=+0.058337609 container create 0ee478d04247b9cafb6d87233883449a8aa81e1842ff83f67fe4a0fd1a334494 (image=quay.io/ceph/ceph:v18, name=flamboyant_nash, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:51 np0005532048 systemd[1]: Started libpod-conmon-0ee478d04247b9cafb6d87233883449a8aa81e1842ff83f67fe4a0fd1a334494.scope.
Nov 22 03:34:51 np0005532048 podman[98769]: 2025-11-22 08:34:51.281045921 +0000 UTC m=+0.033922899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:51 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/543b5d54082a225b6896a0f395ec1435b0139cb795537926e35757775e9d9205/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/543b5d54082a225b6896a0f395ec1435b0139cb795537926e35757775e9d9205/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:51 np0005532048 podman[98769]: 2025-11-22 08:34:51.40848742 +0000 UTC m=+0.161364388 container init 0ee478d04247b9cafb6d87233883449a8aa81e1842ff83f67fe4a0fd1a334494 (image=quay.io/ceph/ceph:v18, name=flamboyant_nash, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:34:51 np0005532048 podman[98769]: 2025-11-22 08:34:51.416246484 +0000 UTC m=+0.169123442 container start 0ee478d04247b9cafb6d87233883449a8aa81e1842ff83f67fe4a0fd1a334494 (image=quay.io/ceph/ceph:v18, name=flamboyant_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:34:51 np0005532048 podman[98769]: 2025-11-22 08:34:51.425118905 +0000 UTC m=+0.177995853 container attach 0ee478d04247b9cafb6d87233883449a8aa81e1842ff83f67fe4a0fd1a334494 (image=quay.io/ceph/ceph:v18, name=flamboyant_nash, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]: {
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:        "osd_id": 1,
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:        "type": "bluestore"
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:    },
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:        "osd_id": 0,
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:        "type": "bluestore"
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:    },
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:        "osd_id": 2,
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:        "type": "bluestore"
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]:    }
Nov 22 03:34:51 np0005532048 kind_brahmagupta[98640]: }
Nov 22 03:34:51 np0005532048 systemd[1]: libpod-f8b4792a1fbf2be02ae9c6975519dd8962f65a1a1ba8120711eb78af7abbc45d.scope: Deactivated successfully.
Nov 22 03:34:51 np0005532048 systemd[1]: libpod-f8b4792a1fbf2be02ae9c6975519dd8962f65a1a1ba8120711eb78af7abbc45d.scope: Consumed 1.061s CPU time.
Nov 22 03:34:51 np0005532048 podman[98623]: 2025-11-22 08:34:51.503900278 +0000 UTC m=+1.194569198 container died f8b4792a1fbf2be02ae9c6975519dd8962f65a1a1ba8120711eb78af7abbc45d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay-acd72b589b15d578afd01b7ab4d03ba186019a6895c833d2f501d4ae1f6abd4c-merged.mount: Deactivated successfully.
Nov 22 03:34:51 np0005532048 podman[98623]: 2025-11-22 08:34:51.596799976 +0000 UTC m=+1.287468896 container remove f8b4792a1fbf2be02ae9c6975519dd8962f65a1a1ba8120711eb78af7abbc45d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brahmagupta, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:34:51 np0005532048 systemd[1]: libpod-conmon-f8b4792a1fbf2be02ae9c6975519dd8962f65a1a1ba8120711eb78af7abbc45d.scope: Deactivated successfully.
Nov 22 03:34:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:34:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:34:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 22 03:34:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1336860090' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:34:52
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'vms', '.mgr', 'volumes']
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v118: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1336860090' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 22 03:34:52 np0005532048 systemd[1]: libpod-0ee478d04247b9cafb6d87233883449a8aa81e1842ff83f67fe4a0fd1a334494.scope: Deactivated successfully.
Nov 22 03:34:52 np0005532048 podman[98769]: 2025-11-22 08:34:52.128491036 +0000 UTC m=+0.881367984 container died 0ee478d04247b9cafb6d87233883449a8aa81e1842ff83f67fe4a0fd1a334494 (image=quay.io/ceph/ceph:v18, name=flamboyant_nash, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:52 np0005532048 systemd[1]: var-lib-containers-storage-overlay-543b5d54082a225b6896a0f395ec1435b0139cb795537926e35757775e9d9205-merged.mount: Deactivated successfully.
Nov 22 03:34:52 np0005532048 podman[98769]: 2025-11-22 08:34:52.361475484 +0000 UTC m=+1.114352432 container remove 0ee478d04247b9cafb6d87233883449a8aa81e1842ff83f67fe4a0fd1a334494 (image=quay.io/ceph/ceph:v18, name=flamboyant_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:34:52 np0005532048 systemd[1]: libpod-conmon-0ee478d04247b9cafb6d87233883449a8aa81e1842ff83f67fe4a0fd1a334494.scope: Deactivated successfully.
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 03:34:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:34:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:34:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:34:52 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:52 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:52 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1336860090' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 22 03:34:52 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1336860090' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 22 03:34:52 np0005532048 podman[99082]: 2025-11-22 08:34:52.714494725 +0000 UTC m=+0.137857487 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:34:52 np0005532048 podman[99082]: 2025-11-22 08:34:52.811885771 +0000 UTC m=+0.235248533 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:53 np0005532048 python3[99139]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 22 03:34:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:34:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 22 03:34:53 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 22 03:34:53 np0005532048 ceph-mgr[75315]: [progress INFO root] update: starting ev 107996be-0562-4e53-9a3b-0632a523b7b9 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 22 03:34:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:34:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:34:53 np0005532048 podman[99141]: 2025-11-22 08:34:53.209102733 +0000 UTC m=+0.122251717 container create 803ccdd31efc1793904e479b67730c8ffa69fd3a2d731ebbf0a6cf165af6fa5b (image=quay.io/ceph/ceph:v18, name=festive_diffie, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:53 np0005532048 podman[99141]: 2025-11-22 08:34:53.121672015 +0000 UTC m=+0.034821019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:53 np0005532048 systemd[1]: Started libpod-conmon-803ccdd31efc1793904e479b67730c8ffa69fd3a2d731ebbf0a6cf165af6fa5b.scope.
Nov 22 03:34:53 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:53 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e171c7ab129aa143fa3e8659b80cbecc8496e4b494c22bf53910e45732672ad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:53 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e171c7ab129aa143fa3e8659b80cbecc8496e4b494c22bf53910e45732672ad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:53 np0005532048 podman[99141]: 2025-11-22 08:34:53.339925412 +0000 UTC m=+0.253074406 container init 803ccdd31efc1793904e479b67730c8ffa69fd3a2d731ebbf0a6cf165af6fa5b (image=quay.io/ceph/ceph:v18, name=festive_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 03:34:53 np0005532048 podman[99141]: 2025-11-22 08:34:53.348380733 +0000 UTC m=+0.261529727 container start 803ccdd31efc1793904e479b67730c8ffa69fd3a2d731ebbf0a6cf165af6fa5b (image=quay.io/ceph/ceph:v18, name=festive_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 03:34:53 np0005532048 podman[99141]: 2025-11-22 08:34:53.380341433 +0000 UTC m=+0.293490437 container attach 803ccdd31efc1793904e479b67730c8ffa69fd3a2d731ebbf0a6cf165af6fa5b (image=quay.io/ceph/ceph:v18, name=festive_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 03:34:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:34:53 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:34:53 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:34:53 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:34:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 03:34:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/305736231' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 03:34:53 np0005532048 festive_diffie[99174]: 
Nov 22 03:34:53 np0005532048 festive_diffie[99174]: {"fsid":"34829716-a12c-57a6-8915-c1aa615c9d8a","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":236,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":38,"num_osds":3,"num_up_osds":3,"osd_up_since":1763800446,"num_in_osds":3,"osd_in_since":1763800405,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83959808,"bytes_avail":64327966720,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-22T08:32:54.036030+0000","services":{}},"progress_events":{}}
Nov 22 03:34:53 np0005532048 systemd[1]: libpod-803ccdd31efc1793904e479b67730c8ffa69fd3a2d731ebbf0a6cf165af6fa5b.scope: Deactivated successfully.
Nov 22 03:34:53 np0005532048 podman[99141]: 2025-11-22 08:34:53.964040958 +0000 UTC m=+0.877189942 container died 803ccdd31efc1793904e479b67730c8ffa69fd3a2d731ebbf0a6cf165af6fa5b (image=quay.io/ceph/ceph:v18, name=festive_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:34:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:34:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v120: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 22 03:34:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7e171c7ab129aa143fa3e8659b80cbecc8496e4b494c22bf53910e45732672ad-merged.mount: Deactivated successfully.
Nov 22 03:34:54 np0005532048 ceph-mgr[75315]: [progress INFO root] update: starting ev 8626aaee-dda2-4db8-9585-2f192220d96c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:54 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 122cdece-5e97-414e-ba43-c2c2f5dee90d does not exist
Nov 22 03:34:54 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 83681c2c-362c-4b7d-a4dc-b24265985f8e does not exist
Nov 22 03:34:54 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 332e8a5f-1e7f-484f-b74c-4ce13da9ef5e does not exist
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:34:54 np0005532048 podman[99141]: 2025-11-22 08:34:54.570135747 +0000 UTC m=+1.483284771 container remove 803ccdd31efc1793904e479b67730c8ffa69fd3a2d731ebbf0a6cf165af6fa5b (image=quay.io/ceph/ceph:v18, name=festive_diffie, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:54 np0005532048 systemd[1]: libpod-conmon-803ccdd31efc1793904e479b67730c8ffa69fd3a2d731ebbf0a6cf165af6fa5b.scope: Deactivated successfully.
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:34:54 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:34:54 np0005532048 python3[99382]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:54 np0005532048 podman[99410]: 2025-11-22 08:34:54.967053422 +0000 UTC m=+0.039628083 container create 0f6a8fd528c63f9577592c567cdfa0debe76acbdb122f852713bce31a8541ef6 (image=quay.io/ceph/ceph:v18, name=pensive_bohr, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:55 np0005532048 systemd[1]: Started libpod-conmon-0f6a8fd528c63f9577592c567cdfa0debe76acbdb122f852713bce31a8541ef6.scope.
Nov 22 03:34:55 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799cafd909eaa735eb6e590daaabe658143cccfe5cc7bc1cc4a6143c31b08b15/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799cafd909eaa735eb6e590daaabe658143cccfe5cc7bc1cc4a6143c31b08b15/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:55 np0005532048 podman[99410]: 2025-11-22 08:34:54.951218166 +0000 UTC m=+0.023792827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:55 np0005532048 podman[99410]: 2025-11-22 08:34:55.054956162 +0000 UTC m=+0.127530843 container init 0f6a8fd528c63f9577592c567cdfa0debe76acbdb122f852713bce31a8541ef6 (image=quay.io/ceph/ceph:v18, name=pensive_bohr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:34:55 np0005532048 podman[99410]: 2025-11-22 08:34:55.061483347 +0000 UTC m=+0.134058048 container start 0f6a8fd528c63f9577592c567cdfa0debe76acbdb122f852713bce31a8541ef6 (image=quay.io/ceph/ceph:v18, name=pensive_bohr, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:55 np0005532048 podman[99410]: 2025-11-22 08:34:55.066815453 +0000 UTC m=+0.139390134 container attach 0f6a8fd528c63f9577592c567cdfa0debe76acbdb122f852713bce31a8541ef6 (image=quay.io/ceph/ceph:v18, name=pensive_bohr, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 03:34:55 np0005532048 podman[99468]: 2025-11-22 08:34:55.2134953 +0000 UTC m=+0.043923625 container create a7e4cd22ab65d7559c36ebbf458c35747758fb01247b8e79b078edcb7d74f80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:34:55 np0005532048 systemd[1]: Started libpod-conmon-a7e4cd22ab65d7559c36ebbf458c35747758fb01247b8e79b078edcb7d74f80d.scope.
Nov 22 03:34:55 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:55 np0005532048 podman[99468]: 2025-11-22 08:34:55.191379744 +0000 UTC m=+0.021808089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:55 np0005532048 podman[99468]: 2025-11-22 08:34:55.289714922 +0000 UTC m=+0.120143277 container init a7e4cd22ab65d7559c36ebbf458c35747758fb01247b8e79b078edcb7d74f80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:55 np0005532048 podman[99468]: 2025-11-22 08:34:55.294744671 +0000 UTC m=+0.125172996 container start a7e4cd22ab65d7559c36ebbf458c35747758fb01247b8e79b078edcb7d74f80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_grothendieck, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:34:55 np0005532048 epic_grothendieck[99484]: 167 167
Nov 22 03:34:55 np0005532048 systemd[1]: libpod-a7e4cd22ab65d7559c36ebbf458c35747758fb01247b8e79b078edcb7d74f80d.scope: Deactivated successfully.
Nov 22 03:34:55 np0005532048 podman[99468]: 2025-11-22 08:34:55.299432643 +0000 UTC m=+0.129860998 container attach a7e4cd22ab65d7559c36ebbf458c35747758fb01247b8e79b078edcb7d74f80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_grothendieck, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:55 np0005532048 podman[99468]: 2025-11-22 08:34:55.299656499 +0000 UTC m=+0.130084824 container died a7e4cd22ab65d7559c36ebbf458c35747758fb01247b8e79b078edcb7d74f80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_grothendieck, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 03:34:55 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5e34c72e55c57c054b24f146d16ab76904f7f5eecbbd4d5cb9b5e3626145db8a-merged.mount: Deactivated successfully.
Nov 22 03:34:55 np0005532048 podman[99468]: 2025-11-22 08:34:55.347206929 +0000 UTC m=+0.177635254 container remove a7e4cd22ab65d7559c36ebbf458c35747758fb01247b8e79b078edcb7d74f80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_grothendieck, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:34:55 np0005532048 systemd[1]: libpod-conmon-a7e4cd22ab65d7559c36ebbf458c35747758fb01247b8e79b078edcb7d74f80d.scope: Deactivated successfully.
Nov 22 03:34:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 22 03:34:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:34:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 22 03:34:55 np0005532048 podman[99528]: 2025-11-22 08:34:55.516687928 +0000 UTC m=+0.061270998 container create 3f7f5efc188128e3502d2e11acb660cd5cc381173f33010f4e25c9e07241c582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mccarthy, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:34:55 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 22 03:34:55 np0005532048 ceph-mgr[75315]: [progress INFO root] update: starting ev a4d53522-32ac-4b7a-94c3-3833a8029eb1 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 22 03:34:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:34:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:34:55 np0005532048 systemd[1]: Started libpod-conmon-3f7f5efc188128e3502d2e11acb660cd5cc381173f33010f4e25c9e07241c582.scope.
Nov 22 03:34:55 np0005532048 podman[99528]: 2025-11-22 08:34:55.479698068 +0000 UTC m=+0.024281168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:55 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72fcd12a4c551f01038d4f930612913639bfc239d9f9dd4d28b77a4ca3cb6b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72fcd12a4c551f01038d4f930612913639bfc239d9f9dd4d28b77a4ca3cb6b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72fcd12a4c551f01038d4f930612913639bfc239d9f9dd4d28b77a4ca3cb6b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72fcd12a4c551f01038d4f930612913639bfc239d9f9dd4d28b77a4ca3cb6b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72fcd12a4c551f01038d4f930612913639bfc239d9f9dd4d28b77a4ca3cb6b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:55 np0005532048 podman[99528]: 2025-11-22 08:34:55.626910377 +0000 UTC m=+0.171493537 container init 3f7f5efc188128e3502d2e11acb660cd5cc381173f33010f4e25c9e07241c582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mccarthy, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:55 np0005532048 podman[99528]: 2025-11-22 08:34:55.639505937 +0000 UTC m=+0.184089017 container start 3f7f5efc188128e3502d2e11acb660cd5cc381173f33010f4e25c9e07241c582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mccarthy, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:34:55 np0005532048 podman[99528]: 2025-11-22 08:34:55.646678758 +0000 UTC m=+0.191261828 container attach 3f7f5efc188128e3502d2e11acb660cd5cc381173f33010f4e25c9e07241c582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mccarthy, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:34:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3123912952' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:34:55 np0005532048 pensive_bohr[99427]: 
Nov 22 03:34:55 np0005532048 pensive_bohr[99427]: {"epoch":1,"fsid":"34829716-a12c-57a6-8915-c1aa615c9d8a","modified":"2025-11-22T08:30:52.601238Z","created":"2025-11-22T08:30:52.601238Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 22 03:34:55 np0005532048 pensive_bohr[99427]: dumped monmap epoch 1
Nov 22 03:34:55 np0005532048 systemd[1]: libpod-0f6a8fd528c63f9577592c567cdfa0debe76acbdb122f852713bce31a8541ef6.scope: Deactivated successfully.
Nov 22 03:34:55 np0005532048 podman[99410]: 2025-11-22 08:34:55.78391082 +0000 UTC m=+0.856485491 container died 0f6a8fd528c63f9577592c567cdfa0debe76acbdb122f852713bce31a8541ef6 (image=quay.io/ceph/ceph:v18, name=pensive_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:34:55 np0005532048 systemd[1]: var-lib-containers-storage-overlay-799cafd909eaa735eb6e590daaabe658143cccfe5cc7bc1cc4a6143c31b08b15-merged.mount: Deactivated successfully.
Nov 22 03:34:55 np0005532048 podman[99410]: 2025-11-22 08:34:55.847407899 +0000 UTC m=+0.919982560 container remove 0f6a8fd528c63f9577592c567cdfa0debe76acbdb122f852713bce31a8541ef6 (image=quay.io/ceph/ceph:v18, name=pensive_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 03:34:55 np0005532048 systemd[1]: libpod-conmon-0f6a8fd528c63f9577592c567cdfa0debe76acbdb122f852713bce31a8541ef6.scope: Deactivated successfully.
Nov 22 03:34:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v123: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:34:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:34:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:34:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:34:56 np0005532048 python3[99588]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:56 np0005532048 podman[99593]: 2025-11-22 08:34:56.457875421 +0000 UTC m=+0.042234054 container create 8aa8c848de839e9f3477b34603cbc61f68894a88fbf3f63abb47351cf79afce8 (image=quay.io/ceph/ceph:v18, name=nostalgic_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:34:56 np0005532048 systemd[1]: Started libpod-conmon-8aa8c848de839e9f3477b34603cbc61f68894a88fbf3f63abb47351cf79afce8.scope.
Nov 22 03:34:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 22 03:34:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:34:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:34:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:34:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 22 03:34:56 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 22 03:34:56 np0005532048 ceph-mgr[75315]: [progress INFO root] update: starting ev e260ea9b-7ce0-43fc-b566-bc123afe7977 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 22 03:34:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Nov 22 03:34:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 22 03:34:56 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:56 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:34:56 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:34:56 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:34:56 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:34:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6df088d671398fcf20fd342bbc0ffb1e8f2b9dfba57e6e40213d2d1345f6bf1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6df088d671398fcf20fd342bbc0ffb1e8f2b9dfba57e6e40213d2d1345f6bf1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:56 np0005532048 podman[99593]: 2025-11-22 08:34:56.435277704 +0000 UTC m=+0.019636367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 39 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=39 pruub=15.027576447s) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active pruub 71.163841248s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=39 pruub=15.027576447s) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown pruub 71.163841248s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.e( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.c( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.10( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.12( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.14( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.1e( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.1a( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.1( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=19/20 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 podman[99593]: 2025-11-22 08:34:56.547232026 +0000 UTC m=+0.131590679 container init 8aa8c848de839e9f3477b34603cbc61f68894a88fbf3f63abb47351cf79afce8 (image=quay.io/ceph/ceph:v18, name=nostalgic_spence, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 03:34:56 np0005532048 podman[99593]: 2025-11-22 08:34:56.553484164 +0000 UTC m=+0.137842797 container start 8aa8c848de839e9f3477b34603cbc61f68894a88fbf3f63abb47351cf79afce8 (image=quay.io/ceph/ceph:v18, name=nostalgic_spence, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:56 np0005532048 podman[99593]: 2025-11-22 08:34:56.559104587 +0000 UTC m=+0.143463240 container attach 8aa8c848de839e9f3477b34603cbc61f68894a88fbf3f63abb47351cf79afce8 (image=quay.io/ceph/ceph:v18, name=nostalgic_spence, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:56 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 41 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=41 pruub=9.210806847s) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active pruub 70.633636475s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:34:56 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 41 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=41 pruub=9.210806847s) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown pruub 70.633636475s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:56 np0005532048 goofy_mccarthy[99544]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:34:56 np0005532048 goofy_mccarthy[99544]: --> relative data size: 1.0
Nov 22 03:34:56 np0005532048 goofy_mccarthy[99544]: --> All data devices are unavailable
Nov 22 03:34:56 np0005532048 systemd[1]: libpod-3f7f5efc188128e3502d2e11acb660cd5cc381173f33010f4e25c9e07241c582.scope: Deactivated successfully.
Nov 22 03:34:56 np0005532048 systemd[1]: libpod-3f7f5efc188128e3502d2e11acb660cd5cc381173f33010f4e25c9e07241c582.scope: Consumed 1.061s CPU time.
Nov 22 03:34:56 np0005532048 podman[99528]: 2025-11-22 08:34:56.767533242 +0000 UTC m=+1.312116312 container died 3f7f5efc188128e3502d2e11acb660cd5cc381173f33010f4e25c9e07241c582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mccarthy, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 03:34:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b72fcd12a4c551f01038d4f930612913639bfc239d9f9dd4d28b77a4ca3cb6b6-merged.mount: Deactivated successfully.
Nov 22 03:34:56 np0005532048 podman[99528]: 2025-11-22 08:34:56.839462202 +0000 UTC m=+1.384045272 container remove 3f7f5efc188128e3502d2e11acb660cd5cc381173f33010f4e25c9e07241c582 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mccarthy, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:56 np0005532048 systemd[1]: libpod-conmon-3f7f5efc188128e3502d2e11acb660cd5cc381173f33010f4e25c9e07241c582.scope: Deactivated successfully.
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 41 pg[4.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=41 pruub=10.768535614s) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active pruub 79.140609741s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 41 pg[4.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=41 pruub=10.768535614s) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown pruub 79.140609741s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 22 03:34:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1978832546' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 22 03:34:57 np0005532048 nostalgic_spence[99615]: [client.openstack]
Nov 22 03:34:57 np0005532048 nostalgic_spence[99615]: #011key = AQCWdCFpAAAAABAAs7L/D9aKWvawfNMYeVGyIQ==
Nov 22 03:34:57 np0005532048 nostalgic_spence[99615]: #011caps mgr = "allow *"
Nov 22 03:34:57 np0005532048 nostalgic_spence[99615]: #011caps mon = "profile rbd"
Nov 22 03:34:57 np0005532048 nostalgic_spence[99615]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 22 03:34:57 np0005532048 systemd[1]: libpod-8aa8c848de839e9f3477b34603cbc61f68894a88fbf3f63abb47351cf79afce8.scope: Deactivated successfully.
Nov 22 03:34:57 np0005532048 podman[99593]: 2025-11-22 08:34:57.216749681 +0000 UTC m=+0.801108314 container died 8aa8c848de839e9f3477b34603cbc61f68894a88fbf3f63abb47351cf79afce8 (image=quay.io/ceph/ceph:v18, name=nostalgic_spence, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 03:34:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d6df088d671398fcf20fd342bbc0ffb1e8f2b9dfba57e6e40213d2d1345f6bf1-merged.mount: Deactivated successfully.
Nov 22 03:34:57 np0005532048 podman[99593]: 2025-11-22 08:34:57.274817281 +0000 UTC m=+0.859175914 container remove 8aa8c848de839e9f3477b34603cbc61f68894a88fbf3f63abb47351cf79afce8 (image=quay.io/ceph/ceph:v18, name=nostalgic_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 03:34:57 np0005532048 systemd[1]: libpod-conmon-8aa8c848de839e9f3477b34603cbc61f68894a88fbf3f63abb47351cf79afce8.scope: Deactivated successfully.
Nov 22 03:34:57 np0005532048 podman[99819]: 2025-11-22 08:34:57.456356527 +0000 UTC m=+0.039934760 container create d378d6416416163935dd4cc20ef9cab351992818a4db4d0801cf8d070e6c1f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 03:34:57 np0005532048 systemd[1]: Started libpod-conmon-d378d6416416163935dd4cc20ef9cab351992818a4db4d0801cf8d070e6c1f02.scope.
Nov 22 03:34:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 22 03:34:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 22 03:34:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 22 03:34:57 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 22 03:34:57 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.1f( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.1e( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.1d( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-mgr[75315]: [progress INFO root] update: starting ev 6a226e21-bdc7-42da-aea2-3a842500b321 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.b( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.6( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 podman[99819]: 2025-11-22 08:34:57.437204261 +0000 UTC m=+0.020782514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.19( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.3( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.c( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.16( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.15( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.17( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:34:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:34:57 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:34:57 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:34:57 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:34:57 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 22 03:34:57 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1978832546' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 22 03:34:57 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.1c( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.1d( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.1e( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.1f( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.6( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.b( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.4( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.2( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.b( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.d( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.10( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.13( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.14( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.1a( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.19( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.19( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.3( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.c( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.0( empty local-lis/les=41/42 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.15( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.17( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.16( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 podman[99819]: 2025-11-22 08:34:57.552910822 +0000 UTC m=+0.136489095 container init d378d6416416163935dd4cc20ef9cab351992818a4db4d0801cf8d070e6c1f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.1c( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.0( empty local-lis/les=41/42 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.4( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.2( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.1( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.0( empty local-lis/les=39/42 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.c( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.10( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.12( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.e( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.14( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.1a( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 podman[99819]: 2025-11-22 08:34:57.559714734 +0000 UTC m=+0.143292967 container start d378d6416416163935dd4cc20ef9cab351992818a4db4d0801cf8d070e6c1f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.d( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.b( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.10( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.13( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.14( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.1a( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 42 pg[3.19( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [1] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 systemd[1]: libpod-d378d6416416163935dd4cc20ef9cab351992818a4db4d0801cf8d070e6c1f02.scope: Deactivated successfully.
Nov 22 03:34:57 np0005532048 silly_ptolemy[99835]: 167 167
Nov 22 03:34:57 np0005532048 conmon[99835]: conmon d378d6416416163935dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d378d6416416163935dd4cc20ef9cab351992818a4db4d0801cf8d070e6c1f02.scope/container/memory.events
Nov 22 03:34:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 42 pg[2.1e( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=19/19 les/c/f=20/20/0 sis=39) [2] r=0 lpr=39 pi=[19,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:57 np0005532048 podman[99819]: 2025-11-22 08:34:57.578075511 +0000 UTC m=+0.161653764 container attach d378d6416416163935dd4cc20ef9cab351992818a4db4d0801cf8d070e6c1f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ptolemy, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:34:57 np0005532048 podman[99819]: 2025-11-22 08:34:57.578536591 +0000 UTC m=+0.162114824 container died d378d6416416163935dd4cc20ef9cab351992818a4db4d0801cf8d070e6c1f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ptolemy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0272b7f02246cb41efa6f135dcb24489197676829b2e7b51cec8004d8c5c3efb-merged.mount: Deactivated successfully.
Nov 22 03:34:57 np0005532048 podman[99819]: 2025-11-22 08:34:57.641459197 +0000 UTC m=+0.225037430 container remove d378d6416416163935dd4cc20ef9cab351992818a4db4d0801cf8d070e6c1f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:34:57 np0005532048 systemd[1]: libpod-conmon-d378d6416416163935dd4cc20ef9cab351992818a4db4d0801cf8d070e6c1f02.scope: Deactivated successfully.
Nov 22 03:34:57 np0005532048 podman[99862]: 2025-11-22 08:34:57.789243131 +0000 UTC m=+0.042427510 container create 8f1c3010f556c4ab6ae04c1ccc5324e8deb5fab0072140fd95885aa9bb76c1cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:34:57 np0005532048 systemd[1]: Started libpod-conmon-8f1c3010f556c4ab6ae04c1ccc5324e8deb5fab0072140fd95885aa9bb76c1cc.scope.
Nov 22 03:34:57 np0005532048 podman[99862]: 2025-11-22 08:34:57.767999516 +0000 UTC m=+0.021183925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:57 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800339290f8390a42df40bfd32896dc30c68d6da73077f43e1267f1589e7b4f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800339290f8390a42df40bfd32896dc30c68d6da73077f43e1267f1589e7b4f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800339290f8390a42df40bfd32896dc30c68d6da73077f43e1267f1589e7b4f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800339290f8390a42df40bfd32896dc30c68d6da73077f43e1267f1589e7b4f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:57 np0005532048 podman[99862]: 2025-11-22 08:34:57.907264936 +0000 UTC m=+0.160449345 container init 8f1c3010f556c4ab6ae04c1ccc5324e8deb5fab0072140fd95885aa9bb76c1cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:34:57 np0005532048 podman[99862]: 2025-11-22 08:34:57.913604307 +0000 UTC m=+0.166788696 container start 8f1c3010f556c4ab6ae04c1ccc5324e8deb5fab0072140fd95885aa9bb76c1cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:57 np0005532048 podman[99862]: 2025-11-22 08:34:57.940294742 +0000 UTC m=+0.193479161 container attach 8f1c3010f556c4ab6ae04c1ccc5324e8deb5fab0072140fd95885aa9bb76c1cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:34:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v126: 100 pgs: 62 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 22 03:34:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 22 03:34:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:34:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:34:58 np0005532048 ceph-mgr[75315]: [progress WARNING root] Starting Global Recovery Event,62 pgs not in active + clean state
Nov 22 03:34:58 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 22 03:34:58 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 22 03:34:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 22 03:34:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:34:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 22 03:34:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:34:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 22 03:34:58 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 22 03:34:58 np0005532048 ceph-mgr[75315]: [progress INFO root] update: starting ev 4c667ad5-ef8a-472c-bb56-a59e30d41883 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 22 03:34:58 np0005532048 ceph-mgr[75315]: [progress INFO root] complete: finished ev 107996be-0562-4e53-9a3b-0632a523b7b9 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 22 03:34:58 np0005532048 ceph-mgr[75315]: [progress INFO root] Completed event 107996be-0562-4e53-9a3b-0632a523b7b9 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Nov 22 03:34:58 np0005532048 ceph-mgr[75315]: [progress INFO root] complete: finished ev 8626aaee-dda2-4db8-9585-2f192220d96c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 22 03:34:58 np0005532048 ceph-mgr[75315]: [progress INFO root] Completed event 8626aaee-dda2-4db8-9585-2f192220d96c (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Nov 22 03:34:58 np0005532048 ceph-mgr[75315]: [progress INFO root] complete: finished ev a4d53522-32ac-4b7a-94c3-3833a8029eb1 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 22 03:34:58 np0005532048 ceph-mgr[75315]: [progress INFO root] Completed event a4d53522-32ac-4b7a-94c3-3833a8029eb1 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Nov 22 03:34:58 np0005532048 ceph-mgr[75315]: [progress INFO root] complete: finished ev e260ea9b-7ce0-43fc-b566-bc123afe7977 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 22 03:34:58 np0005532048 ceph-mgr[75315]: [progress INFO root] Completed event e260ea9b-7ce0-43fc-b566-bc123afe7977 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Nov 22 03:34:58 np0005532048 ceph-mgr[75315]: [progress INFO root] complete: finished ev 6a226e21-bdc7-42da-aea2-3a842500b321 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 22 03:34:58 np0005532048 ceph-mgr[75315]: [progress INFO root] Completed event 6a226e21-bdc7-42da-aea2-3a842500b321 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 1 seconds
Nov 22 03:34:58 np0005532048 ceph-mgr[75315]: [progress INFO root] complete: finished ev 4c667ad5-ef8a-472c-bb56-a59e30d41883 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 22 03:34:58 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:34:58 np0005532048 ceph-mgr[75315]: [progress INFO root] Completed event 4c667ad5-ef8a-472c-bb56-a59e30d41883 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Nov 22 03:34:58 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 22 03:34:58 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:34:58 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 43 pg[6.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=43 pruub=14.115500450s) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active pruub 83.970695496s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:34:58 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 43 pg[6.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=43 pruub=14.115500450s) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown pruub 83.970695496s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:58 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 43 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=43 pruub=11.585489273s) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active pruub 69.812988281s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:34:58 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 43 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=43 pruub=11.585489273s) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown pruub 69.812988281s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]: {
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:    "0": [
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:        {
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "devices": [
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "/dev/loop3"
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            ],
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "lv_name": "ceph_lv0",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "lv_size": "21470642176",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "name": "ceph_lv0",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "tags": {
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.cluster_name": "ceph",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.crush_device_class": "",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.encrypted": "0",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.osd_id": "0",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.type": "block",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.vdo": "0"
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            },
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "type": "block",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "vg_name": "ceph_vg0"
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:        }
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:    ],
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:    "1": [
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:        {
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "devices": [
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "/dev/loop4"
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            ],
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "lv_name": "ceph_lv1",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "lv_size": "21470642176",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "name": "ceph_lv1",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "tags": {
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.cluster_name": "ceph",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.crush_device_class": "",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.encrypted": "0",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.osd_id": "1",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.type": "block",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.vdo": "0"
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            },
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "type": "block",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "vg_name": "ceph_vg1"
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:        }
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:    ],
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:    "2": [
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:        {
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "devices": [
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "/dev/loop5"
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            ],
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "lv_name": "ceph_lv2",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "lv_size": "21470642176",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "name": "ceph_lv2",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "tags": {
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.cluster_name": "ceph",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.crush_device_class": "",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.encrypted": "0",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.osd_id": "2",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.type": "block",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:                "ceph.vdo": "0"
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            },
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "type": "block",
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:            "vg_name": "ceph_vg2"
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:        }
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]:    ]
Nov 22 03:34:58 np0005532048 intelligent_khayyam[99879]: }
Nov 22 03:34:58 np0005532048 systemd[1]: libpod-8f1c3010f556c4ab6ae04c1ccc5324e8deb5fab0072140fd95885aa9bb76c1cc.scope: Deactivated successfully.
Nov 22 03:34:58 np0005532048 podman[99862]: 2025-11-22 08:34:58.739740376 +0000 UTC m=+0.992924785 container died 8f1c3010f556c4ab6ae04c1ccc5324e8deb5fab0072140fd95885aa9bb76c1cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:34:58 np0005532048 ansible-async_wrapper.py[100034]: Invoked with j786464039649 30 /home/zuul/.ansible/tmp/ansible-tmp-1763800498.270962-36731-265525285185620/AnsiballZ_command.py _
Nov 22 03:34:58 np0005532048 ansible-async_wrapper.py[100042]: Starting module and watcher
Nov 22 03:34:58 np0005532048 ansible-async_wrapper.py[100042]: Start watching 100047 (30)
Nov 22 03:34:58 np0005532048 ansible-async_wrapper.py[100047]: Start module (100047)
Nov 22 03:34:58 np0005532048 ansible-async_wrapper.py[100034]: Return async_wrapper task started.
Nov 22 03:34:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay-800339290f8390a42df40bfd32896dc30c68d6da73077f43e1267f1589e7b4f2-merged.mount: Deactivated successfully.
Nov 22 03:34:58 np0005532048 podman[99862]: 2025-11-22 08:34:58.824826798 +0000 UTC m=+1.078011187 container remove 8f1c3010f556c4ab6ae04c1ccc5324e8deb5fab0072140fd95885aa9bb76c1cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:34:58 np0005532048 systemd[1]: libpod-conmon-8f1c3010f556c4ab6ae04c1ccc5324e8deb5fab0072140fd95885aa9bb76c1cc.scope: Deactivated successfully.
Nov 22 03:34:58 np0005532048 python3[100048]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:58 np0005532048 podman[100071]: 2025-11-22 08:34:58.985213371 +0000 UTC m=+0.065161561 container create dbda1991a7c6b2be8faed3b21be95877bb49a594e2e431a9a6a7ec412962360c (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:59 np0005532048 systemd[1]: Started libpod-conmon-dbda1991a7c6b2be8faed3b21be95877bb49a594e2e431a9a6a7ec412962360c.scope.
Nov 22 03:34:59 np0005532048 podman[100071]: 2025-11-22 08:34:58.959734525 +0000 UTC m=+0.039682735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:34:59 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e005fc20c159edf673792fac3401067b5ac754e651ba93fab8c30c4633506c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e005fc20c159edf673792fac3401067b5ac754e651ba93fab8c30c4633506c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:59 np0005532048 podman[100071]: 2025-11-22 08:34:59.078669363 +0000 UTC m=+0.158617573 container init dbda1991a7c6b2be8faed3b21be95877bb49a594e2e431a9a6a7ec412962360c (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:34:59 np0005532048 podman[100071]: 2025-11-22 08:34:59.087797059 +0000 UTC m=+0.167745249 container start dbda1991a7c6b2be8faed3b21be95877bb49a594e2e431a9a6a7ec412962360c (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 03:34:59 np0005532048 podman[100071]: 2025-11-22 08:34:59.092989013 +0000 UTC m=+0.172937203 container attach dbda1991a7c6b2be8faed3b21be95877bb49a594e2e431a9a6a7ec412962360c (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:59 np0005532048 podman[100229]: 2025-11-22 08:34:59.465871267 +0000 UTC m=+0.045475732 container create 49c5bab688a7960108abbb1416987fefd675524e4935a94f0d9d180977aaf078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:34:59 np0005532048 systemd[1]: Started libpod-conmon-49c5bab688a7960108abbb1416987fefd675524e4935a94f0d9d180977aaf078.scope.
Nov 22 03:34:59 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:59 np0005532048 podman[100229]: 2025-11-22 08:34:59.444452618 +0000 UTC m=+0.024057133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:59 np0005532048 podman[100229]: 2025-11-22 08:34:59.550745185 +0000 UTC m=+0.130349670 container init 49c5bab688a7960108abbb1416987fefd675524e4935a94f0d9d180977aaf078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:34:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 22 03:34:59 np0005532048 podman[100229]: 2025-11-22 08:34:59.555362904 +0000 UTC m=+0.134967369 container start 49c5bab688a7960108abbb1416987fefd675524e4935a94f0d9d180977aaf078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:59 np0005532048 nervous_allen[100245]: 167 167
Nov 22 03:34:59 np0005532048 systemd[1]: libpod-49c5bab688a7960108abbb1416987fefd675524e4935a94f0d9d180977aaf078.scope: Deactivated successfully.
Nov 22 03:34:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 22 03:34:59 np0005532048 podman[100229]: 2025-11-22 08:34:59.562491954 +0000 UTC m=+0.142096439 container attach 49c5bab688a7960108abbb1416987fefd675524e4935a94f0d9d180977aaf078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:34:59 np0005532048 podman[100229]: 2025-11-22 08:34:59.562811002 +0000 UTC m=+0.142415477 container died 49c5bab688a7960108abbb1416987fefd675524e4935a94f0d9d180977aaf078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 03:34:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 22 03:34:59 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:34:59 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 22 03:34:59 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=25/26 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=27/28 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.9( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=43/44 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=25/25 les/c/f=26/26/0 sis=43) [2] r=0 lpr=43 pi=[25,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.0( empty local-lis/les=43/44 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=27/27 les/c/f=28/28/0 sis=43) [0] r=0 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:34:59 np0005532048 systemd[1]: var-lib-containers-storage-overlay-818ffcc7da41329786d8e98eb21df10b874338d065891eb8bbb2af30bb7e6f2d-merged.mount: Deactivated successfully.
Nov 22 03:34:59 np0005532048 podman[100229]: 2025-11-22 08:34:59.621210089 +0000 UTC m=+0.200814554 container remove 49c5bab688a7960108abbb1416987fefd675524e4935a94f0d9d180977aaf078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:34:59 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14256 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:34:59 np0005532048 vibrant_grothendieck[100138]: 
Nov 22 03:34:59 np0005532048 vibrant_grothendieck[100138]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 22 03:34:59 np0005532048 systemd[1]: libpod-conmon-49c5bab688a7960108abbb1416987fefd675524e4935a94f0d9d180977aaf078.scope: Deactivated successfully.
Nov 22 03:34:59 np0005532048 systemd[1]: libpod-dbda1991a7c6b2be8faed3b21be95877bb49a594e2e431a9a6a7ec412962360c.scope: Deactivated successfully.
Nov 22 03:34:59 np0005532048 podman[100071]: 2025-11-22 08:34:59.645915367 +0000 UTC m=+0.725863557 container died dbda1991a7c6b2be8faed3b21be95877bb49a594e2e431a9a6a7ec412962360c (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:59 np0005532048 systemd[1]: var-lib-containers-storage-overlay-13e005fc20c159edf673792fac3401067b5ac754e651ba93fab8c30c4633506c-merged.mount: Deactivated successfully.
Nov 22 03:34:59 np0005532048 podman[100071]: 2025-11-22 08:34:59.696800866 +0000 UTC m=+0.776749056 container remove dbda1991a7c6b2be8faed3b21be95877bb49a594e2e431a9a6a7ec412962360c (image=quay.io/ceph/ceph:v18, name=vibrant_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 03:34:59 np0005532048 systemd[1]: libpod-conmon-dbda1991a7c6b2be8faed3b21be95877bb49a594e2e431a9a6a7ec412962360c.scope: Deactivated successfully.
Nov 22 03:34:59 np0005532048 ansible-async_wrapper.py[100047]: Module complete (100047)
Nov 22 03:34:59 np0005532048 podman[100281]: 2025-11-22 08:34:59.779308478 +0000 UTC m=+0.042875300 container create 41535a73e51e79bda537187945b551efc344079aa32c32930916dcc1f1e5a495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_euclid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:34:59 np0005532048 systemd[1]: Started libpod-conmon-41535a73e51e79bda537187945b551efc344079aa32c32930916dcc1f1e5a495.scope.
Nov 22 03:34:59 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:34:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3174070c901e7085611ae1075e6841a6e001862595c5bc2c9ade30788dd8f3dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:59 np0005532048 podman[100281]: 2025-11-22 08:34:59.758735008 +0000 UTC m=+0.022301860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3174070c901e7085611ae1075e6841a6e001862595c5bc2c9ade30788dd8f3dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3174070c901e7085611ae1075e6841a6e001862595c5bc2c9ade30788dd8f3dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3174070c901e7085611ae1075e6841a6e001862595c5bc2c9ade30788dd8f3dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:59 np0005532048 podman[100281]: 2025-11-22 08:34:59.866221234 +0000 UTC m=+0.129788056 container init 41535a73e51e79bda537187945b551efc344079aa32c32930916dcc1f1e5a495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:59 np0005532048 podman[100281]: 2025-11-22 08:34:59.873779574 +0000 UTC m=+0.137346396 container start 41535a73e51e79bda537187945b551efc344079aa32c32930916dcc1f1e5a495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 03:34:59 np0005532048 podman[100281]: 2025-11-22 08:34:59.880015332 +0000 UTC m=+0.143582154 container attach 41535a73e51e79bda537187945b551efc344079aa32c32930916dcc1f1e5a495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_euclid, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:35:00 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 22 03:35:00 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 22 03:35:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 22 03:35:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v129: 146 pgs: 77 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:35:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:35:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 22 03:35:00 np0005532048 python3[100351]: ansible-ansible.legacy.async_status Invoked with jid=j786464039649.100034 mode=status _async_dir=/root/.ansible_async
Nov 22 03:35:00 np0005532048 python3[100400]: ansible-ansible.legacy.async_status Invoked with jid=j786464039649.100034 mode=cleanup _async_dir=/root/.ansible_async
Nov 22 03:35:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 22 03:35:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:35:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 22 03:35:00 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 22 03:35:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:35:00 np0005532048 bold_euclid[100304]: {
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:        "osd_id": 1,
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:        "type": "bluestore"
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:    },
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:        "osd_id": 0,
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:        "type": "bluestore"
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:    },
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:        "osd_id": 2,
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:        "type": "bluestore"
Nov 22 03:35:00 np0005532048 bold_euclid[100304]:    }
Nov 22 03:35:00 np0005532048 bold_euclid[100304]: }
Nov 22 03:35:00 np0005532048 systemd[1]: libpod-41535a73e51e79bda537187945b551efc344079aa32c32930916dcc1f1e5a495.scope: Deactivated successfully.
Nov 22 03:35:00 np0005532048 podman[100281]: 2025-11-22 08:35:00.887735687 +0000 UTC m=+1.151302519 container died 41535a73e51e79bda537187945b551efc344079aa32c32930916dcc1f1e5a495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_euclid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:35:00 np0005532048 systemd[1]: libpod-41535a73e51e79bda537187945b551efc344079aa32c32930916dcc1f1e5a495.scope: Consumed 1.011s CPU time.
Nov 22 03:35:00 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:35:00 np0005532048 systemd[1]: var-lib-containers-storage-overlay-3174070c901e7085611ae1075e6841a6e001862595c5bc2c9ade30788dd8f3dc-merged.mount: Deactivated successfully.
Nov 22 03:35:00 np0005532048 podman[100281]: 2025-11-22 08:35:00.973149978 +0000 UTC m=+1.236716800 container remove 41535a73e51e79bda537187945b551efc344079aa32c32930916dcc1f1e5a495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_euclid, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:35:00 np0005532048 systemd[1]: libpod-conmon-41535a73e51e79bda537187945b551efc344079aa32c32930916dcc1f1e5a495.scope: Deactivated successfully.
Nov 22 03:35:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:35:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:35:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:01 np0005532048 ceph-mgr[75315]: [progress INFO root] update: starting ev 723e348a-aeab-496b-8d1d-99140140d56c (Updating rgw.rgw deployment (+1 -> 1))
Nov 22 03:35:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.qkpyxa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 22 03:35:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.qkpyxa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 22 03:35:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.qkpyxa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 22 03:35:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 22 03:35:01 np0005532048 python3[100454]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:35:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:35:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:35:01 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.qkpyxa on compute-0
Nov 22 03:35:01 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.qkpyxa on compute-0
Nov 22 03:35:01 np0005532048 podman[100466]: 2025-11-22 08:35:01.111698851 +0000 UTC m=+0.044704043 container create cf80fff64b1d87c2ebe61426f048ea7a4e38c7725965779b377f49715178436f (image=quay.io/ceph/ceph:v18, name=interesting_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:35:01 np0005532048 systemd[1]: Started libpod-conmon-cf80fff64b1d87c2ebe61426f048ea7a4e38c7725965779b377f49715178436f.scope.
Nov 22 03:35:01 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:01 np0005532048 podman[100466]: 2025-11-22 08:35:01.0935305 +0000 UTC m=+0.026535712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:35:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf42a2039a6c43b512c913fd1f3f156060afcf1ffc33c708d599b835f2cd25ab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf42a2039a6c43b512c913fd1f3f156060afcf1ffc33c708d599b835f2cd25ab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:01 np0005532048 podman[100466]: 2025-11-22 08:35:01.206042664 +0000 UTC m=+0.139047876 container init cf80fff64b1d87c2ebe61426f048ea7a4e38c7725965779b377f49715178436f (image=quay.io/ceph/ceph:v18, name=interesting_boyd, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:35:01 np0005532048 podman[100466]: 2025-11-22 08:35:01.215153641 +0000 UTC m=+0.148158833 container start cf80fff64b1d87c2ebe61426f048ea7a4e38c7725965779b377f49715178436f (image=quay.io/ceph/ceph:v18, name=interesting_boyd, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:35:01 np0005532048 podman[100466]: 2025-11-22 08:35:01.223443688 +0000 UTC m=+0.156448930 container attach cf80fff64b1d87c2ebe61426f048ea7a4e38c7725965779b377f49715178436f (image=quay.io/ceph/ceph:v18, name=interesting_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:35:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:35:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.qkpyxa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 22 03:35:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.qkpyxa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 22 03:35:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:01 np0005532048 podman[100646]: 2025-11-22 08:35:01.686378902 +0000 UTC m=+0.044325304 container create c9de314bba52470ac60aba84e9f29366d4022d30f11c24b2f31353359e4c75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_burnell, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:35:01 np0005532048 systemd[1]: Started libpod-conmon-c9de314bba52470ac60aba84e9f29366d4022d30f11c24b2f31353359e4c75dc.scope.
Nov 22 03:35:01 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:01 np0005532048 podman[100646]: 2025-11-22 08:35:01.661063171 +0000 UTC m=+0.019009563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:01 np0005532048 podman[100646]: 2025-11-22 08:35:01.75904504 +0000 UTC m=+0.116991432 container init c9de314bba52470ac60aba84e9f29366d4022d30f11c24b2f31353359e4c75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_burnell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:35:01 np0005532048 podman[100646]: 2025-11-22 08:35:01.764708424 +0000 UTC m=+0.122654796 container start c9de314bba52470ac60aba84e9f29366d4022d30f11c24b2f31353359e4c75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_burnell, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:35:01 np0005532048 relaxed_burnell[100663]: 167 167
Nov 22 03:35:01 np0005532048 systemd[1]: libpod-c9de314bba52470ac60aba84e9f29366d4022d30f11c24b2f31353359e4c75dc.scope: Deactivated successfully.
Nov 22 03:35:01 np0005532048 conmon[100663]: conmon c9de314bba52470ac60a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9de314bba52470ac60aba84e9f29366d4022d30f11c24b2f31353359e4c75dc.scope/container/memory.events
Nov 22 03:35:01 np0005532048 podman[100646]: 2025-11-22 08:35:01.771743462 +0000 UTC m=+0.129689854 container attach c9de314bba52470ac60aba84e9f29366d4022d30f11c24b2f31353359e4c75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:35:01 np0005532048 podman[100646]: 2025-11-22 08:35:01.772172093 +0000 UTC m=+0.130118465 container died c9de314bba52470ac60aba84e9f29366d4022d30f11c24b2f31353359e4c75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 03:35:01 np0005532048 systemd[1]: var-lib-containers-storage-overlay-87a6dc56adbc9809e163c72191dde2a8949e4dbe4775571d27796586c2ecd641-merged.mount: Deactivated successfully.
Nov 22 03:35:01 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:35:01 np0005532048 interesting_boyd[100515]: 
Nov 22 03:35:01 np0005532048 interesting_boyd[100515]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 22 03:35:01 np0005532048 systemd[1]: libpod-cf80fff64b1d87c2ebe61426f048ea7a4e38c7725965779b377f49715178436f.scope: Deactivated successfully.
Nov 22 03:35:01 np0005532048 podman[100466]: 2025-11-22 08:35:01.831261837 +0000 UTC m=+0.764267029 container died cf80fff64b1d87c2ebe61426f048ea7a4e38c7725965779b377f49715178436f (image=quay.io/ceph/ceph:v18, name=interesting_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:35:01 np0005532048 podman[100646]: 2025-11-22 08:35:01.856061166 +0000 UTC m=+0.214007538 container remove c9de314bba52470ac60aba84e9f29366d4022d30f11c24b2f31353359e4c75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_burnell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 22 03:35:01 np0005532048 systemd[1]: var-lib-containers-storage-overlay-bf42a2039a6c43b512c913fd1f3f156060afcf1ffc33c708d599b835f2cd25ab-merged.mount: Deactivated successfully.
Nov 22 03:35:01 np0005532048 podman[100466]: 2025-11-22 08:35:01.910844479 +0000 UTC m=+0.843849671 container remove cf80fff64b1d87c2ebe61426f048ea7a4e38c7725965779b377f49715178436f (image=quay.io/ceph/ceph:v18, name=interesting_boyd, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 03:35:01 np0005532048 systemd[1]: libpod-conmon-c9de314bba52470ac60aba84e9f29366d4022d30f11c24b2f31353359e4c75dc.scope: Deactivated successfully.
Nov 22 03:35:01 np0005532048 systemd[1]: libpod-conmon-cf80fff64b1d87c2ebe61426f048ea7a4e38c7725965779b377f49715178436f.scope: Deactivated successfully.
Nov 22 03:35:01 np0005532048 systemd[1]: Reloading.
Nov 22 03:35:02 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:35:02 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:35:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v131: 177 pgs: 77 unknown, 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:02 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 22 03:35:02 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 22 03:35:02 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 45 pg[7.0( empty local-lis/les=29/30 n=0 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=12.852910995s) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active pruub 79.981178284s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 45 pg[7.0( empty local-lis/les=29/30 n=0 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=45 pruub=12.852910995s) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown pruub 79.981178284s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 22 03:35:02 np0005532048 systemd[1]: Reloading.
Nov 22 03:35:02 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:35:02 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:35:02 np0005532048 systemd[1]: Starting Ceph rgw.rgw.compute-0.qkpyxa for 34829716-a12c-57a6-8915-c1aa615c9d8a...
Nov 22 03:35:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 22 03:35:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 22 03:35:02 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.1d( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-mon[75021]: Deploying daemon rgw.rgw.compute-0.qkpyxa on compute-0
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.1e( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.12( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.10( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.14( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.b( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.17( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.7( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.d( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.19( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=29/30 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.1e( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.1d( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.12( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.14( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.b( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.17( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.10( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.0( empty local-lis/les=45/46 n=0 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.7( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.d( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.19( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=29/29 les/c/f=30/30/0 sis=45) [1] r=0 lpr=45 pi=[29,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:02 np0005532048 python3[100818]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:35:02 np0005532048 podman[100845]: 2025-11-22 08:35:02.814963641 +0000 UTC m=+0.050957592 container create 0076b4ead36ccb2745f4ecd112a89f36fb639bf8bc815af3f6b3e794a881d2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-rgw-rgw-compute-0-qkpyxa, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 03:35:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797b17fcfad7ab57eb6ac0423071f37de93dde64fefb3222ba1c1b95346e9ee4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797b17fcfad7ab57eb6ac0423071f37de93dde64fefb3222ba1c1b95346e9ee4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797b17fcfad7ab57eb6ac0423071f37de93dde64fefb3222ba1c1b95346e9ee4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797b17fcfad7ab57eb6ac0423071f37de93dde64fefb3222ba1c1b95346e9ee4/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.qkpyxa supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:02 np0005532048 podman[100845]: 2025-11-22 08:35:02.789106636 +0000 UTC m=+0.025100607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:02 np0005532048 podman[100859]: 2025-11-22 08:35:02.891139272 +0000 UTC m=+0.059747462 container create f1fffe4e46d6544fbc0cd726861e4a0bbd3c214681ab8da0e4add637f05bec19 (image=quay.io/ceph/ceph:v18, name=stupefied_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 03:35:02 np0005532048 podman[100845]: 2025-11-22 08:35:02.903531387 +0000 UTC m=+0.139525358 container init 0076b4ead36ccb2745f4ecd112a89f36fb639bf8bc815af3f6b3e794a881d2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-rgw-rgw-compute-0-qkpyxa, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:35:02 np0005532048 podman[100845]: 2025-11-22 08:35:02.917906598 +0000 UTC m=+0.153900549 container start 0076b4ead36ccb2745f4ecd112a89f36fb639bf8bc815af3f6b3e794a881d2c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-rgw-rgw-compute-0-qkpyxa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:35:02 np0005532048 bash[100845]: 0076b4ead36ccb2745f4ecd112a89f36fb639bf8bc815af3f6b3e794a881d2c9
Nov 22 03:35:02 np0005532048 podman[100859]: 2025-11-22 08:35:02.85737062 +0000 UTC m=+0.025978820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:35:02 np0005532048 systemd[1]: Started libpod-conmon-f1fffe4e46d6544fbc0cd726861e4a0bbd3c214681ab8da0e4add637f05bec19.scope.
Nov 22 03:35:02 np0005532048 systemd[1]: Started Ceph rgw.rgw.compute-0.qkpyxa for 34829716-a12c-57a6-8915-c1aa615c9d8a.
Nov 22 03:35:02 np0005532048 radosgw[100878]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:35:02 np0005532048 radosgw[100878]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 22 03:35:02 np0005532048 radosgw[100878]: framework: beast
Nov 22 03:35:02 np0005532048 radosgw[100878]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 22 03:35:02 np0005532048 radosgw[100878]: init_numa not setting numa affinity
Nov 22 03:35:03 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ac7c0f0513c66f15a37de58d53e1bd6929af7b243d6ba81a37c0088796eda8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ac7c0f0513c66f15a37de58d53e1bd6929af7b243d6ba81a37c0088796eda8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:35:03 np0005532048 podman[100859]: 2025-11-22 08:35:03.022472473 +0000 UTC m=+0.191080673 container init f1fffe4e46d6544fbc0cd726861e4a0bbd3c214681ab8da0e4add637f05bec19 (image=quay.io/ceph/ceph:v18, name=stupefied_hypatia, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 03:35:03 np0005532048 podman[100859]: 2025-11-22 08:35:03.030434123 +0000 UTC m=+0.199042293 container start f1fffe4e46d6544fbc0cd726861e4a0bbd3c214681ab8da0e4add637f05bec19 (image=quay.io/ceph/ceph:v18, name=stupefied_hypatia, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 22 03:35:03 np0005532048 podman[100859]: 2025-11-22 08:35:03.036400884 +0000 UTC m=+0.205009054 container attach f1fffe4e46d6544fbc0cd726861e4a0bbd3c214681ab8da0e4add637f05bec19 (image=quay.io/ceph/ceph:v18, name=stupefied_hypatia, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:03 np0005532048 ceph-mgr[75315]: [progress INFO root] complete: finished ev 723e348a-aeab-496b-8d1d-99140140d56c (Updating rgw.rgw deployment (+1 -> 1))
Nov 22 03:35:03 np0005532048 ceph-mgr[75315]: [progress INFO root] Completed event 723e348a-aeab-496b-8d1d-99140140d56c (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Nov 22 03:35:03 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Nov 22 03:35:03 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 22 03:35:03 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:03 np0005532048 ceph-mgr[75315]: [progress INFO root] update: starting ev ca259372-da21-4e8d-b668-47917aeef3f3 (Updating mds.cephfs deployment (+1 -> 1))
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.myffln", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.myffln", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.myffln", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 22 03:35:03 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:35:03 np0005532048 ceph-mgr[75315]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.myffln on compute-0
Nov 22 03:35:03 np0005532048 ceph-mgr[75315]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.myffln on compute-0
Nov 22 03:35:03 np0005532048 ceph-mgr[75315]: [progress INFO root] Writing back 10 completed events
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:03 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:35:03 np0005532048 stupefied_hypatia[100889]: 
Nov 22 03:35:03 np0005532048 stupefied_hypatia[100889]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 22 03:35:03 np0005532048 systemd[1]: libpod-f1fffe4e46d6544fbc0cd726861e4a0bbd3c214681ab8da0e4add637f05bec19.scope: Deactivated successfully.
Nov 22 03:35:03 np0005532048 podman[100859]: 2025-11-22 08:35:03.641725365 +0000 UTC m=+0.810333545 container died f1fffe4e46d6544fbc0cd726861e4a0bbd3c214681ab8da0e4add637f05bec19 (image=quay.io/ceph/ceph:v18, name=stupefied_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.myffln", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.myffln", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 22 03:35:03 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:03 np0005532048 ansible-async_wrapper.py[100042]: Done in kid B.
Nov 22 03:35:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e7ac7c0f0513c66f15a37de58d53e1bd6929af7b243d6ba81a37c0088796eda8-merged.mount: Deactivated successfully.
Nov 22 03:35:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 22 03:35:04 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 22 03:35:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v133: 177 pgs: 31 unknown, 146 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:04 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 22 03:35:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 22 03:35:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 22 03:35:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1812802162' entity='client.rgw.rgw.compute-0.qkpyxa' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 22 03:35:04 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 47 pg[8.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 22 03:35:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 22 03:35:04 np0005532048 podman[100859]: 2025-11-22 08:35:04.157010664 +0000 UTC m=+1.325618834 container remove f1fffe4e46d6544fbc0cd726861e4a0bbd3c214681ab8da0e4add637f05bec19 (image=quay.io/ceph/ceph:v18, name=stupefied_hypatia, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:35:04 np0005532048 systemd[1]: libpod-conmon-f1fffe4e46d6544fbc0cd726861e4a0bbd3c214681ab8da0e4add637f05bec19.scope: Deactivated successfully.
Nov 22 03:35:04 np0005532048 podman[101107]: 2025-11-22 08:35:04.235967981 +0000 UTC m=+0.579366724 container create f85097e6fdae7559701e2c74e117bcd9207ba7bbc65943ac53da8b1d41755f0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_galileo, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:35:04 np0005532048 systemd[1]: Started libpod-conmon-f85097e6fdae7559701e2c74e117bcd9207ba7bbc65943ac53da8b1d41755f0c.scope.
Nov 22 03:35:04 np0005532048 podman[101107]: 2025-11-22 08:35:04.207534365 +0000 UTC m=+0.550933138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:04 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:04 np0005532048 podman[101107]: 2025-11-22 08:35:04.345539915 +0000 UTC m=+0.688938748 container init f85097e6fdae7559701e2c74e117bcd9207ba7bbc65943ac53da8b1d41755f0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:35:04 np0005532048 podman[101107]: 2025-11-22 08:35:04.356836285 +0000 UTC m=+0.700235068 container start f85097e6fdae7559701e2c74e117bcd9207ba7bbc65943ac53da8b1d41755f0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 03:35:04 np0005532048 charming_galileo[101133]: 167 167
Nov 22 03:35:04 np0005532048 systemd[1]: libpod-f85097e6fdae7559701e2c74e117bcd9207ba7bbc65943ac53da8b1d41755f0c.scope: Deactivated successfully.
Nov 22 03:35:04 np0005532048 podman[101107]: 2025-11-22 08:35:04.383406865 +0000 UTC m=+0.726805638 container attach f85097e6fdae7559701e2c74e117bcd9207ba7bbc65943ac53da8b1d41755f0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_galileo, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:35:04 np0005532048 podman[101107]: 2025-11-22 08:35:04.3886272 +0000 UTC m=+0.732026013 container died f85097e6fdae7559701e2c74e117bcd9207ba7bbc65943ac53da8b1d41755f0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_galileo, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:35:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay-531728ca87564ad1c48a2eaeb8b166ae9aa99b4b8cfb35583c1f549c259d9d41-merged.mount: Deactivated successfully.
Nov 22 03:35:04 np0005532048 podman[101107]: 2025-11-22 08:35:04.503698345 +0000 UTC m=+0.847097088 container remove f85097e6fdae7559701e2c74e117bcd9207ba7bbc65943ac53da8b1d41755f0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_galileo, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:35:04 np0005532048 systemd[1]: libpod-conmon-f85097e6fdae7559701e2c74e117bcd9207ba7bbc65943ac53da8b1d41755f0c.scope: Deactivated successfully.
Nov 22 03:35:04 np0005532048 systemd[1]: Reloading.
Nov 22 03:35:04 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:35:04 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:35:04 np0005532048 ceph-mon[75021]: Saving service rgw.rgw spec with placement compute-0
Nov 22 03:35:04 np0005532048 ceph-mon[75021]: Deploying daemon mds.cephfs.compute-0.myffln on compute-0
Nov 22 03:35:04 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1812802162' entity='client.rgw.rgw.compute-0.qkpyxa' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 22 03:35:04 np0005532048 systemd[1]: Reloading.
Nov 22 03:35:04 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:35:05 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:35:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 22 03:35:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1812802162' entity='client.rgw.rgw.compute-0.qkpyxa' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 22 03:35:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 22 03:35:05 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 22 03:35:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 48 pg[8.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:05 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 22 03:35:05 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 22 03:35:05 np0005532048 systemd[1]: Starting Ceph mds.cephfs.compute-0.myffln for 34829716-a12c-57a6-8915-c1aa615c9d8a...
Nov 22 03:35:05 np0005532048 python3[101258]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:35:05 np0005532048 podman[101300]: 2025-11-22 08:35:05.417640111 +0000 UTC m=+0.055481159 container create ccad90a6e835d8ceb71f132db9e32e21390e00e51ed870711f7bc24951676f8e (image=quay.io/ceph/ceph:v18, name=optimistic_swirles, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:35:05 np0005532048 podman[101317]: 2025-11-22 08:35:05.443702871 +0000 UTC m=+0.061022542 container create db6616763703d165c27ba032d180e45b1e3696de800d958463060191928fdc28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mds-cephfs-compute-0-myffln, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:35:05 np0005532048 podman[101300]: 2025-11-22 08:35:05.384793831 +0000 UTC m=+0.022634889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:35:05 np0005532048 systemd[1]: Started libpod-conmon-ccad90a6e835d8ceb71f132db9e32e21390e00e51ed870711f7bc24951676f8e.scope.
Nov 22 03:35:05 np0005532048 podman[101317]: 2025-11-22 08:35:05.40327652 +0000 UTC m=+0.020596211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449dd72778b659a1126f5d794464a9f7868bbda1516c2781639d34024e873d75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449dd72778b659a1126f5d794464a9f7868bbda1516c2781639d34024e873d75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449dd72778b659a1126f5d794464a9f7868bbda1516c2781639d34024e873d75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449dd72778b659a1126f5d794464a9f7868bbda1516c2781639d34024e873d75/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.myffln supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:05 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c3d71d0325e8701ade781e74850517beedeaae394fd293311a9b5906bc0c73/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c3d71d0325e8701ade781e74850517beedeaae394fd293311a9b5906bc0c73/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:05 np0005532048 podman[101317]: 2025-11-22 08:35:05.541222289 +0000 UTC m=+0.158541980 container init db6616763703d165c27ba032d180e45b1e3696de800d958463060191928fdc28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mds-cephfs-compute-0-myffln, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:35:05 np0005532048 podman[101317]: 2025-11-22 08:35:05.546413892 +0000 UTC m=+0.163733563 container start db6616763703d165c27ba032d180e45b1e3696de800d958463060191928fdc28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mds-cephfs-compute-0-myffln, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:35:05 np0005532048 bash[101317]: db6616763703d165c27ba032d180e45b1e3696de800d958463060191928fdc28
Nov 22 03:35:05 np0005532048 systemd[1]: Started Ceph mds.cephfs.compute-0.myffln for 34829716-a12c-57a6-8915-c1aa615c9d8a.
Nov 22 03:35:05 np0005532048 ceph-mds[101348]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:35:05 np0005532048 ceph-mds[101348]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 22 03:35:05 np0005532048 ceph-mds[101348]: main not setting numa affinity
Nov 22 03:35:05 np0005532048 ceph-mds[101348]: pidfile_write: ignore empty --pid-file
Nov 22 03:35:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mds-cephfs-compute-0-myffln[101342]: starting mds.cephfs.compute-0.myffln at 
Nov 22 03:35:05 np0005532048 podman[101300]: 2025-11-22 08:35:05.593954163 +0000 UTC m=+0.231795211 container init ccad90a6e835d8ceb71f132db9e32e21390e00e51ed870711f7bc24951676f8e (image=quay.io/ceph/ceph:v18, name=optimistic_swirles, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:35:05 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln Updating MDS map to version 2 from mon.0
Nov 22 03:35:05 np0005532048 podman[101300]: 2025-11-22 08:35:05.601852421 +0000 UTC m=+0.239693459 container start ccad90a6e835d8ceb71f132db9e32e21390e00e51ed870711f7bc24951676f8e (image=quay.io/ceph/ceph:v18, name=optimistic_swirles, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:35:05 np0005532048 podman[101300]: 2025-11-22 08:35:05.614179714 +0000 UTC m=+0.252020782 container attach ccad90a6e835d8ceb71f132db9e32e21390e00e51ed870711f7bc24951676f8e (image=quay.io/ceph/ceph:v18, name=optimistic_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:35:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:35:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:35:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 22 03:35:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:05 np0005532048 ceph-mgr[75315]: [progress INFO root] complete: finished ev ca259372-da21-4e8d-b668-47917aeef3f3 (Updating mds.cephfs deployment (+1 -> 1))
Nov 22 03:35:05 np0005532048 ceph-mgr[75315]: [progress INFO root] Completed event ca259372-da21-4e8d-b668-47917aeef3f3 (Updating mds.cephfs deployment (+1 -> 1)) in 3 seconds
Nov 22 03:35:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 22 03:35:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 22 03:35:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:06 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 22 03:35:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v136: 178 pgs: 32 unknown, 146 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1812802162' entity='client.rgw.rgw.compute-0.qkpyxa' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:06 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).mds e3 new map
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-22T08:34:48.256238+0000#012modified#0112025-11-22T08:34:48.256284+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.myffln{-1:14265} state up:standby seq 1 addr [v2:192.168.122.100:6814/3475097318,v1:192.168.122.100:6815/3475097318] compat {c=[1],r=[1],i=[7ff]}]
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln Updating MDS map to version 3 from mon.0
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln Monitors have assigned me to become a standby.
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3475097318,v1:192.168.122.100:6815/3475097318] up:boot
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/3475097318,v1:192.168.122.100:6815/3475097318] as mds.0
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.myffln assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.myffln"} v 0) v1
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.myffln"}]: dispatch
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).mds e3 all = 0
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).mds e4 new map
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-22T08:34:48.256238+0000#012modified#0112025-11-22T08:35:06.109328+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14265}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.myffln{0:14265} state up:creating seq 1 addr [v2:192.168.122.100:6814/3475097318,v1:192.168.122.100:6815/3475097318] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln Updating MDS map to version 4 from mon.0
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.0.cache creating system inode with ino:0x1
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.0.cache creating system inode with ino:0x100
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.0.cache creating system inode with ino:0x600
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.0.cache creating system inode with ino:0x601
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.0.cache creating system inode with ino:0x602
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.0.cache creating system inode with ino:0x603
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.0.cache creating system inode with ino:0x604
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.0.cache creating system inode with ino:0x605
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.0.cache creating system inode with ino:0x606
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.0.cache creating system inode with ino:0x607
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.0.cache creating system inode with ino:0x608
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.0.cache creating system inode with ino:0x609
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.myffln=up:creating}
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1812802162' entity='client.rgw.rgw.compute-0.qkpyxa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 22 03:35:06 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 49 pg[9.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:06 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:35:06 np0005532048 optimistic_swirles[101341]: 
Nov 22 03:35:06 np0005532048 optimistic_swirles[101341]: [{"container_id": "fe02bdcae5c3", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.36%", "created": "2025-11-22T08:33:01.596705Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-22T08:33:01.703533Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T08:34:53.710568Z", "memory_usage": 11618222, "ports": [], "service_name": "crash", "started": "2025-11-22T08:33:01.340300Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-34829716-a12c-57a6-8915-c1aa615c9d8a@crash.compute-0", "version": "18.2.7"}, {"daemon_id": "cephfs.compute-0.myffln", "daemon_name": "mds.cephfs.compute-0.myffln", "daemon_type": "mds", "events": ["2025-11-22T08:35:05.667817Z daemon:mds.cephfs.compute-0.myffln [INFO] \"Deployed mds.cephfs.compute-0.myffln on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "bcf277aed6bb", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "21.82%", "created": "2025-11-22T08:31:00.569513Z", "daemon_id": "compute-0.ldbkey", "daemon_name": "mgr.compute-0.ldbkey", "daemon_type": "mgr", "events": ["2025-11-22T08:33:08.831971Z daemon:mgr.compute-0.ldbkey [INFO] \"Reconfigured mgr.compute-0.ldbkey on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T08:34:53.710503Z", "memory_usage": 548719820, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-22T08:31:00.441549Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-34829716-a12c-57a6-8915-c1aa615c9d8a@mgr.compute-0.ldbkey", "version": "18.2.7"}, {"container_id": "621c1e8cf040", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.53%", "created": "2025-11-22T08:30:54.743374Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-22T08:33:07.175203Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T08:34:53.710408Z", "memory_request": 2147483648, "memory_usage": 38545653, "ports": [], "service_name": "mon", "started": "2025-11-22T08:30:57.443389Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-34829716-a12c-57a6-8915-c1aa615c9d8a@mon.compute-0", "version": "18.2.7"}, {"container_id": "b5e05752dfef", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.40%", "created": "2025-11-22T08:33:47.219786Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-22T08:33:47.282750Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T08:34:53.710631Z", "memory_request": 4294967296, "memory_usage": 57073991, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-22T08:33:47.118655Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-34829716-a12c-57a6-8915-c1aa615c9d8a@osd.0", "version": "18.2.7"}, {"container_id": "74e38918ab88", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.54%", "created": "2025-11-22T08:33:52.620913Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-22T08:33:52.725478Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T08:34:53.710693Z", "memory_request": 4294967296, "memory_usage": 59758346, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-22T08:33:52.420998Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-34829716-a12c-57a6-8915-c1aa615c9d8a@osd.1", "version": "18.2.7"}, {"container_id": "75945265ccc4", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.62%", "created": "2025-11-22T08:33:58.911062Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-22T08:33:59.075588Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T08:34:53.710771Z", "memory_request": 4294967296, "memory_usage": 55186554, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-22T08:33:58.736500Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-34829716-a12c-57a6-8915-c1aa615c9d8a@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.qkpyxa", "daemon_name": "rgw.rgw.compute-0.qkpyxa", "daemon_type": "rgw", "events": ["2025-11-22T08:35:03.032497Z daemon:rgw.rgw.compute-0.qkpyxa [INFO] \"Deployed rgw.rgw.compute-0.qkpyxa on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Nov 22 03:35:06 np0005532048 systemd[1]: libpod-ccad90a6e835d8ceb71f132db9e32e21390e00e51ed870711f7bc24951676f8e.scope: Deactivated successfully.
Nov 22 03:35:06 np0005532048 podman[101300]: 2025-11-22 08:35:06.190936724 +0000 UTC m=+0.828777762 container died ccad90a6e835d8ceb71f132db9e32e21390e00e51ed870711f7bc24951676f8e (image=quay.io/ceph/ceph:v18, name=optimistic_swirles, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:35:06 np0005532048 ceph-mds[101348]: mds.0.4 creating_done
Nov 22 03:35:06 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.myffln is now active in filesystem cephfs as rank 0
Nov 22 03:35:06 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c9c3d71d0325e8701ade781e74850517beedeaae394fd293311a9b5906bc0c73-merged.mount: Deactivated successfully.
Nov 22 03:35:06 np0005532048 podman[101300]: 2025-11-22 08:35:06.36194355 +0000 UTC m=+0.999784588 container remove ccad90a6e835d8ceb71f132db9e32e21390e00e51ed870711f7bc24951676f8e (image=quay.io/ceph/ceph:v18, name=optimistic_swirles, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:35:06 np0005532048 systemd[1]: libpod-conmon-ccad90a6e835d8ceb71f132db9e32e21390e00e51ed870711f7bc24951676f8e.scope: Deactivated successfully.
Nov 22 03:35:06 np0005532048 podman[101631]: 2025-11-22 08:35:06.646604036 +0000 UTC m=+0.074485051 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:35:06 np0005532048 podman[101631]: 2025-11-22 08:35:06.744635346 +0000 UTC m=+0.172516341 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:35:07 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 22 03:35:07 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 22 03:35:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 22 03:35:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: daemon mds.cephfs.compute-0.myffln assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: Cluster is now healthy
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1812802162' entity='client.rgw.rgw.compute-0.qkpyxa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: daemon mds.cephfs.compute-0.myffln is now active in filesystem cephfs as rank 0
Nov 22 03:35:07 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 22 03:35:07 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).mds e5 new map
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-22T08:34:48.256238+0000#012modified#0112025-11-22T08:35:07.133156+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14265}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.myffln{0:14265} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/3475097318,v1:192.168.122.100:6815/3475097318] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1812802162' entity='client.rgw.rgw.compute-0.qkpyxa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 22 03:35:07 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln Updating MDS map to version 5 from mon.0
Nov 22 03:35:07 np0005532048 ceph-mds[101348]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 22 03:35:07 np0005532048 ceph-mds[101348]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 22 03:35:07 np0005532048 ceph-mds[101348]: mds.0.4 recovery_done -- successful recovery!
Nov 22 03:35:07 np0005532048 ceph-mds[101348]: mds.0.4 active_start
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3475097318,v1:192.168.122.100:6815/3475097318] up:active
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.myffln=up:active}
Nov 22 03:35:07 np0005532048 python3[101796]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:35:07 np0005532048 podman[101817]: 2025-11-22 08:35:07.517421557 +0000 UTC m=+0.021380789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:35:07 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 50 pg[9.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:07 np0005532048 podman[101817]: 2025-11-22 08:35:07.67866656 +0000 UTC m=+0.182625772 container create 311eb05c863242b66175a8901453214015d737084fb3d8d22a95434e41ca880f (image=quay.io/ceph/ceph:v18, name=eager_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 03:35:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:07 np0005532048 systemd[1]: Started libpod-conmon-311eb05c863242b66175a8901453214015d737084fb3d8d22a95434e41ca880f.scope.
Nov 22 03:35:07 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626e301419d5a51199c99c4c38b3607e1b467e108a18e896d653b1a0d246d117/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626e301419d5a51199c99c4c38b3607e1b467e108a18e896d653b1a0d246d117/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:07 np0005532048 podman[101817]: 2025-11-22 08:35:07.860732238 +0000 UTC m=+0.364691470 container init 311eb05c863242b66175a8901453214015d737084fb3d8d22a95434e41ca880f (image=quay.io/ceph/ceph:v18, name=eager_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:35:07 np0005532048 podman[101817]: 2025-11-22 08:35:07.868558524 +0000 UTC m=+0.372517736 container start 311eb05c863242b66175a8901453214015d737084fb3d8d22a95434e41ca880f (image=quay.io/ceph/ceph:v18, name=eager_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:35:07 np0005532048 podman[101817]: 2025-11-22 08:35:07.970964178 +0000 UTC m=+0.474923390 container attach 311eb05c863242b66175a8901453214015d737084fb3d8d22a95434e41ca880f (image=quay.io/ceph/ceph:v18, name=eager_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:35:08 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 22 03:35:08 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 22 03:35:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v139: 179 pgs: 1 creating+peering, 178 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Nov 22 03:35:08 np0005532048 ceph-mgr[75315]: [progress INFO root] Writing back 11 completed events
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 03:35:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 22 03:35:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:08 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 22 03:35:08 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1812802162' entity='client.rgw.rgw.compute-0.qkpyxa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1417735130' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 03:35:08 np0005532048 eager_engelbart[101881]: 
Nov 22 03:35:08 np0005532048 eager_engelbart[101881]: {"fsid":"34829716-a12c-57a6-8915-c1aa615c9d8a","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":250,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":50,"num_osds":3,"num_up_osds":3,"osd_up_since":1763800446,"num_in_osds":3,"osd_in_since":1763800405,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":146},{"state_name":"unknown","count":32}],"num_pgs":178,"num_pools":8,"num_objects":2,"data_bytes":459280,"bytes_used":84246528,"bytes_avail":64327680000,"bytes_total":64411926528,"unknown_pgs_ratio":0.17977528274059296},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.myffln","status":"up:active","gid":14265}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-11-22T08:35:02.061768+0000","services":{"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"5a1d02ec-e8a0-443b-ad8e-fe2a7c9264c6":{"message":"Global Recovery Event (5s)\n      [================............] (remaining: 3s)","progress":0.57803469896316528,"add_to_ceph_s":true}}}
Nov 22 03:35:08 np0005532048 systemd[1]: libpod-311eb05c863242b66175a8901453214015d737084fb3d8d22a95434e41ca880f.scope: Deactivated successfully.
Nov 22 03:35:08 np0005532048 podman[101817]: 2025-11-22 08:35:08.684944031 +0000 UTC m=+1.188903253 container died 311eb05c863242b66175a8901453214015d737084fb3d8d22a95434e41ca880f (image=quay.io/ceph/ceph:v18, name=eager_engelbart, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 22 03:35:08 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 69314470-3d08-479e-8f61-8553621674b7 does not exist
Nov 22 03:35:08 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 50d90b0d-6118-417f-990c-7159039e4131 does not exist
Nov 22 03:35:08 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 313eca1f-44c3-475f-9da6-76e804ccc58f does not exist
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1812802162' entity='client.rgw.rgw.compute-0.qkpyxa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:35:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:35:09 np0005532048 systemd[1]: var-lib-containers-storage-overlay-626e301419d5a51199c99c4c38b3607e1b467e108a18e896d653b1a0d246d117-merged.mount: Deactivated successfully.
Nov 22 03:35:09 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 51 pg[10.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [2] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:09 np0005532048 podman[101817]: 2025-11-22 08:35:09.465716471 +0000 UTC m=+1.969675693 container remove 311eb05c863242b66175a8901453214015d737084fb3d8d22a95434e41ca880f (image=quay.io/ceph/ceph:v18, name=eager_engelbart, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:35:09 np0005532048 systemd[1]: libpod-conmon-311eb05c863242b66175a8901453214015d737084fb3d8d22a95434e41ca880f.scope: Deactivated successfully.
Nov 22 03:35:09 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:09 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:35:09 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:09 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1812802162' entity='client.rgw.rgw.compute-0.qkpyxa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 22 03:35:09 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:35:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 22 03:35:09 np0005532048 podman[102141]: 2025-11-22 08:35:09.636282136 +0000 UTC m=+0.020806606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:09 np0005532048 podman[102141]: 2025-11-22 08:35:09.749970849 +0000 UTC m=+0.134495299 container create 1529fcc77f2e52603bdb98070814b235bd63b01c4af30140a673c0fb203089b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 03:35:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1812802162' entity='client.rgw.rgw.compute-0.qkpyxa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 22 03:35:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 22 03:35:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 22 03:35:09 np0005532048 systemd[1]: Started libpod-conmon-1529fcc77f2e52603bdb98070814b235bd63b01c4af30140a673c0fb203089b9.scope.
Nov 22 03:35:09 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:09 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 52 pg[10.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [2] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:09 np0005532048 podman[102141]: 2025-11-22 08:35:09.89221627 +0000 UTC m=+0.276740740 container init 1529fcc77f2e52603bdb98070814b235bd63b01c4af30140a673c0fb203089b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:35:09 np0005532048 podman[102141]: 2025-11-22 08:35:09.900003676 +0000 UTC m=+0.284528156 container start 1529fcc77f2e52603bdb98070814b235bd63b01c4af30140a673c0fb203089b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:35:09 np0005532048 fervent_mirzakhani[102157]: 167 167
Nov 22 03:35:09 np0005532048 systemd[1]: libpod-1529fcc77f2e52603bdb98070814b235bd63b01c4af30140a673c0fb203089b9.scope: Deactivated successfully.
Nov 22 03:35:09 np0005532048 podman[102141]: 2025-11-22 08:35:09.930829569 +0000 UTC m=+0.315354109 container attach 1529fcc77f2e52603bdb98070814b235bd63b01c4af30140a673c0fb203089b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:35:09 np0005532048 podman[102141]: 2025-11-22 08:35:09.93173965 +0000 UTC m=+0.316264140 container died 1529fcc77f2e52603bdb98070814b235bd63b01c4af30140a673c0fb203089b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:35:09 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 22 03:35:10 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 22 03:35:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v142: 180 pgs: 1 unknown, 1 creating+peering, 178 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 22 03:35:10 np0005532048 systemd[1]: var-lib-containers-storage-overlay-63f4dd9396fa926ccacde59e9cbf8ceb9174f0c01d2a811241173d115d4760df-merged.mount: Deactivated successfully.
Nov 22 03:35:10 np0005532048 podman[102141]: 2025-11-22 08:35:10.180613976 +0000 UTC m=+0.565138436 container remove 1529fcc77f2e52603bdb98070814b235bd63b01c4af30140a673c0fb203089b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:35:10 np0005532048 systemd[1]: libpod-conmon-1529fcc77f2e52603bdb98070814b235bd63b01c4af30140a673c0fb203089b9.scope: Deactivated successfully.
Nov 22 03:35:10 np0005532048 podman[102193]: 2025-11-22 08:35:10.355679108 +0000 UTC m=+0.059699041 container create d8d73698311d1913374fd313f1dc30b899c3da25a71db92682586ea6d44f86a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 22 03:35:10 np0005532048 systemd[1]: Started libpod-conmon-d8d73698311d1913374fd313f1dc30b899c3da25a71db92682586ea6d44f86a2.scope.
Nov 22 03:35:10 np0005532048 podman[102193]: 2025-11-22 08:35:10.322861488 +0000 UTC m=+0.026881451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:10 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/691e4952b652c75ce33a461206e0ce61157b40c8545772f23905d94e739a7b52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/691e4952b652c75ce33a461206e0ce61157b40c8545772f23905d94e739a7b52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/691e4952b652c75ce33a461206e0ce61157b40c8545772f23905d94e739a7b52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/691e4952b652c75ce33a461206e0ce61157b40c8545772f23905d94e739a7b52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/691e4952b652c75ce33a461206e0ce61157b40c8545772f23905d94e739a7b52/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:10 np0005532048 podman[102193]: 2025-11-22 08:35:10.450766588 +0000 UTC m=+0.154786541 container init d8d73698311d1913374fd313f1dc30b899c3da25a71db92682586ea6d44f86a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mcnulty, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:35:10 np0005532048 podman[102193]: 2025-11-22 08:35:10.458339678 +0000 UTC m=+0.162359621 container start d8d73698311d1913374fd313f1dc30b899c3da25a71db92682586ea6d44f86a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:35:10 np0005532048 podman[102193]: 2025-11-22 08:35:10.468159911 +0000 UTC m=+0.172179884 container attach d8d73698311d1913374fd313f1dc30b899c3da25a71db92682586ea6d44f86a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:35:10 np0005532048 python3[102217]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:35:10 np0005532048 podman[102241]: 2025-11-22 08:35:10.568296432 +0000 UTC m=+0.051647339 container create edbf8008edba340453aed3ab6e9432bc40d18b76eea94bf3530d5663f19794f0 (image=quay.io/ceph/ceph:v18, name=recursing_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 03:35:10 np0005532048 systemd[1]: Started libpod-conmon-edbf8008edba340453aed3ab6e9432bc40d18b76eea94bf3530d5663f19794f0.scope.
Nov 22 03:35:10 np0005532048 podman[102241]: 2025-11-22 08:35:10.542421107 +0000 UTC m=+0.025772034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:35:10 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4c781423fb5aa7c612fa0dcbc5347540c449a38f69a0a5b6d60e29b90e9dc7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4c781423fb5aa7c612fa0dcbc5347540c449a38f69a0a5b6d60e29b90e9dc7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:10 np0005532048 podman[102241]: 2025-11-22 08:35:10.680393737 +0000 UTC m=+0.163744674 container init edbf8008edba340453aed3ab6e9432bc40d18b76eea94bf3530d5663f19794f0 (image=quay.io/ceph/ceph:v18, name=recursing_golick, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:35:10 np0005532048 podman[102241]: 2025-11-22 08:35:10.686523222 +0000 UTC m=+0.169874129 container start edbf8008edba340453aed3ab6e9432bc40d18b76eea94bf3530d5663f19794f0 (image=quay.io/ceph/ceph:v18, name=recursing_golick, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:35:10 np0005532048 podman[102241]: 2025-11-22 08:35:10.690463906 +0000 UTC m=+0.173814813 container attach edbf8008edba340453aed3ab6e9432bc40d18b76eea94bf3530d5663f19794f0 (image=quay.io/ceph/ceph:v18, name=recursing_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 03:35:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 22 03:35:10 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/1812802162' entity='client.rgw.rgw.compute-0.qkpyxa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 22 03:35:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 22 03:35:10 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 22 03:35:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 22 03:35:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/580952523' entity='client.rgw.rgw.compute-0.qkpyxa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 22 03:35:11 np0005532048 ceph-mds[101348]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 22 03:35:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mds-cephfs-compute-0-myffln[101342]: 2025-11-22T08:35:11.143+0000 7f5cdc330640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 22 03:35:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 53 pg[11.0( empty local-lis/les=0/0 n=0 ec=53/53 lis/c=0/0 les/c/f=0/0/0 sis=53) [1] r=0 lpr=53 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 03:35:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4197001199' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:35:11 np0005532048 recursing_golick[102257]: 
Nov 22 03:35:11 np0005532048 recursing_golick[102257]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.qkpyxa","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Nov 22 03:35:11 np0005532048 systemd[1]: libpod-edbf8008edba340453aed3ab6e9432bc40d18b76eea94bf3530d5663f19794f0.scope: Deactivated successfully.
Nov 22 03:35:11 np0005532048 podman[102241]: 2025-11-22 08:35:11.229280585 +0000 UTC m=+0.712631492 container died edbf8008edba340453aed3ab6e9432bc40d18b76eea94bf3530d5663f19794f0 (image=quay.io/ceph/ceph:v18, name=recursing_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:35:11 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4f4c781423fb5aa7c612fa0dcbc5347540c449a38f69a0a5b6d60e29b90e9dc7-merged.mount: Deactivated successfully.
Nov 22 03:35:11 np0005532048 podman[102241]: 2025-11-22 08:35:11.383351557 +0000 UTC m=+0.866702494 container remove edbf8008edba340453aed3ab6e9432bc40d18b76eea94bf3530d5663f19794f0 (image=quay.io/ceph/ceph:v18, name=recursing_golick, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:35:11 np0005532048 systemd[1]: libpod-conmon-edbf8008edba340453aed3ab6e9432bc40d18b76eea94bf3530d5663f19794f0.scope: Deactivated successfully.
Nov 22 03:35:11 np0005532048 reverent_mcnulty[102223]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:35:11 np0005532048 reverent_mcnulty[102223]: --> relative data size: 1.0
Nov 22 03:35:11 np0005532048 reverent_mcnulty[102223]: --> All data devices are unavailable
Nov 22 03:35:11 np0005532048 systemd[1]: libpod-d8d73698311d1913374fd313f1dc30b899c3da25a71db92682586ea6d44f86a2.scope: Deactivated successfully.
Nov 22 03:35:11 np0005532048 systemd[1]: libpod-d8d73698311d1913374fd313f1dc30b899c3da25a71db92682586ea6d44f86a2.scope: Consumed 1.082s CPU time.
Nov 22 03:35:11 np0005532048 podman[102318]: 2025-11-22 08:35:11.661733745 +0000 UTC m=+0.028690083 container died d8d73698311d1913374fd313f1dc30b899c3da25a71db92682586ea6d44f86a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mcnulty, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:35:11 np0005532048 systemd[1]: var-lib-containers-storage-overlay-691e4952b652c75ce33a461206e0ce61157b40c8545772f23905d94e739a7b52-merged.mount: Deactivated successfully.
Nov 22 03:35:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 22 03:35:11 np0005532048 podman[102318]: 2025-11-22 08:35:11.852183372 +0000 UTC m=+0.219139690 container remove d8d73698311d1913374fd313f1dc30b899c3da25a71db92682586ea6d44f86a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mcnulty, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:35:11 np0005532048 systemd[1]: libpod-conmon-d8d73698311d1913374fd313f1dc30b899c3da25a71db92682586ea6d44f86a2.scope: Deactivated successfully.
Nov 22 03:35:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/580952523' entity='client.rgw.rgw.compute-0.qkpyxa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 22 03:35:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 22 03:35:11 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 22 03:35:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 22 03:35:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/580952523' entity='client.rgw.rgw.compute-0.qkpyxa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 22 03:35:11 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/580952523' entity='client.rgw.rgw.compute-0.qkpyxa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 22 03:35:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 54 pg[11.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=0/0 les/c/f=0/0/0 sis=53) [1] r=0 lpr=53 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v145: 181 pgs: 1 creating+peering, 1 unknown, 179 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.0 KiB/s wr, 12 op/s
Nov 22 03:35:12 np0005532048 python3[102458]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:35:12 np0005532048 podman[102481]: 2025-11-22 08:35:12.427615231 +0000 UTC m=+0.057902488 container create 57eab9c860628cfee89fd2c739dbb7e34463feb52be7f78ef35dbc428d8ed139 (image=quay.io/ceph/ceph:v18, name=modest_bohr, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 03:35:12 np0005532048 podman[102481]: 2025-11-22 08:35:12.391602595 +0000 UTC m=+0.021889872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:35:12 np0005532048 systemd[1]: Started libpod-conmon-57eab9c860628cfee89fd2c739dbb7e34463feb52be7f78ef35dbc428d8ed139.scope.
Nov 22 03:35:12 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/588faa96c25ce93d945da1b7a853508319d2b5c4092f507245ab88ddfd5df798/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/588faa96c25ce93d945da1b7a853508319d2b5c4092f507245ab88ddfd5df798/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:12 np0005532048 podman[102481]: 2025-11-22 08:35:12.636694782 +0000 UTC m=+0.266982129 container init 57eab9c860628cfee89fd2c739dbb7e34463feb52be7f78ef35dbc428d8ed139 (image=quay.io/ceph/ceph:v18, name=modest_bohr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 03:35:12 np0005532048 podman[102481]: 2025-11-22 08:35:12.644499587 +0000 UTC m=+0.274786874 container start 57eab9c860628cfee89fd2c739dbb7e34463feb52be7f78ef35dbc428d8ed139 (image=quay.io/ceph/ceph:v18, name=modest_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:35:12 np0005532048 podman[102481]: 2025-11-22 08:35:12.703226104 +0000 UTC m=+0.333513391 container attach 57eab9c860628cfee89fd2c739dbb7e34463feb52be7f78ef35dbc428d8ed139 (image=quay.io/ceph/ceph:v18, name=modest_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:35:12 np0005532048 podman[102518]: 2025-11-22 08:35:12.820120262 +0000 UTC m=+0.085480733 container create 9c173f7376cb40b346e2a9a48a1a0e0c8cf184a67c443012bb81122423263128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:35:12 np0005532048 podman[102518]: 2025-11-22 08:35:12.759694616 +0000 UTC m=+0.025055107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 22 03:35:12 np0005532048 systemd[1]: Started libpod-conmon-9c173f7376cb40b346e2a9a48a1a0e0c8cf184a67c443012bb81122423263128.scope.
Nov 22 03:35:12 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/580952523' entity='client.rgw.rgw.compute-0.qkpyxa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 22 03:35:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 22 03:35:12 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 22 03:35:12 np0005532048 podman[102518]: 2025-11-22 08:35:12.999542497 +0000 UTC m=+0.264902988 container init 9c173f7376cb40b346e2a9a48a1a0e0c8cf184a67c443012bb81122423263128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:35:13 np0005532048 podman[102518]: 2025-11-22 08:35:13.005884598 +0000 UTC m=+0.271245069 container start 9c173f7376cb40b346e2a9a48a1a0e0c8cf184a67c443012bb81122423263128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:35:13 np0005532048 keen_curie[102534]: 167 167
Nov 22 03:35:13 np0005532048 systemd[1]: libpod-9c173f7376cb40b346e2a9a48a1a0e0c8cf184a67c443012bb81122423263128.scope: Deactivated successfully.
Nov 22 03:35:13 np0005532048 podman[102518]: 2025-11-22 08:35:13.184857043 +0000 UTC m=+0.450217544 container attach 9c173f7376cb40b346e2a9a48a1a0e0c8cf184a67c443012bb81122423263128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 22 03:35:13 np0005532048 podman[102518]: 2025-11-22 08:35:13.185295763 +0000 UTC m=+0.450656234 container died 9c173f7376cb40b346e2a9a48a1a0e0c8cf184a67c443012bb81122423263128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:35:13 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/580952523' entity='client.rgw.rgw.compute-0.qkpyxa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 22 03:35:13 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/580952523' entity='client.rgw.rgw.compute-0.qkpyxa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 22 03:35:13 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 22 03:35:13 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 22 03:35:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 22 03:35:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2044710549' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 22 03:35:13 np0005532048 modest_bohr[102514]: mimic
Nov 22 03:35:13 np0005532048 systemd[1]: libpod-57eab9c860628cfee89fd2c739dbb7e34463feb52be7f78ef35dbc428d8ed139.scope: Deactivated successfully.
Nov 22 03:35:13 np0005532048 podman[102481]: 2025-11-22 08:35:13.438087992 +0000 UTC m=+1.068375269 container died 57eab9c860628cfee89fd2c739dbb7e34463feb52be7f78ef35dbc428d8ed139 (image=quay.io/ceph/ceph:v18, name=modest_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 03:35:13 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1fe15aa21368e2722ab7604d09a3c6448efa0dd95302121ff54d13b9b5ad3aaa-merged.mount: Deactivated successfully.
Nov 22 03:35:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v147: 181 pgs: 1 creating+peering, 180 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 240 B/s rd, 481 B/s wr, 1 op/s
Nov 22 03:35:14 np0005532048 podman[102518]: 2025-11-22 08:35:14.28265806 +0000 UTC m=+1.548018531 container remove 9c173f7376cb40b346e2a9a48a1a0e0c8cf184a67c443012bb81122423263128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 03:35:14 np0005532048 ceph-mon[75021]: from='client.? 192.168.122.100:0/580952523' entity='client.rgw.rgw.compute-0.qkpyxa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 22 03:35:14 np0005532048 systemd[1]: var-lib-containers-storage-overlay-588faa96c25ce93d945da1b7a853508319d2b5c4092f507245ab88ddfd5df798-merged.mount: Deactivated successfully.
Nov 22 03:35:15 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 22 03:35:15 np0005532048 podman[102481]: 2025-11-22 08:35:15.121276665 +0000 UTC m=+2.751563962 container remove 57eab9c860628cfee89fd2c739dbb7e34463feb52be7f78ef35dbc428d8ed139 (image=quay.io/ceph/ceph:v18, name=modest_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:35:15 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 22 03:35:15 np0005532048 systemd[1]: libpod-conmon-57eab9c860628cfee89fd2c739dbb7e34463feb52be7f78ef35dbc428d8ed139.scope: Deactivated successfully.
Nov 22 03:35:15 np0005532048 podman[102595]: 2025-11-22 08:35:15.14760429 +0000 UTC m=+0.752486108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:15 np0005532048 podman[102595]: 2025-11-22 08:35:15.428293763 +0000 UTC m=+1.033175501 container create a12fa4d20f7ffa83172de6683c5168aedfbf2e633fa2887e07cdcd3d44d24d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keldysh, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:35:15 np0005532048 systemd[1]: Started libpod-conmon-a12fa4d20f7ffa83172de6683c5168aedfbf2e633fa2887e07cdcd3d44d24d78.scope.
Nov 22 03:35:15 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bf25001f2edba53f3662e10c64b070121eed06c4b4515a008ed59d43d2f6e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bf25001f2edba53f3662e10c64b070121eed06c4b4515a008ed59d43d2f6e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bf25001f2edba53f3662e10c64b070121eed06c4b4515a008ed59d43d2f6e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bf25001f2edba53f3662e10c64b070121eed06c4b4515a008ed59d43d2f6e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:15 np0005532048 podman[102595]: 2025-11-22 08:35:15.771153654 +0000 UTC m=+1.376035422 container init a12fa4d20f7ffa83172de6683c5168aedfbf2e633fa2887e07cdcd3d44d24d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keldysh, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:35:15 np0005532048 podman[102595]: 2025-11-22 08:35:15.778691573 +0000 UTC m=+1.383573311 container start a12fa4d20f7ffa83172de6683c5168aedfbf2e633fa2887e07cdcd3d44d24d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keldysh, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:35:16 np0005532048 podman[102595]: 2025-11-22 08:35:16.028274725 +0000 UTC m=+1.633156473 container attach a12fa4d20f7ffa83172de6683c5168aedfbf2e633fa2887e07cdcd3d44d24d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keldysh, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 22 03:35:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v148: 181 pgs: 1 creating+peering, 180 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 341 B/s wr, 1 op/s
Nov 22 03:35:16 np0005532048 systemd[1]: libpod-conmon-9c173f7376cb40b346e2a9a48a1a0e0c8cf184a67c443012bb81122423263128.scope: Deactivated successfully.
Nov 22 03:35:16 np0005532048 python3[102642]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:35:16 np0005532048 podman[102643]: 2025-11-22 08:35:16.185700028 +0000 UTC m=+0.024672478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:35:16 np0005532048 podman[102643]: 2025-11-22 08:35:16.305179158 +0000 UTC m=+0.144151578 container create 8c2e757a1330cc2c6cf089135b682a322433eae7dac5cc22efd0fa30d18a3076 (image=quay.io/ceph/ceph:v18, name=blissful_wu, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:35:16 np0005532048 systemd[1]: Started libpod-conmon-8c2e757a1330cc2c6cf089135b682a322433eae7dac5cc22efd0fa30d18a3076.scope.
Nov 22 03:35:16 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]: {
Nov 22 03:35:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c612ccec83c6984bd84d8a294bf16d0a64b3e7c51685656f490b488d5ef71aed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c612ccec83c6984bd84d8a294bf16d0a64b3e7c51685656f490b488d5ef71aed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:    "0": [
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:        {
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "devices": [
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "/dev/loop3"
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            ],
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "lv_name": "ceph_lv0",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "lv_size": "21470642176",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "name": "ceph_lv0",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "tags": {
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.cluster_name": "ceph",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.crush_device_class": "",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.encrypted": "0",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.osd_id": "0",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.type": "block",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.vdo": "0"
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            },
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "type": "block",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "vg_name": "ceph_vg0"
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:        }
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:    ],
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:    "1": [
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:        {
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "devices": [
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "/dev/loop4"
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            ],
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "lv_name": "ceph_lv1",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "lv_size": "21470642176",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "name": "ceph_lv1",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "tags": {
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.cluster_name": "ceph",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.crush_device_class": "",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.encrypted": "0",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.osd_id": "1",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.type": "block",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.vdo": "0"
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            },
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "type": "block",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "vg_name": "ceph_vg1"
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:        }
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:    ],
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:    "2": [
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:        {
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "devices": [
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "/dev/loop5"
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            ],
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "lv_name": "ceph_lv2",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "lv_size": "21470642176",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "name": "ceph_lv2",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "tags": {
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.cluster_name": "ceph",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.crush_device_class": "",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.encrypted": "0",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.osd_id": "2",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.type": "block",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:                "ceph.vdo": "0"
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            },
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "type": "block",
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:            "vg_name": "ceph_vg2"
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:        }
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]:    ]
Nov 22 03:35:16 np0005532048 objective_keldysh[102612]: }
Nov 22 03:35:16 np0005532048 systemd[1]: libpod-a12fa4d20f7ffa83172de6683c5168aedfbf2e633fa2887e07cdcd3d44d24d78.scope: Deactivated successfully.
Nov 22 03:35:16 np0005532048 podman[102643]: 2025-11-22 08:35:16.855449279 +0000 UTC m=+0.694421719 container init 8c2e757a1330cc2c6cf089135b682a322433eae7dac5cc22efd0fa30d18a3076 (image=quay.io/ceph/ceph:v18, name=blissful_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 03:35:16 np0005532048 podman[102643]: 2025-11-22 08:35:16.862129488 +0000 UTC m=+0.701101908 container start 8c2e757a1330cc2c6cf089135b682a322433eae7dac5cc22efd0fa30d18a3076 (image=quay.io/ceph/ceph:v18, name=blissful_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 03:35:16 np0005532048 podman[102643]: 2025-11-22 08:35:16.985769037 +0000 UTC m=+0.824741497 container attach 8c2e757a1330cc2c6cf089135b682a322433eae7dac5cc22efd0fa30d18a3076 (image=quay.io/ceph/ceph:v18, name=blissful_wu, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:35:17 np0005532048 podman[102595]: 2025-11-22 08:35:17.011880088 +0000 UTC m=+2.616761866 container died a12fa4d20f7ffa83172de6683c5168aedfbf2e633fa2887e07cdcd3d44d24d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keldysh, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:35:17 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 22 03:35:17 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 22 03:35:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 22 03:35:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2576028438' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 22 03:35:17 np0005532048 blissful_wu[102662]: 
Nov 22 03:35:17 np0005532048 systemd[1]: libpod-8c2e757a1330cc2c6cf089135b682a322433eae7dac5cc22efd0fa30d18a3076.scope: Deactivated successfully.
Nov 22 03:35:17 np0005532048 blissful_wu[102662]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Nov 22 03:35:17 np0005532048 podman[102643]: 2025-11-22 08:35:17.530836405 +0000 UTC m=+1.369808835 container died 8c2e757a1330cc2c6cf089135b682a322433eae7dac5cc22efd0fa30d18a3076 (image=quay.io/ceph/ceph:v18, name=blissful_wu, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:35:17 np0005532048 systemd[1]: var-lib-containers-storage-overlay-88bf25001f2edba53f3662e10c64b070121eed06c4b4515a008ed59d43d2f6e3-merged.mount: Deactivated successfully.
Nov 22 03:35:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v149: 181 pgs: 181 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 141 B/s rd, 2.9 KiB/s wr, 11 op/s
Nov 22 03:35:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:35:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:35:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:35:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:35:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 22 03:35:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 22 03:35:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:35:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:35:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:35:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:35:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:35:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:35:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 22 03:35:18 np0005532048 podman[102595]: 2025-11-22 08:35:18.261637358 +0000 UTC m=+3.866519106 container remove a12fa4d20f7ffa83172de6683c5168aedfbf2e633fa2887e07cdcd3d44d24d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keldysh, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 22 03:35:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 22 03:35:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:18 np0005532048 ceph-mgr[75315]: [progress INFO root] Completed event 5a1d02ec-e8a0-443b-ad8e-fe2a7c9264c6 (Global Recovery Event) in 20 seconds
Nov 22 03:35:18 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c612ccec83c6984bd84d8a294bf16d0a64b3e7c51685656f490b488d5ef71aed-merged.mount: Deactivated successfully.
Nov 22 03:35:18 np0005532048 radosgw[100878]: LDAP not started since no server URIs were provided in the configuration.
Nov 22 03:35:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-rgw-rgw-compute-0-qkpyxa[100874]: 2025-11-22T08:35:18.455+0000 7f62a1441940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 22 03:35:18 np0005532048 radosgw[100878]: framework: beast
Nov 22 03:35:18 np0005532048 radosgw[100878]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 22 03:35:18 np0005532048 radosgw[100878]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 22 03:35:18 np0005532048 radosgw[100878]: starting handler: beast
Nov 22 03:35:18 np0005532048 radosgw[100878]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:35:18 np0005532048 radosgw[100878]: mgrc service_daemon_register rgw.14271 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.qkpyxa,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=55c2b1c0-2227-4292-87aa-d41619c74b21,zone_name=default,zonegroup_id=72294236-99d8-4d3a-af16-7a7efa6e24d5,zonegroup_name=default}
Nov 22 03:35:18 np0005532048 podman[102699]: 2025-11-22 08:35:18.642268065 +0000 UTC m=+1.171703804 container remove 8c2e757a1330cc2c6cf089135b682a322433eae7dac5cc22efd0fa30d18a3076 (image=quay.io/ceph/ceph:v18, name=blissful_wu, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:35:18 np0005532048 systemd[1]: libpod-conmon-8c2e757a1330cc2c6cf089135b682a322433eae7dac5cc22efd0fa30d18a3076.scope: Deactivated successfully.
Nov 22 03:35:18 np0005532048 systemd[1]: libpod-conmon-a12fa4d20f7ffa83172de6683c5168aedfbf2e633fa2887e07cdcd3d44d24d78.scope: Deactivated successfully.
Nov 22 03:35:19 np0005532048 podman[103396]: 2025-11-22 08:35:18.91630422 +0000 UTC m=+0.023941680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 22 03:35:19 np0005532048 podman[103396]: 2025-11-22 08:35:19.272026886 +0000 UTC m=+0.379664326 container create f161f3fb36d8e08588fa39c8cc0639f063d141982e24157e7573101fa5fa3b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_maxwell, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:35:19 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:35:19 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:35:19 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 22 03:35:19 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:35:19 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:35:19 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:35:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:35:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:35:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 22 03:35:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:35:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:35:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:35:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 22 03:35:19 np0005532048 systemd[1]: Started libpod-conmon-f161f3fb36d8e08588fa39c8cc0639f063d141982e24157e7573101fa5fa3b86.scope.
Nov 22 03:35:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 22 03:35:19 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.17( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.970758438s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.417816162s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.068012238s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.515106201s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.18( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.970797539s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.417938232s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.17( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.970698357s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.417816162s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067962646s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.515106201s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.18( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.970765114s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.417938232s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.15( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.970389366s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.417816162s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.16( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.970548630s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.417968750s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.15( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.970353127s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.417816162s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.16( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.970503807s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.417968750s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067327499s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.514923096s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.12( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.970553398s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.418167114s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067301750s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.514923096s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.12( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.970536232s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.418167114s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.11( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.970139503s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.417816162s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067622185s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.515357971s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067607880s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.515357971s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.11( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.970100403s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.417816162s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.e( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.969632149s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.417510986s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.e( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.969614983s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.417510986s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.f( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.969731331s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.417633057s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067167282s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.515075684s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067137718s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.515075684s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.f( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.969708443s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.417633057s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067086220s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.515083313s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067072868s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.515083313s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.c( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.969537735s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.417625427s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067232132s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.515350342s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.c( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.969509125s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.417625427s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067201614s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.515350342s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067152977s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.515434265s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067138672s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.515434265s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067167282s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.515487671s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067143440s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.515487671s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.1( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.962558746s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.410972595s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.1( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.962539673s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.410972595s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067119598s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.515571594s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066927910s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.515373230s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067100525s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.515571594s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066903114s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.515373230s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.3( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.962838173s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.411376953s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.3( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.962821007s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.411376953s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.5( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.962327003s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.410942078s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.6( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.962295532s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.410949707s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.6( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.962279320s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.410949707s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.7( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.962413788s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.411178589s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066783905s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.515571594s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.7( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.962383270s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.411178589s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066855431s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.515686035s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066760063s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.515571594s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.067816734s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.514869690s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066667557s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.515579224s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066778183s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.515686035s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066653252s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.515579224s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.065889359s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.514869690s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.8( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.961883545s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.410972595s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.8( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.961869240s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.410972595s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066359520s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.515586853s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.a( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.961560249s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.410835266s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.a( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.961546898s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.410835266s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.9( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.961732864s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.411033630s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.9( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.961705208s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.411033630s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066549301s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.515914917s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066536903s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.515914917s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066254616s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.515586853s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066575050s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.516036987s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066562653s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.516036987s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066360474s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.515945435s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066347122s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.515945435s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.1b( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.967938423s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.417633057s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.1d( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.960899353s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.410606384s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066295624s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.516029358s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.1d( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.960878372s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.410606384s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066278458s) [2] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.516029358s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.1e( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.960783005s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.410629272s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.1b( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.967844009s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.417633057s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.1e( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.960730553s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.410629272s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066111565s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active pruub 99.516029358s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.1f( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.960617065s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 94.410614014s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=15.066041946s) [0] r=-1 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 99.516029358s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.1f( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.960603714s) [0] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.410614014s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[3.5( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.960851669s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.410942078s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 podman[103396]: 2025-11-22 08:35:19.62643737 +0000 UTC m=+0.734074840 container init f161f3fb36d8e08588fa39c8cc0639f063d141982e24157e7573101fa5fa3b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 03:35:19 np0005532048 podman[103396]: 2025-11-22 08:35:19.634691297 +0000 UTC m=+0.742328787 container start f161f3fb36d8e08588fa39c8cc0639f063d141982e24157e7573101fa5fa3b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:35:19 np0005532048 romantic_maxwell[103412]: 167 167
Nov 22 03:35:19 np0005532048 systemd[1]: libpod-f161f3fb36d8e08588fa39c8cc0639f063d141982e24157e7573101fa5fa3b86.scope: Deactivated successfully.
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[7.1f( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[3.1b( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[3.f( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[7.4( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[3.c( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[7.1a( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[3.1( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[7.18( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[3.1e( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[7.9( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[3.1d( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[7.6( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[3.3( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[7.e( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[3.6( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[3.8( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[7.c( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[7.3( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[3.7( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[7.f( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[7.1( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[3.a( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[3.9( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[3.5( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[7.2( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[7.5( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[7.8( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[7.13( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[3.e( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[7.a( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[7.15( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[3.17( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[3.11( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[7.11( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[3.16( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[3.18( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[3.15( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[3.12( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[7.1b( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[7.1c( empty local-lis/les=0/0 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[3.1f( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.14( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.902006149s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.854621887s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.18( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.902235985s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.854873657s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.14( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901980400s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.854621887s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.13( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.902120590s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.854789734s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.18( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.902198792s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.854873657s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.13( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.902095795s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.854789734s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.12( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901945114s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.854667664s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.12( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901929855s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.854667664s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.11( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901973724s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.854774475s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.10( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901712418s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.854560852s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.f( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901856422s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.854721069s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.f( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901834488s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.854721069s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.11( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901894569s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.854774475s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.10( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901660919s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.854560852s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[6.d( v 49'3 (0'0,49'3] local-lis/les=43/44 n=2 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.945752144s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=49'3 lcod 49'2 mlcod 49'2 active pruub 102.898681641s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[6.d( v 49'3 (0'0,49'3] local-lis/les=43/44 n=2 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.945678711s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=49'3 lcod 49'2 mlcod 0'0 unknown NOTIFY pruub 102.898681641s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.e( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901533127s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.854568481s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.e( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901509285s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.854568481s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.d( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901722908s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.854820251s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[6.f( v 49'5 (0'0,49'5] local-lis/les=43/44 n=3 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.945316315s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=49'4 lcod 49'4 mlcod 49'4 active pruub 102.898445129s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.d( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901691437s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.854820251s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[6.f( v 49'5 (0'0,49'5] local-lis/les=43/44 n=3 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.945255280s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=49'4 lcod 49'4 mlcod 0'0 unknown NOTIFY pruub 102.898445129s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.2( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901280403s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.854530334s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.2( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901256561s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.854530334s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[6.3( v 49'2 (0'0,49'2] local-lis/les=43/44 n=2 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.945034027s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=49'2 lcod 49'1 mlcod 49'1 active pruub 102.898323059s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[6.3( v 49'2 (0'0,49'2] local-lis/les=43/44 n=2 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.944997787s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=49'2 lcod 49'1 mlcod 0'0 unknown NOTIFY pruub 102.898323059s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.1( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901059151s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.854454041s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.4( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901018143s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.854423523s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.945242882s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 102.898643494s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.1( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.901033401s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.854454041s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.4( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.900988579s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.854423523s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.9( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.900760651s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.854232788s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.945203781s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.898643494s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.1a( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.900921822s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.854461670s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.9( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.900740623s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.854232788s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[6.b( v 49'3 (0'0,49'3] local-lis/les=43/44 n=1 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.944893837s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=49'1 lcod 49'2 mlcod 49'2 active pruub 102.898445129s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[6.b( v 49'3 (0'0,49'3] local-lis/les=43/44 n=1 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.944829941s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=49'1 lcod 49'2 mlcod 0'0 unknown NOTIFY pruub 102.898445129s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.5( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.891113281s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.844764709s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.1a( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.900819778s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.854461670s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.a( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.891009331s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.844749451s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[6.7( v 49'2 (0'0,49'2] local-lis/les=43/44 n=1 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.944578171s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=49'2 lcod 49'1 mlcod 49'1 active pruub 102.898323059s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.a( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.890992165s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.844749451s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[6.7( v 49'2 (0'0,49'2] local-lis/les=43/44 n=1 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.944556236s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=49'2 lcod 49'1 mlcod 0'0 unknown NOTIFY pruub 102.898323059s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[6.9( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.938302994s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 102.892166138s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[6.9( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.938289642s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.892166138s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.1b( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.890834808s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.844734192s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.1b( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.890815735s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.844734192s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.7( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.890800476s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.844741821s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[6.5( v 49'3 (0'0,49'3] local-lis/les=43/44 n=2 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.944326401s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=49'3 lcod 49'2 mlcod 49'2 active pruub 102.898284912s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[6.5( v 49'3 (0'0,49'3] local-lis/les=43/44 n=2 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.944306374s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=49'3 lcod 49'2 mlcod 0'0 unknown NOTIFY pruub 102.898284912s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.7( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.890767097s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.844741821s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.8( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.900601387s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.854606628s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.8( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.900586128s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.854606628s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.1c( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.900145531s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active pruub 100.854202271s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.1c( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.900131226s) [2] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.854202271s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[4.5( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56 pruub=9.890711784s) [1] r=-1 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.844764709s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[4.18( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.932937622s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.186645508s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.932909966s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.186645508s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.19( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.902536392s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156394958s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.938533783s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.192413330s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.19( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.902498245s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156394958s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.938497543s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.192413330s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[4.13( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[5.1e( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.18( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.902176857s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156471252s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.1b( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.902096748s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156402588s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[2.19( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.938073158s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.192390442s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.18( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.902156830s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156471252s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.1b( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.902071953s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156402588s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.938043594s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.192390442s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.15( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.902008057s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156425476s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.15( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.901823044s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156425476s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[2.18( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.13( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.901617050s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156318665s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.13( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.901602745s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156318665s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[2.13( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[4.11( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.16( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.901534081s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156387329s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.17( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.901421547s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156379700s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.16( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.901471138s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156387329s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.17( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.901389122s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156379700s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.937386513s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.192474365s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.937362671s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.192474365s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.937418938s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.192543030s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.937396049s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.192543030s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.11( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.901082039s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156288147s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.937244415s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.192474365s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.937218666s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.192474365s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.937420845s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.192802429s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.937399864s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.192802429s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.f( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.900799751s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156257629s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.f( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.900781631s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156257629s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.937344551s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.192977905s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[5.14( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.937317848s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.192977905s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.d( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.900565147s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156250000s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.d( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.900538445s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156250000s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.11( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.900568962s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156288147s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936632156s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.192459106s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.7( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.900661469s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156524658s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.937351227s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.193244934s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.2( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.900597572s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156517029s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.937329292s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.193244934s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.7( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.900613785s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156524658s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[5.15( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.2( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.900576591s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156517029s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[4.e( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936395645s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.192459106s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.937042236s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.193252563s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.937026024s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.193252563s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.3( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899986267s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156234741s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.3( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899964333s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156234741s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.937177658s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.193481445s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.937163353s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.193481445s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.4( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899886131s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156234741s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.4( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899870872s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156234741s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936872482s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.193305969s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[2.f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936835289s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.193252563s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936857224s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.193305969s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936800003s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.193252563s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.5( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899626732s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156097412s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.5( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899603844s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156097412s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[5.1d( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[2.1b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[5.11( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.6( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899490356s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156044006s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.6( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899453163s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156044006s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936728477s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.193328857s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936711311s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.193328857s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.8( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899492264s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156150818s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.8( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899472237s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156150818s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936601639s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.193321228s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.9( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899370193s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156097412s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.9( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899355888s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156097412s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936586380s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.193321228s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.a( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899230003s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156036377s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.a( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899211884s) [1] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156036377s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.b( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899046898s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.155914307s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[2.11( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.b( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899026871s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.155914307s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[2.16( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936553955s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.193466187s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[2.15( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[2.17( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[5.13( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[5.16( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.1c( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899078369s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.156028748s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[5.9( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[2.d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[5.7( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.1d( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.898943901s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.155899048s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.1d( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.898923874s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.155899048s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936511993s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.193466187s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.1c( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.899059296s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.156028748s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936767578s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.193763733s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[2.2( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936753273s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.193763733s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936704636s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.193786621s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936687469s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.193786621s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.1f( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.898519516s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active pruub 89.155647278s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[2.1f( empty local-lis/les=39/42 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56 pruub=9.898501396s) [0] r=-1 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.155647278s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936783791s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 91.193946838s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[4.1( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[5.5( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.936752319s) [1] r=-1 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.193946838s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[4.1a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[4.a( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[5.4( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[4.1b( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 56 pg[4.1c( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[2.7( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[5.2( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[5.12( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[2.3( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[2.4( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[2.5( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[4.14( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[2.6( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[5.1( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[4.12( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[4.f( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[2.9( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[4.10( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[5.f( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[2.a( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[5.3( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[4.d( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[2.8( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[5.c( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[5.1a( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[5.19( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[2.b( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[5.18( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[4.2( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[2.1d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[4.4( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[2.1c( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 56 pg[2.1f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[4.9( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[4.7( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[4.8( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 56 pg[4.5( empty local-lis/les=0/0 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:19 np0005532048 podman[103396]: 2025-11-22 08:35:19.682609846 +0000 UTC m=+0.790247296 container attach f161f3fb36d8e08588fa39c8cc0639f063d141982e24157e7573101fa5fa3b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:35:19 np0005532048 podman[103396]: 2025-11-22 08:35:19.68406638 +0000 UTC m=+0.791703830 container died f161f3fb36d8e08588fa39c8cc0639f063d141982e24157e7573101fa5fa3b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:35:19 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4492b98b4e1a0276e210e0e1dcb55e70d3a1a2555bdf53da333140e668c19f7d-merged.mount: Deactivated successfully.
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.b deep-scrub starts
Nov 22 03:35:19 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.b deep-scrub ok
Nov 22 03:35:19 np0005532048 podman[103396]: 2025-11-22 08:35:19.995252738 +0000 UTC m=+1.102890188 container remove f161f3fb36d8e08588fa39c8cc0639f063d141982e24157e7573101fa5fa3b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_maxwell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:35:20 np0005532048 systemd[1]: libpod-conmon-f161f3fb36d8e08588fa39c8cc0639f063d141982e24157e7573101fa5fa3b86.scope: Deactivated successfully.
Nov 22 03:35:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v151: 181 pgs: 181 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 2.4 KiB/s wr, 9 op/s
Nov 22 03:35:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 22 03:35:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 22 03:35:20 np0005532048 podman[103436]: 2025-11-22 08:35:20.192379464 +0000 UTC m=+0.088619358 container create 455590f4fec0458105328659d3954795481e582c2053b0cc4d37b95964c9cf85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 22 03:35:20 np0005532048 podman[103436]: 2025-11-22 08:35:20.12616533 +0000 UTC m=+0.022405244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:20 np0005532048 systemd[1]: Started libpod-conmon-455590f4fec0458105328659d3954795481e582c2053b0cc4d37b95964c9cf85.scope.
Nov 22 03:35:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:35:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:35:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 22 03:35:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:35:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:35:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:35:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 22 03:35:20 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995a8e495e3ea0ac7337cf626be74be5f3e51536ebe9cf8434bc6e85a98186cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995a8e495e3ea0ac7337cf626be74be5f3e51536ebe9cf8434bc6e85a98186cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995a8e495e3ea0ac7337cf626be74be5f3e51536ebe9cf8434bc6e85a98186cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995a8e495e3ea0ac7337cf626be74be5f3e51536ebe9cf8434bc6e85a98186cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 22 03:35:20 np0005532048 podman[103436]: 2025-11-22 08:35:20.363293357 +0000 UTC m=+0.259533271 container init 455590f4fec0458105328659d3954795481e582c2053b0cc4d37b95964c9cf85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shirley, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 22 03:35:20 np0005532048 podman[103436]: 2025-11-22 08:35:20.370200381 +0000 UTC m=+0.266440275 container start 455590f4fec0458105328659d3954795481e582c2053b0cc4d37b95964c9cf85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shirley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:35:20 np0005532048 podman[103436]: 2025-11-22 08:35:20.424822869 +0000 UTC m=+0.321062773 container attach 455590f4fec0458105328659d3954795481e582c2053b0cc4d37b95964c9cf85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shirley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:35:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 22 03:35:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 22 03:35:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 22 03:35:20 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[6.e( v 49'3 (0'0,49'3] local-lis/les=43/44 n=1 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=57 pruub=10.857632637s) [1] r=-1 lpr=57 pi=[43,57)/1 crt=49'2 lcod 49'2 mlcod 49'2 active pruub 102.898597717s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[6.e( v 49'3 (0'0,49'3] local-lis/les=43/44 n=1 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=57 pruub=10.857558250s) [1] r=-1 lpr=57 pi=[43,57)/1 crt=49'2 lcod 49'2 mlcod 0'0 unknown NOTIFY pruub 102.898597717s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=57 pruub=10.857642174s) [1] r=-1 lpr=57 pi=[43,57)/1 crt=0'0 mlcod 0'0 active pruub 102.898689270s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=57 pruub=10.857576370s) [1] r=-1 lpr=57 pi=[43,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 102.898689270s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[6.6( v 52'1 (0'0,52'1] local-lis/les=43/44 n=1 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=57 pruub=10.857089043s) [1] r=-1 lpr=57 pi=[43,57)/1 crt=52'1 lcod 0'0 mlcod 0'0 active pruub 102.898292542s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[6.6( v 52'1 (0'0,52'1] local-lis/les=43/44 n=1 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=57 pruub=10.857047081s) [1] r=-1 lpr=57 pi=[43,57)/1 crt=52'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.898292542s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[6.a( v 49'1 (0'0,49'1] local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=57 pruub=10.850601196s) [1] r=-1 lpr=57 pi=[43,57)/1 crt=0'0 lcod 0'0 mlcod 0'0 active pruub 102.892181396s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[6.a( v 49'1 (0'0,49'1] local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=57 pruub=10.850538254s) [1] r=-1 lpr=57 pi=[43,57)/1 crt=0'0 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.892181396s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=57) [1] r=0 lpr=57 pi=[43,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[6.6( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=57) [1] r=0 lpr=57 pi=[43,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[6.e( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=57) [1] r=0 lpr=57 pi=[43,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=57) [1] r=0 lpr=57 pi=[43,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[4.1c( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[5.7( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[2.18( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[3.18( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[4.13( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[7.11( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[4.11( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[7.1c( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[7.15( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[7.a( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[3.11( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[3.e( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[4.a( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[7.5( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[7.8( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[4.1( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[3.5( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[3.7( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[7.1( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[4.e( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[3.8( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[7.c( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[4.1a( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[4.18( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[7.2( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[4.1b( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[3.1d( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[3.1e( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[7.e( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[3.16( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [2] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 57 pg[7.1a( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [2] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[2.19( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[5.1e( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[2.1d( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[2.1c( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[2.f( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[2.2( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[5.4( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[5.2( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[5.5( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[5.3( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[2.1f( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[5.15( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[2.8( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[2.b( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[2.16( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[2.13( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[7.1b( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[3.1f( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[2.11( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[3.12( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[5.14( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[3.15( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[3.9( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[7.13( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[3.17( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[3.a( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[7.f( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[7.3( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[3.6( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[3.3( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[7.6( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[7.9( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[7.18( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[3.c( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[3.1( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[7.4( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[3.f( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[7.1f( empty local-lis/les=56/57 n=0 ec=45/29 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 57 pg[3.1b( empty local-lis/les=56/57 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=56) [0] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[5.19( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[5.1a( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[4.d( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[4.f( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[2.9( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[6.f( v 49'5 lc 49'1 (0'0,49'5] local-lis/les=56/57 n=3 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=49'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[6.d( v 49'3 lc 49'1 (0'0,49'3] local-lis/les=56/57 n=2 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=49'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[5.f( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[2.6( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[5.1( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[5.18( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[2.7( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[6.3( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=56/57 n=2 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=49'2 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[5.c( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[2.4( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[4.2( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[2.5( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[6.1( empty local-lis/les=56/57 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[2.3( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[6.5( v 49'3 lc 49'1 (0'0,49'3] local-lis/les=56/57 n=2 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=49'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[4.5( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[4.7( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[2.a( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[2.d( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[6.9( empty local-lis/les=56/57 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[4.9( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[6.b( v 49'3 lc 0'0 (0'0,49'3] local-lis/les=56/57 n=1 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=49'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[5.9( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[5.16( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[6.7( v 49'2 lc 49'1 (0'0,49'2] local-lis/les=56/57 n=1 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=49'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[4.4( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[4.14( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[5.12( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[2.15( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[4.12( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[2.17( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[4.10( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[5.11( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[2.1b( empty local-lis/les=56/57 n=0 ec=39/19 lis/c=39/39 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[39,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[5.13( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[4.8( empty local-lis/les=56/57 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=56) [1] r=0 lpr=56 pi=[41,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:20 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 57 pg[5.1d( empty local-lis/les=56/57 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=56) [1] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]: {
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:        "osd_id": 1,
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:        "type": "bluestore"
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:    },
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:        "osd_id": 0,
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:        "type": "bluestore"
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:    },
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:        "osd_id": 2,
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:        "type": "bluestore"
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]:    }
Nov 22 03:35:21 np0005532048 admiring_shirley[103452]: }
Nov 22 03:35:21 np0005532048 systemd[1]: libpod-455590f4fec0458105328659d3954795481e582c2053b0cc4d37b95964c9cf85.scope: Deactivated successfully.
Nov 22 03:35:21 np0005532048 podman[103436]: 2025-11-22 08:35:21.346729945 +0000 UTC m=+1.242969839 container died 455590f4fec0458105328659d3954795481e582c2053b0cc4d37b95964c9cf85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shirley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:35:21 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 22 03:35:21 np0005532048 systemd[1]: var-lib-containers-storage-overlay-995a8e495e3ea0ac7337cf626be74be5f3e51536ebe9cf8434bc6e85a98186cc-merged.mount: Deactivated successfully.
Nov 22 03:35:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 22 03:35:21 np0005532048 podman[103436]: 2025-11-22 08:35:21.681558484 +0000 UTC m=+1.577798378 container remove 455590f4fec0458105328659d3954795481e582c2053b0cc4d37b95964c9cf85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_shirley, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:35:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 22 03:35:21 np0005532048 systemd[1]: libpod-conmon-455590f4fec0458105328659d3954795481e582c2053b0cc4d37b95964c9cf85.scope: Deactivated successfully.
Nov 22 03:35:21 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 22 03:35:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:35:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:35:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:21 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev f57b6b62-5de2-4441-8bc9-988507b3e629 does not exist
Nov 22 03:35:21 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev fc7fe135-9ec7-4677-b626-e7ac44e1257e does not exist
Nov 22 03:35:21 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 58 pg[6.a( v 49'1 (0'0,49'1] local-lis/les=57/58 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=57) [1] r=0 lpr=57 pi=[43,57)/1 crt=49'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:21 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 58 pg[6.e( v 49'3 lc 49'1 (0'0,49'3] local-lis/les=57/58 n=1 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=57) [1] r=0 lpr=57 pi=[43,57)/1 crt=49'3 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:21 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 58 pg[6.6( v 52'1 lc 0'0 (0'0,52'1] local-lis/les=57/58 n=1 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=57) [1] r=0 lpr=57 pi=[43,57)/1 crt=52'1 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:21 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 58 pg[6.2( empty local-lis/les=57/58 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=57) [1] r=0 lpr=57 pi=[43,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:21 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 22 03:35:21 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 22 03:35:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v154: 181 pgs: 4 active+recovery_wait+degraded, 4 peering, 1 active+recovering, 172 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 5.3 KiB/s wr, 67 op/s; 6/102 objects degraded (5.882%); 100 B/s, 0 objects/s recovering
Nov 22 03:35:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:35:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:35:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:35:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:35:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:35:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:35:22 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:22 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:22 np0005532048 podman[103718]: 2025-11-22 08:35:22.862933558 +0000 UTC m=+0.100786627 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 03:35:22 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 6/102 objects degraded (5.882%), 4 pgs degraded (PG_DEGRADED)
Nov 22 03:35:22 np0005532048 podman[103718]: 2025-11-22 08:35:22.96146158 +0000 UTC m=+0.199314639 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:23 np0005532048 ceph-mgr[75315]: [progress INFO root] Writing back 12 completed events
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: Health check failed: Degraded data redundancy: 6/102 objects degraded (5.882%), 4 pgs degraded (PG_DEGRADED)
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:23 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ca30d628-f41c-4227-8900-20ceaf48e02c does not exist
Nov 22 03:35:23 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 77d8195e-f3f8-4bd2-a906-f2c7ebdd556a does not exist
Nov 22 03:35:23 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev bd9f1402-ed73-4df7-978f-e8e82db3bf3b does not exist
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:35:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:35:23 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 22 03:35:23 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 22 03:35:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v155: 181 pgs: 4 active+recovery_wait+degraded, 4 peering, 1 active+recovering, 172 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.2 KiB/s wr, 55 op/s; 6/102 objects degraded (5.882%); 100 B/s, 0 objects/s recovering
Nov 22 03:35:24 np0005532048 podman[104017]: 2025-11-22 08:35:24.369203125 +0000 UTC m=+0.026320967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:24 np0005532048 podman[104017]: 2025-11-22 08:35:24.479788744 +0000 UTC m=+0.136906476 container create d65abcf33735295b2a85b27ebd4900adc74ab508fb8d19562d4277dfb34ae353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mahavira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:35:24 np0005532048 systemd[1]: Started libpod-conmon-d65abcf33735295b2a85b27ebd4900adc74ab508fb8d19562d4277dfb34ae353.scope.
Nov 22 03:35:24 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:24 np0005532048 podman[104017]: 2025-11-22 08:35:24.608511064 +0000 UTC m=+0.265628886 container init d65abcf33735295b2a85b27ebd4900adc74ab508fb8d19562d4277dfb34ae353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:35:24 np0005532048 podman[104017]: 2025-11-22 08:35:24.615022908 +0000 UTC m=+0.272140680 container start d65abcf33735295b2a85b27ebd4900adc74ab508fb8d19562d4277dfb34ae353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:35:24 np0005532048 keen_mahavira[104033]: 167 167
Nov 22 03:35:24 np0005532048 systemd[1]: libpod-d65abcf33735295b2a85b27ebd4900adc74ab508fb8d19562d4277dfb34ae353.scope: Deactivated successfully.
Nov 22 03:35:24 np0005532048 podman[104017]: 2025-11-22 08:35:24.670515708 +0000 UTC m=+0.327633550 container attach d65abcf33735295b2a85b27ebd4900adc74ab508fb8d19562d4277dfb34ae353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mahavira, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:35:24 np0005532048 podman[104017]: 2025-11-22 08:35:24.671471061 +0000 UTC m=+0.328588853 container died d65abcf33735295b2a85b27ebd4900adc74ab508fb8d19562d4277dfb34ae353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:35:24 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 22 03:35:24 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 22 03:35:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay-68a011603ef5d88fcda8318c4a0b78f9cf5ea61a2a044cbe46444e51c7dbe700-merged.mount: Deactivated successfully.
Nov 22 03:35:24 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:24 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:35:24 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:24 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:35:25 np0005532048 podman[104017]: 2025-11-22 08:35:25.339853669 +0000 UTC m=+0.996971391 container remove d65abcf33735295b2a85b27ebd4900adc74ab508fb8d19562d4277dfb34ae353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mahavira, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 03:35:25 np0005532048 systemd[1]: libpod-conmon-d65abcf33735295b2a85b27ebd4900adc74ab508fb8d19562d4277dfb34ae353.scope: Deactivated successfully.
Nov 22 03:35:25 np0005532048 podman[104059]: 2025-11-22 08:35:25.56368302 +0000 UTC m=+0.111292407 container create 68547e0a3e07b9fbb94537fb6c137dcbebe81e336b2985e9872262e70033bb97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:35:25 np0005532048 podman[104059]: 2025-11-22 08:35:25.474806477 +0000 UTC m=+0.022415894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:25 np0005532048 systemd[1]: Started libpod-conmon-68547e0a3e07b9fbb94537fb6c137dcbebe81e336b2985e9872262e70033bb97.scope.
Nov 22 03:35:25 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e72f8ab0d763c60f1f692b981a52d13349d8950e43ee296689db9bdd7e29a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e72f8ab0d763c60f1f692b981a52d13349d8950e43ee296689db9bdd7e29a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e72f8ab0d763c60f1f692b981a52d13349d8950e43ee296689db9bdd7e29a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e72f8ab0d763c60f1f692b981a52d13349d8950e43ee296689db9bdd7e29a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e72f8ab0d763c60f1f692b981a52d13349d8950e43ee296689db9bdd7e29a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:25 np0005532048 podman[104059]: 2025-11-22 08:35:25.753143874 +0000 UTC m=+0.300753341 container init 68547e0a3e07b9fbb94537fb6c137dcbebe81e336b2985e9872262e70033bb97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mccarthy, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:35:25 np0005532048 podman[104059]: 2025-11-22 08:35:25.761586364 +0000 UTC m=+0.309195751 container start 68547e0a3e07b9fbb94537fb6c137dcbebe81e336b2985e9872262e70033bb97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:35:25 np0005532048 podman[104059]: 2025-11-22 08:35:25.803821218 +0000 UTC m=+0.351430625 container attach 68547e0a3e07b9fbb94537fb6c137dcbebe81e336b2985e9872262e70033bb97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mccarthy, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:35:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v156: 181 pgs: 4 active+recovery_wait+degraded, 4 peering, 1 active+recovering, 172 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 KiB/s wr, 50 op/s; 6/102 objects degraded (5.882%); 92 B/s, 0 objects/s recovering
Nov 22 03:35:26 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.e deep-scrub starts
Nov 22 03:35:26 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.e deep-scrub ok
Nov 22 03:35:26 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 22 03:35:26 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 22 03:35:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 22 03:35:27 np0005532048 zealous_mccarthy[104076]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:35:27 np0005532048 zealous_mccarthy[104076]: --> relative data size: 1.0
Nov 22 03:35:27 np0005532048 zealous_mccarthy[104076]: --> All data devices are unavailable
Nov 22 03:35:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 22 03:35:27 np0005532048 systemd[1]: libpod-68547e0a3e07b9fbb94537fb6c137dcbebe81e336b2985e9872262e70033bb97.scope: Deactivated successfully.
Nov 22 03:35:27 np0005532048 systemd[1]: libpod-68547e0a3e07b9fbb94537fb6c137dcbebe81e336b2985e9872262e70033bb97.scope: Consumed 1.019s CPU time.
Nov 22 03:35:27 np0005532048 podman[104059]: 2025-11-22 08:35:27.164781197 +0000 UTC m=+1.712390604 container died 68547e0a3e07b9fbb94537fb6c137dcbebe81e336b2985e9872262e70033bb97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:35:27 np0005532048 systemd[1]: var-lib-containers-storage-overlay-50e72f8ab0d763c60f1f692b981a52d13349d8950e43ee296689db9bdd7e29a1-merged.mount: Deactivated successfully.
Nov 22 03:35:27 np0005532048 podman[104059]: 2025-11-22 08:35:27.712792886 +0000 UTC m=+2.260402313 container remove 68547e0a3e07b9fbb94537fb6c137dcbebe81e336b2985e9872262e70033bb97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:35:27 np0005532048 systemd[1]: libpod-conmon-68547e0a3e07b9fbb94537fb6c137dcbebe81e336b2985e9872262e70033bb97.scope: Deactivated successfully.
Nov 22 03:35:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v157: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.6 KiB/s wr, 88 op/s; 99 B/s, 1 keys/s, 1 objects/s recovering
Nov 22 03:35:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 22 03:35:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 22 03:35:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 22 03:35:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 22 03:35:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:28 np0005532048 podman[104258]: 2025-11-22 08:35:28.38814538 +0000 UTC m=+0.064360666 container create 13c3d685b6a0ca127a427a84a88cc4afff9e3add17a41e880267ca210e147dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sammet, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:35:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 22 03:35:28 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 6/102 objects degraded (5.882%), 4 pgs degraded)
Nov 22 03:35:28 np0005532048 ceph-mon[75021]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 22 03:35:28 np0005532048 podman[104258]: 2025-11-22 08:35:28.350953444 +0000 UTC m=+0.027168770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 22 03:35:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 22 03:35:28 np0005532048 systemd[1]: Started libpod-conmon-13c3d685b6a0ca127a427a84a88cc4afff9e3add17a41e880267ca210e147dee.scope.
Nov 22 03:35:28 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 22 03:35:28 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 22 03:35:28 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:28 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 59 pg[6.b( v 49'3 (0'0,49'3] local-lis/les=56/57 n=1 ec=43/27 lis/c=56/56 les/c/f=57/58/0 sis=59 pruub=8.356516838s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=49'3 mlcod 49'3 active pruub 101.702079773s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:28 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 59 pg[6.b( v 49'3 (0'0,49'3] local-lis/les=56/57 n=1 ec=43/27 lis/c=56/56 les/c/f=57/58/0 sis=59 pruub=8.356423378s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=49'3 mlcod 0'0 unknown NOTIFY pruub 101.702079773s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:28 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 59 pg[6.7( v 49'2 (0'0,49'2] local-lis/les=56/57 n=1 ec=43/27 lis/c=56/56 les/c/f=57/58/0 sis=59 pruub=8.355888367s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=49'2 mlcod 49'2 active pruub 101.701972961s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:28 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 59 pg[6.7( v 49'2 (0'0,49'2] local-lis/les=56/57 n=1 ec=43/27 lis/c=56/56 les/c/f=57/58/0 sis=59 pruub=8.355847359s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=49'2 mlcod 0'0 unknown NOTIFY pruub 101.701972961s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:28 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 59 pg[6.3( v 49'2 (0'0,49'2] local-lis/les=56/57 n=2 ec=43/27 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=8.355532646s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=49'2 mlcod 49'2 active pruub 101.701843262s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:28 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 59 pg[6.3( v 49'2 (0'0,49'2] local-lis/les=56/57 n=2 ec=43/27 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=8.355465889s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=49'2 mlcod 0'0 unknown NOTIFY pruub 101.701843262s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:28 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 59 pg[6.f( v 49'5 (0'0,49'5] local-lis/les=56/57 n=3 ec=43/27 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=8.355250359s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=49'5 mlcod 49'5 active pruub 101.701705933s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:28 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 59 pg[6.f( v 49'5 (0'0,49'5] local-lis/les=56/57 n=3 ec=43/27 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=8.354822159s) [0] r=-1 lpr=59 pi=[56,59)/1 crt=49'5 mlcod 0'0 unknown NOTIFY pruub 101.701705933s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:28 np0005532048 podman[104258]: 2025-11-22 08:35:28.506194158 +0000 UTC m=+0.182409514 container init 13c3d685b6a0ca127a427a84a88cc4afff9e3add17a41e880267ca210e147dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:35:28 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=56/56 les/c/f=57/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:28 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=56/56 les/c/f=57/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:28 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:28 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:28 np0005532048 podman[104258]: 2025-11-22 08:35:28.518086981 +0000 UTC m=+0.194302297 container start 13c3d685b6a0ca127a427a84a88cc4afff9e3add17a41e880267ca210e147dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:35:28 np0005532048 musing_sammet[104274]: 167 167
Nov 22 03:35:28 np0005532048 systemd[1]: libpod-13c3d685b6a0ca127a427a84a88cc4afff9e3add17a41e880267ca210e147dee.scope: Deactivated successfully.
Nov 22 03:35:28 np0005532048 podman[104258]: 2025-11-22 08:35:28.535840038 +0000 UTC m=+0.212055354 container attach 13c3d685b6a0ca127a427a84a88cc4afff9e3add17a41e880267ca210e147dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sammet, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 03:35:28 np0005532048 podman[104258]: 2025-11-22 08:35:28.537080978 +0000 UTC m=+0.213296254 container died 13c3d685b6a0ca127a427a84a88cc4afff9e3add17a41e880267ca210e147dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sammet, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:35:28 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b207132dcc4cb4c916db6bc309d96d97b3eadee445d651126c84166179b57f29-merged.mount: Deactivated successfully.
Nov 22 03:35:28 np0005532048 podman[104258]: 2025-11-22 08:35:28.650948374 +0000 UTC m=+0.327163650 container remove 13c3d685b6a0ca127a427a84a88cc4afff9e3add17a41e880267ca210e147dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sammet, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:35:28 np0005532048 systemd[1]: libpod-conmon-13c3d685b6a0ca127a427a84a88cc4afff9e3add17a41e880267ca210e147dee.scope: Deactivated successfully.
Nov 22 03:35:28 np0005532048 podman[104300]: 2025-11-22 08:35:28.836429722 +0000 UTC m=+0.044003785 container create 199f170e6967802b2ca14f2e3cf7d55a0904fbbe6444b0040766bed1ab9fa3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:35:28 np0005532048 systemd[1]: Started libpod-conmon-199f170e6967802b2ca14f2e3cf7d55a0904fbbe6444b0040766bed1ab9fa3e9.scope.
Nov 22 03:35:28 np0005532048 podman[104300]: 2025-11-22 08:35:28.814464591 +0000 UTC m=+0.022038714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:28 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a217962642f30e715cd43ff00f13fc0cb896793a8194fa0ef6ba7f6b2adf1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a217962642f30e715cd43ff00f13fc0cb896793a8194fa0ef6ba7f6b2adf1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a217962642f30e715cd43ff00f13fc0cb896793a8194fa0ef6ba7f6b2adf1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a217962642f30e715cd43ff00f13fc0cb896793a8194fa0ef6ba7f6b2adf1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:28 np0005532048 podman[104300]: 2025-11-22 08:35:28.942584617 +0000 UTC m=+0.150158700 container init 199f170e6967802b2ca14f2e3cf7d55a0904fbbe6444b0040766bed1ab9fa3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamport, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:35:28 np0005532048 podman[104300]: 2025-11-22 08:35:28.957149755 +0000 UTC m=+0.164723818 container start 199f170e6967802b2ca14f2e3cf7d55a0904fbbe6444b0040766bed1ab9fa3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamport, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:35:28 np0005532048 podman[104300]: 2025-11-22 08:35:28.978985133 +0000 UTC m=+0.186559286 container attach 199f170e6967802b2ca14f2e3cf7d55a0904fbbe6444b0040766bed1ab9fa3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 03:35:29 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 22 03:35:29 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 22 03:35:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 22 03:35:29 np0005532048 ceph-mon[75021]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 6/102 objects degraded (5.882%), 4 pgs degraded)
Nov 22 03:35:29 np0005532048 ceph-mon[75021]: Cluster is now healthy
Nov 22 03:35:29 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 22 03:35:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 22 03:35:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 22 03:35:29 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 60 pg[6.3( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=59/60 n=2 ec=43/27 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=49'2 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:29 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 60 pg[6.f( v 49'5 lc 49'1 (0'0,49'5] local-lis/les=59/60 n=3 ec=43/27 lis/c=56/56 les/c/f=57/57/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=49'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:29 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 60 pg[6.7( v 49'2 lc 49'1 (0'0,49'2] local-lis/les=59/60 n=1 ec=43/27 lis/c=56/56 les/c/f=57/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=49'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:29 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 60 pg[6.b( v 49'3 lc 0'0 (0'0,49'3] local-lis/les=59/60 n=1 ec=43/27 lis/c=56/56 les/c/f=57/58/0 sis=59) [0] r=0 lpr=59 pi=[56,59)/1 crt=49'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]: {
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:    "0": [
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:        {
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "devices": [
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "/dev/loop3"
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            ],
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "lv_name": "ceph_lv0",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "lv_size": "21470642176",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "name": "ceph_lv0",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "tags": {
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.cluster_name": "ceph",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.crush_device_class": "",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.encrypted": "0",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.osd_id": "0",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.type": "block",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.vdo": "0"
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            },
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "type": "block",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "vg_name": "ceph_vg0"
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:        }
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:    ],
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:    "1": [
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:        {
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "devices": [
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "/dev/loop4"
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            ],
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "lv_name": "ceph_lv1",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "lv_size": "21470642176",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "name": "ceph_lv1",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "tags": {
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.cluster_name": "ceph",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.crush_device_class": "",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.encrypted": "0",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.osd_id": "1",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.type": "block",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.vdo": "0"
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            },
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "type": "block",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "vg_name": "ceph_vg1"
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:        }
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:    ],
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:    "2": [
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:        {
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "devices": [
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "/dev/loop5"
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            ],
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "lv_name": "ceph_lv2",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "lv_size": "21470642176",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "name": "ceph_lv2",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "tags": {
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.cluster_name": "ceph",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.crush_device_class": "",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.encrypted": "0",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.osd_id": "2",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.type": "block",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:                "ceph.vdo": "0"
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            },
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "type": "block",
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:            "vg_name": "ceph_vg2"
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:        }
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]:    ]
Nov 22 03:35:29 np0005532048 recursing_lamport[104317]: }
Nov 22 03:35:29 np0005532048 systemd[1]: libpod-199f170e6967802b2ca14f2e3cf7d55a0904fbbe6444b0040766bed1ab9fa3e9.scope: Deactivated successfully.
Nov 22 03:35:29 np0005532048 podman[104300]: 2025-11-22 08:35:29.797927975 +0000 UTC m=+1.005502048 container died 199f170e6967802b2ca14f2e3cf7d55a0904fbbe6444b0040766bed1ab9fa3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamport, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:35:29 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c5a217962642f30e715cd43ff00f13fc0cb896793a8194fa0ef6ba7f6b2adf1d-merged.mount: Deactivated successfully.
Nov 22 03:35:29 np0005532048 podman[104300]: 2025-11-22 08:35:29.91791499 +0000 UTC m=+1.125489073 container remove 199f170e6967802b2ca14f2e3cf7d55a0904fbbe6444b0040766bed1ab9fa3e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamport, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:35:29 np0005532048 systemd[1]: libpod-conmon-199f170e6967802b2ca14f2e3cf7d55a0904fbbe6444b0040766bed1ab9fa3e9.scope: Deactivated successfully.
Nov 22 03:35:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v160: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 46 op/s; 24 B/s, 1 keys/s, 1 objects/s recovering
Nov 22 03:35:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 22 03:35:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 22 03:35:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 22 03:35:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 22 03:35:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 22 03:35:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 22 03:35:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 22 03:35:30 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 22 03:35:30 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 22 03:35:30 np0005532048 podman[104481]: 2025-11-22 08:35:30.599933139 +0000 UTC m=+0.042309674 container create 404dd7fd92a7731cb507c3766ef7cca3d27ab054d6d9c371a1fa1a0165878423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 03:35:30 np0005532048 systemd[1]: Started libpod-conmon-404dd7fd92a7731cb507c3766ef7cca3d27ab054d6d9c371a1fa1a0165878423.scope.
Nov 22 03:35:30 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:30 np0005532048 podman[104481]: 2025-11-22 08:35:30.580966021 +0000 UTC m=+0.023342586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:30 np0005532048 podman[104481]: 2025-11-22 08:35:30.692162841 +0000 UTC m=+0.134539386 container init 404dd7fd92a7731cb507c3766ef7cca3d27ab054d6d9c371a1fa1a0165878423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_knuth, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:35:30 np0005532048 podman[104481]: 2025-11-22 08:35:30.701171162 +0000 UTC m=+0.143547707 container start 404dd7fd92a7731cb507c3766ef7cca3d27ab054d6d9c371a1fa1a0165878423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:35:30 np0005532048 podman[104481]: 2025-11-22 08:35:30.705889368 +0000 UTC m=+0.148265903 container attach 404dd7fd92a7731cb507c3766ef7cca3d27ab054d6d9c371a1fa1a0165878423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_knuth, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 03:35:30 np0005532048 crazy_knuth[104497]: 167 167
Nov 22 03:35:30 np0005532048 systemd[1]: libpod-404dd7fd92a7731cb507c3766ef7cca3d27ab054d6d9c371a1fa1a0165878423.scope: Deactivated successfully.
Nov 22 03:35:30 np0005532048 podman[104481]: 2025-11-22 08:35:30.708081812 +0000 UTC m=+0.150458397 container died 404dd7fd92a7731cb507c3766ef7cca3d27ab054d6d9c371a1fa1a0165878423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_knuth, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:35:30 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e73b88d7a5a228aa4866bfb595bbfb1d71e6c23366fb4429c415a01efdb998e3-merged.mount: Deactivated successfully.
Nov 22 03:35:30 np0005532048 podman[104481]: 2025-11-22 08:35:30.75590443 +0000 UTC m=+0.198280965 container remove 404dd7fd92a7731cb507c3766ef7cca3d27ab054d6d9c371a1fa1a0165878423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 03:35:30 np0005532048 systemd[1]: libpod-conmon-404dd7fd92a7731cb507c3766ef7cca3d27ab054d6d9c371a1fa1a0165878423.scope: Deactivated successfully.
Nov 22 03:35:30 np0005532048 podman[104522]: 2025-11-22 08:35:30.925805374 +0000 UTC m=+0.048655208 container create a889360528d959bfaabd0bc6c7bbf6a66bb4d2e4b0d2f12ff4a4a3417d9b7ba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bhabha, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 03:35:30 np0005532048 systemd[1]: Started libpod-conmon-a889360528d959bfaabd0bc6c7bbf6a66bb4d2e4b0d2f12ff4a4a3417d9b7ba9.scope.
Nov 22 03:35:30 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:35:30 np0005532048 podman[104522]: 2025-11-22 08:35:30.903962717 +0000 UTC m=+0.026812571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:30 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34e859f981d0f18dcda95d16cb35a94f66b7a3e1e3c851bf6c8fedc64e315cc2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:30 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34e859f981d0f18dcda95d16cb35a94f66b7a3e1e3c851bf6c8fedc64e315cc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:30 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34e859f981d0f18dcda95d16cb35a94f66b7a3e1e3c851bf6c8fedc64e315cc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34e859f981d0f18dcda95d16cb35a94f66b7a3e1e3c851bf6c8fedc64e315cc2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:31 np0005532048 podman[104522]: 2025-11-22 08:35:31.013816592 +0000 UTC m=+0.136666426 container init a889360528d959bfaabd0bc6c7bbf6a66bb4d2e4b0d2f12ff4a4a3417d9b7ba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:35:31 np0005532048 podman[104522]: 2025-11-22 08:35:31.021182925 +0000 UTC m=+0.144032759 container start a889360528d959bfaabd0bc6c7bbf6a66bb4d2e4b0d2f12ff4a4a3417d9b7ba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bhabha, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:35:31 np0005532048 podman[104522]: 2025-11-22 08:35:31.025989783 +0000 UTC m=+0.148839617 container attach a889360528d959bfaabd0bc6c7bbf6a66bb4d2e4b0d2f12ff4a4a3417d9b7ba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 03:35:31 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 22 03:35:31 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 22 03:35:31 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 61 pg[6.c( v 49'2 (0'0,49'2] local-lis/les=43/44 n=1 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=61 pruub=8.154572487s) [1] r=-1 lpr=61 pi=[43,61)/1 crt=49'2 lcod 49'1 mlcod 49'1 active pruub 110.898872375s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:31 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 61 pg[6.4( v 49'6 (0'0,49'6] local-lis/les=43/44 n=4 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=61 pruub=8.147883415s) [1] r=-1 lpr=61 pi=[43,61)/1 crt=49'6 lcod 49'5 mlcod 49'5 active pruub 110.892440796s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:31 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 61 pg[6.4( v 49'6 (0'0,49'6] local-lis/les=43/44 n=4 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=61 pruub=8.147791862s) [1] r=-1 lpr=61 pi=[43,61)/1 crt=49'6 lcod 49'5 mlcod 0'0 unknown NOTIFY pruub 110.892440796s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:31 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 61 pg[6.c( v 49'2 (0'0,49'2] local-lis/les=43/44 n=1 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=61 pruub=8.153409004s) [1] r=-1 lpr=61 pi=[43,61)/1 crt=49'2 lcod 49'1 mlcod 0'0 unknown NOTIFY pruub 110.898872375s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:31 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 61 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=61) [1] r=0 lpr=61 pi=[43,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:31 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 61 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=61) [1] r=0 lpr=61 pi=[43,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 22 03:35:31 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 22 03:35:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 22 03:35:31 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 22 03:35:31 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 62 pg[6.4( v 49'6 lc 49'1 (0'0,49'6] local-lis/les=61/62 n=4 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=61) [1] r=0 lpr=61 pi=[43,61)/1 crt=49'6 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:31 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 62 pg[6.c( v 49'2 lc 49'1 (0'0,49'2] local-lis/les=61/62 n=1 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=61) [1] r=0 lpr=61 pi=[43,61)/1 crt=49'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]: {
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:        "osd_id": 1,
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:        "type": "bluestore"
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:    },
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:        "osd_id": 0,
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:        "type": "bluestore"
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:    },
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:        "osd_id": 2,
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:        "type": "bluestore"
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]:    }
Nov 22 03:35:32 np0005532048 cranky_bhabha[104538]: }
Nov 22 03:35:32 np0005532048 podman[104522]: 2025-11-22 08:35:32.055661804 +0000 UTC m=+1.178511658 container died a889360528d959bfaabd0bc6c7bbf6a66bb4d2e4b0d2f12ff4a4a3417d9b7ba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bhabha, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:35:32 np0005532048 systemd[1]: libpod-a889360528d959bfaabd0bc6c7bbf6a66bb4d2e4b0d2f12ff4a4a3417d9b7ba9.scope: Deactivated successfully.
Nov 22 03:35:32 np0005532048 systemd[1]: libpod-a889360528d959bfaabd0bc6c7bbf6a66bb4d2e4b0d2f12ff4a4a3417d9b7ba9.scope: Consumed 1.037s CPU time.
Nov 22 03:35:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v163: 181 pgs: 2 peering, 179 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 79 op/s
Nov 22 03:35:32 np0005532048 systemd[1]: var-lib-containers-storage-overlay-34e859f981d0f18dcda95d16cb35a94f66b7a3e1e3c851bf6c8fedc64e315cc2-merged.mount: Deactivated successfully.
Nov 22 03:35:32 np0005532048 podman[104522]: 2025-11-22 08:35:32.153508494 +0000 UTC m=+1.276358328 container remove a889360528d959bfaabd0bc6c7bbf6a66bb4d2e4b0d2f12ff4a4a3417d9b7ba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bhabha, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:35:32 np0005532048 systemd[1]: libpod-conmon-a889360528d959bfaabd0bc6c7bbf6a66bb4d2e4b0d2f12ff4a4a3417d9b7ba9.scope: Deactivated successfully.
Nov 22 03:35:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:35:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:35:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:32 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 7671ecb3-4607-4e25-a2a2-fd2b79dc96cf does not exist
Nov 22 03:35:32 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 316e1b57-8bde-4c2c-b0cf-58f060f183b6 does not exist
Nov 22 03:35:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:35:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 22 03:35:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 22 03:35:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v164: 181 pgs: 2 peering, 179 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 56 op/s; 113 B/s, 1 keys/s, 1 objects/s recovering
Nov 22 03:35:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 22 03:35:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 22 03:35:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v165: 181 pgs: 2 peering, 179 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 48 op/s; 97 B/s, 1 keys/s, 1 objects/s recovering
Nov 22 03:35:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Nov 22 03:35:37 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 22 03:35:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Nov 22 03:35:37 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 22 03:35:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v166: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 39 op/s; 275 B/s, 1 keys/s, 1 objects/s recovering
Nov 22 03:35:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 22 03:35:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 22 03:35:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 22 03:35:38 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 22 03:35:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 22 03:35:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 22 03:35:38 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 22 03:35:38 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 63 pg[6.5( v 49'3 (0'0,49'3] local-lis/les=56/57 n=2 ec=43/27 lis/c=56/56 les/c/f=57/58/0 sis=63 pruub=14.151795387s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=49'3 mlcod 49'3 active pruub 117.702339172s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:38 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 63 pg[6.5( v 49'3 (0'0,49'3] local-lis/les=56/57 n=2 ec=43/27 lis/c=56/56 les/c/f=57/58/0 sis=63 pruub=14.151718140s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=49'3 mlcod 0'0 unknown NOTIFY pruub 117.702339172s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:38 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 63 pg[6.d( v 49'3 (0'0,49'3] local-lis/les=56/57 n=2 ec=43/27 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.151336670s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=49'3 mlcod 49'3 active pruub 117.702056885s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:38 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 63 pg[6.d( v 49'3 (0'0,49'3] local-lis/les=56/57 n=2 ec=43/27 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.151265144s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=49'3 mlcod 0'0 unknown NOTIFY pruub 117.702056885s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:38 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 63 pg[6.5( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=56/56 les/c/f=57/58/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:38 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 63 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:39 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 22 03:35:39 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 22 03:35:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 22 03:35:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 22 03:35:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 22 03:35:39 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 22 03:35:39 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 64 pg[6.d( v 49'3 lc 49'1 (0'0,49'3] local-lis/les=63/64 n=2 ec=43/27 lis/c=56/56 les/c/f=57/57/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=49'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:39 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 64 pg[6.5( v 49'3 lc 49'1 (0'0,49'3] local-lis/les=63/64 n=2 ec=43/27 lis/c=56/56 les/c/f=57/58/0 sis=63) [0] r=0 lpr=63 pi=[56,63)/1 crt=49'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v169: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 275 B/s, 1 keys/s, 1 objects/s recovering
Nov 22 03:35:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 22 03:35:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 22 03:35:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 22 03:35:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 22 03:35:40 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Nov 22 03:35:40 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Nov 22 03:35:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 22 03:35:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 22 03:35:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 22 03:35:40 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 22 03:35:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 22 03:35:41 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 22 03:35:41 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 22 03:35:41 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 22 03:35:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v171: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 260 B/s, 0 objects/s recovering
Nov 22 03:35:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 22 03:35:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 22 03:35:42 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 22 03:35:42 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 22 03:35:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 22 03:35:42 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 22 03:35:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 22 03:35:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 22 03:35:42 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 22 03:35:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:43 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 22 03:35:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v173: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 0 objects/s recovering
Nov 22 03:35:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 22 03:35:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 22 03:35:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 22 03:35:45 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 22 03:35:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 22 03:35:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 22 03:35:45 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 22 03:35:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 22 03:35:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 22 03:35:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v175: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 0 objects/s recovering
Nov 22 03:35:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 22 03:35:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 22 03:35:46 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 22 03:35:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 22 03:35:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 22 03:35:46 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.6 deep-scrub starts
Nov 22 03:35:46 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.6 deep-scrub ok
Nov 22 03:35:46 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 22 03:35:46 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 22 03:35:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 22 03:35:46 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 22 03:35:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 22 03:35:47 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 22 03:35:47 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 22 03:35:47 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 67 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=67 pruub=8.275763512s) [2] r=-1 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 active pruub 126.899208069s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:47 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 68 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=67 pruub=8.275479317s) [2] r=-1 lpr=67 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.899208069s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:47 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 68 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=68 pi=[43,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:47 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 68 pg[6.9( empty local-lis/les=56/57 n=0 ec=43/27 lis/c=56/56 les/c/f=57/57/0 sis=68 pruub=13.421555519s) [0] r=-1 lpr=68 pi=[56,68)/1 crt=0'0 mlcod 0'0 active pruub 125.702613831s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:47 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 68 pg[6.9( empty local-lis/les=56/57 n=0 ec=43/27 lis/c=56/56 les/c/f=57/57/0 sis=68 pruub=13.421501160s) [0] r=-1 lpr=68 pi=[56,68)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.702613831s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:47 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 68 pg[6.9( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=56/56 les/c/f=57/57/0 sis=68) [0] r=0 lpr=68 pi=[56,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 22 03:35:47 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 22 03:35:47 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 22 03:35:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 22 03:35:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 22 03:35:47 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 22 03:35:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v178: 181 pgs: 1 peering, 180 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 31 B/s, 0 objects/s recovering
Nov 22 03:35:48 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 69 pg[6.9( empty local-lis/les=68/69 n=0 ec=43/27 lis/c=56/56 les/c/f=57/57/0 sis=68) [0] r=0 lpr=68 pi=[56,68)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:48 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 69 pg[6.8( empty local-lis/les=67/69 n=0 ec=43/27 lis/c=43/43 les/c/f=44/44/0 sis=67) [2] r=0 lpr=68 pi=[43,67)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 22 03:35:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 22 03:35:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:48 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.1e deep-scrub starts
Nov 22 03:35:48 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.1e deep-scrub ok
Nov 22 03:35:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.12 deep-scrub starts
Nov 22 03:35:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.12 deep-scrub ok
Nov 22 03:35:49 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 22 03:35:49 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 22 03:35:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v179: 181 pgs: 1 peering, 180 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:50 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 22 03:35:50 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 22 03:35:51 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 22 03:35:51 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 22 03:35:51 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 22 03:35:51 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:35:52
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Some PGs (0.005525) are inactive; try again later
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v180: 181 pgs: 1 peering, 180 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:52 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.d deep-scrub starts
Nov 22 03:35:52 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.d deep-scrub ok
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:35:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:35:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:53 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 22 03:35:53 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 22 03:35:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v181: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 22 03:35:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 22 03:35:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 22 03:35:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 22 03:35:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 22 03:35:54 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 22 03:35:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 22 03:35:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 22 03:35:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 22 03:35:54 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 70 pg[6.a( v 49'1 (0'0,49'1] local-lis/les=57/58 n=0 ec=43/27 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=15.026946068s) [0] r=-1 lpr=70 pi=[57,70)/1 crt=49'1 lcod 0'0 mlcod 0'0 active pruub 134.774215698s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:54 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 70 pg[6.a( v 49'1 (0'0,49'1] local-lis/les=57/58 n=0 ec=43/27 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=15.026720047s) [0] r=-1 lpr=70 pi=[57,70)/1 crt=49'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.774215698s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:54 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 70 pg[6.a( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=57/57 les/c/f=58/58/0 sis=70) [0] r=0 lpr=70 pi=[57,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 22 03:35:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 22 03:35:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 22 03:35:55 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 22 03:35:55 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 22 03:35:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 22 03:35:55 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 22 03:35:55 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 22 03:35:55 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 71 pg[6.a( v 49'1 (0'0,49'1] local-lis/les=70/71 n=0 ec=43/27 lis/c=57/57 les/c/f=58/58/0 sis=70) [0] r=0 lpr=70 pi=[57,70)/1 crt=49'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v184: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 22 03:35:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 22 03:35:56 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 22 03:35:56 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 22 03:35:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 22 03:35:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 22 03:35:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 22 03:35:56 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 22 03:35:56 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 72 pg[6.b( v 49'3 (0'0,49'3] local-lis/les=59/60 n=1 ec=43/27 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=12.642930031s) [1] r=-1 lpr=72 pi=[59,72)/1 crt=49'3 mlcod 49'3 active pruub 140.865493774s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:35:56 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 72 pg[6.b( v 49'3 (0'0,49'3] local-lis/les=59/60 n=1 ec=43/27 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=12.642870903s) [1] r=-1 lpr=72 pi=[59,72)/1 crt=49'3 mlcod 0'0 unknown NOTIFY pruub 140.865493774s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:35:56 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 22 03:35:56 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 72 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=59/59 les/c/f=60/60/0 sis=72) [1] r=0 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:35:57 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 22 03:35:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 22 03:35:57 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 22 03:35:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 22 03:35:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 22 03:35:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 22 03:35:58 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 22 03:35:58 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v187: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 22 03:35:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 22 03:35:58 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 73 pg[6.b( v 49'3 lc 0'0 (0'0,49'3] local-lis/les=72/73 n=1 ec=43/27 lis/c=59/59 les/c/f=60/60/0 sis=72) [1] r=0 lpr=72 pi=[59,72)/1 crt=49'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:35:58 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 22 03:35:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:58 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 22 03:35:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 22 03:35:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:58 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 03:35:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:35:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:35:58 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 22 03:35:58 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 22 03:35:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 22 03:35:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 22 03:35:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:35:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 22 03:35:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 22 03:35:59 np0005532048 ceph-mgr[75315]: [progress INFO root] update: starting ev 62b4d49c-0bd3-4475-a0b0-965bc8d47453 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 22 03:35:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:35:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:35:59 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 22 03:35:59 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:35:59 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.18 deep-scrub starts
Nov 22 03:35:59 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.18 deep-scrub ok
Nov 22 03:36:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v189: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 22 03:36:00 np0005532048 ceph-mgr[75315]: [progress INFO root] update: starting ev e9b5c7ae-55b4-44e8-bcf9-131f77369193 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 22 03:36:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:36:00 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 75 pg[8.0( v 48'4 (0'0,48'4] local-lis/les=47/48 n=4 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=75 pruub=8.688727379s) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 48'3 mlcod 48'3 active pruub 133.932708740s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:00 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 75 pg[8.0( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=75 pruub=8.688727379s) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 48'3 mlcod 0'0 unknown pruub 133.932708740s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:00 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 75 pg[6.d( v 49'3 (0'0,49'3] local-lis/les=63/64 n=2 ec=43/27 lis/c=63/63 les/c/f=64/64/0 sis=75 pruub=11.390882492s) [1] r=-1 lpr=75 pi=[63,75)/1 crt=49'3 mlcod 49'3 active pruub 143.233444214s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:00 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 75 pg[6.d( v 49'3 (0'0,49'3] local-lis/les=63/64 n=2 ec=43/27 lis/c=63/63 les/c/f=64/64/0 sis=75 pruub=11.390639305s) [1] r=-1 lpr=75 pi=[63,75)/1 crt=49'3 mlcod 0'0 unknown NOTIFY pruub 143.233444214s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:00 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 75 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=63/63 les/c/f=64/64/0 sis=75) [1] r=0 lpr=75 pi=[63,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 22 03:36:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:36:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 22 03:36:01 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 22 03:36:01 np0005532048 ceph-mgr[75315]: [progress INFO root] update: starting ev 30891ed0-0279-4436-b437-2709c2c4982e (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 22 03:36:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:36:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.1a( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.a( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.13( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.11( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.12( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.1c( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.1d( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.1f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.1e( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.19( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.18( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.4( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.5( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.6( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.7( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.1b( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.9( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.8( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.e( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.d( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.b( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.c( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.3( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.2( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.1( v 48'4 (0'0,48'4] local-lis/les=47/48 n=1 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.17( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.10( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.16( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.15( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.14( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.a( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.19( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.1e( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.13( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=75/76 n=1 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.7( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.5( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.0( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 48'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.8( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[6.d( v 49'3 lc 49'1 (0'0,49'3] local-lis/les=75/76 n=2 ec=43/27 lis/c=63/63 les/c/f=64/64/0 sis=75) [1] r=0 lpr=75 pi=[63,75)/1 crt=49'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.3( v 48'4 (0'0,48'4] local-lis/les=75/76 n=1 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.1( v 48'4 (0'0,48'4] local-lis/les=75/76 n=1 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.17( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.16( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 76 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=75/76 n=1 ec=75/47 lis/c=47/47 les/c/f=48/48/0 sis=75) [1] r=0 lpr=75 pi=[47,75)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:36:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 22 03:36:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 22 03:36:01 np0005532048 python3[104658]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:36:01 np0005532048 podman[104659]: 2025-11-22 08:36:01.667158757 +0000 UTC m=+0.051001186 container create 7afe4bc815809e3da5f11c89132f3940b80bee58f27299e63ef4fab5a9cb01bd (image=quay.io/ceph/ceph:v18, name=quirky_greider, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:36:01 np0005532048 systemd[1]: Started libpod-conmon-7afe4bc815809e3da5f11c89132f3940b80bee58f27299e63ef4fab5a9cb01bd.scope.
Nov 22 03:36:01 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:36:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22ea5b988070a498395e172ea217df82c80e5d66a416bc7dc02f0b2bd259fa0f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22ea5b988070a498395e172ea217df82c80e5d66a416bc7dc02f0b2bd259fa0f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:01 np0005532048 podman[104659]: 2025-11-22 08:36:01.641300001 +0000 UTC m=+0.025142450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:36:01 np0005532048 podman[104659]: 2025-11-22 08:36:01.742474123 +0000 UTC m=+0.126316572 container init 7afe4bc815809e3da5f11c89132f3940b80bee58f27299e63ef4fab5a9cb01bd (image=quay.io/ceph/ceph:v18, name=quirky_greider, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:36:01 np0005532048 podman[104659]: 2025-11-22 08:36:01.74884103 +0000 UTC m=+0.132683459 container start 7afe4bc815809e3da5f11c89132f3940b80bee58f27299e63ef4fab5a9cb01bd (image=quay.io/ceph/ceph:v18, name=quirky_greider, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:36:01 np0005532048 podman[104659]: 2025-11-22 08:36:01.760656011 +0000 UTC m=+0.144498460 container attach 7afe4bc815809e3da5f11c89132f3940b80bee58f27299e63ef4fab5a9cb01bd (image=quay.io/ceph/ceph:v18, name=quirky_greider, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:36:02 np0005532048 quirky_greider[104674]: could not fetch user info: no user info saved
Nov 22 03:36:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v192: 212 pgs: 212 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 0 objects/s recovering
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 22 03:36:02 np0005532048 systemd[1]: libpod-7afe4bc815809e3da5f11c89132f3940b80bee58f27299e63ef4fab5a9cb01bd.scope: Deactivated successfully.
Nov 22 03:36:02 np0005532048 podman[104659]: 2025-11-22 08:36:02.123090348 +0000 UTC m=+0.506932777 container died 7afe4bc815809e3da5f11c89132f3940b80bee58f27299e63ef4fab5a9cb01bd (image=quay.io/ceph/ceph:v18, name=quirky_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 22 03:36:02 np0005532048 ceph-mgr[75315]: [progress INFO root] update: starting ev 562c07b7-0c76-4beb-9f59-257ee9e9c773 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 22 03:36:02 np0005532048 ceph-mgr[75315]: [progress INFO root] complete: finished ev 62b4d49c-0bd3-4475-a0b0-965bc8d47453 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 22 03:36:02 np0005532048 ceph-mgr[75315]: [progress INFO root] Completed event 62b4d49c-0bd3-4475-a0b0-965bc8d47453 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Nov 22 03:36:02 np0005532048 ceph-mgr[75315]: [progress INFO root] complete: finished ev e9b5c7ae-55b4-44e8-bcf9-131f77369193 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 22 03:36:02 np0005532048 ceph-mgr[75315]: [progress INFO root] Completed event e9b5c7ae-55b4-44e8-bcf9-131f77369193 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Nov 22 03:36:02 np0005532048 ceph-mgr[75315]: [progress INFO root] complete: finished ev 30891ed0-0279-4436-b437-2709c2c4982e (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 22 03:36:02 np0005532048 ceph-mgr[75315]: [progress INFO root] Completed event 30891ed0-0279-4436-b437-2709c2c4982e (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 22 03:36:02 np0005532048 ceph-mgr[75315]: [progress INFO root] complete: finished ev 562c07b7-0c76-4beb-9f59-257ee9e9c773 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 22 03:36:02 np0005532048 ceph-mgr[75315]: [progress INFO root] Completed event 562c07b7-0c76-4beb-9f59-257ee9e9c773 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.004915237s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.023727417s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.004853249s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.023727417s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.003620148s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.023666382s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[9.0( v 76'387 (0'0,76'387] local-lis/les=49/50 n=177 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=77 pruub=9.475325584s) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 76'386 mlcod 76'386 active pruub 136.495407104s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.003567696s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.023666382s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=75/76 n=1 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.003580093s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.023849487s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=75/76 n=1 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.003561974s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.023849487s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.002988815s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.023468018s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.002970695s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.023468018s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.003183365s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.023788452s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.003201485s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.023818970s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.002940178s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.023590088s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.003107071s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.023788452s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.003113747s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.023818970s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.002338409s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.023345947s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.002314568s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.023345947s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.002048492s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.023239136s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.002284050s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.023437500s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.002022743s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.023239136s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.002172470s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.023437500s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.001852036s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.023315430s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.001823425s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.023315430s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=75/76 n=1 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.001465797s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.023071289s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=75/76 n=1 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.001438141s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.023071289s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.001713753s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.023376465s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.002464294s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.023590088s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.001264572s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.023117065s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.001239777s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.023117065s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.001448631s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.023376465s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.000689507s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.022644043s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.000659943s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.022644043s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.000511169s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.022537231s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.000494003s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.022537231s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.000348091s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.022476196s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=14.987961769s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.010192871s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=14.987942696s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.010192871s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=14.987511635s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.009948730s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=15.000077248s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.022476196s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=14.987524033s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 142.010009766s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=14.987482071s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.009948730s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=75/76 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=14.987491608s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.010009766s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:02 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 77 pg[9.0( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=5 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=77 pruub=9.475325584s) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 76'386 mlcod 0'0 unknown pruub 136.495407104s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 77 pg[8.10( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 77 pg[8.b( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 77 pg[8.6( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 77 pg[8.9( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 77 pg[8.f( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 77 pg[8.e( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 77 pg[8.c( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 systemd[1]: var-lib-containers-storage-overlay-22ea5b988070a498395e172ea217df82c80e5d66a416bc7dc02f0b2bd259fa0f-merged.mount: Deactivated successfully.
Nov 22 03:36:02 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 77 pg[8.18( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 77 pg[8.1a( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 77 pg[8.14( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 77 pg[8.1f( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 77 pg[8.1d( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 77 pg[8.15( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 77 pg[8.2( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 77 pg[8.d( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 77 pg[8.4( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 77 pg[8.12( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 77 pg[8.11( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 77 pg[8.1b( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 77 pg[8.1c( empty local-lis/les=0/0 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 77 pg[10.0( v 76'40 (0'0,76'40] local-lis/les=51/52 n=8 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=77 pruub=11.689751625s) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 76'39 mlcod 76'39 active pruub 133.483627319s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:02 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 77 pg[10.0( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=77 pruub=11.689751625s) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 76'39 mlcod 0'0 unknown pruub 133.483627319s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:36:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 22 03:36:02 np0005532048 podman[104659]: 2025-11-22 08:36:02.276671861 +0000 UTC m=+0.660514300 container remove 7afe4bc815809e3da5f11c89132f3940b80bee58f27299e63ef4fab5a9cb01bd (image=quay.io/ceph/ceph:v18, name=quirky_greider, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Nov 22 03:36:02 np0005532048 systemd[1]: libpod-conmon-7afe4bc815809e3da5f11c89132f3940b80bee58f27299e63ef4fab5a9cb01bd.scope: Deactivated successfully.
Nov 22 03:36:02 np0005532048 python3[104797]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 34829716-a12c-57a6-8915-c1aa615c9d8a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:36:02 np0005532048 podman[104798]: 2025-11-22 08:36:02.660774222 +0000 UTC m=+0.046565448 container create 880bf438212c31d8fa358692b1f47f781a684c4dfb308aa8fd011298c35bc1c8 (image=quay.io/ceph/ceph:v18, name=intelligent_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:36:02 np0005532048 systemd[1]: Started libpod-conmon-880bf438212c31d8fa358692b1f47f781a684c4dfb308aa8fd011298c35bc1c8.scope.
Nov 22 03:36:02 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:36:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d047e9d763bb9bf041b3c55be30240cfaf8ae7b71450294b5c7a383ed17d33/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d047e9d763bb9bf041b3c55be30240cfaf8ae7b71450294b5c7a383ed17d33/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:02 np0005532048 podman[104798]: 2025-11-22 08:36:02.640102062 +0000 UTC m=+0.025893318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:36:02 np0005532048 podman[104798]: 2025-11-22 08:36:02.741561022 +0000 UTC m=+0.127352248 container init 880bf438212c31d8fa358692b1f47f781a684c4dfb308aa8fd011298c35bc1c8 (image=quay.io/ceph/ceph:v18, name=intelligent_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:36:02 np0005532048 podman[104798]: 2025-11-22 08:36:02.747145179 +0000 UTC m=+0.132936405 container start 880bf438212c31d8fa358692b1f47f781a684c4dfb308aa8fd011298c35bc1c8 (image=quay.io/ceph/ceph:v18, name=intelligent_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:36:02 np0005532048 podman[104798]: 2025-11-22 08:36:02.752600814 +0000 UTC m=+0.138392060 container attach 880bf438212c31d8fa358692b1f47f781a684c4dfb308aa8fd011298c35bc1c8 (image=quay.io/ceph/ceph:v18, name=intelligent_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 03:36:02 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 22 03:36:02 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 22 03:36:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 22 03:36:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 22 03:36:03 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.15( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.14( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.17( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.16( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.11( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.3( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.2( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.d( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.c( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.f( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.9( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.e( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.8( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.a( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.1( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.6( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.1b( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.7( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.d( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.4( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.5( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.1a( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.b( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.18( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.19( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.a( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.1e( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.1e( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.19( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.1f( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.13( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.1c( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 78 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.1d( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.12( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.11( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.13( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.10( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.b( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.12( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.10( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.1f( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.1d( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.1b( v 76'387 lc 0'0 (0'0,76'387] local-lis/les=49/50 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.1c( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.1a( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.18( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.6( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=1 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.7( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=1 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.5( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=1 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.4( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=1 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.8( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=1 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.f( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.9( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.c( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.e( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.1( v 76'40 (0'0,76'40] local-lis/les=51/52 n=1 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.2( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=1 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.3( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=1 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.14( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.15( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.16( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.17( v 76'40 lc 0'0 (0'0,76'40] local-lis/les=51/52 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.1b( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 78 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 78 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 78 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 78 pg[8.14( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=48'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 78 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 78 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 78 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 78 pg[8.f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=48'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 78 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 78 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 78 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.14( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.11( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.0( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 76'386 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.d( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.c( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.2( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.e( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.9( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.8( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.6( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.1( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.4( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.5( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.3( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.1a( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.18( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.a( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.1d( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.b( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.10( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.1b( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 78 pg[9.12( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=49/49 les/c/f=50/50/0 sis=77) [1] r=0 lpr=77 pi=[49,77)/1 crt=76'387 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.b( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.1e( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.a( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.19( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.13( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.12( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.11( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.10( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.1f( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.1d( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.1c( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.18( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.6( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=77/78 n=1 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.7( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.5( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.4( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.8( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[8.d( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=48'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.f( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.9( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.c( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=77/78 n=1 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.0( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 76'39 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.1a( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.e( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.1( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.2( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.14( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.3( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.16( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.15( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.17( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[10.d( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 78 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=77/78 n=0 ec=75/47 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]: {
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "user_id": "openstack",
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "display_name": "openstack",
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "email": "",
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "suspended": 0,
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "max_buckets": 1000,
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "subusers": [],
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "keys": [
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:        {
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:            "user": "openstack",
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:            "access_key": "UCFHMZRXZLDFEKBLDJK0",
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:            "secret_key": "ABNJJ5hJnjGCCI13wnXBoCVA9A5CinBeuYAHBIsb"
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:        }
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    ],
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "swift_keys": [],
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "caps": [],
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "op_mask": "read, write, delete",
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "default_placement": "",
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "default_storage_class": "",
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "placement_tags": [],
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "bucket_quota": {
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:        "enabled": false,
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:        "check_on_raw": false,
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:        "max_size": -1,
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:        "max_size_kb": 0,
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:        "max_objects": -1
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    },
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "user_quota": {
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:        "enabled": false,
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:        "check_on_raw": false,
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:        "max_size": -1,
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:        "max_size_kb": 0,
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:        "max_objects": -1
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    },
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "temp_url_keys": [],
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "type": "rgw",
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]:    "mfa_ids": []
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]: }
Nov 22 03:36:03 np0005532048 intelligent_euler[104814]: 
Nov 22 03:36:03 np0005532048 ceph-mgr[75315]: [progress INFO root] Writing back 16 completed events
Nov 22 03:36:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 03:36:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:36:03 np0005532048 systemd[1]: libpod-880bf438212c31d8fa358692b1f47f781a684c4dfb308aa8fd011298c35bc1c8.scope: Deactivated successfully.
Nov 22 03:36:03 np0005532048 podman[104798]: 2025-11-22 08:36:03.48046927 +0000 UTC m=+0.866260486 container died 880bf438212c31d8fa358692b1f47f781a684c4dfb308aa8fd011298c35bc1c8 (image=quay.io/ceph/ceph:v18, name=intelligent_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 03:36:03 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c8d047e9d763bb9bf041b3c55be30240cfaf8ae7b71450294b5c7a383ed17d33-merged.mount: Deactivated successfully.
Nov 22 03:36:03 np0005532048 podman[104798]: 2025-11-22 08:36:03.52830673 +0000 UTC m=+0.914097956 container remove 880bf438212c31d8fa358692b1f47f781a684c4dfb308aa8fd011298c35bc1c8 (image=quay.io/ceph/ceph:v18, name=intelligent_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:36:03 np0005532048 systemd[1]: libpod-conmon-880bf438212c31d8fa358692b1f47f781a684c4dfb308aa8fd011298c35bc1c8.scope: Deactivated successfully.
Nov 22 03:36:03 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 22 03:36:03 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 22 03:36:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v195: 274 pgs: 21 peering, 62 unknown, 191 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 0 objects/s recovering
Nov 22 03:36:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:36:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:36:04 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.1b deep-scrub starts
Nov 22 03:36:04 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.1b deep-scrub ok
Nov 22 03:36:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 22 03:36:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:36:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 22 03:36:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 22 03:36:04 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 79 pg[11.0( v 78'2 (0'0,78'2] local-lis/les=53/54 n=2 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=79 pruub=11.498852730s) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 78'1 mlcod 78'1 active pruub 140.804672241s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:04 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 79 pg[11.0( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=79 pruub=11.498852730s) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 78'1 mlcod 0'0 unknown pruub 140.804672241s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:04 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:36:04 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:36:05 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 22 03:36:05 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 22 03:36:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 22 03:36:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 22 03:36:05 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.17( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.16( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.14( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.15( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.13( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.2( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=1 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.1( v 78'2 (0'0,78'2] local-lis/les=53/54 n=1 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.f( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.d( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.e( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.b( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.c( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.8( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.a( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.3( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.4( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.5( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.6( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.7( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.18( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.1b( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.1a( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.1c( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.1e( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.1d( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.1f( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.11( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.12( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.9( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.10( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.19( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=53/54 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.17( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.13( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.16( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.14( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.2( v 78'2 (0'0,78'2] local-lis/les=79/80 n=1 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.1( v 78'2 (0'0,78'2] local-lis/les=79/80 n=1 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.15( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.f( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.0( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 78'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.d( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.e( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.b( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.c( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.8( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.a( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.4( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.3( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.6( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.5( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.18( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.1a( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.7( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.1b( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.1d( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.1f( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.1c( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.11( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.12( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.1e( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.9( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.19( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:05 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 80 pg[11.10( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=53/53 les/c/f=54/54/0 sis=79) [1] r=0 lpr=79 pi=[53,79)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 21 peering, 93 unknown, 191 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:07 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts
Nov 22 03:36:07 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok
Nov 22 03:36:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 12 peering, 293 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 344 B/s wr, 1 op/s; 147 B/s, 0 objects/s recovering
Nov 22 03:36:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 22 03:36:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 22 03:36:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 22 03:36:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 22 03:36:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v200: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 295 B/s wr, 1 op/s; 199 B/s, 0 objects/s recovering
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 22 03:36:10 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[6.f( v 49'5 (0'0,49'5] local-lis/les=59/60 n=3 ec=43/27 lis/c=59/59 les/c/f=60/60/0 sis=81 pruub=14.576889038s) [2] r=-1 lpr=81 pi=[59,81)/1 crt=49'5 mlcod 49'5 active pruub 156.850494385s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[6.f( v 49'5 (0'0,49'5] local-lis/les=59/60 n=3 ec=43/27 lis/c=59/59 les/c/f=60/60/0 sis=81 pruub=14.576817513s) [2] r=-1 lpr=81 pi=[59,81)/1 crt=49'5 mlcod 0'0 unknown NOTIFY pruub 156.850494385s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/27 lis/c=59/59 les/c/f=60/60/0 sis=81) [2] r=0 lpr=81 pi=[59,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.d( v 78'44 (0'0,78'44] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.208712578s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=78'44 lcod 78'43 mlcod 78'43 active pruub 138.791778564s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.d( v 78'44 (0'0,78'44] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.208662033s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=78'44 lcod 78'43 mlcod 0'0 unknown NOTIFY pruub 138.791778564s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.b( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.207224846s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active pruub 138.790496826s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.b( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.207158089s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.790496826s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.1e( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.207211494s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active pruub 138.790649414s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.19( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.207311630s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active pruub 138.790847778s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.19( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.207235336s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.790847778s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.12( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.207372665s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active pruub 138.791015625s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.1e( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206887245s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.790649414s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.12( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.207219124s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.791015625s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.13( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.207067490s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active pruub 138.790939331s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.10( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.207151413s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active pruub 138.791061401s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.10( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.207116127s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.791061401s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.11( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.207078934s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active pruub 138.791046143s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.11( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.207039833s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.791046143s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.1a( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.207009315s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active pruub 138.791152954s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.1a( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206960678s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.791152954s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.6( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206963539s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active pruub 138.791213989s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.7( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.207014084s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active pruub 138.791259766s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.6( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206935883s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.791213989s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.4( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206965446s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active pruub 138.791320801s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.8( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206921577s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active pruub 138.791336060s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.4( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206933022s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.791320801s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.f( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206903458s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active pruub 138.791366577s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.7( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206818581s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.791259766s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.8( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206868172s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.791336060s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.13( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206498146s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.790939331s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.f( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206873894s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.791366577s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.9( v 78'44 (0'0,78'44] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206728935s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=78'44 lcod 78'43 mlcod 78'43 active pruub 138.791397095s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.e( v 78'44 (0'0,78'44] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206726074s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=78'44 lcod 78'43 mlcod 78'43 active pruub 138.791503906s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.e( v 78'44 (0'0,78'44] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206695557s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=78'44 lcod 78'43 mlcod 0'0 unknown NOTIFY pruub 138.791503906s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.1( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206691742s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active pruub 138.791534424s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.1( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206673622s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.791534424s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.2( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206578255s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active pruub 138.791580200s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[10.d( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.2( v 76'40 (0'0,76'40] local-lis/les=77/78 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206554413s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.791580200s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.9( v 78'44 (0'0,78'44] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206690788s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=78'44 lcod 78'43 mlcod 0'0 unknown NOTIFY pruub 138.791397095s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.14( v 78'44 (0'0,78'44] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206427574s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=78'44 lcod 78'43 mlcod 78'43 active pruub 138.791625977s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.14( v 78'44 (0'0,78'44] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206388474s) [1] r=-1 lpr=81 pi=[77,81)/1 crt=78'44 lcod 78'43 mlcod 0'0 unknown NOTIFY pruub 138.791625977s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[10.1e( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.15( v 78'44 (0'0,78'44] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206191063s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=78'44 lcod 78'43 mlcod 78'43 active pruub 138.791717529s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.15( v 78'44 (0'0,78'44] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.206098557s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=78'44 lcod 78'43 mlcod 0'0 unknown NOTIFY pruub 138.791717529s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.16( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.205825806s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active pruub 138.791671753s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.16( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.205796242s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.791671753s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.17( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.205768585s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active pruub 138.791732788s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[10.17( v 76'40 (0'0,76'40] local-lis/les=77/78 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.205740929s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.791732788s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[10.8( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[10.7( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[10.4( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[10.15( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[10.e( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[10.1( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[10.9( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[10.16( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[10.17( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[10.11( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[10.13( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[10.10( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[10.1a( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[10.19( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[10.6( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[10.2( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[10.b( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[10.f( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[10.12( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[10.14( empty local-lis/les=0/0 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.17( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.779017448s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.631027222s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.15( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.784523010s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.636749268s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.15( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.784491539s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.636749268s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.186916351s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 144.039138794s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.187916756s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 144.040313721s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.186750412s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.039138794s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.187896729s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.040313721s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.17( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.778775215s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.631027222s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.14( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.783989906s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.636672974s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.14( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.783960342s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.636672974s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[11.15( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.11( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.186649323s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 144.039535522s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.1( v 78'2 (0'0,78'2] local-lis/les=79/80 n=1 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.783811569s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.636718750s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.2( v 78'2 (0'0,78'2] local-lis/les=79/80 n=1 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.783671379s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.636703491s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.11( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.186607361s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.039535522s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.1( v 78'2 (0'0,78'2] local-lis/les=79/80 n=1 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.783662796s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.636718750s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.3( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.186558723s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 144.039688110s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[9.17( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[9.15( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[11.17( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.2( v 78'2 (0'0,78'2] local-lis/les=79/80 n=1 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.783643723s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.636703491s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[11.d( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.f( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.783534050s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.636810303s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.f( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.783517838s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.636810303s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.3( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.186510086s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.039688110s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.d( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.186277390s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 144.039703369s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.d( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.186254501s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.039703369s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.e( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.783368111s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.636856079s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.d( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.783339500s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.636856079s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.e( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.783326149s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.636856079s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.d( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.783312798s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.636856079s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.186163902s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 144.039779663s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.b( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.783277512s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.636886597s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.186140060s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.039779663s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.b( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.783211708s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.636886597s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.9( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.186111450s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 144.039855957s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.9( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.186096191s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.039855957s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[11.14( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.8( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.783016205s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.636901855s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.8( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.782999992s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.636901855s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.3( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.782788277s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.636962891s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.3( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.782765388s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.636962891s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[11.1( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.1( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.185741425s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 144.040084839s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.1( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.185716629s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.040084839s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.6( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.782539368s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.636962891s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.6( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.782518387s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.636962891s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.4( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.782467842s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.636947632s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.185553551s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 144.040084839s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.4( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.782431602s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.636947632s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.185531616s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.040084839s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[11.f( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[11.b( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[11.3( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[11.8( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[9.11( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[9.3( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.5( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.184658051s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 144.040130615s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.5( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.184516907s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.040130615s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[9.d( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[11.2( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[11.e( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.18( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.780788422s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.637023926s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.18( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.780654907s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.637023926s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[9.f( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[11.18( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[9.9( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[9.1( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[11.6( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[9.7( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[11.4( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[9.5( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.1a( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776968002s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.637039185s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.1b( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776908875s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.637023926s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.180092812s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 144.040237427s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.1a( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776932716s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.637039185s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.1c( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776910782s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.637084961s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.180055618s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.040237427s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.1b( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776877403s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.637023926s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.1c( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776879311s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.637084961s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.180006981s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 144.040313721s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.1e( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776762962s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.637145996s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.1e( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776738167s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.637145996s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.1d( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.179856300s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 144.040374756s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[11.1a( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.1f( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776572227s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.637100220s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.1d( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.179828644s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.040374756s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.11( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776571274s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.637130737s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.1f( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776531219s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.637100220s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.11( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776545525s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.637130737s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.179741859s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 144.040451050s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.179717064s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.040451050s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[11.1b( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[11.1c( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[9.19( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[9.1d( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.9( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776381493s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.637161255s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.b( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.179627419s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 144.040466309s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.9( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776346207s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.637161255s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.b( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.179601669s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.040466309s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.12( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776436806s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.637145996s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.10( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776275635s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.637191772s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.10( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776245117s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.637191772s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.19( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776132584s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active pruub 146.637191772s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.12( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776214600s) [2] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.637145996s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[11.1e( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[11.19( v 78'2 (0'0,78'2] local-lis/les=79/80 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81 pruub=10.776105881s) [0] r=-1 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.637191772s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.1b( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.179400444s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 144.040512085s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.179219246s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.040313721s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 81 pg[9.1b( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81 pruub=8.179352760s) [0] r=-1 lpr=81 pi=[77,81)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.040512085s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[11.1f( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[11.11( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[9.13( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[11.9( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 81 pg[11.12( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[9.b( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[11.10( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[11.19( empty local-lis/les=0/0 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[9.1f( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 81 pg[9.1b( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 22 03:36:11 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 22 03:36:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 22 03:36:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 22 03:36:11 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.19( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.1f( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.1f( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.19( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.1d( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.1d( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.15( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.15( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.3( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 22 03:36:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:36:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 22 03:36:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.3( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.b( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.1d( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.b( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.1d( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.9( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.9( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.3( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.3( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.d( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.d( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.1( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.d( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.1( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.d( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.1b( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.f( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.1b( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.f( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.9( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.9( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.17( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.17( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.7( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.7( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.1( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.b( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.b( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.5( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.5( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.1( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.11( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.11( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.13( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[9.13( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.11( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.11( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.5( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.5( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.1b( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:11 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[9.1b( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[10.1( v 76'40 (0'0,76'40] local-lis/les=81/82 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[11.17( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 82 pg[6.f( v 49'5 lc 49'1 (0'0,49'5] local-lis/les=81/82 n=3 ec=43/27 lis/c=59/59 les/c/f=60/60/0 sis=81) [2] r=0 lpr=81 pi=[59,81)/1 crt=49'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[11.19( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[10.1e( v 76'40 (0'0,76'40] local-lis/les=81/82 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[11.1( v 78'2 (0'0,78'2] local-lis/les=81/82 n=1 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[11.f( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[10.e( v 78'44 lc 78'43 (0'0,78'44] local-lis/les=81/82 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=78'44 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[10.d( v 78'44 lc 78'43 (0'0,78'44] local-lis/les=81/82 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=78'44 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[10.16( v 76'40 (0'0,76'40] local-lis/les=81/82 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[11.e( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[10.17( v 76'40 (0'0,76'40] local-lis/les=81/82 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[11.6( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[10.7( v 76'40 (0'0,76'40] local-lis/les=81/82 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[10.4( v 76'40 (0'0,76'40] local-lis/les=81/82 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[11.14( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[10.15( v 78'44 lc 78'43 (0'0,78'44] local-lis/les=81/82 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=78'44 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[11.4( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[10.8( v 76'40 (0'0,76'40] local-lis/les=81/82 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[10.9( v 78'44 lc 78'43 (0'0,78'44] local-lis/les=81/82 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [0] r=0 lpr=81 pi=[77,81)/1 crt=78'44 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 82 pg[11.10( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [0] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[10.12( v 76'40 (0'0,76'40] local-lis/les=81/82 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[10.14( v 78'44 lc 78'43 (0'0,78'44] local-lis/les=81/82 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=78'44 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[10.b( v 76'40 (0'0,76'40] local-lis/les=81/82 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[10.f( v 76'40 (0'0,76'40] local-lis/les=81/82 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[10.2( v 76'40 (0'0,76'40] local-lis/les=81/82 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[10.6( v 76'40 (0'0,76'40] local-lis/les=81/82 n=1 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[10.19( v 76'40 (0'0,76'40] local-lis/les=81/82 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[10.1a( v 76'40 (0'0,76'40] local-lis/les=81/82 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[10.10( v 76'40 (0'0,76'40] local-lis/les=81/82 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[10.13( v 76'40 (0'0,76'40] local-lis/les=81/82 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 82 pg[10.11( v 76'40 (0'0,76'40] local-lis/les=81/82 n=0 ec=77/51 lis/c=77/77 les/c/f=78/78/0 sis=81) [1] r=0 lpr=81 pi=[77,81)/1 crt=76'40 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 82 pg[11.b( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 82 pg[11.1f( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 82 pg[11.18( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 82 pg[11.12( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 82 pg[11.1e( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 82 pg[11.11( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 82 pg[11.1b( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 82 pg[11.1a( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=78'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 82 pg[11.9( v 78'2 lc 0'0 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=78'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 82 pg[11.d( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 82 pg[11.3( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 82 pg[11.8( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 82 pg[11.2( v 78'2 (0'0,78'2] local-lis/les=81/82 n=1 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 82 pg[11.15( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 82 pg[11.1c( v 78'2 (0'0,78'2] local-lis/les=81/82 n=0 ec=79/53 lis/c=79/79 les/c/f=80/80/0 sis=81) [2] r=0 lpr=81 pi=[79,81)/1 crt=78'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 11 peering, 294 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 318 B/s wr, 2 op/s; 215 B/s, 0 objects/s recovering
Nov 22 03:36:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 22 03:36:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 22 03:36:12 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 83 pg[9.1b( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 83 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 83 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 83 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 83 pg[9.1d( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 83 pg[9.b( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 83 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 83 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 83 pg[9.d( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 83 pg[9.5( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 83 pg[9.9( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 83 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 83 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 83 pg[9.3( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 83 pg[9.11( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 83 pg[9.1( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[77,82)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 22 03:36:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 22 03:36:13 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 22 03:36:13 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 84 pg[9.1b( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:13 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 84 pg[9.1d( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:13 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 84 pg[9.1d( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:13 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 84 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:13 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 84 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:13 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 84 pg[9.1b( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:13 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 84 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:13 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 84 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:13 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 84 pg[9.b( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:13 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 84 pg[9.b( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:13 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 84 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:13 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 84 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 84 pg[9.1d( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84 pruub=15.686993599s) [0] async=[0] r=-1 lpr=84 pi=[77,84)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 153.868530273s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 84 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84 pruub=15.686601639s) [0] async=[0] r=-1 lpr=84 pi=[77,84)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 153.868164062s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 84 pg[9.1b( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84 pruub=15.686591148s) [0] async=[0] r=-1 lpr=84 pi=[77,84)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 153.868164062s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 84 pg[9.b( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84 pruub=15.687033653s) [0] async=[0] r=-1 lpr=84 pi=[77,84)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 153.868621826s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 84 pg[9.1b( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84 pruub=15.686536789s) [0] r=-1 lpr=84 pi=[77,84)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.868164062s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 84 pg[9.1d( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84 pruub=15.686902046s) [0] r=-1 lpr=84 pi=[77,84)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.868530273s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 84 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84 pruub=15.686509132s) [0] r=-1 lpr=84 pi=[77,84)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.868164062s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 84 pg[9.b( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84 pruub=15.686923981s) [0] r=-1 lpr=84 pi=[77,84)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.868621826s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 84 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84 pruub=15.686768532s) [0] async=[0] r=-1 lpr=84 pi=[77,84)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 153.868621826s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 84 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84 pruub=15.686764717s) [0] async=[0] r=-1 lpr=84 pi=[77,84)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 153.868682861s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 84 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84 pruub=15.686715126s) [0] r=-1 lpr=84 pi=[77,84)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.868621826s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:13 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 84 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84 pruub=15.686705589s) [0] r=-1 lpr=84 pi=[77,84)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.868682861s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 11 peering, 294 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 245 B/s, 2 objects/s recovering
Nov 22 03:36:14 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 22 03:36:14 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 22 03:36:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 22 03:36:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 22 03:36:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.547881126s) [0] async=[0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 153.868804932s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.547809601s) [0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.868804932s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.547499657s) [0] async=[0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 153.868942261s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.547378540s) [0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.868942261s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.11( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.547129631s) [0] async=[0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 153.869079590s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.11( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.547068596s) [0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.869079590s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.3( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.546912193s) [0] async=[0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 153.868988037s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.3( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.546810150s) [0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.868988037s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.d( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.546361923s) [0] async=[0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 153.868728638s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.d( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.546240807s) [0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.868728638s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.1( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.546204567s) [0] async=[0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 153.869155884s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.1( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.546143532s) [0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.869155884s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.5( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.545736313s) [0] async=[0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 153.868835449s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.5( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.545683861s) [0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.868835449s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.544920921s) [0] async=[0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 153.868469238s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.544854164s) [0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.868469238s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.9( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.545354843s) [0] async=[0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 153.868896484s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 85 pg[9.9( v 76'387 (0'0,76'387] local-lis/les=82/83 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85 pruub=14.545044899s) [0] r=-1 lpr=85 pi=[77,85)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.868896484s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.5( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.5( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.11( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.11( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.d( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.d( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.1( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.1( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.3( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.3( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.9( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.9( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=84/85 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=84/85 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.1d( v 76'387 (0'0,76'387] local-lis/les=84/85 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.b( v 76'387 (0'0,76'387] local-lis/les=84/85 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.1b( v 76'387 (0'0,76'387] local-lis/les=84/85 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 85 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=84/85 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=84) [0] r=0 lpr=84 pi=[77,84)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 22 03:36:14 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 22 03:36:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 22 03:36:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 22 03:36:15 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 22 03:36:15 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 86 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=86 pruub=13.556304932s) [0] async=[0] r=-1 lpr=86 pi=[77,86)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 153.868499756s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:15 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 86 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=82/83 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=86 pruub=13.556152344s) [0] r=-1 lpr=86 pi=[77,86)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.868499756s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:15 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 86 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=86) [0] r=0 lpr=86 pi=[77,86)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:15 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 86 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=86) [0] r=0 lpr=86 pi=[77,86)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:15 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 86 pg[9.9( v 76'387 (0'0,76'387] local-lis/les=85/86 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:15 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 86 pg[9.11( v 76'387 (0'0,76'387] local-lis/les=85/86 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:15 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 86 pg[9.d( v 76'387 (0'0,76'387] local-lis/les=85/86 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:15 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 86 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=85/86 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:15 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 86 pg[9.1( v 76'387 (0'0,76'387] local-lis/les=85/86 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:15 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 86 pg[9.3( v 76'387 (0'0,76'387] local-lis/les=85/86 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:15 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 86 pg[9.5( v 76'387 (0'0,76'387] local-lis/les=85/86 n=6 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:15 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 86 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=85/86 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:15 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 86 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=85/86 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=85) [0] r=0 lpr=85 pi=[77,85)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 11 peering, 294 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 245 B/s, 2 objects/s recovering
Nov 22 03:36:16 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 3.1d deep-scrub starts
Nov 22 03:36:16 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 3.1d deep-scrub ok
Nov 22 03:36:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 22 03:36:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 22 03:36:16 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 22 03:36:16 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 87 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=86/87 n=5 ec=77/49 lis/c=82/77 les/c/f=83/78/0 sis=86) [0] r=0 lpr=86 pi=[77,86)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:16 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 22 03:36:16 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 22 03:36:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 22 03:36:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 22 03:36:17 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 22 03:36:17 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 22 03:36:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 22 03:36:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 22 03:36:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 22 03:36:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:18 np0005532048 systemd-logind[822]: New session 34 of user zuul.
Nov 22 03:36:18 np0005532048 systemd[1]: Started Session 34 of User zuul.
Nov 22 03:36:18 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 22 03:36:18 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 22 03:36:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 22 03:36:18 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 22 03:36:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 22 03:36:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 22 03:36:18 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 22 03:36:19 np0005532048 python3.9[105062]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:36:19 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 22 03:36:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 397 B/s, 15 objects/s recovering
Nov 22 03:36:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 22 03:36:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 22 03:36:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 2.1b deep-scrub starts
Nov 22 03:36:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 2.1b deep-scrub ok
Nov 22 03:36:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 22 03:36:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 22 03:36:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 22 03:36:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 22 03:36:20 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 22 03:36:20 np0005532048 python3.9[105280]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:36:21 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 22 03:36:21 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 22 03:36:21 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 22 03:36:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 373 B/s, 14 objects/s recovering
Nov 22 03:36:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 22 03:36:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 22 03:36:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 22 03:36:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 22 03:36:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:36:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:36:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:36:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:36:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:36:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:36:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 22 03:36:22 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 22 03:36:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 22 03:36:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 22 03:36:22 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 22 03:36:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 22 03:36:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 22 03:36:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:23 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 22 03:36:23 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 22 03:36:23 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.9 deep-scrub starts
Nov 22 03:36:23 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.9 deep-scrub ok
Nov 22 03:36:23 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 22 03:36:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 373 B/s, 14 objects/s recovering
Nov 22 03:36:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 22 03:36:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 22 03:36:24 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 22 03:36:24 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 22 03:36:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 22 03:36:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 22 03:36:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 22 03:36:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 22 03:36:24 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 22 03:36:25 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 22 03:36:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 22 03:36:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 22 03:36:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 22 03:36:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 22 03:36:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 22 03:36:26 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 22 03:36:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 22 03:36:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 22 03:36:26 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 22 03:36:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 22 03:36:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 22 03:36:27 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 22 03:36:27 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 22 03:36:27 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 22 03:36:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 22 03:36:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 22 03:36:28 np0005532048 systemd[1]: session-34.scope: Deactivated successfully.
Nov 22 03:36:28 np0005532048 systemd[1]: session-34.scope: Consumed 8.397s CPU time.
Nov 22 03:36:28 np0005532048 systemd-logind[822]: Session 34 logged out. Waiting for processes to exit.
Nov 22 03:36:28 np0005532048 systemd-logind[822]: Removed session 34.
Nov 22 03:36:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 22 03:36:28 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 22 03:36:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 22 03:36:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 22 03:36:28 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 22 03:36:28 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 93 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=85/86 n=5 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=93 pruub=10.598837852s) [2] r=-1 lpr=93 pi=[85,93)/1 crt=76'387 mlcod 0'0 active pruub 170.791412354s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:28 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 93 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=85/86 n=5 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=93 pruub=10.598786354s) [2] r=-1 lpr=93 pi=[85,93)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 170.791412354s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:28 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 93 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=84/85 n=6 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=93 pruub=9.601372719s) [2] r=-1 lpr=93 pi=[84,93)/1 crt=76'387 mlcod 0'0 active pruub 169.794021606s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:28 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 93 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=84/85 n=6 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=93 pruub=9.601299286s) [2] r=-1 lpr=93 pi=[84,93)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 169.794021606s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:28 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 93 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=84/85 n=6 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=93 pruub=9.600779533s) [2] r=-1 lpr=93 pi=[84,93)/1 crt=76'387 mlcod 0'0 active pruub 169.793853760s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:28 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 93 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=84/85 n=6 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=93 pruub=9.600658417s) [2] r=-1 lpr=93 pi=[84,93)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 169.793853760s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:28 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 93 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=85/86 n=5 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=93 pruub=10.598145485s) [2] r=-1 lpr=93 pi=[85,93)/1 crt=76'387 mlcod 0'0 active pruub 170.791549683s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:28 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 93 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=85/86 n=5 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=93 pruub=10.598065376s) [2] r=-1 lpr=93 pi=[85,93)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 170.791549683s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:28 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 93 pg[9.17( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=93) [2] r=0 lpr=93 pi=[85,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:28 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 93 pg[9.f( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=93) [2] r=0 lpr=93 pi=[84,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:28 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 93 pg[9.7( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=93) [2] r=0 lpr=93 pi=[84,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:28 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 93 pg[9.1f( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=93) [2] r=0 lpr=93 pi=[85,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:29 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 92 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=92 pruub=13.602972984s) [2] r=-1 lpr=92 pi=[77,92)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 168.039703369s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 93 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=92 pruub=13.602574348s) [2] r=-1 lpr=92 pi=[77,92)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.039703369s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:29 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 92 pg[9.e( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=92 pruub=13.603199959s) [2] r=-1 lpr=92 pi=[77,92)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 168.040542603s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 93 pg[9.e( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=92 pruub=13.603087425s) [2] r=-1 lpr=92 pi=[77,92)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.040542603s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:29 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 92 pg[9.6( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=92 pruub=13.602912903s) [2] r=-1 lpr=92 pi=[77,92)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 168.040557861s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 93 pg[9.6( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=92 pruub=13.602823257s) [2] r=-1 lpr=92 pi=[77,92)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.040557861s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 93 pg[9.16( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=92) [2] r=0 lpr=93 pi=[77,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:29 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 92 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=92 pruub=13.602100372s) [2] r=-1 lpr=92 pi=[77,92)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 168.040908813s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 93 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=92 pruub=13.602013588s) [2] r=-1 lpr=92 pi=[77,92)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.040908813s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 93 pg[9.e( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=92) [2] r=0 lpr=93 pi=[77,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 93 pg[9.6( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=92) [2] r=0 lpr=93 pi=[77,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 93 pg[9.1e( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=92) [2] r=0 lpr=93 pi=[77,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 22 03:36:29 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 22 03:36:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 22 03:36:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 94 pg[9.e( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] r=-1 lpr=94 pi=[77,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 94 pg[9.e( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] r=-1 lpr=94 pi=[77,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 94 pg[9.1e( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] r=-1 lpr=94 pi=[77,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 94 pg[9.1f( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[85,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 94 pg[9.1e( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] r=-1 lpr=94 pi=[77,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 94 pg[9.1f( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[85,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 94 pg[9.6( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] r=-1 lpr=94 pi=[77,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 94 pg[9.6( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] r=-1 lpr=94 pi=[77,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 94 pg[9.7( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[84,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 94 pg[9.7( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[84,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 94 pg[9.f( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[84,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 94 pg[9.f( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[84,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 94 pg[9.17( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[85,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 94 pg[9.17( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[85,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] r=-1 lpr=94 pi=[77,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 94 pg[9.16( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] r=-1 lpr=94 pi=[77,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:29 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 94 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] r=0 lpr=94 pi=[77,94)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 94 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] r=0 lpr=94 pi=[77,94)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:29 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 94 pg[9.e( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] r=0 lpr=94 pi=[77,94)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 94 pg[9.e( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] r=0 lpr=94 pi=[77,94)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:29 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 94 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] r=0 lpr=94 pi=[77,94)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 94 pg[9.6( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] r=0 lpr=94 pi=[77,94)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 94 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] r=0 lpr=94 pi=[77,94)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:29 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 94 pg[9.6( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] r=0 lpr=94 pi=[77,94)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:29 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 94 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=85/86 n=5 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=94) [2]/[0] r=0 lpr=94 pi=[85,94)/1 crt=76'387 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 94 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=84/85 n=6 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=94) [2]/[0] r=0 lpr=94 pi=[84,94)/1 crt=76'387 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 94 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=84/85 n=6 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=94) [2]/[0] r=0 lpr=94 pi=[84,94)/1 crt=76'387 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 94 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=85/86 n=5 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=94) [2]/[0] r=0 lpr=94 pi=[85,94)/1 crt=76'387 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:29 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 94 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=84/85 n=6 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=94) [2]/[0] r=0 lpr=94 pi=[84,94)/1 crt=76'387 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:29 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 94 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=85/86 n=5 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=94) [2]/[0] r=0 lpr=94 pi=[85,94)/1 crt=76'387 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:29 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 94 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=84/85 n=6 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=94) [2]/[0] r=0 lpr=94 pi=[84,94)/1 crt=76'387 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:29 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 94 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=85/86 n=5 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=94) [2]/[0] r=0 lpr=94 pi=[85,94)/1 crt=76'387 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 22 03:36:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 22 03:36:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 22 03:36:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 22 03:36:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 22 03:36:30 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 22 03:36:30 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 22 03:36:30 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 95 pg[9.8( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=95 pruub=12.255872726s) [2] r=-1 lpr=95 pi=[77,95)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 168.040573120s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:30 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 95 pg[9.8( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=95 pruub=12.255448341s) [2] r=-1 lpr=95 pi=[77,95)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.040573120s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:30 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 95 pg[9.18( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=95 pruub=12.255474091s) [2] r=-1 lpr=95 pi=[77,95)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 168.040863037s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:30 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 95 pg[9.18( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=95 pruub=12.255362511s) [2] r=-1 lpr=95 pi=[77,95)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.040863037s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:30 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 95 pg[9.8( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=95) [2] r=0 lpr=95 pi=[77,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:30 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 95 pg[9.18( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=95) [2] r=0 lpr=95 pi=[77,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:30 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 95 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=94/95 n=6 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[84,94)/1 crt=76'387 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:30 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 95 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=94/95 n=5 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[85,94)/1 crt=76'387 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:30 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 95 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=94/95 n=5 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[85,94)/1 crt=76'387 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:30 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 95 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=94/95 n=6 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[84,94)/1 crt=76'387 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:30 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 95 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=94/95 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] async=[2] r=0 lpr=94 pi=[77,94)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:30 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 95 pg[9.6( v 76'387 (0'0,76'387] local-lis/les=94/95 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] async=[2] r=0 lpr=94 pi=[77,94)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:30 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 95 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=94/95 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] async=[2] r=0 lpr=94 pi=[77,94)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:30 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 95 pg[9.e( v 76'387 (0'0,76'387] local-lis/les=94/95 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=94) [2]/[1] async=[2] r=0 lpr=94 pi=[77,94)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:31 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 22 03:36:31 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 22 03:36:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 22 03:36:31 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 22 03:36:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 22 03:36:31 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 22 03:36:31 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 96 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=94/95 n=5 ec=77/49 lis/c=94/85 les/c/f=95/86/0 sis=96 pruub=15.008462906s) [2] async=[2] r=-1 lpr=96 pi=[85,96)/1 crt=76'387 mlcod 76'387 active pruub 178.246673584s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 96 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=94/95 n=5 ec=77/49 lis/c=94/85 les/c/f=95/86/0 sis=96 pruub=15.008324623s) [2] r=-1 lpr=96 pi=[85,96)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 178.246673584s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:31 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 96 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=94/95 n=5 ec=77/49 lis/c=94/85 les/c/f=95/86/0 sis=96 pruub=15.005599022s) [2] async=[2] r=-1 lpr=96 pi=[85,96)/1 crt=76'387 mlcod 76'387 active pruub 178.244384766s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 96 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=94/95 n=6 ec=77/49 lis/c=94/84 les/c/f=95/85/0 sis=96 pruub=14.997980118s) [2] async=[2] r=-1 lpr=96 pi=[84,96)/1 crt=76'387 mlcod 76'387 active pruub 178.236846924s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 96 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=94/95 n=6 ec=77/49 lis/c=94/84 les/c/f=95/85/0 sis=96 pruub=15.008023262s) [2] async=[2] r=-1 lpr=96 pi=[84,96)/1 crt=76'387 mlcod 76'387 active pruub 178.246902466s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 96 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=94/95 n=5 ec=77/49 lis/c=94/85 les/c/f=95/86/0 sis=96 pruub=15.005427361s) [2] r=-1 lpr=96 pi=[85,96)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 178.244384766s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:31 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 96 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=94/95 n=6 ec=77/49 lis/c=94/84 les/c/f=95/85/0 sis=96 pruub=14.997837067s) [2] r=-1 lpr=96 pi=[84,96)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 178.236846924s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:31 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 96 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=94/95 n=6 ec=77/49 lis/c=94/84 les/c/f=95/85/0 sis=96 pruub=15.007639885s) [2] r=-1 lpr=96 pi=[84,96)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 178.246902466s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:31 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 96 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=94/95 n=5 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96 pruub=15.006154060s) [2] async=[2] r=-1 lpr=96 pi=[77,96)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 171.801208496s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 96 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=94/95 n=5 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96 pruub=15.006058693s) [2] r=-1 lpr=96 pi=[77,96)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.801208496s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:31 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 96 pg[9.e( v 76'387 (0'0,76'387] local-lis/les=94/95 n=6 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96 pruub=15.005788803s) [2] async=[2] r=-1 lpr=96 pi=[77,96)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 171.801284790s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 96 pg[9.e( v 76'387 (0'0,76'387] local-lis/les=94/95 n=6 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96 pruub=15.005733490s) [2] r=-1 lpr=96 pi=[77,96)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.801284790s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:31 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 96 pg[9.8( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=96) [2]/[1] r=0 lpr=96 pi=[77,96)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 96 pg[9.8( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=96) [2]/[1] r=0 lpr=96 pi=[77,96)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:31 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 96 pg[9.6( v 76'387 (0'0,76'387] local-lis/les=94/95 n=6 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96 pruub=15.005546570s) [2] async=[2] r=-1 lpr=96 pi=[77,96)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 171.801284790s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 96 pg[9.6( v 76'387 (0'0,76'387] local-lis/les=94/95 n=6 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96 pruub=15.005432129s) [2] r=-1 lpr=96 pi=[77,96)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.801284790s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:31 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 96 pg[9.18( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=96) [2]/[1] r=0 lpr=96 pi=[77,96)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 96 pg[9.18( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=96) [2]/[1] r=0 lpr=96 pi=[77,96)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:31 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 96 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=94/95 n=5 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96 pruub=15.005199432s) [2] async=[2] r=-1 lpr=96 pi=[77,96)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 171.801254272s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 96 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=94/95 n=5 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96 pruub=15.005131721s) [2] r=-1 lpr=96 pi=[77,96)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 171.801254272s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96) [2] r=0 lpr=96 pi=[77,96)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96) [2] r=0 lpr=96 pi=[77,96)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=94/85 les/c/f=95/86/0 sis=96) [2] r=0 lpr=96 pi=[85,96)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=94/84 les/c/f=95/85/0 sis=96) [2] r=0 lpr=96 pi=[84,96)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=94/85 les/c/f=95/86/0 sis=96) [2] r=0 lpr=96 pi=[85,96)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=94/84 les/c/f=95/85/0 sis=96) [2] r=0 lpr=96 pi=[84,96)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=94/84 les/c/f=95/85/0 sis=96) [2] r=0 lpr=96 pi=[84,96)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=94/84 les/c/f=95/85/0 sis=96) [2] r=0 lpr=96 pi=[84,96)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.6( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96) [2] r=0 lpr=96 pi=[77,96)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.6( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96) [2] r=0 lpr=96 pi=[77,96)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=94/85 les/c/f=95/86/0 sis=96) [2] r=0 lpr=96 pi=[85,96)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=94/85 les/c/f=95/86/0 sis=96) [2] r=0 lpr=96 pi=[85,96)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.18( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=96) [2]/[1] r=-1 lpr=96 pi=[77,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.18( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=96) [2]/[1] r=-1 lpr=96 pi=[77,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96) [2] r=0 lpr=96 pi=[77,96)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.8( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=96) [2]/[1] r=-1 lpr=96 pi=[77,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96) [2] r=0 lpr=96 pi=[77,96)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.8( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=96) [2]/[1] r=-1 lpr=96 pi=[77,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.e( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96) [2] r=0 lpr=96 pi=[77,96)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 96 pg[9.e( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96) [2] r=0 lpr=96 pi=[77,96)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 4 active+remapped, 301 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 4 objects/s recovering
Nov 22 03:36:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 22 03:36:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 22 03:36:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 22 03:36:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 22 03:36:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 22 03:36:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 22 03:36:32 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 22 03:36:32 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 97 pg[9.17( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=94/85 les/c/f=95/86/0 sis=96) [2] r=0 lpr=96 pi=[85,96)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:32 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 97 pg[9.7( v 76'387 (0'0,76'387] local-lis/les=96/97 n=6 ec=77/49 lis/c=94/84 les/c/f=95/85/0 sis=96) [2] r=0 lpr=96 pi=[84,96)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:32 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 97 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=94/85 les/c/f=95/86/0 sis=96) [2] r=0 lpr=96 pi=[85,96)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:32 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 97 pg[9.f( v 76'387 (0'0,76'387] local-lis/les=96/97 n=6 ec=77/49 lis/c=94/84 les/c/f=95/85/0 sis=96) [2] r=0 lpr=96 pi=[84,96)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:32 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 97 pg[9.e( v 76'387 (0'0,76'387] local-lis/les=96/97 n=6 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96) [2] r=0 lpr=96 pi=[77,96)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:32 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 97 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96) [2] r=0 lpr=96 pi=[77,96)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:32 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 97 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96) [2] r=0 lpr=96 pi=[77,96)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:32 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 97 pg[9.8( v 76'387 (0'0,76'387] local-lis/les=96/97 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[77,96)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:32 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 97 pg[9.6( v 76'387 (0'0,76'387] local-lis/les=96/97 n=6 ec=77/49 lis/c=94/77 les/c/f=95/78/0 sis=96) [2] r=0 lpr=96 pi=[77,96)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:32 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 97 pg[9.18( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[77,96)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:36:33 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 2ccf38d7-0d0a-4f80-aa2c-3d6322430317 does not exist
Nov 22 03:36:33 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 43214ed9-4e86-49dc-8d0e-958393efb137 does not exist
Nov 22 03:36:33 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 0d228b2c-7bb6-4f84-be10-b3f4a28bd0c5 does not exist
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 22 03:36:33 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 98 pg[9.18( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=96/77 les/c/f=97/78/0 sis=98) [2] r=0 lpr=98 pi=[77,98)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:33 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 98 pg[9.8( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=96/77 les/c/f=97/78/0 sis=98) [2] r=0 lpr=98 pi=[77,98)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:33 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 98 pg[9.18( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=96/77 les/c/f=97/78/0 sis=98) [2] r=0 lpr=98 pi=[77,98)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:33 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 98 pg[9.8( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=96/77 les/c/f=97/78/0 sis=98) [2] r=0 lpr=98 pi=[77,98)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:33 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 98 pg[9.18( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=96/77 les/c/f=97/78/0 sis=98 pruub=15.658274651s) [2] async=[2] r=-1 lpr=98 pi=[77,98)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 173.837356567s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:33 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 98 pg[9.8( v 76'387 (0'0,76'387] local-lis/les=96/97 n=6 ec=77/49 lis/c=96/77 les/c/f=97/78/0 sis=98 pruub=15.656476021s) [2] async=[2] r=-1 lpr=98 pi=[77,98)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 173.835556030s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:33 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 98 pg[9.8( v 76'387 (0'0,76'387] local-lis/les=96/97 n=6 ec=77/49 lis/c=96/77 les/c/f=97/78/0 sis=98 pruub=15.656037331s) [2] r=-1 lpr=98 pi=[77,98)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.835556030s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:33 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 98 pg[9.18( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=96/77 les/c/f=97/78/0 sis=98 pruub=15.657857895s) [2] r=-1 lpr=98 pi=[77,98)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.837356567s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:33 np0005532048 podman[105611]: 2025-11-22 08:36:33.653920704 +0000 UTC m=+0.034840593 container create 8e3223f6798ef09535950860ab63cd6fae6cab1d65b11718b7b05072a94478d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:36:33 np0005532048 systemd[1]: Started libpod-conmon-8e3223f6798ef09535950860ab63cd6fae6cab1d65b11718b7b05072a94478d0.scope.
Nov 22 03:36:33 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:36:33 np0005532048 podman[105611]: 2025-11-22 08:36:33.726179645 +0000 UTC m=+0.107099634 container init 8e3223f6798ef09535950860ab63cd6fae6cab1d65b11718b7b05072a94478d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:36:33 np0005532048 podman[105611]: 2025-11-22 08:36:33.73369859 +0000 UTC m=+0.114618509 container start 8e3223f6798ef09535950860ab63cd6fae6cab1d65b11718b7b05072a94478d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:36:33 np0005532048 podman[105611]: 2025-11-22 08:36:33.637526299 +0000 UTC m=+0.018446218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:36:33 np0005532048 podman[105611]: 2025-11-22 08:36:33.737109518 +0000 UTC m=+0.118029447 container attach 8e3223f6798ef09535950860ab63cd6fae6cab1d65b11718b7b05072a94478d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:36:33 np0005532048 beautiful_wing[105628]: 167 167
Nov 22 03:36:33 np0005532048 systemd[1]: libpod-8e3223f6798ef09535950860ab63cd6fae6cab1d65b11718b7b05072a94478d0.scope: Deactivated successfully.
Nov 22 03:36:33 np0005532048 conmon[105628]: conmon 8e3223f6798ef0953595 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e3223f6798ef09535950860ab63cd6fae6cab1d65b11718b7b05072a94478d0.scope/container/memory.events
Nov 22 03:36:33 np0005532048 podman[105611]: 2025-11-22 08:36:33.740866906 +0000 UTC m=+0.121786835 container died 8e3223f6798ef09535950860ab63cd6fae6cab1d65b11718b7b05072a94478d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 03:36:33 np0005532048 systemd[1]: var-lib-containers-storage-overlay-58befc991d0caede7bf69c7e89eab6f70a0a670cdfcbdad5ade4e73dcc840bac-merged.mount: Deactivated successfully.
Nov 22 03:36:33 np0005532048 podman[105611]: 2025-11-22 08:36:33.800300424 +0000 UTC m=+0.181220313 container remove 8e3223f6798ef09535950860ab63cd6fae6cab1d65b11718b7b05072a94478d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wing, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:36:33 np0005532048 systemd[1]: libpod-conmon-8e3223f6798ef09535950860ab63cd6fae6cab1d65b11718b7b05072a94478d0.scope: Deactivated successfully.
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:36:33 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:36:33 np0005532048 podman[105652]: 2025-11-22 08:36:33.9835305 +0000 UTC m=+0.046803433 container create 79be903089993a5a41de34f6a0e7678c508f790c93d39fb2d2cdbf8e620c616b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pike, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:36:34 np0005532048 systemd[1]: Started libpod-conmon-79be903089993a5a41de34f6a0e7678c508f790c93d39fb2d2cdbf8e620c616b.scope.
Nov 22 03:36:34 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:36:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdea7fc88ad25e24c583124c442c4feed704d9caa11fe696f67a8b037d511ff9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdea7fc88ad25e24c583124c442c4feed704d9caa11fe696f67a8b037d511ff9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdea7fc88ad25e24c583124c442c4feed704d9caa11fe696f67a8b037d511ff9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdea7fc88ad25e24c583124c442c4feed704d9caa11fe696f67a8b037d511ff9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:34 np0005532048 podman[105652]: 2025-11-22 08:36:33.965462582 +0000 UTC m=+0.028735525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:36:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdea7fc88ad25e24c583124c442c4feed704d9caa11fe696f67a8b037d511ff9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:34 np0005532048 podman[105652]: 2025-11-22 08:36:34.07080205 +0000 UTC m=+0.134075013 container init 79be903089993a5a41de34f6a0e7678c508f790c93d39fb2d2cdbf8e620c616b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pike, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:36:34 np0005532048 podman[105652]: 2025-11-22 08:36:34.081749054 +0000 UTC m=+0.145021987 container start 79be903089993a5a41de34f6a0e7678c508f790c93d39fb2d2cdbf8e620c616b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pike, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:36:34 np0005532048 podman[105652]: 2025-11-22 08:36:34.085531232 +0000 UTC m=+0.148804165 container attach 79be903089993a5a41de34f6a0e7678c508f790c93d39fb2d2cdbf8e620c616b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pike, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:36:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 8 peering, 297 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 192 B/s, 9 objects/s recovering
Nov 22 03:36:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 22 03:36:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 22 03:36:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 22 03:36:34 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 99 pg[9.8( v 76'387 (0'0,76'387] local-lis/les=98/99 n=6 ec=77/49 lis/c=96/77 les/c/f=97/78/0 sis=98) [2] r=0 lpr=98 pi=[77,98)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:34 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 99 pg[9.18( v 76'387 (0'0,76'387] local-lis/les=98/99 n=5 ec=77/49 lis/c=96/77 les/c/f=97/78/0 sis=98) [2] r=0 lpr=98 pi=[77,98)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 22 03:36:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 22 03:36:34 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 22 03:36:34 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 22 03:36:35 np0005532048 sad_pike[105667]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:36:35 np0005532048 sad_pike[105667]: --> relative data size: 1.0
Nov 22 03:36:35 np0005532048 sad_pike[105667]: --> All data devices are unavailable
Nov 22 03:36:35 np0005532048 systemd[1]: libpod-79be903089993a5a41de34f6a0e7678c508f790c93d39fb2d2cdbf8e620c616b.scope: Deactivated successfully.
Nov 22 03:36:35 np0005532048 systemd[1]: libpod-79be903089993a5a41de34f6a0e7678c508f790c93d39fb2d2cdbf8e620c616b.scope: Consumed 1.008s CPU time.
Nov 22 03:36:35 np0005532048 podman[105652]: 2025-11-22 08:36:35.149296701 +0000 UTC m=+1.212569644 container died 79be903089993a5a41de34f6a0e7678c508f790c93d39fb2d2cdbf8e620c616b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pike, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:36:35 np0005532048 systemd[1]: var-lib-containers-storage-overlay-bdea7fc88ad25e24c583124c442c4feed704d9caa11fe696f67a8b037d511ff9-merged.mount: Deactivated successfully.
Nov 22 03:36:35 np0005532048 podman[105652]: 2025-11-22 08:36:35.217726664 +0000 UTC m=+1.280999597 container remove 79be903089993a5a41de34f6a0e7678c508f790c93d39fb2d2cdbf8e620c616b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:36:35 np0005532048 systemd[1]: libpod-conmon-79be903089993a5a41de34f6a0e7678c508f790c93d39fb2d2cdbf8e620c616b.scope: Deactivated successfully.
Nov 22 03:36:35 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 3.e deep-scrub starts
Nov 22 03:36:35 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 3.e deep-scrub ok
Nov 22 03:36:35 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 22 03:36:35 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 22 03:36:35 np0005532048 podman[105850]: 2025-11-22 08:36:35.848612802 +0000 UTC m=+0.045494369 container create 47b9d02afa4ff817c9cffcbe604e3f5092cb56e2a89d83a981a4c3cc5d3520f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_grothendieck, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:36:35 np0005532048 systemd[1]: Started libpod-conmon-47b9d02afa4ff817c9cffcbe604e3f5092cb56e2a89d83a981a4c3cc5d3520f9.scope.
Nov 22 03:36:35 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:36:35 np0005532048 podman[105850]: 2025-11-22 08:36:35.824454686 +0000 UTC m=+0.021336253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:36:35 np0005532048 podman[105850]: 2025-11-22 08:36:35.937832773 +0000 UTC m=+0.134714360 container init 47b9d02afa4ff817c9cffcbe604e3f5092cb56e2a89d83a981a4c3cc5d3520f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:36:35 np0005532048 podman[105850]: 2025-11-22 08:36:35.944761822 +0000 UTC m=+0.141643389 container start 47b9d02afa4ff817c9cffcbe604e3f5092cb56e2a89d83a981a4c3cc5d3520f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 22 03:36:35 np0005532048 nostalgic_grothendieck[105866]: 167 167
Nov 22 03:36:35 np0005532048 systemd[1]: libpod-47b9d02afa4ff817c9cffcbe604e3f5092cb56e2a89d83a981a4c3cc5d3520f9.scope: Deactivated successfully.
Nov 22 03:36:35 np0005532048 conmon[105866]: conmon 47b9d02afa4ff817c9cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-47b9d02afa4ff817c9cffcbe604e3f5092cb56e2a89d83a981a4c3cc5d3520f9.scope/container/memory.events
Nov 22 03:36:35 np0005532048 podman[105850]: 2025-11-22 08:36:35.955071899 +0000 UTC m=+0.151953476 container attach 47b9d02afa4ff817c9cffcbe604e3f5092cb56e2a89d83a981a4c3cc5d3520f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_grothendieck, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 03:36:35 np0005532048 podman[105850]: 2025-11-22 08:36:35.955694376 +0000 UTC m=+0.152575963 container died 47b9d02afa4ff817c9cffcbe604e3f5092cb56e2a89d83a981a4c3cc5d3520f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_grothendieck, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:36:35 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0cbc656ba01f527ab50b0a7fa54e2f225916e5a13154d62219ef9782d9c66dc3-merged.mount: Deactivated successfully.
Nov 22 03:36:36 np0005532048 podman[105850]: 2025-11-22 08:36:36.010465914 +0000 UTC m=+0.207347481 container remove 47b9d02afa4ff817c9cffcbe604e3f5092cb56e2a89d83a981a4c3cc5d3520f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_grothendieck, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 03:36:36 np0005532048 systemd[1]: libpod-conmon-47b9d02afa4ff817c9cffcbe604e3f5092cb56e2a89d83a981a4c3cc5d3520f9.scope: Deactivated successfully.
Nov 22 03:36:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 8 peering, 297 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 132 B/s, 4 objects/s recovering
Nov 22 03:36:36 np0005532048 podman[105890]: 2025-11-22 08:36:36.171586097 +0000 UTC m=+0.047914062 container create 81351897ec611868702843deef8b1d504fe9c31003867dea9adf5d525f5650f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 03:36:36 np0005532048 systemd[1]: Started libpod-conmon-81351897ec611868702843deef8b1d504fe9c31003867dea9adf5d525f5650f0.scope.
Nov 22 03:36:36 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:36:36 np0005532048 podman[105890]: 2025-11-22 08:36:36.149437813 +0000 UTC m=+0.025765798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:36:36 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81120373bacabaa2e14bd4239e215c50ce132d9403a8eb5635f305069639c84c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:36 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81120373bacabaa2e14bd4239e215c50ce132d9403a8eb5635f305069639c84c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:36 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81120373bacabaa2e14bd4239e215c50ce132d9403a8eb5635f305069639c84c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:36 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81120373bacabaa2e14bd4239e215c50ce132d9403a8eb5635f305069639c84c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:36 np0005532048 podman[105890]: 2025-11-22 08:36:36.268488856 +0000 UTC m=+0.144816821 container init 81351897ec611868702843deef8b1d504fe9c31003867dea9adf5d525f5650f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:36:36 np0005532048 podman[105890]: 2025-11-22 08:36:36.277219162 +0000 UTC m=+0.153547137 container start 81351897ec611868702843deef8b1d504fe9c31003867dea9adf5d525f5650f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 03:36:36 np0005532048 podman[105890]: 2025-11-22 08:36:36.286897103 +0000 UTC m=+0.163225088 container attach 81351897ec611868702843deef8b1d504fe9c31003867dea9adf5d525f5650f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 03:36:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 22 03:36:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 22 03:36:36 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.1b deep-scrub starts
Nov 22 03:36:36 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.1b deep-scrub ok
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]: {
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:    "0": [
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:        {
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "devices": [
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "/dev/loop3"
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            ],
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "lv_name": "ceph_lv0",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "lv_size": "21470642176",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "name": "ceph_lv0",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "tags": {
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.cluster_name": "ceph",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.crush_device_class": "",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.encrypted": "0",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.osd_id": "0",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.type": "block",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.vdo": "0"
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            },
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "type": "block",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "vg_name": "ceph_vg0"
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:        }
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:    ],
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:    "1": [
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:        {
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "devices": [
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "/dev/loop4"
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            ],
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "lv_name": "ceph_lv1",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "lv_size": "21470642176",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "name": "ceph_lv1",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "tags": {
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.cluster_name": "ceph",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.crush_device_class": "",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.encrypted": "0",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.osd_id": "1",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.type": "block",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.vdo": "0"
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            },
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "type": "block",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "vg_name": "ceph_vg1"
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:        }
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:    ],
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:    "2": [
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:        {
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "devices": [
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "/dev/loop5"
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            ],
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "lv_name": "ceph_lv2",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "lv_size": "21470642176",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "name": "ceph_lv2",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "tags": {
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.cluster_name": "ceph",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.crush_device_class": "",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.encrypted": "0",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.osd_id": "2",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.type": "block",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:                "ceph.vdo": "0"
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            },
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "type": "block",
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:            "vg_name": "ceph_vg2"
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:        }
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]:    ]
Nov 22 03:36:37 np0005532048 vigorous_bouman[105907]: }
Nov 22 03:36:37 np0005532048 systemd[1]: libpod-81351897ec611868702843deef8b1d504fe9c31003867dea9adf5d525f5650f0.scope: Deactivated successfully.
Nov 22 03:36:37 np0005532048 conmon[105907]: conmon 81351897ec6118687028 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81351897ec611868702843deef8b1d504fe9c31003867dea9adf5d525f5650f0.scope/container/memory.events
Nov 22 03:36:37 np0005532048 podman[105890]: 2025-11-22 08:36:37.12856714 +0000 UTC m=+1.004895195 container died 81351897ec611868702843deef8b1d504fe9c31003867dea9adf5d525f5650f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:36:37 np0005532048 systemd[1]: var-lib-containers-storage-overlay-81120373bacabaa2e14bd4239e215c50ce132d9403a8eb5635f305069639c84c-merged.mount: Deactivated successfully.
Nov 22 03:36:37 np0005532048 podman[105890]: 2025-11-22 08:36:37.221490787 +0000 UTC m=+1.097818752 container remove 81351897ec611868702843deef8b1d504fe9c31003867dea9adf5d525f5650f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:36:37 np0005532048 systemd[1]: libpod-conmon-81351897ec611868702843deef8b1d504fe9c31003867dea9adf5d525f5650f0.scope: Deactivated successfully.
Nov 22 03:36:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 22 03:36:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 22 03:36:37 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 22 03:36:37 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 22 03:36:37 np0005532048 podman[106070]: 2025-11-22 08:36:37.957345444 +0000 UTC m=+0.047461841 container create e9485e4860e5fb9d4d973f53e3f502592ae8f8f7dab61afe8993f8736784ef1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:36:38 np0005532048 systemd[1]: Started libpod-conmon-e9485e4860e5fb9d4d973f53e3f502592ae8f8f7dab61afe8993f8736784ef1e.scope.
Nov 22 03:36:38 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:36:38 np0005532048 podman[106070]: 2025-11-22 08:36:37.939308187 +0000 UTC m=+0.029424604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:36:38 np0005532048 podman[106070]: 2025-11-22 08:36:38.038235549 +0000 UTC m=+0.128351956 container init e9485e4860e5fb9d4d973f53e3f502592ae8f8f7dab61afe8993f8736784ef1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:36:38 np0005532048 podman[106070]: 2025-11-22 08:36:38.043925336 +0000 UTC m=+0.134041733 container start e9485e4860e5fb9d4d973f53e3f502592ae8f8f7dab61afe8993f8736784ef1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:36:38 np0005532048 podman[106070]: 2025-11-22 08:36:38.048046883 +0000 UTC m=+0.138163280 container attach e9485e4860e5fb9d4d973f53e3f502592ae8f8f7dab61afe8993f8736784ef1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 22 03:36:38 np0005532048 funny_ganguly[106086]: 167 167
Nov 22 03:36:38 np0005532048 systemd[1]: libpod-e9485e4860e5fb9d4d973f53e3f502592ae8f8f7dab61afe8993f8736784ef1e.scope: Deactivated successfully.
Nov 22 03:36:38 np0005532048 podman[106070]: 2025-11-22 08:36:38.05102136 +0000 UTC m=+0.141137757 container died e9485e4860e5fb9d4d973f53e3f502592ae8f8f7dab61afe8993f8736784ef1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:36:38 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e1ed6893b93288f5856ef0fe0f80a64c2e681cdef75f445a3238ada0905abd45-merged.mount: Deactivated successfully.
Nov 22 03:36:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 128 B/s, 5 objects/s recovering
Nov 22 03:36:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 22 03:36:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 22 03:36:38 np0005532048 podman[106070]: 2025-11-22 08:36:38.109916265 +0000 UTC m=+0.200032672 container remove e9485e4860e5fb9d4d973f53e3f502592ae8f8f7dab61afe8993f8736784ef1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:36:38 np0005532048 systemd[1]: libpod-conmon-e9485e4860e5fb9d4d973f53e3f502592ae8f8f7dab61afe8993f8736784ef1e.scope: Deactivated successfully.
Nov 22 03:36:38 np0005532048 podman[106110]: 2025-11-22 08:36:38.286387185 +0000 UTC m=+0.049223115 container create c55dfa5653cff11f5ac054d2e1f109e497edf5f0ba5cac616edcbb877605f100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jang, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:36:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 22 03:36:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 22 03:36:38 np0005532048 systemd[1]: Started libpod-conmon-c55dfa5653cff11f5ac054d2e1f109e497edf5f0ba5cac616edcbb877605f100.scope.
Nov 22 03:36:38 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:36:38 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 22 03:36:38 np0005532048 podman[106110]: 2025-11-22 08:36:38.26723442 +0000 UTC m=+0.030070370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:36:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ab721ed485e4eb52693d5c6ee8b402e6469193e29805046eca7f9f35461364/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ab721ed485e4eb52693d5c6ee8b402e6469193e29805046eca7f9f35461364/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ab721ed485e4eb52693d5c6ee8b402e6469193e29805046eca7f9f35461364/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ab721ed485e4eb52693d5c6ee8b402e6469193e29805046eca7f9f35461364/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 22 03:36:38 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 22 03:36:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 22 03:36:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 22 03:36:38 np0005532048 podman[106110]: 2025-11-22 08:36:38.384706322 +0000 UTC m=+0.147542262 container init c55dfa5653cff11f5ac054d2e1f109e497edf5f0ba5cac616edcbb877605f100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:36:38 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 22 03:36:38 np0005532048 podman[106110]: 2025-11-22 08:36:38.393216533 +0000 UTC m=+0.156052473 container start c55dfa5653cff11f5ac054d2e1f109e497edf5f0ba5cac616edcbb877605f100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jang, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 03:36:38 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 22 03:36:38 np0005532048 podman[106110]: 2025-11-22 08:36:38.401112777 +0000 UTC m=+0.163948737 container attach c55dfa5653cff11f5ac054d2e1f109e497edf5f0ba5cac616edcbb877605f100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jang, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:36:38 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 22 03:36:38 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 22 03:36:39 np0005532048 boring_jang[106127]: {
Nov 22 03:36:39 np0005532048 boring_jang[106127]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:36:39 np0005532048 boring_jang[106127]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:36:39 np0005532048 boring_jang[106127]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:36:39 np0005532048 boring_jang[106127]:        "osd_id": 1,
Nov 22 03:36:39 np0005532048 boring_jang[106127]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:36:39 np0005532048 boring_jang[106127]:        "type": "bluestore"
Nov 22 03:36:39 np0005532048 boring_jang[106127]:    },
Nov 22 03:36:39 np0005532048 boring_jang[106127]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:36:39 np0005532048 boring_jang[106127]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:36:39 np0005532048 boring_jang[106127]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:36:39 np0005532048 boring_jang[106127]:        "osd_id": 0,
Nov 22 03:36:39 np0005532048 boring_jang[106127]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:36:39 np0005532048 boring_jang[106127]:        "type": "bluestore"
Nov 22 03:36:39 np0005532048 boring_jang[106127]:    },
Nov 22 03:36:39 np0005532048 boring_jang[106127]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:36:39 np0005532048 boring_jang[106127]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:36:39 np0005532048 boring_jang[106127]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:36:39 np0005532048 boring_jang[106127]:        "osd_id": 2,
Nov 22 03:36:39 np0005532048 boring_jang[106127]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:36:39 np0005532048 boring_jang[106127]:        "type": "bluestore"
Nov 22 03:36:39 np0005532048 boring_jang[106127]:    }
Nov 22 03:36:39 np0005532048 boring_jang[106127]: }
Nov 22 03:36:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 22 03:36:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 22 03:36:39 np0005532048 systemd[1]: libpod-c55dfa5653cff11f5ac054d2e1f109e497edf5f0ba5cac616edcbb877605f100.scope: Deactivated successfully.
Nov 22 03:36:39 np0005532048 podman[106110]: 2025-11-22 08:36:39.353288916 +0000 UTC m=+1.116124846 container died c55dfa5653cff11f5ac054d2e1f109e497edf5f0ba5cac616edcbb877605f100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:36:39 np0005532048 systemd[1]: var-lib-containers-storage-overlay-31ab721ed485e4eb52693d5c6ee8b402e6469193e29805046eca7f9f35461364-merged.mount: Deactivated successfully.
Nov 22 03:36:39 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 22 03:36:39 np0005532048 podman[106110]: 2025-11-22 08:36:39.412282484 +0000 UTC m=+1.175118414 container remove c55dfa5653cff11f5ac054d2e1f109e497edf5f0ba5cac616edcbb877605f100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jang, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:36:39 np0005532048 systemd[1]: libpod-conmon-c55dfa5653cff11f5ac054d2e1f109e497edf5f0ba5cac616edcbb877605f100.scope: Deactivated successfully.
Nov 22 03:36:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:36:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:36:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:36:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:36:39 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 19146311-015f-4204-8699-7e1fe7117bd3 does not exist
Nov 22 03:36:39 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 58f9558e-fbc8-4030-a21d-a928c81986c7 does not exist
Nov 22 03:36:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 32 B/s, 1 objects/s recovering
Nov 22 03:36:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 22 03:36:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 22 03:36:40 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 22 03:36:40 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 22 03:36:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:36:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:36:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 22 03:36:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 22 03:36:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 22 03:36:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 22 03:36:40 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 22 03:36:41 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 22 03:36:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 1 objects/s recovering
Nov 22 03:36:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 22 03:36:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 22 03:36:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 22 03:36:42 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 22 03:36:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 22 03:36:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 22 03:36:42 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 22 03:36:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 22 03:36:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 22 03:36:43 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 102 pg[9.c( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=102 pruub=15.738818169s) [2] r=-1 lpr=102 pi=[77,102)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 184.040649414s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:43 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 102 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=102 pruub=15.738935471s) [2] r=-1 lpr=102 pi=[77,102)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 184.041229248s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:43 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 102 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=102 pruub=15.738850594s) [2] r=-1 lpr=102 pi=[77,102)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.041229248s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:43 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 102 pg[9.c( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=102 pruub=15.738450050s) [2] r=-1 lpr=102 pi=[77,102)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.040649414s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:43 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 102 pg[9.1c( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=102) [2] r=0 lpr=102 pi=[77,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:43 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 102 pg[9.c( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=102) [2] r=0 lpr=102 pi=[77,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:43 np0005532048 systemd-logind[822]: New session 35 of user zuul.
Nov 22 03:36:43 np0005532048 systemd[1]: Started Session 35 of User zuul.
Nov 22 03:36:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 22 03:36:43 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 22 03:36:43 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 22 03:36:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 22 03:36:43 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 22 03:36:43 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 22 03:36:43 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=103) [2]/[1] r=-1 lpr=103 pi=[77,103)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:43 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=103) [2]/[1] r=-1 lpr=103 pi=[77,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:43 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 103 pg[9.c( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=103) [2]/[1] r=-1 lpr=103 pi=[77,103)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:43 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 103 pg[9.c( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=103) [2]/[1] r=-1 lpr=103 pi=[77,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:43 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 103 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=103) [2]/[1] r=0 lpr=103 pi=[77,103)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:43 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 103 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=77/78 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=103) [2]/[1] r=0 lpr=103 pi=[77,103)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:43 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 103 pg[9.c( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=103) [2]/[1] r=0 lpr=103 pi=[77,103)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:43 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 103 pg[9.c( v 76'387 (0'0,76'387] local-lis/les=77/78 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=103) [2]/[1] r=0 lpr=103 pi=[77,103)/1 crt=76'387 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 22 03:36:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 22 03:36:44 np0005532048 python3.9[106378]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 22 03:36:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 22 03:36:44 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 22 03:36:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 22 03:36:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 22 03:36:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 22 03:36:45 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 104 pg[9.c( v 76'387 (0'0,76'387] local-lis/les=103/104 n=6 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=103) [2]/[1] async=[2] r=0 lpr=103 pi=[77,103)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:45 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 104 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=103/104 n=5 ec=77/49 lis/c=77/77 les/c/f=78/78/0 sis=103) [2]/[1] async=[2] r=0 lpr=103 pi=[77,103)/1 crt=76'387 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:45 np0005532048 python3.9[106552]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:36:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 22 03:36:45 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 22 03:36:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 22 03:36:45 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 22 03:36:45 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 105 pg[9.c( v 76'387 (0'0,76'387] local-lis/les=103/104 n=6 ec=77/49 lis/c=103/77 les/c/f=104/78/0 sis=105 pruub=15.675045967s) [2] async=[2] r=-1 lpr=105 pi=[77,105)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 186.189575195s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:45 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 105 pg[9.c( v 76'387 (0'0,76'387] local-lis/les=103/104 n=6 ec=77/49 lis/c=103/77 les/c/f=104/78/0 sis=105 pruub=15.674971581s) [2] r=-1 lpr=105 pi=[77,105)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.189575195s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:45 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 105 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=103/104 n=5 ec=77/49 lis/c=103/77 les/c/f=104/78/0 sis=105 pruub=15.683097839s) [2] async=[2] r=-1 lpr=105 pi=[77,105)/1 crt=76'387 lcod 0'0 mlcod 0'0 active pruub 186.198074341s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:45 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 105 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=103/104 n=5 ec=77/49 lis/c=103/77 les/c/f=104/78/0 sis=105 pruub=15.683002472s) [2] r=-1 lpr=105 pi=[77,105)/1 crt=76'387 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.198074341s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:45 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 105 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=103/77 les/c/f=104/78/0 sis=105) [2] r=0 lpr=105 pi=[77,105)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:45 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 105 pg[9.c( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=103/77 les/c/f=104/78/0 sis=105) [2] r=0 lpr=105 pi=[77,105)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:45 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 105 pg[9.c( v 76'387 (0'0,76'387] local-lis/les=0/0 n=6 ec=77/49 lis/c=103/77 les/c/f=104/78/0 sis=105) [2] r=0 lpr=105 pi=[77,105)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:45 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 105 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=103/77 les/c/f=104/78/0 sis=105) [2] r=0 lpr=105 pi=[77,105)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 22 03:36:46 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 22 03:36:46 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Nov 22 03:36:46 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Nov 22 03:36:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 22 03:36:46 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 22 03:36:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 22 03:36:46 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 22 03:36:46 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 22 03:36:46 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 106 pg[9.c( v 76'387 (0'0,76'387] local-lis/les=105/106 n=6 ec=77/49 lis/c=103/77 les/c/f=104/78/0 sis=105) [2] r=0 lpr=105 pi=[77,105)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:46 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 106 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=105/106 n=5 ec=77/49 lis/c=103/77 les/c/f=104/78/0 sis=105) [2] r=0 lpr=105 pi=[77,105)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:46 np0005532048 python3.9[106708]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:36:47 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 22 03:36:47 np0005532048 python3.9[106861]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:36:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 2 objects/s recovering
Nov 22 03:36:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 22 03:36:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 22 03:36:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:48 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 22 03:36:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 22 03:36:48 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 22 03:36:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 22 03:36:48 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 22 03:36:48 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 22 03:36:48 np0005532048 python3.9[107015]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:36:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 22 03:36:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 22 03:36:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 22 03:36:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 22 03:36:48 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 22 03:36:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts
Nov 22 03:36:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok
Nov 22 03:36:49 np0005532048 python3.9[107167]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:36:49 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 22 03:36:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 2 objects/s recovering
Nov 22 03:36:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 22 03:36:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 22 03:36:50 np0005532048 python3.9[107317]: ansible-ansible.builtin.service_facts Invoked
Nov 22 03:36:50 np0005532048 network[107334]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 03:36:50 np0005532048 network[107335]: 'network-scripts' will be removed from distribution in near future.
Nov 22 03:36:50 np0005532048 network[107336]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 03:36:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 22 03:36:50 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 22 03:36:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 22 03:36:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 22 03:36:50 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 22 03:36:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 22 03:36:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 22 03:36:51 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 22 03:36:51 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 22 03:36:51 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 22 03:36:51 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 22 03:36:51 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:36:52
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['vms', 'images', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log']
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 22 03:36:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 22 03:36:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 22 03:36:52 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 22 03:36:52 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:36:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:36:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 22 03:36:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 22 03:36:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 22 03:36:52 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 22 03:36:52 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 22 03:36:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:53 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 22 03:36:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 22 03:36:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 22 03:36:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 22 03:36:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 22 03:36:54 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 22 03:36:54 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 22 03:36:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 22 03:36:54 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 22 03:36:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 22 03:36:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 22 03:36:55 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 22 03:36:55 np0005532048 python3.9[107596]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:36:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 22 03:36:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 22 03:36:56 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 22 03:36:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 22 03:36:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 22 03:36:56 np0005532048 python3.9[107746]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:36:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 22 03:36:57 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 22 03:36:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 22 03:36:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 22 03:36:57 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 22 03:36:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 111 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=84/85 n=5 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=111 pruub=13.456903458s) [2] r=-1 lpr=111 pi=[84,111)/1 crt=76'387 mlcod 0'0 active pruub 201.794692993s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:57 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 111 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=84/85 n=5 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=111 pruub=13.456340790s) [2] r=-1 lpr=111 pi=[84,111)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 201.794692993s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:57 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 111 pg[9.13( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=111) [2] r=0 lpr=111 pi=[84,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:57 np0005532048 python3.9[107900]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:36:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 22 03:36:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 22 03:36:58 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 22 03:36:58 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 112 pg[9.13( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=112) [2]/[0] r=-1 lpr=112 pi=[84,112)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:58 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 112 pg[9.13( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=112) [2]/[0] r=-1 lpr=112 pi=[84,112)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:36:58 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 22 03:36:58 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 112 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=84/85 n=5 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=112) [2]/[0] r=0 lpr=112 pi=[84,112)/1 crt=76'387 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:36:58 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 112 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=84/85 n=5 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=112) [2]/[0] r=0 lpr=112 pi=[84,112)/1 crt=76'387 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:36:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:58 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 22 03:36:58 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 22 03:36:58 np0005532048 python3.9[108058]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:36:58 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 22 03:36:58 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 22 03:36:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 22 03:36:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 22 03:36:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 22 03:36:59 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 113 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=112/113 n=5 ec=77/49 lis/c=84/84 les/c/f=85/85/0 sis=112) [2]/[0] async=[2] r=0 lpr=112 pi=[84,112)/1 crt=76'387 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:36:59 np0005532048 python3.9[108142]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:36:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 22 03:36:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 22 03:37:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 22 03:37:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 22 03:37:00 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 22 03:37:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:00 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 114 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=112/113 n=5 ec=77/49 lis/c=112/84 les/c/f=113/85/0 sis=114 pruub=14.991329193s) [2] async=[2] r=-1 lpr=114 pi=[84,114)/1 crt=76'387 mlcod 76'387 active pruub 206.404098511s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:00 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 114 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=112/113 n=5 ec=77/49 lis/c=112/84 les/c/f=113/85/0 sis=114 pruub=14.990860939s) [2] r=-1 lpr=114 pi=[84,114)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 206.404098511s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:00 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 114 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=112/84 les/c/f=113/85/0 sis=114) [2] r=0 lpr=114 pi=[84,114)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:00 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 114 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=112/84 les/c/f=113/85/0 sis=114) [2] r=0 lpr=114 pi=[84,114)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 22 03:37:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 22 03:37:01 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 22 03:37:01 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 115 pg[9.13( v 76'387 (0'0,76'387] local-lis/les=114/115 n=5 ec=77/49 lis/c=112/84 les/c/f=113/85/0 sis=114) [2] r=0 lpr=114 pi=[84,114)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:37:01 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 22 03:37:01 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:37:02 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.e deep-scrub starts
Nov 22 03:37:02 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.e deep-scrub ok
Nov 22 03:37:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Nov 22 03:37:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 22 03:37:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 22 03:37:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 22 03:37:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 22 03:37:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 22 03:37:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 22 03:37:04 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 22 03:37:04 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 22 03:37:04 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 22 03:37:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 22 03:37:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Nov 22 03:37:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 22 03:37:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 22 03:37:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 22 03:37:06 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 22 03:37:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 22 03:37:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 22 03:37:06 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 22 03:37:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 22 03:37:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 4.14 deep-scrub starts
Nov 22 03:37:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 4.14 deep-scrub ok
Nov 22 03:37:07 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 117 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=85/86 n=5 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=117 pruub=11.538623810s) [1] r=-1 lpr=117 pi=[85,117)/1 crt=76'387 mlcod 0'0 active pruub 210.792541504s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:07 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 117 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=85/86 n=5 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=117 pruub=11.538399696s) [1] r=-1 lpr=117 pi=[85,117)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 210.792541504s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:07 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 117 pg[9.15( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=117) [1] r=0 lpr=117 pi=[85,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 146 B/s wr, 5 op/s; 31 B/s, 1 objects/s recovering
Nov 22 03:37:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 22 03:37:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 22 03:37:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 22 03:37:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 22 03:37:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 22 03:37:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 22 03:37:08 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 22 03:37:08 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 118 pg[9.15( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:08 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 118 pg[9.15( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=-1 lpr=118 pi=[85,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:08 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 118 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=118 pruub=12.750043869s) [0] r=-1 lpr=118 pi=[96,118)/1 crt=76'387 mlcod 0'0 active pruub 200.581039429s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:08 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 118 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=118 pruub=12.749948502s) [0] r=-1 lpr=118 pi=[96,118)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 200.581039429s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:08 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 118 pg[9.16( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=118) [0] r=0 lpr=118 pi=[96,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:08 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 118 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=85/86 n=5 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=0 lpr=118 pi=[85,118)/1 crt=76'387 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:08 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 118 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=85/86 n=5 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] r=0 lpr=118 pi=[85,118)/1 crt=76'387 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 22 03:37:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 22 03:37:08 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 22 03:37:08 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 119 pg[9.16( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[96,119)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:08 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 119 pg[9.16( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[96,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:08 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 119 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=119) [0]/[2] r=0 lpr=119 pi=[96,119)/1 crt=76'387 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:08 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 119 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=119) [0]/[2] r=0 lpr=119 pi=[96,119)/1 crt=76'387 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:08 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 119 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=118/119 n=5 ec=77/49 lis/c=85/85 les/c/f=86/86/0 sis=118) [1]/[0] async=[1] r=0 lpr=118 pi=[85,118)/1 crt=76'387 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:37:08 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 4.18 deep-scrub starts
Nov 22 03:37:08 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 4.18 deep-scrub ok
Nov 22 03:37:09 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 22 03:37:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 22 03:37:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 22 03:37:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 22 03:37:09 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 120 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=118/119 n=5 ec=77/49 lis/c=118/85 les/c/f=119/86/0 sis=120 pruub=15.072413445s) [1] async=[1] r=-1 lpr=120 pi=[85,120)/1 crt=76'387 mlcod 76'387 active pruub 215.715347290s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:09 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 120 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=118/119 n=5 ec=77/49 lis/c=118/85 les/c/f=119/86/0 sis=120 pruub=15.072171211s) [1] r=-1 lpr=120 pi=[85,120)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 215.715347290s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:09 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 120 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:09 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 120 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:09 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 120 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=119/120 n=5 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=119) [0]/[2] async=[0] r=0 lpr=119 pi=[96,119)/1 crt=76'387 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:37:09 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 22 03:37:09 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 22 03:37:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 22 03:37:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 22 03:37:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 22 03:37:10 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 22 03:37:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 22 03:37:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 22 03:37:10 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 22 03:37:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 121 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=119/120 n=5 ec=77/49 lis/c=119/96 les/c/f=120/97/0 sis=121 pruub=14.990451813s) [0] async=[0] r=-1 lpr=121 pi=[96,121)/1 crt=76'387 mlcod 76'387 active pruub 204.955978394s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:10 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 121 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=119/120 n=5 ec=77/49 lis/c=119/96 les/c/f=120/97/0 sis=121 pruub=14.990345001s) [0] r=-1 lpr=121 pi=[96,121)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 204.955978394s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:10 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 121 pg[9.15( v 76'387 (0'0,76'387] local-lis/les=120/121 n=5 ec=77/49 lis/c=118/85 les/c/f=119/86/0 sis=120) [1] r=0 lpr=120 pi=[85,120)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:37:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 121 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=119/96 les/c/f=120/97/0 sis=121) [0] r=0 lpr=121 pi=[96,121)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:10 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 121 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=119/96 les/c/f=120/97/0 sis=121) [0] r=0 lpr=121 pi=[96,121)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:10 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 22 03:37:10 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 22 03:37:10 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 22 03:37:10 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 22 03:37:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 22 03:37:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 22 03:37:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 22 03:37:11 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 22 03:37:11 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 122 pg[9.16( v 76'387 (0'0,76'387] local-lis/les=121/122 n=5 ec=77/49 lis/c=119/96 les/c/f=120/97/0 sis=121) [0] r=0 lpr=121 pi=[96,121)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:37:11 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 22 03:37:11 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 22 03:37:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 1 objects/s recovering
Nov 22 03:37:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 22 03:37:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 22 03:37:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 22 03:37:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 22 03:37:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 22 03:37:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 22 03:37:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 22 03:37:12 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 22 03:37:12 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 22 03:37:12 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 22 03:37:12 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 22 03:37:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 22 03:37:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 22 03:37:13 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 22 03:37:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 46 B/s, 1 objects/s recovering
Nov 22 03:37:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 22 03:37:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 22 03:37:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 22 03:37:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 22 03:37:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 22 03:37:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 22 03:37:14 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 22 03:37:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 22 03:37:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 22 03:37:15 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 22 03:37:15 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 22 03:37:15 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 22 03:37:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 38 B/s, 1 objects/s recovering
Nov 22 03:37:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 22 03:37:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 22 03:37:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 22 03:37:16 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 22 03:37:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 22 03:37:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 22 03:37:16 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 22 03:37:16 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 124 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=86/87 n=5 ec=77/49 lis/c=86/86 les/c/f=87/87/0 sis=124 pruub=11.962144852s) [2] r=-1 lpr=124 pi=[86,124)/1 crt=76'387 mlcod 0'0 active pruub 219.815017700s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:16 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 125 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=86/87 n=5 ec=77/49 lis/c=86/86 les/c/f=87/87/0 sis=124 pruub=11.962024689s) [2] r=-1 lpr=124 pi=[86,124)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 219.815017700s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:16 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 125 pg[9.19( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=86/86 les/c/f=87/87/0 sis=124) [2] r=0 lpr=125 pi=[86,124)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 22 03:37:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 22 03:37:17 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 22 03:37:17 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 126 pg[9.19( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=86/86 les/c/f=87/87/0 sis=126) [2]/[0] r=-1 lpr=126 pi=[86,126)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:17 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 126 pg[9.19( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=86/86 les/c/f=87/87/0 sis=126) [2]/[0] r=-1 lpr=126 pi=[86,126)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:17 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 22 03:37:17 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 126 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=86/87 n=5 ec=77/49 lis/c=86/86 les/c/f=87/87/0 sis=126) [2]/[0] r=0 lpr=126 pi=[86,126)/1 crt=76'387 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:17 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 126 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=86/87 n=5 ec=77/49 lis/c=86/86 les/c/f=87/87/0 sis=126) [2]/[0] r=0 lpr=126 pi=[86,126)/1 crt=76'387 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Nov 22 03:37:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 22 03:37:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 22 03:37:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 22 03:37:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 22 03:37:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 22 03:37:18 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 22 03:37:18 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 22 03:37:18 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 127 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=126/127 n=5 ec=77/49 lis/c=86/86 les/c/f=87/87/0 sis=126) [2]/[0] async=[2] r=0 lpr=126 pi=[86,126)/1 crt=76'387 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:37:18 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 5.7 deep-scrub starts
Nov 22 03:37:18 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 5.7 deep-scrub ok
Nov 22 03:37:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 22 03:37:19 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 22 03:37:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 22 03:37:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 22 03:37:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 128 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=126/86 les/c/f=127/87/0 sis=128) [2] r=0 lpr=128 pi=[86,128)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:19 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 128 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=126/86 les/c/f=127/87/0 sis=128) [2] r=0 lpr=128 pi=[86,128)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 128 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=126/127 n=5 ec=77/49 lis/c=126/86 les/c/f=127/87/0 sis=128 pruub=14.995551109s) [2] async=[2] r=-1 lpr=128 pi=[86,128)/1 crt=76'387 mlcod 76'387 active pruub 225.832290649s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:19 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 128 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=126/127 n=5 ec=77/49 lis/c=126/86 les/c/f=127/87/0 sis=128 pruub=14.995267868s) [2] r=-1 lpr=128 pi=[86,128)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 225.832290649s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 22 03:37:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 22 03:37:20 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 22 03:37:20 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 129 pg[9.19( v 76'387 (0'0,76'387] local-lis/les=128/129 n=5 ec=77/49 lis/c=126/86 les/c/f=127/87/0 sis=128) [2] r=0 lpr=128 pi=[86,128)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:37:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 22 03:37:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 22 03:37:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:22 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 22 03:37:22 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 22 03:37:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:37:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:37:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:37:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:37:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:37:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:37:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 22 03:37:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 22 03:37:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 22 03:37:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 22 03:37:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 22 03:37:24 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 22 03:37:24 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 22 03:37:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 22 03:37:24 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 22 03:37:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 22 03:37:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 22 03:37:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 22 03:37:24 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 130 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=105/106 n=5 ec=77/49 lis/c=105/105 les/c/f=106/106/0 sis=130 pruub=10.062794685s) [0] r=-1 lpr=130 pi=[105,130)/1 crt=76'387 mlcod 0'0 active pruub 214.285095215s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:24 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 130 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=105/106 n=5 ec=77/49 lis/c=105/105 les/c/f=106/106/0 sis=130 pruub=10.062462807s) [0] r=-1 lpr=130 pi=[105,130)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 214.285095215s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:24 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 22 03:37:24 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 130 pg[9.1c( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=105/105 les/c/f=106/106/0 sis=130) [0] r=0 lpr=130 pi=[105,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:24 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 22 03:37:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 22 03:37:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 22 03:37:25 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 22 03:37:25 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 22 03:37:25 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 131 pg[9.1c( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=105/105 les/c/f=106/106/0 sis=131) [0]/[2] r=-1 lpr=131 pi=[105,131)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:25 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 131 pg[9.1c( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=105/105 les/c/f=106/106/0 sis=131) [0]/[2] r=-1 lpr=131 pi=[105,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:25 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 131 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=105/106 n=5 ec=77/49 lis/c=105/105 les/c/f=106/106/0 sis=131) [0]/[2] r=0 lpr=131 pi=[105,131)/1 crt=76'387 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:25 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 131 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=105/106 n=5 ec=77/49 lis/c=105/105 les/c/f=106/106/0 sis=131) [0]/[2] r=0 lpr=131 pi=[105,131)/1 crt=76'387 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 22 03:37:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 22 03:37:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 22 03:37:26 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 22 03:37:26 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 22 03:37:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 22 03:37:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 22 03:37:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 22 03:37:26 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 22 03:37:26 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 22 03:37:26 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 132 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=131/132 n=5 ec=77/49 lis/c=105/105 les/c/f=106/106/0 sis=131) [0]/[2] async=[0] r=0 lpr=131 pi=[105,131)/1 crt=76'387 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:37:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 22 03:37:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 22 03:37:27 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 22 03:37:27 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 22 03:37:27 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 133 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=131/105 les/c/f=132/106/0 sis=133) [0] r=0 lpr=133 pi=[105,133)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:27 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 133 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=131/105 les/c/f=132/106/0 sis=133) [0] r=0 lpr=133 pi=[105,133)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:27 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 133 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=131/132 n=5 ec=77/49 lis/c=131/105 les/c/f=132/106/0 sis=133 pruub=15.003668785s) [0] async=[0] r=-1 lpr=133 pi=[105,133)/1 crt=76'387 mlcod 76'387 active pruub 222.282226562s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:27 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 133 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=131/132 n=5 ec=77/49 lis/c=131/105 les/c/f=132/106/0 sis=133 pruub=15.003452301s) [0] r=-1 lpr=133 pi=[105,133)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 222.282226562s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:27 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 22 03:37:27 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 22 03:37:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 141 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 22 03:37:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 22 03:37:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:28 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 6.8 deep-scrub starts
Nov 22 03:37:28 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 6.8 deep-scrub ok
Nov 22 03:37:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 22 03:37:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 22 03:37:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 22 03:37:28 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 22 03:37:28 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 22 03:37:28 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 22 03:37:28 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 134 pg[9.1c( v 76'387 (0'0,76'387] local-lis/les=133/134 n=5 ec=77/49 lis/c=131/105 les/c/f=132/106/0 sis=133) [0] r=0 lpr=133 pi=[105,133)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:37:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 22 03:37:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 22 03:37:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 134 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=134 pruub=15.399241447s) [0] r=-1 lpr=134 pi=[96,134)/1 crt=76'387 mlcod 0'0 active pruub 224.581726074s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 134 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=134 pruub=15.398857117s) [0] r=-1 lpr=134 pi=[96,134)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 224.581726074s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:29 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 134 pg[9.1e( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=134) [0] r=0 lpr=134 pi=[96,134)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:29 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 22 03:37:29 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 22 03:37:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 22 03:37:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 22 03:37:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 22 03:37:29 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 135 pg[9.1e( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=135) [0]/[2] r=-1 lpr=135 pi=[96,135)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:29 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 135 pg[9.1e( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=135) [0]/[2] r=-1 lpr=135 pi=[96,135)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 135 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=135) [0]/[2] r=0 lpr=135 pi=[96,135)/1 crt=76'387 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:29 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 135 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=135) [0]/[2] r=0 lpr=135 pi=[96,135)/1 crt=76'387 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 22 03:37:30 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 22 03:37:30 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 22 03:37:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 22 03:37:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 22 03:37:30 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 22 03:37:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 136 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=135/136 n=5 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=135) [0]/[2] async=[0] r=0 lpr=135 pi=[96,135)/1 crt=76'387 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:37:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 22 03:37:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 22 03:37:31 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 22 03:37:31 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 137 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=135/96 les/c/f=136/97/0 sis=137) [0] r=0 lpr=137 pi=[96,137)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:31 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 137 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=135/96 les/c/f=136/97/0 sis=137) [0] r=0 lpr=137 pi=[96,137)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 137 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=135/136 n=5 ec=77/49 lis/c=135/96 les/c/f=136/97/0 sis=137 pruub=15.405105591s) [0] async=[0] r=-1 lpr=137 pi=[96,137)/1 crt=76'387 mlcod 76'387 active pruub 226.764846802s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:31 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 137 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=135/136 n=5 ec=77/49 lis/c=135/96 les/c/f=136/97/0 sis=137 pruub=15.404996872s) [0] r=-1 lpr=137 pi=[96,137)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 226.764846802s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 22 03:37:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 22 03:37:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 22 03:37:32 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 22 03:37:32 np0005532048 ceph-osd[88656]: osd.0 pg_epoch: 138 pg[9.1e( v 76'387 (0'0,76'387] local-lis/les=137/138 n=5 ec=77/49 lis/c=135/96 les/c/f=136/97/0 sis=137) [0] r=0 lpr=137 pi=[96,137)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:37:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 22 03:37:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 22 03:37:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:33 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 22 03:37:33 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 22 03:37:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 22 03:37:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:37:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:37:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 22 03:37:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:37:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:37:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 22 03:37:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 22 03:37:35 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 22 03:37:35 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 22 03:37:35 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:37:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 22 03:37:36 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 139 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=139 pruub=8.632966042s) [1] r=-1 lpr=139 pi=[96,139)/1 crt=76'387 mlcod 0'0 active pruub 224.581710815s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:36 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 139 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=139 pruub=8.632266045s) [1] r=-1 lpr=139 pi=[96,139)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 224.581710815s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:36 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 139 pg[9.1f( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=139) [1] r=0 lpr=139 pi=[96,139)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:36 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 22 03:37:36 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 22 03:37:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 22 03:37:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 22 03:37:36 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 140 pg[9.1f( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=140) [1]/[2] r=-1 lpr=140 pi=[96,140)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:36 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 140 pg[9.1f( empty local-lis/les=0/0 n=0 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=140) [1]/[2] r=-1 lpr=140 pi=[96,140)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:36 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 22 03:37:36 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 140 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=140) [1]/[2] r=0 lpr=140 pi=[96,140)/1 crt=76'387 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:36 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 140 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=96/97 n=5 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=140) [1]/[2] r=0 lpr=140 pi=[96,140)/1 crt=76'387 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:37 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 22 03:37:37 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 22 03:37:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 22 03:37:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 22 03:37:37 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 22 03:37:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:37:38 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 141 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=140/141 n=5 ec=77/49 lis/c=96/96 les/c/f=97/97/0 sis=140) [1]/[2] async=[1] r=0 lpr=140 pi=[96,140)/1 crt=76'387 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:37:38 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 22 03:37:38 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 22 03:37:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 22 03:37:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 22 03:37:38 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 22 03:37:38 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 142 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=140/96 les/c/f=141/97/0 sis=142) [1] r=0 lpr=142 pi=[96,142)/1 luod=0'0 crt=76'387 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:38 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 142 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=0/0 n=5 ec=77/49 lis/c=140/96 les/c/f=141/97/0 sis=142) [1] r=0 lpr=142 pi=[96,142)/1 crt=76'387 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:37:38 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 142 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=140/141 n=5 ec=77/49 lis/c=140/96 les/c/f=141/97/0 sis=142 pruub=15.531610489s) [1] async=[1] r=-1 lpr=142 pi=[96,142)/1 crt=76'387 mlcod 76'387 active pruub 233.976196289s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:37:38 np0005532048 ceph-osd[90703]: osd.2 pg_epoch: 142 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=140/141 n=5 ec=77/49 lis/c=140/96 les/c/f=141/97/0 sis=142 pruub=15.531504631s) [1] r=-1 lpr=142 pi=[96,142)/1 crt=76'387 mlcod 0'0 unknown NOTIFY pruub 233.976196289s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:37:39 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Nov 22 03:37:39 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Nov 22 03:37:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 22 03:37:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 22 03:37:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 22 03:37:39 np0005532048 ceph-osd[89679]: osd.1 pg_epoch: 143 pg[9.1f( v 76'387 (0'0,76'387] local-lis/les=142/143 n=5 ec=77/49 lis/c=140/96 les/c/f=141/97/0 sis=142) [1] r=0 lpr=142 pi=[96,142)/1 crt=76'387 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:37:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:37:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:37:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:37:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:37:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:37:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:37:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 342baad2-bc16-4087-9936-5ae122873621 does not exist
Nov 22 03:37:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ece0fb18-37af-41ef-bca5-b2387726f96c does not exist
Nov 22 03:37:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d854fa1c-d761-4721-8ed1-8b0aa567ecc8 does not exist
Nov 22 03:37:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:37:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:37:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:37:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:37:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:37:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:37:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:37:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:37:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:37:41 np0005532048 podman[108560]: 2025-11-22 08:37:41.104807024 +0000 UTC m=+0.054057618 container create 5f895d8a9a81a44f320376ac5d3bc1d176f7cae59e7216c19965cce170b7e572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:37:41 np0005532048 systemd[76524]: Created slice User Background Tasks Slice.
Nov 22 03:37:41 np0005532048 systemd[76524]: Starting Cleanup of User's Temporary Files and Directories...
Nov 22 03:37:41 np0005532048 systemd[1]: Started libpod-conmon-5f895d8a9a81a44f320376ac5d3bc1d176f7cae59e7216c19965cce170b7e572.scope.
Nov 22 03:37:41 np0005532048 systemd[76524]: Finished Cleanup of User's Temporary Files and Directories.
Nov 22 03:37:41 np0005532048 podman[108560]: 2025-11-22 08:37:41.078616426 +0000 UTC m=+0.027867040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:37:41 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:37:41 np0005532048 podman[108560]: 2025-11-22 08:37:41.197573501 +0000 UTC m=+0.146824115 container init 5f895d8a9a81a44f320376ac5d3bc1d176f7cae59e7216c19965cce170b7e572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:37:41 np0005532048 podman[108560]: 2025-11-22 08:37:41.20788485 +0000 UTC m=+0.157135454 container start 5f895d8a9a81a44f320376ac5d3bc1d176f7cae59e7216c19965cce170b7e572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_williams, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:37:41 np0005532048 podman[108560]: 2025-11-22 08:37:41.212496496 +0000 UTC m=+0.161747120 container attach 5f895d8a9a81a44f320376ac5d3bc1d176f7cae59e7216c19965cce170b7e572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 22 03:37:41 np0005532048 systemd[1]: libpod-5f895d8a9a81a44f320376ac5d3bc1d176f7cae59e7216c19965cce170b7e572.scope: Deactivated successfully.
Nov 22 03:37:41 np0005532048 blissful_williams[108578]: 167 167
Nov 22 03:37:41 np0005532048 podman[108560]: 2025-11-22 08:37:41.218025634 +0000 UTC m=+0.167276238 container died 5f895d8a9a81a44f320376ac5d3bc1d176f7cae59e7216c19965cce170b7e572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_williams, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 03:37:41 np0005532048 conmon[108578]: conmon 5f895d8a9a81a44f3203 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5f895d8a9a81a44f320376ac5d3bc1d176f7cae59e7216c19965cce170b7e572.scope/container/memory.events
Nov 22 03:37:41 np0005532048 systemd[1]: var-lib-containers-storage-overlay-3e4eeececd9fb267b36816c7c3173b9cd113f1fc68277734f9586b338aa709d7-merged.mount: Deactivated successfully.
Nov 22 03:37:41 np0005532048 podman[108560]: 2025-11-22 08:37:41.268681695 +0000 UTC m=+0.217932289 container remove 5f895d8a9a81a44f320376ac5d3bc1d176f7cae59e7216c19965cce170b7e572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_williams, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:37:41 np0005532048 systemd[1]: libpod-conmon-5f895d8a9a81a44f320376ac5d3bc1d176f7cae59e7216c19965cce170b7e572.scope: Deactivated successfully.
Nov 22 03:37:41 np0005532048 podman[108604]: 2025-11-22 08:37:41.446641949 +0000 UTC m=+0.045654625 container create 7258b7d06bca328c7b04fd7f9d80f10e9731ba8b5682b101834992c6dc3184db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:37:41 np0005532048 systemd[1]: Started libpod-conmon-7258b7d06bca328c7b04fd7f9d80f10e9731ba8b5682b101834992c6dc3184db.scope.
Nov 22 03:37:41 np0005532048 podman[108604]: 2025-11-22 08:37:41.425864138 +0000 UTC m=+0.024876854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:37:41 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:37:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed2f6d5b9115b62aa057e40bc814534a88086cc7c27ed260646c069a728874c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed2f6d5b9115b62aa057e40bc814534a88086cc7c27ed260646c069a728874c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed2f6d5b9115b62aa057e40bc814534a88086cc7c27ed260646c069a728874c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed2f6d5b9115b62aa057e40bc814534a88086cc7c27ed260646c069a728874c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed2f6d5b9115b62aa057e40bc814534a88086cc7c27ed260646c069a728874c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:41 np0005532048 podman[108604]: 2025-11-22 08:37:41.565271616 +0000 UTC m=+0.164284312 container init 7258b7d06bca328c7b04fd7f9d80f10e9731ba8b5682b101834992c6dc3184db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_johnson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:37:41 np0005532048 podman[108604]: 2025-11-22 08:37:41.573584454 +0000 UTC m=+0.172597130 container start 7258b7d06bca328c7b04fd7f9d80f10e9731ba8b5682b101834992c6dc3184db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_johnson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:37:41 np0005532048 podman[108604]: 2025-11-22 08:37:41.577952874 +0000 UTC m=+0.176965560 container attach 7258b7d06bca328c7b04fd7f9d80f10e9731ba8b5682b101834992c6dc3184db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:37:41 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 22 03:37:41 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 22 03:37:42 np0005532048 python3.9[108775]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:37:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 6 op/s; 61 B/s, 2 objects/s recovering
Nov 22 03:37:42 np0005532048 suspicious_johnson[108655]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:37:42 np0005532048 suspicious_johnson[108655]: --> relative data size: 1.0
Nov 22 03:37:42 np0005532048 suspicious_johnson[108655]: --> All data devices are unavailable
Nov 22 03:37:42 np0005532048 systemd[1]: libpod-7258b7d06bca328c7b04fd7f9d80f10e9731ba8b5682b101834992c6dc3184db.scope: Deactivated successfully.
Nov 22 03:37:42 np0005532048 podman[108604]: 2025-11-22 08:37:42.672306529 +0000 UTC m=+1.271319225 container died 7258b7d06bca328c7b04fd7f9d80f10e9731ba8b5682b101834992c6dc3184db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:37:42 np0005532048 systemd[1]: libpod-7258b7d06bca328c7b04fd7f9d80f10e9731ba8b5682b101834992c6dc3184db.scope: Consumed 1.039s CPU time.
Nov 22 03:37:42 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6ed2f6d5b9115b62aa057e40bc814534a88086cc7c27ed260646c069a728874c-merged.mount: Deactivated successfully.
Nov 22 03:37:42 np0005532048 podman[108604]: 2025-11-22 08:37:42.737611116 +0000 UTC m=+1.336623792 container remove 7258b7d06bca328c7b04fd7f9d80f10e9731ba8b5682b101834992c6dc3184db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_johnson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:37:42 np0005532048 systemd[1]: libpod-conmon-7258b7d06bca328c7b04fd7f9d80f10e9731ba8b5682b101834992c6dc3184db.scope: Deactivated successfully.
Nov 22 03:37:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:37:43 np0005532048 podman[109164]: 2025-11-22 08:37:43.47754312 +0000 UTC m=+0.048928849 container create 1ac00ffae1643892fb9b97c1191e027748743bf06250b8874fc5324216704f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:37:43 np0005532048 systemd[1]: Started libpod-conmon-1ac00ffae1643892fb9b97c1191e027748743bf06250b8874fc5324216704f51.scope.
Nov 22 03:37:43 np0005532048 podman[109164]: 2025-11-22 08:37:43.453286361 +0000 UTC m=+0.024672180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:37:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:37:43 np0005532048 podman[109164]: 2025-11-22 08:37:43.569854886 +0000 UTC m=+0.141240635 container init 1ac00ffae1643892fb9b97c1191e027748743bf06250b8874fc5324216704f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:37:43 np0005532048 podman[109164]: 2025-11-22 08:37:43.587295693 +0000 UTC m=+0.158681422 container start 1ac00ffae1643892fb9b97c1191e027748743bf06250b8874fc5324216704f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 22 03:37:43 np0005532048 podman[109164]: 2025-11-22 08:37:43.592207146 +0000 UTC m=+0.163592925 container attach 1ac00ffae1643892fb9b97c1191e027748743bf06250b8874fc5324216704f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:37:43 np0005532048 recursing_burnell[109204]: 167 167
Nov 22 03:37:43 np0005532048 systemd[1]: libpod-1ac00ffae1643892fb9b97c1191e027748743bf06250b8874fc5324216704f51.scope: Deactivated successfully.
Nov 22 03:37:43 np0005532048 conmon[109204]: conmon 1ac00ffae1643892fb9b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ac00ffae1643892fb9b97c1191e027748743bf06250b8874fc5324216704f51.scope/container/memory.events
Nov 22 03:37:43 np0005532048 podman[109164]: 2025-11-22 08:37:43.595192481 +0000 UTC m=+0.166578250 container died 1ac00ffae1643892fb9b97c1191e027748743bf06250b8874fc5324216704f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:37:43 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5d880d29d24be77245cbbb95d7182f7b7ce885b7561c97eeecab83c49339daa2-merged.mount: Deactivated successfully.
Nov 22 03:37:43 np0005532048 podman[109164]: 2025-11-22 08:37:43.640778904 +0000 UTC m=+0.212164633 container remove 1ac00ffae1643892fb9b97c1191e027748743bf06250b8874fc5324216704f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_burnell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:37:43 np0005532048 systemd[1]: libpod-conmon-1ac00ffae1643892fb9b97c1191e027748743bf06250b8874fc5324216704f51.scope: Deactivated successfully.
Nov 22 03:37:43 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 22 03:37:43 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 22 03:37:43 np0005532048 podman[109279]: 2025-11-22 08:37:43.812501163 +0000 UTC m=+0.054363235 container create 4331b0066d22874af7fdde9bd4e672fc5f152f65272da6589a1b7bfd35b09332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_robinson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:37:43 np0005532048 systemd[1]: Started libpod-conmon-4331b0066d22874af7fdde9bd4e672fc5f152f65272da6589a1b7bfd35b09332.scope.
Nov 22 03:37:43 np0005532048 podman[109279]: 2025-11-22 08:37:43.785013583 +0000 UTC m=+0.026875655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:37:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:37:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/194dba9a686ae4a223b13eadbbebc6e382c58ef8eac6e38ddf6b57db789c07d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/194dba9a686ae4a223b13eadbbebc6e382c58ef8eac6e38ddf6b57db789c07d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/194dba9a686ae4a223b13eadbbebc6e382c58ef8eac6e38ddf6b57db789c07d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/194dba9a686ae4a223b13eadbbebc6e382c58ef8eac6e38ddf6b57db789c07d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:43 np0005532048 podman[109279]: 2025-11-22 08:37:43.909745722 +0000 UTC m=+0.151607794 container init 4331b0066d22874af7fdde9bd4e672fc5f152f65272da6589a1b7bfd35b09332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_robinson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:37:43 np0005532048 podman[109279]: 2025-11-22 08:37:43.917103347 +0000 UTC m=+0.158965399 container start 4331b0066d22874af7fdde9bd4e672fc5f152f65272da6589a1b7bfd35b09332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_robinson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:37:43 np0005532048 podman[109279]: 2025-11-22 08:37:43.931539859 +0000 UTC m=+0.173401941 container attach 4331b0066d22874af7fdde9bd4e672fc5f152f65272da6589a1b7bfd35b09332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:37:43 np0005532048 python3.9[109273]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 22 03:37:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 162 B/s wr, 5 op/s; 52 B/s, 2 objects/s recovering
Nov 22 03:37:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 22 03:37:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 22 03:37:44 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 22 03:37:44 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 22 03:37:44 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 22 03:37:44 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]: {
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:    "0": [
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:        {
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "devices": [
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "/dev/loop3"
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            ],
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "lv_name": "ceph_lv0",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "lv_size": "21470642176",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "name": "ceph_lv0",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "tags": {
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.cluster_name": "ceph",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.crush_device_class": "",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.encrypted": "0",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.osd_id": "0",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.type": "block",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.vdo": "0"
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            },
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "type": "block",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "vg_name": "ceph_vg0"
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:        }
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:    ],
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:    "1": [
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:        {
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "devices": [
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "/dev/loop4"
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            ],
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "lv_name": "ceph_lv1",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "lv_size": "21470642176",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "name": "ceph_lv1",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "tags": {
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.cluster_name": "ceph",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.crush_device_class": "",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.encrypted": "0",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.osd_id": "1",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.type": "block",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.vdo": "0"
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            },
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "type": "block",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "vg_name": "ceph_vg1"
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:        }
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:    ],
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:    "2": [
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:        {
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "devices": [
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "/dev/loop5"
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            ],
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "lv_name": "ceph_lv2",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "lv_size": "21470642176",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "name": "ceph_lv2",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "tags": {
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.cluster_name": "ceph",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.crush_device_class": "",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.encrypted": "0",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.osd_id": "2",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.type": "block",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:                "ceph.vdo": "0"
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            },
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "type": "block",
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:            "vg_name": "ceph_vg2"
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:        }
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]:    ]
Nov 22 03:37:44 np0005532048 intelligent_robinson[109295]: }
Nov 22 03:37:44 np0005532048 systemd[1]: libpod-4331b0066d22874af7fdde9bd4e672fc5f152f65272da6589a1b7bfd35b09332.scope: Deactivated successfully.
Nov 22 03:37:44 np0005532048 podman[109279]: 2025-11-22 08:37:44.795864552 +0000 UTC m=+1.037726604 container died 4331b0066d22874af7fdde9bd4e672fc5f152f65272da6589a1b7bfd35b09332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 03:37:44 np0005532048 systemd[1]: var-lib-containers-storage-overlay-194dba9a686ae4a223b13eadbbebc6e382c58ef8eac6e38ddf6b57db789c07d4-merged.mount: Deactivated successfully.
Nov 22 03:37:44 np0005532048 python3.9[109451]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 22 03:37:44 np0005532048 podman[109279]: 2025-11-22 08:37:44.876463935 +0000 UTC m=+1.118325987 container remove 4331b0066d22874af7fdde9bd4e672fc5f152f65272da6589a1b7bfd35b09332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_robinson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:37:44 np0005532048 systemd[1]: libpod-conmon-4331b0066d22874af7fdde9bd4e672fc5f152f65272da6589a1b7bfd35b09332.scope: Deactivated successfully.
Nov 22 03:37:45 np0005532048 python3.9[109722]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:37:45 np0005532048 podman[109760]: 2025-11-22 08:37:45.576556088 +0000 UTC m=+0.043262466 container create dfac1980dc2c2a9b11182d3c9127740454a2b76f05292c9d95869248b05f4109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:37:45 np0005532048 systemd[1]: Started libpod-conmon-dfac1980dc2c2a9b11182d3c9127740454a2b76f05292c9d95869248b05f4109.scope.
Nov 22 03:37:45 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 22 03:37:45 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:37:45 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 22 03:37:45 np0005532048 podman[109760]: 2025-11-22 08:37:45.558260589 +0000 UTC m=+0.024966977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:37:45 np0005532048 podman[109760]: 2025-11-22 08:37:45.659878348 +0000 UTC m=+0.126584786 container init dfac1980dc2c2a9b11182d3c9127740454a2b76f05292c9d95869248b05f4109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keldysh, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 03:37:45 np0005532048 podman[109760]: 2025-11-22 08:37:45.66714623 +0000 UTC m=+0.133852608 container start dfac1980dc2c2a9b11182d3c9127740454a2b76f05292c9d95869248b05f4109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keldysh, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:37:45 np0005532048 podman[109760]: 2025-11-22 08:37:45.670556056 +0000 UTC m=+0.137262434 container attach dfac1980dc2c2a9b11182d3c9127740454a2b76f05292c9d95869248b05f4109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:37:45 np0005532048 elegant_keldysh[109777]: 167 167
Nov 22 03:37:45 np0005532048 systemd[1]: libpod-dfac1980dc2c2a9b11182d3c9127740454a2b76f05292c9d95869248b05f4109.scope: Deactivated successfully.
Nov 22 03:37:45 np0005532048 podman[109760]: 2025-11-22 08:37:45.674164197 +0000 UTC m=+0.140870575 container died dfac1980dc2c2a9b11182d3c9127740454a2b76f05292c9d95869248b05f4109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 03:37:45 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f8add07a415abebd041723956865b253a6116836a87b985501da49365fba3038-merged.mount: Deactivated successfully.
Nov 22 03:37:45 np0005532048 podman[109760]: 2025-11-22 08:37:45.710151469 +0000 UTC m=+0.176857857 container remove dfac1980dc2c2a9b11182d3c9127740454a2b76f05292c9d95869248b05f4109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keldysh, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:37:45 np0005532048 systemd[1]: libpod-conmon-dfac1980dc2c2a9b11182d3c9127740454a2b76f05292c9d95869248b05f4109.scope: Deactivated successfully.
Nov 22 03:37:45 np0005532048 podman[109871]: 2025-11-22 08:37:45.8687812 +0000 UTC m=+0.048593801 container create 8f634abc7120c79125812fcd8fb3286e6dbca453e573db55a48d49934d4634b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:37:45 np0005532048 systemd[1]: Started libpod-conmon-8f634abc7120c79125812fcd8fb3286e6dbca453e573db55a48d49934d4634b6.scope.
Nov 22 03:37:45 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:37:45 np0005532048 podman[109871]: 2025-11-22 08:37:45.849228899 +0000 UTC m=+0.029041520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:37:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8ce9b27b50e724e5c2fc50c41ff3b8c2956bf88bb928b67c4fcbd48d8d59af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8ce9b27b50e724e5c2fc50c41ff3b8c2956bf88bb928b67c4fcbd48d8d59af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8ce9b27b50e724e5c2fc50c41ff3b8c2956bf88bb928b67c4fcbd48d8d59af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8ce9b27b50e724e5c2fc50c41ff3b8c2956bf88bb928b67c4fcbd48d8d59af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:45 np0005532048 podman[109871]: 2025-11-22 08:37:45.956179472 +0000 UTC m=+0.135992093 container init 8f634abc7120c79125812fcd8fb3286e6dbca453e573db55a48d49934d4634b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:37:45 np0005532048 podman[109871]: 2025-11-22 08:37:45.969358292 +0000 UTC m=+0.149170903 container start 8f634abc7120c79125812fcd8fb3286e6dbca453e573db55a48d49934d4634b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:37:45 np0005532048 podman[109871]: 2025-11-22 08:37:45.972728217 +0000 UTC m=+0.152540828 container attach 8f634abc7120c79125812fcd8fb3286e6dbca453e573db55a48d49934d4634b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:37:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 127 B/s wr, 4 op/s; 41 B/s, 1 objects/s recovering
Nov 22 03:37:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 22 03:37:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 22 03:37:46 np0005532048 python3.9[109972]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 22 03:37:46 np0005532048 silly_faraday[109892]: {
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:        "osd_id": 1,
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:        "type": "bluestore"
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:    },
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:        "osd_id": 0,
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:        "type": "bluestore"
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:    },
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:        "osd_id": 2,
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:        "type": "bluestore"
Nov 22 03:37:46 np0005532048 silly_faraday[109892]:    }
Nov 22 03:37:46 np0005532048 silly_faraday[109892]: }
Nov 22 03:37:46 np0005532048 systemd[1]: libpod-8f634abc7120c79125812fcd8fb3286e6dbca453e573db55a48d49934d4634b6.scope: Deactivated successfully.
Nov 22 03:37:46 np0005532048 podman[109871]: 2025-11-22 08:37:46.983980897 +0000 UTC m=+1.163793518 container died 8f634abc7120c79125812fcd8fb3286e6dbca453e573db55a48d49934d4634b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 03:37:46 np0005532048 systemd[1]: libpod-8f634abc7120c79125812fcd8fb3286e6dbca453e573db55a48d49934d4634b6.scope: Consumed 1.022s CPU time.
Nov 22 03:37:47 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1d8ce9b27b50e724e5c2fc50c41ff3b8c2956bf88bb928b67c4fcbd48d8d59af-merged.mount: Deactivated successfully.
Nov 22 03:37:47 np0005532048 podman[109871]: 2025-11-22 08:37:47.055781528 +0000 UTC m=+1.235594129 container remove 8f634abc7120c79125812fcd8fb3286e6dbca453e573db55a48d49934d4634b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 03:37:47 np0005532048 systemd[1]: libpod-conmon-8f634abc7120c79125812fcd8fb3286e6dbca453e573db55a48d49934d4634b6.scope: Deactivated successfully.
Nov 22 03:37:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:37:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:37:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:37:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:37:47 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c87015bf-f06f-4f45-ba6a-c0b1146cd4e4 does not exist
Nov 22 03:37:47 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 4c81c79c-ad6b-4c31-a573-03d8858d7e69 does not exist
Nov 22 03:37:47 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:37:47 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:37:47 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.3 deep-scrub starts
Nov 22 03:37:47 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.3 deep-scrub ok
Nov 22 03:37:47 np0005532048 python3.9[110216]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:37:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s; 0 B/s, 0 objects/s recovering
Nov 22 03:37:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:37:48 np0005532048 python3.9[110368]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:37:48 np0005532048 python3.9[110446]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:37:49 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 22 03:37:49 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 22 03:37:49 np0005532048 python3.9[110598]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:37:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s; 0 B/s, 0 objects/s recovering
Nov 22 03:37:51 np0005532048 python3.9[110752]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 22 03:37:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 6.c deep-scrub starts
Nov 22 03:37:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 6.c deep-scrub ok
Nov 22 03:37:51 np0005532048 python3.9[110905]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:37:52
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'volumes']
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1 op/s; 0 B/s, 0 objects/s recovering
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:37:52 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:37:52 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:37:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:37:52 np0005532048 python3.9[111058]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 03:37:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:37:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 22 03:37:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 22 03:37:53 np0005532048 python3.9[111210]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 22 03:37:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:54 np0005532048 python3.9[111362]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:37:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 22 03:37:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 22 03:37:56 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 22 03:37:56 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 22 03:37:56 np0005532048 python3.9[111515]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:37:56 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 22 03:37:56 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 22 03:37:57 np0005532048 python3.9[111667]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:37:57 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 22 03:37:57 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 22 03:37:57 np0005532048 python3.9[111745]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:37:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:37:58 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 22 03:37:58 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 22 03:37:58 np0005532048 python3.9[111897]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:37:59 np0005532048 python3.9[111975]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:37:59 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 22 03:37:59 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 22 03:38:00 np0005532048 python3.9[112127]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:38:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.3 deep-scrub starts
Nov 22 03:38:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.3 deep-scrub ok
Nov 22 03:38:00 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 22 03:38:00 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 22 03:38:00 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 22 03:38:00 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 22 03:38:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 22 03:38:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 22 03:38:01 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 22 03:38:01 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:38:02 np0005532048 python3.9[112278]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:38:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 22 03:38:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 22 03:38:03 np0005532048 python3.9[112430]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 22 03:38:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:38:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.8 deep-scrub starts
Nov 22 03:38:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.8 deep-scrub ok
Nov 22 03:38:03 np0005532048 python3.9[112580]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:38:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 22 03:38:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 22 03:38:04 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 22 03:38:04 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 22 03:38:05 np0005532048 python3.9[112732]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:38:05 np0005532048 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 22 03:38:05 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 22 03:38:05 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 22 03:38:05 np0005532048 systemd[1]: tuned.service: Deactivated successfully.
Nov 22 03:38:05 np0005532048 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 22 03:38:05 np0005532048 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 22 03:38:05 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 6.5 deep-scrub starts
Nov 22 03:38:05 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 6.5 deep-scrub ok
Nov 22 03:38:05 np0005532048 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 22 03:38:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:06 np0005532048 python3.9[112893]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 22 03:38:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:38:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 22 03:38:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 22 03:38:08 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Nov 22 03:38:08 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Nov 22 03:38:09 np0005532048 python3.9[113045]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:38:09 np0005532048 python3.9[113199]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:38:09 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 22 03:38:09 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 22 03:38:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:10 np0005532048 systemd[1]: session-35.scope: Deactivated successfully.
Nov 22 03:38:10 np0005532048 systemd[1]: session-35.scope: Consumed 1min 7.556s CPU time.
Nov 22 03:38:10 np0005532048 systemd-logind[822]: Session 35 logged out. Waiting for processes to exit.
Nov 22 03:38:10 np0005532048 systemd-logind[822]: Removed session 35.
Nov 22 03:38:10 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 22 03:38:10 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 22 03:38:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:12 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 22 03:38:12 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 22 03:38:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:38:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 22 03:38:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 22 03:38:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:16 np0005532048 systemd-logind[822]: New session 36 of user zuul.
Nov 22 03:38:16 np0005532048 systemd[1]: Started Session 36 of User zuul.
Nov 22 03:38:16 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 22 03:38:16 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 22 03:38:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 22 03:38:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 22 03:38:17 np0005532048 python3.9[113379]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:38:17 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 22 03:38:17 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 22 03:38:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:38:18 np0005532048 python3.9[113535]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 22 03:38:18 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 22 03:38:18 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 22 03:38:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 22 03:38:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 22 03:38:19 np0005532048 python3.9[113688]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:38:19 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 22 03:38:19 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 22 03:38:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:20 np0005532048 python3.9[113772]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 03:38:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:38:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:38:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:38:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:38:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:38:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:38:22 np0005532048 python3.9[113925]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:38:22 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 22 03:38:22 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 22 03:38:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:38:23 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 22 03:38:23 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 22 03:38:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:25 np0005532048 python3.9[114078]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:38:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:26 np0005532048 python3.9[114231]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:38:26.660457) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800706660571, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7304, "num_deletes": 251, "total_data_size": 8842521, "memory_usage": 9006112, "flush_reason": "Manual Compaction"}
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800706732438, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7348681, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 141, "largest_seqno": 7442, "table_properties": {"data_size": 7321508, "index_size": 17970, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 74924, "raw_average_key_size": 23, "raw_value_size": 7258412, "raw_average_value_size": 2238, "num_data_blocks": 785, "num_entries": 3243, "num_filter_entries": 3243, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800257, "oldest_key_time": 1763800257, "file_creation_time": 1763800706, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 72076 microseconds, and 21813 cpu microseconds.
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:38:26.732531) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7348681 bytes OK
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:38:26.732565) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:38:26.740896) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:38:26.740924) EVENT_LOG_v1 {"time_micros": 1763800706740916, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:38:26.740955) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 8810750, prev total WAL file size 8810750, number of live WAL files 2.
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:38:26.743739) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7176KB) 13(53KB) 8(1944B)]
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800706743870, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7405886, "oldest_snapshot_seqno": -1}
Nov 22 03:38:26 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.f deep-scrub starts
Nov 22 03:38:26 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.f deep-scrub ok
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3059 keys, 7361187 bytes, temperature: kUnknown
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800706827261, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7361187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7334450, "index_size": 17987, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7685, "raw_key_size": 72996, "raw_average_key_size": 23, "raw_value_size": 7272962, "raw_average_value_size": 2377, "num_data_blocks": 787, "num_entries": 3059, "num_filter_entries": 3059, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763800706, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:38:26.827650) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7361187 bytes
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:38:26.833203) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.8 rd, 88.2 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.1, 0.0 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3349, records dropped: 290 output_compression: NoCompression
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:38:26.833243) EVENT_LOG_v1 {"time_micros": 1763800706833224, "job": 4, "event": "compaction_finished", "compaction_time_micros": 83425, "compaction_time_cpu_micros": 17100, "output_level": 6, "num_output_files": 1, "total_output_size": 7361187, "num_input_records": 3349, "num_output_records": 3059, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800706836240, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800706836385, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800706836452, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 22 03:38:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:38:26.743639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:38:27 np0005532048 python3.9[114384]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 22 03:38:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 22 03:38:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 22 03:38:27 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 22 03:38:27 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 22 03:38:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:38:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 22 03:38:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 22 03:38:28 np0005532048 python3.9[114534]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:38:29 np0005532048 python3.9[114692]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:38:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:31 np0005532048 python3.9[114845]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:38:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 22 03:38:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 22 03:38:33 np0005532048 python3.9[115132]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 03:38:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:38:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:34 np0005532048 python3.9[115282]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:38:34 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 22 03:38:34 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 22 03:38:35 np0005532048 python3.9[115436]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:38:35 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 22 03:38:35 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 22 03:38:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:36 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 22 03:38:36 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 22 03:38:37 np0005532048 python3.9[115589]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:38:37 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 22 03:38:37 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 22 03:38:37 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 22 03:38:37 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 22 03:38:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:38:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 22 03:38:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 22 03:38:39 np0005532048 python3.9[115742]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:38:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 22 03:38:40 np0005532048 python3.9[115896]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 22 03:38:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 22 03:38:40 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 22 03:38:40 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 22 03:38:41 np0005532048 systemd[1]: session-36.scope: Deactivated successfully.
Nov 22 03:38:41 np0005532048 systemd[1]: session-36.scope: Consumed 19.233s CPU time.
Nov 22 03:38:41 np0005532048 systemd-logind[822]: Session 36 logged out. Waiting for processes to exit.
Nov 22 03:38:41 np0005532048 systemd-logind[822]: Removed session 36.
Nov 22 03:38:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 22 03:38:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 22 03:38:41 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 22 03:38:41 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 22 03:38:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:42 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 22 03:38:42 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 22 03:38:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:38:43 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.3 deep-scrub starts
Nov 22 03:38:43 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.3 deep-scrub ok
Nov 22 03:38:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:45 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 22 03:38:45 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 22 03:38:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 22 03:38:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 22 03:38:46 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 22 03:38:46 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 22 03:38:46 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.2 deep-scrub starts
Nov 22 03:38:46 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.2 deep-scrub ok
Nov 22 03:38:47 np0005532048 systemd-logind[822]: New session 37 of user zuul.
Nov 22 03:38:47 np0005532048 systemd[1]: Started Session 37 of User zuul.
Nov 22 03:38:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:38:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:38:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:38:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:38:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:38:48 np0005532048 python3.9[116186]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:38:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:38:48 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 321f0591-391f-4a81-b755-30c70f4f4615 does not exist
Nov 22 03:38:48 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 70386cad-f4f5-4a5f-88a6-28fc874c0029 does not exist
Nov 22 03:38:48 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 25715733-17c9-43ed-9c7c-e3f76d63b9bc does not exist
Nov 22 03:38:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:38:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:38:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:38:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:38:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:38:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:38:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:38:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:38:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:38:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:38:48 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.d deep-scrub starts
Nov 22 03:38:48 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.d deep-scrub ok
Nov 22 03:38:48 np0005532048 podman[116471]: 2025-11-22 08:38:48.863939918 +0000 UTC m=+0.054911802 container create 190b38986c964343220b5b85fa45cc89f0fd5ed614583f82bbd5bc0fca33e0c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:38:48 np0005532048 systemd[1]: Started libpod-conmon-190b38986c964343220b5b85fa45cc89f0fd5ed614583f82bbd5bc0fca33e0c6.scope.
Nov 22 03:38:48 np0005532048 podman[116471]: 2025-11-22 08:38:48.834889358 +0000 UTC m=+0.025861262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:38:48 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:38:49 np0005532048 podman[116471]: 2025-11-22 08:38:49.034876068 +0000 UTC m=+0.225847952 container init 190b38986c964343220b5b85fa45cc89f0fd5ed614583f82bbd5bc0fca33e0c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:38:49 np0005532048 podman[116471]: 2025-11-22 08:38:49.044586713 +0000 UTC m=+0.235558607 container start 190b38986c964343220b5b85fa45cc89f0fd5ed614583f82bbd5bc0fca33e0c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:38:49 np0005532048 pensive_moore[116514]: 167 167
Nov 22 03:38:49 np0005532048 systemd[1]: libpod-190b38986c964343220b5b85fa45cc89f0fd5ed614583f82bbd5bc0fca33e0c6.scope: Deactivated successfully.
Nov 22 03:38:49 np0005532048 podman[116471]: 2025-11-22 08:38:49.102041678 +0000 UTC m=+0.293013562 container attach 190b38986c964343220b5b85fa45cc89f0fd5ed614583f82bbd5bc0fca33e0c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 03:38:49 np0005532048 podman[116471]: 2025-11-22 08:38:49.103289478 +0000 UTC m=+0.294261362 container died 190b38986c964343220b5b85fa45cc89f0fd5ed614583f82bbd5bc0fca33e0c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:38:49 np0005532048 python3.9[116511]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:38:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay-383107a07822adba02c548b7188994c341fe476130ebc3a019de1aabfcdc97ca-merged.mount: Deactivated successfully.
Nov 22 03:38:49 np0005532048 podman[116471]: 2025-11-22 08:38:49.4802459 +0000 UTC m=+0.671217784 container remove 190b38986c964343220b5b85fa45cc89f0fd5ed614583f82bbd5bc0fca33e0c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:38:49 np0005532048 systemd[1]: libpod-conmon-190b38986c964343220b5b85fa45cc89f0fd5ed614583f82bbd5bc0fca33e0c6.scope: Deactivated successfully.
Nov 22 03:38:49 np0005532048 podman[116588]: 2025-11-22 08:38:49.661221991 +0000 UTC m=+0.048252185 container create 51b499c6c871aeff7d589bbe2da7f3e6889c9ce07515cce41aff033b4e61b1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:38:49 np0005532048 systemd[1]: Started libpod-conmon-51b499c6c871aeff7d589bbe2da7f3e6889c9ce07515cce41aff033b4e61b1bb.scope.
Nov 22 03:38:49 np0005532048 podman[116588]: 2025-11-22 08:38:49.637161126 +0000 UTC m=+0.024191350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:38:49 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:38:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85987c19cb5c124ec2a6bffbfa5c44a198ece7f4ec6f0bfb89a563e491fcf907/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85987c19cb5c124ec2a6bffbfa5c44a198ece7f4ec6f0bfb89a563e491fcf907/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85987c19cb5c124ec2a6bffbfa5c44a198ece7f4ec6f0bfb89a563e491fcf907/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85987c19cb5c124ec2a6bffbfa5c44a198ece7f4ec6f0bfb89a563e491fcf907/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85987c19cb5c124ec2a6bffbfa5c44a198ece7f4ec6f0bfb89a563e491fcf907/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:49 np0005532048 podman[116588]: 2025-11-22 08:38:49.761231116 +0000 UTC m=+0.148261340 container init 51b499c6c871aeff7d589bbe2da7f3e6889c9ce07515cce41aff033b4e61b1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:38:49 np0005532048 podman[116588]: 2025-11-22 08:38:49.7689069 +0000 UTC m=+0.155937094 container start 51b499c6c871aeff7d589bbe2da7f3e6889c9ce07515cce41aff033b4e61b1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 03:38:49 np0005532048 podman[116588]: 2025-11-22 08:38:49.777054295 +0000 UTC m=+0.164084479 container attach 51b499c6c871aeff7d589bbe2da7f3e6889c9ce07515cce41aff033b4e61b1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:38:49 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 22 03:38:49 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 22 03:38:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:50 np0005532048 python3.9[116751]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:38:50 np0005532048 stupefied_curie[116621]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:38:50 np0005532048 stupefied_curie[116621]: --> relative data size: 1.0
Nov 22 03:38:50 np0005532048 stupefied_curie[116621]: --> All data devices are unavailable
Nov 22 03:38:50 np0005532048 systemd[1]: libpod-51b499c6c871aeff7d589bbe2da7f3e6889c9ce07515cce41aff033b4e61b1bb.scope: Deactivated successfully.
Nov 22 03:38:50 np0005532048 systemd[1]: libpod-51b499c6c871aeff7d589bbe2da7f3e6889c9ce07515cce41aff033b4e61b1bb.scope: Consumed 1.054s CPU time.
Nov 22 03:38:50 np0005532048 podman[116588]: 2025-11-22 08:38:50.870963448 +0000 UTC m=+1.257993642 container died 51b499c6c871aeff7d589bbe2da7f3e6889c9ce07515cce41aff033b4e61b1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:38:50 np0005532048 systemd[1]: session-37.scope: Deactivated successfully.
Nov 22 03:38:50 np0005532048 systemd[1]: session-37.scope: Consumed 2.571s CPU time.
Nov 22 03:38:50 np0005532048 systemd-logind[822]: Session 37 logged out. Waiting for processes to exit.
Nov 22 03:38:50 np0005532048 systemd-logind[822]: Removed session 37.
Nov 22 03:38:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay-85987c19cb5c124ec2a6bffbfa5c44a198ece7f4ec6f0bfb89a563e491fcf907-merged.mount: Deactivated successfully.
Nov 22 03:38:51 np0005532048 podman[116588]: 2025-11-22 08:38:51.396704521 +0000 UTC m=+1.783734715 container remove 51b499c6c871aeff7d589bbe2da7f3e6889c9ce07515cce41aff033b4e61b1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:38:51 np0005532048 systemd[1]: libpod-conmon-51b499c6c871aeff7d589bbe2da7f3e6889c9ce07515cce41aff033b4e61b1bb.scope: Deactivated successfully.
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:38:52
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'backups', '.rgw.root']
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:38:52 np0005532048 podman[116955]: 2025-11-22 08:38:52.117520471 +0000 UTC m=+0.047578358 container create 4469b442776516b570b8899e804bba830ba6922b4c1a8f4d3a02e3b0509b7173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cori, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:52 np0005532048 systemd[1]: Started libpod-conmon-4469b442776516b570b8899e804bba830ba6922b4c1a8f4d3a02e3b0509b7173.scope.
Nov 22 03:38:52 np0005532048 podman[116955]: 2025-11-22 08:38:52.092898362 +0000 UTC m=+0.022956259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:38:52 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:38:52 np0005532048 podman[116955]: 2025-11-22 08:38:52.221690112 +0000 UTC m=+0.151748019 container init 4469b442776516b570b8899e804bba830ba6922b4c1a8f4d3a02e3b0509b7173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:38:52 np0005532048 podman[116955]: 2025-11-22 08:38:52.228393629 +0000 UTC m=+0.158451506 container start 4469b442776516b570b8899e804bba830ba6922b4c1a8f4d3a02e3b0509b7173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:38:52 np0005532048 optimistic_cori[116971]: 167 167
Nov 22 03:38:52 np0005532048 systemd[1]: libpod-4469b442776516b570b8899e804bba830ba6922b4c1a8f4d3a02e3b0509b7173.scope: Deactivated successfully.
Nov 22 03:38:52 np0005532048 podman[116955]: 2025-11-22 08:38:52.237754585 +0000 UTC m=+0.167812472 container attach 4469b442776516b570b8899e804bba830ba6922b4c1a8f4d3a02e3b0509b7173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cori, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:38:52 np0005532048 podman[116955]: 2025-11-22 08:38:52.238055863 +0000 UTC m=+0.168113770 container died 4469b442776516b570b8899e804bba830ba6922b4c1a8f4d3a02e3b0509b7173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:38:52 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f081e7d50dfb1b61c6c4c5297fef43c5fabb64de4227bf8f260b7f31256e3b97-merged.mount: Deactivated successfully.
Nov 22 03:38:52 np0005532048 podman[116955]: 2025-11-22 08:38:52.305075328 +0000 UTC m=+0.235133205 container remove 4469b442776516b570b8899e804bba830ba6922b4c1a8f4d3a02e3b0509b7173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:38:52 np0005532048 systemd[1]: libpod-conmon-4469b442776516b570b8899e804bba830ba6922b4c1a8f4d3a02e3b0509b7173.scope: Deactivated successfully.
Nov 22 03:38:52 np0005532048 podman[116995]: 2025-11-22 08:38:52.473622438 +0000 UTC m=+0.052277946 container create 42a15444b98309674fed1b14751672df416bf51a0de0a8135ed9192006f9a017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:38:52 np0005532048 systemd[1]: Started libpod-conmon-42a15444b98309674fed1b14751672df416bf51a0de0a8135ed9192006f9a017.scope.
Nov 22 03:38:52 np0005532048 podman[116995]: 2025-11-22 08:38:52.447603743 +0000 UTC m=+0.026259251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:38:52 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:38:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eb0d131a179235b1b6b5887b34d9084b87ad79cc2fbc1b064fc5ccaf42abaf0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eb0d131a179235b1b6b5887b34d9084b87ad79cc2fbc1b064fc5ccaf42abaf0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eb0d131a179235b1b6b5887b34d9084b87ad79cc2fbc1b064fc5ccaf42abaf0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eb0d131a179235b1b6b5887b34d9084b87ad79cc2fbc1b064fc5ccaf42abaf0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:52 np0005532048 podman[116995]: 2025-11-22 08:38:52.581337877 +0000 UTC m=+0.159993395 container init 42a15444b98309674fed1b14751672df416bf51a0de0a8135ed9192006f9a017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:38:52 np0005532048 podman[116995]: 2025-11-22 08:38:52.588772334 +0000 UTC m=+0.167427842 container start 42a15444b98309674fed1b14751672df416bf51a0de0a8135ed9192006f9a017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 22 03:38:52 np0005532048 podman[116995]: 2025-11-22 08:38:52.596777956 +0000 UTC m=+0.175433464 container attach 42a15444b98309674fed1b14751672df416bf51a0de0a8135ed9192006f9a017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:38:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:38:52 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 22 03:38:52 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]: {
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:    "0": [
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:        {
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "devices": [
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "/dev/loop3"
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            ],
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "lv_name": "ceph_lv0",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "lv_size": "21470642176",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "name": "ceph_lv0",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "tags": {
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.cluster_name": "ceph",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.crush_device_class": "",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.encrypted": "0",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.osd_id": "0",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.type": "block",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.vdo": "0"
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            },
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "type": "block",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "vg_name": "ceph_vg0"
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:        }
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:    ],
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:    "1": [
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:        {
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "devices": [
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "/dev/loop4"
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            ],
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "lv_name": "ceph_lv1",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "lv_size": "21470642176",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "name": "ceph_lv1",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "tags": {
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.cluster_name": "ceph",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.crush_device_class": "",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.encrypted": "0",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.osd_id": "1",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.type": "block",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.vdo": "0"
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            },
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "type": "block",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "vg_name": "ceph_vg1"
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:        }
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:    ],
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:    "2": [
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:        {
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "devices": [
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "/dev/loop5"
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            ],
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "lv_name": "ceph_lv2",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "lv_size": "21470642176",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "name": "ceph_lv2",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "tags": {
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.cluster_name": "ceph",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.crush_device_class": "",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.encrypted": "0",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.osd_id": "2",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.type": "block",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:                "ceph.vdo": "0"
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            },
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "type": "block",
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:            "vg_name": "ceph_vg2"
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:        }
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]:    ]
Nov 22 03:38:53 np0005532048 eager_ganguly[117011]: }
Nov 22 03:38:53 np0005532048 systemd[1]: libpod-42a15444b98309674fed1b14751672df416bf51a0de0a8135ed9192006f9a017.scope: Deactivated successfully.
Nov 22 03:38:53 np0005532048 podman[116995]: 2025-11-22 08:38:53.41361568 +0000 UTC m=+0.992271168 container died 42a15444b98309674fed1b14751672df416bf51a0de0a8135ed9192006f9a017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 03:38:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:38:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1eb0d131a179235b1b6b5887b34d9084b87ad79cc2fbc1b064fc5ccaf42abaf0-merged.mount: Deactivated successfully.
Nov 22 03:38:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 22 03:38:53 np0005532048 podman[116995]: 2025-11-22 08:38:53.511165414 +0000 UTC m=+1.089820902 container remove 42a15444b98309674fed1b14751672df416bf51a0de0a8135ed9192006f9a017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:38:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 22 03:38:53 np0005532048 systemd[1]: libpod-conmon-42a15444b98309674fed1b14751672df416bf51a0de0a8135ed9192006f9a017.scope: Deactivated successfully.
Nov 22 03:38:53 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 22 03:38:53 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 22 03:38:54 np0005532048 podman[117174]: 2025-11-22 08:38:54.161388038 +0000 UTC m=+0.048342098 container create fc73e7b3fcbc8cdd36162524f48b40063291f86e2e4add6d0d8c320f98a35cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_carver, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:38:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:54 np0005532048 systemd[1]: Started libpod-conmon-fc73e7b3fcbc8cdd36162524f48b40063291f86e2e4add6d0d8c320f98a35cb9.scope.
Nov 22 03:38:54 np0005532048 podman[117174]: 2025-11-22 08:38:54.137071886 +0000 UTC m=+0.024025976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:38:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:38:54 np0005532048 podman[117174]: 2025-11-22 08:38:54.263395173 +0000 UTC m=+0.150349253 container init fc73e7b3fcbc8cdd36162524f48b40063291f86e2e4add6d0d8c320f98a35cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 03:38:54 np0005532048 podman[117174]: 2025-11-22 08:38:54.27042135 +0000 UTC m=+0.157375410 container start fc73e7b3fcbc8cdd36162524f48b40063291f86e2e4add6d0d8c320f98a35cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_carver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:38:54 np0005532048 unruffled_carver[117191]: 167 167
Nov 22 03:38:54 np0005532048 systemd[1]: libpod-fc73e7b3fcbc8cdd36162524f48b40063291f86e2e4add6d0d8c320f98a35cb9.scope: Deactivated successfully.
Nov 22 03:38:54 np0005532048 podman[117174]: 2025-11-22 08:38:54.282668698 +0000 UTC m=+0.169622758 container attach fc73e7b3fcbc8cdd36162524f48b40063291f86e2e4add6d0d8c320f98a35cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_carver, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 03:38:54 np0005532048 podman[117174]: 2025-11-22 08:38:54.283402766 +0000 UTC m=+0.170356836 container died fc73e7b3fcbc8cdd36162524f48b40063291f86e2e4add6d0d8c320f98a35cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:38:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d4826a1d30a831098aecf3d6e5aac8ec326f3ff4b3089a7b1b331b1f1808d2cc-merged.mount: Deactivated successfully.
Nov 22 03:38:54 np0005532048 podman[117174]: 2025-11-22 08:38:54.360398273 +0000 UTC m=+0.247352323 container remove fc73e7b3fcbc8cdd36162524f48b40063291f86e2e4add6d0d8c320f98a35cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:38:54 np0005532048 systemd[1]: libpod-conmon-fc73e7b3fcbc8cdd36162524f48b40063291f86e2e4add6d0d8c320f98a35cb9.scope: Deactivated successfully.
Nov 22 03:38:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 22 03:38:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 22 03:38:54 np0005532048 podman[117215]: 2025-11-22 08:38:54.529946367 +0000 UTC m=+0.026508127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:38:54 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 22 03:38:54 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 22 03:38:55 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 22 03:38:55 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 22 03:38:55 np0005532048 podman[117215]: 2025-11-22 08:38:55.482097775 +0000 UTC m=+0.978659545 container create 6107d9d5e3ea316b45c9991a62575e3d81872db8061200c1e7d7223f2e84f5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:38:55 np0005532048 systemd[1]: Started libpod-conmon-6107d9d5e3ea316b45c9991a62575e3d81872db8061200c1e7d7223f2e84f5a4.scope.
Nov 22 03:38:55 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:38:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e521d219269b63906730ce80d28ac28bd3364b57973c360382a68675a0b4ce40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e521d219269b63906730ce80d28ac28bd3364b57973c360382a68675a0b4ce40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e521d219269b63906730ce80d28ac28bd3364b57973c360382a68675a0b4ce40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e521d219269b63906730ce80d28ac28bd3364b57973c360382a68675a0b4ce40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:55 np0005532048 podman[117215]: 2025-11-22 08:38:55.870220808 +0000 UTC m=+1.366782568 container init 6107d9d5e3ea316b45c9991a62575e3d81872db8061200c1e7d7223f2e84f5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_einstein, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 22 03:38:55 np0005532048 podman[117215]: 2025-11-22 08:38:55.880058006 +0000 UTC m=+1.376619746 container start 6107d9d5e3ea316b45c9991a62575e3d81872db8061200c1e7d7223f2e84f5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_einstein, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:38:55 np0005532048 podman[117215]: 2025-11-22 08:38:55.943577134 +0000 UTC m=+1.440138884 container attach 6107d9d5e3ea316b45c9991a62575e3d81872db8061200c1e7d7223f2e84f5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:38:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:56 np0005532048 systemd-logind[822]: New session 38 of user zuul.
Nov 22 03:38:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 22 03:38:56 np0005532048 systemd[1]: Started Session 38 of User zuul.
Nov 22 03:38:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 22 03:38:56 np0005532048 competent_einstein[117232]: {
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:        "osd_id": 1,
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:        "type": "bluestore"
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:    },
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:        "osd_id": 0,
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:        "type": "bluestore"
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:    },
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:        "osd_id": 2,
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:        "type": "bluestore"
Nov 22 03:38:56 np0005532048 competent_einstein[117232]:    }
Nov 22 03:38:56 np0005532048 competent_einstein[117232]: }
Nov 22 03:38:56 np0005532048 systemd[1]: libpod-6107d9d5e3ea316b45c9991a62575e3d81872db8061200c1e7d7223f2e84f5a4.scope: Deactivated successfully.
Nov 22 03:38:56 np0005532048 systemd[1]: libpod-6107d9d5e3ea316b45c9991a62575e3d81872db8061200c1e7d7223f2e84f5a4.scope: Consumed 1.086s CPU time.
Nov 22 03:38:57 np0005532048 podman[117322]: 2025-11-22 08:38:57.004429756 +0000 UTC m=+0.028389126 container died 6107d9d5e3ea316b45c9991a62575e3d81872db8061200c1e7d7223f2e84f5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:38:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e521d219269b63906730ce80d28ac28bd3364b57973c360382a68675a0b4ce40-merged.mount: Deactivated successfully.
Nov 22 03:38:57 np0005532048 podman[117322]: 2025-11-22 08:38:57.071063822 +0000 UTC m=+0.095023162 container remove 6107d9d5e3ea316b45c9991a62575e3d81872db8061200c1e7d7223f2e84f5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_einstein, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:38:57 np0005532048 systemd[1]: libpod-conmon-6107d9d5e3ea316b45c9991a62575e3d81872db8061200c1e7d7223f2e84f5a4.scope: Deactivated successfully.
Nov 22 03:38:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:38:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:38:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:38:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:38:57 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev b3771bb3-092d-4395-8b05-cb348d7e0b84 does not exist
Nov 22 03:38:57 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 13dcbe23-be4b-42a5-ba8b-acbae4ef9ae0 does not exist
Nov 22 03:38:57 np0005532048 python3.9[117484]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:38:58 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:38:58 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:38:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:38:58 np0005532048 python3.9[117638]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:38:58 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 22 03:38:58 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 22 03:38:59 np0005532048 python3.9[117794]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:38:59 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 22 03:38:59 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 22 03:39:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:00 np0005532048 python3.9[117878]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:39:00 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 22 03:39:00 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 22 03:39:01 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 22 03:39:01 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:39:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 22 03:39:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 22 03:39:02 np0005532048 python3.9[118031]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:39:03 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 22 03:39:03 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 22 03:39:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:39:03 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 22 03:39:03 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 22 03:39:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 22 03:39:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 22 03:39:04 np0005532048 python3.9[118226]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:05 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.12 deep-scrub starts
Nov 22 03:39:05 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.12 deep-scrub ok
Nov 22 03:39:05 np0005532048 python3.9[118378]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:39:05 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 22 03:39:05 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 22 03:39:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:06 np0005532048 python3.9[118543]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:06 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 22 03:39:06 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 22 03:39:06 np0005532048 python3.9[118621]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:06 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 22 03:39:06 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 22 03:39:07 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 22 03:39:07 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 22 03:39:07 np0005532048 python3.9[118773]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:07 np0005532048 python3.9[118851]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:39:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:39:08 np0005532048 python3.9[119003]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:39:08 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 22 03:39:08 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 22 03:39:09 np0005532048 python3.9[119155]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:39:10 np0005532048 python3.9[119307]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:39:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:10 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 22 03:39:10 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 22 03:39:10 np0005532048 python3.9[119459]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:39:10 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 22 03:39:10 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 22 03:39:11 np0005532048 python3.9[119611]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:39:11 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 22 03:39:11 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 22 03:39:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:12 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 22 03:39:12 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 22 03:39:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 11.c deep-scrub starts
Nov 22 03:39:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:39:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 11.c deep-scrub ok
Nov 22 03:39:13 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 22 03:39:13 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 22 03:39:14 np0005532048 python3.9[119764]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:39:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 22 03:39:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 22 03:39:14 np0005532048 python3.9[119918]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:39:14 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 22 03:39:14 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 22 03:39:15 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 22 03:39:15 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 22 03:39:15 np0005532048 python3.9[120070]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:39:16 np0005532048 python3.9[120222]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:39:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 22 03:39:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 22 03:39:17 np0005532048 python3.9[120375]: ansible-service_facts Invoked
Nov 22 03:39:17 np0005532048 network[120392]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 03:39:17 np0005532048 network[120393]: 'network-scripts' will be removed from distribution in near future.
Nov 22 03:39:17 np0005532048 network[120394]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 03:39:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 22 03:39:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 22 03:39:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:18 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 22 03:39:18 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 22 03:39:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:39:18 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 22 03:39:18 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 22 03:39:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 22 03:39:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 22 03:39:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 22 03:39:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 22 03:39:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 22 03:39:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 22 03:39:21 np0005532048 python3.9[120846]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:39:21 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 22 03:39:21 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 22 03:39:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:39:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:39:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:39:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:39:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:39:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:39:23 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 22 03:39:23 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 22 03:39:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:39:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:24 np0005532048 python3.9[120999]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 22 03:39:25 np0005532048 python3.9[121151]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 22 03:39:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 22 03:39:26 np0005532048 python3.9[121229]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:26 np0005532048 python3.9[121381]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:26 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 22 03:39:26 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 22 03:39:27 np0005532048 python3.9[121459]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:27 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 22 03:39:27 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 22 03:39:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:28 np0005532048 python3.9[121611]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:39:29 np0005532048 python3.9[121763]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:39:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 22 03:39:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 22 03:39:29 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 22 03:39:29 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 22 03:39:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:30 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.8 deep-scrub starts
Nov 22 03:39:30 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.8 deep-scrub ok
Nov 22 03:39:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 22 03:39:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 22 03:39:30 np0005532048 python3.9[121847]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:39:31 np0005532048 systemd[1]: session-38.scope: Deactivated successfully.
Nov 22 03:39:31 np0005532048 systemd[1]: session-38.scope: Consumed 25.236s CPU time.
Nov 22 03:39:31 np0005532048 systemd-logind[822]: Session 38 logged out. Waiting for processes to exit.
Nov 22 03:39:31 np0005532048 systemd-logind[822]: Removed session 38.
Nov 22 03:39:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:33 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 22 03:39:33 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 22 03:39:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:39:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.2 deep-scrub starts
Nov 22 03:39:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.2 deep-scrub ok
Nov 22 03:39:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:34 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 22 03:39:34 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 22 03:39:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 22 03:39:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 22 03:39:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:37 np0005532048 systemd-logind[822]: New session 39 of user zuul.
Nov 22 03:39:37 np0005532048 systemd[1]: Started Session 39 of User zuul.
Nov 22 03:39:37 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 22 03:39:37 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 22 03:39:37 np0005532048 python3.9[122030]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:38 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 22 03:39:38 np0005532048 ceph-osd[90703]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 22 03:39:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:39:38 np0005532048 python3.9[122182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:38 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 22 03:39:38 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 22 03:39:39 np0005532048 python3.9[122260]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:39 np0005532048 systemd[1]: session-39.scope: Deactivated successfully.
Nov 22 03:39:39 np0005532048 systemd[1]: session-39.scope: Consumed 1.671s CPU time.
Nov 22 03:39:39 np0005532048 systemd-logind[822]: Session 39 logged out. Waiting for processes to exit.
Nov 22 03:39:39 np0005532048 systemd-logind[822]: Removed session 39.
Nov 22 03:39:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 22 03:39:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 22 03:39:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 22 03:39:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 22 03:39:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:39:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:44 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 22 03:39:44 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 22 03:39:45 np0005532048 systemd-logind[822]: New session 40 of user zuul.
Nov 22 03:39:45 np0005532048 systemd[1]: Started Session 40 of User zuul.
Nov 22 03:39:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:46 np0005532048 python3.9[122438]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:39:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.14 deep-scrub starts
Nov 22 03:39:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 10.14 deep-scrub ok
Nov 22 03:39:47 np0005532048 python3.9[122594]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:48 np0005532048 python3.9[122769]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:39:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 22 03:39:48 np0005532048 python3.9[122847]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.d00_3jya recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 22 03:39:48 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.b deep-scrub starts
Nov 22 03:39:48 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.b deep-scrub ok
Nov 22 03:39:49 np0005532048 python3.9[122999]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 22 03:39:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 22 03:39:49 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 22 03:39:49 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 22 03:39:50 np0005532048 python3.9[123077]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.ckfr8rmg recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:50 np0005532048 python3.9[123229]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:39:51 np0005532048 python3.9[123381]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:51 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 22 03:39:51 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 22 03:39:51 np0005532048 python3.9[123459]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:39:52
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['images', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.control', 'backups', 'default.rgw.meta']
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:52 np0005532048 python3.9[123611]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:39:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:39:52 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 22 03:39:52 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 22 03:39:53 np0005532048 python3.9[123689]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:39:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:39:53 np0005532048 python3.9[123841]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:54 np0005532048 python3.9[123993]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:54 np0005532048 python3.9[124071]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:55 np0005532048 python3.9[124223]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:56 np0005532048 python3.9[124301]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:57 np0005532048 python3.9[124453]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:39:57 np0005532048 systemd[1]: Reloading.
Nov 22 03:39:57 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:39:57 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:39:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:39:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:39:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:39:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:39:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:39:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:39:58 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 3fff406c-11ea-4adf-82f3-407640a5d987 does not exist
Nov 22 03:39:58 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 9205eb13-8443-480b-b32f-3ad4a3503fca does not exist
Nov 22 03:39:58 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 07f2f051-eb6e-427b-a68b-4eb21adce961 does not exist
Nov 22 03:39:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:39:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:39:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:39:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:39:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:39:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:39:58 np0005532048 python3.9[124759]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:58 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:39:58 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:39:58 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:39:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:39:58 np0005532048 python3.9[124951]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:58 np0005532048 podman[124992]: 2025-11-22 08:39:58.678133058 +0000 UTC m=+0.044123808 container create dd2131ca91f2b40d7d79f02c4eedfad49c7b2cddc7eae4c756fa785789b63a22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:39:58 np0005532048 systemd[1]: Started libpod-conmon-dd2131ca91f2b40d7d79f02c4eedfad49c7b2cddc7eae4c756fa785789b63a22.scope.
Nov 22 03:39:58 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:39:58 np0005532048 podman[124992]: 2025-11-22 08:39:58.659242619 +0000 UTC m=+0.025233399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:39:58 np0005532048 podman[124992]: 2025-11-22 08:39:58.767703844 +0000 UTC m=+0.133694614 container init dd2131ca91f2b40d7d79f02c4eedfad49c7b2cddc7eae4c756fa785789b63a22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:39:58 np0005532048 podman[124992]: 2025-11-22 08:39:58.776526847 +0000 UTC m=+0.142517597 container start dd2131ca91f2b40d7d79f02c4eedfad49c7b2cddc7eae4c756fa785789b63a22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:39:58 np0005532048 podman[124992]: 2025-11-22 08:39:58.779770518 +0000 UTC m=+0.145761288 container attach dd2131ca91f2b40d7d79f02c4eedfad49c7b2cddc7eae4c756fa785789b63a22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:39:58 np0005532048 pensive_mcnulty[125033]: 167 167
Nov 22 03:39:58 np0005532048 systemd[1]: libpod-dd2131ca91f2b40d7d79f02c4eedfad49c7b2cddc7eae4c756fa785789b63a22.scope: Deactivated successfully.
Nov 22 03:39:58 np0005532048 podman[124992]: 2025-11-22 08:39:58.785403831 +0000 UTC m=+0.151394591 container died dd2131ca91f2b40d7d79f02c4eedfad49c7b2cddc7eae4c756fa785789b63a22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:39:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a1e1f1408ba79c82b0923f0d52fd80d27e2159949aa14ae8393b3cfb2e3a7fc9-merged.mount: Deactivated successfully.
Nov 22 03:39:58 np0005532048 podman[124992]: 2025-11-22 08:39:58.83516866 +0000 UTC m=+0.201159410 container remove dd2131ca91f2b40d7d79f02c4eedfad49c7b2cddc7eae4c756fa785789b63a22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:39:58 np0005532048 systemd[1]: libpod-conmon-dd2131ca91f2b40d7d79f02c4eedfad49c7b2cddc7eae4c756fa785789b63a22.scope: Deactivated successfully.
Nov 22 03:39:58 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 22 03:39:58 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 22 03:39:58 np0005532048 podman[125130]: 2025-11-22 08:39:58.994537122 +0000 UTC m=+0.044681881 container create d6e1a2e0b4be3001b23e3af32be78d1cfb4b8578ab219b284b9aac6def0674e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:39:59 np0005532048 systemd[1]: Started libpod-conmon-d6e1a2e0b4be3001b23e3af32be78d1cfb4b8578ab219b284b9aac6def0674e5.scope.
Nov 22 03:39:59 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:39:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c267af7a3f00ce2a9d58ccf4e0b3b0bddbe87d7ff9adb3da2afa5176126e1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:39:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c267af7a3f00ce2a9d58ccf4e0b3b0bddbe87d7ff9adb3da2afa5176126e1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:39:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c267af7a3f00ce2a9d58ccf4e0b3b0bddbe87d7ff9adb3da2afa5176126e1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:39:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c267af7a3f00ce2a9d58ccf4e0b3b0bddbe87d7ff9adb3da2afa5176126e1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:39:59 np0005532048 podman[125130]: 2025-11-22 08:39:58.977535722 +0000 UTC m=+0.027680511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:39:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c267af7a3f00ce2a9d58ccf4e0b3b0bddbe87d7ff9adb3da2afa5176126e1b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:39:59 np0005532048 podman[125130]: 2025-11-22 08:39:59.092278645 +0000 UTC m=+0.142423424 container init d6e1a2e0b4be3001b23e3af32be78d1cfb4b8578ab219b284b9aac6def0674e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:39:59 np0005532048 podman[125130]: 2025-11-22 08:39:59.101344074 +0000 UTC m=+0.151488833 container start d6e1a2e0b4be3001b23e3af32be78d1cfb4b8578ab219b284b9aac6def0674e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hofstadter, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:39:59 np0005532048 podman[125130]: 2025-11-22 08:39:59.105371586 +0000 UTC m=+0.155516365 container attach d6e1a2e0b4be3001b23e3af32be78d1cfb4b8578ab219b284b9aac6def0674e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 03:39:59 np0005532048 python3.9[125201]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:59 np0005532048 python3.9[125281]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:59 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 22 03:39:59 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 22 03:40:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:00 np0005532048 distracted_hofstadter[125191]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:40:00 np0005532048 distracted_hofstadter[125191]: --> relative data size: 1.0
Nov 22 03:40:00 np0005532048 distracted_hofstadter[125191]: --> All data devices are unavailable
Nov 22 03:40:00 np0005532048 systemd[1]: libpod-d6e1a2e0b4be3001b23e3af32be78d1cfb4b8578ab219b284b9aac6def0674e5.scope: Deactivated successfully.
Nov 22 03:40:00 np0005532048 systemd[1]: libpod-d6e1a2e0b4be3001b23e3af32be78d1cfb4b8578ab219b284b9aac6def0674e5.scope: Consumed 1.151s CPU time.
Nov 22 03:40:00 np0005532048 podman[125130]: 2025-11-22 08:40:00.342077792 +0000 UTC m=+1.392222571 container died d6e1a2e0b4be3001b23e3af32be78d1cfb4b8578ab219b284b9aac6def0674e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:40:00 np0005532048 systemd[1]: var-lib-containers-storage-overlay-06c267af7a3f00ce2a9d58ccf4e0b3b0bddbe87d7ff9adb3da2afa5176126e1b-merged.mount: Deactivated successfully.
Nov 22 03:40:00 np0005532048 podman[125130]: 2025-11-22 08:40:00.422491797 +0000 UTC m=+1.472636566 container remove d6e1a2e0b4be3001b23e3af32be78d1cfb4b8578ab219b284b9aac6def0674e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hofstadter, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:40:00 np0005532048 systemd[1]: libpod-conmon-d6e1a2e0b4be3001b23e3af32be78d1cfb4b8578ab219b284b9aac6def0674e5.scope: Deactivated successfully.
Nov 22 03:40:00 np0005532048 python3.9[125453]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:40:00 np0005532048 systemd[1]: Reloading.
Nov 22 03:40:00 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:40:00 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:40:00 np0005532048 systemd[1]: Starting Create netns directory...
Nov 22 03:40:00 np0005532048 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 03:40:00 np0005532048 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 03:40:00 np0005532048 systemd[1]: Finished Create netns directory.
Nov 22 03:40:01 np0005532048 podman[125728]: 2025-11-22 08:40:01.27727085 +0000 UTC m=+0.022771276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:40:01 np0005532048 podman[125728]: 2025-11-22 08:40:01.456097675 +0000 UTC m=+0.201598101 container create d1a9ae6e45c892371bfc84e041876f4727227bb370a05080073e148a75f6ab0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_taussig, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:40:01 np0005532048 systemd[1]: Started libpod-conmon-d1a9ae6e45c892371bfc84e041876f4727227bb370a05080073e148a75f6ab0b.scope.
Nov 22 03:40:01 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:40:01 np0005532048 podman[125728]: 2025-11-22 08:40:01.542752967 +0000 UTC m=+0.288253403 container init d1a9ae6e45c892371bfc84e041876f4727227bb370a05080073e148a75f6ab0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_taussig, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:40:01 np0005532048 podman[125728]: 2025-11-22 08:40:01.55233515 +0000 UTC m=+0.297835556 container start d1a9ae6e45c892371bfc84e041876f4727227bb370a05080073e148a75f6ab0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:40:01 np0005532048 podman[125728]: 2025-11-22 08:40:01.55710229 +0000 UTC m=+0.302602726 container attach d1a9ae6e45c892371bfc84e041876f4727227bb370a05080073e148a75f6ab0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_taussig, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:40:01 np0005532048 jovial_taussig[125767]: 167 167
Nov 22 03:40:01 np0005532048 systemd[1]: libpod-d1a9ae6e45c892371bfc84e041876f4727227bb370a05080073e148a75f6ab0b.scope: Deactivated successfully.
Nov 22 03:40:01 np0005532048 podman[125728]: 2025-11-22 08:40:01.560125106 +0000 UTC m=+0.305625512 container died d1a9ae6e45c892371bfc84e041876f4727227bb370a05080073e148a75f6ab0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:40:01 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f7cd492e21b1535d57c6adfe44878bad64c0a926c68a6fb6b96d515391bf3c1b-merged.mount: Deactivated successfully.
Nov 22 03:40:01 np0005532048 podman[125728]: 2025-11-22 08:40:01.606778166 +0000 UTC m=+0.352278572 container remove d1a9ae6e45c892371bfc84e041876f4727227bb370a05080073e148a75f6ab0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_taussig, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:40:01 np0005532048 systemd[1]: libpod-conmon-d1a9ae6e45c892371bfc84e041876f4727227bb370a05080073e148a75f6ab0b.scope: Deactivated successfully.
Nov 22 03:40:01 np0005532048 podman[125841]: 2025-11-22 08:40:01.768348064 +0000 UTC m=+0.041046090 container create 3e97815101f12e90ef9fefca08086a00dcb7683bc0f330b17056937c365d75ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_solomon, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:40:01 np0005532048 systemd[1]: Started libpod-conmon-3e97815101f12e90ef9fefca08086a00dcb7683bc0f330b17056937c365d75ba.scope.
Nov 22 03:40:01 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:40:01 np0005532048 python3.9[125835]: ansible-ansible.builtin.service_facts Invoked
Nov 22 03:40:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e00c4a9a1d6600d6a26b34ee7bc0a642569a34d80904c8d02d38acf9430fb01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e00c4a9a1d6600d6a26b34ee7bc0a642569a34d80904c8d02d38acf9430fb01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e00c4a9a1d6600d6a26b34ee7bc0a642569a34d80904c8d02d38acf9430fb01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e00c4a9a1d6600d6a26b34ee7bc0a642569a34d80904c8d02d38acf9430fb01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:01 np0005532048 podman[125841]: 2025-11-22 08:40:01.74922931 +0000 UTC m=+0.021927366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:40:01 np0005532048 podman[125841]: 2025-11-22 08:40:01.856399532 +0000 UTC m=+0.129097588 container init 3e97815101f12e90ef9fefca08086a00dcb7683bc0f330b17056937c365d75ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_solomon, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:40:01 np0005532048 podman[125841]: 2025-11-22 08:40:01.867266996 +0000 UTC m=+0.139965022 container start 3e97815101f12e90ef9fefca08086a00dcb7683bc0f330b17056937c365d75ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_solomon, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 03:40:01 np0005532048 podman[125841]: 2025-11-22 08:40:01.871855803 +0000 UTC m=+0.144553839 container attach 3e97815101f12e90ef9fefca08086a00dcb7683bc0f330b17056937c365d75ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_solomon, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:40:01 np0005532048 network[125879]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 03:40:01 np0005532048 network[125880]: 'network-scripts' will be removed from distribution in near future.
Nov 22 03:40:01 np0005532048 network[125881]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 03:40:01 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 22 03:40:01 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:40:02 np0005532048 loving_solomon[125858]: {
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:    "0": [
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:        {
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "devices": [
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "/dev/loop3"
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            ],
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "lv_name": "ceph_lv0",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "lv_size": "21470642176",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "name": "ceph_lv0",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "tags": {
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.cluster_name": "ceph",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.crush_device_class": "",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.encrypted": "0",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.osd_id": "0",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.type": "block",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.vdo": "0"
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            },
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "type": "block",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "vg_name": "ceph_vg0"
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:        }
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:    ],
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:    "1": [
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:        {
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "devices": [
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "/dev/loop4"
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            ],
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "lv_name": "ceph_lv1",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "lv_size": "21470642176",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "name": "ceph_lv1",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "tags": {
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.cluster_name": "ceph",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.crush_device_class": "",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.encrypted": "0",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.osd_id": "1",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.type": "block",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.vdo": "0"
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            },
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "type": "block",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "vg_name": "ceph_vg1"
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:        }
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:    ],
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:    "2": [
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:        {
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "devices": [
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "/dev/loop5"
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            ],
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "lv_name": "ceph_lv2",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "lv_size": "21470642176",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "name": "ceph_lv2",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "tags": {
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.cluster_name": "ceph",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.crush_device_class": "",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.encrypted": "0",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.osd_id": "2",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.type": "block",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:                "ceph.vdo": "0"
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            },
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "type": "block",
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:            "vg_name": "ceph_vg2"
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:        }
Nov 22 03:40:02 np0005532048 loving_solomon[125858]:    ]
Nov 22 03:40:02 np0005532048 loving_solomon[125858]: }
Nov 22 03:40:02 np0005532048 systemd[1]: libpod-3e97815101f12e90ef9fefca08086a00dcb7683bc0f330b17056937c365d75ba.scope: Deactivated successfully.
Nov 22 03:40:02 np0005532048 podman[125841]: 2025-11-22 08:40:02.773496753 +0000 UTC m=+1.046194769 container died 3e97815101f12e90ef9fefca08086a00dcb7683bc0f330b17056937c365d75ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_solomon, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:40:02 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1e00c4a9a1d6600d6a26b34ee7bc0a642569a34d80904c8d02d38acf9430fb01-merged.mount: Deactivated successfully.
Nov 22 03:40:02 np0005532048 podman[125841]: 2025-11-22 08:40:02.839439401 +0000 UTC m=+1.112137427 container remove 3e97815101f12e90ef9fefca08086a00dcb7683bc0f330b17056937c365d75ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_solomon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 03:40:02 np0005532048 systemd[1]: libpod-conmon-3e97815101f12e90ef9fefca08086a00dcb7683bc0f330b17056937c365d75ba.scope: Deactivated successfully.
Nov 22 03:40:03 np0005532048 podman[126082]: 2025-11-22 08:40:03.507617274 +0000 UTC m=+0.044701012 container create 05e96ce92d8193b77c65cdb103e0ec83d00f798dfde090eb9f8b4239f2c0e128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:40:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:40:03 np0005532048 systemd[1]: Started libpod-conmon-05e96ce92d8193b77c65cdb103e0ec83d00f798dfde090eb9f8b4239f2c0e128.scope.
Nov 22 03:40:03 np0005532048 podman[126082]: 2025-11-22 08:40:03.487646069 +0000 UTC m=+0.024729807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:40:03 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:40:03 np0005532048 podman[126082]: 2025-11-22 08:40:03.605289495 +0000 UTC m=+0.142373233 container init 05e96ce92d8193b77c65cdb103e0ec83d00f798dfde090eb9f8b4239f2c0e128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:40:03 np0005532048 podman[126082]: 2025-11-22 08:40:03.614479177 +0000 UTC m=+0.151562895 container start 05e96ce92d8193b77c65cdb103e0ec83d00f798dfde090eb9f8b4239f2c0e128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 03:40:03 np0005532048 podman[126082]: 2025-11-22 08:40:03.618178091 +0000 UTC m=+0.155261839 container attach 05e96ce92d8193b77c65cdb103e0ec83d00f798dfde090eb9f8b4239f2c0e128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:40:03 np0005532048 condescending_agnesi[126103]: 167 167
Nov 22 03:40:03 np0005532048 systemd[1]: libpod-05e96ce92d8193b77c65cdb103e0ec83d00f798dfde090eb9f8b4239f2c0e128.scope: Deactivated successfully.
Nov 22 03:40:03 np0005532048 podman[126082]: 2025-11-22 08:40:03.621173957 +0000 UTC m=+0.158257675 container died 05e96ce92d8193b77c65cdb103e0ec83d00f798dfde090eb9f8b4239f2c0e128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:40:03 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ae842acac9bb760930369a8214171500a7e02940f5ce733944842c352cf12cfb-merged.mount: Deactivated successfully.
Nov 22 03:40:03 np0005532048 podman[126082]: 2025-11-22 08:40:03.670466874 +0000 UTC m=+0.207550582 container remove 05e96ce92d8193b77c65cdb103e0ec83d00f798dfde090eb9f8b4239f2c0e128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 03:40:03 np0005532048 systemd[1]: libpod-conmon-05e96ce92d8193b77c65cdb103e0ec83d00f798dfde090eb9f8b4239f2c0e128.scope: Deactivated successfully.
Nov 22 03:40:03 np0005532048 podman[126135]: 2025-11-22 08:40:03.84265139 +0000 UTC m=+0.048283382 container create 26437d99a0e7fc950f25cf7d72b0fad41034513ed1eed8f338ecb3d185b36b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 03:40:03 np0005532048 systemd[1]: Started libpod-conmon-26437d99a0e7fc950f25cf7d72b0fad41034513ed1eed8f338ecb3d185b36b04.scope.
Nov 22 03:40:03 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 22 03:40:03 np0005532048 podman[126135]: 2025-11-22 08:40:03.821770371 +0000 UTC m=+0.027402393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:40:03 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:40:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ebe00693dde0dd7dc948b0cd780b45cb2b6455cddb3378fe59cf2149ddb006/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:03 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 22 03:40:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ebe00693dde0dd7dc948b0cd780b45cb2b6455cddb3378fe59cf2149ddb006/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ebe00693dde0dd7dc948b0cd780b45cb2b6455cddb3378fe59cf2149ddb006/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ebe00693dde0dd7dc948b0cd780b45cb2b6455cddb3378fe59cf2149ddb006/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:03 np0005532048 podman[126135]: 2025-11-22 08:40:03.944333782 +0000 UTC m=+0.149965784 container init 26437d99a0e7fc950f25cf7d72b0fad41034513ed1eed8f338ecb3d185b36b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jepsen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:40:03 np0005532048 podman[126135]: 2025-11-22 08:40:03.961702331 +0000 UTC m=+0.167334353 container start 26437d99a0e7fc950f25cf7d72b0fad41034513ed1eed8f338ecb3d185b36b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jepsen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:40:03 np0005532048 podman[126135]: 2025-11-22 08:40:03.965915268 +0000 UTC m=+0.171547290 container attach 26437d99a0e7fc950f25cf7d72b0fad41034513ed1eed8f338ecb3d185b36b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jepsen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:40:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:04 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 22 03:40:04 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]: {
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:        "osd_id": 1,
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:        "type": "bluestore"
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:    },
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:        "osd_id": 0,
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:        "type": "bluestore"
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:    },
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:        "osd_id": 2,
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:        "type": "bluestore"
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]:    }
Nov 22 03:40:05 np0005532048 strange_jepsen[126156]: }
Nov 22 03:40:05 np0005532048 systemd[1]: libpod-26437d99a0e7fc950f25cf7d72b0fad41034513ed1eed8f338ecb3d185b36b04.scope: Deactivated successfully.
Nov 22 03:40:05 np0005532048 systemd[1]: libpod-26437d99a0e7fc950f25cf7d72b0fad41034513ed1eed8f338ecb3d185b36b04.scope: Consumed 1.082s CPU time.
Nov 22 03:40:05 np0005532048 podman[126135]: 2025-11-22 08:40:05.042188956 +0000 UTC m=+1.247820968 container died 26437d99a0e7fc950f25cf7d72b0fad41034513ed1eed8f338ecb3d185b36b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jepsen, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:40:05 np0005532048 systemd[1]: var-lib-containers-storage-overlay-75ebe00693dde0dd7dc948b0cd780b45cb2b6455cddb3378fe59cf2149ddb006-merged.mount: Deactivated successfully.
Nov 22 03:40:05 np0005532048 podman[126135]: 2025-11-22 08:40:05.110634538 +0000 UTC m=+1.316266540 container remove 26437d99a0e7fc950f25cf7d72b0fad41034513ed1eed8f338ecb3d185b36b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:40:05 np0005532048 systemd[1]: libpod-conmon-26437d99a0e7fc950f25cf7d72b0fad41034513ed1eed8f338ecb3d185b36b04.scope: Deactivated successfully.
Nov 22 03:40:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:40:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:40:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:40:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:40:05 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 50cc3734-56f5-43fb-82f6-cd29700ce639 does not exist
Nov 22 03:40:05 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 3a7f4fe5-987a-4a08-9fcc-a6490bbf1ab9 does not exist
Nov 22 03:40:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:40:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:40:05 np0005532048 python3.9[126446]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:06 np0005532048 python3.9[126524]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:07 np0005532048 python3.9[126676]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:07 np0005532048 python3.9[126828]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:07 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 22 03:40:07 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 22 03:40:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:08 np0005532048 python3.9[126906]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:40:08 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.1e deep-scrub starts
Nov 22 03:40:08 np0005532048 ceph-osd[88656]: log_channel(cluster) log [DBG] : 9.1e deep-scrub ok
Nov 22 03:40:09 np0005532048 python3.9[127058]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 22 03:40:09 np0005532048 systemd[1]: Starting Time & Date Service...
Nov 22 03:40:09 np0005532048 systemd[1]: Started Time & Date Service.
Nov 22 03:40:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:10 np0005532048 python3.9[127214]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:10 np0005532048 python3.9[127366]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:11 np0005532048 python3.9[127444]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:12 np0005532048 python3.9[127596]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:12 np0005532048 python3.9[127674]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.a184wuh8 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:13 np0005532048 python3.9[127826]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:40:13 np0005532048 python3.9[127904]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:14 np0005532048 python3.9[128056]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:40:15 np0005532048 python3[128209]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 03:40:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:16 np0005532048 python3.9[128361]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:16 np0005532048 python3.9[128439]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:17 np0005532048 python3.9[128591]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:17 np0005532048 python3.9[128669]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:40:18 np0005532048 python3.9[128821]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:19 np0005532048 python3.9[128899]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:19 np0005532048 python3.9[129051]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:20 np0005532048 python3.9[129129]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:21 np0005532048 python3.9[129281]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:21 np0005532048 python3.9[129359]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:22 np0005532048 python3.9[129511]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:40:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:40:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:40:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:40:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:40:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:40:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:40:23 np0005532048 python3.9[129666]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:40:23 np0005532048 python3.9[129818]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:24 np0005532048 python3.9[129970]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:25 np0005532048 python3.9[130122]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 22 03:40:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:26 np0005532048 python3.9[130274]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 22 03:40:26 np0005532048 systemd[1]: session-40.scope: Deactivated successfully.
Nov 22 03:40:26 np0005532048 systemd[1]: session-40.scope: Consumed 31.422s CPU time.
Nov 22 03:40:26 np0005532048 systemd-logind[822]: Session 40 logged out. Waiting for processes to exit.
Nov 22 03:40:26 np0005532048 systemd-logind[822]: Removed session 40.
Nov 22 03:40:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:40:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:31 np0005532048 systemd-logind[822]: New session 41 of user zuul.
Nov 22 03:40:31 np0005532048 systemd[1]: Started Session 41 of User zuul.
Nov 22 03:40:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:32 np0005532048 python3.9[130454]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 22 03:40:33 np0005532048 python3.9[130606]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:40:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:40:34 np0005532048 python3.9[130760]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 22 03:40:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:34 np0005532048 python3.9[130912]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.hh033x8i follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:35 np0005532048 python3.9[131037]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.hh033x8i mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763800834.2776737-44-214828538370713/.source.hh033x8i _original_basename=.hqc9tlk8 follow=False checksum=79b6ef9a09098f1f619661cabc95b0986e08a965 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:36 np0005532048 python3.9[131189]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:40:37 np0005532048 python3.9[131341]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClQbCrfYHYcZVrX6ClZPfju7WP41Laza3GwI9YrowKjBNGdI0nR82stmvdgQuiHajPLJ22WCr0F+kB1JrEL7C/e3dwMW71KNOV4t0/n8wi6dh7A0MAYMRWmAS4iTccodPhuAHSsL3Y2WJQ0gQWcs+D47d4S6UgUfY8McyIrSyku1RMrZvqD+Horky+VXJyMnsc6m32MTL8hw/XFttt8bUMFyhPzl8RPK57aM4xkoHnKhqMFgEdWDgJ/2bhleaNBLFcDwcYSBCIj0uOO1qOI9eWVZLBuU9MlaHzLpx44iPJwbG0fG/yd6h27j2o8Kd/RSp5wOPd86SbmEnv4yU4zFiF1jykKvvivEg0EFLYkokwg/5lFJAuf+pP/d7+yBlm5V+6NYATTJfKsY5cnPMzxllm+aANyAcNsBjnGMyWg9Ax0f+9bLnKdSWORGi7kuC4h8ELbfFKcpjWPGgHB85DCeVhTsWPVcO9FXThAzAWAixeeY0ZMKa50h9OmUwINRVu3yc=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA2hQQQ43NShzuaEL4Cp+20/r6q8pEcftymfrK4aUcAg#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIe46hlHz1zy6uhPDRcH4wsrfH7UfRZvDfoinfWFeiDxCLhKrGTSMkoLOX8bmMBaO1LfWPgU6AdevI9F65iL0cc=#012 create=True mode=0644 path=/tmp/ansible.hh033x8i state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:38 np0005532048 python3.9[131493]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.hh033x8i' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:40:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:40:39 np0005532048 python3.9[131647]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.hh033x8i state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:39 np0005532048 systemd[1]: session-41.scope: Deactivated successfully.
Nov 22 03:40:39 np0005532048 systemd[1]: session-41.scope: Consumed 5.258s CPU time.
Nov 22 03:40:39 np0005532048 systemd-logind[822]: Session 41 logged out. Waiting for processes to exit.
Nov 22 03:40:39 np0005532048 systemd-logind[822]: Removed session 41.
Nov 22 03:40:39 np0005532048 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 22 03:40:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:40:43.551151) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800843551261, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1672, "num_deletes": 250, "total_data_size": 2143382, "memory_usage": 2173408, "flush_reason": "Manual Compaction"}
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800843575604, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1292874, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7443, "largest_seqno": 9114, "table_properties": {"data_size": 1287249, "index_size": 2444, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17809, "raw_average_key_size": 21, "raw_value_size": 1273434, "raw_average_value_size": 1534, "num_data_blocks": 114, "num_entries": 830, "num_filter_entries": 830, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800707, "oldest_key_time": 1763800707, "file_creation_time": 1763800843, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 24484 microseconds, and 5157 cpu microseconds.
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:40:43.575658) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1292874 bytes OK
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:40:43.575684) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:40:43.582652) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:40:43.582690) EVENT_LOG_v1 {"time_micros": 1763800843582680, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:40:43.582716) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2135724, prev total WAL file size 2135724, number of live WAL files 2.
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:40:43.583608) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1262KB)], [20(7188KB)]
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800843583643, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8654061, "oldest_snapshot_seqno": -1}
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3439 keys, 7036614 bytes, temperature: kUnknown
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800843679664, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7036614, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7009725, "index_size": 17179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 82306, "raw_average_key_size": 23, "raw_value_size": 6943724, "raw_average_value_size": 2019, "num_data_blocks": 757, "num_entries": 3439, "num_filter_entries": 3439, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763800843, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:40:43.680066) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7036614 bytes
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:40:43.683029) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.0 rd, 73.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.0 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(12.1) write-amplify(5.4) OK, records in: 3889, records dropped: 450 output_compression: NoCompression
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:40:43.683063) EVENT_LOG_v1 {"time_micros": 1763800843683046, "job": 6, "event": "compaction_finished", "compaction_time_micros": 96157, "compaction_time_cpu_micros": 16447, "output_level": 6, "num_output_files": 1, "total_output_size": 7036614, "num_input_records": 3889, "num_output_records": 3439, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800843683455, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800843684667, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:40:43.583526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:40:43.684736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:40:43.684744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:40:43.684746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:40:43.684748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:40:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:40:43.684749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:40:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:45 np0005532048 systemd-logind[822]: New session 42 of user zuul.
Nov 22 03:40:45 np0005532048 systemd[1]: Started Session 42 of User zuul.
Nov 22 03:40:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:46 np0005532048 python3.9[131827]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:40:47 np0005532048 python3.9[131983]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 22 03:40:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:40:48 np0005532048 python3.9[132137]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:40:49 np0005532048 python3.9[132290]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:40:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:50 np0005532048 python3.9[132443]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:40:51 np0005532048 python3.9[132595]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:51 np0005532048 systemd[1]: session-42.scope: Deactivated successfully.
Nov 22 03:40:51 np0005532048 systemd[1]: session-42.scope: Consumed 4.142s CPU time.
Nov 22 03:40:51 np0005532048 systemd-logind[822]: Session 42 logged out. Waiting for processes to exit.
Nov 22 03:40:51 np0005532048 systemd-logind[822]: Removed session 42.
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:40:52
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'backups']
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:40:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:40:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:40:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:57 np0005532048 systemd-logind[822]: New session 43 of user zuul.
Nov 22 03:40:57 np0005532048 systemd[1]: Started Session 43 of User zuul.
Nov 22 03:40:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:40:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2020 writes, 9074 keys, 2020 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2020 writes, 2020 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2020 writes, 9074 keys, 2020 commit groups, 1.0 writes per commit group, ingest: 10.62 MB, 0.02 MB/s#012Interval WAL: 2020 writes, 2020 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     83.7      0.10              0.03         3    0.033       0      0       0.0       0.0#012  L6      1/0    6.71 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.7     85.3     76.5      0.18              0.03         2    0.090    7238    740       0.0       0.0#012 Sum      1/0    6.71 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     55.0     79.0      0.28              0.06         5    0.056    7238    740       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     55.5     79.6      0.28              0.06         4    0.069    7238    740       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     85.3     76.5      0.18              0.03         2    0.090    7238    740       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     85.3      0.10              0.03         2    0.048       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.3 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 561.66 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(37,470.69 KB,0.151203%) FilterBlock(6,27.80 KB,0.0089294%) IndexBlock(6,63.17 KB,0.0202932%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 22 03:40:58 np0005532048 python3.9[132773]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:40:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:40:59 np0005532048 python3.9[132929]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:41:00 np0005532048 python3.9[133013]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 03:41:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:41:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:02 np0005532048 python3.9[133164]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:41:03 np0005532048 systemd[1]: session-18.scope: Deactivated successfully.
Nov 22 03:41:03 np0005532048 systemd[1]: session-18.scope: Consumed 1min 30.420s CPU time.
Nov 22 03:41:03 np0005532048 systemd-logind[822]: Session 18 logged out. Waiting for processes to exit.
Nov 22 03:41:03 np0005532048 systemd-logind[822]: Removed session 18.
Nov 22 03:41:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:41:04 np0005532048 python3.9[133315]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 03:41:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:04 np0005532048 python3.9[133465]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:41:05 np0005532048 python3.9[133640]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:41:06 np0005532048 systemd[1]: session-43.scope: Deactivated successfully.
Nov 22 03:41:06 np0005532048 systemd[1]: session-43.scope: Consumed 6.544s CPU time.
Nov 22 03:41:06 np0005532048 systemd-logind[822]: Session 43 logged out. Waiting for processes to exit.
Nov 22 03:41:06 np0005532048 systemd-logind[822]: Removed session 43.
Nov 22 03:41:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:41:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:41:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:41:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:41:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:41:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:41:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev af1e9f27-51e6-4ad0-90e0-d5ac23192e09 does not exist
Nov 22 03:41:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c3c7a738-6490-4b79-a4cb-9682bf57bc3e does not exist
Nov 22 03:41:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 497baa1e-448a-4404-ae61-ecfcefeaf58a does not exist
Nov 22 03:41:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:41:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:41:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:41:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:41:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:41:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:41:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:06 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:41:06 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:41:06 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:41:06 np0005532048 podman[133912]: 2025-11-22 08:41:06.784965148 +0000 UTC m=+0.073547991 container create 24aa7da303b1c1f7891141eded7a2d21013fd4d5eafd65d6601c10b3218e05b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kalam, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:41:06 np0005532048 podman[133912]: 2025-11-22 08:41:06.73783554 +0000 UTC m=+0.026418443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:41:06 np0005532048 systemd[1]: Started libpod-conmon-24aa7da303b1c1f7891141eded7a2d21013fd4d5eafd65d6601c10b3218e05b3.scope.
Nov 22 03:41:06 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:41:06 np0005532048 podman[133912]: 2025-11-22 08:41:06.924160305 +0000 UTC m=+0.212743168 container init 24aa7da303b1c1f7891141eded7a2d21013fd4d5eafd65d6601c10b3218e05b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kalam, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:41:06 np0005532048 podman[133912]: 2025-11-22 08:41:06.931783109 +0000 UTC m=+0.220365942 container start 24aa7da303b1c1f7891141eded7a2d21013fd4d5eafd65d6601c10b3218e05b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kalam, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:41:06 np0005532048 nervous_kalam[133928]: 167 167
Nov 22 03:41:06 np0005532048 systemd[1]: libpod-24aa7da303b1c1f7891141eded7a2d21013fd4d5eafd65d6601c10b3218e05b3.scope: Deactivated successfully.
Nov 22 03:41:06 np0005532048 conmon[133928]: conmon 24aa7da303b1c1f78911 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-24aa7da303b1c1f7891141eded7a2d21013fd4d5eafd65d6601c10b3218e05b3.scope/container/memory.events
Nov 22 03:41:06 np0005532048 podman[133912]: 2025-11-22 08:41:06.955773669 +0000 UTC m=+0.244356542 container attach 24aa7da303b1c1f7891141eded7a2d21013fd4d5eafd65d6601c10b3218e05b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kalam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:41:06 np0005532048 podman[133912]: 2025-11-22 08:41:06.956426225 +0000 UTC m=+0.245009088 container died 24aa7da303b1c1f7891141eded7a2d21013fd4d5eafd65d6601c10b3218e05b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kalam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 03:41:07 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b28d3181c29bc6b2b06b72f5c811c940ae19727f82f9ef17d5b946d66f454cd2-merged.mount: Deactivated successfully.
Nov 22 03:41:07 np0005532048 podman[133912]: 2025-11-22 08:41:07.151961316 +0000 UTC m=+0.440544159 container remove 24aa7da303b1c1f7891141eded7a2d21013fd4d5eafd65d6601c10b3218e05b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kalam, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 03:41:07 np0005532048 systemd[1]: libpod-conmon-24aa7da303b1c1f7891141eded7a2d21013fd4d5eafd65d6601c10b3218e05b3.scope: Deactivated successfully.
Nov 22 03:41:07 np0005532048 podman[133953]: 2025-11-22 08:41:07.354947715 +0000 UTC m=+0.086870469 container create c47c30e5df8e80fd42c39467a0489b904d22368824f7059bcc98bfef3ef419c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:41:07 np0005532048 podman[133953]: 2025-11-22 08:41:07.289701886 +0000 UTC m=+0.021624620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:41:07 np0005532048 systemd[1]: Started libpod-conmon-c47c30e5df8e80fd42c39467a0489b904d22368824f7059bcc98bfef3ef419c2.scope.
Nov 22 03:41:07 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:41:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94dde28d10b4c92851723edc10d3a373e73a29966d81aac5f4e2608b9a800ec6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94dde28d10b4c92851723edc10d3a373e73a29966d81aac5f4e2608b9a800ec6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94dde28d10b4c92851723edc10d3a373e73a29966d81aac5f4e2608b9a800ec6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94dde28d10b4c92851723edc10d3a373e73a29966d81aac5f4e2608b9a800ec6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94dde28d10b4c92851723edc10d3a373e73a29966d81aac5f4e2608b9a800ec6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:07 np0005532048 podman[133953]: 2025-11-22 08:41:07.827956706 +0000 UTC m=+0.559879450 container init c47c30e5df8e80fd42c39467a0489b904d22368824f7059bcc98bfef3ef419c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williamson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 03:41:07 np0005532048 podman[133953]: 2025-11-22 08:41:07.836703079 +0000 UTC m=+0.568625793 container start c47c30e5df8e80fd42c39467a0489b904d22368824f7059bcc98bfef3ef419c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williamson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:41:07 np0005532048 podman[133953]: 2025-11-22 08:41:07.950982083 +0000 UTC m=+0.682904827 container attach c47c30e5df8e80fd42c39467a0489b904d22368824f7059bcc98bfef3ef419c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:41:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:41:08 np0005532048 nifty_williamson[133970]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:41:08 np0005532048 nifty_williamson[133970]: --> relative data size: 1.0
Nov 22 03:41:08 np0005532048 nifty_williamson[133970]: --> All data devices are unavailable
Nov 22 03:41:08 np0005532048 systemd[1]: libpod-c47c30e5df8e80fd42c39467a0489b904d22368824f7059bcc98bfef3ef419c2.scope: Deactivated successfully.
Nov 22 03:41:08 np0005532048 podman[133953]: 2025-11-22 08:41:08.969291565 +0000 UTC m=+1.701214299 container died c47c30e5df8e80fd42c39467a0489b904d22368824f7059bcc98bfef3ef419c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williamson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:41:08 np0005532048 systemd[1]: libpod-c47c30e5df8e80fd42c39467a0489b904d22368824f7059bcc98bfef3ef419c2.scope: Consumed 1.085s CPU time.
Nov 22 03:41:09 np0005532048 systemd[1]: var-lib-containers-storage-overlay-94dde28d10b4c92851723edc10d3a373e73a29966d81aac5f4e2608b9a800ec6-merged.mount: Deactivated successfully.
Nov 22 03:41:09 np0005532048 podman[133953]: 2025-11-22 08:41:09.031110016 +0000 UTC m=+1.763032730 container remove c47c30e5df8e80fd42c39467a0489b904d22368824f7059bcc98bfef3ef419c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:41:09 np0005532048 systemd[1]: libpod-conmon-c47c30e5df8e80fd42c39467a0489b904d22368824f7059bcc98bfef3ef419c2.scope: Deactivated successfully.
Nov 22 03:41:09 np0005532048 podman[134151]: 2025-11-22 08:41:09.644182518 +0000 UTC m=+0.039062023 container create 9eda09a12c4f4c16bc8dffcc9d5b9f2ad089a2131f73c1b015724133cc896e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:41:09 np0005532048 systemd[1]: Started libpod-conmon-9eda09a12c4f4c16bc8dffcc9d5b9f2ad089a2131f73c1b015724133cc896e0c.scope.
Nov 22 03:41:09 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:41:09 np0005532048 podman[134151]: 2025-11-22 08:41:09.62815242 +0000 UTC m=+0.023031935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:41:09 np0005532048 podman[134151]: 2025-11-22 08:41:09.727649799 +0000 UTC m=+0.122529324 container init 9eda09a12c4f4c16bc8dffcc9d5b9f2ad089a2131f73c1b015724133cc896e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pascal, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:41:09 np0005532048 podman[134151]: 2025-11-22 08:41:09.738447554 +0000 UTC m=+0.133327079 container start 9eda09a12c4f4c16bc8dffcc9d5b9f2ad089a2131f73c1b015724133cc896e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pascal, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 03:41:09 np0005532048 podman[134151]: 2025-11-22 08:41:09.742381954 +0000 UTC m=+0.137261459 container attach 9eda09a12c4f4c16bc8dffcc9d5b9f2ad089a2131f73c1b015724133cc896e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pascal, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:41:09 np0005532048 flamboyant_pascal[134167]: 167 167
Nov 22 03:41:09 np0005532048 systemd[1]: libpod-9eda09a12c4f4c16bc8dffcc9d5b9f2ad089a2131f73c1b015724133cc896e0c.scope: Deactivated successfully.
Nov 22 03:41:09 np0005532048 podman[134151]: 2025-11-22 08:41:09.747089303 +0000 UTC m=+0.141968808 container died 9eda09a12c4f4c16bc8dffcc9d5b9f2ad089a2131f73c1b015724133cc896e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pascal, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:41:09 np0005532048 systemd[1]: var-lib-containers-storage-overlay-bd001d0e18cfb78ada4e195bf731d8de458e6f21ebb258c0cdedf45c805964a7-merged.mount: Deactivated successfully.
Nov 22 03:41:09 np0005532048 podman[134151]: 2025-11-22 08:41:09.795897924 +0000 UTC m=+0.190777419 container remove 9eda09a12c4f4c16bc8dffcc9d5b9f2ad089a2131f73c1b015724133cc896e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:41:09 np0005532048 systemd[1]: libpod-conmon-9eda09a12c4f4c16bc8dffcc9d5b9f2ad089a2131f73c1b015724133cc896e0c.scope: Deactivated successfully.
Nov 22 03:41:09 np0005532048 podman[134192]: 2025-11-22 08:41:09.967166237 +0000 UTC m=+0.055906722 container create c13e2b0935b50755855d5ec51083d7fd775952e608a1bababf0bf52ef303953e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wozniak, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:41:10 np0005532048 systemd[1]: Started libpod-conmon-c13e2b0935b50755855d5ec51083d7fd775952e608a1bababf0bf52ef303953e.scope.
Nov 22 03:41:10 np0005532048 podman[134192]: 2025-11-22 08:41:09.936448296 +0000 UTC m=+0.025188811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:41:10 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:41:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e55192148d2285840770af4569ab159eaf4a99bf6dabde71b7f73892d4ff187e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e55192148d2285840770af4569ab159eaf4a99bf6dabde71b7f73892d4ff187e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e55192148d2285840770af4569ab159eaf4a99bf6dabde71b7f73892d4ff187e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e55192148d2285840770af4569ab159eaf4a99bf6dabde71b7f73892d4ff187e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:10 np0005532048 podman[134192]: 2025-11-22 08:41:10.061232607 +0000 UTC m=+0.149973112 container init c13e2b0935b50755855d5ec51083d7fd775952e608a1bababf0bf52ef303953e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wozniak, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:41:10 np0005532048 podman[134192]: 2025-11-22 08:41:10.06961764 +0000 UTC m=+0.158358125 container start c13e2b0935b50755855d5ec51083d7fd775952e608a1bababf0bf52ef303953e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:41:10 np0005532048 podman[134192]: 2025-11-22 08:41:10.073036798 +0000 UTC m=+0.161777313 container attach c13e2b0935b50755855d5ec51083d7fd775952e608a1bababf0bf52ef303953e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 03:41:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]: {
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:    "0": [
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:        {
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "devices": [
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "/dev/loop3"
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            ],
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "lv_name": "ceph_lv0",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "lv_size": "21470642176",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "name": "ceph_lv0",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "tags": {
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.cluster_name": "ceph",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.crush_device_class": "",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.encrypted": "0",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.osd_id": "0",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.type": "block",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.vdo": "0"
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            },
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "type": "block",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "vg_name": "ceph_vg0"
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:        }
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:    ],
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:    "1": [
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:        {
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "devices": [
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "/dev/loop4"
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            ],
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "lv_name": "ceph_lv1",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "lv_size": "21470642176",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "name": "ceph_lv1",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "tags": {
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.cluster_name": "ceph",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.crush_device_class": "",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.encrypted": "0",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.osd_id": "1",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.type": "block",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.vdo": "0"
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            },
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "type": "block",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "vg_name": "ceph_vg1"
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:        }
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:    ],
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:    "2": [
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:        {
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "devices": [
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "/dev/loop5"
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            ],
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "lv_name": "ceph_lv2",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "lv_size": "21470642176",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "name": "ceph_lv2",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "tags": {
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.cluster_name": "ceph",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.crush_device_class": "",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.encrypted": "0",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.osd_id": "2",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.type": "block",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:                "ceph.vdo": "0"
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            },
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "type": "block",
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:            "vg_name": "ceph_vg2"
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:        }
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]:    ]
Nov 22 03:41:10 np0005532048 sharp_wozniak[134209]: }
Nov 22 03:41:10 np0005532048 systemd[1]: libpod-c13e2b0935b50755855d5ec51083d7fd775952e608a1bababf0bf52ef303953e.scope: Deactivated successfully.
Nov 22 03:41:10 np0005532048 podman[134192]: 2025-11-22 08:41:10.928283875 +0000 UTC m=+1.017024360 container died c13e2b0935b50755855d5ec51083d7fd775952e608a1bababf0bf52ef303953e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wozniak, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:41:10 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e55192148d2285840770af4569ab159eaf4a99bf6dabde71b7f73892d4ff187e-merged.mount: Deactivated successfully.
Nov 22 03:41:10 np0005532048 podman[134192]: 2025-11-22 08:41:10.989218003 +0000 UTC m=+1.077958488 container remove c13e2b0935b50755855d5ec51083d7fd775952e608a1bababf0bf52ef303953e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wozniak, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:41:10 np0005532048 systemd[1]: libpod-conmon-c13e2b0935b50755855d5ec51083d7fd775952e608a1bababf0bf52ef303953e.scope: Deactivated successfully.
Nov 22 03:41:11 np0005532048 systemd-logind[822]: New session 44 of user zuul.
Nov 22 03:41:11 np0005532048 systemd[1]: Started Session 44 of User zuul.
Nov 22 03:41:11 np0005532048 podman[134456]: 2025-11-22 08:41:11.629045725 +0000 UTC m=+0.041853534 container create 3bdf7c744bdb92714cbafa25425254ec4783f38c6d3c38ef36331d079f84feb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_elion, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:41:11 np0005532048 systemd[1]: Started libpod-conmon-3bdf7c744bdb92714cbafa25425254ec4783f38c6d3c38ef36331d079f84feb9.scope.
Nov 22 03:41:11 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:41:11 np0005532048 podman[134456]: 2025-11-22 08:41:11.610501954 +0000 UTC m=+0.023309783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:41:11 np0005532048 podman[134456]: 2025-11-22 08:41:11.723491056 +0000 UTC m=+0.136298885 container init 3bdf7c744bdb92714cbafa25425254ec4783f38c6d3c38ef36331d079f84feb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_elion, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:41:11 np0005532048 podman[134456]: 2025-11-22 08:41:11.732272799 +0000 UTC m=+0.145080618 container start 3bdf7c744bdb92714cbafa25425254ec4783f38c6d3c38ef36331d079f84feb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_elion, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 03:41:11 np0005532048 podman[134456]: 2025-11-22 08:41:11.736472176 +0000 UTC m=+0.149280015 container attach 3bdf7c744bdb92714cbafa25425254ec4783f38c6d3c38ef36331d079f84feb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_elion, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:41:11 np0005532048 confident_elion[134516]: 167 167
Nov 22 03:41:11 np0005532048 systemd[1]: libpod-3bdf7c744bdb92714cbafa25425254ec4783f38c6d3c38ef36331d079f84feb9.scope: Deactivated successfully.
Nov 22 03:41:11 np0005532048 podman[134456]: 2025-11-22 08:41:11.73940697 +0000 UTC m=+0.152214779 container died 3bdf7c744bdb92714cbafa25425254ec4783f38c6d3c38ef36331d079f84feb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:41:11 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6949ff058674df1469ab5eecace965c383bfff4a87ce9684a4bf93bb3ba5e33c-merged.mount: Deactivated successfully.
Nov 22 03:41:11 np0005532048 podman[134456]: 2025-11-22 08:41:11.784469936 +0000 UTC m=+0.197277745 container remove 3bdf7c744bdb92714cbafa25425254ec4783f38c6d3c38ef36331d079f84feb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_elion, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 03:41:11 np0005532048 systemd[1]: libpod-conmon-3bdf7c744bdb92714cbafa25425254ec4783f38c6d3c38ef36331d079f84feb9.scope: Deactivated successfully.
Nov 22 03:41:11 np0005532048 podman[134569]: 2025-11-22 08:41:11.943000685 +0000 UTC m=+0.049634182 container create fd79647f6d82894e37829496e6345c5fa00a8ad9448c38ce2de88f1d7161c79d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:41:11 np0005532048 systemd[1]: Started libpod-conmon-fd79647f6d82894e37829496e6345c5fa00a8ad9448c38ce2de88f1d7161c79d.scope.
Nov 22 03:41:12 np0005532048 podman[134569]: 2025-11-22 08:41:11.920954295 +0000 UTC m=+0.027587822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:41:12 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:41:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef9b9f7eea0ef3a8b445479781e4056f3a09a72d374a49f611dffc722e5f941f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef9b9f7eea0ef3a8b445479781e4056f3a09a72d374a49f611dffc722e5f941f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef9b9f7eea0ef3a8b445479781e4056f3a09a72d374a49f611dffc722e5f941f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef9b9f7eea0ef3a8b445479781e4056f3a09a72d374a49f611dffc722e5f941f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:12 np0005532048 podman[134569]: 2025-11-22 08:41:12.064459992 +0000 UTC m=+0.171093519 container init fd79647f6d82894e37829496e6345c5fa00a8ad9448c38ce2de88f1d7161c79d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:41:12 np0005532048 python3.9[134558]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:41:12 np0005532048 podman[134569]: 2025-11-22 08:41:12.072403614 +0000 UTC m=+0.179037111 container start fd79647f6d82894e37829496e6345c5fa00a8ad9448c38ce2de88f1d7161c79d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 03:41:12 np0005532048 podman[134569]: 2025-11-22 08:41:12.07655212 +0000 UTC m=+0.183185617 container attach fd79647f6d82894e37829496e6345c5fa00a8ad9448c38ce2de88f1d7161c79d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 03:41:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]: {
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:        "osd_id": 1,
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:        "type": "bluestore"
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:    },
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:        "osd_id": 0,
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:        "type": "bluestore"
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:    },
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:        "osd_id": 2,
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:        "type": "bluestore"
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]:    }
Nov 22 03:41:13 np0005532048 vibrant_bohr[134586]: }
Nov 22 03:41:13 np0005532048 systemd[1]: libpod-fd79647f6d82894e37829496e6345c5fa00a8ad9448c38ce2de88f1d7161c79d.scope: Deactivated successfully.
Nov 22 03:41:13 np0005532048 systemd[1]: libpod-fd79647f6d82894e37829496e6345c5fa00a8ad9448c38ce2de88f1d7161c79d.scope: Consumed 1.078s CPU time.
Nov 22 03:41:13 np0005532048 conmon[134586]: conmon fd79647f6d82894e3782 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fd79647f6d82894e37829496e6345c5fa00a8ad9448c38ce2de88f1d7161c79d.scope/container/memory.events
Nov 22 03:41:13 np0005532048 podman[134569]: 2025-11-22 08:41:13.168871552 +0000 UTC m=+1.275505049 container died fd79647f6d82894e37829496e6345c5fa00a8ad9448c38ce2de88f1d7161c79d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:41:13 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ef9b9f7eea0ef3a8b445479781e4056f3a09a72d374a49f611dffc722e5f941f-merged.mount: Deactivated successfully.
Nov 22 03:41:13 np0005532048 podman[134569]: 2025-11-22 08:41:13.234940031 +0000 UTC m=+1.341573528 container remove fd79647f6d82894e37829496e6345c5fa00a8ad9448c38ce2de88f1d7161c79d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 03:41:13 np0005532048 systemd[1]: libpod-conmon-fd79647f6d82894e37829496e6345c5fa00a8ad9448c38ce2de88f1d7161c79d.scope: Deactivated successfully.
Nov 22 03:41:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:41:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:41:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:41:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:41:13 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 89c31036-d8c9-4e95-927e-c94857dea102 does not exist
Nov 22 03:41:13 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a6b964b3-7f79-461b-9cf8-a22ae7583fe8 does not exist
Nov 22 03:41:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:41:14 np0005532048 python3.9[134836]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:41:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:14 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:41:14 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:41:14 np0005532048 python3.9[134988]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:41:15 np0005532048 python3.9[135140]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:16 np0005532048 python3.9[135263]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800874.872428-65-272889788657360/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4f429ff67080be87c0359a071b5bf02725fdd971 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:16 np0005532048 python3.9[135415]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:17 np0005532048 python3.9[135538]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800876.3722355-65-118411557801060/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=2f24a18f8058dc822cbd7a7f00ab6084f8d2eec0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:18 np0005532048 python3.9[135690]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:41:18 np0005532048 python3.9[135813]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800877.6419904-65-67682622169072/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a03a0ceed0c0416ece251c94b2c4eab0c9caea97 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:19 np0005532048 python3.9[135965]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:41:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:20 np0005532048 python3.9[136117]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:41:20 np0005532048 python3.9[136269]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:21 np0005532048 python3.9[136392]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800880.4696863-124-178935927004100/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=2e37ac134f9a53f2f831d871c0a2e5df5192f8ff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:22 np0005532048 python3.9[136544]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:41:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:41:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:41:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:41:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:41:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:41:22 np0005532048 python3.9[136667]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800881.6735544-124-157350700726194/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ab16696edd59a991dfafba2a65e45dc7ab5d38c4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:23 np0005532048 python3.9[136819]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:41:23 np0005532048 python3.9[136942]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800882.8923852-124-217966356865688/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=852956d09b2009ac2d5e4132002281f2579c2378 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:24 np0005532048 python3.9[137094]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:41:25 np0005532048 python3.9[137246]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:41:25 np0005532048 python3.9[137398]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:26 np0005532048 python3.9[137521]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800885.4592335-183-92668612095739/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=30e21ae3dc5d8a28fa9c02701e14edfdf61361c5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:27 np0005532048 python3.9[137673]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:27 np0005532048 python3.9[137796]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800886.7271314-183-100436154557380/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ab16696edd59a991dfafba2a65e45dc7ab5d38c4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:28 np0005532048 python3.9[137948]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:41:28 np0005532048 python3.9[138071]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800887.9322462-183-14287388269966/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=807e934f68c4da1ac912033bc853569fc8a45df7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:30 np0005532048 python3.9[138223]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:41:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:30 np0005532048 python3.9[138375]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:31 np0005532048 python3.9[138498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800890.2877357-251-146222481622721/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7a47cfef5659f96e38749b52219b95e14d8a2625 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:32 np0005532048 python3.9[138650]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:41:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:32 np0005532048 python3.9[138802]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:33 np0005532048 python3.9[138925]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800892.2044053-275-141373366491367/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7a47cfef5659f96e38749b52219b95e14d8a2625 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:41:33 np0005532048 python3.9[139077]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:41:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:34 np0005532048 python3.9[139229]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:35 np0005532048 python3.9[139352]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800894.2028906-299-108022920440212/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7a47cfef5659f96e38749b52219b95e14d8a2625 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:36 np0005532048 python3.9[139504]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:41:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:36 np0005532048 python3.9[139656]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:37 np0005532048 python3.9[139779]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800896.2097068-323-177387708524047/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7a47cfef5659f96e38749b52219b95e14d8a2625 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:37 np0005532048 python3.9[139931]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:41:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:41:38 np0005532048 python3.9[140083]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:39 np0005532048 python3.9[140206]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800898.14907-347-226971738281818/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7a47cfef5659f96e38749b52219b95e14d8a2625 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:39 np0005532048 python3.9[140358]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:41:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:40 np0005532048 python3.9[140510]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:41 np0005532048 python3.9[140633]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800900.1821294-371-150071897494468/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=7a47cfef5659f96e38749b52219b95e14d8a2625 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:41 np0005532048 systemd[1]: session-44.scope: Deactivated successfully.
Nov 22 03:41:41 np0005532048 systemd[1]: session-44.scope: Consumed 23.696s CPU time.
Nov 22 03:41:41 np0005532048 systemd-logind[822]: Session 44 logged out. Waiting for processes to exit.
Nov 22 03:41:41 np0005532048 systemd-logind[822]: Removed session 44.
Nov 22 03:41:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:41:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:47 np0005532048 systemd-logind[822]: New session 45 of user zuul.
Nov 22 03:41:47 np0005532048 systemd[1]: Started Session 45 of User zuul.
Nov 22 03:41:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:41:48 np0005532048 python3.9[140813]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:41:49.315133) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800909315196, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 761, "num_deletes": 251, "total_data_size": 982492, "memory_usage": 996280, "flush_reason": "Manual Compaction"}
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800909324465, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 973634, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9115, "largest_seqno": 9875, "table_properties": {"data_size": 969733, "index_size": 1681, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8310, "raw_average_key_size": 18, "raw_value_size": 961948, "raw_average_value_size": 2147, "num_data_blocks": 78, "num_entries": 448, "num_filter_entries": 448, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800844, "oldest_key_time": 1763800844, "file_creation_time": 1763800909, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 9364 microseconds, and 3349 cpu microseconds.
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:41:49.324508) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 973634 bytes OK
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:41:49.324526) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:41:49.326276) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:41:49.326295) EVENT_LOG_v1 {"time_micros": 1763800909326289, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:41:49.326334) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 978642, prev total WAL file size 978642, number of live WAL files 2.
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:41:49.326985) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(950KB)], [23(6871KB)]
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800909327070, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8010248, "oldest_snapshot_seqno": -1}
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3373 keys, 6421486 bytes, temperature: kUnknown
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800909363798, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6421486, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6396202, "index_size": 15767, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 81691, "raw_average_key_size": 24, "raw_value_size": 6332457, "raw_average_value_size": 1877, "num_data_blocks": 684, "num_entries": 3373, "num_filter_entries": 3373, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763800909, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:41:49.364093) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6421486 bytes
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:41:49.365817) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.5 rd, 174.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 6.7 +0.0 blob) out(6.1 +0.0 blob), read-write-amplify(14.8) write-amplify(6.6) OK, records in: 3887, records dropped: 514 output_compression: NoCompression
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:41:49.365838) EVENT_LOG_v1 {"time_micros": 1763800909365827, "job": 8, "event": "compaction_finished", "compaction_time_micros": 36829, "compaction_time_cpu_micros": 19260, "output_level": 6, "num_output_files": 1, "total_output_size": 6421486, "num_input_records": 3887, "num_output_records": 3373, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800909366341, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763800909368228, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:41:49.326851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:41:49.368287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:41:49.368292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:41:49.368294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:41:49.368295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:41:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:41:49.368297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:41:49 np0005532048 python3.9[140965]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:50 np0005532048 python3.9[141088]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763800908.9488444-34-281474737267016/.source.conf _original_basename=ceph.conf follow=False checksum=2c5b8d95d6f6d37975309d3ab1d48af0166c88d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:50 np0005532048 python3.9[141240]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:51 np0005532048 python3.9[141363]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763800910.466343-34-139968161858539/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=4960bd1f30f6819c36201db3694f6bf9dc55bf29 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:51 np0005532048 systemd[1]: session-45.scope: Deactivated successfully.
Nov 22 03:41:51 np0005532048 systemd[1]: session-45.scope: Consumed 2.881s CPU time.
Nov 22 03:41:51 np0005532048 systemd-logind[822]: Session 45 logged out. Waiting for processes to exit.
Nov 22 03:41:51 np0005532048 systemd-logind[822]: Removed session 45.
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:41:52
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['volumes', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'vms', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data']
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:41:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:41:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:41:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:57 np0005532048 systemd-logind[822]: New session 46 of user zuul.
Nov 22 03:41:57 np0005532048 systemd[1]: Started Session 46 of User zuul.
Nov 22 03:41:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:58 np0005532048 python3.9[141541]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:41:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:41:59 np0005532048 python3.9[141697]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:42:00 np0005532048 python3.9[141849]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:42:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:00 np0005532048 python3.9[141999]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:42:01 np0005532048 python3.9[142151]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:42:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:42:03 np0005532048 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 22 03:42:04 np0005532048 python3.9[142308]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:42:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:05 np0005532048 python3.9[142393]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:42:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:07 np0005532048 python3.9[142546]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:42:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:08 np0005532048 python3[142701]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 22 03:42:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:42:09 np0005532048 python3.9[142853]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:09 np0005532048 python3.9[143005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:10 np0005532048 python3.9[143083]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:11 np0005532048 python3.9[143235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:11 np0005532048 python3.9[143313]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.73rpw_7x recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:12 np0005532048 python3.9[143465]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:12 np0005532048 python3.9[143543]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:13 np0005532048 python3.9[143695]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:42:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:42:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:42:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:42:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:42:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:42:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:42:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:42:14 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 20250732-697c-46ac-8ce1-40805eb7ffe8 does not exist
Nov 22 03:42:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:42:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:42:14 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 5c1c8644-0bc8-4e1d-ab76-8fc9d879ffaf does not exist
Nov 22 03:42:14 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ff780b65-50c3-4b76-8f70-d0df8b0ae247 does not exist
Nov 22 03:42:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:42:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:42:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:42:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:42:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:14 np0005532048 python3[143980]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 03:42:14 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:42:14 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:42:14 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:42:14 np0005532048 podman[144242]: 2025-11-22 08:42:14.87152143 +0000 UTC m=+0.058686495 container create 3c68644a84a81983bfca7a0e1c2bfde6c2e72673ee6010a1db1d9a598c5f6770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_darwin, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:42:14 np0005532048 podman[144242]: 2025-11-22 08:42:14.838712298 +0000 UTC m=+0.025877393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:42:14 np0005532048 systemd[1]: Started libpod-conmon-3c68644a84a81983bfca7a0e1c2bfde6c2e72673ee6010a1db1d9a598c5f6770.scope.
Nov 22 03:42:14 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:42:15 np0005532048 podman[144242]: 2025-11-22 08:42:15.024613922 +0000 UTC m=+0.211778987 container init 3c68644a84a81983bfca7a0e1c2bfde6c2e72673ee6010a1db1d9a598c5f6770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_darwin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:42:15 np0005532048 podman[144242]: 2025-11-22 08:42:15.035265275 +0000 UTC m=+0.222430340 container start 3c68644a84a81983bfca7a0e1c2bfde6c2e72673ee6010a1db1d9a598c5f6770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_darwin, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:42:15 np0005532048 hardcore_darwin[144288]: 167 167
Nov 22 03:42:15 np0005532048 systemd[1]: libpod-3c68644a84a81983bfca7a0e1c2bfde6c2e72673ee6010a1db1d9a598c5f6770.scope: Deactivated successfully.
Nov 22 03:42:15 np0005532048 podman[144242]: 2025-11-22 08:42:15.053094742 +0000 UTC m=+0.240259837 container attach 3c68644a84a81983bfca7a0e1c2bfde6c2e72673ee6010a1db1d9a598c5f6770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:42:15 np0005532048 podman[144242]: 2025-11-22 08:42:15.053734608 +0000 UTC m=+0.240899693 container died 3c68644a84a81983bfca7a0e1c2bfde6c2e72673ee6010a1db1d9a598c5f6770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:42:15 np0005532048 systemd[1]: var-lib-containers-storage-overlay-117f26363ef7347939ce13d87ab69536e4732a041225de899dc7302bce4772ef-merged.mount: Deactivated successfully.
Nov 22 03:42:15 np0005532048 python3.9[144285]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:15 np0005532048 podman[144242]: 2025-11-22 08:42:15.151512313 +0000 UTC m=+0.338677368 container remove 3c68644a84a81983bfca7a0e1c2bfde6c2e72673ee6010a1db1d9a598c5f6770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:42:15 np0005532048 systemd[1]: libpod-conmon-3c68644a84a81983bfca7a0e1c2bfde6c2e72673ee6010a1db1d9a598c5f6770.scope: Deactivated successfully.
Nov 22 03:42:15 np0005532048 podman[144339]: 2025-11-22 08:42:15.321104238 +0000 UTC m=+0.053097921 container create 01fe61d6c5f04d8d0fa06254aecc75699f7887938d2806af8113750e83c6c510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 03:42:15 np0005532048 systemd[1]: Started libpod-conmon-01fe61d6c5f04d8d0fa06254aecc75699f7887938d2806af8113750e83c6c510.scope.
Nov 22 03:42:15 np0005532048 podman[144339]: 2025-11-22 08:42:15.290713329 +0000 UTC m=+0.022707032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:42:15 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:42:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd150b3e4e5800cd5fb464678be773b703c689ca7c7888a8847bf95837d48b8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd150b3e4e5800cd5fb464678be773b703c689ca7c7888a8847bf95837d48b8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd150b3e4e5800cd5fb464678be773b703c689ca7c7888a8847bf95837d48b8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd150b3e4e5800cd5fb464678be773b703c689ca7c7888a8847bf95837d48b8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd150b3e4e5800cd5fb464678be773b703c689ca7c7888a8847bf95837d48b8c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:15 np0005532048 podman[144339]: 2025-11-22 08:42:15.434601736 +0000 UTC m=+0.166595509 container init 01fe61d6c5f04d8d0fa06254aecc75699f7887938d2806af8113750e83c6c510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 03:42:15 np0005532048 podman[144339]: 2025-11-22 08:42:15.443169136 +0000 UTC m=+0.175162819 container start 01fe61d6c5f04d8d0fa06254aecc75699f7887938d2806af8113750e83c6c510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:42:15 np0005532048 podman[144339]: 2025-11-22 08:42:15.450492683 +0000 UTC m=+0.182486366 container attach 01fe61d6c5f04d8d0fa06254aecc75699f7887938d2806af8113750e83c6c510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banach, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:42:16 np0005532048 python3.9[144460]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800934.556102-157-137582454216213/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:16 np0005532048 vibrant_banach[144380]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:42:16 np0005532048 vibrant_banach[144380]: --> relative data size: 1.0
Nov 22 03:42:16 np0005532048 vibrant_banach[144380]: --> All data devices are unavailable
Nov 22 03:42:16 np0005532048 systemd[1]: libpod-01fe61d6c5f04d8d0fa06254aecc75699f7887938d2806af8113750e83c6c510.scope: Deactivated successfully.
Nov 22 03:42:16 np0005532048 systemd[1]: libpod-01fe61d6c5f04d8d0fa06254aecc75699f7887938d2806af8113750e83c6c510.scope: Consumed 1.088s CPU time.
Nov 22 03:42:16 np0005532048 podman[144339]: 2025-11-22 08:42:16.622877651 +0000 UTC m=+1.354871374 container died 01fe61d6c5f04d8d0fa06254aecc75699f7887938d2806af8113750e83c6c510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banach, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:42:16 np0005532048 systemd[1]: var-lib-containers-storage-overlay-bd150b3e4e5800cd5fb464678be773b703c689ca7c7888a8847bf95837d48b8c-merged.mount: Deactivated successfully.
Nov 22 03:42:16 np0005532048 podman[144339]: 2025-11-22 08:42:16.719077865 +0000 UTC m=+1.451071548 container remove 01fe61d6c5f04d8d0fa06254aecc75699f7887938d2806af8113750e83c6c510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:42:16 np0005532048 systemd[1]: libpod-conmon-01fe61d6c5f04d8d0fa06254aecc75699f7887938d2806af8113750e83c6c510.scope: Deactivated successfully.
Nov 22 03:42:16 np0005532048 python3.9[144636]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:17 np0005532048 python3.9[144895]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800936.3124025-172-187794112284195/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:17 np0005532048 podman[144913]: 2025-11-22 08:42:17.276890298 +0000 UTC m=+0.024465569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:42:17 np0005532048 podman[144913]: 2025-11-22 08:42:17.403261335 +0000 UTC m=+0.150836586 container create 72f5a591b0aad7ed626f1cacc2d8f3b33c3bedb4d398f7d9c7bd9dc5f5069308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:42:17 np0005532048 systemd[1]: Started libpod-conmon-72f5a591b0aad7ed626f1cacc2d8f3b33c3bedb4d398f7d9c7bd9dc5f5069308.scope.
Nov 22 03:42:17 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:42:17 np0005532048 podman[144913]: 2025-11-22 08:42:17.527659022 +0000 UTC m=+0.275234283 container init 72f5a591b0aad7ed626f1cacc2d8f3b33c3bedb4d398f7d9c7bd9dc5f5069308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_payne, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:42:17 np0005532048 podman[144913]: 2025-11-22 08:42:17.540771888 +0000 UTC m=+0.288347119 container start 72f5a591b0aad7ed626f1cacc2d8f3b33c3bedb4d398f7d9c7bd9dc5f5069308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_payne, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 03:42:17 np0005532048 awesome_payne[144953]: 167 167
Nov 22 03:42:17 np0005532048 systemd[1]: libpod-72f5a591b0aad7ed626f1cacc2d8f3b33c3bedb4d398f7d9c7bd9dc5f5069308.scope: Deactivated successfully.
Nov 22 03:42:17 np0005532048 conmon[144953]: conmon 72f5a591b0aad7ed626f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-72f5a591b0aad7ed626f1cacc2d8f3b33c3bedb4d398f7d9c7bd9dc5f5069308.scope/container/memory.events
Nov 22 03:42:17 np0005532048 podman[144913]: 2025-11-22 08:42:17.555366033 +0000 UTC m=+0.302941304 container attach 72f5a591b0aad7ed626f1cacc2d8f3b33c3bedb4d398f7d9c7bd9dc5f5069308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 22 03:42:17 np0005532048 podman[144913]: 2025-11-22 08:42:17.555699981 +0000 UTC m=+0.303275222 container died 72f5a591b0aad7ed626f1cacc2d8f3b33c3bedb4d398f7d9c7bd9dc5f5069308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:42:17 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4c85a90331b727cbc8db6e11941910f75547901c53583608974250d6a74d27aa-merged.mount: Deactivated successfully.
Nov 22 03:42:17 np0005532048 podman[144913]: 2025-11-22 08:42:17.704964825 +0000 UTC m=+0.452540066 container remove 72f5a591b0aad7ed626f1cacc2d8f3b33c3bedb4d398f7d9c7bd9dc5f5069308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:42:17 np0005532048 systemd[1]: libpod-conmon-72f5a591b0aad7ed626f1cacc2d8f3b33c3bedb4d398f7d9c7bd9dc5f5069308.scope: Deactivated successfully.
Nov 22 03:42:17 np0005532048 podman[145099]: 2025-11-22 08:42:17.917277275 +0000 UTC m=+0.077456456 container create cc3387eb27b3272eee9eb677a7b4c37f0b3cec6dce3268fe0f1faca1cf5b4b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:42:17 np0005532048 podman[145099]: 2025-11-22 08:42:17.873695638 +0000 UTC m=+0.033874849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:42:17 np0005532048 systemd[1]: Started libpod-conmon-cc3387eb27b3272eee9eb677a7b4c37f0b3cec6dce3268fe0f1faca1cf5b4b2f.scope.
Nov 22 03:42:18 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:42:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35dee8723889916832798cfe208a8b97430f2d08ddb2578a54916f38a957da0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35dee8723889916832798cfe208a8b97430f2d08ddb2578a54916f38a957da0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35dee8723889916832798cfe208a8b97430f2d08ddb2578a54916f38a957da0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35dee8723889916832798cfe208a8b97430f2d08ddb2578a54916f38a957da0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:18 np0005532048 python3.9[145116]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:18 np0005532048 podman[145099]: 2025-11-22 08:42:18.104867471 +0000 UTC m=+0.265046742 container init cc3387eb27b3272eee9eb677a7b4c37f0b3cec6dce3268fe0f1faca1cf5b4b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:42:18 np0005532048 podman[145099]: 2025-11-22 08:42:18.1149852 +0000 UTC m=+0.275164391 container start cc3387eb27b3272eee9eb677a7b4c37f0b3cec6dce3268fe0f1faca1cf5b4b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 03:42:18 np0005532048 podman[145099]: 2025-11-22 08:42:18.121806125 +0000 UTC m=+0.281985326 container attach cc3387eb27b3272eee9eb677a7b4c37f0b3cec6dce3268fe0f1faca1cf5b4b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:42:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:42:18 np0005532048 python3.9[145252]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800937.552308-187-244685498569789/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:18 np0005532048 keen_allen[145123]: {
Nov 22 03:42:18 np0005532048 keen_allen[145123]:    "0": [
Nov 22 03:42:18 np0005532048 keen_allen[145123]:        {
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "devices": [
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "/dev/loop3"
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            ],
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "lv_name": "ceph_lv0",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "lv_size": "21470642176",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "name": "ceph_lv0",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "tags": {
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.cluster_name": "ceph",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.crush_device_class": "",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.encrypted": "0",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.osd_id": "0",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.type": "block",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.vdo": "0"
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            },
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "type": "block",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "vg_name": "ceph_vg0"
Nov 22 03:42:18 np0005532048 keen_allen[145123]:        }
Nov 22 03:42:18 np0005532048 keen_allen[145123]:    ],
Nov 22 03:42:18 np0005532048 keen_allen[145123]:    "1": [
Nov 22 03:42:18 np0005532048 keen_allen[145123]:        {
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "devices": [
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "/dev/loop4"
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            ],
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "lv_name": "ceph_lv1",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "lv_size": "21470642176",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "name": "ceph_lv1",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "tags": {
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.cluster_name": "ceph",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.crush_device_class": "",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.encrypted": "0",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.osd_id": "1",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.type": "block",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.vdo": "0"
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            },
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "type": "block",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "vg_name": "ceph_vg1"
Nov 22 03:42:18 np0005532048 keen_allen[145123]:        }
Nov 22 03:42:18 np0005532048 keen_allen[145123]:    ],
Nov 22 03:42:18 np0005532048 keen_allen[145123]:    "2": [
Nov 22 03:42:18 np0005532048 keen_allen[145123]:        {
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "devices": [
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "/dev/loop5"
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            ],
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "lv_name": "ceph_lv2",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "lv_size": "21470642176",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "name": "ceph_lv2",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "tags": {
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.cluster_name": "ceph",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.crush_device_class": "",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.encrypted": "0",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.osd_id": "2",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.type": "block",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:                "ceph.vdo": "0"
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            },
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "type": "block",
Nov 22 03:42:18 np0005532048 keen_allen[145123]:            "vg_name": "ceph_vg2"
Nov 22 03:42:18 np0005532048 keen_allen[145123]:        }
Nov 22 03:42:18 np0005532048 keen_allen[145123]:    ]
Nov 22 03:42:18 np0005532048 keen_allen[145123]: }
Nov 22 03:42:18 np0005532048 systemd[1]: libpod-cc3387eb27b3272eee9eb677a7b4c37f0b3cec6dce3268fe0f1faca1cf5b4b2f.scope: Deactivated successfully.
Nov 22 03:42:18 np0005532048 podman[145099]: 2025-11-22 08:42:18.912129294 +0000 UTC m=+1.072308485 container died cc3387eb27b3272eee9eb677a7b4c37f0b3cec6dce3268fe0f1faca1cf5b4b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:42:19 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b35dee8723889916832798cfe208a8b97430f2d08ddb2578a54916f38a957da0-merged.mount: Deactivated successfully.
Nov 22 03:42:19 np0005532048 podman[145099]: 2025-11-22 08:42:19.193870892 +0000 UTC m=+1.354050083 container remove cc3387eb27b3272eee9eb677a7b4c37f0b3cec6dce3268fe0f1faca1cf5b4b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:42:19 np0005532048 systemd[1]: libpod-conmon-cc3387eb27b3272eee9eb677a7b4c37f0b3cec6dce3268fe0f1faca1cf5b4b2f.scope: Deactivated successfully.
Nov 22 03:42:19 np0005532048 python3.9[145418]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:19 np0005532048 podman[145682]: 2025-11-22 08:42:19.758919499 +0000 UTC m=+0.043676100 container create 6e9e8a548d5a9237fa6ccb4ab39c15a5554cd9f55e8a73ccf21ebaa11c9912fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:42:19 np0005532048 systemd[1]: Started libpod-conmon-6e9e8a548d5a9237fa6ccb4ab39c15a5554cd9f55e8a73ccf21ebaa11c9912fb.scope.
Nov 22 03:42:19 np0005532048 python3.9[145669]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800938.821546-202-121269032852338/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:19 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:42:19 np0005532048 podman[145682]: 2025-11-22 08:42:19.739638656 +0000 UTC m=+0.024395227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:42:19 np0005532048 podman[145682]: 2025-11-22 08:42:19.846807001 +0000 UTC m=+0.131563602 container init 6e9e8a548d5a9237fa6ccb4ab39c15a5554cd9f55e8a73ccf21ebaa11c9912fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:42:19 np0005532048 podman[145682]: 2025-11-22 08:42:19.853544504 +0000 UTC m=+0.138301065 container start 6e9e8a548d5a9237fa6ccb4ab39c15a5554cd9f55e8a73ccf21ebaa11c9912fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 03:42:19 np0005532048 podman[145682]: 2025-11-22 08:42:19.857502765 +0000 UTC m=+0.142259346 container attach 6e9e8a548d5a9237fa6ccb4ab39c15a5554cd9f55e8a73ccf21ebaa11c9912fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:42:19 np0005532048 beautiful_mirzakhani[145699]: 167 167
Nov 22 03:42:19 np0005532048 systemd[1]: libpod-6e9e8a548d5a9237fa6ccb4ab39c15a5554cd9f55e8a73ccf21ebaa11c9912fb.scope: Deactivated successfully.
Nov 22 03:42:19 np0005532048 podman[145682]: 2025-11-22 08:42:19.86159389 +0000 UTC m=+0.146350451 container died 6e9e8a548d5a9237fa6ccb4ab39c15a5554cd9f55e8a73ccf21ebaa11c9912fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:42:19 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c147bf9646c2632d3a7530391a22dd3ea986755fc20794a3d2e5d6c474106448-merged.mount: Deactivated successfully.
Nov 22 03:42:19 np0005532048 podman[145682]: 2025-11-22 08:42:19.970441399 +0000 UTC m=+0.255197960 container remove 6e9e8a548d5a9237fa6ccb4ab39c15a5554cd9f55e8a73ccf21ebaa11c9912fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 03:42:20 np0005532048 systemd[1]: libpod-conmon-6e9e8a548d5a9237fa6ccb4ab39c15a5554cd9f55e8a73ccf21ebaa11c9912fb.scope: Deactivated successfully.
Nov 22 03:42:20 np0005532048 podman[145799]: 2025-11-22 08:42:20.126497797 +0000 UTC m=+0.038509778 container create d4cea80b738b7bb0fe7e8222c123684a8bb9895012b1269753c690abb5b368cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 03:42:20 np0005532048 systemd[1]: Started libpod-conmon-d4cea80b738b7bb0fe7e8222c123684a8bb9895012b1269753c690abb5b368cc.scope.
Nov 22 03:42:20 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:42:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/641a969d237bfb758dfbcfbcd0103b0da34d1e54b8c018674cbe099fc22cfa1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/641a969d237bfb758dfbcfbcd0103b0da34d1e54b8c018674cbe099fc22cfa1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/641a969d237bfb758dfbcfbcd0103b0da34d1e54b8c018674cbe099fc22cfa1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/641a969d237bfb758dfbcfbcd0103b0da34d1e54b8c018674cbe099fc22cfa1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:20 np0005532048 podman[145799]: 2025-11-22 08:42:20.109341648 +0000 UTC m=+0.021353639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:42:20 np0005532048 podman[145799]: 2025-11-22 08:42:20.2183311 +0000 UTC m=+0.130343111 container init d4cea80b738b7bb0fe7e8222c123684a8bb9895012b1269753c690abb5b368cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:42:20 np0005532048 podman[145799]: 2025-11-22 08:42:20.224743294 +0000 UTC m=+0.136755275 container start d4cea80b738b7bb0fe7e8222c123684a8bb9895012b1269753c690abb5b368cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:42:20 np0005532048 podman[145799]: 2025-11-22 08:42:20.2280625 +0000 UTC m=+0.140074481 container attach d4cea80b738b7bb0fe7e8222c123684a8bb9895012b1269753c690abb5b368cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 03:42:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:20 np0005532048 python3.9[145895]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:21 np0005532048 python3.9[146020]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763800939.9941657-217-266727107217269/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:21 np0005532048 focused_panini[145816]: {
Nov 22 03:42:21 np0005532048 focused_panini[145816]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:42:21 np0005532048 focused_panini[145816]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:42:21 np0005532048 focused_panini[145816]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:42:21 np0005532048 focused_panini[145816]:        "osd_id": 1,
Nov 22 03:42:21 np0005532048 focused_panini[145816]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:42:21 np0005532048 focused_panini[145816]:        "type": "bluestore"
Nov 22 03:42:21 np0005532048 focused_panini[145816]:    },
Nov 22 03:42:21 np0005532048 focused_panini[145816]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:42:21 np0005532048 focused_panini[145816]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:42:21 np0005532048 focused_panini[145816]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:42:21 np0005532048 focused_panini[145816]:        "osd_id": 0,
Nov 22 03:42:21 np0005532048 focused_panini[145816]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:42:21 np0005532048 focused_panini[145816]:        "type": "bluestore"
Nov 22 03:42:21 np0005532048 focused_panini[145816]:    },
Nov 22 03:42:21 np0005532048 focused_panini[145816]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:42:21 np0005532048 focused_panini[145816]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:42:21 np0005532048 focused_panini[145816]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:42:21 np0005532048 focused_panini[145816]:        "osd_id": 2,
Nov 22 03:42:21 np0005532048 focused_panini[145816]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:42:21 np0005532048 focused_panini[145816]:        "type": "bluestore"
Nov 22 03:42:21 np0005532048 focused_panini[145816]:    }
Nov 22 03:42:21 np0005532048 focused_panini[145816]: }
Nov 22 03:42:21 np0005532048 systemd[1]: libpod-d4cea80b738b7bb0fe7e8222c123684a8bb9895012b1269753c690abb5b368cc.scope: Deactivated successfully.
Nov 22 03:42:21 np0005532048 systemd[1]: libpod-d4cea80b738b7bb0fe7e8222c123684a8bb9895012b1269753c690abb5b368cc.scope: Consumed 1.048s CPU time.
Nov 22 03:42:21 np0005532048 podman[145799]: 2025-11-22 08:42:21.267993403 +0000 UTC m=+1.180005394 container died d4cea80b738b7bb0fe7e8222c123684a8bb9895012b1269753c690abb5b368cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:42:21 np0005532048 systemd[1]: var-lib-containers-storage-overlay-641a969d237bfb758dfbcfbcd0103b0da34d1e54b8c018674cbe099fc22cfa1a-merged.mount: Deactivated successfully.
Nov 22 03:42:21 np0005532048 podman[145799]: 2025-11-22 08:42:21.328912704 +0000 UTC m=+1.240924685 container remove d4cea80b738b7bb0fe7e8222c123684a8bb9895012b1269753c690abb5b368cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 03:42:21 np0005532048 systemd[1]: libpod-conmon-d4cea80b738b7bb0fe7e8222c123684a8bb9895012b1269753c690abb5b368cc.scope: Deactivated successfully.
Nov 22 03:42:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:42:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:42:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:42:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:42:21 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 9af07d54-ab07-4674-a4d8-c53ebe1c7728 does not exist
Nov 22 03:42:21 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 2c917643-b9de-470b-9f7f-a52b17b7a9b3 does not exist
Nov 22 03:42:21 np0005532048 python3.9[146263]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:22 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:42:22 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:42:22 np0005532048 python3.9[146415]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:42:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:42:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:42:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:42:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:42:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:42:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:42:23 np0005532048 python3.9[146570]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:42:24 np0005532048 python3.9[146722]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:42:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:24 np0005532048 python3.9[146875]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:42:25 np0005532048 python3.9[147029]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:42:26 np0005532048 python3.9[147184]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:27 np0005532048 python3.9[147334]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:42:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:28 np0005532048 python3.9[147487]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:42:28 np0005532048 ovs-vsctl[147488]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 22 03:42:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:42:29 np0005532048 python3.9[147640]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:42:29 np0005532048 python3.9[147795]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:42:29 np0005532048 ovs-vsctl[147796]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 22 03:42:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:30 np0005532048 python3.9[147946]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:42:31 np0005532048 python3.9[148100]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:42:31 np0005532048 python3.9[148252]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:32 np0005532048 python3.9[148330]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:42:33 np0005532048 python3.9[148482]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:33 np0005532048 python3.9[148560]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:42:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:42:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:34 np0005532048 python3.9[148712]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:35 np0005532048 python3.9[148864]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:35 np0005532048 python3.9[148942]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:36 np0005532048 python3.9[149094]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:36 np0005532048 python3.9[149172]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:37 np0005532048 python3.9[149324]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:42:37 np0005532048 systemd[1]: Reloading.
Nov 22 03:42:37 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:42:37 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:42:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:42:38 np0005532048 python3.9[149514]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:39 np0005532048 python3.9[149592]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:39 np0005532048 python3.9[149744]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:40 np0005532048 python3.9[149822]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:41 np0005532048 python3.9[149974]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:42:41 np0005532048 systemd[1]: Reloading.
Nov 22 03:42:41 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:42:41 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:42:41 np0005532048 systemd[1]: Starting Create netns directory...
Nov 22 03:42:41 np0005532048 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 03:42:41 np0005532048 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 03:42:41 np0005532048 systemd[1]: Finished Create netns directory.
Nov 22 03:42:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:42 np0005532048 python3.9[150167]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:42:43 np0005532048 python3.9[150319]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:42:43 np0005532048 python3.9[150442]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763800962.7222111-468-140885282438973/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:42:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:44 np0005532048 python3.9[150594]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:42:45 np0005532048 python3.9[150746]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:46 np0005532048 python3.9[150869]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763800965.0220697-493-32525607431011/.source.json _original_basename=.6ato1s6t follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:46 np0005532048 python3.9[151021]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:42:49 np0005532048 python3.9[151448]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 22 03:42:50 np0005532048 python3.9[151600]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 03:42:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:51 np0005532048 python3.9[151752]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:42:52
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', '.rgw.root', 'images', 'vms', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'default.rgw.control']
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:42:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:42:52 np0005532048 python3[151930]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 03:42:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:42:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:42:58 np0005532048 podman[151943]: 2025-11-22 08:42:58.807957494 +0000 UTC m=+5.956845093 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 22 03:42:59 np0005532048 podman[152060]: 2025-11-22 08:42:58.932277808 +0000 UTC m=+0.022690869 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 22 03:42:59 np0005532048 podman[152060]: 2025-11-22 08:42:59.097820977 +0000 UTC m=+0.188234018 container create e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:42:59 np0005532048 python3[151930]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 22 03:42:59 np0005532048 python3.9[152249]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:43:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:00 np0005532048 python3.9[152403]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:01 np0005532048 python3.9[152479]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:43:02 np0005532048 python3.9[152630]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763800981.198546-581-106689053031128/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:43:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:02 np0005532048 python3.9[152706]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 03:43:02 np0005532048 systemd[1]: Reloading.
Nov 22 03:43:02 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:43:02 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:43:03 np0005532048 python3.9[152817]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:43:03 np0005532048 systemd[1]: Reloading.
Nov 22 03:43:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:43:03 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:43:03 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:43:03 np0005532048 systemd[1]: Starting ovn_controller container...
Nov 22 03:43:04 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:43:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d128a99f7e6d88a955045d655a71429de50266e9a0a1f89a8e91d501429f4a8e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:04 np0005532048 systemd[1]: Started /usr/bin/podman healthcheck run e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478.
Nov 22 03:43:04 np0005532048 podman[152857]: 2025-11-22 08:43:04.121196386 +0000 UTC m=+0.148545929 container init e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: + sudo -E kolla_set_configs
Nov 22 03:43:04 np0005532048 podman[152857]: 2025-11-22 08:43:04.159024501 +0000 UTC m=+0.186374004 container start e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 03:43:04 np0005532048 edpm-start-podman-container[152857]: ovn_controller
Nov 22 03:43:04 np0005532048 systemd[1]: Created slice User Slice of UID 0.
Nov 22 03:43:04 np0005532048 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 22 03:43:04 np0005532048 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 22 03:43:04 np0005532048 systemd[1]: Starting User Manager for UID 0...
Nov 22 03:43:04 np0005532048 edpm-start-podman-container[152856]: Creating additional drop-in dependency for "ovn_controller" (e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478)
Nov 22 03:43:04 np0005532048 podman[152879]: 2025-11-22 08:43:04.252163806 +0000 UTC m=+0.081074481 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible)
Nov 22 03:43:04 np0005532048 systemd[1]: e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478-7df5514fc2b94746.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 03:43:04 np0005532048 systemd[1]: e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478-7df5514fc2b94746.service: Failed with result 'exit-code'.
Nov 22 03:43:04 np0005532048 systemd[1]: Reloading.
Nov 22 03:43:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:04 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:43:04 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:43:04 np0005532048 systemd[152911]: Queued start job for default target Main User Target.
Nov 22 03:43:04 np0005532048 systemd[152911]: Created slice User Application Slice.
Nov 22 03:43:04 np0005532048 systemd[152911]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 22 03:43:04 np0005532048 systemd[152911]: Started Daily Cleanup of User's Temporary Directories.
Nov 22 03:43:04 np0005532048 systemd[152911]: Reached target Paths.
Nov 22 03:43:04 np0005532048 systemd[152911]: Reached target Timers.
Nov 22 03:43:04 np0005532048 systemd[152911]: Starting D-Bus User Message Bus Socket...
Nov 22 03:43:04 np0005532048 systemd[152911]: Starting Create User's Volatile Files and Directories...
Nov 22 03:43:04 np0005532048 systemd[152911]: Finished Create User's Volatile Files and Directories.
Nov 22 03:43:04 np0005532048 systemd[152911]: Listening on D-Bus User Message Bus Socket.
Nov 22 03:43:04 np0005532048 systemd[152911]: Reached target Sockets.
Nov 22 03:43:04 np0005532048 systemd[152911]: Reached target Basic System.
Nov 22 03:43:04 np0005532048 systemd[152911]: Reached target Main User Target.
Nov 22 03:43:04 np0005532048 systemd[152911]: Startup finished in 173ms.
Nov 22 03:43:04 np0005532048 systemd[1]: Started User Manager for UID 0.
Nov 22 03:43:04 np0005532048 systemd[1]: Started ovn_controller container.
Nov 22 03:43:04 np0005532048 systemd[1]: Started Session c1 of User root.
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: INFO:__main__:Validating config file
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: INFO:__main__:Writing out command to execute
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: ++ cat /run_command
Nov 22 03:43:04 np0005532048 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: + ARGS=
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: + sudo kolla_copy_cacerts
Nov 22 03:43:04 np0005532048 systemd[1]: Started Session c2 of User root.
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: + [[ ! -n '' ]]
Nov 22 03:43:04 np0005532048 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: + . kolla_extend_start
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: + umask 0022
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 22 03:43:04 np0005532048 NetworkManager[48920]: <info>  [1763800984.7564] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 22 03:43:04 np0005532048 NetworkManager[48920]: <info>  [1763800984.7574] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:43:04 np0005532048 NetworkManager[48920]: <info>  [1763800984.7595] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 22 03:43:04 np0005532048 NetworkManager[48920]: <info>  [1763800984.7604] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 22 03:43:04 np0005532048 NetworkManager[48920]: <info>  [1763800984.7610] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 22 03:43:04 np0005532048 kernel: br-int: entered promiscuous mode
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00024|main|INFO|OVS feature set changed, force recompute.
Nov 22 03:43:04 np0005532048 systemd-udevd[153022]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 03:43:04 np0005532048 NetworkManager[48920]: <info>  [1763800984.8101] manager: (ovn-a3a669-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 03:43:04 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:04Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 03:43:04 np0005532048 kernel: genev_sys_6081: entered promiscuous mode
Nov 22 03:43:04 np0005532048 NetworkManager[48920]: <info>  [1763800984.8295] device (genev_sys_6081): carrier: link connected
Nov 22 03:43:04 np0005532048 NetworkManager[48920]: <info>  [1763800984.8298] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 22 03:43:05 np0005532048 python3.9[153137]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:43:05 np0005532048 ovs-vsctl[153138]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 22 03:43:05 np0005532048 python3.9[153290]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:43:05 np0005532048 ovs-vsctl[153292]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 22 03:43:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:06 np0005532048 python3.9[153445]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:43:06 np0005532048 ovs-vsctl[153446]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 22 03:43:07 np0005532048 systemd[1]: session-46.scope: Deactivated successfully.
Nov 22 03:43:07 np0005532048 systemd[1]: session-46.scope: Consumed 1min 483ms CPU time.
Nov 22 03:43:07 np0005532048 systemd-logind[822]: Session 46 logged out. Waiting for processes to exit.
Nov 22 03:43:07 np0005532048 systemd-logind[822]: Removed session 46.
Nov 22 03:43:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:43:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:12 np0005532048 systemd-logind[822]: New session 48 of user zuul.
Nov 22 03:43:12 np0005532048 systemd[1]: Started Session 48 of User zuul.
Nov 22 03:43:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:43:13 np0005532048 python3.9[153627]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:43:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:14 np0005532048 systemd[1]: Stopping User Manager for UID 0...
Nov 22 03:43:14 np0005532048 systemd[152911]: Activating special unit Exit the Session...
Nov 22 03:43:14 np0005532048 systemd[152911]: Stopped target Main User Target.
Nov 22 03:43:14 np0005532048 systemd[152911]: Stopped target Basic System.
Nov 22 03:43:14 np0005532048 systemd[152911]: Stopped target Paths.
Nov 22 03:43:14 np0005532048 systemd[152911]: Stopped target Sockets.
Nov 22 03:43:14 np0005532048 systemd[152911]: Stopped target Timers.
Nov 22 03:43:14 np0005532048 systemd[152911]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 22 03:43:14 np0005532048 systemd[152911]: Closed D-Bus User Message Bus Socket.
Nov 22 03:43:14 np0005532048 systemd[152911]: Stopped Create User's Volatile Files and Directories.
Nov 22 03:43:14 np0005532048 systemd[152911]: Removed slice User Application Slice.
Nov 22 03:43:14 np0005532048 systemd[152911]: Reached target Shutdown.
Nov 22 03:43:14 np0005532048 systemd[152911]: Finished Exit the Session.
Nov 22 03:43:14 np0005532048 systemd[152911]: Reached target Exit the Session.
Nov 22 03:43:14 np0005532048 systemd[1]: user@0.service: Deactivated successfully.
Nov 22 03:43:14 np0005532048 systemd[1]: Stopped User Manager for UID 0.
Nov 22 03:43:14 np0005532048 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 22 03:43:14 np0005532048 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 22 03:43:14 np0005532048 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 22 03:43:14 np0005532048 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 22 03:43:14 np0005532048 systemd[1]: Removed slice User Slice of UID 0.
Nov 22 03:43:15 np0005532048 python3.9[153786]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:15 np0005532048 python3.9[153938]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:16 np0005532048 python3.9[154090]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:17 np0005532048 python3.9[154242]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:17 np0005532048 python3.9[154394]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:18 np0005532048 python3.9[154544]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:43:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:43:19 np0005532048 python3.9[154696]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 22 03:43:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:20 np0005532048 python3.9[154846]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:21 np0005532048 python3.9[154967]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801000.2224348-86-238513469579673/.source follow=False _original_basename=haproxy.j2 checksum=deae64da24ad28f71dc47276f2e9f268f19a4519 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:43:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:43:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:43:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:43:22 np0005532048 python3.9[155236]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:43:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:43:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:43:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:43:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:43:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:43:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:43:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:43:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:43:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:43:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:43:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:43:22 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev b9c11564-c475-4bd6-8211-817d00088c47 does not exist
Nov 22 03:43:22 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev dc97720d-f06d-4d12-b9d3-f45b7ec29aa6 does not exist
Nov 22 03:43:22 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c4ae22b5-adcd-4e99-af45-366f2e19a252 does not exist
Nov 22 03:43:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:43:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:43:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:43:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:43:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:43:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:43:22 np0005532048 python3.9[155477]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801001.7335815-101-189054088171074/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:23 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:43:23 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:43:23 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:43:23 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:43:23 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:43:23 np0005532048 podman[155733]: 2025-11-22 08:43:23.441389101 +0000 UTC m=+0.049793649 container create 16cf1c6c8c0efe1f60224d53e0128ac3fded76cc0f4ed6b4efd92e99cb4136ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:43:23 np0005532048 systemd[1]: Started libpod-conmon-16cf1c6c8c0efe1f60224d53e0128ac3fded76cc0f4ed6b4efd92e99cb4136ff.scope.
Nov 22 03:43:23 np0005532048 podman[155733]: 2025-11-22 08:43:23.419091553 +0000 UTC m=+0.027496121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:43:23 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:43:23 np0005532048 podman[155733]: 2025-11-22 08:43:23.533934862 +0000 UTC m=+0.142339440 container init 16cf1c6c8c0efe1f60224d53e0128ac3fded76cc0f4ed6b4efd92e99cb4136ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:43:23 np0005532048 podman[155733]: 2025-11-22 08:43:23.543142601 +0000 UTC m=+0.151547149 container start 16cf1c6c8c0efe1f60224d53e0128ac3fded76cc0f4ed6b4efd92e99cb4136ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 03:43:23 np0005532048 relaxed_germain[155796]: 167 167
Nov 22 03:43:23 np0005532048 systemd[1]: libpod-16cf1c6c8c0efe1f60224d53e0128ac3fded76cc0f4ed6b4efd92e99cb4136ff.scope: Deactivated successfully.
Nov 22 03:43:23 np0005532048 podman[155733]: 2025-11-22 08:43:23.550425713 +0000 UTC m=+0.158830291 container attach 16cf1c6c8c0efe1f60224d53e0128ac3fded76cc0f4ed6b4efd92e99cb4136ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 03:43:23 np0005532048 podman[155733]: 2025-11-22 08:43:23.552108442 +0000 UTC m=+0.160513000 container died 16cf1c6c8c0efe1f60224d53e0128ac3fded76cc0f4ed6b4efd92e99cb4136ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 22 03:43:23 np0005532048 systemd[1]: var-lib-containers-storage-overlay-567ea726ee3c6994dc51bd27473a3fa259ea9762520769687f72f3233ac62459-merged.mount: Deactivated successfully.
Nov 22 03:43:23 np0005532048 podman[155733]: 2025-11-22 08:43:23.602501856 +0000 UTC m=+0.210906414 container remove 16cf1c6c8c0efe1f60224d53e0128ac3fded76cc0f4ed6b4efd92e99cb4136ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:43:23 np0005532048 systemd[1]: libpod-conmon-16cf1c6c8c0efe1f60224d53e0128ac3fded76cc0f4ed6b4efd92e99cb4136ff.scope: Deactivated successfully.
Nov 22 03:43:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:43:23 np0005532048 podman[155823]: 2025-11-22 08:43:23.772196724 +0000 UTC m=+0.045957359 container create ad5272ea96491161c1dbb575bdc71cc7125abcf81168817b0b4bdaff9d14f6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:43:23 np0005532048 systemd[1]: Started libpod-conmon-ad5272ea96491161c1dbb575bdc71cc7125abcf81168817b0b4bdaff9d14f6c8.scope.
Nov 22 03:43:23 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:43:23 np0005532048 python3.9[155801]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:43:23 np0005532048 podman[155823]: 2025-11-22 08:43:23.75049213 +0000 UTC m=+0.024252785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:43:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f15de054d42107d4c3054b0ade52c727deccd324d7e7126dfff1bc49aae126d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f15de054d42107d4c3054b0ade52c727deccd324d7e7126dfff1bc49aae126d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f15de054d42107d4c3054b0ade52c727deccd324d7e7126dfff1bc49aae126d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f15de054d42107d4c3054b0ade52c727deccd324d7e7126dfff1bc49aae126d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f15de054d42107d4c3054b0ade52c727deccd324d7e7126dfff1bc49aae126d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:23 np0005532048 podman[155823]: 2025-11-22 08:43:23.870577773 +0000 UTC m=+0.144338428 container init ad5272ea96491161c1dbb575bdc71cc7125abcf81168817b0b4bdaff9d14f6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:43:23 np0005532048 podman[155823]: 2025-11-22 08:43:23.878613614 +0000 UTC m=+0.152374249 container start ad5272ea96491161c1dbb575bdc71cc7125abcf81168817b0b4bdaff9d14f6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 03:43:23 np0005532048 podman[155823]: 2025-11-22 08:43:23.883164931 +0000 UTC m=+0.156925576 container attach ad5272ea96491161c1dbb575bdc71cc7125abcf81168817b0b4bdaff9d14f6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:43:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:24 np0005532048 python3.9[155928]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:43:25 np0005532048 lucid_pascal[155840]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:43:25 np0005532048 lucid_pascal[155840]: --> relative data size: 1.0
Nov 22 03:43:25 np0005532048 lucid_pascal[155840]: --> All data devices are unavailable
Nov 22 03:43:25 np0005532048 systemd[1]: libpod-ad5272ea96491161c1dbb575bdc71cc7125abcf81168817b0b4bdaff9d14f6c8.scope: Deactivated successfully.
Nov 22 03:43:25 np0005532048 systemd[1]: libpod-ad5272ea96491161c1dbb575bdc71cc7125abcf81168817b0b4bdaff9d14f6c8.scope: Consumed 1.143s CPU time.
Nov 22 03:43:25 np0005532048 podman[155823]: 2025-11-22 08:43:25.081104745 +0000 UTC m=+1.354865400 container died ad5272ea96491161c1dbb575bdc71cc7125abcf81168817b0b4bdaff9d14f6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 03:43:25 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6f15de054d42107d4c3054b0ade52c727deccd324d7e7126dfff1bc49aae126d-merged.mount: Deactivated successfully.
Nov 22 03:43:25 np0005532048 podman[155823]: 2025-11-22 08:43:25.226765983 +0000 UTC m=+1.500526638 container remove ad5272ea96491161c1dbb575bdc71cc7125abcf81168817b0b4bdaff9d14f6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:43:25 np0005532048 systemd[1]: libpod-conmon-ad5272ea96491161c1dbb575bdc71cc7125abcf81168817b0b4bdaff9d14f6c8.scope: Deactivated successfully.
Nov 22 03:43:25 np0005532048 podman[156108]: 2025-11-22 08:43:25.890064409 +0000 UTC m=+0.046395190 container create 81cc2878a19d48a2235209dbe38048ffbdc254d9f0fe943f6858d9e6196d84e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:43:25 np0005532048 systemd[1]: Started libpod-conmon-81cc2878a19d48a2235209dbe38048ffbdc254d9f0fe943f6858d9e6196d84e6.scope.
Nov 22 03:43:25 np0005532048 podman[156108]: 2025-11-22 08:43:25.868527099 +0000 UTC m=+0.024857910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:43:25 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:43:25 np0005532048 podman[156108]: 2025-11-22 08:43:25.993737473 +0000 UTC m=+0.150068294 container init 81cc2878a19d48a2235209dbe38048ffbdc254d9f0fe943f6858d9e6196d84e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_herschel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:43:26 np0005532048 podman[156108]: 2025-11-22 08:43:26.002367898 +0000 UTC m=+0.158698679 container start 81cc2878a19d48a2235209dbe38048ffbdc254d9f0fe943f6858d9e6196d84e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_herschel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 22 03:43:26 np0005532048 dreamy_herschel[156124]: 167 167
Nov 22 03:43:26 np0005532048 podman[156108]: 2025-11-22 08:43:26.007747075 +0000 UTC m=+0.164077986 container attach 81cc2878a19d48a2235209dbe38048ffbdc254d9f0fe943f6858d9e6196d84e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 03:43:26 np0005532048 systemd[1]: libpod-81cc2878a19d48a2235209dbe38048ffbdc254d9f0fe943f6858d9e6196d84e6.scope: Deactivated successfully.
Nov 22 03:43:26 np0005532048 conmon[156124]: conmon 81cc2878a19d48a22352 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81cc2878a19d48a2235209dbe38048ffbdc254d9f0fe943f6858d9e6196d84e6.scope/container/memory.events
Nov 22 03:43:26 np0005532048 podman[156108]: 2025-11-22 08:43:26.010209594 +0000 UTC m=+0.166540375 container died 81cc2878a19d48a2235209dbe38048ffbdc254d9f0fe943f6858d9e6196d84e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:43:26 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9d4e5de530fee3a1ce8ecca5f3489ba1868414719ef5da0c368727ef39d14ae2-merged.mount: Deactivated successfully.
Nov 22 03:43:26 np0005532048 podman[156108]: 2025-11-22 08:43:26.053884237 +0000 UTC m=+0.210215018 container remove 81cc2878a19d48a2235209dbe38048ffbdc254d9f0fe943f6858d9e6196d84e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_herschel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:43:26 np0005532048 systemd[1]: libpod-conmon-81cc2878a19d48a2235209dbe38048ffbdc254d9f0fe943f6858d9e6196d84e6.scope: Deactivated successfully.
Nov 22 03:43:26 np0005532048 podman[156148]: 2025-11-22 08:43:26.225993813 +0000 UTC m=+0.047384973 container create d290ac999fad1e79d6e26c03def0ee47f7ce54f530494e4f512a8e66d0aafce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 03:43:26 np0005532048 systemd[1]: Started libpod-conmon-d290ac999fad1e79d6e26c03def0ee47f7ce54f530494e4f512a8e66d0aafce8.scope.
Nov 22 03:43:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:26 np0005532048 podman[156148]: 2025-11-22 08:43:26.201587445 +0000 UTC m=+0.022978625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:43:26 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:43:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7037b954e6d3b8aabb686a89eea94fe152c555997a34b8d6b3a3a6b5cfa73bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7037b954e6d3b8aabb686a89eea94fe152c555997a34b8d6b3a3a6b5cfa73bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7037b954e6d3b8aabb686a89eea94fe152c555997a34b8d6b3a3a6b5cfa73bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7037b954e6d3b8aabb686a89eea94fe152c555997a34b8d6b3a3a6b5cfa73bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:26 np0005532048 podman[156148]: 2025-11-22 08:43:26.332833982 +0000 UTC m=+0.154225142 container init d290ac999fad1e79d6e26c03def0ee47f7ce54f530494e4f512a8e66d0aafce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 03:43:26 np0005532048 podman[156148]: 2025-11-22 08:43:26.342109622 +0000 UTC m=+0.163500772 container start d290ac999fad1e79d6e26c03def0ee47f7ce54f530494e4f512a8e66d0aafce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 03:43:26 np0005532048 podman[156148]: 2025-11-22 08:43:26.34794734 +0000 UTC m=+0.169338520 container attach d290ac999fad1e79d6e26c03def0ee47f7ce54f530494e4f512a8e66d0aafce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]: {
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:    "0": [
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:        {
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "devices": [
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "/dev/loop3"
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            ],
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "lv_name": "ceph_lv0",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "lv_size": "21470642176",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "name": "ceph_lv0",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "tags": {
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.cluster_name": "ceph",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.crush_device_class": "",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.encrypted": "0",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.osd_id": "0",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.type": "block",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.vdo": "0"
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            },
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "type": "block",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "vg_name": "ceph_vg0"
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:        }
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:    ],
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:    "1": [
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:        {
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "devices": [
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "/dev/loop4"
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            ],
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "lv_name": "ceph_lv1",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "lv_size": "21470642176",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "name": "ceph_lv1",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "tags": {
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.cluster_name": "ceph",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.crush_device_class": "",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.encrypted": "0",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.osd_id": "1",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.type": "block",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.vdo": "0"
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            },
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "type": "block",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "vg_name": "ceph_vg1"
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:        }
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:    ],
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:    "2": [
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:        {
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "devices": [
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "/dev/loop5"
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            ],
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "lv_name": "ceph_lv2",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "lv_size": "21470642176",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "name": "ceph_lv2",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "tags": {
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.cluster_name": "ceph",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.crush_device_class": "",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.encrypted": "0",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.osd_id": "2",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.type": "block",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:                "ceph.vdo": "0"
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            },
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "type": "block",
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:            "vg_name": "ceph_vg2"
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:        }
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]:    ]
Nov 22 03:43:27 np0005532048 unruffled_jackson[156164]: }
Nov 22 03:43:27 np0005532048 systemd[1]: libpod-d290ac999fad1e79d6e26c03def0ee47f7ce54f530494e4f512a8e66d0aafce8.scope: Deactivated successfully.
Nov 22 03:43:27 np0005532048 podman[156148]: 2025-11-22 08:43:27.275146733 +0000 UTC m=+1.096537913 container died d290ac999fad1e79d6e26c03def0ee47f7ce54f530494e4f512a8e66d0aafce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:43:27 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f7037b954e6d3b8aabb686a89eea94fe152c555997a34b8d6b3a3a6b5cfa73bc-merged.mount: Deactivated successfully.
Nov 22 03:43:27 np0005532048 podman[156148]: 2025-11-22 08:43:27.349649458 +0000 UTC m=+1.171040608 container remove d290ac999fad1e79d6e26c03def0ee47f7ce54f530494e4f512a8e66d0aafce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 03:43:27 np0005532048 systemd[1]: libpod-conmon-d290ac999fad1e79d6e26c03def0ee47f7ce54f530494e4f512a8e66d0aafce8.scope: Deactivated successfully.
Nov 22 03:43:27 np0005532048 python3.9[156320]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:43:27 np0005532048 podman[156603]: 2025-11-22 08:43:27.996622966 +0000 UTC m=+0.047795022 container create 182acf69030e6ac18dee3f2e7e7b5e236b6b7efee34a3222ea3a4c34fbf369ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_taussig, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:43:28 np0005532048 systemd[1]: Started libpod-conmon-182acf69030e6ac18dee3f2e7e7b5e236b6b7efee34a3222ea3a4c34fbf369ec.scope.
Nov 22 03:43:28 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:43:28 np0005532048 podman[156603]: 2025-11-22 08:43:27.976129851 +0000 UTC m=+0.027301927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:43:28 np0005532048 podman[156603]: 2025-11-22 08:43:28.088207325 +0000 UTC m=+0.139379401 container init 182acf69030e6ac18dee3f2e7e7b5e236b6b7efee34a3222ea3a4c34fbf369ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_taussig, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:43:28 np0005532048 podman[156603]: 2025-11-22 08:43:28.097432023 +0000 UTC m=+0.148604079 container start 182acf69030e6ac18dee3f2e7e7b5e236b6b7efee34a3222ea3a4c34fbf369ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_taussig, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:43:28 np0005532048 podman[156603]: 2025-11-22 08:43:28.101544371 +0000 UTC m=+0.152716417 container attach 182acf69030e6ac18dee3f2e7e7b5e236b6b7efee34a3222ea3a4c34fbf369ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_taussig, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 03:43:28 np0005532048 objective_taussig[156648]: 167 167
Nov 22 03:43:28 np0005532048 systemd[1]: libpod-182acf69030e6ac18dee3f2e7e7b5e236b6b7efee34a3222ea3a4c34fbf369ec.scope: Deactivated successfully.
Nov 22 03:43:28 np0005532048 podman[156603]: 2025-11-22 08:43:28.106428826 +0000 UTC m=+0.157600892 container died 182acf69030e6ac18dee3f2e7e7b5e236b6b7efee34a3222ea3a4c34fbf369ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_taussig, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:43:28 np0005532048 systemd[1]: var-lib-containers-storage-overlay-debf61048e07872e25cbb831a0060511bb8400bc9eddde4b5188d0adf7f0fd77-merged.mount: Deactivated successfully.
Nov 22 03:43:28 np0005532048 podman[156603]: 2025-11-22 08:43:28.150190152 +0000 UTC m=+0.201362218 container remove 182acf69030e6ac18dee3f2e7e7b5e236b6b7efee34a3222ea3a4c34fbf369ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:43:28 np0005532048 systemd[1]: libpod-conmon-182acf69030e6ac18dee3f2e7e7b5e236b6b7efee34a3222ea3a4c34fbf369ec.scope: Deactivated successfully.
Nov 22 03:43:28 np0005532048 python3.9[156645]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:28 np0005532048 podman[156671]: 2025-11-22 08:43:28.321902638 +0000 UTC m=+0.048255763 container create 1a293be2b2e505bd6789a4d40521fa1e8bea553fee31c050c3b8442b7d1ff4c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:43:28 np0005532048 systemd[1]: Started libpod-conmon-1a293be2b2e505bd6789a4d40521fa1e8bea553fee31c050c3b8442b7d1ff4c7.scope.
Nov 22 03:43:28 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:43:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2969f8cfe6fa32e6fe9974845cb7878f7fb5ae36287e74bcdba9630f0258db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2969f8cfe6fa32e6fe9974845cb7878f7fb5ae36287e74bcdba9630f0258db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2969f8cfe6fa32e6fe9974845cb7878f7fb5ae36287e74bcdba9630f0258db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2969f8cfe6fa32e6fe9974845cb7878f7fb5ae36287e74bcdba9630f0258db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:28 np0005532048 podman[156671]: 2025-11-22 08:43:28.304042675 +0000 UTC m=+0.030395790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:43:28 np0005532048 podman[156671]: 2025-11-22 08:43:28.414250614 +0000 UTC m=+0.140603749 container init 1a293be2b2e505bd6789a4d40521fa1e8bea553fee31c050c3b8442b7d1ff4c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:43:28 np0005532048 podman[156671]: 2025-11-22 08:43:28.422241034 +0000 UTC m=+0.148594149 container start 1a293be2b2e505bd6789a4d40521fa1e8bea553fee31c050c3b8442b7d1ff4c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:43:28 np0005532048 podman[156671]: 2025-11-22 08:43:28.428393679 +0000 UTC m=+0.154746824 container attach 1a293be2b2e505bd6789a4d40521fa1e8bea553fee31c050c3b8442b7d1ff4c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:43:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:43:28 np0005532048 python3.9[156812]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801007.7036242-138-144528695451202/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:29 np0005532048 python3.9[156967]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]: {
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:        "osd_id": 1,
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:        "type": "bluestore"
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:    },
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:        "osd_id": 0,
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:        "type": "bluestore"
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:    },
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:        "osd_id": 2,
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:        "type": "bluestore"
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]:    }
Nov 22 03:43:29 np0005532048 infallible_thompson[156734]: }
Nov 22 03:43:29 np0005532048 systemd[1]: libpod-1a293be2b2e505bd6789a4d40521fa1e8bea553fee31c050c3b8442b7d1ff4c7.scope: Deactivated successfully.
Nov 22 03:43:29 np0005532048 systemd[1]: libpod-1a293be2b2e505bd6789a4d40521fa1e8bea553fee31c050c3b8442b7d1ff4c7.scope: Consumed 1.119s CPU time.
Nov 22 03:43:29 np0005532048 podman[156671]: 2025-11-22 08:43:29.53288499 +0000 UTC m=+1.259238105 container died 1a293be2b2e505bd6789a4d40521fa1e8bea553fee31c050c3b8442b7d1ff4c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:43:29 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1f2969f8cfe6fa32e6fe9974845cb7878f7fb5ae36287e74bcdba9630f0258db-merged.mount: Deactivated successfully.
Nov 22 03:43:29 np0005532048 podman[156671]: 2025-11-22 08:43:29.597734086 +0000 UTC m=+1.324087201 container remove 1a293be2b2e505bd6789a4d40521fa1e8bea553fee31c050c3b8442b7d1ff4c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:43:29 np0005532048 systemd[1]: libpod-conmon-1a293be2b2e505bd6789a4d40521fa1e8bea553fee31c050c3b8442b7d1ff4c7.scope: Deactivated successfully.
Nov 22 03:43:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:43:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:43:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:43:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:43:29 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev f1ccc61e-54b6-4196-9b0e-f1d311cb0105 does not exist
Nov 22 03:43:29 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 51b1e044-decd-40fe-a4a2-d11a0a1fe82a does not exist
Nov 22 03:43:30 np0005532048 python3.9[157174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801008.9699874-138-96034087753163/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:30 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:43:30 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:43:31 np0005532048 python3.9[157324]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:31 np0005532048 python3.9[157445]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801010.7758186-182-63451133121080/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:32 np0005532048 python3.9[157595]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:32 np0005532048 python3.9[157716]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801011.992386-182-82138987635235/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:33 np0005532048 python3.9[157866]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:43:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:43:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:34 np0005532048 python3.9[158020]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:34 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:34Z|00025|memory|INFO|15872 kB peak resident set size after 29.6 seconds
Nov 22 03:43:34 np0005532048 ovn_controller[152872]: 2025-11-22T08:43:34Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 22 03:43:34 np0005532048 podman[158021]: 2025-11-22 08:43:34.417890994 +0000 UTC m=+0.107678671 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 03:43:35 np0005532048 python3.9[158200]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:35 np0005532048 python3.9[158278]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:36 np0005532048 python3.9[158430]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:36 np0005532048 python3.9[158508]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:37 np0005532048 python3.9[158660]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:37 np0005532048 python3.9[158812]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:38 np0005532048 python3.9[158890]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:43:38 np0005532048 python3.9[159042]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:39 np0005532048 python3.9[159120]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:40 np0005532048 python3.9[159272]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:43:40 np0005532048 systemd[1]: Reloading.
Nov 22 03:43:40 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:43:40 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:43:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:41 np0005532048 python3.9[159462]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:41 np0005532048 python3.9[159540]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:42 np0005532048 python3.9[159692]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:42 np0005532048 python3.9[159770]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:43 np0005532048 python3.9[159922]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:43:43 np0005532048 systemd[1]: Reloading.
Nov 22 03:43:43 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:43:43 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:43:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:43:43 np0005532048 systemd[1]: Starting Create netns directory...
Nov 22 03:43:43 np0005532048 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 03:43:43 np0005532048 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 03:43:43 np0005532048 systemd[1]: Finished Create netns directory.
Nov 22 03:43:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:44 np0005532048 python3.9[160114]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:45 np0005532048 python3.9[160266]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:45 np0005532048 python3.9[160389]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801024.949354-333-165805137544575/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:46 np0005532048 python3.9[160541]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:47 np0005532048 python3.9[160693]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:48 np0005532048 python3.9[160816]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763801027.011657-358-81266675464800/.source.json _original_basename=.wvur9i31 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:48 np0005532048 python3.9[160968]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:43:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5541 writes, 23K keys, 5541 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5541 writes, 838 syncs, 6.61 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5541 writes, 23K keys, 5541 commit groups, 1.0 writes per commit group, ingest: 18.94 MB, 0.03 MB/s#012Interval WAL: 5541 writes, 838 syncs, 6.61 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 22 03:43:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:43:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:50 np0005532048 python3.9[161395]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 22 03:43:51 np0005532048 python3.9[161547]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:43:52
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'vms', 'volumes']
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:43:52 np0005532048 python3.9[161699]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:43:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:43:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:43:54 np0005532048 python3[161876]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 03:43:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:55 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:43:55 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 601.2 total, 600.0 interval#012Cumulative writes: 6597 writes, 28K keys, 6597 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6597 writes, 1129 syncs, 5.84 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6597 writes, 28K keys, 6597 commit groups, 1.0 writes per commit group, ingest: 19.88 MB, 0.03 MB/s#012Interval WAL: 6597 writes, 1129 syncs, 5.84 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 22 03:43:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:44:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:44:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 5491 writes, 23K keys, 5491 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5491 writes, 784 syncs, 7.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5491 writes, 23K keys, 5491 commit groups, 1.0 writes per commit group, ingest: 18.78 MB, 0.03 MB/s
Interval WAL: 5491 writes, 784 syncs, 7.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowd
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:02 np0005532048 ceph-mgr[75315]: [devicehealth INFO root] Check health
Nov 22 03:44:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:44:03 np0005532048 podman[161889]: 2025-11-22 08:44:03.791639153 +0000 UTC m=+9.554653463 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 03:44:04 np0005532048 podman[162016]: 2025-11-22 08:44:03.917494647 +0000 UTC m=+0.024196684 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 03:44:04 np0005532048 podman[162016]: 2025-11-22 08:44:04.04277976 +0000 UTC m=+0.149481777 container create b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, 
org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 03:44:04 np0005532048 python3[161876]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} 
--log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 03:44:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:05 np0005532048 podman[162178]: 2025-11-22 08:44:05.141988875 +0000 UTC m=+0.222322395 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 03:44:05 np0005532048 python3.9[162225]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:44:06 np0005532048 python3.9[162387]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:44:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:06 np0005532048 python3.9[162463]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:44:07 np0005532048 python3.9[162614]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763801046.6065052-446-54424650325208/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:44:07 np0005532048 python3.9[162690]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 03:44:07 np0005532048 systemd[1]: Reloading.
Nov 22 03:44:07 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:44:07 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:44:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:10 np0005532048 ceph-mds[101348]: mds.beacon.cephfs.compute-0.myffln missed beacon ack from the monitors
Nov 22 03:44:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:14 np0005532048 ceph-mds[101348]: mds.beacon.cephfs.compute-0.myffln missed beacon ack from the monitors
Nov 22 03:44:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:18 np0005532048 ceph-mds[101348]: mds.beacon.cephfs.compute-0.myffln missed beacon ack from the monitors
Nov 22 03:44:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 16.2375 seconds
Nov 22 03:44:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:44:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:21 np0005532048 python3.9[162800]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:44:21 np0005532048 systemd[1]: Reloading.
Nov 22 03:44:21 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:44:21 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:44:22 np0005532048 systemd[1]: Starting ovn_metadata_agent container...
Nov 22 03:44:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:44:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:44:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:44:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:44:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:44:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:44:24 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:44:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12583d679ebb4bfda7d6b13064e7614ce448d9ca166c7880a4af38e765f5e29/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12583d679ebb4bfda7d6b13064e7614ce448d9ca166c7880a4af38e765f5e29/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:24 np0005532048 systemd[1]: Started /usr/bin/podman healthcheck run b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c.
Nov 22 03:44:24 np0005532048 podman[162841]: 2025-11-22 08:44:24.535289058 +0000 UTC m=+2.330397684 container init b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: + sudo -E kolla_set_configs
Nov 22 03:44:24 np0005532048 podman[162841]: 2025-11-22 08:44:24.56298788 +0000 UTC m=+2.358096496 container start b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 22 03:44:24 np0005532048 edpm-start-podman-container[162841]: ovn_metadata_agent
Nov 22 03:44:24 np0005532048 edpm-start-podman-container[162840]: Creating additional drop-in dependency for "ovn_metadata_agent" (b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c)
Nov 22 03:44:24 np0005532048 podman[162861]: 2025-11-22 08:44:24.766466845 +0000 UTC m=+0.190780198 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 03:44:24 np0005532048 systemd[1]: Reloading.
Nov 22 03:44:24 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:44:24 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: INFO:__main__:Validating config file
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: INFO:__main__:Copying service configuration files
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: INFO:__main__:Writing out command to execute
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: ++ cat /run_command
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: + CMD=neutron-ovn-metadata-agent
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: + ARGS=
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: + sudo kolla_copy_cacerts
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: + [[ ! -n '' ]]
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: + . kolla_extend_start
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: Running command: 'neutron-ovn-metadata-agent'
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: + umask 0022
Nov 22 03:44:24 np0005532048 ovn_metadata_agent[162856]: + exec neutron-ovn-metadata-agent
Nov 22 03:44:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:44:25 np0005532048 systemd[1]: Started ovn_metadata_agent container.
Nov 22 03:44:25 np0005532048 systemd[1]: session-48.scope: Deactivated successfully.
Nov 22 03:44:25 np0005532048 systemd[1]: session-48.scope: Consumed 56.259s CPU time.
Nov 22 03:44:25 np0005532048 systemd-logind[822]: Session 48 logged out. Waiting for processes to exit.
Nov 22 03:44:25 np0005532048 systemd-logind[822]: Removed session 48.
Nov 22 03:44:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.866 162862 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.866 162862 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.866 162862 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.867 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.867 162862 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.867 162862 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.867 162862 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.867 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.867 162862 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.868 162862 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.868 162862 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.868 162862 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.868 162862 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.868 162862 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.868 162862 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.868 162862 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.869 162862 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.869 162862 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.869 162862 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.869 162862 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.869 162862 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.869 162862 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.869 162862 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.870 162862 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.870 162862 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.870 162862 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.870 162862 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.870 162862 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.870 162862 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.870 162862 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.870 162862 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.871 162862 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.871 162862 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.871 162862 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.871 162862 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.871 162862 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.871 162862 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.872 162862 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.872 162862 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.872 162862 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.872 162862 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.872 162862 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.872 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.873 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.873 162862 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.873 162862 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.873 162862 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.873 162862 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.873 162862 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.873 162862 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.874 162862 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.874 162862 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.874 162862 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.874 162862 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.874 162862 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.874 162862 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.874 162862 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.875 162862 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.875 162862 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.875 162862 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.875 162862 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.875 162862 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.875 162862 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.875 162862 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.876 162862 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.876 162862 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.876 162862 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.876 162862 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.876 162862 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.876 162862 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.876 162862 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.877 162862 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.877 162862 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.877 162862 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.877 162862 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.877 162862 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.877 162862 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.877 162862 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.878 162862 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.878 162862 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.878 162862 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.878 162862 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.878 162862 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.878 162862 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.878 162862 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.879 162862 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.879 162862 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.879 162862 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.879 162862 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.879 162862 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.879 162862 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.879 162862 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.880 162862 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.880 162862 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.880 162862 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.880 162862 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.880 162862 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.880 162862 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.880 162862 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.880 162862 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.881 162862 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.881 162862 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.881 162862 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.881 162862 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.881 162862 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.881 162862 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.881 162862 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.881 162862 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.882 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.882 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.882 162862 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.882 162862 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.882 162862 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.882 162862 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.883 162862 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.883 162862 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.883 162862 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.883 162862 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.883 162862 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.883 162862 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.883 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.884 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.884 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.884 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.884 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.884 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.884 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.884 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.885 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.885 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.885 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.885 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.885 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.885 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.885 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.886 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.886 162862 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.886 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.886 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.886 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.886 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.886 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.887 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.887 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.887 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.887 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.887 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.887 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.887 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.888 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.888 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.888 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.888 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.888 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.888 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.888 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.889 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.889 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.889 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.889 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.889 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.889 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.889 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.890 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.890 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.890 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.890 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.890 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.890 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.890 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.891 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.891 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.891 162862 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.891 162862 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.891 162862 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.891 162862 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.891 162862 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.892 162862 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.892 162862 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.892 162862 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.892 162862 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.892 162862 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.892 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.892 162862 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.892 162862 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.893 162862 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.893 162862 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.893 162862 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.893 162862 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.893 162862 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.893 162862 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.893 162862 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.894 162862 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.894 162862 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.894 162862 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.894 162862 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.894 162862 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.894 162862 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.894 162862 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.895 162862 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.895 162862 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.895 162862 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.895 162862 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.895 162862 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.895 162862 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.895 162862 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.895 162862 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.896 162862 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.896 162862 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.896 162862 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.896 162862 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.896 162862 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.896 162862 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.896 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.897 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.897 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.897 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.897 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.897 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.897 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.897 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.898 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.898 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.898 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.898 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.898 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.898 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.898 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.899 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.899 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.899 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.899 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.899 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.899 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.899 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.899 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.900 162862 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.900 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.900 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.900 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.900 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.900 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.900 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.901 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.901 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.901 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.901 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.901 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.901 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.901 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.902 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.902 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.902 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.902 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.902 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.902 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.902 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.903 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.903 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.903 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.903 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.903 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.903 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.903 162862 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.904 162862 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.904 162862 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.904 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.904 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.904 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.904 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.904 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.905 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.905 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.905 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.905 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.905 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.905 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.905 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.906 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.906 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.906 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.906 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.906 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.906 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.906 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.907 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.907 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.907 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.907 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.907 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.907 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.907 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.908 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.908 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.908 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.908 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.908 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.908 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.908 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.909 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.909 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.909 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.909 162862 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.909 162862 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.918 162862 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.918 162862 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.918 162862 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.919 162862 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.919 162862 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Nov 22 03:44:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:27.931 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 26987bf4-0c95-4db6-9113-da9e4051262c (UUID: 26987bf4-0c95-4db6-9113-da9e4051262c) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.048 162862 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.048 162862 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.049 162862 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.049 162862 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.057 162862 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.064 162862 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.071 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '26987bf4-0c95-4db6-9113-da9e4051262c'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], external_ids={}, name=26987bf4-0c95-4db6-9113-da9e4051262c, nb_cfg_timestamp=1763800992780, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.073 162862 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fc8a7f4ceb0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.074 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.074 162862 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.074 162862 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.074 162862 INFO oslo_service.service [-] Starting 1 workers#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.080 162862 DEBUG oslo_service.service [-] Started child 162970 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.083 162970 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-16682179'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.084 162862 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpsiwtacgq/privsep.sock']#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.112 162970 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.113 162970 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.113 162970 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.117 162970 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.122 162970 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.129 162970 INFO eventlet.wsgi.server [-] (162970) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Nov 22 03:44:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:28 np0005532048 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.939 162862 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.940 162862 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpsiwtacgq/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.741 162975 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.798 162975 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.803 162975 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.803 162975 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162975#033[00m
Nov 22 03:44:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:28.943 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e69a2b-211b-4a0e-b53c-ce222d9e81a7]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 03:44:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:29.529 162975 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:44:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:29.530 162975 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:44:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:29.531 162975 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:44:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.155 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf6692f-79d3-4576-8633-10fd8bacd549]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.158 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, column=external_ids, values=({'neutron:ovn-metadata-id': '3d5de975-3991-5972-87fc-290daaa9fde5'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 03:44:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.568 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 03:44:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 03:44:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:44:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:44:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:44:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:44:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:44:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.806 162862 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.807 162862 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.807 162862 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.807 162862 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.807 162862 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.807 162862 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.808 162862 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.808 162862 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.808 162862 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.808 162862 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.808 162862 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.808 162862 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.808 162862 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.809 162862 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.809 162862 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.809 162862 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.809 162862 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.809 162862 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.810 162862 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.810 162862 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.810 162862 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.810 162862 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.810 162862 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.810 162862 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.810 162862 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.811 162862 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.811 162862 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.811 162862 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.811 162862 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.811 162862 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.811 162862 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.812 162862 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.812 162862 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.812 162862 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.812 162862 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.812 162862 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.812 162862 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.813 162862 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.813 162862 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.813 162862 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.813 162862 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.813 162862 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.813 162862 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.813 162862 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.814 162862 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.814 162862 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.814 162862 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.814 162862 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.814 162862 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.814 162862 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.815 162862 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.815 162862 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.815 162862 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.815 162862 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.815 162862 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.815 162862 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.815 162862 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.815 162862 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.816 162862 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.816 162862 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.816 162862 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.816 162862 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.816 162862 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.816 162862 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.816 162862 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.816 162862 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.817 162862 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.817 162862 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.817 162862 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.817 162862 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.817 162862 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.817 162862 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.817 162862 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.817 162862 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.817 162862 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.817 162862 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.818 162862 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.818 162862 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.818 162862 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.818 162862 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.818 162862 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.818 162862 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.818 162862 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.818 162862 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.819 162862 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.819 162862 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.819 162862 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.819 162862 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.819 162862 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.819 162862 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.819 162862 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.819 162862 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.820 162862 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.820 162862 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.820 162862 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.820 162862 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.820 162862 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.820 162862 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.820 162862 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.820 162862 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.820 162862 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.820 162862 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.821 162862 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.821 162862 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.821 162862 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.821 162862 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.821 162862 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.821 162862 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.821 162862 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.822 162862 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.822 162862 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.822 162862 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.822 162862 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.822 162862 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.822 162862 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.822 162862 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.822 162862 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.822 162862 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.823 162862 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.823 162862 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.823 162862 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.823 162862 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.823 162862 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.823 162862 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.823 162862 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.823 162862 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.824 162862 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.824 162862 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.824 162862 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.824 162862 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.824 162862 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.824 162862 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.824 162862 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.824 162862 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.824 162862 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.824 162862 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.825 162862 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.825 162862 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.825 162862 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.825 162862 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.825 162862 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.825 162862 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.825 162862 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.825 162862 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.825 162862 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.826 162862 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.826 162862 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.826 162862 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.826 162862 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.826 162862 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.826 162862 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.826 162862 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.826 162862 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.826 162862 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.826 162862 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.827 162862 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.827 162862 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.827 162862 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.827 162862 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.827 162862 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.827 162862 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.827 162862 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.827 162862 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.827 162862 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.827 162862 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.828 162862 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.828 162862 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.828 162862 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.828 162862 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.828 162862 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.828 162862 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.828 162862 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.828 162862 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.829 162862 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.829 162862 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.829 162862 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.829 162862 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.829 162862 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.829 162862 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.829 162862 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.829 162862 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.829 162862 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.830 162862 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.830 162862 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.830 162862 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.830 162862 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.830 162862 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.830 162862 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.830 162862 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.830 162862 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.830 162862 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.831 162862 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.831 162862 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.831 162862 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.831 162862 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.831 162862 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.831 162862 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.831 162862 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.831 162862 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.831 162862 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.832 162862 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.832 162862 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.832 162862 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.832 162862 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.832 162862 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.832 162862 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.832 162862 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.832 162862 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.832 162862 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.832 162862 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.832 162862 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.833 162862 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.833 162862 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.833 162862 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.833 162862 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.833 162862 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.833 162862 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.833 162862 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.833 162862 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.833 162862 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.834 162862 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.834 162862 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.834 162862 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.834 162862 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.834 162862 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.834 162862 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.834 162862 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.834 162862 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.834 162862 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.834 162862 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.835 162862 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.835 162862 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.835 162862 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.835 162862 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.835 162862 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.835 162862 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.835 162862 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.835 162862 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.836 162862 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.836 162862 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.836 162862 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.836 162862 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.836 162862 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.836 162862 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.836 162862 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.837 162862 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.837 162862 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.837 162862 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.837 162862 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.837 162862 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.837 162862 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.837 162862 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.837 162862 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.837 162862 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.837 162862 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.838 162862 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.838 162862 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.838 162862 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.838 162862 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.838 162862 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.838 162862 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.838 162862 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.838 162862 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.838 162862 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.838 162862 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.839 162862 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.839 162862 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.839 162862 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.839 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.839 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.839 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.839 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.839 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.839 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.840 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.840 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.840 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.840 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.840 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.840 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.840 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.840 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.840 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.841 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.841 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.841 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.841 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.841 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.841 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.841 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.841 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.841 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.842 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.842 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.842 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.842 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.842 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.842 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.842 162862 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.842 162862 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.843 162862 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.843 162862 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.843 162862 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:44:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:44:30.843 162862 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 22 03:44:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:44:32 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 6e3d5757-aa5f-4adf-ac1c-9aa75ec45fd8 does not exist
Nov 22 03:44:32 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev da67e0f1-f387-4b7f-ba1e-89b2bf695722 does not exist
Nov 22 03:44:32 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev e1775ce9-dc1c-4a15-a6a8-5ac041081bdf does not exist
Nov 22 03:44:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:44:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:44:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:44:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:44:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:44:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:44:33 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:44:33 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:44:33 np0005532048 podman[163250]: 2025-11-22 08:44:33.719880726 +0000 UTC m=+0.026539811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:44:34 np0005532048 systemd-logind[822]: New session 49 of user zuul.
Nov 22 03:44:34 np0005532048 systemd[1]: Started Session 49 of User zuul.
Nov 22 03:44:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:34 np0005532048 podman[163250]: 2025-11-22 08:44:34.90940694 +0000 UTC m=+1.216065995 container create cbfed0ed34fd3cbf0c4efc317f49e819a0af1e167f3dfa1db95843b9fdf48f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 03:44:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:44:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:44:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:44:35 np0005532048 systemd[1]: Started libpod-conmon-cbfed0ed34fd3cbf0c4efc317f49e819a0af1e167f3dfa1db95843b9fdf48f8f.scope.
Nov 22 03:44:35 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:44:35 np0005532048 podman[163250]: 2025-11-22 08:44:35.293680824 +0000 UTC m=+1.600339899 container init cbfed0ed34fd3cbf0c4efc317f49e819a0af1e167f3dfa1db95843b9fdf48f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:44:35 np0005532048 podman[163250]: 2025-11-22 08:44:35.302513558 +0000 UTC m=+1.609172613 container start cbfed0ed34fd3cbf0c4efc317f49e819a0af1e167f3dfa1db95843b9fdf48f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:44:35 np0005532048 systemd[1]: libpod-cbfed0ed34fd3cbf0c4efc317f49e819a0af1e167f3dfa1db95843b9fdf48f8f.scope: Deactivated successfully.
Nov 22 03:44:35 np0005532048 vigilant_joliot[163422]: 167 167
Nov 22 03:44:35 np0005532048 conmon[163422]: conmon cbfed0ed34fd3cbf0c4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cbfed0ed34fd3cbf0c4efc317f49e819a0af1e167f3dfa1db95843b9fdf48f8f.scope/container/memory.events
Nov 22 03:44:35 np0005532048 python3.9[163418]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:44:35 np0005532048 podman[163250]: 2025-11-22 08:44:35.554781207 +0000 UTC m=+1.861440252 container attach cbfed0ed34fd3cbf0c4efc317f49e819a0af1e167f3dfa1db95843b9fdf48f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 03:44:35 np0005532048 podman[163250]: 2025-11-22 08:44:35.555897649 +0000 UTC m=+1.862556734 container died cbfed0ed34fd3cbf0c4efc317f49e819a0af1e167f3dfa1db95843b9fdf48f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:44:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:36 np0005532048 systemd[1]: var-lib-containers-storage-overlay-caff9731a31818aec05397b73da799d83f7a9accba5ab9a0ba2db48dd19bab4c-merged.mount: Deactivated successfully.
Nov 22 03:44:36 np0005532048 python3.9[163603]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:44:37 np0005532048 podman[163250]: 2025-11-22 08:44:37.113452179 +0000 UTC m=+3.420111234 container remove cbfed0ed34fd3cbf0c4efc317f49e819a0af1e167f3dfa1db95843b9fdf48f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:44:37 np0005532048 systemd[1]: libpod-conmon-cbfed0ed34fd3cbf0c4efc317f49e819a0af1e167f3dfa1db95843b9fdf48f8f.scope: Deactivated successfully.
Nov 22 03:44:37 np0005532048 podman[163426]: 2025-11-22 08:44:37.244995745 +0000 UTC m=+1.930939583 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:44:37 np0005532048 podman[163666]: 2025-11-22 08:44:37.271498554 +0000 UTC m=+0.023374419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:44:37 np0005532048 podman[163666]: 2025-11-22 08:44:37.646561269 +0000 UTC m=+0.398437114 container create a37b29eff6b008b2bf0ce3a5667546636fbc582f45a84d88915a6cec73794e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 03:44:37 np0005532048 systemd[1]: Started libpod-conmon-a37b29eff6b008b2bf0ce3a5667546636fbc582f45a84d88915a6cec73794e32.scope.
Nov 22 03:44:37 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:44:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da47d56656bd615b1c028e974eda506c2cbb37583ae9f7fa8cbfe42601d0b56a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da47d56656bd615b1c028e974eda506c2cbb37583ae9f7fa8cbfe42601d0b56a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da47d56656bd615b1c028e974eda506c2cbb37583ae9f7fa8cbfe42601d0b56a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da47d56656bd615b1c028e974eda506c2cbb37583ae9f7fa8cbfe42601d0b56a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da47d56656bd615b1c028e974eda506c2cbb37583ae9f7fa8cbfe42601d0b56a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:38 np0005532048 podman[163666]: 2025-11-22 08:44:38.062774249 +0000 UTC m=+0.814650114 container init a37b29eff6b008b2bf0ce3a5667546636fbc582f45a84d88915a6cec73794e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:44:38 np0005532048 podman[163666]: 2025-11-22 08:44:38.07195443 +0000 UTC m=+0.823830275 container start a37b29eff6b008b2bf0ce3a5667546636fbc582f45a84d88915a6cec73794e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:44:38 np0005532048 podman[163666]: 2025-11-22 08:44:38.126579059 +0000 UTC m=+0.878454934 container attach a37b29eff6b008b2bf0ce3a5667546636fbc582f45a84d88915a6cec73794e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 03:44:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:38 np0005532048 python3.9[163816]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 03:44:38 np0005532048 systemd[1]: Reloading.
Nov 22 03:44:38 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:44:38 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:44:39 np0005532048 affectionate_dubinsky[163757]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:44:39 np0005532048 affectionate_dubinsky[163757]: --> relative data size: 1.0
Nov 22 03:44:39 np0005532048 affectionate_dubinsky[163757]: --> All data devices are unavailable
Nov 22 03:44:39 np0005532048 systemd[1]: libpod-a37b29eff6b008b2bf0ce3a5667546636fbc582f45a84d88915a6cec73794e32.scope: Deactivated successfully.
Nov 22 03:44:39 np0005532048 systemd[1]: libpod-a37b29eff6b008b2bf0ce3a5667546636fbc582f45a84d88915a6cec73794e32.scope: Consumed 1.061s CPU time.
Nov 22 03:44:39 np0005532048 podman[163666]: 2025-11-22 08:44:39.21236944 +0000 UTC m=+1.964245295 container died a37b29eff6b008b2bf0ce3a5667546636fbc582f45a84d88915a6cec73794e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:44:39 np0005532048 systemd[1]: var-lib-containers-storage-overlay-da47d56656bd615b1c028e974eda506c2cbb37583ae9f7fa8cbfe42601d0b56a-merged.mount: Deactivated successfully.
Nov 22 03:44:39 np0005532048 podman[163666]: 2025-11-22 08:44:39.645533263 +0000 UTC m=+2.397409108 container remove a37b29eff6b008b2bf0ce3a5667546636fbc582f45a84d88915a6cec73794e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:44:39 np0005532048 systemd[1]: libpod-conmon-a37b29eff6b008b2bf0ce3a5667546636fbc582f45a84d88915a6cec73794e32.scope: Deactivated successfully.
Nov 22 03:44:39 np0005532048 python3.9[164038]: ansible-ansible.builtin.service_facts Invoked
Nov 22 03:44:39 np0005532048 network[164148]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 03:44:39 np0005532048 network[164152]: 'network-scripts' will be removed from distribution in near future.
Nov 22 03:44:39 np0005532048 network[164155]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 03:44:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:44:40 np0005532048 podman[164203]: 2025-11-22 08:44:40.216557734 +0000 UTC m=+0.045150285 container create c7212d5d11ce55db4da7ba1196c7a76cb14f621a61ac503f60f383f5c7b531bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackburn, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:44:40 np0005532048 podman[164203]: 2025-11-22 08:44:40.196763176 +0000 UTC m=+0.025355747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:44:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:40 np0005532048 systemd[1]: Started libpod-conmon-c7212d5d11ce55db4da7ba1196c7a76cb14f621a61ac503f60f383f5c7b531bd.scope.
Nov 22 03:44:40 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:44:40 np0005532048 podman[164203]: 2025-11-22 08:44:40.684971887 +0000 UTC m=+0.513564458 container init c7212d5d11ce55db4da7ba1196c7a76cb14f621a61ac503f60f383f5c7b531bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackburn, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:44:40 np0005532048 podman[164203]: 2025-11-22 08:44:40.694412942 +0000 UTC m=+0.523005493 container start c7212d5d11ce55db4da7ba1196c7a76cb14f621a61ac503f60f383f5c7b531bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackburn, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 03:44:40 np0005532048 elated_blackburn[164221]: 167 167
Nov 22 03:44:40 np0005532048 systemd[1]: libpod-c7212d5d11ce55db4da7ba1196c7a76cb14f621a61ac503f60f383f5c7b531bd.scope: Deactivated successfully.
Nov 22 03:44:40 np0005532048 podman[164203]: 2025-11-22 08:44:40.745287618 +0000 UTC m=+0.573880189 container attach c7212d5d11ce55db4da7ba1196c7a76cb14f621a61ac503f60f383f5c7b531bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 03:44:40 np0005532048 podman[164203]: 2025-11-22 08:44:40.745847598 +0000 UTC m=+0.574440149 container died c7212d5d11ce55db4da7ba1196c7a76cb14f621a61ac503f60f383f5c7b531bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:44:41 np0005532048 systemd[1]: var-lib-containers-storage-overlay-705156bc1785efc902b398efcdc5d86d79771c96caebed6267aeb1625d8587ba-merged.mount: Deactivated successfully.
Nov 22 03:44:41 np0005532048 podman[164203]: 2025-11-22 08:44:41.139846914 +0000 UTC m=+0.968439465 container remove c7212d5d11ce55db4da7ba1196c7a76cb14f621a61ac503f60f383f5c7b531bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 22 03:44:41 np0005532048 systemd[1]: libpod-conmon-c7212d5d11ce55db4da7ba1196c7a76cb14f621a61ac503f60f383f5c7b531bd.scope: Deactivated successfully.
Nov 22 03:44:41 np0005532048 podman[164283]: 2025-11-22 08:44:41.319593274 +0000 UTC m=+0.056426416 container create e44fc3f5a168648a1c45948915f7aa859857ee0ec527e7420ec27feb09991216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:44:41 np0005532048 podman[164283]: 2025-11-22 08:44:41.289389783 +0000 UTC m=+0.026222945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:44:41 np0005532048 systemd[1]: Started libpod-conmon-e44fc3f5a168648a1c45948915f7aa859857ee0ec527e7420ec27feb09991216.scope.
Nov 22 03:44:41 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:44:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca29f656640592b0a33a631f3ca60cddd860181e5a7568c8bbb912ce8450f06f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca29f656640592b0a33a631f3ca60cddd860181e5a7568c8bbb912ce8450f06f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca29f656640592b0a33a631f3ca60cddd860181e5a7568c8bbb912ce8450f06f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca29f656640592b0a33a631f3ca60cddd860181e5a7568c8bbb912ce8450f06f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:41 np0005532048 podman[164283]: 2025-11-22 08:44:41.442008941 +0000 UTC m=+0.178842123 container init e44fc3f5a168648a1c45948915f7aa859857ee0ec527e7420ec27feb09991216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:44:41 np0005532048 podman[164283]: 2025-11-22 08:44:41.45116975 +0000 UTC m=+0.188002892 container start e44fc3f5a168648a1c45948915f7aa859857ee0ec527e7420ec27feb09991216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 03:44:41 np0005532048 podman[164283]: 2025-11-22 08:44:41.507765989 +0000 UTC m=+0.244599131 container attach e44fc3f5a168648a1c45948915f7aa859857ee0ec527e7420ec27feb09991216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]: {
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:    "0": [
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:        {
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "devices": [
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "/dev/loop3"
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            ],
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "lv_name": "ceph_lv0",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "lv_size": "21470642176",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "name": "ceph_lv0",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "tags": {
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.cluster_name": "ceph",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.crush_device_class": "",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.encrypted": "0",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.osd_id": "0",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.type": "block",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.vdo": "0"
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            },
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "type": "block",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "vg_name": "ceph_vg0"
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:        }
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:    ],
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:    "1": [
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:        {
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "devices": [
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "/dev/loop4"
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            ],
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "lv_name": "ceph_lv1",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "lv_size": "21470642176",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "name": "ceph_lv1",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "tags": {
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.cluster_name": "ceph",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.crush_device_class": "",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.encrypted": "0",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.osd_id": "1",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.type": "block",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.vdo": "0"
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            },
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "type": "block",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "vg_name": "ceph_vg1"
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:        }
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:    ],
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:    "2": [
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:        {
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "devices": [
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "/dev/loop5"
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            ],
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "lv_name": "ceph_lv2",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "lv_size": "21470642176",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "name": "ceph_lv2",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "tags": {
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.cluster_name": "ceph",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.crush_device_class": "",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.encrypted": "0",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.osd_id": "2",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.type": "block",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:                "ceph.vdo": "0"
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            },
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "type": "block",
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:            "vg_name": "ceph_vg2"
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:        }
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]:    ]
Nov 22 03:44:42 np0005532048 compassionate_fermi[164306]: }
Nov 22 03:44:42 np0005532048 systemd[1]: libpod-e44fc3f5a168648a1c45948915f7aa859857ee0ec527e7420ec27feb09991216.scope: Deactivated successfully.
Nov 22 03:44:42 np0005532048 podman[164283]: 2025-11-22 08:44:42.295280329 +0000 UTC m=+1.032113481 container died e44fc3f5a168648a1c45948915f7aa859857ee0ec527e7420ec27feb09991216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:44:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:42 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ca29f656640592b0a33a631f3ca60cddd860181e5a7568c8bbb912ce8450f06f-merged.mount: Deactivated successfully.
Nov 22 03:44:42 np0005532048 podman[164283]: 2025-11-22 08:44:42.515192675 +0000 UTC m=+1.252025817 container remove e44fc3f5a168648a1c45948915f7aa859857ee0ec527e7420ec27feb09991216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_fermi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:44:42 np0005532048 systemd[1]: libpod-conmon-e44fc3f5a168648a1c45948915f7aa859857ee0ec527e7420ec27feb09991216.scope: Deactivated successfully.
Nov 22 03:44:43 np0005532048 podman[164680]: 2025-11-22 08:44:43.186596223 +0000 UTC m=+0.073958249 container create 69312f328cd0aa637a7d7a4dce108634af6d357952ae2c25ddef727734eb71b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:44:43 np0005532048 systemd[1]: Started libpod-conmon-69312f328cd0aa637a7d7a4dce108634af6d357952ae2c25ddef727734eb71b2.scope.
Nov 22 03:44:43 np0005532048 podman[164680]: 2025-11-22 08:44:43.135617044 +0000 UTC m=+0.022979080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:44:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:44:43 np0005532048 podman[164680]: 2025-11-22 08:44:43.276998033 +0000 UTC m=+0.164360069 container init 69312f328cd0aa637a7d7a4dce108634af6d357952ae2c25ddef727734eb71b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:44:43 np0005532048 podman[164680]: 2025-11-22 08:44:43.284959069 +0000 UTC m=+0.172321085 container start 69312f328cd0aa637a7d7a4dce108634af6d357952ae2c25ddef727734eb71b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 03:44:43 np0005532048 podman[164680]: 2025-11-22 08:44:43.289215742 +0000 UTC m=+0.176577758 container attach 69312f328cd0aa637a7d7a4dce108634af6d357952ae2c25ddef727734eb71b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:44:43 np0005532048 sleepy_lalande[164697]: 167 167
Nov 22 03:44:43 np0005532048 systemd[1]: libpod-69312f328cd0aa637a7d7a4dce108634af6d357952ae2c25ddef727734eb71b2.scope: Deactivated successfully.
Nov 22 03:44:43 np0005532048 podman[164680]: 2025-11-22 08:44:43.293421864 +0000 UTC m=+0.180783910 container died 69312f328cd0aa637a7d7a4dce108634af6d357952ae2c25ddef727734eb71b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:44:43 np0005532048 systemd[1]: var-lib-containers-storage-overlay-02ea0d7a6da86c5101bb980d538f5d39e74e849cc3f00e5d92b7e426fcc48872-merged.mount: Deactivated successfully.
Nov 22 03:44:43 np0005532048 podman[164680]: 2025-11-22 08:44:43.346241649 +0000 UTC m=+0.233603665 container remove 69312f328cd0aa637a7d7a4dce108634af6d357952ae2c25ddef727734eb71b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lalande, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:44:43 np0005532048 systemd[1]: libpod-conmon-69312f328cd0aa637a7d7a4dce108634af6d357952ae2c25ddef727734eb71b2.scope: Deactivated successfully.
Nov 22 03:44:43 np0005532048 python3.9[164669]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:44:43 np0005532048 podman[164724]: 2025-11-22 08:44:43.514749508 +0000 UTC m=+0.045676665 container create f0e14e96fbce76a5b177441680bba9f4fa7aa984451b5832f4bcbaed033d8992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermi, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:44:43 np0005532048 systemd[1]: Started libpod-conmon-f0e14e96fbce76a5b177441680bba9f4fa7aa984451b5832f4bcbaed033d8992.scope.
Nov 22 03:44:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:44:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/827668efbccb4cda072e09640f9ddde06c91ee066f52960e078b74cb2cfd176c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/827668efbccb4cda072e09640f9ddde06c91ee066f52960e078b74cb2cfd176c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/827668efbccb4cda072e09640f9ddde06c91ee066f52960e078b74cb2cfd176c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:43 np0005532048 podman[164724]: 2025-11-22 08:44:43.495066713 +0000 UTC m=+0.025993880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:44:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/827668efbccb4cda072e09640f9ddde06c91ee066f52960e078b74cb2cfd176c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:43 np0005532048 podman[164724]: 2025-11-22 08:44:43.606407463 +0000 UTC m=+0.137334640 container init f0e14e96fbce76a5b177441680bba9f4fa7aa984451b5832f4bcbaed033d8992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:44:43 np0005532048 podman[164724]: 2025-11-22 08:44:43.612734217 +0000 UTC m=+0.143661374 container start f0e14e96fbce76a5b177441680bba9f4fa7aa984451b5832f4bcbaed033d8992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:44:43 np0005532048 podman[164724]: 2025-11-22 08:44:43.617395378 +0000 UTC m=+0.148322535 container attach f0e14e96fbce76a5b177441680bba9f4fa7aa984451b5832f4bcbaed033d8992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:44:44 np0005532048 python3.9[164896]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:44:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:44 np0005532048 nice_fermi[164764]: {
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:        "osd_id": 1,
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:        "type": "bluestore"
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:    },
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:        "osd_id": 0,
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:        "type": "bluestore"
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:    },
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:        "osd_id": 2,
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:        "type": "bluestore"
Nov 22 03:44:44 np0005532048 nice_fermi[164764]:    }
Nov 22 03:44:44 np0005532048 nice_fermi[164764]: }
Nov 22 03:44:44 np0005532048 systemd[1]: libpod-f0e14e96fbce76a5b177441680bba9f4fa7aa984451b5832f4bcbaed033d8992.scope: Deactivated successfully.
Nov 22 03:44:44 np0005532048 systemd[1]: libpod-f0e14e96fbce76a5b177441680bba9f4fa7aa984451b5832f4bcbaed033d8992.scope: Consumed 1.040s CPU time.
Nov 22 03:44:44 np0005532048 podman[164724]: 2025-11-22 08:44:44.650122071 +0000 UTC m=+1.181049228 container died f0e14e96fbce76a5b177441680bba9f4fa7aa984451b5832f4bcbaed033d8992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 22 03:44:44 np0005532048 systemd[1]: var-lib-containers-storage-overlay-827668efbccb4cda072e09640f9ddde06c91ee066f52960e078b74cb2cfd176c-merged.mount: Deactivated successfully.
Nov 22 03:44:44 np0005532048 podman[164724]: 2025-11-22 08:44:44.721880886 +0000 UTC m=+1.252808043 container remove f0e14e96fbce76a5b177441680bba9f4fa7aa984451b5832f4bcbaed033d8992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 22 03:44:44 np0005532048 systemd[1]: libpod-conmon-f0e14e96fbce76a5b177441680bba9f4fa7aa984451b5832f4bcbaed033d8992.scope: Deactivated successfully.
Nov 22 03:44:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:44:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:44:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:44:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:44:44 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 2ced5eea-64c4-42ac-84c0-694eebc02487 does not exist
Nov 22 03:44:44 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev fb785199-c847-4543-ade3-e5646fe7f6a0 does not exist
Nov 22 03:44:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:44:44 np0005532048 python3.9[165084]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:44:45 np0005532048 python3.9[165292]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:44:45 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:44:45 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:44:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:46 np0005532048 python3.9[165445]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:44:47 np0005532048 python3.9[165598]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:44:48 np0005532048 python3.9[165751]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:44:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:49 np0005532048 python3.9[165904]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:44:49 np0005532048 python3.9[166056]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:44:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:44:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:50 np0005532048 python3.9[166208]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:44:51 np0005532048 python3.9[166360]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:44:51 np0005532048 python3.9[166512]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:44:52
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['images', '.rgw.root', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'default.rgw.control']
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:52 np0005532048 python3.9[166664]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:44:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:44:53 np0005532048 python3.9[166816]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:44:53 np0005532048 python3.9[166968]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:44:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:54 np0005532048 python3.9[167120]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:44:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:44:54 np0005532048 python3.9[167272]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:44:55 np0005532048 podman[167372]: 2025-11-22 08:44:55.389279266 +0000 UTC m=+0.065854058 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 03:44:55 np0005532048 python3.9[167443]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:44:56 np0005532048 python3.9[167596]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:44:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:56 np0005532048 python3.9[167748]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:44:57 np0005532048 python3.9[167900]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:44:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:59 np0005532048 python3.9[168052]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:44:59 np0005532048 python3.9[168204]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 03:44:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:45:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:00 np0005532048 python3.9[168356]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 03:45:00 np0005532048 systemd[1]: Reloading.
Nov 22 03:45:00 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:45:00 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:45:01 np0005532048 python3.9[168543]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:45:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:03 np0005532048 python3.9[168696]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:45:04 np0005532048 python3.9[168849]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:45:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:04 np0005532048 python3.9[169002]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:45:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:45:05 np0005532048 python3.9[169155]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:45:06 np0005532048 python3.9[169308]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:45:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:06 np0005532048 python3.9[169461]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:45:07 np0005532048 podman[169539]: 2025-11-22 08:45:07.410521236 +0000 UTC m=+0.097743860 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 03:45:07 np0005532048 python3.9[169642]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 22 03:45:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:08 np0005532048 python3.9[169795]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 03:45:09 np0005532048 python3.9[169953]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 03:45:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:45:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:10 np0005532048 python3.9[170113]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:45:11 np0005532048 python3.9[170197]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:45:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:45:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:45:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 0 B/s wr, 15 op/s
Nov 22 03:45:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:45:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:45:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:45:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:45:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:45:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:45:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 55 op/s
Nov 22 03:45:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:45:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 55 op/s
Nov 22 03:45:26 np0005532048 podman[170383]: 2025-11-22 08:45:26.371508957 +0000 UTC m=+0.066736827 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 03:45:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:45:27.920 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:45:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:45:27.921 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:45:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:45:27.921 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:45:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 03:45:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:45:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 03:45:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 03:45:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Nov 22 03:45:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:45:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 22 03:45:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 22 03:45:38 np0005532048 podman[170409]: 2025-11-22 08:45:38.407637654 +0000 UTC m=+0.099629227 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:45:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:45:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:45:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:48 np0005532048 podman[170608]: 2025-11-22 08:45:48.071222793 +0000 UTC m=+2.373745936 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:45:48 np0005532048 podman[170608]: 2025-11-22 08:45:48.26991482 +0000 UTC m=+2.572437943 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:45:48 np0005532048 kernel: SELinux:  Converting 2769 SID table entries...
Nov 22 03:45:48 np0005532048 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 03:45:48 np0005532048 kernel: SELinux:  policy capability open_perms=1
Nov 22 03:45:48 np0005532048 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 03:45:48 np0005532048 kernel: SELinux:  policy capability always_check_network=0
Nov 22 03:45:48 np0005532048 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 03:45:48 np0005532048 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 03:45:48 np0005532048 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 03:45:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:45:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:45:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:45:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:45:49 np0005532048 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 22 03:45:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:45:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:45:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:45:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:45:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:45:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:45:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:45:50 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev e79bc877-1fe5-4d1e-89d5-2813791841b9 does not exist
Nov 22 03:45:50 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 09d8f905-6020-497e-ba2d-87111dcc81eb does not exist
Nov 22 03:45:50 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 636077ea-5130-4348-a77b-21dd64f9fdfb does not exist
Nov 22 03:45:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:45:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:45:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:45:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:45:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:45:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:45:50 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:45:50 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:45:50 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:45:50 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:45:50 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:45:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:50 np0005532048 podman[171050]: 2025-11-22 08:45:50.663368347 +0000 UTC m=+0.048685475 container create d64f179ef9eec28a21fa65516146d6ae8cacef3496b688964f6f162a226eeee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_benz, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:45:50 np0005532048 systemd[1]: Started libpod-conmon-d64f179ef9eec28a21fa65516146d6ae8cacef3496b688964f6f162a226eeee3.scope.
Nov 22 03:45:50 np0005532048 podman[171050]: 2025-11-22 08:45:50.636584939 +0000 UTC m=+0.021902077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:45:50 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:45:50 np0005532048 podman[171050]: 2025-11-22 08:45:50.789771711 +0000 UTC m=+0.175088939 container init d64f179ef9eec28a21fa65516146d6ae8cacef3496b688964f6f162a226eeee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:45:50 np0005532048 podman[171050]: 2025-11-22 08:45:50.805424402 +0000 UTC m=+0.190741530 container start d64f179ef9eec28a21fa65516146d6ae8cacef3496b688964f6f162a226eeee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_benz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:45:50 np0005532048 jolly_benz[171066]: 167 167
Nov 22 03:45:50 np0005532048 systemd[1]: libpod-d64f179ef9eec28a21fa65516146d6ae8cacef3496b688964f6f162a226eeee3.scope: Deactivated successfully.
Nov 22 03:45:50 np0005532048 podman[171050]: 2025-11-22 08:45:50.825431651 +0000 UTC m=+0.210748879 container attach d64f179ef9eec28a21fa65516146d6ae8cacef3496b688964f6f162a226eeee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:45:50 np0005532048 podman[171050]: 2025-11-22 08:45:50.828110708 +0000 UTC m=+0.213427846 container died d64f179ef9eec28a21fa65516146d6ae8cacef3496b688964f6f162a226eeee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:45:50 np0005532048 systemd[1]: var-lib-containers-storage-overlay-585fd3597bb6df50e04c17b21b52adc0bfa6101ed5cce22df68d457ef4cc35f9-merged.mount: Deactivated successfully.
Nov 22 03:45:51 np0005532048 podman[171050]: 2025-11-22 08:45:51.05344779 +0000 UTC m=+0.438764958 container remove d64f179ef9eec28a21fa65516146d6ae8cacef3496b688964f6f162a226eeee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_benz, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:45:51 np0005532048 systemd[1]: libpod-conmon-d64f179ef9eec28a21fa65516146d6ae8cacef3496b688964f6f162a226eeee3.scope: Deactivated successfully.
Nov 22 03:45:51 np0005532048 podman[171091]: 2025-11-22 08:45:51.195134855 +0000 UTC m=+0.026096812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:45:51 np0005532048 podman[171091]: 2025-11-22 08:45:51.308528094 +0000 UTC m=+0.139490051 container create 364239c6a669cbad2f9ef34476e0a5e226f9cfc9128ab5c4ba25a3eb12046472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:45:51 np0005532048 systemd[1]: Started libpod-conmon-364239c6a669cbad2f9ef34476e0a5e226f9cfc9128ab5c4ba25a3eb12046472.scope.
Nov 22 03:45:51 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:45:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a31d18a9fb017cc787d6b846077d115695741abd3f7d32c82fdf5accad1a94af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a31d18a9fb017cc787d6b846077d115695741abd3f7d32c82fdf5accad1a94af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a31d18a9fb017cc787d6b846077d115695741abd3f7d32c82fdf5accad1a94af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a31d18a9fb017cc787d6b846077d115695741abd3f7d32c82fdf5accad1a94af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a31d18a9fb017cc787d6b846077d115695741abd3f7d32c82fdf5accad1a94af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:51 np0005532048 podman[171091]: 2025-11-22 08:45:51.717416156 +0000 UTC m=+0.548378113 container init 364239c6a669cbad2f9ef34476e0a5e226f9cfc9128ab5c4ba25a3eb12046472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 03:45:51 np0005532048 podman[171091]: 2025-11-22 08:45:51.726607136 +0000 UTC m=+0.557569073 container start 364239c6a669cbad2f9ef34476e0a5e226f9cfc9128ab5c4ba25a3eb12046472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_benz, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:45:51 np0005532048 podman[171091]: 2025-11-22 08:45:51.758399439 +0000 UTC m=+0.589361376 container attach 364239c6a669cbad2f9ef34476e0a5e226f9cfc9128ab5c4ba25a3eb12046472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:45:52
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'volumes']
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:45:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:45:53 np0005532048 xenodochial_benz[171108]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:45:53 np0005532048 xenodochial_benz[171108]: --> relative data size: 1.0
Nov 22 03:45:53 np0005532048 xenodochial_benz[171108]: --> All data devices are unavailable
Nov 22 03:45:53 np0005532048 systemd[1]: libpod-364239c6a669cbad2f9ef34476e0a5e226f9cfc9128ab5c4ba25a3eb12046472.scope: Deactivated successfully.
Nov 22 03:45:53 np0005532048 podman[171091]: 2025-11-22 08:45:53.172358617 +0000 UTC m=+2.003320564 container died 364239c6a669cbad2f9ef34476e0a5e226f9cfc9128ab5c4ba25a3eb12046472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 03:45:53 np0005532048 systemd[1]: libpod-364239c6a669cbad2f9ef34476e0a5e226f9cfc9128ab5c4ba25a3eb12046472.scope: Consumed 1.187s CPU time.
Nov 22 03:45:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a31d18a9fb017cc787d6b846077d115695741abd3f7d32c82fdf5accad1a94af-merged.mount: Deactivated successfully.
Nov 22 03:45:53 np0005532048 podman[171091]: 2025-11-22 08:45:53.266056055 +0000 UTC m=+2.097017992 container remove 364239c6a669cbad2f9ef34476e0a5e226f9cfc9128ab5c4ba25a3eb12046472 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:45:53 np0005532048 systemd[1]: libpod-conmon-364239c6a669cbad2f9ef34476e0a5e226f9cfc9128ab5c4ba25a3eb12046472.scope: Deactivated successfully.
Nov 22 03:45:53 np0005532048 podman[171289]: 2025-11-22 08:45:53.88308218 +0000 UTC m=+0.046843389 container create 2e1ac8b3ed03a84dae7076dca533efdee58d228e791a5c4e87fed67389803e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bose, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 03:45:53 np0005532048 systemd[1]: Started libpod-conmon-2e1ac8b3ed03a84dae7076dca533efdee58d228e791a5c4e87fed67389803e4d.scope.
Nov 22 03:45:53 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:45:53 np0005532048 podman[171289]: 2025-11-22 08:45:53.861570504 +0000 UTC m=+0.025331733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:45:53 np0005532048 podman[171289]: 2025-11-22 08:45:53.970628495 +0000 UTC m=+0.134389714 container init 2e1ac8b3ed03a84dae7076dca533efdee58d228e791a5c4e87fed67389803e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:45:53 np0005532048 podman[171289]: 2025-11-22 08:45:53.978161732 +0000 UTC m=+0.141922981 container start 2e1ac8b3ed03a84dae7076dca533efdee58d228e791a5c4e87fed67389803e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bose, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:45:53 np0005532048 stupefied_bose[171306]: 167 167
Nov 22 03:45:53 np0005532048 podman[171289]: 2025-11-22 08:45:53.985499835 +0000 UTC m=+0.149261104 container attach 2e1ac8b3ed03a84dae7076dca533efdee58d228e791a5c4e87fed67389803e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:45:53 np0005532048 systemd[1]: libpod-2e1ac8b3ed03a84dae7076dca533efdee58d228e791a5c4e87fed67389803e4d.scope: Deactivated successfully.
Nov 22 03:45:53 np0005532048 podman[171289]: 2025-11-22 08:45:53.985953027 +0000 UTC m=+0.149714286 container died 2e1ac8b3ed03a84dae7076dca533efdee58d228e791a5c4e87fed67389803e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bose, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:45:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay-21958c15c97abdd0b4aa3d901ff0daa7e80a06c246bd227e03e530e5d8065aa8-merged.mount: Deactivated successfully.
Nov 22 03:45:54 np0005532048 podman[171289]: 2025-11-22 08:45:54.047461721 +0000 UTC m=+0.211222920 container remove 2e1ac8b3ed03a84dae7076dca533efdee58d228e791a5c4e87fed67389803e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:45:54 np0005532048 systemd[1]: libpod-conmon-2e1ac8b3ed03a84dae7076dca533efdee58d228e791a5c4e87fed67389803e4d.scope: Deactivated successfully.
Nov 22 03:45:54 np0005532048 podman[171331]: 2025-11-22 08:45:54.226306303 +0000 UTC m=+0.047072094 container create cc551fd0940a1d6728e5b15adc9092299817df15f063512a8145d61fcb841ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:45:54 np0005532048 systemd[1]: Started libpod-conmon-cc551fd0940a1d6728e5b15adc9092299817df15f063512a8145d61fcb841ae1.scope.
Nov 22 03:45:54 np0005532048 podman[171331]: 2025-11-22 08:45:54.206604612 +0000 UTC m=+0.027370423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:45:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:45:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98411640762db625bba7d555c1299bb4d29ecd6389352a80195b6f3269286983/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98411640762db625bba7d555c1299bb4d29ecd6389352a80195b6f3269286983/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98411640762db625bba7d555c1299bb4d29ecd6389352a80195b6f3269286983/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98411640762db625bba7d555c1299bb4d29ecd6389352a80195b6f3269286983/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:54 np0005532048 podman[171331]: 2025-11-22 08:45:54.348513632 +0000 UTC m=+0.169279453 container init cc551fd0940a1d6728e5b15adc9092299817df15f063512a8145d61fcb841ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:45:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:54 np0005532048 podman[171331]: 2025-11-22 08:45:54.357903827 +0000 UTC m=+0.178669618 container start cc551fd0940a1d6728e5b15adc9092299817df15f063512a8145d61fcb841ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:45:54 np0005532048 podman[171331]: 2025-11-22 08:45:54.365189648 +0000 UTC m=+0.185955449 container attach cc551fd0940a1d6728e5b15adc9092299817df15f063512a8145d61fcb841ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 03:45:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]: {
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:    "0": [
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:        {
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "devices": [
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "/dev/loop3"
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            ],
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "lv_name": "ceph_lv0",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "lv_size": "21470642176",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "name": "ceph_lv0",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "tags": {
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.cluster_name": "ceph",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.crush_device_class": "",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.encrypted": "0",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.osd_id": "0",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.type": "block",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.vdo": "0"
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            },
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "type": "block",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "vg_name": "ceph_vg0"
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:        }
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:    ],
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:    "1": [
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:        {
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "devices": [
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "/dev/loop4"
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            ],
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "lv_name": "ceph_lv1",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "lv_size": "21470642176",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "name": "ceph_lv1",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "tags": {
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.cluster_name": "ceph",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.crush_device_class": "",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.encrypted": "0",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.osd_id": "1",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.type": "block",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.vdo": "0"
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            },
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "type": "block",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "vg_name": "ceph_vg1"
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:        }
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:    ],
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:    "2": [
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:        {
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "devices": [
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "/dev/loop5"
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            ],
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "lv_name": "ceph_lv2",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "lv_size": "21470642176",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "name": "ceph_lv2",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "tags": {
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.cluster_name": "ceph",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.crush_device_class": "",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.encrypted": "0",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.osd_id": "2",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.type": "block",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:                "ceph.vdo": "0"
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            },
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "type": "block",
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:            "vg_name": "ceph_vg2"
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:        }
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]:    ]
Nov 22 03:45:55 np0005532048 optimistic_kowalevski[171348]: }
Nov 22 03:45:55 np0005532048 systemd[1]: libpod-cc551fd0940a1d6728e5b15adc9092299817df15f063512a8145d61fcb841ae1.scope: Deactivated successfully.
Nov 22 03:45:55 np0005532048 podman[171331]: 2025-11-22 08:45:55.267373029 +0000 UTC m=+1.088138830 container died cc551fd0940a1d6728e5b15adc9092299817df15f063512a8145d61fcb841ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:45:55 np0005532048 systemd[1]: var-lib-containers-storage-overlay-98411640762db625bba7d555c1299bb4d29ecd6389352a80195b6f3269286983-merged.mount: Deactivated successfully.
Nov 22 03:45:55 np0005532048 podman[171331]: 2025-11-22 08:45:55.356845151 +0000 UTC m=+1.177610942 container remove cc551fd0940a1d6728e5b15adc9092299817df15f063512a8145d61fcb841ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 03:45:55 np0005532048 systemd[1]: libpod-conmon-cc551fd0940a1d6728e5b15adc9092299817df15f063512a8145d61fcb841ae1.scope: Deactivated successfully.
Nov 22 03:45:55 np0005532048 podman[171514]: 2025-11-22 08:45:55.977806894 +0000 UTC m=+0.041119337 container create cac794693be23e059bd9a5819f88cb30dbd64e0c2c1de9287dcbca9d34a32ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:45:56 np0005532048 systemd[1]: Started libpod-conmon-cac794693be23e059bd9a5819f88cb30dbd64e0c2c1de9287dcbca9d34a32ed2.scope.
Nov 22 03:45:56 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:45:56 np0005532048 podman[171514]: 2025-11-22 08:45:55.959427475 +0000 UTC m=+0.022739918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:45:56 np0005532048 podman[171514]: 2025-11-22 08:45:56.057557864 +0000 UTC m=+0.120870307 container init cac794693be23e059bd9a5819f88cb30dbd64e0c2c1de9287dcbca9d34a32ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:45:56 np0005532048 podman[171514]: 2025-11-22 08:45:56.066093776 +0000 UTC m=+0.129406219 container start cac794693be23e059bd9a5819f88cb30dbd64e0c2c1de9287dcbca9d34a32ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:45:56 np0005532048 frosty_swirles[171531]: 167 167
Nov 22 03:45:56 np0005532048 systemd[1]: libpod-cac794693be23e059bd9a5819f88cb30dbd64e0c2c1de9287dcbca9d34a32ed2.scope: Deactivated successfully.
Nov 22 03:45:56 np0005532048 podman[171514]: 2025-11-22 08:45:56.074775443 +0000 UTC m=+0.138088026 container attach cac794693be23e059bd9a5819f88cb30dbd64e0c2c1de9287dcbca9d34a32ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_swirles, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:45:56 np0005532048 podman[171514]: 2025-11-22 08:45:56.075417419 +0000 UTC m=+0.138729862 container died cac794693be23e059bd9a5819f88cb30dbd64e0c2c1de9287dcbca9d34a32ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:45:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d4b72a6835f1795d4bc97f58fb11d18e176df9bdaaa12b7ae394f2ed1e787ff7-merged.mount: Deactivated successfully.
Nov 22 03:45:56 np0005532048 podman[171514]: 2025-11-22 08:45:56.136679038 +0000 UTC m=+0.199991471 container remove cac794693be23e059bd9a5819f88cb30dbd64e0c2c1de9287dcbca9d34a32ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_swirles, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:45:56 np0005532048 systemd[1]: libpod-conmon-cac794693be23e059bd9a5819f88cb30dbd64e0c2c1de9287dcbca9d34a32ed2.scope: Deactivated successfully.
Nov 22 03:45:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:56 np0005532048 podman[171557]: 2025-11-22 08:45:56.364163284 +0000 UTC m=+0.104981561 container create bf1c6cb1e4978e4b906e90c7ae8ed0f1f879a867fdb66de247212380a8a67c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hypatia, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:45:56 np0005532048 podman[171557]: 2025-11-22 08:45:56.280810934 +0000 UTC m=+0.021629231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:45:56 np0005532048 systemd[1]: Started libpod-conmon-bf1c6cb1e4978e4b906e90c7ae8ed0f1f879a867fdb66de247212380a8a67c97.scope.
Nov 22 03:45:56 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:45:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc5c6be6f24bc9312076fbfa602da1308d450480025de6290a2b451adf667df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc5c6be6f24bc9312076fbfa602da1308d450480025de6290a2b451adf667df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc5c6be6f24bc9312076fbfa602da1308d450480025de6290a2b451adf667df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc5c6be6f24bc9312076fbfa602da1308d450480025de6290a2b451adf667df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:56 np0005532048 podman[171557]: 2025-11-22 08:45:56.5059018 +0000 UTC m=+0.246720077 container init bf1c6cb1e4978e4b906e90c7ae8ed0f1f879a867fdb66de247212380a8a67c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:45:56 np0005532048 podman[171557]: 2025-11-22 08:45:56.514985567 +0000 UTC m=+0.255803844 container start bf1c6cb1e4978e4b906e90c7ae8ed0f1f879a867fdb66de247212380a8a67c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hypatia, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:45:56 np0005532048 podman[171557]: 2025-11-22 08:45:56.552150814 +0000 UTC m=+0.292969101 container attach bf1c6cb1e4978e4b906e90c7ae8ed0f1f879a867fdb66de247212380a8a67c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:45:56 np0005532048 podman[171573]: 2025-11-22 08:45:56.566476222 +0000 UTC m=+0.134378794 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]: {
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:        "osd_id": 1,
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:        "type": "bluestore"
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:    },
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:        "osd_id": 0,
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:        "type": "bluestore"
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:    },
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:        "osd_id": 2,
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:        "type": "bluestore"
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]:    }
Nov 22 03:45:57 np0005532048 loving_hypatia[171574]: }
Nov 22 03:45:57 np0005532048 systemd[1]: libpod-bf1c6cb1e4978e4b906e90c7ae8ed0f1f879a867fdb66de247212380a8a67c97.scope: Deactivated successfully.
Nov 22 03:45:57 np0005532048 systemd[1]: libpod-bf1c6cb1e4978e4b906e90c7ae8ed0f1f879a867fdb66de247212380a8a67c97.scope: Consumed 1.084s CPU time.
Nov 22 03:45:57 np0005532048 podman[171557]: 2025-11-22 08:45:57.61129037 +0000 UTC m=+1.352108677 container died bf1c6cb1e4978e4b906e90c7ae8ed0f1f879a867fdb66de247212380a8a67c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hypatia, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 03:45:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1cc5c6be6f24bc9312076fbfa602da1308d450480025de6290a2b451adf667df-merged.mount: Deactivated successfully.
Nov 22 03:45:58 np0005532048 podman[171557]: 2025-11-22 08:45:58.10187621 +0000 UTC m=+1.842694507 container remove bf1c6cb1e4978e4b906e90c7ae8ed0f1f879a867fdb66de247212380a8a67c97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 03:45:58 np0005532048 systemd[1]: libpod-conmon-bf1c6cb1e4978e4b906e90c7ae8ed0f1f879a867fdb66de247212380a8a67c97.scope: Deactivated successfully.
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:45:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:45:58 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 5b7abcda-5593-4fc0-bc65-28183e4d7f1f does not exist
Nov 22 03:45:58 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 2428aecf-f155-4d7b-873a-28d45c801b89 does not exist
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:45:58.471823) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801158471894, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2056, "num_deletes": 251, "total_data_size": 3565381, "memory_usage": 3623960, "flush_reason": "Manual Compaction"}
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801158510161, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3479564, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9876, "largest_seqno": 11931, "table_properties": {"data_size": 3470121, "index_size": 6001, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18110, "raw_average_key_size": 19, "raw_value_size": 3451474, "raw_average_value_size": 3723, "num_data_blocks": 272, "num_entries": 927, "num_filter_entries": 927, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800910, "oldest_key_time": 1763800910, "file_creation_time": 1763801158, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 38371 microseconds, and 7196 cpu microseconds.
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:45:58.510203) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3479564 bytes OK
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:45:58.510224) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:45:58.514417) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:45:58.514442) EVENT_LOG_v1 {"time_micros": 1763801158514435, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:45:58.514465) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3556762, prev total WAL file size 3556762, number of live WAL files 2.
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:45:58.515520) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3398KB)], [26(6270KB)]
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801158515577, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9901050, "oldest_snapshot_seqno": -1}
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3786 keys, 8029408 bytes, temperature: kUnknown
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801158572882, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8029408, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8000077, "index_size": 18789, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 90876, "raw_average_key_size": 24, "raw_value_size": 7927716, "raw_average_value_size": 2093, "num_data_blocks": 808, "num_entries": 3786, "num_filter_entries": 3786, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763801158, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:45:58.573173) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8029408 bytes
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:45:58.575756) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.5 rd, 139.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.1 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(5.2) write-amplify(2.3) OK, records in: 4300, records dropped: 514 output_compression: NoCompression
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:45:58.575825) EVENT_LOG_v1 {"time_micros": 1763801158575798, "job": 10, "event": "compaction_finished", "compaction_time_micros": 57410, "compaction_time_cpu_micros": 19913, "output_level": 6, "num_output_files": 1, "total_output_size": 8029408, "num_input_records": 4300, "num_output_records": 3786, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801158576818, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801158578137, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:45:58.515463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:45:58.578181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:45:58.578186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:45:58.578188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:45:58.578190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:45:58.578192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:45:59 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:45:59 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:45:59 np0005532048 kernel: SELinux:  Converting 2769 SID table entries...
Nov 22 03:45:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:46:00 np0005532048 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 03:46:00 np0005532048 kernel: SELinux:  policy capability open_perms=1
Nov 22 03:46:00 np0005532048 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 03:46:00 np0005532048 kernel: SELinux:  policy capability always_check_network=0
Nov 22 03:46:00 np0005532048 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 03:46:00 np0005532048 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 03:46:00 np0005532048 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 03:46:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:46:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:46:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:09 np0005532048 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 22 03:46:09 np0005532048 podman[171695]: 2025-11-22 08:46:09.428930096 +0000 UTC m=+0.105769560 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:46:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:46:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:46:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:46:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:46:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:46:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:46:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:46:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:46:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:46:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:46:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:27 np0005532048 podman[179613]: 2025-11-22 08:46:27.362470124 +0000 UTC m=+0.048574879 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 22 03:46:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:46:27.923 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:46:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:46:27.925 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:46:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:46:27.925 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:46:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:46:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:46:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:40 np0005532048 podman[187540]: 2025-11-22 08:46:40.382050302 +0000 UTC m=+0.075477603 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Nov 22 03:46:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:46:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:46:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:46:52
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'backups', 'images', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta']
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:46:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:46:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:46:58 np0005532048 podman[188565]: 2025-11-22 08:46:58.374077089 +0000 UTC m=+0.055781847 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 03:46:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:46:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:46:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:46:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:46:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:47:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:47:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:47:04 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 453042ce-156f-439a-b63e-ae77437d762c does not exist
Nov 22 03:47:04 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 70a2e420-4565-4cc8-a15e-d620c5050490 does not exist
Nov 22 03:47:04 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 654c6490-5dea-4aa8-be6a-57df87c129cb does not exist
Nov 22 03:47:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:47:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:47:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:47:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:47:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:47:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:47:04 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:47:04 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:47:04 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:47:05 np0005532048 kernel: SELinux:  Converting 2770 SID table entries...
Nov 22 03:47:05 np0005532048 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 03:47:05 np0005532048 kernel: SELinux:  policy capability open_perms=1
Nov 22 03:47:05 np0005532048 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 03:47:05 np0005532048 kernel: SELinux:  policy capability always_check_network=0
Nov 22 03:47:05 np0005532048 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 03:47:05 np0005532048 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 03:47:05 np0005532048 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 03:47:05 np0005532048 podman[188853]: 2025-11-22 08:47:05.486920464 +0000 UTC m=+0.051194527 container create 944e3c35c4b25dd2a5a02cbf921bdb3dbb3a72e9a8d706b57034238c3bb6b505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamarr, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:47:05 np0005532048 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 22 03:47:05 np0005532048 systemd[1]: Started libpod-conmon-944e3c35c4b25dd2a5a02cbf921bdb3dbb3a72e9a8d706b57034238c3bb6b505.scope.
Nov 22 03:47:05 np0005532048 podman[188853]: 2025-11-22 08:47:05.462140688 +0000 UTC m=+0.026414781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:47:05 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:47:05 np0005532048 podman[188853]: 2025-11-22 08:47:05.586852645 +0000 UTC m=+0.151126808 container init 944e3c35c4b25dd2a5a02cbf921bdb3dbb3a72e9a8d706b57034238c3bb6b505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamarr, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:47:05 np0005532048 podman[188853]: 2025-11-22 08:47:05.594213707 +0000 UTC m=+0.158487770 container start 944e3c35c4b25dd2a5a02cbf921bdb3dbb3a72e9a8d706b57034238c3bb6b505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamarr, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:47:05 np0005532048 quizzical_lamarr[188869]: 167 167
Nov 22 03:47:05 np0005532048 systemd[1]: libpod-944e3c35c4b25dd2a5a02cbf921bdb3dbb3a72e9a8d706b57034238c3bb6b505.scope: Deactivated successfully.
Nov 22 03:47:05 np0005532048 podman[188853]: 2025-11-22 08:47:05.603399667 +0000 UTC m=+0.167673750 container attach 944e3c35c4b25dd2a5a02cbf921bdb3dbb3a72e9a8d706b57034238c3bb6b505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamarr, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:47:05 np0005532048 podman[188853]: 2025-11-22 08:47:05.603793467 +0000 UTC m=+0.168067530 container died 944e3c35c4b25dd2a5a02cbf921bdb3dbb3a72e9a8d706b57034238c3bb6b505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:47:05 np0005532048 systemd[1]: var-lib-containers-storage-overlay-81b9f1929e8c18d5889833d8295e2d4b50a2b5d8eea4d548547787352bd860f3-merged.mount: Deactivated successfully.
Nov 22 03:47:05 np0005532048 podman[188853]: 2025-11-22 08:47:05.657544871 +0000 UTC m=+0.221818934 container remove 944e3c35c4b25dd2a5a02cbf921bdb3dbb3a72e9a8d706b57034238c3bb6b505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:47:05 np0005532048 systemd[1]: libpod-conmon-944e3c35c4b25dd2a5a02cbf921bdb3dbb3a72e9a8d706b57034238c3bb6b505.scope: Deactivated successfully.
Nov 22 03:47:05 np0005532048 podman[188893]: 2025-11-22 08:47:05.833676631 +0000 UTC m=+0.049389510 container create bde222968c626c5bafe52a43f1d575b5dc7337513aa963f589274d0ab753e1fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_faraday, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:47:05 np0005532048 systemd[1]: Started libpod-conmon-bde222968c626c5bafe52a43f1d575b5dc7337513aa963f589274d0ab753e1fc.scope.
Nov 22 03:47:05 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:47:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ef677e5d9216a6c87621140d218275f6ed9bb0bc090e4c5a3e5c9b57fe5ca6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:47:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ef677e5d9216a6c87621140d218275f6ed9bb0bc090e4c5a3e5c9b57fe5ca6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:47:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ef677e5d9216a6c87621140d218275f6ed9bb0bc090e4c5a3e5c9b57fe5ca6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:47:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ef677e5d9216a6c87621140d218275f6ed9bb0bc090e4c5a3e5c9b57fe5ca6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:47:05 np0005532048 podman[188893]: 2025-11-22 08:47:05.808794161 +0000 UTC m=+0.024507130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:47:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ef677e5d9216a6c87621140d218275f6ed9bb0bc090e4c5a3e5c9b57fe5ca6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:47:05 np0005532048 podman[188893]: 2025-11-22 08:47:05.915741465 +0000 UTC m=+0.131454344 container init bde222968c626c5bafe52a43f1d575b5dc7337513aa963f589274d0ab753e1fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_faraday, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:47:05 np0005532048 podman[188893]: 2025-11-22 08:47:05.926695161 +0000 UTC m=+0.142408040 container start bde222968c626c5bafe52a43f1d575b5dc7337513aa963f589274d0ab753e1fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:47:05 np0005532048 podman[188893]: 2025-11-22 08:47:05.942532315 +0000 UTC m=+0.158245224 container attach bde222968c626c5bafe52a43f1d575b5dc7337513aa963f589274d0ab753e1fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:47:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:47:06 np0005532048 dbus-broker-launch[805]: Noticed file-system modification, trigger reload.
Nov 22 03:47:06 np0005532048 dbus-broker-launch[805]: Noticed file-system modification, trigger reload.
Nov 22 03:47:07 np0005532048 optimistic_faraday[188909]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:47:07 np0005532048 optimistic_faraday[188909]: --> relative data size: 1.0
Nov 22 03:47:07 np0005532048 optimistic_faraday[188909]: --> All data devices are unavailable
Nov 22 03:47:07 np0005532048 systemd[1]: libpod-bde222968c626c5bafe52a43f1d575b5dc7337513aa963f589274d0ab753e1fc.scope: Deactivated successfully.
Nov 22 03:47:07 np0005532048 systemd[1]: libpod-bde222968c626c5bafe52a43f1d575b5dc7337513aa963f589274d0ab753e1fc.scope: Consumed 1.046s CPU time.
Nov 22 03:47:07 np0005532048 podman[188893]: 2025-11-22 08:47:07.036839145 +0000 UTC m=+1.252552024 container died bde222968c626c5bafe52a43f1d575b5dc7337513aa963f589274d0ab753e1fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_faraday, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 22 03:47:07 np0005532048 systemd[1]: var-lib-containers-storage-overlay-02ef677e5d9216a6c87621140d218275f6ed9bb0bc090e4c5a3e5c9b57fe5ca6-merged.mount: Deactivated successfully.
Nov 22 03:47:07 np0005532048 podman[188893]: 2025-11-22 08:47:07.137150036 +0000 UTC m=+1.352862925 container remove bde222968c626c5bafe52a43f1d575b5dc7337513aa963f589274d0ab753e1fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 22 03:47:07 np0005532048 systemd[1]: libpod-conmon-bde222968c626c5bafe52a43f1d575b5dc7337513aa963f589274d0ab753e1fc.scope: Deactivated successfully.
Nov 22 03:47:07 np0005532048 podman[189111]: 2025-11-22 08:47:07.735470312 +0000 UTC m=+0.039357228 container create d1516d0d5ff58a40520f5c07104e096473bd1d7225407c9ab4d8e2b194e58101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 03:47:07 np0005532048 systemd[1]: Started libpod-conmon-d1516d0d5ff58a40520f5c07104e096473bd1d7225407c9ab4d8e2b194e58101.scope.
Nov 22 03:47:07 np0005532048 podman[189111]: 2025-11-22 08:47:07.71740513 +0000 UTC m=+0.021292066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:47:07 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:47:07 np0005532048 podman[189111]: 2025-11-22 08:47:07.843464983 +0000 UTC m=+0.147351919 container init d1516d0d5ff58a40520f5c07104e096473bd1d7225407c9ab4d8e2b194e58101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:47:07 np0005532048 podman[189111]: 2025-11-22 08:47:07.853558927 +0000 UTC m=+0.157445843 container start d1516d0d5ff58a40520f5c07104e096473bd1d7225407c9ab4d8e2b194e58101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:47:07 np0005532048 elegant_moore[189136]: 167 167
Nov 22 03:47:07 np0005532048 systemd[1]: libpod-d1516d0d5ff58a40520f5c07104e096473bd1d7225407c9ab4d8e2b194e58101.scope: Deactivated successfully.
Nov 22 03:47:07 np0005532048 podman[189111]: 2025-11-22 08:47:07.863516407 +0000 UTC m=+0.167403363 container attach d1516d0d5ff58a40520f5c07104e096473bd1d7225407c9ab4d8e2b194e58101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:47:07 np0005532048 podman[189111]: 2025-11-22 08:47:07.865111929 +0000 UTC m=+0.168998845 container died d1516d0d5ff58a40520f5c07104e096473bd1d7225407c9ab4d8e2b194e58101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:47:07 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0ecc1ebbb11f68d65d0b50ee8b5a57acb940cebcf27bf3de6d646d014f1358ed-merged.mount: Deactivated successfully.
Nov 22 03:47:07 np0005532048 podman[189111]: 2025-11-22 08:47:07.941034872 +0000 UTC m=+0.244921778 container remove d1516d0d5ff58a40520f5c07104e096473bd1d7225407c9ab4d8e2b194e58101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 03:47:07 np0005532048 systemd[1]: libpod-conmon-d1516d0d5ff58a40520f5c07104e096473bd1d7225407c9ab4d8e2b194e58101.scope: Deactivated successfully.
Nov 22 03:47:08 np0005532048 podman[189178]: 2025-11-22 08:47:08.137569435 +0000 UTC m=+0.053987341 container create b18b385c5e173cc5ac54eea4185a9128350eb7c494151e0d2020c6a00b378db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:47:08 np0005532048 systemd[1]: Started libpod-conmon-b18b385c5e173cc5ac54eea4185a9128350eb7c494151e0d2020c6a00b378db7.scope.
Nov 22 03:47:08 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:47:08 np0005532048 podman[189178]: 2025-11-22 08:47:08.110252311 +0000 UTC m=+0.026670267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:47:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4697f277ab2afc694f1a68e4da4bb542d2d11ec0ea618212c24c6ae3b784c6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:47:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4697f277ab2afc694f1a68e4da4bb542d2d11ec0ea618212c24c6ae3b784c6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:47:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4697f277ab2afc694f1a68e4da4bb542d2d11ec0ea618212c24c6ae3b784c6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:47:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4697f277ab2afc694f1a68e4da4bb542d2d11ec0ea618212c24c6ae3b784c6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:47:08 np0005532048 podman[189178]: 2025-11-22 08:47:08.230652226 +0000 UTC m=+0.147070162 container init b18b385c5e173cc5ac54eea4185a9128350eb7c494151e0d2020c6a00b378db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:47:08 np0005532048 podman[189178]: 2025-11-22 08:47:08.243275646 +0000 UTC m=+0.159693552 container start b18b385c5e173cc5ac54eea4185a9128350eb7c494151e0d2020c6a00b378db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lamarr, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:47:08 np0005532048 podman[189178]: 2025-11-22 08:47:08.254688494 +0000 UTC m=+0.171106450 container attach b18b385c5e173cc5ac54eea4185a9128350eb7c494151e0d2020c6a00b378db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lamarr, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:47:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]: {
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:    "0": [
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:        {
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "devices": [
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "/dev/loop3"
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            ],
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "lv_name": "ceph_lv0",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "lv_size": "21470642176",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "name": "ceph_lv0",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "tags": {
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.cluster_name": "ceph",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.crush_device_class": "",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.encrypted": "0",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.osd_id": "0",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.type": "block",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.vdo": "0"
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            },
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "type": "block",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "vg_name": "ceph_vg0"
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:        }
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:    ],
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:    "1": [
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:        {
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "devices": [
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "/dev/loop4"
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            ],
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "lv_name": "ceph_lv1",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "lv_size": "21470642176",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "name": "ceph_lv1",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "tags": {
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.cluster_name": "ceph",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.crush_device_class": "",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.encrypted": "0",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.osd_id": "1",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.type": "block",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.vdo": "0"
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            },
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "type": "block",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "vg_name": "ceph_vg1"
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:        }
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:    ],
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:    "2": [
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:        {
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "devices": [
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "/dev/loop5"
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            ],
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "lv_name": "ceph_lv2",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "lv_size": "21470642176",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "name": "ceph_lv2",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "tags": {
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.cluster_name": "ceph",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.crush_device_class": "",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.encrypted": "0",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.osd_id": "2",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.type": "block",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:                "ceph.vdo": "0"
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            },
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "type": "block",
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:            "vg_name": "ceph_vg2"
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:        }
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]:    ]
Nov 22 03:47:09 np0005532048 objective_lamarr[189195]: }
Nov 22 03:47:09 np0005532048 podman[189178]: 2025-11-22 08:47:09.096719556 +0000 UTC m=+1.013137482 container died b18b385c5e173cc5ac54eea4185a9128350eb7c494151e0d2020c6a00b378db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lamarr, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:47:09 np0005532048 systemd[1]: libpod-b18b385c5e173cc5ac54eea4185a9128350eb7c494151e0d2020c6a00b378db7.scope: Deactivated successfully.
Nov 22 03:47:09 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c4697f277ab2afc694f1a68e4da4bb542d2d11ec0ea618212c24c6ae3b784c6f-merged.mount: Deactivated successfully.
Nov 22 03:47:10 np0005532048 podman[189178]: 2025-11-22 08:47:10.380650711 +0000 UTC m=+2.297068617 container remove b18b385c5e173cc5ac54eea4185a9128350eb7c494151e0d2020c6a00b378db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:47:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:10 np0005532048 systemd[1]: libpod-conmon-b18b385c5e173cc5ac54eea4185a9128350eb7c494151e0d2020c6a00b378db7.scope: Deactivated successfully.
Nov 22 03:47:10 np0005532048 podman[189242]: 2025-11-22 08:47:10.668369805 +0000 UTC m=+0.170099044 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:47:11 np0005532048 podman[189395]: 2025-11-22 08:47:11.116082348 +0000 UTC m=+0.092303432 container create b66513b5e4b8ce8fc27f6b8aecf890a9b90c7086b306fb1b028ed48ecaf2cfba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_napier, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:47:11 np0005532048 podman[189395]: 2025-11-22 08:47:11.04802059 +0000 UTC m=+0.024241674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:47:11 np0005532048 systemd[1]: Started libpod-conmon-b66513b5e4b8ce8fc27f6b8aecf890a9b90c7086b306fb1b028ed48ecaf2cfba.scope.
Nov 22 03:47:11 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:47:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:47:11 np0005532048 podman[189395]: 2025-11-22 08:47:11.82291098 +0000 UTC m=+0.799132094 container init b66513b5e4b8ce8fc27f6b8aecf890a9b90c7086b306fb1b028ed48ecaf2cfba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_napier, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:47:11 np0005532048 podman[189395]: 2025-11-22 08:47:11.832384226 +0000 UTC m=+0.808605350 container start b66513b5e4b8ce8fc27f6b8aecf890a9b90c7086b306fb1b028ed48ecaf2cfba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:47:11 np0005532048 infallible_napier[189411]: 167 167
Nov 22 03:47:11 np0005532048 systemd[1]: libpod-b66513b5e4b8ce8fc27f6b8aecf890a9b90c7086b306fb1b028ed48ecaf2cfba.scope: Deactivated successfully.
Nov 22 03:47:11 np0005532048 conmon[189411]: conmon b66513b5e4b8ce8fc27f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b66513b5e4b8ce8fc27f6b8aecf890a9b90c7086b306fb1b028ed48ecaf2cfba.scope/container/memory.events
Nov 22 03:47:12 np0005532048 podman[189395]: 2025-11-22 08:47:12.106803284 +0000 UTC m=+1.083024388 container attach b66513b5e4b8ce8fc27f6b8aecf890a9b90c7086b306fb1b028ed48ecaf2cfba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:47:12 np0005532048 podman[189395]: 2025-11-22 08:47:12.107715308 +0000 UTC m=+1.083936392 container died b66513b5e4b8ce8fc27f6b8aecf890a9b90c7086b306fb1b028ed48ecaf2cfba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_napier, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 03:47:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:13 np0005532048 systemd[1]: var-lib-containers-storage-overlay-105e4f3773ac2a86a079c93914a1aa3bf1c3d25aaff0bb2b354d67f7d761c903-merged.mount: Deactivated successfully.
Nov 22 03:47:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:14 np0005532048 podman[189395]: 2025-11-22 08:47:14.66348148 +0000 UTC m=+3.639702604 container remove b66513b5e4b8ce8fc27f6b8aecf890a9b90c7086b306fb1b028ed48ecaf2cfba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 03:47:14 np0005532048 systemd[1]: libpod-conmon-b66513b5e4b8ce8fc27f6b8aecf890a9b90c7086b306fb1b028ed48ecaf2cfba.scope: Deactivated successfully.
Nov 22 03:47:14 np0005532048 podman[189447]: 2025-11-22 08:47:14.857939008 +0000 UTC m=+0.030799565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:47:15 np0005532048 podman[189447]: 2025-11-22 08:47:15.11950813 +0000 UTC m=+0.292368667 container create dfbf9195febde0b741f3926c0c6ce2bd7b18efa7db845f2d57c23cd21df338f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:47:15 np0005532048 systemd[1]: Started libpod-conmon-dfbf9195febde0b741f3926c0c6ce2bd7b18efa7db845f2d57c23cd21df338f1.scope.
Nov 22 03:47:15 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:47:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f64c5a8242e008ee3a74181aa23a0412be79d671125d0471a674c9c5fa3d43c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:47:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f64c5a8242e008ee3a74181aa23a0412be79d671125d0471a674c9c5fa3d43c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:47:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f64c5a8242e008ee3a74181aa23a0412be79d671125d0471a674c9c5fa3d43c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:47:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f64c5a8242e008ee3a74181aa23a0412be79d671125d0471a674c9c5fa3d43c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:47:16 np0005532048 podman[189447]: 2025-11-22 08:47:16.236184807 +0000 UTC m=+1.409045374 container init dfbf9195febde0b741f3926c0c6ce2bd7b18efa7db845f2d57c23cd21df338f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williams, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:47:16 np0005532048 podman[189447]: 2025-11-22 08:47:16.244973036 +0000 UTC m=+1.417833573 container start dfbf9195febde0b741f3926c0c6ce2bd7b18efa7db845f2d57c23cd21df338f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williams, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:47:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:16 np0005532048 podman[189447]: 2025-11-22 08:47:16.557947549 +0000 UTC m=+1.730808086 container attach dfbf9195febde0b741f3926c0c6ce2bd7b18efa7db845f2d57c23cd21df338f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williams, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 03:47:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:47:17 np0005532048 confident_williams[189487]: {
Nov 22 03:47:17 np0005532048 confident_williams[189487]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:47:17 np0005532048 confident_williams[189487]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:47:17 np0005532048 confident_williams[189487]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:47:17 np0005532048 confident_williams[189487]:        "osd_id": 1,
Nov 22 03:47:17 np0005532048 confident_williams[189487]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:47:17 np0005532048 confident_williams[189487]:        "type": "bluestore"
Nov 22 03:47:17 np0005532048 confident_williams[189487]:    },
Nov 22 03:47:17 np0005532048 confident_williams[189487]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:47:17 np0005532048 confident_williams[189487]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:47:17 np0005532048 confident_williams[189487]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:47:17 np0005532048 confident_williams[189487]:        "osd_id": 0,
Nov 22 03:47:17 np0005532048 confident_williams[189487]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:47:17 np0005532048 confident_williams[189487]:        "type": "bluestore"
Nov 22 03:47:17 np0005532048 confident_williams[189487]:    },
Nov 22 03:47:17 np0005532048 confident_williams[189487]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:47:17 np0005532048 confident_williams[189487]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:47:17 np0005532048 confident_williams[189487]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:47:17 np0005532048 confident_williams[189487]:        "osd_id": 2,
Nov 22 03:47:17 np0005532048 confident_williams[189487]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:47:17 np0005532048 confident_williams[189487]:        "type": "bluestore"
Nov 22 03:47:17 np0005532048 confident_williams[189487]:    }
Nov 22 03:47:17 np0005532048 confident_williams[189487]: }
Nov 22 03:47:17 np0005532048 systemd[1]: libpod-dfbf9195febde0b741f3926c0c6ce2bd7b18efa7db845f2d57c23cd21df338f1.scope: Deactivated successfully.
Nov 22 03:47:17 np0005532048 systemd[1]: libpod-dfbf9195febde0b741f3926c0c6ce2bd7b18efa7db845f2d57c23cd21df338f1.scope: Consumed 1.034s CPU time.
Nov 22 03:47:17 np0005532048 podman[189447]: 2025-11-22 08:47:17.293771038 +0000 UTC m=+2.466631575 container died dfbf9195febde0b741f3926c0c6ce2bd7b18efa7db845f2d57c23cd21df338f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williams, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 03:47:17 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f64c5a8242e008ee3a74181aa23a0412be79d671125d0471a674c9c5fa3d43c8-merged.mount: Deactivated successfully.
Nov 22 03:47:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:18 np0005532048 podman[189447]: 2025-11-22 08:47:18.777230126 +0000 UTC m=+3.950090673 container remove dfbf9195febde0b741f3926c0c6ce2bd7b18efa7db845f2d57c23cd21df338f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:47:18 np0005532048 systemd[1]: libpod-conmon-dfbf9195febde0b741f3926c0c6ce2bd7b18efa7db845f2d57c23cd21df338f1.scope: Deactivated successfully.
Nov 22 03:47:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:47:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:47:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:47:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:47:20 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 73ac6160-2c3c-4684-8c87-b98f3d24230b does not exist
Nov 22 03:47:20 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a2404afd-0ebe-47b6-b0c9-88ffbc9e1d4f does not exist
Nov 22 03:47:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:21 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:47:21 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:47:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:47:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:47:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:47:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:47:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:47:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:47:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:47:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:27 np0005532048 systemd[1]: Stopping OpenSSH server daemon...
Nov 22 03:47:27 np0005532048 systemd[1]: sshd.service: Deactivated successfully.
Nov 22 03:47:27 np0005532048 systemd[1]: Stopped OpenSSH server daemon.
Nov 22 03:47:27 np0005532048 systemd[1]: sshd.service: Consumed 2.286s CPU time, read 32.0K from disk, written 0B to disk.
Nov 22 03:47:27 np0005532048 systemd[1]: Stopped target sshd-keygen.target.
Nov 22 03:47:27 np0005532048 systemd[1]: Stopping sshd-keygen.target...
Nov 22 03:47:27 np0005532048 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 03:47:27 np0005532048 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 03:47:27 np0005532048 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 03:47:27 np0005532048 systemd[1]: Reached target sshd-keygen.target.
Nov 22 03:47:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:47:27 np0005532048 systemd[1]: Starting OpenSSH server daemon...
Nov 22 03:47:27 np0005532048 systemd[1]: Started OpenSSH server daemon.
Nov 22 03:47:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:47:27.925 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:47:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:47:27.927 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:47:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:47:27.928 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:47:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:28 np0005532048 podman[190520]: 2025-11-22 08:47:28.503454 +0000 UTC m=+0.070505606 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:47:29 np0005532048 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 03:47:29 np0005532048 systemd[1]: Starting man-db-cache-update.service...
Nov 22 03:47:29 np0005532048 systemd[1]: Reloading.
Nov 22 03:47:29 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:47:29 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:47:29 np0005532048 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 03:47:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:47:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:34 np0005532048 python3.9[195735]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:47:34 np0005532048 systemd[1]: Reloading.
Nov 22 03:47:34 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:47:34 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:47:35 np0005532048 python3.9[196978]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:47:35 np0005532048 systemd[1]: Reloading.
Nov 22 03:47:35 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:47:35 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:47:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:36 np0005532048 python3.9[198163]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:47:37 np0005532048 systemd[1]: Reloading.
Nov 22 03:47:37 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:47:37 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:47:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:47:38 np0005532048 python3.9[199395]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:47:38 np0005532048 systemd[1]: Reloading.
Nov 22 03:47:38 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:47:38 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:47:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:38 np0005532048 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:47:38 np0005532048 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:47:38 np0005532048 systemd[1]: man-db-cache-update.service: Consumed 11.356s CPU time.
Nov 22 03:47:38 np0005532048 systemd[1]: run-rab5eca6d77344affaf3693569bb3190d.service: Deactivated successfully.
Nov 22 03:47:39 np0005532048 python3.9[199909]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:39 np0005532048 systemd[1]: Reloading.
Nov 22 03:47:39 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:47:39 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:47:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:40 np0005532048 python3.9[200099]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:40 np0005532048 systemd[1]: Reloading.
Nov 22 03:47:40 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:47:40 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:47:40 np0005532048 podman[200137]: 2025-11-22 08:47:40.961539523 +0000 UTC m=+0.116259839 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 03:47:41 np0005532048 python3.9[200314]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:41 np0005532048 systemd[1]: Reloading.
Nov 22 03:47:41 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:47:41 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:47:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:47:42 np0005532048 python3.9[200505]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:43 np0005532048 python3.9[200660]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:43 np0005532048 systemd[1]: Reloading.
Nov 22 03:47:43 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:47:43 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:47:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:44 np0005532048 python3.9[200850]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:47:44 np0005532048 systemd[1]: Reloading.
Nov 22 03:47:44 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:47:44 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:47:45 np0005532048 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 22 03:47:45 np0005532048 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 22 03:47:45 np0005532048 python3.9[201043]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:46 np0005532048 auditd[703]: Audit daemon rotating log files
Nov 22 03:47:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:46 np0005532048 python3.9[201198]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:47:47 np0005532048 python3.9[201353]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:48 np0005532048 python3.9[201508]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:49 np0005532048 python3.9[201663]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:50 np0005532048 python3.9[201818]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:51 np0005532048 python3.9[201973]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:47:52
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'vms', 'images', '.mgr', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'backups']
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:47:52 np0005532048 python3.9[202128]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:47:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:47:53 np0005532048 python3.9[202283]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:54 np0005532048 python3.9[202438]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:55 np0005532048 python3.9[202593]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:55 np0005532048 python3.9[202748]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:56 np0005532048 python3.9[202903]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:47:57 np0005532048 python3.9[203058]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:47:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:58 np0005532048 python3.9[203213]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:47:59 np0005532048 podman[203337]: 2025-11-22 08:47:59.190510001 +0000 UTC m=+0.065788533 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 03:47:59 np0005532048 python3.9[203382]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:48:00 np0005532048 python3.9[203534]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:48:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:00 np0005532048 python3.9[203686]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:48:01 np0005532048 python3.9[203838]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:48:02 np0005532048 python3.9[203990]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:48:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:48:03 np0005532048 python3.9[204142]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:04 np0005532048 python3.9[204267]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763801282.440064-554-9628157796928/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:04 np0005532048 python3.9[204419]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:05 np0005532048 python3.9[204544]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763801284.2112582-554-115907907284649/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:06 np0005532048 python3.9[204696]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:07 np0005532048 python3.9[204821]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763801285.5400314-554-266527526978457/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:48:08 np0005532048 python3.9[204973]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:08 np0005532048 python3.9[205098]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763801287.6411545-554-253034771823611/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:09 np0005532048 python3.9[205250]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:10 np0005532048 python3.9[205375]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763801289.0978346-554-32633748894552/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:10 np0005532048 python3.9[205527]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:11 np0005532048 podman[205624]: 2025-11-22 08:48:11.365579512 +0000 UTC m=+0.096131927 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Nov 22 03:48:11 np0005532048 python3.9[205672]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763801290.3590689-554-280850928299368/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:12 np0005532048 python3.9[205830]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:48:12 np0005532048 python3.9[205953]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763801291.668026-554-225551002259411/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:13 np0005532048 python3.9[206105]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:14 np0005532048 python3.9[206230]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763801292.8926613-554-95205183447008/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:14 np0005532048 python3.9[206382]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 22 03:48:15 np0005532048 python3.9[206535]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:16 np0005532048 python3.9[206687]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:17 np0005532048 python3.9[206839]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:48:17 np0005532048 python3.9[206991]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:18 np0005532048 python3.9[207143]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:19 np0005532048 python3.9[207295]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:19 np0005532048 python3.9[207447]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:20 np0005532048 python3.9[207650]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:48:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:48:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:48:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:48:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:48:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:48:20 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 1c55dc96-35cd-48e9-8bf0-a09f5766d9bb does not exist
Nov 22 03:48:20 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d1353b6c-6e94-45f4-9de1-bdb3dfb9c317 does not exist
Nov 22 03:48:20 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 93011363-c198-4181-8df1-d459da0c7209 does not exist
Nov 22 03:48:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:48:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:48:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:48:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:48:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:48:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:48:21 np0005532048 python3.9[207882]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:21 np0005532048 podman[208143]: 2025-11-22 08:48:21.534307877 +0000 UTC m=+0.042674705 container create bcb68c7c0e3563cb395543d619dfab6b1d93d7fddee7d34b7b2f14671a8962c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ritchie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:48:21 np0005532048 systemd[1]: Started libpod-conmon-bcb68c7c0e3563cb395543d619dfab6b1d93d7fddee7d34b7b2f14671a8962c6.scope.
Nov 22 03:48:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:48:21 np0005532048 podman[208143]: 2025-11-22 08:48:21.514446112 +0000 UTC m=+0.022812970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:48:21 np0005532048 podman[208143]: 2025-11-22 08:48:21.667497916 +0000 UTC m=+0.175864774 container init bcb68c7c0e3563cb395543d619dfab6b1d93d7fddee7d34b7b2f14671a8962c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ritchie, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:48:21 np0005532048 podman[208143]: 2025-11-22 08:48:21.675111263 +0000 UTC m=+0.183478121 container start bcb68c7c0e3563cb395543d619dfab6b1d93d7fddee7d34b7b2f14671a8962c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:48:21 np0005532048 fervent_ritchie[208191]: 167 167
Nov 22 03:48:21 np0005532048 systemd[1]: libpod-bcb68c7c0e3563cb395543d619dfab6b1d93d7fddee7d34b7b2f14671a8962c6.scope: Deactivated successfully.
Nov 22 03:48:21 np0005532048 conmon[208191]: conmon bcb68c7c0e3563cb3955 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bcb68c7c0e3563cb395543d619dfab6b1d93d7fddee7d34b7b2f14671a8962c6.scope/container/memory.events
Nov 22 03:48:21 np0005532048 podman[208143]: 2025-11-22 08:48:21.682889794 +0000 UTC m=+0.191256652 container attach bcb68c7c0e3563cb395543d619dfab6b1d93d7fddee7d34b7b2f14671a8962c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 03:48:21 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:48:21 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:48:21 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:48:21 np0005532048 podman[208143]: 2025-11-22 08:48:21.683519963 +0000 UTC m=+0.191886801 container died bcb68c7c0e3563cb395543d619dfab6b1d93d7fddee7d34b7b2f14671a8962c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ritchie, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:48:21 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c1e73b85337f8826aad3cdc03e76267d99160cb5e630056d044cea17ccc775c3-merged.mount: Deactivated successfully.
Nov 22 03:48:21 np0005532048 podman[208143]: 2025-11-22 08:48:21.742535182 +0000 UTC m=+0.250902020 container remove bcb68c7c0e3563cb395543d619dfab6b1d93d7fddee7d34b7b2f14671a8962c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ritchie, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 03:48:21 np0005532048 systemd[1]: libpod-conmon-bcb68c7c0e3563cb395543d619dfab6b1d93d7fddee7d34b7b2f14671a8962c6.scope: Deactivated successfully.
Nov 22 03:48:21 np0005532048 python3.9[208193]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:21 np0005532048 podman[208222]: 2025-11-22 08:48:21.903355154 +0000 UTC m=+0.046397960 container create d85c56f13217499da9fece42451b211f0c976de3b680946de2a78a4017678312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:48:21 np0005532048 systemd[1]: Started libpod-conmon-d85c56f13217499da9fece42451b211f0c976de3b680946de2a78a4017678312.scope.
Nov 22 03:48:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:48:21 np0005532048 podman[208222]: 2025-11-22 08:48:21.882048209 +0000 UTC m=+0.025091035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:48:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2cac72f4f9488794ba5e2646f6072d743cc171057da9381f365ef1e01ebac15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2cac72f4f9488794ba5e2646f6072d743cc171057da9381f365ef1e01ebac15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2cac72f4f9488794ba5e2646f6072d743cc171057da9381f365ef1e01ebac15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2cac72f4f9488794ba5e2646f6072d743cc171057da9381f365ef1e01ebac15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2cac72f4f9488794ba5e2646f6072d743cc171057da9381f365ef1e01ebac15/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:21 np0005532048 podman[208222]: 2025-11-22 08:48:21.995347647 +0000 UTC m=+0.138390463 container init d85c56f13217499da9fece42451b211f0c976de3b680946de2a78a4017678312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:48:22 np0005532048 podman[208222]: 2025-11-22 08:48:22.003402175 +0000 UTC m=+0.146444971 container start d85c56f13217499da9fece42451b211f0c976de3b680946de2a78a4017678312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:48:22 np0005532048 podman[208222]: 2025-11-22 08:48:22.007664424 +0000 UTC m=+0.150707250 container attach d85c56f13217499da9fece42451b211f0c976de3b680946de2a78a4017678312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:48:22 np0005532048 python3.9[208389]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:48:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:48:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:48:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:48:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:48:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:48:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:48:23 np0005532048 python3.9[208551]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:23 np0005532048 laughing_einstein[208280]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:48:23 np0005532048 laughing_einstein[208280]: --> relative data size: 1.0
Nov 22 03:48:23 np0005532048 laughing_einstein[208280]: --> All data devices are unavailable
Nov 22 03:48:23 np0005532048 systemd[1]: libpod-d85c56f13217499da9fece42451b211f0c976de3b680946de2a78a4017678312.scope: Deactivated successfully.
Nov 22 03:48:23 np0005532048 systemd[1]: libpod-d85c56f13217499da9fece42451b211f0c976de3b680946de2a78a4017678312.scope: Consumed 1.032s CPU time.
Nov 22 03:48:23 np0005532048 podman[208222]: 2025-11-22 08:48:23.105984061 +0000 UTC m=+1.249026857 container died d85c56f13217499da9fece42451b211f0c976de3b680946de2a78a4017678312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:48:23 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b2cac72f4f9488794ba5e2646f6072d743cc171057da9381f365ef1e01ebac15-merged.mount: Deactivated successfully.
Nov 22 03:48:23 np0005532048 podman[208222]: 2025-11-22 08:48:23.290840865 +0000 UTC m=+1.433883661 container remove d85c56f13217499da9fece42451b211f0c976de3b680946de2a78a4017678312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:48:23 np0005532048 systemd[1]: libpod-conmon-d85c56f13217499da9fece42451b211f0c976de3b680946de2a78a4017678312.scope: Deactivated successfully.
Nov 22 03:48:23 np0005532048 python3.9[208805]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:23 np0005532048 podman[208914]: 2025-11-22 08:48:23.886184528 +0000 UTC m=+0.054120863 container create f7cbb5f7eb497b345deea3e6755a426cbeb5e726cb4f5fb85e1f3ceabb249824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 03:48:23 np0005532048 systemd[1]: Started libpod-conmon-f7cbb5f7eb497b345deea3e6755a426cbeb5e726cb4f5fb85e1f3ceabb249824.scope.
Nov 22 03:48:23 np0005532048 podman[208914]: 2025-11-22 08:48:23.853774501 +0000 UTC m=+0.021710856 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:48:23 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:48:23 np0005532048 podman[208914]: 2025-11-22 08:48:23.992861127 +0000 UTC m=+0.160797482 container init f7cbb5f7eb497b345deea3e6755a426cbeb5e726cb4f5fb85e1f3ceabb249824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 03:48:24 np0005532048 podman[208914]: 2025-11-22 08:48:24.001935698 +0000 UTC m=+0.169872033 container start f7cbb5f7eb497b345deea3e6755a426cbeb5e726cb4f5fb85e1f3ceabb249824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:48:24 np0005532048 pensive_ramanujan[208977]: 167 167
Nov 22 03:48:24 np0005532048 systemd[1]: libpod-f7cbb5f7eb497b345deea3e6755a426cbeb5e726cb4f5fb85e1f3ceabb249824.scope: Deactivated successfully.
Nov 22 03:48:24 np0005532048 podman[208914]: 2025-11-22 08:48:24.011301323 +0000 UTC m=+0.179237658 container attach f7cbb5f7eb497b345deea3e6755a426cbeb5e726cb4f5fb85e1f3ceabb249824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ramanujan, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:48:24 np0005532048 podman[208914]: 2025-11-22 08:48:24.012520648 +0000 UTC m=+0.180456983 container died f7cbb5f7eb497b345deea3e6755a426cbeb5e726cb4f5fb85e1f3ceabb249824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ramanujan, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:48:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay-880d18a9fc1d2a5baba7c281efcd3b7eb3beb36df8cfbb0d249500d6bb38e22c-merged.mount: Deactivated successfully.
Nov 22 03:48:24 np0005532048 podman[208914]: 2025-11-22 08:48:24.076881914 +0000 UTC m=+0.244818249 container remove f7cbb5f7eb497b345deea3e6755a426cbeb5e726cb4f5fb85e1f3ceabb249824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ramanujan, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:48:24 np0005532048 systemd[1]: libpod-conmon-f7cbb5f7eb497b345deea3e6755a426cbeb5e726cb4f5fb85e1f3ceabb249824.scope: Deactivated successfully.
Nov 22 03:48:24 np0005532048 podman[209064]: 2025-11-22 08:48:24.234527399 +0000 UTC m=+0.027030996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:48:24 np0005532048 podman[209064]: 2025-11-22 08:48:24.332809673 +0000 UTC m=+0.125313290 container create 33fb0d809da2b57aab0484ee65345be8a07f577ba53f28796f08e35e172cfd41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:48:24 np0005532048 python3.9[209058]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:24 np0005532048 systemd[1]: Started libpod-conmon-33fb0d809da2b57aab0484ee65345be8a07f577ba53f28796f08e35e172cfd41.scope.
Nov 22 03:48:24 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:48:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/207a382cb31114e136451cde2b9eb41e8404aa8e4521fe281fe77e30e164a807/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/207a382cb31114e136451cde2b9eb41e8404aa8e4521fe281fe77e30e164a807/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/207a382cb31114e136451cde2b9eb41e8404aa8e4521fe281fe77e30e164a807/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/207a382cb31114e136451cde2b9eb41e8404aa8e4521fe281fe77e30e164a807/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:24 np0005532048 podman[209064]: 2025-11-22 08:48:24.541930543 +0000 UTC m=+0.334434140 container init 33fb0d809da2b57aab0484ee65345be8a07f577ba53f28796f08e35e172cfd41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:48:24 np0005532048 podman[209064]: 2025-11-22 08:48:24.555222792 +0000 UTC m=+0.347726389 container start 33fb0d809da2b57aab0484ee65345be8a07f577ba53f28796f08e35e172cfd41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:48:24 np0005532048 podman[209064]: 2025-11-22 08:48:24.584994064 +0000 UTC m=+0.377497671 container attach 33fb0d809da2b57aab0484ee65345be8a07f577ba53f28796f08e35e172cfd41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_swartz, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:48:25 np0005532048 python3.9[209236]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:25 np0005532048 nice_swartz[209088]: {
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:    "0": [
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:        {
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "devices": [
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "/dev/loop3"
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            ],
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "lv_name": "ceph_lv0",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "lv_size": "21470642176",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "name": "ceph_lv0",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "tags": {
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.cluster_name": "ceph",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.crush_device_class": "",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.encrypted": "0",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.osd_id": "0",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.type": "block",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.vdo": "0"
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            },
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "type": "block",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "vg_name": "ceph_vg0"
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:        }
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:    ],
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:    "1": [
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:        {
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "devices": [
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "/dev/loop4"
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            ],
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "lv_name": "ceph_lv1",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "lv_size": "21470642176",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "name": "ceph_lv1",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "tags": {
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.cluster_name": "ceph",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.crush_device_class": "",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.encrypted": "0",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.osd_id": "1",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.type": "block",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.vdo": "0"
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            },
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "type": "block",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "vg_name": "ceph_vg1"
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:        }
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:    ],
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:    "2": [
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:        {
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "devices": [
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "/dev/loop5"
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            ],
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "lv_name": "ceph_lv2",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "lv_size": "21470642176",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "name": "ceph_lv2",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "tags": {
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.cluster_name": "ceph",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.crush_device_class": "",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.encrypted": "0",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.osd_id": "2",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.type": "block",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:                "ceph.vdo": "0"
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            },
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "type": "block",
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:            "vg_name": "ceph_vg2"
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:        }
Nov 22 03:48:25 np0005532048 nice_swartz[209088]:    ]
Nov 22 03:48:25 np0005532048 nice_swartz[209088]: }
Nov 22 03:48:25 np0005532048 systemd[1]: libpod-33fb0d809da2b57aab0484ee65345be8a07f577ba53f28796f08e35e172cfd41.scope: Deactivated successfully.
Nov 22 03:48:25 np0005532048 podman[209064]: 2025-11-22 08:48:25.421864016 +0000 UTC m=+1.214367593 container died 33fb0d809da2b57aab0484ee65345be8a07f577ba53f28796f08e35e172cfd41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_swartz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 03:48:25 np0005532048 systemd[1]: var-lib-containers-storage-overlay-207a382cb31114e136451cde2b9eb41e8404aa8e4521fe281fe77e30e164a807-merged.mount: Deactivated successfully.
Nov 22 03:48:25 np0005532048 podman[209064]: 2025-11-22 08:48:25.501060891 +0000 UTC m=+1.293564498 container remove 33fb0d809da2b57aab0484ee65345be8a07f577ba53f28796f08e35e172cfd41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_swartz, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:48:25 np0005532048 systemd[1]: libpod-conmon-33fb0d809da2b57aab0484ee65345be8a07f577ba53f28796f08e35e172cfd41.scope: Deactivated successfully.
Nov 22 03:48:25 np0005532048 python3.9[209369]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763801304.5954962-775-142210441235778/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:26 np0005532048 podman[209667]: 2025-11-22 08:48:26.104492483 +0000 UTC m=+0.047563115 container create c7f2ba8a8a8f001ebf76785190e7627493ad87e27381143a9a6ad82aaf9254bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 03:48:26 np0005532048 systemd[1]: Started libpod-conmon-c7f2ba8a8a8f001ebf76785190e7627493ad87e27381143a9a6ad82aaf9254bd.scope.
Nov 22 03:48:26 np0005532048 podman[209667]: 2025-11-22 08:48:26.078715555 +0000 UTC m=+0.021786207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:48:26 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:48:26 np0005532048 podman[209667]: 2025-11-22 08:48:26.204399572 +0000 UTC m=+0.147470224 container init c7f2ba8a8a8f001ebf76785190e7627493ad87e27381143a9a6ad82aaf9254bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldberg, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 03:48:26 np0005532048 podman[209667]: 2025-11-22 08:48:26.212282696 +0000 UTC m=+0.155353328 container start c7f2ba8a8a8f001ebf76785190e7627493ad87e27381143a9a6ad82aaf9254bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 03:48:26 np0005532048 podman[209667]: 2025-11-22 08:48:26.21672947 +0000 UTC m=+0.159800122 container attach c7f2ba8a8a8f001ebf76785190e7627493ad87e27381143a9a6ad82aaf9254bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 03:48:26 np0005532048 systemd[1]: libpod-c7f2ba8a8a8f001ebf76785190e7627493ad87e27381143a9a6ad82aaf9254bd.scope: Deactivated successfully.
Nov 22 03:48:26 np0005532048 blissful_goldberg[209686]: 167 167
Nov 22 03:48:26 np0005532048 conmon[209686]: conmon c7f2ba8a8a8f001ebf76 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7f2ba8a8a8f001ebf76785190e7627493ad87e27381143a9a6ad82aaf9254bd.scope/container/memory.events
Nov 22 03:48:26 np0005532048 podman[209667]: 2025-11-22 08:48:26.224014602 +0000 UTC m=+0.167085264 container died c7f2ba8a8a8f001ebf76785190e7627493ad87e27381143a9a6ad82aaf9254bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:48:26 np0005532048 systemd[1]: var-lib-containers-storage-overlay-dc45f7f2afd565a82516ca026dcb4da2e4a8e277c36ca0d404b5ec6e96a74622-merged.mount: Deactivated successfully.
Nov 22 03:48:26 np0005532048 python3.9[209681]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:26 np0005532048 podman[209667]: 2025-11-22 08:48:26.299736704 +0000 UTC m=+0.242807336 container remove c7f2ba8a8a8f001ebf76785190e7627493ad87e27381143a9a6ad82aaf9254bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goldberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:48:26 np0005532048 systemd[1]: libpod-conmon-c7f2ba8a8a8f001ebf76785190e7627493ad87e27381143a9a6ad82aaf9254bd.scope: Deactivated successfully.
Nov 22 03:48:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:26 np0005532048 podman[209751]: 2025-11-22 08:48:26.466105702 +0000 UTC m=+0.046026523 container create c2abed831c7a4d2e8eff4389704376ae2635373ee7435c7edafb512381bed24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_tharp, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:48:26 np0005532048 systemd[1]: Started libpod-conmon-c2abed831c7a4d2e8eff4389704376ae2635373ee7435c7edafb512381bed24c.scope.
Nov 22 03:48:26 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:48:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f6ee9affd40bf89a990d1da2b43d89af10f14800121dcd09f23afd3d55d1ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f6ee9affd40bf89a990d1da2b43d89af10f14800121dcd09f23afd3d55d1ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f6ee9affd40bf89a990d1da2b43d89af10f14800121dcd09f23afd3d55d1ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f6ee9affd40bf89a990d1da2b43d89af10f14800121dcd09f23afd3d55d1ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:26 np0005532048 podman[209751]: 2025-11-22 08:48:26.44640784 +0000 UTC m=+0.026328691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:48:26 np0005532048 podman[209751]: 2025-11-22 08:48:26.555607003 +0000 UTC m=+0.135527854 container init c2abed831c7a4d2e8eff4389704376ae2635373ee7435c7edafb512381bed24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:48:26 np0005532048 podman[209751]: 2025-11-22 08:48:26.563310433 +0000 UTC m=+0.143246324 container start c2abed831c7a4d2e8eff4389704376ae2635373ee7435c7edafb512381bed24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_tharp, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:48:26 np0005532048 podman[209751]: 2025-11-22 08:48:26.570546894 +0000 UTC m=+0.150467705 container attach c2abed831c7a4d2e8eff4389704376ae2635373ee7435c7edafb512381bed24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:48:26 np0005532048 python3.9[209853]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763801305.7965512-775-262302837306158/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:27 np0005532048 python3.9[210005]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]: {
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:        "osd_id": 1,
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:        "type": "bluestore"
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:    },
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:        "osd_id": 0,
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:        "type": "bluestore"
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:    },
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:        "osd_id": 2,
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:        "type": "bluestore"
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]:    }
Nov 22 03:48:27 np0005532048 unruffled_tharp[209796]: }
Nov 22 03:48:27 np0005532048 systemd[1]: libpod-c2abed831c7a4d2e8eff4389704376ae2635373ee7435c7edafb512381bed24c.scope: Deactivated successfully.
Nov 22 03:48:27 np0005532048 systemd[1]: libpod-c2abed831c7a4d2e8eff4389704376ae2635373ee7435c7edafb512381bed24c.scope: Consumed 1.019s CPU time.
Nov 22 03:48:27 np0005532048 podman[209751]: 2025-11-22 08:48:27.583693731 +0000 UTC m=+1.163614592 container died c2abed831c7a4d2e8eff4389704376ae2635373ee7435c7edafb512381bed24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_tharp, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:48:27 np0005532048 systemd[1]: var-lib-containers-storage-overlay-42f6ee9affd40bf89a990d1da2b43d89af10f14800121dcd09f23afd3d55d1ed-merged.mount: Deactivated successfully.
Nov 22 03:48:27 np0005532048 podman[209751]: 2025-11-22 08:48:27.652614091 +0000 UTC m=+1.232534912 container remove c2abed831c7a4d2e8eff4389704376ae2635373ee7435c7edafb512381bed24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:48:27 np0005532048 systemd[1]: libpod-conmon-c2abed831c7a4d2e8eff4389704376ae2635373ee7435c7edafb512381bed24c.scope: Deactivated successfully.
Nov 22 03:48:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:48:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:48:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:48:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:48:27 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 8244a714-9da6-465c-863d-c4785ebf4949 does not exist
Nov 22 03:48:27 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d39575a6-16a1-43ca-973d-bd430abcbe56 does not exist
Nov 22 03:48:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:48:27.926 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:48:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:48:27.928 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:48:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:48:27.928 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:48:27 np0005532048 python3.9[210192]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763801306.989825-775-240407664935982/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:28 np0005532048 python3.9[210370]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:28 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:48:28 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:48:29 np0005532048 python3.9[210493]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763801308.1225016-775-64123223503655/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:29 np0005532048 podman[210518]: 2025-11-22 08:48:29.380388714 +0000 UTC m=+0.063344545 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 22 03:48:29 np0005532048 python3.9[210666]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:30 np0005532048 python3.9[210789]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763801309.3310034-775-225924181627162/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:30 np0005532048 python3.9[210941]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:31 np0005532048 python3.9[211064]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763801310.4572923-775-209435147590673/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:32 np0005532048 python3.9[211216]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:48:32 np0005532048 python3.9[211339]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763801311.633777-775-115268388716323/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:33 np0005532048 python3.9[211491]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:33 np0005532048 python3.9[211614]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763801312.855636-775-181778441399390/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:34 np0005532048 python3.9[211766]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:35 np0005532048 python3.9[211889]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763801314.128434-775-216980358072786/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:36 np0005532048 python3.9[212041]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:36 np0005532048 python3.9[212164]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763801315.341704-775-24062736642309/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:37 np0005532048 python3.9[212316]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:48:37 np0005532048 python3.9[212439]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763801316.8776054-775-99544952697924/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:38 np0005532048 python3.9[212591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:39 np0005532048 python3.9[212714]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763801318.0742369-775-62837360600306/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:39 np0005532048 python3.9[212866]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:40 np0005532048 python3.9[212989]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763801319.3185158-775-54324790354934/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:41 np0005532048 python3.9[213141]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:48:41 np0005532048 podman[213264]: 2025-11-22 08:48:41.537761672 +0000 UTC m=+0.103351972 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:48:41 np0005532048 python3.9[213265]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763801320.5761747-775-50959804829185/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:42 np0005532048 python3.9[213440]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:48:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:48:43 np0005532048 python3.9[213595]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 22 03:48:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:45 np0005532048 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 22 03:48:45 np0005532048 python3.9[213751]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:46 np0005532048 python3.9[213903]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:46 np0005532048 python3.9[214055]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:47 np0005532048 python3.9[214207]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:48:48 np0005532048 python3.9[214359]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:48 np0005532048 python3.9[214511]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:49 np0005532048 python3.9[214663]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:50 np0005532048 python3.9[214815]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:50 np0005532048 python3.9[214967]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:51 np0005532048 python3.9[215119]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:48:52
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'images', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'backups', 'vms']
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:48:52 np0005532048 python3.9[215271]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:48:52 np0005532048 systemd[1]: Reloading.
Nov 22 03:48:52 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:48:52 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:48:52 np0005532048 systemd[1]: Starting libvirt logging daemon socket...
Nov 22 03:48:52 np0005532048 systemd[1]: Listening on libvirt logging daemon socket.
Nov 22 03:48:52 np0005532048 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 22 03:48:52 np0005532048 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 22 03:48:52 np0005532048 systemd[1]: Starting libvirt logging daemon...
Nov 22 03:48:52 np0005532048 systemd[1]: Started libvirt logging daemon.
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:48:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:48:53 np0005532048 python3.9[215464]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:48:53 np0005532048 systemd[1]: Reloading.
Nov 22 03:48:53 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:48:53 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:48:54 np0005532048 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 22 03:48:54 np0005532048 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 22 03:48:54 np0005532048 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 22 03:48:54 np0005532048 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 22 03:48:54 np0005532048 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 22 03:48:54 np0005532048 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 22 03:48:54 np0005532048 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 22 03:48:54 np0005532048 systemd[1]: Starting libvirt nodedev daemon...
Nov 22 03:48:54 np0005532048 systemd[1]: Started libvirt nodedev daemon.
Nov 22 03:48:54 np0005532048 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 22 03:48:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:54 np0005532048 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 22 03:48:54 np0005532048 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 22 03:48:54 np0005532048 python3.9[215688]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:48:54 np0005532048 systemd[1]: Reloading.
Nov 22 03:48:55 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:48:55 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:48:55 np0005532048 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 22 03:48:55 np0005532048 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 22 03:48:55 np0005532048 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 22 03:48:55 np0005532048 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 22 03:48:55 np0005532048 systemd[1]: Starting libvirt proxy daemon...
Nov 22 03:48:55 np0005532048 systemd[1]: Started libvirt proxy daemon.
Nov 22 03:48:55 np0005532048 setroubleshoot[215501]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 5dff8093-1f53-4b81-b80a-f743d564d124
Nov 22 03:48:55 np0005532048 setroubleshoot[215501]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 22 03:48:55 np0005532048 setroubleshoot[215501]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 5dff8093-1f53-4b81-b80a-f743d564d124
Nov 22 03:48:55 np0005532048 setroubleshoot[215501]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 22 03:48:56 np0005532048 python3.9[215901]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:48:56 np0005532048 systemd[1]: Reloading.
Nov 22 03:48:56 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:48:56 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:48:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:56 np0005532048 systemd[1]: Listening on libvirt locking daemon socket.
Nov 22 03:48:56 np0005532048 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 22 03:48:56 np0005532048 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 22 03:48:56 np0005532048 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 22 03:48:56 np0005532048 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 22 03:48:56 np0005532048 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 22 03:48:56 np0005532048 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 22 03:48:56 np0005532048 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 22 03:48:56 np0005532048 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 22 03:48:56 np0005532048 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 22 03:48:56 np0005532048 systemd[1]: Starting libvirt QEMU daemon...
Nov 22 03:48:56 np0005532048 systemd[1]: Started libvirt QEMU daemon.
Nov 22 03:48:57 np0005532048 python3.9[216116]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:48:57 np0005532048 systemd[1]: Reloading.
Nov 22 03:48:57 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:48:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:48:57 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:48:57 np0005532048 systemd[1]: Starting libvirt secret daemon socket...
Nov 22 03:48:57 np0005532048 systemd[1]: Listening on libvirt secret daemon socket.
Nov 22 03:48:57 np0005532048 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 22 03:48:57 np0005532048 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 22 03:48:57 np0005532048 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 22 03:48:57 np0005532048 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 22 03:48:57 np0005532048 systemd[1]: Starting libvirt secret daemon...
Nov 22 03:48:57 np0005532048 systemd[1]: Started libvirt secret daemon.
Nov 22 03:48:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:58 np0005532048 python3.9[216327]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:48:59 np0005532048 python3.9[216479]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 03:48:59 np0005532048 podman[216603]: 2025-11-22 08:48:59.935688032 +0000 UTC m=+0.068008842 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 03:49:00 np0005532048 python3.9[216650]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:49:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:00 np0005532048 python3.9[216804]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 03:49:01 np0005532048 python3.9[216954]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:49:02 np0005532048 python3.9[217075]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763801341.1909611-1133-56168569176442/.source.xml follow=False _original_basename=secret.xml.j2 checksum=75accf511c46f9ff8970a669cbc384817da4e681 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:49:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:03 np0005532048 python3.9[217227]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 34829716-a12c-57a6-8915-c1aa615c9d8a#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:49:04 np0005532048 python3.9[217389]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:05 np0005532048 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 22 03:49:05 np0005532048 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.064s CPU time.
Nov 22 03:49:05 np0005532048 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 22 03:49:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:07 np0005532048 python3.9[217852]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:08 np0005532048 python3.9[218004]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:49:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:08 np0005532048 python3.9[218127]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763801347.5263355-1188-7077133674468/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:09 np0005532048 python3.9[218279]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:10 np0005532048 python3.9[218431]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:49:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:10 np0005532048 python3.9[218509]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:11 np0005532048 python3.9[218661]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:49:11 np0005532048 podman[218711]: 2025-11-22 08:49:11.794219332 +0000 UTC m=+0.133349948 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:49:11 np0005532048 python3.9[218756]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.0lh1mdhs recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:12 np0005532048 python3.9[218916]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:49:13 np0005532048 python3.9[218994]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:14 np0005532048 python3.9[219146]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:49:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:14 np0005532048 python3[219299]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 03:49:15 np0005532048 python3.9[219451]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:49:16 np0005532048 python3.9[219529]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:16 np0005532048 python3.9[219681]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:49:17 np0005532048 python3.9[219759]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:18 np0005532048 python3.9[219911]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:49:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:18 np0005532048 python3.9[219989]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:19 np0005532048 python3.9[220141]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:49:19 np0005532048 python3.9[220219]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:20 np0005532048 python3.9[220371]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:49:21 np0005532048 python3.9[220496]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763801359.941867-1313-270318133183794/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:21 np0005532048 python3.9[220648]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:22 np0005532048 python3.9[220800]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:49:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:49:22.669935) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801362670005, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1719, "num_deletes": 250, "total_data_size": 2897165, "memory_usage": 2939304, "flush_reason": "Manual Compaction"}
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 22 03:49:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:49:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:49:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:49:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:49:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:49:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801362697176, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1625526, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11932, "largest_seqno": 13650, "table_properties": {"data_size": 1619863, "index_size": 2802, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14151, "raw_average_key_size": 20, "raw_value_size": 1607409, "raw_average_value_size": 2286, "num_data_blocks": 129, "num_entries": 703, "num_filter_entries": 703, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763801159, "oldest_key_time": 1763801159, "file_creation_time": 1763801362, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 27292 microseconds, and 6027 cpu microseconds.
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:49:22.697234) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1625526 bytes OK
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:49:22.697257) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:49:22.703923) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:49:22.703948) EVENT_LOG_v1 {"time_micros": 1763801362703941, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:49:22.703970) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2889826, prev total WAL file size 2889826, number of live WAL files 2.
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:49:22.705122) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1587KB)], [29(7841KB)]
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801362705185, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9654934, "oldest_snapshot_seqno": -1}
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4071 keys, 7591053 bytes, temperature: kUnknown
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801362791033, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7591053, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7561705, "index_size": 18038, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 96920, "raw_average_key_size": 23, "raw_value_size": 7486196, "raw_average_value_size": 1838, "num_data_blocks": 780, "num_entries": 4071, "num_filter_entries": 4071, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763801362, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:49:22.791390) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7591053 bytes
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:49:22.795446) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.3 rd, 88.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.7 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(10.6) write-amplify(4.7) OK, records in: 4489, records dropped: 418 output_compression: NoCompression
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:49:22.795472) EVENT_LOG_v1 {"time_micros": 1763801362795460, "job": 12, "event": "compaction_finished", "compaction_time_micros": 85978, "compaction_time_cpu_micros": 18181, "output_level": 6, "num_output_files": 1, "total_output_size": 7591053, "num_input_records": 4489, "num_output_records": 4071, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801362795991, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801362797980, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:49:22.705043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:49:22.798142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:49:22.798150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:49:22.798153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:49:22.798155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:49:22 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:49:22.798157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:49:23 np0005532048 python3.9[220955]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:23 np0005532048 python3.9[221107]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:49:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:24 np0005532048 python3.9[221260]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:49:25 np0005532048 python3.9[221414]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:49:26 np0005532048 python3.9[221569]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:26 np0005532048 python3.9[221721]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:49:27 np0005532048 python3.9[221844]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763801366.3242328-1385-226357543507973/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:49:27.928 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:49:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:49:27.931 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:49:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:49:27.931 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:49:28 np0005532048 python3.9[222019]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:49:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:49:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:49:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:49:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:49:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:49:28 np0005532048 python3.9[222239]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763801367.6032271-1400-49196214703124/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:28 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:49:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:49:29 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 8e68c6d1-4734-42de-911b-e218f01e3974 does not exist
Nov 22 03:49:29 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 83ce0331-08f8-4fa5-af3d-5648358b95d0 does not exist
Nov 22 03:49:29 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev e6f7797d-5b61-4715-ba45-0ec8205820c5 does not exist
Nov 22 03:49:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:49:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:49:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:49:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:49:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:49:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:49:29 np0005532048 python3.9[222429]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:49:29 np0005532048 podman[222613]: 2025-11-22 08:49:29.712806308 +0000 UTC m=+0.106654737 container create 41558cf1a9f62e9c39ed21bf48da15b533923914f26ad46783e7d4fdc0f578de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_boyd, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 03:49:29 np0005532048 podman[222613]: 2025-11-22 08:49:29.62904588 +0000 UTC m=+0.022894299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:49:29 np0005532048 systemd[1]: Started libpod-conmon-41558cf1a9f62e9c39ed21bf48da15b533923914f26ad46783e7d4fdc0f578de.scope.
Nov 22 03:49:29 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:49:29 np0005532048 podman[222613]: 2025-11-22 08:49:29.93023106 +0000 UTC m=+0.324079519 container init 41558cf1a9f62e9c39ed21bf48da15b533923914f26ad46783e7d4fdc0f578de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_boyd, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:49:29 np0005532048 podman[222613]: 2025-11-22 08:49:29.940675014 +0000 UTC m=+0.334523423 container start 41558cf1a9f62e9c39ed21bf48da15b533923914f26ad46783e7d4fdc0f578de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_boyd, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:49:29 np0005532048 elegant_boyd[222682]: 167 167
Nov 22 03:49:29 np0005532048 systemd[1]: libpod-41558cf1a9f62e9c39ed21bf48da15b533923914f26ad46783e7d4fdc0f578de.scope: Deactivated successfully.
Nov 22 03:49:30 np0005532048 python3.9[222679]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763801368.8941162-1415-250902159576572/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:30 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:49:30 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:49:30 np0005532048 podman[222613]: 2025-11-22 08:49:30.104733257 +0000 UTC m=+0.498581666 container attach 41558cf1a9f62e9c39ed21bf48da15b533923914f26ad46783e7d4fdc0f578de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_boyd, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:49:30 np0005532048 podman[222613]: 2025-11-22 08:49:30.105984287 +0000 UTC m=+0.499832696 container died 41558cf1a9f62e9c39ed21bf48da15b533923914f26ad46783e7d4fdc0f578de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:49:30 np0005532048 systemd[1]: var-lib-containers-storage-overlay-aae64b1df3324330a9805e328c7d985e077cbb8ec5e8f8ee0239e38d13ddce35-merged.mount: Deactivated successfully.
Nov 22 03:49:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:30 np0005532048 podman[222613]: 2025-11-22 08:49:30.46806215 +0000 UTC m=+0.861910559 container remove 41558cf1a9f62e9c39ed21bf48da15b533923914f26ad46783e7d4fdc0f578de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_boyd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:49:30 np0005532048 systemd[1]: libpod-conmon-41558cf1a9f62e9c39ed21bf48da15b533923914f26ad46783e7d4fdc0f578de.scope: Deactivated successfully.
Nov 22 03:49:30 np0005532048 podman[222687]: 2025-11-22 08:49:30.588214414 +0000 UTC m=+0.601445199 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 22 03:49:30 np0005532048 podman[222876]: 2025-11-22 08:49:30.651648128 +0000 UTC m=+0.056938097 container create 67295927a6440be201e09ecf32ed75fe2ce0b2fb68418b738eeb90293b5c134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:49:30 np0005532048 systemd[1]: Started libpod-conmon-67295927a6440be201e09ecf32ed75fe2ce0b2fb68418b738eeb90293b5c134b.scope.
Nov 22 03:49:30 np0005532048 podman[222876]: 2025-11-22 08:49:30.622601861 +0000 UTC m=+0.027891860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:49:30 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:49:30 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf29c8b01bb80b110d616e8d6eeecb62356e181ce057c16bc2efb47237d5876a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:30 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf29c8b01bb80b110d616e8d6eeecb62356e181ce057c16bc2efb47237d5876a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:30 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf29c8b01bb80b110d616e8d6eeecb62356e181ce057c16bc2efb47237d5876a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:30 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf29c8b01bb80b110d616e8d6eeecb62356e181ce057c16bc2efb47237d5876a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:30 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf29c8b01bb80b110d616e8d6eeecb62356e181ce057c16bc2efb47237d5876a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:30 np0005532048 podman[222876]: 2025-11-22 08:49:30.800834499 +0000 UTC m=+0.206124488 container init 67295927a6440be201e09ecf32ed75fe2ce0b2fb68418b738eeb90293b5c134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:49:30 np0005532048 podman[222876]: 2025-11-22 08:49:30.812462192 +0000 UTC m=+0.217752161 container start 67295927a6440be201e09ecf32ed75fe2ce0b2fb68418b738eeb90293b5c134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_leavitt, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:49:30 np0005532048 podman[222876]: 2025-11-22 08:49:30.824722581 +0000 UTC m=+0.230012570 container attach 67295927a6440be201e09ecf32ed75fe2ce0b2fb68418b738eeb90293b5c134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:49:30 np0005532048 python3.9[222864]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:49:30 np0005532048 systemd[1]: Reloading.
Nov 22 03:49:31 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:49:31 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:49:31 np0005532048 systemd[1]: Reached target edpm_libvirt.target.
Nov 22 03:49:31 np0005532048 elastic_leavitt[222892]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:49:31 np0005532048 elastic_leavitt[222892]: --> relative data size: 1.0
Nov 22 03:49:31 np0005532048 elastic_leavitt[222892]: --> All data devices are unavailable
Nov 22 03:49:32 np0005532048 systemd[1]: libpod-67295927a6440be201e09ecf32ed75fe2ce0b2fb68418b738eeb90293b5c134b.scope: Deactivated successfully.
Nov 22 03:49:32 np0005532048 systemd[1]: libpod-67295927a6440be201e09ecf32ed75fe2ce0b2fb68418b738eeb90293b5c134b.scope: Consumed 1.130s CPU time.
Nov 22 03:49:32 np0005532048 podman[222876]: 2025-11-22 08:49:32.012618442 +0000 UTC m=+1.417908411 container died 67295927a6440be201e09ecf32ed75fe2ce0b2fb68418b738eeb90293b5c134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_leavitt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 22 03:49:32 np0005532048 python3.9[223100]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 22 03:49:32 np0005532048 systemd[1]: Reloading.
Nov 22 03:49:32 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:49:32 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:49:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:32 np0005532048 systemd[1]: var-lib-containers-storage-overlay-cf29c8b01bb80b110d616e8d6eeecb62356e181ce057c16bc2efb47237d5876a-merged.mount: Deactivated successfully.
Nov 22 03:49:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:32 np0005532048 systemd[1]: Reloading.
Nov 22 03:49:32 np0005532048 podman[222876]: 2025-11-22 08:49:32.678439318 +0000 UTC m=+2.083729287 container remove 67295927a6440be201e09ecf32ed75fe2ce0b2fb68418b738eeb90293b5c134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_leavitt, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:49:32 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:49:32 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:49:33 np0005532048 systemd[1]: libpod-conmon-67295927a6440be201e09ecf32ed75fe2ce0b2fb68418b738eeb90293b5c134b.scope: Deactivated successfully.
Nov 22 03:49:33 np0005532048 systemd[1]: session-49.scope: Deactivated successfully.
Nov 22 03:49:33 np0005532048 systemd[1]: session-49.scope: Consumed 3min 38.336s CPU time.
Nov 22 03:49:33 np0005532048 systemd-logind[822]: Session 49 logged out. Waiting for processes to exit.
Nov 22 03:49:33 np0005532048 systemd-logind[822]: Removed session 49.
Nov 22 03:49:33 np0005532048 podman[223363]: 2025-11-22 08:49:33.594513583 +0000 UTC m=+0.075305874 container create 7cb230cf758e7e3919853f3fefd1951a3f287b52b456cbd4140aaa9fe7603398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cartwright, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:49:33 np0005532048 podman[223363]: 2025-11-22 08:49:33.541767589 +0000 UTC m=+0.022559900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:49:33 np0005532048 systemd[1]: Started libpod-conmon-7cb230cf758e7e3919853f3fefd1951a3f287b52b456cbd4140aaa9fe7603398.scope.
Nov 22 03:49:33 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:49:33 np0005532048 podman[223363]: 2025-11-22 08:49:33.741212494 +0000 UTC m=+0.222004785 container init 7cb230cf758e7e3919853f3fefd1951a3f287b52b456cbd4140aaa9fe7603398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:49:33 np0005532048 podman[223363]: 2025-11-22 08:49:33.752881858 +0000 UTC m=+0.233674149 container start 7cb230cf758e7e3919853f3fefd1951a3f287b52b456cbd4140aaa9fe7603398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cartwright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:49:33 np0005532048 sharp_cartwright[223379]: 167 167
Nov 22 03:49:33 np0005532048 systemd[1]: libpod-7cb230cf758e7e3919853f3fefd1951a3f287b52b456cbd4140aaa9fe7603398.scope: Deactivated successfully.
Nov 22 03:49:33 np0005532048 podman[223363]: 2025-11-22 08:49:33.765348501 +0000 UTC m=+0.246141092 container attach 7cb230cf758e7e3919853f3fefd1951a3f287b52b456cbd4140aaa9fe7603398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cartwright, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:49:33 np0005532048 podman[223363]: 2025-11-22 08:49:33.766891038 +0000 UTC m=+0.247683329 container died 7cb230cf758e7e3919853f3fefd1951a3f287b52b456cbd4140aaa9fe7603398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 03:49:33 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e67f2cf0b0cd3f519c6712c701403bb0b3f5e09b70f26a7d807b532f99304c16-merged.mount: Deactivated successfully.
Nov 22 03:49:33 np0005532048 podman[223363]: 2025-11-22 08:49:33.863212823 +0000 UTC m=+0.344005114 container remove 7cb230cf758e7e3919853f3fefd1951a3f287b52b456cbd4140aaa9fe7603398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:49:33 np0005532048 systemd[1]: libpod-conmon-7cb230cf758e7e3919853f3fefd1951a3f287b52b456cbd4140aaa9fe7603398.scope: Deactivated successfully.
Nov 22 03:49:34 np0005532048 podman[223405]: 2025-11-22 08:49:34.032158284 +0000 UTC m=+0.046788289 container create 5dbda4310a8bce6411be3f6d0e238065de2ad3f98600725522cd4a6cb71fa879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ishizaka, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 03:49:34 np0005532048 systemd[1]: Started libpod-conmon-5dbda4310a8bce6411be3f6d0e238065de2ad3f98600725522cd4a6cb71fa879.scope.
Nov 22 03:49:34 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:49:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c61fb4a267368644758d28fe75b0d722861ba7f753b64fd9f0f217476a3f995/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c61fb4a267368644758d28fe75b0d722861ba7f753b64fd9f0f217476a3f995/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c61fb4a267368644758d28fe75b0d722861ba7f753b64fd9f0f217476a3f995/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c61fb4a267368644758d28fe75b0d722861ba7f753b64fd9f0f217476a3f995/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:34 np0005532048 podman[223405]: 2025-11-22 08:49:34.011688877 +0000 UTC m=+0.026318912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:49:34 np0005532048 podman[223405]: 2025-11-22 08:49:34.120512565 +0000 UTC m=+0.135142590 container init 5dbda4310a8bce6411be3f6d0e238065de2ad3f98600725522cd4a6cb71fa879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:49:34 np0005532048 podman[223405]: 2025-11-22 08:49:34.127069744 +0000 UTC m=+0.141699749 container start 5dbda4310a8bce6411be3f6d0e238065de2ad3f98600725522cd4a6cb71fa879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:49:34 np0005532048 podman[223405]: 2025-11-22 08:49:34.138906423 +0000 UTC m=+0.153536428 container attach 5dbda4310a8bce6411be3f6d0e238065de2ad3f98600725522cd4a6cb71fa879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ishizaka, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:49:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]: {
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:    "0": [
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:        {
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "devices": [
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "/dev/loop3"
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            ],
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "lv_name": "ceph_lv0",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "lv_size": "21470642176",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "name": "ceph_lv0",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "tags": {
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.cluster_name": "ceph",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.crush_device_class": "",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.encrypted": "0",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.osd_id": "0",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.type": "block",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.vdo": "0"
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            },
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "type": "block",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "vg_name": "ceph_vg0"
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:        }
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:    ],
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:    "1": [
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:        {
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "devices": [
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "/dev/loop4"
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            ],
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "lv_name": "ceph_lv1",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "lv_size": "21470642176",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "name": "ceph_lv1",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "tags": {
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.cluster_name": "ceph",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.crush_device_class": "",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.encrypted": "0",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.osd_id": "1",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.type": "block",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.vdo": "0"
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            },
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "type": "block",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "vg_name": "ceph_vg1"
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:        }
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:    ],
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:    "2": [
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:        {
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "devices": [
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "/dev/loop5"
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            ],
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "lv_name": "ceph_lv2",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "lv_size": "21470642176",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "name": "ceph_lv2",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "tags": {
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.cluster_name": "ceph",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.crush_device_class": "",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.encrypted": "0",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.osd_id": "2",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.type": "block",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:                "ceph.vdo": "0"
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            },
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "type": "block",
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:            "vg_name": "ceph_vg2"
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:        }
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]:    ]
Nov 22 03:49:34 np0005532048 cranky_ishizaka[223422]: }
Nov 22 03:49:34 np0005532048 systemd[1]: libpod-5dbda4310a8bce6411be3f6d0e238065de2ad3f98600725522cd4a6cb71fa879.scope: Deactivated successfully.
Nov 22 03:49:34 np0005532048 podman[223405]: 2025-11-22 08:49:34.980124056 +0000 UTC m=+0.994754071 container died 5dbda4310a8bce6411be3f6d0e238065de2ad3f98600725522cd4a6cb71fa879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ishizaka, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 03:49:35 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5c61fb4a267368644758d28fe75b0d722861ba7f753b64fd9f0f217476a3f995-merged.mount: Deactivated successfully.
Nov 22 03:49:35 np0005532048 podman[223405]: 2025-11-22 08:49:35.098268492 +0000 UTC m=+1.112898497 container remove 5dbda4310a8bce6411be3f6d0e238065de2ad3f98600725522cd4a6cb71fa879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ishizaka, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 03:49:35 np0005532048 systemd[1]: libpod-conmon-5dbda4310a8bce6411be3f6d0e238065de2ad3f98600725522cd4a6cb71fa879.scope: Deactivated successfully.
Nov 22 03:49:35 np0005532048 podman[223584]: 2025-11-22 08:49:35.746559591 +0000 UTC m=+0.053829072 container create 8970e621b5b1ea9e954b15d26050fbee72ae0b1f6de6856bd19df5370be56eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 03:49:35 np0005532048 systemd[1]: Started libpod-conmon-8970e621b5b1ea9e954b15d26050fbee72ae0b1f6de6856bd19df5370be56eca.scope.
Nov 22 03:49:35 np0005532048 podman[223584]: 2025-11-22 08:49:35.719163643 +0000 UTC m=+0.026433144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:49:35 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:49:35 np0005532048 podman[223584]: 2025-11-22 08:49:35.86406658 +0000 UTC m=+0.171336081 container init 8970e621b5b1ea9e954b15d26050fbee72ae0b1f6de6856bd19df5370be56eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_heisenberg, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 03:49:35 np0005532048 podman[223584]: 2025-11-22 08:49:35.870943528 +0000 UTC m=+0.178213009 container start 8970e621b5b1ea9e954b15d26050fbee72ae0b1f6de6856bd19df5370be56eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_heisenberg, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:49:35 np0005532048 systemd[1]: libpod-8970e621b5b1ea9e954b15d26050fbee72ae0b1f6de6856bd19df5370be56eca.scope: Deactivated successfully.
Nov 22 03:49:35 np0005532048 elated_heisenberg[223601]: 167 167
Nov 22 03:49:35 np0005532048 conmon[223601]: conmon 8970e621b5b1ea9e954b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8970e621b5b1ea9e954b15d26050fbee72ae0b1f6de6856bd19df5370be56eca.scope/container/memory.events
Nov 22 03:49:35 np0005532048 podman[223584]: 2025-11-22 08:49:35.88541262 +0000 UTC m=+0.192682131 container attach 8970e621b5b1ea9e954b15d26050fbee72ae0b1f6de6856bd19df5370be56eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:49:35 np0005532048 podman[223584]: 2025-11-22 08:49:35.88624668 +0000 UTC m=+0.193516161 container died 8970e621b5b1ea9e954b15d26050fbee72ae0b1f6de6856bd19df5370be56eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:49:35 np0005532048 systemd[1]: var-lib-containers-storage-overlay-10ee677f302728faf1fdd90f93d8edec39a3ad65ed13eed91e101aa844f1b0d1-merged.mount: Deactivated successfully.
Nov 22 03:49:35 np0005532048 podman[223584]: 2025-11-22 08:49:35.987155846 +0000 UTC m=+0.294425327 container remove 8970e621b5b1ea9e954b15d26050fbee72ae0b1f6de6856bd19df5370be56eca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:49:35 np0005532048 systemd[1]: libpod-conmon-8970e621b5b1ea9e954b15d26050fbee72ae0b1f6de6856bd19df5370be56eca.scope: Deactivated successfully.
Nov 22 03:49:36 np0005532048 podman[223626]: 2025-11-22 08:49:36.158742622 +0000 UTC m=+0.025976943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:49:36 np0005532048 podman[223626]: 2025-11-22 08:49:36.417998483 +0000 UTC m=+0.285232784 container create 48ddd40c03902ba2de3c75e2fd8405a4457f836276ecf418fb793b4c529ab008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hertz, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:49:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:36 np0005532048 systemd[1]: Started libpod-conmon-48ddd40c03902ba2de3c75e2fd8405a4457f836276ecf418fb793b4c529ab008.scope.
Nov 22 03:49:36 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:49:36 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b2e6dd664cdaa238c2162c325d71e509d22e1389b606b3c4e7ff741583e44c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:36 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b2e6dd664cdaa238c2162c325d71e509d22e1389b606b3c4e7ff741583e44c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:36 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b2e6dd664cdaa238c2162c325d71e509d22e1389b606b3c4e7ff741583e44c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:36 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b2e6dd664cdaa238c2162c325d71e509d22e1389b606b3c4e7ff741583e44c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:36 np0005532048 podman[223626]: 2025-11-22 08:49:36.780967997 +0000 UTC m=+0.648202318 container init 48ddd40c03902ba2de3c75e2fd8405a4457f836276ecf418fb793b4c529ab008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 22 03:49:36 np0005532048 podman[223626]: 2025-11-22 08:49:36.789023733 +0000 UTC m=+0.656258034 container start 48ddd40c03902ba2de3c75e2fd8405a4457f836276ecf418fb793b4c529ab008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hertz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:49:36 np0005532048 podman[223626]: 2025-11-22 08:49:36.893041815 +0000 UTC m=+0.760276146 container attach 48ddd40c03902ba2de3c75e2fd8405a4457f836276ecf418fb793b4c529ab008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 22 03:49:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:37 np0005532048 angry_hertz[223642]: {
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:        "osd_id": 1,
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:        "type": "bluestore"
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:    },
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:        "osd_id": 0,
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:        "type": "bluestore"
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:    },
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:        "osd_id": 2,
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:        "type": "bluestore"
Nov 22 03:49:37 np0005532048 angry_hertz[223642]:    }
Nov 22 03:49:37 np0005532048 angry_hertz[223642]: }
Nov 22 03:49:37 np0005532048 systemd[1]: libpod-48ddd40c03902ba2de3c75e2fd8405a4457f836276ecf418fb793b4c529ab008.scope: Deactivated successfully.
Nov 22 03:49:37 np0005532048 systemd[1]: libpod-48ddd40c03902ba2de3c75e2fd8405a4457f836276ecf418fb793b4c529ab008.scope: Consumed 1.090s CPU time.
Nov 22 03:49:37 np0005532048 podman[223626]: 2025-11-22 08:49:37.879920974 +0000 UTC m=+1.747155295 container died 48ddd40c03902ba2de3c75e2fd8405a4457f836276ecf418fb793b4c529ab008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hertz, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:49:37 np0005532048 systemd[1]: var-lib-containers-storage-overlay-54b2e6dd664cdaa238c2162c325d71e509d22e1389b606b3c4e7ff741583e44c-merged.mount: Deactivated successfully.
Nov 22 03:49:38 np0005532048 podman[223626]: 2025-11-22 08:49:38.02683158 +0000 UTC m=+1.894065871 container remove 48ddd40c03902ba2de3c75e2fd8405a4457f836276ecf418fb793b4c529ab008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 03:49:38 np0005532048 systemd[1]: libpod-conmon-48ddd40c03902ba2de3c75e2fd8405a4457f836276ecf418fb793b4c529ab008.scope: Deactivated successfully.
Nov 22 03:49:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:49:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:49:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:49:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:49:38 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev bb1b046f-2839-4aa3-a3d9-f6d434926e0a does not exist
Nov 22 03:49:38 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev fedb42a5-592b-468e-a118-e746b0c0600c does not exist
Nov 22 03:49:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:39 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:49:39 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:49:39 np0005532048 systemd-logind[822]: New session 50 of user zuul.
Nov 22 03:49:39 np0005532048 systemd[1]: Started Session 50 of User zuul.
Nov 22 03:49:40 np0005532048 python3.9[223890]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:49:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:41 np0005532048 python3.9[224044]: ansible-ansible.builtin.service_facts Invoked
Nov 22 03:49:41 np0005532048 network[224061]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 03:49:41 np0005532048 network[224062]: 'network-scripts' will be removed from distribution in near future.
Nov 22 03:49:41 np0005532048 network[224063]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 03:49:41 np0005532048 podman[224068]: 2025-11-22 08:49:41.971421575 +0000 UTC m=+0.110214863 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 03:49:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:45 np0005532048 python3.9[224362]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:49:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:46 np0005532048 python3.9[224446]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:49:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:49:52
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'default.rgw.control']
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:49:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:49:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:55 np0005532048 python3.9[224600]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:49:55 np0005532048 python3.9[224752]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:49:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:56 np0005532048 python3.9[224905]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:49:57 np0005532048 python3.9[225057]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:49:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:57 np0005532048 python3.9[225210]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:49:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:58 np0005532048 python3.9[225333]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763801397.4374843-95-28138094442077/.source.iscsi _original_basename=.6rlq1t77 follow=False checksum=9ca3ee7661afcf9d4967a12b93fe0bace7d7b073 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:49:59 np0005532048 python3.9[225485]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:00 np0005532048 python3.9[225637]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:01 np0005532048 podman[225761]: 2025-11-22 08:50:01.172357724 +0000 UTC m=+0.057152061 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:50:01 np0005532048 python3.9[225808]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:50:01 np0005532048 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:50:02 np0005532048 python3.9[225964]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:50:02 np0005532048 systemd[1]: Reloading.
Nov 22 03:50:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:02 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:50:02 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:50:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:02 np0005532048 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 22 03:50:02 np0005532048 systemd[1]: Starting Open-iSCSI...
Nov 22 03:50:02 np0005532048 kernel: Loading iSCSI transport class v2.0-870.
Nov 22 03:50:02 np0005532048 systemd[1]: Started Open-iSCSI.
Nov 22 03:50:02 np0005532048 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 22 03:50:02 np0005532048 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 22 03:50:03 np0005532048 python3.9[226165]: ansible-ansible.builtin.service_facts Invoked
Nov 22 03:50:03 np0005532048 network[226182]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 03:50:03 np0005532048 network[226183]: 'network-scripts' will be removed from distribution in near future.
Nov 22 03:50:03 np0005532048 network[226184]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 03:50:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:07 np0005532048 python3.9[226456]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 03:50:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:08 np0005532048 python3.9[226608]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 22 03:50:09 np0005532048 python3.9[226764]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:50:10 np0005532048 python3.9[226887]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763801408.8574739-172-248432734832482/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:10 np0005532048 python3.9[227039]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:12 np0005532048 python3.9[227191]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:50:12 np0005532048 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 22 03:50:12 np0005532048 systemd[1]: Stopped Load Kernel Modules.
Nov 22 03:50:12 np0005532048 systemd[1]: Stopping Load Kernel Modules...
Nov 22 03:50:12 np0005532048 systemd[1]: Starting Load Kernel Modules...
Nov 22 03:50:12 np0005532048 systemd[1]: Finished Load Kernel Modules.
Nov 22 03:50:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:12 np0005532048 python3.9[227347]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:50:13 np0005532048 podman[227453]: 2025-11-22 08:50:13.451364644 +0000 UTC m=+0.133850388 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:50:13 np0005532048 python3.9[227524]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:50:14 np0005532048 python3.9[227676]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:50:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:15 np0005532048 python3.9[227828]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:50:15 np0005532048 python3.9[227951]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763801414.6714864-230-37469494720318/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:16 np0005532048 python3.9[228103]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:50:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:22 np0005532048 ceph-mds[101348]: mds.beacon.cephfs.compute-0.myffln missed beacon ack from the monitors
Nov 22 03:50:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:50:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:50:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:50:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:50:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:50:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:50:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:50:27.928 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:50:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:50:27.931 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:50:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:50:27.931 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:50:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:28 np0005532048 ceph-mds[101348]: mds.beacon.cephfs.compute-0.myffln missed beacon ack from the monitors
Nov 22 03:50:29 np0005532048 python3.9[228256]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 16.7936 seconds
Nov 22 03:50:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:30 np0005532048 python3.9[228409]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:30 np0005532048 python3.9[228561]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:31 np0005532048 podman[228635]: 2025-11-22 08:50:31.395646645 +0000 UTC m=+0.084194052 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 03:50:31 np0005532048 python3.9[228733]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:32 np0005532048 python3.9[228885]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:33 np0005532048 python3.9[229037]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:33 np0005532048 python3.9[229189]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:34 np0005532048 python3.9[229341]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:50:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:35 np0005532048 python3.9[229495]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:36 np0005532048 python3.9[229647]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:50:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:36 np0005532048 python3.9[229799]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:50:37 np0005532048 python3.9[229877]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:50:37 np0005532048 python3.9[230029]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:50:38 np0005532048 python3.9[230107]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:50:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:50:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:50:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:50:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:50:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:50:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:50:39 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d31069ef-0f11-4d8d-acb5-286753060c0c does not exist
Nov 22 03:50:39 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 6bc0d710-7501-4d1a-8e37-2ae8226292ee does not exist
Nov 22 03:50:39 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 6cc68a76-aa8a-44cf-ab74-a38ac618f308 does not exist
Nov 22 03:50:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:50:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:50:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:50:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:50:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:50:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:50:39 np0005532048 python3.9[230390]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:39 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:50:39 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:50:39 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:50:39 np0005532048 podman[230683]: 2025-11-22 08:50:39.79525366 +0000 UTC m=+0.026226061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:50:39 np0005532048 podman[230683]: 2025-11-22 08:50:39.929790793 +0000 UTC m=+0.160763174 container create f8cc87f07f82351898c9e4dc253a3278507bbb7e52fd248df048967cc88cab25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williamson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:50:39 np0005532048 python3.9[230675]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:50:40 np0005532048 systemd[1]: Started libpod-conmon-f8cc87f07f82351898c9e4dc253a3278507bbb7e52fd248df048967cc88cab25.scope.
Nov 22 03:50:40 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:50:40 np0005532048 podman[230683]: 2025-11-22 08:50:40.229078238 +0000 UTC m=+0.460050639 container init f8cc87f07f82351898c9e4dc253a3278507bbb7e52fd248df048967cc88cab25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:50:40 np0005532048 podman[230683]: 2025-11-22 08:50:40.24080629 +0000 UTC m=+0.471778671 container start f8cc87f07f82351898c9e4dc253a3278507bbb7e52fd248df048967cc88cab25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williamson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 22 03:50:40 np0005532048 systemd[1]: libpod-f8cc87f07f82351898c9e4dc253a3278507bbb7e52fd248df048967cc88cab25.scope: Deactivated successfully.
Nov 22 03:50:40 np0005532048 affectionate_williamson[230724]: 167 167
Nov 22 03:50:40 np0005532048 conmon[230724]: conmon f8cc87f07f82351898c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f8cc87f07f82351898c9e4dc253a3278507bbb7e52fd248df048967cc88cab25.scope/container/memory.events
Nov 22 03:50:40 np0005532048 podman[230683]: 2025-11-22 08:50:40.403804269 +0000 UTC m=+0.634776650 container attach f8cc87f07f82351898c9e4dc253a3278507bbb7e52fd248df048967cc88cab25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 03:50:40 np0005532048 podman[230683]: 2025-11-22 08:50:40.405745617 +0000 UTC m=+0.636717998 container died f8cc87f07f82351898c9e4dc253a3278507bbb7e52fd248df048967cc88cab25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 03:50:40 np0005532048 python3.9[230779]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:40 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1cccea5bf8d39e31088ad49a928f093863b78e01321931f26036e5072578af52-merged.mount: Deactivated successfully.
Nov 22 03:50:41 np0005532048 python3.9[230946]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:50:41 np0005532048 podman[230683]: 2025-11-22 08:50:41.281374851 +0000 UTC m=+1.512347232 container remove f8cc87f07f82351898c9e4dc253a3278507bbb7e52fd248df048967cc88cab25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williamson, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:50:41 np0005532048 systemd[1]: libpod-conmon-f8cc87f07f82351898c9e4dc253a3278507bbb7e52fd248df048967cc88cab25.scope: Deactivated successfully.
Nov 22 03:50:41 np0005532048 podman[231011]: 2025-11-22 08:50:41.453529868 +0000 UTC m=+0.026809937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:50:41 np0005532048 podman[231011]: 2025-11-22 08:50:41.612002725 +0000 UTC m=+0.185282774 container create 74d3a4bd440725c5ca418b9459037b4975c71ee0cf911f0c330fd2d4409db82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:50:41 np0005532048 python3.9[231043]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:41 np0005532048 systemd[1]: Started libpod-conmon-74d3a4bd440725c5ca418b9459037b4975c71ee0cf911f0c330fd2d4409db82b.scope.
Nov 22 03:50:41 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:50:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c629c12d4ddef08510c57a83039c327e46c47ccfa70c5778ad06f89f8094793/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c629c12d4ddef08510c57a83039c327e46c47ccfa70c5778ad06f89f8094793/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c629c12d4ddef08510c57a83039c327e46c47ccfa70c5778ad06f89f8094793/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c629c12d4ddef08510c57a83039c327e46c47ccfa70c5778ad06f89f8094793/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c629c12d4ddef08510c57a83039c327e46c47ccfa70c5778ad06f89f8094793/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:41 np0005532048 podman[231011]: 2025-11-22 08:50:41.936414365 +0000 UTC m=+0.509694434 container init 74d3a4bd440725c5ca418b9459037b4975c71ee0cf911f0c330fd2d4409db82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:50:41 np0005532048 podman[231011]: 2025-11-22 08:50:41.94671211 +0000 UTC m=+0.519992159 container start 74d3a4bd440725c5ca418b9459037b4975c71ee0cf911f0c330fd2d4409db82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:50:42 np0005532048 podman[231011]: 2025-11-22 08:50:42.014525335 +0000 UTC m=+0.587805384 container attach 74d3a4bd440725c5ca418b9459037b4975c71ee0cf911f0c330fd2d4409db82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:50:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:42 np0005532048 python3.9[231205]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:50:42 np0005532048 systemd[1]: Reloading.
Nov 22 03:50:42 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:50:42 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:50:43 np0005532048 objective_mahavira[231050]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:50:43 np0005532048 objective_mahavira[231050]: --> relative data size: 1.0
Nov 22 03:50:43 np0005532048 objective_mahavira[231050]: --> All data devices are unavailable
Nov 22 03:50:43 np0005532048 systemd[1]: libpod-74d3a4bd440725c5ca418b9459037b4975c71ee0cf911f0c330fd2d4409db82b.scope: Deactivated successfully.
Nov 22 03:50:43 np0005532048 podman[231011]: 2025-11-22 08:50:43.12448041 +0000 UTC m=+1.697760459 container died 74d3a4bd440725c5ca418b9459037b4975c71ee0cf911f0c330fd2d4409db82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:50:43 np0005532048 systemd[1]: libpod-74d3a4bd440725c5ca418b9459037b4975c71ee0cf911f0c330fd2d4409db82b.scope: Consumed 1.112s CPU time.
Nov 22 03:50:43 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6c629c12d4ddef08510c57a83039c327e46c47ccfa70c5778ad06f89f8094793-merged.mount: Deactivated successfully.
Nov 22 03:50:43 np0005532048 podman[231011]: 2025-11-22 08:50:43.260274923 +0000 UTC m=+1.833554982 container remove 74d3a4bd440725c5ca418b9459037b4975c71ee0cf911f0c330fd2d4409db82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:50:43 np0005532048 systemd[1]: libpod-conmon-74d3a4bd440725c5ca418b9459037b4975c71ee0cf911f0c330fd2d4409db82b.scope: Deactivated successfully.
Nov 22 03:50:43 np0005532048 podman[231486]: 2025-11-22 08:50:43.596196869 +0000 UTC m=+0.096062928 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:50:43 np0005532048 python3.9[231500]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:50:43 np0005532048 podman[231621]: 2025-11-22 08:50:43.972275503 +0000 UTC m=+0.063542191 container create f47d60fa4e305ea899a397d2890b28923d4833300e7f64b68731742a81dd6649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lewin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:50:44 np0005532048 systemd[1]: Started libpod-conmon-f47d60fa4e305ea899a397d2890b28923d4833300e7f64b68731742a81dd6649.scope.
Nov 22 03:50:44 np0005532048 podman[231621]: 2025-11-22 08:50:43.932191336 +0000 UTC m=+0.023458044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:50:44 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:50:44 np0005532048 podman[231621]: 2025-11-22 08:50:44.094604301 +0000 UTC m=+0.185871019 container init f47d60fa4e305ea899a397d2890b28923d4833300e7f64b68731742a81dd6649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:50:44 np0005532048 podman[231621]: 2025-11-22 08:50:44.102831096 +0000 UTC m=+0.194097784 container start f47d60fa4e305ea899a397d2890b28923d4833300e7f64b68731742a81dd6649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 03:50:44 np0005532048 gracious_lewin[231688]: 167 167
Nov 22 03:50:44 np0005532048 systemd[1]: libpod-f47d60fa4e305ea899a397d2890b28923d4833300e7f64b68731742a81dd6649.scope: Deactivated successfully.
Nov 22 03:50:44 np0005532048 podman[231621]: 2025-11-22 08:50:44.117332386 +0000 UTC m=+0.208599074 container attach f47d60fa4e305ea899a397d2890b28923d4833300e7f64b68731742a81dd6649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 03:50:44 np0005532048 podman[231621]: 2025-11-22 08:50:44.131086868 +0000 UTC m=+0.222353556 container died f47d60fa4e305ea899a397d2890b28923d4833300e7f64b68731742a81dd6649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:50:44 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ba0f5b56633c8d31059cccef789d85f3680802a9cadc561f12e8351d797e3068-merged.mount: Deactivated successfully.
Nov 22 03:50:44 np0005532048 podman[231621]: 2025-11-22 08:50:44.225627807 +0000 UTC m=+0.316894485 container remove f47d60fa4e305ea899a397d2890b28923d4833300e7f64b68731742a81dd6649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lewin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:50:44 np0005532048 systemd[1]: libpod-conmon-f47d60fa4e305ea899a397d2890b28923d4833300e7f64b68731742a81dd6649.scope: Deactivated successfully.
Nov 22 03:50:44 np0005532048 python3.9[231692]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:44 np0005532048 podman[231733]: 2025-11-22 08:50:44.407995857 +0000 UTC m=+0.053739956 container create 159c25e5437c5c92c72ea23fd03ae4b5df8572a7951ca4ff3d201224b3d2768e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_dijkstra, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:50:44 np0005532048 systemd[1]: Started libpod-conmon-159c25e5437c5c92c72ea23fd03ae4b5df8572a7951ca4ff3d201224b3d2768e.scope.
Nov 22 03:50:44 np0005532048 podman[231733]: 2025-11-22 08:50:44.378412492 +0000 UTC m=+0.024156611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:50:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:44 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:50:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b910779efec5109067fe09f15142acc342cb6c47a0ce8b6d6fc458e8a71a0be5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b910779efec5109067fe09f15142acc342cb6c47a0ce8b6d6fc458e8a71a0be5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b910779efec5109067fe09f15142acc342cb6c47a0ce8b6d6fc458e8a71a0be5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b910779efec5109067fe09f15142acc342cb6c47a0ce8b6d6fc458e8a71a0be5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:44 np0005532048 podman[231733]: 2025-11-22 08:50:44.520244436 +0000 UTC m=+0.165988565 container init 159c25e5437c5c92c72ea23fd03ae4b5df8572a7951ca4ff3d201224b3d2768e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_dijkstra, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 22 03:50:44 np0005532048 podman[231733]: 2025-11-22 08:50:44.529163827 +0000 UTC m=+0.174907926 container start 159c25e5437c5c92c72ea23fd03ae4b5df8572a7951ca4ff3d201224b3d2768e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:50:44 np0005532048 podman[231733]: 2025-11-22 08:50:44.541199576 +0000 UTC m=+0.186943715 container attach 159c25e5437c5c92c72ea23fd03ae4b5df8572a7951ca4ff3d201224b3d2768e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:50:44 np0005532048 python3.9[231888]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]: {
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:    "0": [
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:        {
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "devices": [
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "/dev/loop3"
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            ],
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "lv_name": "ceph_lv0",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "lv_size": "21470642176",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "name": "ceph_lv0",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "tags": {
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.cluster_name": "ceph",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.crush_device_class": "",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.encrypted": "0",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.osd_id": "0",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.type": "block",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.vdo": "0"
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            },
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "type": "block",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "vg_name": "ceph_vg0"
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:        }
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:    ],
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:    "1": [
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:        {
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "devices": [
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "/dev/loop4"
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            ],
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "lv_name": "ceph_lv1",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "lv_size": "21470642176",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "name": "ceph_lv1",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "tags": {
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.cluster_name": "ceph",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.crush_device_class": "",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.encrypted": "0",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.osd_id": "1",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.type": "block",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.vdo": "0"
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            },
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "type": "block",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "vg_name": "ceph_vg1"
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:        }
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:    ],
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:    "2": [
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:        {
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "devices": [
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "/dev/loop5"
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            ],
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "lv_name": "ceph_lv2",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "lv_size": "21470642176",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "name": "ceph_lv2",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "tags": {
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.cluster_name": "ceph",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.crush_device_class": "",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.encrypted": "0",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.osd_id": "2",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.type": "block",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:                "ceph.vdo": "0"
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            },
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "type": "block",
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:            "vg_name": "ceph_vg2"
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:        }
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]:    ]
Nov 22 03:50:45 np0005532048 loving_dijkstra[231763]: }
Nov 22 03:50:45 np0005532048 systemd[1]: libpod-159c25e5437c5c92c72ea23fd03ae4b5df8572a7951ca4ff3d201224b3d2768e.scope: Deactivated successfully.
Nov 22 03:50:45 np0005532048 podman[231733]: 2025-11-22 08:50:45.413596389 +0000 UTC m=+1.059340508 container died 159c25e5437c5c92c72ea23fd03ae4b5df8572a7951ca4ff3d201224b3d2768e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 03:50:45 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b910779efec5109067fe09f15142acc342cb6c47a0ce8b6d6fc458e8a71a0be5-merged.mount: Deactivated successfully.
Nov 22 03:50:45 np0005532048 python3.9[231968]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:45 np0005532048 podman[231733]: 2025-11-22 08:50:45.531388986 +0000 UTC m=+1.177133085 container remove 159c25e5437c5c92c72ea23fd03ae4b5df8572a7951ca4ff3d201224b3d2768e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_dijkstra, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:50:45 np0005532048 systemd[1]: libpod-conmon-159c25e5437c5c92c72ea23fd03ae4b5df8572a7951ca4ff3d201224b3d2768e.scope: Deactivated successfully.
Nov 22 03:50:46 np0005532048 podman[232275]: 2025-11-22 08:50:46.196527 +0000 UTC m=+0.052381302 container create 6995d54da4d50b1ebb8c4ca27bff444353798272f1061e7787fb41fe5b8e22e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:50:46 np0005532048 systemd[1]: Started libpod-conmon-6995d54da4d50b1ebb8c4ca27bff444353798272f1061e7787fb41fe5b8e22e5.scope.
Nov 22 03:50:46 np0005532048 podman[232275]: 2025-11-22 08:50:46.1691401 +0000 UTC m=+0.024994402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:50:46 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:50:46 np0005532048 podman[232275]: 2025-11-22 08:50:46.295912159 +0000 UTC m=+0.151766461 container init 6995d54da4d50b1ebb8c4ca27bff444353798272f1061e7787fb41fe5b8e22e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:50:46 np0005532048 python3.9[232236]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:50:46 np0005532048 podman[232275]: 2025-11-22 08:50:46.307567469 +0000 UTC m=+0.163421761 container start 6995d54da4d50b1ebb8c4ca27bff444353798272f1061e7787fb41fe5b8e22e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_einstein, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:50:46 np0005532048 beautiful_einstein[232291]: 167 167
Nov 22 03:50:46 np0005532048 systemd[1]: libpod-6995d54da4d50b1ebb8c4ca27bff444353798272f1061e7787fb41fe5b8e22e5.scope: Deactivated successfully.
Nov 22 03:50:46 np0005532048 systemd[1]: Reloading.
Nov 22 03:50:46 np0005532048 podman[232275]: 2025-11-22 08:50:46.321888315 +0000 UTC m=+0.177742597 container attach 6995d54da4d50b1ebb8c4ca27bff444353798272f1061e7787fb41fe5b8e22e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:50:46 np0005532048 podman[232275]: 2025-11-22 08:50:46.323456504 +0000 UTC m=+0.179310796 container died 6995d54da4d50b1ebb8c4ca27bff444353798272f1061e7787fb41fe5b8e22e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_einstein, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:50:46 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:50:46 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:50:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:46 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6b51d8cc9fba474ae6d17e557bf71620737e7ee46df9d2943a99ea1bf4e86b6d-merged.mount: Deactivated successfully.
Nov 22 03:50:46 np0005532048 podman[232275]: 2025-11-22 08:50:46.724469036 +0000 UTC m=+0.580323318 container remove 6995d54da4d50b1ebb8c4ca27bff444353798272f1061e7787fb41fe5b8e22e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_einstein, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:50:46 np0005532048 systemd[1]: libpod-conmon-6995d54da4d50b1ebb8c4ca27bff444353798272f1061e7787fb41fe5b8e22e5.scope: Deactivated successfully.
Nov 22 03:50:46 np0005532048 systemd[1]: Starting Create netns directory...
Nov 22 03:50:46 np0005532048 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 03:50:46 np0005532048 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 03:50:46 np0005532048 systemd[1]: Finished Create netns directory.
Nov 22 03:50:46 np0005532048 podman[232357]: 2025-11-22 08:50:46.926721401 +0000 UTC m=+0.053855370 container create 21a3edb9017486f5a4f020ae0ea9ae1c594b8acc9edc85846e400f8e4c60f48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:50:46 np0005532048 systemd[1]: Started libpod-conmon-21a3edb9017486f5a4f020ae0ea9ae1c594b8acc9edc85846e400f8e4c60f48e.scope.
Nov 22 03:50:46 np0005532048 podman[232357]: 2025-11-22 08:50:46.905115443 +0000 UTC m=+0.032249442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:50:47 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:50:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a245a32ce5db39786d429d3cf7a545f77ecd2beacb59a391e5244580a79decff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a245a32ce5db39786d429d3cf7a545f77ecd2beacb59a391e5244580a79decff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a245a32ce5db39786d429d3cf7a545f77ecd2beacb59a391e5244580a79decff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a245a32ce5db39786d429d3cf7a545f77ecd2beacb59a391e5244580a79decff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:47 np0005532048 podman[232357]: 2025-11-22 08:50:47.025686039 +0000 UTC m=+0.152820028 container init 21a3edb9017486f5a4f020ae0ea9ae1c594b8acc9edc85846e400f8e4c60f48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:50:47 np0005532048 podman[232357]: 2025-11-22 08:50:47.037685287 +0000 UTC m=+0.164819256 container start 21a3edb9017486f5a4f020ae0ea9ae1c594b8acc9edc85846e400f8e4c60f48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 03:50:47 np0005532048 podman[232357]: 2025-11-22 08:50:47.042735543 +0000 UTC m=+0.169869532 container attach 21a3edb9017486f5a4f020ae0ea9ae1c594b8acc9edc85846e400f8e4c60f48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 22 03:50:47 np0005532048 python3.9[232530]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:50:48 np0005532048 clever_saha[232398]: {
Nov 22 03:50:48 np0005532048 clever_saha[232398]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:50:48 np0005532048 clever_saha[232398]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:50:48 np0005532048 clever_saha[232398]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:50:48 np0005532048 clever_saha[232398]:        "osd_id": 1,
Nov 22 03:50:48 np0005532048 clever_saha[232398]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:50:48 np0005532048 clever_saha[232398]:        "type": "bluestore"
Nov 22 03:50:48 np0005532048 clever_saha[232398]:    },
Nov 22 03:50:48 np0005532048 clever_saha[232398]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:50:48 np0005532048 clever_saha[232398]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:50:48 np0005532048 clever_saha[232398]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:50:48 np0005532048 clever_saha[232398]:        "osd_id": 0,
Nov 22 03:50:48 np0005532048 clever_saha[232398]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:50:48 np0005532048 clever_saha[232398]:        "type": "bluestore"
Nov 22 03:50:48 np0005532048 clever_saha[232398]:    },
Nov 22 03:50:48 np0005532048 clever_saha[232398]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:50:48 np0005532048 clever_saha[232398]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:50:48 np0005532048 clever_saha[232398]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:50:48 np0005532048 clever_saha[232398]:        "osd_id": 2,
Nov 22 03:50:48 np0005532048 clever_saha[232398]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:50:48 np0005532048 clever_saha[232398]:        "type": "bluestore"
Nov 22 03:50:48 np0005532048 clever_saha[232398]:    }
Nov 22 03:50:48 np0005532048 clever_saha[232398]: }
Nov 22 03:50:48 np0005532048 systemd[1]: libpod-21a3edb9017486f5a4f020ae0ea9ae1c594b8acc9edc85846e400f8e4c60f48e.scope: Deactivated successfully.
Nov 22 03:50:48 np0005532048 podman[232357]: 2025-11-22 08:50:48.186924358 +0000 UTC m=+1.314058337 container died 21a3edb9017486f5a4f020ae0ea9ae1c594b8acc9edc85846e400f8e4c60f48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:50:48 np0005532048 systemd[1]: libpod-21a3edb9017486f5a4f020ae0ea9ae1c594b8acc9edc85846e400f8e4c60f48e.scope: Consumed 1.148s CPU time.
Nov 22 03:50:48 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a245a32ce5db39786d429d3cf7a545f77ecd2beacb59a391e5244580a79decff-merged.mount: Deactivated successfully.
Nov 22 03:50:48 np0005532048 podman[232357]: 2025-11-22 08:50:48.290008079 +0000 UTC m=+1.417142038 container remove 21a3edb9017486f5a4f020ae0ea9ae1c594b8acc9edc85846e400f8e4c60f48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_saha, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:50:48 np0005532048 systemd[1]: libpod-conmon-21a3edb9017486f5a4f020ae0ea9ae1c594b8acc9edc85846e400f8e4c60f48e.scope: Deactivated successfully.
Nov 22 03:50:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:50:48 np0005532048 python3.9[232708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:50:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:50:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:50:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:50:48 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 7e6b9274-050c-431e-9a01-58b459fc7b91 does not exist
Nov 22 03:50:48 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 7230c3cd-ee0a-4406-a9de-1ad0e3628621 does not exist
Nov 22 03:50:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:48 np0005532048 python3.9[232896]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801447.8411586-437-200581962609946/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:50:49 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:50:49 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:50:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:49 np0005532048 python3.9[233048]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:50:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:50 np0005532048 python3.9[233200]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:50:51 np0005532048 python3.9[233323]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763801450.1405818-462-167045675944352/.source.json _original_basename=.nh5a7y1e follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:51 np0005532048 python3.9[233475]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:50:52
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'backups', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'images', 'vms', 'default.rgw.log']
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:50:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:50:54 np0005532048 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 22 03:50:54 np0005532048 python3.9[233902]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 22 03:50:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:55 np0005532048 python3.9[234055]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 03:50:55 np0005532048 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 22 03:50:55 np0005532048 python3.9[234208]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 22 03:50:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:50:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3229 writes, 14K keys, 3229 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3228 writes, 3228 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1209 writes, 5339 keys, 1209 commit groups, 1.0 writes per commit group, ingest: 8.10 MB, 0.01 MB/s#012Interval WAL: 1208 writes, 1208 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     80.9      0.17              0.04         6    0.029       0      0       0.0       0.0#012  L6      1/0    7.24 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.5    115.6     96.6      0.36              0.09         5    0.072     19K   2186       0.0       0.0#012 Sum      1/0    7.24 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.5     77.9     91.5      0.53              0.13        11    0.049     19K   2186       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6    103.0    105.1      0.26              0.07         6    0.043     12K   1446       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    115.6     96.6      0.36              0.09         5    0.072     19K   2186       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     81.8      0.17              0.04         5    0.034       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.014, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.03 MB/s read, 0.5 seconds#012Interval compaction: 0.03 GB write, 0.04 MB/s write, 0.03 GB read, 0.04 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 1.36 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(83,1.17 MB,0.384853%) FilterBlock(12,64.73 KB,0.0207951%) IndexBlock(12,130.14 KB,0.0418061%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 22 03:50:57 np0005532048 python3[234386]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 03:50:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:50:58 np0005532048 podman[234401]: 2025-11-22 08:50:58.813713454 +0000 UTC m=+1.077944811 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 22 03:50:58 np0005532048 podman[234458]: 2025-11-22 08:50:58.997067209 +0000 UTC m=+0.072527973 container create fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS)
Nov 22 03:50:58 np0005532048 podman[234458]: 2025-11-22 08:50:58.951983039 +0000 UTC m=+0.027443823 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 22 03:50:59 np0005532048 python3[234386]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 22 03:50:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:59 np0005532048 python3.9[234645]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:51:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:00 np0005532048 python3.9[234799]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:00 np0005532048 python3.9[234875]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:51:01 np0005532048 podman[234998]: 2025-11-22 08:51:01.510507372 +0000 UTC m=+0.061310074 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 03:51:01 np0005532048 python3.9[235044]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763801461.0325766-550-238574303549971/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:02 np0005532048 python3.9[235121]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 03:51:02 np0005532048 systemd[1]: Reloading.
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:51:02 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:51:02 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:51:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:03 np0005532048 python3.9[235231]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:51:03 np0005532048 systemd[1]: Reloading.
Nov 22 03:51:03 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:51:03 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:51:04 np0005532048 systemd[1]: Starting multipathd container...
Nov 22 03:51:04 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:51:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcb533eb1678d1c82b2f8667c712a43091e01fd0ce2745a3260a21a67f8daeaa/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcb533eb1678d1c82b2f8667c712a43091e01fd0ce2745a3260a21a67f8daeaa/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:04 np0005532048 systemd[1]: Started /usr/bin/podman healthcheck run fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8.
Nov 22 03:51:04 np0005532048 podman[235271]: 2025-11-22 08:51:04.188658915 +0000 UTC m=+0.159456601 container init fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 03:51:04 np0005532048 multipathd[235286]: + sudo -E kolla_set_configs
Nov 22 03:51:04 np0005532048 podman[235271]: 2025-11-22 08:51:04.221161014 +0000 UTC m=+0.191958670 container start fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Nov 22 03:51:04 np0005532048 podman[235271]: multipathd
Nov 22 03:51:04 np0005532048 systemd[1]: Started multipathd container.
Nov 22 03:51:04 np0005532048 multipathd[235286]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 03:51:04 np0005532048 multipathd[235286]: INFO:__main__:Validating config file
Nov 22 03:51:04 np0005532048 multipathd[235286]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 03:51:04 np0005532048 multipathd[235286]: INFO:__main__:Writing out command to execute
Nov 22 03:51:04 np0005532048 multipathd[235286]: ++ cat /run_command
Nov 22 03:51:04 np0005532048 multipathd[235286]: + CMD='/usr/sbin/multipathd -d'
Nov 22 03:51:04 np0005532048 multipathd[235286]: + ARGS=
Nov 22 03:51:04 np0005532048 multipathd[235286]: + sudo kolla_copy_cacerts
Nov 22 03:51:04 np0005532048 multipathd[235286]: + [[ ! -n '' ]]
Nov 22 03:51:04 np0005532048 multipathd[235286]: + . kolla_extend_start
Nov 22 03:51:04 np0005532048 podman[235293]: 2025-11-22 08:51:04.300549885 +0000 UTC m=+0.065896438 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 03:51:04 np0005532048 multipathd[235286]: Running command: '/usr/sbin/multipathd -d'
Nov 22 03:51:04 np0005532048 multipathd[235286]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 22 03:51:04 np0005532048 multipathd[235286]: + umask 0022
Nov 22 03:51:04 np0005532048 multipathd[235286]: + exec /usr/sbin/multipathd -d
Nov 22 03:51:04 np0005532048 systemd[1]: fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8-13a2dc41a9d3bd01.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 03:51:04 np0005532048 systemd[1]: fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8-13a2dc41a9d3bd01.service: Failed with result 'exit-code'.
Nov 22 03:51:04 np0005532048 multipathd[235286]: 4315.120842 | --------start up--------
Nov 22 03:51:04 np0005532048 multipathd[235286]: 4315.120869 | read /etc/multipath.conf
Nov 22 03:51:04 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:51:04 np0005532048 multipathd[235286]: 4315.128277 | path checkers start up
Nov 22 03:51:04 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:51:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:04 np0005532048 python3.9[235475]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:51:05 np0005532048 python3.9[235629]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:51:05 np0005532048 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 22 03:51:05 np0005532048 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 22 03:51:06 np0005532048 python3.9[235796]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:51:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:06 np0005532048 systemd[1]: Stopping multipathd container...
Nov 22 03:51:06 np0005532048 multipathd[235286]: 4317.391933 | exit (signal)
Nov 22 03:51:06 np0005532048 multipathd[235286]: 4317.392475 | --------shut down-------
Nov 22 03:51:06 np0005532048 systemd[1]: libpod-fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8.scope: Deactivated successfully.
Nov 22 03:51:06 np0005532048 podman[235800]: 2025-11-22 08:51:06.609357884 +0000 UTC m=+0.061348184 container died fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 03:51:06 np0005532048 systemd[1]: fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8-13a2dc41a9d3bd01.timer: Deactivated successfully.
Nov 22 03:51:06 np0005532048 systemd[1]: Stopped /usr/bin/podman healthcheck run fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8.
Nov 22 03:51:06 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8-userdata-shm.mount: Deactivated successfully.
Nov 22 03:51:06 np0005532048 systemd[1]: var-lib-containers-storage-overlay-fcb533eb1678d1c82b2f8667c712a43091e01fd0ce2745a3260a21a67f8daeaa-merged.mount: Deactivated successfully.
Nov 22 03:51:06 np0005532048 podman[235800]: 2025-11-22 08:51:06.952649243 +0000 UTC m=+0.404639533 container cleanup fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, tcib_managed=true, config_id=multipathd)
Nov 22 03:51:06 np0005532048 podman[235800]: multipathd
Nov 22 03:51:07 np0005532048 podman[235827]: multipathd
Nov 22 03:51:07 np0005532048 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 22 03:51:07 np0005532048 systemd[1]: Stopped multipathd container.
Nov 22 03:51:07 np0005532048 systemd[1]: Starting multipathd container...
Nov 22 03:51:07 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:51:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcb533eb1678d1c82b2f8667c712a43091e01fd0ce2745a3260a21a67f8daeaa/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcb533eb1678d1c82b2f8667c712a43091e01fd0ce2745a3260a21a67f8daeaa/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:07 np0005532048 systemd[1]: Started /usr/bin/podman healthcheck run fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8.
Nov 22 03:51:07 np0005532048 podman[235840]: 2025-11-22 08:51:07.169977652 +0000 UTC m=+0.120245988 container init fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 03:51:07 np0005532048 multipathd[235853]: + sudo -E kolla_set_configs
Nov 22 03:51:07 np0005532048 podman[235840]: 2025-11-22 08:51:07.194715226 +0000 UTC m=+0.144983542 container start fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 03:51:07 np0005532048 podman[235840]: multipathd
Nov 22 03:51:07 np0005532048 systemd[1]: Started multipathd container.
Nov 22 03:51:07 np0005532048 multipathd[235853]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 03:51:07 np0005532048 multipathd[235853]: INFO:__main__:Validating config file
Nov 22 03:51:07 np0005532048 multipathd[235853]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 03:51:07 np0005532048 multipathd[235853]: INFO:__main__:Writing out command to execute
Nov 22 03:51:07 np0005532048 multipathd[235853]: ++ cat /run_command
Nov 22 03:51:07 np0005532048 multipathd[235853]: + CMD='/usr/sbin/multipathd -d'
Nov 22 03:51:07 np0005532048 multipathd[235853]: + ARGS=
Nov 22 03:51:07 np0005532048 multipathd[235853]: + sudo kolla_copy_cacerts
Nov 22 03:51:07 np0005532048 multipathd[235853]: + [[ ! -n '' ]]
Nov 22 03:51:07 np0005532048 multipathd[235853]: + . kolla_extend_start
Nov 22 03:51:07 np0005532048 multipathd[235853]: Running command: '/usr/sbin/multipathd -d'
Nov 22 03:51:07 np0005532048 multipathd[235853]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 22 03:51:07 np0005532048 multipathd[235853]: + umask 0022
Nov 22 03:51:07 np0005532048 multipathd[235853]: + exec /usr/sbin/multipathd -d
Nov 22 03:51:07 np0005532048 multipathd[235853]: 4318.100514 | --------start up--------
Nov 22 03:51:07 np0005532048 multipathd[235853]: 4318.100533 | read /etc/multipath.conf
Nov 22 03:51:07 np0005532048 multipathd[235853]: 4318.107036 | path checkers start up
Nov 22 03:51:07 np0005532048 podman[235862]: 2025-11-22 08:51:07.305876229 +0000 UTC m=+0.098900239 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 03:51:07 np0005532048 systemd[1]: fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8-5865c03f55730aff.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 03:51:07 np0005532048 systemd[1]: fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8-5865c03f55730aff.service: Failed with result 'exit-code'.
Nov 22 03:51:07 np0005532048 python3.9[236045]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:08 np0005532048 python3.9[236197]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 03:51:09 np0005532048 python3.9[236349]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 22 03:51:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:09 np0005532048 kernel: Key type psk registered
Nov 22 03:51:10 np0005532048 python3.9[236512]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:51:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:10 np0005532048 python3.9[236635]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763801469.7293074-630-218230609592733/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:11 np0005532048 python3.9[236787]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:12 np0005532048 python3.9[236939]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:51:12 np0005532048 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 22 03:51:12 np0005532048 systemd[1]: Stopped Load Kernel Modules.
Nov 22 03:51:12 np0005532048 systemd[1]: Stopping Load Kernel Modules...
Nov 22 03:51:12 np0005532048 systemd[1]: Starting Load Kernel Modules...
Nov 22 03:51:12 np0005532048 systemd[1]: Finished Load Kernel Modules.
Nov 22 03:51:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:13 np0005532048 python3.9[237095]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:51:14 np0005532048 podman[237097]: 2025-11-22 08:51:14.424435458 +0000 UTC m=+0.102425175 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.488278) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801474488379, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1277, "num_deletes": 506, "total_data_size": 1497478, "memory_usage": 1532208, "flush_reason": "Manual Compaction"}
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801474501660, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1472620, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13651, "largest_seqno": 14927, "table_properties": {"data_size": 1466944, "index_size": 2560, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 14516, "raw_average_key_size": 18, "raw_value_size": 1453633, "raw_average_value_size": 1805, "num_data_blocks": 117, "num_entries": 805, "num_filter_entries": 805, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763801363, "oldest_key_time": 1763801363, "file_creation_time": 1763801474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 13422 microseconds, and 4331 cpu microseconds.
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.501712) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1472620 bytes OK
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.501733) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.504369) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.504440) EVENT_LOG_v1 {"time_micros": 1763801474504432, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.504466) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1490601, prev total WAL file size 1490601, number of live WAL files 2.
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.505209) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1438KB)], [32(7413KB)]
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801474505281, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9063673, "oldest_snapshot_seqno": -1}
Nov 22 03:51:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3851 keys, 7225646 bytes, temperature: kUnknown
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801474556038, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7225646, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7197697, "index_size": 17215, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 94491, "raw_average_key_size": 24, "raw_value_size": 7125742, "raw_average_value_size": 1850, "num_data_blocks": 726, "num_entries": 3851, "num_filter_entries": 3851, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763801474, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.556373) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7225646 bytes
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.558330) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.2 rd, 142.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.2 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(11.1) write-amplify(4.9) OK, records in: 4876, records dropped: 1025 output_compression: NoCompression
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.558349) EVENT_LOG_v1 {"time_micros": 1763801474558340, "job": 14, "event": "compaction_finished", "compaction_time_micros": 50856, "compaction_time_cpu_micros": 18088, "output_level": 6, "num_output_files": 1, "total_output_size": 7225646, "num_input_records": 4876, "num_output_records": 3851, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801474558819, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801474560343, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.505090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.560441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.560452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.560455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.560459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:51:14 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:51:14.560462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:51:16 np0005532048 systemd[1]: Reloading.
Nov 22 03:51:16 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:51:16 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:51:16 np0005532048 systemd[1]: Reloading.
Nov 22 03:51:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:16 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:51:16 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:51:16 np0005532048 systemd-logind[822]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 22 03:51:16 np0005532048 systemd-logind[822]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 22 03:51:17 np0005532048 lvm[237237]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 03:51:17 np0005532048 lvm[237237]: VG ceph_vg1 finished
Nov 22 03:51:17 np0005532048 lvm[237235]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 03:51:17 np0005532048 lvm[237235]: VG ceph_vg0 finished
Nov 22 03:51:17 np0005532048 lvm[237234]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 03:51:17 np0005532048 lvm[237234]: VG ceph_vg2 finished
Nov 22 03:51:17 np0005532048 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 03:51:17 np0005532048 systemd[1]: Starting man-db-cache-update.service...
Nov 22 03:51:17 np0005532048 systemd[1]: Reloading.
Nov 22 03:51:17 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:51:17 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:51:17 np0005532048 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 03:51:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:18 np0005532048 python3.9[238578]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:51:19 np0005532048 systemd[1]: Stopping Open-iSCSI...
Nov 22 03:51:19 np0005532048 iscsid[226005]: iscsid shutting down.
Nov 22 03:51:19 np0005532048 systemd[1]: iscsid.service: Deactivated successfully.
Nov 22 03:51:19 np0005532048 systemd[1]: Stopped Open-iSCSI.
Nov 22 03:51:19 np0005532048 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 22 03:51:19 np0005532048 systemd[1]: Starting Open-iSCSI...
Nov 22 03:51:19 np0005532048 systemd[1]: Started Open-iSCSI.
Nov 22 03:51:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:19 np0005532048 python3.9[238732]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:51:20 np0005532048 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:51:20 np0005532048 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:51:20 np0005532048 systemd[1]: man-db-cache-update.service: Consumed 1.705s CPU time.
Nov 22 03:51:20 np0005532048 systemd[1]: run-r7980be7ed0484ebc8d2b10d0891a5454.service: Deactivated successfully.
Nov 22 03:51:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:20 np0005532048 python3.9[238889]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:21 np0005532048 python3.9[239041]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 03:51:21 np0005532048 systemd[1]: Reloading.
Nov 22 03:51:22 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:51:22 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:51:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:51:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:51:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:51:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:51:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:51:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:51:22 np0005532048 python3.9[239225]: ansible-ansible.builtin.service_facts Invoked
Nov 22 03:51:23 np0005532048 network[239242]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 03:51:23 np0005532048 network[239243]: 'network-scripts' will be removed from distribution in near future.
Nov 22 03:51:23 np0005532048 network[239244]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 03:51:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:27 np0005532048 python3.9[239519]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:51:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:51:27.931 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:51:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:51:27.933 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:51:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:51:27.934 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:51:28 np0005532048 python3.9[239672]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:51:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:28 np0005532048 python3.9[239825]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:51:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:29 np0005532048 python3.9[239978]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:51:30 np0005532048 python3.9[240131]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:51:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:31 np0005532048 python3.9[240284]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:51:31 np0005532048 podman[240409]: 2025-11-22 08:51:31.720507001 +0000 UTC m=+0.067361294 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 03:51:32 np0005532048 python3.9[240456]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:51:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:32 np0005532048 python3.9[240609]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:51:33 np0005532048 python3.9[240762]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:34 np0005532048 python3.9[240914]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:34 np0005532048 python3.9[241066]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:35 np0005532048 python3.9[241218]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:36 np0005532048 python3.9[241370]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:36 np0005532048 python3.9[241522]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:37 np0005532048 podman[241646]: 2025-11-22 08:51:37.410284415 +0000 UTC m=+0.064290911 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 22 03:51:37 np0005532048 python3.9[241691]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:38 np0005532048 python3.9[241844]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:38 np0005532048 python3.9[241996]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:39 np0005532048 python3.9[242148]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:40 np0005532048 python3.9[242300]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:40 np0005532048 python3.9[242452]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:41 np0005532048 python3.9[242604]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:42 np0005532048 python3.9[242756]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:42 np0005532048 python3.9[242908]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:43 np0005532048 python3.9[243060]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:51:44 np0005532048 python3.9[243212]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:51:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:44 np0005532048 podman[243338]: 2025-11-22 08:51:44.862478558 +0000 UTC m=+0.093306803 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Nov 22 03:51:45 np0005532048 python3.9[243377]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 03:51:45 np0005532048 python3.9[243540]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 03:51:45 np0005532048 systemd[1]: Reloading.
Nov 22 03:51:46 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:51:46 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:51:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:47 np0005532048 python3.9[243727]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:51:47 np0005532048 python3.9[243880]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:51:48 np0005532048 python3.9[244033]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:51:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:49 np0005532048 python3.9[244294]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:51:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:51:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:51:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:51:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:51:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:51:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:51:49 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 6c9693f8-1ec6-45f5-afec-67bbea0ff1a3 does not exist
Nov 22 03:51:49 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 494467cf-4e16-4ec6-89eb-06f707b5c44c does not exist
Nov 22 03:51:49 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 74eeae6a-fde0-4d03-a7af-b6547864e0b7 does not exist
Nov 22 03:51:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:51:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:51:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:51:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:51:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:51:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:51:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:49 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:51:49 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:51:49 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:51:49 np0005532048 python3.9[244538]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:51:50 np0005532048 podman[244634]: 2025-11-22 08:51:50.001346881 +0000 UTC m=+0.023158690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:51:50 np0005532048 podman[244634]: 2025-11-22 08:51:50.297390134 +0000 UTC m=+0.319201923 container create 7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:51:50 np0005532048 systemd[1]: Started libpod-conmon-7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe.scope.
Nov 22 03:51:50 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:51:50 np0005532048 podman[244634]: 2025-11-22 08:51:50.446994849 +0000 UTC m=+0.468806638 container init 7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:51:50 np0005532048 podman[244634]: 2025-11-22 08:51:50.458394079 +0000 UTC m=+0.480205868 container start 7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:51:50 np0005532048 awesome_almeida[244778]: 167 167
Nov 22 03:51:50 np0005532048 systemd[1]: libpod-7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe.scope: Deactivated successfully.
Nov 22 03:51:50 np0005532048 podman[244634]: 2025-11-22 08:51:50.471756877 +0000 UTC m=+0.493568676 container attach 7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:51:50 np0005532048 podman[244634]: 2025-11-22 08:51:50.472217069 +0000 UTC m=+0.494028858 container died 7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:51:50 np0005532048 python3.9[244775]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:51:50 np0005532048 systemd[1]: var-lib-containers-storage-overlay-27c1771a61cd7be6751145788644ab7f558a460ec9be2c8a273c64ff6e8bbab1-merged.mount: Deactivated successfully.
Nov 22 03:51:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:50 np0005532048 podman[244634]: 2025-11-22 08:51:50.537838781 +0000 UTC m=+0.559650570 container remove 7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_almeida, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 03:51:50 np0005532048 systemd[1]: libpod-conmon-7d621f18e26f55760003bfa0603ac72bd0a96add74b05b0e21a417cabdad39fe.scope: Deactivated successfully.
Nov 22 03:51:50 np0005532048 podman[244848]: 2025-11-22 08:51:50.713947896 +0000 UTC m=+0.049675710 container create 956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_franklin, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:51:50 np0005532048 systemd[1]: Started libpod-conmon-956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197.scope.
Nov 22 03:51:50 np0005532048 podman[244848]: 2025-11-22 08:51:50.69127344 +0000 UTC m=+0.027001274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:51:50 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:51:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514a261734914ef0c0a502e831ffc534c5811d1753da01b6b61663f2059fe25b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514a261734914ef0c0a502e831ffc534c5811d1753da01b6b61663f2059fe25b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514a261734914ef0c0a502e831ffc534c5811d1753da01b6b61663f2059fe25b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514a261734914ef0c0a502e831ffc534c5811d1753da01b6b61663f2059fe25b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/514a261734914ef0c0a502e831ffc534c5811d1753da01b6b61663f2059fe25b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:50 np0005532048 podman[244848]: 2025-11-22 08:51:50.855480554 +0000 UTC m=+0.191208388 container init 956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 22 03:51:50 np0005532048 podman[244848]: 2025-11-22 08:51:50.864865534 +0000 UTC m=+0.200593348 container start 956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_franklin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 03:51:50 np0005532048 podman[244848]: 2025-11-22 08:51:50.870051142 +0000 UTC m=+0.205778976 container attach 956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:51:51 np0005532048 python3.9[244973]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:51:51 np0005532048 python3.9[245126]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:51:52 np0005532048 ecstatic_franklin[244906]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:51:52 np0005532048 ecstatic_franklin[244906]: --> relative data size: 1.0
Nov 22 03:51:52 np0005532048 ecstatic_franklin[244906]: --> All data devices are unavailable
Nov 22 03:51:52 np0005532048 systemd[1]: libpod-956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197.scope: Deactivated successfully.
Nov 22 03:51:52 np0005532048 systemd[1]: libpod-956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197.scope: Consumed 1.143s CPU time.
Nov 22 03:51:52 np0005532048 podman[244848]: 2025-11-22 08:51:52.070666886 +0000 UTC m=+1.406394700 container died 956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:51:52
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', '.mgr', 'volumes', 'backups', 'images', 'default.rgw.log']
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:51:52 np0005532048 systemd[1]: var-lib-containers-storage-overlay-514a261734914ef0c0a502e831ffc534c5811d1753da01b6b61663f2059fe25b-merged.mount: Deactivated successfully.
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:52 np0005532048 podman[244848]: 2025-11-22 08:51:52.552114314 +0000 UTC m=+1.887842128 container remove 956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:51:52 np0005532048 systemd[1]: libpod-conmon-956efac2a396913d95b60b4d77a9be9acccdfb2d85d27b9db9f13c8cbc072197.scope: Deactivated successfully.
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:51:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:51:53 np0005532048 python3.9[245415]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:51:53 np0005532048 podman[245456]: 2025-11-22 08:51:53.205532006 +0000 UTC m=+0.025362364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:51:53 np0005532048 podman[245456]: 2025-11-22 08:51:53.31152558 +0000 UTC m=+0.131355908 container create 51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 03:51:53 np0005532048 systemd[1]: Started libpod-conmon-51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284.scope.
Nov 22 03:51:53 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:51:53 np0005532048 podman[245456]: 2025-11-22 08:51:53.594909792 +0000 UTC m=+0.414740150 container init 51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:51:53 np0005532048 podman[245456]: 2025-11-22 08:51:53.603908003 +0000 UTC m=+0.423738331 container start 51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:51:53 np0005532048 systemd[1]: libpod-51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284.scope: Deactivated successfully.
Nov 22 03:51:53 np0005532048 frosty_mcclintock[245584]: 167 167
Nov 22 03:51:53 np0005532048 conmon[245584]: conmon 51f7d86aaddc37ac672a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284.scope/container/memory.events
Nov 22 03:51:53 np0005532048 podman[245456]: 2025-11-22 08:51:53.708331668 +0000 UTC m=+0.528162016 container attach 51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:51:53 np0005532048 podman[245456]: 2025-11-22 08:51:53.709165709 +0000 UTC m=+0.528996047 container died 51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 03:51:53 np0005532048 python3.9[245628]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:51:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c968a6741bcc85cfe10d6879986d9a057e30891444bf2e49ffaa51cadb168135-merged.mount: Deactivated successfully.
Nov 22 03:51:54 np0005532048 podman[245456]: 2025-11-22 08:51:54.103040505 +0000 UTC m=+0.922870833 container remove 51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:51:54 np0005532048 systemd[1]: libpod-conmon-51f7d86aaddc37ac672ac09cecddd55f0d639a8a12ad82bda07d3a0be1e09284.scope: Deactivated successfully.
Nov 22 03:51:54 np0005532048 podman[245772]: 2025-11-22 08:51:54.312059869 +0000 UTC m=+0.072486811 container create 6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:51:54 np0005532048 podman[245772]: 2025-11-22 08:51:54.26568859 +0000 UTC m=+0.026115562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:51:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:54 np0005532048 systemd[1]: Started libpod-conmon-6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c.scope.
Nov 22 03:51:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:51:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5769a15896b70814e50363884874d86a6396b6d42cee0312635f9e895d513dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5769a15896b70814e50363884874d86a6396b6d42cee0312635f9e895d513dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5769a15896b70814e50363884874d86a6396b6d42cee0312635f9e895d513dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5769a15896b70814e50363884874d86a6396b6d42cee0312635f9e895d513dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:54 np0005532048 python3.9[245814]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:51:54 np0005532048 podman[245772]: 2025-11-22 08:51:54.629621521 +0000 UTC m=+0.390048493 container init 6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaum, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 03:51:54 np0005532048 podman[245772]: 2025-11-22 08:51:54.640795035 +0000 UTC m=+0.401221967 container start 6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaum, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:51:54 np0005532048 podman[245772]: 2025-11-22 08:51:54.675873337 +0000 UTC m=+0.436300309 container attach 6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaum, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:51:55 np0005532048 python3.9[245974]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]: {
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:    "0": [
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:        {
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "devices": [
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "/dev/loop3"
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            ],
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "lv_name": "ceph_lv0",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "lv_size": "21470642176",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "name": "ceph_lv0",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "tags": {
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.cluster_name": "ceph",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.crush_device_class": "",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.encrypted": "0",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.osd_id": "0",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.type": "block",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.vdo": "0"
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            },
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "type": "block",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "vg_name": "ceph_vg0"
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:        }
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:    ],
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:    "1": [
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:        {
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "devices": [
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "/dev/loop4"
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            ],
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "lv_name": "ceph_lv1",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "lv_size": "21470642176",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "name": "ceph_lv1",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "tags": {
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.cluster_name": "ceph",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.crush_device_class": "",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.encrypted": "0",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.osd_id": "1",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.type": "block",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.vdo": "0"
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            },
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "type": "block",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "vg_name": "ceph_vg1"
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:        }
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:    ],
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:    "2": [
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:        {
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "devices": [
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "/dev/loop5"
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            ],
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "lv_name": "ceph_lv2",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "lv_size": "21470642176",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "name": "ceph_lv2",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "tags": {
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.cluster_name": "ceph",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.crush_device_class": "",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.encrypted": "0",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.osd_id": "2",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.type": "block",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:                "ceph.vdo": "0"
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            },
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "type": "block",
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:            "vg_name": "ceph_vg2"
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:        }
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]:    ]
Nov 22 03:51:55 np0005532048 adoring_chaum[245818]: }
Nov 22 03:51:55 np0005532048 systemd[1]: libpod-6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c.scope: Deactivated successfully.
Nov 22 03:51:55 np0005532048 podman[245772]: 2025-11-22 08:51:55.548212017 +0000 UTC m=+1.308638959 container died 6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:51:55 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a5769a15896b70814e50363884874d86a6396b6d42cee0312635f9e895d513dd-merged.mount: Deactivated successfully.
Nov 22 03:51:55 np0005532048 podman[245772]: 2025-11-22 08:51:55.874741059 +0000 UTC m=+1.635168041 container remove 6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaum, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:51:55 np0005532048 systemd[1]: libpod-conmon-6eba3fe90ef085a3a05ddc41a9af4de33754fd63a3b75353ad477fad56ef695c.scope: Deactivated successfully.
Nov 22 03:51:56 np0005532048 python3.9[246142]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:51:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:56 np0005532048 podman[246431]: 2025-11-22 08:51:56.503945926 +0000 UTC m=+0.027608460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:51:56 np0005532048 podman[246431]: 2025-11-22 08:51:56.608190497 +0000 UTC m=+0.131852990 container create 28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:51:56 np0005532048 systemd[1]: Started libpod-conmon-28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779.scope.
Nov 22 03:51:56 np0005532048 python3.9[246447]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:51:56 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:51:56 np0005532048 podman[246431]: 2025-11-22 08:51:56.992593551 +0000 UTC m=+0.516256054 container init 28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:51:57 np0005532048 podman[246431]: 2025-11-22 08:51:57.00276382 +0000 UTC m=+0.526426303 container start 28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_snyder, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 03:51:57 np0005532048 systemd[1]: libpod-28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779.scope: Deactivated successfully.
Nov 22 03:51:57 np0005532048 bold_snyder[246452]: 167 167
Nov 22 03:51:57 np0005532048 conmon[246452]: conmon 28438b30ee9793a4e907 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779.scope/container/memory.events
Nov 22 03:51:57 np0005532048 podman[246431]: 2025-11-22 08:51:57.301950641 +0000 UTC m=+0.825613174 container attach 28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_snyder, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 03:51:57 np0005532048 podman[246431]: 2025-11-22 08:51:57.30438715 +0000 UTC m=+0.828049703 container died 28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_snyder, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:51:57 np0005532048 python3.9[246619]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:51:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay-966f7a9229e4cbb1de9d63adde62f0f95ee58a35bac5f6502aa37af730cd34d3-merged.mount: Deactivated successfully.
Nov 22 03:51:58 np0005532048 python3.9[246772]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:51:58 np0005532048 podman[246431]: 2025-11-22 08:51:58.338794142 +0000 UTC m=+1.862456645 container remove 28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_snyder, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:51:58 np0005532048 systemd[1]: libpod-conmon-28438b30ee9793a4e907aa465765068c37215c2e115e8d75166646db85730779.scope: Deactivated successfully.
Nov 22 03:51:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:51:58 np0005532048 podman[246879]: 2025-11-22 08:51:58.488363476 +0000 UTC m=+0.024010471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:51:58 np0005532048 podman[246879]: 2025-11-22 08:51:58.775218003 +0000 UTC m=+0.310864978 container create 86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:51:58 np0005532048 python3.9[246945]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:51:59 np0005532048 systemd[1]: Started libpod-conmon-86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2.scope.
Nov 22 03:51:59 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:51:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9a73eac1cb4d093fc2458536dda312893dfdf9e34fcbb2aa08e62c7f7aac7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9a73eac1cb4d093fc2458536dda312893dfdf9e34fcbb2aa08e62c7f7aac7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9a73eac1cb4d093fc2458536dda312893dfdf9e34fcbb2aa08e62c7f7aac7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9a73eac1cb4d093fc2458536dda312893dfdf9e34fcbb2aa08e62c7f7aac7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:59 np0005532048 podman[246879]: 2025-11-22 08:51:59.467782317 +0000 UTC m=+1.003429312 container init 86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 03:51:59 np0005532048 podman[246879]: 2025-11-22 08:51:59.479052304 +0000 UTC m=+1.014699299 container start 86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 03:51:59 np0005532048 python3.9[247102]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:51:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:59 np0005532048 podman[246879]: 2025-11-22 08:51:59.534417814 +0000 UTC m=+1.070064809 container attach 86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]: {
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:        "osd_id": 1,
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:        "type": "bluestore"
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:    },
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:        "osd_id": 0,
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:        "type": "bluestore"
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:    },
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:        "osd_id": 2,
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:        "type": "bluestore"
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]:    }
Nov 22 03:52:00 np0005532048 sleepy_hofstadter[247075]: }
Nov 22 03:52:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:00 np0005532048 systemd[1]: libpod-86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2.scope: Deactivated successfully.
Nov 22 03:52:00 np0005532048 systemd[1]: libpod-86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2.scope: Consumed 1.099s CPU time.
Nov 22 03:52:00 np0005532048 podman[247157]: 2025-11-22 08:52:00.613614556 +0000 UTC m=+0.027396295 container died 86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 03:52:00 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5d9a73eac1cb4d093fc2458536dda312893dfdf9e34fcbb2aa08e62c7f7aac7d-merged.mount: Deactivated successfully.
Nov 22 03:52:00 np0005532048 podman[247157]: 2025-11-22 08:52:00.683734638 +0000 UTC m=+0.097516367 container remove 86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:52:00 np0005532048 systemd[1]: libpod-conmon-86e1e0e200fda4460d8c0f212dc9413b25e3fb9772b76e04b5209d377d261ee2.scope: Deactivated successfully.
Nov 22 03:52:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:52:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:52:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:52:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:52:00 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev e3e63fc8-fa1f-4406-845e-8a09e1ac0a80 does not exist
Nov 22 03:52:00 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 3c39539a-7cd2-4479-a6f8-499d39216b6e does not exist
Nov 22 03:52:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:52:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:52:02 np0005532048 podman[247219]: 2025-11-22 08:52:02.377833076 +0000 UTC m=+0.064168127 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 22 03:52:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:05 np0005532048 python3.9[247367]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 22 03:52:06 np0005532048 python3.9[247520]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 03:52:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:07 np0005532048 python3.9[247678]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 03:52:07 np0005532048 podman[247680]: 2025-11-22 08:52:07.612478083 +0000 UTC m=+0.067925790 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 03:52:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:09 np0005532048 systemd-logind[822]: New session 51 of user zuul.
Nov 22 03:52:09 np0005532048 systemd[1]: Started Session 51 of User zuul.
Nov 22 03:52:09 np0005532048 systemd[1]: session-51.scope: Deactivated successfully.
Nov 22 03:52:09 np0005532048 systemd-logind[822]: Session 51 logged out. Waiting for processes to exit.
Nov 22 03:52:09 np0005532048 systemd-logind[822]: Removed session 51.
Nov 22 03:52:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:10 np0005532048 python3.9[247883]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:52:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:10 np0005532048 python3.9[248004]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801529.6655097-1249-205350960515205/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:52:11 np0005532048 python3.9[248154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:52:11 np0005532048 python3.9[248230]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:52:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:12 np0005532048 python3.9[248380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:52:13 np0005532048 python3.9[248501]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801532.1032903-1249-209344498097658/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:52:13 np0005532048 python3.9[248651]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:52:14 np0005532048 python3.9[248772]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801533.3160992-1249-107570181640127/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:52:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:15 np0005532048 python3.9[248922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:52:15 np0005532048 podman[248980]: 2025-11-22 08:52:15.417610425 +0000 UTC m=+0.104937559 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 03:52:15 np0005532048 python3.9[249070]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801534.6082919-1249-7896105229988/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:52:16 np0005532048 python3.9[249220]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:52:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:16 np0005532048 python3.9[249341]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801535.867949-1249-97481030015628/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:52:17 np0005532048 python3.9[249493]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:52:18 np0005532048 python3.9[249645]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:52:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:19 np0005532048 python3.9[249797]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:52:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:19 np0005532048 python3.9[249949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:52:20 np0005532048 python3.9[250072]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1763801539.3724637-1356-225291577664943/.source _original_basename=.z5gmi96d follow=False checksum=d979bb4b9932f74e77e9b58335859e74e5c0b61d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 22 03:52:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:21 np0005532048 python3.9[250224]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:52:21 np0005532048 python3.9[250376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:52:22 np0005532048 python3.9[250497]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801541.379246-1382-125692241791445/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=4c77b2c041a7564aa2c84115117dc8517e9bb9ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:52:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:52:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:52:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:52:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:52:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:52:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:52:23 np0005532048 python3.9[250647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:52:23 np0005532048 python3.9[250768]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763801542.6112316-1397-81530525551120/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=941d5739094d046b86479403aeaaf0441b82ba11 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:52:24 np0005532048 python3.9[250920]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 22 03:52:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:25 np0005532048 python3.9[251072]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 03:52:26 np0005532048 python3[251224]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 03:52:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:52:27.932 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:52:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:52:27.934 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:52:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:52:27.934 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:52:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:43 np0005532048 podman[251294]: 2025-11-22 08:52:43.740350364 +0000 UTC m=+11.303553418 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 03:52:43 np0005532048 podman[251305]: 2025-11-22 08:52:43.764061807 +0000 UTC m=+5.456909368 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 03:52:43 np0005532048 podman[251238]: 2025-11-22 08:52:43.858926567 +0000 UTC m=+17.482278917 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 22 03:52:44 np0005532048 podman[251357]: 2025-11-22 08:52:44.057948496 +0000 UTC m=+0.096777229 container create d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 03:52:44 np0005532048 podman[251357]: 2025-11-22 08:52:43.987469255 +0000 UTC m=+0.026298018 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 22 03:52:44 np0005532048 python3[251224]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 22 03:52:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:44 np0005532048 python3.9[251547]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:52:45 np0005532048 podman[251673]: 2025-11-22 08:52:45.638592936 +0000 UTC m=+0.096874300 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 22 03:52:45 np0005532048 python3.9[251721]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 22 03:52:46 np0005532048 python3.9[251879]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 03:52:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:47 np0005532048 python3[252031]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 03:52:47 np0005532048 podman[252069]: 2025-11-22 08:52:47.627965308 +0000 UTC m=+0.054116570 container create 5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 22 03:52:47 np0005532048 podman[252069]: 2025-11-22 08:52:47.598711739 +0000 UTC m=+0.024863011 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 22 03:52:47 np0005532048 python3[252031]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 kolla_start
Nov 22 03:52:48 np0005532048 python3.9[252260]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:52:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:49 np0005532048 python3.9[252414]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:52:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:49 np0005532048 python3.9[252565]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763801569.2608943-1489-23894595279325/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:52:50 np0005532048 python3.9[252641]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 03:52:50 np0005532048 systemd[1]: Reloading.
Nov 22 03:52:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:50 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:52:50 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:52:51 np0005532048 python3.9[252753]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:52:51 np0005532048 systemd[1]: Reloading.
Nov 22 03:52:51 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:52:51 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:52:51 np0005532048 systemd[1]: Starting nova_compute container...
Nov 22 03:52:52 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:52:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:52 np0005532048 podman[252793]: 2025-11-22 08:52:52.079165778 +0000 UTC m=+0.135544500 container init 5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:52:52 np0005532048 podman[252793]: 2025-11-22 08:52:52.088566709 +0000 UTC m=+0.144945401 container start 5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Nov 22 03:52:52 np0005532048 nova_compute[252809]: + sudo -E kolla_set_configs
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:52:52
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['.mgr', 'volumes', 'backups', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.meta']
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:52:52 np0005532048 podman[252793]: nova_compute
Nov 22 03:52:52 np0005532048 systemd[1]: Started nova_compute container.
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Validating config file
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Copying service configuration files
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Deleting /etc/ceph
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Creating directory /etc/ceph
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Setting permission for /etc/ceph
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Writing out command to execute
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 03:52:52 np0005532048 nova_compute[252809]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 03:52:52 np0005532048 nova_compute[252809]: ++ cat /run_command
Nov 22 03:52:52 np0005532048 nova_compute[252809]: + CMD=nova-compute
Nov 22 03:52:52 np0005532048 nova_compute[252809]: + ARGS=
Nov 22 03:52:52 np0005532048 nova_compute[252809]: + sudo kolla_copy_cacerts
Nov 22 03:52:52 np0005532048 nova_compute[252809]: + [[ ! -n '' ]]
Nov 22 03:52:52 np0005532048 nova_compute[252809]: + . kolla_extend_start
Nov 22 03:52:52 np0005532048 nova_compute[252809]: Running command: 'nova-compute'
Nov 22 03:52:52 np0005532048 nova_compute[252809]: + echo 'Running command: '\''nova-compute'\'''
Nov 22 03:52:52 np0005532048 nova_compute[252809]: + umask 0022
Nov 22 03:52:52 np0005532048 nova_compute[252809]: + exec nova-compute
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:52:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:52:53 np0005532048 python3.9[252970]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:52:54 np0005532048 python3.9[253120]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:52:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:54 np0005532048 python3.9[253270]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:52:55 np0005532048 python3.9[253423]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 22 03:52:56 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:52:56 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:52:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:56 np0005532048 python3.9[253599]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:52:57 np0005532048 systemd[1]: Stopping nova_compute container...
Nov 22 03:52:57 np0005532048 systemd[1]: libpod-5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772.scope: Deactivated successfully.
Nov 22 03:52:57 np0005532048 systemd[1]: libpod-5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772.scope: Consumed 2.012s CPU time.
Nov 22 03:52:57 np0005532048 podman[253603]: 2025-11-22 08:52:57.100169986 +0000 UTC m=+0.070373449 container died 5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3)
Nov 22 03:52:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772-userdata-shm.mount: Deactivated successfully.
Nov 22 03:52:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay-bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9-merged.mount: Deactivated successfully.
Nov 22 03:52:58 np0005532048 podman[253603]: 2025-11-22 08:52:58.419674372 +0000 UTC m=+1.389877845 container cleanup 5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:52:58 np0005532048 podman[253603]: nova_compute
Nov 22 03:52:58 np0005532048 podman[253633]: nova_compute
Nov 22 03:52:58 np0005532048 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 22 03:52:58 np0005532048 systemd[1]: Stopped nova_compute container.
Nov 22 03:52:58 np0005532048 systemd[1]: Starting nova_compute container...
Nov 22 03:52:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:52:58 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:52:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd4abd44a61d3900ba6349408b7696dd7d4f72e89df1421933eb6e2a1f9d97c9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:58 np0005532048 podman[253646]: 2025-11-22 08:52:58.601564361 +0000 UTC m=+0.088352282 container init 5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Nov 22 03:52:58 np0005532048 podman[253646]: 2025-11-22 08:52:58.608380258 +0000 UTC m=+0.095168159 container start 5a8fceb3abf95878000fb720fbf5c01e4c6befa987e1123bd36c645c1949e772 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 22 03:52:58 np0005532048 podman[253646]: nova_compute
Nov 22 03:52:58 np0005532048 nova_compute[253661]: + sudo -E kolla_set_configs
Nov 22 03:52:58 np0005532048 systemd[1]: Started nova_compute container.
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Validating config file
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Copying service configuration files
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Deleting /etc/ceph
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Creating directory /etc/ceph
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Setting permission for /etc/ceph
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Writing out command to execute
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 03:52:58 np0005532048 nova_compute[253661]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 03:52:58 np0005532048 nova_compute[253661]: ++ cat /run_command
Nov 22 03:52:58 np0005532048 nova_compute[253661]: + CMD=nova-compute
Nov 22 03:52:58 np0005532048 nova_compute[253661]: + ARGS=
Nov 22 03:52:58 np0005532048 nova_compute[253661]: + sudo kolla_copy_cacerts
Nov 22 03:52:58 np0005532048 nova_compute[253661]: + [[ ! -n '' ]]
Nov 22 03:52:58 np0005532048 nova_compute[253661]: + . kolla_extend_start
Nov 22 03:52:58 np0005532048 nova_compute[253661]: Running command: 'nova-compute'
Nov 22 03:52:58 np0005532048 nova_compute[253661]: + echo 'Running command: '\''nova-compute'\'''
Nov 22 03:52:58 np0005532048 nova_compute[253661]: + umask 0022
Nov 22 03:52:58 np0005532048 nova_compute[253661]: + exec nova-compute
Nov 22 03:52:59 np0005532048 python3.9[253824]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 22 03:52:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:59 np0005532048 systemd[1]: Started libpod-conmon-d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706.scope.
Nov 22 03:52:59 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:52:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ea78d8dc84818553b60a0a84a8dec96e3ddbaf2671a2d3554399936ecf8aeb/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ea78d8dc84818553b60a0a84a8dec96e3ddbaf2671a2d3554399936ecf8aeb/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ea78d8dc84818553b60a0a84a8dec96e3ddbaf2671a2d3554399936ecf8aeb/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:59 np0005532048 podman[253847]: 2025-11-22 08:52:59.94564197 +0000 UTC m=+0.265727919 container init d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 03:52:59 np0005532048 podman[253847]: 2025-11-22 08:52:59.953543013 +0000 UTC m=+0.273628942 container start d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:52:59 np0005532048 python3.9[253824]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 22 03:53:00 np0005532048 nova_compute_init[253869]: INFO:nova_statedir:Applying nova statedir ownership
Nov 22 03:53:00 np0005532048 nova_compute_init[253869]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 22 03:53:00 np0005532048 nova_compute_init[253869]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 22 03:53:00 np0005532048 nova_compute_init[253869]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 22 03:53:00 np0005532048 nova_compute_init[253869]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 22 03:53:00 np0005532048 nova_compute_init[253869]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 22 03:53:00 np0005532048 nova_compute_init[253869]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 22 03:53:00 np0005532048 nova_compute_init[253869]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 22 03:53:00 np0005532048 nova_compute_init[253869]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 22 03:53:00 np0005532048 nova_compute_init[253869]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 22 03:53:00 np0005532048 nova_compute_init[253869]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 22 03:53:00 np0005532048 nova_compute_init[253869]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 22 03:53:00 np0005532048 nova_compute_init[253869]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 22 03:53:00 np0005532048 nova_compute_init[253869]: INFO:nova_statedir:Nova statedir ownership complete
Nov 22 03:53:00 np0005532048 systemd[1]: libpod-d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706.scope: Deactivated successfully.
Nov 22 03:53:00 np0005532048 podman[253870]: 2025-11-22 08:53:00.034791859 +0000 UTC m=+0.029760882 container died d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 03:53:00 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706-userdata-shm.mount: Deactivated successfully.
Nov 22 03:53:00 np0005532048 systemd[1]: var-lib-containers-storage-overlay-72ea78d8dc84818553b60a0a84a8dec96e3ddbaf2671a2d3554399936ecf8aeb-merged.mount: Deactivated successfully.
Nov 22 03:53:00 np0005532048 podman[253880]: 2025-11-22 08:53:00.111362051 +0000 UTC m=+0.077934636 container cleanup d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 03:53:00 np0005532048 systemd[1]: libpod-conmon-d0d414d3d23f9ef555d563d421cdeafdbe7c77fcebabdff1292c9044df9bc706.scope: Deactivated successfully.
Nov 22 03:53:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:00 np0005532048 systemd-logind[822]: Session 50 logged out. Waiting for processes to exit.
Nov 22 03:53:00 np0005532048 systemd[1]: session-50.scope: Deactivated successfully.
Nov 22 03:53:00 np0005532048 systemd[1]: session-50.scope: Consumed 2min 29.504s CPU time.
Nov 22 03:53:00 np0005532048 systemd-logind[822]: Removed session 50.
Nov 22 03:53:00 np0005532048 nova_compute[253661]: 2025-11-22 08:53:00.977 253665 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 22 03:53:00 np0005532048 nova_compute[253661]: 2025-11-22 08:53:00.978 253665 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 22 03:53:00 np0005532048 nova_compute[253661]: 2025-11-22 08:53:00.978 253665 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 22 03:53:00 np0005532048 nova_compute[253661]: 2025-11-22 08:53:00.978 253665 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 22 03:53:01 np0005532048 nova_compute[253661]: 2025-11-22 08:53:01.191 253665 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 03:53:01 np0005532048 nova_compute[253661]: 2025-11-22 08:53:01.213 253665 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 03:53:01 np0005532048 nova_compute[253661]: 2025-11-22 08:53:01.214 253665 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 22 03:53:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:53:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:53:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:53:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:53:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:53:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:53:01 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev b1bf8f8c-b474-44fe-b4d0-e265ef158eba does not exist
Nov 22 03:53:01 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 2cd76acd-9714-43ef-97e9-86f9d8f70a57 does not exist
Nov 22 03:53:01 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev e6abb35c-8a1c-44e2-9561-12c99fc1b338 does not exist
Nov 22 03:53:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:53:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:53:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:53:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:53:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:53:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:53:01 np0005532048 nova_compute[253661]: 2025-11-22 08:53:01.831 253665 INFO nova.virt.driver [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 22 03:53:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:53:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:53:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.060 253665 INFO nova.compute.provider_config [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.069 253665 DEBUG oslo_concurrency.lockutils [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.070 253665 DEBUG oslo_concurrency.lockutils [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.070 253665 DEBUG oslo_concurrency.lockutils [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.070 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.071 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.071 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.071 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.071 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.071 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.071 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.072 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.072 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.072 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.072 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.072 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.073 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.073 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.073 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.073 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.074 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.074 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.074 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.074 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.074 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.075 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.075 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.075 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.075 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.075 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.076 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.076 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.076 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.076 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.076 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.077 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.077 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.077 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.077 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.077 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.078 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.078 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.078 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.078 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.079 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.079 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.079 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.079 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.079 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.080 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.080 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.080 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.080 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.080 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.081 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.081 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.081 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.081 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.082 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.082 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.082 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.082 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.082 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.083 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.083 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.083 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.083 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.084 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.084 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.084 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.084 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.085 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.085 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.085 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.085 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.086 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.086 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.086 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.086 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.086 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.087 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.087 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.087 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.087 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.087 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.088 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.088 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.088 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.088 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.089 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.089 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.089 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.089 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.089 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.090 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.090 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.090 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.090 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.090 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.091 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.091 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.091 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.091 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.091 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.092 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.092 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.092 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.092 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.092 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.093 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.093 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.093 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.093 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.093 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.094 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.094 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.094 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.094 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.094 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.094 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.095 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.095 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.095 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.095 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.095 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.096 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.096 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.096 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.096 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.096 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.097 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.097 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.097 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.097 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.097 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.098 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.098 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.098 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.098 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.098 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.099 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.099 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.099 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.099 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.100 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.100 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.100 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.100 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.100 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.100 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.101 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.101 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.101 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.101 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.101 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.101 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.101 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.102 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.102 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.102 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.102 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.102 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.102 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.102 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.103 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.103 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.103 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.103 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.103 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.103 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.104 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.104 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.104 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.104 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.104 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.104 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.104 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.104 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.105 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.105 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.105 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.105 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.105 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.105 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.106 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.106 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.106 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.106 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.106 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.106 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.106 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.107 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.107 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.107 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.107 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.107 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.107 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.107 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.108 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.108 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.108 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.108 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.108 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.108 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.109 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.109 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.109 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.109 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.109 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.109 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.109 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.110 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.110 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.110 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.110 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.110 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.110 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.110 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.111 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.111 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.111 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.111 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.111 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.111 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.111 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.112 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.112 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.112 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.112 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.112 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.112 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.112 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.112 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.113 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.113 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.113 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.113 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.113 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.113 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.113 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.114 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.114 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.114 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.114 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.114 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.114 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.114 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.115 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.115 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.115 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.115 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.115 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.115 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.115 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.116 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.116 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.116 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.116 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.116 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.116 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.116 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.116 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.117 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.117 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.117 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.117 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.117 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.117 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.118 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.118 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.118 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.118 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.118 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.118 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.118 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.119 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.119 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.119 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.119 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.119 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.119 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.120 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.120 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.120 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.120 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.120 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.121 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.121 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.121 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.121 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.121 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.121 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.122 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.122 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.122 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.122 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.122 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.123 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.123 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.123 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.123 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.123 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.123 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.124 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.124 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.124 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.124 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.124 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.125 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.125 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.125 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.125 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.125 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.126 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.126 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.126 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.126 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.126 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.126 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.127 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.127 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.127 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.127 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.127 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.127 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.128 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.128 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.128 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.128 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.128 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.128 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.129 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.129 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.129 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.129 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.129 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.130 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.130 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.130 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.130 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.130 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.130 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.131 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.131 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.131 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.131 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.131 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.132 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.132 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.132 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.132 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.132 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.133 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.133 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.133 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.133 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.133 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.133 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.134 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.134 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.134 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.134 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.135 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.135 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.135 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.135 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.135 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.136 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.136 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.136 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.136 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.136 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.137 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.137 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.137 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.138 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.138 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.138 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.138 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.138 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.139 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.139 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.139 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.139 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.139 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.139 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.140 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.140 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.140 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.140 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.140 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.141 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.141 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.141 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.141 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.141 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.141 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.142 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.142 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.142 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.143 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.143 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.143 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.144 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.144 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.144 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.144 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.144 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.145 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.145 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.145 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.145 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.145 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.145 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.146 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.146 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.146 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.146 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.146 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.147 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.147 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.147 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.147 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.147 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.148 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.148 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.148 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.148 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.148 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.148 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.149 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.149 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.149 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.149 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.149 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.150 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.150 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.150 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.150 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.150 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.150 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.151 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.151 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.151 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.151 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.151 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.152 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.152 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.152 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.152 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.152 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.152 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.153 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.153 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.153 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.153 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.153 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.154 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.154 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.154 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.154 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.154 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.154 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.155 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.155 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.155 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.155 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.155 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.156 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.156 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.156 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.156 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.156 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.157 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.157 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.157 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.157 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.157 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.157 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.158 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.158 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.158 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.158 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.158 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.159 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.159 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.159 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.159 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.159 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.159 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.160 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.160 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.160 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.160 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.160 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.161 253665 WARNING oslo_config.cfg [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 22 03:53:02 np0005532048 nova_compute[253661]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 22 03:53:02 np0005532048 nova_compute[253661]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 22 03:53:02 np0005532048 nova_compute[253661]: and ``live_migration_inbound_addr`` respectively.
Nov 22 03:53:02 np0005532048 nova_compute[253661]: ).  Its value may be silently ignored in the future.#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.161 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.161 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.161 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.162 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.162 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.162 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.162 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.162 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.163 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.163 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.163 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.163 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.163 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.164 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.164 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.164 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.164 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.164 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.164 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rbd_secret_uuid        = 34829716-a12c-57a6-8915-c1aa615c9d8a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.165 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.165 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.165 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.165 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.165 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.166 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.166 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.166 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.166 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.166 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.167 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.167 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.167 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.167 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.167 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.168 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.168 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.168 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.168 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.168 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.169 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.169 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.169 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.169 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.169 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.169 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.170 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.170 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.170 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.170 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.170 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.171 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.171 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.171 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.171 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.171 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.171 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.171 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.172 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.172 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.172 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.172 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.172 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.172 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.172 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.173 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.173 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.173 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.173 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.173 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.173 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.173 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.174 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.174 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.174 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.174 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.174 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.174 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.175 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.175 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.175 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.175 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.175 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.175 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.176 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.176 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.176 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.176 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.176 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.176 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.176 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.177 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.177 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.177 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.177 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.177 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.177 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.177 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.178 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.178 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.178 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.178 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.178 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.178 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.178 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.179 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.179 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.179 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.179 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.179 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.179 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.179 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.180 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.180 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.180 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.180 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.180 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.180 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.180 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.181 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.181 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.181 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.181 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.181 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.181 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.181 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.182 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.182 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.182 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.182 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.182 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.182 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.182 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.183 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.183 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.183 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.183 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.183 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.183 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.183 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.184 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.184 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.184 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.184 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.184 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.185 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.185 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.185 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.185 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.185 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.185 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.186 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.186 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.186 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.186 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.186 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.186 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.186 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.187 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.187 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.187 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.187 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.187 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.188 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.188 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.188 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.188 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.188 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.188 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.189 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.189 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.189 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.189 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.189 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.189 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.190 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.190 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.190 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.190 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.190 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.190 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.191 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.191 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.191 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.191 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.191 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.191 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.192 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.192 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.192 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.192 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.192 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.192 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.192 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.193 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.193 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.193 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.193 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.193 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.194 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.194 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.194 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.194 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.194 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.194 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.194 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.195 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.195 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.195 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.195 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.195 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.195 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.196 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.196 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.196 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.196 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.196 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.196 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.196 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.197 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.197 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.197 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.197 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.197 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.197 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.197 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.198 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.198 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.198 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.198 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.198 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.198 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.198 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.199 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.199 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.199 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.199 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.199 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.199 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.200 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.200 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.200 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.200 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.200 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.200 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.200 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.201 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.201 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.201 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.201 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.201 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.201 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.202 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.202 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.202 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.202 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.202 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.202 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.203 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.203 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.203 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.203 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.203 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.203 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.203 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.204 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.204 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.204 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.204 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.204 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.204 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.204 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.205 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.205 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.205 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.205 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.205 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.205 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.206 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.206 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.206 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.206 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.206 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.207 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.207 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.207 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.207 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.207 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.207 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.207 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.208 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.208 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.208 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.208 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.208 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.209 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.209 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.209 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.209 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.209 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.209 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.210 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.210 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.210 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.210 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.210 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.210 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.211 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.211 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.211 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.211 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.211 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.212 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.212 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.212 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.212 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.212 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.212 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.212 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.213 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.213 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.213 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.213 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.213 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.213 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.214 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.214 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.214 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.214 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.214 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.214 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.215 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.215 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.215 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.215 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.215 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.215 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.216 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.216 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.216 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.216 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.216 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.216 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.217 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.217 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.217 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.217 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.217 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.217 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.217 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.217 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.218 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.218 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.218 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.218 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.218 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.218 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.218 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.218 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.219 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.219 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.219 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.219 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.219 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.219 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.220 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.220 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.220 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.220 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.220 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.220 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.221 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.221 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.221 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.221 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.221 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.221 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.222 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.222 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.222 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.222 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.222 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.222 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.223 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.223 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.223 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.223 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.223 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.223 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.224 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.224 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.224 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.224 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.224 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.224 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.224 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.225 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.225 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.225 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.225 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.225 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.225 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.226 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.226 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.226 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.226 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.226 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.226 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.226 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.226 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.227 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.227 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.227 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.227 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.227 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.227 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.227 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.228 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.228 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.228 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.228 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.228 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.228 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.229 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.229 253665 DEBUG oslo_service.service [None req-b397328b-c002-46c0-8f1f-0a2b0bcf91ae - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.230 253665 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.241 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.242 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.242 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.242 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 22 03:53:02 np0005532048 systemd[1]: Starting libvirt QEMU daemon...
Nov 22 03:53:02 np0005532048 systemd[1]: Started libvirt QEMU daemon.
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.332 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb134462820> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.336 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb134462820> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.337 253665 INFO nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 22 03:53:02 np0005532048 podman[254231]: 2025-11-22 08:53:02.340231246 +0000 UTC m=+0.051712202 container create 11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.358 253665 WARNING nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 22 03:53:02 np0005532048 nova_compute[253661]: 2025-11-22 08:53:02.359 253665 DEBUG nova.virt.libvirt.volume.mount [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 22 03:53:02 np0005532048 systemd[1]: Started libpod-conmon-11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5.scope.
Nov 22 03:53:02 np0005532048 podman[254231]: 2025-11-22 08:53:02.314949955 +0000 UTC m=+0.026430951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:53:02 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:53:02 np0005532048 podman[254231]: 2025-11-22 08:53:02.451569621 +0000 UTC m=+0.163050607 container init 11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:53:02 np0005532048 podman[254231]: 2025-11-22 08:53:02.462611522 +0000 UTC m=+0.174092488 container start 11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:53:02 np0005532048 podman[254231]: 2025-11-22 08:53:02.469049951 +0000 UTC m=+0.180530917 container attach 11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:53:02 np0005532048 dazzling_saha[254268]: 167 167
Nov 22 03:53:02 np0005532048 systemd[1]: libpod-11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5.scope: Deactivated successfully.
Nov 22 03:53:02 np0005532048 podman[254231]: 2025-11-22 08:53:02.472498455 +0000 UTC m=+0.183979421 container died 11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:53:02 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f4530dcd359ddc3bf11ba45add225fec7c3244e3228f2e092317a45ccf4ca61e-merged.mount: Deactivated successfully.
Nov 22 03:53:02 np0005532048 podman[254231]: 2025-11-22 08:53:02.535232486 +0000 UTC m=+0.246713442 container remove 11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 03:53:02 np0005532048 systemd[1]: libpod-conmon-11ab04b884c6b5224ef78c8cca741030b04ebb1976c75292070527429ae86fe5.scope: Deactivated successfully.
Nov 22 03:53:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:02 np0005532048 podman[254298]: 2025-11-22 08:53:02.710156273 +0000 UTC m=+0.046469932 container create f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:53:02 np0005532048 systemd[1]: Started libpod-conmon-f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa.scope.
Nov 22 03:53:02 np0005532048 podman[254298]: 2025-11-22 08:53:02.687375914 +0000 UTC m=+0.023689603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:53:02 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:53:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7490f024a190dd38b6ee3968412493f62ce24bd95349b03cdf992e5c803f1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7490f024a190dd38b6ee3968412493f62ce24bd95349b03cdf992e5c803f1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7490f024a190dd38b6ee3968412493f62ce24bd95349b03cdf992e5c803f1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7490f024a190dd38b6ee3968412493f62ce24bd95349b03cdf992e5c803f1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7490f024a190dd38b6ee3968412493f62ce24bd95349b03cdf992e5c803f1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:02 np0005532048 podman[254298]: 2025-11-22 08:53:02.805936707 +0000 UTC m=+0.142250386 container init f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:53:02 np0005532048 podman[254298]: 2025-11-22 08:53:02.813359119 +0000 UTC m=+0.149672788 container start f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 03:53:02 np0005532048 podman[254298]: 2025-11-22 08:53:02.818207438 +0000 UTC m=+0.154521137 container attach f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_blackwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.191 253665 INFO nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Libvirt host capabilities <capabilities>
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <host>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <uuid>02722e9f-996f-4a01-8f30-68e10821087c</uuid>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <cpu>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <arch>x86_64</arch>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model>EPYC-Rome-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <vendor>AMD</vendor>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <microcode version='16777317'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <signature family='23' model='49' stepping='0'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='x2apic'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='tsc-deadline'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='osxsave'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='hypervisor'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='tsc_adjust'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='spec-ctrl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='stibp'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='arch-capabilities'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='ssbd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='cmp_legacy'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='topoext'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='virt-ssbd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='lbrv'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='tsc-scale'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='vmcb-clean'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='pause-filter'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='pfthreshold'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='svme-addr-chk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='rdctl-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='skip-l1dfl-vmentry'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='mds-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature name='pschange-mc-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <pages unit='KiB' size='4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <pages unit='KiB' size='2048'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <pages unit='KiB' size='1048576'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </cpu>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <power_management>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <suspend_mem/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </power_management>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <iommu support='no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <migration_features>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <live/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <uri_transports>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <uri_transport>tcp</uri_transport>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <uri_transport>rdma</uri_transport>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </uri_transports>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </migration_features>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <topology>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <cells num='1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <cell id='0'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:          <memory unit='KiB'>7864312</memory>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:          <pages unit='KiB' size='4'>1966078</pages>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:          <pages unit='KiB' size='2048'>0</pages>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:          <distances>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:            <sibling id='0' value='10'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:          </distances>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:          <cpus num='8'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:          </cpus>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        </cell>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </cells>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </topology>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <cache>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </cache>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <secmodel>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model>selinux</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <doi>0</doi>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </secmodel>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <secmodel>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model>dac</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <doi>0</doi>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </secmodel>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </host>
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <guest>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <os_type>hvm</os_type>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <arch name='i686'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <wordsize>32</wordsize>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <domain type='qemu'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <domain type='kvm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </arch>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <features>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <pae/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <nonpae/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <acpi default='on' toggle='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <apic default='on' toggle='no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <cpuselection/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <deviceboot/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <disksnapshot default='on' toggle='no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <externalSnapshot/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </features>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </guest>
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <guest>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <os_type>hvm</os_type>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <arch name='x86_64'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <wordsize>64</wordsize>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <domain type='qemu'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <domain type='kvm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </arch>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <features>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <acpi default='on' toggle='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <apic default='on' toggle='no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <cpuselection/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <deviceboot/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <disksnapshot default='on' toggle='no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <externalSnapshot/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </features>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </guest>
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 
Nov 22 03:53:03 np0005532048 nova_compute[253661]: </capabilities>
Nov 22 03:53:03 np0005532048 nova_compute[253661]: #033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.203 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.230 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 22 03:53:03 np0005532048 nova_compute[253661]: <domainCapabilities>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <path>/usr/libexec/qemu-kvm</path>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <domain>kvm</domain>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <arch>i686</arch>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <vcpu max='240'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <iothreads supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <os supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <enum name='firmware'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <loader supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>rom</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pflash</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='readonly'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>yes</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>no</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='secure'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>no</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </loader>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </os>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <cpu>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <mode name='host-passthrough' supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='hostPassthroughMigratable'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>on</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>off</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </mode>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <mode name='maximum' supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='maximumMigratable'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>on</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>off</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </mode>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <mode name='host-model' supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <vendor>AMD</vendor>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='x2apic'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='tsc-deadline'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='hypervisor'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='tsc_adjust'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='spec-ctrl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='stibp'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='ssbd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='cmp_legacy'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='overflow-recov'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='succor'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='ibrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='amd-ssbd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='virt-ssbd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='lbrv'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='tsc-scale'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='vmcb-clean'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='flushbyasid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='pause-filter'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='pfthreshold'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='svme-addr-chk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='disable' name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </mode>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <mode name='custom' supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-noTSX'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v5'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cooperlake'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cooperlake-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cooperlake-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Denverton'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mpx'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Denverton-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mpx'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Denverton-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Denverton-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Dhyana-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Genoa'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amd-psfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='auto-ibrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='no-nested-data-bp'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='null-sel-clr-base'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='stibp-always-on'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Genoa-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amd-psfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='auto-ibrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='no-nested-data-bp'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='null-sel-clr-base'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='stibp-always-on'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Milan'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Milan-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Milan-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amd-psfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='no-nested-data-bp'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='null-sel-clr-base'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='stibp-always-on'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Rome'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Rome-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Rome-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Rome-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='GraniteRapids'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='prefetchiti'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='GraniteRapids-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='prefetchiti'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='GraniteRapids-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx10'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx10-128'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx10-256'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx10-512'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='prefetchiti'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-noTSX'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-noTSX'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v5'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v6'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v7'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='IvyBridge'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='IvyBridge-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='IvyBridge-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='IvyBridge-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='KnightsMill'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-4fmaps'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-4vnniw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512er'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512pf'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='KnightsMill-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-4fmaps'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-4vnniw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512er'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512pf'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Opteron_G4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fma4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xop'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Opteron_G4-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fma4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xop'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Opteron_G5'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fma4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tbm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xop'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Opteron_G5-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fma4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tbm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xop'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SapphireRapids'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SapphireRapids-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SapphireRapids-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SapphireRapids-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SierraForest'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-ne-convert'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cmpccxadd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SierraForest-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-ne-convert'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cmpccxadd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v5'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='core-capability'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mpx'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='split-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='core-capability'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mpx'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='split-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='core-capability'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='split-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='core-capability'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='split-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='athlon'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnow'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnowext'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='athlon-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnow'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnowext'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='core2duo'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='core2duo-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='coreduo'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='coreduo-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='n270'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='n270-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='phenom'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnow'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnowext'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='phenom-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnow'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnowext'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </mode>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <memoryBacking supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <enum name='sourceType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>file</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>anonymous</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>memfd</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </memoryBacking>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <devices>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <disk supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='diskDevice'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>disk</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>cdrom</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>floppy</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>lun</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='bus'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>ide</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>fdc</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>scsi</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>usb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>sata</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio-transitional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio-non-transitional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </disk>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <graphics supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vnc</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>egl-headless</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>dbus</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <video supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='modelType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vga</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>cirrus</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>none</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>bochs</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>ramfb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </video>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <hostdev supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='mode'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>subsystem</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='startupPolicy'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>default</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>mandatory</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>requisite</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>optional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='subsysType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>usb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pci</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>scsi</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='capsType'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='pciBackend'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </hostdev>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <rng supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio-transitional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio-non-transitional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendModel'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>random</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>egd</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>builtin</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </rng>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <filesystem supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='driverType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>path</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>handle</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtiofs</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </filesystem>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <tpm supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tpm-tis</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tpm-crb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendModel'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>emulator</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>external</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendVersion'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>2.0</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </tpm>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <redirdev supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='bus'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>usb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </redirdev>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <channel supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pty</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>unix</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </channel>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <crypto supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>qemu</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendModel'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>builtin</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </crypto>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <interface supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>default</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>passt</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </interface>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <panic supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>isa</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>hyperv</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </panic>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <console supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>null</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vc</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pty</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>dev</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>file</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pipe</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>stdio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>udp</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tcp</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>unix</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>qemu-vdagent</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>dbus</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </console>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </devices>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <features>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <gic supported='no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <vmcoreinfo supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <genid supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <backingStoreInput supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <backup supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <async-teardown supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <ps2 supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <sev supported='no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <sgx supported='no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <hyperv supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='features'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>relaxed</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vapic</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>spinlocks</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vpindex</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>runtime</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>synic</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>stimer</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>reset</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vendor_id</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>frequencies</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>reenlightenment</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tlbflush</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>ipi</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>avic</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>emsr_bitmap</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>xmm_input</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <defaults>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <spinlocks>4095</spinlocks>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <stimer_direct>on</stimer_direct>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <tlbflush_direct>on</tlbflush_direct>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <tlbflush_extended>on</tlbflush_extended>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </defaults>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </hyperv>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <launchSecurity supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='sectype'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tdx</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </launchSecurity>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </features>
Nov 22 03:53:03 np0005532048 nova_compute[253661]: </domainCapabilities>
Nov 22 03:53:03 np0005532048 nova_compute[253661]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.242 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 22 03:53:03 np0005532048 nova_compute[253661]: <domainCapabilities>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <path>/usr/libexec/qemu-kvm</path>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <domain>kvm</domain>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <arch>i686</arch>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <vcpu max='4096'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <iothreads supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <os supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <enum name='firmware'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <loader supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>rom</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pflash</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='readonly'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>yes</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>no</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='secure'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>no</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </loader>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </os>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <cpu>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <mode name='host-passthrough' supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='hostPassthroughMigratable'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>on</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>off</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </mode>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <mode name='maximum' supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='maximumMigratable'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>on</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>off</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </mode>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <mode name='host-model' supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <vendor>AMD</vendor>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='x2apic'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='tsc-deadline'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='hypervisor'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='tsc_adjust'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='spec-ctrl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='stibp'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='ssbd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='cmp_legacy'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='overflow-recov'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='succor'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='ibrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='amd-ssbd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='virt-ssbd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='lbrv'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='tsc-scale'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='vmcb-clean'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='flushbyasid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='pause-filter'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='pfthreshold'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='svme-addr-chk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='disable' name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </mode>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <mode name='custom' supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-noTSX'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v5'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cooperlake'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cooperlake-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cooperlake-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Denverton'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mpx'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Denverton-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mpx'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Denverton-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Denverton-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Dhyana-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Genoa'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amd-psfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='auto-ibrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='no-nested-data-bp'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='null-sel-clr-base'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='stibp-always-on'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Genoa-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amd-psfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='auto-ibrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='no-nested-data-bp'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='null-sel-clr-base'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='stibp-always-on'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Milan'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Milan-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Milan-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amd-psfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='no-nested-data-bp'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='null-sel-clr-base'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='stibp-always-on'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Rome'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Rome-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Rome-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Rome-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='GraniteRapids'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='prefetchiti'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='GraniteRapids-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='prefetchiti'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='GraniteRapids-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx10'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx10-128'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx10-256'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx10-512'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='prefetchiti'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-noTSX'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-noTSX'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v5'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v6'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v7'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='IvyBridge'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='IvyBridge-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='IvyBridge-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='IvyBridge-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='KnightsMill'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-4fmaps'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-4vnniw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512er'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512pf'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='KnightsMill-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-4fmaps'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-4vnniw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512er'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512pf'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Opteron_G4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fma4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xop'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Opteron_G4-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fma4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xop'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Opteron_G5'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fma4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tbm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xop'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Opteron_G5-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fma4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tbm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xop'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SapphireRapids'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SapphireRapids-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SapphireRapids-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SapphireRapids-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SierraForest'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-ne-convert'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cmpccxadd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SierraForest-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-ne-convert'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cmpccxadd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v5'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='core-capability'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mpx'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='split-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='core-capability'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mpx'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='split-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='core-capability'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='split-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='core-capability'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='split-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='athlon'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnow'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnowext'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='athlon-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnow'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnowext'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='core2duo'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='core2duo-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='coreduo'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='coreduo-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='n270'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='n270-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='phenom'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnow'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnowext'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='phenom-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnow'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnowext'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </mode>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <memoryBacking supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <enum name='sourceType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>file</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>anonymous</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>memfd</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </memoryBacking>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <devices>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <disk supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='diskDevice'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>disk</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>cdrom</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>floppy</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>lun</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='bus'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>fdc</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>scsi</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>usb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>sata</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio-transitional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio-non-transitional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </disk>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <graphics supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vnc</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>egl-headless</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>dbus</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <video supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='modelType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vga</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>cirrus</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>none</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>bochs</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>ramfb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </video>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <hostdev supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='mode'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>subsystem</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='startupPolicy'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>default</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>mandatory</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>requisite</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>optional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='subsysType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>usb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pci</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>scsi</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='capsType'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='pciBackend'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </hostdev>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <rng supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio-transitional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio-non-transitional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendModel'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>random</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>egd</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>builtin</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </rng>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <filesystem supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='driverType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>path</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>handle</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtiofs</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </filesystem>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <tpm supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tpm-tis</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tpm-crb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendModel'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>emulator</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>external</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendVersion'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>2.0</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </tpm>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <redirdev supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='bus'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>usb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </redirdev>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <channel supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pty</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>unix</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </channel>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <crypto supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>qemu</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendModel'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>builtin</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </crypto>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <interface supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>default</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>passt</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </interface>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <panic supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>isa</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>hyperv</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </panic>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <console supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>null</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vc</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pty</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>dev</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>file</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pipe</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>stdio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>udp</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tcp</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>unix</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>qemu-vdagent</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>dbus</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </console>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </devices>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <features>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <gic supported='no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <vmcoreinfo supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <genid supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <backingStoreInput supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <backup supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <async-teardown supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <ps2 supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <sev supported='no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <sgx supported='no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <hyperv supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='features'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>relaxed</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vapic</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>spinlocks</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vpindex</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>runtime</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>synic</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>stimer</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>reset</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vendor_id</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>frequencies</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>reenlightenment</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tlbflush</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>ipi</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>avic</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>emsr_bitmap</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>xmm_input</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <defaults>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <spinlocks>4095</spinlocks>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <stimer_direct>on</stimer_direct>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <tlbflush_direct>on</tlbflush_direct>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <tlbflush_extended>on</tlbflush_extended>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </defaults>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </hyperv>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <launchSecurity supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='sectype'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tdx</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </launchSecurity>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </features>
Nov 22 03:53:03 np0005532048 nova_compute[253661]: </domainCapabilities>
Nov 22 03:53:03 np0005532048 nova_compute[253661]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.268 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.276 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 22 03:53:03 np0005532048 nova_compute[253661]: <domainCapabilities>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <path>/usr/libexec/qemu-kvm</path>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <domain>kvm</domain>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <arch>x86_64</arch>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <vcpu max='240'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <iothreads supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <os supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <enum name='firmware'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <loader supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>rom</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pflash</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='readonly'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>yes</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>no</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='secure'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>no</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </loader>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </os>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <cpu>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <mode name='host-passthrough' supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='hostPassthroughMigratable'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>on</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>off</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </mode>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <mode name='maximum' supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='maximumMigratable'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>on</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>off</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </mode>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <mode name='host-model' supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <vendor>AMD</vendor>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='x2apic'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='tsc-deadline'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='hypervisor'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='tsc_adjust'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='spec-ctrl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='stibp'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='ssbd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='cmp_legacy'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='overflow-recov'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='succor'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='ibrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='amd-ssbd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='virt-ssbd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='lbrv'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='tsc-scale'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='vmcb-clean'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='flushbyasid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='pause-filter'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='pfthreshold'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='svme-addr-chk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='disable' name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </mode>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <mode name='custom' supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-noTSX'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v5'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cooperlake'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cooperlake-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cooperlake-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Denverton'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mpx'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Denverton-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mpx'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Denverton-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Denverton-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Dhyana-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Genoa'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amd-psfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='auto-ibrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='no-nested-data-bp'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='null-sel-clr-base'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='stibp-always-on'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Genoa-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amd-psfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='auto-ibrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='no-nested-data-bp'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='null-sel-clr-base'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='stibp-always-on'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Milan'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Milan-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Milan-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amd-psfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='no-nested-data-bp'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='null-sel-clr-base'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='stibp-always-on'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Rome'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Rome-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Rome-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Rome-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='GraniteRapids'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='prefetchiti'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='GraniteRapids-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='prefetchiti'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='GraniteRapids-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx10'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx10-128'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx10-256'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx10-512'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='prefetchiti'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-noTSX'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-noTSX'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v5'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v6'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v7'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='IvyBridge'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='IvyBridge-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='IvyBridge-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='IvyBridge-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='KnightsMill'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-4fmaps'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-4vnniw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512er'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512pf'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='KnightsMill-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-4fmaps'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-4vnniw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512er'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512pf'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Opteron_G4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fma4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xop'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Opteron_G4-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fma4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xop'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Opteron_G5'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fma4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tbm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xop'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Opteron_G5-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fma4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tbm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xop'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SapphireRapids'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SapphireRapids-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SapphireRapids-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SapphireRapids-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SierraForest'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-ne-convert'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cmpccxadd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SierraForest-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-ne-convert'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cmpccxadd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v5'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='core-capability'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mpx'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='split-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='core-capability'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mpx'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='split-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='core-capability'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='split-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='core-capability'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='split-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='athlon'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnow'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnowext'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='athlon-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnow'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnowext'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='core2duo'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='core2duo-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='coreduo'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='coreduo-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='n270'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='n270-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='phenom'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnow'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnowext'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='phenom-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnow'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnowext'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </mode>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <memoryBacking supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <enum name='sourceType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>file</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>anonymous</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>memfd</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </memoryBacking>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <devices>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <disk supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='diskDevice'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>disk</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>cdrom</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>floppy</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>lun</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='bus'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>ide</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>fdc</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>scsi</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>usb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>sata</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio-transitional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio-non-transitional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </disk>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <graphics supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vnc</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>egl-headless</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>dbus</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <video supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='modelType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vga</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>cirrus</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>none</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>bochs</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>ramfb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </video>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <hostdev supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='mode'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>subsystem</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='startupPolicy'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>default</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>mandatory</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>requisite</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>optional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='subsysType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>usb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pci</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>scsi</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='capsType'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='pciBackend'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </hostdev>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <rng supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio-transitional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio-non-transitional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendModel'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>random</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>egd</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>builtin</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </rng>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <filesystem supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='driverType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>path</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>handle</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtiofs</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </filesystem>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <tpm supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tpm-tis</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tpm-crb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendModel'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>emulator</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>external</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendVersion'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>2.0</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </tpm>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <redirdev supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='bus'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>usb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </redirdev>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <channel supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pty</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>unix</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </channel>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <crypto supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>qemu</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendModel'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>builtin</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </crypto>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <interface supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>default</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>passt</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </interface>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <panic supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>isa</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>hyperv</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </panic>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <console supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>null</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vc</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pty</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>dev</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>file</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pipe</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>stdio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>udp</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tcp</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>unix</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>qemu-vdagent</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>dbus</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </console>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </devices>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <features>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <gic supported='no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <vmcoreinfo supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <genid supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <backingStoreInput supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <backup supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <async-teardown supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <ps2 supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <sev supported='no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <sgx supported='no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <hyperv supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='features'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>relaxed</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vapic</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>spinlocks</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vpindex</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>runtime</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>synic</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>stimer</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>reset</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vendor_id</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>frequencies</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>reenlightenment</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tlbflush</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>ipi</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>avic</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>emsr_bitmap</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>xmm_input</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <defaults>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <spinlocks>4095</spinlocks>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <stimer_direct>on</stimer_direct>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <tlbflush_direct>on</tlbflush_direct>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <tlbflush_extended>on</tlbflush_extended>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </defaults>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </hyperv>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <launchSecurity supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='sectype'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tdx</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </launchSecurity>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </features>
Nov 22 03:53:03 np0005532048 nova_compute[253661]: </domainCapabilities>
Nov 22 03:53:03 np0005532048 nova_compute[253661]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.339 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 22 03:53:03 np0005532048 nova_compute[253661]: <domainCapabilities>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <path>/usr/libexec/qemu-kvm</path>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <domain>kvm</domain>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <arch>x86_64</arch>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <vcpu max='4096'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <iothreads supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <os supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <enum name='firmware'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>efi</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <loader supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>rom</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pflash</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='readonly'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>yes</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>no</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='secure'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>yes</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>no</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </loader>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </os>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <cpu>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <mode name='host-passthrough' supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='hostPassthroughMigratable'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>on</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>off</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </mode>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <mode name='maximum' supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='maximumMigratable'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>on</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>off</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </mode>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <mode name='host-model' supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <vendor>AMD</vendor>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='x2apic'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='tsc-deadline'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='hypervisor'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='tsc_adjust'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='spec-ctrl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='stibp'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='ssbd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='cmp_legacy'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='overflow-recov'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='succor'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='ibrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='amd-ssbd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='virt-ssbd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='lbrv'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='tsc-scale'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='vmcb-clean'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='flushbyasid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='pause-filter'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='pfthreshold'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='svme-addr-chk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <feature policy='disable' name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </mode>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <mode name='custom' supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-noTSX'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Broadwell-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cascadelake-Server-v5'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cooperlake'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cooperlake-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Cooperlake-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Denverton'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mpx'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Denverton-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mpx'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Denverton-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Denverton-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Dhyana-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Genoa'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amd-psfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='auto-ibrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='no-nested-data-bp'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='null-sel-clr-base'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='stibp-always-on'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Genoa-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amd-psfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='auto-ibrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='no-nested-data-bp'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='null-sel-clr-base'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='stibp-always-on'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Milan'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Milan-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Milan-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amd-psfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='no-nested-data-bp'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='null-sel-clr-base'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='stibp-always-on'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Rome'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Rome-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Rome-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-Rome-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='EPYC-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='GraniteRapids'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='prefetchiti'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='GraniteRapids-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='prefetchiti'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='GraniteRapids-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx10'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx10-128'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx10-256'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx10-512'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='prefetchiti'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-noTSX'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Haswell-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-noTSX'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v5'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v6'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Icelake-Server-v7'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='IvyBridge'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='IvyBridge-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='IvyBridge-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='IvyBridge-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='KnightsMill'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-4fmaps'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-4vnniw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512er'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512pf'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='KnightsMill-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-4fmaps'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-4vnniw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512er'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512pf'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Opteron_G4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fma4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xop'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Opteron_G4-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fma4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xop'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Opteron_G5'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fma4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tbm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xop'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Opteron_G5-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fma4'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tbm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xop'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SapphireRapids'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SapphireRapids-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SapphireRapids-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SapphireRapids-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='amx-tile'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-bf16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-fp16'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512-vpopcntdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bitalg'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vbmi2'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrc'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fzrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='la57'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='taa-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='tsx-ldtrk'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xfd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SierraForest'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-ne-convert'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cmpccxadd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='SierraForest-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-ifma'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-ne-convert'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx-vnni-int8'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='bus-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cmpccxadd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fbsdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='fsrs'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ibrs-all'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mcdt-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pbrsb-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='psdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='sbdr-ssdp-no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='serialize'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vaes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='vpclmulqdq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Client-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='hle'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='rtm'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Skylake-Server-v5'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512bw'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512cd'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512dq'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512f'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='avx512vl'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='invpcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pcid'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='pku'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='core-capability'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mpx'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='split-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='core-capability'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='mpx'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='split-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge-v2'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='core-capability'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='split-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge-v3'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='core-capability'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='split-lock-detect'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='Snowridge-v4'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='cldemote'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='erms'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='gfni'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdir64b'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='movdiri'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='xsaves'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='athlon'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnow'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnowext'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='athlon-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnow'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnowext'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='core2duo'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='core2duo-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='coreduo'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='coreduo-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='n270'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='n270-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='ss'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='phenom'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnow'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnowext'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <blockers model='phenom-v1'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnow'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <feature name='3dnowext'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </blockers>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </mode>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <memoryBacking supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <enum name='sourceType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>file</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>anonymous</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <value>memfd</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </memoryBacking>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <devices>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <disk supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='diskDevice'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>disk</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>cdrom</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>floppy</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>lun</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='bus'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>fdc</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>scsi</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>usb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>sata</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio-transitional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio-non-transitional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </disk>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <graphics supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vnc</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>egl-headless</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>dbus</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <video supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='modelType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vga</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>cirrus</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>none</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>bochs</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>ramfb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </video>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <hostdev supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='mode'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>subsystem</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='startupPolicy'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>default</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>mandatory</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>requisite</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>optional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='subsysType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>usb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pci</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>scsi</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='capsType'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='pciBackend'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </hostdev>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <rng supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio-transitional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtio-non-transitional</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendModel'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>random</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>egd</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>builtin</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </rng>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <filesystem supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='driverType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>path</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>handle</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>virtiofs</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </filesystem>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <tpm supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tpm-tis</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tpm-crb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendModel'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>emulator</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>external</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendVersion'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>2.0</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </tpm>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <redirdev supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='bus'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>usb</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </redirdev>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <channel supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pty</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>unix</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </channel>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <crypto supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>qemu</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendModel'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>builtin</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </crypto>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <interface supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='backendType'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>default</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>passt</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </interface>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <panic supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='model'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>isa</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>hyperv</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </panic>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <console supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='type'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>null</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vc</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pty</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>dev</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>file</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>pipe</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>stdio</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>udp</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tcp</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>unix</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>qemu-vdagent</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>dbus</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </console>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </devices>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  <features>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <gic supported='no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <vmcoreinfo supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <genid supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <backingStoreInput supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <backup supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <async-teardown supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <ps2 supported='yes'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <sev supported='no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <sgx supported='no'/>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <hyperv supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='features'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>relaxed</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vapic</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>spinlocks</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vpindex</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>runtime</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>synic</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>stimer</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>reset</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>vendor_id</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>frequencies</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>reenlightenment</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tlbflush</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>ipi</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>avic</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>emsr_bitmap</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>xmm_input</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <defaults>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <spinlocks>4095</spinlocks>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <stimer_direct>on</stimer_direct>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <tlbflush_direct>on</tlbflush_direct>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <tlbflush_extended>on</tlbflush_extended>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </defaults>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </hyperv>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    <launchSecurity supported='yes'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      <enum name='sectype'>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:        <value>tdx</value>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:      </enum>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:    </launchSecurity>
Nov 22 03:53:03 np0005532048 nova_compute[253661]:  </features>
Nov 22 03:53:03 np0005532048 nova_compute[253661]: </domainCapabilities>
Nov 22 03:53:03 np0005532048 nova_compute[253661]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.405 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.406 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.406 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.406 253665 INFO nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Secure Boot support detected#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.409 253665 INFO nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.410 253665 INFO nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.422 253665 DEBUG nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.452 253665 INFO nova.virt.node [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Determined node identity f0c5987a-d277-4022-aba2-19e7fecb4518 from /var/lib/nova/compute_id#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.467 253665 WARNING nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Compute nodes ['f0c5987a-d277-4022-aba2-19e7fecb4518'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.494 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.527 253665 WARNING nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.528 253665 DEBUG oslo_concurrency.lockutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.528 253665 DEBUG oslo_concurrency.lockutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.528 253665 DEBUG oslo_concurrency.lockutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.528 253665 DEBUG nova.compute.resource_tracker [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.529 253665 DEBUG oslo_concurrency.processutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 03:53:03 np0005532048 keen_blackwell[254314]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:53:03 np0005532048 keen_blackwell[254314]: --> relative data size: 1.0
Nov 22 03:53:03 np0005532048 keen_blackwell[254314]: --> All data devices are unavailable
Nov 22 03:53:03 np0005532048 systemd[1]: libpod-f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa.scope: Deactivated successfully.
Nov 22 03:53:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:53:03 np0005532048 systemd[1]: libpod-f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa.scope: Consumed 1.058s CPU time.
Nov 22 03:53:03 np0005532048 podman[254298]: 2025-11-22 08:53:03.942501548 +0000 UTC m=+1.278815217 container died f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 03:53:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1111935792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:53:03 np0005532048 nova_compute[253661]: 2025-11-22 08:53:03.978 253665 DEBUG oslo_concurrency.processutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 03:53:03 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8e7490f024a190dd38b6ee3968412493f62ce24bd95349b03cdf992e5c803f1e-merged.mount: Deactivated successfully.
Nov 22 03:53:04 np0005532048 systemd[1]: Starting libvirt nodedev daemon...
Nov 22 03:53:04 np0005532048 podman[254298]: 2025-11-22 08:53:04.037420859 +0000 UTC m=+1.373734528 container remove f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_blackwell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 03:53:04 np0005532048 systemd[1]: libpod-conmon-f6fd9422caf4d3de3d5dc88cb976261a424daa91065afd8bcd7d40c3e0e1affa.scope: Deactivated successfully.
Nov 22 03:53:04 np0005532048 systemd[1]: Started libvirt nodedev daemon.
Nov 22 03:53:04 np0005532048 nova_compute[253661]: 2025-11-22 08:53:04.328 253665 WARNING nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 03:53:04 np0005532048 nova_compute[253661]: 2025-11-22 08:53:04.330 253665 DEBUG nova.compute.resource_tracker [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5174MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 03:53:04 np0005532048 nova_compute[253661]: 2025-11-22 08:53:04.330 253665 DEBUG oslo_concurrency.lockutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:53:04 np0005532048 nova_compute[253661]: 2025-11-22 08:53:04.331 253665 DEBUG oslo_concurrency.lockutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:53:04 np0005532048 nova_compute[253661]: 2025-11-22 08:53:04.342 253665 WARNING nova.compute.resource_tracker [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] No compute node record for compute-0.ctlplane.example.com:f0c5987a-d277-4022-aba2-19e7fecb4518: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host f0c5987a-d277-4022-aba2-19e7fecb4518 could not be found.#033[00m
Nov 22 03:53:04 np0005532048 nova_compute[253661]: 2025-11-22 08:53:04.357 253665 INFO nova.compute.resource_tracker [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: f0c5987a-d277-4022-aba2-19e7fecb4518#033[00m
Nov 22 03:53:04 np0005532048 nova_compute[253661]: 2025-11-22 08:53:04.425 253665 DEBUG nova.compute.resource_tracker [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 03:53:04 np0005532048 nova_compute[253661]: 2025-11-22 08:53:04.425 253665 DEBUG nova.compute.resource_tracker [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 03:53:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:04 np0005532048 podman[254555]: 2025-11-22 08:53:04.656123809 +0000 UTC m=+0.025483507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:53:04 np0005532048 podman[254555]: 2025-11-22 08:53:04.760139474 +0000 UTC m=+0.129499172 container create 115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 03:53:04 np0005532048 systemd[1]: Started libpod-conmon-115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019.scope.
Nov 22 03:53:04 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:53:04 np0005532048 podman[254555]: 2025-11-22 08:53:04.877167069 +0000 UTC m=+0.246526787 container init 115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:53:04 np0005532048 podman[254555]: 2025-11-22 08:53:04.885711319 +0000 UTC m=+0.255071027 container start 115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:53:04 np0005532048 podman[254555]: 2025-11-22 08:53:04.889674957 +0000 UTC m=+0.259034685 container attach 115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 03:53:04 np0005532048 angry_shockley[254571]: 167 167
Nov 22 03:53:04 np0005532048 systemd[1]: libpod-115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019.scope: Deactivated successfully.
Nov 22 03:53:04 np0005532048 podman[254555]: 2025-11-22 08:53:04.892287281 +0000 UTC m=+0.261646979 container died 115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:53:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0055add2d4c9f67f69ca8cea567759fe837662567c495813a849d049ba1a28c7-merged.mount: Deactivated successfully.
Nov 22 03:53:04 np0005532048 podman[254555]: 2025-11-22 08:53:04.947841395 +0000 UTC m=+0.317201093 container remove 115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shockley, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:53:04 np0005532048 systemd[1]: libpod-conmon-115ca2a8aca93c222e976d3eacbcec5d9f8e7d9c866671e7a0226950868df019.scope: Deactivated successfully.
Nov 22 03:53:05 np0005532048 podman[254595]: 2025-11-22 08:53:05.137083504 +0000 UTC m=+0.066470994 container create 15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:53:05 np0005532048 systemd[1]: Started libpod-conmon-15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78.scope.
Nov 22 03:53:05 np0005532048 podman[254595]: 2025-11-22 08:53:05.095580145 +0000 UTC m=+0.024967665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:53:05 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:53:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f48adf92f464b28152f85ca2b7152e7ad311da9d81374567bb6eca24b5477a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f48adf92f464b28152f85ca2b7152e7ad311da9d81374567bb6eca24b5477a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f48adf92f464b28152f85ca2b7152e7ad311da9d81374567bb6eca24b5477a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f48adf92f464b28152f85ca2b7152e7ad311da9d81374567bb6eca24b5477a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:05 np0005532048 podman[254595]: 2025-11-22 08:53:05.231459283 +0000 UTC m=+0.160846803 container init 15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 03:53:05 np0005532048 podman[254595]: 2025-11-22 08:53:05.239501121 +0000 UTC m=+0.168888611 container start 15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:53:05 np0005532048 podman[254595]: 2025-11-22 08:53:05.243387435 +0000 UTC m=+0.172774925 container attach 15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:53:05 np0005532048 nova_compute[253661]: 2025-11-22 08:53:05.248 253665 INFO nova.scheduler.client.report [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [req-877c3a5b-cb07-4895-8df9-ac0149f5dab8] Created resource provider record via placement API for resource provider with UUID f0c5987a-d277-4022-aba2-19e7fecb4518 and name compute-0.ctlplane.example.com.#033[00m
Nov 22 03:53:05 np0005532048 nova_compute[253661]: 2025-11-22 08:53:05.612 253665 DEBUG oslo_concurrency.processutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 03:53:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:53:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1370162445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]: {
Nov 22 03:53:06 np0005532048 nova_compute[253661]: 2025-11-22 08:53:06.064 253665 DEBUG oslo_concurrency.processutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:    "0": [
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:        {
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "devices": [
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "/dev/loop3"
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            ],
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "lv_name": "ceph_lv0",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "lv_size": "21470642176",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "name": "ceph_lv0",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "tags": {
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.cluster_name": "ceph",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.crush_device_class": "",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.encrypted": "0",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.osd_id": "0",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.type": "block",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.vdo": "0"
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            },
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "type": "block",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "vg_name": "ceph_vg0"
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:        }
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:    ],
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:    "1": [
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:        {
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "devices": [
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "/dev/loop4"
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            ],
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "lv_name": "ceph_lv1",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "lv_size": "21470642176",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "name": "ceph_lv1",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "tags": {
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.cluster_name": "ceph",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.crush_device_class": "",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.encrypted": "0",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.osd_id": "1",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.type": "block",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.vdo": "0"
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            },
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "type": "block",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "vg_name": "ceph_vg1"
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:        }
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:    ],
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:    "2": [
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:        {
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "devices": [
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "/dev/loop5"
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            ],
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "lv_name": "ceph_lv2",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "lv_size": "21470642176",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "name": "ceph_lv2",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "tags": {
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.cluster_name": "ceph",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.crush_device_class": "",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.encrypted": "0",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.osd_id": "2",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.type": "block",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:                "ceph.vdo": "0"
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            },
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "type": "block",
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:            "vg_name": "ceph_vg2"
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:        }
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]:    ]
Nov 22 03:53:06 np0005532048 condescending_elgamal[254611]: }
Nov 22 03:53:06 np0005532048 nova_compute[253661]: 2025-11-22 08:53:06.071 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 22 03:53:06 np0005532048 nova_compute[253661]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 22 03:53:06 np0005532048 nova_compute[253661]: 2025-11-22 08:53:06.071 253665 INFO nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] kernel doesn't support AMD SEV#033[00m
Nov 22 03:53:06 np0005532048 nova_compute[253661]: 2025-11-22 08:53:06.072 253665 DEBUG nova.compute.provider_tree [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 03:53:06 np0005532048 nova_compute[253661]: 2025-11-22 08:53:06.072 253665 DEBUG nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 03:53:06 np0005532048 systemd[1]: libpod-15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78.scope: Deactivated successfully.
Nov 22 03:53:06 np0005532048 podman[254595]: 2025-11-22 08:53:06.106745315 +0000 UTC m=+1.036132805 container died 15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:53:06 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7f48adf92f464b28152f85ca2b7152e7ad311da9d81374567bb6eca24b5477a4-merged.mount: Deactivated successfully.
Nov 22 03:53:06 np0005532048 nova_compute[253661]: 2025-11-22 08:53:06.148 253665 DEBUG nova.scheduler.client.report [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Updated inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 22 03:53:06 np0005532048 nova_compute[253661]: 2025-11-22 08:53:06.149 253665 DEBUG nova.compute.provider_tree [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Updating resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 22 03:53:06 np0005532048 nova_compute[253661]: 2025-11-22 08:53:06.150 253665 DEBUG nova.compute.provider_tree [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 03:53:06 np0005532048 podman[254595]: 2025-11-22 08:53:06.191298702 +0000 UTC m=+1.120686192 container remove 15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:53:06 np0005532048 systemd[1]: libpod-conmon-15681e6f234f356f54761c9cf0e53bc9fd623bb022a18ecd6439635db4ba1e78.scope: Deactivated successfully.
Nov 22 03:53:06 np0005532048 nova_compute[253661]: 2025-11-22 08:53:06.386 253665 DEBUG nova.compute.provider_tree [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Updating resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 22 03:53:06 np0005532048 nova_compute[253661]: 2025-11-22 08:53:06.416 253665 DEBUG nova.compute.resource_tracker [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 03:53:06 np0005532048 nova_compute[253661]: 2025-11-22 08:53:06.417 253665 DEBUG oslo_concurrency.lockutils [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:53:06 np0005532048 nova_compute[253661]: 2025-11-22 08:53:06.417 253665 DEBUG nova.service [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 22 03:53:06 np0005532048 nova_compute[253661]: 2025-11-22 08:53:06.471 253665 DEBUG nova.service [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 22 03:53:06 np0005532048 nova_compute[253661]: 2025-11-22 08:53:06.472 253665 DEBUG nova.servicegroup.drivers.db [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 22 03:53:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:06 np0005532048 podman[254797]: 2025-11-22 08:53:06.835409076 +0000 UTC m=+0.041202683 container create f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 03:53:06 np0005532048 systemd[1]: Started libpod-conmon-f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd.scope.
Nov 22 03:53:06 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:53:06 np0005532048 podman[254797]: 2025-11-22 08:53:06.817432765 +0000 UTC m=+0.023226412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:53:06 np0005532048 podman[254797]: 2025-11-22 08:53:06.927050427 +0000 UTC m=+0.132844064 container init f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:53:06 np0005532048 podman[254797]: 2025-11-22 08:53:06.936441098 +0000 UTC m=+0.142234715 container start f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 22 03:53:06 np0005532048 busy_shockley[254813]: 167 167
Nov 22 03:53:06 np0005532048 systemd[1]: libpod-f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd.scope: Deactivated successfully.
Nov 22 03:53:06 np0005532048 podman[254797]: 2025-11-22 08:53:06.944055085 +0000 UTC m=+0.149848702 container attach f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:53:06 np0005532048 conmon[254813]: conmon f090d080f8e88f1d58de <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd.scope/container/memory.events
Nov 22 03:53:06 np0005532048 podman[254797]: 2025-11-22 08:53:06.944970168 +0000 UTC m=+0.150763785 container died f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:53:06 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7b7af3c9f36e0a3be90e80abc25ab5d7ee33993645e27a72bf5164f1001e523d-merged.mount: Deactivated successfully.
Nov 22 03:53:06 np0005532048 podman[254797]: 2025-11-22 08:53:06.986385015 +0000 UTC m=+0.192178632 container remove f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:53:06 np0005532048 systemd[1]: libpod-conmon-f090d080f8e88f1d58def02e0e0da3efa4d1ec2a497df5a9d82c266cfd27b9bd.scope: Deactivated successfully.
Nov 22 03:53:07 np0005532048 podman[254835]: 2025-11-22 08:53:07.180793881 +0000 UTC m=+0.059617125 container create 082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:53:07 np0005532048 systemd[1]: Started libpod-conmon-082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4.scope.
Nov 22 03:53:07 np0005532048 podman[254835]: 2025-11-22 08:53:07.158588935 +0000 UTC m=+0.037412199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:53:07 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:53:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb98fe7abfbf881518b6aac7f8f4aabec1f55a1725971dc6f7bf577fdd489aae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb98fe7abfbf881518b6aac7f8f4aabec1f55a1725971dc6f7bf577fdd489aae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb98fe7abfbf881518b6aac7f8f4aabec1f55a1725971dc6f7bf577fdd489aae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb98fe7abfbf881518b6aac7f8f4aabec1f55a1725971dc6f7bf577fdd489aae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:07 np0005532048 podman[254835]: 2025-11-22 08:53:07.277570208 +0000 UTC m=+0.156393452 container init 082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 03:53:07 np0005532048 podman[254835]: 2025-11-22 08:53:07.284503728 +0000 UTC m=+0.163326962 container start 082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Nov 22 03:53:07 np0005532048 podman[254835]: 2025-11-22 08:53:07.288188599 +0000 UTC m=+0.167011833 container attach 082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]: {
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:        "osd_id": 1,
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:        "type": "bluestore"
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:    },
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:        "osd_id": 0,
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:        "type": "bluestore"
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:    },
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:        "osd_id": 2,
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:        "type": "bluestore"
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]:    }
Nov 22 03:53:08 np0005532048 serene_mcnulty[254851]: }
Nov 22 03:53:08 np0005532048 systemd[1]: libpod-082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4.scope: Deactivated successfully.
Nov 22 03:53:08 np0005532048 podman[254835]: 2025-11-22 08:53:08.364028048 +0000 UTC m=+1.242851282 container died 082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:53:08 np0005532048 systemd[1]: libpod-082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4.scope: Consumed 1.073s CPU time.
Nov 22 03:53:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay-fb98fe7abfbf881518b6aac7f8f4aabec1f55a1725971dc6f7bf577fdd489aae-merged.mount: Deactivated successfully.
Nov 22 03:53:08 np0005532048 podman[254835]: 2025-11-22 08:53:08.704844061 +0000 UTC m=+1.583667295 container remove 082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:53:08 np0005532048 systemd[1]: libpod-conmon-082d0ee2823c07a7f1ff6a63e73bc5415f669df81d3943486818968131582eb4.scope: Deactivated successfully.
Nov 22 03:53:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:53:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:53:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:53:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:53:08 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 7e977a2a-a0f2-48cb-8a16-9a7620a7dd0f does not exist
Nov 22 03:53:08 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d4e26ff9-d998-485e-9dc9-cdb7efc3717b does not exist
Nov 22 03:53:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:09 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:53:09 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:53:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:14 np0005532048 podman[254946]: 2025-11-22 08:53:14.385648348 +0000 UTC m=+0.068136875 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:53:14 np0005532048 podman[254947]: 2025-11-22 08:53:14.391035641 +0000 UTC m=+0.074160213 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 03:53:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:16 np0005532048 podman[254985]: 2025-11-22 08:53:16.410240265 +0000 UTC m=+0.102025037 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 03:53:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:53:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:53:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:53:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:53:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:53:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:53:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:53:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1158647160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:53:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1158647160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:53:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1010212063' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:53:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1010212063' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:53:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3575153798' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:53:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3575153798' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:53:27.934 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:53:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:53:27.935 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:53:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:53:27.935 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:53:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:39 np0005532048 nova_compute[253661]: 2025-11-22 08:53:39.474 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:53:39 np0005532048 nova_compute[253661]: 2025-11-22 08:53:39.491 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:53:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:45 np0005532048 podman[255012]: 2025-11-22 08:53:45.375952547 +0000 UTC m=+0.062414035 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 22 03:53:45 np0005532048 podman[255011]: 2025-11-22 08:53:45.392005786 +0000 UTC m=+0.080821143 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 03:53:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:47 np0005532048 podman[255047]: 2025-11-22 08:53:47.398437831 +0000 UTC m=+0.093641102 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 03:53:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:53:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5753 writes, 24K keys, 5753 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5753 writes, 944 syncs, 6.09 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 9.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 9.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 22 03:53:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:53:52
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'backups', 'default.rgw.log', 'vms', '.mgr', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data']
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:53:52 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:53:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:53:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:55 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:53:55 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1201.2 total, 600.0 interval#012Cumulative writes: 6777 writes, 28K keys, 6777 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6777 writes, 1219 syncs, 5.56 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 22 03:53:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:53:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:54:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5671 writes, 24K keys, 5671 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5671 writes, 874 syncs, 6.49 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Nov 22 03:54:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.231 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.232 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.246 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.246 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.247 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.247 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.247 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.248 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.248 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.248 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.248 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.281 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.282 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.282 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.282 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.283 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 03:54:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:54:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2597109270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.739 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.908 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.909 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5241MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.910 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.910 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.982 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 03:54:01 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.983 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 03:54:02 np0005532048 nova_compute[253661]: 2025-11-22 08:54:01.999 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: [devicehealth INFO root] Check health
Nov 22 03:54:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:54:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/822899044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:54:02 np0005532048 nova_compute[253661]: 2025-11-22 08:54:02.436 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 03:54:02 np0005532048 nova_compute[253661]: 2025-11-22 08:54:02.446 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 03:54:02 np0005532048 nova_compute[253661]: 2025-11-22 08:54:02.461 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 03:54:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:02 np0005532048 nova_compute[253661]: 2025-11-22 08:54:02.620 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 03:54:02 np0005532048 nova_compute[253661]: 2025-11-22 08:54:02.620 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:54:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 22 03:54:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/693235960' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 22 03:54:06 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14349 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 22 03:54:06 np0005532048 ceph-mgr[75315]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 03:54:06 np0005532048 ceph-mgr[75315]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 03:54:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:54:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:54:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:54:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:54:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:54:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:54:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:54:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:54:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:54:10 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:54:10 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:54:11 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 208dda32-f250-4715-8e50-143c19ffc685 does not exist
Nov 22 03:54:11 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 2585fbd6-39f7-4e53-80f5-d3508e0e09fb does not exist
Nov 22 03:54:11 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 72d6b1c9-d24f-481f-96f9-a513f736ca8b does not exist
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.045940) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801651045970, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1605, "num_deletes": 251, "total_data_size": 2620860, "memory_usage": 2653984, "flush_reason": "Manual Compaction"}
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801651123390, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2574802, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14928, "largest_seqno": 16532, "table_properties": {"data_size": 2567379, "index_size": 4430, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14985, "raw_average_key_size": 19, "raw_value_size": 2552545, "raw_average_value_size": 3354, "num_data_blocks": 202, "num_entries": 761, "num_filter_entries": 761, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763801475, "oldest_key_time": 1763801475, "file_creation_time": 1763801651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 77501 microseconds, and 6636 cpu microseconds.
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.123438) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2574802 bytes OK
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.123458) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.164405) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.164464) EVENT_LOG_v1 {"time_micros": 1763801651164451, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.164558) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2613910, prev total WAL file size 2613910, number of live WAL files 2.
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.165610) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2514KB)], [35(7056KB)]
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801651165705, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9800448, "oldest_snapshot_seqno": -1}
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4098 keys, 7998637 bytes, temperature: kUnknown
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801651287187, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7998637, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7968373, "index_size": 18890, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 100049, "raw_average_key_size": 24, "raw_value_size": 7891400, "raw_average_value_size": 1925, "num_data_blocks": 797, "num_entries": 4098, "num_filter_entries": 4098, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763801651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.287499) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7998637 bytes
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.290528) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.7 rd, 65.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 6.9 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(6.9) write-amplify(3.1) OK, records in: 4612, records dropped: 514 output_compression: NoCompression
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.290550) EVENT_LOG_v1 {"time_micros": 1763801651290540, "job": 16, "event": "compaction_finished", "compaction_time_micros": 121429, "compaction_time_cpu_micros": 20763, "output_level": 6, "num_output_files": 1, "total_output_size": 7998637, "num_input_records": 4612, "num_output_records": 4098, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801651291046, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801651292306, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.165490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.292480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.292490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.292492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.292493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:54:11.292494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:54:11 np0005532048 podman[255507]: 2025-11-22 08:54:11.656886664 +0000 UTC m=+0.071346137 container create a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_volhard, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:54:11 np0005532048 podman[255507]: 2025-11-22 08:54:11.607778512 +0000 UTC m=+0.022237995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:54:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:54:11 np0005532048 systemd[1]: Started libpod-conmon-a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983.scope.
Nov 22 03:54:11 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:54:11 np0005532048 podman[255507]: 2025-11-22 08:54:11.954877201 +0000 UTC m=+0.369336694 container init a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_volhard, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:54:11 np0005532048 podman[255507]: 2025-11-22 08:54:11.962482639 +0000 UTC m=+0.376942112 container start a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_volhard, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:54:11 np0005532048 zealous_volhard[255523]: 167 167
Nov 22 03:54:11 np0005532048 systemd[1]: libpod-a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983.scope: Deactivated successfully.
Nov 22 03:54:12 np0005532048 podman[255507]: 2025-11-22 08:54:12.110030901 +0000 UTC m=+0.524490394 container attach a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:54:12 np0005532048 podman[255507]: 2025-11-22 08:54:12.110615617 +0000 UTC m=+0.525075100 container died a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_volhard, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:54:12 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0eab33910bc65b42a77fa232cd0df77856347cff9068e2998846ba309a9d2991-merged.mount: Deactivated successfully.
Nov 22 03:54:12 np0005532048 podman[255507]: 2025-11-22 08:54:12.261635495 +0000 UTC m=+0.676094988 container remove a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_volhard, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:54:12 np0005532048 systemd[1]: libpod-conmon-a4b8534e9955ce53f0a0d918684e546eb7a1949c581df683b766d925b42f5983.scope: Deactivated successfully.
Nov 22 03:54:12 np0005532048 podman[255547]: 2025-11-22 08:54:12.436553588 +0000 UTC m=+0.052491277 container create 45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 22 03:54:12 np0005532048 systemd[1]: Started libpod-conmon-45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7.scope.
Nov 22 03:54:12 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:54:12 np0005532048 podman[255547]: 2025-11-22 08:54:12.412566672 +0000 UTC m=+0.028504381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:54:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d152f7f0cc7ad1f89778faf56315f09d51f4efcbe4a6067730872d3d77cc5917/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:54:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d152f7f0cc7ad1f89778faf56315f09d51f4efcbe4a6067730872d3d77cc5917/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:54:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d152f7f0cc7ad1f89778faf56315f09d51f4efcbe4a6067730872d3d77cc5917/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:54:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d152f7f0cc7ad1f89778faf56315f09d51f4efcbe4a6067730872d3d77cc5917/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:54:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d152f7f0cc7ad1f89778faf56315f09d51f4efcbe4a6067730872d3d77cc5917/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:54:12 np0005532048 podman[255547]: 2025-11-22 08:54:12.53429345 +0000 UTC m=+0.150231139 container init 45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ptolemy, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:54:12 np0005532048 podman[255547]: 2025-11-22 08:54:12.542205047 +0000 UTC m=+0.158142736 container start 45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:54:12 np0005532048 podman[255547]: 2025-11-22 08:54:12.554247578 +0000 UTC m=+0.170185267 container attach 45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ptolemy, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:54:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:13 np0005532048 loving_ptolemy[255563]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:54:13 np0005532048 loving_ptolemy[255563]: --> relative data size: 1.0
Nov 22 03:54:13 np0005532048 loving_ptolemy[255563]: --> All data devices are unavailable
Nov 22 03:54:13 np0005532048 systemd[1]: libpod-45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7.scope: Deactivated successfully.
Nov 22 03:54:13 np0005532048 systemd[1]: libpod-45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7.scope: Consumed 1.063s CPU time.
Nov 22 03:54:13 np0005532048 podman[255547]: 2025-11-22 08:54:13.670433777 +0000 UTC m=+1.286371466 container died 45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ptolemy, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:54:13 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d152f7f0cc7ad1f89778faf56315f09d51f4efcbe4a6067730872d3d77cc5917-merged.mount: Deactivated successfully.
Nov 22 03:54:14 np0005532048 podman[255547]: 2025-11-22 08:54:14.03945265 +0000 UTC m=+1.655390339 container remove 45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ptolemy, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:54:14 np0005532048 systemd[1]: libpod-conmon-45146decc71d0836c0e78d6ac8d9d3e2aa5facf909ee49601f4d95891cfbccb7.scope: Deactivated successfully.
Nov 22 03:54:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:14 np0005532048 podman[255743]: 2025-11-22 08:54:14.734045098 +0000 UTC m=+0.105757184 container create e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:54:14 np0005532048 podman[255743]: 2025-11-22 08:54:14.658052926 +0000 UTC m=+0.029765042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:54:14 np0005532048 systemd[1]: Started libpod-conmon-e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d.scope.
Nov 22 03:54:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:14 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:54:14 np0005532048 podman[255743]: 2025-11-22 08:54:14.899131895 +0000 UTC m=+0.270843981 container init e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:54:14 np0005532048 podman[255743]: 2025-11-22 08:54:14.905846702 +0000 UTC m=+0.277558788 container start e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:54:14 np0005532048 interesting_curie[255759]: 167 167
Nov 22 03:54:14 np0005532048 systemd[1]: libpod-e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d.scope: Deactivated successfully.
Nov 22 03:54:14 np0005532048 podman[255743]: 2025-11-22 08:54:14.930945327 +0000 UTC m=+0.302657413 container attach e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:54:14 np0005532048 podman[255743]: 2025-11-22 08:54:14.931947022 +0000 UTC m=+0.303659108 container died e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:54:15 np0005532048 systemd[1]: var-lib-containers-storage-overlay-aff8254085b606c2eb096f4258fef9369185c5860e30281b09a10b6fd4155905-merged.mount: Deactivated successfully.
Nov 22 03:54:15 np0005532048 podman[255743]: 2025-11-22 08:54:15.208023433 +0000 UTC m=+0.579735549 container remove e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:54:15 np0005532048 systemd[1]: libpod-conmon-e665b947135dec74e376c99e530b4eaf773dd5fdea53b9a42b28c31fba6a747d.scope: Deactivated successfully.
Nov 22 03:54:15 np0005532048 podman[255783]: 2025-11-22 08:54:15.434752816 +0000 UTC m=+0.072090665 container create c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kalam, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:54:15 np0005532048 podman[255783]: 2025-11-22 08:54:15.39152165 +0000 UTC m=+0.028859489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:54:15 np0005532048 systemd[1]: Started libpod-conmon-c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9.scope.
Nov 22 03:54:15 np0005532048 podman[255797]: 2025-11-22 08:54:15.568446124 +0000 UTC m=+0.096562984 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 22 03:54:15 np0005532048 podman[255798]: 2025-11-22 08:54:15.576040633 +0000 UTC m=+0.103877226 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 03:54:15 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:54:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6706b83164e5af8af437ecbde6b0bbc80b5dc91e035d953e6f9425e533c72925/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:54:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6706b83164e5af8af437ecbde6b0bbc80b5dc91e035d953e6f9425e533c72925/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:54:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6706b83164e5af8af437ecbde6b0bbc80b5dc91e035d953e6f9425e533c72925/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:54:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6706b83164e5af8af437ecbde6b0bbc80b5dc91e035d953e6f9425e533c72925/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:54:15 np0005532048 podman[255783]: 2025-11-22 08:54:15.722185069 +0000 UTC m=+0.359522978 container init c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 03:54:15 np0005532048 podman[255783]: 2025-11-22 08:54:15.731656965 +0000 UTC m=+0.368994784 container start c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:54:15 np0005532048 podman[255783]: 2025-11-22 08:54:15.747966862 +0000 UTC m=+0.385304681 container attach c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kalam, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]: {
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:    "0": [
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:        {
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "devices": [
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "/dev/loop3"
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            ],
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "lv_name": "ceph_lv0",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "lv_size": "21470642176",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "name": "ceph_lv0",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "tags": {
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.cluster_name": "ceph",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.crush_device_class": "",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.encrypted": "0",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.osd_id": "0",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.type": "block",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.vdo": "0"
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            },
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "type": "block",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "vg_name": "ceph_vg0"
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:        }
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:    ],
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:    "1": [
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:        {
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "devices": [
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "/dev/loop4"
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            ],
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "lv_name": "ceph_lv1",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "lv_size": "21470642176",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "name": "ceph_lv1",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "tags": {
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.cluster_name": "ceph",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.crush_device_class": "",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.encrypted": "0",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.osd_id": "1",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.type": "block",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.vdo": "0"
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            },
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "type": "block",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "vg_name": "ceph_vg1"
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:        }
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:    ],
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:    "2": [
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:        {
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "devices": [
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "/dev/loop5"
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            ],
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "lv_name": "ceph_lv2",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "lv_size": "21470642176",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "name": "ceph_lv2",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "tags": {
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.cluster_name": "ceph",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.crush_device_class": "",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.encrypted": "0",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.osd_id": "2",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.type": "block",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:                "ceph.vdo": "0"
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            },
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "type": "block",
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:            "vg_name": "ceph_vg2"
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:        }
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]:    ]
Nov 22 03:54:16 np0005532048 sleepy_kalam[255828]: }
Nov 22 03:54:16 np0005532048 systemd[1]: libpod-c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9.scope: Deactivated successfully.
Nov 22 03:54:16 np0005532048 podman[255783]: 2025-11-22 08:54:16.580972233 +0000 UTC m=+1.218310052 container died c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:54:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:17 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6706b83164e5af8af437ecbde6b0bbc80b5dc91e035d953e6f9425e533c72925-merged.mount: Deactivated successfully.
Nov 22 03:54:17 np0005532048 podman[255783]: 2025-11-22 08:54:17.26230226 +0000 UTC m=+1.899640079 container remove c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:54:17 np0005532048 systemd[1]: libpod-conmon-c4e83d9b9179fd676762d13a8c4f1daa213d75b1a2d1ef783d511235d96117a9.scope: Deactivated successfully.
Nov 22 03:54:17 np0005532048 podman[255905]: 2025-11-22 08:54:17.584537769 +0000 UTC m=+0.101589038 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:54:18 np0005532048 podman[256023]: 2025-11-22 08:54:17.957773039 +0000 UTC m=+0.026297806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:54:18 np0005532048 podman[256023]: 2025-11-22 08:54:18.144951567 +0000 UTC m=+0.213476314 container create 4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 03:54:18 np0005532048 systemd[1]: Started libpod-conmon-4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529.scope.
Nov 22 03:54:18 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:54:18 np0005532048 podman[256023]: 2025-11-22 08:54:18.352960144 +0000 UTC m=+0.421484911 container init 4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 03:54:18 np0005532048 podman[256023]: 2025-11-22 08:54:18.36405903 +0000 UTC m=+0.432583767 container start 4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:54:18 np0005532048 sweet_faraday[256040]: 167 167
Nov 22 03:54:18 np0005532048 systemd[1]: libpod-4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529.scope: Deactivated successfully.
Nov 22 03:54:18 np0005532048 conmon[256040]: conmon 4e18c9661c3c7569fe26 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529.scope/container/memory.events
Nov 22 03:54:18 np0005532048 podman[256023]: 2025-11-22 08:54:18.400728402 +0000 UTC m=+0.469253169 container attach 4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_faraday, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:54:18 np0005532048 podman[256023]: 2025-11-22 08:54:18.401203485 +0000 UTC m=+0.469728232 container died 4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_faraday, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:54:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:18 np0005532048 systemd[1]: var-lib-containers-storage-overlay-30b3ceea982e801f663247f4f9645e083195d1c5c4ba29703a0829073e99a874-merged.mount: Deactivated successfully.
Nov 22 03:54:18 np0005532048 podman[256023]: 2025-11-22 08:54:18.964614347 +0000 UTC m=+1.033139114 container remove 4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 03:54:19 np0005532048 systemd[1]: libpod-conmon-4e18c9661c3c7569fe26e6b64ddcc4da960f4160b6f3fd89d9c04af3e2989529.scope: Deactivated successfully.
Nov 22 03:54:19 np0005532048 podman[256064]: 2025-11-22 08:54:19.197925143 +0000 UTC m=+0.115999298 container create 9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 03:54:19 np0005532048 podman[256064]: 2025-11-22 08:54:19.107993215 +0000 UTC m=+0.026067390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:54:19 np0005532048 systemd[1]: Started libpod-conmon-9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00.scope.
Nov 22 03:54:19 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:54:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d741e8ad95cb6c76fc6027e4a00cd55dc846833f37d5d7af6a1ae58013c221/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:54:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d741e8ad95cb6c76fc6027e4a00cd55dc846833f37d5d7af6a1ae58013c221/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:54:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d741e8ad95cb6c76fc6027e4a00cd55dc846833f37d5d7af6a1ae58013c221/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:54:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6d741e8ad95cb6c76fc6027e4a00cd55dc846833f37d5d7af6a1ae58013c221/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:54:19 np0005532048 podman[256064]: 2025-11-22 08:54:19.404138015 +0000 UTC m=+0.322212200 container init 9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:54:19 np0005532048 podman[256064]: 2025-11-22 08:54:19.414605426 +0000 UTC m=+0.332679581 container start 9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 03:54:19 np0005532048 podman[256064]: 2025-11-22 08:54:19.630652723 +0000 UTC m=+0.548726908 container attach 9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 03:54:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]: {
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:        "osd_id": 1,
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:        "type": "bluestore"
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:    },
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:        "osd_id": 0,
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:        "type": "bluestore"
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:    },
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:        "osd_id": 2,
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:        "type": "bluestore"
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]:    }
Nov 22 03:54:20 np0005532048 angry_zhukovsky[256081]: }
Nov 22 03:54:20 np0005532048 systemd[1]: libpod-9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00.scope: Deactivated successfully.
Nov 22 03:54:20 np0005532048 podman[256064]: 2025-11-22 08:54:20.472463712 +0000 UTC m=+1.390537877 container died 9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:54:20 np0005532048 systemd[1]: libpod-9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00.scope: Consumed 1.055s CPU time.
Nov 22 03:54:20 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c6d741e8ad95cb6c76fc6027e4a00cd55dc846833f37d5d7af6a1ae58013c221-merged.mount: Deactivated successfully.
Nov 22 03:54:20 np0005532048 podman[256064]: 2025-11-22 08:54:20.584339367 +0000 UTC m=+1.502413522 container remove 9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:54:20 np0005532048 systemd[1]: libpod-conmon-9c4a8e3c6e36e5bb1bd467db6be96426534b07487c23f5d77f0376967c51be00.scope: Deactivated successfully.
Nov 22 03:54:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:54:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:54:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:54:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:54:20 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 110a56f2-7ff6-4c85-99a5-05fb17b87f64 does not exist
Nov 22 03:54:20 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a260e74e-fde6-46fc-811d-4969dc5b8d68 does not exist
Nov 22 03:54:21 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:54:21 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:54:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 22 03:54:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2974503250' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 22 03:54:21 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 22 03:54:21 np0005532048 ceph-mgr[75315]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 03:54:21 np0005532048 ceph-mgr[75315]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 03:54:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:54:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:54:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:54:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:54:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:54:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:54:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:54:27.935 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:54:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:54:27.936 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:54:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:54:27.937 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:54:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:46 np0005532048 podman[256178]: 2025-11-22 08:54:46.388597531 +0000 UTC m=+0.074260959 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:54:46 np0005532048 podman[256179]: 2025-11-22 08:54:46.390568069 +0000 UTC m=+0.074853003 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Nov 22 03:54:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:48 np0005532048 podman[256215]: 2025-11-22 08:54:48.400923023 +0000 UTC m=+0.090883483 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 03:54:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:54:52
Nov 22 03:54:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:54:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:54:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'images', '.rgw.root', 'vms', 'backups', 'volumes', 'default.rgw.log']
Nov 22 03:54:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:54:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:54:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:54:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:54:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:54:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:54:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:54:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:54:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:54:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:54:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:54:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:54:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:54:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:54:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:54:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:54:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:54:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:54:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:55:02 np0005532048 nova_compute[253661]: 2025-11-22 08:55:02.615 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:55:02 np0005532048 nova_compute[253661]: 2025-11-22 08:55:02.631 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:55:02 np0005532048 nova_compute[253661]: 2025-11-22 08:55:02.631 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:55:02 np0005532048 nova_compute[253661]: 2025-11-22 08:55:02.632 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:55:02 np0005532048 nova_compute[253661]: 2025-11-22 08:55:02.632 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:55:02 np0005532048 nova_compute[253661]: 2025-11-22 08:55:02.632 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 03:55:02 np0005532048 nova_compute[253661]: 2025-11-22 08:55:02.632 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:55:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:02 np0005532048 nova_compute[253661]: 2025-11-22 08:55:02.654 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:55:02 np0005532048 nova_compute[253661]: 2025-11-22 08:55:02.654 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:55:02 np0005532048 nova_compute[253661]: 2025-11-22 08:55:02.655 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:55:02 np0005532048 nova_compute[253661]: 2025-11-22 08:55:02.655 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 03:55:02 np0005532048 nova_compute[253661]: 2025-11-22 08:55:02.655 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 03:55:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:55:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2050746616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:55:03 np0005532048 nova_compute[253661]: 2025-11-22 08:55:03.160 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 03:55:03 np0005532048 nova_compute[253661]: 2025-11-22 08:55:03.336 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 03:55:03 np0005532048 nova_compute[253661]: 2025-11-22 08:55:03.338 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5219MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 03:55:03 np0005532048 nova_compute[253661]: 2025-11-22 08:55:03.339 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:55:03 np0005532048 nova_compute[253661]: 2025-11-22 08:55:03.339 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:55:03 np0005532048 nova_compute[253661]: 2025-11-22 08:55:03.411 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 03:55:03 np0005532048 nova_compute[253661]: 2025-11-22 08:55:03.412 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 03:55:03 np0005532048 nova_compute[253661]: 2025-11-22 08:55:03.431 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 03:55:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:55:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1883544117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:55:03 np0005532048 nova_compute[253661]: 2025-11-22 08:55:03.973 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 03:55:03 np0005532048 nova_compute[253661]: 2025-11-22 08:55:03.979 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 03:55:03 np0005532048 nova_compute[253661]: 2025-11-22 08:55:03.996 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 03:55:03 np0005532048 nova_compute[253661]: 2025-11-22 08:55:03.998 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 03:55:03 np0005532048 nova_compute[253661]: 2025-11-22 08:55:03.998 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:55:04 np0005532048 nova_compute[253661]: 2025-11-22 08:55:04.595 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:55:04 np0005532048 nova_compute[253661]: 2025-11-22 08:55:04.596 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:55:04 np0005532048 nova_compute[253661]: 2025-11-22 08:55:04.596 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 03:55:04 np0005532048 nova_compute[253661]: 2025-11-22 08:55:04.596 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 03:55:04 np0005532048 nova_compute[253661]: 2025-11-22 08:55:04.610 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 03:55:04 np0005532048 nova_compute[253661]: 2025-11-22 08:55:04.610 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:55:04 np0005532048 nova_compute[253661]: 2025-11-22 08:55:04.610 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:55:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:55:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/890003468' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:55:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/890003468' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:55:12.755 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 03:55:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:55:12.756 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 03:55:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:55:12.758 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 03:55:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:17 np0005532048 podman[256286]: 2025-11-22 08:55:17.372661409 +0000 UTC m=+0.062114147 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:55:17 np0005532048 podman[256287]: 2025-11-22 08:55:17.376199026 +0000 UTC m=+0.065192383 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 03:55:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:19 np0005532048 podman[256326]: 2025-11-22 08:55:19.422635086 +0000 UTC m=+0.119826423 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 03:55:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:55:21 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a0e49f75-f232-4330-bdbe-07c6f72cc35a does not exist
Nov 22 03:55:21 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 1314c2df-0bd4-41fc-9292-ed298993c7eb does not exist
Nov 22 03:55:21 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c3943bdb-3b4f-4c4c-a524-676bb0bcd2b4 does not exist
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:55:21 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:55:22 np0005532048 podman[256625]: 2025-11-22 08:55:22.318177628 +0000 UTC m=+0.028054729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:55:22 np0005532048 podman[256625]: 2025-11-22 08:55:22.423814628 +0000 UTC m=+0.133691709 container create c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 03:55:22 np0005532048 systemd[1]: Started libpod-conmon-c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89.scope.
Nov 22 03:55:22 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:55:22 np0005532048 podman[256625]: 2025-11-22 08:55:22.539056946 +0000 UTC m=+0.248934057 container init c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:55:22 np0005532048 podman[256625]: 2025-11-22 08:55:22.549748031 +0000 UTC m=+0.259625122 container start c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:55:22 np0005532048 quizzical_banzai[256641]: 167 167
Nov 22 03:55:22 np0005532048 systemd[1]: libpod-c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89.scope: Deactivated successfully.
Nov 22 03:55:22 np0005532048 podman[256625]: 2025-11-22 08:55:22.591292655 +0000 UTC m=+0.301169766 container attach c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:55:22 np0005532048 podman[256625]: 2025-11-22 08:55:22.592515466 +0000 UTC m=+0.302392547 container died c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:55:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 0 B/s wr, 7 op/s
Nov 22 03:55:22 np0005532048 systemd[1]: var-lib-containers-storage-overlay-cdb04fae261fe17baf0a51d34efbd35d9a21731b83d2ac5dca3569ff21a73d4f-merged.mount: Deactivated successfully.
Nov 22 03:55:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:55:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:55:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:55:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:55:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:55:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:55:22 np0005532048 podman[256625]: 2025-11-22 08:55:22.817638889 +0000 UTC m=+0.527515970 container remove c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:55:22 np0005532048 systemd[1]: libpod-conmon-c70639132a4fcba7aea8e1581af45d86ec039eaca56b54cda683451af6582d89.scope: Deactivated successfully.
Nov 22 03:55:23 np0005532048 podman[256665]: 2025-11-22 08:55:23.011187046 +0000 UTC m=+0.051287748 container create d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nightingale, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:55:23 np0005532048 systemd[1]: Started libpod-conmon-d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b.scope.
Nov 22 03:55:23 np0005532048 podman[256665]: 2025-11-22 08:55:22.990234194 +0000 UTC m=+0.030334896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:55:23 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:55:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf7fe5b74471476a4aae3acac63dc768994bac470d7d030ea7c654e0c6d5821/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf7fe5b74471476a4aae3acac63dc768994bac470d7d030ea7c654e0c6d5821/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf7fe5b74471476a4aae3acac63dc768994bac470d7d030ea7c654e0c6d5821/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf7fe5b74471476a4aae3acac63dc768994bac470d7d030ea7c654e0c6d5821/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf7fe5b74471476a4aae3acac63dc768994bac470d7d030ea7c654e0c6d5821/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:23 np0005532048 podman[256665]: 2025-11-22 08:55:23.119247645 +0000 UTC m=+0.159348367 container init d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 03:55:23 np0005532048 podman[256665]: 2025-11-22 08:55:23.128173448 +0000 UTC m=+0.168274150 container start d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:55:23 np0005532048 podman[256665]: 2025-11-22 08:55:23.136442923 +0000 UTC m=+0.176543695 container attach d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nightingale, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:55:24 np0005532048 tender_nightingale[256681]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:55:24 np0005532048 tender_nightingale[256681]: --> relative data size: 1.0
Nov 22 03:55:24 np0005532048 tender_nightingale[256681]: --> All data devices are unavailable
Nov 22 03:55:24 np0005532048 systemd[1]: libpod-d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b.scope: Deactivated successfully.
Nov 22 03:55:24 np0005532048 podman[256665]: 2025-11-22 08:55:24.361286206 +0000 UTC m=+1.401386928 container died d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:55:24 np0005532048 systemd[1]: libpod-d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b.scope: Consumed 1.102s CPU time.
Nov 22 03:55:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay-aaf7fe5b74471476a4aae3acac63dc768994bac470d7d030ea7c654e0c6d5821-merged.mount: Deactivated successfully.
Nov 22 03:55:24 np0005532048 podman[256665]: 2025-11-22 08:55:24.437713079 +0000 UTC m=+1.477813781 container remove d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:55:24 np0005532048 systemd[1]: libpod-conmon-d49a562ea9ad21cc3970e61638be1874f70b849b8d9e7ddfe0cb086ada5eb43b.scope: Deactivated successfully.
Nov 22 03:55:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Nov 22 03:55:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:25 np0005532048 podman[256861]: 2025-11-22 08:55:25.094755471 +0000 UTC m=+0.040684174 container create fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:55:25 np0005532048 systemd[1]: Started libpod-conmon-fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0.scope.
Nov 22 03:55:25 np0005532048 podman[256861]: 2025-11-22 08:55:25.07740863 +0000 UTC m=+0.023337363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:55:25 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:55:25 np0005532048 podman[256861]: 2025-11-22 08:55:25.202535223 +0000 UTC m=+0.148463946 container init fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:55:25 np0005532048 podman[256861]: 2025-11-22 08:55:25.212119022 +0000 UTC m=+0.158047725 container start fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:55:25 np0005532048 angry_sammet[256878]: 167 167
Nov 22 03:55:25 np0005532048 systemd[1]: libpod-fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0.scope: Deactivated successfully.
Nov 22 03:55:25 np0005532048 conmon[256878]: conmon fcb2bdb64893895d278b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0.scope/container/memory.events
Nov 22 03:55:25 np0005532048 podman[256861]: 2025-11-22 08:55:25.229976527 +0000 UTC m=+0.175905250 container attach fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 22 03:55:25 np0005532048 podman[256861]: 2025-11-22 08:55:25.230425227 +0000 UTC m=+0.176353930 container died fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:55:25 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f798cbf963aef6890f985f06a32eff95d76d6a230efd4c6f199031c7f2a77036-merged.mount: Deactivated successfully.
Nov 22 03:55:25 np0005532048 podman[256861]: 2025-11-22 08:55:25.29360963 +0000 UTC m=+0.239538333 container remove fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:55:25 np0005532048 systemd[1]: libpod-conmon-fcb2bdb64893895d278be720888c3fd24108c244b1e02d0b744163a365f60fb0.scope: Deactivated successfully.
Nov 22 03:55:25 np0005532048 podman[256902]: 2025-11-22 08:55:25.474075271 +0000 UTC m=+0.045681518 container create 499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 03:55:25 np0005532048 systemd[1]: Started libpod-conmon-499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1.scope.
Nov 22 03:55:25 np0005532048 podman[256902]: 2025-11-22 08:55:25.454955985 +0000 UTC m=+0.026562252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:55:25 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:55:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0f374a8883b7b4d2b088f47d4dfe4584b352cd3165317cdbad23b3001f8b772/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0f374a8883b7b4d2b088f47d4dfe4584b352cd3165317cdbad23b3001f8b772/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0f374a8883b7b4d2b088f47d4dfe4584b352cd3165317cdbad23b3001f8b772/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0f374a8883b7b4d2b088f47d4dfe4584b352cd3165317cdbad23b3001f8b772/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:25 np0005532048 podman[256902]: 2025-11-22 08:55:25.578859189 +0000 UTC m=+0.150465456 container init 499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williams, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:55:25 np0005532048 podman[256902]: 2025-11-22 08:55:25.587646898 +0000 UTC m=+0.159253145 container start 499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williams, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:55:25 np0005532048 podman[256902]: 2025-11-22 08:55:25.592139689 +0000 UTC m=+0.163745936 container attach 499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]: {
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:    "0": [
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:        {
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "devices": [
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "/dev/loop3"
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            ],
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "lv_name": "ceph_lv0",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "lv_size": "21470642176",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "name": "ceph_lv0",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "tags": {
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.cluster_name": "ceph",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.crush_device_class": "",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.encrypted": "0",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.osd_id": "0",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.type": "block",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.vdo": "0"
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            },
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "type": "block",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "vg_name": "ceph_vg0"
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:        }
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:    ],
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:    "1": [
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:        {
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "devices": [
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "/dev/loop4"
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            ],
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "lv_name": "ceph_lv1",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "lv_size": "21470642176",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "name": "ceph_lv1",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "tags": {
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.cluster_name": "ceph",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.crush_device_class": "",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.encrypted": "0",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.osd_id": "1",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.type": "block",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.vdo": "0"
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            },
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "type": "block",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "vg_name": "ceph_vg1"
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:        }
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:    ],
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:    "2": [
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:        {
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "devices": [
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "/dev/loop5"
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            ],
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "lv_name": "ceph_lv2",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "lv_size": "21470642176",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "name": "ceph_lv2",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "tags": {
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.cluster_name": "ceph",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.crush_device_class": "",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.encrypted": "0",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.osd_id": "2",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.type": "block",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:                "ceph.vdo": "0"
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            },
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "type": "block",
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:            "vg_name": "ceph_vg2"
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:        }
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]:    ]
Nov 22 03:55:26 np0005532048 sleepy_williams[256919]: }
Nov 22 03:55:26 np0005532048 systemd[1]: libpod-499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1.scope: Deactivated successfully.
Nov 22 03:55:26 np0005532048 podman[256902]: 2025-11-22 08:55:26.475363101 +0000 UTC m=+1.046969348 container died 499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 03:55:26 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e0f374a8883b7b4d2b088f47d4dfe4584b352cd3165317cdbad23b3001f8b772-merged.mount: Deactivated successfully.
Nov 22 03:55:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Nov 22 03:55:26 np0005532048 podman[256902]: 2025-11-22 08:55:26.689246694 +0000 UTC m=+1.260852941 container remove 499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_williams, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:55:26 np0005532048 systemd[1]: libpod-conmon-499a398291637097290292bba44c7bf12ead12502fcf3cf21dbbec06196426a1.scope: Deactivated successfully.
Nov 22 03:55:27 np0005532048 podman[257081]: 2025-11-22 08:55:27.33155364 +0000 UTC m=+0.023394944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:55:27 np0005532048 podman[257081]: 2025-11-22 08:55:27.741983854 +0000 UTC m=+0.433825138 container create c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_khayyam, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:55:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:55:27.937 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:55:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:55:27.938 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:55:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:55:27.938 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:55:28 np0005532048 systemd[1]: Started libpod-conmon-c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7.scope.
Nov 22 03:55:28 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:55:28 np0005532048 podman[257081]: 2025-11-22 08:55:28.28344436 +0000 UTC m=+0.975285664 container init c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:55:28 np0005532048 podman[257081]: 2025-11-22 08:55:28.291651104 +0000 UTC m=+0.983492408 container start c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_khayyam, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:55:28 np0005532048 beautiful_khayyam[257097]: 167 167
Nov 22 03:55:28 np0005532048 systemd[1]: libpod-c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7.scope: Deactivated successfully.
Nov 22 03:55:28 np0005532048 podman[257081]: 2025-11-22 08:55:28.319289882 +0000 UTC m=+1.011131166 container attach c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:55:28 np0005532048 podman[257081]: 2025-11-22 08:55:28.320106262 +0000 UTC m=+1.011947546 container died c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_khayyam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 03:55:28 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f50b66920f706a270db322b34a73296796aa4ee6cdab5c29f0001e14657bb314-merged.mount: Deactivated successfully.
Nov 22 03:55:28 np0005532048 podman[257081]: 2025-11-22 08:55:28.515117635 +0000 UTC m=+1.206958959 container remove c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 03:55:28 np0005532048 systemd[1]: libpod-conmon-c457ae9f64617b2089aeac5381394a24cadaf53d223614a67be7e50d61805ff7.scope: Deactivated successfully.
Nov 22 03:55:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 03:55:28 np0005532048 podman[257123]: 2025-11-22 08:55:28.732312801 +0000 UTC m=+0.082446363 container create cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_liskov, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:55:28 np0005532048 podman[257123]: 2025-11-22 08:55:28.675919537 +0000 UTC m=+0.026053119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:55:28 np0005532048 systemd[1]: Started libpod-conmon-cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5.scope.
Nov 22 03:55:28 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:55:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb92843ee725581e1a7075c7bf721a1928e00c9adc7264d5b9695f1e8a78dd81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb92843ee725581e1a7075c7bf721a1928e00c9adc7264d5b9695f1e8a78dd81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb92843ee725581e1a7075c7bf721a1928e00c9adc7264d5b9695f1e8a78dd81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb92843ee725581e1a7075c7bf721a1928e00c9adc7264d5b9695f1e8a78dd81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:28 np0005532048 podman[257123]: 2025-11-22 08:55:28.836937255 +0000 UTC m=+0.187070847 container init cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:55:28 np0005532048 podman[257123]: 2025-11-22 08:55:28.84639821 +0000 UTC m=+0.196531772 container start cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_liskov, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 03:55:28 np0005532048 podman[257123]: 2025-11-22 08:55:28.884529189 +0000 UTC m=+0.234662771 container attach cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:55:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]: {
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:        "osd_id": 1,
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:        "type": "bluestore"
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:    },
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:        "osd_id": 0,
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:        "type": "bluestore"
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:    },
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:        "osd_id": 2,
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:        "type": "bluestore"
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]:    }
Nov 22 03:55:29 np0005532048 sharp_liskov[257139]: }
Nov 22 03:55:29 np0005532048 systemd[1]: libpod-cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5.scope: Deactivated successfully.
Nov 22 03:55:29 np0005532048 podman[257123]: 2025-11-22 08:55:29.938042018 +0000 UTC m=+1.288175580 container died cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:55:29 np0005532048 systemd[1]: libpod-cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5.scope: Consumed 1.091s CPU time.
Nov 22 03:55:29 np0005532048 systemd[1]: var-lib-containers-storage-overlay-bb92843ee725581e1a7075c7bf721a1928e00c9adc7264d5b9695f1e8a78dd81-merged.mount: Deactivated successfully.
Nov 22 03:55:30 np0005532048 podman[257123]: 2025-11-22 08:55:30.068931326 +0000 UTC m=+1.419064878 container remove cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_liskov, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:55:30 np0005532048 systemd[1]: libpod-conmon-cc2ad7d7301220d33b35ca58ede60a4eeed6bc7eb5461ebf26fa02b3a12188e5.scope: Deactivated successfully.
Nov 22 03:55:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:55:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:55:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:55:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:55:30 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 0e8817b0-31e0-49b3-acda-8ee8525806ca does not exist
Nov 22 03:55:30 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev cffcdb08-ecf3-4c42-a71a-dadb1761366e does not exist
Nov 22 03:55:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 03:55:31 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:55:31 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:55:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 03:55:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Nov 22 03:55:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Nov 22 03:55:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Nov 22 03:55:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:48 np0005532048 podman[257234]: 2025-11-22 08:55:48.38140207 +0000 UTC m=+0.069657785 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:55:48 np0005532048 podman[257235]: 2025-11-22 08:55:48.410332389 +0000 UTC m=+0.098343488 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:55:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:50 np0005532048 podman[257272]: 2025-11-22 08:55:50.420479547 +0000 UTC m=+0.108364927 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:55:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:55:52
Nov 22 03:55:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:55:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:55:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'backups', 'default.rgw.log', 'vms']
Nov 22 03:55:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:55:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:55:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:55:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:55:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:55:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:55:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:55:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:55:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:55:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:55:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:55:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:55:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:55:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:55:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:55:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:55:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:55:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:55:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:02 np0005532048 nova_compute[253661]: 2025-11-22 08:56:02.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:56:02 np0005532048 nova_compute[253661]: 2025-11-22 08:56:02.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:56:02 np0005532048 nova_compute[253661]: 2025-11-22 08:56:02.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:56:02 np0005532048 nova_compute[253661]: 2025-11-22 08:56:02.413 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:56:02 np0005532048 nova_compute[253661]: 2025-11-22 08:56:02.414 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:56:02 np0005532048 nova_compute[253661]: 2025-11-22 08:56:02.414 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:56:02 np0005532048 nova_compute[253661]: 2025-11-22 08:56:02.414 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 03:56:02 np0005532048 nova_compute[253661]: 2025-11-22 08:56:02.414 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 03:56:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:56:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1843863018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:56:02 np0005532048 nova_compute[253661]: 2025-11-22 08:56:02.946 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 03:56:03 np0005532048 nova_compute[253661]: 2025-11-22 08:56:03.124 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 03:56:03 np0005532048 nova_compute[253661]: 2025-11-22 08:56:03.125 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5212MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 03:56:03 np0005532048 nova_compute[253661]: 2025-11-22 08:56:03.126 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:56:03 np0005532048 nova_compute[253661]: 2025-11-22 08:56:03.126 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:56:03 np0005532048 nova_compute[253661]: 2025-11-22 08:56:03.186 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 03:56:03 np0005532048 nova_compute[253661]: 2025-11-22 08:56:03.186 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 03:56:03 np0005532048 nova_compute[253661]: 2025-11-22 08:56:03.209 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 03:56:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:56:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2838507600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:56:03 np0005532048 nova_compute[253661]: 2025-11-22 08:56:03.701 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 03:56:03 np0005532048 nova_compute[253661]: 2025-11-22 08:56:03.708 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 03:56:03 np0005532048 nova_compute[253661]: 2025-11-22 08:56:03.722 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 03:56:03 np0005532048 nova_compute[253661]: 2025-11-22 08:56:03.723 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 03:56:03 np0005532048 nova_compute[253661]: 2025-11-22 08:56:03.724 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:56:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:04 np0005532048 nova_compute[253661]: 2025-11-22 08:56:04.724 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:56:04 np0005532048 nova_compute[253661]: 2025-11-22 08:56:04.724 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:56:04 np0005532048 nova_compute[253661]: 2025-11-22 08:56:04.725 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 03:56:04 np0005532048 nova_compute[253661]: 2025-11-22 08:56:04.725 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 03:56:04 np0005532048 nova_compute[253661]: 2025-11-22 08:56:04.738 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 03:56:04 np0005532048 nova_compute[253661]: 2025-11-22 08:56:04.739 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:56:04 np0005532048 nova_compute[253661]: 2025-11-22 08:56:04.740 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:56:04 np0005532048 nova_compute[253661]: 2025-11-22 08:56:04.740 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:56:04 np0005532048 nova_compute[253661]: 2025-11-22 08:56:04.740 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:56:04 np0005532048 nova_compute[253661]: 2025-11-22 08:56:04.740 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 03:56:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:56:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1471482244' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:56:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1471482244' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:19 np0005532048 podman[257342]: 2025-11-22 08:56:19.35594834 +0000 UTC m=+0.050365232 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:56:19 np0005532048 podman[257343]: 2025-11-22 08:56:19.37041833 +0000 UTC m=+0.063207872 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 03:56:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:21 np0005532048 podman[257381]: 2025-11-22 08:56:21.40798755 +0000 UTC m=+0.090321375 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 03:56:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:56:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:56:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:56:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:56:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:56:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:56:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:56:27.938 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:56:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:56:27.939 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:56:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:56:27.939 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:56:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:31 np0005532048 podman[257577]: 2025-11-22 08:56:31.09358012 +0000 UTC m=+0.087032523 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:56:31 np0005532048 podman[257577]: 2025-11-22 08:56:31.201089411 +0000 UTC m=+0.194541814 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:56:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:56:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:56:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:56:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:56:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:56:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:56:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:56:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:56:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:56:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:56:32 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 56d776df-58f0-4c66-80be-32ab058c23ca does not exist
Nov 22 03:56:32 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a22f662b-b53f-456f-84f4-44db2f324fc5 does not exist
Nov 22 03:56:32 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 462888b9-3332-4b28-96f7-ce8b678c4930 does not exist
Nov 22 03:56:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:56:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:56:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:56:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:56:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:56:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:56:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:56:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:56:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:56:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:56:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:56:33 np0005532048 podman[258006]: 2025-11-22 08:56:33.245156503 +0000 UTC m=+0.051180861 container create 42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 03:56:33 np0005532048 systemd[1]: Started libpod-conmon-42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a.scope.
Nov 22 03:56:33 np0005532048 podman[258006]: 2025-11-22 08:56:33.2152036 +0000 UTC m=+0.021227988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:56:33 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:56:33 np0005532048 podman[258006]: 2025-11-22 08:56:33.34283812 +0000 UTC m=+0.148862508 container init 42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sinoussi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:56:33 np0005532048 podman[258006]: 2025-11-22 08:56:33.351456114 +0000 UTC m=+0.157480472 container start 42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sinoussi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 03:56:33 np0005532048 friendly_sinoussi[258021]: 167 167
Nov 22 03:56:33 np0005532048 systemd[1]: libpod-42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a.scope: Deactivated successfully.
Nov 22 03:56:33 np0005532048 podman[258006]: 2025-11-22 08:56:33.361744499 +0000 UTC m=+0.167768857 container attach 42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sinoussi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:56:33 np0005532048 podman[258006]: 2025-11-22 08:56:33.362538669 +0000 UTC m=+0.168563027 container died 42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:56:33 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b83d58e934a501c19e17120ab4eb8241d38694d2adbe7fd2e4294f8a5fe4c51e-merged.mount: Deactivated successfully.
Nov 22 03:56:33 np0005532048 podman[258006]: 2025-11-22 08:56:33.432440455 +0000 UTC m=+0.238464813 container remove 42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sinoussi, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:56:33 np0005532048 systemd[1]: libpod-conmon-42f02c30fee3b9e3e4db8f9a31f0711e96a461a4db6ee2ee46cfec92b8c8828a.scope: Deactivated successfully.
Nov 22 03:56:33 np0005532048 podman[258049]: 2025-11-22 08:56:33.61341415 +0000 UTC m=+0.054727390 container create 29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:56:33 np0005532048 systemd[1]: Started libpod-conmon-29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3.scope.
Nov 22 03:56:33 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:56:33 np0005532048 podman[258049]: 2025-11-22 08:56:33.587447486 +0000 UTC m=+0.028760776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:56:33 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fcb91e092046801883cebd730de8a868d0ba986a96cadd2203ead412d9ed119/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:33 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fcb91e092046801883cebd730de8a868d0ba986a96cadd2203ead412d9ed119/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:33 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fcb91e092046801883cebd730de8a868d0ba986a96cadd2203ead412d9ed119/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:33 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fcb91e092046801883cebd730de8a868d0ba986a96cadd2203ead412d9ed119/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:33 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fcb91e092046801883cebd730de8a868d0ba986a96cadd2203ead412d9ed119/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:33 np0005532048 podman[258049]: 2025-11-22 08:56:33.699480769 +0000 UTC m=+0.140794029 container init 29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:56:33 np0005532048 podman[258049]: 2025-11-22 08:56:33.707350014 +0000 UTC m=+0.148663284 container start 29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 03:56:33 np0005532048 podman[258049]: 2025-11-22 08:56:33.714799389 +0000 UTC m=+0.156112629 container attach 29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:56:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:34 np0005532048 cool_lewin[258065]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:56:34 np0005532048 cool_lewin[258065]: --> relative data size: 1.0
Nov 22 03:56:34 np0005532048 cool_lewin[258065]: --> All data devices are unavailable
Nov 22 03:56:34 np0005532048 systemd[1]: libpod-29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3.scope: Deactivated successfully.
Nov 22 03:56:34 np0005532048 systemd[1]: libpod-29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3.scope: Consumed 1.065s CPU time.
Nov 22 03:56:34 np0005532048 podman[258049]: 2025-11-22 08:56:34.822596295 +0000 UTC m=+1.263909535 container died 29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:56:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:34 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2fcb91e092046801883cebd730de8a868d0ba986a96cadd2203ead412d9ed119-merged.mount: Deactivated successfully.
Nov 22 03:56:34 np0005532048 podman[258049]: 2025-11-22 08:56:34.889932488 +0000 UTC m=+1.331245738 container remove 29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 03:56:34 np0005532048 systemd[1]: libpod-conmon-29556038ea4bda36effe4e8a9a80370b8b4fe5d02dcef8858f3a6c2e74caadb3.scope: Deactivated successfully.
Nov 22 03:56:35 np0005532048 podman[258247]: 2025-11-22 08:56:35.548106376 +0000 UTC m=+0.039017120 container create 68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:56:35 np0005532048 systemd[1]: Started libpod-conmon-68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f.scope.
Nov 22 03:56:35 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:56:35 np0005532048 podman[258247]: 2025-11-22 08:56:35.62476743 +0000 UTC m=+0.115678194 container init 68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noether, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:56:35 np0005532048 podman[258247]: 2025-11-22 08:56:35.532179491 +0000 UTC m=+0.023090265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:56:35 np0005532048 podman[258247]: 2025-11-22 08:56:35.631672312 +0000 UTC m=+0.122583056 container start 68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noether, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:56:35 np0005532048 podman[258247]: 2025-11-22 08:56:35.635509117 +0000 UTC m=+0.126419891 container attach 68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:56:35 np0005532048 serene_noether[258263]: 167 167
Nov 22 03:56:35 np0005532048 systemd[1]: libpod-68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f.scope: Deactivated successfully.
Nov 22 03:56:35 np0005532048 conmon[258263]: conmon 68393964448faac77352 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f.scope/container/memory.events
Nov 22 03:56:35 np0005532048 podman[258247]: 2025-11-22 08:56:35.63925647 +0000 UTC m=+0.130167214 container died 68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noether, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 03:56:35 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9df5f4fe58b11c8e4facf267e385189292a7edb48bc7cbfe876c2ce3cf5d45ab-merged.mount: Deactivated successfully.
Nov 22 03:56:35 np0005532048 podman[258247]: 2025-11-22 08:56:35.680224897 +0000 UTC m=+0.171135641 container remove 68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_noether, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 03:56:35 np0005532048 systemd[1]: libpod-conmon-68393964448faac773524868acb16daf33271d274207f61b0783f11fcfbdee3f.scope: Deactivated successfully.
Nov 22 03:56:35 np0005532048 podman[258287]: 2025-11-22 08:56:35.848063346 +0000 UTC m=+0.048086185 container create 69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:56:35 np0005532048 systemd[1]: Started libpod-conmon-69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef.scope.
Nov 22 03:56:35 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:56:35 np0005532048 podman[258287]: 2025-11-22 08:56:35.82928424 +0000 UTC m=+0.029307099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:56:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26eda30544bdce34dd763d77335c152d9650b25eadad2c810d760ef27523a24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26eda30544bdce34dd763d77335c152d9650b25eadad2c810d760ef27523a24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26eda30544bdce34dd763d77335c152d9650b25eadad2c810d760ef27523a24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26eda30544bdce34dd763d77335c152d9650b25eadad2c810d760ef27523a24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:35 np0005532048 podman[258287]: 2025-11-22 08:56:35.952689085 +0000 UTC m=+0.152711944 container init 69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 03:56:35 np0005532048 podman[258287]: 2025-11-22 08:56:35.959600167 +0000 UTC m=+0.159623006 container start 69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:56:35 np0005532048 podman[258287]: 2025-11-22 08:56:35.966140369 +0000 UTC m=+0.166163208 container attach 69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pascal, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:56:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]: {
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:    "0": [
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:        {
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "devices": [
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "/dev/loop3"
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            ],
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "lv_name": "ceph_lv0",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "lv_size": "21470642176",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "name": "ceph_lv0",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "tags": {
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.cluster_name": "ceph",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.crush_device_class": "",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.encrypted": "0",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.osd_id": "0",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.type": "block",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.vdo": "0"
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            },
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "type": "block",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "vg_name": "ceph_vg0"
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:        }
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:    ],
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:    "1": [
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:        {
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "devices": [
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "/dev/loop4"
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            ],
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "lv_name": "ceph_lv1",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "lv_size": "21470642176",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "name": "ceph_lv1",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "tags": {
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.cluster_name": "ceph",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.crush_device_class": "",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.encrypted": "0",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.osd_id": "1",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.type": "block",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.vdo": "0"
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            },
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "type": "block",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "vg_name": "ceph_vg1"
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:        }
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:    ],
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:    "2": [
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:        {
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "devices": [
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "/dev/loop5"
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            ],
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "lv_name": "ceph_lv2",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "lv_size": "21470642176",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "name": "ceph_lv2",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "tags": {
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.cluster_name": "ceph",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.crush_device_class": "",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.encrypted": "0",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.osd_id": "2",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.type": "block",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:                "ceph.vdo": "0"
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            },
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "type": "block",
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:            "vg_name": "ceph_vg2"
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:        }
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]:    ]
Nov 22 03:56:36 np0005532048 nervous_pascal[258303]: }
Nov 22 03:56:36 np0005532048 systemd[1]: libpod-69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef.scope: Deactivated successfully.
Nov 22 03:56:36 np0005532048 podman[258287]: 2025-11-22 08:56:36.829494384 +0000 UTC m=+1.029517243 container died 69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 03:56:36 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c26eda30544bdce34dd763d77335c152d9650b25eadad2c810d760ef27523a24-merged.mount: Deactivated successfully.
Nov 22 03:56:36 np0005532048 podman[258287]: 2025-11-22 08:56:36.899247777 +0000 UTC m=+1.099270616 container remove 69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:56:36 np0005532048 systemd[1]: libpod-conmon-69c1c29e8cec7fea6ed0d451748d778f24bfcccb2a046aabf2e3a2c6dfcf14ef.scope: Deactivated successfully.
Nov 22 03:56:37 np0005532048 podman[258467]: 2025-11-22 08:56:37.57936416 +0000 UTC m=+0.060075113 container create f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_buck, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:56:37 np0005532048 podman[258467]: 2025-11-22 08:56:37.542888334 +0000 UTC m=+0.023599307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:56:37 np0005532048 systemd[1]: Started libpod-conmon-f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801.scope.
Nov 22 03:56:37 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:56:37 np0005532048 podman[258467]: 2025-11-22 08:56:37.763941404 +0000 UTC m=+0.244652387 container init f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 22 03:56:37 np0005532048 podman[258467]: 2025-11-22 08:56:37.77139932 +0000 UTC m=+0.252110273 container start f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_buck, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:56:37 np0005532048 jolly_buck[258484]: 167 167
Nov 22 03:56:37 np0005532048 systemd[1]: libpod-f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801.scope: Deactivated successfully.
Nov 22 03:56:37 np0005532048 podman[258467]: 2025-11-22 08:56:37.938663754 +0000 UTC m=+0.419374737 container attach f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:56:37 np0005532048 podman[258467]: 2025-11-22 08:56:37.939429154 +0000 UTC m=+0.420140107 container died f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_buck, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 03:56:38 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6972e95b520ff4260b80690f89571e9e5db8d7526dec5b1be591d93fec853ab6-merged.mount: Deactivated successfully.
Nov 22 03:56:38 np0005532048 podman[258467]: 2025-11-22 08:56:38.142369355 +0000 UTC m=+0.623080308 container remove f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_buck, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:56:38 np0005532048 systemd[1]: libpod-conmon-f1d8c0a35ad30bc57eafcd367f86e3a5baf201386a52abb81bad95f3a5bad801.scope: Deactivated successfully.
Nov 22 03:56:38 np0005532048 podman[258509]: 2025-11-22 08:56:38.326808825 +0000 UTC m=+0.051393957 container create 164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:56:38 np0005532048 systemd[1]: Started libpod-conmon-164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935.scope.
Nov 22 03:56:38 np0005532048 podman[258509]: 2025-11-22 08:56:38.302508612 +0000 UTC m=+0.027093774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:56:38 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:56:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8165297b6ef65bb624c56626251705077c7ea5daf2a77812cef1ec6770586d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8165297b6ef65bb624c56626251705077c7ea5daf2a77812cef1ec6770586d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8165297b6ef65bb624c56626251705077c7ea5daf2a77812cef1ec6770586d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8165297b6ef65bb624c56626251705077c7ea5daf2a77812cef1ec6770586d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:38 np0005532048 podman[258509]: 2025-11-22 08:56:38.432485621 +0000 UTC m=+0.157070773 container init 164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:56:38 np0005532048 podman[258509]: 2025-11-22 08:56:38.441522645 +0000 UTC m=+0.166107777 container start 164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:56:38 np0005532048 podman[258509]: 2025-11-22 08:56:38.445024732 +0000 UTC m=+0.169609884 container attach 164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:56:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]: {
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:        "osd_id": 1,
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:        "type": "bluestore"
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:    },
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:        "osd_id": 0,
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:        "type": "bluestore"
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:    },
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:        "osd_id": 2,
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:        "type": "bluestore"
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]:    }
Nov 22 03:56:39 np0005532048 xenodochial_diffie[258526]: }
Nov 22 03:56:39 np0005532048 systemd[1]: libpod-164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935.scope: Deactivated successfully.
Nov 22 03:56:39 np0005532048 systemd[1]: libpod-164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935.scope: Consumed 1.102s CPU time.
Nov 22 03:56:39 np0005532048 podman[258559]: 2025-11-22 08:56:39.571201745 +0000 UTC m=+0.023329080 container died 164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:56:39 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e8165297b6ef65bb624c56626251705077c7ea5daf2a77812cef1ec6770586d4-merged.mount: Deactivated successfully.
Nov 22 03:56:39 np0005532048 podman[258559]: 2025-11-22 08:56:39.621380961 +0000 UTC m=+0.073508286 container remove 164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 03:56:39 np0005532048 systemd[1]: libpod-conmon-164a1d17dd0785b1eca005c3360ad45f941b17820dba816f138d0fcef27e1935.scope: Deactivated successfully.
Nov 22 03:56:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:56:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:56:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:56:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:56:39 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev f53eb80e-e3a1-4fdd-a3fc-b628d65c8657 does not exist
Nov 22 03:56:39 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 28f92e2d-12c1-452e-9e27-4c3ea6d94a58 does not exist
Nov 22 03:56:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:56:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:56:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:50 np0005532048 podman[258624]: 2025-11-22 08:56:50.387444687 +0000 UTC m=+0.070557073 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 03:56:50 np0005532048 podman[258625]: 2025-11-22 08:56:50.389773765 +0000 UTC m=+0.072933063 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 03:56:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:56:52
Nov 22 03:56:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:56:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:56:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'backups', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.mgr']
Nov 22 03:56:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:56:52 np0005532048 podman[258663]: 2025-11-22 08:56:52.4174693 +0000 UTC m=+0.112531956 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 22 03:56:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:56:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:56:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:56:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:56:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:56:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:56:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:56:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:56:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:56:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:56:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:56:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:56:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:56:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:56:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:56:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:56:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:56:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:57:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:03 np0005532048 nova_compute[253661]: 2025-11-22 08:57:03.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:57:03 np0005532048 nova_compute[253661]: 2025-11-22 08:57:03.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:57:03 np0005532048 nova_compute[253661]: 2025-11-22 08:57:03.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:57:03 np0005532048 nova_compute[253661]: 2025-11-22 08:57:03.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:57:03 np0005532048 nova_compute[253661]: 2025-11-22 08:57:03.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:57:03 np0005532048 nova_compute[253661]: 2025-11-22 08:57:03.255 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 03:57:03 np0005532048 nova_compute[253661]: 2025-11-22 08:57:03.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 03:57:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:57:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2955240106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:57:03 np0005532048 nova_compute[253661]: 2025-11-22 08:57:03.720 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 03:57:03 np0005532048 nova_compute[253661]: 2025-11-22 08:57:03.881 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 03:57:03 np0005532048 nova_compute[253661]: 2025-11-22 08:57:03.882 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5184MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 03:57:03 np0005532048 nova_compute[253661]: 2025-11-22 08:57:03.883 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:57:03 np0005532048 nova_compute[253661]: 2025-11-22 08:57:03.883 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:57:03 np0005532048 nova_compute[253661]: 2025-11-22 08:57:03.946 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 03:57:03 np0005532048 nova_compute[253661]: 2025-11-22 08:57:03.946 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 03:57:03 np0005532048 nova_compute[253661]: 2025-11-22 08:57:03.964 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 03:57:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:57:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3664952485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:57:04 np0005532048 nova_compute[253661]: 2025-11-22 08:57:04.436 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 03:57:04 np0005532048 nova_compute[253661]: 2025-11-22 08:57:04.443 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 03:57:04 np0005532048 nova_compute[253661]: 2025-11-22 08:57:04.457 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 03:57:04 np0005532048 nova_compute[253661]: 2025-11-22 08:57:04.459 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 03:57:04 np0005532048 nova_compute[253661]: 2025-11-22 08:57:04.459 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:57:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:05 np0005532048 nova_compute[253661]: 2025-11-22 08:57:05.460 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:57:05 np0005532048 nova_compute[253661]: 2025-11-22 08:57:05.461 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:57:05 np0005532048 nova_compute[253661]: 2025-11-22 08:57:05.461 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 03:57:05 np0005532048 nova_compute[253661]: 2025-11-22 08:57:05.461 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 03:57:05 np0005532048 nova_compute[253661]: 2025-11-22 08:57:05.591 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 03:57:05 np0005532048 nova_compute[253661]: 2025-11-22 08:57:05.592 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:57:05 np0005532048 nova_compute[253661]: 2025-11-22 08:57:05.592 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:57:05 np0005532048 nova_compute[253661]: 2025-11-22 08:57:05.592 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:57:05 np0005532048 nova_compute[253661]: 2025-11-22 08:57:05.592 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:57:05 np0005532048 nova_compute[253661]: 2025-11-22 08:57:05.593 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 03:57:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:06 np0005532048 nova_compute[253661]: 2025-11-22 08:57:06.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:57:06 np0005532048 nova_compute[253661]: 2025-11-22 08:57:06.255 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:57:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:57:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3300167450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:57:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3300167450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:21 np0005532048 podman[258734]: 2025-11-22 08:57:21.392696593 +0000 UTC m=+0.062071984 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:57:21 np0005532048 podman[258735]: 2025-11-22 08:57:21.392792895 +0000 UTC m=+0.070518563 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:57:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:57:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:57:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:57:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:57:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:57:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:57:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:23 np0005532048 podman[258772]: 2025-11-22 08:57:23.415979799 +0000 UTC m=+0.109678506 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 03:57:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:57:27.939 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:57:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:57:27.939 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:57:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:57:27.940 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:57:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:57:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:57:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:57:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:57:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:57:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:57:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 91c8444f-f7c5-49d4-8dba-cd838a390a1c does not exist
Nov 22 03:57:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ef3b1578-0128-4401-917b-a7d7cd991d38 does not exist
Nov 22 03:57:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 418813d1-564a-4815-a8d2-525ab7fb26a1 does not exist
Nov 22 03:57:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:57:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:57:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:57:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:57:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:57:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:57:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:57:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:57:41 np0005532048 podman[259069]: 2025-11-22 08:57:41.363022044 +0000 UTC m=+0.109800499 container create cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rosalind, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:57:41 np0005532048 podman[259069]: 2025-11-22 08:57:41.278920535 +0000 UTC m=+0.025699010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:57:41 np0005532048 systemd[1]: Started libpod-conmon-cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054.scope.
Nov 22 03:57:41 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:57:41 np0005532048 podman[259069]: 2025-11-22 08:57:41.545188139 +0000 UTC m=+0.291966624 container init cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rosalind, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 22 03:57:41 np0005532048 podman[259069]: 2025-11-22 08:57:41.558018358 +0000 UTC m=+0.304796823 container start cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rosalind, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:57:41 np0005532048 peaceful_rosalind[259085]: 167 167
Nov 22 03:57:41 np0005532048 systemd[1]: libpod-cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054.scope: Deactivated successfully.
Nov 22 03:57:41 np0005532048 podman[259069]: 2025-11-22 08:57:41.582840634 +0000 UTC m=+0.329619109 container attach cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:57:41 np0005532048 podman[259069]: 2025-11-22 08:57:41.583916391 +0000 UTC m=+0.330694846 container died cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:57:41 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d958eaf352527c01141fe707a2386a0d304270b32c043fecf80c0836385ed37e-merged.mount: Deactivated successfully.
Nov 22 03:57:41 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:57:42 np0005532048 podman[259069]: 2025-11-22 08:57:42.045141087 +0000 UTC m=+0.791919542 container remove cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_rosalind, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:57:42 np0005532048 systemd[1]: libpod-conmon-cd62e789d5906305e9dc6fc41d967d4c7dc91e82e4e4ddce404c0009fa8c0054.scope: Deactivated successfully.
Nov 22 03:57:42 np0005532048 podman[259108]: 2025-11-22 08:57:42.296583943 +0000 UTC m=+0.114185208 container create b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dirac, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:57:42 np0005532048 podman[259108]: 2025-11-22 08:57:42.212479543 +0000 UTC m=+0.030080828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:57:42 np0005532048 systemd[1]: Started libpod-conmon-b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b.scope.
Nov 22 03:57:42 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:57:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff368d3f22756a5d84f23df344b44055456c87aa0b857453baea0fc97e45eac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff368d3f22756a5d84f23df344b44055456c87aa0b857453baea0fc97e45eac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff368d3f22756a5d84f23df344b44055456c87aa0b857453baea0fc97e45eac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff368d3f22756a5d84f23df344b44055456c87aa0b857453baea0fc97e45eac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff368d3f22756a5d84f23df344b44055456c87aa0b857453baea0fc97e45eac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:42 np0005532048 podman[259108]: 2025-11-22 08:57:42.734215973 +0000 UTC m=+0.551817258 container init b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:57:42 np0005532048 podman[259108]: 2025-11-22 08:57:42.74215469 +0000 UTC m=+0.559755955 container start b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dirac, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:57:42 np0005532048 podman[259108]: 2025-11-22 08:57:42.792754147 +0000 UTC m=+0.610355432 container attach b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dirac, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:57:43 np0005532048 vigorous_dirac[259125]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:57:43 np0005532048 vigorous_dirac[259125]: --> relative data size: 1.0
Nov 22 03:57:43 np0005532048 vigorous_dirac[259125]: --> All data devices are unavailable
Nov 22 03:57:43 np0005532048 systemd[1]: libpod-b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b.scope: Deactivated successfully.
Nov 22 03:57:43 np0005532048 systemd[1]: libpod-b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b.scope: Consumed 1.061s CPU time.
Nov 22 03:57:43 np0005532048 podman[259108]: 2025-11-22 08:57:43.850230983 +0000 UTC m=+1.667832258 container died b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dirac, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:57:44 np0005532048 systemd[1]: var-lib-containers-storage-overlay-dff368d3f22756a5d84f23df344b44055456c87aa0b857453baea0fc97e45eac-merged.mount: Deactivated successfully.
Nov 22 03:57:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:45 np0005532048 podman[259108]: 2025-11-22 08:57:45.805042139 +0000 UTC m=+3.622643404 container remove b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dirac, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:57:45 np0005532048 systemd[1]: libpod-conmon-b1c2d71b34b153ea9e6da24f6fa69f0313768cdbea3be642481a6ed6806ed35b.scope: Deactivated successfully.
Nov 22 03:57:46 np0005532048 podman[259307]: 2025-11-22 08:57:46.504226616 +0000 UTC m=+0.024610902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:57:46 np0005532048 podman[259307]: 2025-11-22 08:57:46.667274545 +0000 UTC m=+0.187658841 container create a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:57:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:46 np0005532048 systemd[1]: Started libpod-conmon-a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c.scope.
Nov 22 03:57:46 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:57:47 np0005532048 podman[259307]: 2025-11-22 08:57:47.180813802 +0000 UTC m=+0.701198138 container init a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 22 03:57:47 np0005532048 podman[259307]: 2025-11-22 08:57:47.192562384 +0000 UTC m=+0.712946650 container start a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:57:47 np0005532048 amazing_sinoussi[259324]: 167 167
Nov 22 03:57:47 np0005532048 systemd[1]: libpod-a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c.scope: Deactivated successfully.
Nov 22 03:57:47 np0005532048 podman[259307]: 2025-11-22 08:57:47.532919918 +0000 UTC m=+1.053304264 container attach a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sinoussi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:57:47 np0005532048 podman[259307]: 2025-11-22 08:57:47.535167435 +0000 UTC m=+1.055551771 container died a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 03:57:48 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6ec064df00836a9214e88fa63d82a6fbd45051631642a52b36d40e8d1a332afc-merged.mount: Deactivated successfully.
Nov 22 03:57:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:49 np0005532048 podman[259307]: 2025-11-22 08:57:49.50479914 +0000 UTC m=+3.025183416 container remove a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sinoussi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:57:49 np0005532048 systemd[1]: libpod-conmon-a7983e06e32a3395c2b5c079bd022815f30cc636285e917c0c099291a2cdfd2c.scope: Deactivated successfully.
Nov 22 03:57:49 np0005532048 podman[259348]: 2025-11-22 08:57:49.679842947 +0000 UTC m=+0.022431628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:57:49 np0005532048 podman[259348]: 2025-11-22 08:57:49.888750396 +0000 UTC m=+0.231339077 container create f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cori, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 03:57:50 np0005532048 systemd[1]: Started libpod-conmon-f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23.scope.
Nov 22 03:57:50 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:57:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59e5d117255538a20899d028126c73efa16ea066e5986882a2610106eb0d4f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59e5d117255538a20899d028126c73efa16ea066e5986882a2610106eb0d4f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59e5d117255538a20899d028126c73efa16ea066e5986882a2610106eb0d4f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59e5d117255538a20899d028126c73efa16ea066e5986882a2610106eb0d4f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:50 np0005532048 podman[259348]: 2025-11-22 08:57:50.405351329 +0000 UTC m=+0.747940040 container init f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cori, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:57:50 np0005532048 podman[259348]: 2025-11-22 08:57:50.414371512 +0000 UTC m=+0.756960213 container start f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cori, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:57:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:50 np0005532048 podman[259348]: 2025-11-22 08:57:50.658402354 +0000 UTC m=+1.000991215 container attach f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cori, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:57:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]: {
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:    "0": [
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:        {
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "devices": [
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "/dev/loop3"
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            ],
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "lv_name": "ceph_lv0",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "lv_size": "21470642176",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "name": "ceph_lv0",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "tags": {
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.cluster_name": "ceph",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.crush_device_class": "",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.encrypted": "0",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.osd_id": "0",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.type": "block",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.vdo": "0"
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            },
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "type": "block",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "vg_name": "ceph_vg0"
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:        }
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:    ],
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:    "1": [
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:        {
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "devices": [
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "/dev/loop4"
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            ],
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "lv_name": "ceph_lv1",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "lv_size": "21470642176",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "name": "ceph_lv1",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "tags": {
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.cluster_name": "ceph",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.crush_device_class": "",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.encrypted": "0",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.osd_id": "1",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.type": "block",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.vdo": "0"
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            },
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "type": "block",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "vg_name": "ceph_vg1"
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:        }
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:    ],
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:    "2": [
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:        {
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "devices": [
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "/dev/loop5"
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            ],
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "lv_name": "ceph_lv2",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "lv_size": "21470642176",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "name": "ceph_lv2",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "tags": {
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.cluster_name": "ceph",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.crush_device_class": "",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.encrypted": "0",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.osd_id": "2",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.type": "block",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:                "ceph.vdo": "0"
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            },
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "type": "block",
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:            "vg_name": "ceph_vg2"
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:        }
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]:    ]
Nov 22 03:57:51 np0005532048 dreamy_cori[259364]: }
Nov 22 03:57:51 np0005532048 systemd[1]: libpod-f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23.scope: Deactivated successfully.
Nov 22 03:57:51 np0005532048 podman[259348]: 2025-11-22 08:57:51.232580116 +0000 UTC m=+1.575168767 container died f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cori, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:57:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e59e5d117255538a20899d028126c73efa16ea066e5986882a2610106eb0d4f7-merged.mount: Deactivated successfully.
Nov 22 03:57:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:57:52
Nov 22 03:57:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:57:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:57:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.mgr', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'backups']
Nov 22 03:57:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:57:52 np0005532048 podman[259348]: 2025-11-22 08:57:52.289126778 +0000 UTC m=+2.631715439 container remove f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cori, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:57:52 np0005532048 systemd[1]: libpod-conmon-f0bdc272a8d8559bc853da250f64844af75b2ba4526007229a1cc5c42615dd23.scope: Deactivated successfully.
Nov 22 03:57:52 np0005532048 podman[259387]: 2025-11-22 08:57:52.432297095 +0000 UTC m=+0.917397889 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:57:52 np0005532048 podman[259386]: 2025-11-22 08:57:52.448301082 +0000 UTC m=+0.929089469 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 03:57:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:57:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:57:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:57:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:57:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:57:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:57:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:53 np0005532048 podman[259565]: 2025-11-22 08:57:52.962197897 +0000 UTC m=+0.023728741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:57:53 np0005532048 podman[259565]: 2025-11-22 08:57:53.153784776 +0000 UTC m=+0.215315610 container create a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:57:53 np0005532048 systemd[1]: Started libpod-conmon-a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c.scope.
Nov 22 03:57:53 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:57:53 np0005532048 podman[259565]: 2025-11-22 08:57:53.699507922 +0000 UTC m=+0.761038836 container init a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:57:53 np0005532048 podman[259565]: 2025-11-22 08:57:53.710516495 +0000 UTC m=+0.772047319 container start a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcnulty, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 22 03:57:53 np0005532048 zealous_mcnulty[259582]: 167 167
Nov 22 03:57:53 np0005532048 systemd[1]: libpod-a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c.scope: Deactivated successfully.
Nov 22 03:57:53 np0005532048 podman[259565]: 2025-11-22 08:57:53.769202002 +0000 UTC m=+0.830732886 container attach a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcnulty, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:57:53 np0005532048 podman[259565]: 2025-11-22 08:57:53.772437523 +0000 UTC m=+0.833968387 container died a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcnulty, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:57:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:57:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:57:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:57:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:57:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:57:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:57:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:57:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:57:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:57:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:57:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ceb379e373f56ab1a6043f2b9e2aeb9ea0dd39c4295827abcaa71075a41028b4-merged.mount: Deactivated successfully.
Nov 22 03:57:54 np0005532048 podman[259565]: 2025-11-22 08:57:54.373300838 +0000 UTC m=+1.434831702 container remove a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mcnulty, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 03:57:54 np0005532048 systemd[1]: libpod-conmon-a2e439c7309cfb48acab133e81693d83e6231be97e6422af0cad131f5153a75c.scope: Deactivated successfully.
Nov 22 03:57:54 np0005532048 podman[259584]: 2025-11-22 08:57:54.546530661 +0000 UTC m=+1.053464979 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 03:57:54 np0005532048 podman[259636]: 2025-11-22 08:57:54.613806142 +0000 UTC m=+0.034031427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:57:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:55 np0005532048 podman[259636]: 2025-11-22 08:57:55.122645511 +0000 UTC m=+0.542870696 container create 237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:57:55 np0005532048 systemd[1]: Started libpod-conmon-237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b.scope.
Nov 22 03:57:55 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:57:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69a260b2b5d25b56c736d6aba659817f1d6587c2ceb9aa4c0269708056f6f6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69a260b2b5d25b56c736d6aba659817f1d6587c2ceb9aa4c0269708056f6f6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69a260b2b5d25b56c736d6aba659817f1d6587c2ceb9aa4c0269708056f6f6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69a260b2b5d25b56c736d6aba659817f1d6587c2ceb9aa4c0269708056f6f6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:55 np0005532048 podman[259636]: 2025-11-22 08:57:55.43138553 +0000 UTC m=+0.851610815 container init 237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:57:55 np0005532048 podman[259636]: 2025-11-22 08:57:55.441578952 +0000 UTC m=+0.861804137 container start 237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 03:57:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:55 np0005532048 podman[259636]: 2025-11-22 08:57:55.659418403 +0000 UTC m=+1.079643698 container attach 237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]: {
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:        "osd_id": 1,
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:        "type": "bluestore"
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:    },
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:        "osd_id": 0,
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:        "type": "bluestore"
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:    },
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:        "osd_id": 2,
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:        "type": "bluestore"
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]:    }
Nov 22 03:57:56 np0005532048 keen_northcutt[259653]: }
Nov 22 03:57:56 np0005532048 systemd[1]: libpod-237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b.scope: Deactivated successfully.
Nov 22 03:57:56 np0005532048 podman[259636]: 2025-11-22 08:57:56.579661381 +0000 UTC m=+1.999886566 container died 237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 03:57:56 np0005532048 systemd[1]: libpod-237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b.scope: Consumed 1.142s CPU time.
Nov 22 03:57:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:57:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f69a260b2b5d25b56c736d6aba659817f1d6587c2ceb9aa4c0269708056f6f6e-merged.mount: Deactivated successfully.
Nov 22 03:57:57 np0005532048 podman[259636]: 2025-11-22 08:57:57.154826637 +0000 UTC m=+2.575051822 container remove 237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:57:57 np0005532048 systemd[1]: libpod-conmon-237cebe6a57aa73690c7c600534a5d9206e730c75e63bf2a8b04c69ced43092b.scope: Deactivated successfully.
Nov 22 03:57:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:57:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:57:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:57:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:57:57 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 4a1a04d0-d044-475f-9eb6-cd565a1c8229 does not exist
Nov 22 03:57:57 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 2256e572-8609-44c2-a675-3ca3d0d5cd75 does not exist
Nov 22 03:57:57 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:57:57 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:57:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.659958) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880660027, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2045, "num_deletes": 251, "total_data_size": 3452623, "memory_usage": 3498744, "flush_reason": "Manual Compaction"}
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880707728, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1959094, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16533, "largest_seqno": 18577, "table_properties": {"data_size": 1952501, "index_size": 3411, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16610, "raw_average_key_size": 20, "raw_value_size": 1937886, "raw_average_value_size": 2363, "num_data_blocks": 158, "num_entries": 820, "num_filter_entries": 820, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763801651, "oldest_key_time": 1763801651, "file_creation_time": 1763801880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 47813 microseconds, and 6905 cpu microseconds.
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.707780) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1959094 bytes OK
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.707803) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.714041) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.714077) EVENT_LOG_v1 {"time_micros": 1763801880714069, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.714096) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3444057, prev total WAL file size 3455368, number of live WAL files 2.
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.715207) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1913KB)], [38(7811KB)]
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880715283, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 9957731, "oldest_snapshot_seqno": -1}
Nov 22 03:58:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4512 keys, 8063844 bytes, temperature: kUnknown
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880804731, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 8063844, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8032410, "index_size": 19036, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 108780, "raw_average_key_size": 24, "raw_value_size": 7949682, "raw_average_value_size": 1761, "num_data_blocks": 810, "num_entries": 4512, "num_filter_entries": 4512, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763801880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.805468) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8063844 bytes
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.809776) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.2 rd, 90.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.6 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(9.2) write-amplify(4.1) OK, records in: 4918, records dropped: 406 output_compression: NoCompression
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.809830) EVENT_LOG_v1 {"time_micros": 1763801880809813, "job": 18, "event": "compaction_finished", "compaction_time_micros": 89559, "compaction_time_cpu_micros": 22731, "output_level": 6, "num_output_files": 1, "total_output_size": 8063844, "num_input_records": 4918, "num_output_records": 4512, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880810809, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880812716, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.715051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.812784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.812789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.812791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.812792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.812794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.813202) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880813285, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 256, "num_deletes": 251, "total_data_size": 13330, "memory_usage": 19592, "flush_reason": "Manual Compaction"}
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880827869, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 13302, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18578, "largest_seqno": 18833, "table_properties": {"data_size": 11549, "index_size": 50, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 4640, "raw_average_key_size": 18, "raw_value_size": 8177, "raw_average_value_size": 31, "num_data_blocks": 2, "num_entries": 256, "num_filter_entries": 256, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763801880, "oldest_key_time": 1763801880, "file_creation_time": 1763801880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 14715 microseconds, and 1112 cpu microseconds.
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.827922) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 13302 bytes OK
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.827956) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.832037) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.832060) EVENT_LOG_v1 {"time_micros": 1763801880832052, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.832087) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 11311, prev total WAL file size 11311, number of live WAL files 2.
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.832500) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(12KB)], [41(7874KB)]
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880832534, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 8077146, "oldest_snapshot_seqno": -1}
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4262 keys, 6306880 bytes, temperature: kUnknown
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880902779, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6306880, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6278879, "index_size": 16244, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 104280, "raw_average_key_size": 24, "raw_value_size": 6202221, "raw_average_value_size": 1455, "num_data_blocks": 683, "num_entries": 4262, "num_filter_entries": 4262, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763801880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.903175) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6306880 bytes
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.916713) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.8 rd, 89.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 7.7 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(1081.3) write-amplify(474.1) OK, records in: 4768, records dropped: 506 output_compression: NoCompression
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.916769) EVENT_LOG_v1 {"time_micros": 1763801880916747, "job": 20, "event": "compaction_finished", "compaction_time_micros": 70348, "compaction_time_cpu_micros": 16273, "output_level": 6, "num_output_files": 1, "total_output_size": 6306880, "num_input_records": 4768, "num_output_records": 4262, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880916996, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801880920042, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.832428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.920124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.920134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.920138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.920142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:58:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:58:00.920146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:58:01 np0005532048 nova_compute[253661]: 2025-11-22 08:58:01.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:58:01 np0005532048 nova_compute[253661]: 2025-11-22 08:58:01.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 22 03:58:01 np0005532048 nova_compute[253661]: 2025-11-22 08:58:01.246 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 22 03:58:01 np0005532048 nova_compute[253661]: 2025-11-22 08:58:01.247 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:58:01 np0005532048 nova_compute[253661]: 2025-11-22 08:58:01.248 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 22 03:58:01 np0005532048 nova_compute[253661]: 2025-11-22 08:58:01.261 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:58:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:04 np0005532048 nova_compute[253661]: 2025-11-22 08:58:04.272 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:58:04 np0005532048 nova_compute[253661]: 2025-11-22 08:58:04.273 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 03:58:04 np0005532048 nova_compute[253661]: 2025-11-22 08:58:04.273 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 03:58:04 np0005532048 nova_compute[253661]: 2025-11-22 08:58:04.285 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 03:58:04 np0005532048 nova_compute[253661]: 2025-11-22 08:58:04.285 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:58:04 np0005532048 nova_compute[253661]: 2025-11-22 08:58:04.309 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:58:04 np0005532048 nova_compute[253661]: 2025-11-22 08:58:04.310 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:58:04 np0005532048 nova_compute[253661]: 2025-11-22 08:58:04.310 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:58:04 np0005532048 nova_compute[253661]: 2025-11-22 08:58:04.310 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 03:58:04 np0005532048 nova_compute[253661]: 2025-11-22 08:58:04.311 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 03:58:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:58:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2837107446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:58:04 np0005532048 nova_compute[253661]: 2025-11-22 08:58:04.938 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 03:58:05 np0005532048 nova_compute[253661]: 2025-11-22 08:58:05.116 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 03:58:05 np0005532048 nova_compute[253661]: 2025-11-22 08:58:05.117 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5158MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 03:58:05 np0005532048 nova_compute[253661]: 2025-11-22 08:58:05.117 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:58:05 np0005532048 nova_compute[253661]: 2025-11-22 08:58:05.118 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:58:05 np0005532048 nova_compute[253661]: 2025-11-22 08:58:05.320 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 03:58:05 np0005532048 nova_compute[253661]: 2025-11-22 08:58:05.320 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 03:58:05 np0005532048 nova_compute[253661]: 2025-11-22 08:58:05.390 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 22 03:58:05 np0005532048 nova_compute[253661]: 2025-11-22 08:58:05.481 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 22 03:58:05 np0005532048 nova_compute[253661]: 2025-11-22 08:58:05.482 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 03:58:05 np0005532048 nova_compute[253661]: 2025-11-22 08:58:05.501 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 22 03:58:05 np0005532048 nova_compute[253661]: 2025-11-22 08:58:05.525 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 22 03:58:05 np0005532048 nova_compute[253661]: 2025-11-22 08:58:05.543 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 03:58:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:58:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/281208527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:58:06 np0005532048 nova_compute[253661]: 2025-11-22 08:58:06.013 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 03:58:06 np0005532048 nova_compute[253661]: 2025-11-22 08:58:06.020 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 03:58:06 np0005532048 nova_compute[253661]: 2025-11-22 08:58:06.034 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 03:58:06 np0005532048 nova_compute[253661]: 2025-11-22 08:58:06.036 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 03:58:06 np0005532048 nova_compute[253661]: 2025-11-22 08:58:06.036 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:58:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:06 np0005532048 nova_compute[253661]: 2025-11-22 08:58:06.980 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:58:06 np0005532048 nova_compute[253661]: 2025-11-22 08:58:06.981 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:58:06 np0005532048 nova_compute[253661]: 2025-11-22 08:58:06.981 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:58:06 np0005532048 nova_compute[253661]: 2025-11-22 08:58:06.982 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:58:06 np0005532048 nova_compute[253661]: 2025-11-22 08:58:06.982 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:58:06 np0005532048 nova_compute[253661]: 2025-11-22 08:58:06.982 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:58:06 np0005532048 nova_compute[253661]: 2025-11-22 08:58:06.983 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 03:58:08 np0005532048 nova_compute[253661]: 2025-11-22 08:58:08.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:58:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:58:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2968193383' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:58:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2968193383' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:58:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:58:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:58:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:58:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:58:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:58:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:23 np0005532048 podman[259794]: 2025-11-22 08:58:23.370499588 +0000 UTC m=+0.061158187 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:58:23 np0005532048 podman[259795]: 2025-11-22 08:58:23.385473213 +0000 UTC m=+0.074704408 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 03:58:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:25 np0005532048 podman[259835]: 2025-11-22 08:58:25.441227266 +0000 UTC m=+0.128308996 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:58:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:58:27.940 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:58:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:58:27.942 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:58:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:58:27.942 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:58:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:58:52
Nov 22 03:58:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:58:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:58:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'images']
Nov 22 03:58:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:58:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:58:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:58:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:58:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:58:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:58:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:58:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:58:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:58:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:58:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:58:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:58:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:58:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:58:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:58:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:58:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:58:54 np0005532048 podman[259864]: 2025-11-22 08:58:54.383691754 +0000 UTC m=+0.064126218 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Nov 22 03:58:54 np0005532048 podman[259865]: 2025-11-22 08:58:54.388910848 +0000 UTC m=+0.069413873 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true)
Nov 22 03:58:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:56 np0005532048 podman[259901]: 2025-11-22 08:58:56.439207932 +0000 UTC m=+0.124030134 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:58:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:58:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:58:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:58:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:58:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:58:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:58:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 6c0dfd06-4a0b-4fff-a302-ea2bd7b472a1 does not exist
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 24b0c496-ffe9-4367-9f25-79e468ca6c42 does not exist
Nov 22 03:59:02 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 9f5d8286-cc9b-47d3-99ad-8f5b15b006e5 does not exist
Nov 22 03:59:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:59:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:59:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:59:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:59:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:59:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:59:03 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:59:03 np0005532048 podman[260196]: 2025-11-22 08:59:03.810155089 +0000 UTC m=+0.038224026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:59:04 np0005532048 podman[260196]: 2025-11-22 08:59:04.152147896 +0000 UTC m=+0.380216803 container create 9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:59:04 np0005532048 nova_compute[253661]: 2025-11-22 08:59:04.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:59:04 np0005532048 nova_compute[253661]: 2025-11-22 08:59:04.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:59:04 np0005532048 nova_compute[253661]: 2025-11-22 08:59:04.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:59:04 np0005532048 nova_compute[253661]: 2025-11-22 08:59:04.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:59:04 np0005532048 nova_compute[253661]: 2025-11-22 08:59:04.250 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 03:59:04 np0005532048 nova_compute[253661]: 2025-11-22 08:59:04.250 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 03:59:04 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:59:04 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:59:04 np0005532048 systemd[1]: Started libpod-conmon-9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449.scope.
Nov 22 03:59:04 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:59:04 np0005532048 podman[260196]: 2025-11-22 08:59:04.619704392 +0000 UTC m=+0.847773339 container init 9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:59:04 np0005532048 podman[260196]: 2025-11-22 08:59:04.647358496 +0000 UTC m=+0.875427443 container start 9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:59:04 np0005532048 nostalgic_fermat[260232]: 167 167
Nov 22 03:59:04 np0005532048 systemd[1]: libpod-9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449.scope: Deactivated successfully.
Nov 22 03:59:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:59:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3742451565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:59:04 np0005532048 nova_compute[253661]: 2025-11-22 08:59:04.704 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 03:59:04 np0005532048 podman[260196]: 2025-11-22 08:59:04.731277581 +0000 UTC m=+0.959346518 container attach 9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:59:04 np0005532048 podman[260196]: 2025-11-22 08:59:04.731788573 +0000 UTC m=+0.959857480 container died 9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 03:59:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:04 np0005532048 nova_compute[253661]: 2025-11-22 08:59:04.892 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 03:59:04 np0005532048 nova_compute[253661]: 2025-11-22 08:59:04.895 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5185MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 03:59:04 np0005532048 nova_compute[253661]: 2025-11-22 08:59:04.896 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:59:04 np0005532048 nova_compute[253661]: 2025-11-22 08:59:04.896 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:59:04 np0005532048 nova_compute[253661]: 2025-11-22 08:59:04.953 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 03:59:04 np0005532048 nova_compute[253661]: 2025-11-22 08:59:04.953 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 03:59:04 np0005532048 nova_compute[253661]: 2025-11-22 08:59:04.971 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 03:59:05 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6af04a34783e3cc94e79fbe87f5da29e256ef7d6f53e2defe555ad0f427ced3c-merged.mount: Deactivated successfully.
Nov 22 03:59:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:59:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/842877680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:59:05 np0005532048 podman[260196]: 2025-11-22 08:59:05.429401229 +0000 UTC m=+1.657470136 container remove 9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:59:05 np0005532048 nova_compute[253661]: 2025-11-22 08:59:05.429 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 03:59:05 np0005532048 nova_compute[253661]: 2025-11-22 08:59:05.435 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 03:59:05 np0005532048 nova_compute[253661]: 2025-11-22 08:59:05.447 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 03:59:05 np0005532048 nova_compute[253661]: 2025-11-22 08:59:05.449 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 03:59:05 np0005532048 nova_compute[253661]: 2025-11-22 08:59:05.449 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:59:05 np0005532048 systemd[1]: libpod-conmon-9206e8d893c9201ff85a293a442eaa6ef3c514c8b63ec84e746428f8fcf19449.scope: Deactivated successfully.
Nov 22 03:59:05 np0005532048 podman[260279]: 2025-11-22 08:59:05.587008017 +0000 UTC m=+0.031796533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:59:05 np0005532048 podman[260279]: 2025-11-22 08:59:05.745641758 +0000 UTC m=+0.190430264 container create 974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_buck, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:59:05 np0005532048 systemd[1]: Started libpod-conmon-974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae.scope.
Nov 22 03:59:05 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:59:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7737fd56b1defbe0a353e237a75da3a129bf547eee73175c80a36eda6a78f83f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7737fd56b1defbe0a353e237a75da3a129bf547eee73175c80a36eda6a78f83f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7737fd56b1defbe0a353e237a75da3a129bf547eee73175c80a36eda6a78f83f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7737fd56b1defbe0a353e237a75da3a129bf547eee73175c80a36eda6a78f83f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7737fd56b1defbe0a353e237a75da3a129bf547eee73175c80a36eda6a78f83f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:05 np0005532048 podman[260279]: 2025-11-22 08:59:05.878389827 +0000 UTC m=+0.323178343 container init 974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_buck, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:59:05 np0005532048 podman[260279]: 2025-11-22 08:59:05.887704248 +0000 UTC m=+0.332492754 container start 974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_buck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:59:05 np0005532048 podman[260279]: 2025-11-22 08:59:05.903104392 +0000 UTC m=+0.347892918 container attach 974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:59:06 np0005532048 nova_compute[253661]: 2025-11-22 08:59:06.450 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:59:06 np0005532048 nova_compute[253661]: 2025-11-22 08:59:06.451 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 03:59:06 np0005532048 nova_compute[253661]: 2025-11-22 08:59:06.451 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 03:59:06 np0005532048 nova_compute[253661]: 2025-11-22 08:59:06.463 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 03:59:06 np0005532048 nova_compute[253661]: 2025-11-22 08:59:06.464 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:59:06 np0005532048 nova_compute[253661]: 2025-11-22 08:59:06.465 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:59:06 np0005532048 nova_compute[253661]: 2025-11-22 08:59:06.465 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:59:06 np0005532048 nova_compute[253661]: 2025-11-22 08:59:06.465 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:59:06 np0005532048 nova_compute[253661]: 2025-11-22 08:59:06.465 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:59:06 np0005532048 nova_compute[253661]: 2025-11-22 08:59:06.466 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 03:59:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:06 np0005532048 adoring_buck[260295]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:59:06 np0005532048 adoring_buck[260295]: --> relative data size: 1.0
Nov 22 03:59:06 np0005532048 adoring_buck[260295]: --> All data devices are unavailable
Nov 22 03:59:07 np0005532048 systemd[1]: libpod-974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae.scope: Deactivated successfully.
Nov 22 03:59:07 np0005532048 podman[260279]: 2025-11-22 08:59:07.026939097 +0000 UTC m=+1.471727603 container died 974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_buck, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 03:59:07 np0005532048 systemd[1]: libpod-974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae.scope: Consumed 1.082s CPU time.
Nov 22 03:59:07 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7737fd56b1defbe0a353e237a75da3a129bf547eee73175c80a36eda6a78f83f-merged.mount: Deactivated successfully.
Nov 22 03:59:07 np0005532048 podman[260279]: 2025-11-22 08:59:07.102051584 +0000 UTC m=+1.546840090 container remove 974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:59:07 np0005532048 systemd[1]: libpod-conmon-974d5b383e08313d5ca1c1a93b8e0ee2873972e2e1cfdae841f4b9140da48cae.scope: Deactivated successfully.
Nov 22 03:59:07 np0005532048 nova_compute[253661]: 2025-11-22 08:59:07.235 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:59:07 np0005532048 podman[260476]: 2025-11-22 08:59:07.784244716 +0000 UTC m=+0.047600427 container create 32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 03:59:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:07 np0005532048 systemd[1]: Started libpod-conmon-32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c.scope.
Nov 22 03:59:07 np0005532048 podman[260476]: 2025-11-22 08:59:07.761789525 +0000 UTC m=+0.025145046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:59:07 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:59:07 np0005532048 podman[260476]: 2025-11-22 08:59:07.887850086 +0000 UTC m=+0.151205607 container init 32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:59:07 np0005532048 podman[260476]: 2025-11-22 08:59:07.897814701 +0000 UTC m=+0.161170242 container start 32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 03:59:07 np0005532048 thirsty_sutherland[260492]: 167 167
Nov 22 03:59:07 np0005532048 systemd[1]: libpod-32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c.scope: Deactivated successfully.
Nov 22 03:59:07 np0005532048 podman[260476]: 2025-11-22 08:59:07.907995232 +0000 UTC m=+0.171350763 container attach 32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:59:07 np0005532048 podman[260476]: 2025-11-22 08:59:07.909250051 +0000 UTC m=+0.172605562 container died 32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 03:59:07 np0005532048 systemd[1]: var-lib-containers-storage-overlay-3b4c19aa0ef49b8e10e3c0161ab14aa28aa10aaf1ccfa7937f41ddf9d84ad568-merged.mount: Deactivated successfully.
Nov 22 03:59:07 np0005532048 podman[260476]: 2025-11-22 08:59:07.990238857 +0000 UTC m=+0.253594358 container remove 32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sutherland, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:59:08 np0005532048 systemd[1]: libpod-conmon-32bae2c6694dc7f6559ccf717d47bc970282c364eba21499699bc5e29646148c.scope: Deactivated successfully.
Nov 22 03:59:08 np0005532048 podman[260517]: 2025-11-22 08:59:08.193508663 +0000 UTC m=+0.078315393 container create 0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:59:08 np0005532048 podman[260517]: 2025-11-22 08:59:08.141454023 +0000 UTC m=+0.026260793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:59:08 np0005532048 systemd[1]: Started libpod-conmon-0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6.scope.
Nov 22 03:59:08 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:59:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795f7486ca4b0acd45ca01caa9ade537245a749125956a360b36fb051cbf8870/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795f7486ca4b0acd45ca01caa9ade537245a749125956a360b36fb051cbf8870/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795f7486ca4b0acd45ca01caa9ade537245a749125956a360b36fb051cbf8870/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795f7486ca4b0acd45ca01caa9ade537245a749125956a360b36fb051cbf8870/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:08 np0005532048 podman[260517]: 2025-11-22 08:59:08.370006127 +0000 UTC m=+0.254812847 container init 0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 22 03:59:08 np0005532048 podman[260517]: 2025-11-22 08:59:08.380826203 +0000 UTC m=+0.265632923 container start 0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:59:08 np0005532048 podman[260517]: 2025-11-22 08:59:08.411474568 +0000 UTC m=+0.296281288 container attach 0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 03:59:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]: {
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:    "0": [
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:        {
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "devices": [
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "/dev/loop3"
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            ],
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "lv_name": "ceph_lv0",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "lv_size": "21470642176",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "name": "ceph_lv0",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "tags": {
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.cluster_name": "ceph",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.crush_device_class": "",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.encrypted": "0",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.osd_id": "0",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.type": "block",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.vdo": "0"
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            },
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "type": "block",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "vg_name": "ceph_vg0"
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:        }
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:    ],
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:    "1": [
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:        {
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "devices": [
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "/dev/loop4"
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            ],
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "lv_name": "ceph_lv1",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "lv_size": "21470642176",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "name": "ceph_lv1",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "tags": {
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.cluster_name": "ceph",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.crush_device_class": "",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.encrypted": "0",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.osd_id": "1",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.type": "block",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.vdo": "0"
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            },
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "type": "block",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "vg_name": "ceph_vg1"
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:        }
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:    ],
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:    "2": [
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:        {
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "devices": [
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "/dev/loop5"
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            ],
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "lv_name": "ceph_lv2",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "lv_size": "21470642176",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "name": "ceph_lv2",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "tags": {
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.cephx_lockbox_secret": "",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.cluster_name": "ceph",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.crush_device_class": "",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.encrypted": "0",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.osd_id": "2",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.type": "block",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:                "ceph.vdo": "0"
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            },
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "type": "block",
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:            "vg_name": "ceph_vg2"
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:        }
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]:    ]
Nov 22 03:59:09 np0005532048 condescending_goldwasser[260533]: }
Nov 22 03:59:09 np0005532048 systemd[1]: libpod-0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6.scope: Deactivated successfully.
Nov 22 03:59:09 np0005532048 podman[260517]: 2025-11-22 08:59:09.215992073 +0000 UTC m=+1.100798803 container died 0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:59:09 np0005532048 nova_compute[253661]: 2025-11-22 08:59:09.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:59:09 np0005532048 systemd[1]: var-lib-containers-storage-overlay-795f7486ca4b0acd45ca01caa9ade537245a749125956a360b36fb051cbf8870-merged.mount: Deactivated successfully.
Nov 22 03:59:09 np0005532048 podman[260517]: 2025-11-22 08:59:09.33680102 +0000 UTC m=+1.221607741 container remove 0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:59:09 np0005532048 systemd[1]: libpod-conmon-0b39c22ccd26796e3612a37909654433ce26600c85e5cf29e261f32faaec45e6.scope: Deactivated successfully.
Nov 22 03:59:10 np0005532048 podman[260698]: 2025-11-22 08:59:10.054972593 +0000 UTC m=+0.069157367 container create 3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:59:10 np0005532048 podman[260698]: 2025-11-22 08:59:10.014845384 +0000 UTC m=+0.029030188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:59:10 np0005532048 systemd[1]: Started libpod-conmon-3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd.scope.
Nov 22 03:59:10 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:59:10 np0005532048 podman[260698]: 2025-11-22 08:59:10.180386538 +0000 UTC m=+0.194571392 container init 3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:59:10 np0005532048 podman[260698]: 2025-11-22 08:59:10.189006683 +0000 UTC m=+0.203191467 container start 3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 03:59:10 np0005532048 nice_noether[260714]: 167 167
Nov 22 03:59:10 np0005532048 systemd[1]: libpod-3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd.scope: Deactivated successfully.
Nov 22 03:59:10 np0005532048 podman[260698]: 2025-11-22 08:59:10.213795718 +0000 UTC m=+0.227980502 container attach 3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:59:10 np0005532048 podman[260698]: 2025-11-22 08:59:10.215216172 +0000 UTC m=+0.229400926 container died 3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:59:10 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2d490eb1d2733c4f67dd0f3127f59dfca5eaa3e3b8bb6dab5f00033ff925a737-merged.mount: Deactivated successfully.
Nov 22 03:59:10 np0005532048 podman[260698]: 2025-11-22 08:59:10.31030032 +0000 UTC m=+0.324485094 container remove 3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:59:10 np0005532048 systemd[1]: libpod-conmon-3c75a88d9f6f32464e8740ddb424e0edd863a6099a9691819737b7f8a30edbbd.scope: Deactivated successfully.
Nov 22 03:59:10 np0005532048 podman[260740]: 2025-11-22 08:59:10.511501439 +0000 UTC m=+0.070444627 container create 11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:59:10 np0005532048 podman[260740]: 2025-11-22 08:59:10.469368952 +0000 UTC m=+0.028312160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:59:10 np0005532048 systemd[1]: Started libpod-conmon-11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9.scope.
Nov 22 03:59:10 np0005532048 systemd[1]: Started libcrun container.
Nov 22 03:59:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4547c4e1a49953d53dc6a7830d127d006c2d3cb10de297a39831342d55440962/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4547c4e1a49953d53dc6a7830d127d006c2d3cb10de297a39831342d55440962/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4547c4e1a49953d53dc6a7830d127d006c2d3cb10de297a39831342d55440962/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4547c4e1a49953d53dc6a7830d127d006c2d3cb10de297a39831342d55440962/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:10 np0005532048 podman[260740]: 2025-11-22 08:59:10.673307945 +0000 UTC m=+0.232251173 container init 11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:59:10 np0005532048 podman[260740]: 2025-11-22 08:59:10.680915355 +0000 UTC m=+0.239858543 container start 11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 03:59:10 np0005532048 podman[260740]: 2025-11-22 08:59:10.688019453 +0000 UTC m=+0.246962661 container attach 11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 22 03:59:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:11 np0005532048 nova_compute[253661]: 2025-11-22 08:59:11.222 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]: {
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:        "osd_id": 1,
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:        "type": "bluestore"
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:    },
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:        "osd_id": 0,
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:        "type": "bluestore"
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:    },
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:        "osd_id": 2,
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:        "type": "bluestore"
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]:    }
Nov 22 03:59:11 np0005532048 friendly_hamilton[260757]: }
Nov 22 03:59:11 np0005532048 systemd[1]: libpod-11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9.scope: Deactivated successfully.
Nov 22 03:59:11 np0005532048 systemd[1]: libpod-11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9.scope: Consumed 1.108s CPU time.
Nov 22 03:59:11 np0005532048 podman[260740]: 2025-11-22 08:59:11.783109319 +0000 UTC m=+1.342052507 container died 11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:59:11 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4547c4e1a49953d53dc6a7830d127d006c2d3cb10de297a39831342d55440962-merged.mount: Deactivated successfully.
Nov 22 03:59:12 np0005532048 podman[260740]: 2025-11-22 08:59:12.009918262 +0000 UTC m=+1.568861470 container remove 11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:59:12 np0005532048 systemd[1]: libpod-conmon-11e9d039ca6488104bb07d05141e25148313b8e3c78310b35fa801a9b88a34b9.scope: Deactivated successfully.
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:59:12 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 22fb056c-8545-4376-ac08-3129cdbd53f4 does not exist
Nov 22 03:59:12 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 56d86b35-cf2d-4004-9006-36d09e2134dc does not exist
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2057405644' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2057405644' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:12.925720) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801952925855, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 788, "num_deletes": 256, "total_data_size": 1016885, "memory_usage": 1031336, "flush_reason": "Manual Compaction"}
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801952953691, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1007885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18834, "largest_seqno": 19621, "table_properties": {"data_size": 1003838, "index_size": 1763, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8516, "raw_average_key_size": 18, "raw_value_size": 995731, "raw_average_value_size": 2141, "num_data_blocks": 80, "num_entries": 465, "num_filter_entries": 465, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763801881, "oldest_key_time": 1763801881, "file_creation_time": 1763801952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 28061 microseconds, and 6065 cpu microseconds.
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:12.953790) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1007885 bytes OK
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:12.953836) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:12.981386) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:12.981447) EVENT_LOG_v1 {"time_micros": 1763801952981432, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:12.981487) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1012889, prev total WAL file size 1012889, number of live WAL files 2.
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:12.982604) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(984KB)], [44(6159KB)]
Nov 22 03:59:12 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801952982785, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7314765, "oldest_snapshot_seqno": -1}
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4203 keys, 7164255 bytes, temperature: kUnknown
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801953097390, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7164255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7135234, "index_size": 17391, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 104164, "raw_average_key_size": 24, "raw_value_size": 7058196, "raw_average_value_size": 1679, "num_data_blocks": 727, "num_entries": 4203, "num_filter_entries": 4203, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763801952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:13.097812) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7164255 bytes
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:13.119649) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.8 rd, 62.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 6.0 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(14.4) write-amplify(7.1) OK, records in: 4727, records dropped: 524 output_compression: NoCompression
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:13.119699) EVENT_LOG_v1 {"time_micros": 1763801953119682, "job": 22, "event": "compaction_finished", "compaction_time_micros": 114714, "compaction_time_cpu_micros": 37335, "output_level": 6, "num_output_files": 1, "total_output_size": 7164255, "num_input_records": 4727, "num_output_records": 4203, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801953120105, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763801953121289, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:12.982376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:13.121376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:13.121383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:13.121385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:13.121388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-08:59:13.121390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:59:13 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 03:59:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:59:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:59:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:59:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:59:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:59:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:59:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:25 np0005532048 podman[260856]: 2025-11-22 08:59:25.401170806 +0000 UTC m=+0.081048107 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 03:59:25 np0005532048 podman[260855]: 2025-11-22 08:59:25.419468099 +0000 UTC m=+0.101431229 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 03:59:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:27 np0005532048 podman[260892]: 2025-11-22 08:59:27.41835477 +0000 UTC m=+0.111422627 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:59:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:59:27.940 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 03:59:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:59:27.941 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 03:59:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 08:59:27.941 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 03:59:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_08:59:52
Nov 22 03:59:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:59:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 03:59:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.control', 'images', 'vms']
Nov 22 03:59:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:59:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:59:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:59:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:59:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:59:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:59:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:59:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:59:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:59:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:59:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:59:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:59:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:59:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:59:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:59:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:59:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:59:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:56 np0005532048 podman[260920]: 2025-11-22 08:59:56.40618534 +0000 UTC m=+0.084073940 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 03:59:56 np0005532048 podman[260919]: 2025-11-22 08:59:56.424548274 +0000 UTC m=+0.105121337 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 22 03:59:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:59:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:58 np0005532048 podman[260958]: 2025-11-22 08:59:58.415279417 +0000 UTC m=+0.102376691 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 03:59:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:00:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:03 np0005532048 nova_compute[253661]: 2025-11-22 09:00:03.267 253665 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 1.76 sec#033[00m
Nov 22 04:00:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:05 np0005532048 nova_compute[253661]: 2025-11-22 09:00:05.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:00:05 np0005532048 nova_compute[253661]: 2025-11-22 09:00:05.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:00:05 np0005532048 nova_compute[253661]: 2025-11-22 09:00:05.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:00:05 np0005532048 nova_compute[253661]: 2025-11-22 09:00:05.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:00:05 np0005532048 nova_compute[253661]: 2025-11-22 09:00:05.254 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:00:05 np0005532048 nova_compute[253661]: 2025-11-22 09:00:05.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:00:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:00:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1134528289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:00:05 np0005532048 nova_compute[253661]: 2025-11-22 09:00:05.746 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:00:05 np0005532048 nova_compute[253661]: 2025-11-22 09:00:05.930 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:00:05 np0005532048 nova_compute[253661]: 2025-11-22 09:00:05.931 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5191MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:00:05 np0005532048 nova_compute[253661]: 2025-11-22 09:00:05.931 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:05 np0005532048 nova_compute[253661]: 2025-11-22 09:00:05.931 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:06 np0005532048 nova_compute[253661]: 2025-11-22 09:00:06.253 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:00:06 np0005532048 nova_compute[253661]: 2025-11-22 09:00:06.254 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:00:06 np0005532048 nova_compute[253661]: 2025-11-22 09:00:06.270 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:00:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3160554198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:00:06 np0005532048 nova_compute[253661]: 2025-11-22 09:00:06.948 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.678s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:06 np0005532048 nova_compute[253661]: 2025-11-22 09:00:06.956 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:00:06 np0005532048 nova_compute[253661]: 2025-11-22 09:00:06.969 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:00:06 np0005532048 nova_compute[253661]: 2025-11-22 09:00:06.970 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:00:06 np0005532048 nova_compute[253661]: 2025-11-22 09:00:06.970 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:08 np0005532048 nova_compute[253661]: 2025-11-22 09:00:08.971 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:08 np0005532048 nova_compute[253661]: 2025-11-22 09:00:08.971 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:08 np0005532048 nova_compute[253661]: 2025-11-22 09:00:08.971 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:00:08 np0005532048 nova_compute[253661]: 2025-11-22 09:00:08.972 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:00:08 np0005532048 nova_compute[253661]: 2025-11-22 09:00:08.990 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:00:08 np0005532048 nova_compute[253661]: 2025-11-22 09:00:08.990 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:08 np0005532048 nova_compute[253661]: 2025-11-22 09:00:08.991 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:08 np0005532048 nova_compute[253661]: 2025-11-22 09:00:08.991 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:08 np0005532048 nova_compute[253661]: 2025-11-22 09:00:08.991 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:08 np0005532048 nova_compute[253661]: 2025-11-22 09:00:08.991 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:08 np0005532048 nova_compute[253661]: 2025-11-22 09:00:08.991 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:00:09 np0005532048 nova_compute[253661]: 2025-11-22 09:00:09.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:00:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/547708793' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:00:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/547708793' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:00:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:00:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:00:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:00:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:00:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:00:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:00:13 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 1f8fdf84-5ce7-4078-a4fd-63a2e8c739c8 does not exist
Nov 22 04:00:13 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 052239fe-87dd-4d39-acd9-1b9320dda08f does not exist
Nov 22 04:00:13 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 164c87a6-9209-4e92-af0b-7069362472fd does not exist
Nov 22 04:00:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:00:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:00:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:00:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:00:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:00:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:00:13 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:00:13 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:00:13 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:00:14 np0005532048 podman[261298]: 2025-11-22 09:00:13.914001468 +0000 UTC m=+0.025397312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:00:14 np0005532048 podman[261298]: 2025-11-22 09:00:14.099545065 +0000 UTC m=+0.210940899 container create 5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tu, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:00:14 np0005532048 systemd[1]: Started libpod-conmon-5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9.scope.
Nov 22 04:00:14 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:00:14 np0005532048 podman[261298]: 2025-11-22 09:00:14.454491829 +0000 UTC m=+0.565887703 container init 5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tu, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:00:14 np0005532048 podman[261298]: 2025-11-22 09:00:14.465874458 +0000 UTC m=+0.577270292 container start 5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tu, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:00:14 np0005532048 systemd[1]: libpod-5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9.scope: Deactivated successfully.
Nov 22 04:00:14 np0005532048 crazy_tu[261315]: 167 167
Nov 22 04:00:14 np0005532048 conmon[261315]: conmon 5cc45acd968f6699c6fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9.scope/container/memory.events
Nov 22 04:00:14 np0005532048 podman[261298]: 2025-11-22 09:00:14.504594324 +0000 UTC m=+0.615990158 container attach 5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:00:14 np0005532048 podman[261298]: 2025-11-22 09:00:14.505225049 +0000 UTC m=+0.616620893 container died 5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:00:14 np0005532048 systemd[1]: var-lib-containers-storage-overlay-466be050fa72a8ecec9ca44da0e889d777e0b29f9107a2b40ec91d6e103f384f-merged.mount: Deactivated successfully.
Nov 22 04:00:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:14 np0005532048 podman[261298]: 2025-11-22 09:00:14.962763039 +0000 UTC m=+1.074158903 container remove 5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:00:15 np0005532048 systemd[1]: libpod-conmon-5cc45acd968f6699c6fe615807f90ffdef0bc2e99c8ee1598f96d059c9a8c3f9.scope: Deactivated successfully.
Nov 22 04:00:15 np0005532048 podman[261339]: 2025-11-22 09:00:15.129743767 +0000 UTC m=+0.029478938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:00:15 np0005532048 podman[261339]: 2025-11-22 09:00:15.279270114 +0000 UTC m=+0.179005275 container create deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 04:00:15 np0005532048 systemd[1]: Started libpod-conmon-deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7.scope.
Nov 22 04:00:15 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:00:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42acfc55db580888cda9d3714670bc45f58ea39b713e7446c7b39c3d2d08827/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42acfc55db580888cda9d3714670bc45f58ea39b713e7446c7b39c3d2d08827/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42acfc55db580888cda9d3714670bc45f58ea39b713e7446c7b39c3d2d08827/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42acfc55db580888cda9d3714670bc45f58ea39b713e7446c7b39c3d2d08827/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42acfc55db580888cda9d3714670bc45f58ea39b713e7446c7b39c3d2d08827/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:15 np0005532048 podman[261339]: 2025-11-22 09:00:15.689570738 +0000 UTC m=+0.589305969 container init deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:00:15 np0005532048 podman[261339]: 2025-11-22 09:00:15.703605066 +0000 UTC m=+0.603340247 container start deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 04:00:15 np0005532048 podman[261339]: 2025-11-22 09:00:15.78796213 +0000 UTC m=+0.687697281 container attach deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_grothendieck, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:00:16 np0005532048 clever_grothendieck[261355]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:00:16 np0005532048 clever_grothendieck[261355]: --> relative data size: 1.0
Nov 22 04:00:16 np0005532048 clever_grothendieck[261355]: --> All data devices are unavailable
Nov 22 04:00:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:16 np0005532048 systemd[1]: libpod-deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7.scope: Deactivated successfully.
Nov 22 04:00:16 np0005532048 podman[261339]: 2025-11-22 09:00:16.833266674 +0000 UTC m=+1.733001845 container died deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_grothendieck, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:00:16 np0005532048 systemd[1]: libpod-deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7.scope: Consumed 1.088s CPU time.
Nov 22 04:00:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:22 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a42acfc55db580888cda9d3714670bc45f58ea39b713e7446c7b39c3d2d08827-merged.mount: Deactivated successfully.
Nov 22 04:00:22 np0005532048 podman[261339]: 2025-11-22 09:00:22.643715513 +0000 UTC m=+7.543450664 container remove deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 04:00:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:00:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:00:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:00:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:00:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:00:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:00:22 np0005532048 systemd[1]: libpod-conmon-deb8b4ebe29b0e3a15c17f6991a47175c8ea5a16a110f2215bb5ef2afa53dfe7.scope: Deactivated successfully.
Nov 22 04:00:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:23 np0005532048 podman[261537]: 2025-11-22 09:00:23.295032627 +0000 UTC m=+0.025739599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:00:23 np0005532048 podman[261537]: 2025-11-22 09:00:23.546601338 +0000 UTC m=+0.277308310 container create fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaplygin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:00:23 np0005532048 systemd[1]: Started libpod-conmon-fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8.scope.
Nov 22 04:00:23 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:00:23 np0005532048 podman[261537]: 2025-11-22 09:00:23.962176656 +0000 UTC m=+0.692883628 container init fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaplygin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:00:23 np0005532048 podman[261537]: 2025-11-22 09:00:23.970952574 +0000 UTC m=+0.701659526 container start fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:00:23 np0005532048 practical_chaplygin[261553]: 167 167
Nov 22 04:00:23 np0005532048 systemd[1]: libpod-fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8.scope: Deactivated successfully.
Nov 22 04:00:24 np0005532048 podman[261537]: 2025-11-22 09:00:24.205543203 +0000 UTC m=+0.936250175 container attach fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaplygin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 04:00:24 np0005532048 podman[261537]: 2025-11-22 09:00:24.207172238 +0000 UTC m=+0.937879270 container died fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:00:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay-72f258035da3f721c681202928e2f0ad5ba1464c29ef660589ddf3ee13b6057a-merged.mount: Deactivated successfully.
Nov 22 04:00:25 np0005532048 podman[261537]: 2025-11-22 09:00:25.405555357 +0000 UTC m=+2.136262309 container remove fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 22 04:00:25 np0005532048 systemd[1]: libpod-conmon-fdeccccdef4d7bbd9286fa0902153cb9ee04ae7c352666e213a2faa537f679e8.scope: Deactivated successfully.
Nov 22 04:00:25 np0005532048 podman[261576]: 2025-11-22 09:00:25.614536773 +0000 UTC m=+0.028692982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:00:26 np0005532048 podman[261576]: 2025-11-22 09:00:26.32292271 +0000 UTC m=+0.737078869 container create a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poincare, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 04:00:26 np0005532048 systemd[1]: Started libpod-conmon-a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f.scope.
Nov 22 04:00:26 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:00:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab75126ef1ba97dfde44ec131f6e4e9a37a0093055d388747cee875c88d5ec01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab75126ef1ba97dfde44ec131f6e4e9a37a0093055d388747cee875c88d5ec01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab75126ef1ba97dfde44ec131f6e4e9a37a0093055d388747cee875c88d5ec01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab75126ef1ba97dfde44ec131f6e4e9a37a0093055d388747cee875c88d5ec01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:26 np0005532048 podman[261576]: 2025-11-22 09:00:26.930191767 +0000 UTC m=+1.344347916 container init a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poincare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 04:00:26 np0005532048 podman[261576]: 2025-11-22 09:00:26.940048346 +0000 UTC m=+1.354204465 container start a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poincare, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:00:27 np0005532048 podman[261576]: 2025-11-22 09:00:27.095213767 +0000 UTC m=+1.509369916 container attach a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poincare, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:00:27 np0005532048 podman[261595]: 2025-11-22 09:00:27.217642172 +0000 UTC m=+0.607521264 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 22 04:00:27 np0005532048 podman[261596]: 2025-11-22 09:00:27.322608274 +0000 UTC m=+0.712113118 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]: {
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:    "0": [
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:        {
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "devices": [
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "/dev/loop3"
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            ],
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "lv_name": "ceph_lv0",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "lv_size": "21470642176",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "name": "ceph_lv0",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "tags": {
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.cluster_name": "ceph",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.crush_device_class": "",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.encrypted": "0",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.osd_id": "0",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.type": "block",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.vdo": "0"
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            },
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "type": "block",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "vg_name": "ceph_vg0"
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:        }
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:    ],
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:    "1": [
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:        {
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "devices": [
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "/dev/loop4"
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            ],
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "lv_name": "ceph_lv1",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "lv_size": "21470642176",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "name": "ceph_lv1",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "tags": {
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.cluster_name": "ceph",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.crush_device_class": "",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.encrypted": "0",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.osd_id": "1",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.type": "block",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.vdo": "0"
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            },
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "type": "block",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "vg_name": "ceph_vg1"
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:        }
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:    ],
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:    "2": [
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:        {
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "devices": [
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "/dev/loop5"
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            ],
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "lv_name": "ceph_lv2",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "lv_size": "21470642176",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "name": "ceph_lv2",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "tags": {
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.cluster_name": "ceph",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.crush_device_class": "",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.encrypted": "0",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.osd_id": "2",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.type": "block",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:                "ceph.vdo": "0"
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            },
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "type": "block",
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:            "vg_name": "ceph_vg2"
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:        }
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]:    ]
Nov 22 04:00:27 np0005532048 eloquent_poincare[261593]: }
Nov 22 04:00:27 np0005532048 systemd[1]: libpod-a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f.scope: Deactivated successfully.
Nov 22 04:00:27 np0005532048 podman[261641]: 2025-11-22 09:00:27.838857194 +0000 UTC m=+0.026742189 container died a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poincare, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 04:00:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:00:27.942 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:00:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:00:27.943 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:00:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:00:27.943 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:00:28 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ab75126ef1ba97dfde44ec131f6e4e9a37a0093055d388747cee875c88d5ec01-merged.mount: Deactivated successfully.
Nov 22 04:00:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:30 np0005532048 podman[261641]: 2025-11-22 09:00:30.040342812 +0000 UTC m=+2.228227777 container remove a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_poincare, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 04:00:30 np0005532048 systemd[1]: libpod-conmon-a43875e7b4c8f77d76f97620cc84719f17302ff9c523fdae05fab03511161c2f.scope: Deactivated successfully.
Nov 22 04:00:30 np0005532048 podman[261656]: 2025-11-22 09:00:30.123431829 +0000 UTC m=+1.562286503 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 04:00:30 np0005532048 podman[261822]: 2025-11-22 09:00:30.693037334 +0000 UTC m=+0.024664686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:00:30 np0005532048 podman[261822]: 2025-11-22 09:00:30.821753603 +0000 UTC m=+0.153380925 container create 56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 04:00:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:30 np0005532048 systemd[1]: Started libpod-conmon-56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d.scope.
Nov 22 04:00:30 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:00:31 np0005532048 podman[261822]: 2025-11-22 09:00:31.207578508 +0000 UTC m=+0.539205850 container init 56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:00:31 np0005532048 podman[261822]: 2025-11-22 09:00:31.222101548 +0000 UTC m=+0.553728890 container start 56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:00:31 np0005532048 zealous_khorana[261838]: 167 167
Nov 22 04:00:31 np0005532048 systemd[1]: libpod-56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d.scope: Deactivated successfully.
Nov 22 04:00:31 np0005532048 podman[261822]: 2025-11-22 09:00:31.362586386 +0000 UTC m=+0.694213738 container attach 56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:00:31 np0005532048 podman[261822]: 2025-11-22 09:00:31.364038717 +0000 UTC m=+0.695666039 container died 56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:00:31 np0005532048 systemd[1]: var-lib-containers-storage-overlay-65640d35b213629dc4315c12a0196083d9f97c4a47460a66f02bcdd4f27586c6-merged.mount: Deactivated successfully.
Nov 22 04:00:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:31 np0005532048 podman[261822]: 2025-11-22 09:00:31.954215559 +0000 UTC m=+1.285842881 container remove 56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 04:00:31 np0005532048 systemd[1]: libpod-conmon-56596517e521fe5d828df816b721b763fc6d4e7cf1cc7a05f1789a3f61c11f3d.scope: Deactivated successfully.
Nov 22 04:00:32 np0005532048 podman[261862]: 2025-11-22 09:00:32.114252754 +0000 UTC m=+0.023669175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:00:32 np0005532048 podman[261862]: 2025-11-22 09:00:32.491060468 +0000 UTC m=+0.400476869 container create 2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:00:32 np0005532048 systemd[1]: Started libpod-conmon-2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37.scope.
Nov 22 04:00:32 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:00:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bbe871f52feb4bcba195fb98ca80aaf5c771a964cddeb3a6f177f496d3327b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bbe871f52feb4bcba195fb98ca80aaf5c771a964cddeb3a6f177f496d3327b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bbe871f52feb4bcba195fb98ca80aaf5c771a964cddeb3a6f177f496d3327b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bbe871f52feb4bcba195fb98ca80aaf5c771a964cddeb3a6f177f496d3327b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:32 np0005532048 podman[261862]: 2025-11-22 09:00:32.896986952 +0000 UTC m=+0.806403393 container init 2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:00:32 np0005532048 podman[261862]: 2025-11-22 09:00:32.905212207 +0000 UTC m=+0.814628608 container start 2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:00:33 np0005532048 podman[261862]: 2025-11-22 09:00:33.300907913 +0000 UTC m=+1.210324344 container attach 2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]: {
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:        "osd_id": 1,
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:        "type": "bluestore"
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:    },
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:        "osd_id": 0,
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:        "type": "bluestore"
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:    },
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:        "osd_id": 2,
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:        "type": "bluestore"
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]:    }
Nov 22 04:00:33 np0005532048 laughing_northcutt[261878]: }
Nov 22 04:00:34 np0005532048 systemd[1]: libpod-2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37.scope: Deactivated successfully.
Nov 22 04:00:34 np0005532048 podman[261862]: 2025-11-22 09:00:34.02005469 +0000 UTC m=+1.929471091 container died 2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:00:34 np0005532048 systemd[1]: libpod-2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37.scope: Consumed 1.118s CPU time.
Nov 22 04:00:34 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6bbe871f52feb4bcba195fb98ca80aaf5c771a964cddeb3a6f177f496d3327b5-merged.mount: Deactivated successfully.
Nov 22 04:00:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:35 np0005532048 podman[261862]: 2025-11-22 09:00:35.516174172 +0000 UTC m=+3.425590613 container remove 2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:00:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:00:35 np0005532048 systemd[1]: libpod-conmon-2c5d495cc663a76d8aacbfaa4b608f85383c206811a09e7e7d70e781975d1b37.scope: Deactivated successfully.
Nov 22 04:00:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:00:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:00:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:00:36 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d80e1d42-7273-4dbb-b3d0-a13c192abef6 does not exist
Nov 22 04:00:36 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c0ce25ce-50c8-4018-b080-4d4fea127c0f does not exist
Nov 22 04:00:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:37 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:00:37 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:00:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:00:52
Nov 22 04:00:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:00:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:00:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'images', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'vms']
Nov 22 04:00:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:00:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:00:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:00:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:00:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:00:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:00:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:00:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:00:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:00:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:00:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:00:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:00:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:00:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:00:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:00:53 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:00:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:00:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:00:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:57 np0005532048 podman[261974]: 2025-11-22 09:00:57.416034974 +0000 UTC m=+0.102185484 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:00:57 np0005532048 podman[261993]: 2025-11-22 09:00:57.539278625 +0000 UTC m=+0.086697285 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:00:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:00:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4506 writes, 20K keys, 4506 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4506 writes, 4506 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1277 writes, 5838 keys, 1277 commit groups, 1.0 writes per commit group, ingest: 8.41 MB, 0.01 MB/s#012Interval WAL: 1278 writes, 1278 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     58.5      0.36              0.07        11    0.032       0      0       0.0       0.0#012  L6      1/0    6.83 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    103.8     86.5      0.81              0.21        10    0.081     43K   5161       0.0       0.0#012 Sum      1/0    6.83 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     72.1     78.0      1.16              0.27        21    0.055     43K   5161       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.2     67.1     66.5      0.63              0.14        10    0.063     23K   2975       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    103.8     86.5      0.81              0.21        10    0.081     43K   5161       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     58.7      0.35              0.07        10    0.035       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.020, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 1.2 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 6.33 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000146 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(409,5.97 MB,1.96262%) FilterBlock(22,131.48 KB,0.0422377%) IndexBlock(22,239.73 KB,0.0770117%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 22 04:00:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:00 np0005532048 podman[262015]: 2025-11-22 09:01:00.40878786 +0000 UTC m=+0.107023147 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller)
Nov 22 04:01:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:01:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:07 np0005532048 nova_compute[253661]: 2025-11-22 09:01:07.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:01:07 np0005532048 nova_compute[253661]: 2025-11-22 09:01:07.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:01:07 np0005532048 nova_compute[253661]: 2025-11-22 09:01:07.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:01:07 np0005532048 nova_compute[253661]: 2025-11-22 09:01:07.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:01:07 np0005532048 nova_compute[253661]: 2025-11-22 09:01:07.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:01:07 np0005532048 nova_compute[253661]: 2025-11-22 09:01:07.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:01:07 np0005532048 nova_compute[253661]: 2025-11-22 09:01:07.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:01:07 np0005532048 nova_compute[253661]: 2025-11-22 09:01:07.255 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:01:07 np0005532048 nova_compute[253661]: 2025-11-22 09:01:07.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:01:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:01:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1832247781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:01:07 np0005532048 nova_compute[253661]: 2025-11-22 09:01:07.827 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:01:08 np0005532048 nova_compute[253661]: 2025-11-22 09:01:08.014 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:01:08 np0005532048 nova_compute[253661]: 2025-11-22 09:01:08.016 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5192MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:01:08 np0005532048 nova_compute[253661]: 2025-11-22 09:01:08.016 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:01:08 np0005532048 nova_compute[253661]: 2025-11-22 09:01:08.017 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:01:08 np0005532048 nova_compute[253661]: 2025-11-22 09:01:08.073 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:01:08 np0005532048 nova_compute[253661]: 2025-11-22 09:01:08.074 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:01:08 np0005532048 nova_compute[253661]: 2025-11-22 09:01:08.090 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:01:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:01:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4102044728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:01:08 np0005532048 nova_compute[253661]: 2025-11-22 09:01:08.640 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:01:08 np0005532048 nova_compute[253661]: 2025-11-22 09:01:08.646 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:01:08 np0005532048 nova_compute[253661]: 2025-11-22 09:01:08.662 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:01:08 np0005532048 nova_compute[253661]: 2025-11-22 09:01:08.666 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:01:08 np0005532048 nova_compute[253661]: 2025-11-22 09:01:08.666 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:01:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:09 np0005532048 nova_compute[253661]: 2025-11-22 09:01:09.668 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:01:09 np0005532048 nova_compute[253661]: 2025-11-22 09:01:09.669 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:01:09 np0005532048 nova_compute[253661]: 2025-11-22 09:01:09.670 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:01:09 np0005532048 nova_compute[253661]: 2025-11-22 09:01:09.670 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:01:09 np0005532048 nova_compute[253661]: 2025-11-22 09:01:09.688 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:01:09 np0005532048 nova_compute[253661]: 2025-11-22 09:01:09.690 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:01:09 np0005532048 nova_compute[253661]: 2025-11-22 09:01:09.690 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:01:09 np0005532048 nova_compute[253661]: 2025-11-22 09:01:09.691 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:01:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:11 np0005532048 nova_compute[253661]: 2025-11-22 09:01:11.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:01:11 np0005532048 nova_compute[253661]: 2025-11-22 09:01:11.249 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:01:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:01:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/954446876' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:01:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/954446876' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:01:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:01:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:01:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:01:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:01:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:01:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:01:27.943 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:01:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:01:27.944 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:01:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:01:27.944 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:01:28 np0005532048 podman[262097]: 2025-11-22 09:01:28.371513938 +0000 UTC m=+0.062054392 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 04:01:28 np0005532048 podman[262096]: 2025-11-22 09:01:28.390736266 +0000 UTC m=+0.083031587 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:01:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:31 np0005532048 podman[262134]: 2025-11-22 09:01:31.405650264 +0000 UTC m=+0.092983070 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:01:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:01:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:01:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:01:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:01:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:01:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:01:37 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 32c5d87c-d364-42ce-9fcb-9276d09dc427 does not exist
Nov 22 04:01:37 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 370a6ea7-0ad8-4efd-a943-2063f934b86d does not exist
Nov 22 04:01:37 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d2eb7618-43c9-4f46-ae9d-c4df8214a684 does not exist
Nov 22 04:01:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:01:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:01:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:01:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:01:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:01:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:01:38 np0005532048 podman[262432]: 2025-11-22 09:01:38.160540221 +0000 UTC m=+0.020789414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:01:38 np0005532048 podman[262432]: 2025-11-22 09:01:38.633170234 +0000 UTC m=+0.493419397 container create 98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 22 04:01:38 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:01:38 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:01:38 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:01:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:38 np0005532048 systemd[1]: Started libpod-conmon-98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6.scope.
Nov 22 04:01:38 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:01:39 np0005532048 podman[262432]: 2025-11-22 09:01:39.198197921 +0000 UTC m=+1.058447114 container init 98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:01:39 np0005532048 podman[262432]: 2025-11-22 09:01:39.210463322 +0000 UTC m=+1.070712485 container start 98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 04:01:39 np0005532048 systemd[1]: libpod-98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6.scope: Deactivated successfully.
Nov 22 04:01:39 np0005532048 nervous_wu[262449]: 167 167
Nov 22 04:01:39 np0005532048 conmon[262449]: conmon 98e4a48852128f8990e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6.scope/container/memory.events
Nov 22 04:01:39 np0005532048 podman[262432]: 2025-11-22 09:01:39.337194327 +0000 UTC m=+1.197443500 container attach 98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:01:39 np0005532048 podman[262432]: 2025-11-22 09:01:39.338433834 +0000 UTC m=+1.198682997 container died 98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 04:01:39 np0005532048 systemd[1]: var-lib-containers-storage-overlay-292b8aa43a20628a9365107f1211ecd7e18123d03dd8f1000f9d0ae19079d264-merged.mount: Deactivated successfully.
Nov 22 04:01:39 np0005532048 podman[262432]: 2025-11-22 09:01:39.650258617 +0000 UTC m=+1.510507770 container remove 98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:01:39 np0005532048 systemd[1]: libpod-conmon-98e4a48852128f8990e647371cd6afce92d7f71f5ee54d6f3cb978baa8f100f6.scope: Deactivated successfully.
Nov 22 04:01:39 np0005532048 podman[262473]: 2025-11-22 09:01:39.79756362 +0000 UTC m=+0.025323069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:01:39 np0005532048 podman[262473]: 2025-11-22 09:01:39.904078326 +0000 UTC m=+0.131837755 container create 26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:01:40 np0005532048 systemd[1]: Started libpod-conmon-26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520.scope.
Nov 22 04:01:40 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:01:40 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9beafd093c4c3320839d45cd7d24ce897adcae5afea84a9a6cad2487b3e694/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:01:40 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9beafd093c4c3320839d45cd7d24ce897adcae5afea84a9a6cad2487b3e694/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:01:40 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9beafd093c4c3320839d45cd7d24ce897adcae5afea84a9a6cad2487b3e694/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:01:40 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9beafd093c4c3320839d45cd7d24ce897adcae5afea84a9a6cad2487b3e694/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:01:40 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb9beafd093c4c3320839d45cd7d24ce897adcae5afea84a9a6cad2487b3e694/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:01:40 np0005532048 podman[262473]: 2025-11-22 09:01:40.129701366 +0000 UTC m=+0.357460805 container init 26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 04:01:40 np0005532048 podman[262473]: 2025-11-22 09:01:40.137722996 +0000 UTC m=+0.365482425 container start 26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 04:01:40 np0005532048 podman[262473]: 2025-11-22 09:01:40.162688077 +0000 UTC m=+0.390447526 container attach 26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:01:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:41 np0005532048 gracious_tharp[262490]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:01:41 np0005532048 gracious_tharp[262490]: --> relative data size: 1.0
Nov 22 04:01:41 np0005532048 gracious_tharp[262490]: --> All data devices are unavailable
Nov 22 04:01:41 np0005532048 systemd[1]: libpod-26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520.scope: Deactivated successfully.
Nov 22 04:01:41 np0005532048 systemd[1]: libpod-26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520.scope: Consumed 1.184s CPU time.
Nov 22 04:01:41 np0005532048 podman[262473]: 2025-11-22 09:01:41.368540945 +0000 UTC m=+1.596300374 container died 26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:01:41 np0005532048 systemd[1]: var-lib-containers-storage-overlay-fb9beafd093c4c3320839d45cd7d24ce897adcae5afea84a9a6cad2487b3e694-merged.mount: Deactivated successfully.
Nov 22 04:01:41 np0005532048 podman[262473]: 2025-11-22 09:01:41.687609822 +0000 UTC m=+1.915369271 container remove 26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 04:01:41 np0005532048 systemd[1]: libpod-conmon-26adf5ab0a8447a454bbf69344b4b35d6c02a2b7839116ca2006046190939520.scope: Deactivated successfully.
Nov 22 04:01:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:42 np0005532048 podman[262672]: 2025-11-22 09:01:42.463881794 +0000 UTC m=+0.107131410 container create 27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 04:01:42 np0005532048 podman[262672]: 2025-11-22 09:01:42.387266714 +0000 UTC m=+0.030516420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:01:42 np0005532048 systemd[1]: Started libpod-conmon-27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f.scope.
Nov 22 04:01:42 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:01:42 np0005532048 podman[262672]: 2025-11-22 09:01:42.670283174 +0000 UTC m=+0.313532820 container init 27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 04:01:42 np0005532048 podman[262672]: 2025-11-22 09:01:42.67857301 +0000 UTC m=+0.321822626 container start 27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:01:42 np0005532048 upbeat_villani[262688]: 167 167
Nov 22 04:01:42 np0005532048 systemd[1]: libpod-27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f.scope: Deactivated successfully.
Nov 22 04:01:42 np0005532048 podman[262672]: 2025-11-22 09:01:42.741437507 +0000 UTC m=+0.384687173 container attach 27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:01:42 np0005532048 podman[262672]: 2025-11-22 09:01:42.742949979 +0000 UTC m=+0.386199635 container died 27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 04:01:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:42 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2bf976185d1b1f7502c506be3d827a161805ebe46ec7bbc12e3f7456e62cc312-merged.mount: Deactivated successfully.
Nov 22 04:01:43 np0005532048 podman[262672]: 2025-11-22 09:01:43.151601881 +0000 UTC m=+0.794851497 container remove 27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:01:43 np0005532048 systemd[1]: libpod-conmon-27905d58e476d1a0660a92f482b96ae4e9f83cda0be8eab9d5b8972b31d48b2f.scope: Deactivated successfully.
Nov 22 04:01:43 np0005532048 podman[262712]: 2025-11-22 09:01:43.413531093 +0000 UTC m=+0.103943192 container create cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:01:43 np0005532048 podman[262712]: 2025-11-22 09:01:43.339780353 +0000 UTC m=+0.030192522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:01:43 np0005532048 systemd[1]: Started libpod-conmon-cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56.scope.
Nov 22 04:01:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:01:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c891309a051fc4425cf2fcc7fa2e95db3224fe7bf08427762197362bcc0f26c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:01:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c891309a051fc4425cf2fcc7fa2e95db3224fe7bf08427762197362bcc0f26c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:01:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c891309a051fc4425cf2fcc7fa2e95db3224fe7bf08427762197362bcc0f26c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:01:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c891309a051fc4425cf2fcc7fa2e95db3224fe7bf08427762197362bcc0f26c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:01:43 np0005532048 podman[262712]: 2025-11-22 09:01:43.548093185 +0000 UTC m=+0.238505314 container init cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:01:43 np0005532048 podman[262712]: 2025-11-22 09:01:43.554818888 +0000 UTC m=+0.245230987 container start cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 04:01:43 np0005532048 podman[262712]: 2025-11-22 09:01:43.581034846 +0000 UTC m=+0.271446955 container attach cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]: {
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:    "0": [
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:        {
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "devices": [
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "/dev/loop3"
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            ],
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "lv_name": "ceph_lv0",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "lv_size": "21470642176",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "name": "ceph_lv0",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "tags": {
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.cluster_name": "ceph",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.crush_device_class": "",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.encrypted": "0",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.osd_id": "0",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.type": "block",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.vdo": "0"
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            },
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "type": "block",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "vg_name": "ceph_vg0"
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:        }
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:    ],
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:    "1": [
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:        {
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "devices": [
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "/dev/loop4"
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            ],
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "lv_name": "ceph_lv1",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "lv_size": "21470642176",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "name": "ceph_lv1",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "tags": {
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.cluster_name": "ceph",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.crush_device_class": "",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.encrypted": "0",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.osd_id": "1",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.type": "block",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.vdo": "0"
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            },
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "type": "block",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "vg_name": "ceph_vg1"
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:        }
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:    ],
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:    "2": [
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:        {
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "devices": [
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "/dev/loop5"
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            ],
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "lv_name": "ceph_lv2",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "lv_size": "21470642176",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "name": "ceph_lv2",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "tags": {
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.cluster_name": "ceph",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.crush_device_class": "",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.encrypted": "0",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.osd_id": "2",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.type": "block",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:                "ceph.vdo": "0"
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            },
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "type": "block",
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:            "vg_name": "ceph_vg2"
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:        }
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]:    ]
Nov 22 04:01:44 np0005532048 unruffled_satoshi[262728]: }
Nov 22 04:01:44 np0005532048 systemd[1]: libpod-cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56.scope: Deactivated successfully.
Nov 22 04:01:44 np0005532048 podman[262712]: 2025-11-22 09:01:44.386680832 +0000 UTC m=+1.077092951 container died cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 22 04:01:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:45 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c891309a051fc4425cf2fcc7fa2e95db3224fe7bf08427762197362bcc0f26c0-merged.mount: Deactivated successfully.
Nov 22 04:01:45 np0005532048 podman[262712]: 2025-11-22 09:01:45.629030957 +0000 UTC m=+2.319443056 container remove cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_satoshi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 04:01:45 np0005532048 systemd[1]: libpod-conmon-cdf51647a4827956925d03d451c51fc31fa4fcbedb0a8a19993f719ff6010b56.scope: Deactivated successfully.
Nov 22 04:01:46 np0005532048 podman[262892]: 2025-11-22 09:01:46.333274876 +0000 UTC m=+0.028565348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:01:46 np0005532048 podman[262892]: 2025-11-22 09:01:46.595518725 +0000 UTC m=+0.290809197 container create 3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 04:01:46 np0005532048 systemd[1]: Started libpod-conmon-3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd.scope.
Nov 22 04:01:46 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:01:46 np0005532048 podman[262892]: 2025-11-22 09:01:46.834598049 +0000 UTC m=+0.529888501 container init 3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:01:46 np0005532048 podman[262892]: 2025-11-22 09:01:46.842450117 +0000 UTC m=+0.537740549 container start 3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 04:01:46 np0005532048 mystifying_edison[262909]: 167 167
Nov 22 04:01:46 np0005532048 systemd[1]: libpod-3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd.scope: Deactivated successfully.
Nov 22 04:01:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:46 np0005532048 podman[262892]: 2025-11-22 09:01:46.878926032 +0000 UTC m=+0.574216464 container attach 3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:01:46 np0005532048 podman[262892]: 2025-11-22 09:01:46.880140138 +0000 UTC m=+0.575430570 container died 3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:01:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:47 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a90cbefe5fd8b2fd0ca2e100c77dd25ef23334f2a79ebba63b888b42a122f131-merged.mount: Deactivated successfully.
Nov 22 04:01:47 np0005532048 podman[262892]: 2025-11-22 09:01:47.31039954 +0000 UTC m=+1.005689992 container remove 3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:01:47 np0005532048 systemd[1]: libpod-conmon-3ec28d867def57c28794953abdba89546f89057649349b378f9f0c0e39594ecd.scope: Deactivated successfully.
Nov 22 04:01:47 np0005532048 podman[262931]: 2025-11-22 09:01:47.516218838 +0000 UTC m=+0.062586893 container create 8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gauss, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:01:47 np0005532048 systemd[1]: Started libpod-conmon-8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825.scope.
Nov 22 04:01:47 np0005532048 podman[262931]: 2025-11-22 09:01:47.482599333 +0000 UTC m=+0.028967438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:01:47 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:01:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d74e21cee7edc93010a0a83f7ddf81369dc5a92e1ecebcd8400cb9462cdc21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:01:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d74e21cee7edc93010a0a83f7ddf81369dc5a92e1ecebcd8400cb9462cdc21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:01:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d74e21cee7edc93010a0a83f7ddf81369dc5a92e1ecebcd8400cb9462cdc21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:01:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d74e21cee7edc93010a0a83f7ddf81369dc5a92e1ecebcd8400cb9462cdc21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:01:47 np0005532048 podman[262931]: 2025-11-22 09:01:47.613655721 +0000 UTC m=+0.160023806 container init 8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:01:47 np0005532048 podman[262931]: 2025-11-22 09:01:47.623157702 +0000 UTC m=+0.169525767 container start 8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gauss, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:01:47 np0005532048 podman[262931]: 2025-11-22 09:01:47.638782685 +0000 UTC m=+0.185150780 container attach 8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]: {
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:        "osd_id": 1,
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:        "type": "bluestore"
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:    },
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:        "osd_id": 0,
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:        "type": "bluestore"
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:    },
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:        "osd_id": 2,
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:        "type": "bluestore"
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]:    }
Nov 22 04:01:48 np0005532048 nifty_gauss[262948]: }
Nov 22 04:01:48 np0005532048 systemd[1]: libpod-8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825.scope: Deactivated successfully.
Nov 22 04:01:48 np0005532048 systemd[1]: libpod-8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825.scope: Consumed 1.064s CPU time.
Nov 22 04:01:48 np0005532048 podman[262931]: 2025-11-22 09:01:48.682031515 +0000 UTC m=+1.228399600 container died 8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 04:01:48 np0005532048 systemd[1]: var-lib-containers-storage-overlay-50d74e21cee7edc93010a0a83f7ddf81369dc5a92e1ecebcd8400cb9462cdc21-merged.mount: Deactivated successfully.
Nov 22 04:01:48 np0005532048 podman[262931]: 2025-11-22 09:01:48.799738559 +0000 UTC m=+1.346106624 container remove 8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gauss, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 22 04:01:48 np0005532048 systemd[1]: libpod-conmon-8a7d2d3f07a711f0ab9522a399624a12e210108b409d7e87f36175fec203a825.scope: Deactivated successfully.
Nov 22 04:01:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:01:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:01:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:01:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:01:48 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 8755f0d7-7072-465b-bff1-f15f25ebe547 does not exist
Nov 22 04:01:48 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a0969d3e-570e-4444-b1f4-2529fd6c925a does not exist
Nov 22 04:01:49 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:01:49 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:01:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:01:52
Nov 22 04:01:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:01:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:01:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'images', '.rgw.root', 'backups', 'vms', 'volumes']
Nov 22 04:01:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:01:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:01:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:01:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:01:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:01:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:01:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:01:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:53 np0005532048 ceph-mgr[75315]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1636168236
Nov 22 04:01:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:01:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:01:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:01:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:01:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:01:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:01:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:01:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:01:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:01:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:01:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:01:59 np0005532048 podman[263046]: 2025-11-22 09:01:59.422144976 +0000 UTC m=+0.090836665 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 22 04:01:59 np0005532048 podman[263045]: 2025-11-22 09:01:59.440734208 +0000 UTC m=+0.117627657 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:02:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:02:00 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 22 04:02:00 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:00.989269) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:02:00 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 22 04:02:00 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802120989382, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1521, "num_deletes": 251, "total_data_size": 2465243, "memory_usage": 2504544, "flush_reason": "Manual Compaction"}
Nov 22 04:02:00 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802121021561, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2431358, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19622, "largest_seqno": 21142, "table_properties": {"data_size": 2424235, "index_size": 4194, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14519, "raw_average_key_size": 19, "raw_value_size": 2410050, "raw_average_value_size": 3292, "num_data_blocks": 191, "num_entries": 732, "num_filter_entries": 732, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763801953, "oldest_key_time": 1763801953, "file_creation_time": 1763802120, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 32419 microseconds, and 11362 cpu microseconds.
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.021684) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2431358 bytes OK
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.021723) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.024120) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.024151) EVENT_LOG_v1 {"time_micros": 1763802121024141, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.024183) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2458593, prev total WAL file size 2458593, number of live WAL files 2.
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.025607) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2374KB)], [47(6996KB)]
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802121025700, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9595613, "oldest_snapshot_seqno": -1}
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4421 keys, 7827837 bytes, temperature: kUnknown
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802121108021, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7827837, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7796815, "index_size": 18880, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 109256, "raw_average_key_size": 24, "raw_value_size": 7715314, "raw_average_value_size": 1745, "num_data_blocks": 790, "num_entries": 4421, "num_filter_entries": 4421, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763802121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.108387) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7827837 bytes
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.112686) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.4 rd, 95.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 6.8 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(7.2) write-amplify(3.2) OK, records in: 4935, records dropped: 514 output_compression: NoCompression
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.112709) EVENT_LOG_v1 {"time_micros": 1763802121112698, "job": 24, "event": "compaction_finished", "compaction_time_micros": 82422, "compaction_time_cpu_micros": 33866, "output_level": 6, "num_output_files": 1, "total_output_size": 7827837, "num_input_records": 4935, "num_output_records": 4421, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802121113337, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802121114997, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.025416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.115164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.115171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.115173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.115174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:02:01.115176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:02:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:02 np0005532048 podman[263084]: 2025-11-22 09:02:02.41429009 +0000 UTC m=+0.099165688 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:02:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:02:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:02:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:02:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:08 np0005532048 nova_compute[253661]: 2025-11-22 09:02:08.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:02:08 np0005532048 nova_compute[253661]: 2025-11-22 09:02:08.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:02:08 np0005532048 nova_compute[253661]: 2025-11-22 09:02:08.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:02:08 np0005532048 nova_compute[253661]: 2025-11-22 09:02:08.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:02:08 np0005532048 nova_compute[253661]: 2025-11-22 09:02:08.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:02:08 np0005532048 nova_compute[253661]: 2025-11-22 09:02:08.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:02:08 np0005532048 nova_compute[253661]: 2025-11-22 09:02:08.253 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:02:08 np0005532048 nova_compute[253661]: 2025-11-22 09:02:08.253 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:02:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:02:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2240995301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:02:08 np0005532048 nova_compute[253661]: 2025-11-22 09:02:08.740 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:02:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:02:08 np0005532048 nova_compute[253661]: 2025-11-22 09:02:08.897 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:02:08 np0005532048 nova_compute[253661]: 2025-11-22 09:02:08.898 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5193MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:02:08 np0005532048 nova_compute[253661]: 2025-11-22 09:02:08.898 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:02:08 np0005532048 nova_compute[253661]: 2025-11-22 09:02:08.899 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:02:08 np0005532048 nova_compute[253661]: 2025-11-22 09:02:08.954 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:02:08 np0005532048 nova_compute[253661]: 2025-11-22 09:02:08.955 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:02:08 np0005532048 nova_compute[253661]: 2025-11-22 09:02:08.971 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:02:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:02:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3879418390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:02:09 np0005532048 nova_compute[253661]: 2025-11-22 09:02:09.545 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:02:09 np0005532048 nova_compute[253661]: 2025-11-22 09:02:09.552 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:02:09 np0005532048 nova_compute[253661]: 2025-11-22 09:02:09.563 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:02:09 np0005532048 nova_compute[253661]: 2025-11-22 09:02:09.565 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:02:09 np0005532048 nova_compute[253661]: 2025-11-22 09:02:09.565 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:02:10 np0005532048 nova_compute[253661]: 2025-11-22 09:02:10.566 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:02:10 np0005532048 nova_compute[253661]: 2025-11-22 09:02:10.566 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:02:10 np0005532048 nova_compute[253661]: 2025-11-22 09:02:10.567 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:02:10 np0005532048 nova_compute[253661]: 2025-11-22 09:02:10.567 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:02:10 np0005532048 nova_compute[253661]: 2025-11-22 09:02:10.580 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:02:10 np0005532048 nova_compute[253661]: 2025-11-22 09:02:10.581 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:02:10 np0005532048 nova_compute[253661]: 2025-11-22 09:02:10.582 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:02:10 np0005532048 nova_compute[253661]: 2025-11-22 09:02:10.582 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:02:10 np0005532048 nova_compute[253661]: 2025-11-22 09:02:10.582 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:02:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:02:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:02:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2988617963' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:02:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2988617963' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:02:13 np0005532048 nova_compute[253661]: 2025-11-22 09:02:13.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:02:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:02:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:02:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:02:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:02:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:02:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:02:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:02:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:02:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:02:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:02:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:02:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:02:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:02:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:02:27.945 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:02:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:02:27.945 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:02:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:02:27.945 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:02:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:02:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 22 04:02:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 22 04:02:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 22 04:02:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 22 04:02:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 22 04:02:30 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 22 04:02:30 np0005532048 podman[263155]: 2025-11-22 09:02:30.360484835 +0000 UTC m=+0.056541722 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 04:02:30 np0005532048 podman[263156]: 2025-11-22 09:02:30.362471333 +0000 UTC m=+0.054691056 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:02:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:02:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 22 04:02:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 22 04:02:32 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 22 04:02:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 8.4 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 MiB/s wr, 6 op/s
Nov 22 04:02:33 np0005532048 podman[263190]: 2025-11-22 09:02:33.407377034 +0000 UTC m=+0.094790553 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 22 04:02:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 22 04:02:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 22 04:02:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 22 04:02:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 16 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.8 MiB/s wr, 29 op/s
Nov 22 04:02:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 33 MiB data, 182 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.0 MiB/s wr, 49 op/s
Nov 22 04:02:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 22 04:02:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 22 04:02:37 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 22 04:02:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.2 MiB/s wr, 54 op/s
Nov 22 04:02:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.1 MiB/s wr, 42 op/s
Nov 22 04:02:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 22 04:02:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 22 04:02:42 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 22 04:02:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 3.1 MiB/s wr, 26 op/s
Nov 22 04:02:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.0 MiB/s wr, 30 op/s
Nov 22 04:02:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 22 04:02:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 22 04:02:46 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 22 04:02:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 16 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.5 KiB/s wr, 53 op/s
Nov 22 04:02:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 456 KiB data, 162 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Nov 22 04:02:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:02:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:02:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:02:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:02:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:02:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:02:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:02:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:02:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:02:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:02:50 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ecd74cb3-2e5c-400a-bdf9-b28326b504b0 does not exist
Nov 22 04:02:50 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 99342604-3675-45d5-87f1-05b07bbdebae does not exist
Nov 22 04:02:50 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 61b2b36c-e5d9-48eb-bee7-06f40dce4204 does not exist
Nov 22 04:02:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:02:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:02:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:02:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:02:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:02:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:02:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 456 KiB data, 162 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Nov 22 04:02:50 np0005532048 podman[263609]: 2025-11-22 09:02:50.940667187 +0000 UTC m=+0.066931407 container create 742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 04:02:50 np0005532048 systemd[1]: Started libpod-conmon-742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e.scope.
Nov 22 04:02:50 np0005532048 podman[263609]: 2025-11-22 09:02:50.899690289 +0000 UTC m=+0.025954519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:02:51 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:02:51 np0005532048 podman[263609]: 2025-11-22 09:02:51.054457606 +0000 UTC m=+0.180721846 container init 742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:02:51 np0005532048 podman[263609]: 2025-11-22 09:02:51.062247418 +0000 UTC m=+0.188511638 container start 742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 04:02:51 np0005532048 relaxed_bouman[263625]: 167 167
Nov 22 04:02:51 np0005532048 systemd[1]: libpod-742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e.scope: Deactivated successfully.
Nov 22 04:02:51 np0005532048 podman[263609]: 2025-11-22 09:02:51.07127135 +0000 UTC m=+0.197535580 container attach 742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:02:51 np0005532048 podman[263609]: 2025-11-22 09:02:51.071795622 +0000 UTC m=+0.198059842 container died 742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:02:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5e82f89e3b6418bd8685ecfd663cd59ee973645796f39fc37a2116afd0e7b0df-merged.mount: Deactivated successfully.
Nov 22 04:02:51 np0005532048 podman[263609]: 2025-11-22 09:02:51.152821795 +0000 UTC m=+0.279086005 container remove 742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bouman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 04:02:51 np0005532048 systemd[1]: libpod-conmon-742714d178cddb4c4e37a6c47e037a7c7c04efff6a2014e5104feff5f9fa157e.scope: Deactivated successfully.
Nov 22 04:02:51 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:02:51 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:02:51 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:02:51 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:02:51 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:02:51 np0005532048 podman[263651]: 2025-11-22 09:02:51.321862264 +0000 UTC m=+0.052175274 container create 61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 04:02:51 np0005532048 systemd[1]: Started libpod-conmon-61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3.scope.
Nov 22 04:02:51 np0005532048 podman[263651]: 2025-11-22 09:02:51.298289534 +0000 UTC m=+0.028602574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:02:51 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:02:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db12deeab3afc07165708903c2b6984adadc094e4be67f9e3c7e697e69d6fbea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db12deeab3afc07165708903c2b6984adadc094e4be67f9e3c7e697e69d6fbea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db12deeab3afc07165708903c2b6984adadc094e4be67f9e3c7e697e69d6fbea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db12deeab3afc07165708903c2b6984adadc094e4be67f9e3c7e697e69d6fbea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db12deeab3afc07165708903c2b6984adadc094e4be67f9e3c7e697e69d6fbea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:51 np0005532048 podman[263651]: 2025-11-22 09:02:51.415801275 +0000 UTC m=+0.146114315 container init 61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 04:02:51 np0005532048 podman[263651]: 2025-11-22 09:02:51.4245795 +0000 UTC m=+0.154892520 container start 61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 04:02:51 np0005532048 podman[263651]: 2025-11-22 09:02:51.428363183 +0000 UTC m=+0.158676233 container attach 61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:02:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 22 04:02:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 22 04:02:51 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 22 04:02:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:02:52
Nov 22 04:02:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:02:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:02:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.meta', 'backups']
Nov 22 04:02:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:02:52 np0005532048 eloquent_fermat[263667]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:02:52 np0005532048 eloquent_fermat[263667]: --> relative data size: 1.0
Nov 22 04:02:52 np0005532048 eloquent_fermat[263667]: --> All data devices are unavailable
Nov 22 04:02:52 np0005532048 systemd[1]: libpod-61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3.scope: Deactivated successfully.
Nov 22 04:02:52 np0005532048 podman[263651]: 2025-11-22 09:02:52.602647159 +0000 UTC m=+1.332960179 container died 61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 04:02:52 np0005532048 systemd[1]: libpod-61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3.scope: Consumed 1.122s CPU time.
Nov 22 04:02:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:02:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:02:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:02:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:02:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:02:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:02:52 np0005532048 systemd[1]: var-lib-containers-storage-overlay-db12deeab3afc07165708903c2b6984adadc094e4be67f9e3c7e697e69d6fbea-merged.mount: Deactivated successfully.
Nov 22 04:02:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.4 KiB/s wr, 39 op/s
Nov 22 04:02:53 np0005532048 podman[263651]: 2025-11-22 09:02:53.024008694 +0000 UTC m=+1.754321724 container remove 61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:02:53 np0005532048 systemd[1]: libpod-conmon-61bfd293e4e4be2772fa43c3b7b3e90aaceb549287e1d76309531fd6e3350ba3.scope: Deactivated successfully.
Nov 22 04:02:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:02:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:02:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:02:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:02:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:02:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:02:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:02:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:02:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:02:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:02:54 np0005532048 podman[263852]: 2025-11-22 09:02:54.689151023 +0000 UTC m=+0.040945648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:02:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.2 KiB/s wr, 36 op/s
Nov 22 04:02:55 np0005532048 podman[263852]: 2025-11-22 09:02:55.073857335 +0000 UTC m=+0.425651950 container create 6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 04:02:55 np0005532048 systemd[1]: Started libpod-conmon-6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327.scope.
Nov 22 04:02:55 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:02:55 np0005532048 podman[263852]: 2025-11-22 09:02:55.774526171 +0000 UTC m=+1.126320876 container init 6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:02:55 np0005532048 podman[263852]: 2025-11-22 09:02:55.78709654 +0000 UTC m=+1.138891175 container start 6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:02:55 np0005532048 frosty_darwin[263869]: 167 167
Nov 22 04:02:55 np0005532048 systemd[1]: libpod-6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327.scope: Deactivated successfully.
Nov 22 04:02:55 np0005532048 podman[263852]: 2025-11-22 09:02:55.911852739 +0000 UTC m=+1.263647434 container attach 6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:02:55 np0005532048 podman[263852]: 2025-11-22 09:02:55.912544656 +0000 UTC m=+1.264339321 container died 6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 04:02:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-258a28ccb2fe3bdf5f7b78b51f857af8e399660ef49f0b2a08969ac1f2cdde0a-merged.mount: Deactivated successfully.
Nov 22 04:02:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 818 B/s wr, 7 op/s
Nov 22 04:02:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:56 np0005532048 podman[263852]: 2025-11-22 09:02:56.984674378 +0000 UTC m=+2.336469003 container remove 6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 04:02:56 np0005532048 systemd[1]: libpod-conmon-6fbbdbfcb2513a0acd8ab812912428229512aee24c4e2ca20f18a004c0cf7327.scope: Deactivated successfully.
Nov 22 04:02:57 np0005532048 podman[263892]: 2025-11-22 09:02:57.168223064 +0000 UTC m=+0.028564384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:02:57 np0005532048 podman[263892]: 2025-11-22 09:02:57.272508329 +0000 UTC m=+0.132849629 container create bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_faraday, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:02:57 np0005532048 systemd[1]: Started libpod-conmon-bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5.scope.
Nov 22 04:02:57 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:02:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142da861e7bab3a7392f66bdc884ecf6b13fb3e5db6567679be1e62fbf9b38b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142da861e7bab3a7392f66bdc884ecf6b13fb3e5db6567679be1e62fbf9b38b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142da861e7bab3a7392f66bdc884ecf6b13fb3e5db6567679be1e62fbf9b38b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142da861e7bab3a7392f66bdc884ecf6b13fb3e5db6567679be1e62fbf9b38b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:57 np0005532048 podman[263892]: 2025-11-22 09:02:57.47750856 +0000 UTC m=+0.337849900 container init bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 04:02:57 np0005532048 podman[263892]: 2025-11-22 09:02:57.486612735 +0000 UTC m=+0.346954035 container start bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_faraday, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 04:02:57 np0005532048 podman[263892]: 2025-11-22 09:02:57.500915006 +0000 UTC m=+0.361256376 container attach bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_faraday, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]: {
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:    "0": [
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:        {
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "devices": [
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "/dev/loop3"
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            ],
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "lv_name": "ceph_lv0",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "lv_size": "21470642176",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "name": "ceph_lv0",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "tags": {
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.cluster_name": "ceph",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.crush_device_class": "",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.encrypted": "0",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.osd_id": "0",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.type": "block",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.vdo": "0"
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            },
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "type": "block",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "vg_name": "ceph_vg0"
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:        }
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:    ],
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:    "1": [
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:        {
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "devices": [
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "/dev/loop4"
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            ],
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "lv_name": "ceph_lv1",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "lv_size": "21470642176",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "name": "ceph_lv1",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "tags": {
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.cluster_name": "ceph",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.crush_device_class": "",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.encrypted": "0",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.osd_id": "1",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.type": "block",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.vdo": "0"
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            },
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "type": "block",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "vg_name": "ceph_vg1"
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:        }
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:    ],
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:    "2": [
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:        {
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "devices": [
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "/dev/loop5"
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            ],
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "lv_name": "ceph_lv2",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "lv_size": "21470642176",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "name": "ceph_lv2",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "tags": {
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.cluster_name": "ceph",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.crush_device_class": "",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.encrypted": "0",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.osd_id": "2",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.type": "block",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:                "ceph.vdo": "0"
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            },
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "type": "block",
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:            "vg_name": "ceph_vg2"
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:        }
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]:    ]
Nov 22 04:02:58 np0005532048 quirky_faraday[263909]: }
Nov 22 04:02:58 np0005532048 systemd[1]: libpod-bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5.scope: Deactivated successfully.
Nov 22 04:02:58 np0005532048 podman[263892]: 2025-11-22 09:02:58.286133221 +0000 UTC m=+1.146474521 container died bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_faraday, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 04:02:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay-142da861e7bab3a7392f66bdc884ecf6b13fb3e5db6567679be1e62fbf9b38b0-merged.mount: Deactivated successfully.
Nov 22 04:02:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:02:59 np0005532048 podman[263892]: 2025-11-22 09:02:59.245866698 +0000 UTC m=+2.106207998 container remove bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_faraday, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 04:02:59 np0005532048 systemd[1]: libpod-conmon-bc4d7cf1d7e8a9db68af5bd5c6dc5333befab423fcf92157665cb030327e98d5.scope: Deactivated successfully.
Nov 22 04:03:00 np0005532048 podman[264070]: 2025-11-22 09:03:00.059702548 +0000 UTC m=+0.033071725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:03:00 np0005532048 podman[264070]: 2025-11-22 09:03:00.221166579 +0000 UTC m=+0.194535706 container create 77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:03:00 np0005532048 systemd[1]: Started libpod-conmon-77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed.scope.
Nov 22 04:03:00 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:03:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 22 04:03:00 np0005532048 podman[264070]: 2025-11-22 09:03:00.629086583 +0000 UTC m=+0.602455700 container init 77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:03:00 np0005532048 podman[264070]: 2025-11-22 09:03:00.638218898 +0000 UTC m=+0.611588025 container start 77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 04:03:00 np0005532048 peaceful_hamilton[264087]: 167 167
Nov 22 04:03:00 np0005532048 systemd[1]: libpod-77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed.scope: Deactivated successfully.
Nov 22 04:03:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 22 04:03:00 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 22 04:03:00 np0005532048 podman[264070]: 2025-11-22 09:03:00.708805494 +0000 UTC m=+0.682174641 container attach 77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:03:00 np0005532048 podman[264070]: 2025-11-22 09:03:00.710493415 +0000 UTC m=+0.683862562 container died 77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 04:03:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:01 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ef6eb6c1eb075c38c3633f2957ef78384e5147aef9c1b46d548f6224ee29da16-merged.mount: Deactivated successfully.
Nov 22 04:03:01 np0005532048 podman[264070]: 2025-11-22 09:03:01.590924961 +0000 UTC m=+1.564294088 container remove 77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:03:01 np0005532048 systemd[1]: libpod-conmon-77645be5a6695e5738052009e7594461aa6d70d75bec2fe79c25eb46339e09ed.scope: Deactivated successfully.
Nov 22 04:03:01 np0005532048 podman[264092]: 2025-11-22 09:03:01.680254389 +0000 UTC m=+0.999321352 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:03:01 np0005532048 podman[264100]: 2025-11-22 09:03:01.685703303 +0000 UTC m=+1.004311274 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:03:01 np0005532048 podman[264149]: 2025-11-22 09:03:01.809990021 +0000 UTC m=+0.044866475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:03:01 np0005532048 podman[264149]: 2025-11-22 09:03:01.964607513 +0000 UTC m=+0.199483947 container create 16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:03:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:02 np0005532048 systemd[1]: Started libpod-conmon-16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d.scope.
Nov 22 04:03:02 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:03:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65312179aea1b38fab8c71d2deffd4ade33ffcdbd29a2f2e01229a3212de433/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65312179aea1b38fab8c71d2deffd4ade33ffcdbd29a2f2e01229a3212de433/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65312179aea1b38fab8c71d2deffd4ade33ffcdbd29a2f2e01229a3212de433/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65312179aea1b38fab8c71d2deffd4ade33ffcdbd29a2f2e01229a3212de433/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:02 np0005532048 podman[264149]: 2025-11-22 09:03:02.444061747 +0000 UTC m=+0.678938201 container init 16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 04:03:02 np0005532048 podman[264149]: 2025-11-22 09:03:02.453904859 +0000 UTC m=+0.688781283 container start 16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:03:02 np0005532048 podman[264149]: 2025-11-22 09:03:02.549869979 +0000 UTC m=+0.784746413 container attach 16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:03:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 102 B/s wr, 1 op/s
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]: {
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:        "osd_id": 1,
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:        "type": "bluestore"
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:    },
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:        "osd_id": 0,
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:        "type": "bluestore"
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:    },
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:        "osd_id": 2,
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:        "type": "bluestore"
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]:    }
Nov 22 04:03:03 np0005532048 xenodochial_galois[264165]: }
Nov 22 04:03:03 np0005532048 systemd[1]: libpod-16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d.scope: Deactivated successfully.
Nov 22 04:03:03 np0005532048 podman[264149]: 2025-11-22 09:03:03.508121691 +0000 UTC m=+1.742998115 container died 16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:03:03 np0005532048 systemd[1]: libpod-16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d.scope: Consumed 1.059s CPU time.
Nov 22 04:03:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b65312179aea1b38fab8c71d2deffd4ade33ffcdbd29a2f2e01229a3212de433-merged.mount: Deactivated successfully.
Nov 22 04:03:04 np0005532048 podman[264149]: 2025-11-22 09:03:04.564723282 +0000 UTC m=+2.799599756 container remove 16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 04:03:04 np0005532048 systemd[1]: libpod-conmon-16bfdb7896c234bbe779d4ea7a4c134a0bb63f0ce295dbf5487d6287f3ef425d.scope: Deactivated successfully.
Nov 22 04:03:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:03:04 np0005532048 podman[264199]: 2025-11-22 09:03:04.699500457 +0000 UTC m=+1.157573366 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:03:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:03:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:03:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 22 04:03:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:03:04 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev bd27353b-f202-475c-8e2f-5b3e6abb360e does not exist
Nov 22 04:03:04 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev b73bfafe-61b7-4d82-9300-ba5707a9a16a does not exist
Nov 22 04:03:06 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:03:06 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:03:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 22 04:03:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:07 np0005532048 nova_compute[253661]: 2025-11-22 09:03:07.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:03:07 np0005532048 nova_compute[253661]: 2025-11-22 09:03:07.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 22 04:03:08 np0005532048 nova_compute[253661]: 2025-11-22 09:03:08.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:03:08 np0005532048 nova_compute[253661]: 2025-11-22 09:03:08.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:03:08 np0005532048 nova_compute[253661]: 2025-11-22 09:03:08.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:03:08 np0005532048 nova_compute[253661]: 2025-11-22 09:03:08.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:03:08 np0005532048 nova_compute[253661]: 2025-11-22 09:03:08.255 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:03:08 np0005532048 nova_compute[253661]: 2025-11-22 09:03:08.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:03:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:03:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2234246451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:03:08 np0005532048 nova_compute[253661]: 2025-11-22 09:03:08.677 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:03:08 np0005532048 nova_compute[253661]: 2025-11-22 09:03:08.894 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:03:08 np0005532048 nova_compute[253661]: 2025-11-22 09:03:08.896 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5164MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:03:08 np0005532048 nova_compute[253661]: 2025-11-22 09:03:08.896 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:03:08 np0005532048 nova_compute[253661]: 2025-11-22 09:03:08.897 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:03:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 22 04:03:09 np0005532048 nova_compute[253661]: 2025-11-22 09:03:09.025 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:03:09 np0005532048 nova_compute[253661]: 2025-11-22 09:03:09.025 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:03:09 np0005532048 nova_compute[253661]: 2025-11-22 09:03:09.099 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 22 04:03:09 np0005532048 nova_compute[253661]: 2025-11-22 09:03:09.187 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 22 04:03:09 np0005532048 nova_compute[253661]: 2025-11-22 09:03:09.187 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 04:03:09 np0005532048 nova_compute[253661]: 2025-11-22 09:03:09.202 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 22 04:03:09 np0005532048 nova_compute[253661]: 2025-11-22 09:03:09.221 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 22 04:03:09 np0005532048 nova_compute[253661]: 2025-11-22 09:03:09.238 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:03:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:03:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2833152661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:03:09 np0005532048 nova_compute[253661]: 2025-11-22 09:03:09.656 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:03:09 np0005532048 nova_compute[253661]: 2025-11-22 09:03:09.663 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:03:09 np0005532048 nova_compute[253661]: 2025-11-22 09:03:09.681 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:03:09 np0005532048 nova_compute[253661]: 2025-11-22 09:03:09.683 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:03:09 np0005532048 nova_compute[253661]: 2025-11-22 09:03:09.683 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:03:09 np0005532048 nova_compute[253661]: 2025-11-22 09:03:09.684 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:03:09 np0005532048 nova_compute[253661]: 2025-11-22 09:03:09.684 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 22 04:03:09 np0005532048 nova_compute[253661]: 2025-11-22 09:03:09.695 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 22 04:03:09 np0005532048 nova_compute[253661]: 2025-11-22 09:03:09.695 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:03:10 np0005532048 nova_compute[253661]: 2025-11-22 09:03:10.702 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:03:10 np0005532048 nova_compute[253661]: 2025-11-22 09:03:10.703 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:03:10 np0005532048 nova_compute[253661]: 2025-11-22 09:03:10.703 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:03:10 np0005532048 nova_compute[253661]: 2025-11-22 09:03:10.704 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:03:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 22 04:03:11 np0005532048 nova_compute[253661]: 2025-11-22 09:03:11.222 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:03:11 np0005532048 nova_compute[253661]: 2025-11-22 09:03:11.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:03:11 np0005532048 nova_compute[253661]: 2025-11-22 09:03:11.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:03:11 np0005532048 nova_compute[253661]: 2025-11-22 09:03:11.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:03:11 np0005532048 nova_compute[253661]: 2025-11-22 09:03:11.254 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:03:11 np0005532048 nova_compute[253661]: 2025-11-22 09:03:11.254 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:03:11 np0005532048 nova_compute[253661]: 2025-11-22 09:03:11.255 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:03:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:03:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3614750764' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:03:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:03:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3614750764' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:03:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Nov 22 04:03:14 np0005532048 nova_compute[253661]: 2025-11-22 09:03:14.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:03:14 np0005532048 nova_compute[253661]: 2025-11-22 09:03:14.244 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:03:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s rd, 1.2 KiB/s wr, 11 op/s
Nov 22 04:03:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:03:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:03:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:03:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:03:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:03:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:03:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:03:27.946 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:03:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:03:27.947 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:03:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:03:27.947 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:03:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:32 np0005532048 podman[264330]: 2025-11-22 09:03:32.399225758 +0000 UTC m=+0.081260100 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 04:03:32 np0005532048 podman[264329]: 2025-11-22 09:03:32.421695961 +0000 UTC m=+0.103674242 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:03:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:35 np0005532048 podman[264369]: 2025-11-22 09:03:35.413441633 +0000 UTC m=+0.106394388 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:03:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:39 np0005532048 nova_compute[253661]: 2025-11-22 09:03:39.475 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:03:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:03:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 1800.1 total, 600.0 interval
                                              Cumulative writes: 6139 writes, 25K keys, 6139 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                              Cumulative WAL: 6139 writes, 1122 syncs, 5.47 writes per sync, written: 0.02 GB, 0.01 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 386 writes, 857 keys, 386 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s
                                              Interval WAL: 386 writes, 178 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:03:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:03:52
Nov 22 04:03:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:03:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:03:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'backups', 'images', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'volumes']
Nov 22 04:03:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:03:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:03:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:03:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:03:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:03:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:03:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:03:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:03:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:03:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:03:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:03:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:03:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:03:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:03:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:03:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:03:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:03:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:55 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:03:55 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1801.2 total, 600.0 interval#012Cumulative writes: 7208 writes, 29K keys, 7208 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7208 writes, 1408 syncs, 5.12 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 431 writes, 1187 keys, 431 commit groups, 1.0 writes per commit group, ingest: 0.56 MB, 0.00 MB/s#012Interval WAL: 431 writes, 189 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:03:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:03:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:04:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 6060 writes, 25K keys, 6060 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6060 writes, 1047 syncs, 5.79 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 389 writes, 934 keys, 389 commit groups, 1.0 writes per commit group, ingest: 0.51 MB, 0.00 MB/s#012Interval WAL: 389 writes, 173 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:04:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [devicehealth INFO root] Check health
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:04:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:03 np0005532048 podman[264398]: 2025-11-22 09:04:03.365295351 +0000 UTC m=+0.059336330 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 22 04:04:03 np0005532048 podman[264399]: 2025-11-22 09:04:03.370208763 +0000 UTC m=+0.061155016 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:04:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:04:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:04:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:04:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:04:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:04:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:04:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 4c00fc96-2d58-4330-8354-99c58b966397 does not exist
Nov 22 04:04:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ffa2018d-50ab-4eaf-99db-f933e757bc75 does not exist
Nov 22 04:04:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 3bb63b79-a43d-4e95-9a5f-731bbbb8bf53 does not exist
Nov 22 04:04:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:04:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:04:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:04:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:04:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:04:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:04:06 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:04:06 np0005532048 podman[264593]: 2025-11-22 09:04:06.261593145 +0000 UTC m=+0.102229506 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:04:06 np0005532048 podman[264735]: 2025-11-22 09:04:06.751968076 +0000 UTC m=+0.081128296 container create f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:04:06 np0005532048 podman[264735]: 2025-11-22 09:04:06.696349548 +0000 UTC m=+0.025509768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:04:06 np0005532048 systemd[1]: Started libpod-conmon-f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b.scope.
Nov 22 04:04:06 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:04:06 np0005532048 podman[264735]: 2025-11-22 09:04:06.868175315 +0000 UTC m=+0.197335605 container init f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:04:06 np0005532048 podman[264735]: 2025-11-22 09:04:06.876851448 +0000 UTC m=+0.206011668 container start f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:04:06 np0005532048 quirky_jones[264752]: 167 167
Nov 22 04:04:06 np0005532048 systemd[1]: libpod-f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b.scope: Deactivated successfully.
Nov 22 04:04:06 np0005532048 conmon[264752]: conmon f47c9f54da1a9cdd6c6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b.scope/container/memory.events
Nov 22 04:04:06 np0005532048 podman[264735]: 2025-11-22 09:04:06.89316572 +0000 UTC m=+0.222326010 container attach f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 04:04:06 np0005532048 podman[264735]: 2025-11-22 09:04:06.895046876 +0000 UTC m=+0.224207096 container died f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 04:04:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:06 np0005532048 systemd[1]: var-lib-containers-storage-overlay-79d1139afc8682eeed2ef129b18506066f8ae10f0ffa2792befcb115492eb419-merged.mount: Deactivated successfully.
Nov 22 04:04:06 np0005532048 podman[264735]: 2025-11-22 09:04:06.994553053 +0000 UTC m=+0.323713253 container remove f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:04:07 np0005532048 systemd[1]: libpod-conmon-f47c9f54da1a9cdd6c6ba4a8d038d4a77821bfbe41c690d3bb2ab3897fde2f1b.scope: Deactivated successfully.
Nov 22 04:04:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:04:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:04:07 np0005532048 podman[264776]: 2025-11-22 09:04:07.234568478 +0000 UTC m=+0.067459380 container create 9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shaw, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:04:07 np0005532048 systemd[1]: Started libpod-conmon-9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8.scope.
Nov 22 04:04:07 np0005532048 podman[264776]: 2025-11-22 09:04:07.207589784 +0000 UTC m=+0.040480706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:04:07 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:04:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba4f716f95453c6078cf8a56be6930e16ab91c9d99cfddd3530123a6f909467/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba4f716f95453c6078cf8a56be6930e16ab91c9d99cfddd3530123a6f909467/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba4f716f95453c6078cf8a56be6930e16ab91c9d99cfddd3530123a6f909467/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba4f716f95453c6078cf8a56be6930e16ab91c9d99cfddd3530123a6f909467/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba4f716f95453c6078cf8a56be6930e16ab91c9d99cfddd3530123a6f909467/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:07 np0005532048 podman[264776]: 2025-11-22 09:04:07.388643048 +0000 UTC m=+0.221533980 container init 9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shaw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 04:04:07 np0005532048 podman[264776]: 2025-11-22 09:04:07.397440134 +0000 UTC m=+0.230331036 container start 9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 04:04:07 np0005532048 podman[264776]: 2025-11-22 09:04:07.403736599 +0000 UTC m=+0.236627501 container attach 9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:04:08 np0005532048 nova_compute[253661]: 2025-11-22 09:04:08.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:04:08 np0005532048 nova_compute[253661]: 2025-11-22 09:04:08.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:04:08 np0005532048 nova_compute[253661]: 2025-11-22 09:04:08.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:04:08 np0005532048 nova_compute[253661]: 2025-11-22 09:04:08.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:04:08 np0005532048 nova_compute[253661]: 2025-11-22 09:04:08.253 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:04:08 np0005532048 nova_compute[253661]: 2025-11-22 09:04:08.253 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:04:08 np0005532048 keen_shaw[264793]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:04:08 np0005532048 keen_shaw[264793]: --> relative data size: 1.0
Nov 22 04:04:08 np0005532048 keen_shaw[264793]: --> All data devices are unavailable
Nov 22 04:04:08 np0005532048 systemd[1]: libpod-9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8.scope: Deactivated successfully.
Nov 22 04:04:08 np0005532048 systemd[1]: libpod-9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8.scope: Consumed 1.107s CPU time.
Nov 22 04:04:08 np0005532048 podman[264842]: 2025-11-22 09:04:08.612440111 +0000 UTC m=+0.034554442 container died 9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shaw, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 22 04:04:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:04:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1717246583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:08 np0005532048 nova_compute[253661]: 2025-11-22 09:04:08.697 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:04:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay-cba4f716f95453c6078cf8a56be6930e16ab91c9d99cfddd3530123a6f909467-merged.mount: Deactivated successfully.
Nov 22 04:04:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:08 np0005532048 podman[264842]: 2025-11-22 09:04:08.790276795 +0000 UTC m=+0.212391086 container remove 9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shaw, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:04:08 np0005532048 systemd[1]: libpod-conmon-9ec78c570e0c124f8392bc94cd29c222a9c39c957e0fc29d52d8a94ffe3b1bc8.scope: Deactivated successfully.
Nov 22 04:04:08 np0005532048 nova_compute[253661]: 2025-11-22 09:04:08.887 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:04:08 np0005532048 nova_compute[253661]: 2025-11-22 09:04:08.888 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5134MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:04:08 np0005532048 nova_compute[253661]: 2025-11-22 09:04:08.889 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:04:08 np0005532048 nova_compute[253661]: 2025-11-22 09:04:08.889 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:04:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:08 np0005532048 nova_compute[253661]: 2025-11-22 09:04:08.947 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:04:08 np0005532048 nova_compute[253661]: 2025-11-22 09:04:08.948 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:04:08 np0005532048 nova_compute[253661]: 2025-11-22 09:04:08.975 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:04:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:04:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1087432625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:09 np0005532048 nova_compute[253661]: 2025-11-22 09:04:09.430 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:04:09 np0005532048 nova_compute[253661]: 2025-11-22 09:04:09.436 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:04:09 np0005532048 nova_compute[253661]: 2025-11-22 09:04:09.449 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:04:09 np0005532048 nova_compute[253661]: 2025-11-22 09:04:09.450 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:04:09 np0005532048 nova_compute[253661]: 2025-11-22 09:04:09.450 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:04:09 np0005532048 podman[265020]: 2025-11-22 09:04:09.505641212 +0000 UTC m=+0.053525038 container create e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 04:04:09 np0005532048 systemd[1]: Started libpod-conmon-e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a.scope.
Nov 22 04:04:09 np0005532048 podman[265020]: 2025-11-22 09:04:09.47550316 +0000 UTC m=+0.023387006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:04:09 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:04:09 np0005532048 podman[265020]: 2025-11-22 09:04:09.610682376 +0000 UTC m=+0.158566232 container init e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 04:04:09 np0005532048 podman[265020]: 2025-11-22 09:04:09.619209255 +0000 UTC m=+0.167093101 container start e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 04:04:09 np0005532048 systemd[1]: libpod-e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a.scope: Deactivated successfully.
Nov 22 04:04:09 np0005532048 unruffled_leakey[265037]: 167 167
Nov 22 04:04:09 np0005532048 conmon[265037]: conmon e989b969a4695c7195b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a.scope/container/memory.events
Nov 22 04:04:09 np0005532048 podman[265020]: 2025-11-22 09:04:09.629653622 +0000 UTC m=+0.177537478 container attach e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:04:09 np0005532048 podman[265020]: 2025-11-22 09:04:09.630565994 +0000 UTC m=+0.178449820 container died e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 04:04:09 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9ee82271193e99c5d0512d533e6fc8f98bf13359bafb287e2d63f9f123b93a49-merged.mount: Deactivated successfully.
Nov 22 04:04:09 np0005532048 podman[265020]: 2025-11-22 09:04:09.720733853 +0000 UTC m=+0.268617679 container remove e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:04:09 np0005532048 systemd[1]: libpod-conmon-e989b969a4695c7195b2579cdd240ea9c8aa4d9b3347ecb1e2f426fc937e630a.scope: Deactivated successfully.
Nov 22 04:04:09 np0005532048 podman[265063]: 2025-11-22 09:04:09.902153935 +0000 UTC m=+0.048509945 container create b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_robinson, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 04:04:09 np0005532048 systemd[1]: Started libpod-conmon-b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78.scope.
Nov 22 04:04:09 np0005532048 podman[265063]: 2025-11-22 09:04:09.880277947 +0000 UTC m=+0.026633987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:04:09 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:04:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34fc84a58c5fe9e7dfb77005ab13827de0d4282e2023440545cdd9a45fbd827/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34fc84a58c5fe9e7dfb77005ab13827de0d4282e2023440545cdd9a45fbd827/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34fc84a58c5fe9e7dfb77005ab13827de0d4282e2023440545cdd9a45fbd827/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34fc84a58c5fe9e7dfb77005ab13827de0d4282e2023440545cdd9a45fbd827/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:10 np0005532048 podman[265063]: 2025-11-22 09:04:10.021941792 +0000 UTC m=+0.168297822 container init b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:04:10 np0005532048 podman[265063]: 2025-11-22 09:04:10.029845616 +0000 UTC m=+0.176201626 container start b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_robinson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 04:04:10 np0005532048 podman[265063]: 2025-11-22 09:04:10.041616526 +0000 UTC m=+0.187972536 container attach b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:04:10 np0005532048 nova_compute[253661]: 2025-11-22 09:04:10.451 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]: {
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:    "0": [
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:        {
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "devices": [
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "/dev/loop3"
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            ],
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "lv_name": "ceph_lv0",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "lv_size": "21470642176",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "name": "ceph_lv0",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "tags": {
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.cluster_name": "ceph",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.crush_device_class": "",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.encrypted": "0",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.osd_id": "0",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.type": "block",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.vdo": "0"
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            },
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "type": "block",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "vg_name": "ceph_vg0"
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:        }
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:    ],
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:    "1": [
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:        {
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "devices": [
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "/dev/loop4"
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            ],
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "lv_name": "ceph_lv1",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "lv_size": "21470642176",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "name": "ceph_lv1",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "tags": {
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.cluster_name": "ceph",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.crush_device_class": "",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.encrypted": "0",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.osd_id": "1",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.type": "block",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.vdo": "0"
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            },
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "type": "block",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "vg_name": "ceph_vg1"
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:        }
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:    ],
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:    "2": [
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:        {
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "devices": [
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "/dev/loop5"
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            ],
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "lv_name": "ceph_lv2",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "lv_size": "21470642176",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "name": "ceph_lv2",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "tags": {
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.cluster_name": "ceph",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.crush_device_class": "",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.encrypted": "0",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.osd_id": "2",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.type": "block",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:                "ceph.vdo": "0"
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            },
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "type": "block",
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:            "vg_name": "ceph_vg2"
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:        }
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]:    ]
Nov 22 04:04:10 np0005532048 nostalgic_robinson[265079]: }
Nov 22 04:04:10 np0005532048 systemd[1]: libpod-b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78.scope: Deactivated successfully.
Nov 22 04:04:10 np0005532048 podman[265063]: 2025-11-22 09:04:10.836786266 +0000 UTC m=+0.983142276 container died b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:04:10 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a34fc84a58c5fe9e7dfb77005ab13827de0d4282e2023440545cdd9a45fbd827-merged.mount: Deactivated successfully.
Nov 22 04:04:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:10 np0005532048 podman[265063]: 2025-11-22 09:04:10.951359743 +0000 UTC m=+1.097715773 container remove b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_robinson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:04:10 np0005532048 systemd[1]: libpod-conmon-b38699d226ef5612642ea8496f4c630e643e2fb019bc1ec43e399b0da33c7d78.scope: Deactivated successfully.
Nov 22 04:04:11 np0005532048 nova_compute[253661]: 2025-11-22 09:04:11.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:04:11 np0005532048 nova_compute[253661]: 2025-11-22 09:04:11.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:04:11 np0005532048 nova_compute[253661]: 2025-11-22 09:04:11.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:04:11 np0005532048 podman[265238]: 2025-11-22 09:04:11.678073259 +0000 UTC m=+0.063968404 container create 1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 04:04:11 np0005532048 systemd[1]: Started libpod-conmon-1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af.scope.
Nov 22 04:04:11 np0005532048 podman[265238]: 2025-11-22 09:04:11.638898205 +0000 UTC m=+0.024793350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:04:11 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:04:11 np0005532048 podman[265238]: 2025-11-22 09:04:11.78174969 +0000 UTC m=+0.167644915 container init 1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 04:04:11 np0005532048 podman[265238]: 2025-11-22 09:04:11.788757752 +0000 UTC m=+0.174652887 container start 1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 04:04:11 np0005532048 infallible_thompson[265254]: 167 167
Nov 22 04:04:11 np0005532048 systemd[1]: libpod-1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af.scope: Deactivated successfully.
Nov 22 04:04:11 np0005532048 podman[265238]: 2025-11-22 09:04:11.796117053 +0000 UTC m=+0.182012178 container attach 1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 04:04:11 np0005532048 podman[265238]: 2025-11-22 09:04:11.796924662 +0000 UTC m=+0.182819797 container died 1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 04:04:11 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0ce63d6cfa057926c1c06e58d59b09a43b77e75d783d3eaf7cccf6c043b029d7-merged.mount: Deactivated successfully.
Nov 22 04:04:11 np0005532048 podman[265238]: 2025-11-22 09:04:11.847497316 +0000 UTC m=+0.233392431 container remove 1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:04:11 np0005532048 systemd[1]: libpod-conmon-1993a0d3d324b1b4ea0ba6683a0b79483d3eaea2d8c027cbccfbd9c7606b55af.scope: Deactivated successfully.
Nov 22 04:04:12 np0005532048 podman[265276]: 2025-11-22 09:04:12.047783664 +0000 UTC m=+0.060949281 container create 7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 04:04:12 np0005532048 systemd[1]: Started libpod-conmon-7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871.scope.
Nov 22 04:04:12 np0005532048 podman[265276]: 2025-11-22 09:04:12.021089546 +0000 UTC m=+0.034255243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:04:12 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:04:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469b1380c4a14f89c8adaba68e06b4fac1a95878c39a86e0fe2ccf07f18294a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469b1380c4a14f89c8adaba68e06b4fac1a95878c39a86e0fe2ccf07f18294a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469b1380c4a14f89c8adaba68e06b4fac1a95878c39a86e0fe2ccf07f18294a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469b1380c4a14f89c8adaba68e06b4fac1a95878c39a86e0fe2ccf07f18294a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:12 np0005532048 podman[265276]: 2025-11-22 09:04:12.143641701 +0000 UTC m=+0.156807368 container init 7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 04:04:12 np0005532048 podman[265276]: 2025-11-22 09:04:12.152228352 +0000 UTC m=+0.165393969 container start 7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 04:04:12 np0005532048 podman[265276]: 2025-11-22 09:04:12.156717113 +0000 UTC m=+0.169882750 container attach 7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:04:12 np0005532048 nova_compute[253661]: 2025-11-22 09:04:12.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:04:12 np0005532048 nova_compute[253661]: 2025-11-22 09:04:12.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:04:12 np0005532048 nova_compute[253661]: 2025-11-22 09:04:12.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:04:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:04:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3651213839' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:04:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:04:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3651213839' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:04:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:13 np0005532048 nova_compute[253661]: 2025-11-22 09:04:13.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:04:13 np0005532048 nova_compute[253661]: 2025-11-22 09:04:13.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:04:13 np0005532048 nova_compute[253661]: 2025-11-22 09:04:13.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:04:13 np0005532048 nova_compute[253661]: 2025-11-22 09:04:13.243 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]: {
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:        "osd_id": 1,
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:        "type": "bluestore"
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:    },
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:        "osd_id": 0,
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:        "type": "bluestore"
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:    },
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:        "osd_id": 2,
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:        "type": "bluestore"
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]:    }
Nov 22 04:04:13 np0005532048 trusting_davinci[265293]: }
Nov 22 04:04:13 np0005532048 systemd[1]: libpod-7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871.scope: Deactivated successfully.
Nov 22 04:04:13 np0005532048 systemd[1]: libpod-7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871.scope: Consumed 1.208s CPU time.
Nov 22 04:04:13 np0005532048 podman[265276]: 2025-11-22 09:04:13.356433433 +0000 UTC m=+1.369599070 container died 7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:04:13 np0005532048 systemd[1]: var-lib-containers-storage-overlay-469b1380c4a14f89c8adaba68e06b4fac1a95878c39a86e0fe2ccf07f18294a9-merged.mount: Deactivated successfully.
Nov 22 04:04:13 np0005532048 podman[265276]: 2025-11-22 09:04:13.522875227 +0000 UTC m=+1.536040844 container remove 7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 04:04:13 np0005532048 systemd[1]: libpod-conmon-7d4b888ad19d0f6ef5a077ef887c1ee5b724f71d1bf80c2eb4704ad204854871.scope: Deactivated successfully.
Nov 22 04:04:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:04:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:04:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:04:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:04:13 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 11fe2974-7366-4abd-8a75-b58d9e7f2fb5 does not exist
Nov 22 04:04:13 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d086aa3c-7c7a-422f-959d-26ba794eb759 does not exist
Nov 22 04:04:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:14 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:04:14 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:04:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:16 np0005532048 nova_compute[253661]: 2025-11-22 09:04:16.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:04:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:04:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:04:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:04:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:04:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:04:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:04:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:04:27.947 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:04:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:04:27.948 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:04:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:04:27.948 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:04:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:34 np0005532048 podman[265390]: 2025-11-22 09:04:34.400705928 +0000 UTC m=+0.077312982 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:04:34 np0005532048 podman[265391]: 2025-11-22 09:04:34.412478248 +0000 UTC m=+0.084405738 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 04:04:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:36 np0005532048 podman[265427]: 2025-11-22 09:04:36.414171182 +0000 UTC m=+0.098007208 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 04:04:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:04:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 22 04:04:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 22 04:04:52 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 22 04:04:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:04:52
Nov 22 04:04:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:04:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:04:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'backups', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'vms', 'images', 'volumes']
Nov 22 04:04:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:04:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:04:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:04:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:04:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:04:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:04:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:04:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 8.4 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 819 KiB/s wr, 1 op/s
Nov 22 04:04:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:04:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:04:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:04:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:04:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:04:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:04:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:04:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:04:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:04:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:04:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 22 04:04:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 22 04:04:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 22 04:04:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 21 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 23 op/s
Nov 22 04:04:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 29 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 3.6 MiB/s wr, 24 op/s
Nov 22 04:04:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 22 04:05:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.7 MiB/s wr, 42 op/s
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:05:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 36 op/s
Nov 22 04:05:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Nov 22 04:05:05 np0005532048 podman[265454]: 2025-11-22 09:05:05.384874836 +0000 UTC m=+0.068072866 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:05:05 np0005532048 podman[265453]: 2025-11-22 09:05:05.387210622 +0000 UTC m=+0.071361735 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:05:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.7 MiB/s wr, 16 op/s
Nov 22 04:05:07 np0005532048 podman[265491]: 2025-11-22 09:05:07.459525361 +0000 UTC m=+0.142923423 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 04:05:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.0 MiB/s wr, 15 op/s
Nov 22 04:05:10 np0005532048 nova_compute[253661]: 2025-11-22 09:05:10.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:05:10 np0005532048 nova_compute[253661]: 2025-11-22 09:05:10.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:05:10 np0005532048 nova_compute[253661]: 2025-11-22 09:05:10.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:05:10 np0005532048 nova_compute[253661]: 2025-11-22 09:05:10.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:05:10 np0005532048 nova_compute[253661]: 2025-11-22 09:05:10.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:05:10 np0005532048 nova_compute[253661]: 2025-11-22 09:05:10.254 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:05:10 np0005532048 nova_compute[253661]: 2025-11-22 09:05:10.254 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:05:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3042158996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:05:10 np0005532048 nova_compute[253661]: 2025-11-22 09:05:10.704 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:10 np0005532048 nova_compute[253661]: 2025-11-22 09:05:10.894 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:05:10 np0005532048 nova_compute[253661]: 2025-11-22 09:05:10.895 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5172MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:05:10 np0005532048 nova_compute[253661]: 2025-11-22 09:05:10.896 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:05:10 np0005532048 nova_compute[253661]: 2025-11-22 09:05:10.896 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:05:10 np0005532048 nova_compute[253661]: 2025-11-22 09:05:10.964 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:05:10 np0005532048 nova_compute[253661]: 2025-11-22 09:05:10.965 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:05:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:05:10 np0005532048 nova_compute[253661]: 2025-11-22 09:05:10.979 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:05:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3446078130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:05:11 np0005532048 nova_compute[253661]: 2025-11-22 09:05:11.437 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:11 np0005532048 nova_compute[253661]: 2025-11-22 09:05:11.447 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:05:11 np0005532048 nova_compute[253661]: 2025-11-22 09:05:11.461 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:05:11 np0005532048 nova_compute[253661]: 2025-11-22 09:05:11.464 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:05:11 np0005532048 nova_compute[253661]: 2025-11-22 09:05:11.464 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:05:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:05:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3574899484' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:05:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3574899484' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:05:13 np0005532048 nova_compute[253661]: 2025-11-22 09:05:13.465 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:05:13 np0005532048 nova_compute[253661]: 2025-11-22 09:05:13.466 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:05:13 np0005532048 nova_compute[253661]: 2025-11-22 09:05:13.466 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:05:13 np0005532048 nova_compute[253661]: 2025-11-22 09:05:13.467 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:05:13 np0005532048 nova_compute[253661]: 2025-11-22 09:05:13.467 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:05:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:14 np0005532048 nova_compute[253661]: 2025-11-22 09:05:14.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:05:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:05:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:05:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:05:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:05:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:05:15 np0005532048 nova_compute[253661]: 2025-11-22 09:05:15.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:05:15 np0005532048 nova_compute[253661]: 2025-11-22 09:05:15.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:05:15 np0005532048 nova_compute[253661]: 2025-11-22 09:05:15.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:05:15 np0005532048 nova_compute[253661]: 2025-11-22 09:05:15.244 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:05:15 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:05:15 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:05:16 np0005532048 podman[265954]: 2025-11-22 09:05:15.977943659 +0000 UTC m=+0.032049845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:05:16 np0005532048 nova_compute[253661]: 2025-11-22 09:05:16.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:05:16 np0005532048 podman[265954]: 2025-11-22 09:05:16.278586172 +0000 UTC m=+0.332692368 container create 1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 04:05:16 np0005532048 systemd[1]: Started libpod-conmon-1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34.scope.
Nov 22 04:05:16 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:05:16 np0005532048 podman[265954]: 2025-11-22 09:05:16.740410968 +0000 UTC m=+0.794517204 container init 1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 04:05:16 np0005532048 podman[265954]: 2025-11-22 09:05:16.749544128 +0000 UTC m=+0.803650294 container start 1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 04:05:16 np0005532048 adoring_faraday[265970]: 167 167
Nov 22 04:05:16 np0005532048 systemd[1]: libpod-1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34.scope: Deactivated successfully.
Nov 22 04:05:16 np0005532048 podman[265954]: 2025-11-22 09:05:16.840087676 +0000 UTC m=+0.894193862 container attach 1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 04:05:16 np0005532048 podman[265954]: 2025-11-22 09:05:16.843777625 +0000 UTC m=+0.897883811 container died 1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:05:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:05:17 np0005532048 nova_compute[253661]: 2025-11-22 09:05:17.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:05:17 np0005532048 systemd[1]: var-lib-containers-storage-overlay-fe2b1981e40899537cb820ff39d72900fed9ccf200b68ec7155270b6a626cf7e-merged.mount: Deactivated successfully.
Nov 22 04:05:18 np0005532048 podman[265954]: 2025-11-22 09:05:18.533628325 +0000 UTC m=+2.587734521 container remove 1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 04:05:18 np0005532048 systemd[1]: libpod-conmon-1e050e8a79c9ce9450087f6648b1858ba49647dde933c6bb322f199f4a529c34.scope: Deactivated successfully.
Nov 22 04:05:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:18 np0005532048 podman[265994]: 2025-11-22 09:05:18.799667482 +0000 UTC m=+0.073746083 container create 03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 04:05:18 np0005532048 podman[265994]: 2025-11-22 09:05:18.753297662 +0000 UTC m=+0.027376283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:05:18 np0005532048 systemd[1]: Started libpod-conmon-03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a.scope.
Nov 22 04:05:18 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:05:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e9e61b2ac3e86ceaf1b93e9686c8240b4e1d977b540d170356bb54fa010abc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e9e61b2ac3e86ceaf1b93e9686c8240b4e1d977b540d170356bb54fa010abc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e9e61b2ac3e86ceaf1b93e9686c8240b4e1d977b540d170356bb54fa010abc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e9e61b2ac3e86ceaf1b93e9686c8240b4e1d977b540d170356bb54fa010abc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:18 np0005532048 podman[265994]: 2025-11-22 09:05:18.945392762 +0000 UTC m=+0.219471343 container init 03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:05:18 np0005532048 podman[265994]: 2025-11-22 09:05:18.953467406 +0000 UTC m=+0.227545987 container start 03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 04:05:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:05:18 np0005532048 podman[265994]: 2025-11-22 09:05:18.981191896 +0000 UTC m=+0.255270477 container attach 03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]: [
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:    {
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:        "available": false,
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:        "ceph_device": false,
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:        "lsm_data": {},
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:        "lvs": [],
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:        "path": "/dev/sr0",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:        "rejected_reasons": [
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "Insufficient space (<5GB)",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "Has a FileSystem"
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:        ],
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:        "sys_api": {
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "actuators": null,
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "device_nodes": "sr0",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "devname": "sr0",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "human_readable_size": "482.00 KB",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "id_bus": "ata",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "model": "QEMU DVD-ROM",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "nr_requests": "2",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "parent": "/dev/sr0",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "partitions": {},
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "path": "/dev/sr0",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "removable": "1",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "rev": "2.5+",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "ro": "0",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "rotational": "1",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "sas_address": "",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "sas_device_handle": "",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "scheduler_mode": "mq-deadline",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "sectors": 0,
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "sectorsize": "2048",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "size": 493568.0,
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "support_discard": "2048",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "type": "disk",
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:            "vendor": "QEMU"
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:        }
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]:    }
Nov 22 04:05:20 np0005532048 intelligent_diffie[266011]: ]
Nov 22 04:05:20 np0005532048 systemd[1]: libpod-03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a.scope: Deactivated successfully.
Nov 22 04:05:20 np0005532048 systemd[1]: libpod-03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a.scope: Consumed 1.682s CPU time.
Nov 22 04:05:20 np0005532048 podman[265994]: 2025-11-22 09:05:20.601102126 +0000 UTC m=+1.875180747 container died 03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 04:05:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:05:21 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4e9e61b2ac3e86ceaf1b93e9686c8240b4e1d977b540d170356bb54fa010abc9-merged.mount: Deactivated successfully.
Nov 22 04:05:21 np0005532048 nova_compute[253661]: 2025-11-22 09:05:21.339 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquiring lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:05:21 np0005532048 nova_compute[253661]: 2025-11-22 09:05:21.340 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:05:21 np0005532048 nova_compute[253661]: 2025-11-22 09:05:21.341 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "b9672702-d5e5-407f-bc86-ee9e64f90a01" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:05:21 np0005532048 nova_compute[253661]: 2025-11-22 09:05:21.341 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "b9672702-d5e5-407f-bc86-ee9e64f90a01" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:05:21 np0005532048 nova_compute[253661]: 2025-11-22 09:05:21.364 253665 DEBUG nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:05:21 np0005532048 nova_compute[253661]: 2025-11-22 09:05:21.409 253665 DEBUG nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:05:21 np0005532048 nova_compute[253661]: 2025-11-22 09:05:21.493 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:05:21 np0005532048 nova_compute[253661]: 2025-11-22 09:05:21.494 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:05:21 np0005532048 nova_compute[253661]: 2025-11-22 09:05:21.502 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:05:21 np0005532048 nova_compute[253661]: 2025-11-22 09:05:21.502 253665 INFO nova.compute.claims [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:05:21 np0005532048 nova_compute[253661]: 2025-11-22 09:05:21.508 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:05:21 np0005532048 podman[265994]: 2025-11-22 09:05:21.565923312 +0000 UTC m=+2.840001893 container remove 03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_diffie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 04:05:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:05:21 np0005532048 nova_compute[253661]: 2025-11-22 09:05:21.616 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:21 np0005532048 systemd[1]: libpod-conmon-03acfebab5f5312e1ec033301a2465681c6e33410f9ae31750fb889c1697682a.scope: Deactivated successfully.
Nov 22 04:05:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:05:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3475159984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.082 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.094 253665 DEBUG nova.compute.provider_tree [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.118 253665 DEBUG nova.scheduler.client.report [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.141 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.141 253665 DEBUG nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.143 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.151 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.151 253665 INFO nova.compute.claims [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.273 253665 DEBUG nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:05:22 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 3bf7f732-2ecb-41d8-baef-74fb5a007a7f does not exist
Nov 22 04:05:22 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 433fa37c-fcd9-4915-adfb-b2e7b1cde7d8 does not exist
Nov 22 04:05:22 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ca6f116e-adea-4ba0-8828-50b83944e2a6 does not exist
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.302 253665 INFO nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.325 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.348 253665 DEBUG nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:05:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:05:22.384 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:05:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:05:22.386 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.474 253665 DEBUG nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.475 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.475 253665 INFO nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Creating image(s)
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.512 253665 DEBUG nova.storage.rbd_utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] rbd image 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.546 253665 DEBUG nova.storage.rbd_utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] rbd image 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.575 253665 DEBUG nova.storage.rbd_utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] rbd image 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.579 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.580 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:05:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:05:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:05:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:05:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:05:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592572438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.767 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.773 253665 DEBUG nova.compute.provider_tree [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.794 253665 DEBUG nova.scheduler.client.report [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.817 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.818 253665 DEBUG nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:05:22 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.866 253665 DEBUG nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.867 253665 DEBUG nova.network.neutron [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.891 253665 INFO nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:05:22 np0005532048 nova_compute[253661]: 2025-11-22 09:05:22.910 253665 DEBUG nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:05:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Nov 22 04:05:22 np0005532048 podman[268430]: 2025-11-22 09:05:22.976069385 +0000 UTC m=+0.072509162 container create f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 04:05:23 np0005532048 nova_compute[253661]: 2025-11-22 09:05:23.011 253665 DEBUG nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:05:23 np0005532048 nova_compute[253661]: 2025-11-22 09:05:23.013 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:05:23 np0005532048 nova_compute[253661]: 2025-11-22 09:05:23.013 253665 INFO nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Creating image(s)
Nov 22 04:05:23 np0005532048 podman[268430]: 2025-11-22 09:05:22.928667461 +0000 UTC m=+0.025107258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:05:23 np0005532048 nova_compute[253661]: 2025-11-22 09:05:23.031 253665 DEBUG nova.storage.rbd_utils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image b9672702-d5e5-407f-bc86-ee9e64f90a01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:05:23 np0005532048 nova_compute[253661]: 2025-11-22 09:05:23.055 253665 DEBUG nova.storage.rbd_utils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image b9672702-d5e5-407f-bc86-ee9e64f90a01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:05:23 np0005532048 nova_compute[253661]: 2025-11-22 09:05:23.086 253665 DEBUG nova.storage.rbd_utils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image b9672702-d5e5-407f-bc86-ee9e64f90a01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:05:23 np0005532048 nova_compute[253661]: 2025-11-22 09:05:23.092 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:23 np0005532048 systemd[1]: Started libpod-conmon-f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8.scope.
Nov 22 04:05:23 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:05:23 np0005532048 podman[268430]: 2025-11-22 09:05:23.283509432 +0000 UTC m=+0.379949229 container init f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 04:05:23 np0005532048 podman[268430]: 2025-11-22 09:05:23.291629138 +0000 UTC m=+0.388068915 container start f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:05:23 np0005532048 clever_raman[268500]: 167 167
Nov 22 04:05:23 np0005532048 systemd[1]: libpod-f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8.scope: Deactivated successfully.
Nov 22 04:05:23 np0005532048 podman[268430]: 2025-11-22 09:05:23.437559113 +0000 UTC m=+0.533998910 container attach f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:05:23 np0005532048 podman[268430]: 2025-11-22 09:05:23.438294981 +0000 UTC m=+0.534734778 container died f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:05:23 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ee34a853407c3588020d59faa8a1970aea21c0ff2f04c8f9d4455d84b587e927-merged.mount: Deactivated successfully.
Nov 22 04:05:23 np0005532048 podman[268430]: 2025-11-22 09:05:23.634572722 +0000 UTC m=+0.731012499 container remove f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_raman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:05:23 np0005532048 nova_compute[253661]: 2025-11-22 09:05:23.650 253665 DEBUG nova.virt.libvirt.imagebackend [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Image locations are: [{'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/878156d4-57f6-4a8b-8f4c-cbde182bb832/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/878156d4-57f6-4a8b-8f4c-cbde182bb832/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 22 04:05:23 np0005532048 systemd[1]: libpod-conmon-f6c464f7284108db70e72153d6a7d2cbded83b0cd69adb93a2591ee076123dd8.scope: Deactivated successfully.
Nov 22 04:05:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:23 np0005532048 nova_compute[253661]: 2025-11-22 09:05:23.814 253665 DEBUG nova.network.neutron [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 04:05:23 np0005532048 nova_compute[253661]: 2025-11-22 09:05:23.815 253665 DEBUG nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:05:23 np0005532048 podman[268524]: 2025-11-22 09:05:23.824594913 +0000 UTC m=+0.053170676 container create 467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wozniak, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:05:23 np0005532048 systemd[1]: Started libpod-conmon-467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae.scope.
Nov 22 04:05:23 np0005532048 podman[268524]: 2025-11-22 09:05:23.794860014 +0000 UTC m=+0.023435827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:05:23 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:05:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9516a9d252cd8c4f6a6b0adb0e66696c03171fa7cb21645fd342d428e9c7428f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9516a9d252cd8c4f6a6b0adb0e66696c03171fa7cb21645fd342d428e9c7428f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9516a9d252cd8c4f6a6b0adb0e66696c03171fa7cb21645fd342d428e9c7428f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9516a9d252cd8c4f6a6b0adb0e66696c03171fa7cb21645fd342d428e9c7428f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9516a9d252cd8c4f6a6b0adb0e66696c03171fa7cb21645fd342d428e9c7428f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:23 np0005532048 podman[268524]: 2025-11-22 09:05:23.924324241 +0000 UTC m=+0.152900024 container init 467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:05:23 np0005532048 podman[268524]: 2025-11-22 09:05:23.934599129 +0000 UTC m=+0.163174892 container start 467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 04:05:23 np0005532048 podman[268524]: 2025-11-22 09:05:23.940340038 +0000 UTC m=+0.168915801 container attach 467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wozniak, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:05:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Nov 22 04:05:25 np0005532048 charming_wozniak[268540]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:05:25 np0005532048 charming_wozniak[268540]: --> relative data size: 1.0
Nov 22 04:05:25 np0005532048 charming_wozniak[268540]: --> All data devices are unavailable
Nov 22 04:05:25 np0005532048 systemd[1]: libpod-467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae.scope: Deactivated successfully.
Nov 22 04:05:25 np0005532048 systemd[1]: libpod-467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae.scope: Consumed 1.126s CPU time.
Nov 22 04:05:25 np0005532048 podman[268569]: 2025-11-22 09:05:25.265124279 +0000 UTC m=+0.025952217 container died 467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wozniak, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 22 04:05:25 np0005532048 nova_compute[253661]: 2025-11-22 09:05:25.302 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:25 np0005532048 nova_compute[253661]: 2025-11-22 09:05:25.366 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4.part --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:25 np0005532048 nova_compute[253661]: 2025-11-22 09:05:25.367 253665 DEBUG nova.virt.images [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] 878156d4-57f6-4a8b-8f4c-cbde182bb832 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 22 04:05:25 np0005532048 nova_compute[253661]: 2025-11-22 09:05:25.370 253665 DEBUG nova.privsep.utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 22 04:05:25 np0005532048 nova_compute[253661]: 2025-11-22 09:05:25.371 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4.part /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:25 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9516a9d252cd8c4f6a6b0adb0e66696c03171fa7cb21645fd342d428e9c7428f-merged.mount: Deactivated successfully.
Nov 22 04:05:25 np0005532048 podman[268569]: 2025-11-22 09:05:25.670595074 +0000 UTC m=+0.431422982 container remove 467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wozniak, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:05:25 np0005532048 systemd[1]: libpod-conmon-467fb53a5f19d7848cfc59bc8e7a5831bc66163c79950856a2e85e72c8606cae.scope: Deactivated successfully.
Nov 22 04:05:25 np0005532048 nova_compute[253661]: 2025-11-22 09:05:25.976 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4.part /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4.converted" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:25 np0005532048 nova_compute[253661]: 2025-11-22 09:05:25.983 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:26 np0005532048 nova_compute[253661]: 2025-11-22 09:05:26.108 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4.converted --force-share --output=json" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:26 np0005532048 nova_compute[253661]: 2025-11-22 09:05:26.110 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:05:26 np0005532048 nova_compute[253661]: 2025-11-22 09:05:26.139 253665 DEBUG nova.storage.rbd_utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] rbd image 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:05:26 np0005532048 nova_compute[253661]: 2025-11-22 09:05:26.146 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:26 np0005532048 nova_compute[253661]: 2025-11-22 09:05:26.172 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 3.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:05:26 np0005532048 nova_compute[253661]: 2025-11-22 09:05:26.174 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:05:26 np0005532048 nova_compute[253661]: 2025-11-22 09:05:26.212 253665 DEBUG nova.storage.rbd_utils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image b9672702-d5e5-407f-bc86-ee9e64f90a01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:05:26 np0005532048 nova_compute[253661]: 2025-11-22 09:05:26.217 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b9672702-d5e5-407f-bc86-ee9e64f90a01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 22 04:05:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 22 04:05:26 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 22 04:05:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:05:26.388 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:05:26 np0005532048 podman[268807]: 2025-11-22 09:05:26.456540219 +0000 UTC m=+0.052453328 container create 412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 04:05:26 np0005532048 systemd[1]: Started libpod-conmon-412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1.scope.
Nov 22 04:05:26 np0005532048 podman[268807]: 2025-11-22 09:05:26.432193941 +0000 UTC m=+0.028107030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:05:26 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:05:26 np0005532048 podman[268807]: 2025-11-22 09:05:26.543290684 +0000 UTC m=+0.139203763 container init 412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hugle, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:05:26 np0005532048 podman[268807]: 2025-11-22 09:05:26.550761745 +0000 UTC m=+0.146674814 container start 412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 04:05:26 np0005532048 systemd[1]: libpod-412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1.scope: Deactivated successfully.
Nov 22 04:05:26 np0005532048 podman[268807]: 2025-11-22 09:05:26.557417895 +0000 UTC m=+0.153330984 container attach 412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hugle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:05:26 np0005532048 adoring_hugle[268823]: 167 167
Nov 22 04:05:26 np0005532048 conmon[268823]: conmon 412db7bb520d6aa228be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1.scope/container/memory.events
Nov 22 04:05:26 np0005532048 podman[268807]: 2025-11-22 09:05:26.559061916 +0000 UTC m=+0.154974985 container died 412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:05:26 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c5870e6cd5a0441a557a1fe368b2848b9f34ee78043574fa8661c7f35a6172de-merged.mount: Deactivated successfully.
Nov 22 04:05:26 np0005532048 podman[268807]: 2025-11-22 09:05:26.619300801 +0000 UTC m=+0.215213870 container remove 412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hugle, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:05:26 np0005532048 systemd[1]: libpod-conmon-412db7bb520d6aa228bebfbb444d2c57c5a6249236de1b2f26b535df744d81c1.scope: Deactivated successfully.
Nov 22 04:05:26 np0005532048 podman[268847]: 2025-11-22 09:05:26.790536197 +0000 UTC m=+0.048542224 container create 92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 22 04:05:26 np0005532048 systemd[1]: Started libpod-conmon-92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb.scope.
Nov 22 04:05:26 np0005532048 podman[268847]: 2025-11-22 09:05:26.76995477 +0000 UTC m=+0.027960587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:05:26 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:05:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378752d4134a7dcec120edec373383ba3662832b42ce0234a9505919002da7b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378752d4134a7dcec120edec373383ba3662832b42ce0234a9505919002da7b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378752d4134a7dcec120edec373383ba3662832b42ce0234a9505919002da7b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378752d4134a7dcec120edec373383ba3662832b42ce0234a9505919002da7b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:26 np0005532048 podman[268847]: 2025-11-22 09:05:26.882785086 +0000 UTC m=+0.140790883 container init 92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 04:05:26 np0005532048 podman[268847]: 2025-11-22 09:05:26.893017122 +0000 UTC m=+0.151022919 container start 92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 04:05:26 np0005532048 podman[268847]: 2025-11-22 09:05:26.895967904 +0000 UTC m=+0.153973701 container attach 92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shirley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:05:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 838 KiB/s rd, 0 B/s wr, 31 op/s
Nov 22 04:05:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 22 04:05:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 22 04:05:27 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 22 04:05:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:05:27.949 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:05:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:05:27.950 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:05:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:05:27.950 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]: {
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:    "0": [
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:        {
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "devices": [
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "/dev/loop3"
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            ],
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "lv_name": "ceph_lv0",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "lv_size": "21470642176",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "name": "ceph_lv0",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "tags": {
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.cluster_name": "ceph",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.crush_device_class": "",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.encrypted": "0",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.osd_id": "0",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.type": "block",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.vdo": "0"
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            },
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "type": "block",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "vg_name": "ceph_vg0"
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:        }
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:    ],
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:    "1": [
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:        {
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "devices": [
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "/dev/loop4"
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            ],
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "lv_name": "ceph_lv1",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "lv_size": "21470642176",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "name": "ceph_lv1",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "tags": {
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.cluster_name": "ceph",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.crush_device_class": "",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.encrypted": "0",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.osd_id": "1",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.type": "block",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.vdo": "0"
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            },
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "type": "block",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "vg_name": "ceph_vg1"
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:        }
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:    ],
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:    "2": [
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:        {
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "devices": [
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "/dev/loop5"
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            ],
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "lv_name": "ceph_lv2",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "lv_size": "21470642176",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "name": "ceph_lv2",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "tags": {
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.cluster_name": "ceph",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.crush_device_class": "",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.encrypted": "0",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.osd_id": "2",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.type": "block",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:                "ceph.vdo": "0"
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            },
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "type": "block",
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:            "vg_name": "ceph_vg2"
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:        }
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]:    ]
Nov 22 04:05:28 np0005532048 adoring_shirley[268863]: }
Nov 22 04:05:28 np0005532048 systemd[1]: libpod-92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb.scope: Deactivated successfully.
Nov 22 04:05:28 np0005532048 podman[268847]: 2025-11-22 09:05:28.152934837 +0000 UTC m=+1.410940634 container died 92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:05:28 np0005532048 systemd[1]: var-lib-containers-storage-overlay-378752d4134a7dcec120edec373383ba3662832b42ce0234a9505919002da7b6-merged.mount: Deactivated successfully.
Nov 22 04:05:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 53 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 222 KiB/s wr, 130 op/s
Nov 22 04:05:29 np0005532048 podman[268847]: 2025-11-22 09:05:29.608547939 +0000 UTC m=+2.866553736 container remove 92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shirley, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 04:05:29 np0005532048 systemd[1]: libpod-conmon-92685ab12beefd541ce8e20f88c6979642cabfac4be6ffcebb185079866f77bb.scope: Deactivated successfully.
Nov 22 04:05:30 np0005532048 podman[269036]: 2025-11-22 09:05:30.338644915 +0000 UTC m=+0.028922710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:05:30 np0005532048 nova_compute[253661]: 2025-11-22 09:05:30.461 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:30 np0005532048 podman[269036]: 2025-11-22 09:05:30.769186524 +0000 UTC m=+0.459464279 container create 2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:05:30 np0005532048 nova_compute[253661]: 2025-11-22 09:05:30.855 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b9672702-d5e5-407f-bc86-ee9e64f90a01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:30 np0005532048 nova_compute[253661]: 2025-11-22 09:05:30.907 253665 DEBUG nova.storage.rbd_utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] resizing rbd image 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:05:30 np0005532048 systemd[1]: Started libpod-conmon-2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9.scope.
Nov 22 04:05:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 53 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 222 KiB/s wr, 123 op/s
Nov 22 04:05:31 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:05:31 np0005532048 nova_compute[253661]: 2025-11-22 09:05:31.093 253665 DEBUG nova.storage.rbd_utils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] resizing rbd image b9672702-d5e5-407f-bc86-ee9e64f90a01_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:05:31 np0005532048 podman[269036]: 2025-11-22 09:05:31.132926071 +0000 UTC m=+0.823203806 container init 2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_volhard, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:05:31 np0005532048 podman[269036]: 2025-11-22 09:05:31.141489918 +0000 UTC m=+0.831767643 container start 2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_volhard, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:05:31 np0005532048 modest_volhard[269125]: 167 167
Nov 22 04:05:31 np0005532048 systemd[1]: libpod-2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9.scope: Deactivated successfully.
Nov 22 04:05:31 np0005532048 podman[269036]: 2025-11-22 09:05:31.21192848 +0000 UTC m=+0.902206225 container attach 2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_volhard, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:05:31 np0005532048 podman[269036]: 2025-11-22 09:05:31.212947844 +0000 UTC m=+0.903225569 container died 2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:05:31 np0005532048 systemd[1]: var-lib-containers-storage-overlay-505a6935600b7158cbd31234e95e9e9a8dddd0358ce42ba5c5e663bea0b389b2-merged.mount: Deactivated successfully.
Nov 22 04:05:32 np0005532048 podman[269036]: 2025-11-22 09:05:32.034695445 +0000 UTC m=+1.724973190 container remove 2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_volhard, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:05:32 np0005532048 systemd[1]: libpod-conmon-2a68238aac045bbc4faa7113534961838a529542c8e666f8de7eb3577fe3e3d9.scope: Deactivated successfully.
Nov 22 04:05:32 np0005532048 podman[269184]: 2025-11-22 09:05:32.225534926 +0000 UTC m=+0.031016232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:05:32 np0005532048 podman[269184]: 2025-11-22 09:05:32.349903469 +0000 UTC m=+0.155384725 container create 27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.377 253665 DEBUG nova.objects.instance [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lazy-loading 'migration_context' on Instance uuid 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.396 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.396 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Ensure instance console log exists: /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.397 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.397 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.397 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.399 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.405 253665 WARNING nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.411 253665 DEBUG nova.virt.libvirt.host [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.412 253665 DEBUG nova.virt.libvirt.host [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.414 253665 DEBUG nova.virt.libvirt.host [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.415 253665 DEBUG nova.virt.libvirt.host [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.415 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.415 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.416 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.416 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.416 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.417 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.417 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.417 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.417 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.418 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.418 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.418 253665 DEBUG nova.virt.hardware [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.421 253665 DEBUG nova.privsep.utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.422 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:32 np0005532048 systemd[1]: Started libpod-conmon-27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856.scope.
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.504 253665 DEBUG nova.objects.instance [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lazy-loading 'migration_context' on Instance uuid b9672702-d5e5-407f-bc86-ee9e64f90a01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.518 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.519 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Ensure instance console log exists: /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.519 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.520 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.520 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.521 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:05:32 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.527 253665 WARNING nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:05:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e533c83f20735235a17c434ec6dc78df160f2261ced9f5e19151c069018a61c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.531 253665 DEBUG nova.virt.libvirt.host [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.532 253665 DEBUG nova.virt.libvirt.host [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:05:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e533c83f20735235a17c434ec6dc78df160f2261ced9f5e19151c069018a61c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e533c83f20735235a17c434ec6dc78df160f2261ced9f5e19151c069018a61c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e533c83f20735235a17c434ec6dc78df160f2261ced9f5e19151c069018a61c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.537 253665 DEBUG nova.virt.libvirt.host [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.537 253665 DEBUG nova.virt.libvirt.host [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.538 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.538 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.539 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.539 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.540 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.540 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.540 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.540 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.541 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.541 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.541 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.542 253665 DEBUG nova.virt.hardware [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.545 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:32 np0005532048 podman[269184]: 2025-11-22 09:05:32.589449286 +0000 UTC m=+0.394930582 container init 27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 04:05:32 np0005532048 podman[269184]: 2025-11-22 09:05:32.59917459 +0000 UTC m=+0.404655846 container start 27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mestorf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:05:32 np0005532048 podman[269184]: 2025-11-22 09:05:32.715717896 +0000 UTC m=+0.521199162 container attach 27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mestorf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:05:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:05:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3869589781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.863 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.894 253665 DEBUG nova.storage.rbd_utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] rbd image 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.901 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:05:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3170161559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:05:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 91 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 135 op/s
Nov 22 04:05:32 np0005532048 nova_compute[253661]: 2025-11-22 09:05:32.982 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:33 np0005532048 nova_compute[253661]: 2025-11-22 09:05:33.009 253665 DEBUG nova.storage.rbd_utils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image b9672702-d5e5-407f-bc86-ee9e64f90a01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:05:33 np0005532048 nova_compute[253661]: 2025-11-22 09:05:33.013 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:05:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3759623925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:05:33 np0005532048 nova_compute[253661]: 2025-11-22 09:05:33.376 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:33 np0005532048 nova_compute[253661]: 2025-11-22 09:05:33.380 253665 DEBUG nova.objects.instance [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:05:33 np0005532048 nova_compute[253661]: 2025-11-22 09:05:33.398 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <uuid>6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb</uuid>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <name>instance-00000002</name>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <nova:name>tempest-AutoAllocateNetworkTest-server-2058029404</nova:name>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:05:32</nova:creationTime>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <nova:user uuid="3a894a86983342d3bdece6fcf23fe1a9">tempest-AutoAllocateNetworkTest-135593428-project-member</nova:user>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <nova:project uuid="4c55d55859ed4fb9adf33d6c40da9051">tempest-AutoAllocateNetworkTest-135593428</nova:project>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <entry name="serial">6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb</entry>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <entry name="uuid">6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb</entry>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk.config">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb/console.log" append="off"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:05:33 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:05:33 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:05:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:05:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/970409928' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:05:33 np0005532048 nova_compute[253661]: 2025-11-22 09:05:33.512 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:33 np0005532048 nova_compute[253661]: 2025-11-22 09:05:33.514 253665 DEBUG nova.objects.instance [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lazy-loading 'pci_devices' on Instance uuid b9672702-d5e5-407f-bc86-ee9e64f90a01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:05:33 np0005532048 nova_compute[253661]: 2025-11-22 09:05:33.538 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <uuid>b9672702-d5e5-407f-bc86-ee9e64f90a01</uuid>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <name>instance-00000001</name>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-1912718424</nova:name>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:05:32</nova:creationTime>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <nova:user uuid="786abc4100494e9a9e9977b0d6534f9d">tempest-DeleteServersAdminTestJSON-1856447615-project-member</nova:user>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <nova:project uuid="ae421771a97c45f7a0288a4b8cfd48c5">tempest-DeleteServersAdminTestJSON-1856447615</nova:project>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <entry name="serial">b9672702-d5e5-407f-bc86-ee9e64f90a01</entry>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <entry name="uuid">b9672702-d5e5-407f-bc86-ee9e64f90a01</entry>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/b9672702-d5e5-407f-bc86-ee9e64f90a01_disk">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/b9672702-d5e5-407f-bc86-ee9e64f90a01_disk.config">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01/console.log" append="off"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:05:33 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:05:33 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:05:33 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:05:33 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:05:33 np0005532048 nova_compute[253661]: 2025-11-22 09:05:33.557 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:05:33 np0005532048 nova_compute[253661]: 2025-11-22 09:05:33.559 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:05:33 np0005532048 nova_compute[253661]: 2025-11-22 09:05:33.560 253665 INFO nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Using config drive
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]: {
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:        "osd_id": 1,
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:        "type": "bluestore"
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:    },
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:        "osd_id": 0,
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:        "type": "bluestore"
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:    },
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:        "osd_id": 2,
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:        "type": "bluestore"
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]:    }
Nov 22 04:05:33 np0005532048 wizardly_mestorf[269237]: }
Nov 22 04:05:33 np0005532048 systemd[1]: libpod-27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856.scope: Deactivated successfully.
Nov 22 04:05:33 np0005532048 systemd[1]: libpod-27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856.scope: Consumed 1.053s CPU time.
Nov 22 04:05:33 np0005532048 nova_compute[253661]: 2025-11-22 09:05:33.669 253665 DEBUG nova.storage.rbd_utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] rbd image 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:05:33 np0005532048 podman[269410]: 2025-11-22 09:05:33.715887506 +0000 UTC m=+0.033153842 container died 27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:05:33 np0005532048 nova_compute[253661]: 2025-11-22 09:05:33.720 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:05:33 np0005532048 nova_compute[253661]: 2025-11-22 09:05:33.720 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:05:33 np0005532048 nova_compute[253661]: 2025-11-22 09:05:33.721 253665 INFO nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Using config drive
Nov 22 04:05:33 np0005532048 nova_compute[253661]: 2025-11-22 09:05:33.742 253665 DEBUG nova.storage.rbd_utils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image b9672702-d5e5-407f-bc86-ee9e64f90a01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:05:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 22 04:05:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 22 04:05:33 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 22 04:05:33 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e533c83f20735235a17c434ec6dc78df160f2261ced9f5e19151c069018a61c8-merged.mount: Deactivated successfully.
Nov 22 04:05:34 np0005532048 podman[269410]: 2025-11-22 09:05:34.075900322 +0000 UTC m=+0.393166628 container remove 27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mestorf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:05:34 np0005532048 systemd[1]: libpod-conmon-27ecb7fb2cb7e43faa33f9fda9cbb253afa573a08af4516ecbcb6aefc798f856.scope: Deactivated successfully.
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.309442) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802334309526, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2122, "num_deletes": 253, "total_data_size": 3466718, "memory_usage": 3517312, "flush_reason": "Manual Compaction"}
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:05:34 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev aaf08777-6fa5-42b0-bf74-bcc0d509eab6 does not exist
Nov 22 04:05:34 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 0aadcae5-8185-454d-9ee9-a315203b2fe0 does not exist
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802334363795, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3386323, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21143, "largest_seqno": 23264, "table_properties": {"data_size": 3376609, "index_size": 6208, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19519, "raw_average_key_size": 20, "raw_value_size": 3357234, "raw_average_value_size": 3486, "num_data_blocks": 279, "num_entries": 963, "num_filter_entries": 963, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763802122, "oldest_key_time": 1763802122, "file_creation_time": 1763802334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 54414 microseconds, and 10622 cpu microseconds.
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.363865) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3386323 bytes OK
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.363899) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.475507) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.475559) EVENT_LOG_v1 {"time_micros": 1763802334475547, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.475589) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3457803, prev total WAL file size 3498451, number of live WAL files 2.
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.478250) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3306KB)], [50(7644KB)]
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802334478292, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11214160, "oldest_snapshot_seqno": -1}
Nov 22 04:05:34 np0005532048 nova_compute[253661]: 2025-11-22 09:05:34.601 253665 INFO nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Creating config drive at /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb/disk.config
Nov 22 04:05:34 np0005532048 nova_compute[253661]: 2025-11-22 09:05:34.606 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqyrvsnfg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:34 np0005532048 nova_compute[253661]: 2025-11-22 09:05:34.665 253665 INFO nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Creating config drive at /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01/disk.config
Nov 22 04:05:34 np0005532048 nova_compute[253661]: 2025-11-22 09:05:34.669 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp68dq5ztu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:34 np0005532048 nova_compute[253661]: 2025-11-22 09:05:34.963 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp68dq5ztu" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4864 keys, 9460474 bytes, temperature: kUnknown
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802334973167, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9460474, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9424818, "index_size": 22426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 119146, "raw_average_key_size": 24, "raw_value_size": 9333726, "raw_average_value_size": 1918, "num_data_blocks": 941, "num_entries": 4864, "num_filter_entries": 4864, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763802334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:05:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.3 MiB/s wr, 137 op/s
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:05:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:05:35 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.973638) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9460474 bytes
Nov 22 04:05:35 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:35.023473) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 22.7 rd, 19.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.5 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5384, records dropped: 520 output_compression: NoCompression
Nov 22 04:05:35 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:35.023508) EVENT_LOG_v1 {"time_micros": 1763802335023492, "job": 26, "event": "compaction_finished", "compaction_time_micros": 494971, "compaction_time_cpu_micros": 28970, "output_level": 6, "num_output_files": 1, "total_output_size": 9460474, "num_input_records": 5384, "num_output_records": 4864, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:05:35 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:05:35 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802335024879, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 22 04:05:35 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:05:35 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802335027046, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 22 04:05:35 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:34.478132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:05:35 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:35.027169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:05:35 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:35.027175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:05:35 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:35.027177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:05:35 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:35.027179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:05:35 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:05:35.027181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:05:35 np0005532048 nova_compute[253661]: 2025-11-22 09:05:35.028 253665 DEBUG nova.storage.rbd_utils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image b9672702-d5e5-407f-bc86-ee9e64f90a01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:05:35 np0005532048 nova_compute[253661]: 2025-11-22 09:05:35.033 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01/disk.config b9672702-d5e5-407f-bc86-ee9e64f90a01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:35 np0005532048 nova_compute[253661]: 2025-11-22 09:05:35.053 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqyrvsnfg" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:35 np0005532048 nova_compute[253661]: 2025-11-22 09:05:35.089 253665 DEBUG nova.storage.rbd_utils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] rbd image 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:05:35 np0005532048 nova_compute[253661]: 2025-11-22 09:05:35.093 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb/disk.config 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:36 np0005532048 nova_compute[253661]: 2025-11-22 09:05:36.227 253665 DEBUG oslo_concurrency.processutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb/disk.config 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:36 np0005532048 nova_compute[253661]: 2025-11-22 09:05:36.228 253665 INFO nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Deleting local config drive /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb/disk.config because it was imported into RBD.#033[00m
Nov 22 04:05:36 np0005532048 systemd[1]: Starting libvirt secret daemon...
Nov 22 04:05:36 np0005532048 nova_compute[253661]: 2025-11-22 09:05:36.267 253665 DEBUG oslo_concurrency.processutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01/disk.config b9672702-d5e5-407f-bc86-ee9e64f90a01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:36 np0005532048 nova_compute[253661]: 2025-11-22 09:05:36.267 253665 INFO nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Deleting local config drive /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01/disk.config because it was imported into RBD.#033[00m
Nov 22 04:05:36 np0005532048 systemd[1]: Started libvirt secret daemon.
Nov 22 04:05:36 np0005532048 podman[269575]: 2025-11-22 09:05:36.350042625 +0000 UTC m=+0.070452242 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 22 04:05:36 np0005532048 podman[269576]: 2025-11-22 09:05:36.359421682 +0000 UTC m=+0.079812949 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:05:36 np0005532048 systemd-machined[215941]: New machine qemu-1-instance-00000002.
Nov 22 04:05:36 np0005532048 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Nov 22 04:05:36 np0005532048 systemd-machined[215941]: New machine qemu-2-instance-00000001.
Nov 22 04:05:36 np0005532048 systemd[1]: Started Virtual Machine qemu-2-instance-00000001.
Nov 22 04:05:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.4 MiB/s wr, 120 op/s
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.151 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802337.1502492, 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.151 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.155 253665 DEBUG nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.155 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.160 253665 INFO nova.virt.libvirt.driver [-] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Instance spawned successfully.#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.160 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.199 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.205 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.210 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.211 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.211 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.212 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.212 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.213 253665 DEBUG nova.virt.libvirt.driver [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.233 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.234 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802337.1506598, 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.235 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] VM Started (Lifecycle Event)#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.267 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.271 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.292 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.303 253665 INFO nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Took 14.83 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.305 253665 DEBUG nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.307 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802337.306414, b9672702-d5e5-407f-bc86-ee9e64f90a01 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.308 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.311 253665 DEBUG nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.311 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.320 253665 INFO nova.virt.libvirt.driver [-] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Instance spawned successfully.#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.322 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.325 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.331 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.370 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.371 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.371 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.372 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.373 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.373 253665 DEBUG nova.virt.libvirt.driver [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.401 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.402 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802337.3065295, b9672702-d5e5-407f-bc86-ee9e64f90a01 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.402 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] VM Started (Lifecycle Event)#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.419 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.423 253665 INFO nova.compute.manager [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Took 15.96 seconds to build instance.#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.426 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.447 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.475 253665 DEBUG oslo_concurrency.lockutils [None req-0fc96da0-51fc-45d7-afc5-4815fa7feaf5 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.512 253665 INFO nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Took 14.50 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.514 253665 DEBUG nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.686 253665 INFO nova.compute.manager [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Took 16.20 seconds to build instance.#033[00m
Nov 22 04:05:37 np0005532048 nova_compute[253661]: 2025-11-22 09:05:37.727 253665 DEBUG oslo_concurrency.lockutils [None req-274c45d4-f4fb-451c-a18c-ef3764c922c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "b9672702-d5e5-407f-bc86-ee9e64f90a01" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:05:38 np0005532048 podman[269743]: 2025-11-22 09:05:38.47101908 +0000 UTC m=+0.144142443 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:05:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.1 MiB/s wr, 99 op/s
Nov 22 04:05:39 np0005532048 nova_compute[253661]: 2025-11-22 09:05:39.853 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Acquiring lock "b9672702-d5e5-407f-bc86-ee9e64f90a01" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:05:39 np0005532048 nova_compute[253661]: 2025-11-22 09:05:39.854 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Lock "b9672702-d5e5-407f-bc86-ee9e64f90a01" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:39 np0005532048 nova_compute[253661]: 2025-11-22 09:05:39.854 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Acquiring lock "b9672702-d5e5-407f-bc86-ee9e64f90a01-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:39 np0005532048 nova_compute[253661]: 2025-11-22 09:05:39.855 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Lock "b9672702-d5e5-407f-bc86-ee9e64f90a01-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:39 np0005532048 nova_compute[253661]: 2025-11-22 09:05:39.855 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Lock "b9672702-d5e5-407f-bc86-ee9e64f90a01-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:39 np0005532048 nova_compute[253661]: 2025-11-22 09:05:39.856 253665 INFO nova.compute.manager [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Terminating instance
Nov 22 04:05:39 np0005532048 nova_compute[253661]: 2025-11-22 09:05:39.857 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Acquiring lock "refresh_cache-b9672702-d5e5-407f-bc86-ee9e64f90a01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:05:39 np0005532048 nova_compute[253661]: 2025-11-22 09:05:39.858 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Acquired lock "refresh_cache-b9672702-d5e5-407f-bc86-ee9e64f90a01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:05:39 np0005532048 nova_compute[253661]: 2025-11-22 09:05:39.858 253665 DEBUG nova.network.neutron [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:05:40 np0005532048 nova_compute[253661]: 2025-11-22 09:05:40.061 253665 DEBUG nova.network.neutron [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:05:40 np0005532048 nova_compute[253661]: 2025-11-22 09:05:40.337 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquiring lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:40 np0005532048 nova_compute[253661]: 2025-11-22 09:05:40.338 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:40 np0005532048 nova_compute[253661]: 2025-11-22 09:05:40.338 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquiring lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:40 np0005532048 nova_compute[253661]: 2025-11-22 09:05:40.339 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:40 np0005532048 nova_compute[253661]: 2025-11-22 09:05:40.339 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:40 np0005532048 nova_compute[253661]: 2025-11-22 09:05:40.340 253665 INFO nova.compute.manager [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Terminating instance
Nov 22 04:05:40 np0005532048 nova_compute[253661]: 2025-11-22 09:05:40.341 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquiring lock "refresh_cache-6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:05:40 np0005532048 nova_compute[253661]: 2025-11-22 09:05:40.341 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquired lock "refresh_cache-6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:05:40 np0005532048 nova_compute[253661]: 2025-11-22 09:05:40.341 253665 DEBUG nova.network.neutron [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:05:40 np0005532048 nova_compute[253661]: 2025-11-22 09:05:40.547 253665 DEBUG nova.network.neutron [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:05:40 np0005532048 nova_compute[253661]: 2025-11-22 09:05:40.571 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Releasing lock "refresh_cache-b9672702-d5e5-407f-bc86-ee9e64f90a01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:05:40 np0005532048 nova_compute[253661]: 2025-11-22 09:05:40.571 253665 DEBUG nova.compute.manager [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:05:40 np0005532048 nova_compute[253661]: 2025-11-22 09:05:40.659 253665 DEBUG nova.network.neutron [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:05:40 np0005532048 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 22 04:05:40 np0005532048 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000001.scope: Consumed 3.784s CPU time.
Nov 22 04:05:40 np0005532048 systemd-machined[215941]: Machine qemu-2-instance-00000001 terminated.
Nov 22 04:05:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.1 MiB/s wr, 99 op/s
Nov 22 04:05:41 np0005532048 nova_compute[253661]: 2025-11-22 09:05:41.002 253665 INFO nova.virt.libvirt.driver [-] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Instance destroyed successfully.
Nov 22 04:05:41 np0005532048 nova_compute[253661]: 2025-11-22 09:05:41.002 253665 DEBUG nova.objects.instance [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Lazy-loading 'resources' on Instance uuid b9672702-d5e5-407f-bc86-ee9e64f90a01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:05:41 np0005532048 nova_compute[253661]: 2025-11-22 09:05:41.071 253665 DEBUG nova.network.neutron [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:05:41 np0005532048 nova_compute[253661]: 2025-11-22 09:05:41.085 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Releasing lock "refresh_cache-6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:05:41 np0005532048 nova_compute[253661]: 2025-11-22 09:05:41.086 253665 DEBUG nova.compute.manager [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:05:41 np0005532048 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 22 04:05:41 np0005532048 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 4.468s CPU time.
Nov 22 04:05:41 np0005532048 systemd-machined[215941]: Machine qemu-1-instance-00000002 terminated.
Nov 22 04:05:41 np0005532048 nova_compute[253661]: 2025-11-22 09:05:41.509 253665 INFO nova.virt.libvirt.driver [-] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Instance destroyed successfully.
Nov 22 04:05:41 np0005532048 nova_compute[253661]: 2025-11-22 09:05:41.510 253665 DEBUG nova.objects.instance [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lazy-loading 'resources' on Instance uuid 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:05:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 158 op/s
Nov 22 04:05:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 27 KiB/s wr, 173 op/s
Nov 22 04:05:46 np0005532048 nova_compute[253661]: 2025-11-22 09:05:46.710 253665 INFO nova.virt.libvirt.driver [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Deleting instance files /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01_del
Nov 22 04:05:46 np0005532048 nova_compute[253661]: 2025-11-22 09:05:46.711 253665 INFO nova.virt.libvirt.driver [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Deletion of /var/lib/nova/instances/b9672702-d5e5-407f-bc86-ee9e64f90a01_del complete
Nov 22 04:05:46 np0005532048 nova_compute[253661]: 2025-11-22 09:05:46.718 253665 INFO nova.virt.libvirt.driver [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Deleting instance files /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_del
Nov 22 04:05:46 np0005532048 nova_compute[253661]: 2025-11-22 09:05:46.719 253665 INFO nova.virt.libvirt.driver [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Deletion of /var/lib/nova/instances/6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb_del complete
Nov 22 04:05:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 97 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 26 KiB/s wr, 178 op/s
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.147 253665 DEBUG nova.virt.libvirt.host [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.147 253665 INFO nova.virt.libvirt.host [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] UEFI support detected
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.149 253665 INFO nova.compute.manager [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Took 6.58 seconds to destroy the instance on the hypervisor.
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.150 253665 DEBUG oslo.service.loopingcall [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.151 253665 DEBUG nova.compute.manager [-] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.151 253665 DEBUG nova.network.neutron [-] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.219 253665 INFO nova.compute.manager [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Took 6.13 seconds to destroy the instance on the hypervisor.
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.220 253665 DEBUG oslo.service.loopingcall [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.221 253665 DEBUG nova.compute.manager [-] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.221 253665 DEBUG nova.network.neutron [-] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.670 253665 DEBUG nova.network.neutron [-] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.682 253665 DEBUG nova.network.neutron [-] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.693 253665 INFO nova.compute.manager [-] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Took 0.54 seconds to deallocate network for instance.
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.767 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.768 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.772 253665 DEBUG nova.network.neutron [-] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.783 253665 DEBUG nova.network.neutron [-] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.796 253665 INFO nova.compute.manager [-] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Took 0.57 seconds to deallocate network for instance.
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.836 253665 DEBUG oslo_concurrency.processutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:47 np0005532048 nova_compute[253661]: 2025-11-22 09:05:47.946 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:05:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2223022577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:05:48 np0005532048 nova_compute[253661]: 2025-11-22 09:05:48.338 253665 DEBUG oslo_concurrency.processutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:48 np0005532048 nova_compute[253661]: 2025-11-22 09:05:48.346 253665 DEBUG nova.compute.provider_tree [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 04:05:48 np0005532048 nova_compute[253661]: 2025-11-22 09:05:48.377 253665 ERROR nova.scheduler.client.report [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] [req-2eeba460-000b-4da4-af91-9b68da56f9d6] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID f0c5987a-d277-4022-aba2-19e7fecb4518.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-2eeba460-000b-4da4-af91-9b68da56f9d6"}]}
Nov 22 04:05:48 np0005532048 nova_compute[253661]: 2025-11-22 09:05:48.397 253665 DEBUG nova.scheduler.client.report [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 04:05:48 np0005532048 nova_compute[253661]: 2025-11-22 09:05:48.413 253665 DEBUG nova.scheduler.client.report [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 04:05:48 np0005532048 nova_compute[253661]: 2025-11-22 09:05:48.413 253665 DEBUG nova.compute.provider_tree [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 04:05:48 np0005532048 nova_compute[253661]: 2025-11-22 09:05:48.426 253665 DEBUG nova.scheduler.client.report [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 04:05:48 np0005532048 nova_compute[253661]: 2025-11-22 09:05:48.446 253665 DEBUG nova.scheduler.client.report [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 04:05:48 np0005532048 nova_compute[253661]: 2025-11-22 09:05:48.490 253665 DEBUG oslo_concurrency.processutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:05:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/997063578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:05:48 np0005532048 nova_compute[253661]: 2025-11-22 09:05:48.971 253665 DEBUG oslo_concurrency.processutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:48 np0005532048 nova_compute[253661]: 2025-11-22 09:05:48.979 253665 DEBUG nova.compute.provider_tree [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 04:05:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 15 KiB/s wr, 196 op/s
Nov 22 04:05:49 np0005532048 nova_compute[253661]: 2025-11-22 09:05:49.030 253665 DEBUG nova.scheduler.client.report [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Updated inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with generation 8 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 22 04:05:49 np0005532048 nova_compute[253661]: 2025-11-22 09:05:49.031 253665 DEBUG nova.compute.provider_tree [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Updating resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 generation from 8 to 9 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 22 04:05:49 np0005532048 nova_compute[253661]: 2025-11-22 09:05:49.031 253665 DEBUG nova.compute.provider_tree [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 04:05:49 np0005532048 nova_compute[253661]: 2025-11-22 09:05:49.066 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:49 np0005532048 nova_compute[253661]: 2025-11-22 09:05:49.069 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:49 np0005532048 nova_compute[253661]: 2025-11-22 09:05:49.102 253665 INFO nova.scheduler.client.report [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Deleted allocations for instance b9672702-d5e5-407f-bc86-ee9e64f90a01
Nov 22 04:05:49 np0005532048 nova_compute[253661]: 2025-11-22 09:05:49.124 253665 DEBUG oslo_concurrency.processutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:49 np0005532048 nova_compute[253661]: 2025-11-22 09:05:49.185 253665 DEBUG oslo_concurrency.lockutils [None req-6632262c-972d-421e-a19b-7656c8f935b4 363468ecc47d437f8bad53b4db93cde9 e42a94a05d74479cb9484cd3c318b9c9 - - default default] Lock "b9672702-d5e5-407f-bc86-ee9e64f90a01" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:05:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190372886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:05:49 np0005532048 nova_compute[253661]: 2025-11-22 09:05:49.562 253665 DEBUG oslo_concurrency.processutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:49 np0005532048 nova_compute[253661]: 2025-11-22 09:05:49.570 253665 DEBUG nova.compute.provider_tree [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:05:49 np0005532048 nova_compute[253661]: 2025-11-22 09:05:49.584 253665 DEBUG nova.scheduler.client.report [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:05:49 np0005532048 nova_compute[253661]: 2025-11-22 09:05:49.644 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:05:49 np0005532048 nova_compute[253661]: 2025-11-22 09:05:49.687 253665 INFO nova.scheduler.client.report [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Deleted allocations for instance 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb#033[00m
Nov 22 04:05:49 np0005532048 nova_compute[253661]: 2025-11-22 09:05:49.776 253665 DEBUG oslo_concurrency.lockutils [None req-b2a94299-691c-45bd-8904-540858119814 3a894a86983342d3bdece6fcf23fe1a9 4c55d55859ed4fb9adf33d6c40da9051 - - default default] Lock "6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:05:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.3 KiB/s wr, 149 op/s
Nov 22 04:05:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:05:52
Nov 22 04:05:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:05:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:05:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'volumes', '.mgr', 'backups', 'images', 'cephfs.cephfs.meta']
Nov 22 04:05:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:05:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:05:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:05:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:05:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:05:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:05:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:05:52 np0005532048 nova_compute[253661]: 2025-11-22 09:05:52.724 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:05:52 np0005532048 nova_compute[253661]: 2025-11-22 09:05:52.724 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:05:52 np0005532048 nova_compute[253661]: 2025-11-22 09:05:52.766 253665 DEBUG nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:05:52 np0005532048 nova_compute[253661]: 2025-11-22 09:05:52.945 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:05:52 np0005532048 nova_compute[253661]: 2025-11-22 09:05:52.946 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:05:52 np0005532048 nova_compute[253661]: 2025-11-22 09:05:52.954 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:05:52 np0005532048 nova_compute[253661]: 2025-11-22 09:05:52.954 253665 INFO nova.compute.claims [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:05:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.3 KiB/s wr, 150 op/s
Nov 22 04:05:53 np0005532048 nova_compute[253661]: 2025-11-22 09:05:53.092 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:05:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3970738560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:05:53 np0005532048 nova_compute[253661]: 2025-11-22 09:05:53.523 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:53 np0005532048 nova_compute[253661]: 2025-11-22 09:05:53.530 253665 DEBUG nova.compute.provider_tree [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:05:53 np0005532048 nova_compute[253661]: 2025-11-22 09:05:53.544 253665 DEBUG nova.scheduler.client.report [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:05:53 np0005532048 nova_compute[253661]: 2025-11-22 09:05:53.603 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:05:53 np0005532048 nova_compute[253661]: 2025-11-22 09:05:53.604 253665 DEBUG nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:05:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:53 np0005532048 nova_compute[253661]: 2025-11-22 09:05:53.797 253665 DEBUG nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:05:53 np0005532048 nova_compute[253661]: 2025-11-22 09:05:53.798 253665 DEBUG nova.network.neutron [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:05:53 np0005532048 nova_compute[253661]: 2025-11-22 09:05:53.851 253665 INFO nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.035 253665 DEBUG nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.303 253665 DEBUG nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.305 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.305 253665 INFO nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Creating image(s)#033[00m
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.330 253665 DEBUG nova.storage.rbd_utils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.354 253665 DEBUG nova.storage.rbd_utils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.382 253665 DEBUG nova.storage.rbd_utils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.387 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.423 253665 DEBUG nova.network.neutron [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.424 253665 DEBUG nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:05:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.455 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.456 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.457 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.458 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:05:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:05:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:05:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:05:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:05:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:05:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:05:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:05:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:05:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.483 253665 DEBUG nova.storage.rbd_utils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.487 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:54 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.817 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.877 253665 DEBUG nova.storage.rbd_utils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] resizing rbd image 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:05:54 np0005532048 nova_compute[253661]: 2025-11-22 09:05:54.988 253665 DEBUG nova.objects.instance [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lazy-loading 'migration_context' on Instance uuid 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:05:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.3 KiB/s wr, 90 op/s
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.003 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.004 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Ensure instance console log exists: /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.004 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.005 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.005 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.007 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.013 253665 WARNING nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.020 253665 DEBUG nova.virt.libvirt.host [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.020 253665 DEBUG nova.virt.libvirt.host [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.030 253665 DEBUG nova.virt.libvirt.host [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.031 253665 DEBUG nova.virt.libvirt.host [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.031 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.032 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.032 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.033 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.033 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.033 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.033 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.034 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.034 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.034 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.035 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.035 253665 DEBUG nova.virt.hardware [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.039 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:05:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2985298784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.500 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.532 253665 DEBUG nova.storage.rbd_utils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.537 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:05:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4123296519' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.977 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.979 253665 DEBUG nova.objects.instance [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:05:55 np0005532048 nova_compute[253661]: 2025-11-22 09:05:55.992 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  <uuid>77eaf2e6-9498-4f2e-9c1d-496e6369a9d1</uuid>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  <name>instance-00000003</name>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-949701683</nova:name>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:05:55</nova:creationTime>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:05:55 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:        <nova:user uuid="786abc4100494e9a9e9977b0d6534f9d">tempest-DeleteServersAdminTestJSON-1856447615-project-member</nova:user>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:        <nova:project uuid="ae421771a97c45f7a0288a4b8cfd48c5">tempest-DeleteServersAdminTestJSON-1856447615</nova:project>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <entry name="serial">77eaf2e6-9498-4f2e-9c1d-496e6369a9d1</entry>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <entry name="uuid">77eaf2e6-9498-4f2e-9c1d-496e6369a9d1</entry>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk">
Nov 22 04:05:55 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:05:55 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk.config">
Nov 22 04:05:55 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:05:55 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1/console.log" append="off"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:05:55 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:05:55 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:05:55 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:05:55 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:05:56 np0005532048 nova_compute[253661]: 2025-11-22 09:05:56.000 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802340.9994485, b9672702-d5e5-407f-bc86-ee9e64f90a01 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:05:56 np0005532048 nova_compute[253661]: 2025-11-22 09:05:56.000 253665 INFO nova.compute.manager [-] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:05:56 np0005532048 nova_compute[253661]: 2025-11-22 09:05:56.015 253665 DEBUG nova.compute.manager [None req-f219254b-530d-4d32-9b32-8bcc639701f7 - - - - - -] [instance: b9672702-d5e5-407f-bc86-ee9e64f90a01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:05:56 np0005532048 nova_compute[253661]: 2025-11-22 09:05:56.239 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:05:56 np0005532048 nova_compute[253661]: 2025-11-22 09:05:56.240 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:05:56 np0005532048 nova_compute[253661]: 2025-11-22 09:05:56.241 253665 INFO nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Using config drive#033[00m
Nov 22 04:05:56 np0005532048 nova_compute[253661]: 2025-11-22 09:05:56.377 253665 DEBUG nova.storage.rbd_utils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:05:56 np0005532048 nova_compute[253661]: 2025-11-22 09:05:56.507 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802341.50647, 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:05:56 np0005532048 nova_compute[253661]: 2025-11-22 09:05:56.508 253665 INFO nova.compute.manager [-] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:05:56 np0005532048 nova_compute[253661]: 2025-11-22 09:05:56.528 253665 DEBUG nova.compute.manager [None req-d28ca804-7a35-4e3d-843f-1d01dbdb0e1a - - - - - -] [instance: 6f5575b9-a2e6-4c8d-850a-0cb73b46bbeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:05:56 np0005532048 nova_compute[253661]: 2025-11-22 09:05:56.553 253665 INFO nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Creating config drive at /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1/disk.config#033[00m
Nov 22 04:05:56 np0005532048 nova_compute[253661]: 2025-11-22 09:05:56.558 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprtv5v5xl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:56 np0005532048 nova_compute[253661]: 2025-11-22 09:05:56.686 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprtv5v5xl" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:56 np0005532048 nova_compute[253661]: 2025-11-22 09:05:56.716 253665 DEBUG nova.storage.rbd_utils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] rbd image 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:05:56 np0005532048 nova_compute[253661]: 2025-11-22 09:05:56.721 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1/disk.config 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:05:56 np0005532048 nova_compute[253661]: 2025-11-22 09:05:56.919 253665 DEBUG oslo_concurrency.processutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1/disk.config 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:05:56 np0005532048 nova_compute[253661]: 2025-11-22 09:05:56.920 253665 INFO nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Deleting local config drive /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1/disk.config because it was imported into RBD.#033[00m
Nov 22 04:05:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 56 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 521 KiB/s wr, 54 op/s
Nov 22 04:05:57 np0005532048 systemd-machined[215941]: New machine qemu-3-instance-00000003.
Nov 22 04:05:57 np0005532048 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.403 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802357.4028168, 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.405 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.408 253665 DEBUG nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.409 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.414 253665 INFO nova.virt.libvirt.driver [-] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Instance spawned successfully.#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.414 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.424 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.430 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.433 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.433 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.434 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.434 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.435 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.435 253665 DEBUG nova.virt.libvirt.driver [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.459 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.460 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802357.403159, 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.461 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] VM Started (Lifecycle Event)#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.479 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.483 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.510 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.532 253665 INFO nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Took 3.23 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.532 253665 DEBUG nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.629 253665 INFO nova.compute.manager [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Took 4.71 seconds to build instance.#033[00m
Nov 22 04:05:57 np0005532048 nova_compute[253661]: 2025-11-22 09:05:57.655 253665 DEBUG oslo_concurrency.lockutils [None req-0f9571b6-5931-488c-b6ca-7204eabf2b55 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:05:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 274 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 22 04:06:00 np0005532048 nova_compute[253661]: 2025-11-22 09:06:00.951 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:00 np0005532048 nova_compute[253661]: 2025-11-22 09:06:00.952 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:00 np0005532048 nova_compute[253661]: 2025-11-22 09:06:00.952 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:00 np0005532048 nova_compute[253661]: 2025-11-22 09:06:00.952 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:00 np0005532048 nova_compute[253661]: 2025-11-22 09:06:00.953 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:00 np0005532048 nova_compute[253661]: 2025-11-22 09:06:00.954 253665 INFO nova.compute.manager [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Terminating instance#033[00m
Nov 22 04:06:00 np0005532048 nova_compute[253661]: 2025-11-22 09:06:00.955 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "refresh_cache-77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:06:00 np0005532048 nova_compute[253661]: 2025-11-22 09:06:00.955 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquired lock "refresh_cache-77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:06:00 np0005532048 nova_compute[253661]: 2025-11-22 09:06:00.955 253665 DEBUG nova.network.neutron [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:06:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 258 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Nov 22 04:06:01 np0005532048 nova_compute[253661]: 2025-11-22 09:06:01.121 253665 DEBUG nova.network.neutron [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:06:01 np0005532048 nova_compute[253661]: 2025-11-22 09:06:01.385 253665 DEBUG nova.network.neutron [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:06:01 np0005532048 nova_compute[253661]: 2025-11-22 09:06:01.402 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Releasing lock "refresh_cache-77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:06:01 np0005532048 nova_compute[253661]: 2025-11-22 09:06:01.403 253665 DEBUG nova.compute.manager [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:06:01 np0005532048 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 22 04:06:01 np0005532048 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 4.532s CPU time.
Nov 22 04:06:01 np0005532048 systemd-machined[215941]: Machine qemu-3-instance-00000003 terminated.
Nov 22 04:06:01 np0005532048 nova_compute[253661]: 2025-11-22 09:06:01.622 253665 INFO nova.virt.libvirt.driver [-] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Instance destroyed successfully.#033[00m
Nov 22 04:06:01 np0005532048 nova_compute[253661]: 2025-11-22 09:06:01.623 253665 DEBUG nova.objects.instance [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lazy-loading 'resources' on Instance uuid 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:06:02 np0005532048 nova_compute[253661]: 2025-11-22 09:06:02.172 253665 INFO nova.virt.libvirt.driver [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Deleting instance files /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_del#033[00m
Nov 22 04:06:02 np0005532048 nova_compute[253661]: 2025-11-22 09:06:02.174 253665 INFO nova.virt.libvirt.driver [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Deletion of /var/lib/nova/instances/77eaf2e6-9498-4f2e-9c1d-496e6369a9d1_del complete#033[00m
Nov 22 04:06:02 np0005532048 nova_compute[253661]: 2025-11-22 09:06:02.251 253665 INFO nova.compute.manager [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:06:02 np0005532048 nova_compute[253661]: 2025-11-22 09:06:02.252 253665 DEBUG oslo.service.loopingcall [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:06:02 np0005532048 nova_compute[253661]: 2025-11-22 09:06:02.252 253665 DEBUG nova.compute.manager [-] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:06:02 np0005532048 nova_compute[253661]: 2025-11-22 09:06:02.253 253665 DEBUG nova.network.neutron [-] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003482863067330859 of space, bias 1.0, pg target 0.10448589201992577 quantized to 32 (current 32)
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:06:02 np0005532048 nova_compute[253661]: 2025-11-22 09:06:02.616 253665 DEBUG nova.network.neutron [-] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:06:02 np0005532048 nova_compute[253661]: 2025-11-22 09:06:02.631 253665 DEBUG nova.network.neutron [-] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:06:02 np0005532048 nova_compute[253661]: 2025-11-22 09:06:02.647 253665 INFO nova.compute.manager [-] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Took 0.39 seconds to deallocate network for instance.#033[00m
Nov 22 04:06:02 np0005532048 nova_compute[253661]: 2025-11-22 09:06:02.740 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:02 np0005532048 nova_compute[253661]: 2025-11-22 09:06:02.741 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:02 np0005532048 nova_compute[253661]: 2025-11-22 09:06:02.783 253665 DEBUG oslo_concurrency.processutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:06:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 805 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Nov 22 04:06:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:06:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1928697489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:06:03 np0005532048 nova_compute[253661]: 2025-11-22 09:06:03.216 253665 DEBUG oslo_concurrency.processutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:03 np0005532048 nova_compute[253661]: 2025-11-22 09:06:03.222 253665 DEBUG nova.compute.provider_tree [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:06:03 np0005532048 nova_compute[253661]: 2025-11-22 09:06:03.236 253665 DEBUG nova.scheduler.client.report [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:06:03 np0005532048 nova_compute[253661]: 2025-11-22 09:06:03.266 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:03 np0005532048 nova_compute[253661]: 2025-11-22 09:06:03.336 253665 INFO nova.scheduler.client.report [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Deleted allocations for instance 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1#033[00m
Nov 22 04:06:03 np0005532048 nova_compute[253661]: 2025-11-22 09:06:03.396 253665 DEBUG oslo_concurrency.lockutils [None req-0af75e81-8c54-462d-9de6-6a40aa5b98c8 786abc4100494e9a9e9977b0d6534f9d ae421771a97c45f7a0288a4b8cfd48c5 - - default default] Lock "77eaf2e6-9498-4f2e-9c1d-496e6369a9d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 56 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Nov 22 04:06:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 22 04:06:07 np0005532048 podman[270292]: 2025-11-22 09:06:07.38423125 +0000 UTC m=+0.062540911 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd)
Nov 22 04:06:07 np0005532048 podman[270291]: 2025-11-22 09:06:07.406240502 +0000 UTC m=+0.087477724 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:06:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 113 op/s
Nov 22 04:06:09 np0005532048 podman[270328]: 2025-11-22 09:06:09.436851793 +0000 UTC m=+0.127858889 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:06:10 np0005532048 nova_compute[253661]: 2025-11-22 09:06:10.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:06:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.6 KiB/s wr, 89 op/s
Nov 22 04:06:11 np0005532048 nova_compute[253661]: 2025-11-22 09:06:11.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:06:11 np0005532048 nova_compute[253661]: 2025-11-22 09:06:11.246 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:11 np0005532048 nova_compute[253661]: 2025-11-22 09:06:11.247 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:11 np0005532048 nova_compute[253661]: 2025-11-22 09:06:11.247 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:11 np0005532048 nova_compute[253661]: 2025-11-22 09:06:11.247 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:06:11 np0005532048 nova_compute[253661]: 2025-11-22 09:06:11.248 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:06:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:06:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2196221149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:06:11 np0005532048 nova_compute[253661]: 2025-11-22 09:06:11.684 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:11 np0005532048 nova_compute[253661]: 2025-11-22 09:06:11.889 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:06:11 np0005532048 nova_compute[253661]: 2025-11-22 09:06:11.891 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5085MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:06:11 np0005532048 nova_compute[253661]: 2025-11-22 09:06:11.892 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:11 np0005532048 nova_compute[253661]: 2025-11-22 09:06:11.892 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:11 np0005532048 nova_compute[253661]: 2025-11-22 09:06:11.950 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:06:11 np0005532048 nova_compute[253661]: 2025-11-22 09:06:11.951 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:06:11 np0005532048 nova_compute[253661]: 2025-11-22 09:06:11.974 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:06:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:06:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3364744456' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:06:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:06:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3364744456' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:06:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:06:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3694971963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:06:12 np0005532048 nova_compute[253661]: 2025-11-22 09:06:12.403 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:12 np0005532048 nova_compute[253661]: 2025-11-22 09:06:12.411 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:06:12 np0005532048 nova_compute[253661]: 2025-11-22 09:06:12.435 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:06:12 np0005532048 nova_compute[253661]: 2025-11-22 09:06:12.472 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:06:12 np0005532048 nova_compute[253661]: 2025-11-22 09:06:12.473 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.6 KiB/s wr, 89 op/s
Nov 22 04:06:13 np0005532048 nova_compute[253661]: 2025-11-22 09:06:13.473 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:06:13 np0005532048 nova_compute[253661]: 2025-11-22 09:06:13.474 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:06:13 np0005532048 nova_compute[253661]: 2025-11-22 09:06:13.475 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:06:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:14 np0005532048 nova_compute[253661]: 2025-11-22 09:06:14.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:06:14 np0005532048 nova_compute[253661]: 2025-11-22 09:06:14.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:06:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.2 KiB/s wr, 63 op/s
Nov 22 04:06:15 np0005532048 nova_compute[253661]: 2025-11-22 09:06:15.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:06:16 np0005532048 nova_compute[253661]: 2025-11-22 09:06:16.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:06:16 np0005532048 nova_compute[253661]: 2025-11-22 09:06:16.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:06:16 np0005532048 nova_compute[253661]: 2025-11-22 09:06:16.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:06:16 np0005532048 nova_compute[253661]: 2025-11-22 09:06:16.244 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:06:16 np0005532048 nova_compute[253661]: 2025-11-22 09:06:16.245 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:06:16 np0005532048 nova_compute[253661]: 2025-11-22 09:06:16.621 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802361.6194797, 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:06:16 np0005532048 nova_compute[253661]: 2025-11-22 09:06:16.622 253665 INFO nova.compute.manager [-] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:06:16 np0005532048 nova_compute[253661]: 2025-11-22 09:06:16.743 253665 DEBUG nova.compute.manager [None req-ff40e480-f139-46f0-88fc-e7c0f61fc6f7 - - - - - -] [instance: 77eaf2e6-9498-4f2e-9c1d-496e6369a9d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:06:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 852 B/s wr, 20 op/s
Nov 22 04:06:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:18 np0005532048 nova_compute[253661]: 2025-11-22 09:06:18.994 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:18 np0005532048 nova_compute[253661]: 2025-11-22 09:06:18.994 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:06:19 np0005532048 nova_compute[253661]: 2025-11-22 09:06:19.030 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:06:19 np0005532048 nova_compute[253661]: 2025-11-22 09:06:19.155 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:19 np0005532048 nova_compute[253661]: 2025-11-22 09:06:19.156 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:19 np0005532048 nova_compute[253661]: 2025-11-22 09:06:19.165 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:06:19 np0005532048 nova_compute[253661]: 2025-11-22 09:06:19.166 253665 INFO nova.compute.claims [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:06:19 np0005532048 nova_compute[253661]: 2025-11-22 09:06:19.298 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:06:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:06:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3410429959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:06:19 np0005532048 nova_compute[253661]: 2025-11-22 09:06:19.756 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:19 np0005532048 nova_compute[253661]: 2025-11-22 09:06:19.764 253665 DEBUG nova.compute.provider_tree [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:06:19 np0005532048 nova_compute[253661]: 2025-11-22 09:06:19.789 253665 DEBUG nova.scheduler.client.report [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:06:19 np0005532048 nova_compute[253661]: 2025-11-22 09:06:19.825 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:19 np0005532048 nova_compute[253661]: 2025-11-22 09:06:19.826 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:06:19 np0005532048 nova_compute[253661]: 2025-11-22 09:06:19.892 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:06:19 np0005532048 nova_compute[253661]: 2025-11-22 09:06:19.893 253665 DEBUG nova.network.neutron [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:06:19 np0005532048 nova_compute[253661]: 2025-11-22 09:06:19.925 253665 INFO nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:06:19 np0005532048 nova_compute[253661]: 2025-11-22 09:06:19.983 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:06:20 np0005532048 nova_compute[253661]: 2025-11-22 09:06:20.105 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:06:20 np0005532048 nova_compute[253661]: 2025-11-22 09:06:20.106 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:06:20 np0005532048 nova_compute[253661]: 2025-11-22 09:06:20.106 253665 INFO nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Creating image(s)#033[00m
Nov 22 04:06:20 np0005532048 nova_compute[253661]: 2025-11-22 09:06:20.126 253665 DEBUG nova.storage.rbd_utils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:06:20 np0005532048 nova_compute[253661]: 2025-11-22 09:06:20.157 253665 DEBUG nova.storage.rbd_utils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:06:20 np0005532048 nova_compute[253661]: 2025-11-22 09:06:20.191 253665 DEBUG nova.storage.rbd_utils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:06:20 np0005532048 nova_compute[253661]: 2025-11-22 09:06:20.195 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:06:20 np0005532048 nova_compute[253661]: 2025-11-22 09:06:20.257 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:20 np0005532048 nova_compute[253661]: 2025-11-22 09:06:20.258 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:20 np0005532048 nova_compute[253661]: 2025-11-22 09:06:20.259 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:20 np0005532048 nova_compute[253661]: 2025-11-22 09:06:20.259 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:20 np0005532048 nova_compute[253661]: 2025-11-22 09:06:20.281 253665 DEBUG nova.storage.rbd_utils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:06:20 np0005532048 nova_compute[253661]: 2025-11-22 09:06:20.286 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:06:20 np0005532048 nova_compute[253661]: 2025-11-22 09:06:20.459 253665 WARNING oslo_policy.policy [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 22 04:06:20 np0005532048 nova_compute[253661]: 2025-11-22 09:06:20.459 253665 WARNING oslo_policy.policy [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 22 04:06:20 np0005532048 nova_compute[253661]: 2025-11-22 09:06:20.462 253665 DEBUG nova.policy [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fabb775e44cc437680ea15de97d50877', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e0153a0f27f4c68ad2f7910dc78a992', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:06:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:06:21 np0005532048 nova_compute[253661]: 2025-11-22 09:06:21.307 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:21 np0005532048 nova_compute[253661]: 2025-11-22 09:06:21.376 253665 DEBUG nova.storage.rbd_utils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] resizing rbd image 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:06:21 np0005532048 nova_compute[253661]: 2025-11-22 09:06:21.504 253665 DEBUG nova.objects.instance [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e90ab44-2028-4ef8-ab7a-3c603be3e750 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:06:21 np0005532048 nova_compute[253661]: 2025-11-22 09:06:21.524 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:06:21 np0005532048 nova_compute[253661]: 2025-11-22 09:06:21.524 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Ensure instance console log exists: /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:06:21 np0005532048 nova_compute[253661]: 2025-11-22 09:06:21.525 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:21 np0005532048 nova_compute[253661]: 2025-11-22 09:06:21.525 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:21 np0005532048 nova_compute[253661]: 2025-11-22 09:06:21.525 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:22 np0005532048 nova_compute[253661]: 2025-11-22 09:06:22.675 253665 DEBUG nova.network.neutron [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Successfully created port: f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:06:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:06:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:06:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:06:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:06:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:06:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:06:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 59 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.1 MiB/s wr, 12 op/s
Nov 22 04:06:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:23.653 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:06:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:23.655 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:06:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:23 np0005532048 nova_compute[253661]: 2025-11-22 09:06:23.930 253665 DEBUG nova.network.neutron [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Successfully updated port: f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:06:23 np0005532048 nova_compute[253661]: 2025-11-22 09:06:23.955 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:06:23 np0005532048 nova_compute[253661]: 2025-11-22 09:06:23.955 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquired lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:06:23 np0005532048 nova_compute[253661]: 2025-11-22 09:06:23.955 253665 DEBUG nova.network.neutron [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:06:24 np0005532048 nova_compute[253661]: 2025-11-22 09:06:24.140 253665 DEBUG nova.network.neutron [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:06:24 np0005532048 nova_compute[253661]: 2025-11-22 09:06:24.515 253665 DEBUG nova.compute.manager [req-bdf8949d-b9bb-4db0-b1f4-3076ac64884d req-8f241697-a81f-4c9f-9be0-d67f88549e64 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received event network-changed-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:06:24 np0005532048 nova_compute[253661]: 2025-11-22 09:06:24.516 253665 DEBUG nova.compute.manager [req-bdf8949d-b9bb-4db0-b1f4-3076ac64884d req-8f241697-a81f-4c9f-9be0-d67f88549e64 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Refreshing instance network info cache due to event network-changed-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:06:24 np0005532048 nova_compute[253661]: 2025-11-22 09:06:24.516 253665 DEBUG oslo_concurrency.lockutils [req-bdf8949d-b9bb-4db0-b1f4-3076ac64884d req-8f241697-a81f-4c9f-9be0-d67f88549e64 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:06:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 88 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.898 253665 DEBUG nova.network.neutron [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Updating instance_info_cache with network_info: [{"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.929 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Releasing lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.930 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Instance network_info: |[{"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.931 253665 DEBUG oslo_concurrency.lockutils [req-bdf8949d-b9bb-4db0-b1f4-3076ac64884d req-8f241697-a81f-4c9f-9be0-d67f88549e64 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.932 253665 DEBUG nova.network.neutron [req-bdf8949d-b9bb-4db0-b1f4-3076ac64884d req-8f241697-a81f-4c9f-9be0-d67f88549e64 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Refreshing network info cache for port f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.937 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Start _get_guest_xml network_info=[{"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.945 253665 WARNING nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.951 253665 DEBUG nova.virt.libvirt.host [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.952 253665 DEBUG nova.virt.libvirt.host [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.958 253665 DEBUG nova.virt.libvirt.host [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.958 253665 DEBUG nova.virt.libvirt.host [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.959 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.960 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:06:09Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1309919375',id=20,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-681809065',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.960 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.960 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.961 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.961 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.961 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.961 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.962 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.962 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.962 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.962 253665 DEBUG nova.virt.hardware [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:06:25 np0005532048 nova_compute[253661]: 2025-11-22 09:06:25.965 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:06:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:06:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1873309407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:06:26 np0005532048 nova_compute[253661]: 2025-11-22 09:06:26.451 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:26 np0005532048 nova_compute[253661]: 2025-11-22 09:06:26.479 253665 DEBUG nova.storage.rbd_utils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:06:26 np0005532048 nova_compute[253661]: 2025-11-22 09:06:26.484 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:06:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:06:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1648644769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:06:26 np0005532048 nova_compute[253661]: 2025-11-22 09:06:26.947 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:26 np0005532048 nova_compute[253661]: 2025-11-22 09:06:26.949 253665 DEBUG nova.virt.libvirt.vif [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-382486397',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-382486397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-382486397',id=4,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHodPu2mTylwLIiSpg98TP/l9TK91e/LqqUziWWty1W7HptoIJWYz1thR3bSVz/5iuqa18J3i9QIlrd3jgG6LZ6SDuZiEEZPZ9eZ7YiGOhjw3cAV2EtZ1B6zRxILW+qm/A==',key_name='tempest-keypair-2066856952',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e0153a0f27f4c68ad2f7910dc78a992',ramdisk_id='',reservation_id='r-8sum2ias',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:06:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fabb775e44cc437680ea15de97d50877',uuid=4e90ab44-2028-4ef8-ab7a-3c603be3e750,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:06:26 np0005532048 nova_compute[253661]: 2025-11-22 09:06:26.950 253665 DEBUG nova.network.os_vif_util [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converting VIF {"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:06:26 np0005532048 nova_compute[253661]: 2025-11-22 09:06:26.951 253665 DEBUG nova.network.os_vif_util [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:b6:44,bridge_name='br-int',has_traffic_filtering=True,id=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5fa33e1-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:06:26 np0005532048 nova_compute[253661]: 2025-11-22 09:06:26.953 253665 DEBUG nova.objects.instance [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e90ab44-2028-4ef8-ab7a-3c603be3e750 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:06:26 np0005532048 nova_compute[253661]: 2025-11-22 09:06:26.966 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  <uuid>4e90ab44-2028-4ef8-ab7a-3c603be3e750</uuid>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  <name>instance-00000004</name>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-382486397</nova:name>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:06:25</nova:creationTime>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <nova:flavor name="tempest-flavor_with_ephemeral_0-681809065">
Nov 22 04:06:26 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:        <nova:user uuid="fabb775e44cc437680ea15de97d50877">tempest-ServersWithSpecificFlavorTestJSON-1107415015-project-member</nova:user>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:        <nova:project uuid="4e0153a0f27f4c68ad2f7910dc78a992">tempest-ServersWithSpecificFlavorTestJSON-1107415015</nova:project>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:        <nova:port uuid="f5fa33e1-ab24-4daa-9790-5e0dbcbf4907">
Nov 22 04:06:26 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <entry name="serial">4e90ab44-2028-4ef8-ab7a-3c603be3e750</entry>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <entry name="uuid">4e90ab44-2028-4ef8-ab7a-3c603be3e750</entry>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk">
Nov 22 04:06:26 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:06:26 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk.config">
Nov 22 04:06:26 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:06:26 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:78:b6:44"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <target dev="tapf5fa33e1-ab"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750/console.log" append="off"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:06:26 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:06:26 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:06:26 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:06:26 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:06:26 np0005532048 nova_compute[253661]: 2025-11-22 09:06:26.968 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Preparing to wait for external event network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:06:26 np0005532048 nova_compute[253661]: 2025-11-22 09:06:26.969 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:26 np0005532048 nova_compute[253661]: 2025-11-22 09:06:26.969 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:26 np0005532048 nova_compute[253661]: 2025-11-22 09:06:26.969 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:26 np0005532048 nova_compute[253661]: 2025-11-22 09:06:26.970 253665 DEBUG nova.virt.libvirt.vif [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-382486397',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-382486397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-382486397',id=4,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHodPu2mTylwLIiSpg98TP/l9TK91e/LqqUziWWty1W7HptoIJWYz1thR3bSVz/5iuqa18J3i9QIlrd3jgG6LZ6SDuZiEEZPZ9eZ7YiGOhjw3cAV2EtZ1B6zRxILW+qm/A==',key_name='tempest-keypair-2066856952',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e0153a0f27f4c68ad2f7910dc78a992',ramdisk_id='',reservation_id='r-8sum2ias',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:06:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fabb775e44cc437680ea15de97d50877',uuid=4e90ab44-2028-4ef8-ab7a-3c603be3e750,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:06:26 np0005532048 nova_compute[253661]: 2025-11-22 09:06:26.970 253665 DEBUG nova.network.os_vif_util [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converting VIF {"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:06:26 np0005532048 nova_compute[253661]: 2025-11-22 09:06:26.971 253665 DEBUG nova.network.os_vif_util [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:b6:44,bridge_name='br-int',has_traffic_filtering=True,id=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5fa33e1-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:06:26 np0005532048 nova_compute[253661]: 2025-11-22 09:06:26.971 253665 DEBUG os_vif [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:b6:44,bridge_name='br-int',has_traffic_filtering=True,id=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5fa33e1-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:06:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.057 253665 DEBUG ovsdbapp.backend.ovs_idl [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.058 253665 DEBUG ovsdbapp.backend.ovs_idl [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.058 253665 DEBUG ovsdbapp.backend.ovs_idl [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.059 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.060 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [POLLOUT] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.061 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.061 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.067 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.078 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.079 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.079 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.080 253665 INFO oslo.privsep.daemon [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp2j1fng5x/privsep.sock']#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.824 253665 DEBUG nova.network.neutron [req-bdf8949d-b9bb-4db0-b1f4-3076ac64884d req-8f241697-a81f-4c9f-9be0-d67f88549e64 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Updated VIF entry in instance network info cache for port f5fa33e1-ab24-4daa-9790-5e0dbcbf4907. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.825 253665 DEBUG nova.network.neutron [req-bdf8949d-b9bb-4db0-b1f4-3076ac64884d req-8f241697-a81f-4c9f-9be0-d67f88549e64 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Updating instance_info_cache with network_info: [{"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.839 253665 DEBUG oslo_concurrency.lockutils [req-bdf8949d-b9bb-4db0-b1f4-3076ac64884d req-8f241697-a81f-4c9f-9be0-d67f88549e64 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:06:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:27.950 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:27.950 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:27.951 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.992 253665 INFO oslo.privsep.daemon [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.778 270653 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.784 270653 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.787 270653 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Nov 22 04:06:27 np0005532048 nova_compute[253661]: 2025-11-22 09:06:27.787 270653 INFO oslo.privsep.daemon [-] privsep daemon running as pid 270653#033[00m
Nov 22 04:06:28 np0005532048 nova_compute[253661]: 2025-11-22 09:06:28.464 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:28 np0005532048 nova_compute[253661]: 2025-11-22 09:06:28.465 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5fa33e1-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:06:28 np0005532048 nova_compute[253661]: 2025-11-22 09:06:28.466 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf5fa33e1-ab, col_values=(('external_ids', {'iface-id': 'f5fa33e1-ab24-4daa-9790-5e0dbcbf4907', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:b6:44', 'vm-uuid': '4e90ab44-2028-4ef8-ab7a-3c603be3e750'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:06:28 np0005532048 NetworkManager[48920]: <info>  [1763802388.4703] manager: (tapf5fa33e1-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 22 04:06:28 np0005532048 nova_compute[253661]: 2025-11-22 09:06:28.469 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:28 np0005532048 nova_compute[253661]: 2025-11-22 09:06:28.474 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:06:28 np0005532048 nova_compute[253661]: 2025-11-22 09:06:28.477 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:28 np0005532048 nova_compute[253661]: 2025-11-22 09:06:28.479 253665 INFO os_vif [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:b6:44,bridge_name='br-int',has_traffic_filtering=True,id=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5fa33e1-ab')#033[00m
Nov 22 04:06:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 22 04:06:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 22 04:06:28 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 22 04:06:28 np0005532048 nova_compute[253661]: 2025-11-22 09:06:28.555 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:06:28 np0005532048 nova_compute[253661]: 2025-11-22 09:06:28.556 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:06:28 np0005532048 nova_compute[253661]: 2025-11-22 09:06:28.556 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] No VIF found with MAC fa:16:3e:78:b6:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:06:28 np0005532048 nova_compute[253661]: 2025-11-22 09:06:28.558 253665 INFO nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Using config drive#033[00m
Nov 22 04:06:28 np0005532048 nova_compute[253661]: 2025-11-22 09:06:28.585 253665 DEBUG nova.storage.rbd_utils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:06:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:28.657 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:06:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 35 op/s
Nov 22 04:06:29 np0005532048 nova_compute[253661]: 2025-11-22 09:06:29.234 253665 INFO nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Creating config drive at /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750/disk.config#033[00m
Nov 22 04:06:29 np0005532048 nova_compute[253661]: 2025-11-22 09:06:29.241 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4o3zhabs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:06:29 np0005532048 nova_compute[253661]: 2025-11-22 09:06:29.371 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4o3zhabs" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:29 np0005532048 nova_compute[253661]: 2025-11-22 09:06:29.398 253665 DEBUG nova.storage.rbd_utils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:06:29 np0005532048 nova_compute[253661]: 2025-11-22 09:06:29.403 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750/disk.config 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:06:29 np0005532048 nova_compute[253661]: 2025-11-22 09:06:29.446 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:29 np0005532048 nova_compute[253661]: 2025-11-22 09:06:29.602 253665 DEBUG oslo_concurrency.processutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750/disk.config 4e90ab44-2028-4ef8-ab7a-3c603be3e750_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:29 np0005532048 nova_compute[253661]: 2025-11-22 09:06:29.603 253665 INFO nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Deleting local config drive /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750/disk.config because it was imported into RBD.#033[00m
Nov 22 04:06:29 np0005532048 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 22 04:06:29 np0005532048 kernel: tapf5fa33e1-ab: entered promiscuous mode
Nov 22 04:06:29 np0005532048 NetworkManager[48920]: <info>  [1763802389.6711] manager: (tapf5fa33e1-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Nov 22 04:06:29 np0005532048 ovn_controller[152872]: 2025-11-22T09:06:29Z|00027|binding|INFO|Claiming lport f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 for this chassis.
Nov 22 04:06:29 np0005532048 ovn_controller[152872]: 2025-11-22T09:06:29Z|00028|binding|INFO|f5fa33e1-ab24-4daa-9790-5e0dbcbf4907: Claiming fa:16:3e:78:b6:44 10.100.0.6
Nov 22 04:06:29 np0005532048 nova_compute[253661]: 2025-11-22 09:06:29.672 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:29 np0005532048 nova_compute[253661]: 2025-11-22 09:06:29.676 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:29 np0005532048 systemd-udevd[270731]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:06:29 np0005532048 NetworkManager[48920]: <info>  [1763802389.7187] device (tapf5fa33e1-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:06:29 np0005532048 NetworkManager[48920]: <info>  [1763802389.7196] device (tapf5fa33e1-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:06:29 np0005532048 systemd-machined[215941]: New machine qemu-4-instance-00000004.
Nov 22 04:06:29 np0005532048 nova_compute[253661]: 2025-11-22 09:06:29.754 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:29 np0005532048 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Nov 22 04:06:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:29.752 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:b6:44 10.100.0.6'], port_security=['fa:16:3e:78:b6:44 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e90ab44-2028-4ef8-ab7a-3c603be3e750', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27705719-461d-420b-a9b8-656219b295b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e0153a0f27f4c68ad2f7910dc78a992', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd6e7bbd2-3ac0-4509-872c-a46868ca499e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e343bc4-f111-4a21-942b-257d99455815, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:06:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:29.755 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 in datapath 27705719-461d-420b-a9b8-656219b295b7 bound to our chassis#033[00m
Nov 22 04:06:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:29.758 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 27705719-461d-420b-a9b8-656219b295b7#033[00m
Nov 22 04:06:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:29.760 162862 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpwr905jmy/privsep.sock']#033[00m
Nov 22 04:06:29 np0005532048 ovn_controller[152872]: 2025-11-22T09:06:29Z|00029|binding|INFO|Setting lport f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 ovn-installed in OVS
Nov 22 04:06:29 np0005532048 ovn_controller[152872]: 2025-11-22T09:06:29Z|00030|binding|INFO|Setting lport f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 up in Southbound
Nov 22 04:06:29 np0005532048 nova_compute[253661]: 2025-11-22 09:06:29.763 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:30.541 162862 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 22 04:06:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:30.542 162862 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpwr905jmy/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 22 04:06:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:30.364 270751 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 22 04:06:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:30.368 270751 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 22 04:06:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:30.370 270751 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Nov 22 04:06:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:30.371 270751 INFO oslo.privsep.daemon [-] privsep daemon running as pid 270751#033[00m
Nov 22 04:06:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:30.545 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[54b32ca0-b11a-48a7-a4c3-68bd08944e33]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:30 np0005532048 nova_compute[253661]: 2025-11-22 09:06:30.714 253665 DEBUG nova.compute.manager [req-123869f2-59fa-492a-8fd2-f0b1e4150241 req-f6885682-51bf-4a8d-856f-b55d23441182 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received event network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:06:30 np0005532048 nova_compute[253661]: 2025-11-22 09:06:30.715 253665 DEBUG oslo_concurrency.lockutils [req-123869f2-59fa-492a-8fd2-f0b1e4150241 req-f6885682-51bf-4a8d-856f-b55d23441182 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:30 np0005532048 nova_compute[253661]: 2025-11-22 09:06:30.715 253665 DEBUG oslo_concurrency.lockutils [req-123869f2-59fa-492a-8fd2-f0b1e4150241 req-f6885682-51bf-4a8d-856f-b55d23441182 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:30 np0005532048 nova_compute[253661]: 2025-11-22 09:06:30.716 253665 DEBUG oslo_concurrency.lockutils [req-123869f2-59fa-492a-8fd2-f0b1e4150241 req-f6885682-51bf-4a8d-856f-b55d23441182 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:30 np0005532048 nova_compute[253661]: 2025-11-22 09:06:30.716 253665 DEBUG nova.compute.manager [req-123869f2-59fa-492a-8fd2-f0b1e4150241 req-f6885682-51bf-4a8d-856f-b55d23441182 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Processing event network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:06:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 88 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 35 op/s
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.632 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.633 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802391.6316912, 4e90ab44-2028-4ef8-ab7a-3c603be3e750 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.633 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] VM Started (Lifecycle Event)#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.635 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.640 253665 INFO nova.virt.libvirt.driver [-] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Instance spawned successfully.#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.641 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.651 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.660 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.664 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.664 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.665 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.665 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.666 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.666 253665 DEBUG nova.virt.libvirt.driver [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.704 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.704 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802391.6345031, 4e90ab44-2028-4ef8-ab7a-3c603be3e750 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.705 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.722 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.726 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802391.6355124, 4e90ab44-2028-4ef8-ab7a-3c603be3e750 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.726 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.744 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.749 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.777 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.786 253665 INFO nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Took 11.68 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.787 253665 DEBUG nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.864 253665 INFO nova.compute.manager [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Took 12.74 seconds to build instance.#033[00m
Nov 22 04:06:31 np0005532048 nova_compute[253661]: 2025-11-22 09:06:31.888 253665 DEBUG oslo_concurrency.lockutils [None req-76ec94e2-e288-45d5-8c60-93d2d3352805 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Nov 22 04:06:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Nov 22 04:06:32 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Nov 22 04:06:32 np0005532048 nova_compute[253661]: 2025-11-22 09:06:32.961 253665 DEBUG nova.compute.manager [req-f4975d3e-7379-4975-aa8b-73346d13067f req-22794981-4518-4e96-a990-5b425360209c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received event network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:06:32 np0005532048 nova_compute[253661]: 2025-11-22 09:06:32.961 253665 DEBUG oslo_concurrency.lockutils [req-f4975d3e-7379-4975-aa8b-73346d13067f req-22794981-4518-4e96-a990-5b425360209c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:32 np0005532048 nova_compute[253661]: 2025-11-22 09:06:32.961 253665 DEBUG oslo_concurrency.lockutils [req-f4975d3e-7379-4975-aa8b-73346d13067f req-22794981-4518-4e96-a990-5b425360209c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:32 np0005532048 nova_compute[253661]: 2025-11-22 09:06:32.962 253665 DEBUG oslo_concurrency.lockutils [req-f4975d3e-7379-4975-aa8b-73346d13067f req-22794981-4518-4e96-a990-5b425360209c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:32 np0005532048 nova_compute[253661]: 2025-11-22 09:06:32.962 253665 DEBUG nova.compute.manager [req-f4975d3e-7379-4975-aa8b-73346d13067f req-22794981-4518-4e96-a990-5b425360209c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] No waiting events found dispatching network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:06:32 np0005532048 nova_compute[253661]: 2025-11-22 09:06:32.962 253665 WARNING nova.compute.manager [req-f4975d3e-7379-4975-aa8b-73346d13067f req-22794981-4518-4e96-a990-5b425360209c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received unexpected event network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:06:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 88 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 23 KiB/s wr, 27 op/s
Nov 22 04:06:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:33.206 270751 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:33.207 270751 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:33.207 270751 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:33 np0005532048 nova_compute[253661]: 2025-11-22 09:06:33.472 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:34 np0005532048 nova_compute[253661]: 2025-11-22 09:06:34.448 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 88 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 25 KiB/s wr, 83 op/s
Nov 22 04:06:36 np0005532048 podman[270970]: 2025-11-22 09:06:36.000765154 +0000 UTC m=+0.686237317 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:06:36 np0005532048 podman[270991]: 2025-11-22 09:06:36.221721432 +0000 UTC m=+0.073352944 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 04:06:36 np0005532048 podman[270970]: 2025-11-22 09:06:36.464399573 +0000 UTC m=+1.149871746 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:06:36 np0005532048 nova_compute[253661]: 2025-11-22 09:06:36.970 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:36 np0005532048 NetworkManager[48920]: <info>  [1763802396.9752] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Nov 22 04:06:36 np0005532048 NetworkManager[48920]: <info>  [1763802396.9762] device (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 04:06:36 np0005532048 NetworkManager[48920]: <info>  [1763802396.9777] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Nov 22 04:06:36 np0005532048 NetworkManager[48920]: <info>  [1763802396.9782] device (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 04:06:36 np0005532048 NetworkManager[48920]: <info>  [1763802396.9797] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Nov 22 04:06:36 np0005532048 NetworkManager[48920]: <info>  [1763802396.9806] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 22 04:06:36 np0005532048 NetworkManager[48920]: <info>  [1763802396.9811] device (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 22 04:06:36 np0005532048 NetworkManager[48920]: <info>  [1763802396.9815] device (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 22 04:06:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:36.994 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9d77bf46-6a0d-4839-b503-5dc02f4b140b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:36.996 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap27705719-41 in ovnmeta-27705719-461d-420b-a9b8-656219b295b7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:06:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:36.999 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap27705719-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:06:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:36.999 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe76fd1-a785-4dd1-992a-f5b2fa3de569]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 88 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 23 KiB/s wr, 136 op/s
Nov 22 04:06:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.022 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[08344934-1748-4fd4-a72b-85e2cdbda257]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.049 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[055fb78d-8d30-4fa7-a3af-6d6af5a513d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:37 np0005532048 nova_compute[253661]: 2025-11-22 09:06:37.054 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:37 np0005532048 nova_compute[253661]: 2025-11-22 09:06:37.063 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.107 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[468b06ba-3768-473e-b941-fe2a8cb0aa85]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.109 162862 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp5jmxc_6u/privsep.sock']#033[00m
Nov 22 04:06:37 np0005532048 nova_compute[253661]: 2025-11-22 09:06:37.457 253665 DEBUG nova.compute.manager [req-06f0993b-e483-49b9-bc79-e57953ca0fb1 req-d76638a6-c7bb-4223-86c1-884d180ad9b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received event network-changed-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:06:37 np0005532048 nova_compute[253661]: 2025-11-22 09:06:37.458 253665 DEBUG nova.compute.manager [req-06f0993b-e483-49b9-bc79-e57953ca0fb1 req-d76638a6-c7bb-4223-86c1-884d180ad9b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Refreshing instance network info cache due to event network-changed-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:06:37 np0005532048 nova_compute[253661]: 2025-11-22 09:06:37.458 253665 DEBUG oslo_concurrency.lockutils [req-06f0993b-e483-49b9-bc79-e57953ca0fb1 req-d76638a6-c7bb-4223-86c1-884d180ad9b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:06:37 np0005532048 nova_compute[253661]: 2025-11-22 09:06:37.459 253665 DEBUG oslo_concurrency.lockutils [req-06f0993b-e483-49b9-bc79-e57953ca0fb1 req-d76638a6-c7bb-4223-86c1-884d180ad9b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:06:37 np0005532048 nova_compute[253661]: 2025-11-22 09:06:37.459 253665 DEBUG nova.network.neutron [req-06f0993b-e483-49b9-bc79-e57953ca0fb1 req-d76638a6-c7bb-4223-86c1-884d180ad9b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Refreshing network info cache for port f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:06:37 np0005532048 podman[271108]: 2025-11-22 09:06:37.565113192 +0000 UTC m=+0.079948752 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:06:37 np0005532048 podman[271107]: 2025-11-22 09:06:37.58532575 +0000 UTC m=+0.103918951 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible)
Nov 22 04:06:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:06:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:06:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:06:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:06:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.884 162862 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 22 04:06:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.886 162862 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp5jmxc_6u/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 22 04:06:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.715 271173 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 22 04:06:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.720 271173 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 22 04:06:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.722 271173 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 22 04:06:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.722 271173 INFO oslo.privsep.daemon [-] privsep daemon running as pid 271173#033[00m
Nov 22 04:06:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:37.889 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[55b8e029-8087-45ea-9a50-863fbd5a5887]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:38.452 271173 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:38.452 271173 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:38.452 271173 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:38 np0005532048 nova_compute[253661]: 2025-11-22 09:06:38.476 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:06:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:06:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:06:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:06:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:06:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Nov 22 04:06:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 88 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 21 KiB/s wr, 125 op/s
Nov 22 04:06:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:06:39 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 7a78a27a-2d2a-4f76-8964-ea0779fd8cba does not exist
Nov 22 04:06:39 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 28155b19-d7ce-436c-960e-f5f27da8f577 does not exist
Nov 22 04:06:39 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 729b9385-c492-4c8a-9771-e844f7990655 does not exist
Nov 22 04:06:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:06:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:06:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:06:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:06:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:06:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:06:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.121 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7246d9-0af7-401f-8b7a-644f55454fdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:39 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:06:39 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:06:39 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.141 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed6308f-ecef-4d6e-9044-e1c387621cc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:39 np0005532048 NetworkManager[48920]: <info>  [1763802399.1427] manager: (tap27705719-40): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Nov 22 04:06:39 np0005532048 nova_compute[253661]: 2025-11-22 09:06:39.148 253665 DEBUG nova.network.neutron [req-06f0993b-e483-49b9-bc79-e57953ca0fb1 req-d76638a6-c7bb-4223-86c1-884d180ad9b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Updated VIF entry in instance network info cache for port f5fa33e1-ab24-4daa-9790-5e0dbcbf4907. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:06:39 np0005532048 nova_compute[253661]: 2025-11-22 09:06:39.148 253665 DEBUG nova.network.neutron [req-06f0993b-e483-49b9-bc79-e57953ca0fb1 req-d76638a6-c7bb-4223-86c1-884d180ad9b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Updating instance_info_cache with network_info: [{"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:06:39 np0005532048 systemd-udevd[271363]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.180 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[79e340c1-8563-432c-ad8b-c13610b0d64c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.183 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2e798e78-ad49-40c0-81e0-14fbc0261fde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:39 np0005532048 NetworkManager[48920]: <info>  [1763802399.2203] device (tap27705719-40): carrier: link connected
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.227 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f5e5a4-c9e5-4b92-a50f-6f277a7fb75e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.251 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc18d1e1-65d4-404a-85f0-60b66698b2c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27705719-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:b6:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524995, 'reachable_time': 36724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271396, 'error': None, 'target': 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.269 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2870749c-4efd-4f5b-84a2-d9fd3a781506]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:b616'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524995, 'tstamp': 524995}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271407, 'error': None, 'target': 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.288 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[81608563-a0e8-49ef-9784-f84bd8db2193]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27705719-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:b6:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524995, 'reachable_time': 36724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271410, 'error': None, 'target': 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:39 np0005532048 nova_compute[253661]: 2025-11-22 09:06:39.293 253665 DEBUG oslo_concurrency.lockutils [req-06f0993b-e483-49b9-bc79-e57953ca0fb1 req-d76638a6-c7bb-4223-86c1-884d180ad9b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4e90ab44-2028-4ef8-ab7a-3c603be3e750" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.319 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d882f45e-e56b-4f2c-ad7f-8837b780b722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.424 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2e3bb8-c3b7-4ddb-8cdb-37a21b98bcd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.426 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27705719-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.426 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.426 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27705719-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:06:39 np0005532048 NetworkManager[48920]: <info>  [1763802399.4300] manager: (tap27705719-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Nov 22 04:06:39 np0005532048 nova_compute[253661]: 2025-11-22 09:06:39.429 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:39 np0005532048 kernel: tap27705719-40: entered promiscuous mode
Nov 22 04:06:39 np0005532048 nova_compute[253661]: 2025-11-22 09:06:39.434 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.436 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap27705719-40, col_values=(('external_ids', {'iface-id': '66390fc9-eaea-4181-96b2-4d926c45e6e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:06:39 np0005532048 nova_compute[253661]: 2025-11-22 09:06:39.437 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:06:39Z|00031|binding|INFO|Releasing lport 66390fc9-eaea-4181-96b2-4d926c45e6e5 from this chassis (sb_readonly=0)
Nov 22 04:06:39 np0005532048 nova_compute[253661]: 2025-11-22 09:06:39.459 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:39 np0005532048 nova_compute[253661]: 2025-11-22 09:06:39.460 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.461 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/27705719-461d-420b-a9b8-656219b295b7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/27705719-461d-420b-a9b8-656219b295b7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.462 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[95ff9dee-5b49-44df-b4b9-49f7f1057bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.464 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-27705719-461d-420b-a9b8-656219b295b7
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/27705719-461d-420b-a9b8-656219b295b7.pid.haproxy
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:06:39 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 27705719-461d-420b-a9b8-656219b295b7
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:06:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:39.465 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'env', 'PROCESS_TAG=haproxy-27705719-461d-420b-a9b8-656219b295b7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/27705719-461d-420b-a9b8-656219b295b7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:06:39 np0005532048 podman[271485]: 2025-11-22 09:06:39.732554838 +0000 UTC m=+0.025932627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:06:39 np0005532048 podman[271485]: 2025-11-22 09:06:39.90944112 +0000 UTC m=+0.202818889 container create e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_feynman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:06:40 np0005532048 systemd[1]: Started libpod-conmon-e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291.scope.
Nov 22 04:06:40 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:06:40 np0005532048 podman[271527]: 2025-11-22 09:06:40.114602807 +0000 UTC m=+0.137537183 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:06:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:06:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:06:40 np0005532048 podman[271485]: 2025-11-22 09:06:40.530551464 +0000 UTC m=+0.823929263 container init e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 04:06:40 np0005532048 podman[271485]: 2025-11-22 09:06:40.544610974 +0000 UTC m=+0.837988743 container start e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_feynman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:06:40 np0005532048 frosty_feynman[271554]: 167 167
Nov 22 04:06:40 np0005532048 systemd[1]: libpod-e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291.scope: Deactivated successfully.
Nov 22 04:06:40 np0005532048 conmon[271554]: conmon e7e73344c562f26e8948 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291.scope/container/memory.events
Nov 22 04:06:40 np0005532048 podman[271485]: 2025-11-22 09:06:40.845489992 +0000 UTC m=+1.138867781 container attach e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_feynman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:06:40 np0005532048 podman[271485]: 2025-11-22 09:06:40.849296374 +0000 UTC m=+1.142674143 container died e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 22 04:06:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 88 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.4 KiB/s wr, 122 op/s
Nov 22 04:06:41 np0005532048 podman[271527]: 2025-11-22 09:06:41.239859849 +0000 UTC m=+1.262794195 container create ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 04:06:41 np0005532048 systemd[1]: Started libpod-conmon-ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51.scope.
Nov 22 04:06:41 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:06:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676a94e4d511baf87b86317c8fc75c84c67f7a8300dfb76d8ed2f43fa82c4669/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:41 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2ed641bd7a5d551a1a7633b060359d388c3f7a16daff85e9fdf48761f0fcb46b-merged.mount: Deactivated successfully.
Nov 22 04:06:41 np0005532048 podman[271485]: 2025-11-22 09:06:41.828971928 +0000 UTC m=+2.122349697 container remove e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 04:06:42 np0005532048 podman[271527]: 2025-11-22 09:06:42.078996138 +0000 UTC m=+2.101930504 container init ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:06:42 np0005532048 podman[271527]: 2025-11-22 09:06:42.086816448 +0000 UTC m=+2.109750794 container start ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:06:42 np0005532048 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[271582]: [NOTICE]   (271604) : New worker (271609) forked
Nov 22 04:06:42 np0005532048 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[271582]: [NOTICE]   (271604) : Loading success.
Nov 22 04:06:42 np0005532048 systemd[1]: libpod-conmon-e7e73344c562f26e89483b611ae64fc445e650d5b3fd0fc2ac0c81dcc0519291.scope: Deactivated successfully.
Nov 22 04:06:42 np0005532048 podman[271592]: 2025-11-22 09:06:42.092791562 +0000 UTC m=+0.121519437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:06:42 np0005532048 podman[271592]: 2025-11-22 09:06:42.189708983 +0000 UTC m=+0.218436848 container create 986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_borg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 04:06:42 np0005532048 podman[271513]: 2025-11-22 09:06:42.225798135 +0000 UTC m=+2.272175638 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:06:42 np0005532048 systemd[1]: Started libpod-conmon-986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34.scope.
Nov 22 04:06:42 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:06:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edb5ab73d99db128480dd62dc744225c28fadb44b933b96ddc561199bf9291b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edb5ab73d99db128480dd62dc744225c28fadb44b933b96ddc561199bf9291b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edb5ab73d99db128480dd62dc744225c28fadb44b933b96ddc561199bf9291b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edb5ab73d99db128480dd62dc744225c28fadb44b933b96ddc561199bf9291b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edb5ab73d99db128480dd62dc744225c28fadb44b933b96ddc561199bf9291b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:42 np0005532048 podman[271592]: 2025-11-22 09:06:42.334109051 +0000 UTC m=+0.362836946 container init 986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_borg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:06:42 np0005532048 podman[271592]: 2025-11-22 09:06:42.344283156 +0000 UTC m=+0.373011021 container start 986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_borg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 04:06:42 np0005532048 podman[271592]: 2025-11-22 09:06:42.383582135 +0000 UTC m=+0.412310020 container attach 986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:06:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 88 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.0 KiB/s wr, 106 op/s
Nov 22 04:06:43 np0005532048 nova_compute[253661]: 2025-11-22 09:06:43.479 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:43 np0005532048 fervent_borg[271620]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:06:43 np0005532048 fervent_borg[271620]: --> relative data size: 1.0
Nov 22 04:06:43 np0005532048 fervent_borg[271620]: --> All data devices are unavailable
Nov 22 04:06:43 np0005532048 systemd[1]: libpod-986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34.scope: Deactivated successfully.
Nov 22 04:06:43 np0005532048 podman[271592]: 2025-11-22 09:06:43.558908385 +0000 UTC m=+1.587636260 container died 986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 04:06:43 np0005532048 systemd[1]: libpod-986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34.scope: Consumed 1.142s CPU time.
Nov 22 04:06:43 np0005532048 systemd[1]: var-lib-containers-storage-overlay-3edb5ab73d99db128480dd62dc744225c28fadb44b933b96ddc561199bf9291b-merged.mount: Deactivated successfully.
Nov 22 04:06:43 np0005532048 podman[271592]: 2025-11-22 09:06:43.644027964 +0000 UTC m=+1.672755829 container remove 986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_borg, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:06:43 np0005532048 systemd[1]: libpod-conmon-986817f29448b6fbf9b6cb619581debac3daad7c74820accf8ab95fcfb81ba34.scope: Deactivated successfully.
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.802337) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802403802388, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1151, "num_deletes": 506, "total_data_size": 1168463, "memory_usage": 1198000, "flush_reason": "Manual Compaction"}
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802403810214, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 869500, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23265, "largest_seqno": 24415, "table_properties": {"data_size": 864904, "index_size": 1608, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14042, "raw_average_key_size": 18, "raw_value_size": 853305, "raw_average_value_size": 1145, "num_data_blocks": 72, "num_entries": 745, "num_filter_entries": 745, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763802334, "oldest_key_time": 1763802334, "file_creation_time": 1763802403, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 7936 microseconds, and 4019 cpu microseconds.
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.810277) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 869500 bytes OK
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.810310) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.812686) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.812723) EVENT_LOG_v1 {"time_micros": 1763802403812713, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.812748) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1162048, prev total WAL file size 1162048, number of live WAL files 2.
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.813412) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(849KB)], [53(9238KB)]
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802403813467, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10329974, "oldest_snapshot_seqno": -1}
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4601 keys, 7206770 bytes, temperature: kUnknown
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802403881363, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7206770, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7175822, "index_size": 18358, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11525, "raw_key_size": 115307, "raw_average_key_size": 25, "raw_value_size": 7092369, "raw_average_value_size": 1541, "num_data_blocks": 760, "num_entries": 4601, "num_filter_entries": 4601, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763802403, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.881649) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7206770 bytes
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.884027) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.0 rd, 106.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.0 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(20.2) write-amplify(8.3) OK, records in: 5609, records dropped: 1008 output_compression: NoCompression
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.884249) EVENT_LOG_v1 {"time_micros": 1763802403884038, "job": 28, "event": "compaction_finished", "compaction_time_micros": 67978, "compaction_time_cpu_micros": 18828, "output_level": 6, "num_output_files": 1, "total_output_size": 7206770, "num_input_records": 5609, "num_output_records": 4601, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802403884578, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802403886157, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.813352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.886290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.886297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.886300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.886301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:06:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:06:43.886303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:06:44 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 04:06:44 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 04:06:44 np0005532048 podman[271804]: 2025-11-22 09:06:44.334417162 +0000 UTC m=+0.041902050 container create 81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 04:06:44 np0005532048 systemd[1]: Started libpod-conmon-81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3.scope.
Nov 22 04:06:44 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:06:44 np0005532048 podman[271804]: 2025-11-22 09:06:44.317911484 +0000 UTC m=+0.025396392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:06:44 np0005532048 podman[271804]: 2025-11-22 09:06:44.426599431 +0000 UTC m=+0.134084319 container init 81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 04:06:44 np0005532048 podman[271804]: 2025-11-22 09:06:44.434723806 +0000 UTC m=+0.142208704 container start 81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:06:44 np0005532048 podman[271804]: 2025-11-22 09:06:44.438424045 +0000 UTC m=+0.145908933 container attach 81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:06:44 np0005532048 eloquent_kepler[271821]: 167 167
Nov 22 04:06:44 np0005532048 systemd[1]: libpod-81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3.scope: Deactivated successfully.
Nov 22 04:06:44 np0005532048 podman[271804]: 2025-11-22 09:06:44.441501449 +0000 UTC m=+0.148986337 container died 81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 04:06:44 np0005532048 nova_compute[253661]: 2025-11-22 09:06:44.464 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:44 np0005532048 systemd[1]: var-lib-containers-storage-overlay-eb4614ba6e52a28ff53134013a2170860323faa1b10def68bb92168122f30f6c-merged.mount: Deactivated successfully.
Nov 22 04:06:44 np0005532048 podman[271804]: 2025-11-22 09:06:44.493696486 +0000 UTC m=+0.201181374 container remove 81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:06:44 np0005532048 systemd[1]: libpod-conmon-81128a8e25bf674c9d50e64b3919a1ee24c3fe32e6d8856741a4c3730476c7a3.scope: Deactivated successfully.
Nov 22 04:06:44 np0005532048 podman[271845]: 2025-11-22 09:06:44.672591591 +0000 UTC m=+0.038905536 container create a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sinoussi, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:06:44 np0005532048 systemd[1]: Started libpod-conmon-a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2.scope.
Nov 22 04:06:44 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:06:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43df9e9fd36b285df50039ed4900afef75ff86842b69fae5ad69562cbfbc32f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43df9e9fd36b285df50039ed4900afef75ff86842b69fae5ad69562cbfbc32f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43df9e9fd36b285df50039ed4900afef75ff86842b69fae5ad69562cbfbc32f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43df9e9fd36b285df50039ed4900afef75ff86842b69fae5ad69562cbfbc32f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:44 np0005532048 podman[271845]: 2025-11-22 09:06:44.657062198 +0000 UTC m=+0.023376163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:06:44 np0005532048 podman[271845]: 2025-11-22 09:06:44.756419989 +0000 UTC m=+0.122733954 container init a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:06:44 np0005532048 podman[271845]: 2025-11-22 09:06:44.763176992 +0000 UTC m=+0.129490937 container start a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 04:06:44 np0005532048 podman[271845]: 2025-11-22 09:06:44.766397659 +0000 UTC m=+0.132711604 container attach a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sinoussi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 04:06:44 np0005532048 nova_compute[253661]: 2025-11-22 09:06:44.847 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:44 np0005532048 nova_compute[253661]: 2025-11-22 09:06:44.848 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:44 np0005532048 nova_compute[253661]: 2025-11-22 09:06:44.888 253665 DEBUG nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:06:44 np0005532048 nova_compute[253661]: 2025-11-22 09:06:44.973 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:44 np0005532048 nova_compute[253661]: 2025-11-22 09:06:44.974 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:44 np0005532048 nova_compute[253661]: 2025-11-22 09:06:44.984 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:06:44 np0005532048 nova_compute[253661]: 2025-11-22 09:06:44.985 253665 INFO nova.compute.claims [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:06:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 88 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.3 KiB/s wr, 66 op/s
Nov 22 04:06:45 np0005532048 nova_compute[253661]: 2025-11-22 09:06:45.189 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:06:45 np0005532048 ovn_controller[152872]: 2025-11-22T09:06:45Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:78:b6:44 10.100.0.6
Nov 22 04:06:45 np0005532048 ovn_controller[152872]: 2025-11-22T09:06:45Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:78:b6:44 10.100.0.6
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]: {
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:    "0": [
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:        {
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "devices": [
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "/dev/loop3"
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            ],
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "lv_name": "ceph_lv0",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "lv_size": "21470642176",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "name": "ceph_lv0",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "tags": {
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.cluster_name": "ceph",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.crush_device_class": "",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.encrypted": "0",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.osd_id": "0",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.type": "block",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.vdo": "0"
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            },
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "type": "block",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "vg_name": "ceph_vg0"
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:        }
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:    ],
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:    "1": [
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:        {
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "devices": [
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "/dev/loop4"
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            ],
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "lv_name": "ceph_lv1",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "lv_size": "21470642176",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "name": "ceph_lv1",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "tags": {
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.cluster_name": "ceph",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.crush_device_class": "",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.encrypted": "0",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.osd_id": "1",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.type": "block",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.vdo": "0"
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            },
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "type": "block",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "vg_name": "ceph_vg1"
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:        }
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:    ],
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:    "2": [
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:        {
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "devices": [
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "/dev/loop5"
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            ],
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "lv_name": "ceph_lv2",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "lv_size": "21470642176",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "name": "ceph_lv2",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "tags": {
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.cluster_name": "ceph",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.crush_device_class": "",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.encrypted": "0",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.osd_id": "2",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.type": "block",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:                "ceph.vdo": "0"
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            },
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "type": "block",
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:            "vg_name": "ceph_vg2"
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:        }
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]:    ]
Nov 22 04:06:45 np0005532048 lucid_sinoussi[271862]: }
Nov 22 04:06:45 np0005532048 systemd[1]: libpod-a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2.scope: Deactivated successfully.
Nov 22 04:06:45 np0005532048 podman[271845]: 2025-11-22 09:06:45.586293855 +0000 UTC m=+0.952607820 container died a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 04:06:45 np0005532048 systemd[1]: var-lib-containers-storage-overlay-43df9e9fd36b285df50039ed4900afef75ff86842b69fae5ad69562cbfbc32f0-merged.mount: Deactivated successfully.
Nov 22 04:06:45 np0005532048 podman[271845]: 2025-11-22 09:06:45.661358172 +0000 UTC m=+1.027672117 container remove a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sinoussi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:06:45 np0005532048 systemd[1]: libpod-conmon-a1e44e43f2efcb5d193d3a80a5e938b0a3d1a69ba69f5bc857f6517ccdc551d2.scope: Deactivated successfully.
Nov 22 04:06:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:06:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3488326385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:06:45 np0005532048 nova_compute[253661]: 2025-11-22 09:06:45.734 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:45 np0005532048 nova_compute[253661]: 2025-11-22 09:06:45.746 253665 DEBUG nova.compute.provider_tree [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:06:45 np0005532048 nova_compute[253661]: 2025-11-22 09:06:45.766 253665 DEBUG nova.scheduler.client.report [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:06:45 np0005532048 nova_compute[253661]: 2025-11-22 09:06:45.865 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:45 np0005532048 nova_compute[253661]: 2025-11-22 09:06:45.869 253665 DEBUG nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:06:45 np0005532048 nova_compute[253661]: 2025-11-22 09:06:45.947 253665 DEBUG nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:06:45 np0005532048 nova_compute[253661]: 2025-11-22 09:06:45.948 253665 DEBUG nova.network.neutron [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:06:45 np0005532048 nova_compute[253661]: 2025-11-22 09:06:45.986 253665 INFO nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.093 253665 DEBUG nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.275 253665 DEBUG nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.277 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.278 253665 INFO nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Creating image(s)#033[00m
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.306 253665 DEBUG nova.storage.rbd_utils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.337 253665 DEBUG nova.storage.rbd_utils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:06:46 np0005532048 podman[272051]: 2025-11-22 09:06:46.358976024 +0000 UTC m=+0.053323404 container create 520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.366 253665 DEBUG nova.storage.rbd_utils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.375 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:06:46 np0005532048 systemd[1]: Started libpod-conmon-520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52.scope.
Nov 22 04:06:46 np0005532048 podman[272051]: 2025-11-22 09:06:46.335085539 +0000 UTC m=+0.029432949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:06:46 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.444 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.446 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.447 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.447 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:46 np0005532048 podman[272051]: 2025-11-22 09:06:46.449197876 +0000 UTC m=+0.143545276 container init 520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:06:46 np0005532048 podman[272051]: 2025-11-22 09:06:46.45724964 +0000 UTC m=+0.151597020 container start 520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:06:46 np0005532048 podman[272051]: 2025-11-22 09:06:46.461218085 +0000 UTC m=+0.155565485 container attach 520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:06:46 np0005532048 hungry_swanson[272114]: 167 167
Nov 22 04:06:46 np0005532048 systemd[1]: libpod-520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52.scope: Deactivated successfully.
Nov 22 04:06:46 np0005532048 podman[272051]: 2025-11-22 09:06:46.467046756 +0000 UTC m=+0.161394156 container died 520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.469 253665 DEBUG nova.storage.rbd_utils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.474 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:06:46 np0005532048 systemd[1]: var-lib-containers-storage-overlay-79f0480eb4c88024d2265b3ed1e228bf200adfc052998d7e0f7c3d434e233c6f-merged.mount: Deactivated successfully.
Nov 22 04:06:46 np0005532048 podman[272051]: 2025-11-22 09:06:46.518018332 +0000 UTC m=+0.212365712 container remove 520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 04:06:46 np0005532048 systemd[1]: libpod-conmon-520c107509e4e4f9a79c2c96799d435a563004ea54399931ffc01a230da34a52.scope: Deactivated successfully.
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.539 253665 DEBUG nova.network.neutron [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.540 253665 DEBUG nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:06:46 np0005532048 podman[272176]: 2025-11-22 09:06:46.735074687 +0000 UTC m=+0.073858358 container create 252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 04:06:46 np0005532048 podman[272176]: 2025-11-22 09:06:46.688827944 +0000 UTC m=+0.027611635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:06:46 np0005532048 systemd[1]: Started libpod-conmon-252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003.scope.
Nov 22 04:06:46 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:06:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406e797e5d93b0adb1d270bf8533f19464a21d30f96e112595338ffcadd72825/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406e797e5d93b0adb1d270bf8533f19464a21d30f96e112595338ffcadd72825/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406e797e5d93b0adb1d270bf8533f19464a21d30f96e112595338ffcadd72825/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406e797e5d93b0adb1d270bf8533f19464a21d30f96e112595338ffcadd72825/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.853 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:46 np0005532048 podman[272176]: 2025-11-22 09:06:46.900998981 +0000 UTC m=+0.239782672 container init 252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:06:46 np0005532048 podman[272176]: 2025-11-22 09:06:46.909290831 +0000 UTC m=+0.248074512 container start 252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:06:46 np0005532048 podman[272176]: 2025-11-22 09:06:46.932573041 +0000 UTC m=+0.271356742 container attach 252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.936 253665 DEBUG nova.storage.rbd_utils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] resizing rbd image 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.981 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "bfc23def-6d15-4b5e-959e-3165bc676f9c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:46 np0005532048 nova_compute[253661]: 2025-11-22 09:06:46.981 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "bfc23def-6d15-4b5e-959e-3165bc676f9c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 108 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 207 KiB/s rd, 1.2 MiB/s wr, 57 op/s
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.040 253665 DEBUG nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.048 253665 DEBUG nova.objects.instance [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lazy-loading 'migration_context' on Instance uuid 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.059 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.059 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Ensure instance console log exists: /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.059 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.060 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.060 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.061 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.067 253665 WARNING nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.073 253665 DEBUG nova.virt.libvirt.host [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.078 253665 DEBUG nova.virt.libvirt.host [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.082 253665 DEBUG nova.virt.libvirt.host [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.083 253665 DEBUG nova.virt.libvirt.host [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.083 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.083 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.084 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.084 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.084 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.084 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.085 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.085 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.085 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.085 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.085 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.086 253665 DEBUG nova.virt.hardware [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.089 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.283 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.284 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.292 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.293 253665 INFO nova.compute.claims [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.446 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:06:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:06:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2770213352' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.581 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.606 253665 DEBUG nova.storage.rbd_utils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.610 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:06:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:06:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/21243115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.905 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.912 253665 DEBUG nova.compute.provider_tree [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.927 253665 DEBUG nova.scheduler.client.report [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.993 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:47 np0005532048 nova_compute[253661]: 2025-11-22 09:06:47.994 253665 DEBUG nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]: {
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:        "osd_id": 1,
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:        "type": "bluestore"
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:    },
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:        "osd_id": 0,
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:        "type": "bluestore"
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:    },
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:        "osd_id": 2,
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:        "type": "bluestore"
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]:    }
Nov 22 04:06:48 np0005532048 sharp_jennings[272192]: }
Nov 22 04:06:48 np0005532048 systemd[1]: libpod-252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003.scope: Deactivated successfully.
Nov 22 04:06:48 np0005532048 systemd[1]: libpod-252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003.scope: Consumed 1.155s CPU time.
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.094 253665 DEBUG nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.094 253665 DEBUG nova.network.neutron [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:06:48 np0005532048 podman[272379]: 2025-11-22 09:06:48.123535158 +0000 UTC m=+0.028514877 container died 252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.124 253665 INFO nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:06:48 np0005532048 systemd[1]: var-lib-containers-storage-overlay-406e797e5d93b0adb1d270bf8533f19464a21d30f96e112595338ffcadd72825-merged.mount: Deactivated successfully.
Nov 22 04:06:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:06:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2905890687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:06:48 np0005532048 podman[272379]: 2025-11-22 09:06:48.192332304 +0000 UTC m=+0.097312003 container remove 252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jennings, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:06:48 np0005532048 systemd[1]: libpod-conmon-252ca392ef9c71f4619d1d6707216267ea74560c5de77e8139faba24d6d1a003.scope: Deactivated successfully.
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.219 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.221 253665 DEBUG nova.objects.instance [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.234 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  <uuid>4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76</uuid>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  <name>instance-00000005</name>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <nova:name>tempest-ListImageFiltersTestJSON-server-1457643607</nova:name>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:06:47</nova:creationTime>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:06:48 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:        <nova:user uuid="62c2fa81e90346db80e713e8b110de6e">tempest-ListImageFiltersTestJSON-1780489870-project-member</nova:user>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:        <nova:project uuid="4d149c68c4874b3bbb2b6c134b8855e0">tempest-ListImageFiltersTestJSON-1780489870</nova:project>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <entry name="serial">4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76</entry>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <entry name="uuid">4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76</entry>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk">
Nov 22 04:06:48 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:06:48 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk.config">
Nov 22 04:06:48 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:06:48 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76/console.log" append="off"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:06:48 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:06:48 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:06:48 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:06:48 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:06:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:06:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:06:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.249 253665 DEBUG nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:06:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:06:48 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 404adf2b-76d2-4c7c-80d1-8f19d39135b8 does not exist
Nov 22 04:06:48 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 8ec49176-660e-47c4-9382-9d9213404e56 does not exist
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.296 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.297 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.297 253665 INFO nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Using config drive
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.323 253665 DEBUG nova.storage.rbd_utils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.395 253665 DEBUG nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.397 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.397 253665 INFO nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Creating image(s)
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.421 253665 DEBUG nova.storage.rbd_utils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image bfc23def-6d15-4b5e-959e-3165bc676f9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.445 253665 DEBUG nova.storage.rbd_utils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image bfc23def-6d15-4b5e-959e-3165bc676f9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.472 253665 DEBUG nova.storage.rbd_utils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image bfc23def-6d15-4b5e-959e-3165bc676f9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.476 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.495 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.523 253665 INFO nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Creating config drive at /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76/disk.config
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.529 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzstiju50 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.555 253665 DEBUG nova.network.neutron [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.556 253665 DEBUG nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.556 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.557 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.557 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.558 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.577 253665 DEBUG nova.storage.rbd_utils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image bfc23def-6d15-4b5e-959e-3165bc676f9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.580 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 bfc23def-6d15-4b5e-959e-3165bc676f9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.663 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzstiju50" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.689 253665 DEBUG nova.storage.rbd_utils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.695 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76/disk.config 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:06:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.941 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 bfc23def-6d15-4b5e-959e-3165bc676f9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.977 253665 DEBUG oslo_concurrency.processutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76/disk.config 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:06:48 np0005532048 nova_compute[253661]: 2025-11-22 09:06:48.978 253665 INFO nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Deleting local config drive /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76/disk.config because it was imported into RBD.
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.014 253665 DEBUG nova.storage.rbd_utils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] resizing rbd image bfc23def-6d15-4b5e-959e-3165bc676f9c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:06:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 151 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 399 KiB/s rd, 4.0 MiB/s wr, 87 op/s
Nov 22 04:06:49 np0005532048 systemd-machined[215941]: New machine qemu-5-instance-00000005.
Nov 22 04:06:49 np0005532048 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.119 253665 DEBUG nova.objects.instance [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lazy-loading 'migration_context' on Instance uuid bfc23def-6d15-4b5e-959e-3165bc676f9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:06:49 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:06:49 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.133 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.133 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Ensure instance console log exists: /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.134 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.134 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.134 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.135 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.140 253665 WARNING nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.145 253665 DEBUG nova.virt.libvirt.host [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.145 253665 DEBUG nova.virt.libvirt.host [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.148 253665 DEBUG nova.virt.libvirt.host [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.148 253665 DEBUG nova.virt.libvirt.host [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.149 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.149 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.149 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.149 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.150 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.150 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.150 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.150 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.150 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.151 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.151 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.151 253665 DEBUG nova.virt.hardware [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.154 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.467 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.499 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802409.498485, 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.500 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] VM Resumed (Lifecycle Event)
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.503 253665 DEBUG nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.503 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.507 253665 INFO nova.virt.libvirt.driver [-] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Instance spawned successfully.
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.507 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.526 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.532 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.535 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.535 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.536 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.536 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.537 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.537 253665 DEBUG nova.virt.libvirt.driver [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.564 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.564 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802409.500015, 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.565 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] VM Started (Lifecycle Event)#033[00m
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.588 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.591 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.610 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:06:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:06:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1969957770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.650 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.669 253665 DEBUG nova.storage.rbd_utils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image bfc23def-6d15-4b5e-959e-3165bc676f9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:06:49 np0005532048 nova_compute[253661]: 2025-11-22 09:06:49.672 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:06:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:06:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790742848' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:06:50 np0005532048 nova_compute[253661]: 2025-11-22 09:06:50.145 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:06:50 np0005532048 nova_compute[253661]: 2025-11-22 09:06:50.148 253665 DEBUG nova.objects.instance [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid bfc23def-6d15-4b5e-959e-3165bc676f9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:06:50 np0005532048 nova_compute[253661]: 2025-11-22 09:06:50.161 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  <uuid>bfc23def-6d15-4b5e-959e-3165bc676f9c</uuid>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  <name>instance-00000006</name>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <nova:name>tempest-ListImageFiltersTestJSON-server-1335877445</nova:name>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:06:49</nova:creationTime>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:06:50 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:        <nova:user uuid="62c2fa81e90346db80e713e8b110de6e">tempest-ListImageFiltersTestJSON-1780489870-project-member</nova:user>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:        <nova:project uuid="4d149c68c4874b3bbb2b6c134b8855e0">tempest-ListImageFiltersTestJSON-1780489870</nova:project>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <entry name="serial">bfc23def-6d15-4b5e-959e-3165bc676f9c</entry>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <entry name="uuid">bfc23def-6d15-4b5e-959e-3165bc676f9c</entry>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/bfc23def-6d15-4b5e-959e-3165bc676f9c_disk">
Nov 22 04:06:50 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:06:50 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/bfc23def-6d15-4b5e-959e-3165bc676f9c_disk.config">
Nov 22 04:06:50 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:06:50 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c/console.log" append="off"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:06:50 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:06:50 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:06:50 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:06:50 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:06:50 np0005532048 nova_compute[253661]: 2025-11-22 09:06:50.174 253665 INFO nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Took 3.90 seconds to spawn the instance on the hypervisor.
Nov 22 04:06:50 np0005532048 nova_compute[253661]: 2025-11-22 09:06:50.175 253665 DEBUG nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:06:50 np0005532048 nova_compute[253661]: 2025-11-22 09:06:50.222 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:06:50 np0005532048 nova_compute[253661]: 2025-11-22 09:06:50.222 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:06:50 np0005532048 nova_compute[253661]: 2025-11-22 09:06:50.223 253665 INFO nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Using config drive
Nov 22 04:06:50 np0005532048 nova_compute[253661]: 2025-11-22 09:06:50.245 253665 DEBUG nova.storage.rbd_utils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image bfc23def-6d15-4b5e-959e-3165bc676f9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:06:50 np0005532048 nova_compute[253661]: 2025-11-22 09:06:50.386 253665 INFO nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Creating config drive at /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c/disk.config
Nov 22 04:06:50 np0005532048 nova_compute[253661]: 2025-11-22 09:06:50.394 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph_s3qlga execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:06:50 np0005532048 nova_compute[253661]: 2025-11-22 09:06:50.528 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph_s3qlga" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:06:50 np0005532048 nova_compute[253661]: 2025-11-22 09:06:50.555 253665 DEBUG nova.storage.rbd_utils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] rbd image bfc23def-6d15-4b5e-959e-3165bc676f9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:06:50 np0005532048 nova_compute[253661]: 2025-11-22 09:06:50.559 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c/disk.config bfc23def-6d15-4b5e-959e-3165bc676f9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:06:50 np0005532048 nova_compute[253661]: 2025-11-22 09:06:50.593 253665 INFO nova.compute.manager [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Took 5.65 seconds to build instance.
Nov 22 04:06:50 np0005532048 nova_compute[253661]: 2025-11-22 09:06:50.684 253665 DEBUG oslo_concurrency.lockutils [None req-a5b928ce-cf6b-41e1-8424-e486b235d38f 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 151 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 336 KiB/s rd, 3.3 MiB/s wr, 73 op/s
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.342 253665 DEBUG oslo_concurrency.processutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c/disk.config bfc23def-6d15-4b5e-959e-3165bc676f9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.782s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.343 253665 INFO nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Deleting local config drive /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c/disk.config because it was imported into RBD.
Nov 22 04:06:51 np0005532048 systemd-machined[215941]: New machine qemu-6-instance-00000006.
Nov 22 04:06:51 np0005532048 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.857 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.860 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.861 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802411.858862, bfc23def-6d15-4b5e-959e-3165bc676f9c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.861 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] VM Resumed (Lifecycle Event)
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.863 253665 DEBUG nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.864 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.868 253665 INFO nova.virt.libvirt.driver [-] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Instance spawned successfully.
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.868 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.897 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.903 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.904 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.905 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.905 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.906 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.906 253665 DEBUG nova.virt.libvirt.driver [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.914 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.937 253665 DEBUG nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.940 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.941 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802411.8589442, bfc23def-6d15-4b5e-959e-3165bc676f9c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.941 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] VM Started (Lifecycle Event)
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.966 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.971 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:06:51 np0005532048 nova_compute[253661]: 2025-11-22 09:06:51.984 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:06:52 np0005532048 nova_compute[253661]: 2025-11-22 09:06:52.102 253665 INFO nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Took 3.71 seconds to spawn the instance on the hypervisor.
Nov 22 04:06:52 np0005532048 nova_compute[253661]: 2025-11-22 09:06:52.103 253665 DEBUG nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:06:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:06:52
Nov 22 04:06:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:06:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:06:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'default.rgw.control', '.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.mgr', 'backups']
Nov 22 04:06:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:06:52 np0005532048 nova_compute[253661]: 2025-11-22 09:06:52.180 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:52 np0005532048 nova_compute[253661]: 2025-11-22 09:06:52.181 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:52 np0005532048 nova_compute[253661]: 2025-11-22 09:06:52.192 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:06:52 np0005532048 nova_compute[253661]: 2025-11-22 09:06:52.193 253665 INFO nova.compute.claims [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:06:52 np0005532048 nova_compute[253661]: 2025-11-22 09:06:52.551 253665 INFO nova.compute.manager [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Took 5.30 seconds to build instance.
Nov 22 04:06:52 np0005532048 nova_compute[253661]: 2025-11-22 09:06:52.598 253665 DEBUG oslo_concurrency.lockutils [None req-97bdabfb-50b4-4397-92de-347fbe3f05fd 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "bfc23def-6d15-4b5e-959e-3165bc676f9c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:52 np0005532048 nova_compute[253661]: 2025-11-22 09:06:52.674 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:06:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:06:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:06:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:06:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:06:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:06:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:06:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 185 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 1016 KiB/s rd, 4.7 MiB/s wr, 145 op/s
Nov 22 04:06:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:06:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3364691987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.161 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.168 253665 DEBUG nova.compute.provider_tree [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.182 253665 DEBUG nova.scheduler.client.report [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.450 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.451 253665 DEBUG nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.497 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.540 253665 DEBUG nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.541 253665 DEBUG nova.network.neutron [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.561 253665 INFO nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.638 253665 DEBUG nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:06:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.836 253665 DEBUG nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.839 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.839 253665 INFO nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Creating image(s)
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.869 253665 DEBUG nova.storage.rbd_utils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.900 253665 DEBUG nova.storage.rbd_utils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.931 253665 DEBUG nova.storage.rbd_utils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.938 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.966 253665 DEBUG nova.network.neutron [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 04:06:53 np0005532048 nova_compute[253661]: 2025-11-22 09:06:53.967 253665 DEBUG nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:06:54 np0005532048 nova_compute[253661]: 2025-11-22 09:06:54.019 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:06:54 np0005532048 nova_compute[253661]: 2025-11-22 09:06:54.021 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:54 np0005532048 nova_compute[253661]: 2025-11-22 09:06:54.022 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:54 np0005532048 nova_compute[253661]: 2025-11-22 09:06:54.022 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:54 np0005532048 nova_compute[253661]: 2025-11-22 09:06:54.051 253665 DEBUG nova.storage.rbd_utils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:06:54 np0005532048 nova_compute[253661]: 2025-11-22 09:06:54.057 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:06:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:06:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:06:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:06:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:06:54 np0005532048 nova_compute[253661]: 2025-11-22 09:06:54.469 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:06:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:06:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:06:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:06:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:06:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:06:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 214 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.7 MiB/s wr, 236 op/s
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.338 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.426 253665 DEBUG nova.storage.rbd_utils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] resizing rbd image 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.548 253665 DEBUG nova.objects.instance [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lazy-loading 'migration_context' on Instance uuid 1bb24315-1978-4dbf-a16d-5e7b84a25d17 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.672 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.672 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Ensure instance console log exists: /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.673 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.673 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.673 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.674 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.680 253665 WARNING nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.686 253665 DEBUG nova.virt.libvirt.host [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.686 253665 DEBUG nova.virt.libvirt.host [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.690 253665 DEBUG nova.virt.libvirt.host [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.691 253665 DEBUG nova.virt.libvirt.host [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.691 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.691 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.691 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.692 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.692 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.692 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.692 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.692 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.692 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.693 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.693 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.693 253665 DEBUG nova.virt.hardware [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.696 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.979 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.980 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.980 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.980 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.980 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.982 253665 INFO nova.compute.manager [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Terminating instance#033[00m
Nov 22 04:06:55 np0005532048 nova_compute[253661]: 2025-11-22 09:06:55.983 253665 DEBUG nova.compute.manager [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:06:56 np0005532048 kernel: tapf5fa33e1-ab (unregistering): left promiscuous mode
Nov 22 04:06:56 np0005532048 NetworkManager[48920]: <info>  [1763802416.0970] device (tapf5fa33e1-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:06:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:06:56Z|00032|binding|INFO|Releasing lport f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 from this chassis (sb_readonly=0)
Nov 22 04:06:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:06:56Z|00033|binding|INFO|Setting lport f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 down in Southbound
Nov 22 04:06:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:06:56Z|00034|binding|INFO|Removing iface tapf5fa33e1-ab ovn-installed in OVS
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.117 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.143 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:06:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1569626535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:06:56 np0005532048 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 22 04:06:56 np0005532048 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 15.603s CPU time.
Nov 22 04:06:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.154 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:b6:44 10.100.0.6'], port_security=['fa:16:3e:78:b6:44 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e90ab44-2028-4ef8-ab7a-3c603be3e750', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27705719-461d-420b-a9b8-656219b295b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e0153a0f27f4c68ad2f7910dc78a992', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd6e7bbd2-3ac0-4509-872c-a46868ca499e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e343bc4-f111-4a21-942b-257d99455815, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:06:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.158 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 in datapath 27705719-461d-420b-a9b8-656219b295b7 unbound from our chassis#033[00m
Nov 22 04:06:56 np0005532048 systemd-machined[215941]: Machine qemu-4-instance-00000004 terminated.
Nov 22 04:06:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.160 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 27705719-461d-420b-a9b8-656219b295b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:06:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.161 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cd07be68-8225-417e-bc39-685c3534316a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.162 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-27705719-461d-420b-a9b8-656219b295b7 namespace which is not needed anymore#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.175 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.200 253665 DEBUG nova.storage.rbd_utils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.211 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.248 253665 INFO nova.virt.libvirt.driver [-] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Instance destroyed successfully.#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.251 253665 DEBUG nova.objects.instance [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lazy-loading 'resources' on Instance uuid 4e90ab44-2028-4ef8-ab7a-3c603be3e750 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.268 253665 DEBUG nova.virt.libvirt.vif [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:06:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-382486397',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-382486397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-382486397',id=4,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHodPu2mTylwLIiSpg98TP/l9TK91e/LqqUziWWty1W7HptoIJWYz1thR3bSVz/5iuqa18J3i9QIlrd3jgG6LZ6SDuZiEEZPZ9eZ7YiGOhjw3cAV2EtZ1B6zRxILW+qm/A==',key_name='tempest-keypair-2066856952',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:06:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e0153a0f27f4c68ad2f7910dc78a992',ramdisk_id='',reservation_id='r-8sum2ias',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:06:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fabb775e44cc437680ea15de97d50877',uuid=4e90ab44-2028-4ef8-ab7a-3c603be3e750,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.268 253665 DEBUG nova.network.os_vif_util [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converting VIF {"id": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "address": "fa:16:3e:78:b6:44", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5fa33e1-ab", "ovs_interfaceid": "f5fa33e1-ab24-4daa-9790-5e0dbcbf4907", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.270 253665 DEBUG nova.network.os_vif_util [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:78:b6:44,bridge_name='br-int',has_traffic_filtering=True,id=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5fa33e1-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.270 253665 DEBUG os_vif [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:b6:44,bridge_name='br-int',has_traffic_filtering=True,id=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5fa33e1-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.272 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.273 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5fa33e1-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.274 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.275 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.280 253665 INFO os_vif [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:b6:44,bridge_name='br-int',has_traffic_filtering=True,id=f5fa33e1-ab24-4daa-9790-5e0dbcbf4907,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5fa33e1-ab')#033[00m
Nov 22 04:06:56 np0005532048 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[271582]: [NOTICE]   (271604) : haproxy version is 2.8.14-c23fe91
Nov 22 04:06:56 np0005532048 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[271582]: [NOTICE]   (271604) : path to executable is /usr/sbin/haproxy
Nov 22 04:06:56 np0005532048 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[271582]: [WARNING]  (271604) : Exiting Master process...
Nov 22 04:06:56 np0005532048 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[271582]: [WARNING]  (271604) : Exiting Master process...
Nov 22 04:06:56 np0005532048 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[271582]: [ALERT]    (271604) : Current worker (271609) exited with code 143 (Terminated)
Nov 22 04:06:56 np0005532048 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[271582]: [WARNING]  (271604) : All workers exited. Exiting... (0)
Nov 22 04:06:56 np0005532048 systemd[1]: libpod-ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51.scope: Deactivated successfully.
Nov 22 04:06:56 np0005532048 podman[273168]: 2025-11-22 09:06:56.366259436 +0000 UTC m=+0.081443571 container died ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 22 04:06:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51-userdata-shm.mount: Deactivated successfully.
Nov 22 04:06:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-676a94e4d511baf87b86317c8fc75c84c67f7a8300dfb76d8ed2f43fa82c4669-merged.mount: Deactivated successfully.
Nov 22 04:06:56 np0005532048 podman[273168]: 2025-11-22 09:06:56.426124098 +0000 UTC m=+0.141308233 container cleanup ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:06:56 np0005532048 systemd[1]: libpod-conmon-ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51.scope: Deactivated successfully.
Nov 22 04:06:56 np0005532048 podman[273229]: 2025-11-22 09:06:56.519152856 +0000 UTC m=+0.066893990 container remove ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:06:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.527 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f4438faf-9210-4b67-a731-d7d57017ca38]: (4, ('Sat Nov 22 09:06:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7 (ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51)\nffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51\nSat Nov 22 09:06:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7 (ffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51)\nffdc8c81f92a6b85f7b6d123c87ad7cca9faef2d5caff600a9fd366c6a156e51\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.530 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ef2b18-5f76-43f8-9c1c-6095c6120531]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.531 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27705719-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.533 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:56 np0005532048 kernel: tap27705719-40: left promiscuous mode
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.551 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:06:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.553 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2a54b545-fb90-4718-9ae0-b4b686c8b126]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.567 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[99228d90-f24d-4c3c-a3b1-ad707148e2dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.569 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[23600b1f-ce51-4182-a066-5ea7c0821d10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.588 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e8dc3981-9b4d-47b8-8d82-affe81206416]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524985, 'reachable_time': 25895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273242, 'error': None, 'target': 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:56 np0005532048 systemd[1]: run-netns-ovnmeta\x2d27705719\x2d461d\x2d420b\x2da9b8\x2d656219b295b7.mount: Deactivated successfully.
Nov 22 04:06:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.604 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-27705719-461d-420b-a9b8-656219b295b7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:06:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:06:56.605 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[f160e390-22c3-4fca-9bca-41bc47f44a7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.618 253665 DEBUG nova.compute.manager [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.660 253665 INFO nova.compute.manager [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] instance snapshotting#033[00m
Nov 22 04:06:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:06:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3500563076' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.689 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.693 253665 DEBUG nova.objects.instance [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lazy-loading 'pci_devices' on Instance uuid 1bb24315-1978-4dbf-a16d-5e7b84a25d17 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.705 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  <uuid>1bb24315-1978-4dbf-a16d-5e7b84a25d17</uuid>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  <name>instance-00000007</name>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <nova:name>tempest-LiveMigrationNegativeTest-server-1458062651</nova:name>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:06:55</nova:creationTime>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:06:56 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:        <nova:user uuid="a46c9aa2bf204aac90754c5cde832c1d">tempest-LiveMigrationNegativeTest-1531468932-project-member</nova:user>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:        <nova:project uuid="2aac0910356c4371ad12a604c19aed9b">tempest-LiveMigrationNegativeTest-1531468932</nova:project>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <entry name="serial">1bb24315-1978-4dbf-a16d-5e7b84a25d17</entry>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <entry name="uuid">1bb24315-1978-4dbf-a16d-5e7b84a25d17</entry>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk">
Nov 22 04:06:56 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:06:56 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk.config">
Nov 22 04:06:56 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:06:56 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17/console.log" append="off"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:06:56 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:06:56 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:06:56 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:06:56 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.772 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.772 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.781 253665 INFO nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Using config drive
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.811 253665 DEBUG nova.storage.rbd_utils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.884 253665 INFO nova.virt.libvirt.driver [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Deleting instance files /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750_del
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.885 253665 INFO nova.virt.libvirt.driver [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Deletion of /var/lib/nova/instances/4e90ab44-2028-4ef8-ab7a-3c603be3e750_del complete
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.891 253665 DEBUG nova.compute.manager [req-fd490bcc-3558-47d0-b793-22e217b28bc2 req-2a04117c-9c00-4f7b-b57c-94142bd664b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received event network-vif-unplugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.891 253665 DEBUG oslo_concurrency.lockutils [req-fd490bcc-3558-47d0-b793-22e217b28bc2 req-2a04117c-9c00-4f7b-b57c-94142bd664b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.892 253665 DEBUG oslo_concurrency.lockutils [req-fd490bcc-3558-47d0-b793-22e217b28bc2 req-2a04117c-9c00-4f7b-b57c-94142bd664b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.892 253665 DEBUG oslo_concurrency.lockutils [req-fd490bcc-3558-47d0-b793-22e217b28bc2 req-2a04117c-9c00-4f7b-b57c-94142bd664b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.892 253665 DEBUG nova.compute.manager [req-fd490bcc-3558-47d0-b793-22e217b28bc2 req-2a04117c-9c00-4f7b-b57c-94142bd664b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] No waiting events found dispatching network-vif-unplugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.892 253665 DEBUG nova.compute.manager [req-fd490bcc-3558-47d0-b793-22e217b28bc2 req-2a04117c-9c00-4f7b-b57c-94142bd664b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received event network-vif-unplugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:06:56 np0005532048 nova_compute[253661]: 2025-11-22 09:06:56.918 253665 INFO nova.virt.libvirt.driver [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Beginning live snapshot process
Nov 22 04:06:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 232 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.5 MiB/s wr, 280 op/s
Nov 22 04:06:57 np0005532048 nova_compute[253661]: 2025-11-22 09:06:57.050 253665 INFO nova.compute.manager [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Took 1.07 seconds to destroy the instance on the hypervisor.
Nov 22 04:06:57 np0005532048 nova_compute[253661]: 2025-11-22 09:06:57.051 253665 DEBUG oslo.service.loopingcall [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:06:57 np0005532048 nova_compute[253661]: 2025-11-22 09:06:57.052 253665 DEBUG nova.compute.manager [-] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:06:57 np0005532048 nova_compute[253661]: 2025-11-22 09:06:57.053 253665 DEBUG nova.network.neutron [-] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:06:57 np0005532048 nova_compute[253661]: 2025-11-22 09:06:57.535 253665 DEBUG nova.virt.libvirt.imagebackend [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 04:06:57 np0005532048 nova_compute[253661]: 2025-11-22 09:06:57.766 253665 INFO nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Creating config drive at /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17/disk.config
Nov 22 04:06:57 np0005532048 nova_compute[253661]: 2025-11-22 09:06:57.771 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaiddtbyi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:06:57 np0005532048 nova_compute[253661]: 2025-11-22 09:06:57.795 253665 DEBUG nova.storage.rbd_utils [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] creating snapshot(30da9d8325b74797a836d1553f8e1f8b) on rbd image(4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 04:06:57 np0005532048 nova_compute[253661]: 2025-11-22 09:06:57.900 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaiddtbyi" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:06:57 np0005532048 nova_compute[253661]: 2025-11-22 09:06:57.930 253665 DEBUG nova.storage.rbd_utils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:06:57 np0005532048 nova_compute[253661]: 2025-11-22 09:06:57.934 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17/disk.config 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:06:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Nov 22 04:06:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Nov 22 04:06:58 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Nov 22 04:06:58 np0005532048 nova_compute[253661]: 2025-11-22 09:06:58.767 253665 DEBUG oslo_concurrency.processutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17/disk.config 1bb24315-1978-4dbf-a16d-5e7b84a25d17_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.833s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:06:58 np0005532048 nova_compute[253661]: 2025-11-22 09:06:58.769 253665 INFO nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Deleting local config drive /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17/disk.config because it was imported into RBD.
Nov 22 04:06:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:58 np0005532048 nova_compute[253661]: 2025-11-22 09:06:58.830 253665 DEBUG nova.storage.rbd_utils [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] cloning vms/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk@30da9d8325b74797a836d1553f8e1f8b to images/bf45eac9-b4aa-414f-a359-f8a2fe80c5d3 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 04:06:58 np0005532048 systemd-machined[215941]: New machine qemu-7-instance-00000007.
Nov 22 04:06:58 np0005532048 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Nov 22 04:06:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 219 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.0 MiB/s wr, 289 op/s
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.117 253665 DEBUG nova.compute.manager [req-401ba4e1-e599-4c63-b836-d33ae68a02e8 req-deecd4b1-ed4d-40b4-bf73-8b6bd247a258 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received event network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.118 253665 DEBUG oslo_concurrency.lockutils [req-401ba4e1-e599-4c63-b836-d33ae68a02e8 req-deecd4b1-ed4d-40b4-bf73-8b6bd247a258 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.118 253665 DEBUG oslo_concurrency.lockutils [req-401ba4e1-e599-4c63-b836-d33ae68a02e8 req-deecd4b1-ed4d-40b4-bf73-8b6bd247a258 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.118 253665 DEBUG oslo_concurrency.lockutils [req-401ba4e1-e599-4c63-b836-d33ae68a02e8 req-deecd4b1-ed4d-40b4-bf73-8b6bd247a258 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.119 253665 DEBUG nova.compute.manager [req-401ba4e1-e599-4c63-b836-d33ae68a02e8 req-deecd4b1-ed4d-40b4-bf73-8b6bd247a258 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] No waiting events found dispatching network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.119 253665 WARNING nova.compute.manager [req-401ba4e1-e599-4c63-b836-d33ae68a02e8 req-deecd4b1-ed4d-40b4-bf73-8b6bd247a258 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received unexpected event network-vif-plugged-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 for instance with vm_state active and task_state deleting.
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.256 253665 DEBUG nova.storage.rbd_utils [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] flattening images/bf45eac9-b4aa-414f-a359-f8a2fe80c5d3 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.425 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802419.4248726, 1bb24315-1978-4dbf-a16d-5e7b84a25d17 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.426 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] VM Resumed (Lifecycle Event)
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.428 253665 DEBUG nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.429 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.433 253665 INFO nova.virt.libvirt.driver [-] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Instance spawned successfully.
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.433 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.452 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.460 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.466 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.466 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.467 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.467 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.468 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.468 253665 DEBUG nova.virt.libvirt.driver [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.472 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.480 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.480 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802419.4278653, 1bb24315-1978-4dbf-a16d-5e7b84a25d17 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.480 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] VM Started (Lifecycle Event)
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.500 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.513 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.539 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.574 253665 INFO nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Took 5.74 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.574 253665 DEBUG nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.677 253665 INFO nova.compute.manager [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Took 7.52 seconds to build instance.#033[00m
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.705 253665 DEBUG nova.network.neutron [-] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.727 253665 DEBUG oslo_concurrency.lockutils [None req-cd95d720-7524-4d90-8eec-6e2bfdcf0ee7 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.736 253665 INFO nova.compute.manager [-] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Took 2.68 seconds to deallocate network for instance.#033[00m
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.803 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.804 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:06:59 np0005532048 nova_compute[253661]: 2025-11-22 09:06:59.926 253665 DEBUG oslo_concurrency.processutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:00 np0005532048 nova_compute[253661]: 2025-11-22 09:07:00.095 253665 DEBUG nova.storage.rbd_utils [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] removing snapshot(30da9d8325b74797a836d1553f8e1f8b) on rbd image(4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 22 04:07:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:07:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3582412337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:00 np0005532048 nova_compute[253661]: 2025-11-22 09:07:00.421 253665 DEBUG oslo_concurrency.processutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:07:00 np0005532048 nova_compute[253661]: 2025-11-22 09:07:00.426 253665 DEBUG nova.compute.provider_tree [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:07:00 np0005532048 nova_compute[253661]: 2025-11-22 09:07:00.452 253665 DEBUG nova.scheduler.client.report [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:07:00 np0005532048 nova_compute[253661]: 2025-11-22 09:07:00.477 253665 DEBUG nova.compute.manager [req-7f04b504-ef2f-4b71-bbd6-39fd97bb261d req-f3e09925-7877-4a04-8af1-1ddd89df998e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Received event network-vif-deleted-f5fa33e1-ab24-4daa-9790-5e0dbcbf4907 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:07:00 np0005532048 nova_compute[253661]: 2025-11-22 09:07:00.487 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:00 np0005532048 nova_compute[253661]: 2025-11-22 09:07:00.519 253665 INFO nova.scheduler.client.report [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Deleted allocations for instance 4e90ab44-2028-4ef8-ab7a-3c603be3e750
Nov 22 04:07:00 np0005532048 nova_compute[253661]: 2025-11-22 09:07:00.612 253665 DEBUG oslo_concurrency.lockutils [None req-2f9e1545-4832-43f2-8ae3-e8416d8c076c fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "4e90ab44-2028-4ef8-ab7a-3c603be3e750" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Nov 22 04:07:00 np0005532048 nova_compute[253661]: 2025-11-22 09:07:00.769 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "61ff3d94-226c-4991-af23-6da29d64dca1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:00 np0005532048 nova_compute[253661]: 2025-11-22 09:07:00.770 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:00 np0005532048 nova_compute[253661]: 2025-11-22 09:07:00.835 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:07:00 np0005532048 nova_compute[253661]: 2025-11-22 09:07:00.919 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:00 np0005532048 nova_compute[253661]: 2025-11-22 09:07:00.920 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:00 np0005532048 nova_compute[253661]: 2025-11-22 09:07:00.925 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:07:00 np0005532048 nova_compute[253661]: 2025-11-22 09:07:00.925 253665 INFO nova.compute.claims [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:07:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Nov 22 04:07:01 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Nov 22 04:07:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 219 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.2 MiB/s wr, 252 op/s
Nov 22 04:07:01 np0005532048 nova_compute[253661]: 2025-11-22 09:07:01.106 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:07:01 np0005532048 nova_compute[253661]: 2025-11-22 09:07:01.280 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:01 np0005532048 nova_compute[253661]: 2025-11-22 09:07:01.604 253665 DEBUG nova.storage.rbd_utils [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] creating snapshot(snap) on rbd image(bf45eac9-b4aa-414f-a359-f8a2fe80c5d3) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 04:07:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:07:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/74389598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:01 np0005532048 nova_compute[253661]: 2025-11-22 09:07:01.688 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:07:01 np0005532048 nova_compute[253661]: 2025-11-22 09:07:01.695 253665 DEBUG nova.compute.provider_tree [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:07:01 np0005532048 nova_compute[253661]: 2025-11-22 09:07:01.709 253665 DEBUG nova.scheduler.client.report [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:07:01 np0005532048 nova_compute[253661]: 2025-11-22 09:07:01.746 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:01 np0005532048 nova_compute[253661]: 2025-11-22 09:07:01.747 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:07:01 np0005532048 nova_compute[253661]: 2025-11-22 09:07:01.822 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:07:01 np0005532048 nova_compute[253661]: 2025-11-22 09:07:01.822 253665 DEBUG nova.network.neutron [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:07:01 np0005532048 nova_compute[253661]: 2025-11-22 09:07:01.852 253665 INFO nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:07:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.005 253665 DEBUG nova.policy [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fabb775e44cc437680ea15de97d50877', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e0153a0f27f4c68ad2f7910dc78a992', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:07:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Nov 22 04:07:02 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.209 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.299 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.301 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.302 253665 INFO nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Creating image(s)
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010428240175490004 of space, bias 1.0, pg target 0.3128472052647001 quantized to 32 (current 32)
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0007990172437650583 of space, bias 1.0, pg target 0.2397051731295175 quantized to 32 (current 32)
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.537 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.567 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.602 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.607 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.713 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.714 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.715 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.715 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.745 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.751 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 61ff3d94-226c-4991-af23-6da29d64dca1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.874 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "a5fd70ef-449c-45f6-a479-42c1293bcc35" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.875 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "a5fd70ef-449c-45f6-a479-42c1293bcc35" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.897 253665 DEBUG nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.932 253665 DEBUG nova.network.neutron [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Successfully created port: efd83824-eafa-462c-abe4-952ef6631c2b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.988 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.988 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.995 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:07:02 np0005532048 nova_compute[253661]: 2025-11-22 09:07:02.996 253665 INFO nova.compute.claims [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:07:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 200 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.4 MiB/s wr, 205 op/s
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.227 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.499 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 61ff3d94-226c-4991-af23-6da29d64dca1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.600 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] resizing rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.748 253665 DEBUG nova.objects.instance [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lazy-loading 'migration_context' on Instance uuid 61ff3d94-226c-4991-af23-6da29d64dca1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:07:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:07:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2817087408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.810 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.856 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.862 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.864 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.865 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.892 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.899 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.900 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.937 253665 DEBUG nova.compute.provider_tree [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.953 253665 DEBUG nova.scheduler.client.report [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.984 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.985 253665 DEBUG nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.988 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:03 np0005532048 nova_compute[253661]: 2025-11-22 09:07:03.989 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.016 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.022 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 61ff3d94-226c-4991-af23-6da29d64dca1_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.066 253665 DEBUG nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.067 253665 DEBUG nova.network.neutron [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.121 253665 INFO nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.190 253665 DEBUG nova.network.neutron [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Successfully updated port: efd83824-eafa-462c-abe4-952ef6631c2b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.245 253665 DEBUG nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.292 253665 DEBUG nova.compute.manager [req-9efabb14-4e8e-4df9-aade-7ba6cc916520 req-98e39f87-2c2e-4530-b74e-a69749b9b32c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received event network-changed-efd83824-eafa-462c-abe4-952ef6631c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.292 253665 DEBUG nova.compute.manager [req-9efabb14-4e8e-4df9-aade-7ba6cc916520 req-98e39f87-2c2e-4530-b74e-a69749b9b32c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Refreshing instance network info cache due to event network-changed-efd83824-eafa-462c-abe4-952ef6631c2b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.293 253665 DEBUG oslo_concurrency.lockutils [req-9efabb14-4e8e-4df9-aade-7ba6cc916520 req-98e39f87-2c2e-4530-b74e-a69749b9b32c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.293 253665 DEBUG oslo_concurrency.lockutils [req-9efabb14-4e8e-4df9-aade-7ba6cc916520 req-98e39f87-2c2e-4530-b74e-a69749b9b32c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.293 253665 DEBUG nova.network.neutron [req-9efabb14-4e8e-4df9-aade-7ba6cc916520 req-98e39f87-2c2e-4530-b74e-a69749b9b32c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Refreshing network info cache for port efd83824-eafa-462c-abe4-952ef6631c2b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.306 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.457 253665 DEBUG nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.459 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.459 253665 INFO nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Creating image(s)#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.488 253665 DEBUG nova.storage.rbd_utils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image a5fd70ef-449c-45f6-a479-42c1293bcc35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.519 253665 DEBUG nova.storage.rbd_utils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image a5fd70ef-449c-45f6-a479-42c1293bcc35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.548 253665 DEBUG nova.storage.rbd_utils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image a5fd70ef-449c-45f6-a479-42c1293bcc35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.556 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.586 253665 DEBUG nova.network.neutron [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.586 253665 DEBUG nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.587 253665 DEBUG nova.network.neutron [req-9efabb14-4e8e-4df9-aade-7ba6cc916520 req-98e39f87-2c2e-4530-b74e-a69749b9b32c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.590 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.675 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.676 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.677 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.677 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.710 253665 DEBUG nova.storage.rbd_utils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image a5fd70ef-449c-45f6-a479-42c1293bcc35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:04 np0005532048 nova_compute[253661]: 2025-11-22 09:07:04.715 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a5fd70ef-449c-45f6-a479-42c1293bcc35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 247 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 5.6 MiB/s wr, 292 op/s
Nov 22 04:07:05 np0005532048 nova_compute[253661]: 2025-11-22 09:07:05.110 253665 DEBUG nova.network.neutron [req-9efabb14-4e8e-4df9-aade-7ba6cc916520 req-98e39f87-2c2e-4530-b74e-a69749b9b32c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:07:05 np0005532048 nova_compute[253661]: 2025-11-22 09:07:05.126 253665 DEBUG oslo_concurrency.lockutils [req-9efabb14-4e8e-4df9-aade-7ba6cc916520 req-98e39f87-2c2e-4530-b74e-a69749b9b32c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:07:05 np0005532048 nova_compute[253661]: 2025-11-22 09:07:05.127 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquired lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:07:05 np0005532048 nova_compute[253661]: 2025-11-22 09:07:05.127 253665 DEBUG nova.network.neutron [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:07:05 np0005532048 nova_compute[253661]: 2025-11-22 09:07:05.278 253665 DEBUG nova.network.neutron [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:07:05 np0005532048 nova_compute[253661]: 2025-11-22 09:07:05.338 253665 INFO nova.virt.libvirt.driver [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Snapshot image upload complete#033[00m
Nov 22 04:07:05 np0005532048 nova_compute[253661]: 2025-11-22 09:07:05.339 253665 INFO nova.compute.manager [None req-791a7f37-35c3-433c-afd4-16bf19c330af 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Took 8.68 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 22 04:07:05 np0005532048 nova_compute[253661]: 2025-11-22 09:07:05.917 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 61ff3d94-226c-4991-af23-6da29d64dca1_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.894s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:05 np0005532048 nova_compute[253661]: 2025-11-22 09:07:05.996 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a5fd70ef-449c-45f6-a479-42c1293bcc35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.174 253665 DEBUG nova.storage.rbd_utils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] resizing rbd image a5fd70ef-449c-45f6-a479-42c1293bcc35_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.230 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.231 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Ensure instance console log exists: /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.231 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.232 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.232 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.284 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.347 253665 DEBUG nova.objects.instance [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lazy-loading 'migration_context' on Instance uuid a5fd70ef-449c-45f6-a479-42c1293bcc35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.360 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.361 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Ensure instance console log exists: /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.362 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.362 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.362 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.364 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.381 253665 WARNING nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.389 253665 DEBUG nova.virt.libvirt.host [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.390 253665 DEBUG nova.virt.libvirt.host [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.395 253665 DEBUG nova.virt.libvirt.host [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.396 253665 DEBUG nova.virt.libvirt.host [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.396 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.397 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.397 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.398 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.398 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.398 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.398 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.399 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.399 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.399 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.399 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.400 253665 DEBUG nova.virt.hardware [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.404 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:07:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2253100425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.899 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.934 253665 DEBUG nova.storage.rbd_utils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image a5fd70ef-449c-45f6-a479-42c1293bcc35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:06 np0005532048 nova_compute[253661]: 2025-11-22 09:07:06.939 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 292 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 7.9 MiB/s wr, 344 op/s
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.353 253665 DEBUG nova.network.neutron [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Updating instance_info_cache with network_info: [{"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:07:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:07:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1829324916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.404 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.406 253665 DEBUG nova.objects.instance [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lazy-loading 'pci_devices' on Instance uuid a5fd70ef-449c-45f6-a479-42c1293bcc35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.417 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  <uuid>a5fd70ef-449c-45f6-a479-42c1293bcc35</uuid>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  <name>instance-00000009</name>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <nova:name>tempest-LiveMigrationNegativeTest-server-503146601</nova:name>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:07:06</nova:creationTime>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:07:07 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:        <nova:user uuid="a46c9aa2bf204aac90754c5cde832c1d">tempest-LiveMigrationNegativeTest-1531468932-project-member</nova:user>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:        <nova:project uuid="2aac0910356c4371ad12a604c19aed9b">tempest-LiveMigrationNegativeTest-1531468932</nova:project>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <entry name="serial">a5fd70ef-449c-45f6-a479-42c1293bcc35</entry>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <entry name="uuid">a5fd70ef-449c-45f6-a479-42c1293bcc35</entry>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/a5fd70ef-449c-45f6-a479-42c1293bcc35_disk">
Nov 22 04:07:07 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:07:07 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/a5fd70ef-449c-45f6-a479-42c1293bcc35_disk.config">
Nov 22 04:07:07 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:07:07 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35/console.log" append="off"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:07:07 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:07:07 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:07:07 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:07:07 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.449 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Releasing lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.449 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Instance network_info: |[{"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.452 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Start _get_guest_xml network_info=[{"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 1, 'encryption_format': None, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.456 253665 WARNING nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.460 253665 DEBUG nova.virt.libvirt.host [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.460 253665 DEBUG nova.virt.libvirt.host [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.463 253665 DEBUG nova.virt.libvirt.host [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.463 253665 DEBUG nova.virt.libvirt.host [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.464 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.465 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:06:09Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='1293511702',id=18,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-2076067845',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.465 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.465 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.465 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.465 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.466 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.466 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.466 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.466 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.466 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.467 253665 DEBUG nova.virt.hardware [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.469 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.572 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.573 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.573 253665 INFO nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Using config drive#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.600 253665 DEBUG nova.storage.rbd_utils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image a5fd70ef-449c-45f6-a479-42c1293bcc35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.703 253665 DEBUG nova.compute.manager [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.765 253665 INFO nova.compute.manager [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] instance snapshotting#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.769 253665 INFO nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Creating config drive at /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35/disk.config#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.774 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvr4cvojo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.901 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvr4cvojo" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:07:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3141056223' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.952 253665 DEBUG nova.storage.rbd_utils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] rbd image a5fd70ef-449c-45f6-a479-42c1293bcc35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.958 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35/disk.config a5fd70ef-449c-45f6-a479-42c1293bcc35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.991 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:07 np0005532048 nova_compute[253661]: 2025-11-22 09:07:07.992 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:08 np0005532048 nova_compute[253661]: 2025-11-22 09:07:08.027 253665 INFO nova.virt.libvirt.driver [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Beginning live snapshot process#033[00m
Nov 22 04:07:08 np0005532048 nova_compute[253661]: 2025-11-22 09:07:08.298 253665 DEBUG nova.virt.libvirt.imagebackend [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 22 04:07:08 np0005532048 podman[274231]: 2025-11-22 09:07:08.384704869 +0000 UTC m=+0.067709432 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:07:08 np0005532048 nova_compute[253661]: 2025-11-22 09:07:08.394 253665 DEBUG oslo_concurrency.processutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35/disk.config a5fd70ef-449c-45f6-a479-42c1293bcc35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:08 np0005532048 nova_compute[253661]: 2025-11-22 09:07:08.394 253665 INFO nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Deleting local config drive /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35/disk.config because it was imported into RBD.#033[00m
Nov 22 04:07:08 np0005532048 podman[274232]: 2025-11-22 09:07:08.422162541 +0000 UTC m=+0.105681966 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3)
Nov 22 04:07:08 np0005532048 systemd-machined[215941]: New machine qemu-8-instance-00000009.
Nov 22 04:07:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:07:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1002468931' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:08 np0005532048 systemd[1]: Started Virtual Machine qemu-8-instance-00000009.
Nov 22 04:07:08 np0005532048 nova_compute[253661]: 2025-11-22 09:07:08.484 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:08 np0005532048 nova_compute[253661]: 2025-11-22 09:07:08.512 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:08 np0005532048 nova_compute[253661]: 2025-11-22 09:07:08.523 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:08 np0005532048 nova_compute[253661]: 2025-11-22 09:07:08.608 253665 DEBUG nova.storage.rbd_utils [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] creating snapshot(64bc3502823e4771b602402c20132257) on rbd image(bfc23def-6d15-4b5e-959e-3165bc676f9c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:07:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Nov 22 04:07:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Nov 22 04:07:08 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Nov 22 04:07:08 np0005532048 nova_compute[253661]: 2025-11-22 09:07:08.866 253665 DEBUG nova.storage.rbd_utils [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] cloning vms/bfc23def-6d15-4b5e-959e-3165bc676f9c_disk@64bc3502823e4771b602402c20132257 to images/0f246b20-add7-47b2-8f11-a8b8543e9488 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:07:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:07:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1183382633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.004 253665 DEBUG nova.storage.rbd_utils [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] flattening images/0f246b20-add7-47b2-8f11-a8b8543e9488 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 22 04:07:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 376 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 6.4 MiB/s rd, 14 MiB/s wr, 483 op/s
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.054 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.056 253665 DEBUG nova.virt.libvirt.vif [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:06:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1877351007',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1877351007',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(18),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1877351007',id=8,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=18,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHodPu2mTylwLIiSpg98TP/l9TK91e/LqqUziWWty1W7HptoIJWYz1thR3bSVz/5iuqa18J3i9QIlrd3jgG6LZ6SDuZiEEZPZ9eZ7YiGOhjw3cAV2EtZ1B6zRxILW+qm/A==',key_name='tempest-keypair-2066856952',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e0153a0f27f4c68ad2f7910dc78a992',ramdisk_id='',reservation_id='r-wqbqv4fu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:07:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fabb775e44cc437680ea15de97d50877',uuid=61ff3d94-226c-4991-af23-6da29d64dca1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.057 253665 DEBUG nova.network.os_vif_util [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converting VIF {"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.058 253665 DEBUG nova.network.os_vif_util [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:eb:49,bridge_name='br-int',has_traffic_filtering=True,id=efd83824-eafa-462c-abe4-952ef6631c2b,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd83824-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.060 253665 DEBUG nova.objects.instance [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lazy-loading 'pci_devices' on Instance uuid 61ff3d94-226c-4991-af23-6da29d64dca1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.078 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  <uuid>61ff3d94-226c-4991-af23-6da29d64dca1</uuid>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  <name>instance-00000008</name>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1877351007</nova:name>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:07:07</nova:creationTime>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <nova:flavor name="tempest-flavor_with_ephemeral_1-2076067845">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:        <nova:ephemeral>1</nova:ephemeral>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:        <nova:user uuid="fabb775e44cc437680ea15de97d50877">tempest-ServersWithSpecificFlavorTestJSON-1107415015-project-member</nova:user>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:        <nova:project uuid="4e0153a0f27f4c68ad2f7910dc78a992">tempest-ServersWithSpecificFlavorTestJSON-1107415015</nova:project>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:        <nova:port uuid="efd83824-eafa-462c-abe4-952ef6631c2b">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <entry name="serial">61ff3d94-226c-4991-af23-6da29d64dca1</entry>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <entry name="uuid">61ff3d94-226c-4991-af23-6da29d64dca1</entry>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/61ff3d94-226c-4991-af23-6da29d64dca1_disk">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/61ff3d94-226c-4991-af23-6da29d64dca1_disk.eph0">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <target dev="vdb" bus="virtio"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/61ff3d94-226c-4991-af23-6da29d64dca1_disk.config">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:2c:eb:49"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <target dev="tapefd83824-ea"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1/console.log" append="off"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:07:09 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:07:09 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:07:09 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:07:09 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.082 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Preparing to wait for external event network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.082 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.082 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.083 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.083 253665 DEBUG nova.virt.libvirt.vif [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:06:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1877351007',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1877351007',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(18),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1877351007',id=8,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=18,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHodPu2mTylwLIiSpg98TP/l9TK91e/LqqUziWWty1W7HptoIJWYz1thR3bSVz/5iuqa18J3i9QIlrd3jgG6LZ6SDuZiEEZPZ9eZ7YiGOhjw3cAV2EtZ1B6zRxILW+qm/A==',key_name='tempest-keypair-2066856952',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e0153a0f27f4c68ad2f7910dc78a992',ramdisk_id='',reservation_id='r-wqbqv4fu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:07:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fabb775e44cc437680ea15de97d50877',uuid=61ff3d94-226c-4991-af23-6da29d64dca1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.084 253665 DEBUG nova.network.os_vif_util [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converting VIF {"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.084 253665 DEBUG nova.network.os_vif_util [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:eb:49,bridge_name='br-int',has_traffic_filtering=True,id=efd83824-eafa-462c-abe4-952ef6631c2b,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd83824-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.085 253665 DEBUG os_vif [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:eb:49,bridge_name='br-int',has_traffic_filtering=True,id=efd83824-eafa-462c-abe4-952ef6631c2b,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd83824-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.085 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.086 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.086 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.092 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.093 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapefd83824-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.093 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapefd83824-ea, col_values=(('external_ids', {'iface-id': 'efd83824-eafa-462c-abe4-952ef6631c2b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:eb:49', 'vm-uuid': '61ff3d94-226c-4991-af23-6da29d64dca1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.095 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.098 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:07:09 np0005532048 NetworkManager[48920]: <info>  [1763802429.0988] manager: (tapefd83824-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.108 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.109 253665 INFO os_vif [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:eb:49,bridge_name='br-int',has_traffic_filtering=True,id=efd83824-eafa-462c-abe4-952ef6631c2b,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd83824-ea')#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.275 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.275 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.276 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.276 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] No VIF found with MAC fa:16:3e:2c:eb:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.276 253665 INFO nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Using config drive#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.302 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.474 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.615 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802429.615135, a5fd70ef-449c-45f6-a479-42c1293bcc35 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.615 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.618 253665 DEBUG nova.storage.rbd_utils [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] removing snapshot(64bc3502823e4771b602402c20132257) on rbd image(bfc23def-6d15-4b5e-959e-3165bc676f9c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.620 253665 DEBUG nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.621 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.627 253665 INFO nova.virt.libvirt.driver [-] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Instance spawned successfully.#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.628 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.631 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.634 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.652 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.653 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.653 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.653 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.654 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.654 253665 DEBUG nova.virt.libvirt.driver [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.657 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.658 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802429.6207056, a5fd70ef-449c-45f6-a479-42c1293bcc35 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.658 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] VM Started (Lifecycle Event)#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.683 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.688 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.704 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.782 253665 INFO nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Creating config drive at /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1/disk.config#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.797 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqk_m_39s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.887 253665 INFO nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Took 5.43 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.888 253665 DEBUG nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.967 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqk_m_39s" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Nov 22 04:07:09 np0005532048 nova_compute[253661]: 2025-11-22 09:07:09.995 253665 DEBUG nova.storage.rbd_utils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] rbd image 61ff3d94-226c-4991-af23-6da29d64dca1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Nov 22 04:07:10 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Nov 22 04:07:10 np0005532048 nova_compute[253661]: 2025-11-22 09:07:10.017 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1/disk.config 61ff3d94-226c-4991-af23-6da29d64dca1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:10 np0005532048 nova_compute[253661]: 2025-11-22 09:07:10.065 253665 INFO nova.compute.manager [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Took 7.11 seconds to build instance.#033[00m
Nov 22 04:07:10 np0005532048 nova_compute[253661]: 2025-11-22 09:07:10.077 253665 DEBUG nova.storage.rbd_utils [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] creating snapshot(snap) on rbd image(0f246b20-add7-47b2-8f11-a8b8543e9488) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:07:10 np0005532048 nova_compute[253661]: 2025-11-22 09:07:10.213 253665 DEBUG oslo_concurrency.processutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1/disk.config 61ff3d94-226c-4991-af23-6da29d64dca1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:10 np0005532048 nova_compute[253661]: 2025-11-22 09:07:10.214 253665 INFO nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Deleting local config drive /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1/disk.config because it was imported into RBD.#033[00m
Nov 22 04:07:10 np0005532048 nova_compute[253661]: 2025-11-22 09:07:10.218 253665 DEBUG oslo_concurrency.lockutils [None req-b501042d-784f-415f-830d-f1f6cb268f78 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "a5fd70ef-449c-45f6-a479-42c1293bcc35" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:10 np0005532048 nova_compute[253661]: 2025-11-22 09:07:10.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:07:10 np0005532048 kernel: tapefd83824-ea: entered promiscuous mode
Nov 22 04:07:10 np0005532048 NetworkManager[48920]: <info>  [1763802430.2912] manager: (tapefd83824-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Nov 22 04:07:10 np0005532048 systemd-udevd[274455]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:07:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:07:10Z|00035|binding|INFO|Claiming lport efd83824-eafa-462c-abe4-952ef6631c2b for this chassis.
Nov 22 04:07:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:07:10Z|00036|binding|INFO|efd83824-eafa-462c-abe4-952ef6631c2b: Claiming fa:16:3e:2c:eb:49 10.100.0.13
Nov 22 04:07:10 np0005532048 nova_compute[253661]: 2025-11-22 09:07:10.296 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:10 np0005532048 NetworkManager[48920]: <info>  [1763802430.3083] device (tapefd83824-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:07:10 np0005532048 NetworkManager[48920]: <info>  [1763802430.3091] device (tapefd83824-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:07:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:07:10Z|00037|binding|INFO|Setting lport efd83824-eafa-462c-abe4-952ef6631c2b ovn-installed in OVS
Nov 22 04:07:10 np0005532048 nova_compute[253661]: 2025-11-22 09:07:10.320 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:10 np0005532048 systemd-machined[215941]: New machine qemu-9-instance-00000008.
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.333 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:eb:49 10.100.0.13'], port_security=['fa:16:3e:2c:eb:49 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '61ff3d94-226c-4991-af23-6da29d64dca1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27705719-461d-420b-a9b8-656219b295b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e0153a0f27f4c68ad2f7910dc78a992', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd6e7bbd2-3ac0-4509-872c-a46868ca499e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e343bc4-f111-4a21-942b-257d99455815, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=efd83824-eafa-462c-abe4-952ef6631c2b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.334 162862 INFO neutron.agent.ovn.metadata.agent [-] Port efd83824-eafa-462c-abe4-952ef6631c2b in datapath 27705719-461d-420b-a9b8-656219b295b7 bound to our chassis#033[00m
Nov 22 04:07:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:07:10Z|00038|binding|INFO|Setting lport efd83824-eafa-462c-abe4-952ef6631c2b up in Southbound
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.336 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 27705719-461d-420b-a9b8-656219b295b7#033[00m
Nov 22 04:07:10 np0005532048 systemd[1]: Started Virtual Machine qemu-9-instance-00000008.
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.354 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a51ed7-6571-45cf-aff2-87436ad23313]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.355 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap27705719-41 in ovnmeta-27705719-461d-420b-a9b8-656219b295b7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.357 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap27705719-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.357 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b05b3fad-a856-4cd4-b8d0-f2f18126b48d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.358 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4b306d-7b4f-44c3-abca-694cc7e0d937]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.370 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[1741b749-137a-4eee-b66e-a66f36513db2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.390 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[58f3f2f3-5f33-4deb-9caa-af0324a7bd14]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.431 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[18b7c70f-522c-4518-9dff-3858dead519d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.437 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8631fb2c-69a6-4a06-ad13-574a549ada9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:07:10 np0005532048 NetworkManager[48920]: <info>  [1763802430.4386] manager: (tap27705719-40): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Nov 22 04:07:10 np0005532048 systemd-udevd[274545]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.481 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c13982-6f98-414d-b3ea-0f5cbbdc651f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.485 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[63e88540-1af5-4e4e-af8a-2967821c48e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:07:10 np0005532048 NetworkManager[48920]: <info>  [1763802430.5070] device (tap27705719-40): carrier: link connected
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.515 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[98fa4c9e-1055-4e96-abeb-f969def9b852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.540 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9790ea65-96b0-4a48-9853-448b5fb16725]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27705719-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:b6:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528124, 'reachable_time': 41640, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274579, 'error': None, 'target': 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.564 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1b4dcd-5323-4a51-9e29-fe0e20d64242]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:b616'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 528124, 'tstamp': 528124}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274580, 'error': None, 'target': 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.586 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e095121f-7c87-44ba-8540-7d6df9430e29]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27705719-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:b6:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528124, 'reachable_time': 41640, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274581, 'error': None, 'target': 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.626 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc98360-bddb-4dcf-b361-fbf586ff536f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.716 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[67bd6ce0-82e4-4295-818f-fa66ac9ca684]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.718 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27705719-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.718 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.719 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27705719-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:07:10 np0005532048 nova_compute[253661]: 2025-11-22 09:07:10.721 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:10 np0005532048 kernel: tap27705719-40: entered promiscuous mode
Nov 22 04:07:10 np0005532048 NetworkManager[48920]: <info>  [1763802430.7216] manager: (tap27705719-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.723 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap27705719-40, col_values=(('external_ids', {'iface-id': '66390fc9-eaea-4181-96b2-4d926c45e6e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:07:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:07:10Z|00039|binding|INFO|Releasing lport 66390fc9-eaea-4181-96b2-4d926c45e6e5 from this chassis (sb_readonly=0)
Nov 22 04:07:10 np0005532048 nova_compute[253661]: 2025-11-22 09:07:10.726 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.727 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/27705719-461d-420b-a9b8-656219b295b7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/27705719-461d-420b-a9b8-656219b295b7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.728 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2af0f5-f2c4-477f-a365-718c1a7ff5fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.728 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-27705719-461d-420b-a9b8-656219b295b7
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/27705719-461d-420b-a9b8-656219b295b7.pid.haproxy
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 27705719-461d-420b-a9b8-656219b295b7
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:07:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:10.729 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'env', 'PROCESS_TAG=haproxy-27705719-461d-420b-a9b8-656219b295b7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/27705719-461d-420b-a9b8-656219b295b7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:07:10 np0005532048 nova_compute[253661]: 2025-11-22 09:07:10.746 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Nov 22 04:07:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Nov 22 04:07:11 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Nov 22 04:07:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 376 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 13 MiB/s wr, 330 op/s
Nov 22 04:07:11 np0005532048 podman[274614]: 2025-11-22 09:07:11.128661308 +0000 UTC m=+0.030837893 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:07:11 np0005532048 nova_compute[253661]: 2025-11-22 09:07:11.239 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802416.2277353, 4e90ab44-2028-4ef8-ab7a-3c603be3e750 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:07:11 np0005532048 nova_compute[253661]: 2025-11-22 09:07:11.241 253665 INFO nova.compute.manager [-] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:07:11 np0005532048 podman[274614]: 2025-11-22 09:07:11.25920837 +0000 UTC m=+0.161384935 container create d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 04:07:11 np0005532048 nova_compute[253661]: 2025-11-22 09:07:11.279 253665 DEBUG nova.compute.manager [None req-98dcd478-c854-4438-9a03-469ab1572a91 - - - - - -] [instance: 4e90ab44-2028-4ef8-ab7a-3c603be3e750] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:07:11 np0005532048 systemd[1]: Started libpod-conmon-d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd.scope.
Nov 22 04:07:11 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:07:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/239c02df05785fa343c02b8c341ac64babc2cf40001ab923ae8e7d154e7f0394/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:11 np0005532048 podman[274614]: 2025-11-22 09:07:11.351012539 +0000 UTC m=+0.253189124 container init d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:07:11 np0005532048 podman[274614]: 2025-11-22 09:07:11.35892091 +0000 UTC m=+0.261097475 container start d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 22 04:07:11 np0005532048 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[274685]: [NOTICE]   (274692) : New worker (274695) forked
Nov 22 04:07:11 np0005532048 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[274685]: [NOTICE]   (274692) : Loading success.
Nov 22 04:07:11 np0005532048 nova_compute[253661]: 2025-11-22 09:07:11.419 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802431.419153, 61ff3d94-226c-4991-af23-6da29d64dca1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:07:11 np0005532048 nova_compute[253661]: 2025-11-22 09:07:11.420 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] VM Started (Lifecycle Event)#033[00m
Nov 22 04:07:11 np0005532048 nova_compute[253661]: 2025-11-22 09:07:11.454 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:07:11 np0005532048 nova_compute[253661]: 2025-11-22 09:07:11.458 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802431.419286, 61ff3d94-226c-4991-af23-6da29d64dca1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:07:11 np0005532048 nova_compute[253661]: 2025-11-22 09:07:11.459 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:07:11 np0005532048 nova_compute[253661]: 2025-11-22 09:07:11.487 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:07:11 np0005532048 nova_compute[253661]: 2025-11-22 09:07:11.493 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:07:11 np0005532048 nova_compute[253661]: 2025-11-22 09:07:11.520 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.240 253665 DEBUG nova.compute.manager [req-74ea5a3e-8bb8-43a5-8b70-02e2e671db54 req-1719af5f-eb53-42b1-9dc4-e0085e306fb6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received event network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.241 253665 DEBUG oslo_concurrency.lockutils [req-74ea5a3e-8bb8-43a5-8b70-02e2e671db54 req-1719af5f-eb53-42b1-9dc4-e0085e306fb6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.241 253665 DEBUG oslo_concurrency.lockutils [req-74ea5a3e-8bb8-43a5-8b70-02e2e671db54 req-1719af5f-eb53-42b1-9dc4-e0085e306fb6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.241 253665 DEBUG oslo_concurrency.lockutils [req-74ea5a3e-8bb8-43a5-8b70-02e2e671db54 req-1719af5f-eb53-42b1-9dc4-e0085e306fb6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.241 253665 DEBUG nova.compute.manager [req-74ea5a3e-8bb8-43a5-8b70-02e2e671db54 req-1719af5f-eb53-42b1-9dc4-e0085e306fb6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Processing event network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.243 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.246 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802432.246462, 61ff3d94-226c-4991-af23-6da29d64dca1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.246 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.253 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.258 253665 INFO nova.virt.libvirt.driver [-] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Instance spawned successfully.#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.258 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.266 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.270 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.279 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.280 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.280 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.280 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.281 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.281 253665 DEBUG nova.virt.libvirt.driver [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:07:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:07:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/195864894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:07:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:07:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/195864894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.305 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:07:12 np0005532048 podman[274705]: 2025-11-22 09:07:12.408031202 +0000 UTC m=+0.094260169 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.566 253665 INFO nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Took 10.27 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.566 253665 DEBUG nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.751 253665 INFO nova.compute.manager [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Took 11.85 seconds to build instance.#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.754 253665 DEBUG nova.objects.instance [None req-cdcb59d8-a31e-478a-997b-c5d2bf9b40e9 7b6de63d2b014f7a8186c624d9ecbc85 13dcea55e23544739e1e310fe8503083 - - default default] Lazy-loading 'pci_devices' on Instance uuid a5fd70ef-449c-45f6-a479-42c1293bcc35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.791 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802432.7910707, a5fd70ef-449c-45f6-a479-42c1293bcc35 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.792 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.812 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.823 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.825 253665 DEBUG oslo_concurrency.lockutils [None req-cee0d0ef-d419-430e-9880-ba838e094256 fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:12 np0005532048 nova_compute[253661]: 2025-11-22 09:07:12.841 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 22 04:07:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 404 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 11 MiB/s wr, 400 op/s
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.253 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.253 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.309 253665 INFO nova.virt.libvirt.driver [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Snapshot image upload complete#033[00m
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.310 253665 INFO nova.compute.manager [None req-83795f57-239f-440c-9be8-671b6c946af3 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Took 5.54 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 22 04:07:13 np0005532048 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 22 04:07:13 np0005532048 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000009.scope: Consumed 4.083s CPU time.
Nov 22 04:07:13 np0005532048 systemd-machined[215941]: Machine qemu-8-instance-00000009 terminated.
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.472 253665 DEBUG nova.compute.manager [None req-cdcb59d8-a31e-478a-997b-c5d2bf9b40e9 7b6de63d2b014f7a8186c624d9ecbc85 13dcea55e23544739e1e310fe8503083 - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:07:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:07:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1654682786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.746 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.844 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.845 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.845 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.850 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.851 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.856 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.856 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.862 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.863 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.867 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:07:13 np0005532048 nova_compute[253661]: 2025-11-22 09:07:13.867 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.081 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.084 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4005MB free_disk=59.83506774902344GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.085 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.085 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.095 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.244 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.245 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance bfc23def-6d15-4b5e-959e-3165bc676f9c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.245 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 1bb24315-1978-4dbf-a16d-5e7b84a25d17 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.245 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 61ff3d94-226c-4991-af23-6da29d64dca1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.245 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance a5fd70ef-449c-45f6-a479-42c1293bcc35 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.245 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.246 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.356 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.425 253665 DEBUG nova.compute.manager [req-076d8bd0-06de-4859-bdf3-91baf0486272 req-33b04d9b-9c3d-4783-a1b3-de076a39330d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received event network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.426 253665 DEBUG oslo_concurrency.lockutils [req-076d8bd0-06de-4859-bdf3-91baf0486272 req-33b04d9b-9c3d-4783-a1b3-de076a39330d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.426 253665 DEBUG oslo_concurrency.lockutils [req-076d8bd0-06de-4859-bdf3-91baf0486272 req-33b04d9b-9c3d-4783-a1b3-de076a39330d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.426 253665 DEBUG oslo_concurrency.lockutils [req-076d8bd0-06de-4859-bdf3-91baf0486272 req-33b04d9b-9c3d-4783-a1b3-de076a39330d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.426 253665 DEBUG nova.compute.manager [req-076d8bd0-06de-4859-bdf3-91baf0486272 req-33b04d9b-9c3d-4783-a1b3-de076a39330d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] No waiting events found dispatching network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.427 253665 WARNING nova.compute.manager [req-076d8bd0-06de-4859-bdf3-91baf0486272 req-33b04d9b-9c3d-4783-a1b3-de076a39330d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received unexpected event network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b for instance with vm_state active and task_state None.#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.476 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:07:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/300687399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.893 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.901 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:07:14 np0005532048 nova_compute[253661]: 2025-11-22 09:07:14.916 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:07:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 487 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 10 MiB/s wr, 449 op/s
Nov 22 04:07:15 np0005532048 nova_compute[253661]: 2025-11-22 09:07:15.107 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:07:15 np0005532048 nova_compute[253661]: 2025-11-22 09:07:15.108 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:16 np0005532048 nova_compute[253661]: 2025-11-22 09:07:16.100 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:07:16 np0005532048 nova_compute[253661]: 2025-11-22 09:07:16.101 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:07:16 np0005532048 nova_compute[253661]: 2025-11-22 09:07:16.101 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:07:16 np0005532048 nova_compute[253661]: 2025-11-22 09:07:16.101 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:07:16 np0005532048 nova_compute[253661]: 2025-11-22 09:07:16.102 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:07:16 np0005532048 nova_compute[253661]: 2025-11-22 09:07:16.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:07:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 493 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 9.0 MiB/s wr, 401 op/s
Nov 22 04:07:17 np0005532048 nova_compute[253661]: 2025-11-22 09:07:17.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:07:17 np0005532048 nova_compute[253661]: 2025-11-22 09:07:17.240 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:07:17 np0005532048 nova_compute[253661]: 2025-11-22 09:07:17.241 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:07:17 np0005532048 nova_compute[253661]: 2025-11-22 09:07:17.241 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:07:17 np0005532048 nova_compute[253661]: 2025-11-22 09:07:17.514 253665 DEBUG nova.compute.manager [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:07:17 np0005532048 nova_compute[253661]: 2025-11-22 09:07:17.555 253665 INFO nova.compute.manager [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] instance snapshotting#033[00m
Nov 22 04:07:17 np0005532048 nova_compute[253661]: 2025-11-22 09:07:17.953 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:07:17 np0005532048 nova_compute[253661]: 2025-11-22 09:07:17.954 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:07:17 np0005532048 nova_compute[253661]: 2025-11-22 09:07:17.954 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:07:17 np0005532048 nova_compute[253661]: 2025-11-22 09:07:17.955 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:07:18 np0005532048 nova_compute[253661]: 2025-11-22 09:07:18.127 253665 INFO nova.virt.libvirt.driver [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Beginning live snapshot process#033[00m
Nov 22 04:07:18 np0005532048 nova_compute[253661]: 2025-11-22 09:07:18.437 253665 DEBUG nova.virt.libvirt.imagebackend [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 22 04:07:18 np0005532048 nova_compute[253661]: 2025-11-22 09:07:18.650 253665 DEBUG nova.storage.rbd_utils [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] creating snapshot(3324ea0f8d5e4f2091bcd99d0b296d9b) on rbd image(4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:07:18 np0005532048 nova_compute[253661]: 2025-11-22 09:07:18.792 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:07:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Nov 22 04:07:18 np0005532048 nova_compute[253661]: 2025-11-22 09:07:18.935 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "a5fd70ef-449c-45f6-a479-42c1293bcc35" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:18 np0005532048 nova_compute[253661]: 2025-11-22 09:07:18.936 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "a5fd70ef-449c-45f6-a479-42c1293bcc35" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:18 np0005532048 nova_compute[253661]: 2025-11-22 09:07:18.936 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "a5fd70ef-449c-45f6-a479-42c1293bcc35-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:18 np0005532048 nova_compute[253661]: 2025-11-22 09:07:18.937 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "a5fd70ef-449c-45f6-a479-42c1293bcc35-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:18 np0005532048 nova_compute[253661]: 2025-11-22 09:07:18.937 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "a5fd70ef-449c-45f6-a479-42c1293bcc35-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:18 np0005532048 nova_compute[253661]: 2025-11-22 09:07:18.939 253665 INFO nova.compute.manager [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Terminating instance#033[00m
Nov 22 04:07:18 np0005532048 nova_compute[253661]: 2025-11-22 09:07:18.940 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "refresh_cache-a5fd70ef-449c-45f6-a479-42c1293bcc35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:07:18 np0005532048 nova_compute[253661]: 2025-11-22 09:07:18.940 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquired lock "refresh_cache-a5fd70ef-449c-45f6-a479-42c1293bcc35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:07:18 np0005532048 nova_compute[253661]: 2025-11-22 09:07:18.940 253665 DEBUG nova.network.neutron [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:07:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Nov 22 04:07:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 499 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 8.1 MiB/s wr, 420 op/s
Nov 22 04:07:19 np0005532048 nova_compute[253661]: 2025-11-22 09:07:19.097 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Nov 22 04:07:19 np0005532048 nova_compute[253661]: 2025-11-22 09:07:19.478 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:19 np0005532048 nova_compute[253661]: 2025-11-22 09:07:19.803 253665 DEBUG nova.network.neutron [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:07:19 np0005532048 nova_compute[253661]: 2025-11-22 09:07:19.915 253665 DEBUG nova.compute.manager [req-6a329ef5-df2a-4589-a68e-4f44bfd0ae99 req-c6434b26-0860-4012-b9e0-8ceade14b6ba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received event network-changed-efd83824-eafa-462c-abe4-952ef6631c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:07:19 np0005532048 nova_compute[253661]: 2025-11-22 09:07:19.915 253665 DEBUG nova.compute.manager [req-6a329ef5-df2a-4589-a68e-4f44bfd0ae99 req-c6434b26-0860-4012-b9e0-8ceade14b6ba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Refreshing instance network info cache due to event network-changed-efd83824-eafa-462c-abe4-952ef6631c2b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:07:19 np0005532048 nova_compute[253661]: 2025-11-22 09:07:19.916 253665 DEBUG oslo_concurrency.lockutils [req-6a329ef5-df2a-4589-a68e-4f44bfd0ae99 req-c6434b26-0860-4012-b9e0-8ceade14b6ba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:07:19 np0005532048 nova_compute[253661]: 2025-11-22 09:07:19.916 253665 DEBUG oslo_concurrency.lockutils [req-6a329ef5-df2a-4589-a68e-4f44bfd0ae99 req-c6434b26-0860-4012-b9e0-8ceade14b6ba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:07:19 np0005532048 nova_compute[253661]: 2025-11-22 09:07:19.916 253665 DEBUG nova.network.neutron [req-6a329ef5-df2a-4589-a68e-4f44bfd0ae99 req-c6434b26-0860-4012-b9e0-8ceade14b6ba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Refreshing network info cache for port efd83824-eafa-462c-abe4-952ef6631c2b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:07:19 np0005532048 nova_compute[253661]: 2025-11-22 09:07:19.974 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:07:19 np0005532048 nova_compute[253661]: 2025-11-22 09:07:19.989 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:07:19 np0005532048 nova_compute[253661]: 2025-11-22 09:07:19.990 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:07:19 np0005532048 nova_compute[253661]: 2025-11-22 09:07:19.991 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:07:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Nov 22 04:07:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Nov 22 04:07:20 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Nov 22 04:07:20 np0005532048 nova_compute[253661]: 2025-11-22 09:07:20.793 253665 DEBUG nova.network.neutron [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:07:20 np0005532048 nova_compute[253661]: 2025-11-22 09:07:20.811 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Releasing lock "refresh_cache-a5fd70ef-449c-45f6-a479-42c1293bcc35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:07:20 np0005532048 nova_compute[253661]: 2025-11-22 09:07:20.812 253665 DEBUG nova.compute.manager [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:07:20 np0005532048 nova_compute[253661]: 2025-11-22 09:07:20.820 253665 INFO nova.virt.libvirt.driver [-] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Instance destroyed successfully.#033[00m
Nov 22 04:07:20 np0005532048 nova_compute[253661]: 2025-11-22 09:07:20.821 253665 DEBUG nova.objects.instance [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lazy-loading 'resources' on Instance uuid a5fd70ef-449c-45f6-a479-42c1293bcc35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:07:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 499 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 7.4 MiB/s wr, 312 op/s
Nov 22 04:07:22 np0005532048 nova_compute[253661]: 2025-11-22 09:07:22.368 253665 DEBUG nova.storage.rbd_utils [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] cloning vms/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk@3324ea0f8d5e4f2091bcd99d0b296d9b to images/4aac6d1c-c4ac-4afa-b126-b2123ebbf1d9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:07:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:07:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:07:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:07:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:07:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:07:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:07:22 np0005532048 nova_compute[253661]: 2025-11-22 09:07:22.910 253665 DEBUG nova.network.neutron [req-6a329ef5-df2a-4589-a68e-4f44bfd0ae99 req-c6434b26-0860-4012-b9e0-8ceade14b6ba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Updated VIF entry in instance network info cache for port efd83824-eafa-462c-abe4-952ef6631c2b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:07:22 np0005532048 nova_compute[253661]: 2025-11-22 09:07:22.912 253665 DEBUG nova.network.neutron [req-6a329ef5-df2a-4589-a68e-4f44bfd0ae99 req-c6434b26-0860-4012-b9e0-8ceade14b6ba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Updating instance_info_cache with network_info: [{"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:07:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 499 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1000 KiB/s wr, 136 op/s
Nov 22 04:07:23 np0005532048 nova_compute[253661]: 2025-11-22 09:07:23.159 253665 DEBUG oslo_concurrency.lockutils [req-6a329ef5-df2a-4589-a68e-4f44bfd0ae99 req-c6434b26-0860-4012-b9e0-8ceade14b6ba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-61ff3d94-226c-4991-af23-6da29d64dca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:07:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:24 np0005532048 nova_compute[253661]: 2025-11-22 09:07:24.037 253665 DEBUG nova.storage.rbd_utils [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] flattening images/4aac6d1c-c4ac-4afa-b126-b2123ebbf1d9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 04:07:24 np0005532048 nova_compute[253661]: 2025-11-22 09:07:24.296 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:24 np0005532048 nova_compute[253661]: 2025-11-22 09:07:24.480 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 499 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 117 KiB/s wr, 91 op/s
Nov 22 04:07:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 499 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 22 KiB/s wr, 61 op/s
Nov 22 04:07:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:27.951 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:27.952 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:27.953 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:28 np0005532048 nova_compute[253661]: 2025-11-22 09:07:28.402 253665 DEBUG nova.storage.rbd_utils [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] removing snapshot(3324ea0f8d5e4f2091bcd99d0b296d9b) on rbd image(4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 04:07:28 np0005532048 nova_compute[253661]: 2025-11-22 09:07:28.474 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802433.4730887, a5fd70ef-449c-45f6-a479-42c1293bcc35 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:07:28 np0005532048 nova_compute[253661]: 2025-11-22 09:07:28.476 253665 INFO nova.compute.manager [-] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] VM Stopped (Lifecycle Event)
Nov 22 04:07:28 np0005532048 nova_compute[253661]: 2025-11-22 09:07:28.498 253665 DEBUG nova.compute.manager [None req-86f1490f-4600-40ed-bb82-44362d7b48fc - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:07:28 np0005532048 nova_compute[253661]: 2025-11-22 09:07:28.502 253665 DEBUG nova.compute.manager [None req-86f1490f-4600-40ed-bb82-44362d7b48fc - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: suspended, current task_state: deleting, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:07:28 np0005532048 nova_compute[253661]: 2025-11-22 09:07:28.524 253665 INFO nova.compute.manager [None req-86f1490f-4600-40ed-bb82-44362d7b48fc - - - - - -] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] During sync_power_state the instance has a pending task (deleting). Skip.
Nov 22 04:07:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Nov 22 04:07:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Nov 22 04:07:28 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Nov 22 04:07:28 np0005532048 nova_compute[253661]: 2025-11-22 09:07:28.778 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:28.778 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:07:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:28.780 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:07:28 np0005532048 nova_compute[253661]: 2025-11-22 09:07:28.795 253665 DEBUG nova.storage.rbd_utils [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] creating snapshot(snap) on rbd image(4aac6d1c-c4ac-4afa-b126-b2123ebbf1d9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 04:07:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:28 np0005532048 nova_compute[253661]: 2025-11-22 09:07:28.977 253665 INFO nova.virt.libvirt.driver [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Deleting instance files /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35_del
Nov 22 04:07:28 np0005532048 nova_compute[253661]: 2025-11-22 09:07:28.978 253665 INFO nova.virt.libvirt.driver [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Deletion of /var/lib/nova/instances/a5fd70ef-449c-45f6-a479-42c1293bcc35_del complete
Nov 22 04:07:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 561 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 6.3 MiB/s wr, 106 op/s
Nov 22 04:07:29 np0005532048 nova_compute[253661]: 2025-11-22 09:07:29.072 253665 INFO nova.compute.manager [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Took 8.26 seconds to destroy the instance on the hypervisor.
Nov 22 04:07:29 np0005532048 nova_compute[253661]: 2025-11-22 09:07:29.074 253665 DEBUG oslo.service.loopingcall [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:07:29 np0005532048 nova_compute[253661]: 2025-11-22 09:07:29.074 253665 DEBUG nova.compute.manager [-] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:07:29 np0005532048 nova_compute[253661]: 2025-11-22 09:07:29.074 253665 DEBUG nova.network.neutron [-] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:07:29 np0005532048 ovn_controller[152872]: 2025-11-22T09:07:29Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2c:eb:49 10.100.0.13
Nov 22 04:07:29 np0005532048 ovn_controller[152872]: 2025-11-22T09:07:29Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2c:eb:49 10.100.0.13
Nov 22 04:07:29 np0005532048 nova_compute[253661]: 2025-11-22 09:07:29.298 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:29 np0005532048 nova_compute[253661]: 2025-11-22 09:07:29.483 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Nov 22 04:07:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Nov 22 04:07:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Nov 22 04:07:29 np0005532048 nova_compute[253661]: 2025-11-22 09:07:29.789 253665 DEBUG nova.network.neutron [-] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:07:29 np0005532048 nova_compute[253661]: 2025-11-22 09:07:29.810 253665 DEBUG nova.network.neutron [-] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:07:29 np0005532048 nova_compute[253661]: 2025-11-22 09:07:29.837 253665 INFO nova.compute.manager [-] [instance: a5fd70ef-449c-45f6-a479-42c1293bcc35] Took 0.76 seconds to deallocate network for instance.
Nov 22 04:07:29 np0005532048 nova_compute[253661]: 2025-11-22 09:07:29.895 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:29 np0005532048 nova_compute[253661]: 2025-11-22 09:07:29.896 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:30 np0005532048 nova_compute[253661]: 2025-11-22 09:07:30.278 253665 DEBUG oslo_concurrency.processutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:07:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:07:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3230164615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:30 np0005532048 nova_compute[253661]: 2025-11-22 09:07:30.758 253665 DEBUG oslo_concurrency.processutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:07:30 np0005532048 nova_compute[253661]: 2025-11-22 09:07:30.766 253665 DEBUG nova.compute.provider_tree [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:07:30 np0005532048 nova_compute[253661]: 2025-11-22 09:07:30.787 253665 DEBUG nova.scheduler.client.report [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:07:30 np0005532048 nova_compute[253661]: 2025-11-22 09:07:30.929 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 561 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 6.7 MiB/s wr, 103 op/s
Nov 22 04:07:31 np0005532048 nova_compute[253661]: 2025-11-22 09:07:31.146 253665 INFO nova.scheduler.client.report [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Deleted allocations for instance a5fd70ef-449c-45f6-a479-42c1293bcc35
Nov 22 04:07:31 np0005532048 nova_compute[253661]: 2025-11-22 09:07:31.288 253665 DEBUG oslo_concurrency.lockutils [None req-dbec5ab8-2932-442c-befd-26d1bb36d2b8 a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "a5fd70ef-449c-45f6-a479-42c1293bcc35" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:31 np0005532048 nova_compute[253661]: 2025-11-22 09:07:31.712 253665 INFO nova.virt.libvirt.driver [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Snapshot image upload complete
Nov 22 04:07:31 np0005532048 nova_compute[253661]: 2025-11-22 09:07:31.713 253665 INFO nova.compute.manager [None req-a0ccd1ef-3968-4de9-8add-fb66e7fc41cf 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Took 14.16 seconds to snapshot the instance on the hypervisor.
Nov 22 04:07:32 np0005532048 nova_compute[253661]: 2025-11-22 09:07:32.294 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:32 np0005532048 nova_compute[253661]: 2025-11-22 09:07:32.294 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:32 np0005532048 nova_compute[253661]: 2025-11-22 09:07:32.295 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:32 np0005532048 nova_compute[253661]: 2025-11-22 09:07:32.295 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:32 np0005532048 nova_compute[253661]: 2025-11-22 09:07:32.295 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:32 np0005532048 nova_compute[253661]: 2025-11-22 09:07:32.296 253665 INFO nova.compute.manager [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Terminating instance
Nov 22 04:07:32 np0005532048 nova_compute[253661]: 2025-11-22 09:07:32.297 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "refresh_cache-1bb24315-1978-4dbf-a16d-5e7b84a25d17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:07:32 np0005532048 nova_compute[253661]: 2025-11-22 09:07:32.297 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquired lock "refresh_cache-1bb24315-1978-4dbf-a16d-5e7b84a25d17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:07:32 np0005532048 nova_compute[253661]: 2025-11-22 09:07:32.297 253665 DEBUG nova.network.neutron [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:07:32 np0005532048 nova_compute[253661]: 2025-11-22 09:07:32.794 253665 DEBUG nova.network.neutron [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:07:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 554 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 7.9 MiB/s wr, 164 op/s
Nov 22 04:07:33 np0005532048 nova_compute[253661]: 2025-11-22 09:07:33.043 253665 DEBUG nova.network.neutron [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:07:33 np0005532048 nova_compute[253661]: 2025-11-22 09:07:33.060 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Releasing lock "refresh_cache-1bb24315-1978-4dbf-a16d-5e7b84a25d17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:07:33 np0005532048 nova_compute[253661]: 2025-11-22 09:07:33.061 253665 DEBUG nova.compute.manager [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:07:33 np0005532048 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 22 04:07:33 np0005532048 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 14.182s CPU time.
Nov 22 04:07:33 np0005532048 systemd-machined[215941]: Machine qemu-7-instance-00000007 terminated.
Nov 22 04:07:33 np0005532048 nova_compute[253661]: 2025-11-22 09:07:33.687 253665 INFO nova.virt.libvirt.driver [-] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Instance destroyed successfully.
Nov 22 04:07:33 np0005532048 nova_compute[253661]: 2025-11-22 09:07:33.688 253665 DEBUG nova.objects.instance [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lazy-loading 'resources' on Instance uuid 1bb24315-1978-4dbf-a16d-5e7b84a25d17 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:07:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:34 np0005532048 nova_compute[253661]: 2025-11-22 09:07:34.301 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:34 np0005532048 nova_compute[253661]: 2025-11-22 09:07:34.484 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 565 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.0 MiB/s wr, 219 op/s
Nov 22 04:07:35 np0005532048 nova_compute[253661]: 2025-11-22 09:07:35.722 253665 INFO nova.virt.libvirt.driver [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Deleting instance files /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17_del
Nov 22 04:07:35 np0005532048 nova_compute[253661]: 2025-11-22 09:07:35.723 253665 INFO nova.virt.libvirt.driver [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Deletion of /var/lib/nova/instances/1bb24315-1978-4dbf-a16d-5e7b84a25d17_del complete
Nov 22 04:07:35 np0005532048 nova_compute[253661]: 2025-11-22 09:07:35.954 253665 INFO nova.compute.manager [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Took 2.89 seconds to destroy the instance on the hypervisor.
Nov 22 04:07:35 np0005532048 nova_compute[253661]: 2025-11-22 09:07:35.954 253665 DEBUG oslo.service.loopingcall [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:07:35 np0005532048 nova_compute[253661]: 2025-11-22 09:07:35.955 253665 DEBUG nova.compute.manager [-] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:07:35 np0005532048 nova_compute[253661]: 2025-11-22 09:07:35.955 253665 DEBUG nova.network.neutron [-] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:07:36 np0005532048 nova_compute[253661]: 2025-11-22 09:07:36.216 253665 DEBUG nova.network.neutron [-] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:07:36 np0005532048 nova_compute[253661]: 2025-11-22 09:07:36.227 253665 DEBUG nova.network.neutron [-] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:07:36 np0005532048 nova_compute[253661]: 2025-11-22 09:07:36.239 253665 INFO nova.compute.manager [-] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Took 0.28 seconds to deallocate network for instance.
Nov 22 04:07:36 np0005532048 nova_compute[253661]: 2025-11-22 09:07:36.553 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:36 np0005532048 nova_compute[253661]: 2025-11-22 09:07:36.554 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:36 np0005532048 nova_compute[253661]: 2025-11-22 09:07:36.648 253665 DEBUG oslo_concurrency.processutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:07:36 np0005532048 nova_compute[253661]: 2025-11-22 09:07:36.773 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "61ff3d94-226c-4991-af23-6da29d64dca1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:36 np0005532048 nova_compute[253661]: 2025-11-22 09:07:36.774 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:36 np0005532048 nova_compute[253661]: 2025-11-22 09:07:36.774 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:36 np0005532048 nova_compute[253661]: 2025-11-22 09:07:36.775 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:36 np0005532048 nova_compute[253661]: 2025-11-22 09:07:36.775 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:36 np0005532048 nova_compute[253661]: 2025-11-22 09:07:36.777 253665 INFO nova.compute.manager [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Terminating instance
Nov 22 04:07:36 np0005532048 nova_compute[253661]: 2025-11-22 09:07:36.779 253665 DEBUG nova.compute.manager [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:07:36 np0005532048 kernel: tapefd83824-ea (unregistering): left promiscuous mode
Nov 22 04:07:36 np0005532048 NetworkManager[48920]: <info>  [1763802456.8670] device (tapefd83824-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:07:36 np0005532048 ovn_controller[152872]: 2025-11-22T09:07:36Z|00040|binding|INFO|Releasing lport efd83824-eafa-462c-abe4-952ef6631c2b from this chassis (sb_readonly=0)
Nov 22 04:07:36 np0005532048 ovn_controller[152872]: 2025-11-22T09:07:36Z|00041|binding|INFO|Setting lport efd83824-eafa-462c-abe4-952ef6631c2b down in Southbound
Nov 22 04:07:36 np0005532048 ovn_controller[152872]: 2025-11-22T09:07:36Z|00042|binding|INFO|Removing iface tapefd83824-ea ovn-installed in OVS
Nov 22 04:07:36 np0005532048 nova_compute[253661]: 2025-11-22 09:07:36.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:36 np0005532048 nova_compute[253661]: 2025-11-22 09:07:36.918 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:36 np0005532048 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 22 04:07:36 np0005532048 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000008.scope: Consumed 14.790s CPU time.
Nov 22 04:07:36 np0005532048 systemd-machined[215941]: Machine qemu-9-instance-00000008 terminated.
Nov 22 04:07:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:36.982 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:eb:49 10.100.0.13'], port_security=['fa:16:3e:2c:eb:49 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '61ff3d94-226c-4991-af23-6da29d64dca1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27705719-461d-420b-a9b8-656219b295b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e0153a0f27f4c68ad2f7910dc78a992', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd6e7bbd2-3ac0-4509-872c-a46868ca499e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e343bc4-f111-4a21-942b-257d99455815, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=efd83824-eafa-462c-abe4-952ef6631c2b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:07:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:36.983 162862 INFO neutron.agent.ovn.metadata.agent [-] Port efd83824-eafa-462c-abe4-952ef6631c2b in datapath 27705719-461d-420b-a9b8-656219b295b7 unbound from our chassis
Nov 22 04:07:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:36.984 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 27705719-461d-420b-a9b8-656219b295b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:07:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:36.986 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[34961a26-bd2d-4bc2-8624-6b1bd7f34cbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:07:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:36.987 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-27705719-461d-420b-a9b8-656219b295b7 namespace which is not needed anymore
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.009 253665 INFO nova.virt.libvirt.driver [-] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Instance destroyed successfully.
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.010 253665 DEBUG nova.objects.instance [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lazy-loading 'resources' on Instance uuid 61ff3d94-226c-4991-af23-6da29d64dca1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.021 253665 DEBUG nova.virt.libvirt.vif [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:06:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1877351007',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1877351007',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(18),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1877351007',id=8,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=18,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHodPu2mTylwLIiSpg98TP/l9TK91e/LqqUziWWty1W7HptoIJWYz1thR3bSVz/5iuqa18J3i9QIlrd3jgG6LZ6SDuZiEEZPZ9eZ7YiGOhjw3cAV2EtZ1B6zRxILW+qm/A==',key_name='tempest-keypair-2066856952',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:07:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e0153a0f27f4c68ad2f7910dc78a992',ramdisk_id='',reservation_id='r-wqbqv4fu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1107415015-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:07:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fabb775e44cc437680ea15de97d50877',uuid=61ff3d94-226c-4991-af23-6da29d64dca1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.022 253665 DEBUG nova.network.os_vif_util [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converting VIF {"id": "efd83824-eafa-462c-abe4-952ef6631c2b", "address": "fa:16:3e:2c:eb:49", "network": {"id": "27705719-461d-420b-a9b8-656219b295b7", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1610179350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e0153a0f27f4c68ad2f7910dc78a992", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd83824-ea", "ovs_interfaceid": "efd83824-eafa-462c-abe4-952ef6631c2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.023 253665 DEBUG nova.network.os_vif_util [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2c:eb:49,bridge_name='br-int',has_traffic_filtering=True,id=efd83824-eafa-462c-abe4-952ef6631c2b,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd83824-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.024 253665 DEBUG os_vif [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:eb:49,bridge_name='br-int',has_traffic_filtering=True,id=efd83824-eafa-462c-abe4-952ef6631c2b,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd83824-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.026 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.026 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefd83824-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.028 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.036 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:07:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 540 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 678 KiB/s rd, 2.2 MiB/s wr, 174 op/s
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.040 253665 INFO os_vif [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:eb:49,bridge_name='br-int',has_traffic_filtering=True,id=efd83824-eafa-462c-abe4-952ef6631c2b,network=Network(27705719-461d-420b-a9b8-656219b295b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd83824-ea')
Nov 22 04:07:37 np0005532048 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[274685]: [NOTICE]   (274692) : haproxy version is 2.8.14-c23fe91
Nov 22 04:07:37 np0005532048 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[274685]: [NOTICE]   (274692) : path to executable is /usr/sbin/haproxy
Nov 22 04:07:37 np0005532048 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[274685]: [WARNING]  (274692) : Exiting Master process...
Nov 22 04:07:37 np0005532048 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[274685]: [ALERT]    (274692) : Current worker (274695) exited with code 143 (Terminated)
Nov 22 04:07:37 np0005532048 neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7[274685]: [WARNING]  (274692) : All workers exited. Exiting... (0)
Nov 22 04:07:37 np0005532048 systemd[1]: libpod-d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd.scope: Deactivated successfully.
Nov 22 04:07:37 np0005532048 podman[275060]: 2025-11-22 09:07:37.161039552 +0000 UTC m=+0.049223936 container died d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:07:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:07:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1123497692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.196 253665 DEBUG oslo_concurrency.processutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:07:37 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd-userdata-shm.mount: Deactivated successfully.
Nov 22 04:07:37 np0005532048 systemd[1]: var-lib-containers-storage-overlay-239c02df05785fa343c02b8c341ac64babc2cf40001ab923ae8e7d154e7f0394-merged.mount: Deactivated successfully.
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.207 253665 DEBUG nova.compute.provider_tree [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:07:37 np0005532048 podman[275060]: 2025-11-22 09:07:37.209065869 +0000 UTC m=+0.097250233 container cleanup d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 04:07:37 np0005532048 systemd[1]: libpod-conmon-d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd.scope: Deactivated successfully.
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.221 253665 DEBUG nova.scheduler.client.report [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.274 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:37 np0005532048 podman[275090]: 2025-11-22 09:07:37.283442059 +0000 UTC m=+0.045933187 container remove d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:07:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.289 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a02429ea-0c0c-4ab5-b4e5-0c92cd561aff]: (4, ('Sat Nov 22 09:07:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7 (d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd)\nd6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd\nSat Nov 22 09:07:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-27705719-461d-420b-a9b8-656219b295b7 (d6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd)\nd6a92ca1319fc4c99179889cf8c7c003fde8b48b776910819265f277bbdc73bd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:07:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.292 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[10dada30-9e89-4a5f-8bfc-0cda5d17f74c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:07:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.294 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27705719-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.296 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:37 np0005532048 kernel: tap27705719-40: left promiscuous mode
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.312 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.317 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[168b8f46-e7bb-4499-87e6-bdd8e98d1b89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:07:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.334 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d7799777-dbca-40cb-b1d6-26debb4d340b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:07:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.335 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef099cc-474b-49a1-894e-c0bac4aad6d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:07:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.354 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c70168ab-5fb9-4df1-8adc-c8fb38d78988]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528116, 'reachable_time': 41543, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275106, 'error': None, 'target': 'ovnmeta-27705719-461d-420b-a9b8-656219b295b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:07:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.358 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-27705719-461d-420b-a9b8-656219b295b7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:07:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:37.359 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b46236-c9f7-43fc-b548-1822b8a140da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:07:37 np0005532048 systemd[1]: run-netns-ovnmeta\x2d27705719\x2d461d\x2d420b\x2da9b8\x2d656219b295b7.mount: Deactivated successfully.
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.359 253665 INFO nova.scheduler.client.report [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Deleted allocations for instance 1bb24315-1978-4dbf-a16d-5e7b84a25d17
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.448 253665 DEBUG oslo_concurrency.lockutils [None req-495121c4-b35c-4b28-9872-29ed1f0b402a a46c9aa2bf204aac90754c5cde832c1d 2aac0910356c4371ad12a604c19aed9b - - default default] Lock "1bb24315-1978-4dbf-a16d-5e7b84a25d17" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.668 253665 INFO nova.virt.libvirt.driver [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Deleting instance files /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1_del
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.669 253665 INFO nova.virt.libvirt.driver [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Deletion of /var/lib/nova/instances/61ff3d94-226c-4991-af23-6da29d64dca1_del complete
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.729 253665 INFO nova.compute.manager [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Took 0.95 seconds to destroy the instance on the hypervisor.
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.730 253665 DEBUG oslo.service.loopingcall [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.730 253665 DEBUG nova.compute.manager [-] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:07:37 np0005532048 nova_compute[253661]: 2025-11-22 09:07:37.730 253665 DEBUG nova.network.neutron [-] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:07:38 np0005532048 nova_compute[253661]: 2025-11-22 09:07:38.326 253665 DEBUG nova.compute.manager [req-5271bca8-25b8-4fc0-b4a0-de3821ccfb8f req-ad38bee5-1e07-454a-bc6f-7fefc9c93afe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received event network-vif-unplugged-efd83824-eafa-462c-abe4-952ef6631c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:07:38 np0005532048 nova_compute[253661]: 2025-11-22 09:07:38.326 253665 DEBUG oslo_concurrency.lockutils [req-5271bca8-25b8-4fc0-b4a0-de3821ccfb8f req-ad38bee5-1e07-454a-bc6f-7fefc9c93afe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:38 np0005532048 nova_compute[253661]: 2025-11-22 09:07:38.327 253665 DEBUG oslo_concurrency.lockutils [req-5271bca8-25b8-4fc0-b4a0-de3821ccfb8f req-ad38bee5-1e07-454a-bc6f-7fefc9c93afe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:38 np0005532048 nova_compute[253661]: 2025-11-22 09:07:38.327 253665 DEBUG oslo_concurrency.lockutils [req-5271bca8-25b8-4fc0-b4a0-de3821ccfb8f req-ad38bee5-1e07-454a-bc6f-7fefc9c93afe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:38 np0005532048 nova_compute[253661]: 2025-11-22 09:07:38.327 253665 DEBUG nova.compute.manager [req-5271bca8-25b8-4fc0-b4a0-de3821ccfb8f req-ad38bee5-1e07-454a-bc6f-7fefc9c93afe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] No waiting events found dispatching network-vif-unplugged-efd83824-eafa-462c-abe4-952ef6631c2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:07:38 np0005532048 nova_compute[253661]: 2025-11-22 09:07:38.327 253665 DEBUG nova.compute.manager [req-5271bca8-25b8-4fc0-b4a0-de3821ccfb8f req-ad38bee5-1e07-454a-bc6f-7fefc9c93afe 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received event network-vif-unplugged-efd83824-eafa-462c-abe4-952ef6631c2b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:07:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:07:38.781 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:07:38 np0005532048 nova_compute[253661]: 2025-11-22 09:07:38.803 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquiring lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:38 np0005532048 nova_compute[253661]: 2025-11-22 09:07:38.804 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:38 np0005532048 nova_compute[253661]: 2025-11-22 09:07:38.897 253665 DEBUG nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:07:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Nov 22 04:07:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Nov 22 04:07:38 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Nov 22 04:07:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 424 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 655 KiB/s rd, 2.0 MiB/s wr, 225 op/s
Nov 22 04:07:39 np0005532048 podman[275107]: 2025-11-22 09:07:39.389249637 +0000 UTC m=+0.062428564 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:07:39 np0005532048 podman[275108]: 2025-11-22 09:07:39.404222637 +0000 UTC m=+0.076238386 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:07:39 np0005532048 nova_compute[253661]: 2025-11-22 09:07:39.487 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:40 np0005532048 nova_compute[253661]: 2025-11-22 09:07:40.047 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:40 np0005532048 nova_compute[253661]: 2025-11-22 09:07:40.048 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:40 np0005532048 nova_compute[253661]: 2025-11-22 09:07:40.056 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:07:40 np0005532048 nova_compute[253661]: 2025-11-22 09:07:40.057 253665 INFO nova.compute.claims [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:07:40 np0005532048 nova_compute[253661]: 2025-11-22 09:07:40.468 253665 DEBUG nova.network.neutron [-] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:07:40 np0005532048 nova_compute[253661]: 2025-11-22 09:07:40.512 253665 INFO nova.compute.manager [-] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Took 2.78 seconds to deallocate network for instance.#033[00m
Nov 22 04:07:40 np0005532048 nova_compute[253661]: 2025-11-22 09:07:40.567 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:40 np0005532048 nova_compute[253661]: 2025-11-22 09:07:40.607 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:40 np0005532048 nova_compute[253661]: 2025-11-22 09:07:40.866 253665 DEBUG nova.compute.manager [req-7cef9d77-6144-4a08-9e0d-38bbdfc0532f req-d6f9457a-951a-425f-9426-401066d9e037 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received event network-vif-deleted-efd83824-eafa-462c-abe4-952ef6631c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:07:40 np0005532048 nova_compute[253661]: 2025-11-22 09:07:40.938 253665 DEBUG nova.compute.manager [req-58a5612f-4c12-4d12-aa6f-c32f7e6d7dd0 req-2c59eda2-9f71-456b-adc1-41fdfe5fbd15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received event network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:07:40 np0005532048 nova_compute[253661]: 2025-11-22 09:07:40.938 253665 DEBUG oslo_concurrency.lockutils [req-58a5612f-4c12-4d12-aa6f-c32f7e6d7dd0 req-2c59eda2-9f71-456b-adc1-41fdfe5fbd15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:40 np0005532048 nova_compute[253661]: 2025-11-22 09:07:40.939 253665 DEBUG oslo_concurrency.lockutils [req-58a5612f-4c12-4d12-aa6f-c32f7e6d7dd0 req-2c59eda2-9f71-456b-adc1-41fdfe5fbd15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:40 np0005532048 nova_compute[253661]: 2025-11-22 09:07:40.939 253665 DEBUG oslo_concurrency.lockutils [req-58a5612f-4c12-4d12-aa6f-c32f7e6d7dd0 req-2c59eda2-9f71-456b-adc1-41fdfe5fbd15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:40 np0005532048 nova_compute[253661]: 2025-11-22 09:07:40.939 253665 DEBUG nova.compute.manager [req-58a5612f-4c12-4d12-aa6f-c32f7e6d7dd0 req-2c59eda2-9f71-456b-adc1-41fdfe5fbd15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] No waiting events found dispatching network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:07:40 np0005532048 nova_compute[253661]: 2025-11-22 09:07:40.940 253665 WARNING nova.compute.manager [req-58a5612f-4c12-4d12-aa6f-c32f7e6d7dd0 req-2c59eda2-9f71-456b-adc1-41fdfe5fbd15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Received unexpected event network-vif-plugged-efd83824-eafa-462c-abe4-952ef6631c2b for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:07:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 424 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 608 KiB/s rd, 1.9 MiB/s wr, 209 op/s
Nov 22 04:07:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:07:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1511008738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.141 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.148 253665 DEBUG nova.compute.provider_tree [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.162 253665 DEBUG nova.scheduler.client.report [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.183 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.184 253665 DEBUG nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.186 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.226 253665 DEBUG nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.242 253665 INFO nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.266 253665 DEBUG nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.276 253665 DEBUG oslo_concurrency.processutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.361 253665 DEBUG nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.363 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.363 253665 INFO nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Creating image(s)#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.383 253665 DEBUG nova.storage.rbd_utils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] rbd image c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.411 253665 DEBUG nova.storage.rbd_utils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] rbd image c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.436 253665 DEBUG nova.storage.rbd_utils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] rbd image c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.442 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.517 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.518 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.519 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.519 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.539 253665 DEBUG nova.storage.rbd_utils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] rbd image c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.542 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:07:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1426411698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.723 253665 DEBUG oslo_concurrency.processutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.730 253665 DEBUG nova.compute.provider_tree [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.742 253665 DEBUG nova.scheduler.client.report [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.763 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.792 253665 INFO nova.scheduler.client.report [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Deleted allocations for instance 61ff3d94-226c-4991-af23-6da29d64dca1#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.861 253665 DEBUG oslo_concurrency.lockutils [None req-59f573fc-f094-4225-8bae-e3640fdb83ce fabb775e44cc437680ea15de97d50877 4e0153a0f27f4c68ad2f7910dc78a992 - - default default] Lock "61ff3d94-226c-4991-af23-6da29d64dca1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:41 np0005532048 nova_compute[253661]: 2025-11-22 09:07:41.947 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.010 253665 DEBUG nova.storage.rbd_utils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] resizing rbd image c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.053 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Nov 22 04:07:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Nov 22 04:07:42 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.190 253665 DEBUG nova.objects.instance [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lazy-loading 'migration_context' on Instance uuid c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.201 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.201 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Ensure instance console log exists: /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.202 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.202 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.203 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.205 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.210 253665 WARNING nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.215 253665 DEBUG nova.virt.libvirt.host [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.216 253665 DEBUG nova.virt.libvirt.host [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.219 253665 DEBUG nova.virt.libvirt.host [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.220 253665 DEBUG nova.virt.libvirt.host [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.220 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.221 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.221 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.222 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.222 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.222 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.222 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.223 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.223 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.223 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.223 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.224 253665 DEBUG nova.virt.hardware [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.227 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:07:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3521780234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.709 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.736 253665 DEBUG nova.storage.rbd_utils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] rbd image c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:42 np0005532048 nova_compute[253661]: 2025-11-22 09:07:42.741 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 404 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 38 KiB/s wr, 109 op/s
Nov 22 04:07:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Nov 22 04:07:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Nov 22 04:07:43 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Nov 22 04:07:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:07:43 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3207707638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:43 np0005532048 nova_compute[253661]: 2025-11-22 09:07:43.197 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:43 np0005532048 nova_compute[253661]: 2025-11-22 09:07:43.199 253665 DEBUG nova.objects.instance [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lazy-loading 'pci_devices' on Instance uuid c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:07:43 np0005532048 nova_compute[253661]: 2025-11-22 09:07:43.222 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  <uuid>c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12</uuid>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  <name>instance-0000000a</name>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerDiagnosticsV248Test-server-832171232</nova:name>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:07:42</nova:creationTime>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:07:43 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:        <nova:user uuid="9b2aa984fe7e4bbbab17fc76f5d39990">tempest-ServerDiagnosticsV248Test-357040963-project-member</nova:user>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:        <nova:project uuid="c759d960bd994160acfb1cdfbe9858c8">tempest-ServerDiagnosticsV248Test-357040963</nova:project>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <entry name="serial">c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12</entry>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <entry name="uuid">c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12</entry>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk">
Nov 22 04:07:43 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:07:43 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk.config">
Nov 22 04:07:43 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:07:43 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12/console.log" append="off"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:07:43 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:07:43 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:07:43 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:07:43 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:07:43 np0005532048 nova_compute[253661]: 2025-11-22 09:07:43.281 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:07:43 np0005532048 nova_compute[253661]: 2025-11-22 09:07:43.281 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:07:43 np0005532048 nova_compute[253661]: 2025-11-22 09:07:43.282 253665 INFO nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Using config drive#033[00m
Nov 22 04:07:43 np0005532048 nova_compute[253661]: 2025-11-22 09:07:43.304 253665 DEBUG nova.storage.rbd_utils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] rbd image c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:43 np0005532048 podman[275423]: 2025-11-22 09:07:43.35108791 +0000 UTC m=+0.088310387 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:07:43 np0005532048 nova_compute[253661]: 2025-11-22 09:07:43.549 253665 INFO nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Creating config drive at /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12/disk.config#033[00m
Nov 22 04:07:43 np0005532048 nova_compute[253661]: 2025-11-22 09:07:43.555 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp01d_r6_4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:43 np0005532048 nova_compute[253661]: 2025-11-22 09:07:43.693 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp01d_r6_4" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:43 np0005532048 nova_compute[253661]: 2025-11-22 09:07:43.723 253665 DEBUG nova.storage.rbd_utils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] rbd image c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:07:43 np0005532048 nova_compute[253661]: 2025-11-22 09:07:43.727 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12/disk.config c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:43 np0005532048 nova_compute[253661]: 2025-11-22 09:07:43.882 253665 DEBUG oslo_concurrency.processutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12/disk.config c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:43 np0005532048 nova_compute[253661]: 2025-11-22 09:07:43.883 253665 INFO nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Deleting local config drive /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12/disk.config because it was imported into RBD.#033[00m
Nov 22 04:07:43 np0005532048 systemd-machined[215941]: New machine qemu-10-instance-0000000a.
Nov 22 04:07:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:43 np0005532048 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Nov 22 04:07:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Nov 22 04:07:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Nov 22 04:07:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.488 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.504 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802464.5036063, c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.504 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] VM Resumed (Lifecycle Event)
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.507 253665 DEBUG nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.507 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.512 253665 INFO nova.virt.libvirt.driver [-] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Instance spawned successfully.
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.512 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.526 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.535 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.540 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.540 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.541 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.541 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.542 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.542 253665 DEBUG nova.virt.libvirt.driver [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.569 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.570 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802464.5041304, c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.570 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] VM Started (Lifecycle Event)
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.585 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.588 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.614 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.625 253665 INFO nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Took 3.26 seconds to spawn the instance on the hypervisor.
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.625 253665 DEBUG nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.679 253665 INFO nova.compute.manager [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Took 5.63 seconds to build instance.
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.698 253665 DEBUG oslo_concurrency.lockutils [None req-30c160f2-0fbd-4409-ab48-90ed5dea2c99 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.837 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "bfc23def-6d15-4b5e-959e-3165bc676f9c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.837 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "bfc23def-6d15-4b5e-959e-3165bc676f9c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.838 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "bfc23def-6d15-4b5e-959e-3165bc676f9c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.838 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "bfc23def-6d15-4b5e-959e-3165bc676f9c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.838 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "bfc23def-6d15-4b5e-959e-3165bc676f9c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.839 253665 INFO nova.compute.manager [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Terminating instance
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.840 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "refresh_cache-bfc23def-6d15-4b5e-959e-3165bc676f9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.841 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquired lock "refresh_cache-bfc23def-6d15-4b5e-959e-3165bc676f9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:07:44 np0005532048 nova_compute[253661]: 2025-11-22 09:07:44.841 253665 DEBUG nova.network.neutron [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:07:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 4 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 291 active+clean; 371 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 3.0 MiB/s wr, 100 op/s
Nov 22 04:07:45 np0005532048 nova_compute[253661]: 2025-11-22 09:07:45.247 253665 DEBUG nova.network.neutron [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:07:45 np0005532048 nova_compute[253661]: 2025-11-22 09:07:45.532 253665 DEBUG nova.compute.manager [None req-cdb11c55-ca2b-4b2a-9db1-4e681f4820f6 5ab5801c00a94ae58a2ee4d79237d36d 1695c5aed6564e9ca76c77cf59eec4b5 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:07:45 np0005532048 nova_compute[253661]: 2025-11-22 09:07:45.534 253665 INFO nova.compute.manager [None req-cdb11c55-ca2b-4b2a-9db1-4e681f4820f6 5ab5801c00a94ae58a2ee4d79237d36d 1695c5aed6564e9ca76c77cf59eec4b5 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Retrieving diagnostics
Nov 22 04:07:45 np0005532048 nova_compute[253661]: 2025-11-22 09:07:45.633 253665 DEBUG nova.network.neutron [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:07:45 np0005532048 nova_compute[253661]: 2025-11-22 09:07:45.649 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Releasing lock "refresh_cache-bfc23def-6d15-4b5e-959e-3165bc676f9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:07:45 np0005532048 nova_compute[253661]: 2025-11-22 09:07:45.650 253665 DEBUG nova.compute.manager [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:07:45 np0005532048 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 22 04:07:45 np0005532048 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 16.307s CPU time.
Nov 22 04:07:45 np0005532048 systemd-machined[215941]: Machine qemu-6-instance-00000006 terminated.
Nov 22 04:07:45 np0005532048 nova_compute[253661]: 2025-11-22 09:07:45.867 253665 INFO nova.virt.libvirt.driver [-] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Instance destroyed successfully.
Nov 22 04:07:45 np0005532048 nova_compute[253661]: 2025-11-22 09:07:45.868 253665 DEBUG nova.objects.instance [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lazy-loading 'resources' on Instance uuid bfc23def-6d15-4b5e-959e-3165bc676f9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:07:46 np0005532048 nova_compute[253661]: 2025-11-22 09:07:46.333 253665 INFO nova.virt.libvirt.driver [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Deleting instance files /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c_del
Nov 22 04:07:46 np0005532048 nova_compute[253661]: 2025-11-22 09:07:46.333 253665 INFO nova.virt.libvirt.driver [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Deletion of /var/lib/nova/instances/bfc23def-6d15-4b5e-959e-3165bc676f9c_del complete
Nov 22 04:07:46 np0005532048 nova_compute[253661]: 2025-11-22 09:07:46.407 253665 INFO nova.compute.manager [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Took 0.76 seconds to destroy the instance on the hypervisor.
Nov 22 04:07:46 np0005532048 nova_compute[253661]: 2025-11-22 09:07:46.408 253665 DEBUG oslo.service.loopingcall [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:07:46 np0005532048 nova_compute[253661]: 2025-11-22 09:07:46.409 253665 DEBUG nova.compute.manager [-] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:07:46 np0005532048 nova_compute[253661]: 2025-11-22 09:07:46.409 253665 DEBUG nova.network.neutron [-] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:07:46 np0005532048 nova_compute[253661]: 2025-11-22 09:07:46.808 253665 DEBUG nova.network.neutron [-] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:07:46 np0005532048 nova_compute[253661]: 2025-11-22 09:07:46.827 253665 DEBUG nova.network.neutron [-] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:07:46 np0005532048 nova_compute[253661]: 2025-11-22 09:07:46.847 253665 INFO nova.compute.manager [-] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Took 0.44 seconds to deallocate network for instance.
Nov 22 04:07:46 np0005532048 nova_compute[253661]: 2025-11-22 09:07:46.886 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:46 np0005532048 nova_compute[253661]: 2025-11-22 09:07:46.887 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:46 np0005532048 nova_compute[253661]: 2025-11-22 09:07:46.986 253665 DEBUG oslo_concurrency.processutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:07:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 4 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 291 active+clean; 302 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 714 KiB/s rd, 3.6 MiB/s wr, 244 op/s
Nov 22 04:07:47 np0005532048 nova_compute[253661]: 2025-11-22 09:07:47.055 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:07:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/266053923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:47 np0005532048 nova_compute[253661]: 2025-11-22 09:07:47.450 253665 DEBUG oslo_concurrency.processutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:07:47 np0005532048 nova_compute[253661]: 2025-11-22 09:07:47.457 253665 DEBUG nova.compute.provider_tree [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:07:47 np0005532048 nova_compute[253661]: 2025-11-22 09:07:47.474 253665 DEBUG nova.scheduler.client.report [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:07:47 np0005532048 nova_compute[253661]: 2025-11-22 09:07:47.493 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:47 np0005532048 nova_compute[253661]: 2025-11-22 09:07:47.533 253665 INFO nova.scheduler.client.report [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Deleted allocations for instance bfc23def-6d15-4b5e-959e-3165bc676f9c
Nov 22 04:07:47 np0005532048 nova_compute[253661]: 2025-11-22 09:07:47.587 253665 DEBUG oslo_concurrency.lockutils [None req-ec2e027b-c331-43ee-8af3-44a1fd3e7e95 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "bfc23def-6d15-4b5e-959e-3165bc676f9c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.203 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.204 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.205 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.205 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.206 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.210 253665 INFO nova.compute.manager [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Terminating instance
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.212 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "refresh_cache-4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.212 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquired lock "refresh_cache-4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.213 253665 DEBUG nova.network.neutron [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.372 253665 DEBUG nova.network.neutron [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.642 253665 DEBUG nova.network.neutron [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.661 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Releasing lock "refresh_cache-4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.666 253665 DEBUG nova.compute.manager [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.685 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802453.6846967, 1bb24315-1978-4dbf-a16d-5e7b84a25d17 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.686 253665 INFO nova.compute.manager [-] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] VM Stopped (Lifecycle Event)
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.709 253665 DEBUG nova.compute.manager [None req-1cbc500b-06b7-4579-a59b-f03a3383d7f7 - - - - - -] [instance: 1bb24315-1978-4dbf-a16d-5e7b84a25d17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.754 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:48 np0005532048 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 22 04:07:48 np0005532048 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 16.766s CPU time.
Nov 22 04:07:48 np0005532048 systemd-machined[215941]: Machine qemu-5-instance-00000005 terminated.
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.896 253665 INFO nova.virt.libvirt.driver [-] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Instance destroyed successfully.
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.897 253665 DEBUG nova.objects.instance [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lazy-loading 'resources' on Instance uuid 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:07:48 np0005532048 nova_compute[253661]: 2025-11-22 09:07:48.904 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Nov 22 04:07:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Nov 22 04:07:48 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Nov 22 04:07:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 391 op/s
Nov 22 04:07:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:07:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:07:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:07:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:07:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:07:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:07:49 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev dd3da25f-5328-45d4-ab4a-51280ad8173c does not exist
Nov 22 04:07:49 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 9b980c1e-f7bc-4260-b81a-c975727467d4 does not exist
Nov 22 04:07:49 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 452624e9-2f6c-458d-81d6-423d67f2b842 does not exist
Nov 22 04:07:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:07:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:07:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:07:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:07:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:07:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:07:49 np0005532048 nova_compute[253661]: 2025-11-22 09:07:49.442 253665 INFO nova.virt.libvirt.driver [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Deleting instance files /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_del
Nov 22 04:07:49 np0005532048 nova_compute[253661]: 2025-11-22 09:07:49.444 253665 INFO nova.virt.libvirt.driver [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Deletion of /var/lib/nova/instances/4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76_del complete
Nov 22 04:07:49 np0005532048 nova_compute[253661]: 2025-11-22 09:07:49.490 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:49 np0005532048 nova_compute[253661]: 2025-11-22 09:07:49.492 253665 INFO nova.compute.manager [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 22 04:07:49 np0005532048 nova_compute[253661]: 2025-11-22 09:07:49.492 253665 DEBUG oslo.service.loopingcall [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:07:49 np0005532048 nova_compute[253661]: 2025-11-22 09:07:49.492 253665 DEBUG nova.compute.manager [-] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:07:49 np0005532048 nova_compute[253661]: 2025-11-22 09:07:49.493 253665 DEBUG nova.network.neutron [-] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:07:49 np0005532048 nova_compute[253661]: 2025-11-22 09:07:49.795 253665 DEBUG nova.network.neutron [-] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:07:49 np0005532048 nova_compute[253661]: 2025-11-22 09:07:49.811 253665 DEBUG nova.network.neutron [-] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:07:49 np0005532048 nova_compute[253661]: 2025-11-22 09:07:49.823 253665 INFO nova.compute.manager [-] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Took 0.33 seconds to deallocate network for instance.
Nov 22 04:07:49 np0005532048 nova_compute[253661]: 2025-11-22 09:07:49.872 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:49 np0005532048 nova_compute[253661]: 2025-11-22 09:07:49.873 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:49 np0005532048 podman[275900]: 2025-11-22 09:07:49.917461727 +0000 UTC m=+0.048771505 container create 3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:07:49 np0005532048 nova_compute[253661]: 2025-11-22 09:07:49.926 253665 DEBUG oslo_concurrency.processutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:07:49 np0005532048 systemd[1]: Started libpod-conmon-3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617.scope.
Nov 22 04:07:49 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:07:49 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:07:49 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:07:49 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:07:49 np0005532048 podman[275900]: 2025-11-22 09:07:49.895734295 +0000 UTC m=+0.027044113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:07:50 np0005532048 podman[275900]: 2025-11-22 09:07:50.004303898 +0000 UTC m=+0.135613716 container init 3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 04:07:50 np0005532048 podman[275900]: 2025-11-22 09:07:50.012669589 +0000 UTC m=+0.143979377 container start 3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cartwright, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 04:07:50 np0005532048 podman[275900]: 2025-11-22 09:07:50.016008579 +0000 UTC m=+0.147318367 container attach 3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cartwright, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 04:07:50 np0005532048 heuristic_cartwright[275917]: 167 167
Nov 22 04:07:50 np0005532048 systemd[1]: libpod-3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617.scope: Deactivated successfully.
Nov 22 04:07:50 np0005532048 conmon[275917]: conmon 3bc118eb8fefedc4d2e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617.scope/container/memory.events
Nov 22 04:07:50 np0005532048 podman[275922]: 2025-11-22 09:07:50.064841605 +0000 UTC m=+0.027608195 container died 3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cartwright, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:07:50 np0005532048 systemd[1]: var-lib-containers-storage-overlay-49ae64f25c6ddcf4516db9e609730875bdffb956ec3f4aac8f30ac4cce5bd090-merged.mount: Deactivated successfully.
Nov 22 04:07:50 np0005532048 podman[275922]: 2025-11-22 09:07:50.104727595 +0000 UTC m=+0.067494185 container remove 3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 04:07:50 np0005532048 systemd[1]: libpod-conmon-3bc118eb8fefedc4d2e35f3d5628c517e69a22a8a09891799e2684b70cb7b617.scope: Deactivated successfully.
Nov 22 04:07:50 np0005532048 podman[275960]: 2025-11-22 09:07:50.302127997 +0000 UTC m=+0.050242421 container create 03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 04:07:50 np0005532048 systemd[1]: Started libpod-conmon-03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f.scope.
Nov 22 04:07:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:07:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2647677578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:50 np0005532048 podman[275960]: 2025-11-22 09:07:50.276415397 +0000 UTC m=+0.024529861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:07:50 np0005532048 nova_compute[253661]: 2025-11-22 09:07:50.378 253665 DEBUG oslo_concurrency.processutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:07:50 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:07:50 np0005532048 nova_compute[253661]: 2025-11-22 09:07:50.386 253665 DEBUG nova.compute.provider_tree [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:07:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d2078932f33785eadec67f5cca418630f870d462bd569687f73f9781ac96ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d2078932f33785eadec67f5cca418630f870d462bd569687f73f9781ac96ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d2078932f33785eadec67f5cca418630f870d462bd569687f73f9781ac96ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d2078932f33785eadec67f5cca418630f870d462bd569687f73f9781ac96ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d2078932f33785eadec67f5cca418630f870d462bd569687f73f9781ac96ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:50 np0005532048 nova_compute[253661]: 2025-11-22 09:07:50.400 253665 DEBUG nova.scheduler.client.report [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:07:50 np0005532048 podman[275960]: 2025-11-22 09:07:50.420507166 +0000 UTC m=+0.168621610 container init 03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 04:07:50 np0005532048 nova_compute[253661]: 2025-11-22 09:07:50.428 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:50 np0005532048 podman[275960]: 2025-11-22 09:07:50.428762355 +0000 UTC m=+0.176876769 container start 03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:07:50 np0005532048 podman[275960]: 2025-11-22 09:07:50.433275914 +0000 UTC m=+0.181390328 container attach 03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 04:07:50 np0005532048 nova_compute[253661]: 2025-11-22 09:07:50.454 253665 INFO nova.scheduler.client.report [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Deleted allocations for instance 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76
Nov 22 04:07:50 np0005532048 nova_compute[253661]: 2025-11-22 09:07:50.509 253665 DEBUG oslo_concurrency.lockutils [None req-4ef083e5-89e6-4d1a-86fb-f85c2da52d8c 62c2fa81e90346db80e713e8b110de6e 4d149c68c4874b3bbb2b6c134b8855e0 - - default default] Lock "4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 297 op/s
Nov 22 04:07:51 np0005532048 crazy_hodgkin[275977]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:07:51 np0005532048 crazy_hodgkin[275977]: --> relative data size: 1.0
Nov 22 04:07:51 np0005532048 crazy_hodgkin[275977]: --> All data devices are unavailable
Nov 22 04:07:51 np0005532048 systemd[1]: libpod-03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f.scope: Deactivated successfully.
Nov 22 04:07:51 np0005532048 podman[275960]: 2025-11-22 09:07:51.532446482 +0000 UTC m=+1.280560886 container died 03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:07:51 np0005532048 systemd[1]: libpod-03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f.scope: Consumed 1.031s CPU time.
Nov 22 04:07:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay-68d2078932f33785eadec67f5cca418630f870d462bd569687f73f9781ac96ba-merged.mount: Deactivated successfully.
Nov 22 04:07:51 np0005532048 podman[275960]: 2025-11-22 09:07:51.682689718 +0000 UTC m=+1.430804132 container remove 03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hodgkin, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 04:07:51 np0005532048 systemd[1]: libpod-conmon-03fd070489e830dda93bba9e67b7d87136a95c16c3384cce8765c1ac9523b88f.scope: Deactivated successfully.
Nov 22 04:07:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Nov 22 04:07:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Nov 22 04:07:52 np0005532048 nova_compute[253661]: 2025-11-22 09:07:52.007 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802457.0063968, 61ff3d94-226c-4991-af23-6da29d64dca1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:07:52 np0005532048 nova_compute[253661]: 2025-11-22 09:07:52.008 253665 INFO nova.compute.manager [-] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] VM Stopped (Lifecycle Event)
Nov 22 04:07:52 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Nov 22 04:07:52 np0005532048 nova_compute[253661]: 2025-11-22 09:07:52.036 253665 DEBUG nova.compute.manager [None req-ba2901fb-e9ce-434c-a0ce-d9f52fe2a11c - - - - - -] [instance: 61ff3d94-226c-4991-af23-6da29d64dca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:07:52 np0005532048 nova_compute[253661]: 2025-11-22 09:07:52.061 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:07:52
Nov 22 04:07:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:07:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:07:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'backups', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.meta']
Nov 22 04:07:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:07:52 np0005532048 podman[276160]: 2025-11-22 09:07:52.360105124 +0000 UTC m=+0.039591535 container create 12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:07:52 np0005532048 systemd[1]: Started libpod-conmon-12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1.scope.
Nov 22 04:07:52 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:07:52 np0005532048 podman[276160]: 2025-11-22 09:07:52.340753898 +0000 UTC m=+0.020240329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:07:52 np0005532048 podman[276160]: 2025-11-22 09:07:52.438039979 +0000 UTC m=+0.117526410 container init 12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 04:07:52 np0005532048 podman[276160]: 2025-11-22 09:07:52.446493173 +0000 UTC m=+0.125979574 container start 12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:07:52 np0005532048 podman[276160]: 2025-11-22 09:07:52.450152801 +0000 UTC m=+0.129639212 container attach 12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:07:52 np0005532048 infallible_goldwasser[276176]: 167 167
Nov 22 04:07:52 np0005532048 systemd[1]: libpod-12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1.scope: Deactivated successfully.
Nov 22 04:07:52 np0005532048 podman[276160]: 2025-11-22 09:07:52.453214755 +0000 UTC m=+0.132701166 container died 12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 04:07:52 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f96ea10e853778f795653a9d93897c21455ace8c4fde5fcddc10de2410fe1176-merged.mount: Deactivated successfully.
Nov 22 04:07:52 np0005532048 podman[276160]: 2025-11-22 09:07:52.495495072 +0000 UTC m=+0.174981473 container remove 12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 04:07:52 np0005532048 systemd[1]: libpod-conmon-12ae550638531fee6dfa01e8470b014dd75b014cdf3f74602a6238c03918f2b1.scope: Deactivated successfully.
Nov 22 04:07:52 np0005532048 podman[276200]: 2025-11-22 09:07:52.662077262 +0000 UTC m=+0.047453503 container create 74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 04:07:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:07:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:07:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:07:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:07:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:07:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:07:52 np0005532048 systemd[1]: Started libpod-conmon-74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e.scope.
Nov 22 04:07:52 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:07:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd136868538d8f66bec4c26342122bdb8abd0a740c93d99778d9425b59fa9df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:52 np0005532048 podman[276200]: 2025-11-22 09:07:52.640465872 +0000 UTC m=+0.025842133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:07:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd136868538d8f66bec4c26342122bdb8abd0a740c93d99778d9425b59fa9df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd136868538d8f66bec4c26342122bdb8abd0a740c93d99778d9425b59fa9df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd136868538d8f66bec4c26342122bdb8abd0a740c93d99778d9425b59fa9df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:52 np0005532048 podman[276200]: 2025-11-22 09:07:52.792509622 +0000 UTC m=+0.177885883 container init 74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:07:52 np0005532048 podman[276200]: 2025-11-22 09:07:52.799866339 +0000 UTC m=+0.185242580 container start 74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_borg, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:07:52 np0005532048 podman[276200]: 2025-11-22 09:07:52.830536188 +0000 UTC m=+0.215912519 container attach 74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 04:07:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Nov 22 04:07:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Nov 22 04:07:53 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Nov 22 04:07:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 144 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.8 KiB/s wr, 191 op/s
Nov 22 04:07:53 np0005532048 romantic_borg[276215]: {
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:    "0": [
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:        {
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "devices": [
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "/dev/loop3"
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            ],
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "lv_name": "ceph_lv0",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "lv_size": "21470642176",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "name": "ceph_lv0",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "tags": {
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.cluster_name": "ceph",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.crush_device_class": "",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.encrypted": "0",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.osd_id": "0",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.type": "block",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.vdo": "0"
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            },
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "type": "block",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "vg_name": "ceph_vg0"
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:        }
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:    ],
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:    "1": [
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:        {
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "devices": [
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "/dev/loop4"
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            ],
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "lv_name": "ceph_lv1",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "lv_size": "21470642176",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "name": "ceph_lv1",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "tags": {
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.cluster_name": "ceph",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.crush_device_class": "",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.encrypted": "0",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.osd_id": "1",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.type": "block",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.vdo": "0"
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            },
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "type": "block",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "vg_name": "ceph_vg1"
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:        }
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:    ],
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:    "2": [
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:        {
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "devices": [
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "/dev/loop5"
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            ],
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "lv_name": "ceph_lv2",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "lv_size": "21470642176",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "name": "ceph_lv2",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "tags": {
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.cluster_name": "ceph",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.crush_device_class": "",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.encrypted": "0",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.osd_id": "2",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.type": "block",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:                "ceph.vdo": "0"
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            },
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "type": "block",
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:            "vg_name": "ceph_vg2"
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:        }
Nov 22 04:07:53 np0005532048 romantic_borg[276215]:    ]
Nov 22 04:07:53 np0005532048 romantic_borg[276215]: }
Nov 22 04:07:53 np0005532048 systemd[1]: libpod-74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e.scope: Deactivated successfully.
Nov 22 04:07:53 np0005532048 podman[276200]: 2025-11-22 09:07:53.674040941 +0000 UTC m=+1.059417192 container died 74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 04:07:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8cd136868538d8f66bec4c26342122bdb8abd0a740c93d99778d9425b59fa9df-merged.mount: Deactivated successfully.
Nov 22 04:07:53 np0005532048 podman[276200]: 2025-11-22 09:07:53.734729941 +0000 UTC m=+1.120106182 container remove 74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_borg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 04:07:53 np0005532048 systemd[1]: libpod-conmon-74691c874b5d1bc294e4518229a8bd11188d503ca513b69a9cf8338aa4f4bb2e.scope: Deactivated successfully.
Nov 22 04:07:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Nov 22 04:07:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Nov 22 04:07:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Nov 22 04:07:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:07:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:07:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:07:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:07:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:07:54 np0005532048 nova_compute[253661]: 2025-11-22 09:07:54.490 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:54 np0005532048 podman[276380]: 2025-11-22 09:07:54.493992778 +0000 UTC m=+0.076292488 container create a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hypatia, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:07:54 np0005532048 podman[276380]: 2025-11-22 09:07:54.454440825 +0000 UTC m=+0.036740555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:07:54 np0005532048 systemd[1]: Started libpod-conmon-a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896.scope.
Nov 22 04:07:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:07:54 np0005532048 podman[276380]: 2025-11-22 09:07:54.73337591 +0000 UTC m=+0.315675620 container init a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 04:07:54 np0005532048 podman[276380]: 2025-11-22 09:07:54.742690924 +0000 UTC m=+0.324990634 container start a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hypatia, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:07:54 np0005532048 brave_hypatia[276396]: 167 167
Nov 22 04:07:54 np0005532048 systemd[1]: libpod-a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896.scope: Deactivated successfully.
Nov 22 04:07:54 np0005532048 podman[276380]: 2025-11-22 09:07:54.807875413 +0000 UTC m=+0.390175133 container attach a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 04:07:54 np0005532048 podman[276380]: 2025-11-22 09:07:54.808727994 +0000 UTC m=+0.391027704 container died a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:07:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:07:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:07:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:07:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:07:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:07:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 5.2 KiB/s wr, 103 op/s
Nov 22 04:07:55 np0005532048 systemd[1]: var-lib-containers-storage-overlay-47169f60c1609004754af649dc08bc719fbb0f0a3a6348b48a8855774d3494b3-merged.mount: Deactivated successfully.
Nov 22 04:07:55 np0005532048 podman[276380]: 2025-11-22 09:07:55.336413675 +0000 UTC m=+0.918713425 container remove a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hypatia, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:07:55 np0005532048 systemd[1]: libpod-conmon-a65b8d184f62e3c1fb1fb22724719c0d917da7df3bf040eb5cfe8afea3383896.scope: Deactivated successfully.
Nov 22 04:07:55 np0005532048 podman[276420]: 2025-11-22 09:07:55.499368558 +0000 UTC m=+0.024923781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:07:55 np0005532048 podman[276420]: 2025-11-22 09:07:55.610022151 +0000 UTC m=+0.135577354 container create 33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_maxwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 04:07:55 np0005532048 systemd[1]: Started libpod-conmon-33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3.scope.
Nov 22 04:07:55 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:07:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e18b588db02dbf55179879cc03cf27ce915098e5225367111c407d0af07da12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e18b588db02dbf55179879cc03cf27ce915098e5225367111c407d0af07da12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e18b588db02dbf55179879cc03cf27ce915098e5225367111c407d0af07da12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e18b588db02dbf55179879cc03cf27ce915098e5225367111c407d0af07da12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:55 np0005532048 podman[276420]: 2025-11-22 09:07:55.703947252 +0000 UTC m=+0.229502555 container init 33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 04:07:55 np0005532048 podman[276420]: 2025-11-22 09:07:55.713871611 +0000 UTC m=+0.239426814 container start 33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:07:55 np0005532048 podman[276420]: 2025-11-22 09:07:55.718288897 +0000 UTC m=+0.243844310 container attach 33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:07:56 np0005532048 nova_compute[253661]: 2025-11-22 09:07:55.999 253665 DEBUG nova.compute.manager [None req-eea21a1d-e179-432d-9b9e-17292723997b 5ab5801c00a94ae58a2ee4d79237d36d 1695c5aed6564e9ca76c77cf59eec4b5 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:07:56 np0005532048 nova_compute[253661]: 2025-11-22 09:07:56.009 253665 INFO nova.compute.manager [None req-eea21a1d-e179-432d-9b9e-17292723997b 5ab5801c00a94ae58a2ee4d79237d36d 1695c5aed6564e9ca76c77cf59eec4b5 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Retrieving diagnostics#033[00m
Nov 22 04:07:56 np0005532048 nova_compute[253661]: 2025-11-22 09:07:56.308 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquiring lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:56 np0005532048 nova_compute[253661]: 2025-11-22 09:07:56.309 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:56 np0005532048 nova_compute[253661]: 2025-11-22 09:07:56.309 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquiring lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:56 np0005532048 nova_compute[253661]: 2025-11-22 09:07:56.309 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:56 np0005532048 nova_compute[253661]: 2025-11-22 09:07:56.309 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:56 np0005532048 nova_compute[253661]: 2025-11-22 09:07:56.311 253665 INFO nova.compute.manager [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Terminating instance#033[00m
Nov 22 04:07:56 np0005532048 nova_compute[253661]: 2025-11-22 09:07:56.311 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquiring lock "refresh_cache-c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:07:56 np0005532048 nova_compute[253661]: 2025-11-22 09:07:56.312 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquired lock "refresh_cache-c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:07:56 np0005532048 nova_compute[253661]: 2025-11-22 09:07:56.312 253665 DEBUG nova.network.neutron [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]: {
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:        "osd_id": 1,
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:        "type": "bluestore"
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:    },
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:        "osd_id": 0,
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:        "type": "bluestore"
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:    },
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:        "osd_id": 2,
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:        "type": "bluestore"
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]:    }
Nov 22 04:07:56 np0005532048 condescending_maxwell[276437]: }
Nov 22 04:07:56 np0005532048 systemd[1]: libpod-33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3.scope: Deactivated successfully.
Nov 22 04:07:56 np0005532048 podman[276420]: 2025-11-22 09:07:56.751052157 +0000 UTC m=+1.276607370 container died 33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:07:56 np0005532048 systemd[1]: libpod-33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3.scope: Consumed 1.011s CPU time.
Nov 22 04:07:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2e18b588db02dbf55179879cc03cf27ce915098e5225367111c407d0af07da12-merged.mount: Deactivated successfully.
Nov 22 04:07:56 np0005532048 nova_compute[253661]: 2025-11-22 09:07:56.802 253665 DEBUG nova.network.neutron [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:07:56 np0005532048 podman[276420]: 2025-11-22 09:07:56.806125352 +0000 UTC m=+1.331680555 container remove 33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 22 04:07:56 np0005532048 systemd[1]: libpod-conmon-33bdec8bedfb2f20f81ebe6b9904f62f89907ab64defdbbd6e3024f4464a29c3.scope: Deactivated successfully.
Nov 22 04:07:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:07:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:07:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:07:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:07:56 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 193e80e1-8cab-4d30-bf6c-bbc38c84334c does not exist
Nov 22 04:07:56 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev b0d2d8f2-b678-4a89-b8ad-0da17f43e214 does not exist
Nov 22 04:07:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 96 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 1.1 MiB/s wr, 183 op/s
Nov 22 04:07:57 np0005532048 nova_compute[253661]: 2025-11-22 09:07:57.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:07:57 np0005532048 nova_compute[253661]: 2025-11-22 09:07:57.168 253665 DEBUG nova.network.neutron [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:07:57 np0005532048 nova_compute[253661]: 2025-11-22 09:07:57.183 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Releasing lock "refresh_cache-c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:07:57 np0005532048 nova_compute[253661]: 2025-11-22 09:07:57.184 253665 DEBUG nova.compute.manager [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:07:57 np0005532048 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 22 04:07:57 np0005532048 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 12.469s CPU time.
Nov 22 04:07:57 np0005532048 systemd-machined[215941]: Machine qemu-10-instance-0000000a terminated.
Nov 22 04:07:57 np0005532048 nova_compute[253661]: 2025-11-22 09:07:57.413 253665 INFO nova.virt.libvirt.driver [-] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Instance destroyed successfully.#033[00m
Nov 22 04:07:57 np0005532048 nova_compute[253661]: 2025-11-22 09:07:57.415 253665 DEBUG nova.objects.instance [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lazy-loading 'resources' on Instance uuid c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:07:57 np0005532048 nova_compute[253661]: 2025-11-22 09:07:57.821 253665 INFO nova.virt.libvirt.driver [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Deleting instance files /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_del#033[00m
Nov 22 04:07:57 np0005532048 nova_compute[253661]: 2025-11-22 09:07:57.822 253665 INFO nova.virt.libvirt.driver [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Deletion of /var/lib/nova/instances/c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12_del complete#033[00m
Nov 22 04:07:57 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:07:57 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:07:57 np0005532048 nova_compute[253661]: 2025-11-22 09:07:57.893 253665 INFO nova.compute.manager [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:07:57 np0005532048 nova_compute[253661]: 2025-11-22 09:07:57.895 253665 DEBUG oslo.service.loopingcall [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:07:57 np0005532048 nova_compute[253661]: 2025-11-22 09:07:57.896 253665 DEBUG nova.compute.manager [-] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:07:57 np0005532048 nova_compute[253661]: 2025-11-22 09:07:57.897 253665 DEBUG nova.network.neutron [-] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:07:58 np0005532048 nova_compute[253661]: 2025-11-22 09:07:58.071 253665 DEBUG nova.network.neutron [-] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:07:58 np0005532048 nova_compute[253661]: 2025-11-22 09:07:58.086 253665 DEBUG nova.network.neutron [-] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:07:58 np0005532048 nova_compute[253661]: 2025-11-22 09:07:58.097 253665 INFO nova.compute.manager [-] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Took 0.20 seconds to deallocate network for instance.#033[00m
Nov 22 04:07:58 np0005532048 nova_compute[253661]: 2025-11-22 09:07:58.151 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:07:58 np0005532048 nova_compute[253661]: 2025-11-22 09:07:58.151 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:07:58 np0005532048 nova_compute[253661]: 2025-11-22 09:07:58.196 253665 DEBUG oslo_concurrency.processutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:07:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:07:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2690782661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:58 np0005532048 nova_compute[253661]: 2025-11-22 09:07:58.645 253665 DEBUG oslo_concurrency.processutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:07:58 np0005532048 nova_compute[253661]: 2025-11-22 09:07:58.653 253665 DEBUG nova.compute.provider_tree [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:07:58 np0005532048 nova_compute[253661]: 2025-11-22 09:07:58.669 253665 DEBUG nova.scheduler.client.report [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:07:58 np0005532048 nova_compute[253661]: 2025-11-22 09:07:58.690 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:58 np0005532048 nova_compute[253661]: 2025-11-22 09:07:58.725 253665 INFO nova.scheduler.client.report [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Deleted allocations for instance c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12#033[00m
Nov 22 04:07:58 np0005532048 nova_compute[253661]: 2025-11-22 09:07:58.786 253665 DEBUG oslo_concurrency.lockutils [None req-7ce3b5bb-359c-48d3-82dd-e0234d10470b 9b2aa984fe7e4bbbab17fc76f5d39990 c759d960bd994160acfb1cdfbe9858c8 - - default default] Lock "c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.478s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:07:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Nov 22 04:07:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Nov 22 04:07:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Nov 22 04:07:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 57 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 661 KiB/s rd, 4.1 MiB/s wr, 264 op/s
Nov 22 04:07:59 np0005532048 nova_compute[253661]: 2025-11-22 09:07:59.493 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:00 np0005532048 nova_compute[253661]: 2025-11-22 09:08:00.866 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802465.8661232, bfc23def-6d15-4b5e-959e-3165bc676f9c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:08:00 np0005532048 nova_compute[253661]: 2025-11-22 09:08:00.867 253665 INFO nova.compute.manager [-] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:08:00 np0005532048 nova_compute[253661]: 2025-11-22 09:08:00.884 253665 DEBUG nova.compute.manager [None req-a9870ad6-019c-4059-85ab-41f55a12bd79 - - - - - -] [instance: bfc23def-6d15-4b5e-959e-3165bc676f9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:08:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 57 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 497 KiB/s rd, 3.1 MiB/s wr, 198 op/s
Nov 22 04:08:02 np0005532048 nova_compute[253661]: 2025-11-22 09:08:02.066 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0001601849929999349 of space, bias 1.0, pg target 0.04805549789998047 quantized to 32 (current 32)
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:08:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 444 KiB/s rd, 2.7 MiB/s wr, 151 op/s
Nov 22 04:08:03 np0005532048 nova_compute[253661]: 2025-11-22 09:08:03.894 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802468.892703, 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:08:03 np0005532048 nova_compute[253661]: 2025-11-22 09:08:03.894 253665 INFO nova.compute.manager [-] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:08:03 np0005532048 nova_compute[253661]: 2025-11-22 09:08:03.914 253665 DEBUG nova.compute.manager [None req-99e177d7-0d53-4d81-b6e9-de1a3d164b35 - - - - - -] [instance: 4de0c4e0-6b2a-4a70-a14d-f7da77fd3a76] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:08:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Nov 22 04:08:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Nov 22 04:08:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Nov 22 04:08:04 np0005532048 nova_compute[253661]: 2025-11-22 09:08:04.495 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 451 KiB/s rd, 2.2 MiB/s wr, 111 op/s
Nov 22 04:08:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 15 KiB/s wr, 30 op/s
Nov 22 04:08:07 np0005532048 nova_compute[253661]: 2025-11-22 09:08:07.070 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:07 np0005532048 nova_compute[253661]: 2025-11-22 09:08:07.411 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquiring lock "36e46542-ccae-4acd-9191-80f54d6bc694" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:08:07 np0005532048 nova_compute[253661]: 2025-11-22 09:08:07.412 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "36e46542-ccae-4acd-9191-80f54d6bc694" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:08:07 np0005532048 nova_compute[253661]: 2025-11-22 09:08:07.444 253665 DEBUG nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:08:07 np0005532048 nova_compute[253661]: 2025-11-22 09:08:07.525 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:08:07 np0005532048 nova_compute[253661]: 2025-11-22 09:08:07.526 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:08:07 np0005532048 nova_compute[253661]: 2025-11-22 09:08:07.532 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:08:07 np0005532048 nova_compute[253661]: 2025-11-22 09:08:07.533 253665 INFO nova.compute.claims [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:08:07 np0005532048 nova_compute[253661]: 2025-11-22 09:08:07.650 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:08:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:08:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4195264242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.133 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.139 253665 DEBUG nova.compute.provider_tree [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.340 253665 DEBUG nova.scheduler.client.report [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.384 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.385 253665 DEBUG nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.464 253665 DEBUG nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.464 253665 DEBUG nova.network.neutron [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.511 253665 INFO nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.573 253665 DEBUG nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.707 253665 DEBUG nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.708 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.709 253665 INFO nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Creating image(s)
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.731 253665 DEBUG nova.storage.rbd_utils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] rbd image 36e46542-ccae-4acd-9191-80f54d6bc694_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.754 253665 DEBUG nova.storage.rbd_utils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] rbd image 36e46542-ccae-4acd-9191-80f54d6bc694_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.777 253665 DEBUG nova.storage.rbd_utils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] rbd image 36e46542-ccae-4acd-9191-80f54d6bc694_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.782 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.849 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.850 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.850 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.851 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.873 253665 DEBUG nova.storage.rbd_utils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] rbd image 36e46542-ccae-4acd-9191-80f54d6bc694_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:08:08 np0005532048 nova_compute[253661]: 2025-11-22 09:08:08.878 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 36e46542-ccae-4acd-9191-80f54d6bc694_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:08:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 12 KiB/s wr, 24 op/s
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.063 253665 DEBUG nova.network.neutron [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.065 253665 DEBUG nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.241 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 36e46542-ccae-4acd-9191-80f54d6bc694_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.293 253665 DEBUG nova.storage.rbd_utils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] resizing rbd image 36e46542-ccae-4acd-9191-80f54d6bc694_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.398 253665 DEBUG nova.objects.instance [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lazy-loading 'migration_context' on Instance uuid 36e46542-ccae-4acd-9191-80f54d6bc694 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.413 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.413 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Ensure instance console log exists: /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.414 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.414 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.414 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.416 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.420 253665 WARNING nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.424 253665 DEBUG nova.virt.libvirt.host [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.425 253665 DEBUG nova.virt.libvirt.host [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.427 253665 DEBUG nova.virt.libvirt.host [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.428 253665 DEBUG nova.virt.libvirt.host [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.428 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.428 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.429 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.429 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.429 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.429 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.429 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.429 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.430 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.430 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.430 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.430 253665 DEBUG nova.virt.hardware [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.433 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.496 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.511 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "c233bbff-b2e9-442f-818d-e8487dee1c3e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.511 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "c233bbff-b2e9-442f-818d-e8487dee1c3e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.597 253665 DEBUG nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.830 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.831 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.839 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.839 253665 INFO nova.compute.claims [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:08:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:08:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1992786895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.866 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.887 253665 DEBUG nova.storage.rbd_utils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] rbd image 36e46542-ccae-4acd-9191-80f54d6bc694_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:08:09 np0005532048 nova_compute[253661]: 2025-11-22 09:08:09.891 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.202 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:08:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:08:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1181363758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.328 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.331 253665 DEBUG nova.objects.instance [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lazy-loading 'pci_devices' on Instance uuid 36e46542-ccae-4acd-9191-80f54d6bc694 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.345 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  <uuid>36e46542-ccae-4acd-9191-80f54d6bc694</uuid>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  <name>instance-0000000b</name>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerDiagnosticsNegativeTest-server-422854885</nova:name>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:08:09</nova:creationTime>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:08:10 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:        <nova:user uuid="543b664d1bde44719b208e5f3e6902f1">tempest-ServerDiagnosticsNegativeTest-832003805-project-member</nova:user>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:        <nova:project uuid="f6e0addc8d86425c9ba676b31319cd79">tempest-ServerDiagnosticsNegativeTest-832003805</nova:project>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <entry name="serial">36e46542-ccae-4acd-9191-80f54d6bc694</entry>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <entry name="uuid">36e46542-ccae-4acd-9191-80f54d6bc694</entry>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/36e46542-ccae-4acd-9191-80f54d6bc694_disk">
Nov 22 04:08:10 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:08:10 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/36e46542-ccae-4acd-9191-80f54d6bc694_disk.config">
Nov 22 04:08:10 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:08:10 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694/console.log" append="off"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:08:10 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:08:10 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:08:10 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:08:10 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:08:10 np0005532048 podman[276826]: 2025-11-22 09:08:10.381207451 +0000 UTC m=+0.066860871 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.406 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.406 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.407 253665 INFO nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Using config drive
Nov 22 04:08:10 np0005532048 podman[276827]: 2025-11-22 09:08:10.41569502 +0000 UTC m=+0.097470356 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.429 253665 DEBUG nova.storage.rbd_utils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] rbd image 36e46542-ccae-4acd-9191-80f54d6bc694_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.564 253665 INFO nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Creating config drive at /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694/disk.config
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.569 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xnsplv3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:08:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:08:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040856504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.654 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.660 253665 DEBUG nova.compute.provider_tree [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.677 253665 DEBUG nova.scheduler.client.report [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.700 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xnsplv3" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.726 253665 DEBUG nova.storage.rbd_utils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] rbd image 36e46542-ccae-4acd-9191-80f54d6bc694_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.731 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694/disk.config 36e46542-ccae-4acd-9191-80f54d6bc694_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.753 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.755 253665 DEBUG nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.847 253665 DEBUG nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.868 253665 INFO nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.916 253665 DEBUG oslo_concurrency.processutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694/disk.config 36e46542-ccae-4acd-9191-80f54d6bc694_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.917 253665 INFO nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Deleting local config drive /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694/disk.config because it was imported into RBD.
Nov 22 04:08:10 np0005532048 nova_compute[253661]: 2025-11-22 09:08:10.972 253665 DEBUG nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:08:10 np0005532048 systemd-machined[215941]: New machine qemu-11-instance-0000000b.
Nov 22 04:08:11 np0005532048 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.037 253665 DEBUG oslo_concurrency.processutils [None req-6f69d831-742a-406a-8431-d31085746923 c4e39f8ab7c04e3f8670f5bbc0bc8dc3 1f91f31d3b434312a707fcb491e8cf89 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:08:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 12 KiB/s wr, 24 op/s
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.078 253665 DEBUG oslo_concurrency.processutils [None req-6f69d831-742a-406a-8431-d31085746923 c4e39f8ab7c04e3f8670f5bbc0bc8dc3 1f91f31d3b434312a707fcb491e8cf89 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.241 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.242 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.313 253665 DEBUG nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.315 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.316 253665 INFO nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating image(s)
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.338 253665 DEBUG nova.storage.rbd_utils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.363 253665 DEBUG nova.storage.rbd_utils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.385 253665 DEBUG nova.storage.rbd_utils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.389 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.450 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.451 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.452 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.452 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.470 253665 DEBUG nova.storage.rbd_utils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.472 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c233bbff-b2e9-442f-818d-e8487dee1c3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.894 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c233bbff-b2e9-442f-818d-e8487dee1c3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:11 np0005532048 nova_compute[253661]: 2025-11-22 09:08:11.961 253665 DEBUG nova.storage.rbd_utils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] resizing rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.072 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.094 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802492.0942516, 36e46542-ccae-4acd-9191-80f54d6bc694 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.095 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.098 253665 DEBUG nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.099 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.103 253665 INFO nova.virt.libvirt.driver [-] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Instance spawned successfully.#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.104 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.135 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.143 253665 DEBUG nova.objects.instance [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'migration_context' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.147 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.150 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.151 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.151 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.152 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.152 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.153 253665 DEBUG nova.virt.libvirt.driver [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.158 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.159 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Ensure instance console log exists: /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.159 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.160 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.160 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.161 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.164 253665 WARNING nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.168 253665 DEBUG nova.virt.libvirt.host [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.169 253665 DEBUG nova.virt.libvirt.host [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.173 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.173 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802492.0978801, 36e46542-ccae-4acd-9191-80f54d6bc694 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.173 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] VM Started (Lifecycle Event)#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.175 253665 DEBUG nova.virt.libvirt.host [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.175 253665 DEBUG nova.virt.libvirt.host [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.175 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.176 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.176 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.176 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.177 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.177 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.177 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.178 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.178 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.178 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.178 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.179 253665 DEBUG nova.virt.hardware [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.181 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.204 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.209 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.223 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.229 253665 INFO nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Took 3.52 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.230 253665 DEBUG nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.239 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:08:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:08:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/970811612' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:08:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:08:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/970811612' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.294 253665 INFO nova.compute.manager [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Took 4.80 seconds to build instance.#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.320 253665 DEBUG oslo_concurrency.lockutils [None req-0654e88d-eafa-492a-bc03-a630b67e42b4 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "36e46542-ccae-4acd-9191-80f54d6bc694" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.410 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802477.4100316, c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.412 253665 INFO nova.compute.manager [-] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.431 253665 DEBUG nova.compute.manager [None req-dc24ba31-803e-41fa-b461-d15743ad6ae6 - - - - - -] [instance: c84e0ac3-7aa6-47d4-bff8-f1ba40b95e12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:08:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:08:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3681831094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.651 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.677 253665 DEBUG nova.storage.rbd_utils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:12 np0005532048 nova_compute[253661]: 2025-11-22 09:08:12.682 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 53 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 852 KiB/s wr, 26 op/s
Nov 22 04:08:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:08:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1966798618' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:08:13 np0005532048 nova_compute[253661]: 2025-11-22 09:08:13.161 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:13 np0005532048 nova_compute[253661]: 2025-11-22 09:08:13.164 253665 DEBUG nova.objects.instance [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'pci_devices' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:08:13 np0005532048 nova_compute[253661]: 2025-11-22 09:08:13.176 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  <uuid>c233bbff-b2e9-442f-818d-e8487dee1c3e</uuid>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  <name>instance-0000000c</name>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersAdmin275Test-server-1195148279</nova:name>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:08:12</nova:creationTime>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:08:13 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:        <nova:user uuid="db3c9e2649dc463a894636918b1536f6">tempest-ServersAdmin275Test-461797968-project-member</nova:user>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:        <nova:project uuid="452c52561ee04e93bc47895d639c9745">tempest-ServersAdmin275Test-461797968</nova:project>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <entry name="serial">c233bbff-b2e9-442f-818d-e8487dee1c3e</entry>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <entry name="uuid">c233bbff-b2e9-442f-818d-e8487dee1c3e</entry>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/c233bbff-b2e9-442f-818d-e8487dee1c3e_disk">
Nov 22 04:08:13 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:08:13 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config">
Nov 22 04:08:13 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:08:13 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/console.log" append="off"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:08:13 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:08:13 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:08:13 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:08:13 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:08:13 np0005532048 nova_compute[253661]: 2025-11-22 09:08:13.226 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:08:13 np0005532048 nova_compute[253661]: 2025-11-22 09:08:13.226 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:08:13 np0005532048 nova_compute[253661]: 2025-11-22 09:08:13.227 253665 INFO nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Using config drive#033[00m
Nov 22 04:08:13 np0005532048 nova_compute[253661]: 2025-11-22 09:08:13.246 253665 DEBUG nova.storage.rbd_utils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:13 np0005532048 nova_compute[253661]: 2025-11-22 09:08:13.805 253665 INFO nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating config drive at /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config#033[00m
Nov 22 04:08:13 np0005532048 nova_compute[253661]: 2025-11-22 09:08:13.811 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5b3pgbpm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:13 np0005532048 nova_compute[253661]: 2025-11-22 09:08:13.951 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5b3pgbpm" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:13 np0005532048 nova_compute[253661]: 2025-11-22 09:08:13.974 253665 DEBUG nova.storage.rbd_utils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:13 np0005532048 nova_compute[253661]: 2025-11-22 09:08:13.977 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.124 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquiring lock "36e46542-ccae-4acd-9191-80f54d6bc694" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.125 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "36e46542-ccae-4acd-9191-80f54d6bc694" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.125 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquiring lock "36e46542-ccae-4acd-9191-80f54d6bc694-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.125 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "36e46542-ccae-4acd-9191-80f54d6bc694-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.125 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "36e46542-ccae-4acd-9191-80f54d6bc694-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.127 253665 INFO nova.compute.manager [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Terminating instance#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.128 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquiring lock "refresh_cache-36e46542-ccae-4acd-9191-80f54d6bc694" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.128 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquired lock "refresh_cache-36e46542-ccae-4acd-9191-80f54d6bc694" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.128 253665 DEBUG nova.network.neutron [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.254 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.254 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.319 253665 DEBUG nova.network.neutron [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:08:14 np0005532048 podman[277285]: 2025-11-22 09:08:14.430760726 +0000 UTC m=+0.117283654 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.498 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.605 253665 DEBUG oslo_concurrency.processutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.606 253665 INFO nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deleting local config drive /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config because it was imported into RBD.#033[00m
Nov 22 04:08:14 np0005532048 systemd-machined[215941]: New machine qemu-12-instance-0000000c.
Nov 22 04:08:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:08:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/44051744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:08:14 np0005532048 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.713 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.798 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.799 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.803 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.803 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.959 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.960 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4452MB free_disk=59.98017120361328GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.960 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:08:14 np0005532048 nova_compute[253661]: 2025-11-22 09:08:14.960 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.042 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 36e46542-ccae-4acd-9191-80f54d6bc694 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.042 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance c233bbff-b2e9-442f-818d-e8487dee1c3e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.043 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.043 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:08:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 108 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 105 op/s
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.095 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.209 253665 DEBUG nova.network.neutron [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.228 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Releasing lock "refresh_cache-36e46542-ccae-4acd-9191-80f54d6bc694" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.229 253665 DEBUG nova.compute.manager [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.357 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802495.3567834, c233bbff-b2e9-442f-818d-e8487dee1c3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.357 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.360 253665 DEBUG nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.360 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.365 253665 INFO nova.virt.libvirt.driver [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance spawned successfully.#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.366 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.379 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.386 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.390 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.390 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.391 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.391 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.392 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.392 253665 DEBUG nova.virt.libvirt.driver [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.417 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.417 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802495.357988, c233bbff-b2e9-442f-818d-e8487dee1c3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.417 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] VM Started (Lifecycle Event)#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.434 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.436 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.453 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:08:15 np0005532048 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 22 04:08:15 np0005532048 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 4.241s CPU time.
Nov 22 04:08:15 np0005532048 systemd-machined[215941]: Machine qemu-11-instance-0000000b terminated.
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.475 253665 INFO nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Took 4.16 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.475 253665 DEBUG nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.528 253665 INFO nova.compute.manager [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Took 5.85 seconds to build instance.#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.546 253665 DEBUG oslo_concurrency.lockutils [None req-f71856d5-ca1e-4c35-ad10-265cd478f2be db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "c233bbff-b2e9-442f-818d-e8487dee1c3e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:08:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:08:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/409087369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.610 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.617 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.630 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.655 253665 INFO nova.virt.libvirt.driver [-] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Instance destroyed successfully.#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.656 253665 DEBUG nova.objects.instance [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lazy-loading 'resources' on Instance uuid 36e46542-ccae-4acd-9191-80f54d6bc694 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.701 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:08:15 np0005532048 nova_compute[253661]: 2025-11-22 09:08:15.701 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:08:16 np0005532048 nova_compute[253661]: 2025-11-22 09:08:16.693 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:08:16 np0005532048 nova_compute[253661]: 2025-11-22 09:08:16.694 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:08:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 156 op/s
Nov 22 04:08:17 np0005532048 nova_compute[253661]: 2025-11-22 09:08:17.075 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:17 np0005532048 nova_compute[253661]: 2025-11-22 09:08:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:08:17 np0005532048 nova_compute[253661]: 2025-11-22 09:08:17.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:08:17 np0005532048 nova_compute[253661]: 2025-11-22 09:08:17.378 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:08:17 np0005532048 nova_compute[253661]: 2025-11-22 09:08:17.378 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:08:17 np0005532048 nova_compute[253661]: 2025-11-22 09:08:17.378 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:08:17 np0005532048 nova_compute[253661]: 2025-11-22 09:08:17.379 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:08:17 np0005532048 nova_compute[253661]: 2025-11-22 09:08:17.379 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:08:17 np0005532048 nova_compute[253661]: 2025-11-22 09:08:17.379 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 22 04:08:17 np0005532048 nova_compute[253661]: 2025-11-22 09:08:17.409 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 22 04:08:18 np0005532048 nova_compute[253661]: 2025-11-22 09:08:18.507 253665 INFO nova.compute.manager [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Rebuilding instance#033[00m
Nov 22 04:08:18 np0005532048 nova_compute[253661]: 2025-11-22 09:08:18.759 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:08:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:19 np0005532048 nova_compute[253661]: 2025-11-22 09:08:19.041 253665 DEBUG nova.compute.manager [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:08:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 100 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 187 op/s
Nov 22 04:08:19 np0005532048 nova_compute[253661]: 2025-11-22 09:08:19.259 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:08:19 np0005532048 nova_compute[253661]: 2025-11-22 09:08:19.373 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'pci_requests' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:08:19 np0005532048 nova_compute[253661]: 2025-11-22 09:08:19.393 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'pci_devices' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:08:19 np0005532048 nova_compute[253661]: 2025-11-22 09:08:19.404 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'resources' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:08:19 np0005532048 nova_compute[253661]: 2025-11-22 09:08:19.416 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'migration_context' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:08:19 np0005532048 nova_compute[253661]: 2025-11-22 09:08:19.427 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 22 04:08:19 np0005532048 nova_compute[253661]: 2025-11-22 09:08:19.433 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:08:19 np0005532048 nova_compute[253661]: 2025-11-22 09:08:19.501 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 100 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 187 op/s
Nov 22 04:08:21 np0005532048 nova_compute[253661]: 2025-11-22 09:08:21.069 253665 INFO nova.virt.libvirt.driver [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Deleting instance files /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694_del#033[00m
Nov 22 04:08:21 np0005532048 nova_compute[253661]: 2025-11-22 09:08:21.070 253665 INFO nova.virt.libvirt.driver [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Deletion of /var/lib/nova/instances/36e46542-ccae-4acd-9191-80f54d6bc694_del complete#033[00m
Nov 22 04:08:21 np0005532048 nova_compute[253661]: 2025-11-22 09:08:21.561 253665 INFO nova.compute.manager [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Took 6.33 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:08:21 np0005532048 nova_compute[253661]: 2025-11-22 09:08:21.561 253665 DEBUG oslo.service.loopingcall [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:08:21 np0005532048 nova_compute[253661]: 2025-11-22 09:08:21.561 253665 DEBUG nova.compute.manager [-] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:08:21 np0005532048 nova_compute[253661]: 2025-11-22 09:08:21.562 253665 DEBUG nova.network.neutron [-] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:08:21 np0005532048 nova_compute[253661]: 2025-11-22 09:08:21.745 253665 DEBUG nova.network.neutron [-] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:08:21 np0005532048 nova_compute[253661]: 2025-11-22 09:08:21.760 253665 DEBUG nova.network.neutron [-] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:08:21 np0005532048 nova_compute[253661]: 2025-11-22 09:08:21.782 253665 INFO nova.compute.manager [-] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Took 0.22 seconds to deallocate network for instance.#033[00m
Nov 22 04:08:21 np0005532048 nova_compute[253661]: 2025-11-22 09:08:21.940 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:08:21 np0005532048 nova_compute[253661]: 2025-11-22 09:08:21.941 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:08:22 np0005532048 nova_compute[253661]: 2025-11-22 09:08:21.999 253665 DEBUG oslo_concurrency.processutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:22 np0005532048 nova_compute[253661]: 2025-11-22 09:08:22.077 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:08:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3503406224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:08:22 np0005532048 nova_compute[253661]: 2025-11-22 09:08:22.477 253665 DEBUG oslo_concurrency.processutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:22 np0005532048 nova_compute[253661]: 2025-11-22 09:08:22.484 253665 DEBUG nova.compute.provider_tree [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:08:22 np0005532048 nova_compute[253661]: 2025-11-22 09:08:22.500 253665 DEBUG nova.scheduler.client.report [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:08:22 np0005532048 nova_compute[253661]: 2025-11-22 09:08:22.595 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:08:22 np0005532048 nova_compute[253661]: 2025-11-22 09:08:22.677 253665 INFO nova.scheduler.client.report [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Deleted allocations for instance 36e46542-ccae-4acd-9191-80f54d6bc694#033[00m
Nov 22 04:08:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:08:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:08:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:08:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:08:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:08:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:08:22 np0005532048 nova_compute[253661]: 2025-11-22 09:08:22.869 253665 DEBUG oslo_concurrency.lockutils [None req-f81138a0-83d2-47d3-b906-671ee19893ec 543b664d1bde44719b208e5f3e6902f1 f6e0addc8d86425c9ba676b31319cd79 - - default default] Lock "36e46542-ccae-4acd-9191-80f54d6bc694" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:08:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 216 op/s
Nov 22 04:08:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:08:23Z|00043|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 22 04:08:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:24 np0005532048 nova_compute[253661]: 2025-11-22 09:08:24.503 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.9 MiB/s wr, 205 op/s
Nov 22 04:08:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.4 MiB/s wr, 131 op/s
Nov 22 04:08:27 np0005532048 nova_compute[253661]: 2025-11-22 09:08:27.080 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:08:27.952 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:08:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:08:27.952 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:08:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:08:27.953 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:08:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 99 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 946 KiB/s wr, 87 op/s
Nov 22 04:08:29 np0005532048 nova_compute[253661]: 2025-11-22 09:08:29.476 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 22 04:08:29 np0005532048 nova_compute[253661]: 2025-11-22 09:08:29.504 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:30 np0005532048 nova_compute[253661]: 2025-11-22 09:08:30.396 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:08:30.395 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:08:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:08:30.397 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:08:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:08:30.398 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:08:30 np0005532048 nova_compute[253661]: 2025-11-22 09:08:30.654 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802495.6528172, 36e46542-ccae-4acd-9191-80f54d6bc694 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:08:30 np0005532048 nova_compute[253661]: 2025-11-22 09:08:30.655 253665 INFO nova.compute.manager [-] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:08:30 np0005532048 nova_compute[253661]: 2025-11-22 09:08:30.677 253665 DEBUG nova.compute.manager [None req-4b603693-aa61-4dce-b1c4-538152544081 - - - - - -] [instance: 36e46542-ccae-4acd-9191-80f54d6bc694] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:08:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 99 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 749 KiB/s rd, 945 KiB/s wr, 56 op/s
Nov 22 04:08:32 np0005532048 nova_compute[253661]: 2025-11-22 09:08:32.082 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 116 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 941 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Nov 22 04:08:33 np0005532048 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 22 04:08:33 np0005532048 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 13.948s CPU time.
Nov 22 04:08:33 np0005532048 systemd-machined[215941]: Machine qemu-12-instance-0000000c terminated.
Nov 22 04:08:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:34 np0005532048 nova_compute[253661]: 2025-11-22 09:08:34.498 253665 INFO nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance shutdown successfully after 15 seconds.#033[00m
Nov 22 04:08:34 np0005532048 nova_compute[253661]: 2025-11-22 09:08:34.506 253665 INFO nova.virt.libvirt.driver [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance destroyed successfully.#033[00m
Nov 22 04:08:34 np0005532048 nova_compute[253661]: 2025-11-22 09:08:34.506 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:34 np0005532048 nova_compute[253661]: 2025-11-22 09:08:34.511 253665 INFO nova.virt.libvirt.driver [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance destroyed successfully.#033[00m
Nov 22 04:08:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Nov 22 04:08:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 04:08:37 np0005532048 nova_compute[253661]: 2025-11-22 09:08:37.084 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 85 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Nov 22 04:08:39 np0005532048 nova_compute[253661]: 2025-11-22 09:08:39.508 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 85 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 1.2 MiB/s wr, 62 op/s
Nov 22 04:08:41 np0005532048 podman[277479]: 2025-11-22 09:08:41.373741412 +0000 UTC m=+0.065200331 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true)
Nov 22 04:08:41 np0005532048 podman[277480]: 2025-11-22 09:08:41.376290714 +0000 UTC m=+0.068638963 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:08:41 np0005532048 nova_compute[253661]: 2025-11-22 09:08:41.983 253665 INFO nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deleting instance files /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e_del#033[00m
Nov 22 04:08:41 np0005532048 nova_compute[253661]: 2025-11-22 09:08:41.983 253665 INFO nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deletion of /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e_del complete#033[00m
Nov 22 04:08:42 np0005532048 nova_compute[253661]: 2025-11-22 09:08:42.086 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:42 np0005532048 nova_compute[253661]: 2025-11-22 09:08:42.381 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:08:42 np0005532048 nova_compute[253661]: 2025-11-22 09:08:42.382 253665 INFO nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating image(s)#033[00m
Nov 22 04:08:42 np0005532048 nova_compute[253661]: 2025-11-22 09:08:42.402 253665 DEBUG nova.storage.rbd_utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:42 np0005532048 nova_compute[253661]: 2025-11-22 09:08:42.426 253665 DEBUG nova.storage.rbd_utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:42 np0005532048 nova_compute[253661]: 2025-11-22 09:08:42.448 253665 DEBUG nova.storage.rbd_utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:42 np0005532048 nova_compute[253661]: 2025-11-22 09:08:42.453 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:08:42 np0005532048 nova_compute[253661]: 2025-11-22 09:08:42.454 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:08:42 np0005532048 nova_compute[253661]: 2025-11-22 09:08:42.823 253665 DEBUG nova.virt.libvirt.imagebackend [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Image locations are: [{'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/baf70c6a-4f18-40eb-9d40-874af269a47f/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/baf70c6a-4f18-40eb-9d40-874af269a47f/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 22 04:08:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 361 KiB/s rd, 1.2 MiB/s wr, 65 op/s
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.466085) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802523466131, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1553, "num_deletes": 259, "total_data_size": 2163425, "memory_usage": 2201440, "flush_reason": "Manual Compaction"}
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802523550383, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2126175, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24416, "largest_seqno": 25968, "table_properties": {"data_size": 2118928, "index_size": 4190, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15993, "raw_average_key_size": 20, "raw_value_size": 2104114, "raw_average_value_size": 2732, "num_data_blocks": 185, "num_entries": 770, "num_filter_entries": 770, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763802405, "oldest_key_time": 1763802405, "file_creation_time": 1763802523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 84341 microseconds, and 6436 cpu microseconds.
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.550425) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2126175 bytes OK
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.550462) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.618574) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.618628) EVENT_LOG_v1 {"time_micros": 1763802523618616, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.618660) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2156447, prev total WAL file size 2156447, number of live WAL files 2.
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.619579) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2076KB)], [56(7037KB)]
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802523619620, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9332945, "oldest_snapshot_seqno": -1}
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4844 keys, 7570594 bytes, temperature: kUnknown
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802523754777, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7570594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7537454, "index_size": 19910, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 121342, "raw_average_key_size": 25, "raw_value_size": 7449196, "raw_average_value_size": 1537, "num_data_blocks": 818, "num_entries": 4844, "num_filter_entries": 4844, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763802523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.755196) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7570594 bytes
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.795388) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.9 rd, 55.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 6.9 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(8.0) write-amplify(3.6) OK, records in: 5371, records dropped: 527 output_compression: NoCompression
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.795432) EVENT_LOG_v1 {"time_micros": 1763802523795415, "job": 30, "event": "compaction_finished", "compaction_time_micros": 135369, "compaction_time_cpu_micros": 19463, "output_level": 6, "num_output_files": 1, "total_output_size": 7570594, "num_input_records": 5371, "num_output_records": 4844, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802523796587, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802523798125, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.619490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.798384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.798395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.798398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.798400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:08:43 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:08:43.798403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:08:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:44 np0005532048 nova_compute[253661]: 2025-11-22 09:08:44.929 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:44 np0005532048 nova_compute[253661]: 2025-11-22 09:08:44.934 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquiring lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:08:44 np0005532048 nova_compute[253661]: 2025-11-22 09:08:44.934 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:08:44 np0005532048 nova_compute[253661]: 2025-11-22 09:08:44.964 253665 DEBUG nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:08:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 75 KiB/s wr, 47 op/s
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.123 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.123 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.131 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.131 253665 INFO nova.compute.claims [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.257 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.380 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:45 np0005532048 podman[277574]: 2025-11-22 09:08:45.44589457 +0000 UTC m=+0.129813315 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.450 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4.part --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.452 253665 DEBUG nova.virt.images [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] baf70c6a-4f18-40eb-9d40-874af269a47f was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.466 253665 DEBUG nova.privsep.utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.467 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4.part /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:08:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2180418099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.713 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.720 253665 DEBUG nova.compute.provider_tree [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.732 253665 DEBUG nova.scheduler.client.report [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.827 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4.part /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4.converted" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.832 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.897 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4.converted --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.899 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.925 253665 DEBUG nova.storage.rbd_utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.930 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 c233bbff-b2e9-442f-818d-e8487dee1c3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.984 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:08:45 np0005532048 nova_compute[253661]: 2025-11-22 09:08:45.985 253665 DEBUG nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.211 253665 DEBUG nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.211 253665 DEBUG nova.network.neutron [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.245 253665 INFO nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.274 253665 DEBUG nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.462 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 c233bbff-b2e9-442f-818d-e8487dee1c3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.527 253665 DEBUG nova.storage.rbd_utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] resizing rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.566 253665 DEBUG nova.network.neutron [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.567 253665 DEBUG nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.582 253665 DEBUG nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.584 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.584 253665 INFO nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Creating image(s)#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.611 253665 DEBUG nova.storage.rbd_utils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] rbd image 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.637 253665 DEBUG nova.storage.rbd_utils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] rbd image 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.660 253665 DEBUG nova.storage.rbd_utils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] rbd image 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.664 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.724 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.725 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.726 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.726 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.749 253665 DEBUG nova.storage.rbd_utils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] rbd image 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.755 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.820 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.821 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Ensure instance console log exists: /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.821 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.821 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.822 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.823 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.826 253665 WARNING nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.831 253665 DEBUG nova.virt.libvirt.host [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.831 253665 DEBUG nova.virt.libvirt.host [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.834 253665 DEBUG nova.virt.libvirt.host [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.834 253665 DEBUG nova.virt.libvirt.host [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.835 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.835 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.836 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.836 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.836 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.837 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.837 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.837 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.837 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.838 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.838 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.838 253665 DEBUG nova.virt.hardware [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.839 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:08:46 np0005532048 nova_compute[253661]: 2025-11-22 09:08:46.853 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 707 KiB/s rd, 780 KiB/s wr, 35 op/s
Nov 22 04:08:47 np0005532048 nova_compute[253661]: 2025-11-22 09:08:47.088 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:08:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/140003744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:08:47 np0005532048 nova_compute[253661]: 2025-11-22 09:08:47.496 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:47 np0005532048 nova_compute[253661]: 2025-11-22 09:08:47.701 253665 DEBUG nova.storage.rbd_utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:47 np0005532048 nova_compute[253661]: 2025-11-22 09:08:47.706 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:47 np0005532048 nova_compute[253661]: 2025-11-22 09:08:47.730 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.975s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:47 np0005532048 nova_compute[253661]: 2025-11-22 09:08:47.795 253665 DEBUG nova.storage.rbd_utils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] resizing rbd image 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:08:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:08:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1081899854' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.160 253665 DEBUG nova.objects.instance [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lazy-loading 'migration_context' on Instance uuid 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.166 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.169 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  <uuid>c233bbff-b2e9-442f-818d-e8487dee1c3e</uuid>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  <name>instance-0000000c</name>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersAdmin275Test-server-1195148279</nova:name>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:08:46</nova:creationTime>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:08:48 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:        <nova:user uuid="db3c9e2649dc463a894636918b1536f6">tempest-ServersAdmin275Test-461797968-project-member</nova:user>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:        <nova:project uuid="452c52561ee04e93bc47895d639c9745">tempest-ServersAdmin275Test-461797968</nova:project>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <entry name="serial">c233bbff-b2e9-442f-818d-e8487dee1c3e</entry>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <entry name="uuid">c233bbff-b2e9-442f-818d-e8487dee1c3e</entry>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/c233bbff-b2e9-442f-818d-e8487dee1c3e_disk">
Nov 22 04:08:48 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:08:48 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config">
Nov 22 04:08:48 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:08:48 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/console.log" append="off"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:08:48 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:08:48 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:08:48 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:08:48 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.173 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.173 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Ensure instance console log exists: /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.174 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.174 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.174 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.176 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.181 253665 WARNING nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.184 253665 DEBUG nova.virt.libvirt.host [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.185 253665 DEBUG nova.virt.libvirt.host [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.187 253665 DEBUG nova.virt.libvirt.host [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.188 253665 DEBUG nova.virt.libvirt.host [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.188 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.188 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.189 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.189 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.189 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.189 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.189 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.190 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.190 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.190 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.190 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.190 253665 DEBUG nova.virt.hardware [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.194 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.248 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.248 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.249 253665 INFO nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Using config drive#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.269 253665 DEBUG nova.storage.rbd_utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.289 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.383 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'keypairs' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:08:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:08:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3421013341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.635 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.661 253665 DEBUG nova.storage.rbd_utils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] rbd image 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.667 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.720 253665 INFO nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating config drive at /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.726 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpip7pirtm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.858 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpip7pirtm" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.890 253665 DEBUG nova.storage.rbd_utils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:48 np0005532048 nova_compute[253661]: 2025-11-22 09:08:48.894 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 116 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.0 MiB/s wr, 82 op/s
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.119 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802514.1184192, c233bbff-b2e9-442f-818d-e8487dee1c3e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.120 253665 INFO nova.compute.manager [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.143 253665 DEBUG nova.compute.manager [None req-a07fe106-55d3-44ef-905a-5ee1c9ab4da0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.147 253665 DEBUG nova.compute.manager [None req-a07fe106-55d3-44ef-905a-5ee1c9ab4da0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.166 253665 INFO nova.compute.manager [None req-a07fe106-55d3-44ef-905a-5ee1c9ab4da0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 22 04:08:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:08:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4026124718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.208 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.210 253665 DEBUG nova.objects.instance [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lazy-loading 'pci_devices' on Instance uuid 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.222 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  <uuid>103af636-a2aa-4cdb-a2e1-2ab7cf5fb900</uuid>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  <name>instance-0000000d</name>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerExternalEventsTest-server-300434324</nova:name>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:08:48</nova:creationTime>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:08:49 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:        <nova:user uuid="2e259dc1688c42e3ba13f2239d49b39e">tempest-ServerExternalEventsTest-107261648-project-member</nova:user>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:        <nova:project uuid="420f909f2162475ba3f933633661986f">tempest-ServerExternalEventsTest-107261648</nova:project>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <entry name="serial">103af636-a2aa-4cdb-a2e1-2ab7cf5fb900</entry>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <entry name="uuid">103af636-a2aa-4cdb-a2e1-2ab7cf5fb900</entry>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk">
Nov 22 04:08:49 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:08:49 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk.config">
Nov 22 04:08:49 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:08:49 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900/console.log" append="off"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:08:49 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:08:49 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:08:49 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:08:49 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.270 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.270 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.271 253665 INFO nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Using config drive#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.291 253665 DEBUG nova.storage.rbd_utils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] rbd image 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.433 253665 DEBUG oslo_concurrency.processutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.434 253665 INFO nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deleting local config drive /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config because it was imported into RBD.#033[00m
Nov 22 04:08:49 np0005532048 systemd-machined[215941]: New machine qemu-13-instance-0000000c.
Nov 22 04:08:49 np0005532048 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.513 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.821 253665 INFO nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Creating config drive at /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900/disk.config#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.826 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejie2xil execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.902 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802529.9012933, c233bbff-b2e9-442f-818d-e8487dee1c3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.903 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.907 253665 DEBUG nova.compute.manager [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.907 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.912 253665 INFO nova.virt.libvirt.driver [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance spawned successfully.#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.912 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.925 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.931 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.938 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.939 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.939 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.939 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.940 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.940 253665 DEBUG nova.virt.libvirt.driver [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.947 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.947 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802529.903311, c233bbff-b2e9-442f-818d-e8487dee1c3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.948 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] VM Started (Lifecycle Event)#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.958 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejie2xil" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.982 253665 DEBUG nova.storage.rbd_utils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] rbd image 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:08:49 np0005532048 nova_compute[253661]: 2025-11-22 09:08:49.986 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900/disk.config 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:08:50 np0005532048 nova_compute[253661]: 2025-11-22 09:08:50.012 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:08:50 np0005532048 nova_compute[253661]: 2025-11-22 09:08:50.016 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:08:50 np0005532048 nova_compute[253661]: 2025-11-22 09:08:50.034 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 22 04:08:50 np0005532048 nova_compute[253661]: 2025-11-22 09:08:50.195 253665 DEBUG nova.compute.manager [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:08:50 np0005532048 nova_compute[253661]: 2025-11-22 09:08:50.361 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:08:50 np0005532048 nova_compute[253661]: 2025-11-22 09:08:50.362 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:08:50 np0005532048 nova_compute[253661]: 2025-11-22 09:08:50.363 253665 DEBUG nova.objects.instance [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 22 04:08:50 np0005532048 nova_compute[253661]: 2025-11-22 09:08:50.420 253665 DEBUG oslo_concurrency.lockutils [None req-c5995d17-a5ed-4d1d-91ce-9db8fc3b4a19 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:08:50 np0005532048 nova_compute[253661]: 2025-11-22 09:08:50.516 253665 DEBUG oslo_concurrency.processutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900/disk.config 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:08:50 np0005532048 nova_compute[253661]: 2025-11-22 09:08:50.516 253665 INFO nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Deleting local config drive /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900/disk.config because it was imported into RBD.#033[00m
Nov 22 04:08:50 np0005532048 systemd-machined[215941]: New machine qemu-14-instance-0000000d.
Nov 22 04:08:50 np0005532048 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Nov 22 04:08:50 np0005532048 nova_compute[253661]: 2025-11-22 09:08:50.974 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802530.9741821, 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:08:50 np0005532048 nova_compute[253661]: 2025-11-22 09:08:50.975 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] VM Resumed (Lifecycle Event)
Nov 22 04:08:50 np0005532048 nova_compute[253661]: 2025-11-22 09:08:50.977 253665 DEBUG nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:08:50 np0005532048 nova_compute[253661]: 2025-11-22 09:08:50.978 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:08:50 np0005532048 nova_compute[253661]: 2025-11-22 09:08:50.981 253665 INFO nova.virt.libvirt.driver [-] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Instance spawned successfully.
Nov 22 04:08:50 np0005532048 nova_compute[253661]: 2025-11-22 09:08:50.981 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:08:50 np0005532048 nova_compute[253661]: 2025-11-22 09:08:50.998 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:08:51 np0005532048 nova_compute[253661]: 2025-11-22 09:08:51.008 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:08:51 np0005532048 nova_compute[253661]: 2025-11-22 09:08:51.012 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:08:51 np0005532048 nova_compute[253661]: 2025-11-22 09:08:51.013 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:08:51 np0005532048 nova_compute[253661]: 2025-11-22 09:08:51.013 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:08:51 np0005532048 nova_compute[253661]: 2025-11-22 09:08:51.014 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:08:51 np0005532048 nova_compute[253661]: 2025-11-22 09:08:51.014 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:08:51 np0005532048 nova_compute[253661]: 2025-11-22 09:08:51.015 253665 DEBUG nova.virt.libvirt.driver [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:08:51 np0005532048 nova_compute[253661]: 2025-11-22 09:08:51.042 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:08:51 np0005532048 nova_compute[253661]: 2025-11-22 09:08:51.043 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802530.974989, 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:08:51 np0005532048 nova_compute[253661]: 2025-11-22 09:08:51.043 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] VM Started (Lifecycle Event)
Nov 22 04:08:51 np0005532048 nova_compute[253661]: 2025-11-22 09:08:51.063 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:08:51 np0005532048 nova_compute[253661]: 2025-11-22 09:08:51.067 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:08:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 116 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.0 MiB/s wr, 69 op/s
Nov 22 04:08:51 np0005532048 nova_compute[253661]: 2025-11-22 09:08:51.084 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:08:51 np0005532048 nova_compute[253661]: 2025-11-22 09:08:51.614 253665 INFO nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Took 5.03 seconds to spawn the instance on the hypervisor.
Nov 22 04:08:51 np0005532048 nova_compute[253661]: 2025-11-22 09:08:51.615 253665 DEBUG nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:08:51 np0005532048 nova_compute[253661]: 2025-11-22 09:08:51.673 253665 INFO nova.compute.manager [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Took 6.57 seconds to build instance.
Nov 22 04:08:51 np0005532048 nova_compute[253661]: 2025-11-22 09:08:51.704 253665 DEBUG oslo_concurrency.lockutils [None req-b15ea4a3-a9a8-4b9c-ad08-bce30a1c32a9 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:08:52 np0005532048 nova_compute[253661]: 2025-11-22 09:08:52.093 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:08:52
Nov 22 04:08:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:08:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:08:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'images', 'volumes']
Nov 22 04:08:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:08:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:08:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:08:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:08:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:08:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:08:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:08:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 109 op/s
Nov 22 04:08:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:08:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:08:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:08:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:08:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:08:54 np0005532048 nova_compute[253661]: 2025-11-22 09:08:54.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:08:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:08:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:08:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:08:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:08:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Nov 22 04:08:56 np0005532048 nova_compute[253661]: 2025-11-22 09:08:56.810 253665 DEBUG nova.compute.manager [None req-36cdc446-68b7-4523-bf3e-f0da36410f5a 42a8bc97dd6d404ea64186ba811f2a44 0affcd13176045618f59328bd02522c7 - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:08:56 np0005532048 nova_compute[253661]: 2025-11-22 09:08:56.811 253665 DEBUG nova.compute.manager [None req-36cdc446-68b7-4523-bf3e-f0da36410f5a 42a8bc97dd6d404ea64186ba811f2a44 0affcd13176045618f59328bd02522c7 - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:08:56 np0005532048 nova_compute[253661]: 2025-11-22 09:08:56.812 253665 DEBUG oslo_concurrency.lockutils [None req-36cdc446-68b7-4523-bf3e-f0da36410f5a 42a8bc97dd6d404ea64186ba811f2a44 0affcd13176045618f59328bd02522c7 - - default default] Acquiring lock "refresh_cache-103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:08:56 np0005532048 nova_compute[253661]: 2025-11-22 09:08:56.812 253665 DEBUG oslo_concurrency.lockutils [None req-36cdc446-68b7-4523-bf3e-f0da36410f5a 42a8bc97dd6d404ea64186ba811f2a44 0affcd13176045618f59328bd02522c7 - - default default] Acquired lock "refresh_cache-103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:08:56 np0005532048 nova_compute[253661]: 2025-11-22 09:08:56.812 253665 DEBUG nova.network.neutron [None req-36cdc446-68b7-4523-bf3e-f0da36410f5a 42a8bc97dd6d404ea64186ba811f2a44 0affcd13176045618f59328bd02522c7 - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:08:56 np0005532048 nova_compute[253661]: 2025-11-22 09:08:56.927 253665 INFO nova.compute.manager [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Rebuilding instance
Nov 22 04:08:56 np0005532048 nova_compute[253661]: 2025-11-22 09:08:56.983 253665 DEBUG nova.network.neutron [None req-36cdc446-68b7-4523-bf3e-f0da36410f5a 42a8bc97dd6d404ea64186ba811f2a44 0affcd13176045618f59328bd02522c7 - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:08:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 3.6 MiB/s wr, 208 op/s
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.095 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.133 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lazy-loading 'trusted_certs' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.145 253665 DEBUG nova.compute.manager [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.460 253665 DEBUG nova.network.neutron [None req-36cdc446-68b7-4523-bf3e-f0da36410f5a 42a8bc97dd6d404ea64186ba811f2a44 0affcd13176045618f59328bd02522c7 - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.474 253665 DEBUG oslo_concurrency.lockutils [None req-36cdc446-68b7-4523-bf3e-f0da36410f5a 42a8bc97dd6d404ea64186ba811f2a44 0affcd13176045618f59328bd02522c7 - - default default] Releasing lock "refresh_cache-103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.623 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lazy-loading 'pci_requests' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.635 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lazy-loading 'pci_devices' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.646 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lazy-loading 'resources' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.657 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lazy-loading 'migration_context' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.667 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.671 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.740 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquiring lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.741 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.742 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquiring lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.742 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.742 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.744 253665 INFO nova.compute.manager [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Terminating instance
Nov 22 04:08:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:08:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.748 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquiring lock "refresh_cache-103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.749 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquired lock "refresh_cache-103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.749 253665 DEBUG nova.network.neutron [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:08:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:08:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:08:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:08:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:08:57 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev e1a7b0ff-353b-4c6e-b4bb-7d587dfccf9e does not exist
Nov 22 04:08:57 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 102200e4-8b49-4f77-a18a-90aeaf66143e does not exist
Nov 22 04:08:57 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 52a1c8a0-f1b2-4bb0-81ab-b1890b9abfd2 does not exist
Nov 22 04:08:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:08:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:08:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:08:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:08:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:08:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:08:57 np0005532048 nova_compute[253661]: 2025-11-22 09:08:57.926 253665 DEBUG nova.network.neutron [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:08:58 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:08:58 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:08:58 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:08:58 np0005532048 nova_compute[253661]: 2025-11-22 09:08:58.158 253665 DEBUG nova.network.neutron [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:08:58 np0005532048 nova_compute[253661]: 2025-11-22 09:08:58.170 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Releasing lock "refresh_cache-103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:08:58 np0005532048 nova_compute[253661]: 2025-11-22 09:08:58.171 253665 DEBUG nova.compute.manager [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:08:58 np0005532048 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 22 04:08:58 np0005532048 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 7.682s CPU time.
Nov 22 04:08:58 np0005532048 systemd-machined[215941]: Machine qemu-14-instance-0000000d terminated.
Nov 22 04:08:58 np0005532048 nova_compute[253661]: 2025-11-22 09:08:58.397 253665 INFO nova.virt.libvirt.driver [-] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Instance destroyed successfully.
Nov 22 04:08:58 np0005532048 nova_compute[253661]: 2025-11-22 09:08:58.397 253665 DEBUG nova.objects.instance [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lazy-loading 'resources' on Instance uuid 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:08:58 np0005532048 podman[278536]: 2025-11-22 09:08:58.368518648 +0000 UTC m=+0.022100488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:08:58 np0005532048 podman[278536]: 2025-11-22 09:08:58.662811129 +0000 UTC m=+0.316392939 container create efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_aryabhata, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:08:58 np0005532048 nova_compute[253661]: 2025-11-22 09:08:58.931 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:08:58 np0005532048 nova_compute[253661]: 2025-11-22 09:08:58.933 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:08:58 np0005532048 nova_compute[253661]: 2025-11-22 09:08:58.989 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:08:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.8 MiB/s wr, 200 op/s
Nov 22 04:08:59 np0005532048 nova_compute[253661]: 2025-11-22 09:08:59.253 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:08:59 np0005532048 nova_compute[253661]: 2025-11-22 09:08:59.254 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:08:59 np0005532048 nova_compute[253661]: 2025-11-22 09:08:59.262 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:08:59 np0005532048 nova_compute[253661]: 2025-11-22 09:08:59.262 253665 INFO nova.compute.claims [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:08:59 np0005532048 systemd[1]: Started libpod-conmon-efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838.scope.
Nov 22 04:08:59 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:08:59 np0005532048 nova_compute[253661]: 2025-11-22 09:08:59.451 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:08:59 np0005532048 nova_compute[253661]: 2025-11-22 09:08:59.517 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:59 np0005532048 podman[278536]: 2025-11-22 09:08:59.694950777 +0000 UTC m=+1.348532607 container init efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 04:08:59 np0005532048 podman[278536]: 2025-11-22 09:08:59.703109656 +0000 UTC m=+1.356691486 container start efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:08:59 np0005532048 confident_aryabhata[278572]: 167 167
Nov 22 04:08:59 np0005532048 systemd[1]: libpod-efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838.scope: Deactivated successfully.
Nov 22 04:08:59 np0005532048 podman[278536]: 2025-11-22 09:08:59.936995691 +0000 UTC m=+1.590577521 container attach efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_aryabhata, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 04:08:59 np0005532048 podman[278536]: 2025-11-22 09:08:59.937946415 +0000 UTC m=+1.591528235 container died efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 04:09:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:09:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2972792082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:09:00 np0005532048 nova_compute[253661]: 2025-11-22 09:09:00.150 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.698s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:09:00 np0005532048 nova_compute[253661]: 2025-11-22 09:09:00.157 253665 DEBUG nova.compute.provider_tree [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:09:00 np0005532048 nova_compute[253661]: 2025-11-22 09:09:00.171 253665 DEBUG nova.scheduler.client.report [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:09:00 np0005532048 nova_compute[253661]: 2025-11-22 09:09:00.359 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:09:00 np0005532048 nova_compute[253661]: 2025-11-22 09:09:00.360 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:09:00 np0005532048 nova_compute[253661]: 2025-11-22 09:09:00.626 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:09:00 np0005532048 nova_compute[253661]: 2025-11-22 09:09:00.626 253665 DEBUG nova.network.neutron [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:09:00 np0005532048 nova_compute[253661]: 2025-11-22 09:09:00.694 253665 INFO nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:09:00 np0005532048 nova_compute[253661]: 2025-11-22 09:09:00.762 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:09:00 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ec68f0275a2f9f0397a7aa8bd93cc8a0374d7d95cd34332d2e089ca0dab41b49-merged.mount: Deactivated successfully.
Nov 22 04:09:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 597 KiB/s wr, 153 op/s
Nov 22 04:09:01 np0005532048 nova_compute[253661]: 2025-11-22 09:09:01.240 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:09:01 np0005532048 nova_compute[253661]: 2025-11-22 09:09:01.242 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:09:01 np0005532048 nova_compute[253661]: 2025-11-22 09:09:01.242 253665 INFO nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Creating image(s)
Nov 22 04:09:01 np0005532048 nova_compute[253661]: 2025-11-22 09:09:01.273 253665 DEBUG nova.storage.rbd_utils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:09:01 np0005532048 nova_compute[253661]: 2025-11-22 09:09:01.308 253665 DEBUG nova.storage.rbd_utils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:09:01 np0005532048 nova_compute[253661]: 2025-11-22 09:09:01.345 253665 DEBUG nova.storage.rbd_utils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:09:01 np0005532048 nova_compute[253661]: 2025-11-22 09:09:01.351 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:09:01 np0005532048 nova_compute[253661]: 2025-11-22 09:09:01.381 253665 DEBUG nova.policy [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee24e4812c424984881862883987d750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5879249ab50a40ec9553bc923bdd1042', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:09:01 np0005532048 nova_compute[253661]: 2025-11-22 09:09:01.440 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:09:01 np0005532048 nova_compute[253661]: 2025-11-22 09:09:01.441 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:09:01 np0005532048 nova_compute[253661]: 2025-11-22 09:09:01.442 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:09:01 np0005532048 nova_compute[253661]: 2025-11-22 09:09:01.442 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:09:01 np0005532048 nova_compute[253661]: 2025-11-22 09:09:01.462 253665 DEBUG nova.storage.rbd_utils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:09:01 np0005532048 nova_compute[253661]: 2025-11-22 09:09:01.467 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:09:01 np0005532048 podman[278536]: 2025-11-22 09:09:01.90822282 +0000 UTC m=+3.561804660 container remove efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 04:09:01 np0005532048 systemd[1]: libpod-conmon-efa6468003ff5859a0ce94a27a970f890e7ec1b81297e54ff160a1beba578838.scope: Deactivated successfully.
Nov 22 04:09:02 np0005532048 nova_compute[253661]: 2025-11-22 09:09:02.098 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:02 np0005532048 podman[278710]: 2025-11-22 09:09:02.083517244 +0000 UTC m=+0.025513610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:09:02 np0005532048 podman[278710]: 2025-11-22 09:09:02.182013053 +0000 UTC m=+0.124009399 container create 937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 04:09:02 np0005532048 systemd[1]: Started libpod-conmon-937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f.scope.
Nov 22 04:09:02 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:09:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01bac5d9a617bfe8f438c5ecb175372e3ee4dd26ce787e61528729e572fb3c3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01bac5d9a617bfe8f438c5ecb175372e3ee4dd26ce787e61528729e572fb3c3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01bac5d9a617bfe8f438c5ecb175372e3ee4dd26ce787e61528729e572fb3c3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01bac5d9a617bfe8f438c5ecb175372e3ee4dd26ce787e61528729e572fb3c3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01bac5d9a617bfe8f438c5ecb175372e3ee4dd26ce787e61528729e572fb3c3f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006967633855896333 of space, bias 1.0, pg target 0.20902901567689 quantized to 32 (current 32)
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:09:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 597 KiB/s wr, 153 op/s
Nov 22 04:09:03 np0005532048 podman[278710]: 2025-11-22 09:09:03.226943333 +0000 UTC m=+1.168939709 container init 937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:09:03 np0005532048 podman[278710]: 2025-11-22 09:09:03.235284845 +0000 UTC m=+1.177281191 container start 937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:09:03 np0005532048 podman[278710]: 2025-11-22 09:09:03.655815171 +0000 UTC m=+1.597811507 container attach 937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 04:09:04 np0005532048 nova_compute[253661]: 2025-11-22 09:09:04.518 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:04 np0005532048 nova_compute[253661]: 2025-11-22 09:09:04.660 253665 DEBUG nova.network.neutron [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Successfully created port: 0122a4be-9c10-4475-ba7d-5c818be52474 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:09:04 np0005532048 trusting_satoshi[278727]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:09:04 np0005532048 trusting_satoshi[278727]: --> relative data size: 1.0
Nov 22 04:09:04 np0005532048 trusting_satoshi[278727]: --> All data devices are unavailable
Nov 22 04:09:04 np0005532048 systemd[1]: libpod-937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f.scope: Deactivated successfully.
Nov 22 04:09:04 np0005532048 podman[278710]: 2025-11-22 09:09:04.713436477 +0000 UTC m=+2.655432853 container died 937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 04:09:04 np0005532048 systemd[1]: libpod-937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f.scope: Consumed 1.039s CPU time.
Nov 22 04:09:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 137 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 351 KiB/s wr, 133 op/s
Nov 22 04:09:05 np0005532048 systemd[1]: var-lib-containers-storage-overlay-01bac5d9a617bfe8f438c5ecb175372e3ee4dd26ce787e61528729e572fb3c3f-merged.mount: Deactivated successfully.
Nov 22 04:09:05 np0005532048 podman[278710]: 2025-11-22 09:09:05.489789818 +0000 UTC m=+3.431786154 container remove 937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 04:09:05 np0005532048 systemd[1]: libpod-conmon-937553159a1809ea1c38b7f2ac9b5feb3867df7dae6801250581f82e3433035f.scope: Deactivated successfully.
Nov 22 04:09:05 np0005532048 nova_compute[253661]: 2025-11-22 09:09:05.655 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:05 np0005532048 nova_compute[253661]: 2025-11-22 09:09:05.746 253665 DEBUG nova.storage.rbd_utils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] resizing rbd image 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:09:06 np0005532048 podman[278967]: 2025-11-22 09:09:06.126372817 +0000 UTC m=+0.025361678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:09:06 np0005532048 podman[278967]: 2025-11-22 09:09:06.271844426 +0000 UTC m=+0.170833307 container create b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:09:06 np0005532048 systemd[1]: Started libpod-conmon-b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045.scope.
Nov 22 04:09:06 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:09:06 np0005532048 podman[278967]: 2025-11-22 09:09:06.629627039 +0000 UTC m=+0.528615990 container init b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 04:09:06 np0005532048 podman[278967]: 2025-11-22 09:09:06.639051258 +0000 UTC m=+0.538040099 container start b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:09:06 np0005532048 practical_wiles[278983]: 167 167
Nov 22 04:09:06 np0005532048 systemd[1]: libpod-b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045.scope: Deactivated successfully.
Nov 22 04:09:06 np0005532048 podman[278967]: 2025-11-22 09:09:06.758758723 +0000 UTC m=+0.657747564 container attach b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wiles, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:09:06 np0005532048 podman[278967]: 2025-11-22 09:09:06.759721236 +0000 UTC m=+0.658710077 container died b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:09:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 138 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 574 KiB/s rd, 1.6 MiB/s wr, 53 op/s
Nov 22 04:09:07 np0005532048 nova_compute[253661]: 2025-11-22 09:09:07.191 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:07 np0005532048 nova_compute[253661]: 2025-11-22 09:09:07.198 253665 DEBUG nova.objects.instance [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'migration_context' on Instance uuid 18eb7df8-f3ac-44d2-86c1-db7c0c913c53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:09:07 np0005532048 nova_compute[253661]: 2025-11-22 09:09:07.209 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:09:07 np0005532048 nova_compute[253661]: 2025-11-22 09:09:07.210 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Ensure instance console log exists: /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:09:07 np0005532048 nova_compute[253661]: 2025-11-22 09:09:07.210 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:07 np0005532048 nova_compute[253661]: 2025-11-22 09:09:07.210 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:07 np0005532048 nova_compute[253661]: 2025-11-22 09:09:07.211 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:07 np0005532048 systemd[1]: var-lib-containers-storage-overlay-aa29562a603ed1b0c7a80982ca6f89f5d9a5280aa6acd34ce6de6e83062cd1ea-merged.mount: Deactivated successfully.
Nov 22 04:09:07 np0005532048 nova_compute[253661]: 2025-11-22 09:09:07.850 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 22 04:09:07 np0005532048 nova_compute[253661]: 2025-11-22 09:09:07.921 253665 DEBUG nova.network.neutron [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Successfully updated port: 0122a4be-9c10-4475-ba7d-5c818be52474 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:09:07 np0005532048 nova_compute[253661]: 2025-11-22 09:09:07.971 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:09:07 np0005532048 nova_compute[253661]: 2025-11-22 09:09:07.972 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquired lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:09:07 np0005532048 nova_compute[253661]: 2025-11-22 09:09:07.973 253665 DEBUG nova.network.neutron [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:09:08 np0005532048 nova_compute[253661]: 2025-11-22 09:09:08.065 253665 DEBUG nova.compute.manager [req-0b5b8552-5fd2-4e75-b2a7-00c1c91f34a2 req-8daf8675-47c1-41cd-a311-19e01eca2b47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-changed-0122a4be-9c10-4475-ba7d-5c818be52474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:09:08 np0005532048 nova_compute[253661]: 2025-11-22 09:09:08.066 253665 DEBUG nova.compute.manager [req-0b5b8552-5fd2-4e75-b2a7-00c1c91f34a2 req-8daf8675-47c1-41cd-a311-19e01eca2b47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Refreshing instance network info cache due to event network-changed-0122a4be-9c10-4475-ba7d-5c818be52474. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:09:08 np0005532048 nova_compute[253661]: 2025-11-22 09:09:08.066 253665 DEBUG oslo_concurrency.lockutils [req-0b5b8552-5fd2-4e75-b2a7-00c1c91f34a2 req-8daf8675-47c1-41cd-a311-19e01eca2b47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:09:08 np0005532048 nova_compute[253661]: 2025-11-22 09:09:08.279 253665 DEBUG nova.network.neutron [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:09:08 np0005532048 podman[278967]: 2025-11-22 09:09:08.852215756 +0000 UTC m=+2.751204597 container remove b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_wiles, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:09:08 np0005532048 systemd[1]: libpod-conmon-b706ada1a613eaaf5024e67fe4fb19a46166273d3b324a25c51ce99a3a909045.scope: Deactivated successfully.
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.021 253665 DEBUG nova.network.neutron [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Updating instance_info_cache with network_info: [{"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:09:09 np0005532048 podman[279025]: 2025-11-22 09:09:09.061738092 +0000 UTC m=+0.092590418 container create 3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 04:09:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 156 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 3.8 MiB/s wr, 92 op/s
Nov 22 04:09:09 np0005532048 podman[279025]: 2025-11-22 09:09:08.991230831 +0000 UTC m=+0.022083157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.132 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Releasing lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.132 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Instance network_info: |[{"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.132 253665 DEBUG oslo_concurrency.lockutils [req-0b5b8552-5fd2-4e75-b2a7-00c1c91f34a2 req-8daf8675-47c1-41cd-a311-19e01eca2b47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.133 253665 DEBUG nova.network.neutron [req-0b5b8552-5fd2-4e75-b2a7-00c1c91f34a2 req-8daf8675-47c1-41cd-a311-19e01eca2b47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Refreshing network info cache for port 0122a4be-9c10-4475-ba7d-5c818be52474 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.135 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Start _get_guest_xml network_info=[{"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.139 253665 WARNING nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.143 253665 DEBUG nova.virt.libvirt.host [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.144 253665 DEBUG nova.virt.libvirt.host [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.147 253665 DEBUG nova.virt.libvirt.host [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.148 253665 DEBUG nova.virt.libvirt.host [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.148 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.148 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.149 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.149 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.149 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.149 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.150 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.150 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.150 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.150 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.150 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.151 253665 DEBUG nova.virt.hardware [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.154 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:09 np0005532048 systemd[1]: Started libpod-conmon-3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe.scope.
Nov 22 04:09:09 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:09:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7eb54fae6a156fcb45138e6352391f7c413e63437328c7609bd33a96ac2594b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7eb54fae6a156fcb45138e6352391f7c413e63437328c7609bd33a96ac2594b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7eb54fae6a156fcb45138e6352391f7c413e63437328c7609bd33a96ac2594b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7eb54fae6a156fcb45138e6352391f7c413e63437328c7609bd33a96ac2594b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:09 np0005532048 podman[279025]: 2025-11-22 09:09:09.317723914 +0000 UTC m=+0.348576230 container init 3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_burnell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 04:09:09 np0005532048 podman[279025]: 2025-11-22 09:09:09.325890723 +0000 UTC m=+0.356743029 container start 3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:09:09 np0005532048 podman[279025]: 2025-11-22 09:09:09.388937343 +0000 UTC m=+0.419789709 container attach 3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_burnell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.522 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:09:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1581701931' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.622 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.647 253665 DEBUG nova.storage.rbd_utils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:09 np0005532048 nova_compute[253661]: 2025-11-22 09:09:09.651 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]: {
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:    "0": [
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:        {
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "devices": [
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "/dev/loop3"
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            ],
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "lv_name": "ceph_lv0",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "lv_size": "21470642176",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "name": "ceph_lv0",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "tags": {
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.cluster_name": "ceph",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.crush_device_class": "",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.encrypted": "0",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.osd_id": "0",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.type": "block",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.vdo": "0"
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            },
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "type": "block",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "vg_name": "ceph_vg0"
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:        }
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:    ],
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:    "1": [
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:        {
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "devices": [
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "/dev/loop4"
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            ],
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "lv_name": "ceph_lv1",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "lv_size": "21470642176",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "name": "ceph_lv1",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "tags": {
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.cluster_name": "ceph",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.crush_device_class": "",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.encrypted": "0",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.osd_id": "1",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.type": "block",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.vdo": "0"
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            },
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "type": "block",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "vg_name": "ceph_vg1"
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:        }
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:    ],
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:    "2": [
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:        {
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "devices": [
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "/dev/loop5"
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            ],
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "lv_name": "ceph_lv2",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "lv_size": "21470642176",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "name": "ceph_lv2",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "tags": {
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.cluster_name": "ceph",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.crush_device_class": "",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.encrypted": "0",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.osd_id": "2",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.type": "block",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:                "ceph.vdo": "0"
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            },
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "type": "block",
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:            "vg_name": "ceph_vg2"
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:        }
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]:    ]
Nov 22 04:09:10 np0005532048 fervent_burnell[279044]: }
Nov 22 04:09:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:09:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4294778796' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.125 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.127 253665 DEBUG nova.virt.libvirt.vif [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:08:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-663200800',display_name='tempest-SecurityGroupsTestJSON-server-663200800',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-663200800',id=14,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-197d3f9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityGroupsTestJSON-342579724-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:09:00Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=18eb7df8-f3ac-44d2-86c1-db7c0c913c53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.128 253665 DEBUG nova.network.os_vif_util [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.129 253665 DEBUG nova.network.os_vif_util [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:be:aa,bridge_name='br-int',has_traffic_filtering=True,id=0122a4be-9c10-4475-ba7d-5c818be52474,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0122a4be-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.130 253665 DEBUG nova.objects.instance [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'pci_devices' on Instance uuid 18eb7df8-f3ac-44d2-86c1-db7c0c913c53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:09:10 np0005532048 systemd[1]: libpod-3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe.scope: Deactivated successfully.
Nov 22 04:09:10 np0005532048 podman[279025]: 2025-11-22 09:09:10.134790192 +0000 UTC m=+1.165642498 container died 3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.145 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  <uuid>18eb7df8-f3ac-44d2-86c1-db7c0c913c53</uuid>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  <name>instance-0000000e</name>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <nova:name>tempest-SecurityGroupsTestJSON-server-663200800</nova:name>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:09:09</nova:creationTime>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:09:10 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:        <nova:user uuid="ee24e4812c424984881862883987d750">tempest-SecurityGroupsTestJSON-342579724-project-member</nova:user>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:        <nova:project uuid="5879249ab50a40ec9553bc923bdd1042">tempest-SecurityGroupsTestJSON-342579724</nova:project>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:        <nova:port uuid="0122a4be-9c10-4475-ba7d-5c818be52474">
Nov 22 04:09:10 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <entry name="serial">18eb7df8-f3ac-44d2-86c1-db7c0c913c53</entry>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <entry name="uuid">18eb7df8-f3ac-44d2-86c1-db7c0c913c53</entry>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk">
Nov 22 04:09:10 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:09:10 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk.config">
Nov 22 04:09:10 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:09:10 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:ea:be:aa"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <target dev="tap0122a4be-9c"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53/console.log" append="off"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:09:10 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:09:10 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:09:10 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:09:10 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.145 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Preparing to wait for external event network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.145 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.146 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.146 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.146 253665 DEBUG nova.virt.libvirt.vif [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:08:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-663200800',display_name='tempest-SecurityGroupsTestJSON-server-663200800',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-663200800',id=14,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-197d3f9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityG
roupsTestJSON-342579724-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:09:00Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=18eb7df8-f3ac-44d2-86c1-db7c0c913c53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.147 253665 DEBUG nova.network.os_vif_util [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.147 253665 DEBUG nova.network.os_vif_util [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:be:aa,bridge_name='br-int',has_traffic_filtering=True,id=0122a4be-9c10-4475-ba7d-5c818be52474,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0122a4be-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.148 253665 DEBUG os_vif [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:be:aa,bridge_name='br-int',has_traffic_filtering=True,id=0122a4be-9c10-4475-ba7d-5c818be52474,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0122a4be-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.148 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.149 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.149 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.153 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.153 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0122a4be-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.153 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0122a4be-9c, col_values=(('external_ids', {'iface-id': '0122a4be-9c10-4475-ba7d-5c818be52474', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:be:aa', 'vm-uuid': '18eb7df8-f3ac-44d2-86c1-db7c0c913c53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.155 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:10 np0005532048 NetworkManager[48920]: <info>  [1763802550.1566] manager: (tap0122a4be-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.158 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.163 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.164 253665 INFO os_vif [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:be:aa,bridge_name='br-int',has_traffic_filtering=True,id=0122a4be-9c10-4475-ba7d-5c818be52474,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0122a4be-9c')#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.250 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.250 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.251 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] No VIF found with MAC fa:16:3e:ea:be:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.252 253665 INFO nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Using config drive#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.276 253665 DEBUG nova.storage.rbd_utils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.282 253665 DEBUG nova.network.neutron [req-0b5b8552-5fd2-4e75-b2a7-00c1c91f34a2 req-8daf8675-47c1-41cd-a311-19e01eca2b47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Updated VIF entry in instance network info cache for port 0122a4be-9c10-4475-ba7d-5c818be52474. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.282 253665 DEBUG nova.network.neutron [req-0b5b8552-5fd2-4e75-b2a7-00c1c91f34a2 req-8daf8675-47c1-41cd-a311-19e01eca2b47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Updating instance_info_cache with network_info: [{"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:09:10 np0005532048 nova_compute[253661]: 2025-11-22 09:09:10.302 253665 DEBUG oslo_concurrency.lockutils [req-0b5b8552-5fd2-4e75-b2a7-00c1c91f34a2 req-8daf8675-47c1-41cd-a311-19e01eca2b47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:09:10 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c7eb54fae6a156fcb45138e6352391f7c413e63437328c7609bd33a96ac2594b-merged.mount: Deactivated successfully.
Nov 22 04:09:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 156 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 3.8 MiB/s wr, 92 op/s
Nov 22 04:09:11 np0005532048 podman[279025]: 2025-11-22 09:09:11.098687415 +0000 UTC m=+2.129539761 container remove 3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_burnell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 04:09:11 np0005532048 systemd[1]: libpod-conmon-3c6cb897d5335c110aac93d907abaf46396d9ff367cea5676854f3e7c99c81fe.scope: Deactivated successfully.
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.224 253665 INFO nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Creating config drive at /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53/disk.config#033[00m
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.228 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqsddqrq1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.370 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqsddqrq1" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.379 253665 INFO nova.virt.libvirt.driver [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Deleting instance files /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_del#033[00m
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.380 253665 INFO nova.virt.libvirt.driver [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Deletion of /var/lib/nova/instances/103af636-a2aa-4cdb-a2e1-2ab7cf5fb900_del complete#033[00m
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.412 253665 DEBUG nova.storage.rbd_utils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.419 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53/disk.config 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:11 np0005532048 podman[279243]: 2025-11-22 09:09:11.517468488 +0000 UTC m=+0.087228748 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:09:11 np0005532048 podman[279239]: 2025-11-22 09:09:11.532507963 +0000 UTC m=+0.102146230 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.629 253665 INFO nova.compute.manager [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Took 13.46 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.632 253665 DEBUG oslo.service.loopingcall [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.633 253665 DEBUG nova.compute.manager [-] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.633 253665 DEBUG nova.network.neutron [-] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.707 253665 DEBUG oslo_concurrency.processutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53/disk.config 18eb7df8-f3ac-44d2-86c1-db7c0c913c53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.710 253665 INFO nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Deleting local config drive /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53/disk.config because it was imported into RBD.#033[00m
Nov 22 04:09:11 np0005532048 kernel: tap0122a4be-9c: entered promiscuous mode
Nov 22 04:09:11 np0005532048 NetworkManager[48920]: <info>  [1763802551.7851] manager: (tap0122a4be-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Nov 22 04:09:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:11Z|00044|binding|INFO|Claiming lport 0122a4be-9c10-4475-ba7d-5c818be52474 for this chassis.
Nov 22 04:09:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:11Z|00045|binding|INFO|0122a4be-9c10-4475-ba7d-5c818be52474: Claiming fa:16:3e:ea:be:aa 10.100.0.6
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.784 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.790 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:11 np0005532048 systemd-machined[215941]: New machine qemu-15-instance-0000000e.
Nov 22 04:09:11 np0005532048 systemd-udevd[279374]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:09:11 np0005532048 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Nov 22 04:09:11 np0005532048 NetworkManager[48920]: <info>  [1763802551.8537] device (tap0122a4be-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:09:11 np0005532048 NetworkManager[48920]: <info>  [1763802551.8547] device (tap0122a4be-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:09:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.856 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:be:aa 10.100.0.6'], port_security=['fa:16:3e:ea:be:aa 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '18eb7df8-f3ac-44d2-86c1-db7c0c913c53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5879249ab50a40ec9553bc923bdd1042', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90f543f2-0e15-4746-9035-ec29edc5cf1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0de8bc98-4153-4ec7-ae4b-7da28376c78a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0122a4be-9c10-4475-ba7d-5c818be52474) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:09:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.857 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0122a4be-9c10-4475-ba7d-5c818be52474 in datapath bce72c95-f29f-458a-9b0e-7e700aa1deb4 bound to our chassis#033[00m
Nov 22 04:09:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.859 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bce72c95-f29f-458a-9b0e-7e700aa1deb4#033[00m
Nov 22 04:09:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.873 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bad184a9-c748-4306-ba4d-d2abc5dc8754]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.875 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbce72c95-f1 in ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.876 253665 DEBUG nova.network.neutron [-] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:09:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.878 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbce72c95-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:09:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.879 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6099c2c8-57b2-4b54-b763-ff26aa31bbaa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.880 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2ee1cf-bedc-466f-9139-27e208ef738a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:11Z|00046|binding|INFO|Setting lport 0122a4be-9c10-4475-ba7d-5c818be52474 ovn-installed in OVS
Nov 22 04:09:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:11Z|00047|binding|INFO|Setting lport 0122a4be-9c10-4475-ba7d-5c818be52474 up in Southbound
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.887 253665 DEBUG nova.network.neutron [-] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:09:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.898 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[7eef320e-ff3f-4350-b014-30e86fb8e051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.902 253665 INFO nova.compute.manager [-] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Took 0.27 seconds to deallocate network for instance.#033[00m
Nov 22 04:09:11 np0005532048 nova_compute[253661]: 2025-11-22 09:09:11.928 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.930 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6c090813-bf9f-4760-a433-4c59924a8037]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.968 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[24097aef-70b6-4f98-8407-da6c6fcf8449]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:11 np0005532048 NetworkManager[48920]: <info>  [1763802551.9762] manager: (tapbce72c95-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Nov 22 04:09:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:11.974 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[66dae6e0-c95e-4a9b-9e74-bae161516219]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:11 np0005532048 systemd-udevd[279377]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:09:11 np0005532048 podman[279379]: 2025-11-22 09:09:11.996197486 +0000 UTC m=+0.100570222 container create c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 04:09:12 np0005532048 podman[279379]: 2025-11-22 09:09:11.923381709 +0000 UTC m=+0.027754465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.021 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d51e05-8371-4b6d-b82f-9555752a941d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.025 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fb041a34-281a-4812-b462-f67bba61ec59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.029 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.029 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:12 np0005532048 NetworkManager[48920]: <info>  [1763802552.0544] device (tapbce72c95-f0): carrier: link connected
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.061 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7728e761-e71e-4aa6-9131-90ef6e4f69ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.087 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aee3210f-f446-4645-b5ee-17ff96a22bc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbce72c95-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ca:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540279, 'reachable_time': 16826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279423, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:12 np0005532048 systemd[1]: Started libpod-conmon-c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0.scope.
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.109 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9f8fe476-7d3a-4356-8b77-489bd00d8272]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:ca12'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540279, 'tstamp': 540279}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279439, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.116 253665 DEBUG oslo_concurrency.processutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:12 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.132 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[58592137-bbd2-44de-a2ba-cbd611a14904]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbce72c95-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ca:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540279, 'reachable_time': 16826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279446, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:12 np0005532048 podman[279379]: 2025-11-22 09:09:12.15213553 +0000 UTC m=+0.256508296 container init c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 04:09:12 np0005532048 podman[279379]: 2025-11-22 09:09:12.162993424 +0000 UTC m=+0.267366170 container start c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 04:09:12 np0005532048 podman[279379]: 2025-11-22 09:09:12.170082016 +0000 UTC m=+0.274454772 container attach c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 04:09:12 np0005532048 systemd[1]: libpod-c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0.scope: Deactivated successfully.
Nov 22 04:09:12 np0005532048 modest_noyce[279440]: 167 167
Nov 22 04:09:12 np0005532048 conmon[279440]: conmon c69f8d4f2cbbb0e308d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0.scope/container/memory.events
Nov 22 04:09:12 np0005532048 podman[279379]: 2025-11-22 09:09:12.173489339 +0000 UTC m=+0.277862095 container died c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.185 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[44f53f9a-3321-44be-9269-b3db2c0acb61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:12 np0005532048 systemd[1]: var-lib-containers-storage-overlay-09aaef984562ae0088eb0fe709941dc24dbe0c893dc6f2367f6511a786f7150c-merged.mount: Deactivated successfully.
Nov 22 04:09:12 np0005532048 podman[279379]: 2025-11-22 09:09:12.272641864 +0000 UTC m=+0.377014600 container remove c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 04:09:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:09:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/309182576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:09:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/309182576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.286 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b859d9-afc8-410b-b321-0f091d80004a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.288 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbce72c95-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.288 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.288 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbce72c95-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:12 np0005532048 kernel: tapbce72c95-f0: entered promiscuous mode
Nov 22 04:09:12 np0005532048 NetworkManager[48920]: <info>  [1763802552.2909] manager: (tapbce72c95-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.291 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:12 np0005532048 systemd[1]: libpod-conmon-c69f8d4f2cbbb0e308d38ad4da643f57f399cce0866d5f845897931e3b7220a0.scope: Deactivated successfully.
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.297 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbce72c95-f0, col_values=(('external_ids', {'iface-id': '9b713871-83a7-42c2-9c01-d716fc099936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:12 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:12Z|00048|binding|INFO|Releasing lport 9b713871-83a7-42c2-9c01-d716fc099936 from this chassis (sb_readonly=0)
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.301 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.302 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bce72c95-f29f-458a-9b0e-7e700aa1deb4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bce72c95-f29f-458a-9b0e-7e700aa1deb4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.305 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b7deeb3e-2d1f-4962-911f-48afba3d3033]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.306 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-bce72c95-f29f-458a-9b0e-7e700aa1deb4
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/bce72c95-f29f-458a-9b0e-7e700aa1deb4.pid.haproxy
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID bce72c95-f29f-458a-9b0e-7e700aa1deb4
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:09:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:12.307 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'env', 'PROCESS_TAG=haproxy-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bce72c95-f29f-458a-9b0e-7e700aa1deb4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.328 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.334 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802552.3260155, 18eb7df8-f3ac-44d2-86c1-db7c0c913c53 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.334 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] VM Started (Lifecycle Event)#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.352 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.378 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802552.3263586, 18eb7df8-f3ac-44d2-86c1-db7c0c913c53 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.379 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.393 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.399 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.415 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:09:12 np0005532048 podman[279521]: 2025-11-22 09:09:12.491090766 +0000 UTC m=+0.068578306 container create 00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 04:09:12 np0005532048 systemd[1]: Started libpod-conmon-00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf.scope.
Nov 22 04:09:12 np0005532048 podman[279521]: 2025-11-22 09:09:12.463795984 +0000 UTC m=+0.041283514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:09:12 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:09:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6c5194b629dd5ae6c7dfeefa1b024309a4d6d4b90021f04ac222e8e1e3aceb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6c5194b629dd5ae6c7dfeefa1b024309a4d6d4b90021f04ac222e8e1e3aceb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6c5194b629dd5ae6c7dfeefa1b024309a4d6d4b90021f04ac222e8e1e3aceb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6c5194b629dd5ae6c7dfeefa1b024309a4d6d4b90021f04ac222e8e1e3aceb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:09:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/579036546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:09:12 np0005532048 podman[279521]: 2025-11-22 09:09:12.602752086 +0000 UTC m=+0.180239616 container init 00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:09:12 np0005532048 podman[279521]: 2025-11-22 09:09:12.61284078 +0000 UTC m=+0.190328280 container start 00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.619 253665 DEBUG oslo_concurrency.processutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.629 253665 DEBUG nova.compute.provider_tree [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:09:12 np0005532048 podman[279521]: 2025-11-22 09:09:12.632329743 +0000 UTC m=+0.209817263 container attach 00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.644 253665 DEBUG nova.scheduler.client.report [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.653 253665 DEBUG nova.compute.manager [req-bc23c703-40a6-46ba-be50-9533021afa7c req-b810f3cd-64a1-44ff-bef4-df4de079b5ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.654 253665 DEBUG oslo_concurrency.lockutils [req-bc23c703-40a6-46ba-be50-9533021afa7c req-b810f3cd-64a1-44ff-bef4-df4de079b5ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.654 253665 DEBUG oslo_concurrency.lockutils [req-bc23c703-40a6-46ba-be50-9533021afa7c req-b810f3cd-64a1-44ff-bef4-df4de079b5ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.654 253665 DEBUG oslo_concurrency.lockutils [req-bc23c703-40a6-46ba-be50-9533021afa7c req-b810f3cd-64a1-44ff-bef4-df4de079b5ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.654 253665 DEBUG nova.compute.manager [req-bc23c703-40a6-46ba-be50-9533021afa7c req-b810f3cd-64a1-44ff-bef4-df4de079b5ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Processing event network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.655 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.659 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802552.6588144, 18eb7df8-f3ac-44d2-86c1-db7c0c913c53 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.659 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.660 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.664 253665 INFO nova.virt.libvirt.driver [-] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Instance spawned successfully.#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.664 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.675 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.684 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.685 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.685 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.686 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.686 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.687 253665 DEBUG nova.virt.libvirt.driver [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.691 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.722 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:09:12 np0005532048 podman[279566]: 2025-11-22 09:09:12.750112771 +0000 UTC m=+0.100637543 container create c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 04:09:12 np0005532048 nova_compute[253661]: 2025-11-22 09:09:12.765 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:12 np0005532048 podman[279566]: 2025-11-22 09:09:12.682854489 +0000 UTC m=+0.033379291 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:09:12 np0005532048 systemd[1]: Started libpod-conmon-c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f.scope.
Nov 22 04:09:12 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:09:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b439ac87fd05beec68cedfc1c5359f61b19c450f7ad63dcb47892d20e6a6cb9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:12 np0005532048 podman[279566]: 2025-11-22 09:09:12.87325731 +0000 UTC m=+0.223782102 container init c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 04:09:12 np0005532048 podman[279566]: 2025-11-22 09:09:12.886553473 +0000 UTC m=+0.237078285 container start c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 22 04:09:12 np0005532048 neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4[279582]: [NOTICE]   (279586) : New worker (279588) forked
Nov 22 04:09:12 np0005532048 neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4[279582]: [NOTICE]   (279586) : Loading success.
Nov 22 04:09:13 np0005532048 nova_compute[253661]: 2025-11-22 09:09:13.026 253665 INFO nova.scheduler.client.report [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Deleted allocations for instance 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900#033[00m
Nov 22 04:09:13 np0005532048 nova_compute[253661]: 2025-11-22 09:09:13.038 253665 INFO nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Took 11.80 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:09:13 np0005532048 nova_compute[253661]: 2025-11-22 09:09:13.039 253665 DEBUG nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:09:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 161 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 286 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Nov 22 04:09:13 np0005532048 nova_compute[253661]: 2025-11-22 09:09:13.184 253665 DEBUG oslo_concurrency.lockutils [None req-d1ab5dc2-b131-4df7-983b-5c634deb06fb 2e259dc1688c42e3ba13f2239d49b39e 420f909f2162475ba3f933633661986f - - default default] Lock "103af636-a2aa-4cdb-a2e1-2ab7cf5fb900" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 15.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:13 np0005532048 nova_compute[253661]: 2025-11-22 09:09:13.206 253665 INFO nova.compute.manager [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Took 13.98 seconds to build instance.#033[00m
Nov 22 04:09:13 np0005532048 nova_compute[253661]: 2025-11-22 09:09:13.287 253665 DEBUG oslo_concurrency.lockutils [None req-390ea260-8f27-4df9-bc1c-8760ec6a4a03 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:13 np0005532048 nova_compute[253661]: 2025-11-22 09:09:13.395 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802538.3944967, 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:09:13 np0005532048 nova_compute[253661]: 2025-11-22 09:09:13.396 253665 INFO nova.compute.manager [-] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:09:13 np0005532048 nova_compute[253661]: 2025-11-22 09:09:13.413 253665 DEBUG nova.compute.manager [None req-64bdf937-0e87-4149-a211-770c01354e72 - - - - - -] [instance: 103af636-a2aa-4cdb-a2e1-2ab7cf5fb900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:09:13 np0005532048 brave_germain[279542]: {
Nov 22 04:09:13 np0005532048 brave_germain[279542]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:09:13 np0005532048 brave_germain[279542]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:09:13 np0005532048 brave_germain[279542]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:09:13 np0005532048 brave_germain[279542]:        "osd_id": 1,
Nov 22 04:09:13 np0005532048 brave_germain[279542]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:09:13 np0005532048 brave_germain[279542]:        "type": "bluestore"
Nov 22 04:09:13 np0005532048 brave_germain[279542]:    },
Nov 22 04:09:13 np0005532048 brave_germain[279542]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:09:13 np0005532048 brave_germain[279542]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:09:13 np0005532048 brave_germain[279542]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:09:13 np0005532048 brave_germain[279542]:        "osd_id": 0,
Nov 22 04:09:13 np0005532048 brave_germain[279542]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:09:13 np0005532048 brave_germain[279542]:        "type": "bluestore"
Nov 22 04:09:13 np0005532048 brave_germain[279542]:    },
Nov 22 04:09:13 np0005532048 brave_germain[279542]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:09:13 np0005532048 brave_germain[279542]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:09:13 np0005532048 brave_germain[279542]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:09:13 np0005532048 brave_germain[279542]:        "osd_id": 2,
Nov 22 04:09:13 np0005532048 brave_germain[279542]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:09:13 np0005532048 brave_germain[279542]:        "type": "bluestore"
Nov 22 04:09:13 np0005532048 brave_germain[279542]:    }
Nov 22 04:09:13 np0005532048 brave_germain[279542]: }
Nov 22 04:09:13 np0005532048 systemd[1]: libpod-00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf.scope: Deactivated successfully.
Nov 22 04:09:13 np0005532048 systemd[1]: libpod-00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf.scope: Consumed 1.123s CPU time.
Nov 22 04:09:13 np0005532048 conmon[279542]: conmon 00d5d9ed3fef89ff07b3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf.scope/container/memory.events
Nov 22 04:09:13 np0005532048 podman[279521]: 2025-11-22 09:09:13.785406937 +0000 UTC m=+1.362894447 container died 00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:09:13 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6a6c5194b629dd5ae6c7dfeefa1b024309a4d6d4b90021f04ac222e8e1e3aceb-merged.mount: Deactivated successfully.
Nov 22 04:09:13 np0005532048 podman[279521]: 2025-11-22 09:09:13.889043092 +0000 UTC m=+1.466530602 container remove 00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Nov 22 04:09:13 np0005532048 systemd[1]: libpod-conmon-00d5d9ed3fef89ff07b36511709f29e989fde4febeb9600dfd6a5af62d014ecf.scope: Deactivated successfully.
Nov 22 04:09:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:09:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:09:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:09:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:09:13 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 21cba8f6-d6d8-4d0f-88a6-5ac45bc3894e does not exist
Nov 22 04:09:13 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 2e1c2049-4ec7-4c27-b58a-ab555280a7d8 does not exist
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.256 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.257 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:09:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3154894827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.701 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.790 253665 DEBUG nova.compute.manager [req-168372ec-5ae3-4a1b-99a3-07612d04961b req-89c2fd79-3c5b-42d5-9912-f9b4b6c3630a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.790 253665 DEBUG oslo_concurrency.lockutils [req-168372ec-5ae3-4a1b-99a3-07612d04961b req-89c2fd79-3c5b-42d5-9912-f9b4b6c3630a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.791 253665 DEBUG oslo_concurrency.lockutils [req-168372ec-5ae3-4a1b-99a3-07612d04961b req-89c2fd79-3c5b-42d5-9912-f9b4b6c3630a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.791 253665 DEBUG oslo_concurrency.lockutils [req-168372ec-5ae3-4a1b-99a3-07612d04961b req-89c2fd79-3c5b-42d5-9912-f9b4b6c3630a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.792 253665 DEBUG nova.compute.manager [req-168372ec-5ae3-4a1b-99a3-07612d04961b req-89c2fd79-3c5b-42d5-9912-f9b4b6c3630a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] No waiting events found dispatching network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.792 253665 WARNING nova.compute.manager [req-168372ec-5ae3-4a1b-99a3-07612d04961b req-89c2fd79-3c5b-42d5-9912-f9b4b6c3630a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received unexpected event network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.807 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.808 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.824 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:09:14 np0005532048 nova_compute[253661]: 2025-11-22 09:09:14.825 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:09:14 np0005532048 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 22 04:09:14 np0005532048 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 13.714s CPU time.
Nov 22 04:09:14 np0005532048 systemd-machined[215941]: Machine qemu-13-instance-0000000c terminated.
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.030 253665 INFO nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance shutdown successfully after 17 seconds.#033[00m
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.040 253665 INFO nova.virt.libvirt.driver [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance destroyed successfully.#033[00m
Nov 22 04:09:15 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:09:15 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.050 253665 INFO nova.virt.libvirt.driver [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance destroyed successfully.#033[00m
Nov 22 04:09:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 364 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.356 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.364 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.366 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4193MB free_disk=59.922393798828125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.366 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.366 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.427 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance c233bbff-b2e9-442f-818d-e8487dee1c3e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.428 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 18eb7df8-f3ac-44d2-86c1-db7c0c913c53 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.428 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.429 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.478 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.820 253665 DEBUG nova.compute.manager [req-b1eb19e4-48b8-447b-a239-59af8a91ea02 req-fe58c9b5-5039-4f4b-9bef-ef85c5663d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-changed-0122a4be-9c10-4475-ba7d-5c818be52474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.822 253665 DEBUG nova.compute.manager [req-b1eb19e4-48b8-447b-a239-59af8a91ea02 req-fe58c9b5-5039-4f4b-9bef-ef85c5663d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Refreshing instance network info cache due to event network-changed-0122a4be-9c10-4475-ba7d-5c818be52474. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.822 253665 DEBUG oslo_concurrency.lockutils [req-b1eb19e4-48b8-447b-a239-59af8a91ea02 req-fe58c9b5-5039-4f4b-9bef-ef85c5663d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.822 253665 DEBUG oslo_concurrency.lockutils [req-b1eb19e4-48b8-447b-a239-59af8a91ea02 req-fe58c9b5-5039-4f4b-9bef-ef85c5663d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.822 253665 DEBUG nova.network.neutron [req-b1eb19e4-48b8-447b-a239-59af8a91ea02 req-fe58c9b5-5039-4f4b-9bef-ef85c5663d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Refreshing network info cache for port 0122a4be-9c10-4475-ba7d-5c818be52474 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:09:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:09:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2474440318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.942 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.948 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:09:15 np0005532048 nova_compute[253661]: 2025-11-22 09:09:15.964 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:09:16 np0005532048 nova_compute[253661]: 2025-11-22 09:09:16.013 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:09:16 np0005532048 nova_compute[253661]: 2025-11-22 09:09:16.014 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:16 np0005532048 podman[279752]: 2025-11-22 09:09:16.424035051 +0000 UTC m=+0.108247948 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 22 04:09:17 np0005532048 nova_compute[253661]: 2025-11-22 09:09:17.006 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:09:17 np0005532048 nova_compute[253661]: 2025-11-22 09:09:17.007 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:09:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.6 MiB/s wr, 137 op/s
Nov 22 04:09:17 np0005532048 nova_compute[253661]: 2025-11-22 09:09:17.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:09:17 np0005532048 nova_compute[253661]: 2025-11-22 09:09:17.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:09:17 np0005532048 nova_compute[253661]: 2025-11-22 09:09:17.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:09:17 np0005532048 nova_compute[253661]: 2025-11-22 09:09:17.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-c233bbff-b2e9-442f-818d-e8487dee1c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:09:17 np0005532048 nova_compute[253661]: 2025-11-22 09:09:17.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-c233bbff-b2e9-442f-818d-e8487dee1c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:09:17 np0005532048 nova_compute[253661]: 2025-11-22 09:09:17.250 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:09:17 np0005532048 nova_compute[253661]: 2025-11-22 09:09:17.250 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:09:18 np0005532048 nova_compute[253661]: 2025-11-22 09:09:18.014 253665 INFO nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deleting instance files /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e_del#033[00m
Nov 22 04:09:18 np0005532048 nova_compute[253661]: 2025-11-22 09:09:18.016 253665 INFO nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deletion of /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e_del complete#033[00m
Nov 22 04:09:18 np0005532048 nova_compute[253661]: 2025-11-22 09:09:18.456 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:09:18 np0005532048 nova_compute[253661]: 2025-11-22 09:09:18.458 253665 INFO nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating image(s)#033[00m
Nov 22 04:09:18 np0005532048 nova_compute[253661]: 2025-11-22 09:09:18.480 253665 DEBUG nova.storage.rbd_utils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:18 np0005532048 nova_compute[253661]: 2025-11-22 09:09:18.504 253665 DEBUG nova.storage.rbd_utils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:18 np0005532048 nova_compute[253661]: 2025-11-22 09:09:18.529 253665 DEBUG nova.storage.rbd_utils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:18 np0005532048 nova_compute[253661]: 2025-11-22 09:09:18.534 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:18 np0005532048 nova_compute[253661]: 2025-11-22 09:09:18.599 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:18 np0005532048 nova_compute[253661]: 2025-11-22 09:09:18.600 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:18 np0005532048 nova_compute[253661]: 2025-11-22 09:09:18.601 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:18 np0005532048 nova_compute[253661]: 2025-11-22 09:09:18.602 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:18 np0005532048 nova_compute[253661]: 2025-11-22 09:09:18.624 253665 DEBUG nova.storage.rbd_utils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:18 np0005532048 nova_compute[253661]: 2025-11-22 09:09:18.628 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c233bbff-b2e9-442f-818d-e8487dee1c3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:18 np0005532048 nova_compute[253661]: 2025-11-22 09:09:18.929 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:09:18 np0005532048 nova_compute[253661]: 2025-11-22 09:09:18.997 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c233bbff-b2e9-442f-818d-e8487dee1c3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.051 253665 DEBUG nova.storage.rbd_utils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] resizing rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:09:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 114 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 173 op/s
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.147 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.148 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Ensure instance console log exists: /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.148 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.149 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.149 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.150 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.153 253665 WARNING nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.159 253665 DEBUG nova.virt.libvirt.host [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.159 253665 DEBUG nova.virt.libvirt.host [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.172 253665 DEBUG nova.virt.libvirt.host [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.173 253665 DEBUG nova.virt.libvirt.host [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.173 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.173 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.174 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.174 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.174 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.174 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.174 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.175 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.175 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.175 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.175 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.175 253665 DEBUG nova.virt.hardware [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.176 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lazy-loading 'vcpu_model' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.193 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.237 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-c233bbff-b2e9-442f-818d-e8487dee1c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.254 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.254 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.255 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.255 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.259 253665 DEBUG nova.network.neutron [req-b1eb19e4-48b8-447b-a239-59af8a91ea02 req-fe58c9b5-5039-4f4b-9bef-ef85c5663d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Updated VIF entry in instance network info cache for port 0122a4be-9c10-4475-ba7d-5c818be52474. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.259 253665 DEBUG nova.network.neutron [req-b1eb19e4-48b8-447b-a239-59af8a91ea02 req-fe58c9b5-5039-4f4b-9bef-ef85c5663d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Updating instance_info_cache with network_info: [{"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.393 253665 DEBUG oslo_concurrency.lockutils [req-b1eb19e4-48b8-447b-a239-59af8a91ea02 req-fe58c9b5-5039-4f4b-9bef-ef85c5663d70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:09:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4254550709' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.668 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.686 253665 DEBUG nova.storage.rbd_utils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.690 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.972 253665 DEBUG nova.compute.manager [req-569fbfc1-9f15-4546-93d1-aeada4a9e51f req-aa0cf86e-f29e-44c2-99df-4b53fb18e4b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-changed-0122a4be-9c10-4475-ba7d-5c818be52474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.973 253665 DEBUG nova.compute.manager [req-569fbfc1-9f15-4546-93d1-aeada4a9e51f req-aa0cf86e-f29e-44c2-99df-4b53fb18e4b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Refreshing instance network info cache due to event network-changed-0122a4be-9c10-4475-ba7d-5c818be52474. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.974 253665 DEBUG oslo_concurrency.lockutils [req-569fbfc1-9f15-4546-93d1-aeada4a9e51f req-aa0cf86e-f29e-44c2-99df-4b53fb18e4b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.974 253665 DEBUG oslo_concurrency.lockutils [req-569fbfc1-9f15-4546-93d1-aeada4a9e51f req-aa0cf86e-f29e-44c2-99df-4b53fb18e4b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:09:19 np0005532048 nova_compute[253661]: 2025-11-22 09:09:19.974 253665 DEBUG nova.network.neutron [req-569fbfc1-9f15-4546-93d1-aeada4a9e51f req-aa0cf86e-f29e-44c2-99df-4b53fb18e4b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Refreshing network info cache for port 0122a4be-9c10-4475-ba7d-5c818be52474 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:09:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:09:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3260095218' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:09:20 np0005532048 nova_compute[253661]: 2025-11-22 09:09:20.128 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:20 np0005532048 nova_compute[253661]: 2025-11-22 09:09:20.131 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  <uuid>c233bbff-b2e9-442f-818d-e8487dee1c3e</uuid>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  <name>instance-0000000c</name>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersAdmin275Test-server-1195148279</nova:name>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:09:19</nova:creationTime>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:09:20 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:        <nova:user uuid="db3c9e2649dc463a894636918b1536f6">tempest-ServersAdmin275Test-461797968-project-member</nova:user>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:        <nova:project uuid="452c52561ee04e93bc47895d639c9745">tempest-ServersAdmin275Test-461797968</nova:project>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <entry name="serial">c233bbff-b2e9-442f-818d-e8487dee1c3e</entry>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <entry name="uuid">c233bbff-b2e9-442f-818d-e8487dee1c3e</entry>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/c233bbff-b2e9-442f-818d-e8487dee1c3e_disk">
Nov 22 04:09:20 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:09:20 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config">
Nov 22 04:09:20 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:09:20 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/console.log" append="off"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:09:20 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:09:20 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:09:20 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:09:20 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:09:20 np0005532048 nova_compute[253661]: 2025-11-22 09:09:20.180 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:09:20 np0005532048 nova_compute[253661]: 2025-11-22 09:09:20.181 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:09:20 np0005532048 nova_compute[253661]: 2025-11-22 09:09:20.181 253665 INFO nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Using config drive#033[00m
Nov 22 04:09:20 np0005532048 nova_compute[253661]: 2025-11-22 09:09:20.204 253665 DEBUG nova.storage.rbd_utils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:20 np0005532048 nova_compute[253661]: 2025-11-22 09:09:20.227 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lazy-loading 'ec2_ids' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:09:20 np0005532048 nova_compute[253661]: 2025-11-22 09:09:20.287 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lazy-loading 'keypairs' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:09:20 np0005532048 nova_compute[253661]: 2025-11-22 09:09:20.361 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:20 np0005532048 nova_compute[253661]: 2025-11-22 09:09:20.790 253665 INFO nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Creating config drive at /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config#033[00m
Nov 22 04:09:20 np0005532048 nova_compute[253661]: 2025-11-22 09:09:20.795 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuon41cky execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:20 np0005532048 nova_compute[253661]: 2025-11-22 09:09:20.925 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuon41cky" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:20 np0005532048 nova_compute[253661]: 2025-11-22 09:09:20.951 253665 DEBUG nova.storage.rbd_utils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] rbd image c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:20 np0005532048 nova_compute[253661]: 2025-11-22 09:09:20.954 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 114 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 126 KiB/s wr, 118 op/s
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.119 253665 DEBUG oslo_concurrency.processutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config c233bbff-b2e9-442f-818d-e8487dee1c3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.121 253665 INFO nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deleting local config drive /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e/disk.config because it was imported into RBD.#033[00m
Nov 22 04:09:21 np0005532048 systemd-machined[215941]: New machine qemu-16-instance-0000000c.
Nov 22 04:09:21 np0005532048 systemd[1]: Started Virtual Machine qemu-16-instance-0000000c.
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.232 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.570 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for c233bbff-b2e9-442f-818d-e8487dee1c3e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.570 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802561.5697138, c233bbff-b2e9-442f-818d-e8487dee1c3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.570 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.573 253665 DEBUG nova.compute.manager [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.573 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.577 253665 INFO nova.virt.libvirt.driver [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance spawned successfully.#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.578 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.600 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.605 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.606 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.606 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.607 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.607 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.608 253665 DEBUG nova.virt.libvirt.driver [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.611 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.637 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.638 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802561.5725129, c233bbff-b2e9-442f-818d-e8487dee1c3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.639 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] VM Started (Lifecycle Event)#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.655 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.659 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.675 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.768 253665 DEBUG nova.compute.manager [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.839 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.840 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.841 253665 DEBUG nova.objects.instance [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 22 04:09:21 np0005532048 nova_compute[253661]: 2025-11-22 09:09:21.899 253665 DEBUG oslo_concurrency.lockutils [None req-9405df32-c21a-4485-bfcb-4d31b026d19a b42a3fc86d6d47d9a7e70b48fd515e85 7f632518f8f1483f867db6afe42b217e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:22 np0005532048 nova_compute[253661]: 2025-11-22 09:09:22.171 253665 DEBUG nova.network.neutron [req-569fbfc1-9f15-4546-93d1-aeada4a9e51f req-aa0cf86e-f29e-44c2-99df-4b53fb18e4b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Updated VIF entry in instance network info cache for port 0122a4be-9c10-4475-ba7d-5c818be52474. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:09:22 np0005532048 nova_compute[253661]: 2025-11-22 09:09:22.172 253665 DEBUG nova.network.neutron [req-569fbfc1-9f15-4546-93d1-aeada4a9e51f req-aa0cf86e-f29e-44c2-99df-4b53fb18e4b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Updating instance_info_cache with network_info: [{"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:09:22 np0005532048 nova_compute[253661]: 2025-11-22 09:09:22.187 253665 DEBUG oslo_concurrency.lockutils [req-569fbfc1-9f15-4546-93d1-aeada4a9e51f req-aa0cf86e-f29e-44c2-99df-4b53fb18e4b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-18eb7df8-f3ac-44d2-86c1-db7c0c913c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:09:22 np0005532048 nova_compute[253661]: 2025-11-22 09:09:22.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:09:22 np0005532048 nova_compute[253661]: 2025-11-22 09:09:22.389 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "c233bbff-b2e9-442f-818d-e8487dee1c3e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:22 np0005532048 nova_compute[253661]: 2025-11-22 09:09:22.391 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "c233bbff-b2e9-442f-818d-e8487dee1c3e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:22 np0005532048 nova_compute[253661]: 2025-11-22 09:09:22.391 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "c233bbff-b2e9-442f-818d-e8487dee1c3e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:22 np0005532048 nova_compute[253661]: 2025-11-22 09:09:22.392 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "c233bbff-b2e9-442f-818d-e8487dee1c3e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:22 np0005532048 nova_compute[253661]: 2025-11-22 09:09:22.392 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "c233bbff-b2e9-442f-818d-e8487dee1c3e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:22 np0005532048 nova_compute[253661]: 2025-11-22 09:09:22.393 253665 INFO nova.compute.manager [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Terminating instance#033[00m
Nov 22 04:09:22 np0005532048 nova_compute[253661]: 2025-11-22 09:09:22.394 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "refresh_cache-c233bbff-b2e9-442f-818d-e8487dee1c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:09:22 np0005532048 nova_compute[253661]: 2025-11-22 09:09:22.394 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquired lock "refresh_cache-c233bbff-b2e9-442f-818d-e8487dee1c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:09:22 np0005532048 nova_compute[253661]: 2025-11-22 09:09:22.394 253665 DEBUG nova.network.neutron [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:09:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:09:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:09:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:09:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:09:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:09:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:09:22 np0005532048 nova_compute[253661]: 2025-11-22 09:09:22.872 253665 DEBUG nova.network.neutron [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:09:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 105 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 784 KiB/s wr, 129 op/s
Nov 22 04:09:23 np0005532048 nova_compute[253661]: 2025-11-22 09:09:23.420 253665 DEBUG nova.network.neutron [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:09:23 np0005532048 nova_compute[253661]: 2025-11-22 09:09:23.433 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Releasing lock "refresh_cache-c233bbff-b2e9-442f-818d-e8487dee1c3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:09:23 np0005532048 nova_compute[253661]: 2025-11-22 09:09:23.433 253665 DEBUG nova.compute.manager [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:09:23 np0005532048 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 22 04:09:23 np0005532048 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000c.scope: Consumed 2.276s CPU time.
Nov 22 04:09:23 np0005532048 systemd-machined[215941]: Machine qemu-16-instance-0000000c terminated.
Nov 22 04:09:23 np0005532048 nova_compute[253661]: 2025-11-22 09:09:23.861 253665 INFO nova.virt.libvirt.driver [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance destroyed successfully.#033[00m
Nov 22 04:09:23 np0005532048 nova_compute[253661]: 2025-11-22 09:09:23.864 253665 DEBUG nova.objects.instance [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lazy-loading 'resources' on Instance uuid c233bbff-b2e9-442f-818d-e8487dee1c3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:09:24 np0005532048 nova_compute[253661]: 2025-11-22 09:09:24.529 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 134 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.9 MiB/s wr, 205 op/s
Nov 22 04:09:25 np0005532048 nova_compute[253661]: 2025-11-22 09:09:25.266 253665 INFO nova.virt.libvirt.driver [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deleting instance files /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e_del#033[00m
Nov 22 04:09:25 np0005532048 nova_compute[253661]: 2025-11-22 09:09:25.267 253665 INFO nova.virt.libvirt.driver [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deletion of /var/lib/nova/instances/c233bbff-b2e9-442f-818d-e8487dee1c3e_del complete#033[00m
Nov 22 04:09:25 np0005532048 nova_compute[253661]: 2025-11-22 09:09:25.364 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:25 np0005532048 nova_compute[253661]: 2025-11-22 09:09:25.507 253665 INFO nova.compute.manager [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Took 2.07 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:09:25 np0005532048 nova_compute[253661]: 2025-11-22 09:09:25.508 253665 DEBUG oslo.service.loopingcall [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:09:25 np0005532048 nova_compute[253661]: 2025-11-22 09:09:25.508 253665 DEBUG nova.compute.manager [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:09:25 np0005532048 nova_compute[253661]: 2025-11-22 09:09:25.508 253665 DEBUG nova.network.neutron [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:09:25 np0005532048 nova_compute[253661]: 2025-11-22 09:09:25.688 253665 DEBUG nova.network.neutron [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:09:25 np0005532048 nova_compute[253661]: 2025-11-22 09:09:25.696 253665 DEBUG nova.network.neutron [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:09:25 np0005532048 nova_compute[253661]: 2025-11-22 09:09:25.707 253665 INFO nova.compute.manager [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Took 0.20 seconds to deallocate network for instance.#033[00m
Nov 22 04:09:25 np0005532048 nova_compute[253661]: 2025-11-22 09:09:25.751 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:25 np0005532048 nova_compute[253661]: 2025-11-22 09:09:25.752 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:25 np0005532048 nova_compute[253661]: 2025-11-22 09:09:25.819 253665 DEBUG oslo_concurrency.processutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:26 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:26Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ea:be:aa 10.100.0.6
Nov 22 04:09:26 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:26Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ea:be:aa 10.100.0.6
Nov 22 04:09:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:09:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2002464113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:09:26 np0005532048 nova_compute[253661]: 2025-11-22 09:09:26.324 253665 DEBUG oslo_concurrency.processutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:26 np0005532048 nova_compute[253661]: 2025-11-22 09:09:26.333 253665 DEBUG nova.compute.provider_tree [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:09:26 np0005532048 nova_compute[253661]: 2025-11-22 09:09:26.352 253665 DEBUG nova.scheduler.client.report [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:09:26 np0005532048 nova_compute[253661]: 2025-11-22 09:09:26.392 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:26 np0005532048 nova_compute[253661]: 2025-11-22 09:09:26.459 253665 INFO nova.scheduler.client.report [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Deleted allocations for instance c233bbff-b2e9-442f-818d-e8487dee1c3e#033[00m
Nov 22 04:09:26 np0005532048 nova_compute[253661]: 2025-11-22 09:09:26.543 253665 DEBUG oslo_concurrency.lockutils [None req-9d22b355-4915-46ab-bac2-0c076b698f79 db3c9e2649dc463a894636918b1536f6 452c52561ee04e93bc47895d639c9745 - - default default] Lock "c233bbff-b2e9-442f-818d-e8487dee1c3e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 126 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 205 op/s
Nov 22 04:09:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:27.953 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:27.954 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:27.955 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 117 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 248 op/s
Nov 22 04:09:29 np0005532048 nova_compute[253661]: 2025-11-22 09:09:29.531 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:30 np0005532048 nova_compute[253661]: 2025-11-22 09:09:30.366 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:30.405 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:09:30 np0005532048 nova_compute[253661]: 2025-11-22 09:09:30.405 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:30.407 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:09:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 117 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 195 op/s
Nov 22 04:09:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 121 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 200 op/s
Nov 22 04:09:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:34.411 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:34 np0005532048 nova_compute[253661]: 2025-11-22 09:09:34.532 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Nov 22 04:09:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 189 op/s
Nov 22 04:09:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Nov 22 04:09:35 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Nov 22 04:09:35 np0005532048 nova_compute[253661]: 2025-11-22 09:09:35.369 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:36 np0005532048 nova_compute[253661]: 2025-11-22 09:09:36.854 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:36 np0005532048 nova_compute[253661]: 2025-11-22 09:09:36.854 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:36 np0005532048 nova_compute[253661]: 2025-11-22 09:09:36.877 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:09:36 np0005532048 nova_compute[253661]: 2025-11-22 09:09:36.957 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:36 np0005532048 nova_compute[253661]: 2025-11-22 09:09:36.958 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:36 np0005532048 nova_compute[253661]: 2025-11-22 09:09:36.965 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:09:36 np0005532048 nova_compute[253661]: 2025-11-22 09:09:36.965 253665 INFO nova.compute.claims [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.061 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 1.9 MiB/s wr, 94 op/s
Nov 22 04:09:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:09:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3016299636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.615 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.624 253665 DEBUG nova.compute.provider_tree [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.639 253665 DEBUG nova.scheduler.client.report [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.658 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.659 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.700 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.700 253665 DEBUG nova.network.neutron [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.718 253665 INFO nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.733 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.813 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.815 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.815 253665 INFO nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Creating image(s)#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.839 253665 DEBUG nova.storage.rbd_utils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image c5f708d0-4110-417f-8353-dc61992d22dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.868 253665 DEBUG nova.storage.rbd_utils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image c5f708d0-4110-417f-8353-dc61992d22dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.897 253665 DEBUG nova.storage.rbd_utils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image c5f708d0-4110-417f-8353-dc61992d22dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.903 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.949 253665 DEBUG nova.policy [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee24e4812c424984881862883987d750', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5879249ab50a40ec9553bc923bdd1042', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.974 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.975 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.975 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:37 np0005532048 nova_compute[253661]: 2025-11-22 09:09:37.976 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:38 np0005532048 nova_compute[253661]: 2025-11-22 09:09:38.002 253665 DEBUG nova.storage.rbd_utils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image c5f708d0-4110-417f-8353-dc61992d22dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:38 np0005532048 nova_compute[253661]: 2025-11-22 09:09:38.007 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c5f708d0-4110-417f-8353-dc61992d22dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:38 np0005532048 nova_compute[253661]: 2025-11-22 09:09:38.544 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c5f708d0-4110-417f-8353-dc61992d22dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:38 np0005532048 nova_compute[253661]: 2025-11-22 09:09:38.577 253665 DEBUG nova.network.neutron [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Successfully created port: 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:09:38 np0005532048 nova_compute[253661]: 2025-11-22 09:09:38.616 253665 DEBUG nova.storage.rbd_utils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] resizing rbd image c5f708d0-4110-417f-8353-dc61992d22dc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:09:38 np0005532048 nova_compute[253661]: 2025-11-22 09:09:38.740 253665 DEBUG nova.objects.instance [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'migration_context' on Instance uuid c5f708d0-4110-417f-8353-dc61992d22dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:09:38 np0005532048 nova_compute[253661]: 2025-11-22 09:09:38.752 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:09:38 np0005532048 nova_compute[253661]: 2025-11-22 09:09:38.753 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Ensure instance console log exists: /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:09:38 np0005532048 nova_compute[253661]: 2025-11-22 09:09:38.753 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:38 np0005532048 nova_compute[253661]: 2025-11-22 09:09:38.754 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:38 np0005532048 nova_compute[253661]: 2025-11-22 09:09:38.754 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:38 np0005532048 nova_compute[253661]: 2025-11-22 09:09:38.860 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802563.8584688, c233bbff-b2e9-442f-818d-e8487dee1c3e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:09:38 np0005532048 nova_compute[253661]: 2025-11-22 09:09:38.860 253665 INFO nova.compute.manager [-] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:09:38 np0005532048 nova_compute[253661]: 2025-11-22 09:09:38.882 253665 DEBUG nova.compute.manager [None req-12376c83-52d3-4afe-915c-27c1ee3f3ecf - - - - - -] [instance: c233bbff-b2e9-442f-818d-e8487dee1c3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:09:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 128 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 63 KiB/s wr, 22 op/s
Nov 22 04:09:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Nov 22 04:09:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Nov 22 04:09:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Nov 22 04:09:39 np0005532048 nova_compute[253661]: 2025-11-22 09:09:39.535 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:40 np0005532048 nova_compute[253661]: 2025-11-22 09:09:40.372 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:40 np0005532048 nova_compute[253661]: 2025-11-22 09:09:40.806 253665 DEBUG nova.network.neutron [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Successfully updated port: 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:09:40 np0005532048 nova_compute[253661]: 2025-11-22 09:09:40.821 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:09:40 np0005532048 nova_compute[253661]: 2025-11-22 09:09:40.821 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquired lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:09:40 np0005532048 nova_compute[253661]: 2025-11-22 09:09:40.822 253665 DEBUG nova.network.neutron [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.016 253665 DEBUG nova.network.neutron [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:09:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 128 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 72 KiB/s wr, 20 op/s
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.178 253665 DEBUG nova.compute.manager [req-31ab415b-8bcd-4985-9743-0b6ebcd88e76 req-d2b980e6-2d71-4eb2-84c8-c6fba8d32246 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-changed-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.178 253665 DEBUG nova.compute.manager [req-31ab415b-8bcd-4985-9743-0b6ebcd88e76 req-d2b980e6-2d71-4eb2-84c8-c6fba8d32246 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Refreshing instance network info cache due to event network-changed-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.178 253665 DEBUG oslo_concurrency.lockutils [req-31ab415b-8bcd-4985-9743-0b6ebcd88e76 req-d2b980e6-2d71-4eb2-84c8-c6fba8d32246 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.695 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquiring lock "d6aea4a7-7722-4565-8c76-6d257dcc5362" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.696 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "d6aea4a7-7722-4565-8c76-6d257dcc5362" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.718 253665 DEBUG nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.793 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.794 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.801 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.801 253665 INFO nova.compute.claims [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.936 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.968 253665 DEBUG nova.network.neutron [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updating instance_info_cache with network_info: [{"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.988 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Releasing lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.988 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Instance network_info: |[{"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.989 253665 DEBUG oslo_concurrency.lockutils [req-31ab415b-8bcd-4985-9743-0b6ebcd88e76 req-d2b980e6-2d71-4eb2-84c8-c6fba8d32246 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.989 253665 DEBUG nova.network.neutron [req-31ab415b-8bcd-4985-9743-0b6ebcd88e76 req-d2b980e6-2d71-4eb2-84c8-c6fba8d32246 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Refreshing network info cache for port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.992 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Start _get_guest_xml network_info=[{"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:09:41 np0005532048 nova_compute[253661]: 2025-11-22 09:09:41.998 253665 WARNING nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.004 253665 DEBUG nova.virt.libvirt.host [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.005 253665 DEBUG nova.virt.libvirt.host [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.016 253665 DEBUG nova.virt.libvirt.host [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.017 253665 DEBUG nova.virt.libvirt.host [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.017 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.017 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.018 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.018 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.018 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.018 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.018 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.019 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.019 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.019 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.019 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.019 253665 DEBUG nova.virt.hardware [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.023 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:42 np0005532048 podman[280394]: 2025-11-22 09:09:42.373402138 +0000 UTC m=+0.061617416 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:09:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:09:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228160723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:09:42 np0005532048 podman[280395]: 2025-11-22 09:09:42.396844327 +0000 UTC m=+0.079626383 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.414 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.421 253665 DEBUG nova.compute.provider_tree [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.449 253665 DEBUG nova.scheduler.client.report [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:09:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:09:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/156589561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.470 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.471 253665 DEBUG nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.487 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.515 253665 DEBUG nova.storage.rbd_utils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image c5f708d0-4110-417f-8353-dc61992d22dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.520 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.553 253665 DEBUG nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.554 253665 DEBUG nova.network.neutron [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.574 253665 INFO nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.591 253665 DEBUG nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.677 253665 DEBUG nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.679 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.679 253665 INFO nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Creating image(s)#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.707 253665 DEBUG nova.storage.rbd_utils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] rbd image d6aea4a7-7722-4565-8c76-6d257dcc5362_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.737 253665 DEBUG nova.storage.rbd_utils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] rbd image d6aea4a7-7722-4565-8c76-6d257dcc5362_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.762 253665 DEBUG nova.storage.rbd_utils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] rbd image d6aea4a7-7722-4565-8c76-6d257dcc5362_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.768 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.839 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.840 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.841 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.841 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.863 253665 DEBUG nova.storage.rbd_utils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] rbd image d6aea4a7-7722-4565-8c76-6d257dcc5362_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.869 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d6aea4a7-7722-4565-8c76-6d257dcc5362_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:09:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4122168536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.993 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.995 253665 DEBUG nova.virt.libvirt.vif [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:09:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-782283387',display_name='tempest-SecurityGroupsTestJSON-server-782283387',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-782283387',id=15,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-r4cim3xq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityGroupsTestJSON-342579724-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:09:37Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=c5f708d0-4110-417f-8353-dc61992d22dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.996 253665 DEBUG nova.network.os_vif_util [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.997 253665 DEBUG nova.network.os_vif_util [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:09:42 np0005532048 nova_compute[253661]: 2025-11-22 09:09:42.998 253665 DEBUG nova.objects.instance [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5f708d0-4110-417f-8353-dc61992d22dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.016 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  <uuid>c5f708d0-4110-417f-8353-dc61992d22dc</uuid>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  <name>instance-0000000f</name>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <nova:name>tempest-SecurityGroupsTestJSON-server-782283387</nova:name>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:09:41</nova:creationTime>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:09:43 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:        <nova:user uuid="ee24e4812c424984881862883987d750">tempest-SecurityGroupsTestJSON-342579724-project-member</nova:user>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:        <nova:project uuid="5879249ab50a40ec9553bc923bdd1042">tempest-SecurityGroupsTestJSON-342579724</nova:project>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:        <nova:port uuid="2bc1ef13-abf9-49ce-b3bb-41d737b9cd13">
Nov 22 04:09:43 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <entry name="serial">c5f708d0-4110-417f-8353-dc61992d22dc</entry>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <entry name="uuid">c5f708d0-4110-417f-8353-dc61992d22dc</entry>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/c5f708d0-4110-417f-8353-dc61992d22dc_disk">
Nov 22 04:09:43 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:09:43 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/c5f708d0-4110-417f-8353-dc61992d22dc_disk.config">
Nov 22 04:09:43 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:09:43 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:2b:74:48"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <target dev="tap2bc1ef13-ab"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/console.log" append="off"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:09:43 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:09:43 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:09:43 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:09:43 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.017 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Preparing to wait for external event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.018 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.019 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.019 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.020 253665 DEBUG nova.virt.libvirt.vif [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:09:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-782283387',display_name='tempest-SecurityGroupsTestJSON-server-782283387',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-782283387',id=15,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-r4cim3xq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityGroupsTestJSON-342579724-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:09:37Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=c5f708d0-4110-417f-8353-dc61992d22dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.020 253665 DEBUG nova.network.os_vif_util [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.021 253665 DEBUG nova.network.os_vif_util [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.022 253665 DEBUG os_vif [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.023 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.023 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.024 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.029 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.030 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2bc1ef13-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.030 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2bc1ef13-ab, col_values=(('external_ids', {'iface-id': '2bc1ef13-abf9-49ce-b3bb-41d737b9cd13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:74:48', 'vm-uuid': 'c5f708d0-4110-417f-8353-dc61992d22dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.032 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:43 np0005532048 NetworkManager[48920]: <info>  [1763802583.0330] manager: (tap2bc1ef13-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.034 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.040 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.041 253665 INFO os_vif [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab')#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.078 253665 DEBUG nova.network.neutron [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.078 253665 DEBUG nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:09:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 138 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 689 KiB/s wr, 57 op/s
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.193 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.194 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.194 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] No VIF found with MAC fa:16:3e:2b:74:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.195 253665 INFO nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Using config drive#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.215 253665 DEBUG nova.storage.rbd_utils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image c5f708d0-4110-417f-8353-dc61992d22dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.369 253665 DEBUG nova.network.neutron [req-31ab415b-8bcd-4985-9743-0b6ebcd88e76 req-d2b980e6-2d71-4eb2-84c8-c6fba8d32246 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updated VIF entry in instance network info cache for port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.370 253665 DEBUG nova.network.neutron [req-31ab415b-8bcd-4985-9743-0b6ebcd88e76 req-d2b980e6-2d71-4eb2-84c8-c6fba8d32246 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updating instance_info_cache with network_info: [{"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.383 253665 DEBUG oslo_concurrency.lockutils [req-31ab415b-8bcd-4985-9743-0b6ebcd88e76 req-d2b980e6-2d71-4eb2-84c8-c6fba8d32246 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.551 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d6aea4a7-7722-4565-8c76-6d257dcc5362_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.683s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.623 253665 DEBUG nova.storage.rbd_utils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] resizing rbd image d6aea4a7-7722-4565-8c76-6d257dcc5362_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.770 253665 DEBUG nova.objects.instance [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lazy-loading 'migration_context' on Instance uuid d6aea4a7-7722-4565-8c76-6d257dcc5362 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.784 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.784 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Ensure instance console log exists: /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.785 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.785 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.785 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.787 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.793 253665 WARNING nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.800 253665 DEBUG nova.virt.libvirt.host [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.801 253665 DEBUG nova.virt.libvirt.host [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.804 253665 DEBUG nova.virt.libvirt.host [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.805 253665 DEBUG nova.virt.libvirt.host [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.805 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.805 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.806 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.806 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.807 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.807 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.807 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.808 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.808 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.808 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.808 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.809 253665 DEBUG nova.virt.hardware [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.812 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.919 253665 INFO nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Creating config drive at /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/disk.config#033[00m
Nov 22 04:09:43 np0005532048 nova_compute[253661]: 2025-11-22 09:09:43.927 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprqhspvri execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.062 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprqhspvri" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.087 253665 DEBUG nova.storage.rbd_utils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] rbd image c5f708d0-4110-417f-8353-dc61992d22dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.092 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/disk.config c5f708d0-4110-417f-8353-dc61992d22dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:09:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/884135950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.265 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.300 253665 DEBUG nova.storage.rbd_utils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] rbd image d6aea4a7-7722-4565-8c76-6d257dcc5362_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.308 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.536 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:09:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2804180269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.834 253665 DEBUG oslo_concurrency.processutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/disk.config c5f708d0-4110-417f-8353-dc61992d22dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.742s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.835 253665 INFO nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Deleting local config drive /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/disk.config because it was imported into RBD.#033[00m
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.842 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.843 253665 DEBUG nova.objects.instance [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lazy-loading 'pci_devices' on Instance uuid d6aea4a7-7722-4565-8c76-6d257dcc5362 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.855 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  <uuid>d6aea4a7-7722-4565-8c76-6d257dcc5362</uuid>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  <name>instance-00000010</name>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerDiagnosticsTest-server-1370620633</nova:name>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:09:43</nova:creationTime>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:09:44 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:        <nova:user uuid="79b4e753165d476ea489590d62266a4d">tempest-ServerDiagnosticsTest-1737322539-project-member</nova:user>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:        <nova:project uuid="573fbf80a5d94f8f841634d42a3bd35a">tempest-ServerDiagnosticsTest-1737322539</nova:project>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <entry name="serial">d6aea4a7-7722-4565-8c76-6d257dcc5362</entry>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <entry name="uuid">d6aea4a7-7722-4565-8c76-6d257dcc5362</entry>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d6aea4a7-7722-4565-8c76-6d257dcc5362_disk">
Nov 22 04:09:44 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:09:44 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d6aea4a7-7722-4565-8c76-6d257dcc5362_disk.config">
Nov 22 04:09:44 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:09:44 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362/console.log" append="off"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:09:44 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:09:44 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:09:44 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:09:44 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:09:44 np0005532048 kernel: tap2bc1ef13-ab: entered promiscuous mode
Nov 22 04:09:44 np0005532048 NetworkManager[48920]: <info>  [1763802584.8946] manager: (tap2bc1ef13-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Nov 22 04:09:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:44Z|00049|binding|INFO|Claiming lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for this chassis.
Nov 22 04:09:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:44Z|00050|binding|INFO|2bc1ef13-abf9-49ce-b3bb-41d737b9cd13: Claiming fa:16:3e:2b:74:48 10.100.0.8
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.896 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:44.909 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:74:48 10.100.0.8'], port_security=['fa:16:3e:2b:74:48 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c5f708d0-4110-417f-8353-dc61992d22dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5879249ab50a40ec9553bc923bdd1042', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90f543f2-0e15-4746-9035-ec29edc5cf1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0de8bc98-4153-4ec7-ae4b-7da28376c78a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:09:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:44.910 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 in datapath bce72c95-f29f-458a-9b0e-7e700aa1deb4 bound to our chassis#033[00m
Nov 22 04:09:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:44.912 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bce72c95-f29f-458a-9b0e-7e700aa1deb4#033[00m
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.917 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.917 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.918 253665 INFO nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Using config drive#033[00m
Nov 22 04:09:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:44Z|00051|binding|INFO|Setting lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 ovn-installed in OVS
Nov 22 04:09:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:44Z|00052|binding|INFO|Setting lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 up in Southbound
Nov 22 04:09:44 np0005532048 systemd-udevd[280784]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:09:44 np0005532048 systemd-machined[215941]: New machine qemu-17-instance-0000000f.
Nov 22 04:09:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:44.936 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f047f832-acc1-494d-bd83-de7f9c5909e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:44 np0005532048 systemd[1]: Started Virtual Machine qemu-17-instance-0000000f.
Nov 22 04:09:44 np0005532048 NetworkManager[48920]: <info>  [1763802584.9524] device (tap2bc1ef13-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:09:44 np0005532048 NetworkManager[48920]: <info>  [1763802584.9532] device (tap2bc1ef13-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.956 253665 DEBUG nova.storage.rbd_utils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] rbd image d6aea4a7-7722-4565-8c76-6d257dcc5362_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:44 np0005532048 nova_compute[253661]: 2025-11-22 09:09:44.963 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:44.973 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f1025c49-0722-48d9-95f7-6f9208ac0572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:44.978 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[228abb60-2fb6-45d8-9e14-c80d42462594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:45.009 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e9cb6014-6847-434d-9558-1a54c4c9cf3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:45.028 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be98bb7c-07a9-4d2b-bbb2-d36c99f94274]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbce72c95-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ca:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540279, 'reachable_time': 16826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280807, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:45.046 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06abecf0-8897-402a-b8fd-eaae463ab121]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbce72c95-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540297, 'tstamp': 540297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280809, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbce72c95-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540301, 'tstamp': 540301}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280809, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:45.048 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbce72c95-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.050 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:45.053 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbce72c95-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:45.054 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:09:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:45.054 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbce72c95-f0, col_values=(('external_ids', {'iface-id': '9b713871-83a7-42c2-9c01-d716fc099936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:45.054 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:09:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.115 253665 INFO nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Creating config drive at /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362/disk.config#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.119 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu_fsbpk6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.225 253665 DEBUG nova.compute.manager [req-2daa864a-5cf6-4a06-a000-1d6a1229c4d5 req-57ae986e-31a9-486b-9d2a-27d7da53dacc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.226 253665 DEBUG oslo_concurrency.lockutils [req-2daa864a-5cf6-4a06-a000-1d6a1229c4d5 req-57ae986e-31a9-486b-9d2a-27d7da53dacc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.226 253665 DEBUG oslo_concurrency.lockutils [req-2daa864a-5cf6-4a06-a000-1d6a1229c4d5 req-57ae986e-31a9-486b-9d2a-27d7da53dacc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.226 253665 DEBUG oslo_concurrency.lockutils [req-2daa864a-5cf6-4a06-a000-1d6a1229c4d5 req-57ae986e-31a9-486b-9d2a-27d7da53dacc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.226 253665 DEBUG nova.compute.manager [req-2daa864a-5cf6-4a06-a000-1d6a1229c4d5 req-57ae986e-31a9-486b-9d2a-27d7da53dacc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Processing event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.251 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu_fsbpk6" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.280 253665 DEBUG nova.storage.rbd_utils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] rbd image d6aea4a7-7722-4565-8c76-6d257dcc5362_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.285 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362/disk.config d6aea4a7-7722-4565-8c76-6d257dcc5362_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.580 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.582 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802585.5817897, c5f708d0-4110-417f-8353-dc61992d22dc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.582 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] VM Started (Lifecycle Event)#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.585 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.588 253665 INFO nova.virt.libvirt.driver [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Instance spawned successfully.#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.588 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.598 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.600 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.610 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.611 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.611 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.611 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.612 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.612 253665 DEBUG nova.virt.libvirt.driver [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.619 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.619 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802585.5818622, c5f708d0-4110-417f-8353-dc61992d22dc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.619 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] VM Paused (Lifecycle Event)
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.639 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.643 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802585.5849652, c5f708d0-4110-417f-8353-dc61992d22dc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.643 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] VM Resumed (Lifecycle Event)
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.659 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.662 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.680 253665 INFO nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Took 7.87 seconds to spawn the instance on the hypervisor.
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.680 253665 DEBUG nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.682 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.754 253665 INFO nova.compute.manager [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Took 8.83 seconds to build instance.
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.771 253665 DEBUG oslo_concurrency.lockutils [None req-683b93c8-9808-4651-b927-783228e85f66 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.809 253665 DEBUG oslo_concurrency.processutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362/disk.config d6aea4a7-7722-4565-8c76-6d257dcc5362_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:09:45 np0005532048 nova_compute[253661]: 2025-11-22 09:09:45.810 253665 INFO nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Deleting local config drive /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362/disk.config because it was imported into RBD.
Nov 22 04:09:45 np0005532048 systemd-machined[215941]: New machine qemu-18-instance-00000010.
Nov 22 04:09:45 np0005532048 systemd[1]: Started Virtual Machine qemu-18-instance-00000010.
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.446 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802586.4455445, d6aea4a7-7722-4565-8c76-6d257dcc5362 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.447 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] VM Resumed (Lifecycle Event)
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.450 253665 DEBUG nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.450 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.455 253665 INFO nova.virt.libvirt.driver [-] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Instance spawned successfully.
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.455 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.477 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.480 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.480 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.481 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.481 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.481 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.481 253665 DEBUG nova.virt.libvirt.driver [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.487 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.516 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.517 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802586.4471738, d6aea4a7-7722-4565-8c76-6d257dcc5362 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.517 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] VM Started (Lifecycle Event)
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.541 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.545 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.551 253665 INFO nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Took 3.87 seconds to spawn the instance on the hypervisor.
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.552 253665 DEBUG nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.564 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.609 253665 INFO nova.compute.manager [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Took 4.84 seconds to build instance.
Nov 22 04:09:46 np0005532048 nova_compute[253661]: 2025-11-22 09:09:46.624 253665 DEBUG oslo_concurrency.lockutils [None req-95f89b62-5276-4d63-839e-0778bc49d3a0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "d6aea4a7-7722-4565-8c76-6d257dcc5362" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:09:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 185 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 534 KiB/s rd, 2.7 MiB/s wr, 105 op/s
Nov 22 04:09:47 np0005532048 nova_compute[253661]: 2025-11-22 09:09:47.298 253665 DEBUG nova.compute.manager [req-8602bbe1-e26d-446c-9585-a42357e7aacb req-448fe4fa-6fb9-4c25-9371-78420186a265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:09:47 np0005532048 nova_compute[253661]: 2025-11-22 09:09:47.299 253665 DEBUG oslo_concurrency.lockutils [req-8602bbe1-e26d-446c-9585-a42357e7aacb req-448fe4fa-6fb9-4c25-9371-78420186a265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:09:47 np0005532048 nova_compute[253661]: 2025-11-22 09:09:47.299 253665 DEBUG oslo_concurrency.lockutils [req-8602bbe1-e26d-446c-9585-a42357e7aacb req-448fe4fa-6fb9-4c25-9371-78420186a265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:09:47 np0005532048 nova_compute[253661]: 2025-11-22 09:09:47.300 253665 DEBUG oslo_concurrency.lockutils [req-8602bbe1-e26d-446c-9585-a42357e7aacb req-448fe4fa-6fb9-4c25-9371-78420186a265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:09:47 np0005532048 nova_compute[253661]: 2025-11-22 09:09:47.300 253665 DEBUG nova.compute.manager [req-8602bbe1-e26d-446c-9585-a42357e7aacb req-448fe4fa-6fb9-4c25-9371-78420186a265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] No waiting events found dispatching network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:09:47 np0005532048 nova_compute[253661]: 2025-11-22 09:09:47.300 253665 WARNING nova.compute.manager [req-8602bbe1-e26d-446c-9585-a42357e7aacb req-448fe4fa-6fb9-4c25-9371-78420186a265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received unexpected event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for instance with vm_state active and task_state None.
Nov 22 04:09:47 np0005532048 podman[280949]: 2025-11-22 09:09:47.413722867 +0000 UTC m=+0.102766675 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 04:09:48 np0005532048 nova_compute[253661]: 2025-11-22 09:09:48.032 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:48 np0005532048 nova_compute[253661]: 2025-11-22 09:09:48.295 253665 DEBUG nova.compute.manager [None req-2bcdfe52-6137-445b-b8b1-ae7bd5d47d83 385bfa70f94e4e118f9fa6d567f40b2f 3259232df7614dd094a8c6c8c274c0fe - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:09:48 np0005532048 nova_compute[253661]: 2025-11-22 09:09:48.301 253665 INFO nova.compute.manager [None req-2bcdfe52-6137-445b-b8b1-ae7bd5d47d83 385bfa70f94e4e118f9fa6d567f40b2f 3259232df7614dd094a8c6c8c274c0fe - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Retrieving diagnostics
Nov 22 04:09:48 np0005532048 nova_compute[253661]: 2025-11-22 09:09:48.469 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquiring lock "d6aea4a7-7722-4565-8c76-6d257dcc5362" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:09:48 np0005532048 nova_compute[253661]: 2025-11-22 09:09:48.471 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "d6aea4a7-7722-4565-8c76-6d257dcc5362" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:09:48 np0005532048 nova_compute[253661]: 2025-11-22 09:09:48.471 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquiring lock "d6aea4a7-7722-4565-8c76-6d257dcc5362-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:09:48 np0005532048 nova_compute[253661]: 2025-11-22 09:09:48.471 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "d6aea4a7-7722-4565-8c76-6d257dcc5362-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:09:48 np0005532048 nova_compute[253661]: 2025-11-22 09:09:48.472 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "d6aea4a7-7722-4565-8c76-6d257dcc5362-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:09:48 np0005532048 nova_compute[253661]: 2025-11-22 09:09:48.473 253665 INFO nova.compute.manager [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Terminating instance
Nov 22 04:09:48 np0005532048 nova_compute[253661]: 2025-11-22 09:09:48.474 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquiring lock "refresh_cache-d6aea4a7-7722-4565-8c76-6d257dcc5362" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:09:48 np0005532048 nova_compute[253661]: 2025-11-22 09:09:48.474 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquired lock "refresh_cache-d6aea4a7-7722-4565-8c76-6d257dcc5362" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:09:48 np0005532048 nova_compute[253661]: 2025-11-22 09:09:48.474 253665 DEBUG nova.network.neutron [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:09:48 np0005532048 nova_compute[253661]: 2025-11-22 09:09:48.867 253665 DEBUG nova.network.neutron [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:09:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 214 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.2 MiB/s wr, 243 op/s
Nov 22 04:09:49 np0005532048 nova_compute[253661]: 2025-11-22 09:09:49.235 253665 DEBUG nova.network.neutron [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:09:49 np0005532048 nova_compute[253661]: 2025-11-22 09:09:49.248 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Releasing lock "refresh_cache-d6aea4a7-7722-4565-8c76-6d257dcc5362" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:09:49 np0005532048 nova_compute[253661]: 2025-11-22 09:09:49.250 253665 DEBUG nova.compute.manager [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:09:49 np0005532048 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 22 04:09:49 np0005532048 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Consumed 3.423s CPU time.
Nov 22 04:09:49 np0005532048 systemd-machined[215941]: Machine qemu-18-instance-00000010 terminated.
Nov 22 04:09:49 np0005532048 nova_compute[253661]: 2025-11-22 09:09:49.470 253665 INFO nova.virt.libvirt.driver [-] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Instance destroyed successfully.
Nov 22 04:09:49 np0005532048 nova_compute[253661]: 2025-11-22 09:09:49.471 253665 DEBUG nova.objects.instance [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lazy-loading 'resources' on Instance uuid d6aea4a7-7722-4565-8c76-6d257dcc5362 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:09:49 np0005532048 nova_compute[253661]: 2025-11-22 09:09:49.476 253665 DEBUG nova.compute.manager [req-fc747d0b-f0bf-4873-9610-fbe289945ac1 req-81cc0aec-92f3-4587-a34b-d7ce4b46332f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-changed-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:09:49 np0005532048 nova_compute[253661]: 2025-11-22 09:09:49.476 253665 DEBUG nova.compute.manager [req-fc747d0b-f0bf-4873-9610-fbe289945ac1 req-81cc0aec-92f3-4587-a34b-d7ce4b46332f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Refreshing instance network info cache due to event network-changed-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:09:49 np0005532048 nova_compute[253661]: 2025-11-22 09:09:49.477 253665 DEBUG oslo_concurrency.lockutils [req-fc747d0b-f0bf-4873-9610-fbe289945ac1 req-81cc0aec-92f3-4587-a34b-d7ce4b46332f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:09:49 np0005532048 nova_compute[253661]: 2025-11-22 09:09:49.477 253665 DEBUG oslo_concurrency.lockutils [req-fc747d0b-f0bf-4873-9610-fbe289945ac1 req-81cc0aec-92f3-4587-a34b-d7ce4b46332f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:09:49 np0005532048 nova_compute[253661]: 2025-11-22 09:09:49.477 253665 DEBUG nova.network.neutron [req-fc747d0b-f0bf-4873-9610-fbe289945ac1 req-81cc0aec-92f3-4587-a34b-d7ce4b46332f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Refreshing network info cache for port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:09:49 np0005532048 nova_compute[253661]: 2025-11-22 09:09:49.538 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Nov 22 04:09:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Nov 22 04:09:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Nov 22 04:09:50 np0005532048 nova_compute[253661]: 2025-11-22 09:09:50.162 253665 INFO nova.virt.libvirt.driver [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Deleting instance files /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362_del
Nov 22 04:09:50 np0005532048 nova_compute[253661]: 2025-11-22 09:09:50.163 253665 INFO nova.virt.libvirt.driver [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Deletion of /var/lib/nova/instances/d6aea4a7-7722-4565-8c76-6d257dcc5362_del complete
Nov 22 04:09:50 np0005532048 nova_compute[253661]: 2025-11-22 09:09:50.297 253665 DEBUG oslo_concurrency.lockutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:09:50 np0005532048 nova_compute[253661]: 2025-11-22 09:09:50.298 253665 DEBUG oslo_concurrency.lockutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:09:50 np0005532048 nova_compute[253661]: 2025-11-22 09:09:50.298 253665 INFO nova.compute.manager [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Rebooting instance
Nov 22 04:09:50 np0005532048 nova_compute[253661]: 2025-11-22 09:09:50.307 253665 DEBUG oslo_concurrency.lockutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:09:50 np0005532048 nova_compute[253661]: 2025-11-22 09:09:50.332 253665 INFO nova.compute.manager [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Took 1.08 seconds to destroy the instance on the hypervisor.
Nov 22 04:09:50 np0005532048 nova_compute[253661]: 2025-11-22 09:09:50.332 253665 DEBUG oslo.service.loopingcall [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:09:50 np0005532048 nova_compute[253661]: 2025-11-22 09:09:50.332 253665 DEBUG nova.compute.manager [-] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:09:50 np0005532048 nova_compute[253661]: 2025-11-22 09:09:50.333 253665 DEBUG nova.network.neutron [-] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:09:50 np0005532048 nova_compute[253661]: 2025-11-22 09:09:50.521 253665 DEBUG nova.network.neutron [-] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:09:50 np0005532048 nova_compute[253661]: 2025-11-22 09:09:50.535 253665 DEBUG nova.network.neutron [-] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:09:50 np0005532048 nova_compute[253661]: 2025-11-22 09:09:50.550 253665 INFO nova.compute.manager [-] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Took 0.22 seconds to deallocate network for instance.#033[00m
Nov 22 04:09:50 np0005532048 nova_compute[253661]: 2025-11-22 09:09:50.597 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:50 np0005532048 nova_compute[253661]: 2025-11-22 09:09:50.598 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:50 np0005532048 nova_compute[253661]: 2025-11-22 09:09:50.683 253665 DEBUG oslo_concurrency.processutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 214 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.2 MiB/s wr, 243 op/s
Nov 22 04:09:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:09:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1620928968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:09:51 np0005532048 nova_compute[253661]: 2025-11-22 09:09:51.162 253665 DEBUG oslo_concurrency.processutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:51 np0005532048 nova_compute[253661]: 2025-11-22 09:09:51.169 253665 DEBUG nova.compute.provider_tree [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:09:51 np0005532048 nova_compute[253661]: 2025-11-22 09:09:51.184 253665 DEBUG nova.scheduler.client.report [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:09:51 np0005532048 nova_compute[253661]: 2025-11-22 09:09:51.207 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:51 np0005532048 nova_compute[253661]: 2025-11-22 09:09:51.235 253665 INFO nova.scheduler.client.report [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Deleted allocations for instance d6aea4a7-7722-4565-8c76-6d257dcc5362#033[00m
Nov 22 04:09:51 np0005532048 nova_compute[253661]: 2025-11-22 09:09:51.293 253665 DEBUG oslo_concurrency.lockutils [None req-34d0bede-94ed-48c5-9ce6-ab6266ab18b0 79b4e753165d476ea489590d62266a4d 573fbf80a5d94f8f841634d42a3bd35a - - default default] Lock "d6aea4a7-7722-4565-8c76-6d257dcc5362" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:51 np0005532048 nova_compute[253661]: 2025-11-22 09:09:51.372 253665 DEBUG nova.network.neutron [req-fc747d0b-f0bf-4873-9610-fbe289945ac1 req-81cc0aec-92f3-4587-a34b-d7ce4b46332f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updated VIF entry in instance network info cache for port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:09:51 np0005532048 nova_compute[253661]: 2025-11-22 09:09:51.372 253665 DEBUG nova.network.neutron [req-fc747d0b-f0bf-4873-9610-fbe289945ac1 req-81cc0aec-92f3-4587-a34b-d7ce4b46332f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updating instance_info_cache with network_info: [{"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:09:51 np0005532048 nova_compute[253661]: 2025-11-22 09:09:51.389 253665 DEBUG oslo_concurrency.lockutils [req-fc747d0b-f0bf-4873-9610-fbe289945ac1 req-81cc0aec-92f3-4587-a34b-d7ce4b46332f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:09:51 np0005532048 nova_compute[253661]: 2025-11-22 09:09:51.390 253665 DEBUG oslo_concurrency.lockutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquired lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:09:51 np0005532048 nova_compute[253661]: 2025-11-22 09:09:51.390 253665 DEBUG nova.network.neutron [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:09:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:09:52
Nov 22 04:09:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:09:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:09:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'vms', 'default.rgw.control', 'volumes', '.mgr']
Nov 22 04:09:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:09:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:09:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:09:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:09:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:09:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:09:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:09:53 np0005532048 nova_compute[253661]: 2025-11-22 09:09:53.036 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 195 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.7 MiB/s wr, 238 op/s
Nov 22 04:09:53 np0005532048 nova_compute[253661]: 2025-11-22 09:09:53.820 253665 DEBUG nova.network.neutron [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updating instance_info_cache with network_info: [{"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:09:53 np0005532048 nova_compute[253661]: 2025-11-22 09:09:53.836 253665 DEBUG oslo_concurrency.lockutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Releasing lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:09:53 np0005532048 nova_compute[253661]: 2025-11-22 09:09:53.837 253665 DEBUG nova.compute.manager [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:09:53 np0005532048 kernel: tap2bc1ef13-ab (unregistering): left promiscuous mode
Nov 22 04:09:53 np0005532048 NetworkManager[48920]: <info>  [1763802593.9692] device (tap2bc1ef13-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:09:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:53Z|00053|binding|INFO|Releasing lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 from this chassis (sb_readonly=0)
Nov 22 04:09:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:53Z|00054|binding|INFO|Setting lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 down in Southbound
Nov 22 04:09:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:53Z|00055|binding|INFO|Removing iface tap2bc1ef13-ab ovn-installed in OVS
Nov 22 04:09:53 np0005532048 nova_compute[253661]: 2025-11-22 09:09:53.978 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:53 np0005532048 nova_compute[253661]: 2025-11-22 09:09:53.981 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:53.986 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:74:48 10.100.0.8'], port_security=['fa:16:3e:2b:74:48 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c5f708d0-4110-417f-8353-dc61992d22dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5879249ab50a40ec9553bc923bdd1042', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5ecca170-8cb5-478c-9208-cfe27a5002c7 90f543f2-0e15-4746-9035-ec29edc5cf1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0de8bc98-4153-4ec7-ae4b-7da28376c78a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:09:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:53.988 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 in datapath bce72c95-f29f-458a-9b0e-7e700aa1deb4 unbound from our chassis#033[00m
Nov 22 04:09:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:53.989 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bce72c95-f29f-458a-9b0e-7e700aa1deb4#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:53.998 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.014 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[492235bb-4513-4aa3-a661-5bcb50ab447e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:54 np0005532048 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 22 04:09:54 np0005532048 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Consumed 8.863s CPU time.
Nov 22 04:09:54 np0005532048 systemd-machined[215941]: Machine qemu-17-instance-0000000f terminated.
Nov 22 04:09:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.052 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[349cb55d-928d-449d-be8a-5daf1520c57a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.057 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1a963d17-b957-45f9-9524-96c81f13b30e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.084 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fa049ca8-bfa9-41ee-a36a-988466fa508e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.111 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a97cea0f-dc8b-4c1e-b226-36f14dab8b88]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbce72c95-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ca:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540279, 'reachable_time': 16826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281032, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.145 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3979ff5f-1a79-47e7-833c-470818b0a611]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbce72c95-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540297, 'tstamp': 540297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281033, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbce72c95-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540301, 'tstamp': 540301}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281033, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.147 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbce72c95-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.149 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.156 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.157 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbce72c95-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.157 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:09:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.158 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbce72c95-f0, col_values=(('external_ids', {'iface-id': '9b713871-83a7-42c2-9c01-d716fc099936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:54.159 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.183 253665 INFO nova.virt.libvirt.driver [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Instance destroyed successfully.#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.184 253665 DEBUG nova.objects.instance [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'resources' on Instance uuid c5f708d0-4110-417f-8353-dc61992d22dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.199 253665 DEBUG nova.virt.libvirt.vif [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:09:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-782283387',display_name='tempest-SecurityGroupsTestJSON-server-782283387',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-782283387',id=15,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:09:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-r4cim3xq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityGroupsTestJSON-342579724-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:09:53Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=c5f708d0-4110-417f-8353-dc61992d22dc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.200 253665 DEBUG nova.network.os_vif_util [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.200 253665 DEBUG nova.network.os_vif_util [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.201 253665 DEBUG os_vif [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.203 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.203 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2bc1ef13-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.205 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.208 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.210 253665 INFO os_vif [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab')#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.217 253665 DEBUG nova.virt.libvirt.driver [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Start _get_guest_xml network_info=[{"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.221 253665 WARNING nova.virt.libvirt.driver [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.226 253665 DEBUG nova.virt.libvirt.host [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.226 253665 DEBUG nova.virt.libvirt.host [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.230 253665 DEBUG nova.virt.libvirt.host [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.230 253665 DEBUG nova.virt.libvirt.host [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.231 253665 DEBUG nova.virt.libvirt.driver [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.231 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.232 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.232 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.232 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.232 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.233 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.233 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.233 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.234 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.234 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.234 253665 DEBUG nova.virt.hardware [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.234 253665 DEBUG nova.objects.instance [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c5f708d0-4110-417f-8353-dc61992d22dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.252 253665 DEBUG oslo_concurrency.processutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:09:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:09:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:09:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:09:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.541 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:09:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1532047322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.713 253665 DEBUG oslo_concurrency.processutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.759 253665 DEBUG oslo_concurrency.processutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.791 253665 DEBUG nova.compute.manager [req-8b24e1ed-c611-4386-98b8-2ad32d5d6cc8 req-0295ec3b-a535-4d10-aaa2-f94fddeb1c80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-unplugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.792 253665 DEBUG oslo_concurrency.lockutils [req-8b24e1ed-c611-4386-98b8-2ad32d5d6cc8 req-0295ec3b-a535-4d10-aaa2-f94fddeb1c80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.792 253665 DEBUG oslo_concurrency.lockutils [req-8b24e1ed-c611-4386-98b8-2ad32d5d6cc8 req-0295ec3b-a535-4d10-aaa2-f94fddeb1c80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.793 253665 DEBUG oslo_concurrency.lockutils [req-8b24e1ed-c611-4386-98b8-2ad32d5d6cc8 req-0295ec3b-a535-4d10-aaa2-f94fddeb1c80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.793 253665 DEBUG nova.compute.manager [req-8b24e1ed-c611-4386-98b8-2ad32d5d6cc8 req-0295ec3b-a535-4d10-aaa2-f94fddeb1c80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] No waiting events found dispatching network-vif-unplugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:09:54 np0005532048 nova_compute[253661]: 2025-11-22 09:09:54.793 253665 WARNING nova.compute.manager [req-8b24e1ed-c611-4386-98b8-2ad32d5d6cc8 req-0295ec3b-a535-4d10-aaa2-f94fddeb1c80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received unexpected event network-vif-unplugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 22 04:09:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:09:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:09:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:09:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:09:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:09:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 167 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.2 MiB/s wr, 237 op/s
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.130 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "04781543-b5ed-482a-a30a-0730fbcd12a1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.130 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.146 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:09:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:09:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/681901186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.227 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.228 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.235 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.235 253665 INFO nova.compute.claims [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.242 253665 DEBUG oslo_concurrency.processutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.243 253665 DEBUG nova.virt.libvirt.vif [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:09:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-782283387',display_name='tempest-SecurityGroupsTestJSON-server-782283387',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-782283387',id=15,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:09:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-r4cim3xq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityGroupsTestJSON-342579724-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:09:53Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=c5f708d0-4110-417f-8353-dc61992d22dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.243 253665 DEBUG nova.network.os_vif_util [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.244 253665 DEBUG nova.network.os_vif_util [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.245 253665 DEBUG nova.objects.instance [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5f708d0-4110-417f-8353-dc61992d22dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.268 253665 DEBUG nova.virt.libvirt.driver [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  <uuid>c5f708d0-4110-417f-8353-dc61992d22dc</uuid>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  <name>instance-0000000f</name>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <nova:name>tempest-SecurityGroupsTestJSON-server-782283387</nova:name>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:09:54</nova:creationTime>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:09:55 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:        <nova:user uuid="ee24e4812c424984881862883987d750">tempest-SecurityGroupsTestJSON-342579724-project-member</nova:user>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:        <nova:project uuid="5879249ab50a40ec9553bc923bdd1042">tempest-SecurityGroupsTestJSON-342579724</nova:project>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:        <nova:port uuid="2bc1ef13-abf9-49ce-b3bb-41d737b9cd13">
Nov 22 04:09:55 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <entry name="serial">c5f708d0-4110-417f-8353-dc61992d22dc</entry>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <entry name="uuid">c5f708d0-4110-417f-8353-dc61992d22dc</entry>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/c5f708d0-4110-417f-8353-dc61992d22dc_disk">
Nov 22 04:09:55 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:09:55 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/c5f708d0-4110-417f-8353-dc61992d22dc_disk.config">
Nov 22 04:09:55 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:09:55 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:2b:74:48"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <target dev="tap2bc1ef13-ab"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc/console.log" append="off"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <input type="keyboard" bus="usb"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:09:55 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:09:55 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:09:55 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:09:55 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.271 253665 DEBUG nova.virt.libvirt.driver [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.271 253665 DEBUG nova.virt.libvirt.driver [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.272 253665 DEBUG nova.virt.libvirt.vif [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:09:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-782283387',display_name='tempest-SecurityGroupsTestJSON-server-782283387',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-782283387',id=15,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:09:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-r4cim3xq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityGroupsTestJSON-342579724-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:09:53Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=c5f708d0-4110-417f-8353-dc61992d22dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.273 253665 DEBUG nova.network.os_vif_util [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.274 253665 DEBUG nova.network.os_vif_util [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.274 253665 DEBUG os_vif [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.275 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.275 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.276 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.279 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.279 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2bc1ef13-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.280 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2bc1ef13-ab, col_values=(('external_ids', {'iface-id': '2bc1ef13-abf9-49ce-b3bb-41d737b9cd13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:74:48', 'vm-uuid': 'c5f708d0-4110-417f-8353-dc61992d22dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.281 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:55 np0005532048 NetworkManager[48920]: <info>  [1763802595.2829] manager: (tap2bc1ef13-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.284 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.289 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.290 253665 INFO os_vif [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab')#033[00m
Nov 22 04:09:55 np0005532048 virtqemud[254229]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 22 04:09:55 np0005532048 virtqemud[254229]: hostname: compute-0
Nov 22 04:09:55 np0005532048 virtqemud[254229]: End of file while reading data: Input/output error
Nov 22 04:09:55 np0005532048 virtqemud[254229]: End of file while reading data: Input/output error
Nov 22 04:09:55 np0005532048 systemd-udevd[281025]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:09:55 np0005532048 kernel: tap2bc1ef13-ab: entered promiscuous mode
Nov 22 04:09:55 np0005532048 NetworkManager[48920]: <info>  [1763802595.3800] manager: (tap2bc1ef13-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.383 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:55Z|00056|binding|INFO|Claiming lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for this chassis.
Nov 22 04:09:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:55Z|00057|binding|INFO|2bc1ef13-abf9-49ce-b3bb-41d737b9cd13: Claiming fa:16:3e:2b:74:48 10.100.0.8
Nov 22 04:09:55 np0005532048 NetworkManager[48920]: <info>  [1763802595.3926] device (tap2bc1ef13-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:09:55 np0005532048 NetworkManager[48920]: <info>  [1763802595.3937] device (tap2bc1ef13-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:09:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.394 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:74:48 10.100.0.8'], port_security=['fa:16:3e:2b:74:48 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c5f708d0-4110-417f-8353-dc61992d22dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5879249ab50a40ec9553bc923bdd1042', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5ecca170-8cb5-478c-9208-cfe27a5002c7 90f543f2-0e15-4746-9035-ec29edc5cf1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0de8bc98-4153-4ec7-ae4b-7da28376c78a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:09:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.395 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 in datapath bce72c95-f29f-458a-9b0e-7e700aa1deb4 bound to our chassis#033[00m
Nov 22 04:09:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.397 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bce72c95-f29f-458a-9b0e-7e700aa1deb4#033[00m
Nov 22 04:09:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:55Z|00058|binding|INFO|Setting lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 ovn-installed in OVS
Nov 22 04:09:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:09:55Z|00059|binding|INFO|Setting lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 up in Southbound
Nov 22 04:09:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.414 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e206640f-4a8d-47df-8d63-0a024745d0e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.417 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:55 np0005532048 systemd-machined[215941]: New machine qemu-19-instance-0000000f.
Nov 22 04:09:55 np0005532048 systemd[1]: Started Virtual Machine qemu-19-instance-0000000f.
Nov 22 04:09:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.459 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d30401d6-b430-4938-a917-a8a1741b2737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.466 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fcbed0b8-b2e9-4496-8c5b-d28031a515ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.494 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dcfc8b43-41eb-4e67-9bff-4a90417ed1af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.520 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc4e7ca-9569-49f9-87ee-bf158a2580b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbce72c95-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ca:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540279, 'reachable_time': 16826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281134, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.542 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a95a47b2-1076-43fe-937f-c6713e5fffac]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbce72c95-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540297, 'tstamp': 540297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281154, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbce72c95-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540301, 'tstamp': 540301}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281154, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:09:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.545 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbce72c95-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.547 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.548 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbce72c95-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.549 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:09:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.549 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbce72c95-f0, col_values=(('external_ids', {'iface-id': '9b713871-83a7-42c2-9c01-d716fc099936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:09:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:09:55.549 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:09:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:09:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2824317784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.857 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.864 253665 DEBUG nova.compute.provider_tree [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.874 253665 DEBUG nova.scheduler.client.report [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.970 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:55 np0005532048 nova_compute[253661]: 2025-11-22 09:09:55.972 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.020 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.020 253665 DEBUG nova.network.neutron [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.077 253665 INFO nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.168 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.410 253665 DEBUG nova.policy [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '526789957ca1421b94691426dc7bccb5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ef6e238d438c49959eb8bee112836e52', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.425 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.427 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.427 253665 INFO nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Creating image(s)#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.449 253665 DEBUG nova.storage.rbd_utils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image 04781543-b5ed-482a-a30a-0730fbcd12a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.471 253665 DEBUG nova.storage.rbd_utils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image 04781543-b5ed-482a-a30a-0730fbcd12a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.492 253665 DEBUG nova.storage.rbd_utils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image 04781543-b5ed-482a-a30a-0730fbcd12a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.496 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.560 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.561 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.562 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.563 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.586 253665 DEBUG nova.storage.rbd_utils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image 04781543-b5ed-482a-a30a-0730fbcd12a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.591 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 04781543-b5ed-482a-a30a-0730fbcd12a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.794 253665 DEBUG nova.compute.manager [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.795 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for c5f708d0-4110-417f-8353-dc61992d22dc due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.796 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802596.7885516, c5f708d0-4110-417f-8353-dc61992d22dc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.799 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.811 253665 INFO nova.virt.libvirt.driver [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Instance rebooted successfully.#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.812 253665 DEBUG nova.compute.manager [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.824 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.826 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.847 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.847 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802596.7887912, c5f708d0-4110-417f-8353-dc61992d22dc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.848 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] VM Started (Lifecycle Event)#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.865 253665 DEBUG oslo_concurrency.lockutils [None req-0776a296-b259-4107-9bc6-bfb97fcddeee ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.867 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.871 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:09:56 np0005532048 nova_compute[253661]: 2025-11-22 09:09:56.939 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 04781543-b5ed-482a-a30a-0730fbcd12a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.001 253665 DEBUG nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.002 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.002 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.003 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.003 253665 DEBUG nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] No waiting events found dispatching network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.004 253665 WARNING nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received unexpected event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.004 253665 DEBUG nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.004 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.005 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.005 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.005 253665 DEBUG nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] No waiting events found dispatching network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.005 253665 WARNING nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received unexpected event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.006 253665 DEBUG nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.006 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.006 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.007 253665 DEBUG oslo_concurrency.lockutils [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.007 253665 DEBUG nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] No waiting events found dispatching network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.008 253665 WARNING nova.compute.manager [req-31f9f397-b025-4a2d-ae05-f41a804496fc req-cc934428-bc3b-459e-8793-75b8386b9e97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received unexpected event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.015 253665 DEBUG nova.storage.rbd_utils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] resizing rbd image 04781543-b5ed-482a-a30a-0730fbcd12a1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:09:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.6 MiB/s wr, 213 op/s
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.182 253665 DEBUG nova.objects.instance [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lazy-loading 'migration_context' on Instance uuid 04781543-b5ed-482a-a30a-0730fbcd12a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.194 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.195 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Ensure instance console log exists: /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.195 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.196 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.196 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:09:57 np0005532048 nova_compute[253661]: 2025-11-22 09:09:57.913 253665 DEBUG nova.network.neutron [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Successfully created port: e7682709-05fd-4d27-bd49-1a84e1cf6bd3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:09:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 191 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 135 op/s
Nov 22 04:09:59 np0005532048 nova_compute[253661]: 2025-11-22 09:09:59.562 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:09:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:59 np0005532048 nova_compute[253661]: 2025-11-22 09:09:59.647 253665 DEBUG nova.network.neutron [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Successfully updated port: e7682709-05fd-4d27-bd49-1a84e1cf6bd3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:09:59 np0005532048 nova_compute[253661]: 2025-11-22 09:09:59.661 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:09:59 np0005532048 nova_compute[253661]: 2025-11-22 09:09:59.662 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquired lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:09:59 np0005532048 nova_compute[253661]: 2025-11-22 09:09:59.662 253665 DEBUG nova.network.neutron [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:09:59 np0005532048 nova_compute[253661]: 2025-11-22 09:09:59.865 253665 DEBUG nova.network.neutron [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:10:00 np0005532048 nova_compute[253661]: 2025-11-22 09:10:00.283 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:00 np0005532048 nova_compute[253661]: 2025-11-22 09:10:00.313 253665 DEBUG nova.compute.manager [req-d1e234ca-d422-4084-8e5a-9ebfa7075fab req-02010600-abbb-4bf1-8210-285db5b9464c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:00 np0005532048 nova_compute[253661]: 2025-11-22 09:10:00.314 253665 DEBUG nova.compute.manager [req-d1e234ca-d422-4084-8e5a-9ebfa7075fab req-02010600-abbb-4bf1-8210-285db5b9464c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing instance network info cache due to event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:10:00 np0005532048 nova_compute[253661]: 2025-11-22 09:10:00.314 253665 DEBUG oslo_concurrency.lockutils [req-d1e234ca-d422-4084-8e5a-9ebfa7075fab req-02010600-abbb-4bf1-8210-285db5b9464c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.058 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.059 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.075 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.080 253665 DEBUG nova.network.neutron [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updating instance_info_cache with network_info: [{"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 191 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 971 KiB/s wr, 118 op/s
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.115 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Releasing lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.116 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Instance network_info: |[{"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.117 253665 DEBUG oslo_concurrency.lockutils [req-d1e234ca-d422-4084-8e5a-9ebfa7075fab req-02010600-abbb-4bf1-8210-285db5b9464c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.118 253665 DEBUG nova.network.neutron [req-d1e234ca-d422-4084-8e5a-9ebfa7075fab req-02010600-abbb-4bf1-8210-285db5b9464c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.124 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Start _get_guest_xml network_info=[{"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.154 253665 WARNING nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.161 253665 DEBUG nova.virt.libvirt.host [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.162 253665 DEBUG nova.virt.libvirt.host [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.165 253665 DEBUG nova.virt.libvirt.host [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.166 253665 DEBUG nova.virt.libvirt.host [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.166 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.166 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.167 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.167 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.167 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.168 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.168 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.168 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.168 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.169 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.169 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.169 253665 DEBUG nova.virt.hardware [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.172 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.205 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.207 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.218 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.219 253665 INFO nova.compute.claims [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.413 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1718560103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.679 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.703 253665 DEBUG nova.storage.rbd_utils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image 04781543-b5ed-482a-a30a-0730fbcd12a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.708 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:10:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1073061718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.926 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.933 253665 DEBUG nova.compute.provider_tree [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.946 253665 DEBUG nova.scheduler.client.report [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.967 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:01 np0005532048 nova_compute[253661]: 2025-11-22 09:10:01.968 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.019 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.019 253665 DEBUG nova.network.neutron [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.042 253665 INFO nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.064 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:10:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2849461507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.156 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.159 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.160 253665 INFO nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating image(s)#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.189 253665 DEBUG nova.storage.rbd_utils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.217 253665 DEBUG nova.storage.rbd_utils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.240 253665 DEBUG nova.storage.rbd_utils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.246 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.271 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.274 253665 DEBUG nova.virt.libvirt.vif [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:09:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-785600448',display_name='tempest-FloatingIPsAssociationTestJSON-server-785600448',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-785600448',id=17,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef6e238d438c49959eb8bee112836e52',ramdisk_id='',reservation_id='r-912pf9hs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1882113079',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1882113079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:09:56Z,user_data=None,user_id='526789957ca1421b94691426dc7bccb5',uuid=04781543-b5ed-482a-a30a-0730fbcd12a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.275 253665 DEBUG nova.network.os_vif_util [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converting VIF {"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.276 253665 DEBUG nova.network.os_vif_util [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:ea:eb,bridge_name='br-int',has_traffic_filtering=True,id=e7682709-05fd-4d27-bd49-1a84e1cf6bd3,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7682709-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.278 253665 DEBUG nova.objects.instance [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lazy-loading 'pci_devices' on Instance uuid 04781543-b5ed-482a-a30a-0730fbcd12a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.294 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  <uuid>04781543-b5ed-482a-a30a-0730fbcd12a1</uuid>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  <name>instance-00000011</name>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-785600448</nova:name>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:10:01</nova:creationTime>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:10:02 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:        <nova:user uuid="526789957ca1421b94691426dc7bccb5">tempest-FloatingIPsAssociationTestJSON-1882113079-project-member</nova:user>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:        <nova:project uuid="ef6e238d438c49959eb8bee112836e52">tempest-FloatingIPsAssociationTestJSON-1882113079</nova:project>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:        <nova:port uuid="e7682709-05fd-4d27-bd49-1a84e1cf6bd3">
Nov 22 04:10:02 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <entry name="serial">04781543-b5ed-482a-a30a-0730fbcd12a1</entry>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <entry name="uuid">04781543-b5ed-482a-a30a-0730fbcd12a1</entry>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/04781543-b5ed-482a-a30a-0730fbcd12a1_disk">
Nov 22 04:10:02 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:10:02 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/04781543-b5ed-482a-a30a-0730fbcd12a1_disk.config">
Nov 22 04:10:02 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:10:02 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:5e:ea:eb"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <target dev="tape7682709-05"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1/console.log" append="off"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:10:02 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:10:02 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:10:02 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:10:02 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.296 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Preparing to wait for external event network-vif-plugged-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.296 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.296 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.297 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.297 253665 DEBUG nova.virt.libvirt.vif [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:09:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-785600448',display_name='tempest-FloatingIPsAssociationTestJSON-server-785600448',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-785600448',id=17,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef6e238d438c49959eb8bee112836e52',ramdisk_id='',reservation_id='r-912pf9hs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1882113079',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1882113079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:09:56Z,user_data=None,user_id='526789957ca1421b94691426dc7bccb5',uuid=04781543-b5ed-482a-a30a-0730fbcd12a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.298 253665 DEBUG nova.network.os_vif_util [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converting VIF {"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.298 253665 DEBUG nova.network.os_vif_util [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:ea:eb,bridge_name='br-int',has_traffic_filtering=True,id=e7682709-05fd-4d27-bd49-1a84e1cf6bd3,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7682709-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.299 253665 DEBUG os_vif [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:ea:eb,bridge_name='br-int',has_traffic_filtering=True,id=e7682709-05fd-4d27-bd49-1a84e1cf6bd3,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7682709-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.299 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.300 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.301 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.303 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.303 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7682709-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.304 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape7682709-05, col_values=(('external_ids', {'iface-id': 'e7682709-05fd-4d27-bd49-1a84e1cf6bd3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:ea:eb', 'vm-uuid': '04781543-b5ed-482a-a30a-0730fbcd12a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.306 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:02 np0005532048 NetworkManager[48920]: <info>  [1763802602.3070] manager: (tape7682709-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.309 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.311 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.311 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.312 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.312 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.337 253665 DEBUG nova.storage.rbd_utils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.341 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.368 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.371 253665 INFO os_vif [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:ea:eb,bridge_name='br-int',has_traffic_filtering=True,id=e7682709-05fd-4d27-bd49-1a84e1cf6bd3,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7682709-05')#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.430 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.430 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.431 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] No VIF found with MAC fa:16:3e:5e:ea:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.431 253665 INFO nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Using config drive#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.465 253665 DEBUG nova.storage.rbd_utils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image 04781543-b5ed-482a-a30a-0730fbcd12a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.536 253665 DEBUG nova.policy [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '05cafdbce8334f9380b4dbd1d21f7d58', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd78b26f20d674ae6a213d727050a50d1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012833240745262747 of space, bias 1.0, pg target 0.3849972223578824 quantized to 32 (current 32)
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.678 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.747 253665 DEBUG nova.storage.rbd_utils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] resizing rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.858 253665 DEBUG nova.objects.instance [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.875 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.875 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Ensure instance console log exists: /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.876 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.876 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:02 np0005532048 nova_compute[253661]: 2025-11-22 09:10:02.877 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 213 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.158 253665 DEBUG nova.compute.manager [req-0c04e8db-ddfb-490f-8712-e9476b9ac449 req-7d725c92-bf94-43b4-bebe-d74faf33e675 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-changed-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.158 253665 DEBUG nova.compute.manager [req-0c04e8db-ddfb-490f-8712-e9476b9ac449 req-7d725c92-bf94-43b4-bebe-d74faf33e675 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Refreshing instance network info cache due to event network-changed-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.159 253665 DEBUG oslo_concurrency.lockutils [req-0c04e8db-ddfb-490f-8712-e9476b9ac449 req-7d725c92-bf94-43b4-bebe-d74faf33e675 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.159 253665 DEBUG oslo_concurrency.lockutils [req-0c04e8db-ddfb-490f-8712-e9476b9ac449 req-7d725c92-bf94-43b4-bebe-d74faf33e675 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.159 253665 DEBUG nova.network.neutron [req-0c04e8db-ddfb-490f-8712-e9476b9ac449 req-7d725c92-bf94-43b4-bebe-d74faf33e675 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Refreshing network info cache for port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.335 253665 INFO nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Creating config drive at /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1/disk.config#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.341 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeuilv0d8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.416 253665 DEBUG nova.network.neutron [req-d1e234ca-d422-4084-8e5a-9ebfa7075fab req-02010600-abbb-4bf1-8210-285db5b9464c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updated VIF entry in instance network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.417 253665 DEBUG nova.network.neutron [req-d1e234ca-d422-4084-8e5a-9ebfa7075fab req-02010600-abbb-4bf1-8210-285db5b9464c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updating instance_info_cache with network_info: [{"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.435 253665 DEBUG oslo_concurrency.lockutils [req-d1e234ca-d422-4084-8e5a-9ebfa7075fab req-02010600-abbb-4bf1-8210-285db5b9464c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.478 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeuilv0d8" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.521 253665 DEBUG nova.storage.rbd_utils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image 04781543-b5ed-482a-a30a-0730fbcd12a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.526 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1/disk.config 04781543-b5ed-482a-a30a-0730fbcd12a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.567 253665 DEBUG nova.network.neutron [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Successfully created port: 716b716d-2ee2-44e7-9850-c10854634f77 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.593 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.594 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.594 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.594 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.594 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.596 253665 INFO nova.compute.manager [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Terminating instance#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.597 253665 DEBUG nova.compute.manager [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:10:03 np0005532048 kernel: tap2bc1ef13-ab (unregistering): left promiscuous mode
Nov 22 04:10:03 np0005532048 NetworkManager[48920]: <info>  [1763802603.6497] device (tap2bc1ef13-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.666 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:03Z|00060|binding|INFO|Releasing lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 from this chassis (sb_readonly=0)
Nov 22 04:10:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:03Z|00061|binding|INFO|Setting lport 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 down in Southbound
Nov 22 04:10:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:03Z|00062|binding|INFO|Removing iface tap2bc1ef13-ab ovn-installed in OVS
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.674 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.695 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.710 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:74:48 10.100.0.8'], port_security=['fa:16:3e:2b:74:48 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c5f708d0-4110-417f-8353-dc61992d22dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5879249ab50a40ec9553bc923bdd1042', 'neutron:revision_number': '8', 'neutron:security_group_ids': '5ecca170-8cb5-478c-9208-cfe27a5002c7 8db4c515-712a-46df-b14c-6a11222f6f3f 90f543f2-0e15-4746-9035-ec29edc5cf1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0de8bc98-4153-4ec7-ae4b-7da28376c78a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.713 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 in datapath bce72c95-f29f-458a-9b0e-7e700aa1deb4 unbound from our chassis#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.716 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bce72c95-f29f-458a-9b0e-7e700aa1deb4#033[00m
Nov 22 04:10:03 np0005532048 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 22 04:10:03 np0005532048 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000000f.scope: Consumed 7.727s CPU time.
Nov 22 04:10:03 np0005532048 systemd-machined[215941]: Machine qemu-19-instance-0000000f terminated.
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.738 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4cefbc04-5ef5-4aca-95ab-ac6c6412f8e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.745 253665 DEBUG oslo_concurrency.processutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1/disk.config 04781543-b5ed-482a-a30a-0730fbcd12a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.746 253665 INFO nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Deleting local config drive /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1/disk.config because it was imported into RBD.#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.784 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae4d304-12e5-4b11-9b62-1e961f4f944c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.789 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[00ab4103-0240-4334-b309-8c829878b700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:03 np0005532048 kernel: tape7682709-05: entered promiscuous mode
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.829 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[37218518-613b-46da-b79d-75ff3d963f96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:03 np0005532048 NetworkManager[48920]: <info>  [1763802603.8330] manager: (tape7682709-05): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.834 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:03Z|00063|binding|INFO|Claiming lport e7682709-05fd-4d27-bd49-1a84e1cf6bd3 for this chassis.
Nov 22 04:10:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:03Z|00064|binding|INFO|e7682709-05fd-4d27-bd49-1a84e1cf6bd3: Claiming fa:16:3e:5e:ea:eb 10.100.0.3
Nov 22 04:10:03 np0005532048 systemd-udevd[281682]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:10:03 np0005532048 NetworkManager[48920]: <info>  [1763802603.8528] device (tape7682709-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:10:03 np0005532048 NetworkManager[48920]: <info>  [1763802603.8544] device (tape7682709-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.856 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:ea:eb 10.100.0.3'], port_security=['fa:16:3e:5e:ea:eb 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '04781543-b5ed-482a-a30a-0730fbcd12a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef6e238d438c49959eb8bee112836e52', 'neutron:revision_number': '2', 'neutron:security_group_ids': '75ab40c0-07f4-4bb0-a066-aed1106fa100', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72afa370-b1fd-466e-b3d9-08000d4400d0, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=e7682709-05fd-4d27-bd49-1a84e1cf6bd3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.860 253665 INFO nova.virt.libvirt.driver [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Instance destroyed successfully.#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.860 253665 DEBUG nova.objects.instance [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'resources' on Instance uuid c5f708d0-4110-417f-8353-dc61992d22dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.864 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6da945-75a3-4497-9a80-d9d1abfe229d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbce72c95-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:ca:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540279, 'reachable_time': 16826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281711, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.877 253665 DEBUG nova.virt.libvirt.vif [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:09:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-782283387',display_name='tempest-SecurityGroupsTestJSON-server-782283387',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-782283387',id=15,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:09:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-r4cim3xq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityGroupsTestJSON-342579724-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:09:56Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=c5f708d0-4110-417f-8353-dc61992d22dc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.878 253665 DEBUG nova.network.os_vif_util [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.879 253665 DEBUG nova.network.os_vif_util [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:03 np0005532048 systemd-machined[215941]: New machine qemu-20-instance-00000011.
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.879 253665 DEBUG os_vif [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.882 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2bc1ef13-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.893 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d8287c-fc8b-46fd-b993-e77d3cf2cf5f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbce72c95-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540297, 'tstamp': 540297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281719, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbce72c95-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540301, 'tstamp': 540301}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281719, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.895 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbce72c95-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.889 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.890 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.918 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.921 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:10:03 np0005532048 systemd[1]: Started Virtual Machine qemu-20-instance-00000011.
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.928 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbce72c95-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:03Z|00065|binding|INFO|Setting lport e7682709-05fd-4d27-bd49-1a84e1cf6bd3 ovn-installed in OVS
Nov 22 04:10:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:03Z|00066|binding|INFO|Setting lport e7682709-05fd-4d27-bd49-1a84e1cf6bd3 up in Southbound
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.928 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.929 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbce72c95-f0, col_values=(('external_ids', {'iface-id': '9b713871-83a7-42c2-9c01-d716fc099936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.929 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.931 162862 INFO neutron.agent.ovn.metadata.agent [-] Port e7682709-05fd-4d27-bd49-1a84e1cf6bd3 in datapath e64548ac-5898-4d23-b6f7-17a1ae54c608 bound to our chassis#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.932 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.932 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e64548ac-5898-4d23-b6f7-17a1ae54c608#033[00m
Nov 22 04:10:03 np0005532048 nova_compute[253661]: 2025-11-22 09:10:03.938 253665 INFO os_vif [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:74:48,bridge_name='br-int',has_traffic_filtering=True,id=2bc1ef13-abf9-49ce-b3bb-41d737b9cd13,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bc1ef13-ab')#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.948 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5234b553-e050-4701-ba87-c6eddb55ad13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.949 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape64548ac-51 in ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.952 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape64548ac-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.952 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8a6ab6-f2b4-4d02-859e-e9220ce138bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.954 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d8cbf875-6f46-4b58-9c87-dd0e23b111e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:03.969 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[470bae88-23dc-4418-aa31-302b54644da4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.000 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.000 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.002 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06727b9f-4e88-431e-8c02-e66774b499eb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.009 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.009 253665 INFO nova.compute.claims [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.044 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3ce953-82cd-4123-8b48-6ef0742528e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:04 np0005532048 systemd-udevd[281681]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.051 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73d30804-dac3-4694-8282-e9a7e493cc57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:04 np0005532048 NetworkManager[48920]: <info>  [1763802604.0532] manager: (tape64548ac-50): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.092 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ddec7b-e456-4cfd-bc31-3919a4a92ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.096 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e65b01f9-c13f-4f9b-a94e-2dcf7cedd5b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:04 np0005532048 NetworkManager[48920]: <info>  [1763802604.1266] device (tape64548ac-50): carrier: link connected
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.134 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0d72ec-7d45-44fb-8792-9f166a5a7c3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.160 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[95d15e12-ed08-4f97-8995-0580191d9e52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape64548ac-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545486, 'reachable_time': 19914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281783, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.183 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8fc1bd-6e1e-47b7-bd34-14c619293931]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:bc3b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545486, 'tstamp': 545486}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281788, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.209 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ad23ec3d-75b7-49ab-83eb-bc3cbce3533b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape64548ac-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545486, 'reachable_time': 19914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281804, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.234 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.249 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc1ef1f-f4e5-41f5-a973-8b900c34e13f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.293 253665 DEBUG nova.compute.manager [req-3ca8b42c-9e5b-4c82-b8f3-3fa3dcd54d15 req-bb754f80-34e4-4258-9b78-20df938fb809 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received event network-vif-plugged-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.294 253665 DEBUG oslo_concurrency.lockutils [req-3ca8b42c-9e5b-4c82-b8f3-3fa3dcd54d15 req-bb754f80-34e4-4258-9b78-20df938fb809 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.294 253665 DEBUG oslo_concurrency.lockutils [req-3ca8b42c-9e5b-4c82-b8f3-3fa3dcd54d15 req-bb754f80-34e4-4258-9b78-20df938fb809 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.295 253665 DEBUG oslo_concurrency.lockutils [req-3ca8b42c-9e5b-4c82-b8f3-3fa3dcd54d15 req-bb754f80-34e4-4258-9b78-20df938fb809 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.295 253665 DEBUG nova.compute.manager [req-3ca8b42c-9e5b-4c82-b8f3-3fa3dcd54d15 req-bb754f80-34e4-4258-9b78-20df938fb809 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Processing event network-vif-plugged-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.324 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.327 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802604.3268237, 04781543-b5ed-482a-a30a-0730fbcd12a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.327 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] VM Started (Lifecycle Event)#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.332 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.337 253665 INFO nova.virt.libvirt.driver [-] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Instance spawned successfully.#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.338 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.346 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a5452a-0efd-4fca-8e2c-723517fd3475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.348 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape64548ac-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.349 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.349 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape64548ac-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.349 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:04 np0005532048 kernel: tape64548ac-50: entered promiscuous mode
Nov 22 04:10:04 np0005532048 NetworkManager[48920]: <info>  [1763802604.3526] manager: (tape64548ac-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.354 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape64548ac-50, col_values=(('external_ids', {'iface-id': '791df5ce-fddc-4961-a1d0-6667026f8b13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:04Z|00067|binding|INFO|Releasing lport 791df5ce-fddc-4961-a1d0-6667026f8b13 from this chassis (sb_readonly=0)
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.355 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.364 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.368 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.368 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.368 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.369 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.369 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.369 253665 DEBUG nova.virt.libvirt.driver [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.377 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.378 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e64548ac-5898-4d23-b6f7-17a1ae54c608.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e64548ac-5898-4d23-b6f7-17a1ae54c608.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.379 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc76d56-22c3-46f7-b3bc-446fd6064f71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.380 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-e64548ac-5898-4d23-b6f7-17a1ae54c608
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/e64548ac-5898-4d23-b6f7-17a1ae54c608.pid.haproxy
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID e64548ac-5898-4d23-b6f7-17a1ae54c608
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:10:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:04.382 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'env', 'PROCESS_TAG=haproxy-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e64548ac-5898-4d23-b6f7-17a1ae54c608.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.583 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.585 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802589.4686115, d6aea4a7-7722-4565-8c76-6d257dcc5362 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.585 253665 INFO nova.compute.manager [-] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.586 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.586 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802604.3280363, 04781543-b5ed-482a-a30a-0730fbcd12a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.586 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.588 253665 INFO nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Took 8.16 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.588 253665 DEBUG nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.598 253665 INFO nova.virt.libvirt.driver [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Deleting instance files /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc_del#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.599 253665 INFO nova.virt.libvirt.driver [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Deletion of /var/lib/nova/instances/c5f708d0-4110-417f-8353-dc61992d22dc_del complete#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.607 253665 DEBUG nova.compute.manager [None req-f5c560f6-26b2-4794-8ab8-ba1271de37dc - - - - - -] [instance: d6aea4a7-7722-4565-8c76-6d257dcc5362] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.618 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.629 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802604.3335626, 04781543-b5ed-482a-a30a-0730fbcd12a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.630 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.655 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.662 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:10:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:10:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/352268410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.704 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.717 253665 INFO nova.compute.manager [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Took 9.51 seconds to build instance.#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.725 253665 INFO nova.compute.manager [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Took 1.13 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.725 253665 DEBUG oslo.service.loopingcall [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.726 253665 DEBUG nova.compute.manager [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.726 253665 DEBUG nova.network.neutron [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.729 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.731 253665 DEBUG oslo_concurrency.lockutils [None req-5963e3ca-0d41-4418-84f2-29a708f4abe0 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.735 253665 DEBUG nova.compute.provider_tree [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.750 253665 DEBUG nova.scheduler.client.report [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.771 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.772 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:10:04 np0005532048 podman[281864]: 2025-11-22 09:10:04.802244243 +0000 UTC m=+0.066149547 container create 977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.841 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.842 253665 DEBUG nova.network.neutron [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:10:04 np0005532048 systemd[1]: Started libpod-conmon-977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3.scope.
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.858 253665 INFO nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:10:04 np0005532048 podman[281864]: 2025-11-22 09:10:04.767577761 +0000 UTC m=+0.031483095 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.869 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:10:04 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:10:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fa4cb5a6efdb2b8320c6dc794a849ea1a90de83555fc6fea9133bc58a10cfaa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:04 np0005532048 podman[281864]: 2025-11-22 09:10:04.908796768 +0000 UTC m=+0.172702162 container init 977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 22 04:10:04 np0005532048 podman[281864]: 2025-11-22 09:10:04.917222583 +0000 UTC m=+0.181127927 container start 977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 04:10:04 np0005532048 neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608[281880]: [NOTICE]   (281884) : New worker (281886) forked
Nov 22 04:10:04 np0005532048 neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608[281880]: [NOTICE]   (281884) : Loading success.
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.954 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.956 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.956 253665 INFO nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Creating image(s)#033[00m
Nov 22 04:10:04 np0005532048 nova_compute[253661]: 2025-11-22 09:10:04.981 253665 DEBUG nova.storage.rbd_utils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.017 253665 DEBUG nova.storage.rbd_utils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.049 253665 DEBUG nova.storage.rbd_utils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.054 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.089 253665 DEBUG nova.network.neutron [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Successfully updated port: 716b716d-2ee2-44e7-9850-c10854634f77 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.104 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.105 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquired lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.105 253665 DEBUG nova.network.neutron [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:10:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 244 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 137 op/s
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.144 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.146 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.146 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.147 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.173 253665 DEBUG nova.storage.rbd_utils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.178 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.206 253665 DEBUG nova.network.neutron [req-0c04e8db-ddfb-490f-8712-e9476b9ac449 req-7d725c92-bf94-43b4-bebe-d74faf33e675 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updated VIF entry in instance network info cache for port 2bc1ef13-abf9-49ce-b3bb-41d737b9cd13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.207 253665 DEBUG nova.network.neutron [req-0c04e8db-ddfb-490f-8712-e9476b9ac449 req-7d725c92-bf94-43b4-bebe-d74faf33e675 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updating instance_info_cache with network_info: [{"id": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "address": "fa:16:3e:2b:74:48", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bc1ef13-ab", "ovs_interfaceid": "2bc1ef13-abf9-49ce-b3bb-41d737b9cd13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.226 253665 DEBUG oslo_concurrency.lockutils [req-0c04e8db-ddfb-490f-8712-e9476b9ac449 req-7d725c92-bf94-43b4-bebe-d74faf33e675 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c5f708d0-4110-417f-8353-dc61992d22dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.238 253665 DEBUG nova.policy [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '05cafdbce8334f9380b4dbd1d21f7d58', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd78b26f20d674ae6a213d727050a50d1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.298 253665 DEBUG nova.compute.manager [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-unplugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.298 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.299 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.299 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.299 253665 DEBUG nova.compute.manager [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] No waiting events found dispatching network-vif-unplugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.299 253665 DEBUG nova.compute.manager [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-unplugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.299 253665 DEBUG nova.compute.manager [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.299 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.300 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.300 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.300 253665 DEBUG nova.compute.manager [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] No waiting events found dispatching network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.300 253665 WARNING nova.compute.manager [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received unexpected event network-vif-plugged-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.300 253665 DEBUG nova.compute.manager [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-changed-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.300 253665 DEBUG nova.compute.manager [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Refreshing instance network info cache due to event network-changed-716b716d-2ee2-44e7-9850-c10854634f77. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.300 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.319 253665 DEBUG nova.network.neutron [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.565 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.649 253665 DEBUG nova.storage.rbd_utils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] resizing rbd image d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.719 253665 DEBUG nova.network.neutron [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.756 253665 INFO nova.compute.manager [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Took 1.03 seconds to deallocate network for instance.#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.768 253665 DEBUG nova.objects.instance [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'migration_context' on Instance uuid d99bd27b-0ff3-493e-a69c-6c7ec034aa81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.781 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.781 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Ensure instance console log exists: /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.782 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.782 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.782 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.821 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.822 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:05 np0005532048 nova_compute[253661]: 2025-11-22 09:10:05.948 253665 DEBUG oslo_concurrency.processutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.003 253665 DEBUG nova.network.neutron [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Successfully created port: a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.167 253665 DEBUG nova.network.neutron [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Updating instance_info_cache with network_info: [{"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.185 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Releasing lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.185 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance network_info: |[{"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.185 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.185 253665 DEBUG nova.network.neutron [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Refreshing network info cache for port 716b716d-2ee2-44e7-9850-c10854634f77 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.188 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Start _get_guest_xml network_info=[{"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.193 253665 WARNING nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.202 253665 DEBUG nova.virt.libvirt.host [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.203 253665 DEBUG nova.virt.libvirt.host [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.209 253665 DEBUG nova.virt.libvirt.host [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.209 253665 DEBUG nova.virt.libvirt.host [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.210 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.210 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.211 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.211 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.212 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.212 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.212 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.212 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.213 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.213 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.213 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.213 253665 DEBUG nova.virt.hardware [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.217 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:10:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3986123316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.422 253665 DEBUG oslo_concurrency.processutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.435 253665 DEBUG nova.compute.provider_tree [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.454 253665 DEBUG nova.scheduler.client.report [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.486 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.526 253665 INFO nova.scheduler.client.report [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Deleted allocations for instance c5f708d0-4110-417f-8353-dc61992d22dc#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.609 253665 DEBUG oslo_concurrency.lockutils [None req-de6397a8-06a5-4e62-bcc3-2ffd14a6c826 ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "c5f708d0-4110-417f-8353-dc61992d22dc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/524036332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.692 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.719 253665 DEBUG nova.storage.rbd_utils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.724 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.754 253665 DEBUG nova.network.neutron [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Successfully updated port: a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.770 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.770 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquired lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.770 253665 DEBUG nova.network.neutron [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.983 253665 DEBUG nova.network.neutron [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.996 253665 DEBUG nova.compute.manager [req-1e10991f-10f9-41de-a1e5-f9afd543e0ee req-88ecdb63-337f-45f4-b372-ce597bfe400c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received event network-vif-plugged-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.996 253665 DEBUG oslo_concurrency.lockutils [req-1e10991f-10f9-41de-a1e5-f9afd543e0ee req-88ecdb63-337f-45f4-b372-ce597bfe400c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.997 253665 DEBUG oslo_concurrency.lockutils [req-1e10991f-10f9-41de-a1e5-f9afd543e0ee req-88ecdb63-337f-45f4-b372-ce597bfe400c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.997 253665 DEBUG oslo_concurrency.lockutils [req-1e10991f-10f9-41de-a1e5-f9afd543e0ee req-88ecdb63-337f-45f4-b372-ce597bfe400c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.997 253665 DEBUG nova.compute.manager [req-1e10991f-10f9-41de-a1e5-f9afd543e0ee req-88ecdb63-337f-45f4-b372-ce597bfe400c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] No waiting events found dispatching network-vif-plugged-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:10:06 np0005532048 nova_compute[253661]: 2025-11-22 09:10:06.997 253665 WARNING nova.compute.manager [req-1e10991f-10f9-41de-a1e5-f9afd543e0ee req-88ecdb63-337f-45f4-b372-ce597bfe400c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received unexpected event network-vif-plugged-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:10:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 274 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.8 MiB/s wr, 198 op/s
Nov 22 04:10:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1509290870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.210 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.212 253665 DEBUG nova.virt.libvirt.vif [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-19
85232284-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:02Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.212 253665 DEBUG nova.network.os_vif_util [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.213 253665 DEBUG nova.network.os_vif_util [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.215 253665 DEBUG nova.objects.instance [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.228 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  <uuid>3ae08a2f-348c-406b-8ffc-9acb8a542e1c</uuid>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  <name>instance-00000012</name>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersAdminTestJSON-server-1439141870</nova:name>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:10:06</nova:creationTime>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:10:07 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:        <nova:user uuid="05cafdbce8334f9380b4dbd1d21f7d58">tempest-ServersAdminTestJSON-1985232284-project-member</nova:user>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:        <nova:project uuid="d78b26f20d674ae6a213d727050a50d1">tempest-ServersAdminTestJSON-1985232284</nova:project>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:        <nova:port uuid="716b716d-2ee2-44e7-9850-c10854634f77">
Nov 22 04:10:07 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <entry name="serial">3ae08a2f-348c-406b-8ffc-9acb8a542e1c</entry>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <entry name="uuid">3ae08a2f-348c-406b-8ffc-9acb8a542e1c</entry>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk">
Nov 22 04:10:07 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:10:07 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config">
Nov 22 04:10:07 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:10:07 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:47:7d:dd"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <target dev="tap716b716d-2e"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/console.log" append="off"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:10:07 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:10:07 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:10:07 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:10:07 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.228 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Preparing to wait for external event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.229 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.229 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.229 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.230 253665 DEBUG nova.virt.libvirt.vif [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:02Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.230 253665 DEBUG nova.network.os_vif_util [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.230 253665 DEBUG nova.network.os_vif_util [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.231 253665 DEBUG os_vif [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.231 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.232 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.232 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.235 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.235 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap716b716d-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.239 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap716b716d-2e, col_values=(('external_ids', {'iface-id': '716b716d-2ee2-44e7-9850-c10854634f77', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:7d:dd', 'vm-uuid': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:07 np0005532048 NetworkManager[48920]: <info>  [1763802607.2706] manager: (tap716b716d-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.272 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.279 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.288 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.290 253665 INFO os_vif [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e')#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.356 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.357 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.357 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No VIF found with MAC fa:16:3e:47:7d:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.358 253665 INFO nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Using config drive#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.380 253665 DEBUG nova.storage.rbd_utils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.449 253665 DEBUG nova.compute.manager [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Received event network-vif-deleted-2bc1ef13-abf9-49ce-b3bb-41d737b9cd13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.449 253665 DEBUG nova.compute.manager [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received event network-changed-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.449 253665 DEBUG nova.compute.manager [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Refreshing instance network info cache due to event network-changed-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.450 253665 DEBUG oslo_concurrency.lockutils [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.785 253665 DEBUG nova.network.neutron [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Updated VIF entry in instance network info cache for port 716b716d-2ee2-44e7-9850-c10854634f77. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.786 253665 DEBUG nova.network.neutron [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Updating instance_info_cache with network_info: [{"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.799 253665 DEBUG oslo_concurrency.lockutils [req-9ce4f21e-696d-4111-ade9-f04e593ef32a req-c93a72ae-e5b9-47f1-8395-afe4f1cb2edd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.843 253665 INFO nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating config drive at /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.849 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphg_hb7ki execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.899 253665 DEBUG nova.network.neutron [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Updating instance_info_cache with network_info: [{"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.924 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Releasing lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.924 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Instance network_info: |[{"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.925 253665 DEBUG oslo_concurrency.lockutils [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.925 253665 DEBUG nova.network.neutron [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Refreshing network info cache for port a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.928 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Start _get_guest_xml network_info=[{"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.934 253665 WARNING nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.941 253665 DEBUG nova.virt.libvirt.host [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.941 253665 DEBUG nova.virt.libvirt.host [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.945 253665 DEBUG nova.virt.libvirt.host [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.945 253665 DEBUG nova.virt.libvirt.host [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.945 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.946 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.946 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.946 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.946 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.946 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.947 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.947 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.947 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.947 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.947 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.948 253665 DEBUG nova.virt.hardware [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.950 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:07 np0005532048 nova_compute[253661]: 2025-11-22 09:10:07.983 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphg_hb7ki" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:08 np0005532048 nova_compute[253661]: 2025-11-22 09:10:08.009 253665 DEBUG nova.storage.rbd_utils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:08 np0005532048 nova_compute[253661]: 2025-11-22 09:10:08.014 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:08 np0005532048 nova_compute[253661]: 2025-11-22 09:10:08.195 253665 DEBUG oslo_concurrency.processutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:08 np0005532048 nova_compute[253661]: 2025-11-22 09:10:08.196 253665 INFO nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deleting local config drive /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config because it was imported into RBD.#033[00m
Nov 22 04:10:08 np0005532048 kernel: tap716b716d-2e: entered promiscuous mode
Nov 22 04:10:08 np0005532048 NetworkManager[48920]: <info>  [1763802608.2664] manager: (tap716b716d-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Nov 22 04:10:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:08Z|00068|binding|INFO|Claiming lport 716b716d-2ee2-44e7-9850-c10854634f77 for this chassis.
Nov 22 04:10:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:08Z|00069|binding|INFO|716b716d-2ee2-44e7-9850-c10854634f77: Claiming fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 04:10:08 np0005532048 nova_compute[253661]: 2025-11-22 09:10:08.266 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.284 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:7d:dd 10.100.0.8'], port_security=['fa:16:3e:47:7d:dd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=716b716d-2ee2-44e7-9850-c10854634f77) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.285 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 716b716d-2ee2-44e7-9850-c10854634f77 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a bound to our chassis#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.287 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a#033[00m
Nov 22 04:10:08 np0005532048 systemd-udevd[282235]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:10:08 np0005532048 NetworkManager[48920]: <info>  [1763802608.3038] device (tap716b716d-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:10:08 np0005532048 NetworkManager[48920]: <info>  [1763802608.3046] device (tap716b716d-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.303 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[170e4371-7bdf-4458-bb6a-92e02182aeb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.306 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap514ab32c-31 in ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.309 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap514ab32c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.309 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fc11488c-01a7-404c-b7bf-fe89ec5d0960]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.311 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[10a7cb20-0f88-4feb-884c-09a62eb2b2fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:08 np0005532048 systemd-machined[215941]: New machine qemu-21-instance-00000012.
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.328 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe7cc47-5b01-4306-b917-36cc4c796c99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:08 np0005532048 systemd[1]: Started Virtual Machine qemu-21-instance-00000012.
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.356 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa5262c-af95-48af-b8d1-027683ac0979]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979115775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:08Z|00070|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 ovn-installed in OVS
Nov 22 04:10:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:08Z|00071|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 up in Southbound
Nov 22 04:10:08 np0005532048 nova_compute[253661]: 2025-11-22 09:10:08.410 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:08 np0005532048 nova_compute[253661]: 2025-11-22 09:10:08.437 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.440 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8acd4790-1893-4d44-b8d8-eca1706e87f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:08 np0005532048 NetworkManager[48920]: <info>  [1763802608.4537] manager: (tap514ab32c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.452 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9b40dece-fef8-4e3e-bf6f-e435e9d3c6bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:08 np0005532048 nova_compute[253661]: 2025-11-22 09:10:08.482 253665 DEBUG nova.storage.rbd_utils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:08 np0005532048 nova_compute[253661]: 2025-11-22 09:10:08.488 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.497 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2c237f-0d22-410f-b8fa-d2dd6d01d81e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.502 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[96e19735-f41d-4499-857f-84e309f8444e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:08 np0005532048 NetworkManager[48920]: <info>  [1763802608.5277] device (tap514ab32c-30): carrier: link connected
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.535 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e744068c-a9a1-42b5-a88b-77d9868dbdca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.555 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf6e623-8778-4627-8159-eb028226ccc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282292, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.577 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a134e3d3-6b6d-4dda-888f-a98cbd88821a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe19:d932'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545926, 'tstamp': 545926}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282293, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.594 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e3485840-25ea-40db-8c3d-2d3b398ead83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282294, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.625 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7d19e03b-6227-4a0c-9544-30b893d46f16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.692 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c99a8dbf-534e-4ea9-bfd6-1f4e1073856c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.694 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.695 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.695 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:08 np0005532048 NetworkManager[48920]: <info>  [1763802608.6984] manager: (tap514ab32c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Nov 22 04:10:08 np0005532048 nova_compute[253661]: 2025-11-22 09:10:08.697 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:08 np0005532048 kernel: tap514ab32c-30: entered promiscuous mode
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.704 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:08Z|00072|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.707 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/514ab32c-3e9b-4d95-81f8-6acc06be6d1a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/514ab32c-3e9b-4d95-81f8-6acc06be6d1a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.708 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d85911-7424-4851-9dde-5d07611d4075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:08 np0005532048 nova_compute[253661]: 2025-11-22 09:10:08.708 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.708 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-514ab32c-3e9b-4d95-81f8-6acc06be6d1a
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/514ab32c-3e9b-4d95-81f8-6acc06be6d1a.pid.haproxy
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 514ab32c-3e9b-4d95-81f8-6acc06be6d1a
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:10:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:08.709 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'env', 'PROCESS_TAG=haproxy-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/514ab32c-3e9b-4d95-81f8-6acc06be6d1a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:10:08 np0005532048 nova_compute[253661]: 2025-11-22 09:10:08.723 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/282349143' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:08 np0005532048 nova_compute[253661]: 2025-11-22 09:10:08.956 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:08 np0005532048 nova_compute[253661]: 2025-11-22 09:10:08.965 253665 DEBUG nova.virt.libvirt.vif [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1874754552',display_name='tempest-ServersAdminTestJSON-server-1874754552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1874754552',id=19,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-otgq40uh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-19
85232284-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:04Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=d99bd27b-0ff3-493e-a69c-6c7ec034aa81,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:10:08 np0005532048 nova_compute[253661]: 2025-11-22 09:10:08.966 253665 DEBUG nova.network.os_vif_util [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:08 np0005532048 nova_compute[253661]: 2025-11-22 09:10:08.967 253665 DEBUG nova.network.os_vif_util [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:5a:f3,bridge_name='br-int',has_traffic_filtering=True,id=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa36e1a52-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:08 np0005532048 nova_compute[253661]: 2025-11-22 09:10:08.970 253665 DEBUG nova.objects.instance [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid d99bd27b-0ff3-493e-a69c-6c7ec034aa81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.030 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  <uuid>d99bd27b-0ff3-493e-a69c-6c7ec034aa81</uuid>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  <name>instance-00000013</name>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersAdminTestJSON-server-1874754552</nova:name>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:10:07</nova:creationTime>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:10:09 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:        <nova:user uuid="05cafdbce8334f9380b4dbd1d21f7d58">tempest-ServersAdminTestJSON-1985232284-project-member</nova:user>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:        <nova:project uuid="d78b26f20d674ae6a213d727050a50d1">tempest-ServersAdminTestJSON-1985232284</nova:project>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:        <nova:port uuid="a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19">
Nov 22 04:10:09 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <entry name="serial">d99bd27b-0ff3-493e-a69c-6c7ec034aa81</entry>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <entry name="uuid">d99bd27b-0ff3-493e-a69c-6c7ec034aa81</entry>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk">
Nov 22 04:10:09 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:10:09 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk.config">
Nov 22 04:10:09 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:10:09 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:0c:5a:f3"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <target dev="tapa36e1a52-1f"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81/console.log" append="off"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:10:09 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:10:09 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:10:09 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:10:09 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.032 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Preparing to wait for external event network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.032 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.032 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.033 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.033 253665 DEBUG nova.virt.libvirt.vif [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1874754552',display_name='tempest-ServersAdminTestJSON-server-1874754552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1874754552',id=19,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-otgq40uh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminT
estJSON-1985232284-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:04Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=d99bd27b-0ff3-493e-a69c-6c7ec034aa81,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.034 253665 DEBUG nova.network.os_vif_util [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.034 253665 DEBUG nova.network.os_vif_util [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:5a:f3,bridge_name='br-int',has_traffic_filtering=True,id=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa36e1a52-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.035 253665 DEBUG os_vif [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:5a:f3,bridge_name='br-int',has_traffic_filtering=True,id=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa36e1a52-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.035 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.036 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.036 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.038 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.038 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa36e1a52-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.038 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa36e1a52-1f, col_values=(('external_ids', {'iface-id': 'a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:5a:f3', 'vm-uuid': 'd99bd27b-0ff3-493e-a69c-6c7ec034aa81'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.040 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:09 np0005532048 NetworkManager[48920]: <info>  [1763802609.0413] manager: (tapa36e1a52-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.045 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.050 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.051 253665 INFO os_vif [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:5a:f3,bridge_name='br-int',has_traffic_filtering=True,id=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa36e1a52-1f')#033[00m
Nov 22 04:10:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 260 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 251 op/s
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.126 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.126 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.126 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No VIF found with MAC fa:16:3e:0c:5a:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.127 253665 INFO nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Using config drive#033[00m
Nov 22 04:10:09 np0005532048 podman[282388]: 2025-11-22 09:10:09.132528569 +0000 UTC m=+0.064004343 container create af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.167 253665 DEBUG nova.storage.rbd_utils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:09 np0005532048 systemd[1]: Started libpod-conmon-af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6.scope.
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.174 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802609.1464906, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.175 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Started (Lifecycle Event)#033[00m
Nov 22 04:10:09 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:10:09 np0005532048 podman[282388]: 2025-11-22 09:10:09.101901167 +0000 UTC m=+0.033376961 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:10:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31e66055707f791e950c824f07df600e3199b957edd8fd29251cf5299b718d3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.213 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:09 np0005532048 podman[282388]: 2025-11-22 09:10:09.221206092 +0000 UTC m=+0.152681886 container init af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.225 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802609.1466255, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.225 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:10:09 np0005532048 podman[282388]: 2025-11-22 09:10:09.2310109 +0000 UTC m=+0.162486674 container start af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.246 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.251 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:10:09 np0005532048 neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a[282424]: [NOTICE]   (282428) : New worker (282430) forked
Nov 22 04:10:09 np0005532048 neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a[282424]: [NOTICE]   (282428) : Loading success.
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.268 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.565 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.603 253665 DEBUG nova.compute.manager [req-36d8ae4d-cf71-4f1c-b85a-f5975f735b05 req-a63cf1b3-c960-4b56-a748-960718cb6400 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.604 253665 DEBUG oslo_concurrency.lockutils [req-36d8ae4d-cf71-4f1c-b85a-f5975f735b05 req-a63cf1b3-c960-4b56-a748-960718cb6400 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.604 253665 DEBUG oslo_concurrency.lockutils [req-36d8ae4d-cf71-4f1c-b85a-f5975f735b05 req-a63cf1b3-c960-4b56-a748-960718cb6400 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.604 253665 DEBUG oslo_concurrency.lockutils [req-36d8ae4d-cf71-4f1c-b85a-f5975f735b05 req-a63cf1b3-c960-4b56-a748-960718cb6400 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.605 253665 DEBUG nova.compute.manager [req-36d8ae4d-cf71-4f1c-b85a-f5975f735b05 req-a63cf1b3-c960-4b56-a748-960718cb6400 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Processing event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.606 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.622 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802609.6209538, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.622 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:10:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.626 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.646 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.648 253665 INFO nova.virt.libvirt.driver [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance spawned successfully.#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.649 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.651 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.671 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.671 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.672 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.672 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.673 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.673 253665 DEBUG nova.virt.libvirt.driver [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.679 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.720 253665 INFO nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Took 7.56 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.720 253665 DEBUG nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.773 253665 INFO nova.compute.manager [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Took 8.64 seconds to build instance.#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.786 253665 DEBUG oslo_concurrency.lockutils [None req-b435b213-5bd8-41ec-9708-8e10220b309c 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.812 253665 INFO nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Creating config drive at /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81/disk.config#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.816 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbet2jpvl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.843 253665 DEBUG nova.network.neutron [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Updated VIF entry in instance network info cache for port a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.844 253665 DEBUG nova.network.neutron [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Updating instance_info_cache with network_info: [{"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.856 253665 DEBUG oslo_concurrency.lockutils [req-88922c2c-55a0-40d4-ac50-49600a92399c req-74c70dad-b597-4cfd-9706-aa241b6fe609 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.949 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbet2jpvl" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.978 253665 DEBUG nova.storage.rbd_utils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:09 np0005532048 nova_compute[253661]: 2025-11-22 09:10:09.982 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81/disk.config d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.154 253665 DEBUG oslo_concurrency.processutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81/disk.config d99bd27b-0ff3-493e-a69c-6c7ec034aa81_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.155 253665 INFO nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Deleting local config drive /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81/disk.config because it was imported into RBD.#033[00m
Nov 22 04:10:10 np0005532048 kernel: tapa36e1a52-1f: entered promiscuous mode
Nov 22 04:10:10 np0005532048 NetworkManager[48920]: <info>  [1763802610.2502] manager: (tapa36e1a52-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Nov 22 04:10:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:10Z|00073|binding|INFO|Claiming lport a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 for this chassis.
Nov 22 04:10:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:10Z|00074|binding|INFO|a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19: Claiming fa:16:3e:0c:5a:f3 10.100.0.11
Nov 22 04:10:10 np0005532048 systemd-udevd[282276]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:10:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.258 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:5a:f3 10.100.0.11'], port_security=['fa:16:3e:0c:5a:f3 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd99bd27b-0ff3-493e-a69c-6c7ec034aa81', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:10:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.260 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a bound to our chassis#033[00m
Nov 22 04:10:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.263 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.267 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.272 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "ff657cfc-b1bb-4545-bc13-ad240e69c666" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.272 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:10 np0005532048 NetworkManager[48920]: <info>  [1763802610.2839] device (tapa36e1a52-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:10:10 np0005532048 NetworkManager[48920]: <info>  [1763802610.2858] device (tapa36e1a52-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:10:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:10Z|00075|binding|INFO|Setting lport a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 ovn-installed in OVS
Nov 22 04:10:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:10Z|00076|binding|INFO|Setting lport a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 up in Southbound
Nov 22 04:10:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.287 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[107e20f4-b450-4893-a750-6a2c2c52c795]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.289 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.292 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.295 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:10 np0005532048 systemd-machined[215941]: New machine qemu-22-instance-00000013.
Nov 22 04:10:10 np0005532048 systemd[1]: Started Virtual Machine qemu-22-instance-00000013.
Nov 22 04:10:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.327 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7a2e5c7d-c615-479c-993c-b01ea6f99e8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.331 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d2d3f2-ed7c-453c-9f08-a2d687230f59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.369 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.369 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.377 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.377 253665 INFO nova.compute.claims [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:10:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.366 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9643dd14-686d-4329-a603-db375a7e562d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.398 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee9264b-d608-46bf-82ea-02b2f0a4d922]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 5, 'tx_packets': 5, 'rx_bytes': 442, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 5, 'tx_packets': 5, 'rx_bytes': 442, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 5, 'inoctets': 372, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 5, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 372, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 5, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282503, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.421 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eac9e805-9381-4222-91c9-9c119e1ee3b3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282505, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282505, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.424 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.428 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.428 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.428 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.429 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:10.429 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.541 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.741 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802610.7401047, d99bd27b-0ff3-493e-a69c-6c7ec034aa81 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.744 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] VM Started (Lifecycle Event)#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.766 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.772 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802610.740446, d99bd27b-0ff3-493e-a69c-6c7ec034aa81 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.773 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.795 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.801 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:10:10 np0005532048 nova_compute[253661]: 2025-11-22 09:10:10.822 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:10:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:10:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/948069961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.079 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.086 253665 DEBUG nova.compute.provider_tree [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.103 253665 DEBUG nova.scheduler.client.report [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:10:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 260 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.4 MiB/s wr, 188 op/s
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.127 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.129 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.169 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.170 253665 DEBUG nova.network.neutron [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.187 253665 INFO nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.202 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.320 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.323 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.323 253665 INFO nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Creating image(s)#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.360 253665 DEBUG nova.storage.rbd_utils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image ff657cfc-b1bb-4545-bc13-ad240e69c666_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.403 253665 DEBUG nova.storage.rbd_utils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image ff657cfc-b1bb-4545-bc13-ad240e69c666_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.442 253665 DEBUG nova.storage.rbd_utils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image ff657cfc-b1bb-4545-bc13-ad240e69c666_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.449 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.488 253665 DEBUG nova.policy [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '526789957ca1421b94691426dc7bccb5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ef6e238d438c49959eb8bee112836e52', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.540 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.542 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.543 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.543 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.578 253665 DEBUG nova.storage.rbd_utils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image ff657cfc-b1bb-4545-bc13-ad240e69c666_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.586 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 ff657cfc-b1bb-4545-bc13-ad240e69c666_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.882 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 ff657cfc-b1bb-4545-bc13-ad240e69c666_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.935 253665 DEBUG nova.storage.rbd_utils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] resizing rbd image ff657cfc-b1bb-4545-bc13-ad240e69c666_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.987 253665 DEBUG nova.compute.manager [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.988 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.988 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.988 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.988 253665 DEBUG nova.compute.manager [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.988 253665 WARNING nova.compute.manager [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received unexpected event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.989 253665 DEBUG nova.compute.manager [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received event network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.989 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.989 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.989 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.989 253665 DEBUG nova.compute.manager [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Processing event network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.989 253665 DEBUG nova.compute.manager [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received event network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.990 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.990 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.990 253665 DEBUG oslo_concurrency.lockutils [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.990 253665 DEBUG nova.compute.manager [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] No waiting events found dispatching network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.990 253665 WARNING nova.compute.manager [req-247ddf63-8cb2-4236-9b65-3245f57ea9e2 req-ab852484-e3fc-4682-bf3f-4b313ad66c06 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received unexpected event network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:10:11 np0005532048 nova_compute[253661]: 2025-11-22 09:10:11.991 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.039 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802612.0026987, d99bd27b-0ff3-493e-a69c-6c7ec034aa81 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.040 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.043 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.049 253665 DEBUG nova.objects.instance [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lazy-loading 'migration_context' on Instance uuid ff657cfc-b1bb-4545-bc13-ad240e69c666 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.053 253665 INFO nova.virt.libvirt.driver [-] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Instance spawned successfully.#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.054 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.059 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.062 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.065 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.067 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Ensure instance console log exists: /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.067 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.067 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.068 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.074 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.075 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.075 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.076 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.076 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.077 253665 DEBUG nova.virt.libvirt.driver [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.081 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.169 253665 INFO nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Took 7.21 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.169 253665 DEBUG nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.223 253665 INFO nova.compute.manager [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Took 8.24 seconds to build instance.#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.241 253665 DEBUG oslo_concurrency.lockutils [None req-c59f790a-be5b-4716-9185-a511253654b3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:10:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3823678760' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:10:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3823678760' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.336 253665 DEBUG nova.network.neutron [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Successfully created port: 52bf11af-1372-4c5d-8bd8-81017da77de8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.613 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquiring lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.614 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.638 253665 DEBUG nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.713 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.714 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.719 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.720 253665 INFO nova.compute.claims [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:10:12 np0005532048 nova_compute[253661]: 2025-11-22 09:10:12.923 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 260 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.4 MiB/s wr, 203 op/s
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.134 253665 DEBUG nova.network.neutron [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Successfully updated port: 52bf11af-1372-4c5d-8bd8-81017da77de8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.148 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.149 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquired lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.149 253665 DEBUG nova.network.neutron [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.308 253665 DEBUG nova.network.neutron [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:10:13 np0005532048 podman[282756]: 2025-11-22 09:10:13.413264715 +0000 UTC m=+0.096100753 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 22 04:10:13 np0005532048 podman[282757]: 2025-11-22 09:10:13.415885209 +0000 UTC m=+0.098479832 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:10:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:10:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1171787393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.459 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.465 253665 DEBUG nova.compute.provider_tree [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.477 253665 DEBUG nova.scheduler.client.report [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.499 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.500 253665 DEBUG nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.540 253665 DEBUG nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.540 253665 DEBUG nova.network.neutron [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.556 253665 INFO nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.579 253665 DEBUG nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.618 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.619 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.619 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.619 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.620 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.621 253665 INFO nova.compute.manager [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Terminating instance#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.622 253665 DEBUG nova.compute.manager [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.667 253665 DEBUG nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.669 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.669 253665 INFO nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Creating image(s)#033[00m
Nov 22 04:10:13 np0005532048 kernel: tap0122a4be-9c (unregistering): left promiscuous mode
Nov 22 04:10:13 np0005532048 NetworkManager[48920]: <info>  [1763802613.6856] device (tap0122a4be-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:10:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:13Z|00077|binding|INFO|Releasing lport 0122a4be-9c10-4475-ba7d-5c818be52474 from this chassis (sb_readonly=0)
Nov 22 04:10:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:13Z|00078|binding|INFO|Setting lport 0122a4be-9c10-4475-ba7d-5c818be52474 down in Southbound
Nov 22 04:10:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:13Z|00079|binding|INFO|Removing iface tap0122a4be-9c ovn-installed in OVS
Nov 22 04:10:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:13.705 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:be:aa 10.100.0.6'], port_security=['fa:16:3e:ea:be:aa 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '18eb7df8-f3ac-44d2-86c1-db7c0c913c53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5879249ab50a40ec9553bc923bdd1042', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7098ed06-dd10-40f8-a35d-3bd27702aded 90f543f2-0e15-4746-9035-ec29edc5cf1e d14126c2-5248-4820-8cca-a041d8844d35', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0de8bc98-4153-4ec7-ae4b-7da28376c78a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0122a4be-9c10-4475-ba7d-5c818be52474) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:10:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:13.709 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0122a4be-9c10-4475-ba7d-5c818be52474 in datapath bce72c95-f29f-458a-9b0e-7e700aa1deb4 unbound from our chassis#033[00m
Nov 22 04:10:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:13.711 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bce72c95-f29f-458a-9b0e-7e700aa1deb4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:10:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:13.713 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a9acff7a-89ab-4d13-82fc-cb2a2256ddee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:13.714 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4 namespace which is not needed anymore#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.713 253665 DEBUG nova.storage.rbd_utils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] rbd image 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:13 np0005532048 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Nov 22 04:10:13 np0005532048 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 15.950s CPU time.
Nov 22 04:10:13 np0005532048 systemd-machined[215941]: Machine qemu-15-instance-0000000e terminated.
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.760 253665 DEBUG nova.storage.rbd_utils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] rbd image 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.817 253665 DEBUG nova.storage.rbd_utils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] rbd image 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.829 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.863 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.892 253665 INFO nova.virt.libvirt.driver [-] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Instance destroyed successfully.#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.894 253665 DEBUG nova.objects.instance [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lazy-loading 'resources' on Instance uuid 18eb7df8-f3ac-44d2-86c1-db7c0c913c53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.904 253665 DEBUG nova.network.neutron [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.905 253665 DEBUG nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.909 253665 DEBUG nova.virt.libvirt.vif [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:08:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-663200800',display_name='tempest-SecurityGroupsTestJSON-server-663200800',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-663200800',id=14,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:09:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5879249ab50a40ec9553bc923bdd1042',ramdisk_id='',reservation_id='r-197d3f9j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-342579724',owner_user_name='tempest-SecurityGroupsTestJSON-342579724-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:09:13Z,user_data=None,user_id='ee24e4812c424984881862883987d750',uuid=18eb7df8-f3ac-44d2-86c1-db7c0c913c53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.909 253665 DEBUG nova.network.os_vif_util [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converting VIF {"id": "0122a4be-9c10-4475-ba7d-5c818be52474", "address": "fa:16:3e:ea:be:aa", "network": {"id": "bce72c95-f29f-458a-9b0e-7e700aa1deb4", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1467253492-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5879249ab50a40ec9553bc923bdd1042", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0122a4be-9c", "ovs_interfaceid": "0122a4be-9c10-4475-ba7d-5c818be52474", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.910 253665 DEBUG nova.network.os_vif_util [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ea:be:aa,bridge_name='br-int',has_traffic_filtering=True,id=0122a4be-9c10-4475-ba7d-5c818be52474,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0122a4be-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.910 253665 DEBUG os_vif [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ea:be:aa,bridge_name='br-int',has_traffic_filtering=True,id=0122a4be-9c10-4475-ba7d-5c818be52474,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0122a4be-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:10:13 np0005532048 neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4[279582]: [NOTICE]   (279586) : haproxy version is 2.8.14-c23fe91
Nov 22 04:10:13 np0005532048 neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4[279582]: [NOTICE]   (279586) : path to executable is /usr/sbin/haproxy
Nov 22 04:10:13 np0005532048 neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4[279582]: [WARNING]  (279586) : Exiting Master process...
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.916 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:13 np0005532048 neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4[279582]: [ALERT]    (279586) : Current worker (279588) exited with code 143 (Terminated)
Nov 22 04:10:13 np0005532048 neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4[279582]: [WARNING]  (279586) : All workers exited. Exiting... (0)
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.918 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0122a4be-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:13 np0005532048 systemd[1]: libpod-c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f.scope: Deactivated successfully.
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.923 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.924 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.925 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.927 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.928 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:13 np0005532048 podman[282869]: 2025-11-22 09:10:13.928041217 +0000 UTC m=+0.083652910 container died c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.928 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.955 253665 DEBUG nova.storage.rbd_utils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] rbd image 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:13 np0005532048 nova_compute[253661]: 2025-11-22 09:10:13.969 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:13 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f-userdata-shm.mount: Deactivated successfully.
Nov 22 04:10:13 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b439ac87fd05beec68cedfc1c5359f61b19c450f7ad63dcb47892d20e6a6cb9e-merged.mount: Deactivated successfully.
Nov 22 04:10:14 np0005532048 podman[282869]: 2025-11-22 09:10:14.01015769 +0000 UTC m=+0.165769393 container cleanup c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.015 253665 INFO os_vif [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ea:be:aa,bridge_name='br-int',has_traffic_filtering=True,id=0122a4be-9c10-4475-ba7d-5c818be52474,network=Network(bce72c95-f29f-458a-9b0e-7e700aa1deb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0122a4be-9c')#033[00m
Nov 22 04:10:14 np0005532048 systemd[1]: libpod-conmon-c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f.scope: Deactivated successfully.
Nov 22 04:10:14 np0005532048 podman[282932]: 2025-11-22 09:10:14.118339865 +0000 UTC m=+0.069397074 container remove c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:10:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.126 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b390e5d4-0f3c-4096-ae32-a506da445b84]: (4, ('Sat Nov 22 09:10:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4 (c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f)\nc391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f\nSat Nov 22 09:10:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4 (c391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f)\nc391e7cfc8fa2168f8ee40b0e7c4905d4d0c93f2c63de26f7b5dcf3b776c9f0f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.128 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b4773d91-95e7-44a1-9738-e7363df9fe8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.131 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbce72c95-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.133 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:14 np0005532048 kernel: tapbce72c95-f0: left promiscuous mode
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.153 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.159 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[99847d8b-d196-4441-b783-0724eae07418]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.176 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b4262973-fbe6-4d38-9c20-f0e8b80ffdd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.178 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b17158e4-bb5a-448f-adc0-19b10ba13e13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.209 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[398c4af4-0dc8-48d7-86f3-2cb77b306b4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540269, 'reachable_time': 19889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282996, 'error': None, 'target': 'ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:14 np0005532048 systemd[1]: run-netns-ovnmeta\x2dbce72c95\x2df29f\x2d458a\x2d9b0e\x2d7e700aa1deb4.mount: Deactivated successfully.
Nov 22 04:10:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.215 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bce72c95-f29f-458a-9b0e-7e700aa1deb4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:10:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:14.215 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[c493f820-461b-46cb-b0fd-22db7d386ad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.376 253665 DEBUG nova.network.neutron [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Updating instance_info_cache with network_info: [{"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.398 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.497 253665 DEBUG nova.storage.rbd_utils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] resizing rbd image 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.564 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Releasing lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.564 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Instance network_info: |[{"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.571 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Start _get_guest_xml network_info=[{"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.572 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.592 253665 WARNING nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.599 253665 DEBUG nova.virt.libvirt.host [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.600 253665 DEBUG nova.virt.libvirt.host [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.606 253665 DEBUG nova.virt.libvirt.host [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.607 253665 DEBUG nova.virt.libvirt.host [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.608 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.608 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.609 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.609 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.609 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.610 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.610 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.610 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.610 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.611 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.611 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.611 253665 DEBUG nova.virt.hardware [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.615 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.664 253665 DEBUG nova.compute.manager [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-changed-52bf11af-1372-4c5d-8bd8-81017da77de8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.664 253665 DEBUG nova.compute.manager [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Refreshing instance network info cache due to event network-changed-52bf11af-1372-4c5d-8bd8-81017da77de8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.665 253665 DEBUG oslo_concurrency.lockutils [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.665 253665 DEBUG oslo_concurrency.lockutils [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.666 253665 DEBUG nova.network.neutron [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Refreshing network info cache for port 52bf11af-1372-4c5d-8bd8-81017da77de8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.725 253665 DEBUG nova.objects.instance [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lazy-loading 'migration_context' on Instance uuid 0e7ac107-5a5a-4066-9396-f22b877e4c2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.741 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.742 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Ensure instance console log exists: /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.742 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.743 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.743 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.744 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.750 253665 WARNING nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.761 253665 INFO nova.virt.libvirt.driver [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Deleting instance files /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53_del#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.761 253665 INFO nova.virt.libvirt.driver [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Deletion of /var/lib/nova/instances/18eb7df8-f3ac-44d2-86c1-db7c0c913c53_del complete#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.771 253665 DEBUG nova.virt.libvirt.host [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.772 253665 DEBUG nova.virt.libvirt.host [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.780 253665 DEBUG nova.virt.libvirt.host [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.780 253665 DEBUG nova.virt.libvirt.host [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.781 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.781 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.781 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.781 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.781 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.782 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.782 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.782 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.782 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.782 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.782 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.783 253665 DEBUG nova.virt.hardware [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.786 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.886 253665 INFO nova.compute.manager [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Took 1.26 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.887 253665 DEBUG oslo.service.loopingcall [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.888 253665 DEBUG nova.compute.manager [-] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:10:14 np0005532048 nova_compute[253661]: 2025-11-22 09:10:14.888 253665 DEBUG nova.network.neutron [-] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3168624485' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:10:15 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 1857b15f-9c09-4b59-930f-345761f6be59 does not exist
Nov 22 04:10:15 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d037f7c4-6ffa-4f86-b9b2-3003a50107c0 does not exist
Nov 22 04:10:15 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev badd32c9-34f0-43cf-b54f-55a150dc872e does not exist
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.119 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:10:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 292 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.9 MiB/s wr, 263 op/s
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.164 253665 DEBUG nova.storage.rbd_utils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image ff657cfc-b1bb-4545-bc13-ad240e69c666_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.174 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2562866794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.374 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.414 253665 DEBUG nova.storage.rbd_utils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] rbd image 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.418 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.501 253665 DEBUG nova.network.neutron [-] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.575 253665 INFO nova.compute.manager [-] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Took 0.69 seconds to deallocate network for instance.#033[00m
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2471072209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.668 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.670 253665 DEBUG nova.virt.libvirt.vif [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1250749597',display_name='tempest-FloatingIPsAssociationTestJSON-server-1250749597',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1250749597',id=20,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef6e238d438c49959eb8bee112836e52',ramdisk_id='',reservation_id='r-63b4tjoo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1882113079',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1882113079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:11Z,user_data=None,user_id='526789957ca1421b94691426dc7bccb5',uuid=ff657cfc-b1bb-4545-bc13-ad240e69c666,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.671 253665 DEBUG nova.network.os_vif_util [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converting VIF {"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.672 253665 DEBUG nova.network.os_vif_util [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:5c:0b,bridge_name='br-int',has_traffic_filtering=True,id=52bf11af-1372-4c5d-8bd8-81017da77de8,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52bf11af-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.673 253665 DEBUG nova.objects.instance [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lazy-loading 'pci_devices' on Instance uuid ff657cfc-b1bb-4545-bc13-ad240e69c666 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.690 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <uuid>ff657cfc-b1bb-4545-bc13-ad240e69c666</uuid>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <name>instance-00000014</name>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1250749597</nova:name>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:10:14</nova:creationTime>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <nova:user uuid="526789957ca1421b94691426dc7bccb5">tempest-FloatingIPsAssociationTestJSON-1882113079-project-member</nova:user>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <nova:project uuid="ef6e238d438c49959eb8bee112836e52">tempest-FloatingIPsAssociationTestJSON-1882113079</nova:project>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <nova:port uuid="52bf11af-1372-4c5d-8bd8-81017da77de8">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <entry name="serial">ff657cfc-b1bb-4545-bc13-ad240e69c666</entry>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <entry name="uuid">ff657cfc-b1bb-4545-bc13-ad240e69c666</entry>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/ff657cfc-b1bb-4545-bc13-ad240e69c666_disk">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/ff657cfc-b1bb-4545-bc13-ad240e69c666_disk.config">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:58:5c:0b"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <target dev="tap52bf11af-13"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666/console.log" append="off"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:10:15 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:10:15 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.692 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Preparing to wait for external event network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.693 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.693 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.694 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.694 253665 DEBUG nova.virt.libvirt.vif [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1250749597',display_name='tempest-FloatingIPsAssociationTestJSON-server-1250749597',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1250749597',id=20,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef6e238d438c49959eb8bee112836e52',ramdisk_id='',reservation_id='r-63b4tjoo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1882113079',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1882113079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:11Z,user_data=None,user_id='526789957ca1421b94691426dc7bccb5',uuid=ff657cfc-b1bb-4545-bc13-ad240e69c666,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.695 253665 DEBUG nova.network.os_vif_util [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converting VIF {"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.695 253665 DEBUG nova.network.os_vif_util [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:5c:0b,bridge_name='br-int',has_traffic_filtering=True,id=52bf11af-1372-4c5d-8bd8-81017da77de8,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52bf11af-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.696 253665 DEBUG os_vif [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:5c:0b,bridge_name='br-int',has_traffic_filtering=True,id=52bf11af-1372-4c5d-8bd8-81017da77de8,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52bf11af-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.696 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.697 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.697 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.701 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.701 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52bf11af-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.702 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52bf11af-13, col_values=(('external_ids', {'iface-id': '52bf11af-1372-4c5d-8bd8-81017da77de8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:5c:0b', 'vm-uuid': 'ff657cfc-b1bb-4545-bc13-ad240e69c666'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:15 np0005532048 NetworkManager[48920]: <info>  [1763802615.7059] manager: (tap52bf11af-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.707 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.711 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.712 253665 INFO os_vif [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:5c:0b,bridge_name='br-int',has_traffic_filtering=True,id=52bf11af-1372-4c5d-8bd8-81017da77de8,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52bf11af-13')#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.730 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.731 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:15 np0005532048 podman[283435]: 2025-11-22 09:10:15.777017739 +0000 UTC m=+0.055193980 container create a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.801 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.801 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.802 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] No VIF found with MAC fa:16:3e:58:5c:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.802 253665 INFO nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Using config drive#033[00m
Nov 22 04:10:15 np0005532048 systemd[1]: Started libpod-conmon-a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb.scope.
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.838 253665 DEBUG nova.storage.rbd_utils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image ff657cfc-b1bb-4545-bc13-ad240e69c666_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:15 np0005532048 podman[283435]: 2025-11-22 09:10:15.747508313 +0000 UTC m=+0.025684574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.852 253665 DEBUG nova.compute.manager [req-883135d7-893d-42dc-8278-9cb630dad984 req-e8efbffa-690a-43da-88f7-3484b4d97b2d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-vif-deleted-0122a4be-9c10-4475-ba7d-5c818be52474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:15 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:10:15 np0005532048 podman[283435]: 2025-11-22 09:10:15.888477584 +0000 UTC m=+0.166653855 container init a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 04:10:15 np0005532048 podman[283435]: 2025-11-22 09:10:15.896541729 +0000 UTC m=+0.174717970 container start a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/893282555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:15 np0005532048 laughing_joliot[283467]: 167 167
Nov 22 04:10:15 np0005532048 systemd[1]: libpod-a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb.scope: Deactivated successfully.
Nov 22 04:10:15 np0005532048 podman[283435]: 2025-11-22 09:10:15.906605504 +0000 UTC m=+0.184781745 container attach a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.905 253665 DEBUG oslo_concurrency.processutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:15 np0005532048 podman[283435]: 2025-11-22 09:10:15.907892255 +0000 UTC m=+0.186068496 container died a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:10:15 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:10:15 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e65c9ba47ed24e48939c734aa215904208b903a171e1cda51c434eec5dbcc4a9-merged.mount: Deactivated successfully.
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.952 253665 DEBUG nova.network.neutron [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Updated VIF entry in instance network info cache for port 52bf11af-1372-4c5d-8bd8-81017da77de8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.956 253665 DEBUG nova.network.neutron [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Updating instance_info_cache with network_info: [{"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.958 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.961 253665 DEBUG nova.objects.instance [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lazy-loading 'pci_devices' on Instance uuid 0e7ac107-5a5a-4066-9396-f22b877e4c2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:15 np0005532048 podman[283435]: 2025-11-22 09:10:15.972732019 +0000 UTC m=+0.250908270 container remove a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.976 253665 DEBUG oslo_concurrency.lockutils [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.977 253665 DEBUG nova.compute.manager [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-vif-unplugged-0122a4be-9c10-4475-ba7d-5c818be52474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.977 253665 DEBUG oslo_concurrency.lockutils [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.977 253665 DEBUG oslo_concurrency.lockutils [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.979 253665 DEBUG oslo_concurrency.lockutils [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.979 253665 DEBUG nova.compute.manager [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] No waiting events found dispatching network-vif-unplugged-0122a4be-9c10-4475-ba7d-5c818be52474 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.979 253665 DEBUG nova.compute.manager [req-d7a938ff-d7dc-4b6a-86a6-e7c08f08edd8 req-5d8379c4-1484-4f8c-8a34-423f42035f6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-vif-unplugged-0122a4be-9c10-4475-ba7d-5c818be52474 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:10:15 np0005532048 nova_compute[253661]: 2025-11-22 09:10:15.982 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <uuid>0e7ac107-5a5a-4066-9396-f22b877e4c2b</uuid>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <name>instance-00000015</name>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <nova:name>tempest-TenantUsagesTestJSON-server-1254272894</nova:name>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:10:14</nova:creationTime>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <nova:user uuid="d741f4ee50ae459697238fe0a7207afe">tempest-TenantUsagesTestJSON-238986020-project-member</nova:user>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <nova:project uuid="6869d1beac0b4bfab5de74e8692b55ed">tempest-TenantUsagesTestJSON-238986020</nova:project>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <entry name="serial">0e7ac107-5a5a-4066-9396-f22b877e4c2b</entry>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <entry name="uuid">0e7ac107-5a5a-4066-9396-f22b877e4c2b</entry>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk.config">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b/console.log" append="off"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:10:15 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:10:15 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:10:15 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:10:15 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:10:16 np0005532048 systemd[1]: libpod-conmon-a28475ab62982414ca60befc7168a2e748eca4b86967571af2de75c1c5255ceb.scope: Deactivated successfully.
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.035 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.036 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.036 253665 INFO nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Using config drive#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.095 253665 DEBUG nova.storage.rbd_utils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] rbd image 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.151 253665 INFO nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Creating config drive at /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666/disk.config#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.156 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcncqr4v5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.223 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:10:16 np0005532048 podman[283533]: 2025-11-22 09:10:16.240341483 +0000 UTC m=+0.097347734 container create 439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:16 np0005532048 podman[283533]: 2025-11-22 09:10:16.17469447 +0000 UTC m=+0.031700691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:10:16 np0005532048 systemd[1]: Started libpod-conmon-439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423.scope.
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.295 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcncqr4v5" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:16 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:10:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f2ce623868ceeeb5bf55f96ee067167dcc21c8b3f0f115d6f811f47f50de92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f2ce623868ceeeb5bf55f96ee067167dcc21c8b3f0f115d6f811f47f50de92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f2ce623868ceeeb5bf55f96ee067167dcc21c8b3f0f115d6f811f47f50de92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f2ce623868ceeeb5bf55f96ee067167dcc21c8b3f0f115d6f811f47f50de92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f2ce623868ceeeb5bf55f96ee067167dcc21c8b3f0f115d6f811f47f50de92/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.333 253665 DEBUG nova.storage.rbd_utils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] rbd image ff657cfc-b1bb-4545-bc13-ad240e69c666_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.349 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666/disk.config ff657cfc-b1bb-4545-bc13-ad240e69c666_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:16 np0005532048 podman[283533]: 2025-11-22 09:10:16.353838247 +0000 UTC m=+0.210844458 container init 439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:10:16 np0005532048 podman[283533]: 2025-11-22 09:10:16.361139034 +0000 UTC m=+0.218145245 container start 439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gauss, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 04:10:16 np0005532048 podman[283533]: 2025-11-22 09:10:16.366270909 +0000 UTC m=+0.223277120 container attach 439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gauss, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:10:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:10:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3751987839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.395 253665 INFO nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Creating config drive at /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b/disk.config#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.400 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjzoz7npp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.431 253665 DEBUG oslo_concurrency.processutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.437 253665 DEBUG nova.compute.provider_tree [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.452 253665 DEBUG nova.scheduler.client.report [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.498 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.503 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.504 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.504 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.504 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.543 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjzoz7npp" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.567 253665 DEBUG nova.storage.rbd_utils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] rbd image 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.571 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b/disk.config 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.606 253665 INFO nova.scheduler.client.report [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Deleted allocations for instance 18eb7df8-f3ac-44d2-86c1-db7c0c913c53#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.609 253665 DEBUG oslo_concurrency.processutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666/disk.config ff657cfc-b1bb-4545-bc13-ad240e69c666_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.609 253665 INFO nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Deleting local config drive /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666/disk.config because it was imported into RBD.#033[00m
Nov 22 04:10:16 np0005532048 kernel: tap52bf11af-13: entered promiscuous mode
Nov 22 04:10:16 np0005532048 NetworkManager[48920]: <info>  [1763802616.6674] manager: (tap52bf11af-13): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Nov 22 04:10:16 np0005532048 systemd-udevd[282834]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:10:16 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:16Z|00080|binding|INFO|Claiming lport 52bf11af-1372-4c5d-8bd8-81017da77de8 for this chassis.
Nov 22 04:10:16 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:16Z|00081|binding|INFO|52bf11af-1372-4c5d-8bd8-81017da77de8: Claiming fa:16:3e:58:5c:0b 10.100.0.5
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.668 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:16 np0005532048 NetworkManager[48920]: <info>  [1763802616.6853] device (tap52bf11af-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:10:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.686 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:5c:0b 10.100.0.5'], port_security=['fa:16:3e:58:5c:0b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ff657cfc-b1bb-4545-bc13-ad240e69c666', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef6e238d438c49959eb8bee112836e52', 'neutron:revision_number': '2', 'neutron:security_group_ids': '75ab40c0-07f4-4bb0-a066-aed1106fa100', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72afa370-b1fd-466e-b3d9-08000d4400d0, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=52bf11af-1372-4c5d-8bd8-81017da77de8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:10:16 np0005532048 NetworkManager[48920]: <info>  [1763802616.6888] device (tap52bf11af-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:10:16 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:16Z|00082|binding|INFO|Setting lport 52bf11af-1372-4c5d-8bd8-81017da77de8 ovn-installed in OVS
Nov 22 04:10:16 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:16Z|00083|binding|INFO|Setting lport 52bf11af-1372-4c5d-8bd8-81017da77de8 up in Southbound
Nov 22 04:10:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.689 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 52bf11af-1372-4c5d-8bd8-81017da77de8 in datapath e64548ac-5898-4d23-b6f7-17a1ae54c608 bound to our chassis#033[00m
Nov 22 04:10:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.691 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e64548ac-5898-4d23-b6f7-17a1ae54c608#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.705 253665 DEBUG oslo_concurrency.lockutils [None req-25a59031-41f8-43ec-9122-f37138da976d ee24e4812c424984881862883987d750 5879249ab50a40ec9553bc923bdd1042 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.706 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.717 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[67f263dd-107d-48ff-8689-16cd9ff5af18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:16 np0005532048 systemd-machined[215941]: New machine qemu-23-instance-00000014.
Nov 22 04:10:16 np0005532048 systemd[1]: Started Virtual Machine qemu-23-instance-00000014.
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.738 253665 DEBUG nova.compute.manager [req-373a86a8-92da-4935-bb6d-06e002df5d56 req-4e32699c-b844-44a8-8926-2777c5391415 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received event network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.739 253665 DEBUG oslo_concurrency.lockutils [req-373a86a8-92da-4935-bb6d-06e002df5d56 req-4e32699c-b844-44a8-8926-2777c5391415 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.739 253665 DEBUG oslo_concurrency.lockutils [req-373a86a8-92da-4935-bb6d-06e002df5d56 req-4e32699c-b844-44a8-8926-2777c5391415 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.739 253665 DEBUG oslo_concurrency.lockutils [req-373a86a8-92da-4935-bb6d-06e002df5d56 req-4e32699c-b844-44a8-8926-2777c5391415 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "18eb7df8-f3ac-44d2-86c1-db7c0c913c53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.740 253665 DEBUG nova.compute.manager [req-373a86a8-92da-4935-bb6d-06e002df5d56 req-4e32699c-b844-44a8-8926-2777c5391415 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] No waiting events found dispatching network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.740 253665 WARNING nova.compute.manager [req-373a86a8-92da-4935-bb6d-06e002df5d56 req-4e32699c-b844-44a8-8926-2777c5391415 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Received unexpected event network-vif-plugged-0122a4be-9c10-4475-ba7d-5c818be52474 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.745 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "96000606-0bc4-4cf1-9e33-360a640c2cb7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.745 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.756 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[13f25353-b74e-4166-a90e-50a56d1159a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.760 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ea682dc1-7246-4491-8c8f-55202017f748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.787 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:10:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.806 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f0cad83a-9c48-4b1d-9c83-48b4d22e9b5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.831 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[88d2a4b0-ac6f-4d27-ba4c-a4e2f7188370]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape64548ac-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545486, 'reachable_time': 19914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283681, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.851 253665 DEBUG oslo_concurrency.processutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b/disk.config 0e7ac107-5a5a-4066-9396-f22b877e4c2b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.852 253665 INFO nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Deleting local config drive /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b/disk.config because it was imported into RBD.#033[00m
Nov 22 04:10:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.871 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dfaec166-0fbf-497a-9070-f4a8caf5b9cc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape64548ac-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545502, 'tstamp': 545502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283682, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape64548ac-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545507, 'tstamp': 545507}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283682, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape64548ac-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.877 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.883 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape64548ac-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.884 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.884 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.885 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.886 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape64548ac-50, col_values=(('external_ids', {'iface-id': '791df5ce-fddc-4961-a1d0-6667026f8b13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:16.886 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.896 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:10:16 np0005532048 nova_compute[253661]: 2025-11-22 09:10:16.896 253665 INFO nova.compute.claims [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:10:16 np0005532048 systemd-machined[215941]: New machine qemu-24-instance-00000015.
Nov 22 04:10:16 np0005532048 systemd[1]: Started Virtual Machine qemu-24-instance-00000015.
Nov 22 04:10:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:10:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2589562911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.036 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.100 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 294 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 4.2 MiB/s wr, 308 op/s
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.174 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.175 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.189 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.189 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.202 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.203 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.212 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.213 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.220 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.222 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.236 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802617.2355912, ff657cfc-b1bb-4545-bc13-ad240e69c666 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.237 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] VM Started (Lifecycle Event)#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.257 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.263 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802617.2360687, ff657cfc-b1bb-4545-bc13-ad240e69c666 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.263 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.281 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.292 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.307 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.408 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802617.408283, 0e7ac107-5a5a-4066-9396-f22b877e4c2b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.411 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.414 253665 DEBUG nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.415 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.421 253665 INFO nova.virt.libvirt.driver [-] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Instance spawned successfully.#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.422 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.435 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.445 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.448 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.448 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.449 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.449 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.449 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.450 253665 DEBUG nova.virt.libvirt.driver [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.479 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.480 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802617.408365, 0e7ac107-5a5a-4066-9396-f22b877e4c2b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.480 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] VM Started (Lifecycle Event)#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.493 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.501 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.518 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:10:17 np0005532048 jovial_gauss[283551]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:10:17 np0005532048 jovial_gauss[283551]: --> relative data size: 1.0
Nov 22 04:10:17 np0005532048 jovial_gauss[283551]: --> All data devices are unavailable
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.670 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.671 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3947MB free_disk=59.8648681640625GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.671 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:17 np0005532048 systemd[1]: libpod-439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423.scope: Deactivated successfully.
Nov 22 04:10:17 np0005532048 systemd[1]: libpod-439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423.scope: Consumed 1.125s CPU time.
Nov 22 04:10:17 np0005532048 podman[283533]: 2025-11-22 09:10:17.68164849 +0000 UTC m=+1.538654691 container died 439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gauss, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:10:17 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a3f2ce623868ceeeb5bf55f96ee067167dcc21c8b3f0f115d6f811f47f50de92-merged.mount: Deactivated successfully.
Nov 22 04:10:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:10:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/978496543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.754 253665 INFO nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Took 4.09 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.754 253665 DEBUG nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.784 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.683s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:17 np0005532048 podman[283533]: 2025-11-22 09:10:17.793554376 +0000 UTC m=+1.650560587 container remove 439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.808 253665 DEBUG nova.compute.provider_tree [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.830 253665 DEBUG nova.scheduler.client.report [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:10:17 np0005532048 systemd[1]: libpod-conmon-439d5bd8a782ee0000a0f2d9dd45c458031df2f5fcccec4cacf8820f96b4e423.scope: Deactivated successfully.
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.835 253665 INFO nova.compute.manager [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Took 5.15 seconds to build instance.#033[00m
Nov 22 04:10:17 np0005532048 podman[283830]: 2025-11-22 09:10:17.86911379 +0000 UTC m=+0.159262985 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.877 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.878 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.881 253665 DEBUG oslo_concurrency.lockutils [None req-6062101b-56a7-4ad3-a040-284f931aa0a1 d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.882 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.966 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 04781543-b5ed-482a-a30a-0730fbcd12a1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.967 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 3ae08a2f-348c-406b-8ffc-9acb8a542e1c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.967 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d99bd27b-0ff3-493e-a69c-6c7ec034aa81 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.967 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance ff657cfc-b1bb-4545-bc13-ad240e69c666 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.967 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 0e7ac107-5a5a-4066-9396-f22b877e4c2b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.967 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 96000606-0bc4-4cf1-9e33-360a640c2cb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.968 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:10:17 np0005532048 nova_compute[253661]: 2025-11-22 09:10:17.968 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.194 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.194 253665 DEBUG nova.network.neutron [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.237 253665 INFO nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.240 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.289 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.401 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.403 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.404 253665 INFO nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Creating image(s)
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.436 253665 DEBUG nova.storage.rbd_utils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.498 253665 DEBUG nova.storage.rbd_utils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:10:18 np0005532048 podman[284053]: 2025-11-22 09:10:18.553087179 +0000 UTC m=+0.059364352 container create 3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poitras, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.557 253665 DEBUG nova.storage.rbd_utils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.577 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:10:18 np0005532048 systemd[1]: Started libpod-conmon-3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2.scope.
Nov 22 04:10:18 np0005532048 podman[284053]: 2025-11-22 09:10:18.521065562 +0000 UTC m=+0.027342775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:10:18 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:10:18 np0005532048 podman[284053]: 2025-11-22 09:10:18.665295252 +0000 UTC m=+0.171572445 container init 3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poitras, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:10:18 np0005532048 podman[284053]: 2025-11-22 09:10:18.674522855 +0000 UTC m=+0.180800028 container start 3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poitras, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.675 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.676 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.677 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.677 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:18 np0005532048 podman[284053]: 2025-11-22 09:10:18.678178904 +0000 UTC m=+0.184456077 container attach 3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poitras, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:10:18 np0005532048 quizzical_poitras[284090]: 167 167
Nov 22 04:10:18 np0005532048 systemd[1]: libpod-3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2.scope: Deactivated successfully.
Nov 22 04:10:18 np0005532048 conmon[284090]: conmon 3a929d02345659ba0be6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2.scope/container/memory.events
Nov 22 04:10:18 np0005532048 podman[284053]: 2025-11-22 09:10:18.683429872 +0000 UTC m=+0.189707035 container died 3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poitras, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 04:10:18 np0005532048 systemd[1]: var-lib-containers-storage-overlay-da0746c8cdb68eccbdff141d0d3d3dcb582ae3d065b7ced41349937555337199-merged.mount: Deactivated successfully.
Nov 22 04:10:18 np0005532048 podman[284053]: 2025-11-22 09:10:18.727794429 +0000 UTC m=+0.234071602 container remove 3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.736 253665 DEBUG nova.storage.rbd_utils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.755 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:10:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:10:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3966723609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:18 np0005532048 systemd[1]: libpod-conmon-3a929d02345659ba0be67bd95005fc48fcff09f70b2984372857f34384d4adf2.scope: Deactivated successfully.
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.803 253665 DEBUG nova.policy [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '05cafdbce8334f9380b4dbd1d21f7d58', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd78b26f20d674ae6a213d727050a50d1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.806 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.813 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.827 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.847 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802603.8446388, c5f708d0-4110-417f-8353-dc61992d22dc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.848 253665 INFO nova.compute.manager [-] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] VM Stopped (Lifecycle Event)
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.855 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.856 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:18 np0005532048 nova_compute[253661]: 2025-11-22 09:10:18.866 253665 DEBUG nova.compute.manager [None req-62b2d0fa-fab7-434c-b937-5068e8ba5677 - - - - - -] [instance: c5f708d0-4110-417f-8353-dc61992d22dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:10:18 np0005532048 podman[284152]: 2025-11-22 09:10:18.946253671 +0000 UTC m=+0.052513446 container create 69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:10:19 np0005532048 podman[284152]: 2025-11-22 09:10:18.916862847 +0000 UTC m=+0.023122642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:10:19 np0005532048 systemd[1]: Started libpod-conmon-69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8.scope.
Nov 22 04:10:19 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:10:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a1522dbc28e421c131eb3c0198613ff325161a14c3c7adb34c691815cd5d86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a1522dbc28e421c131eb3c0198613ff325161a14c3c7adb34c691815cd5d86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a1522dbc28e421c131eb3c0198613ff325161a14c3c7adb34c691815cd5d86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a1522dbc28e421c131eb3c0198613ff325161a14c3c7adb34c691815cd5d86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:19 np0005532048 podman[284152]: 2025-11-22 09:10:19.090184533 +0000 UTC m=+0.196444318 container init 69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 22 04:10:19 np0005532048 podman[284152]: 2025-11-22 09:10:19.100960115 +0000 UTC m=+0.207219890 container start 69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 04:10:19 np0005532048 podman[284152]: 2025-11-22 09:10:19.105592807 +0000 UTC m=+0.211852582 container attach 69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:10:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 288 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 5.2 MiB/s wr, 317 op/s
Nov 22 04:10:19 np0005532048 nova_compute[253661]: 2025-11-22 09:10:19.143 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:10:19 np0005532048 nova_compute[253661]: 2025-11-22 09:10:19.217 253665 DEBUG nova.storage.rbd_utils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] resizing rbd image 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:10:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:19Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:ea:eb 10.100.0.3
Nov 22 04:10:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:19Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:ea:eb 10.100.0.3
Nov 22 04:10:19 np0005532048 nova_compute[253661]: 2025-11-22 09:10:19.378 253665 DEBUG nova.objects.instance [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 96000606-0bc4-4cf1-9e33-360a640c2cb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:10:19 np0005532048 nova_compute[253661]: 2025-11-22 09:10:19.398 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:10:19 np0005532048 nova_compute[253661]: 2025-11-22 09:10:19.399 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Ensure instance console log exists: /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:10:19 np0005532048 nova_compute[253661]: 2025-11-22 09:10:19.400 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:10:19 np0005532048 nova_compute[253661]: 2025-11-22 09:10:19.401 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:10:19 np0005532048 nova_compute[253661]: 2025-11-22 09:10:19.401 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:19 np0005532048 nova_compute[253661]: 2025-11-22 09:10:19.593 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:19 np0005532048 nova_compute[253661]: 2025-11-22 09:10:19.856 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:10:19 np0005532048 nova_compute[253661]: 2025-11-22 09:10:19.857 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:10:19 np0005532048 nova_compute[253661]: 2025-11-22 09:10:19.879 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:10:19 np0005532048 nova_compute[253661]: 2025-11-22 09:10:19.879 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:10:19 np0005532048 nova_compute[253661]: 2025-11-22 09:10:19.880 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:10:19 np0005532048 nova_compute[253661]: 2025-11-22 09:10:19.880 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:10:19 np0005532048 nova_compute[253661]: 2025-11-22 09:10:19.881 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:10:19 np0005532048 confident_khorana[284171]: {
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:    "0": [
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:        {
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "devices": [
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "/dev/loop3"
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            ],
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "lv_name": "ceph_lv0",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "lv_size": "21470642176",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "name": "ceph_lv0",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "tags": {
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.cluster_name": "ceph",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.crush_device_class": "",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.encrypted": "0",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.osd_id": "0",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.type": "block",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.vdo": "0"
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            },
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "type": "block",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "vg_name": "ceph_vg0"
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:        }
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:    ],
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:    "1": [
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:        {
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "devices": [
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "/dev/loop4"
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            ],
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "lv_name": "ceph_lv1",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "lv_size": "21470642176",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "name": "ceph_lv1",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "tags": {
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.cluster_name": "ceph",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.crush_device_class": "",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.encrypted": "0",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.osd_id": "1",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.type": "block",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.vdo": "0"
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            },
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "type": "block",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "vg_name": "ceph_vg1"
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:        }
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:    ],
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:    "2": [
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:        {
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "devices": [
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "/dev/loop5"
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            ],
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "lv_name": "ceph_lv2",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "lv_size": "21470642176",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "name": "ceph_lv2",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "tags": {
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.cluster_name": "ceph",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.crush_device_class": "",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.encrypted": "0",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.osd_id": "2",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.type": "block",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:                "ceph.vdo": "0"
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            },
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "type": "block",
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:            "vg_name": "ceph_vg2"
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:        }
Nov 22 04:10:20 np0005532048 confident_khorana[284171]:    ]
Nov 22 04:10:20 np0005532048 confident_khorana[284171]: }
Nov 22 04:10:20 np0005532048 systemd[1]: libpod-69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8.scope: Deactivated successfully.
Nov 22 04:10:20 np0005532048 conmon[284171]: conmon 69303f0bd44d107a035f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8.scope/container/memory.events
Nov 22 04:10:20 np0005532048 podman[284152]: 2025-11-22 09:10:20.026527876 +0000 UTC m=+1.132787651 container died 69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 04:10:20 np0005532048 systemd[1]: var-lib-containers-storage-overlay-68a1522dbc28e421c131eb3c0198613ff325161a14c3c7adb34c691815cd5d86-merged.mount: Deactivated successfully.
Nov 22 04:10:20 np0005532048 podman[284152]: 2025-11-22 09:10:20.087572348 +0000 UTC m=+1.193832123 container remove 69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:10:20 np0005532048 systemd[1]: libpod-conmon-69303f0bd44d107a035ff5c4d1f15560f5dd2ca3888f5813a2c4b5e57f4176d8.scope: Deactivated successfully.
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.299 253665 DEBUG nova.compute.manager [req-4df4f276-08d0-4ab8-a277-55bd54463641 req-74f3e4df-87ad-44d7-b72e-a261289e317d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.300 253665 DEBUG oslo_concurrency.lockutils [req-4df4f276-08d0-4ab8-a277-55bd54463641 req-74f3e4df-87ad-44d7-b72e-a261289e317d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.300 253665 DEBUG oslo_concurrency.lockutils [req-4df4f276-08d0-4ab8-a277-55bd54463641 req-74f3e4df-87ad-44d7-b72e-a261289e317d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.300 253665 DEBUG oslo_concurrency.lockutils [req-4df4f276-08d0-4ab8-a277-55bd54463641 req-74f3e4df-87ad-44d7-b72e-a261289e317d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.300 253665 DEBUG nova.compute.manager [req-4df4f276-08d0-4ab8-a277-55bd54463641 req-74f3e4df-87ad-44d7-b72e-a261289e317d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Processing event network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.301 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.309 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.310 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802620.3106909, ff657cfc-b1bb-4545-bc13-ad240e69c666 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.311 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] VM Resumed (Lifecycle Event)
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.315 253665 INFO nova.virt.libvirt.driver [-] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Instance spawned successfully.
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.316 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.336 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.344 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.348 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.349 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.349 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.349 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.350 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.350 253665 DEBUG nova.virt.libvirt.driver [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.377 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.409 253665 INFO nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Took 9.09 seconds to spawn the instance on the hypervisor.
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.409 253665 DEBUG nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.474 253665 INFO nova.compute.manager [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Took 10.12 seconds to build instance.
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.498 253665 DEBUG oslo_concurrency.lockutils [None req-b864e957-798a-4970-9b36-e5d5c33e1253 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.646 253665 DEBUG nova.network.neutron [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Successfully created port: 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:10:20 np0005532048 nova_compute[253661]: 2025-11-22 09:10:20.735 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:20 np0005532048 podman[284397]: 2025-11-22 09:10:20.757463425 +0000 UTC m=+0.041408586 container create 3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mestorf, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 04:10:20 np0005532048 systemd[1]: Started libpod-conmon-3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0.scope.
Nov 22 04:10:20 np0005532048 podman[284397]: 2025-11-22 09:10:20.738976396 +0000 UTC m=+0.022921577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:10:20 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:10:20 np0005532048 podman[284397]: 2025-11-22 09:10:20.862987866 +0000 UTC m=+0.146933047 container init 3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mestorf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:10:20 np0005532048 podman[284397]: 2025-11-22 09:10:20.871195684 +0000 UTC m=+0.155140845 container start 3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:10:20 np0005532048 podman[284397]: 2025-11-22 09:10:20.875024548 +0000 UTC m=+0.158969719 container attach 3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mestorf, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:10:20 np0005532048 eager_mestorf[284410]: 167 167
Nov 22 04:10:20 np0005532048 systemd[1]: libpod-3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0.scope: Deactivated successfully.
Nov 22 04:10:20 np0005532048 podman[284397]: 2025-11-22 09:10:20.879726971 +0000 UTC m=+0.163672162 container died 3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:10:20 np0005532048 systemd[1]: var-lib-containers-storage-overlay-3ff6dfeff1a5bfe678f6473a30cda0f156466d9f70b1ee7668b7c4590944a8c5-merged.mount: Deactivated successfully.
Nov 22 04:10:20 np0005532048 podman[284397]: 2025-11-22 09:10:20.924114629 +0000 UTC m=+0.208059790 container remove 3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mestorf, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:10:20 np0005532048 systemd[1]: libpod-conmon-3cf04fc95d99b15d4758902b3db7fd03a3803d9f2b5298a79a7f8b69b13d68c0.scope: Deactivated successfully.
Nov 22 04:10:21 np0005532048 podman[284432]: 2025-11-22 09:10:21.127890765 +0000 UTC m=+0.054129225 container create dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 22 04:10:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 288 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.7 MiB/s wr, 259 op/s
Nov 22 04:10:21 np0005532048 systemd[1]: Started libpod-conmon-dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a.scope.
Nov 22 04:10:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:10:21 np0005532048 podman[284432]: 2025-11-22 09:10:21.104035786 +0000 UTC m=+0.030274256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:10:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e2cc61b112276390078f96cd8a8957fb260751d8a012793bd4365698cca776/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e2cc61b112276390078f96cd8a8957fb260751d8a012793bd4365698cca776/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e2cc61b112276390078f96cd8a8957fb260751d8a012793bd4365698cca776/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24e2cc61b112276390078f96cd8a8957fb260751d8a012793bd4365698cca776/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:21 np0005532048 podman[284432]: 2025-11-22 09:10:21.21839543 +0000 UTC m=+0.144633910 container init dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khorana, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:10:21 np0005532048 podman[284432]: 2025-11-22 09:10:21.226548838 +0000 UTC m=+0.152787308 container start dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 04:10:21 np0005532048 podman[284432]: 2025-11-22 09:10:21.231104749 +0000 UTC m=+0.157343209 container attach dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khorana, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.325 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquiring lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.327 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.327 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquiring lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.327 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.327 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.329 253665 INFO nova.compute.manager [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Terminating instance#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.329 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquiring lock "refresh_cache-0e7ac107-5a5a-4066-9396-f22b877e4c2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.330 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquired lock "refresh_cache-0e7ac107-5a5a-4066-9396-f22b877e4c2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.330 253665 DEBUG nova.network.neutron [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.496 253665 DEBUG nova.network.neutron [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.541 253665 DEBUG nova.network.neutron [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Successfully updated port: 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.567 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "refresh_cache-96000606-0bc4-4cf1-9e33-360a640c2cb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.568 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquired lock "refresh_cache-96000606-0bc4-4cf1-9e33-360a640c2cb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.568 253665 DEBUG nova.network.neutron [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.672 253665 DEBUG nova.compute.manager [req-da0cce79-0822-4641-b839-2fb400fd0e7a req-cb5d668a-3b78-4e68-9c9c-74f663a7de98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received event network-changed-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.672 253665 DEBUG nova.compute.manager [req-da0cce79-0822-4641-b839-2fb400fd0e7a req-cb5d668a-3b78-4e68-9c9c-74f663a7de98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Refreshing instance network info cache due to event network-changed-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.673 253665 DEBUG oslo_concurrency.lockutils [req-da0cce79-0822-4641-b839-2fb400fd0e7a req-cb5d668a-3b78-4e68-9c9c-74f663a7de98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-96000606-0bc4-4cf1-9e33-360a640c2cb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.749 253665 DEBUG nova.network.neutron [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.834 253665 DEBUG nova.network.neutron [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.845 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Releasing lock "refresh_cache-0e7ac107-5a5a-4066-9396-f22b877e4c2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:21 np0005532048 nova_compute[253661]: 2025-11-22 09:10:21.846 253665 DEBUG nova.compute.manager [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:10:21 np0005532048 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000015.scope: Deactivated successfully.
Nov 22 04:10:21 np0005532048 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000015.scope: Consumed 4.591s CPU time.
Nov 22 04:10:21 np0005532048 systemd-machined[215941]: Machine qemu-24-instance-00000015 terminated.
Nov 22 04:10:22 np0005532048 nova_compute[253661]: 2025-11-22 09:10:22.077 253665 INFO nova.virt.libvirt.driver [-] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Instance destroyed successfully.#033[00m
Nov 22 04:10:22 np0005532048 nova_compute[253661]: 2025-11-22 09:10:22.077 253665 DEBUG nova.objects.instance [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lazy-loading 'resources' on Instance uuid 0e7ac107-5a5a-4066-9396-f22b877e4c2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:22 np0005532048 nova_compute[253661]: 2025-11-22 09:10:22.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:10:22 np0005532048 nice_khorana[284447]: {
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:        "osd_id": 1,
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:        "type": "bluestore"
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:    },
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:        "osd_id": 0,
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:        "type": "bluestore"
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:    },
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:        "osd_id": 2,
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:        "type": "bluestore"
Nov 22 04:10:22 np0005532048 nice_khorana[284447]:    }
Nov 22 04:10:22 np0005532048 nice_khorana[284447]: }
Nov 22 04:10:22 np0005532048 systemd[1]: libpod-dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a.scope: Deactivated successfully.
Nov 22 04:10:22 np0005532048 systemd[1]: libpod-dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a.scope: Consumed 1.063s CPU time.
Nov 22 04:10:22 np0005532048 podman[284502]: 2025-11-22 09:10:22.398830077 +0000 UTC m=+0.035759008 container died dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:10:22 np0005532048 systemd[1]: var-lib-containers-storage-overlay-24e2cc61b112276390078f96cd8a8957fb260751d8a012793bd4365698cca776-merged.mount: Deactivated successfully.
Nov 22 04:10:22 np0005532048 nova_compute[253661]: 2025-11-22 09:10:22.462 253665 DEBUG nova.compute.manager [req-0007974c-bc05-4f94-b043-a86ebe40f389 req-6980d405-d421-4ed4-a763-1acaa01098bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:22 np0005532048 nova_compute[253661]: 2025-11-22 09:10:22.464 253665 DEBUG oslo_concurrency.lockutils [req-0007974c-bc05-4f94-b043-a86ebe40f389 req-6980d405-d421-4ed4-a763-1acaa01098bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:22 np0005532048 nova_compute[253661]: 2025-11-22 09:10:22.464 253665 DEBUG oslo_concurrency.lockutils [req-0007974c-bc05-4f94-b043-a86ebe40f389 req-6980d405-d421-4ed4-a763-1acaa01098bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:22 np0005532048 nova_compute[253661]: 2025-11-22 09:10:22.465 253665 DEBUG oslo_concurrency.lockutils [req-0007974c-bc05-4f94-b043-a86ebe40f389 req-6980d405-d421-4ed4-a763-1acaa01098bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:22 np0005532048 nova_compute[253661]: 2025-11-22 09:10:22.465 253665 DEBUG nova.compute.manager [req-0007974c-bc05-4f94-b043-a86ebe40f389 req-6980d405-d421-4ed4-a763-1acaa01098bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] No waiting events found dispatching network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:10:22 np0005532048 nova_compute[253661]: 2025-11-22 09:10:22.466 253665 WARNING nova.compute.manager [req-0007974c-bc05-4f94-b043-a86ebe40f389 req-6980d405-d421-4ed4-a763-1acaa01098bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received unexpected event network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:10:22 np0005532048 podman[284502]: 2025-11-22 09:10:22.474677419 +0000 UTC m=+0.111606310 container remove dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khorana, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:10:22 np0005532048 systemd[1]: libpod-conmon-dac616e75ae30affd6a3cbe3b7e1ff53ae44f01d54649ace86501eef2313675a.scope: Deactivated successfully.
Nov 22 04:10:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:10:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:10:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:10:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:10:22 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 8c5bbc95-47f4-47fd-942d-a4dc0babe9a8 does not exist
Nov 22 04:10:22 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 29dc3265-c2a3-493e-a023-36e0a0fdc149 does not exist
Nov 22 04:10:22 np0005532048 nova_compute[253661]: 2025-11-22 09:10:22.573 253665 INFO nova.virt.libvirt.driver [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Deleting instance files /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b_del#033[00m
Nov 22 04:10:22 np0005532048 nova_compute[253661]: 2025-11-22 09:10:22.573 253665 INFO nova.virt.libvirt.driver [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Deletion of /var/lib/nova/instances/0e7ac107-5a5a-4066-9396-f22b877e4c2b_del complete#033[00m
Nov 22 04:10:22 np0005532048 nova_compute[253661]: 2025-11-22 09:10:22.584 253665 DEBUG nova.network.neutron [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Updating instance_info_cache with network_info: [{"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:10:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:10:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:10:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:10:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:10:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:10:22 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:10:22 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:23 np0005532048 NetworkManager[48920]: <info>  [1763802623.1369] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Nov 22 04:10:23 np0005532048 NetworkManager[48920]: <info>  [1763802623.1381] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 22 04:10:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 314 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 6.3 MiB/s wr, 337 op/s
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.230 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:23Z|00084|binding|INFO|Releasing lport 791df5ce-fddc-4961-a1d0-6667026f8b13 from this chassis (sb_readonly=0)
Nov 22 04:10:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:23Z|00085|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.246 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.275 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.447 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Releasing lock "refresh_cache-96000606-0bc4-4cf1-9e33-360a640c2cb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.448 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Instance network_info: |[{"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.448 253665 DEBUG oslo_concurrency.lockutils [req-da0cce79-0822-4641-b839-2fb400fd0e7a req-cb5d668a-3b78-4e68-9c9c-74f663a7de98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-96000606-0bc4-4cf1-9e33-360a640c2cb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.448 253665 DEBUG nova.network.neutron [req-da0cce79-0822-4641-b839-2fb400fd0e7a req-cb5d668a-3b78-4e68-9c9c-74f663a7de98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Refreshing network info cache for port 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.451 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Start _get_guest_xml network_info=[{"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.455 253665 WARNING nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.461 253665 DEBUG nova.virt.libvirt.host [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.462 253665 DEBUG nova.virt.libvirt.host [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.467 253665 DEBUG nova.virt.libvirt.host [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.468 253665 DEBUG nova.virt.libvirt.host [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.468 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.469 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.469 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.469 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.470 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.470 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.470 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.471 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.471 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.471 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.472 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.472 253665 DEBUG nova.virt.hardware [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.475 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.717 253665 INFO nova.compute.manager [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Took 1.87 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.718 253665 DEBUG oslo.service.loopingcall [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.719 253665 DEBUG nova.compute.manager [-] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.719 253665 DEBUG nova.network.neutron [-] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.925 253665 DEBUG nova.network.neutron [-] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.935 253665 DEBUG nova.network.neutron [-] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2068492745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.951 253665 INFO nova.compute.manager [-] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Took 0.23 seconds to deallocate network for instance.#033[00m
Nov 22 04:10:23 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.961 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:23.999 253665 DEBUG nova.storage.rbd_utils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.004 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:24Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 04:10:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:24Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.124 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.128 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.308 253665 DEBUG oslo_concurrency.processutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2748705719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.521 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.523 253665 DEBUG nova.virt.libvirt.vif [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-70556130',display_name='tempest-ServersAdminTestJSON-server-70556130',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-70556130',id=22,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-74e7hdfl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-19852322
84-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:18Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=96000606-0bc4-4cf1-9e33-360a640c2cb7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.524 253665 DEBUG nova.network.os_vif_util [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.525 253665 DEBUG nova.network.os_vif_util [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:6f:23,bridge_name='br-int',has_traffic_filtering=True,id=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap411035c7-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.526 253665 DEBUG nova.objects.instance [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 96000606-0bc4-4cf1-9e33-360a640c2cb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.542 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  <uuid>96000606-0bc4-4cf1-9e33-360a640c2cb7</uuid>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  <name>instance-00000016</name>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersAdminTestJSON-server-70556130</nova:name>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:10:23</nova:creationTime>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:10:24 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:        <nova:user uuid="05cafdbce8334f9380b4dbd1d21f7d58">tempest-ServersAdminTestJSON-1985232284-project-member</nova:user>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:        <nova:project uuid="d78b26f20d674ae6a213d727050a50d1">tempest-ServersAdminTestJSON-1985232284</nova:project>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:        <nova:port uuid="411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa">
Nov 22 04:10:24 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <entry name="serial">96000606-0bc4-4cf1-9e33-360a640c2cb7</entry>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <entry name="uuid">96000606-0bc4-4cf1-9e33-360a640c2cb7</entry>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/96000606-0bc4-4cf1-9e33-360a640c2cb7_disk">
Nov 22 04:10:24 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:10:24 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/96000606-0bc4-4cf1-9e33-360a640c2cb7_disk.config">
Nov 22 04:10:24 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:10:24 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:cb:6f:23"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <target dev="tap411035c7-ec"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7/console.log" append="off"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:10:24 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:10:24 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:10:24 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:10:24 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.543 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Preparing to wait for external event network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.543 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.543 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.543 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.544 253665 DEBUG nova.virt.libvirt.vif [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-70556130',display_name='tempest-ServersAdminTestJSON-server-70556130',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-70556130',id=22,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-74e7hdfl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSO
N-1985232284-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:18Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=96000606-0bc4-4cf1-9e33-360a640c2cb7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.544 253665 DEBUG nova.network.os_vif_util [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.545 253665 DEBUG nova.network.os_vif_util [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:6f:23,bridge_name='br-int',has_traffic_filtering=True,id=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap411035c7-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.545 253665 DEBUG os_vif [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:6f:23,bridge_name='br-int',has_traffic_filtering=True,id=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap411035c7-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.546 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.546 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.547 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.551 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.552 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap411035c7-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.553 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap411035c7-ec, col_values=(('external_ids', {'iface-id': '411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:6f:23', 'vm-uuid': '96000606-0bc4-4cf1-9e33-360a640c2cb7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:24 np0005532048 NetworkManager[48920]: <info>  [1763802624.5556] manager: (tap411035c7-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.559 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.564 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.565 253665 INFO os_vif [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:6f:23,bridge_name='br-int',has_traffic_filtering=True,id=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap411035c7-ec')#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.596 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.639 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.639 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.639 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No VIF found with MAC fa:16:3e:cb:6f:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.640 253665 INFO nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Using config drive#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.669 253665 DEBUG nova.storage.rbd_utils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.779 253665 DEBUG nova.compute.manager [req-0aca3d88-255f-4e50-a84d-763002ecbd52 req-27af3fd9-3832-423b-b4d2-4af05c29313c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.780 253665 DEBUG nova.compute.manager [req-0aca3d88-255f-4e50-a84d-763002ecbd52 req-27af3fd9-3832-423b-b4d2-4af05c29313c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing instance network info cache due to event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.781 253665 DEBUG oslo_concurrency.lockutils [req-0aca3d88-255f-4e50-a84d-763002ecbd52 req-27af3fd9-3832-423b-b4d2-4af05c29313c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.782 253665 DEBUG oslo_concurrency.lockutils [req-0aca3d88-255f-4e50-a84d-763002ecbd52 req-27af3fd9-3832-423b-b4d2-4af05c29313c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.783 253665 DEBUG nova.network.neutron [req-0aca3d88-255f-4e50-a84d-763002ecbd52 req-27af3fd9-3832-423b-b4d2-4af05c29313c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:10:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:10:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3538232642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.825 253665 DEBUG oslo_concurrency.processutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.831 253665 DEBUG nova.compute.provider_tree [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.845 253665 DEBUG nova.scheduler.client.report [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.882 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:24 np0005532048 nova_compute[253661]: 2025-11-22 09:10:24.971 253665 INFO nova.scheduler.client.report [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Deleted allocations for instance 0e7ac107-5a5a-4066-9396-f22b877e4c2b#033[00m
Nov 22 04:10:25 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 22 04:10:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 335 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 7.5 MiB/s rd, 9.0 MiB/s wr, 486 op/s
Nov 22 04:10:25 np0005532048 nova_compute[253661]: 2025-11-22 09:10:25.151 253665 DEBUG oslo_concurrency.lockutils [None req-4332124d-d6df-4edc-9146-f440c6ac628a d741f4ee50ae459697238fe0a7207afe 6869d1beac0b4bfab5de74e8692b55ed - - default default] Lock "0e7ac107-5a5a-4066-9396-f22b877e4c2b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:25Z|00086|binding|INFO|Releasing lport 791df5ce-fddc-4961-a1d0-6667026f8b13 from this chassis (sb_readonly=0)
Nov 22 04:10:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:25Z|00087|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 04:10:25 np0005532048 nova_compute[253661]: 2025-11-22 09:10:25.537 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:26 np0005532048 nova_compute[253661]: 2025-11-22 09:10:26.092 253665 INFO nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Creating config drive at /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7/disk.config#033[00m
Nov 22 04:10:26 np0005532048 nova_compute[253661]: 2025-11-22 09:10:26.096 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ir_coe6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:26 np0005532048 nova_compute[253661]: 2025-11-22 09:10:26.230 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ir_coe6" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:26 np0005532048 nova_compute[253661]: 2025-11-22 09:10:26.288 253665 DEBUG nova.storage.rbd_utils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:26 np0005532048 nova_compute[253661]: 2025-11-22 09:10:26.294 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7/disk.config 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:26 np0005532048 nova_compute[253661]: 2025-11-22 09:10:26.451 253665 DEBUG oslo_concurrency.processutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7/disk.config 96000606-0bc4-4cf1-9e33-360a640c2cb7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:26 np0005532048 nova_compute[253661]: 2025-11-22 09:10:26.452 253665 INFO nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Deleting local config drive /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7/disk.config because it was imported into RBD.#033[00m
Nov 22 04:10:26 np0005532048 kernel: tap411035c7-ec: entered promiscuous mode
Nov 22 04:10:26 np0005532048 NetworkManager[48920]: <info>  [1763802626.5224] manager: (tap411035c7-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Nov 22 04:10:26 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:26Z|00088|binding|INFO|Claiming lport 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa for this chassis.
Nov 22 04:10:26 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:26Z|00089|binding|INFO|411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa: Claiming fa:16:3e:cb:6f:23 10.100.0.10
Nov 22 04:10:26 np0005532048 nova_compute[253661]: 2025-11-22 09:10:26.528 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:26 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:26Z|00090|binding|INFO|Setting lport 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa ovn-installed in OVS
Nov 22 04:10:26 np0005532048 systemd-udevd[284725]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:10:26 np0005532048 nova_compute[253661]: 2025-11-22 09:10:26.560 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:26 np0005532048 nova_compute[253661]: 2025-11-22 09:10:26.566 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:26 np0005532048 systemd-machined[215941]: New machine qemu-25-instance-00000016.
Nov 22 04:10:26 np0005532048 NetworkManager[48920]: <info>  [1763802626.5850] device (tap411035c7-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:10:26 np0005532048 NetworkManager[48920]: <info>  [1763802626.5870] device (tap411035c7-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:10:26 np0005532048 systemd[1]: Started Virtual Machine qemu-25-instance-00000016.
Nov 22 04:10:26 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:26Z|00091|binding|INFO|Setting lport 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa up in Southbound
Nov 22 04:10:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.595 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:6f:23 10.100.0.10'], port_security=['fa:16:3e:cb:6f:23 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '96000606-0bc4-4cf1-9e33-360a640c2cb7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:10:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.596 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a bound to our chassis#033[00m
Nov 22 04:10:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.598 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a#033[00m
Nov 22 04:10:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.617 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[00baf2e3-4eb4-4f96-95da-e966c6186229]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.664 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad91ab6-cfe7-48b9-afee-eff8721a0572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.670 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a3abd501-5668-41f9-a8fb-cec5e63751fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.704 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[20416871-4354-42d7-9787-971e056cf6e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.724 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3cd94e-da6e-4755-970d-eb48f3394db9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284741, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.743 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[786f8080-a5c1-48bb-92ed-f8300a3f2c79]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284742, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284742, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.746 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:26 np0005532048 nova_compute[253661]: 2025-11-22 09:10:26.748 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:26 np0005532048 nova_compute[253661]: 2025-11-22 09:10:26.750 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.751 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.751 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.752 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:26.753 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:26 np0005532048 nova_compute[253661]: 2025-11-22 09:10:26.993 253665 DEBUG nova.network.neutron [req-da0cce79-0822-4641-b839-2fb400fd0e7a req-cb5d668a-3b78-4e68-9c9c-74f663a7de98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Updated VIF entry in instance network info cache for port 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:10:26 np0005532048 nova_compute[253661]: 2025-11-22 09:10:26.994 253665 DEBUG nova.network.neutron [req-da0cce79-0822-4641-b839-2fb400fd0e7a req-cb5d668a-3b78-4e68-9c9c-74f663a7de98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Updating instance_info_cache with network_info: [{"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:27 np0005532048 nova_compute[253661]: 2025-11-22 09:10:27.010 253665 DEBUG oslo_concurrency.lockutils [req-da0cce79-0822-4641-b839-2fb400fd0e7a req-cb5d668a-3b78-4e68-9c9c-74f663a7de98 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-96000606-0bc4-4cf1-9e33-360a640c2cb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 349 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 9.0 MiB/s wr, 454 op/s
Nov 22 04:10:27 np0005532048 nova_compute[253661]: 2025-11-22 09:10:27.349 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802627.3485453, 96000606-0bc4-4cf1-9e33-360a640c2cb7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:27 np0005532048 nova_compute[253661]: 2025-11-22 09:10:27.350 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] VM Started (Lifecycle Event)#033[00m
Nov 22 04:10:27 np0005532048 nova_compute[253661]: 2025-11-22 09:10:27.368 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:27 np0005532048 nova_compute[253661]: 2025-11-22 09:10:27.374 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802627.3488536, 96000606-0bc4-4cf1-9e33-360a640c2cb7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:27 np0005532048 nova_compute[253661]: 2025-11-22 09:10:27.375 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:10:27 np0005532048 nova_compute[253661]: 2025-11-22 09:10:27.396 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:27 np0005532048 nova_compute[253661]: 2025-11-22 09:10:27.401 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:10:27 np0005532048 nova_compute[253661]: 2025-11-22 09:10:27.417 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:10:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:27Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0c:5a:f3 10.100.0.11
Nov 22 04:10:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:27Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:5a:f3 10.100.0.11
Nov 22 04:10:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:27.953 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:27.956 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:27.957 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.110 253665 DEBUG nova.compute.manager [req-8c7f2658-e0fd-4505-b6cb-1c817b29b251 req-bc505504-eb95-478c-8575-39519a92d126 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received event network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.111 253665 DEBUG oslo_concurrency.lockutils [req-8c7f2658-e0fd-4505-b6cb-1c817b29b251 req-bc505504-eb95-478c-8575-39519a92d126 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.111 253665 DEBUG oslo_concurrency.lockutils [req-8c7f2658-e0fd-4505-b6cb-1c817b29b251 req-bc505504-eb95-478c-8575-39519a92d126 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.111 253665 DEBUG oslo_concurrency.lockutils [req-8c7f2658-e0fd-4505-b6cb-1c817b29b251 req-bc505504-eb95-478c-8575-39519a92d126 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.111 253665 DEBUG nova.compute.manager [req-8c7f2658-e0fd-4505-b6cb-1c817b29b251 req-bc505504-eb95-478c-8575-39519a92d126 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Processing event network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.112 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.124 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802628.1156123, 96000606-0bc4-4cf1-9e33-360a640c2cb7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.125 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.127 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.144 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.149 253665 INFO nova.virt.libvirt.driver [-] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Instance spawned successfully.#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.149 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.152 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.175 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.180 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.180 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.181 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.182 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.183 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.183 253665 DEBUG nova.virt.libvirt.driver [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.236 253665 INFO nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Took 9.83 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.237 253665 DEBUG nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.302 253665 INFO nova.compute.manager [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Took 11.44 seconds to build instance.#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.324 253665 DEBUG oslo_concurrency.lockutils [None req-239cc68d-c381-4eb3-a050-0202dcc792db 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.467 253665 DEBUG nova.network.neutron [req-0aca3d88-255f-4e50-a84d-763002ecbd52 req-27af3fd9-3832-423b-b4d2-4af05c29313c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updated VIF entry in instance network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.468 253665 DEBUG nova.network.neutron [req-0aca3d88-255f-4e50-a84d-763002ecbd52 req-27af3fd9-3832-423b-b4d2-4af05c29313c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updating instance_info_cache with network_info: [{"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.483 253665 DEBUG oslo_concurrency.lockutils [req-0aca3d88-255f-4e50-a84d-763002ecbd52 req-27af3fd9-3832-423b-b4d2-4af05c29313c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.875 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802613.8735316, 18eb7df8-f3ac-44d2-86c1-db7c0c913c53 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.876 253665 INFO nova.compute.manager [-] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:10:28 np0005532048 nova_compute[253661]: 2025-11-22 09:10:28.897 253665 DEBUG nova.compute.manager [None req-7e6dbe9e-45fd-4ecf-936a-7b83d3f4bed9 - - - - - -] [instance: 18eb7df8-f3ac-44d2-86c1-db7c0c913c53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 372 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 9.7 MiB/s wr, 455 op/s
Nov 22 04:10:29 np0005532048 nova_compute[253661]: 2025-11-22 09:10:29.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:29 np0005532048 nova_compute[253661]: 2025-11-22 09:10:29.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:30 np0005532048 nova_compute[253661]: 2025-11-22 09:10:30.252 253665 DEBUG nova.compute.manager [req-55843386-f7d9-44d5-bff7-f6b526139c3a req-89b07186-6aa1-40b1-b862-e2ca4e6fcbbf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received event network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:30 np0005532048 nova_compute[253661]: 2025-11-22 09:10:30.253 253665 DEBUG oslo_concurrency.lockutils [req-55843386-f7d9-44d5-bff7-f6b526139c3a req-89b07186-6aa1-40b1-b862-e2ca4e6fcbbf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:30 np0005532048 nova_compute[253661]: 2025-11-22 09:10:30.253 253665 DEBUG oslo_concurrency.lockutils [req-55843386-f7d9-44d5-bff7-f6b526139c3a req-89b07186-6aa1-40b1-b862-e2ca4e6fcbbf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:30 np0005532048 nova_compute[253661]: 2025-11-22 09:10:30.254 253665 DEBUG oslo_concurrency.lockutils [req-55843386-f7d9-44d5-bff7-f6b526139c3a req-89b07186-6aa1-40b1-b862-e2ca4e6fcbbf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:30 np0005532048 nova_compute[253661]: 2025-11-22 09:10:30.254 253665 DEBUG nova.compute.manager [req-55843386-f7d9-44d5-bff7-f6b526139c3a req-89b07186-6aa1-40b1-b862-e2ca4e6fcbbf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] No waiting events found dispatching network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:10:30 np0005532048 nova_compute[253661]: 2025-11-22 09:10:30.254 253665 WARNING nova.compute.manager [req-55843386-f7d9-44d5-bff7-f6b526139c3a req-89b07186-6aa1-40b1-b862-e2ca4e6fcbbf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received unexpected event network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa for instance with vm_state active and task_state None.#033[00m
Nov 22 04:10:30 np0005532048 nova_compute[253661]: 2025-11-22 09:10:30.644 253665 DEBUG nova.compute.manager [req-5b8632e7-8899-4758-8fa3-e0a4c95b7d46 req-f3af109c-d299-4613-88a2-9ce3d7b9a33f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:30 np0005532048 nova_compute[253661]: 2025-11-22 09:10:30.645 253665 DEBUG nova.compute.manager [req-5b8632e7-8899-4758-8fa3-e0a4c95b7d46 req-f3af109c-d299-4613-88a2-9ce3d7b9a33f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing instance network info cache due to event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:10:30 np0005532048 nova_compute[253661]: 2025-11-22 09:10:30.645 253665 DEBUG oslo_concurrency.lockutils [req-5b8632e7-8899-4758-8fa3-e0a4c95b7d46 req-f3af109c-d299-4613-88a2-9ce3d7b9a33f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:30 np0005532048 nova_compute[253661]: 2025-11-22 09:10:30.646 253665 DEBUG oslo_concurrency.lockutils [req-5b8632e7-8899-4758-8fa3-e0a4c95b7d46 req-f3af109c-d299-4613-88a2-9ce3d7b9a33f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:30 np0005532048 nova_compute[253661]: 2025-11-22 09:10:30.646 253665 DEBUG nova.network.neutron [req-5b8632e7-8899-4758-8fa3-e0a4c95b7d46 req-f3af109c-d299-4613-88a2-9ce3d7b9a33f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:10:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:30.806 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:10:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:30.807 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:10:30 np0005532048 nova_compute[253661]: 2025-11-22 09:10:30.808 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 372 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 7.0 MiB/s wr, 362 op/s
Nov 22 04:10:32 np0005532048 nova_compute[253661]: 2025-11-22 09:10:32.655 253665 DEBUG nova.network.neutron [req-5b8632e7-8899-4758-8fa3-e0a4c95b7d46 req-f3af109c-d299-4613-88a2-9ce3d7b9a33f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updated VIF entry in instance network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:10:32 np0005532048 nova_compute[253661]: 2025-11-22 09:10:32.656 253665 DEBUG nova.network.neutron [req-5b8632e7-8899-4758-8fa3-e0a4c95b7d46 req-f3af109c-d299-4613-88a2-9ce3d7b9a33f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updating instance_info_cache with network_info: [{"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:10:32 np0005532048 nova_compute[253661]: 2025-11-22 09:10:32.693 253665 DEBUG oslo_concurrency.lockutils [req-5b8632e7-8899-4758-8fa3-e0a4c95b7d46 req-f3af109c-d299-4613-88a2-9ce3d7b9a33f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:10:32 np0005532048 nova_compute[253661]: 2025-11-22 09:10:32.823 253665 DEBUG nova.compute.manager [req-cb6db502-2f98-4009-be60-d3e4b9c418aa req-4f22ae84-27b7-4cd6-a0ad-422bf0e849ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-changed-52bf11af-1372-4c5d-8bd8-81017da77de8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:10:32 np0005532048 nova_compute[253661]: 2025-11-22 09:10:32.824 253665 DEBUG nova.compute.manager [req-cb6db502-2f98-4009-be60-d3e4b9c418aa req-4f22ae84-27b7-4cd6-a0ad-422bf0e849ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Refreshing instance network info cache due to event network-changed-52bf11af-1372-4c5d-8bd8-81017da77de8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:10:32 np0005532048 nova_compute[253661]: 2025-11-22 09:10:32.824 253665 DEBUG oslo_concurrency.lockutils [req-cb6db502-2f98-4009-be60-d3e4b9c418aa req-4f22ae84-27b7-4cd6-a0ad-422bf0e849ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:10:32 np0005532048 nova_compute[253661]: 2025-11-22 09:10:32.824 253665 DEBUG oslo_concurrency.lockutils [req-cb6db502-2f98-4009-be60-d3e4b9c418aa req-4f22ae84-27b7-4cd6-a0ad-422bf0e849ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:10:32 np0005532048 nova_compute[253661]: 2025-11-22 09:10:32.825 253665 DEBUG nova.network.neutron [req-cb6db502-2f98-4009-be60-d3e4b9c418aa req-4f22ae84-27b7-4cd6-a0ad-422bf0e849ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Refreshing network info cache for port 52bf11af-1372-4c5d-8bd8-81017da77de8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:10:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 372 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 5.3 MiB/s rd, 7.1 MiB/s wr, 385 op/s
Nov 22 04:10:33 np0005532048 nova_compute[253661]: 2025-11-22 09:10:33.584 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "de145d76-062b-4362-bc82-09e09d2f9154" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:10:33 np0005532048 nova_compute[253661]: 2025-11-22 09:10:33.584 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:10:33 np0005532048 nova_compute[253661]: 2025-11-22 09:10:33.601 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:10:33 np0005532048 nova_compute[253661]: 2025-11-22 09:10:33.666 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:10:33 np0005532048 nova_compute[253661]: 2025-11-22 09:10:33.667 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:10:33 np0005532048 nova_compute[253661]: 2025-11-22 09:10:33.676 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:10:33 np0005532048 nova_compute[253661]: 2025-11-22 09:10:33.676 253665 INFO nova.compute.claims [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:10:33 np0005532048 nova_compute[253661]: 2025-11-22 09:10:33.872 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.166 253665 DEBUG nova.network.neutron [req-cb6db502-2f98-4009-be60-d3e4b9c418aa req-4f22ae84-27b7-4cd6-a0ad-422bf0e849ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Updated VIF entry in instance network info cache for port 52bf11af-1372-4c5d-8bd8-81017da77de8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.167 253665 DEBUG nova.network.neutron [req-cb6db502-2f98-4009-be60-d3e4b9c418aa req-4f22ae84-27b7-4cd6-a0ad-422bf0e849ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Updating instance_info_cache with network_info: [{"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.224 253665 DEBUG oslo_concurrency.lockutils [req-cb6db502-2f98-4009-be60-d3e4b9c418aa req-4f22ae84-27b7-4cd6-a0ad-422bf0e849ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:10:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:10:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2985083650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.475 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.483 253665 DEBUG nova.compute.provider_tree [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.499 253665 DEBUG nova.scheduler.client.report [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.519 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.520 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:34 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:34Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:58:5c:0b 10.100.0.5
Nov 22 04:10:34 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:34Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:58:5c:0b 10.100.0.5
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.565 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.565 253665 DEBUG nova.network.neutron [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.582 253665 INFO nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.599 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.602 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.685 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.687 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.688 253665 INFO nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Creating image(s)
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.715 253665 DEBUG nova.storage.rbd_utils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image de145d76-062b-4362-bc82-09e09d2f9154_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.748 253665 DEBUG nova.storage.rbd_utils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image de145d76-062b-4362-bc82-09e09d2f9154_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.777 253665 DEBUG nova.storage.rbd_utils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image de145d76-062b-4362-bc82-09e09d2f9154_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.782 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.866 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.868 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.869 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.869 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.891 253665 DEBUG nova.storage.rbd_utils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image de145d76-062b-4362-bc82-09e09d2f9154_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:10:34 np0005532048 nova_compute[253661]: 2025-11-22 09:10:34.895 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 de145d76-062b-4362-bc82-09e09d2f9154_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:10:35 np0005532048 nova_compute[253661]: 2025-11-22 09:10:35.038 253665 DEBUG nova.policy [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '05cafdbce8334f9380b4dbd1d21f7d58', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd78b26f20d674ae6a213d727050a50d1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:10:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 372 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.5 MiB/s wr, 351 op/s
Nov 22 04:10:35 np0005532048 nova_compute[253661]: 2025-11-22 09:10:35.273 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 de145d76-062b-4362-bc82-09e09d2f9154_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.377s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:10:35 np0005532048 nova_compute[253661]: 2025-11-22 09:10:35.365 253665 DEBUG nova.storage.rbd_utils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] resizing rbd image de145d76-062b-4362-bc82-09e09d2f9154_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:10:35 np0005532048 nova_compute[253661]: 2025-11-22 09:10:35.516 253665 DEBUG nova.objects.instance [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'migration_context' on Instance uuid de145d76-062b-4362-bc82-09e09d2f9154 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:10:35 np0005532048 nova_compute[253661]: 2025-11-22 09:10:35.529 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:10:35 np0005532048 nova_compute[253661]: 2025-11-22 09:10:35.530 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Ensure instance console log exists: /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:10:35 np0005532048 nova_compute[253661]: 2025-11-22 09:10:35.531 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:10:35 np0005532048 nova_compute[253661]: 2025-11-22 09:10:35.532 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:10:35 np0005532048 nova_compute[253661]: 2025-11-22 09:10:35.532 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:35 np0005532048 nova_compute[253661]: 2025-11-22 09:10:35.929 253665 DEBUG nova.network.neutron [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Successfully created port: c048a826-73ad-49d3-a29f-5d790d359e51 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:10:36 np0005532048 nova_compute[253661]: 2025-11-22 09:10:36.217 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:37 np0005532048 nova_compute[253661]: 2025-11-22 09:10:37.075 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802622.0738266, 0e7ac107-5a5a-4066-9396-f22b877e4c2b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:10:37 np0005532048 nova_compute[253661]: 2025-11-22 09:10:37.077 253665 INFO nova.compute.manager [-] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] VM Stopped (Lifecycle Event)
Nov 22 04:10:37 np0005532048 nova_compute[253661]: 2025-11-22 09:10:37.095 253665 DEBUG nova.compute.manager [None req-9f017c36-6a7b-45ad-8210-da2334bf98d2 - - - - - -] [instance: 0e7ac107-5a5a-4066-9396-f22b877e4c2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:10:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 394 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 220 op/s
Nov 22 04:10:37 np0005532048 nova_compute[253661]: 2025-11-22 09:10:37.150 253665 DEBUG nova.network.neutron [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Successfully updated port: c048a826-73ad-49d3-a29f-5d790d359e51 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:10:37 np0005532048 nova_compute[253661]: 2025-11-22 09:10:37.165 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:10:37 np0005532048 nova_compute[253661]: 2025-11-22 09:10:37.166 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquired lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:10:37 np0005532048 nova_compute[253661]: 2025-11-22 09:10:37.166 253665 DEBUG nova.network.neutron [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:10:37 np0005532048 nova_compute[253661]: 2025-11-22 09:10:37.382 253665 DEBUG nova.network.neutron [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:10:37 np0005532048 nova_compute[253661]: 2025-11-22 09:10:37.803 253665 DEBUG nova.compute.manager [req-6aa3424f-7498-4918-9ca2-1beb124c92a7 req-e2ea0277-9be0-482c-a168-31ac453f85e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received event network-changed-c048a826-73ad-49d3-a29f-5d790d359e51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:10:37 np0005532048 nova_compute[253661]: 2025-11-22 09:10:37.804 253665 DEBUG nova.compute.manager [req-6aa3424f-7498-4918-9ca2-1beb124c92a7 req-e2ea0277-9be0-482c-a168-31ac453f85e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Refreshing instance network info cache due to event network-changed-c048a826-73ad-49d3-a29f-5d790d359e51. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:10:37 np0005532048 nova_compute[253661]: 2025-11-22 09:10:37.804 253665 DEBUG oslo_concurrency.lockutils [req-6aa3424f-7498-4918-9ca2-1beb124c92a7 req-e2ea0277-9be0-482c-a168-31ac453f85e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:10:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:37.810 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:10:37 np0005532048 nova_compute[253661]: 2025-11-22 09:10:37.878 253665 DEBUG nova.compute.manager [req-8f763b58-7f12-4eab-b8fb-7376c2061059 req-12443405-8862-46d3-8ccc-7377c1e6314d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-changed-52bf11af-1372-4c5d-8bd8-81017da77de8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:37 np0005532048 nova_compute[253661]: 2025-11-22 09:10:37.879 253665 DEBUG nova.compute.manager [req-8f763b58-7f12-4eab-b8fb-7376c2061059 req-12443405-8862-46d3-8ccc-7377c1e6314d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Refreshing instance network info cache due to event network-changed-52bf11af-1372-4c5d-8bd8-81017da77de8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:10:37 np0005532048 nova_compute[253661]: 2025-11-22 09:10:37.879 253665 DEBUG oslo_concurrency.lockutils [req-8f763b58-7f12-4eab-b8fb-7376c2061059 req-12443405-8862-46d3-8ccc-7377c1e6314d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:37 np0005532048 nova_compute[253661]: 2025-11-22 09:10:37.880 253665 DEBUG oslo_concurrency.lockutils [req-8f763b58-7f12-4eab-b8fb-7376c2061059 req-12443405-8862-46d3-8ccc-7377c1e6314d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:37 np0005532048 nova_compute[253661]: 2025-11-22 09:10:37.880 253665 DEBUG nova.network.neutron [req-8f763b58-7f12-4eab-b8fb-7376c2061059 req-12443405-8862-46d3-8ccc-7377c1e6314d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Refreshing network info cache for port 52bf11af-1372-4c5d-8bd8-81017da77de8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:10:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 451 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.4 MiB/s wr, 219 op/s
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.264 253665 DEBUG nova.network.neutron [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Updating instance_info_cache with network_info: [{"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.295 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Releasing lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.296 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Instance network_info: |[{"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.296 253665 DEBUG oslo_concurrency.lockutils [req-6aa3424f-7498-4918-9ca2-1beb124c92a7 req-e2ea0277-9be0-482c-a168-31ac453f85e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.297 253665 DEBUG nova.network.neutron [req-6aa3424f-7498-4918-9ca2-1beb124c92a7 req-e2ea0277-9be0-482c-a168-31ac453f85e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Refreshing network info cache for port c048a826-73ad-49d3-a29f-5d790d359e51 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.300 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Start _get_guest_xml network_info=[{"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.304 253665 WARNING nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.311 253665 DEBUG nova.virt.libvirt.host [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.311 253665 DEBUG nova.virt.libvirt.host [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.315 253665 DEBUG nova.virt.libvirt.host [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.316 253665 DEBUG nova.virt.libvirt.host [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.317 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.317 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.318 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.318 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.318 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.318 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.319 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.319 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.319 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.320 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.320 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.320 253665 DEBUG nova.virt.hardware [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.323 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.560 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.605 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3720399152' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.835 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.858 253665 DEBUG nova.storage.rbd_utils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image de145d76-062b-4362-bc82-09e09d2f9154_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.862 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.925 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "ff657cfc-b1bb-4545-bc13-ad240e69c666" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.926 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.926 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.926 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.927 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.928 253665 INFO nova.compute.manager [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Terminating instance#033[00m
Nov 22 04:10:39 np0005532048 nova_compute[253661]: 2025-11-22 09:10:39.929 253665 DEBUG nova.compute.manager [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.202 253665 DEBUG nova.network.neutron [req-8f763b58-7f12-4eab-b8fb-7376c2061059 req-12443405-8862-46d3-8ccc-7377c1e6314d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Updated VIF entry in instance network info cache for port 52bf11af-1372-4c5d-8bd8-81017da77de8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.203 253665 DEBUG nova.network.neutron [req-8f763b58-7f12-4eab-b8fb-7376c2061059 req-12443405-8862-46d3-8ccc-7377c1e6314d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Updating instance_info_cache with network_info: [{"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.219 253665 DEBUG oslo_concurrency.lockutils [req-8f763b58-7f12-4eab-b8fb-7376c2061059 req-12443405-8862-46d3-8ccc-7377c1e6314d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ff657cfc-b1bb-4545-bc13-ad240e69c666" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:40 np0005532048 kernel: tap52bf11af-13 (unregistering): left promiscuous mode
Nov 22 04:10:40 np0005532048 NetworkManager[48920]: <info>  [1763802640.2839] device (tap52bf11af-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:10:40 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:40Z|00092|binding|INFO|Releasing lport 52bf11af-1372-4c5d-8bd8-81017da77de8 from this chassis (sb_readonly=0)
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.294 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:40 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:40Z|00093|binding|INFO|Setting lport 52bf11af-1372-4c5d-8bd8-81017da77de8 down in Southbound
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.298 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:40 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:40Z|00094|binding|INFO|Removing iface tap52bf11af-13 ovn-installed in OVS
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.304 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:5c:0b 10.100.0.5'], port_security=['fa:16:3e:58:5c:0b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ff657cfc-b1bb-4545-bc13-ad240e69c666', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef6e238d438c49959eb8bee112836e52', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75ab40c0-07f4-4bb0-a066-aed1106fa100', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72afa370-b1fd-466e-b3d9-08000d4400d0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=52bf11af-1372-4c5d-8bd8-81017da77de8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.305 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 52bf11af-1372-4c5d-8bd8-81017da77de8 in datapath e64548ac-5898-4d23-b6f7-17a1ae54c608 unbound from our chassis#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.309 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e64548ac-5898-4d23-b6f7-17a1ae54c608#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.323 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.334 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[62573e58-2dc1-4f22-ad65-1f3f0ffa0bdd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:40 np0005532048 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000014.scope: Deactivated successfully.
Nov 22 04:10:40 np0005532048 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000014.scope: Consumed 14.994s CPU time.
Nov 22 04:10:40 np0005532048 systemd-machined[215941]: Machine qemu-23-instance-00000014 terminated.
Nov 22 04:10:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3855083627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.374 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac3eda2-f3c6-4789-a8ca-f327b9802476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.379 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d156547f-f964-4f0f-ba1a-ff3e48e38fd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.384 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.386 253665 DEBUG nova.virt.libvirt.vif [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-27339221',display_name='tempest-ServersAdminTestJSON-server-27339221',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-27339221',id=23,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-xz93pz1e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:34Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=de145d76-062b-4362-bc82-09e09d2f9154,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.386 253665 DEBUG nova.network.os_vif_util [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.391 253665 DEBUG nova.network.os_vif_util [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b7:42,bridge_name='br-int',has_traffic_filtering=True,id=c048a826-73ad-49d3-a29f-5d790d359e51,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc048a826-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.392 253665 DEBUG nova.objects.instance [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid de145d76-062b-4362-bc82-09e09d2f9154 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.411 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  <uuid>de145d76-062b-4362-bc82-09e09d2f9154</uuid>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  <name>instance-00000017</name>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersAdminTestJSON-server-27339221</nova:name>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:10:39</nova:creationTime>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:10:40 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:        <nova:user uuid="05cafdbce8334f9380b4dbd1d21f7d58">tempest-ServersAdminTestJSON-1985232284-project-member</nova:user>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:        <nova:project uuid="d78b26f20d674ae6a213d727050a50d1">tempest-ServersAdminTestJSON-1985232284</nova:project>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:        <nova:port uuid="c048a826-73ad-49d3-a29f-5d790d359e51">
Nov 22 04:10:40 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <entry name="serial">de145d76-062b-4362-bc82-09e09d2f9154</entry>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <entry name="uuid">de145d76-062b-4362-bc82-09e09d2f9154</entry>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/de145d76-062b-4362-bc82-09e09d2f9154_disk">
Nov 22 04:10:40 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:10:40 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/de145d76-062b-4362-bc82-09e09d2f9154_disk.config">
Nov 22 04:10:40 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:10:40 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:8c:b7:42"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <target dev="tapc048a826-73"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154/console.log" append="off"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:10:40 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:10:40 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:10:40 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:10:40 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.413 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Preparing to wait for external event network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.413 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "de145d76-062b-4362-bc82-09e09d2f9154-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.414 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.414 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.415 253665 DEBUG nova.virt.libvirt.vif [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-27339221',display_name='tempest-ServersAdminTestJSON-server-27339221',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-27339221',id=23,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-xz93pz1e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSO
N-1985232284-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:34Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=de145d76-062b-4362-bc82-09e09d2f9154,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.416 253665 DEBUG nova.network.os_vif_util [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.416 253665 DEBUG nova.network.os_vif_util [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b7:42,bridge_name='br-int',has_traffic_filtering=True,id=c048a826-73ad-49d3-a29f-5d790d359e51,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc048a826-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.417 253665 DEBUG os_vif [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b7:42,bridge_name='br-int',has_traffic_filtering=True,id=c048a826-73ad-49d3-a29f-5d790d359e51,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc048a826-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.418 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.418 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.419 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.414 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[71fcf913-f1f3-4f2e-a125-1357ae4ea1e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.423 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.423 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc048a826-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.426 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc048a826-73, col_values=(('external_ids', {'iface-id': 'c048a826-73ad-49d3-a29f-5d790d359e51', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:b7:42', 'vm-uuid': 'de145d76-062b-4362-bc82-09e09d2f9154'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:40 np0005532048 NetworkManager[48920]: <info>  [1763802640.4293] manager: (tapc048a826-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.432 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.436 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.436 253665 INFO os_vif [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b7:42,bridge_name='br-int',has_traffic_filtering=True,id=c048a826-73ad-49d3-a29f-5d790d359e51,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc048a826-73')#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.448 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ab71ca0e-b918-44a9-87be-d4f5c49c80c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape64548ac-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545486, 'reachable_time': 19914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285048, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.497 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.496 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[690f5363-53d6-45a2-b898-1c016b17462e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape64548ac-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545502, 'tstamp': 545502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285051, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape64548ac-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545507, 'tstamp': 545507}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285051, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
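The `RTM_NEWADDR` reply above reports `IFA_BROADCAST 10.100.0.15` alongside `IFA_ADDRESS 10.100.0.2` with `prefixlen: 28`; the broadcast address follows directly from the prefix length. A minimal stdlib sketch of that derivation (illustrative only, not part of the agent code):

```python
import ipaddress

def broadcast_for(addr: str, prefixlen: int) -> str:
    """Return the directed-broadcast address for an IPv4 address/prefix,
    i.e. the value the kernel reports as IFA_BROADCAST."""
    iface = ipaddress.ip_interface(f"{addr}/{prefixlen}")
    return str(iface.network.broadcast_address)

# 10.100.0.2/28 lives in 10.100.0.0/28, whose broadcast is 10.100.0.15
print(broadcast_for("10.100.0.2", 28))        # → 10.100.0.15
# A /32 host route's "broadcast" is the address itself, as seen for
# the metadata address 169.254.169.254/32 in the same reply.
print(broadcast_for("169.254.169.254", 32))   # → 169.254.169.254
```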
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.500 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape64548ac-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.502 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.511 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.512 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape64548ac-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.513 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.513 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape64548ac-50, col_values=(('external_ids', {'iface-id': '791df5ce-fddc-4961-a1d0-6667026f8b13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.514 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
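The `AddPortCommand(..., may_exist=True)` / `DelPortCommand(..., if_exists=True)` pairs above are idempotent: when the port is already in the desired state, ovsdbapp commits nothing and logs "Transaction caused no change". A toy in-memory model of that semantics (not the real ovsdbapp implementation; bridge state is a plain dict here for illustration):

```python
def add_port(bridge: dict, port: str, may_exist: bool = False) -> bool:
    """Model of AddPortCommand: returns True only if state changed.
    With may_exist=True, re-adding an existing port is a silent no-op,
    which is why the agent logs 'Transaction caused no change'."""
    if port in bridge["ports"]:
        if may_exist:
            return False               # already present: no change, no error
        raise ValueError(f"port {port} already exists on {bridge['name']}")
    bridge["ports"].add(port)
    return True

def del_port(bridge: dict, port: str, if_exists: bool = False) -> bool:
    """Model of DelPortCommand: symmetric no-op for a missing port."""
    if port not in bridge["ports"]:
        if if_exists:
            return False
        raise ValueError(f"port {port} not found on {bridge['name']}")
    bridge["ports"].discard(port)
    return True

br_int = {"name": "br-int", "ports": {"tape64548ac-50"}}
print(add_port(br_int, "tape64548ac-50", may_exist=True))  # → False
```

This is why the agent can blindly re-run its provisioning transaction on every event without tracking whether the port was already plugged.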
Nov 22 04:10:40 np0005532048 kernel: tap52bf11af-13: entered promiscuous mode
Nov 22 04:10:40 np0005532048 NetworkManager[48920]: <info>  [1763802640.5631] manager: (tap52bf11af-13): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.561 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.562 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.562 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No VIF found with MAC fa:16:3e:8c:b7:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.563 253665 INFO nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Using config drive#033[00m
Nov 22 04:10:40 np0005532048 kernel: tap52bf11af-13 (unregistering): left promiscuous mode
Nov 22 04:10:40 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:40Z|00095|binding|INFO|Claiming lport 52bf11af-1372-4c5d-8bd8-81017da77de8 for this chassis.
Nov 22 04:10:40 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:40Z|00096|binding|INFO|52bf11af-1372-4c5d-8bd8-81017da77de8: Claiming fa:16:3e:58:5c:0b 10.100.0.5
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.577 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:5c:0b 10.100.0.5'], port_security=['fa:16:3e:58:5c:0b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ff657cfc-b1bb-4545-bc13-ad240e69c666', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef6e238d438c49959eb8bee112836e52', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75ab40c0-07f4-4bb0-a066-aed1106fa100', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72afa370-b1fd-466e-b3d9-08000d4400d0, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=52bf11af-1372-4c5d-8bd8-81017da77de8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.578 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 52bf11af-1372-4c5d-8bd8-81017da77de8 in datapath e64548ac-5898-4d23-b6f7-17a1ae54c608 bound to our chassis#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.588 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e64548ac-5898-4d23-b6f7-17a1ae54c608#033[00m
Nov 22 04:10:40 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:40Z|00097|binding|INFO|Releasing lport 52bf11af-1372-4c5d-8bd8-81017da77de8 from this chassis (sb_readonly=0)
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.604 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:5c:0b 10.100.0.5'], port_security=['fa:16:3e:58:5c:0b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ff657cfc-b1bb-4545-bc13-ad240e69c666', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef6e238d438c49959eb8bee112836e52', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75ab40c0-07f4-4bb0-a066-aed1106fa100', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72afa370-b1fd-466e-b3d9-08000d4400d0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=52bf11af-1372-4c5d-8bd8-81017da77de8) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
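The two `PortBindingUpdatedEvent` matches above differ only in the `chassis` column: the first goes from `old=Port_Binding(chassis=[])` to a populated row ("bound to our chassis"), the second reverses it ("Releasing lport ... from this chassis"). A simplified sketch of that classification logic, as implied by the log messages (not the actual neutron agent code; chassis values are plain strings here):

```python
def binding_event(old_chassis: list, new_chassis: list, our_chassis: str) -> str:
    """Classify a Port_Binding chassis-column change the way the
    ovn_metadata_agent logs it."""
    if our_chassis not in old_chassis and our_chassis in new_chassis:
        return "bound to our chassis"
    if our_chassis in old_chassis and our_chassis not in new_chassis:
        return "unbound from our chassis"
    return "no change"

me = "compute-0.ctlplane.example.com"
print(binding_event([], [me], me))   # → bound to our chassis
print(binding_event([me], [], me))   # → unbound from our chassis
```

Both transitions trigger a metadata re-provisioning pass for the datapath, which is why "Provisioning metadata for network e64548ac-..." appears after each one.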
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.609 253665 DEBUG nova.storage.rbd_utils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image de145d76-062b-4362-bc82-09e09d2f9154_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.617 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[48c3ba9e-6d1f-4ca6-8b3d-768fe7318c62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.622 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.628 253665 INFO nova.virt.libvirt.driver [-] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Instance destroyed successfully.#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.628 253665 DEBUG nova.objects.instance [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lazy-loading 'resources' on Instance uuid ff657cfc-b1bb-4545-bc13-ad240e69c666 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.645 253665 DEBUG nova.virt.libvirt.vif [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1250749597',display_name='tempest-FloatingIPsAssociationTestJSON-server-1250749597',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1250749597',id=20,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ef6e238d438c49959eb8bee112836e52',ramdisk_id='',reservation_id='r-63b4tjoo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1882113079',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1882113079-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:20Z,user_data=None,user_id='526789957ca1421b94691426dc7bccb5',uuid=ff657cfc-b1bb-4545-bc13-ad240e69c666,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.645 253665 DEBUG nova.network.os_vif_util [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converting VIF {"id": "52bf11af-1372-4c5d-8bd8-81017da77de8", "address": "fa:16:3e:58:5c:0b", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52bf11af-13", "ovs_interfaceid": "52bf11af-1372-4c5d-8bd8-81017da77de8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.646 253665 DEBUG nova.network.os_vif_util [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:5c:0b,bridge_name='br-int',has_traffic_filtering=True,id=52bf11af-1372-4c5d-8bd8-81017da77de8,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52bf11af-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.646 253665 DEBUG os_vif [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:5c:0b,bridge_name='br-int',has_traffic_filtering=True,id=52bf11af-1372-4c5d-8bd8-81017da77de8,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52bf11af-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.649 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.650 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52bf11af-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.650 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[423a035f-52e8-4468-93a8-43312c277bfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.655 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[83e36ba2-c546-4646-80d6-b4179596d052]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.660 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.663 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.665 253665 INFO os_vif [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:5c:0b,bridge_name='br-int',has_traffic_filtering=True,id=52bf11af-1372-4c5d-8bd8-81017da77de8,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52bf11af-13')#033[00m
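Every userspace record in this capture shares one shape: a syslog prefix (timestamp, host, unit, PID), then the oslo.log payload. A small parser for that prefix, assuming the line format seen above (regex and field names are this sketch's own, not from any OpenStack library):

```python
import re

# Assumed shape: "<syslog ts> <host> <unit>[<pid>]: <payload>"
LINE_RE = re.compile(
    r"^(?P<syslog_ts>\w{3} +\d+ [\d:]{8}) "
    r"(?P<host>\S+) "
    r"(?P<unit>[\w.-]+)\[(?P<pid>\d+)\]: "
    r"(?P<payload>.*)$"
)

def parse(line: str) -> dict:
    """Split a journald line into its syslog prefix fields; {} if no match."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else {}

sample = ("Nov 22 04:10:40 np0005532048 nova_compute[253661]: "
          "2025-11-22 09:10:40.665 253665 INFO os_vif [-] Successfully unplugged vif")
rec = parse(sample)
print(rec["unit"], rec["pid"])  # → nova_compute 253661
```

Note the two timestamps disagree by five hours: the syslog prefix is the host's local time while the oslo payload logs UTC, so correlation across services should key on the payload timestamp.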
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.690 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2a778b66-ec01-43ef-a4cd-5fd6f37c003c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.714 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a75fa380-aa99-4f32-9678-f912a3d356ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape64548ac-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545486, 'reachable_time': 19914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285099, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.735 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[84efa79c-bf98-4beb-86ae-06d5c3626ec5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape64548ac-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545502, 'tstamp': 545502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285103, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape64548ac-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545507, 'tstamp': 545507}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285103, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.737 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape64548ac-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.739 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.744 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.744 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape64548ac-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.745 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.745 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape64548ac-50, col_values=(('external_ids', {'iface-id': '791df5ce-fddc-4961-a1d0-6667026f8b13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.745 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.746 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 52bf11af-1372-4c5d-8bd8-81017da77de8 in datapath e64548ac-5898-4d23-b6f7-17a1ae54c608 unbound from our chassis#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.747 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e64548ac-5898-4d23-b6f7-17a1ae54c608#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.766 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8dedcb-ef4a-4480-b462-71b3b91723bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.794 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc5781a-82e3-4adc-b3f7-15e0988d3ef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.798 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5283a19e-d54e-4477-943b-9aa94daff111]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.829 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c7808190-f80f-4177-92a9-5599145f1fb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.854 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e95307-7e48-41d7-beeb-21628e68d91e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape64548ac-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545486, 'reachable_time': 19914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285110, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.871 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d4db3a07-79a1-4b28-beb7-82b86df7b1e3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape64548ac-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545502, 'tstamp': 545502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285111, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape64548ac-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545507, 'tstamp': 545507}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285111, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape64548ac-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.876 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.880 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.881 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape64548ac-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.882 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.882 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape64548ac-50, col_values=(('external_ids', {'iface-id': '791df5ce-fddc-4961-a1d0-6667026f8b13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:40.883 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:40 np0005532048 nova_compute[253661]: 2025-11-22 09:10:40.997 253665 INFO nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Creating config drive at /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154/disk.config#033[00m
Nov 22 04:10:41 np0005532048 nova_compute[253661]: 2025-11-22 09:10:41.004 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpky9ybsp3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 451 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 157 op/s
Nov 22 04:10:41 np0005532048 nova_compute[253661]: 2025-11-22 09:10:41.160 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpky9ybsp3" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:41 np0005532048 nova_compute[253661]: 2025-11-22 09:10:41.186 253665 DEBUG nova.storage.rbd_utils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image de145d76-062b-4362-bc82-09e09d2f9154_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:41 np0005532048 nova_compute[253661]: 2025-11-22 09:10:41.191 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154/disk.config de145d76-062b-4362-bc82-09e09d2f9154_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:41 np0005532048 nova_compute[253661]: 2025-11-22 09:10:41.236 253665 DEBUG nova.network.neutron [req-6aa3424f-7498-4918-9ca2-1beb124c92a7 req-e2ea0277-9be0-482c-a168-31ac453f85e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Updated VIF entry in instance network info cache for port c048a826-73ad-49d3-a29f-5d790d359e51. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:10:41 np0005532048 nova_compute[253661]: 2025-11-22 09:10:41.237 253665 DEBUG nova.network.neutron [req-6aa3424f-7498-4918-9ca2-1beb124c92a7 req-e2ea0277-9be0-482c-a168-31ac453f85e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Updating instance_info_cache with network_info: [{"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:41 np0005532048 nova_compute[253661]: 2025-11-22 09:10:41.253 253665 DEBUG oslo_concurrency.lockutils [req-6aa3424f-7498-4918-9ca2-1beb124c92a7 req-e2ea0277-9be0-482c-a168-31ac453f85e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:42 np0005532048 nova_compute[253661]: 2025-11-22 09:10:42.154 253665 DEBUG nova.compute.manager [req-3d0de599-02fd-42e1-bbc9-0b16cdecd3e4 req-db59477b-fc07-44a4-84b6-6485e3b8effa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:42 np0005532048 nova_compute[253661]: 2025-11-22 09:10:42.155 253665 DEBUG nova.compute.manager [req-3d0de599-02fd-42e1-bbc9-0b16cdecd3e4 req-db59477b-fc07-44a4-84b6-6485e3b8effa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing instance network info cache due to event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:10:42 np0005532048 nova_compute[253661]: 2025-11-22 09:10:42.155 253665 DEBUG oslo_concurrency.lockutils [req-3d0de599-02fd-42e1-bbc9-0b16cdecd3e4 req-db59477b-fc07-44a4-84b6-6485e3b8effa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:42 np0005532048 nova_compute[253661]: 2025-11-22 09:10:42.155 253665 DEBUG oslo_concurrency.lockutils [req-3d0de599-02fd-42e1-bbc9-0b16cdecd3e4 req-db59477b-fc07-44a4-84b6-6485e3b8effa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:42 np0005532048 nova_compute[253661]: 2025-11-22 09:10:42.156 253665 DEBUG nova.network.neutron [req-3d0de599-02fd-42e1-bbc9-0b16cdecd3e4 req-db59477b-fc07-44a4-84b6-6485e3b8effa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:10:42 np0005532048 nova_compute[253661]: 2025-11-22 09:10:42.253 253665 DEBUG oslo_concurrency.processutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154/disk.config de145d76-062b-4362-bc82-09e09d2f9154_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:42 np0005532048 nova_compute[253661]: 2025-11-22 09:10:42.253 253665 INFO nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Deleting local config drive /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154/disk.config because it was imported into RBD.#033[00m
Nov 22 04:10:42 np0005532048 kernel: tapc048a826-73: entered promiscuous mode
Nov 22 04:10:42 np0005532048 NetworkManager[48920]: <info>  [1763802642.3172] manager: (tapc048a826-73): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Nov 22 04:10:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:42Z|00098|binding|INFO|Claiming lport c048a826-73ad-49d3-a29f-5d790d359e51 for this chassis.
Nov 22 04:10:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:42Z|00099|binding|INFO|c048a826-73ad-49d3-a29f-5d790d359e51: Claiming fa:16:3e:8c:b7:42 10.100.0.7
Nov 22 04:10:42 np0005532048 systemd-udevd[285037]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:10:42 np0005532048 nova_compute[253661]: 2025-11-22 09:10:42.319 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.326 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:b7:42 10.100.0.7'], port_security=['fa:16:3e:8c:b7:42 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'de145d76-062b-4362-bc82-09e09d2f9154', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c048a826-73ad-49d3-a29f-5d790d359e51) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:10:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.328 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c048a826-73ad-49d3-a29f-5d790d359e51 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a bound to our chassis#033[00m
Nov 22 04:10:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.330 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a#033[00m
Nov 22 04:10:42 np0005532048 NetworkManager[48920]: <info>  [1763802642.3372] device (tapc048a826-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:10:42 np0005532048 NetworkManager[48920]: <info>  [1763802642.3381] device (tapc048a826-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:10:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:42Z|00100|binding|INFO|Setting lport c048a826-73ad-49d3-a29f-5d790d359e51 ovn-installed in OVS
Nov 22 04:10:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:42Z|00101|binding|INFO|Setting lport c048a826-73ad-49d3-a29f-5d790d359e51 up in Southbound
Nov 22 04:10:42 np0005532048 nova_compute[253661]: 2025-11-22 09:10:42.351 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:42 np0005532048 nova_compute[253661]: 2025-11-22 09:10:42.354 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.359 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c423509c-9f86-4428-93f0-19841b8cac44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:42 np0005532048 systemd-machined[215941]: New machine qemu-26-instance-00000017.
Nov 22 04:10:42 np0005532048 systemd[1]: Started Virtual Machine qemu-26-instance-00000017.
Nov 22 04:10:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.395 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[38783663-dd63-41e3-bba3-00db84f7343e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.398 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[de5b4ca6-e325-4ada-a687-666a2392c59e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.436 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a27f7c-d48f-4350-af36-7626145ae83a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.459 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce331fd-2669-428a-8049-900ec8ad81a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285177, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.481 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc6daab-9fd5-4a9c-846c-98cb97d7cbac]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285178, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285178, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.484 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:42 np0005532048 nova_compute[253661]: 2025-11-22 09:10:42.486 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:42 np0005532048 nova_compute[253661]: 2025-11-22 09:10:42.488 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.489 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.489 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.489 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:42.490 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:43 np0005532048 nova_compute[253661]: 2025-11-22 09:10:43.073 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802643.0720322, de145d76-062b-4362-bc82-09e09d2f9154 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:43 np0005532048 nova_compute[253661]: 2025-11-22 09:10:43.073 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] VM Started (Lifecycle Event)#033[00m
Nov 22 04:10:43 np0005532048 nova_compute[253661]: 2025-11-22 09:10:43.089 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:43 np0005532048 nova_compute[253661]: 2025-11-22 09:10:43.095 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802643.0751026, de145d76-062b-4362-bc82-09e09d2f9154 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:43 np0005532048 nova_compute[253661]: 2025-11-22 09:10:43.095 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:10:43 np0005532048 nova_compute[253661]: 2025-11-22 09:10:43.109 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:43 np0005532048 nova_compute[253661]: 2025-11-22 09:10:43.112 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:10:43 np0005532048 nova_compute[253661]: 2025-11-22 09:10:43.129 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:10:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 451 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Nov 22 04:10:43 np0005532048 nova_compute[253661]: 2025-11-22 09:10:43.911 253665 DEBUG nova.network.neutron [req-3d0de599-02fd-42e1-bbc9-0b16cdecd3e4 req-db59477b-fc07-44a4-84b6-6485e3b8effa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updated VIF entry in instance network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:10:43 np0005532048 nova_compute[253661]: 2025-11-22 09:10:43.912 253665 DEBUG nova.network.neutron [req-3d0de599-02fd-42e1-bbc9-0b16cdecd3e4 req-db59477b-fc07-44a4-84b6-6485e3b8effa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updating instance_info_cache with network_info: [{"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:43 np0005532048 nova_compute[253661]: 2025-11-22 09:10:43.940 253665 DEBUG oslo_concurrency.lockutils [req-3d0de599-02fd-42e1-bbc9-0b16cdecd3e4 req-db59477b-fc07-44a4-84b6-6485e3b8effa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:44 np0005532048 podman[285223]: 2025-11-22 09:10:44.429422838 +0000 UTC m=+0.080585557 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 04:10:44 np0005532048 podman[285224]: 2025-11-22 09:10:44.438450246 +0000 UTC m=+0.089318897 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 04:10:44 np0005532048 nova_compute[253661]: 2025-11-22 09:10:44.607 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:44 np0005532048 nova_compute[253661]: 2025-11-22 09:10:44.672 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 399 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Nov 22 04:10:45 np0005532048 nova_compute[253661]: 2025-11-22 09:10:45.656 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:46 np0005532048 nova_compute[253661]: 2025-11-22 09:10:46.935 253665 INFO nova.virt.libvirt.driver [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Deleting instance files /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666_del#033[00m
Nov 22 04:10:46 np0005532048 nova_compute[253661]: 2025-11-22 09:10:46.936 253665 INFO nova.virt.libvirt.driver [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Deletion of /var/lib/nova/instances/ff657cfc-b1bb-4545-bc13-ad240e69c666_del complete#033[00m
Nov 22 04:10:46 np0005532048 nova_compute[253661]: 2025-11-22 09:10:46.993 253665 INFO nova.compute.manager [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Took 7.06 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:10:46 np0005532048 nova_compute[253661]: 2025-11-22 09:10:46.994 253665 DEBUG oslo.service.loopingcall [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:10:46 np0005532048 nova_compute[253661]: 2025-11-22 09:10:46.994 253665 DEBUG nova.compute.manager [-] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:10:46 np0005532048 nova_compute[253661]: 2025-11-22 09:10:46.995 253665 DEBUG nova.network.neutron [-] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:10:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 376 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 4.2 MiB/s wr, 130 op/s
Nov 22 04:10:47 np0005532048 nova_compute[253661]: 2025-11-22 09:10:47.540 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:47 np0005532048 nova_compute[253661]: 2025-11-22 09:10:47.541 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:47 np0005532048 nova_compute[253661]: 2025-11-22 09:10:47.553 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:10:47 np0005532048 nova_compute[253661]: 2025-11-22 09:10:47.634 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:47 np0005532048 nova_compute[253661]: 2025-11-22 09:10:47.635 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:47 np0005532048 nova_compute[253661]: 2025-11-22 09:10:47.643 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:10:47 np0005532048 nova_compute[253661]: 2025-11-22 09:10:47.644 253665 INFO nova.compute.claims [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:10:47 np0005532048 nova_compute[253661]: 2025-11-22 09:10:47.834 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:47Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:6f:23 10.100.0.10
Nov 22 04:10:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:47Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:6f:23 10.100.0.10
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.086 253665 DEBUG nova.compute.manager [req-7333911f-6e59-40b4-9154-243f8de6bb77 req-0800c045-c14d-4310-82d8-141eaa625356 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-vif-unplugged-52bf11af-1372-4c5d-8bd8-81017da77de8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.087 253665 DEBUG oslo_concurrency.lockutils [req-7333911f-6e59-40b4-9154-243f8de6bb77 req-0800c045-c14d-4310-82d8-141eaa625356 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.087 253665 DEBUG oslo_concurrency.lockutils [req-7333911f-6e59-40b4-9154-243f8de6bb77 req-0800c045-c14d-4310-82d8-141eaa625356 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.088 253665 DEBUG oslo_concurrency.lockutils [req-7333911f-6e59-40b4-9154-243f8de6bb77 req-0800c045-c14d-4310-82d8-141eaa625356 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.088 253665 DEBUG nova.compute.manager [req-7333911f-6e59-40b4-9154-243f8de6bb77 req-0800c045-c14d-4310-82d8-141eaa625356 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] No waiting events found dispatching network-vif-unplugged-52bf11af-1372-4c5d-8bd8-81017da77de8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.088 253665 DEBUG nova.compute.manager [req-7333911f-6e59-40b4-9154-243f8de6bb77 req-0800c045-c14d-4310-82d8-141eaa625356 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-vif-unplugged-52bf11af-1372-4c5d-8bd8-81017da77de8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.141 253665 DEBUG nova.compute.manager [req-dbd542a4-64ee-4fd8-b52a-d514b32ef8d5 req-ec679781-b962-4b51-9214-baef7291a2c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.142 253665 DEBUG nova.compute.manager [req-dbd542a4-64ee-4fd8-b52a-d514b32ef8d5 req-ec679781-b962-4b51-9214-baef7291a2c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing instance network info cache due to event network-changed-e7682709-05fd-4d27-bd49-1a84e1cf6bd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.143 253665 DEBUG oslo_concurrency.lockutils [req-dbd542a4-64ee-4fd8-b52a-d514b32ef8d5 req-ec679781-b962-4b51-9214-baef7291a2c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.143 253665 DEBUG oslo_concurrency.lockutils [req-dbd542a4-64ee-4fd8-b52a-d514b32ef8d5 req-ec679781-b962-4b51-9214-baef7291a2c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.143 253665 DEBUG nova.network.neutron [req-dbd542a4-64ee-4fd8-b52a-d514b32ef8d5 req-ec679781-b962-4b51-9214-baef7291a2c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Refreshing network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:10:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:10:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/422486760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.359 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.366 253665 DEBUG nova.compute.provider_tree [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.381 253665 DEBUG nova.scheduler.client.report [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.403 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.404 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.444 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.445 253665 DEBUG nova.network.neutron [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:10:48 np0005532048 podman[285278]: 2025-11-22 09:10:48.452304584 +0000 UTC m=+0.136427371 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.465 253665 INFO nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.480 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.567 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.569 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.569 253665 INFO nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Creating image(s)#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.593 253665 DEBUG nova.storage.rbd_utils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.620 253665 DEBUG nova.storage.rbd_utils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.645 253665 DEBUG nova.storage.rbd_utils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.649 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.720 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.722 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.722 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.723 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.746 253665 DEBUG nova.storage.rbd_utils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.751 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.788 253665 DEBUG nova.policy [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36a143bf132742b89e463c08d46df5c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.803 253665 DEBUG nova.network.neutron [-] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.820 253665 INFO nova.compute.manager [-] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Took 1.82 seconds to deallocate network for instance.#033[00m
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.874 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.875 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.902 253665 DEBUG nova.scheduler.client.report [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.929 253665 DEBUG nova.scheduler.client.report [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.929 253665 DEBUG nova.compute.provider_tree [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.952 253665 DEBUG nova.scheduler.client.report [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 04:10:48 np0005532048 nova_compute[253661]: 2025-11-22 09:10:48.990 253665 DEBUG nova.scheduler.client.report [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 04:10:49 np0005532048 nova_compute[253661]: 2025-11-22 09:10:49.109 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:10:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 404 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 606 KiB/s rd, 5.2 MiB/s wr, 157 op/s
Nov 22 04:10:49 np0005532048 nova_compute[253661]: 2025-11-22 09:10:49.187 253665 DEBUG nova.storage.rbd_utils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] resizing rbd image a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:10:49 np0005532048 nova_compute[253661]: 2025-11-22 09:10:49.221 253665 DEBUG oslo_concurrency.processutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:10:49 np0005532048 nova_compute[253661]: 2025-11-22 09:10:49.323 253665 DEBUG nova.objects.instance [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'migration_context' on Instance uuid a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:10:49 np0005532048 nova_compute[253661]: 2025-11-22 09:10:49.339 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:10:49 np0005532048 nova_compute[253661]: 2025-11-22 09:10:49.339 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Ensure instance console log exists: /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:10:49 np0005532048 nova_compute[253661]: 2025-11-22 09:10:49.340 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:10:49 np0005532048 nova_compute[253661]: 2025-11-22 09:10:49.340 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:10:49 np0005532048 nova_compute[253661]: 2025-11-22 09:10:49.341 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:49 np0005532048 nova_compute[253661]: 2025-11-22 09:10:49.610 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:10:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2528930108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:49 np0005532048 nova_compute[253661]: 2025-11-22 09:10:49.726 253665 DEBUG oslo_concurrency.processutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:10:49 np0005532048 nova_compute[253661]: 2025-11-22 09:10:49.732 253665 DEBUG nova.compute.provider_tree [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:10:49 np0005532048 nova_compute[253661]: 2025-11-22 09:10:49.748 253665 DEBUG nova.scheduler.client.report [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:10:49 np0005532048 nova_compute[253661]: 2025-11-22 09:10:49.773 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:49 np0005532048 nova_compute[253661]: 2025-11-22 09:10:49.813 253665 INFO nova.scheduler.client.report [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Deleted allocations for instance ff657cfc-b1bb-4545-bc13-ad240e69c666
Nov 22 04:10:49 np0005532048 nova_compute[253661]: 2025-11-22 09:10:49.880 253665 DEBUG oslo_concurrency.lockutils [None req-3d335cf9-6e24-4f0c-9d37-f937d758a667 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.002 253665 DEBUG nova.network.neutron [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Successfully created port: f70fa10f-f756-4faa-aebf-deeb0b129704 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.195 253665 DEBUG nova.compute.manager [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.196 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.196 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.196 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ff657cfc-b1bb-4545-bc13-ad240e69c666-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.196 253665 DEBUG nova.compute.manager [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] No waiting events found dispatching network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.197 253665 WARNING nova.compute.manager [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received unexpected event network-vif-plugged-52bf11af-1372-4c5d-8bd8-81017da77de8 for instance with vm_state deleted and task_state None.
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.197 253665 DEBUG nova.compute.manager [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received event network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.197 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "de145d76-062b-4362-bc82-09e09d2f9154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.197 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.197 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.198 253665 DEBUG nova.compute.manager [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Processing event network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.198 253665 DEBUG nova.compute.manager [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received event network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.198 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "de145d76-062b-4362-bc82-09e09d2f9154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.198 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.198 253665 DEBUG oslo_concurrency.lockutils [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.199 253665 DEBUG nova.compute.manager [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] No waiting events found dispatching network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.199 253665 WARNING nova.compute.manager [req-b40f5e93-e878-4e32-bf6d-a45e3a383c27 req-76788bbc-7803-4c34-ac36-46a9bd923931 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received unexpected event network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 for instance with vm_state building and task_state spawning.
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.199 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.204 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802650.2034488, de145d76-062b-4362-bc82-09e09d2f9154 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.204 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] VM Resumed (Lifecycle Event)
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.207 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.212 253665 INFO nova.virt.libvirt.driver [-] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Instance spawned successfully.
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.213 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.236 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.241 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.246 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.247 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.247 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.248 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.248 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.248 253665 DEBUG nova.virt.libvirt.driver [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.254 253665 DEBUG nova.compute.manager [req-36d96467-634a-4193-919a-0b3708525c63 req-6e37f6ed-798c-4a10-a238-a361c6ff4c7d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Received event network-vif-deleted-52bf11af-1372-4c5d-8bd8-81017da77de8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.268 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.437 253665 DEBUG nova.network.neutron [req-dbd542a4-64ee-4fd8-b52a-d514b32ef8d5 req-ec679781-b962-4b51-9214-baef7291a2c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updated VIF entry in instance network info cache for port e7682709-05fd-4d27-bd49-1a84e1cf6bd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.438 253665 DEBUG nova.network.neutron [req-dbd542a4-64ee-4fd8-b52a-d514b32ef8d5 req-ec679781-b962-4b51-9214-baef7291a2c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updating instance_info_cache with network_info: [{"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.453 253665 DEBUG oslo_concurrency.lockutils [req-dbd542a4-64ee-4fd8-b52a-d514b32ef8d5 req-ec679781-b962-4b51-9214-baef7291a2c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-04781543-b5ed-482a-a30a-0730fbcd12a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.489 253665 INFO nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Took 15.80 seconds to spawn the instance on the hypervisor.
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.490 253665 DEBUG nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.575 253665 INFO nova.compute.manager [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Took 16.93 seconds to build instance.
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.607 253665 DEBUG oslo_concurrency.lockutils [None req-f3d43004-8af5-45c5-9562-fca4177448bd 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:50 np0005532048 nova_compute[253661]: 2025-11-22 09:10:50.661 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 404 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Nov 22 04:10:51 np0005532048 nova_compute[253661]: 2025-11-22 09:10:51.994 253665 DEBUG nova.network.neutron [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Successfully updated port: f70fa10f-f756-4faa-aebf-deeb0b129704 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.018 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.018 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.018 253665 DEBUG nova.network.neutron [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:10:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:10:52
Nov 22 04:10:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:10:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:10:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.control', 'vms', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'backups']
Nov 22 04:10:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.224 253665 DEBUG nova.network.neutron [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.343 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.444 253665 DEBUG oslo_concurrency.lockutils [None req-4890bb91-28c1-4ae5-b268-17d698d1d6c3 9226686b6fa443e2877853c43ee3efc3 4b6c3fcaf3734f80af201051789cefdb - - default default] Acquiring lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.444 253665 DEBUG oslo_concurrency.lockutils [None req-4890bb91-28c1-4ae5-b268-17d698d1d6c3 9226686b6fa443e2877853c43ee3efc3 4b6c3fcaf3734f80af201051789cefdb - - default default] Acquired lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.444 253665 DEBUG nova.network.neutron [None req-4890bb91-28c1-4ae5-b268-17d698d1d6c3 9226686b6fa443e2877853c43ee3efc3 4b6c3fcaf3734f80af201051789cefdb - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.536 253665 DEBUG nova.compute.manager [req-20b51925-39c4-4da8-8dbf-a52bacc26ecd req-a24cd9ef-6ade-4441-b06d-99734028cd5e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-changed-f70fa10f-f756-4faa-aebf-deeb0b129704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.537 253665 DEBUG nova.compute.manager [req-20b51925-39c4-4da8-8dbf-a52bacc26ecd req-a24cd9ef-6ade-4441-b06d-99734028cd5e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Refreshing instance network info cache due to event network-changed-f70fa10f-f756-4faa-aebf-deeb0b129704. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.537 253665 DEBUG oslo_concurrency.lockutils [req-20b51925-39c4-4da8-8dbf-a52bacc26ecd req-a24cd9ef-6ade-4441-b06d-99734028cd5e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.706 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "04781543-b5ed-482a-a30a-0730fbcd12a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:10:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:10:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:10:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.706 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.707 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.707 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:10:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.707 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.712 253665 INFO nova.compute.manager [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Terminating instance#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.713 253665 DEBUG nova.compute.manager [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:10:52 np0005532048 kernel: tape7682709-05 (unregistering): left promiscuous mode
Nov 22 04:10:52 np0005532048 NetworkManager[48920]: <info>  [1763802652.7729] device (tape7682709-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.831 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:52Z|00102|binding|INFO|Releasing lport e7682709-05fd-4d27-bd49-1a84e1cf6bd3 from this chassis (sb_readonly=0)
Nov 22 04:10:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:52Z|00103|binding|INFO|Setting lport e7682709-05fd-4d27-bd49-1a84e1cf6bd3 down in Southbound
Nov 22 04:10:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:52Z|00104|binding|INFO|Removing iface tape7682709-05 ovn-installed in OVS
Nov 22 04:10:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:52.837 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:ea:eb 10.100.0.3'], port_security=['fa:16:3e:5e:ea:eb 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '04781543-b5ed-482a-a30a-0730fbcd12a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef6e238d438c49959eb8bee112836e52', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75ab40c0-07f4-4bb0-a066-aed1106fa100', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72afa370-b1fd-466e-b3d9-08000d4400d0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=e7682709-05fd-4d27-bd49-1a84e1cf6bd3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:10:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:52.838 162862 INFO neutron.agent.ovn.metadata.agent [-] Port e7682709-05fd-4d27-bd49-1a84e1cf6bd3 in datapath e64548ac-5898-4d23-b6f7-17a1ae54c608 unbound from our chassis#033[00m
Nov 22 04:10:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:52.840 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e64548ac-5898-4d23-b6f7-17a1ae54c608, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:10:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:52.842 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[163305c9-6c5a-461a-9776-29a84280a4a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:52.844 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608 namespace which is not needed anymore#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.853 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:52 np0005532048 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000011.scope: Deactivated successfully.
Nov 22 04:10:52 np0005532048 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000011.scope: Consumed 15.242s CPU time.
Nov 22 04:10:52 np0005532048 systemd-machined[215941]: Machine qemu-20-instance-00000011 terminated.
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.958 253665 INFO nova.virt.libvirt.driver [-] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Instance destroyed successfully.#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.960 253665 DEBUG nova.objects.instance [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lazy-loading 'resources' on Instance uuid 04781543-b5ed-482a-a30a-0730fbcd12a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.969 253665 DEBUG nova.virt.libvirt.vif [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:09:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-785600448',display_name='tempest-FloatingIPsAssociationTestJSON-server-785600448',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-785600448',id=17,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ef6e238d438c49959eb8bee112836e52',ramdisk_id='',reservation_id='r-912pf9hs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1882113079',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1882113079-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:04Z,user_data=None,user_id='526789957ca1421b94691426dc7bccb5',uuid=04781543-b5ed-482a-a30a-0730fbcd12a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.970 253665 DEBUG nova.network.os_vif_util [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converting VIF {"id": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "address": "fa:16:3e:5e:ea:eb", "network": {"id": "e64548ac-5898-4d23-b6f7-17a1ae54c608", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1088267097-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef6e238d438c49959eb8bee112836e52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7682709-05", "ovs_interfaceid": "e7682709-05fd-4d27-bd49-1a84e1cf6bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.971 253665 DEBUG nova.network.os_vif_util [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:ea:eb,bridge_name='br-int',has_traffic_filtering=True,id=e7682709-05fd-4d27-bd49-1a84e1cf6bd3,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7682709-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.971 253665 DEBUG os_vif [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:ea:eb,bridge_name='br-int',has_traffic_filtering=True,id=e7682709-05fd-4d27-bd49-1a84e1cf6bd3,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7682709-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.975 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.976 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7682709-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.977 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.981 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:10:52 np0005532048 nova_compute[253661]: 2025-11-22 09:10:52.984 253665 INFO os_vif [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:ea:eb,bridge_name='br-int',has_traffic_filtering=True,id=e7682709-05fd-4d27-bd49-1a84e1cf6bd3,network=Network(e64548ac-5898-4d23-b6f7-17a1ae54c608),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7682709-05')#033[00m
Nov 22 04:10:53 np0005532048 neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608[281880]: [NOTICE]   (281884) : haproxy version is 2.8.14-c23fe91
Nov 22 04:10:53 np0005532048 neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608[281880]: [NOTICE]   (281884) : path to executable is /usr/sbin/haproxy
Nov 22 04:10:53 np0005532048 neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608[281880]: [WARNING]  (281884) : Exiting Master process...
Nov 22 04:10:53 np0005532048 neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608[281880]: [WARNING]  (281884) : Exiting Master process...
Nov 22 04:10:53 np0005532048 neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608[281880]: [ALERT]    (281884) : Current worker (281886) exited with code 143 (Terminated)
Nov 22 04:10:53 np0005532048 neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608[281880]: [WARNING]  (281884) : All workers exited. Exiting... (0)
Nov 22 04:10:53 np0005532048 systemd[1]: libpod-977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3.scope: Deactivated successfully.
Nov 22 04:10:53 np0005532048 podman[285521]: 2025-11-22 09:10:53.041621227 +0000 UTC m=+0.074265163 container died 977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 04:10:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9fa4cb5a6efdb2b8320c6dc794a849ea1a90de83555fc6fea9133bc58a10cfaa-merged.mount: Deactivated successfully.
Nov 22 04:10:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3-userdata-shm.mount: Deactivated successfully.
Nov 22 04:10:53 np0005532048 podman[285521]: 2025-11-22 09:10:53.098807355 +0000 UTC m=+0.131451281 container cleanup 977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:10:53 np0005532048 systemd[1]: libpod-conmon-977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3.scope: Deactivated successfully.
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.137 253665 DEBUG nova.network.neutron [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updating instance_info_cache with network_info: [{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.154 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.154 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Instance network_info: |[{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.154 253665 DEBUG oslo_concurrency.lockutils [req-20b51925-39c4-4da8-8dbf-a52bacc26ecd req-a24cd9ef-6ade-4441-b06d-99734028cd5e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.155 253665 DEBUG nova.network.neutron [req-20b51925-39c4-4da8-8dbf-a52bacc26ecd req-a24cd9ef-6ade-4441-b06d-99734028cd5e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Refreshing network info cache for port f70fa10f-f756-4faa-aebf-deeb0b129704 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.157 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Start _get_guest_xml network_info=[{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:10:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 418 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 561 KiB/s rd, 2.7 MiB/s wr, 112 op/s
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.163 253665 WARNING nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.171 253665 DEBUG nova.virt.libvirt.host [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.172 253665 DEBUG nova.virt.libvirt.host [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.176 253665 DEBUG nova.virt.libvirt.host [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.176 253665 DEBUG nova.virt.libvirt.host [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.176 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.177 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.177 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.177 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.177 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.177 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.178 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.178 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.178 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.178 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.178 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.179 253665 DEBUG nova.virt.hardware [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.181 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:10:53 np0005532048 podman[285575]: 2025-11-22 09:10:53.184860543 +0000 UTC m=+0.060741785 container remove 977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:10:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.190 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ff128a20-1279-45bc-b522-6a9e797a0d27]: (4, ('Sat Nov 22 09:10:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608 (977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3)\n977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3\nSat Nov 22 09:10:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608 (977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3)\n977bdc1bce6ec2f4d0542cbf97cb3e46450836e273d674c2c83a57297bd2f2e3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:10:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.193 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c11c6ad-443f-41d0-a172-e145313581bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:10:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.194 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape64548ac-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:10:53 np0005532048 kernel: tape64548ac-50: left promiscuous mode
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.211 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.218 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.225 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e086739f-87e3-415f-b113-51a2443a8dfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:10:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.239 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a607c65f-6ec1-4885-be62-d817dcb59ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:10:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.241 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca6c784-f838-4a39-afdf-fb6c9dd2dc9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:10:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.259 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[10aaa1e4-9c42-434b-893f-073dccc688d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545477, 'reachable_time': 39507, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285590, 'error': None, 'target': 'ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:53 np0005532048 systemd[1]: run-netns-ovnmeta\x2de64548ac\x2d5898\x2d4d23\x2db6f7\x2d17a1ae54c608.mount: Deactivated successfully.
Nov 22 04:10:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.265 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e64548ac-5898-4d23-b6f7-17a1ae54c608 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:10:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:53.265 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9c56e1-972f-4a39-a4ac-319ee1024e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.493 253665 INFO nova.virt.libvirt.driver [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Deleting instance files /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1_del
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.494 253665 INFO nova.virt.libvirt.driver [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Deletion of /var/lib/nova/instances/04781543-b5ed-482a-a30a-0730fbcd12a1_del complete
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.553 253665 INFO nova.compute.manager [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Took 0.84 seconds to destroy the instance on the hypervisor.
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.553 253665 DEBUG oslo.service.loopingcall [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.557 253665 DEBUG nova.compute.manager [-] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.557 253665 DEBUG nova.network.neutron [-] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:10:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2652402155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.681 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.707 253665 DEBUG nova.storage.rbd_utils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.712 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.968 253665 DEBUG nova.network.neutron [None req-4890bb91-28c1-4ae5-b268-17d698d1d6c3 9226686b6fa443e2877853c43ee3efc3 4b6c3fcaf3734f80af201051789cefdb - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Updating instance_info_cache with network_info: [{"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.989 253665 DEBUG oslo_concurrency.lockutils [None req-4890bb91-28c1-4ae5-b268-17d698d1d6c3 9226686b6fa443e2877853c43ee3efc3 4b6c3fcaf3734f80af201051789cefdb - - default default] Releasing lock "refresh_cache-de145d76-062b-4362-bc82-09e09d2f9154" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.989 253665 DEBUG nova.compute.manager [None req-4890bb91-28c1-4ae5-b268-17d698d1d6c3 9226686b6fa443e2877853c43ee3efc3 4b6c3fcaf3734f80af201051789cefdb - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Nov 22 04:10:53 np0005532048 nova_compute[253661]: 2025-11-22 09:10:53.989 253665 DEBUG nova.compute.manager [None req-4890bb91-28c1-4ae5-b268-17d698d1d6c3 9226686b6fa443e2877853c43ee3efc3 4b6c3fcaf3734f80af201051789cefdb - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] network_info to inject: |[{"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 22 04:10:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1017011033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.225 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.227 253665 DEBUG nova.virt.libvirt.vif [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.227 253665 DEBUG nova.network.os_vif_util [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.228 253665 DEBUG nova.network.os_vif_util [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:85:74,bridge_name='br-int',has_traffic_filtering=True,id=f70fa10f-f756-4faa-aebf-deeb0b129704,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70fa10f-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.229 253665 DEBUG nova.objects.instance [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'pci_devices' on Instance uuid a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.252 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  <uuid>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</uuid>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  <name>instance-00000018</name>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <nova:name>tempest-AttachInterfacesTestJSON-server-2040235378</nova:name>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:10:53</nova:creationTime>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:10:54 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:        <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:        <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:        <nova:port uuid="f70fa10f-f756-4faa-aebf-deeb0b129704">
Nov 22 04:10:54 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <entry name="serial">a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</entry>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <entry name="uuid">a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</entry>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk">
Nov 22 04:10:54 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:10:54 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config">
Nov 22 04:10:54 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:10:54 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:14:85:74"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <target dev="tapf70fa10f-f7"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/console.log" append="off"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:10:54 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:10:54 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:10:54 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:10:54 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.253 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Preparing to wait for external event network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.253 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.253 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.254 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.254 253665 DEBUG nova.virt.libvirt.vif [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.255 253665 DEBUG nova.network.os_vif_util [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.255 253665 DEBUG nova.network.os_vif_util [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:85:74,bridge_name='br-int',has_traffic_filtering=True,id=f70fa10f-f756-4faa-aebf-deeb0b129704,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70fa10f-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.256 253665 DEBUG os_vif [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:85:74,bridge_name='br-int',has_traffic_filtering=True,id=f70fa10f-f756-4faa-aebf-deeb0b129704,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70fa10f-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.256 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.257 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.257 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.260 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.261 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf70fa10f-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.261 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf70fa10f-f7, col_values=(('external_ids', {'iface-id': 'f70fa10f-f756-4faa-aebf-deeb0b129704', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:85:74', 'vm-uuid': 'a27c3dda-3eb4-4e57-8ba7-ceb7743442e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.263 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:54 np0005532048 NetworkManager[48920]: <info>  [1763802654.2641] manager: (tapf70fa10f-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.265 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.271 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.272 253665 INFO os_vif [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:85:74,bridge_name='br-int',has_traffic_filtering=True,id=f70fa10f-f756-4faa-aebf-deeb0b129704,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70fa10f-f7')#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.324 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.324 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.325 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:14:85:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.325 253665 INFO nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Using config drive#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.352 253665 DEBUG nova.storage.rbd_utils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.442 253665 DEBUG nova.network.neutron [req-20b51925-39c4-4da8-8dbf-a52bacc26ecd req-a24cd9ef-6ade-4441-b06d-99734028cd5e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updated VIF entry in instance network info cache for port f70fa10f-f756-4faa-aebf-deeb0b129704. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.443 253665 DEBUG nova.network.neutron [req-20b51925-39c4-4da8-8dbf-a52bacc26ecd req-a24cd9ef-6ade-4441-b06d-99734028cd5e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updating instance_info_cache with network_info: [{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.457 253665 DEBUG oslo_concurrency.lockutils [req-20b51925-39c4-4da8-8dbf-a52bacc26ecd req-a24cd9ef-6ade-4441-b06d-99734028cd5e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.468 253665 DEBUG nova.network.neutron [-] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.485 253665 INFO nova.compute.manager [-] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Took 0.93 seconds to deallocate network for instance.#033[00m
Nov 22 04:10:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:10:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:10:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:10:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:10:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.533 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.533 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.613 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.652 253665 DEBUG nova.compute.manager [req-863300a6-878e-4384-acf2-5a3f3f2d6f6d req-c45618f1-41a9-4abf-b7de-ece097520e56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Received event network-vif-deleted-e7682709-05fd-4d27-bd49-1a84e1cf6bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.675 253665 DEBUG oslo_concurrency.processutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.762 253665 INFO nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Creating config drive at /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/disk.config#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.771 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp_om7w41 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:10:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:10:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:10:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:10:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.912 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp_om7w41" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.951 253665 DEBUG nova.storage.rbd_utils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:10:54 np0005532048 nova_compute[253661]: 2025-11-22 09:10:54.955 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/disk.config a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:10:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 404 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 200 op/s
Nov 22 04:10:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:10:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3367886030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.197 253665 DEBUG oslo_concurrency.processutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.204 253665 DEBUG nova.compute.provider_tree [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.219 253665 DEBUG oslo_concurrency.processutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/disk.config a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.220 253665 INFO nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Deleting local config drive /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/disk.config because it was imported into RBD.#033[00m
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.224 253665 DEBUG nova.scheduler.client.report [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.245 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.269 253665 INFO nova.scheduler.client.report [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Deleted allocations for instance 04781543-b5ed-482a-a30a-0730fbcd12a1#033[00m
Nov 22 04:10:55 np0005532048 kernel: tapf70fa10f-f7: entered promiscuous mode
Nov 22 04:10:55 np0005532048 NetworkManager[48920]: <info>  [1763802655.2817] manager: (tapf70fa10f-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Nov 22 04:10:55 np0005532048 systemd-udevd[285497]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:10:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:55Z|00105|binding|INFO|Claiming lport f70fa10f-f756-4faa-aebf-deeb0b129704 for this chassis.
Nov 22 04:10:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:55Z|00106|binding|INFO|f70fa10f-f756-4faa-aebf-deeb0b129704: Claiming fa:16:3e:14:85:74 10.100.0.11
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.284 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.294 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:85:74 10.100.0.11'], port_security=['fa:16:3e:14:85:74 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a27c3dda-3eb4-4e57-8ba7-ceb7743442e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '483aedc9-eae7-4cec-a714-9d623421c584', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f70fa10f-f756-4faa-aebf-deeb0b129704) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.295 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f70fa10f-f756-4faa-aebf-deeb0b129704 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 bound to our chassis#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.297 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00#033[00m
Nov 22 04:10:55 np0005532048 NetworkManager[48920]: <info>  [1763802655.3071] device (tapf70fa10f-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:10:55 np0005532048 NetworkManager[48920]: <info>  [1763802655.3087] device (tapf70fa10f-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:10:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:55Z|00107|binding|INFO|Setting lport f70fa10f-f756-4faa-aebf-deeb0b129704 ovn-installed in OVS
Nov 22 04:10:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:55Z|00108|binding|INFO|Setting lport f70fa10f-f756-4faa-aebf-deeb0b129704 up in Southbound
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.312 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.312 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[93c9b86c-e807-45db-aa78-8f2070ef59b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.314 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5e2cd359-c1 in ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.317 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5e2cd359-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.317 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[00575ec3-5323-4980-898c-5accb3e70c79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.322 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[91b3c6f7-aa6a-4b48-92b8-c0da85bac254]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.326 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.337 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5b9e66-312b-4018-93b6-75cb8358494f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.338 253665 DEBUG oslo_concurrency.lockutils [None req-720b3c75-e977-40b8-aa5b-dc4654db5e52 526789957ca1421b94691426dc7bccb5 ef6e238d438c49959eb8bee112836e52 - - default default] Lock "04781543-b5ed-482a-a30a-0730fbcd12a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:55 np0005532048 systemd-machined[215941]: New machine qemu-27-instance-00000018.
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.365 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[34c8b35f-331d-4a8c-adf6-8442a3312bcc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:55 np0005532048 systemd[1]: Started Virtual Machine qemu-27-instance-00000018.
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.408 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0a7b38-8fd9-4168-8415-6579ffef983f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:55 np0005532048 NetworkManager[48920]: <info>  [1763802655.4165] manager: (tap5e2cd359-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.417 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce6db14-10db-4a32-b9df-9a3400ba5a8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.471 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d02a02-2467-4f13-b70b-78accd4220ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.476 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[688eed24-0cae-48bf-89c4-4ff8439d2127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:55 np0005532048 NetworkManager[48920]: <info>  [1763802655.5106] device (tap5e2cd359-c0): carrier: link connected
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.519 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dd8dfe95-5ddd-46fc-ae31-61d6a5c64f0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.541 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[81f7d4b9-94fa-41ed-8a92-e715dc4b20d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550624, 'reachable_time': 27327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285780, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.567 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c4b9bf7c-6c4b-4731-aaa3-4eb86156937f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:bd41'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550624, 'tstamp': 550624}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285781, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.590 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[db1ec91f-9beb-4384-afad-12f7f4ae2ee4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550624, 'reachable_time': 27327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285782, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.622 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802640.5864038, ff657cfc-b1bb-4545-bc13-ad240e69c666 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.623 253665 INFO nova.compute.manager [-] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.633 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0190f679-741a-40fc-a156-5a74734657d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.641 253665 DEBUG nova.compute.manager [None req-174428f5-ce50-4c6d-a56b-ec0a9738dec0 - - - - - -] [instance: ff657cfc-b1bb-4545-bc13-ad240e69c666] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.718 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d8546b-e5ab-450a-b4e4-eed5761797b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.721 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.722 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.722 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:55 np0005532048 NetworkManager[48920]: <info>  [1763802655.7257] manager: (tap5e2cd359-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 22 04:10:55 np0005532048 kernel: tap5e2cd359-c0: entered promiscuous mode
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.726 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.729 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:10:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:55Z|00109|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.751 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.753 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5e2cd359-c68f-4256-90e8-0ad40aff8a00.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5e2cd359-c68f-4256-90e8-0ad40aff8a00.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.755 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[17d980c8-458d-49b0-b51a-777beb0dc990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.756 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/5e2cd359-c68f-4256-90e8-0ad40aff8a00.pid.haproxy
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:10:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:10:55.757 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'env', 'PROCESS_TAG=haproxy-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5e2cd359-c68f-4256-90e8-0ad40aff8a00.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.948 253665 DEBUG nova.compute.manager [req-f02e11fe-e2d0-458c-8211-2dfca8adcfc6 req-35934aa5-9a85-47c8-8d66-47fbe4bd3c46 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.949 253665 DEBUG oslo_concurrency.lockutils [req-f02e11fe-e2d0-458c-8211-2dfca8adcfc6 req-35934aa5-9a85-47c8-8d66-47fbe4bd3c46 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.949 253665 DEBUG oslo_concurrency.lockutils [req-f02e11fe-e2d0-458c-8211-2dfca8adcfc6 req-35934aa5-9a85-47c8-8d66-47fbe4bd3c46 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.950 253665 DEBUG oslo_concurrency.lockutils [req-f02e11fe-e2d0-458c-8211-2dfca8adcfc6 req-35934aa5-9a85-47c8-8d66-47fbe4bd3c46 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:55 np0005532048 nova_compute[253661]: 2025-11-22 09:10:55.950 253665 DEBUG nova.compute.manager [req-f02e11fe-e2d0-458c-8211-2dfca8adcfc6 req-35934aa5-9a85-47c8-8d66-47fbe4bd3c46 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Processing event network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:10:56 np0005532048 podman[285832]: 2025-11-22 09:10:56.231898279 +0000 UTC m=+0.088887318 container create 64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:10:56 np0005532048 podman[285832]: 2025-11-22 09:10:56.176603827 +0000 UTC m=+0.033592896 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:10:56 np0005532048 systemd[1]: Started libpod-conmon-64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f.scope.
Nov 22 04:10:56 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:10:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7679634f901691bbb7933c31bc9e7b94437f81ed8d9c677c4422e9a5cf6ef0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.330 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802656.3298676, a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.332 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] VM Started (Lifecycle Event)#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.334 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.338 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:10:56 np0005532048 podman[285832]: 2025-11-22 09:10:56.338792594 +0000 UTC m=+0.195781653 container init 64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.342 253665 INFO nova.virt.libvirt.driver [-] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Instance spawned successfully.#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.343 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:10:56 np0005532048 podman[285832]: 2025-11-22 09:10:56.345041055 +0000 UTC m=+0.202030094 container start 64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.355 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.358 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.373 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.374 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.374 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:56 np0005532048 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[285870]: [NOTICE]   (285875) : New worker (285877) forked
Nov 22 04:10:56 np0005532048 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[285870]: [NOTICE]   (285875) : Loading success.
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.376 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.376 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.377 253665 DEBUG nova.virt.libvirt.driver [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.382 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.383 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802656.330079, a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.383 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.413 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.418 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802656.3382874, a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.419 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.438 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.443 253665 INFO nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Took 7.88 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.444 253665 DEBUG nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.445 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.476 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.516 253665 INFO nova.compute.manager [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Took 8.91 seconds to build instance.#033[00m
Nov 22 04:10:56 np0005532048 nova_compute[253661]: 2025-11-22 09:10:56.534 253665 DEBUG oslo_concurrency.lockutils [None req-c181d4c1-6f92-45bc-a6e9-87325ef90ae1 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 372 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 202 op/s
Nov 22 04:10:57 np0005532048 nova_compute[253661]: 2025-11-22 09:10:57.551 253665 INFO nova.compute.manager [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Rebuilding instance#033[00m
Nov 22 04:10:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:10:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6018 writes, 27K keys, 6018 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6018 writes, 6018 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1512 writes, 6796 keys, 1512 commit groups, 1.0 writes per commit group, ingest: 9.34 MB, 0.02 MB/s#012Interval WAL: 1512 writes, 1512 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     54.6      0.53              0.10        15    0.036       0      0       0.0       0.0#012  L6      1/0    7.22 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     77.1     63.2      1.59              0.31        14    0.113     65K   7730       0.0       0.0#012 Sum      1/0    7.22 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     57.7     61.1      2.12              0.41        29    0.073     65K   7730       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6     40.2     40.6      0.96              0.13         8    0.120     21K   2569       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     77.1     63.2      1.59              0.31        14    0.113     65K   7730       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     54.8      0.53              0.10        14    0.038       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.029, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.05 MB/s write, 0.12 GB read, 0.05 MB/s read, 2.1 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 12.97 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000256 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(848,12.46 MB,4.09741%) FilterBlock(30,188.23 KB,0.060468%) IndexBlock(30,341.61 KB,0.109738%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 22 04:10:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:57Z|00110|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 04:10:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:57Z|00111|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 04:10:57 np0005532048 nova_compute[253661]: 2025-11-22 09:10:57.731 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:57 np0005532048 nova_compute[253661]: 2025-11-22 09:10:57.824 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:57Z|00112|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 04:10:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:57Z|00113|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 04:10:57 np0005532048 nova_compute[253661]: 2025-11-22 09:10:57.962 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:58 np0005532048 nova_compute[253661]: 2025-11-22 09:10:58.018 253665 DEBUG nova.compute.manager [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:10:58 np0005532048 nova_compute[253661]: 2025-11-22 09:10:58.103 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:58 np0005532048 nova_compute[253661]: 2025-11-22 09:10:58.114 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:58 np0005532048 nova_compute[253661]: 2025-11-22 09:10:58.124 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'resources' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:58 np0005532048 nova_compute[253661]: 2025-11-22 09:10:58.135 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:10:58 np0005532048 nova_compute[253661]: 2025-11-22 09:10:58.143 253665 DEBUG nova.compute.manager [req-9af5be11-e125-4c98-b512-0587a7258334 req-b8aacf99-7853-47d4-a583-48c16eb4fce8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:10:58 np0005532048 nova_compute[253661]: 2025-11-22 09:10:58.143 253665 DEBUG oslo_concurrency.lockutils [req-9af5be11-e125-4c98-b512-0587a7258334 req-b8aacf99-7853-47d4-a583-48c16eb4fce8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:10:58 np0005532048 nova_compute[253661]: 2025-11-22 09:10:58.144 253665 DEBUG oslo_concurrency.lockutils [req-9af5be11-e125-4c98-b512-0587a7258334 req-b8aacf99-7853-47d4-a583-48c16eb4fce8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:10:58 np0005532048 nova_compute[253661]: 2025-11-22 09:10:58.144 253665 DEBUG oslo_concurrency.lockutils [req-9af5be11-e125-4c98-b512-0587a7258334 req-b8aacf99-7853-47d4-a583-48c16eb4fce8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:10:58 np0005532048 nova_compute[253661]: 2025-11-22 09:10:58.144 253665 DEBUG nova.compute.manager [req-9af5be11-e125-4c98-b512-0587a7258334 req-b8aacf99-7853-47d4-a583-48c16eb4fce8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] No waiting events found dispatching network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:10:58 np0005532048 nova_compute[253661]: 2025-11-22 09:10:58.145 253665 WARNING nova.compute.manager [req-9af5be11-e125-4c98-b512-0587a7258334 req-b8aacf99-7853-47d4-a583-48c16eb4fce8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received unexpected event network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:10:58 np0005532048 nova_compute[253661]: 2025-11-22 09:10:58.146 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 22 04:10:58 np0005532048 nova_compute[253661]: 2025-11-22 09:10:58.150 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:10:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 372 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 239 op/s
Nov 22 04:10:59 np0005532048 nova_compute[253661]: 2025-11-22 09:10:59.199 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:59 np0005532048 NetworkManager[48920]: <info>  [1763802659.1995] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Nov 22 04:10:59 np0005532048 NetworkManager[48920]: <info>  [1763802659.2014] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Nov 22 04:10:59 np0005532048 nova_compute[253661]: 2025-11-22 09:10:59.263 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:59 np0005532048 nova_compute[253661]: 2025-11-22 09:10:59.340 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:59Z|00114|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 04:10:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:10:59Z|00115|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 04:10:59 np0005532048 nova_compute[253661]: 2025-11-22 09:10:59.357 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:59 np0005532048 nova_compute[253661]: 2025-11-22 09:10:59.615 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:10:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:00 np0005532048 nova_compute[253661]: 2025-11-22 09:11:00.275 253665 DEBUG nova.compute.manager [req-41f305dd-9eca-4640-b038-c1780e177db0 req-242167f3-1546-4c88-8df0-6870d7ae2c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-changed-f70fa10f-f756-4faa-aebf-deeb0b129704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:11:00 np0005532048 nova_compute[253661]: 2025-11-22 09:11:00.276 253665 DEBUG nova.compute.manager [req-41f305dd-9eca-4640-b038-c1780e177db0 req-242167f3-1546-4c88-8df0-6870d7ae2c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Refreshing instance network info cache due to event network-changed-f70fa10f-f756-4faa-aebf-deeb0b129704. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:11:00 np0005532048 nova_compute[253661]: 2025-11-22 09:11:00.276 253665 DEBUG oslo_concurrency.lockutils [req-41f305dd-9eca-4640-b038-c1780e177db0 req-242167f3-1546-4c88-8df0-6870d7ae2c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:11:00 np0005532048 nova_compute[253661]: 2025-11-22 09:11:00.276 253665 DEBUG oslo_concurrency.lockutils [req-41f305dd-9eca-4640-b038-c1780e177db0 req-242167f3-1546-4c88-8df0-6870d7ae2c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:11:00 np0005532048 nova_compute[253661]: 2025-11-22 09:11:00.277 253665 DEBUG nova.network.neutron [req-41f305dd-9eca-4640-b038-c1780e177db0 req-242167f3-1546-4c88-8df0-6870d7ae2c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Refreshing network info cache for port f70fa10f-f756-4faa-aebf-deeb0b129704 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:11:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:00Z|00116|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 04:11:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:00Z|00117|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 04:11:00 np0005532048 nova_compute[253661]: 2025-11-22 09:11:00.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:00 np0005532048 kernel: tap716b716d-2e (unregistering): left promiscuous mode
Nov 22 04:11:00 np0005532048 NetworkManager[48920]: <info>  [1763802660.9202] device (tap716b716d-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:11:00 np0005532048 nova_compute[253661]: 2025-11-22 09:11:00.933 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:00Z|00118|binding|INFO|Releasing lport 716b716d-2ee2-44e7-9850-c10854634f77 from this chassis (sb_readonly=0)
Nov 22 04:11:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:00Z|00119|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 down in Southbound
Nov 22 04:11:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:00Z|00120|binding|INFO|Removing iface tap716b716d-2e ovn-installed in OVS
Nov 22 04:11:00 np0005532048 nova_compute[253661]: 2025-11-22 09:11:00.943 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:00.958 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:7d:dd 10.100.0.8'], port_security=['fa:16:3e:47:7d:dd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=716b716d-2ee2-44e7-9850-c10854634f77) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:11:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:00.960 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 716b716d-2ee2-44e7-9850-c10854634f77 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a unbound from our chassis#033[00m
Nov 22 04:11:00 np0005532048 nova_compute[253661]: 2025-11-22 09:11:00.960 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:00.963 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a#033[00m
Nov 22 04:11:00 np0005532048 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 22 04:11:00 np0005532048 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000012.scope: Consumed 16.096s CPU time.
Nov 22 04:11:00 np0005532048 systemd-machined[215941]: Machine qemu-21-instance-00000012 terminated.
Nov 22 04:11:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:00.994 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0f6eaa97-7f02-4ead-af8e-7f9e1468f0c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.036 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[89b42a54-f3ad-46a9-9766-e8a79901232d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.040 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd422c9-2e99-4c30-ac8b-43123555de04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.078 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ccbbe956-d09d-4078-bb56-1d0c8ffc65ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.099 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1f446147-cde0-483d-b5b5-c597f4f6a141]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285899, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.118 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea95a089-81bf-4b80-bb19-21a08866c2a0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285900, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285900, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.120 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.123 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.128 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.129 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.130 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:11:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.130 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:01.130 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:11:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 372 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 177 op/s
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.170 253665 INFO nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance shutdown successfully after 3 seconds.#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.177 253665 INFO nova.virt.libvirt.driver [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance destroyed successfully.#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.184 253665 INFO nova.virt.libvirt.driver [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance destroyed successfully.#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.186 253665 DEBUG nova.virt.libvirt.vif [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=<?>,
task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:10:56Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.186 253665 DEBUG nova.network.os_vif_util [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.190 253665 DEBUG nova.network.os_vif_util [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.192 253665 DEBUG os_vif [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.194 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap716b716d-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.199 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.202 253665 INFO os_vif [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e')#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.732 253665 DEBUG nova.compute.manager [req-4f15dab4-a970-493e-b4af-4b3ac18847a5 req-ce65a20f-cbb2-485b-8c32-f04c438f7997 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.734 253665 DEBUG oslo_concurrency.lockutils [req-4f15dab4-a970-493e-b4af-4b3ac18847a5 req-ce65a20f-cbb2-485b-8c32-f04c438f7997 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.734 253665 DEBUG oslo_concurrency.lockutils [req-4f15dab4-a970-493e-b4af-4b3ac18847a5 req-ce65a20f-cbb2-485b-8c32-f04c438f7997 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.734 253665 DEBUG oslo_concurrency.lockutils [req-4f15dab4-a970-493e-b4af-4b3ac18847a5 req-ce65a20f-cbb2-485b-8c32-f04c438f7997 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.735 253665 DEBUG nova.compute.manager [req-4f15dab4-a970-493e-b4af-4b3ac18847a5 req-ce65a20f-cbb2-485b-8c32-f04c438f7997 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.735 253665 WARNING nova.compute.manager [req-4f15dab4-a970-493e-b4af-4b3ac18847a5 req-ce65a20f-cbb2-485b-8c32-f04c438f7997 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received unexpected event network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with vm_state error and task_state rebuilding.#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.973 253665 INFO nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deleting instance files /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_del#033[00m
Nov 22 04:11:01 np0005532048 nova_compute[253661]: 2025-11-22 09:11:01.974 253665 INFO nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deletion of /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_del complete#033[00m
Nov 22 04:11:02 np0005532048 nova_compute[253661]: 2025-11-22 09:11:02.224 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:11:02 np0005532048 nova_compute[253661]: 2025-11-22 09:11:02.225 253665 INFO nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating image(s)#033[00m
Nov 22 04:11:02 np0005532048 nova_compute[253661]: 2025-11-22 09:11:02.250 253665 DEBUG nova.storage.rbd_utils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:11:02 np0005532048 nova_compute[253661]: 2025-11-22 09:11:02.282 253665 DEBUG nova.storage.rbd_utils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:11:02 np0005532048 nova_compute[253661]: 2025-11-22 09:11:02.308 253665 DEBUG nova.storage.rbd_utils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:11:02 np0005532048 nova_compute[253661]: 2025-11-22 09:11:02.312 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:11:02 np0005532048 nova_compute[253661]: 2025-11-22 09:11:02.382 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:11:02 np0005532048 nova_compute[253661]: 2025-11-22 09:11:02.384 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:02 np0005532048 nova_compute[253661]: 2025-11-22 09:11:02.385 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:02 np0005532048 nova_compute[253661]: 2025-11-22 09:11:02.386 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:02 np0005532048 nova_compute[253661]: 2025-11-22 09:11:02.418 253665 DEBUG nova.storage.rbd_utils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:11:02 np0005532048 nova_compute[253661]: 2025-11-22 09:11:02.424 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029728019999023247 of space, bias 1.0, pg target 0.8918405999706974 quantized to 32 (current 32)
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:11:02 np0005532048 nova_compute[253661]: 2025-11-22 09:11:02.870 253665 DEBUG nova.network.neutron [req-41f305dd-9eca-4640-b038-c1780e177db0 req-242167f3-1546-4c88-8df0-6870d7ae2c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updated VIF entry in instance network info cache for port f70fa10f-f756-4faa-aebf-deeb0b129704. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:11:02 np0005532048 nova_compute[253661]: 2025-11-22 09:11:02.873 253665 DEBUG nova.network.neutron [req-41f305dd-9eca-4640-b038-c1780e177db0 req-242167f3-1546-4c88-8df0-6870d7ae2c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updating instance_info_cache with network_info: [{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:11:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 372 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 199 op/s
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.201 253665 DEBUG oslo_concurrency.lockutils [req-41f305dd-9eca-4640-b038-c1780e177db0 req-242167f3-1546-4c88-8df0-6870d7ae2c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.617 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.700 253665 DEBUG nova.storage.rbd_utils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] resizing rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.846 253665 DEBUG nova.compute.manager [req-67a2d676-9051-4f6e-a774-a55682b333ae req-19c78b9d-a567-4a1b-b9cc-53ad5a0fa387 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.846 253665 DEBUG oslo_concurrency.lockutils [req-67a2d676-9051-4f6e-a774-a55682b333ae req-19c78b9d-a567-4a1b-b9cc-53ad5a0fa387 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.847 253665 DEBUG oslo_concurrency.lockutils [req-67a2d676-9051-4f6e-a774-a55682b333ae req-19c78b9d-a567-4a1b-b9cc-53ad5a0fa387 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.847 253665 DEBUG oslo_concurrency.lockutils [req-67a2d676-9051-4f6e-a774-a55682b333ae req-19c78b9d-a567-4a1b-b9cc-53ad5a0fa387 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.847 253665 DEBUG nova.compute.manager [req-67a2d676-9051-4f6e-a774-a55682b333ae req-19c78b9d-a567-4a1b-b9cc-53ad5a0fa387 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.847 253665 WARNING nova.compute.manager [req-67a2d676-9051-4f6e-a774-a55682b333ae req-19c78b9d-a567-4a1b-b9cc-53ad5a0fa387 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received unexpected event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with vm_state error and task_state rebuild_spawning.#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.855 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.856 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Ensure instance console log exists: /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.856 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.857 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.857 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.859 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Start _get_guest_xml network_info=[{"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.863 253665 WARNING nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.870 253665 DEBUG nova.virt.libvirt.host [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.871 253665 DEBUG nova.virt.libvirt.host [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.877 253665 DEBUG nova.virt.libvirt.host [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.878 253665 DEBUG nova.virt.libvirt.host [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.878 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.878 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.879 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.879 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.879 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.880 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.880 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.880 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.881 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.881 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.881 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.881 253665 DEBUG nova.virt.hardware [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.882 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:11:03 np0005532048 nova_compute[253661]: 2025-11-22 09:11:03.896 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:11:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:11:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4170946742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.363 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.391 253665 DEBUG nova.storage.rbd_utils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.397 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:11:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:04Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:b7:42 10.100.0.7
Nov 22 04:11:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:04Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:b7:42 10.100.0.7
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.617 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:11:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1049188024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.849 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.852 253665 DEBUG nova.virt.libvirt.vif [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdmin
TestJSON-1985232284-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:11:02Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.853 253665 DEBUG nova.network.os_vif_util [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.854 253665 DEBUG nova.network.os_vif_util [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.858 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  <uuid>3ae08a2f-348c-406b-8ffc-9acb8a542e1c</uuid>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  <name>instance-00000012</name>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersAdminTestJSON-server-1439141870</nova:name>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:11:03</nova:creationTime>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:11:04 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:        <nova:user uuid="05cafdbce8334f9380b4dbd1d21f7d58">tempest-ServersAdminTestJSON-1985232284-project-member</nova:user>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:        <nova:project uuid="d78b26f20d674ae6a213d727050a50d1">tempest-ServersAdminTestJSON-1985232284</nova:project>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:        <nova:port uuid="716b716d-2ee2-44e7-9850-c10854634f77">
Nov 22 04:11:04 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <entry name="serial">3ae08a2f-348c-406b-8ffc-9acb8a542e1c</entry>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <entry name="uuid">3ae08a2f-348c-406b-8ffc-9acb8a542e1c</entry>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk">
Nov 22 04:11:04 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:11:04 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config">
Nov 22 04:11:04 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:11:04 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:47:7d:dd"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <target dev="tap716b716d-2e"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/console.log" append="off"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:11:04 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:11:04 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:11:04 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:11:04 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
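With the journal prefixes stripped, the `_get_guest_xml` dump above is ordinary libvirt domain XML. A minimal sketch of extracting the identity fields from such a dump with the standard library, assuming the element layout shown in the log (the XML below is an abbreviated reconstruction, not the full domain):

```python
import xml.etree.ElementTree as ET

# Abbreviated reconstruction of the <domain> XML logged above; only the
# elements inspected below are included.
DOMAIN_XML = """
<domain type="kvm">
  <uuid>3ae08a2f-348c-406b-8ffc-9acb8a542e1c</uuid>
  <name>instance-00000012</name>
  <memory>131072</memory>
  <vcpu>1</vcpu>
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk"/>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>
"""

def describe_domain(xml_text: str) -> dict:
    """Pull the identity and sizing fields out of a libvirt domain XML."""
    root = ET.fromstring(xml_text)
    return {
        "name": root.findtext("name"),
        "uuid": root.findtext("uuid"),
        "memory_kib": int(root.findtext("memory")),   # libvirt default unit is KiB
        "vcpus": int(root.findtext("vcpu")),
        "disk_targets": [d.find("target").get("dev") for d in root.iter("disk")],
    }

print(describe_domain(DOMAIN_XML))
```

The 131072 KiB memory value matches the flavor's 128 MiB (`<nova:memory>128</nova:memory>`) in the metadata block.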
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.867 253665 DEBUG nova.compute.manager [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Preparing to wait for external event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.867 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.868 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.868 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.870 253665 DEBUG nova.virt.libvirt.vif [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:11:02Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.870 253665 DEBUG nova.network.os_vif_util [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.871 253665 DEBUG nova.network.os_vif_util [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.872 253665 DEBUG os_vif [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.873 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.874 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.875 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.879 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.880 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap716b716d-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.880 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap716b716d-2e, col_values=(('external_ids', {'iface-id': '716b716d-2ee2-44e7-9850-c10854634f77', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:7d:dd', 'vm-uuid': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:04 np0005532048 NetworkManager[48920]: <info>  [1763802664.8853] manager: (tap716b716d-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.884 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.891 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.895 253665 INFO os_vif [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e')#033[00m
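The two ovsdbapp transactions above (`AddBridgeCommand`, then `AddPortCommand` plus a `DbSetCommand` on the `Interface` row) go through the OVSDB IDL directly, but they correspond to what `ovs-vsctl` would do from the shell. A sketch that only builds the equivalent command lines from the values in the log, without executing anything (the shell equivalence is an illustration, not what nova actually runs):

```python
def plug_port_cmds(bridge: str, port: str, iface_id: str,
                   mac: str, vm_uuid: str) -> list:
    """Build ovs-vsctl command lines matching the logged OVSDB transactions."""
    # Txn 1: AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
    add_br = ["ovs-vsctl", "--may-exist", "add-br", bridge,
              "--", "set", "Bridge", bridge, "datapath_type=system"]
    # Txn 2: AddPortCommand + DbSetCommand setting the external_ids that
    # let ovn-controller match the port to its logical switch port.
    add_port = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
                "--", "set", "Interface", port,
                f"external_ids:iface-id={iface_id}",
                "external_ids:iface-status=active",
                f"external_ids:attached-mac={mac}",
                f"external_ids:vm-uuid={vm_uuid}"]
    return [add_br, add_port]

cmds = plug_port_cmds("br-int", "tap716b716d-2e",
                      "716b716d-2ee2-44e7-9850-c10854634f77",
                      "fa:16:3e:47:7d:dd",
                      "3ae08a2f-348c-406b-8ffc-9acb8a542e1c")
for cmd in cmds:
    print(" ".join(cmd))
```

The `iface-id` external_id is the key piece: it is how ovn-controller later claims lport `716b716d-…` for this chassis (see the `Claiming lport` lines further down the log).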
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.955 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.955 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.956 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No VIF found with MAC fa:16:3e:47:7d:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.956 253665 INFO nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Using config drive#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.976 253665 DEBUG nova.storage.rbd_utils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:11:04 np0005532048 nova_compute[253661]: 2025-11-22 09:11:04.992 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:11:05 np0005532048 nova_compute[253661]: 2025-11-22 09:11:05.020 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'keypairs' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:11:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 335 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 229 op/s
Nov 22 04:11:06 np0005532048 nova_compute[253661]: 2025-11-22 09:11:06.325 253665 INFO nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating config drive at /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config#033[00m
Nov 22 04:11:06 np0005532048 nova_compute[253661]: 2025-11-22 09:11:06.332 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjfym0nge execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:11:06 np0005532048 nova_compute[253661]: 2025-11-22 09:11:06.467 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjfym0nge" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:11:06 np0005532048 nova_compute[253661]: 2025-11-22 09:11:06.604 253665 DEBUG nova.storage.rbd_utils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:11:06 np0005532048 nova_compute[253661]: 2025-11-22 09:11:06.609 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:11:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 331 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 161 op/s
Nov 22 04:11:07 np0005532048 nova_compute[253661]: 2025-11-22 09:11:07.951 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802652.9507298, 04781543-b5ed-482a-a30a-0730fbcd12a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:11:07 np0005532048 nova_compute[253661]: 2025-11-22 09:11:07.952 253665 INFO nova.compute.manager [-] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:11:07 np0005532048 nova_compute[253661]: 2025-11-22 09:11:07.969 253665 DEBUG nova.compute.manager [None req-56aba9da-28f3-4415-b17d-bc49fc5684f8 - - - - - -] [instance: 04781543-b5ed-482a-a30a-0730fbcd12a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:11:08 np0005532048 nova_compute[253661]: 2025-11-22 09:11:08.393 253665 DEBUG oslo_concurrency.processutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.784s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:11:08 np0005532048 nova_compute[253661]: 2025-11-22 09:11:08.394 253665 INFO nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deleting local config drive /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config because it was imported into RBD.#033[00m
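The config-drive sequence that just completed has three steps: build a local ISO with mkisofs, `rbd import` it into the `vms` pool, then delete the local copy. A sketch that assembles the same two command lines seen in the log (commands are constructed, not executed; paths and the publisher string are parameters, matching the logged values):

```python
def mkisofs_cmd(iso_path: str, src_dir: str, publisher: str) -> list:
    """mkisofs invocation as logged: Joliet + Rock Ridge, volume label
    "config-2" (the label config-drive consumers look for)."""
    return ["/usr/bin/mkisofs", "-o", iso_path,
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", publisher, "-quiet", "-J", "-r",
            "-V", "config-2", src_dir]

def rbd_import_cmd(pool: str, iso_path: str, image: str) -> list:
    """rbd import invocation as logged; --image-format=2 selects the
    RBD format that supports layering and cloning."""
    return ["rbd", "import", "--pool", pool, iso_path, image,
            "--image-format=2", "--id", "openstack",
            "--conf", "/etc/ceph/ceph.conf"]

iso = "/var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config"
print(" ".join(mkisofs_cmd(iso, "/tmp/tmpjfym0nge",
                           "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9")))
print(" ".join(rbd_import_cmd("vms", iso,
                              "3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config")))
```

Once the image exists in RBD, the local ISO is redundant, which is why the log reports deleting it immediately after the import returns 0.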
Nov 22 04:11:08 np0005532048 NetworkManager[48920]: <info>  [1763802668.4705] manager: (tap716b716d-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Nov 22 04:11:08 np0005532048 kernel: tap716b716d-2e: entered promiscuous mode
Nov 22 04:11:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:08Z|00121|binding|INFO|Claiming lport 716b716d-2ee2-44e7-9850-c10854634f77 for this chassis.
Nov 22 04:11:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:08Z|00122|binding|INFO|716b716d-2ee2-44e7-9850-c10854634f77: Claiming fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 04:11:08 np0005532048 nova_compute[253661]: 2025-11-22 09:11:08.475 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.486 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:7d:dd 10.100.0.8'], port_security=['fa:16:3e:47:7d:dd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=716b716d-2ee2-44e7-9850-c10854634f77) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:11:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.487 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 716b716d-2ee2-44e7-9850-c10854634f77 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a bound to our chassis#033[00m
Nov 22 04:11:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.489 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a#033[00m
Nov 22 04:11:08 np0005532048 nova_compute[253661]: 2025-11-22 09:11:08.493 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:08 np0005532048 nova_compute[253661]: 2025-11-22 09:11:08.497 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:08Z|00123|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 ovn-installed in OVS
Nov 22 04:11:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:08Z|00124|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 up in Southbound
Nov 22 04:11:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.508 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea65e18a-f3bc-42da-91bb-3548a9104fdd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:08 np0005532048 systemd-machined[215941]: New machine qemu-28-instance-00000012.
Nov 22 04:11:08 np0005532048 systemd[1]: Started Virtual Machine qemu-28-instance-00000012.
Nov 22 04:11:08 np0005532048 systemd-udevd[286239]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:11:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.546 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[964aad8e-ead5-4223-a83f-35d749377895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:08 np0005532048 NetworkManager[48920]: <info>  [1763802668.5547] device (tap716b716d-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:11:08 np0005532048 NetworkManager[48920]: <info>  [1763802668.5557] device (tap716b716d-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:11:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.553 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[84a9e1d2-87cb-40df-9e70-7120dc26d39b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.590 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[edd2f1d2-5854-4857-b712-117a5310d904]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.616 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c7ffa4-a7f9-4bd6-bc09-c27a413f7a11]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286249, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.641 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aa51c023-74a9-4db2-a3ce-35253c75b43c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286251, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286251, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.643 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:08 np0005532048 nova_compute[253661]: 2025-11-22 09:11:08.645 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.648 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.649 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:11:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.649 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:08.650 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:11:08 np0005532048 nova_compute[253661]: 2025-11-22 09:11:08.914 253665 DEBUG nova.compute.manager [req-5cd0b889-bf87-42dd-9c38-a85b801f7b2a req-ca82798a-1b4b-4c0c-82ba-b30e2d07f1f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:11:08 np0005532048 nova_compute[253661]: 2025-11-22 09:11:08.914 253665 DEBUG oslo_concurrency.lockutils [req-5cd0b889-bf87-42dd-9c38-a85b801f7b2a req-ca82798a-1b4b-4c0c-82ba-b30e2d07f1f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:08 np0005532048 nova_compute[253661]: 2025-11-22 09:11:08.914 253665 DEBUG oslo_concurrency.lockutils [req-5cd0b889-bf87-42dd-9c38-a85b801f7b2a req-ca82798a-1b4b-4c0c-82ba-b30e2d07f1f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:08 np0005532048 nova_compute[253661]: 2025-11-22 09:11:08.915 253665 DEBUG oslo_concurrency.lockutils [req-5cd0b889-bf87-42dd-9c38-a85b801f7b2a req-ca82798a-1b4b-4c0c-82ba-b30e2d07f1f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:08 np0005532048 nova_compute[253661]: 2025-11-22 09:11:08.915 253665 DEBUG nova.compute.manager [req-5cd0b889-bf87-42dd-9c38-a85b801f7b2a req-ca82798a-1b4b-4c0c-82ba-b30e2d07f1f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Processing event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:11:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 372 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Nov 22 04:11:09 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 22 04:11:09 np0005532048 nova_compute[253661]: 2025-11-22 09:11:09.621 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:09 np0005532048 kernel: hrtimer: interrupt took 58050732 ns
Nov 22 04:11:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:09 np0005532048 nova_compute[253661]: 2025-11-22 09:11:09.885 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.265 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 3ae08a2f-348c-406b-8ffc-9acb8a542e1c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.267 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802670.2647564, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.268 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Started (Lifecycle Event)#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.270 253665 DEBUG nova.compute.manager [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.275 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.283 253665 INFO nova.virt.libvirt.driver [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance spawned successfully.#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.284 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.302 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.311 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.315 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.315 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.316 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.316 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.317 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.317 253665 DEBUG nova.virt.libvirt.driver [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.350 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.350 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802670.265166, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.351 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.377 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.386 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802670.274439, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.386 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.404 253665 DEBUG nova.compute.manager [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.406 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.413 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.444 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.619 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.619 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.620 253665 DEBUG nova.objects.instance [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.668 253665 DEBUG oslo_concurrency.lockutils [None req-ecd0dab7-953c-45fd-81c3-3187244e4534 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.988 253665 DEBUG nova.compute.manager [req-6f2b0eda-4712-4b18-b32b-9f5801a20ffb req-740445dc-e09a-4506-8335-2b2382ccd5e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.989 253665 DEBUG oslo_concurrency.lockutils [req-6f2b0eda-4712-4b18-b32b-9f5801a20ffb req-740445dc-e09a-4506-8335-2b2382ccd5e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.989 253665 DEBUG oslo_concurrency.lockutils [req-6f2b0eda-4712-4b18-b32b-9f5801a20ffb req-740445dc-e09a-4506-8335-2b2382ccd5e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.989 253665 DEBUG oslo_concurrency.lockutils [req-6f2b0eda-4712-4b18-b32b-9f5801a20ffb req-740445dc-e09a-4506-8335-2b2382ccd5e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.990 253665 DEBUG nova.compute.manager [req-6f2b0eda-4712-4b18-b32b-9f5801a20ffb req-740445dc-e09a-4506-8335-2b2382ccd5e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:11:10 np0005532048 nova_compute[253661]: 2025-11-22 09:11:10.990 253665 WARNING nova.compute.manager [req-6f2b0eda-4712-4b18-b32b-9f5801a20ffb req-740445dc-e09a-4506-8335-2b2382ccd5e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received unexpected event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:11:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 372 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 3.9 MiB/s wr, 138 op/s
Nov 22 04:11:11 np0005532048 nova_compute[253661]: 2025-11-22 09:11:11.211 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:11Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:14:85:74 10.100.0.11
Nov 22 04:11:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:11Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:14:85:74 10.100.0.11
Nov 22 04:11:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:11:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2860305444' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:11:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:11:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2860305444' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:11:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 375 MiB data, 540 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 183 op/s
Nov 22 04:11:13 np0005532048 nova_compute[253661]: 2025-11-22 09:11:13.431 253665 INFO nova.compute.manager [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Rebuilding instance#033[00m
Nov 22 04:11:13 np0005532048 nova_compute[253661]: 2025-11-22 09:11:13.628 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:11:13 np0005532048 nova_compute[253661]: 2025-11-22 09:11:13.642 253665 DEBUG nova.compute.manager [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:11:13 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 22 04:11:13 np0005532048 nova_compute[253661]: 2025-11-22 09:11:13.790 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:11:13 np0005532048 nova_compute[253661]: 2025-11-22 09:11:13.801 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:11:13 np0005532048 nova_compute[253661]: 2025-11-22 09:11:13.813 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'resources' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:11:13 np0005532048 nova_compute[253661]: 2025-11-22 09:11:13.823 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:11:13 np0005532048 nova_compute[253661]: 2025-11-22 09:11:13.831 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 04:11:13 np0005532048 nova_compute[253661]: 2025-11-22 09:11:13.836 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 04:11:14 np0005532048 nova_compute[253661]: 2025-11-22 09:11:14.176 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:14 np0005532048 nova_compute[253661]: 2025-11-22 09:11:14.623 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:14 np0005532048 nova_compute[253661]: 2025-11-22 09:11:14.931 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 393 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.0 MiB/s wr, 231 op/s
Nov 22 04:11:15 np0005532048 nova_compute[253661]: 2025-11-22 09:11:15.233 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:15 np0005532048 podman[286295]: 2025-11-22 09:11:15.403474621 +0000 UTC m=+0.089439837 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:11:15 np0005532048 podman[286296]: 2025-11-22 09:11:15.425989464 +0000 UTC m=+0.099595722 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 04:11:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 397 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.4 MiB/s wr, 201 op/s
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.250 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.251 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:11:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:11:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506221972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.732 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.830 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.831 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.836 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.836 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.839 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.840 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.857 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.857 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.864 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.866 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:11:17 np0005532048 nova_compute[253661]: 2025-11-22 09:11:17.867 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:11:18 np0005532048 nova_compute[253661]: 2025-11-22 09:11:18.085 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:11:18 np0005532048 nova_compute[253661]: 2025-11-22 09:11:18.086 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3537MB free_disk=59.786197662353516GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:11:18 np0005532048 nova_compute[253661]: 2025-11-22 09:11:18.087 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:11:18 np0005532048 nova_compute[253661]: 2025-11-22 09:11:18.087 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:11:18 np0005532048 nova_compute[253661]: 2025-11-22 09:11:18.154 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 3ae08a2f-348c-406b-8ffc-9acb8a542e1c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:11:18 np0005532048 nova_compute[253661]: 2025-11-22 09:11:18.155 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d99bd27b-0ff3-493e-a69c-6c7ec034aa81 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:11:18 np0005532048 nova_compute[253661]: 2025-11-22 09:11:18.155 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 96000606-0bc4-4cf1-9e33-360a640c2cb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:11:18 np0005532048 nova_compute[253661]: 2025-11-22 09:11:18.155 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance de145d76-062b-4362-bc82-09e09d2f9154 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:11:18 np0005532048 nova_compute[253661]: 2025-11-22 09:11:18.155 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:11:18 np0005532048 nova_compute[253661]: 2025-11-22 09:11:18.156 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:11:18 np0005532048 nova_compute[253661]: 2025-11-22 09:11:18.156 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:11:18 np0005532048 nova_compute[253661]: 2025-11-22 09:11:18.267 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:11:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:11:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/24817561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:11:18 np0005532048 nova_compute[253661]: 2025-11-22 09:11:18.798 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:11:18 np0005532048 nova_compute[253661]: 2025-11-22 09:11:18.805 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:11:18 np0005532048 nova_compute[253661]: 2025-11-22 09:11:18.822 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:11:18 np0005532048 nova_compute[253661]: 2025-11-22 09:11:18.901 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:11:18 np0005532048 nova_compute[253661]: 2025-11-22 09:11:18.902 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:11:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 402 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 188 op/s
Nov 22 04:11:19 np0005532048 podman[286380]: 2025-11-22 09:11:19.439683773 +0000 UTC m=+0.123443267 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 04:11:19 np0005532048 nova_compute[253661]: 2025-11-22 09:11:19.626 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:19 np0005532048 nova_compute[253661]: 2025-11-22 09:11:19.896 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:19 np0005532048 nova_compute[253661]: 2025-11-22 09:11:19.897 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:19 np0005532048 nova_compute[253661]: 2025-11-22 09:11:19.897 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:11:19 np0005532048 nova_compute[253661]: 2025-11-22 09:11:19.898 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:11:19 np0005532048 nova_compute[253661]: 2025-11-22 09:11:19.923 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:11:19 np0005532048 nova_compute[253661]: 2025-11-22 09:11:19.924 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:11:19 np0005532048 nova_compute[253661]: 2025-11-22 09:11:19.924 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 04:11:19 np0005532048 nova_compute[253661]: 2025-11-22 09:11:19.925 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:11:19 np0005532048 nova_compute[253661]: 2025-11-22 09:11:19.933 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:20 np0005532048 nova_compute[253661]: 2025-11-22 09:11:20.188 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 402 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Nov 22 04:11:22 np0005532048 nova_compute[253661]: 2025-11-22 09:11:22.309 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Updating instance_info_cache with network_info: [{"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:11:22 np0005532048 nova_compute[253661]: 2025-11-22 09:11:22.374 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-3ae08a2f-348c-406b-8ffc-9acb8a542e1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:11:22 np0005532048 nova_compute[253661]: 2025-11-22 09:11:22.374 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 04:11:22 np0005532048 nova_compute[253661]: 2025-11-22 09:11:22.375 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:22 np0005532048 nova_compute[253661]: 2025-11-22 09:11:22.375 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:22 np0005532048 nova_compute[253661]: 2025-11-22 09:11:22.375 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:22 np0005532048 nova_compute[253661]: 2025-11-22 09:11:22.375 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:22 np0005532048 nova_compute[253661]: 2025-11-22 09:11:22.376 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:11:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:11:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:11:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:11:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:11:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:11:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:11:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 405 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 22 04:11:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:11:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:11:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:11:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:11:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:11:23 np0005532048 nova_compute[253661]: 2025-11-22 09:11:23.891 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 22 04:11:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:11:23 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 17dc58d1-5a04-4007-a138-4991c36eb930 does not exist
Nov 22 04:11:23 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev b696cd8d-142f-4654-84de-d4472afcef8b does not exist
Nov 22 04:11:23 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 4211af3b-b854-499f-a4f0-4dd266d62d2b does not exist
Nov 22 04:11:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:11:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:11:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:11:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:11:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:11:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:11:24 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:11:24 np0005532048 nova_compute[253661]: 2025-11-22 09:11:24.284 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:24 np0005532048 nova_compute[253661]: 2025-11-22 09:11:24.284 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:24 np0005532048 nova_compute[253661]: 2025-11-22 09:11:24.305 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:11:24 np0005532048 nova_compute[253661]: 2025-11-22 09:11:24.378 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:24 np0005532048 nova_compute[253661]: 2025-11-22 09:11:24.380 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:24 np0005532048 nova_compute[253661]: 2025-11-22 09:11:24.387 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:11:24 np0005532048 nova_compute[253661]: 2025-11-22 09:11:24.387 253665 INFO nova.compute.claims [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:11:24 np0005532048 nova_compute[253661]: 2025-11-22 09:11:24.579 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:11:24 np0005532048 nova_compute[253661]: 2025-11-22 09:11:24.628 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:24 np0005532048 podman[286680]: 2025-11-22 09:11:24.727766404 +0000 UTC m=+0.033384806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:11:24 np0005532048 nova_compute[253661]: 2025-11-22 09:11:24.936 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:11:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833411026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.046 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.057 253665 DEBUG nova.compute.provider_tree [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.073 253665 DEBUG nova.scheduler.client.report [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:11:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.094 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.095 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.142 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.142 253665 DEBUG nova.network.neutron [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.161 253665 INFO nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:11:25 np0005532048 podman[286680]: 2025-11-22 09:11:25.171711396 +0000 UTC m=+0.477329778 container create fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:11:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 414 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 112 op/s
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.178 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:11:25 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:11:25 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.270 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.272 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.273 253665 INFO nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Creating image(s)#033[00m
Nov 22 04:11:25 np0005532048 systemd[1]: Started libpod-conmon-fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4.scope.
Nov 22 04:11:25 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.577 253665 DEBUG nova.storage.rbd_utils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.600 253665 DEBUG nova.storage.rbd_utils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.621 253665 DEBUG nova.storage.rbd_utils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.625 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.654 253665 DEBUG nova.policy [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97872d7ce91947789de976821b771135', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.694 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.694 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.695 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.695 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.718 253665 DEBUG nova.storage.rbd_utils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:11:25 np0005532048 nova_compute[253661]: 2025-11-22 09:11:25.722 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:11:25 np0005532048 podman[286680]: 2025-11-22 09:11:25.739576805 +0000 UTC m=+1.045195197 container init fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 04:11:25 np0005532048 podman[286680]: 2025-11-22 09:11:25.754608108 +0000 UTC m=+1.060226480 container start fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 04:11:25 np0005532048 systemd[1]: libpod-fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4.scope: Deactivated successfully.
Nov 22 04:11:25 np0005532048 objective_ganguly[286722]: 167 167
Nov 22 04:11:25 np0005532048 conmon[286722]: conmon fe5ee609ddaaa1b492ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4.scope/container/memory.events
Nov 22 04:11:26 np0005532048 podman[286680]: 2025-11-22 09:11:26.087054732 +0000 UTC m=+1.392673114 container attach fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 04:11:26 np0005532048 podman[286680]: 2025-11-22 09:11:26.088786164 +0000 UTC m=+1.394404596 container died fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 04:11:26 np0005532048 nova_compute[253661]: 2025-11-22 09:11:26.477 253665 DEBUG nova.network.neutron [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Successfully created port: 816016d3-f417-4c33-8f24-8e6360d6fa39 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:11:26 np0005532048 nova_compute[253661]: 2025-11-22 09:11:26.698 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:11:26 np0005532048 systemd[1]: var-lib-containers-storage-overlay-dd370b3588e41fce04bf3fe28717c0a3f5dfd6f80825a7ffa8cd8ede8ab67ddf-merged.mount: Deactivated successfully.
Nov 22 04:11:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 419 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Nov 22 04:11:27 np0005532048 podman[286680]: 2025-11-22 09:11:27.712613239 +0000 UTC m=+3.018231621 container remove fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 04:11:27 np0005532048 systemd[1]: libpod-conmon-fe5ee609ddaaa1b492ac44dd4ee1d14c4700460047ca48a6cef2493f18bc5ca4.scope: Deactivated successfully.
Nov 22 04:11:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:27.955 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:27.956 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:27.957 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:28 np0005532048 podman[286838]: 2025-11-22 09:11:27.934566961 +0000 UTC m=+0.027321780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:11:28 np0005532048 nova_compute[253661]: 2025-11-22 09:11:28.206 253665 DEBUG nova.network.neutron [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Successfully updated port: 816016d3-f417-4c33-8f24-8e6360d6fa39 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:11:28 np0005532048 nova_compute[253661]: 2025-11-22 09:11:28.228 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "refresh_cache-b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:11:28 np0005532048 nova_compute[253661]: 2025-11-22 09:11:28.228 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquired lock "refresh_cache-b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:11:28 np0005532048 nova_compute[253661]: 2025-11-22 09:11:28.229 253665 DEBUG nova.network.neutron [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:11:28 np0005532048 podman[286838]: 2025-11-22 09:11:28.254265727 +0000 UTC m=+0.347020466 container create 98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:11:28 np0005532048 nova_compute[253661]: 2025-11-22 09:11:28.391 253665 DEBUG nova.compute.manager [req-7010df99-5ab5-4f35-9b19-f5be471b841c req-6d63292e-9f53-4109-9b6d-5be7ff526d0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Received event network-changed-816016d3-f417-4c33-8f24-8e6360d6fa39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:11:28 np0005532048 nova_compute[253661]: 2025-11-22 09:11:28.392 253665 DEBUG nova.compute.manager [req-7010df99-5ab5-4f35-9b19-f5be471b841c req-6d63292e-9f53-4109-9b6d-5be7ff526d0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Refreshing instance network info cache due to event network-changed-816016d3-f417-4c33-8f24-8e6360d6fa39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:11:28 np0005532048 nova_compute[253661]: 2025-11-22 09:11:28.393 253665 DEBUG oslo_concurrency.lockutils [req-7010df99-5ab5-4f35-9b19-f5be471b841c req-6d63292e-9f53-4109-9b6d-5be7ff526d0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:11:28 np0005532048 systemd[1]: Started libpod-conmon-98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d.scope.
Nov 22 04:11:28 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:11:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b886dd8d0ed2c54fde7c008e290b2e52655dbc290f7a552f330fd0c00bcb29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b886dd8d0ed2c54fde7c008e290b2e52655dbc290f7a552f330fd0c00bcb29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b886dd8d0ed2c54fde7c008e290b2e52655dbc290f7a552f330fd0c00bcb29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b886dd8d0ed2c54fde7c008e290b2e52655dbc290f7a552f330fd0c00bcb29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b886dd8d0ed2c54fde7c008e290b2e52655dbc290f7a552f330fd0c00bcb29/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:28 np0005532048 nova_compute[253661]: 2025-11-22 09:11:28.716 253665 DEBUG nova.network.neutron [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:11:29 np0005532048 podman[286838]: 2025-11-22 09:11:29.009471347 +0000 UTC m=+1.102226106 container init 98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:11:29 np0005532048 podman[286838]: 2025-11-22 09:11:29.018045213 +0000 UTC m=+1.110799952 container start 98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 22 04:11:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 426 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 2.1 MiB/s wr, 45 op/s
Nov 22 04:11:29 np0005532048 nova_compute[253661]: 2025-11-22 09:11:29.631 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:29 np0005532048 podman[286838]: 2025-11-22 09:11:29.868824583 +0000 UTC m=+1.961579352 container attach 98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:11:29 np0005532048 nova_compute[253661]: 2025-11-22 09:11:29.939 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:30 np0005532048 friendly_ramanujan[286855]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:11:30 np0005532048 friendly_ramanujan[286855]: --> relative data size: 1.0
Nov 22 04:11:30 np0005532048 friendly_ramanujan[286855]: --> All data devices are unavailable
Nov 22 04:11:30 np0005532048 systemd[1]: libpod-98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d.scope: Deactivated successfully.
Nov 22 04:11:30 np0005532048 systemd[1]: libpod-98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d.scope: Consumed 1.078s CPU time.
Nov 22 04:11:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:30Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 04:11:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:30Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 04:11:30 np0005532048 podman[286884]: 2025-11-22 09:11:30.64980431 +0000 UTC m=+0.431674227 container died 98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 04:11:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 426 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 2.1 MiB/s wr, 34 op/s
Nov 22 04:11:31 np0005532048 nova_compute[253661]: 2025-11-22 09:11:31.210 253665 DEBUG nova.network.neutron [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Updating instance_info_cache with network_info: [{"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:11:31 np0005532048 nova_compute[253661]: 2025-11-22 09:11:31.233 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Releasing lock "refresh_cache-b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:11:31 np0005532048 nova_compute[253661]: 2025-11-22 09:11:31.233 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Instance network_info: |[{"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:11:31 np0005532048 nova_compute[253661]: 2025-11-22 09:11:31.233 253665 DEBUG oslo_concurrency.lockutils [req-7010df99-5ab5-4f35-9b19-f5be471b841c req-6d63292e-9f53-4109-9b6d-5be7ff526d0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:11:31 np0005532048 nova_compute[253661]: 2025-11-22 09:11:31.233 253665 DEBUG nova.network.neutron [req-7010df99-5ab5-4f35-9b19-f5be471b841c req-6d63292e-9f53-4109-9b6d-5be7ff526d0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Refreshing network info cache for port 816016d3-f417-4c33-8f24-8e6360d6fa39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:11:31 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d3b886dd8d0ed2c54fde7c008e290b2e52655dbc290f7a552f330fd0c00bcb29-merged.mount: Deactivated successfully.
Nov 22 04:11:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:32.558 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:11:32 np0005532048 nova_compute[253661]: 2025-11-22 09:11:32.559 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:32.559 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:11:32 np0005532048 nova_compute[253661]: 2025-11-22 09:11:32.965 253665 DEBUG nova.network.neutron [req-7010df99-5ab5-4f35-9b19-f5be471b841c req-6d63292e-9f53-4109-9b6d-5be7ff526d0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Updated VIF entry in instance network info cache for port 816016d3-f417-4c33-8f24-8e6360d6fa39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:11:32 np0005532048 nova_compute[253661]: 2025-11-22 09:11:32.966 253665 DEBUG nova.network.neutron [req-7010df99-5ab5-4f35-9b19-f5be471b841c req-6d63292e-9f53-4109-9b6d-5be7ff526d0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Updating instance_info_cache with network_info: [{"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:11:32 np0005532048 nova_compute[253661]: 2025-11-22 09:11:32.983 253665 DEBUG oslo_concurrency.lockutils [req-7010df99-5ab5-4f35-9b19-f5be471b841c req-6d63292e-9f53-4109-9b6d-5be7ff526d0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:11:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 437 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 2.3 MiB/s wr, 35 op/s
Nov 22 04:11:33 np0005532048 podman[286884]: 2025-11-22 09:11:33.655951861 +0000 UTC m=+3.437821708 container remove 98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:11:33 np0005532048 systemd[1]: libpod-conmon-98ed05415a28cef4c1e5ad295acdf74d17234a3fa55b5e755aa8c93822a5f81d.scope: Deactivated successfully.
Nov 22 04:11:34 np0005532048 podman[287039]: 2025-11-22 09:11:34.390252443 +0000 UTC m=+0.027048774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:11:34 np0005532048 podman[287039]: 2025-11-22 09:11:34.614697053 +0000 UTC m=+0.251493354 container create 8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:11:34 np0005532048 nova_compute[253661]: 2025-11-22 09:11:34.634 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:34 np0005532048 nova_compute[253661]: 2025-11-22 09:11:34.645 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 8.923s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:11:34 np0005532048 nova_compute[253661]: 2025-11-22 09:11:34.874 253665 DEBUG nova.storage.rbd_utils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] resizing rbd image b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:11:34 np0005532048 systemd[1]: Started libpod-conmon-8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297.scope.
Nov 22 04:11:34 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:11:35 np0005532048 nova_compute[253661]: 2025-11-22 09:11:35.001 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:35 np0005532048 nova_compute[253661]: 2025-11-22 09:11:35.006 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 22 04:11:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 468 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 3.5 MiB/s wr, 68 op/s
Nov 22 04:11:35 np0005532048 podman[287039]: 2025-11-22 09:11:35.220130099 +0000 UTC m=+0.856926420 container init 8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 04:11:35 np0005532048 podman[287039]: 2025-11-22 09:11:35.232955679 +0000 UTC m=+0.869751960 container start 8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 04:11:35 np0005532048 happy_hugle[287109]: 167 167
Nov 22 04:11:35 np0005532048 systemd[1]: libpod-8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297.scope: Deactivated successfully.
Nov 22 04:11:35 np0005532048 nova_compute[253661]: 2025-11-22 09:11:35.651 253665 DEBUG oslo_concurrency.lockutils [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:35 np0005532048 nova_compute[253661]: 2025-11-22 09:11:35.651 253665 DEBUG oslo_concurrency.lockutils [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:35 np0005532048 nova_compute[253661]: 2025-11-22 09:11:35.652 253665 DEBUG nova.objects.instance [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:11:35 np0005532048 podman[287039]: 2025-11-22 09:11:35.703862711 +0000 UTC m=+1.340658982 container attach 8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 04:11:35 np0005532048 podman[287039]: 2025-11-22 09:11:35.705291935 +0000 UTC m=+1.342088216 container died 8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:11:35 np0005532048 nova_compute[253661]: 2025-11-22 09:11:35.974 253665 DEBUG nova.objects.instance [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'pci_requests' on Instance uuid a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:11:35 np0005532048 nova_compute[253661]: 2025-11-22 09:11:35.985 253665 DEBUG nova.network.neutron [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:11:36 np0005532048 nova_compute[253661]: 2025-11-22 09:11:36.134 253665 DEBUG nova.policy [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36a143bf132742b89e463c08d46df5c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:11:36 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1d0ef816e35fd18265e7e74611866b9dd5a81c3e3c27ed517c6bf8d6cf2a798e-merged.mount: Deactivated successfully.
Nov 22 04:11:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 477 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 239 KiB/s rd, 2.8 MiB/s wr, 52 op/s
Nov 22 04:11:37 np0005532048 podman[287039]: 2025-11-22 09:11:37.342403472 +0000 UTC m=+2.979199753 container remove 8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hugle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 04:11:37 np0005532048 systemd[1]: libpod-conmon-8283ab4f3dc8a0820073da787fd7fd01bbff719e902a4c4164fa492a3501d297.scope: Deactivated successfully.
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.464 253665 DEBUG nova.objects.instance [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'migration_context' on Instance uuid b7c923dd-3ae9-4c51-8d6d-6305a71fe97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.478 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.479 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Ensure instance console log exists: /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.479 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.480 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.480 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.482 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Start _get_guest_xml network_info=[{"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.487 253665 WARNING nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.497 253665 DEBUG nova.virt.libvirt.host [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.498 253665 DEBUG nova.virt.libvirt.host [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.503 253665 DEBUG nova.virt.libvirt.host [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.503 253665 DEBUG nova.virt.libvirt.host [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.504 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.504 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.505 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.505 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.506 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.506 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.506 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.506 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.507 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.507 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.507 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.508 253665 DEBUG nova.virt.hardware [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.512 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:11:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:37.561 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:37 np0005532048 podman[287150]: 2025-11-22 09:11:37.561045573 +0000 UTC m=+0.033011476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:11:37 np0005532048 podman[287150]: 2025-11-22 09:11:37.781654861 +0000 UTC m=+0.253620755 container create d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 22 04:11:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:11:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1314115248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:11:37 np0005532048 nova_compute[253661]: 2025-11-22 09:11:37.993 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:11:38 np0005532048 systemd[1]: Started libpod-conmon-d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9.scope.
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.036 253665 DEBUG nova.storage.rbd_utils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.044 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:11:38 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:11:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598e03de036af8ebd9f5c2c6d5f56b652c5b20f5dc2f9e0c4e368e81a37a720d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598e03de036af8ebd9f5c2c6d5f56b652c5b20f5dc2f9e0c4e368e81a37a720d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598e03de036af8ebd9f5c2c6d5f56b652c5b20f5dc2f9e0c4e368e81a37a720d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598e03de036af8ebd9f5c2c6d5f56b652c5b20f5dc2f9e0c4e368e81a37a720d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.092 253665 DEBUG nova.network.neutron [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Successfully created port: fdaaf015-c32e-4960-a33a-2767bf447b71 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:11:38 np0005532048 podman[287150]: 2025-11-22 09:11:38.252705947 +0000 UTC m=+0.724671860 container init d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 04:11:38 np0005532048 podman[287150]: 2025-11-22 09:11:38.267122075 +0000 UTC m=+0.739087958 container start d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:11:38 np0005532048 podman[287150]: 2025-11-22 09:11:38.462002053 +0000 UTC m=+0.933967976 container attach d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:11:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:11:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2503735232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.528 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.530 253665 DEBUG nova.virt.libvirt.vif [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:11:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1474933117',display_name='tempest-ImagesTestJSON-server-1474933117',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1474933117',id=25,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-vsxkgtmn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:11:25Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=b7c923dd-3ae9-4c51-8d6d-6305a71fe97f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.530 253665 DEBUG nova.network.os_vif_util [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.531 253665 DEBUG nova.network.os_vif_util [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:74:61,bridge_name='br-int',has_traffic_filtering=True,id=816016d3-f417-4c33-8f24-8e6360d6fa39,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap816016d3-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.532 253665 DEBUG nova.objects.instance [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'pci_devices' on Instance uuid b7c923dd-3ae9-4c51-8d6d-6305a71fe97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.548 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  <uuid>b7c923dd-3ae9-4c51-8d6d-6305a71fe97f</uuid>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  <name>instance-00000019</name>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <nova:name>tempest-ImagesTestJSON-server-1474933117</nova:name>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:11:37</nova:creationTime>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:11:38 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:        <nova:user uuid="97872d7ce91947789de976821b771135">tempest-ImagesTestJSON-1798612164-project-member</nova:user>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:        <nova:project uuid="d6a9a80b05bf4bb3acb99c5e55603a36">tempest-ImagesTestJSON-1798612164</nova:project>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:        <nova:port uuid="816016d3-f417-4c33-8f24-8e6360d6fa39">
Nov 22 04:11:38 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <entry name="serial">b7c923dd-3ae9-4c51-8d6d-6305a71fe97f</entry>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <entry name="uuid">b7c923dd-3ae9-4c51-8d6d-6305a71fe97f</entry>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk">
Nov 22 04:11:38 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:11:38 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk.config">
Nov 22 04:11:38 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:11:38 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:78:74:61"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <target dev="tap816016d3-f4"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f/console.log" append="off"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:11:38 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:11:38 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:11:38 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:11:38 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.550 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Preparing to wait for external event network-vif-plugged-816016d3-f417-4c33-8f24-8e6360d6fa39 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.551 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.551 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.551 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.552 253665 DEBUG nova.virt.libvirt.vif [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:11:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1474933117',display_name='tempest-ImagesTestJSON-server-1474933117',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1474933117',id=25,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-vsxkgtmn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:11:25Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=b7c923dd-3ae9-4c51-8d6d-6305a71fe97f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.553 253665 DEBUG nova.network.os_vif_util [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.553 253665 DEBUG nova.network.os_vif_util [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:74:61,bridge_name='br-int',has_traffic_filtering=True,id=816016d3-f417-4c33-8f24-8e6360d6fa39,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap816016d3-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.554 253665 DEBUG os_vif [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:74:61,bridge_name='br-int',has_traffic_filtering=True,id=816016d3-f417-4c33-8f24-8e6360d6fa39,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap816016d3-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.555 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.556 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.556 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.560 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.561 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap816016d3-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.561 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap816016d3-f4, col_values=(('external_ids', {'iface-id': '816016d3-f417-4c33-8f24-8e6360d6fa39', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:74:61', 'vm-uuid': 'b7c923dd-3ae9-4c51-8d6d-6305a71fe97f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:38 np0005532048 NetworkManager[48920]: <info>  [1763802698.5649] manager: (tap816016d3-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.565 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.573 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.574 253665 INFO os_vif [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:74:61,bridge_name='br-int',has_traffic_filtering=True,id=816016d3-f417-4c33-8f24-8e6360d6fa39,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap816016d3-f4')#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.825 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.826 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.826 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No VIF found with MAC fa:16:3e:78:74:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.827 253665 INFO nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Using config drive#033[00m
Nov 22 04:11:38 np0005532048 nova_compute[253661]: 2025-11-22 09:11:38.850 253665 DEBUG nova.storage.rbd_utils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:11:39 np0005532048 nova_compute[253661]: 2025-11-22 09:11:39.094 253665 DEBUG nova.network.neutron [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Successfully updated port: fdaaf015-c32e-4960-a33a-2767bf447b71 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:11:39 np0005532048 nova_compute[253661]: 2025-11-22 09:11:39.156 253665 DEBUG oslo_concurrency.lockutils [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:11:39 np0005532048 nova_compute[253661]: 2025-11-22 09:11:39.157 253665 DEBUG oslo_concurrency.lockutils [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:11:39 np0005532048 nova_compute[253661]: 2025-11-22 09:11:39.157 253665 DEBUG nova.network.neutron [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:11:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 477 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 246 KiB/s rd, 2.5 MiB/s wr, 63 op/s
Nov 22 04:11:39 np0005532048 nova_compute[253661]: 2025-11-22 09:11:39.196 253665 INFO nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Creating config drive at /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f/disk.config#033[00m
Nov 22 04:11:39 np0005532048 nova_compute[253661]: 2025-11-22 09:11:39.202 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk6sgvmsy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]: {
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:    "0": [
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:        {
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "devices": [
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "/dev/loop3"
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            ],
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "lv_name": "ceph_lv0",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "lv_size": "21470642176",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "name": "ceph_lv0",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "tags": {
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.cluster_name": "ceph",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.crush_device_class": "",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.encrypted": "0",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.osd_id": "0",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.type": "block",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.vdo": "0"
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            },
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "type": "block",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "vg_name": "ceph_vg0"
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:        }
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:    ],
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:    "1": [
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:        {
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "devices": [
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "/dev/loop4"
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            ],
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "lv_name": "ceph_lv1",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "lv_size": "21470642176",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "name": "ceph_lv1",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "tags": {
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.cluster_name": "ceph",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.crush_device_class": "",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.encrypted": "0",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.osd_id": "1",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.type": "block",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.vdo": "0"
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            },
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "type": "block",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "vg_name": "ceph_vg1"
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:        }
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:    ],
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:    "2": [
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:        {
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "devices": [
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "/dev/loop5"
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            ],
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "lv_name": "ceph_lv2",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "lv_size": "21470642176",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "name": "ceph_lv2",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "tags": {
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.cluster_name": "ceph",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.crush_device_class": "",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.encrypted": "0",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.osd_id": "2",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.type": "block",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:                "ceph.vdo": "0"
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            },
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "type": "block",
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:            "vg_name": "ceph_vg2"
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:        }
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]:    ]
Nov 22 04:11:39 np0005532048 happy_meninsky[287202]: }
Nov 22 04:11:39 np0005532048 nova_compute[253661]: 2025-11-22 09:11:39.294 253665 WARNING nova.network.neutron [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] 5e2cd359-c68f-4256-90e8-0ad40aff8a00 already exists in list: networks containing: ['5e2cd359-c68f-4256-90e8-0ad40aff8a00']. ignoring it#033[00m
Nov 22 04:11:39 np0005532048 nova_compute[253661]: 2025-11-22 09:11:39.340 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk6sgvmsy" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:11:39 np0005532048 systemd[1]: libpod-d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9.scope: Deactivated successfully.
Nov 22 04:11:39 np0005532048 podman[287150]: 2025-11-22 09:11:39.344920857 +0000 UTC m=+1.816886740 container died d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 04:11:39 np0005532048 nova_compute[253661]: 2025-11-22 09:11:39.425 253665 DEBUG nova.storage.rbd_utils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:11:39 np0005532048 nova_compute[253661]: 2025-11-22 09:11:39.434 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f/disk.config b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:11:39 np0005532048 nova_compute[253661]: 2025-11-22 09:11:39.637 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:39 np0005532048 systemd[1]: var-lib-containers-storage-overlay-598e03de036af8ebd9f5c2c6d5f56b652c5b20f5dc2f9e0c4e368e81a37a720d-merged.mount: Deactivated successfully.
Nov 22 04:11:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:40 np0005532048 podman[287150]: 2025-11-22 09:11:40.325533377 +0000 UTC m=+2.797499260 container remove d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meninsky, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 04:11:40 np0005532048 systemd[1]: libpod-conmon-d2749bc9e9e4cca924e64b989e982db18f354ec63b63ca1143b47075606ec1f9.scope: Deactivated successfully.
Nov 22 04:11:41 np0005532048 nova_compute[253661]: 2025-11-22 09:11:41.138 253665 DEBUG nova.compute.manager [req-e8904a3c-8380-4ac0-9a35-053b0c995b28 req-201e21d7-edce-4a11-9faf-2b160776ce58 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-changed-fdaaf015-c32e-4960-a33a-2767bf447b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:11:41 np0005532048 nova_compute[253661]: 2025-11-22 09:11:41.138 253665 DEBUG nova.compute.manager [req-e8904a3c-8380-4ac0-9a35-053b0c995b28 req-201e21d7-edce-4a11-9faf-2b160776ce58 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Refreshing instance network info cache due to event network-changed-fdaaf015-c32e-4960-a33a-2767bf447b71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:11:41 np0005532048 nova_compute[253661]: 2025-11-22 09:11:41.139 253665 DEBUG oslo_concurrency.lockutils [req-e8904a3c-8380-4ac0-9a35-053b0c995b28 req-201e21d7-edce-4a11-9faf-2b160776ce58 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:11:41 np0005532048 podman[287453]: 2025-11-22 09:11:41.063285592 +0000 UTC m=+0.042448315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:11:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 477 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 242 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Nov 22 04:11:41 np0005532048 podman[287453]: 2025-11-22 09:11:41.491062654 +0000 UTC m=+0.470225307 container create fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:11:41 np0005532048 systemd[1]: Started libpod-conmon-fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471.scope.
Nov 22 04:11:41 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:11:41 np0005532048 podman[287453]: 2025-11-22 09:11:41.92352285 +0000 UTC m=+0.902685513 container init fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_diffie, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 04:11:41 np0005532048 podman[287453]: 2025-11-22 09:11:41.941064804 +0000 UTC m=+0.920227457 container start fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 04:11:41 np0005532048 kind_diffie[287469]: 167 167
Nov 22 04:11:41 np0005532048 systemd[1]: libpod-fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471.scope: Deactivated successfully.
Nov 22 04:11:42 np0005532048 podman[287453]: 2025-11-22 09:11:42.02722389 +0000 UTC m=+1.006386543 container attach fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_diffie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:11:42 np0005532048 podman[287453]: 2025-11-22 09:11:42.027886267 +0000 UTC m=+1.007048920 container died fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.033 253665 DEBUG oslo_concurrency.processutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f/disk.config b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.034 253665 INFO nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Deleting local config drive /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f/disk.config because it was imported into RBD.#033[00m
Nov 22 04:11:42 np0005532048 kernel: tap816016d3-f4: entered promiscuous mode
Nov 22 04:11:42 np0005532048 NetworkManager[48920]: <info>  [1763802702.1141] manager: (tap816016d3-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.117 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:42Z|00125|binding|INFO|Claiming lport 816016d3-f417-4c33-8f24-8e6360d6fa39 for this chassis.
Nov 22 04:11:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:42Z|00126|binding|INFO|816016d3-f417-4c33-8f24-8e6360d6fa39: Claiming fa:16:3e:78:74:61 10.100.0.9
Nov 22 04:11:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:42Z|00127|binding|INFO|Setting lport 816016d3-f417-4c33-8f24-8e6360d6fa39 ovn-installed in OVS
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.142 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.154 253665 DEBUG nova.network.neutron [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updating instance_info_cache with network_info: [{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.157 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:42Z|00128|binding|INFO|Setting lport 816016d3-f417-4c33-8f24-8e6360d6fa39 up in Southbound
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.159 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:74:61 10.100.0.9'], port_security=['fa:16:3e:78:74:61 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b7c923dd-3ae9-4c51-8d6d-6305a71fe97f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=816016d3-f417-4c33-8f24-8e6360d6fa39) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.160 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 816016d3-f417-4c33-8f24-8e6360d6fa39 in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 bound to our chassis#033[00m
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.162 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51#033[00m
Nov 22 04:11:42 np0005532048 systemd-udevd[287498]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:11:42 np0005532048 systemd-machined[215941]: New machine qemu-29-instance-00000019.
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.185 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a71043cf-e128-4f7b-984c-a420bb461dce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.186 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2abeeeb2-21 in ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:11:42 np0005532048 NetworkManager[48920]: <info>  [1763802702.1869] device (tap816016d3-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:11:42 np0005532048 NetworkManager[48920]: <info>  [1763802702.1887] device (tap816016d3-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.190 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2abeeeb2-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.190 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d797192b-91be-4d5c-9299-78466e2f4091]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:42 np0005532048 systemd[1]: Started Virtual Machine qemu-29-instance-00000019.
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.192 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[315420f6-37d5-416d-99fa-8b1bd0d84b93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.217 253665 DEBUG oslo_concurrency.lockutils [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.215 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8f954800-4ce9-4f5e-b6e5-e290d366d3e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.218 253665 DEBUG oslo_concurrency.lockutils [req-e8904a3c-8380-4ac0-9a35-053b0c995b28 req-201e21d7-edce-4a11-9faf-2b160776ce58 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.218 253665 DEBUG nova.network.neutron [req-e8904a3c-8380-4ac0-9a35-053b0c995b28 req-201e21d7-edce-4a11-9faf-2b160776ce58 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Refreshing network info cache for port fdaaf015-c32e-4960-a33a-2767bf447b71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.225 253665 DEBUG nova.virt.libvirt.vif [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.225 253665 DEBUG nova.network.os_vif_util [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.226 253665 DEBUG nova.network.os_vif_util [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.227 253665 DEBUG os_vif [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.227 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.228 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.228 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.236 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.237 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdaaf015-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.237 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfdaaf015-c3, col_values=(('external_ids', {'iface-id': 'fdaaf015-c32e-4960-a33a-2767bf447b71', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:7a:b5', 'vm-uuid': 'a27c3dda-3eb4-4e57-8ba7-ceb7743442e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.239 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:42 np0005532048 NetworkManager[48920]: <info>  [1763802702.2402] manager: (tapfdaaf015-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.241 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.246 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.246 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[08d839da-30fe-47e8-ad9e-85b97148c551]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.247 253665 INFO os_vif [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3')#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.248 253665 DEBUG nova.virt.libvirt.vif [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.248 253665 DEBUG nova.network.os_vif_util [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.249 253665 DEBUG nova.network.os_vif_util [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.254 253665 DEBUG nova.virt.libvirt.guest [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] attach device xml: <interface type="ethernet">
Nov 22 04:11:42 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:f8:7a:b5"/>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:  <target dev="tapfdaaf015-c3"/>
Nov 22 04:11:42 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:11:42 np0005532048 nova_compute[253661]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 22 04:11:42 np0005532048 kernel: tapfdaaf015-c3: entered promiscuous mode
Nov 22 04:11:42 np0005532048 NetworkManager[48920]: <info>  [1763802702.2695] manager: (tapfdaaf015-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Nov 22 04:11:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:42Z|00129|binding|INFO|Claiming lport fdaaf015-c32e-4960-a33a-2767bf447b71 for this chassis.
Nov 22 04:11:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:42Z|00130|binding|INFO|fdaaf015-c32e-4960-a33a-2767bf447b71: Claiming fa:16:3e:f8:7a:b5 10.100.0.14
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.270 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:42 np0005532048 NetworkManager[48920]: <info>  [1763802702.2830] device (tapfdaaf015-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:11:42 np0005532048 NetworkManager[48920]: <info>  [1763802702.2842] device (tapfdaaf015-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.288 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f8bf28ac-3336-4793-9fae-f49c993bfcff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:42Z|00131|binding|INFO|Setting lport fdaaf015-c32e-4960-a33a-2767bf447b71 ovn-installed in OVS
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.293 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.295 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[95806123-3de2-4079-b6b9-1028dbba31cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:42 np0005532048 NetworkManager[48920]: <info>  [1763802702.2966] manager: (tap2abeeeb2-20): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Nov 22 04:11:42 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b028fb0a2de50f97ec65d48c0710009be0753d31bd4cc49498aa0e21246449b1-merged.mount: Deactivated successfully.
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.338 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a6d636-8bd9-49c0-a282-417970e3fa42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.342 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4b77d2a8-690d-40f9-b709-10f89f7e05e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:42Z|00132|binding|INFO|Setting lport fdaaf015-c32e-4960-a33a-2767bf447b71 up in Southbound
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.346 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:7a:b5 10.100.0.14'], port_security=['fa:16:3e:f8:7a:b5 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a27c3dda-3eb4-4e57-8ba7-ceb7743442e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=fdaaf015-c32e-4960-a33a-2767bf447b71) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:11:42 np0005532048 NetworkManager[48920]: <info>  [1763802702.3765] device (tap2abeeeb2-20): carrier: link connected
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.384 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a44814-25f4-4144-95cd-5ed72740fc8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.407 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[da9236ae-fd6c-4cf5-b422-3dd11850e1e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555311, 'reachable_time': 37064, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287538, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.424 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b165fa52-3119-4756-866f-23f746e0af10]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:bff7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 555311, 'tstamp': 555311}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287539, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.446 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d99b6415-3869-448d-be81-e24276d77ac4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555311, 'reachable_time': 37064, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287540, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.458 253665 DEBUG nova.virt.libvirt.driver [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.458 253665 DEBUG nova.virt.libvirt.driver [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.459 253665 DEBUG nova.virt.libvirt.driver [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:14:85:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.459 253665 DEBUG nova.virt.libvirt.driver [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:f8:7a:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.490 253665 DEBUG nova.virt.libvirt.guest [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:11:42 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2040235378</nova:name>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:11:42</nova:creationTime>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:11:42 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:    <nova:port uuid="f70fa10f-f756-4faa-aebf-deeb0b129704">
Nov 22 04:11:42 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:    <nova:port uuid="fdaaf015-c32e-4960-a33a-2767bf447b71">
Nov 22 04:11:42 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:11:42 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:11:42 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:11:42 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.492 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[01bf9721-86d4-475e-bd8e-b5ad3d71b344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.519 253665 DEBUG oslo_concurrency.lockutils [None req-23c57d2b-d92f-47bf-b485-a38a4adb0b3b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.549 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c400c04f-12c6-4e47-a14e-a20c7aac0cba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.551 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.551 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.552 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2abeeeb2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:42 np0005532048 NetworkManager[48920]: <info>  [1763802702.5559] manager: (tap2abeeeb2-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Nov 22 04:11:42 np0005532048 kernel: tap2abeeeb2-20: entered promiscuous mode
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.558 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.560 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2abeeeb2-20, col_values=(('external_ids', {'iface-id': '3249a299-7633-4c70-aa35-5f648ecb0d7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:42Z|00133|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.563 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:42 np0005532048 nova_compute[253661]: 2025-11-22 09:11:42.579 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.581 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.582 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f974aee6-1eca-4703-b29f-0e6edcd7e95a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.583 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:42.585 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'env', 'PROCESS_TAG=haproxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 04:11:42 np0005532048 podman[287453]: 2025-11-22 09:11:42.995591695 +0000 UTC m=+1.974754338 container remove fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 22 04:11:43 np0005532048 systemd[1]: libpod-conmon-fffb9523ffcc9a345d0bbe26f370b97fc4fc8311e81cd20408a4a1e298cb5471.scope: Deactivated successfully.
Nov 22 04:11:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 482 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 244 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 22 04:11:43 np0005532048 podman[287591]: 2025-11-22 09:11:43.090328149 +0000 UTC m=+0.027834691 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.397 253665 DEBUG nova.compute.manager [req-17614586-6782-4765-ac4e-e1de9c22c50f req-7c6f23e9-e82d-4e64-b48e-4392b6afb5ac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Received event network-vif-plugged-816016d3-f417-4c33-8f24-8e6360d6fa39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.398 253665 DEBUG oslo_concurrency.lockutils [req-17614586-6782-4765-ac4e-e1de9c22c50f req-7c6f23e9-e82d-4e64-b48e-4392b6afb5ac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.399 253665 DEBUG oslo_concurrency.lockutils [req-17614586-6782-4765-ac4e-e1de9c22c50f req-7c6f23e9-e82d-4e64-b48e-4392b6afb5ac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.399 253665 DEBUG oslo_concurrency.lockutils [req-17614586-6782-4765-ac4e-e1de9c22c50f req-7c6f23e9-e82d-4e64-b48e-4392b6afb5ac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.399 253665 DEBUG nova.compute.manager [req-17614586-6782-4765-ac4e-e1de9c22c50f req-7c6f23e9-e82d-4e64-b48e-4392b6afb5ac 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Processing event network-vif-plugged-816016d3-f417-4c33-8f24-8e6360d6fa39 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.451 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802703.4506342, b7c923dd-3ae9-4c51-8d6d-6305a71fe97f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.452 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] VM Started (Lifecycle Event)
Nov 22 04:11:43 np0005532048 podman[287616]: 2025-11-22 09:11:43.37491928 +0000 UTC m=+0.181661231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.454 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.457 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.460 253665 INFO nova.virt.libvirt.driver [-] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Instance spawned successfully.
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.461 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.471 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:11:43 np0005532048 podman[287591]: 2025-11-22 09:11:43.472657226 +0000 UTC m=+0.410163738 container create 3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.482 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.486 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.487 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.488 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.488 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.489 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.489 253665 DEBUG nova.virt.libvirt.driver [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.517 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.518 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802703.4518642, b7c923dd-3ae9-4c51-8d6d-6305a71fe97f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.518 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] VM Paused (Lifecycle Event)
Nov 22 04:11:43 np0005532048 systemd[1]: Started libpod-conmon-3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e.scope.
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.547 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.553 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802703.4565256, b7c923dd-3ae9-4c51-8d6d-6305a71fe97f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.553 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] VM Resumed (Lifecycle Event)
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.560 253665 INFO nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Took 18.29 seconds to spawn the instance on the hypervisor.
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.560 253665 DEBUG nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:11:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:11:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f397d4018771e7d833e8521b0dd4b258c9b2b44865d862d4ba5ce5ea501120/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.570 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.575 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.595 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:11:43 np0005532048 podman[287616]: 2025-11-22 09:11:43.624514697 +0000 UTC m=+0.431256618 container create 23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.634 253665 INFO nova.compute.manager [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Took 19.28 seconds to build instance.
Nov 22 04:11:43 np0005532048 nova_compute[253661]: 2025-11-22 09:11:43.653 253665 DEBUG oslo_concurrency.lockutils [None req-14959718-d963-4c12-b339-3b3c60891db2 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.369s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:11:43 np0005532048 podman[287591]: 2025-11-22 09:11:43.66654909 +0000 UTC m=+0.604055632 container init 3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 04:11:43 np0005532048 podman[287591]: 2025-11-22 09:11:43.673949198 +0000 UTC m=+0.611455720 container start 3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:11:43 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[287649]: [NOTICE]   (287653) : New worker (287658) forked
Nov 22 04:11:43 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[287649]: [NOTICE]   (287653) : Loading success.
Nov 22 04:11:43 np0005532048 systemd[1]: Started libpod-conmon-23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a.scope.
Nov 22 04:11:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:11:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9df34a710fe14f35f303c42fe1f643e5ca833c25993cce7d234e4e05f130db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9df34a710fe14f35f303c42fe1f643e5ca833c25993cce7d234e4e05f130db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9df34a710fe14f35f303c42fe1f643e5ca833c25993cce7d234e4e05f130db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9df34a710fe14f35f303c42fe1f643e5ca833c25993cce7d234e4e05f130db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:43.866 162862 INFO neutron.agent.ovn.metadata.agent [-] Port fdaaf015-c32e-4960-a33a-2767bf447b71 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis
Nov 22 04:11:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:43.870 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 04:11:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:43.886 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a02b51b2-695d-433d-be72-67ab685fcb6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:11:43 np0005532048 podman[287616]: 2025-11-22 09:11:43.898507453 +0000 UTC m=+0.705249394 container init 23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 04:11:43 np0005532048 podman[287616]: 2025-11-22 09:11:43.908840851 +0000 UTC m=+0.715582772 container start 23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 22 04:11:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:43.927 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[04057188-2e7e-483b-a802-5521e1521388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:11:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:43.932 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[36d066bf-bf3c-4d3d-9ebb-bc2f245282f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:11:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:43.964 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1f94e8ef-e374-4eb4-8808-3e244ca60f15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:11:43 np0005532048 podman[287616]: 2025-11-22 09:11:43.9859264 +0000 UTC m=+0.792668481 container attach 23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:11:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:43.986 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b2dfef8d-9123-4840-920e-bb2da22383fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550624, 'reachable_time': 27327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287677, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:11:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:44.007 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7b9cf291-2428-4dd8-8c1b-a5ebad946b01]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550640, 'tstamp': 550640}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287678, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550644, 'tstamp': 550644}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287678, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:44.009 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:44 np0005532048 nova_compute[253661]: 2025-11-22 09:11:44.011 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:44 np0005532048 nova_compute[253661]: 2025-11-22 09:11:44.012 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:44.012 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:44.013 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:11:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:44.013 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:44.013 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:11:44 np0005532048 nova_compute[253661]: 2025-11-22 09:11:44.237 253665 DEBUG nova.network.neutron [req-e8904a3c-8380-4ac0-9a35-053b0c995b28 req-201e21d7-edce-4a11-9faf-2b160776ce58 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updated VIF entry in instance network info cache for port fdaaf015-c32e-4960-a33a-2767bf447b71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:11:44 np0005532048 nova_compute[253661]: 2025-11-22 09:11:44.238 253665 DEBUG nova.network.neutron [req-e8904a3c-8380-4ac0-9a35-053b0c995b28 req-201e21d7-edce-4a11-9faf-2b160776ce58 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updating instance_info_cache with network_info: [{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:11:44 np0005532048 nova_compute[253661]: 2025-11-22 09:11:44.251 253665 DEBUG oslo_concurrency.lockutils [req-e8904a3c-8380-4ac0-9a35-053b0c995b28 req-201e21d7-edce-4a11-9faf-2b160776ce58 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:11:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:44Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:7a:b5 10.100.0.14
Nov 22 04:11:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:44Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:7a:b5 10.100.0.14
Nov 22 04:11:44 np0005532048 nova_compute[253661]: 2025-11-22 09:11:44.639 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:44 np0005532048 nova_compute[253661]: 2025-11-22 09:11:44.904 253665 INFO nova.compute.manager [None req-063f50f1-0146-47a3-b680-77cb0b70a626 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Pausing#033[00m
Nov 22 04:11:44 np0005532048 nova_compute[253661]: 2025-11-22 09:11:44.907 253665 DEBUG nova.objects.instance [None req-063f50f1-0146-47a3-b680-77cb0b70a626 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'flavor' on Instance uuid b7c923dd-3ae9-4c51-8d6d-6305a71fe97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:11:44 np0005532048 nova_compute[253661]: 2025-11-22 09:11:44.936 253665 DEBUG nova.compute.manager [None req-063f50f1-0146-47a3-b680-77cb0b70a626 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:11:44 np0005532048 nova_compute[253661]: 2025-11-22 09:11:44.937 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802704.9355547, b7c923dd-3ae9-4c51-8d6d-6305a71fe97f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:11:44 np0005532048 nova_compute[253661]: 2025-11-22 09:11:44.937 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:11:44 np0005532048 nova_compute[253661]: 2025-11-22 09:11:44.959 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:11:44 np0005532048 nova_compute[253661]: 2025-11-22 09:11:44.962 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:11:44 np0005532048 nova_compute[253661]: 2025-11-22 09:11:44.989 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 22 04:11:45 np0005532048 hardcore_black[287667]: {
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:        "osd_id": 1,
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:        "type": "bluestore"
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:    },
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:        "osd_id": 0,
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:        "type": "bluestore"
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:    },
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:        "osd_id": 2,
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:        "type": "bluestore"
Nov 22 04:11:45 np0005532048 hardcore_black[287667]:    }
Nov 22 04:11:45 np0005532048 hardcore_black[287667]: }
Nov 22 04:11:45 np0005532048 systemd[1]: libpod-23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a.scope: Deactivated successfully.
Nov 22 04:11:45 np0005532048 systemd[1]: libpod-23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a.scope: Consumed 1.107s CPU time.
Nov 22 04:11:45 np0005532048 podman[287616]: 2025-11-22 09:11:45.054056489 +0000 UTC m=+1.860798410 container died 23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:11:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 484 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 1.6 MiB/s wr, 70 op/s
Nov 22 04:11:45 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7b9df34a710fe14f35f303c42fe1f643e5ca833c25993cce7d234e4e05f130db-merged.mount: Deactivated successfully.
Nov 22 04:11:45 np0005532048 podman[287616]: 2025-11-22 09:11:45.418457094 +0000 UTC m=+2.225199015 container remove 23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_black, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:11:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:11:45 np0005532048 systemd[1]: libpod-conmon-23b3841df15b906fa7939ff2b9d92310ba085ce40e967e953ebd201adfa1450a.scope: Deactivated successfully.
Nov 22 04:11:45 np0005532048 podman[287722]: 2025-11-22 09:11:45.65058449 +0000 UTC m=+0.075383489 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.669 253665 DEBUG oslo_concurrency.lockutils [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-fdaaf015-c32e-4960-a33a-2767bf447b71" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.669 253665 DEBUG oslo_concurrency.lockutils [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-fdaaf015-c32e-4960-a33a-2767bf447b71" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.687 253665 DEBUG nova.objects.instance [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:11:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:11:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:11:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:11:45 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 14e13e3a-f3b9-4e0b-989a-bb5d9f09feed does not exist
Nov 22 04:11:45 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev cab48b88-7b32-4145-8bdc-506f181e7aa1 does not exist
Nov 22 04:11:45 np0005532048 podman[287720]: 2025-11-22 09:11:45.729470531 +0000 UTC m=+0.155517190 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.740 253665 DEBUG nova.virt.libvirt.vif [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.741 253665 DEBUG nova.network.os_vif_util [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.742 253665 DEBUG nova.network.os_vif_util [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.747 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:7a:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfdaaf015-c3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.750 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:7a:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfdaaf015-c3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.753 253665 DEBUG nova.virt.libvirt.driver [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Attempting to detach device tapfdaaf015-c3 from instance a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.753 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] detach device xml: <interface type="ethernet">
Nov 22 04:11:45 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:f8:7a:b5"/>
Nov 22 04:11:45 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:11:45 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:11:45 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:11:45 np0005532048 nova_compute[253661]:  <target dev="tapfdaaf015-c3"/>
Nov 22 04:11:45 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:11:45 np0005532048 nova_compute[253661]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.799 253665 DEBUG nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Received event network-vif-plugged-816016d3-f417-4c33-8f24-8e6360d6fa39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.800 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.801 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.801 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.801 253665 DEBUG nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] No waiting events found dispatching network-vif-plugged-816016d3-f417-4c33-8f24-8e6360d6fa39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.801 253665 WARNING nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Received unexpected event network-vif-plugged-816016d3-f417-4c33-8f24-8e6360d6fa39 for instance with vm_state paused and task_state None.
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.802 253665 DEBUG nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.802 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.802 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.802 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.803 253665 DEBUG nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] No waiting events found dispatching network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.803 253665 WARNING nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received unexpected event network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 for instance with vm_state active and task_state None.
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.803 253665 DEBUG nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.803 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.803 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.804 253665 DEBUG oslo_concurrency.lockutils [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.804 253665 DEBUG nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] No waiting events found dispatching network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:11:45 np0005532048 nova_compute[253661]: 2025-11-22 09:11:45.804 253665 WARNING nova.compute.manager [req-62322d98-6a3a-48bb-b9f4-b143dc66420b req-81489ec3-4b09-400c-b972-ccb2e58f1a36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received unexpected event network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 for instance with vm_state active and task_state None.
Nov 22 04:11:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:46Z|00134|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 04:11:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:46Z|00135|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 04:11:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:46Z|00136|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.074 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.148 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.210 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:7a:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfdaaf015-c3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.215 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f8:7a:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfdaaf015-c3"/></interface> not found in domain: <domain type='kvm' id='27'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <name>instance-00000018</name>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <uuid>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</uuid>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2040235378</nova:name>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:11:42</nova:creationTime>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:port uuid="f70fa10f-f756-4faa-aebf-deeb0b129704">
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:port uuid="fdaaf015-c32e-4960-a33a-2767bf447b71">
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:11:46 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <resource>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <partition>/machine</partition>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </resource>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <entry name='serial'>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</entry>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <entry name='uuid'>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</entry>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <cpu mode='custom' match='exact' check='full'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <vendor>AMD</vendor>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='x2apic'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc-deadline'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='hypervisor'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc_adjust'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='spec-ctrl'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='stibp'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='ssbd'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='cmp_legacy'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='overflow-recov'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='succor'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='ibrs'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='amd-ssbd'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='virt-ssbd'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='lbrv'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='tsc-scale'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='vmcb-clean'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='flushbyasid'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pause-filter'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pfthreshold'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='xsaves'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svm'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='topoext'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='npt'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='nrip-save'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk' index='2'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='virtio-disk0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config' index='1'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='sata0-0-0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pcie.0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.1'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.2'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.3'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.4'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.5'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.6'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.7'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.8'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.9'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.10'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.11'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.12'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.13'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.14'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.15'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.16'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.17'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.18'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.19'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.20'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.21'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='22' port='0x25'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.22'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='23' port='0x26'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.23'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='24' port='0x27'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.24'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='25' port='0x28'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.25'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-pci-bridge'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.26'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='usb'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='sata' index='0'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='ide'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:14:85:74'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target dev='tapf70fa10f-f7'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='net0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:f8:7a:b5'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target dev='tapfdaaf015-c3'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='net1'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <serial type='pty'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/console.log' append='off'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target type='isa-serial' port='0'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:        <model name='isa-serial'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      </target>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <console type='pty' tty='/dev/pts/0'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/console.log' append='off'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target type='serial' port='0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </console>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <input type='tablet' bus='usb'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='input0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='usb' bus='0' port='1'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <input type='mouse' bus='ps2'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='input1'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <input type='keyboard' bus='ps2'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='input2'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <listen type='address' address='::0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <audio id='1' type='none'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model type='virtio' heads='1' primary='yes'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='video0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <watchdog model='itco' action='reset'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='watchdog0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </watchdog>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <memballoon model='virtio'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <stats period='10'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='balloon0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <rng model='virtio'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <backend model='random'>/dev/urandom</backend>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='rng0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <label>system_u:system_r:svirt_t:s0:c707,c812</label>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c707,c812</imagelabel>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <label>+107:+107</label>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <imagelabel>+107:+107</imagelabel>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:11:46 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:11:46 np0005532048 nova_compute[253661]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.215 253665 INFO nova.virt.libvirt.driver [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully detached device tapfdaaf015-c3 from instance a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 from the persistent domain config.#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.217 253665 DEBUG nova.virt.libvirt.driver [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] (1/8): Attempting to detach device tapfdaaf015-c3 with device alias net1 from instance a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.218 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] detach device xml: <interface type="ethernet">
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:f8:7a:b5"/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <target dev="tapfdaaf015-c3"/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:11:46 np0005532048 nova_compute[253661]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 22 04:11:46 np0005532048 kernel: tapfdaaf015-c3 (unregistering): left promiscuous mode
Nov 22 04:11:46 np0005532048 NetworkManager[48920]: <info>  [1763802706.3321] device (tapfdaaf015-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:11:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:46Z|00137|binding|INFO|Releasing lport fdaaf015-c32e-4960-a33a-2767bf447b71 from this chassis (sb_readonly=0)
Nov 22 04:11:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:46Z|00138|binding|INFO|Setting lport fdaaf015-c32e-4960-a33a-2767bf447b71 down in Southbound
Nov 22 04:11:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:46Z|00139|binding|INFO|Removing iface tapfdaaf015-c3 ovn-installed in OVS
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.347 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:7a:b5 10.100.0.14'], port_security=['fa:16:3e:f8:7a:b5 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a27c3dda-3eb4-4e57-8ba7-ceb7743442e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=fdaaf015-c32e-4960-a33a-2767bf447b71) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.348 253665 DEBUG nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Received event <DeviceRemovedEvent: 1763802706.347724, a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 22 04:11:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.349 162862 INFO neutron.agent.ovn.metadata.agent [-] Port fdaaf015-c32e-4960-a33a-2767bf447b71 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.351 253665 DEBUG nova.virt.libvirt.driver [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Start waiting for the detach event from libvirt for device tapfdaaf015-c3 with device alias net1 for instance a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.351 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:7a:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfdaaf015-c3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:11:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.353 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.359 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f8:7a:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfdaaf015-c3"/></interface>not found in domain: <domain type='kvm' id='27'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <name>instance-00000018</name>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <uuid>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</uuid>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2040235378</nova:name>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:11:42</nova:creationTime>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:port uuid="f70fa10f-f756-4faa-aebf-deeb0b129704">
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:port uuid="fdaaf015-c32e-4960-a33a-2767bf447b71">
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:11:46 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <resource>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <partition>/machine</partition>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </resource>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <entry name='serial'>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</entry>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <entry name='uuid'>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</entry>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <cpu mode='custom' match='exact' check='full'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <vendor>AMD</vendor>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='x2apic'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc-deadline'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='hypervisor'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc_adjust'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='spec-ctrl'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='stibp'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='ssbd'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='cmp_legacy'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='overflow-recov'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='succor'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='ibrs'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='amd-ssbd'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='virt-ssbd'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='lbrv'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='tsc-scale'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='vmcb-clean'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='flushbyasid'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pause-filter'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pfthreshold'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='xsaves'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svm'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='require' name='topoext'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='npt'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <feature policy='disable' name='nrip-save'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk' index='2'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='virtio-disk0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config' index='1'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='sata0-0-0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pcie.0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.1'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.2'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.3'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.4'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.5'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.6'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.7'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.8'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.9'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.10'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.11'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.12'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.13'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.14'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.15'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.16'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.17'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.18'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.19'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.20'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.21'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='22' port='0x25'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.22'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='23' port='0x26'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.23'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='24' port='0x27'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.24'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target chassis='25' port='0x28'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.25'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model name='pcie-pci-bridge'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='pci.26'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='usb'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <controller type='sata' index='0'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='ide'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:14:85:74'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target dev='tapf70fa10f-f7'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='net0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <serial type='pty'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/console.log' append='off'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target type='isa-serial' port='0'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:        <model name='isa-serial'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      </target>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <console type='pty' tty='/dev/pts/0'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/console.log' append='off'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <target type='serial' port='0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </console>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <input type='tablet' bus='usb'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='input0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='usb' bus='0' port='1'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <input type='mouse' bus='ps2'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='input1'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <input type='keyboard' bus='ps2'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='input2'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <listen type='address' address='::0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <audio id='1' type='none'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <model type='virtio' heads='1' primary='yes'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='video0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <watchdog model='itco' action='reset'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='watchdog0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </watchdog>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <memballoon model='virtio'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <stats period='10'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='balloon0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <rng model='virtio'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <backend model='random'>/dev/urandom</backend>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <alias name='rng0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <label>system_u:system_r:svirt_t:s0:c707,c812</label>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c707,c812</imagelabel>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <label>+107:+107</label>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <imagelabel>+107:+107</imagelabel>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:11:46 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:11:46 np0005532048 nova_compute[253661]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.360 253665 INFO nova.virt.libvirt.driver [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully detached device tapfdaaf015-c3 from instance a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 from the live domain config.#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.361 253665 DEBUG nova.virt.libvirt.vif [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.362 253665 DEBUG nova.network.os_vif_util [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.363 253665 DEBUG nova.network.os_vif_util [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.363 253665 DEBUG os_vif [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.365 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.367 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdaaf015-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.368 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.368 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.369 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.370 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.372 253665 INFO os_vif [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3')#033[00m
Nov 22 04:11:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.372 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[651c326f-66a1-47b4-9322-9cac6bbac1cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.373 253665 DEBUG nova.virt.libvirt.guest [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2040235378</nova:name>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:11:46</nova:creationTime>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    <nova:port uuid="f70fa10f-f756-4faa-aebf-deeb0b129704">
Nov 22 04:11:46 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:11:46 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:11:46 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:11:46 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 22 04:11:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.414 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6c86b4a7-5b8b-4f75-9ff4-29bbb1115db9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.419 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[74fba763-f144-49c8-a4fc-f64867ef8193]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.454 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[52909475-5c3c-4885-b2f5-72b159b34181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.476 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[16c37001-23c4-4f88-a015-46316a921a73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550624, 'reachable_time': 27327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287822, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.498 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c39a699c-7d1a-4491-8fea-fcfb936ff898]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550640, 'tstamp': 550640}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287823, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550644, 'tstamp': 550644}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287823, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.500 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.502 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:46 np0005532048 nova_compute[253661]: 2025-11-22 09:11:46.504 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.504 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.504 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:11:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.505 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:46.505 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:11:46 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:11:46 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:11:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 484 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 794 KiB/s rd, 439 KiB/s wr, 60 op/s
Nov 22 04:11:47 np0005532048 nova_compute[253661]: 2025-11-22 09:11:47.226 253665 DEBUG nova.compute.manager [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:11:47 np0005532048 nova_compute[253661]: 2025-11-22 09:11:47.254 253665 DEBUG oslo_concurrency.lockutils [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:11:47 np0005532048 nova_compute[253661]: 2025-11-22 09:11:47.254 253665 DEBUG oslo_concurrency.lockutils [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:11:47 np0005532048 nova_compute[253661]: 2025-11-22 09:11:47.254 253665 DEBUG nova.network.neutron [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:11:47 np0005532048 nova_compute[253661]: 2025-11-22 09:11:47.278 253665 INFO nova.compute.manager [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] instance snapshotting#033[00m
Nov 22 04:11:47 np0005532048 nova_compute[253661]: 2025-11-22 09:11:47.278 253665 WARNING nova.compute.manager [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] trying to snapshot a non-running instance: (state: 3 expected: 1)#033[00m
Nov 22 04:11:47 np0005532048 nova_compute[253661]: 2025-11-22 09:11:47.709 253665 INFO nova.virt.libvirt.driver [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Beginning live snapshot process#033[00m
Nov 22 04:11:47 np0005532048 nova_compute[253661]: 2025-11-22 09:11:47.859 253665 DEBUG nova.virt.libvirt.imagebackend [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.006 253665 DEBUG nova.compute.manager [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-unplugged-fdaaf015-c32e-4960-a33a-2767bf447b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.007 253665 DEBUG oslo_concurrency.lockutils [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.008 253665 DEBUG oslo_concurrency.lockutils [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.008 253665 DEBUG oslo_concurrency.lockutils [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.008 253665 DEBUG nova.compute.manager [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] No waiting events found dispatching network-vif-unplugged-fdaaf015-c32e-4960-a33a-2767bf447b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.008 253665 DEBUG nova.compute.manager [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-unplugged-fdaaf015-c32e-4960-a33a-2767bf447b71 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.009 253665 DEBUG nova.compute.manager [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.009 253665 DEBUG oslo_concurrency.lockutils [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.009 253665 DEBUG oslo_concurrency.lockutils [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.009 253665 DEBUG oslo_concurrency.lockutils [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.009 253665 DEBUG nova.compute.manager [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] No waiting events found dispatching network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.009 253665 WARNING nova.compute.manager [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received unexpected event network-vif-plugged-fdaaf015-c32e-4960-a33a-2767bf447b71 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.009 253665 DEBUG nova.compute.manager [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-deleted-fdaaf015-c32e-4960-a33a-2767bf447b71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.010 253665 INFO nova.compute.manager [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Neutron deleted interface fdaaf015-c32e-4960-a33a-2767bf447b71; detaching it from the instance and deleting it from the info cache#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.010 253665 DEBUG nova.network.neutron [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updating instance_info_cache with network_info: [{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.037 253665 DEBUG nova.storage.rbd_utils [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] creating snapshot(67d49bc907464fa0893c1b629039f058) on rbd image(b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.106 253665 DEBUG nova.objects.instance [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'system_metadata' on Instance uuid a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.161 253665 DEBUG nova.objects.instance [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'flavor' on Instance uuid a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.166 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.167 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.167 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.168 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.168 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.169 253665 INFO nova.compute.manager [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Terminating instance#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.170 253665 DEBUG nova.compute.manager [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.204 253665 DEBUG nova.virt.libvirt.vif [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:11:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.205 253665 DEBUG nova.network.os_vif_util [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converting VIF {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.206 253665 DEBUG nova.network.os_vif_util [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.564 253665 INFO nova.network.neutron [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Port fdaaf015-c32e-4960-a33a-2767bf447b71 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.565 253665 DEBUG nova.network.neutron [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updating instance_info_cache with network_info: [{"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.578 253665 DEBUG oslo_concurrency.lockutils [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.599 253665 DEBUG oslo_concurrency.lockutils [None req-b2ca74a3-74e3-4a48-bdea-eb1fcc9266c4 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-fdaaf015-c32e-4960-a33a-2767bf447b71" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:48 np0005532048 kernel: tapf70fa10f-f7 (unregistering): left promiscuous mode
Nov 22 04:11:48 np0005532048 NetworkManager[48920]: <info>  [1763802708.8059] device (tapf70fa10f-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:11:48 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:48Z|00140|binding|INFO|Releasing lport f70fa10f-f756-4faa-aebf-deeb0b129704 from this chassis (sb_readonly=0)
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.814 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:48 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:48Z|00141|binding|INFO|Setting lport f70fa10f-f756-4faa-aebf-deeb0b129704 down in Southbound
Nov 22 04:11:48 np0005532048 ovn_controller[152872]: 2025-11-22T09:11:48Z|00142|binding|INFO|Removing iface tapf70fa10f-f7 ovn-installed in OVS
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.818 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:48.823 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:85:74 10.100.0.11'], port_security=['fa:16:3e:14:85:74 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a27c3dda-3eb4-4e57-8ba7-ceb7743442e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '483aedc9-eae7-4cec-a714-9d623421c584', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.214'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f70fa10f-f756-4faa-aebf-deeb0b129704) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:11:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:48.825 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f70fa10f-f756-4faa-aebf-deeb0b129704 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis#033[00m
Nov 22 04:11:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:48.827 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:11:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:48.828 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[30f008d2-bd42-44e8-9bf9-3706a8c2364d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:48.829 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 namespace which is not needed anymore#033[00m
Nov 22 04:11:48 np0005532048 nova_compute[253661]: 2025-11-22 09:11:48.833 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:48 np0005532048 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000018.scope: Deactivated successfully.
Nov 22 04:11:48 np0005532048 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000018.scope: Consumed 16.328s CPU time.
Nov 22 04:11:48 np0005532048 systemd-machined[215941]: Machine qemu-27-instance-00000018 terminated.
Nov 22 04:11:48 np0005532048 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[285870]: [NOTICE]   (285875) : haproxy version is 2.8.14-c23fe91
Nov 22 04:11:48 np0005532048 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[285870]: [NOTICE]   (285875) : path to executable is /usr/sbin/haproxy
Nov 22 04:11:48 np0005532048 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[285870]: [WARNING]  (285875) : Exiting Master process...
Nov 22 04:11:48 np0005532048 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[285870]: [WARNING]  (285875) : Exiting Master process...
Nov 22 04:11:48 np0005532048 virtqemud[254229]: cannot parse process status data
Nov 22 04:11:48 np0005532048 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[285870]: [ALERT]    (285875) : Current worker (285877) exited with code 143 (Terminated)
Nov 22 04:11:48 np0005532048 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[285870]: [WARNING]  (285875) : All workers exited. Exiting... (0)
Nov 22 04:11:48 np0005532048 systemd[1]: libpod-64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f.scope: Deactivated successfully.
Nov 22 04:11:48 np0005532048 podman[287896]: 2025-11-22 09:11:48.993577221 +0000 UTC m=+0.056626597 container died 64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:11:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.023 253665 DEBUG nova.virt.libvirt.guest [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:f8:7a:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfdaaf015-c3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.029 253665 DEBUG nova.virt.libvirt.guest [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:f8:7a:b5"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapfdaaf015-c3"/></interface>not found in domain: <domain type='kvm'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <name>instance-00000018</name>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <uuid>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</uuid>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <nova:name>tempest-AttachInterfacesTestJSON-server-2040235378</nova:name>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:10:53</nova:creationTime>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:11:49 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:        <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:        <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:        <nova:port uuid="f70fa10f-f756-4faa-aebf-deeb0b129704">
Nov 22 04:11:49 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <entry name='serial'>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</entry>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <entry name='uuid'>a27c3dda-3eb4-4e57-8ba7-ceb7743442e9</entry>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <cpu mode='host-model' check='partial'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_disk.config'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='22' port='0x25'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='23' port='0x26'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='24' port='0x27'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target chassis='25' port='0x28'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model name='pcie-pci-bridge'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <controller type='sata' index='0'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:14:85:74'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target dev='tapf70fa10f-f7'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <serial type='pty'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/console.log' append='off'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target type='isa-serial' port='0'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:        <model name='isa-serial'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      </target>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <console type='pty'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9/console.log' append='off'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <target type='serial' port='0'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </console>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <input type='tablet' bus='usb'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='usb' bus='0' port='1'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <input type='mouse' bus='ps2'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <input type='keyboard' bus='ps2'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <listen type='address' address='::0'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <audio id='1' type='none'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <model type='virtio' heads='1' primary='yes'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <watchdog model='itco' action='reset'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <memballoon model='virtio'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <stats period='10'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <rng model='virtio'>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <backend model='random'>/dev/urandom</backend>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:11:49 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:11:49 np0005532048 nova_compute[253661]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.030 253665 WARNING nova.virt.libvirt.driver [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Detaching interface fa:16:3e:f8:7a:b5 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapfdaaf015-c3' not found.#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.031 253665 DEBUG nova.virt.libvirt.vif [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:11:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.031 253665 DEBUG nova.network.os_vif_util [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converting VIF {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.033 253665 DEBUG nova.network.os_vif_util [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.033 253665 DEBUG os_vif [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.035 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.036 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdaaf015-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.036 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.038 253665 INFO nova.virt.libvirt.driver [-] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Instance destroyed successfully.#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.038 253665 DEBUG nova.objects.instance [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'resources' on Instance uuid a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.040 253665 INFO os_vif [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3')#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.042 253665 DEBUG nova.virt.libvirt.guest [req-1ea79636-16cc-4843-a27f-f725bcc6e792 req-eb8800ba-a8fb-43db-8a1b-0e165670020a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <nova:name>tempest-AttachInterfacesTestJSON-server-2040235378</nova:name>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:11:49</nova:creationTime>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    <nova:port uuid="f70fa10f-f756-4faa-aebf-deeb0b129704">
Nov 22 04:11:49 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:11:49 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:11:49 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:11:49 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.053 253665 DEBUG nova.virt.libvirt.vif [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.054 253665 DEBUG nova.network.os_vif_util [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "f70fa10f-f756-4faa-aebf-deeb0b129704", "address": "fa:16:3e:14:85:74", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70fa10f-f7", "ovs_interfaceid": "f70fa10f-f756-4faa-aebf-deeb0b129704", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.055 253665 DEBUG nova.network.os_vif_util [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:14:85:74,bridge_name='br-int',has_traffic_filtering=True,id=f70fa10f-f756-4faa-aebf-deeb0b129704,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70fa10f-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.055 253665 DEBUG os_vif [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:85:74,bridge_name='br-int',has_traffic_filtering=True,id=f70fa10f-f756-4faa-aebf-deeb0b129704,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70fa10f-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:11:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.057 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.057 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf70fa10f-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.061 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.063 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:11:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f-userdata-shm.mount: Deactivated successfully.
Nov 22 04:11:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.066 253665 INFO os_vif [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:85:74,bridge_name='br-int',has_traffic_filtering=True,id=f70fa10f-f756-4faa-aebf-deeb0b129704,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70fa10f-f7')#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.067 253665 DEBUG nova.virt.libvirt.vif [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-2040235378',display_name='tempest-AttachInterfacesTestJSON-server-2040235378',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-2040235378',id=24,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAUfDrUYh6hP7YgRelJfBbQAsT76Cm9zyDPfEOeUvs+0H8NeAOKPSyGzI1dCfcAP3vP+DT9eRzeVWspLy9ZPX/eAGyQRF2PuAg59XnJ2eyQA7rUH4o6MVsOoXQhFt4QoYA==',key_name='tempest-keypair-1932735830',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-phy8v64w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=a27c3dda-3eb4-4e57-8ba7-ceb7743442e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.067 253665 DEBUG nova.network.os_vif_util [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "fdaaf015-c32e-4960-a33a-2767bf447b71", "address": "fa:16:3e:f8:7a:b5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdaaf015-c3", "ovs_interfaceid": "fdaaf015-c32e-4960-a33a-2767bf447b71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.068 253665 DEBUG nova.network.os_vif_util [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.068 253665 DEBUG os_vif [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:11:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7b7679634f901691bbb7933c31bc9e7b94437f81ed8d9c677c4422e9a5cf6ef0-merged.mount: Deactivated successfully.
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.072 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.073 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdaaf015-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.074 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.076 253665 INFO os_vif [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:7a:b5,bridge_name='br-int',has_traffic_filtering=True,id=fdaaf015-c32e-4960-a33a-2767bf447b71,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdaaf015-c3')#033[00m
Nov 22 04:11:49 np0005532048 podman[287896]: 2025-11-22 09:11:49.09644774 +0000 UTC m=+0.159497116 container cleanup 64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 04:11:49 np0005532048 systemd[1]: libpod-conmon-64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f.scope: Deactivated successfully.
Nov 22 04:11:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 484 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 85 KiB/s wr, 108 op/s
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.198 253665 DEBUG nova.storage.rbd_utils [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] cloning vms/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk@67d49bc907464fa0893c1b629039f058 to images/a229ff81-6736-4727-80db-88a96c174b36 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:11:49 np0005532048 podman[287953]: 2025-11-22 09:11:49.206127564 +0000 UTC m=+0.070442299 container remove 64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:11:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.214 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[26ab7a40-fc22-48f5-a3db-a30c0892ee02]: (4, ('Sat Nov 22 09:11:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 (64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f)\n64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f\nSat Nov 22 09:11:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 (64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f)\n64fbb13da609cff7457c300105d8a9ab0116d1d2245947b7edae08bf11822c8f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.217 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be658565-5851-4e31-9b95-3543c4aad205]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.219 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:11:49 np0005532048 kernel: tap5e2cd359-c0: left promiscuous mode
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.241 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.244 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.248 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd387de-1b95-4fb7-9187-c132540d5b13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.264 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b9ef56db-efc0-44f4-a5f0-0b80a2aded30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.266 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6641a32a-9936-47b4-b3fe-95bbc9eb9978]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.293 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd236ce-62f8-43af-a533-a6de87a84f4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550613, 'reachable_time': 39639, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288001, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:49 np0005532048 systemd[1]: run-netns-ovnmeta\x2d5e2cd359\x2dc68f\x2d4256\x2d90e8\x2d0ad40aff8a00.mount: Deactivated successfully.
Nov 22 04:11:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.298 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:11:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:11:49.298 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[07477c66-2858-4941-8650-610c72b30ca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.305 253665 DEBUG nova.compute.manager [req-49b53f5b-c319-4246-96e4-98762fbdd494 req-4907c805-602e-45a9-ad77-2e19800b12e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-unplugged-f70fa10f-f756-4faa-aebf-deeb0b129704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.306 253665 DEBUG oslo_concurrency.lockutils [req-49b53f5b-c319-4246-96e4-98762fbdd494 req-4907c805-602e-45a9-ad77-2e19800b12e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.306 253665 DEBUG oslo_concurrency.lockutils [req-49b53f5b-c319-4246-96e4-98762fbdd494 req-4907c805-602e-45a9-ad77-2e19800b12e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.306 253665 DEBUG oslo_concurrency.lockutils [req-49b53f5b-c319-4246-96e4-98762fbdd494 req-4907c805-602e-45a9-ad77-2e19800b12e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.306 253665 DEBUG nova.compute.manager [req-49b53f5b-c319-4246-96e4-98762fbdd494 req-4907c805-602e-45a9-ad77-2e19800b12e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] No waiting events found dispatching network-vif-unplugged-f70fa10f-f756-4faa-aebf-deeb0b129704 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.306 253665 DEBUG nova.compute.manager [req-49b53f5b-c319-4246-96e4-98762fbdd494 req-4907c805-602e-45a9-ad77-2e19800b12e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-unplugged-f70fa10f-f756-4faa-aebf-deeb0b129704 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.377 253665 DEBUG nova.storage.rbd_utils [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] flattening images/a229ff81-6736-4727-80db-88a96c174b36 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:49 np0005532048 nova_compute[253661]: 2025-11-22 09:11:49.751 253665 DEBUG nova.storage.rbd_utils [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] removing snapshot(67d49bc907464fa0893c1b629039f058) on rbd image(b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 22 04:11:50 np0005532048 nova_compute[253661]: 2025-11-22 09:11:50.034 253665 INFO nova.virt.libvirt.driver [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Deleting instance files /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_del#033[00m
Nov 22 04:11:50 np0005532048 nova_compute[253661]: 2025-11-22 09:11:50.035 253665 INFO nova.virt.libvirt.driver [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Deletion of /var/lib/nova/instances/a27c3dda-3eb4-4e57-8ba7-ceb7743442e9_del complete#033[00m
Nov 22 04:11:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Nov 22 04:11:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Nov 22 04:11:50 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Nov 22 04:11:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:50 np0005532048 nova_compute[253661]: 2025-11-22 09:11:50.158 253665 DEBUG nova.storage.rbd_utils [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] creating snapshot(snap) on rbd image(a229ff81-6736-4727-80db-88a96c174b36) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:11:50 np0005532048 nova_compute[253661]: 2025-11-22 09:11:50.218 253665 INFO nova.compute.manager [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Took 2.05 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:11:50 np0005532048 nova_compute[253661]: 2025-11-22 09:11:50.220 253665 DEBUG oslo.service.loopingcall [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:11:50 np0005532048 nova_compute[253661]: 2025-11-22 09:11:50.220 253665 DEBUG nova.compute.manager [-] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:11:50 np0005532048 nova_compute[253661]: 2025-11-22 09:11:50.221 253665 DEBUG nova.network.neutron [-] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:11:50 np0005532048 podman[288060]: 2025-11-22 09:11:50.431187797 +0000 UTC m=+0.115606887 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:11:51 np0005532048 nova_compute[253661]: 2025-11-22 09:11:51.031 253665 DEBUG nova.network.neutron [-] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:11:51 np0005532048 nova_compute[253661]: 2025-11-22 09:11:51.067 253665 INFO nova.compute.manager [-] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Took 0.85 seconds to deallocate network for instance.#033[00m
Nov 22 04:11:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Nov 22 04:11:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Nov 22 04:11:51 np0005532048 nova_compute[253661]: 2025-11-22 09:11:51.083 253665 DEBUG nova.compute.manager [req-eec92782-088b-40c7-b0ea-c64bbcfd1914 req-5b262c99-f220-4fcc-a909-0034a888e348 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-deleted-f70fa10f-f756-4faa-aebf-deeb0b129704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:11:51 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Nov 22 04:11:51 np0005532048 nova_compute[253661]: 2025-11-22 09:11:51.145 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:51 np0005532048 nova_compute[253661]: 2025-11-22 09:11:51.145 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 484 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 37 KiB/s wr, 141 op/s
Nov 22 04:11:51 np0005532048 nova_compute[253661]: 2025-11-22 09:11:51.303 253665 DEBUG oslo_concurrency.processutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:11:51 np0005532048 nova_compute[253661]: 2025-11-22 09:11:51.571 253665 DEBUG nova.compute.manager [req-85a53ea4-1c49-423e-bd30-01eebfab3e68 req-a8ae77ed-6044-4dc7-bbab-da3842e12849 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received event network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:11:51 np0005532048 nova_compute[253661]: 2025-11-22 09:11:51.573 253665 DEBUG oslo_concurrency.lockutils [req-85a53ea4-1c49-423e-bd30-01eebfab3e68 req-a8ae77ed-6044-4dc7-bbab-da3842e12849 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:11:51 np0005532048 nova_compute[253661]: 2025-11-22 09:11:51.573 253665 DEBUG oslo_concurrency.lockutils [req-85a53ea4-1c49-423e-bd30-01eebfab3e68 req-a8ae77ed-6044-4dc7-bbab-da3842e12849 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:11:51 np0005532048 nova_compute[253661]: 2025-11-22 09:11:51.574 253665 DEBUG oslo_concurrency.lockutils [req-85a53ea4-1c49-423e-bd30-01eebfab3e68 req-a8ae77ed-6044-4dc7-bbab-da3842e12849 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:51 np0005532048 nova_compute[253661]: 2025-11-22 09:11:51.574 253665 DEBUG nova.compute.manager [req-85a53ea4-1c49-423e-bd30-01eebfab3e68 req-a8ae77ed-6044-4dc7-bbab-da3842e12849 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] No waiting events found dispatching network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:11:51 np0005532048 nova_compute[253661]: 2025-11-22 09:11:51.574 253665 WARNING nova.compute.manager [req-85a53ea4-1c49-423e-bd30-01eebfab3e68 req-a8ae77ed-6044-4dc7-bbab-da3842e12849 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Received unexpected event network-vif-plugged-f70fa10f-f756-4faa-aebf-deeb0b129704 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:11:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:11:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/482563714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:11:51 np0005532048 nova_compute[253661]: 2025-11-22 09:11:51.843 253665 DEBUG oslo_concurrency.processutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:11:51 np0005532048 nova_compute[253661]: 2025-11-22 09:11:51.853 253665 DEBUG nova.compute.provider_tree [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:11:51 np0005532048 nova_compute[253661]: 2025-11-22 09:11:51.869 253665 DEBUG nova.scheduler.client.report [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:11:51 np0005532048 nova_compute[253661]: 2025-11-22 09:11:51.951 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:52 np0005532048 nova_compute[253661]: 2025-11-22 09:11:52.049 253665 INFO nova.scheduler.client.report [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Deleted allocations for instance a27c3dda-3eb4-4e57-8ba7-ceb7743442e9#033[00m
Nov 22 04:11:52 np0005532048 nova_compute[253661]: 2025-11-22 09:11:52.157 253665 DEBUG oslo_concurrency.lockutils [None req-8ef86aa6-0c46-4648-b379-8ed381772ea9 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "a27c3dda-3eb4-4e57-8ba7-ceb7743442e9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:11:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:11:52
Nov 22 04:11:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:11:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:11:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'default.rgw.log', 'vms', '.rgw.root', 'backups', 'cephfs.cephfs.data']
Nov 22 04:11:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:11:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:11:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:11:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:11:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:11:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:11:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:11:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 475 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.6 MiB/s wr, 161 op/s
Nov 22 04:11:53 np0005532048 nova_compute[253661]: 2025-11-22 09:11:53.490 253665 INFO nova.virt.libvirt.driver [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Snapshot image upload complete#033[00m
Nov 22 04:11:53 np0005532048 nova_compute[253661]: 2025-11-22 09:11:53.491 253665 INFO nova.compute.manager [None req-4704a8e4-b873-4325-83cc-afb4e86e42cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Took 6.21 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 22 04:11:54 np0005532048 nova_compute[253661]: 2025-11-22 09:11:54.061 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:54 np0005532048 nova_compute[253661]: 2025-11-22 09:11:54.644 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:11:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:11:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:11:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:11:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:11:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:11:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:11:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:11:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:11:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:11:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 451 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 158 op/s
Nov 22 04:11:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Nov 22 04:11:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Nov 22 04:11:56 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Nov 22 04:11:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 451 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.0 MiB/s wr, 164 op/s
Nov 22 04:11:57 np0005532048 nova_compute[253661]: 2025-11-22 09:11:57.887 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 22 04:11:59 np0005532048 nova_compute[253661]: 2025-11-22 09:11:59.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 451 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 144 op/s
Nov 22 04:11:59 np0005532048 nova_compute[253661]: 2025-11-22 09:11:59.366 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:11:59 np0005532048 nova_compute[253661]: 2025-11-22 09:11:59.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Nov 22 04:12:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 451 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Nov 22 04:12:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Nov 22 04:12:01 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Nov 22 04:12:02 np0005532048 nova_compute[253661]: 2025-11-22 09:12:02.371 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:02 np0005532048 nova_compute[253661]: 2025-11-22 09:12:02.371 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:02 np0005532048 nova_compute[253661]: 2025-11-22 09:12:02.408 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:12:02 np0005532048 nova_compute[253661]: 2025-11-22 09:12:02.485 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:02 np0005532048 nova_compute[253661]: 2025-11-22 09:12:02.486 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:02 np0005532048 nova_compute[253661]: 2025-11-22 09:12:02.492 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:02 np0005532048 nova_compute[253661]: 2025-11-22 09:12:02.492 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:02 np0005532048 nova_compute[253661]: 2025-11-22 09:12:02.492 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:02 np0005532048 nova_compute[253661]: 2025-11-22 09:12:02.492 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:02 np0005532048 nova_compute[253661]: 2025-11-22 09:12:02.493 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:02 np0005532048 nova_compute[253661]: 2025-11-22 09:12:02.494 253665 INFO nova.compute.manager [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Terminating instance#033[00m
Nov 22 04:12:02 np0005532048 nova_compute[253661]: 2025-11-22 09:12:02.495 253665 DEBUG nova.compute.manager [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:12:02 np0005532048 nova_compute[253661]: 2025-11-22 09:12:02.497 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:12:02 np0005532048 nova_compute[253661]: 2025-11-22 09:12:02.498 253665 INFO nova.compute.claims [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033831528374682557 of space, bias 1.0, pg target 1.0149458512404768 quantized to 32 (current 32)
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010120461149638602 of space, bias 1.0, pg target 0.3026017883741942 quantized to 32 (current 32)
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 22 04:12:02 np0005532048 nova_compute[253661]: 2025-11-22 09:12:02.697 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:02 np0005532048 kernel: tap816016d3-f4 (unregistering): left promiscuous mode
Nov 22 04:12:02 np0005532048 NetworkManager[48920]: <info>  [1763802722.8750] device (tap816016d3-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:12:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:02Z|00143|binding|INFO|Releasing lport 816016d3-f417-4c33-8f24-8e6360d6fa39 from this chassis (sb_readonly=0)
Nov 22 04:12:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:02Z|00144|binding|INFO|Setting lport 816016d3-f417-4c33-8f24-8e6360d6fa39 down in Southbound
Nov 22 04:12:02 np0005532048 nova_compute[253661]: 2025-11-22 09:12:02.887 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:02Z|00145|binding|INFO|Removing iface tap816016d3-f4 ovn-installed in OVS
Nov 22 04:12:02 np0005532048 nova_compute[253661]: 2025-11-22 09:12:02.906 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:02 np0005532048 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000019.scope: Deactivated successfully.
Nov 22 04:12:02 np0005532048 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000019.scope: Consumed 2.189s CPU time.
Nov 22 04:12:02 np0005532048 systemd-machined[215941]: Machine qemu-29-instance-00000019 terminated.
Nov 22 04:12:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:03.002 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:74:61 10.100.0.9'], port_security=['fa:16:3e:78:74:61 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b7c923dd-3ae9-4c51-8d6d-6305a71fe97f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=816016d3-f417-4c33-8f24-8e6360d6fa39) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:12:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:03.004 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 816016d3-f417-4c33-8f24-8e6360d6fa39 in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 unbound from our chassis#033[00m
Nov 22 04:12:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:03.006 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:12:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:03.008 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a8da572c-7dac-479d-b0a5-b9e99c00ded9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:03.009 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace which is not needed anymore#033[00m
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.149 253665 INFO nova.virt.libvirt.driver [-] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Instance destroyed successfully.#033[00m
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.151 253665 DEBUG nova.objects.instance [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'resources' on Instance uuid b7c923dd-3ae9-4c51-8d6d-6305a71fe97f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:12:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:12:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2924160443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.175 253665 DEBUG nova.virt.libvirt.vif [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:11:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1474933117',display_name='tempest-ImagesTestJSON-server-1474933117',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1474933117',id=25,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:11:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-vsxkgtmn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:11:53Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=b7c923dd-3ae9-4c51-8d6d-6305a71fe97f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.177 253665 DEBUG nova.network.os_vif_util [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "816016d3-f417-4c33-8f24-8e6360d6fa39", "address": "fa:16:3e:78:74:61", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap816016d3-f4", "ovs_interfaceid": "816016d3-f417-4c33-8f24-8e6360d6fa39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.178 253665 DEBUG nova.network.os_vif_util [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:74:61,bridge_name='br-int',has_traffic_filtering=True,id=816016d3-f417-4c33-8f24-8e6360d6fa39,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap816016d3-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.179 253665 DEBUG os_vif [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:74:61,bridge_name='br-int',has_traffic_filtering=True,id=816016d3-f417-4c33-8f24-8e6360d6fa39,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap816016d3-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.181 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.181 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap816016d3-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.183 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.184 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.187 253665 INFO os_vif [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:74:61,bridge_name='br-int',has_traffic_filtering=True,id=816016d3-f417-4c33-8f24-8e6360d6fa39,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap816016d3-f4')#033[00m
Nov 22 04:12:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 437 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.4 KiB/s wr, 43 op/s
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.205 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.212 253665 DEBUG nova.compute.provider_tree [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.226 253665 DEBUG nova.scheduler.client.report [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.395 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.396 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:12:03 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[287649]: [NOTICE]   (287653) : haproxy version is 2.8.14-c23fe91
Nov 22 04:12:03 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[287649]: [NOTICE]   (287653) : path to executable is /usr/sbin/haproxy
Nov 22 04:12:03 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[287649]: [WARNING]  (287653) : Exiting Master process...
Nov 22 04:12:03 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[287649]: [ALERT]    (287653) : Current worker (287658) exited with code 143 (Terminated)
Nov 22 04:12:03 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[287649]: [WARNING]  (287653) : All workers exited. Exiting... (0)
Nov 22 04:12:03 np0005532048 systemd[1]: libpod-3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e.scope: Deactivated successfully.
Nov 22 04:12:03 np0005532048 podman[288154]: 2025-11-22 09:12:03.48295697 +0000 UTC m=+0.376095897 container died 3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.861 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.862 253665 DEBUG nova.network.neutron [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.920 253665 INFO nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:12:03 np0005532048 nova_compute[253661]: 2025-11-22 09:12:03.951 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.022 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802709.017618, a27c3dda-3eb4-4e57-8ba7-ceb7743442e9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.023 253665 INFO nova.compute.manager [-] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.048 253665 DEBUG nova.compute.manager [None req-803509fb-4545-4d75-aa14-620b5d6541f1 - - - - - -] [instance: a27c3dda-3eb4-4e57-8ba7-ceb7743442e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay-59f397d4018771e7d833e8521b0dd4b258c9b2b44865d862d4ba5ce5ea501120-merged.mount: Deactivated successfully.
Nov 22 04:12:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e-userdata-shm.mount: Deactivated successfully.
Nov 22 04:12:04 np0005532048 podman[288154]: 2025-11-22 09:12:04.363932298 +0000 UTC m=+1.257071235 container cleanup 3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:12:04 np0005532048 systemd[1]: libpod-conmon-3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e.scope: Deactivated successfully.
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.574 253665 DEBUG nova.policy [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36a143bf132742b89e463c08d46df5c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.654 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.730 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.732 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.733 253665 INFO nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Creating image(s)#033[00m
Nov 22 04:12:04 np0005532048 podman[288216]: 2025-11-22 09:12:04.760787505 +0000 UTC m=+0.363088495 container remove 3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.761 253665 DEBUG nova.storage.rbd_utils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.772 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[edc7ee53-1b29-4d9a-9900-69bb9c9f7265]: (4, ('Sat Nov 22 09:12:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e)\n3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e\nSat Nov 22 09:12:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e)\n3a22a1a4b8e2adc23a81b6f8805955b0433bb0018ff489ba8405da03601a769e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.774 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ebda4b-40e5-4691-97fb-7ed1a5d7f204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.775 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:04 np0005532048 kernel: tap2abeeeb2-20: left promiscuous mode
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.799 253665 DEBUG nova.storage.rbd_utils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.803 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a9b4a9-4086-40a4-83a2-d1e42196bbea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.815 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc682eb-3f22-4829-b3a7-9ec578cd5d7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.816 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[87679169-f790-47b4-ae55-0ce841ff5a6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.836 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ff29825f-fe52-485d-8bb9-2502c4c36853]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555301, 'reachable_time': 18341, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288275, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:04 np0005532048 systemd[1]: run-netns-ovnmeta\x2d2abeeeb2\x2d24a5\x2d4ccd\x2d93c8\x2d05b42d3a1a51.mount: Deactivated successfully.
Nov 22 04:12:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.841 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:12:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:04.841 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d97cbed3-e5d9-4ee8-9ec6-87d21a4418e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.845 253665 DEBUG nova.storage.rbd_utils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.850 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.938 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.939 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.940 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.941 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.968 253665 DEBUG nova.storage.rbd_utils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:04 np0005532048 nova_compute[253661]: 2025-11-22 09:12:04.973 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 405 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 5.2 KiB/s wr, 45 op/s
Nov 22 04:12:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Nov 22 04:12:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Nov 22 04:12:05 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Nov 22 04:12:05 np0005532048 nova_compute[253661]: 2025-11-22 09:12:05.792 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.819s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:05 np0005532048 nova_compute[253661]: 2025-11-22 09:12:05.851 253665 DEBUG nova.storage.rbd_utils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] resizing rbd image 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:12:06 np0005532048 nova_compute[253661]: 2025-11-22 09:12:06.390 253665 DEBUG nova.objects.instance [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'migration_context' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:12:06 np0005532048 nova_compute[253661]: 2025-11-22 09:12:06.404 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:12:06 np0005532048 nova_compute[253661]: 2025-11-22 09:12:06.405 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Ensure instance console log exists: /var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:12:06 np0005532048 nova_compute[253661]: 2025-11-22 09:12:06.405 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:06 np0005532048 nova_compute[253661]: 2025-11-22 09:12:06.406 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:06 np0005532048 nova_compute[253661]: 2025-11-22 09:12:06.406 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:06 np0005532048 nova_compute[253661]: 2025-11-22 09:12:06.833 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:06 np0005532048 nova_compute[253661]: 2025-11-22 09:12:06.834 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:06 np0005532048 nova_compute[253661]: 2025-11-22 09:12:06.939 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:12:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 405 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.4 MiB/s wr, 60 op/s
Nov 22 04:12:07 np0005532048 nova_compute[253661]: 2025-11-22 09:12:07.235 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:07 np0005532048 nova_compute[253661]: 2025-11-22 09:12:07.236 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:07 np0005532048 nova_compute[253661]: 2025-11-22 09:12:07.244 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:12:07 np0005532048 nova_compute[253661]: 2025-11-22 09:12:07.245 253665 INFO nova.compute.claims [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:12:07 np0005532048 nova_compute[253661]: 2025-11-22 09:12:07.311 253665 INFO nova.virt.libvirt.driver [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Deleting instance files /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_del#033[00m
Nov 22 04:12:07 np0005532048 nova_compute[253661]: 2025-11-22 09:12:07.312 253665 INFO nova.virt.libvirt.driver [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Deletion of /var/lib/nova/instances/b7c923dd-3ae9-4c51-8d6d-6305a71fe97f_del complete#033[00m
Nov 22 04:12:07 np0005532048 nova_compute[253661]: 2025-11-22 09:12:07.540 253665 INFO nova.compute.manager [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Took 5.05 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:12:07 np0005532048 nova_compute[253661]: 2025-11-22 09:12:07.541 253665 DEBUG oslo.service.loopingcall [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:12:07 np0005532048 nova_compute[253661]: 2025-11-22 09:12:07.541 253665 DEBUG nova.compute.manager [-] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:12:07 np0005532048 nova_compute[253661]: 2025-11-22 09:12:07.542 253665 DEBUG nova.network.neutron [-] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:12:07 np0005532048 nova_compute[253661]: 2025-11-22 09:12:07.586 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:12:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1840169657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.082 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.090 253665 DEBUG nova.compute.provider_tree [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.111 253665 DEBUG nova.scheduler.client.report [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.146 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.147 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.175 253665 DEBUG nova.network.neutron [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Successfully created port: b82d7759-7fa9-4919-9812-a4f5df6893a7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.183 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.206 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.207 253665 DEBUG nova.network.neutron [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.226 253665 INFO nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.246 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.343 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.345 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.345 253665 INFO nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Creating image(s)#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.373 253665 DEBUG nova.storage.rbd_utils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.406 253665 DEBUG nova.storage.rbd_utils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.433 253665 DEBUG nova.storage.rbd_utils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.436 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.502 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.503 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.503 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.504 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.574 253665 DEBUG nova.storage.rbd_utils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.579 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.617 253665 DEBUG nova.policy [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97872d7ce91947789de976821b771135', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:12:08 np0005532048 nova_compute[253661]: 2025-11-22 09:12:08.956 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance in state 1 after 54 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 22 04:12:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 405 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 2.7 MiB/s wr, 99 op/s
Nov 22 04:12:09 np0005532048 nova_compute[253661]: 2025-11-22 09:12:09.254 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:09 np0005532048 nova_compute[253661]: 2025-11-22 09:12:09.597 253665 DEBUG nova.network.neutron [-] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:12:09 np0005532048 nova_compute[253661]: 2025-11-22 09:12:09.618 253665 INFO nova.compute.manager [-] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Took 2.08 seconds to deallocate network for instance.#033[00m
Nov 22 04:12:09 np0005532048 nova_compute[253661]: 2025-11-22 09:12:09.660 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:09 np0005532048 nova_compute[253661]: 2025-11-22 09:12:09.682 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:09 np0005532048 nova_compute[253661]: 2025-11-22 09:12:09.683 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:09 np0005532048 nova_compute[253661]: 2025-11-22 09:12:09.719 253665 DEBUG nova.compute.manager [req-5d1da9ec-c88f-4fbf-833c-ac524b31ab66 req-7ebf97cf-ef65-49f7-9daf-87bbb453d940 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Received event network-vif-deleted-816016d3-f417-4c33-8f24-8e6360d6fa39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:09.757211) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802729757292, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2131, "num_deletes": 254, "total_data_size": 3281700, "memory_usage": 3331200, "flush_reason": "Manual Compaction"}
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 22 04:12:09 np0005532048 nova_compute[253661]: 2025-11-22 09:12:09.847 253665 DEBUG oslo_concurrency.processutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802729882047, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3224128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25969, "largest_seqno": 28099, "table_properties": {"data_size": 3214602, "index_size": 5956, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20484, "raw_average_key_size": 20, "raw_value_size": 3195186, "raw_average_value_size": 3211, "num_data_blocks": 262, "num_entries": 995, "num_filter_entries": 995, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763802525, "oldest_key_time": 1763802525, "file_creation_time": 1763802729, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 124880 microseconds, and 17106 cpu microseconds.
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:09.882101) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3224128 bytes OK
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:09.882132) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:09.920630) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:09.920696) EVENT_LOG_v1 {"time_micros": 1763802729920684, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:09.920730) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3272661, prev total WAL file size 3272661, number of live WAL files 2.
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:09.922161) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3148KB)], [59(7393KB)]
Nov 22 04:12:09 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802729922201, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10794722, "oldest_snapshot_seqno": -1}
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5316 keys, 9041565 bytes, temperature: kUnknown
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802730048276, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9041565, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9004059, "index_size": 23124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 132056, "raw_average_key_size": 24, "raw_value_size": 8906292, "raw_average_value_size": 1675, "num_data_blocks": 949, "num_entries": 5316, "num_filter_entries": 5316, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763802729, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.096 253665 DEBUG nova.network.neutron [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Successfully updated port: b82d7759-7fa9-4919-9812-a4f5df6893a7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.162 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.163 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.163 253665 DEBUG nova.network.neutron [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.208 253665 DEBUG nova.network.neutron [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Successfully created port: 5898357d-7112-429d-86c6-24932a2fc274 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:10.048622) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9041565 bytes
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:10.235551) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 85.5 rd, 71.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.2 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 5839, records dropped: 523 output_compression: NoCompression
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:10.235609) EVENT_LOG_v1 {"time_micros": 1763802730235582, "job": 32, "event": "compaction_finished", "compaction_time_micros": 126200, "compaction_time_cpu_micros": 20528, "output_level": 6, "num_output_files": 1, "total_output_size": 9041565, "num_input_records": 5839, "num_output_records": 5316, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802730236360, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802730237564, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:09.921845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:10.237653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:10.237665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:10.237669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:10.237673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:12:10.237678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:12:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3592794506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.312 253665 DEBUG oslo_concurrency.processutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.320 253665 DEBUG nova.compute.provider_tree [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.342 253665 DEBUG nova.scheduler.client.report [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.376 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.404 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.826s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.440 253665 INFO nova.scheduler.client.report [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Deleted allocations for instance b7c923dd-3ae9-4c51-8d6d-6305a71fe97f#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.485 253665 DEBUG nova.storage.rbd_utils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] resizing rbd image 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.542 253665 DEBUG oslo_concurrency.lockutils [None req-29f26c73-9229-4280-9052-a25737717d6f 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "b7c923dd-3ae9-4c51-8d6d-6305a71fe97f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.576 253665 DEBUG nova.network.neutron [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.950 253665 DEBUG nova.objects.instance [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'migration_context' on Instance uuid 6e825024-ffe6-4fdb-abaa-0c99c65ac38b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.963 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.963 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Ensure instance console log exists: /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.964 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.964 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:10 np0005532048 nova_compute[253661]: 2025-11-22 09:12:10.964 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 405 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 2.2 MiB/s wr, 80 op/s
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.521 253665 DEBUG nova.network.neutron [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.570 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.571 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Instance network_info: |[{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.574 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Start _get_guest_xml network_info=[{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.580 253665 WARNING nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.586 253665 DEBUG nova.virt.libvirt.host [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.587 253665 DEBUG nova.virt.libvirt.host [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.591 253665 DEBUG nova.virt.libvirt.host [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.592 253665 DEBUG nova.virt.libvirt.host [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.592 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.592 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.593 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.593 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.594 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.594 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.594 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.595 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.595 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.595 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.595 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.596 253665 DEBUG nova.virt.hardware [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.599 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.914 253665 DEBUG nova.network.neutron [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Successfully updated port: 5898357d-7112-429d-86c6-24932a2fc274 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.952 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "refresh_cache-6e825024-ffe6-4fdb-abaa-0c99c65ac38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.952 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquired lock "refresh_cache-6e825024-ffe6-4fdb-abaa-0c99c65ac38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:12:11 np0005532048 nova_compute[253661]: 2025-11-22 09:12:11.952 253665 DEBUG nova.network.neutron [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.007 253665 DEBUG nova.compute.manager [req-fb9dea7b-1b2f-4d42-b76c-da79546b45e9 req-bbaac665-f5c7-435b-a163-de6a5a891a9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-changed-b82d7759-7fa9-4919-9812-a4f5df6893a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.007 253665 DEBUG nova.compute.manager [req-fb9dea7b-1b2f-4d42-b76c-da79546b45e9 req-bbaac665-f5c7-435b-a163-de6a5a891a9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing instance network info cache due to event network-changed-b82d7759-7fa9-4919-9812-a4f5df6893a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.007 253665 DEBUG oslo_concurrency.lockutils [req-fb9dea7b-1b2f-4d42-b76c-da79546b45e9 req-bbaac665-f5c7-435b-a163-de6a5a891a9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.008 253665 DEBUG oslo_concurrency.lockutils [req-fb9dea7b-1b2f-4d42-b76c-da79546b45e9 req-bbaac665-f5c7-435b-a163-de6a5a891a9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.008 253665 DEBUG nova.network.neutron [req-fb9dea7b-1b2f-4d42-b76c-da79546b45e9 req-bbaac665-f5c7-435b-a163-de6a5a891a9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing network info cache for port b82d7759-7fa9-4919-9812-a4f5df6893a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:12:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:12:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/590567606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.064 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.090 253665 DEBUG nova.storage.rbd_utils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.096 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.134 253665 DEBUG nova.network.neutron [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:12:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:12:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3672761642' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:12:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:12:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3672761642' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:12:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:12:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3612283366' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.597 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.599 253665 DEBUG nova.virt.libvirt.vif [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:12:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.599 253665 DEBUG nova.network.os_vif_util [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.600 253665 DEBUG nova.network.os_vif_util [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:3a:a5,bridge_name='br-int',has_traffic_filtering=True,id=b82d7759-7fa9-4919-9812-a4f5df6893a7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82d7759-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.601 253665 DEBUG nova.objects.instance [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'pci_devices' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.633 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "5babe591-239b-4ef7-b193-6960c7313292" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.634 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.767 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  <uuid>3c70b093-a92a-4781-8e32-2a7eefde4a43</uuid>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  <name>instance-0000001a</name>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:12:11</nova:creationTime>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:12:12 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:        <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:        <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:        <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 04:12:12 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <entry name="serial">3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <entry name="uuid">3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk">
Nov 22 04:12:12 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:12:12 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config">
Nov 22 04:12:12 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:12:12 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:78:3a:a5"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <target dev="tapb82d7759-7f"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log" append="off"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:12:12 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:12:12 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:12:12 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:12:12 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
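The guest XML that `_get_guest_xml` emits above can be inspected programmatically. A minimal sketch using the standard library's ElementTree to pull the RBD image name and monitor endpoint out of the config-drive `<disk>` element; the element below is copied from the log, while the `rbd_source` helper is illustrative and not part of Nova:

```python
import xml.etree.ElementTree as ET

# Fragment mirroring the config-drive cdrom disk element logged above.
DISK_XML = """
<disk type="network" device="cdrom">
  <driver type="raw" cache="none"/>
  <source protocol="rbd" name="vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config">
    <host name="192.168.122.100" port="6789"/>
  </source>
  <auth username="openstack">
    <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
  </auth>
  <target dev="sda" bus="sata"/>
</disk>
"""

def rbd_source(disk_xml: str) -> dict:
    """Extract the RBD image name and monitor endpoints from a libvirt disk element."""
    disk = ET.fromstring(disk_xml)
    source = disk.find("source")
    hosts = [(h.get("name"), h.get("port")) for h in source.findall("host")]
    return {"image": source.get("name"), "monitors": hosts}

print(rbd_source(DISK_XML))
```

The same approach works against `virsh dumpxml <domain>` output on the compute host, which returns the full `<domain>` document shown in the log.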
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.769 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Preparing to wait for external event network-vif-plugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.770 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.770 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.770 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.771 253665 DEBUG nova.virt.libvirt.vif [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:12:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.772 253665 DEBUG nova.network.os_vif_util [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.781 253665 DEBUG nova.network.os_vif_util [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:3a:a5,bridge_name='br-int',has_traffic_filtering=True,id=b82d7759-7fa9-4919-9812-a4f5df6893a7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82d7759-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.783 253665 DEBUG os_vif [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:3a:a5,bridge_name='br-int',has_traffic_filtering=True,id=b82d7759-7fa9-4919-9812-a4f5df6893a7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82d7759-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.785 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.786 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.787 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.790 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.796 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.797 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb82d7759-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.798 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb82d7759-7f, col_values=(('external_ids', {'iface-id': 'b82d7759-7fa9-4919-9812-a4f5df6893a7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:3a:a5', 'vm-uuid': '3c70b093-a92a-4781-8e32-2a7eefde4a43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.800 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:12 np0005532048 NetworkManager[48920]: <info>  [1763802732.8018] manager: (tapb82d7759-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.803 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.807 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.808 253665 INFO os_vif [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:3a:a5,bridge_name='br-int',has_traffic_filtering=True,id=b82d7759-7fa9-4919-9812-a4f5df6893a7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb82d7759-7f')#033[00m
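The `AddPortCommand` + `DbSetCommand` transaction that os-vif runs above has a well-known `ovs-vsctl` equivalent, which is handy for reproducing or debugging the plug by hand. Nova itself speaks OVSDB directly through ovsdbapp; the helper below only builds the analogous CLI invocation and is illustrative:

```python
def ovs_plug_command(bridge: str, port: str, external_ids: dict) -> list:
    """Build the ovs-vsctl equivalent of the AddPortCommand + DbSetCommand
    transaction shown in the log (Nova uses ovsdbapp, not this CLI)."""
    cmd = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
           "--", "set", "Interface", port]
    cmd += [f"external_ids:{k}={v}" for k, v in external_ids.items()]
    return cmd

cmd = ovs_plug_command(
    "br-int",
    "tapb82d7759-7f",
    {"iface-id": "b82d7759-7fa9-4919-9812-a4f5df6893a7",
     "iface-status": "active",
     "attached-mac": "fa:16:3e:78:3a:a5",
     "vm-uuid": "3c70b093-a92a-4781-8e32-2a7eefde4a43"},
)
print(" ".join(cmd))
```

The `external_ids` keys match the `col_values` in the logged `DbSetCommand`; `iface-id` is what OVN uses to bind the port to the Neutron port `b82d7759-7fa9-4919-9812-a4f5df6893a7`.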
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.865 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.865 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.872 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.872 253665 INFO nova.compute.claims [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.918 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.918 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.918 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:78:3a:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:12:12 np0005532048 nova_compute[253661]: 2025-11-22 09:12:12.919 253665 INFO nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Using config drive#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.028 253665 DEBUG nova.storage.rbd_utils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 426 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.1 MiB/s wr, 71 op/s
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.210 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
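The `ceph df --format=json` call above is how the RBD driver samples cluster capacity. A sketch of parsing that output for the cluster-wide free fraction; the sample JSON below is made up to illustrate the shape of the top-level `stats` object and is not taken from this log:

```python
import json

# Illustrative sample in the shape of `ceph df --format=json` output;
# the numbers are invented, not read from this cluster.
SAMPLE = """
{"stats": {"total_bytes": 64424509440,
           "total_used_bytes": 625000000,
           "total_avail_bytes": 63799509440},
 "pools": [{"name": "vms",
            "stats": {"bytes_used": 200000000, "max_avail": 20000000000}}]}
"""

def cluster_free_fraction(raw: str) -> float:
    """Fraction of raw cluster capacity still available, derived from the
    same top-level stats block Nova's storage driver consumes."""
    stats = json.loads(raw)["stats"]
    return stats["total_avail_bytes"] / stats["total_bytes"]

print(round(cluster_free_fraction(SAMPLE), 3))
```

The ceph-mgr pgmap line in the log (59 GiB / 60 GiB avail) reports the same quantities from the monitor side.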
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.239 253665 DEBUG nova.network.neutron [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Updating instance_info_cache with network_info: [{"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.255 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Releasing lock "refresh_cache-6e825024-ffe6-4fdb-abaa-0c99c65ac38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.256 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance network_info: |[{"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.259 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Start _get_guest_xml network_info=[{"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.263 253665 WARNING nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.269 253665 DEBUG nova.virt.libvirt.host [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.270 253665 DEBUG nova.virt.libvirt.host [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.272 253665 DEBUG nova.virt.libvirt.host [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.273 253665 DEBUG nova.virt.libvirt.host [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.273 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.273 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.274 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.274 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.274 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.274 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.275 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.275 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.275 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.275 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.275 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.275 253665 DEBUG nova.virt.hardware [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.279 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.330 253665 INFO nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Creating config drive at /var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/disk.config#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.336 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa2gr2151 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.442 253665 DEBUG nova.network.neutron [req-fb9dea7b-1b2f-4d42-b76c-da79546b45e9 req-bbaac665-f5c7-435b-a163-de6a5a891a9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updated VIF entry in instance network info cache for port b82d7759-7fa9-4919-9812-a4f5df6893a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.443 253665 DEBUG nova.network.neutron [req-fb9dea7b-1b2f-4d42-b76c-da79546b45e9 req-bbaac665-f5c7-435b-a163-de6a5a891a9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.457 253665 DEBUG oslo_concurrency.lockutils [req-fb9dea7b-1b2f-4d42-b76c-da79546b45e9 req-bbaac665-f5c7-435b-a163-de6a5a891a9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.475 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa2gr2151" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.499 253665 DEBUG nova.storage.rbd_utils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] rbd image 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.503 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/disk.config 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:12:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3252070460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.679 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.688 253665 DEBUG nova.compute.provider_tree [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.701 253665 DEBUG nova.scheduler.client.report [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.727 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.728 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:12:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:12:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/126836262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.754 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.775 253665 DEBUG nova.storage.rbd_utils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.779 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.815 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.816 253665 DEBUG nova.network.neutron [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.965 253665 INFO nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.970 253665 DEBUG nova.policy [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96cac95dc532449d964ffb3705dae943', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dcedb2f9ed6e43dfa8ecc3854373b0b5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:12:13 np0005532048 nova_compute[253661]: 2025-11-22 09:12:13.986 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.069 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.070 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.071 253665 INFO nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Creating image(s)#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.095 253665 DEBUG nova.storage.rbd_utils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5babe591-239b-4ef7-b193-6960c7313292_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.221 253665 DEBUG nova.storage.rbd_utils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5babe591-239b-4ef7-b193-6960c7313292_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:12:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4010960236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.243 253665 DEBUG nova.storage.rbd_utils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5babe591-239b-4ef7-b193-6960c7313292_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.247 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.277 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.280 253665 DEBUG nova.virt.libvirt.vif [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:12:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-137830058',display_name='tempest-ImagesTestJSON-server-137830058',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-137830058',id=27,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-3brb40ng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=Ta
gList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:12:08Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=6e825024-ffe6-4fdb-abaa-0c99c65ac38b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.281 253665 DEBUG nova.network.os_vif_util [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.282 253665 DEBUG nova.network.os_vif_util [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:97:c6,bridge_name='br-int',has_traffic_filtering=True,id=5898357d-7112-429d-86c6-24932a2fc274,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5898357d-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.284 253665 DEBUG nova.objects.instance [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6e825024-ffe6-4fdb-abaa-0c99c65ac38b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.303 253665 DEBUG nova.compute.manager [req-4800868d-58da-46dd-9a2a-6bdd06bc1359 req-7b8f73ca-e6aa-4e5c-baad-ea050044f7d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received event network-changed-5898357d-7112-429d-86c6-24932a2fc274 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.304 253665 DEBUG nova.compute.manager [req-4800868d-58da-46dd-9a2a-6bdd06bc1359 req-7b8f73ca-e6aa-4e5c-baad-ea050044f7d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Refreshing instance network info cache due to event network-changed-5898357d-7112-429d-86c6-24932a2fc274. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.305 253665 DEBUG oslo_concurrency.lockutils [req-4800868d-58da-46dd-9a2a-6bdd06bc1359 req-7b8f73ca-e6aa-4e5c-baad-ea050044f7d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-6e825024-ffe6-4fdb-abaa-0c99c65ac38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.305 253665 DEBUG oslo_concurrency.lockutils [req-4800868d-58da-46dd-9a2a-6bdd06bc1359 req-7b8f73ca-e6aa-4e5c-baad-ea050044f7d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-6e825024-ffe6-4fdb-abaa-0c99c65ac38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.305 253665 DEBUG nova.network.neutron [req-4800868d-58da-46dd-9a2a-6bdd06bc1359 req-7b8f73ca-e6aa-4e5c-baad-ea050044f7d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Refreshing network info cache for port 5898357d-7112-429d-86c6-24932a2fc274 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.308 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  <uuid>6e825024-ffe6-4fdb-abaa-0c99c65ac38b</uuid>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  <name>instance-0000001b</name>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <nova:name>tempest-ImagesTestJSON-server-137830058</nova:name>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:12:13</nova:creationTime>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:12:14 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:        <nova:user uuid="97872d7ce91947789de976821b771135">tempest-ImagesTestJSON-1798612164-project-member</nova:user>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:        <nova:project uuid="d6a9a80b05bf4bb3acb99c5e55603a36">tempest-ImagesTestJSON-1798612164</nova:project>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:        <nova:port uuid="5898357d-7112-429d-86c6-24932a2fc274">
Nov 22 04:12:14 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <entry name="serial">6e825024-ffe6-4fdb-abaa-0c99c65ac38b</entry>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <entry name="uuid">6e825024-ffe6-4fdb-abaa-0c99c65ac38b</entry>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk">
Nov 22 04:12:14 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:12:14 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk.config">
Nov 22 04:12:14 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:12:14 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:b0:97:c6"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <target dev="tap5898357d-71"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b/console.log" append="off"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:12:14 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:12:14 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:12:14 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:12:14 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.309 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Preparing to wait for external event network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.309 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.309 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.310 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.310 253665 DEBUG nova.virt.libvirt.vif [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:12:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-137830058',display_name='tempest-ImagesTestJSON-server-137830058',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-137830058',id=27,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-3brb40ng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member
'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:12:08Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=6e825024-ffe6-4fdb-abaa-0c99c65ac38b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.310 253665 DEBUG nova.network.os_vif_util [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.311 253665 DEBUG nova.network.os_vif_util [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:97:c6,bridge_name='br-int',has_traffic_filtering=True,id=5898357d-7112-429d-86c6-24932a2fc274,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5898357d-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.311 253665 DEBUG os_vif [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:97:c6,bridge_name='br-int',has_traffic_filtering=True,id=5898357d-7112-429d-86c6-24932a2fc274,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5898357d-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.313 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.313 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.314 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.316 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.317 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5898357d-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.317 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5898357d-71, col_values=(('external_ids', {'iface-id': '5898357d-7112-429d-86c6-24932a2fc274', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:97:c6', 'vm-uuid': '6e825024-ffe6-4fdb-abaa-0c99c65ac38b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.318 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.318 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.319 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.319 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.339 253665 DEBUG nova.storage.rbd_utils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5babe591-239b-4ef7-b193-6960c7313292_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.343 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 5babe591-239b-4ef7-b193-6960c7313292_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:14 np0005532048 NetworkManager[48920]: <info>  [1763802734.3675] manager: (tap5898357d-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.379 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.387 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.388 253665 INFO os_vif [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:97:c6,bridge_name='br-int',has_traffic_filtering=True,id=5898357d-7112-429d-86c6-24932a2fc274,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5898357d-71')#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.494 253665 DEBUG nova.network.neutron [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Successfully created port: d3202009-ab9d-4ee2-a94d-0d05cc739658 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.542 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.543 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.543 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No VIF found with MAC fa:16:3e:b0:97:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.544 253665 INFO nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Using config drive#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.564 253665 DEBUG nova.storage.rbd_utils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.658 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.910 253665 INFO nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Creating config drive at /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b/disk.config#033[00m
Nov 22 04:12:14 np0005532048 nova_compute[253661]: 2025-11-22 09:12:14.916 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyt4pfqha execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.052 253665 INFO nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance failed to shutdown in 60 seconds.#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.055 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyt4pfqha" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.085 253665 DEBUG nova.storage.rbd_utils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.091 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b/disk.config 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.124 253665 DEBUG oslo_concurrency.processutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/disk.config 3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.125 253665 INFO nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Deleting local config drive /var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/disk.config because it was imported into RBD.#033[00m
Nov 22 04:12:15 np0005532048 NetworkManager[48920]: <info>  [1763802735.1790] manager: (tapb82d7759-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Nov 22 04:12:15 np0005532048 kernel: tapb82d7759-7f: entered promiscuous mode
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.185 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:15Z|00146|binding|INFO|Claiming lport b82d7759-7fa9-4919-9812-a4f5df6893a7 for this chassis.
Nov 22 04:12:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:15Z|00147|binding|INFO|b82d7759-7fa9-4919-9812-a4f5df6893a7: Claiming fa:16:3e:78:3a:a5 10.100.0.12
Nov 22 04:12:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:15Z|00148|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.196 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:3a:a5 10.100.0.12'], port_security=['fa:16:3e:78:3a:a5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3c70b093-a92a-4781-8e32-2a7eefde4a43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0e8c403-f9ed-4054-8f14-f56c4d8c06c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b82d7759-7fa9-4919-9812-a4f5df6893a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.198 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b82d7759-7fa9-4919-9812-a4f5df6893a7 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 bound to our chassis#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.201 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00#033[00m
Nov 22 04:12:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 451 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 4.3 MiB/s wr, 95 op/s
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:15 np0005532048 systemd-udevd[288982]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.216 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.219 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[91188c27-8847-47a6-8dc9-774a4310ad3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.220 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5e2cd359-c1 in ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.222 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5e2cd359-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.222 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4becf683-b57b-4b4e-ac3f-968edc4169bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.223 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b94e3059-71c6-4b4e-87b4-257b7d2b3188]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:15 np0005532048 NetworkManager[48920]: <info>  [1763802735.2295] device (tapb82d7759-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:12:15 np0005532048 NetworkManager[48920]: <info>  [1763802735.2303] device (tapb82d7759-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:12:15 np0005532048 systemd-machined[215941]: New machine qemu-30-instance-0000001a.
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.235 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[4df535fe-ea4a-40d5-83f5-3d1aee8a92fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:15 np0005532048 systemd[1]: Started Virtual Machine qemu-30-instance-0000001a.
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.271 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a98af082-e0d6-4049-9fa1-7d7b0f02bc3d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.303 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2e27227c-cb8d-4e3a-86d4-d5b337c048e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.324 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4170d9-eed4-4905-956a-e6bfbf0f3776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:15 np0005532048 NetworkManager[48920]: <info>  [1763802735.3253] manager: (tap5e2cd359-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Nov 22 04:12:15 np0005532048 systemd-udevd[288988]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:12:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.357 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[05e196e7-2ab8-4eb9-a10b-74d8f5b2121b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.360 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[284cbb8e-94f2-4eac-a856-5e7c6cf927c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:15 np0005532048 NetworkManager[48920]: <info>  [1763802735.3917] device (tap5e2cd359-c0): carrier: link connected
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.401 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c97272-b92a-4639-871c-06b69ff01e14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.418 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.423 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[58ee8860-ccb5-4e95-8bb1-b32906f661ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558612, 'reachable_time': 36035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289017, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:15Z|00149|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.436 253665 DEBUG nova.network.neutron [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Successfully updated port: d3202009-ab9d-4ee2-a94d-0d05cc739658 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:12:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:15Z|00150|binding|INFO|Setting lport b82d7759-7fa9-4919-9812-a4f5df6893a7 ovn-installed in OVS
Nov 22 04:12:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:15Z|00151|binding|INFO|Setting lport b82d7759-7fa9-4919-9812-a4f5df6893a7 up in Southbound
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.439 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.452 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "refresh_cache-5babe591-239b-4ef7-b193-6960c7313292" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.452 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquired lock "refresh_cache-5babe591-239b-4ef7-b193-6960c7313292" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.453 253665 DEBUG nova.network.neutron [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.456 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d55c73-eb3b-44cf-bbb9-6dcf49a6d996]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:bd41'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558612, 'tstamp': 558612}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289020, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.482 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[86443e01-ef07-479b-8658-ecb61dc00117]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558612, 'reachable_time': 36035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289021, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.512 253665 DEBUG nova.network.neutron [req-4800868d-58da-46dd-9a2a-6bdd06bc1359 req-7b8f73ca-e6aa-4e5c-baad-ea050044f7d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Updated VIF entry in instance network info cache for port 5898357d-7112-429d-86c6-24932a2fc274. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.513 253665 DEBUG nova.network.neutron [req-4800868d-58da-46dd-9a2a-6bdd06bc1359 req-7b8f73ca-e6aa-4e5c-baad-ea050044f7d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Updating instance_info_cache with network_info: [{"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.521 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eed050b2-d15b-4e9f-8e99-2a683b4c22d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.527 253665 DEBUG oslo_concurrency.lockutils [req-4800868d-58da-46dd-9a2a-6bdd06bc1359 req-7b8f73ca-e6aa-4e5c-baad-ea050044f7d1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-6e825024-ffe6-4fdb-abaa-0c99c65ac38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.575 253665 DEBUG nova.network.neutron [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.606 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6339f1-035b-4f1a-88e0-6b5a7bdd3499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.608 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.608 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.609 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.610 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:15 np0005532048 NetworkManager[48920]: <info>  [1763802735.6116] manager: (tap5e2cd359-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Nov 22 04:12:15 np0005532048 kernel: tap5e2cd359-c0: entered promiscuous mode
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.617 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.653 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:15Z|00152|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.673 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.674 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5e2cd359-c68f-4256-90e8-0ad40aff8a00.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5e2cd359-c68f-4256-90e8-0ad40aff8a00.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.676 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a0771aa9-b4cd-430c-83d9-1ee0396a610a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.677 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/5e2cd359-c68f-4256-90e8-0ad40aff8a00.pid.haproxy
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 5e2cd359-c68f-4256-90e8-0ad40aff8a00
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.677 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'env', 'PROCESS_TAG=haproxy-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5e2cd359-c68f-4256-90e8-0ad40aff8a00.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.730 253665 DEBUG nova.compute.manager [req-5c19d033-b920-45bd-914f-69d06813b6c0 req-6eb0fb86-577e-4986-afb0-d14b025b361b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.730 253665 DEBUG oslo_concurrency.lockutils [req-5c19d033-b920-45bd-914f-69d06813b6c0 req-6eb0fb86-577e-4986-afb0-d14b025b361b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.731 253665 DEBUG oslo_concurrency.lockutils [req-5c19d033-b920-45bd-914f-69d06813b6c0 req-6eb0fb86-577e-4986-afb0-d14b025b361b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.731 253665 DEBUG oslo_concurrency.lockutils [req-5c19d033-b920-45bd-914f-69d06813b6c0 req-6eb0fb86-577e-4986-afb0-d14b025b361b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.731 253665 DEBUG nova.compute.manager [req-5c19d033-b920-45bd-914f-69d06813b6c0 req-6eb0fb86-577e-4986-afb0-d14b025b361b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Processing event network-vif-plugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:12:15 np0005532048 kernel: tap716b716d-2e (unregistering): left promiscuous mode
Nov 22 04:12:15 np0005532048 NetworkManager[48920]: <info>  [1763802735.8667] device (tap716b716d-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:12:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:15Z|00153|binding|INFO|Releasing lport 716b716d-2ee2-44e7-9850-c10854634f77 from this chassis (sb_readonly=0)
Nov 22 04:12:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:15Z|00154|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 down in Southbound
Nov 22 04:12:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:15Z|00155|binding|INFO|Removing iface tap716b716d-2e ovn-installed in OVS
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.892 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.902 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:15.904 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:7d:dd 10.100.0.8'], port_security=['fa:16:3e:47:7d:dd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=716b716d-2ee2-44e7-9850-c10854634f77) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:12:15 np0005532048 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 22 04:12:15 np0005532048 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000012.scope: Consumed 16.606s CPU time.
Nov 22 04:12:15 np0005532048 systemd-machined[215941]: Machine qemu-28-instance-00000012 terminated.
Nov 22 04:12:15 np0005532048 systemd-udevd[289010]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:12:15 np0005532048 NetworkManager[48920]: <info>  [1763802735.9462] manager: (tap716b716d-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.969 253665 INFO nova.virt.libvirt.driver [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance destroyed successfully.#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.981 253665 INFO nova.virt.libvirt.driver [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance destroyed successfully.#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.982 253665 DEBUG nova.virt.libvirt.vif [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:11:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-
member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:11:12Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.982 253665 DEBUG nova.network.os_vif_util [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.983 253665 DEBUG nova.network.os_vif_util [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.983 253665 DEBUG os_vif [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.985 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.985 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap716b716d-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.987 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.989 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:12:15 np0005532048 podman[289061]: 2025-11-22 09:12:15.989587953 +0000 UTC m=+0.083552856 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.994 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:15 np0005532048 nova_compute[253661]: 2025-11-22 09:12:15.997 253665 INFO os_vif [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e')#033[00m
Nov 22 04:12:16 np0005532048 podman[289056]: 2025-11-22 09:12:16.016468171 +0000 UTC m=+0.104176943 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:12:16 np0005532048 podman[289139]: 2025-11-22 09:12:16.053887732 +0000 UTC m=+0.024538522 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.387 253665 DEBUG nova.compute.manager [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received event network-changed-d3202009-ab9d-4ee2-a94d-0d05cc739658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.387 253665 DEBUG nova.compute.manager [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Refreshing instance network info cache due to event network-changed-d3202009-ab9d-4ee2-a94d-0d05cc739658. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.387 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-5babe591-239b-4ef7-b193-6960c7313292" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.485 253665 DEBUG nova.network.neutron [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Updating instance_info_cache with network_info: [{"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.503 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Releasing lock "refresh_cache-5babe591-239b-4ef7-b193-6960c7313292" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.503 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Instance network_info: |[{"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.504 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-5babe591-239b-4ef7-b193-6960c7313292" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.504 253665 DEBUG nova.network.neutron [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Refreshing network info cache for port d3202009-ab9d-4ee2-a94d-0d05cc739658 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:12:16 np0005532048 podman[289139]: 2025-11-22 09:12:16.633704911 +0000 UTC m=+0.604355691 container create e95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.661 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.663 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802736.660913, 3c70b093-a92a-4781-8e32-2a7eefde4a43 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.664 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] VM Started (Lifecycle Event)#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.667 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.671 253665 INFO nova.virt.libvirt.driver [-] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Instance spawned successfully.#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.671 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.686 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.692 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.695 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.695 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.696 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.696 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.697 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.697 253665 DEBUG nova.virt.libvirt.driver [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.719 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.720 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802736.6612504, 3c70b093-a92a-4781-8e32-2a7eefde4a43 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.720 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.746 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.750 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802736.6669443, 3c70b093-a92a-4781-8e32-2a7eefde4a43 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.750 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.757 253665 INFO nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Took 12.03 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.758 253665 DEBUG nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.769 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.772 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.803 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.842 253665 INFO nova.compute.manager [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Took 14.38 seconds to build instance.#033[00m
Nov 22 04:12:16 np0005532048 systemd[1]: Started libpod-conmon-e95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d.scope.
Nov 22 04:12:16 np0005532048 nova_compute[253661]: 2025-11-22 09:12:16.856 253665 DEBUG oslo_concurrency.lockutils [None req-f6169d08-9f0f-4c10-80ba-568d35d10dee 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:16 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:12:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cb179eb9809a39cf76cdbb9739dddbf19fc65549fbc0ae24ede098ba40d347/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:12:17 np0005532048 podman[289139]: 2025-11-22 09:12:17.03481801 +0000 UTC m=+1.005468780 container init e95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:12:17 np0005532048 podman[289139]: 2025-11-22 09:12:17.046352218 +0000 UTC m=+1.017002988 container start e95c04f90116afdf77189cb1988dffbbb5097a6d778592f085faa62963e4c12d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:12:17 np0005532048 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[289183]: [NOTICE]   (289187) : New worker (289189) forked
Nov 22 04:12:17 np0005532048 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[289183]: [NOTICE]   (289187) : Loading success.
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.080 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 5babe591-239b-4ef7-b193-6960c7313292_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.737s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.164 253665 DEBUG nova.storage.rbd_utils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] resizing rbd image 5babe591-239b-4ef7-b193-6960c7313292_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.193 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 716b716d-2ee2-44e7-9850-c10854634f77 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a unbound from our chassis#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.195 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a#033[00m
Nov 22 04:12:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 468 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 4.4 MiB/s wr, 93 op/s
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.214 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[17584355-d1e1-4478-8158-6ce3d97f49db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.246 253665 DEBUG oslo_concurrency.processutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b/disk.config 6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.247 253665 INFO nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Deleting local config drive /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b/disk.config because it was imported into RBD.#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.264 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5df9bb-7306-4880-a516-d6bda639cafe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.270 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ee535955-0985-4364-b8bb-924eb09317eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.313 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0558d8-6fe3-4a97-a15e-971b99991279]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 kernel: tap5898357d-71: entered promiscuous mode
Nov 22 04:12:17 np0005532048 NetworkManager[48920]: <info>  [1763802737.3190] manager: (tap5898357d-71): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Nov 22 04:12:17 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:17Z|00156|binding|INFO|Claiming lport 5898357d-7112-429d-86c6-24932a2fc274 for this chassis.
Nov 22 04:12:17 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:17Z|00157|binding|INFO|5898357d-7112-429d-86c6-24932a2fc274: Claiming fa:16:3e:b0:97:c6 10.100.0.3
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.319 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.325 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.333 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:97:c6 10.100.0.3'], port_security=['fa:16:3e:b0:97:c6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6e825024-ffe6-4fdb-abaa-0c99c65ac38b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=5898357d-7112-429d-86c6-24932a2fc274) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:12:17 np0005532048 NetworkManager[48920]: <info>  [1763802737.3390] device (tap5898357d-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:12:17 np0005532048 NetworkManager[48920]: <info>  [1763802737.3398] device (tap5898357d-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.345 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f94cd90f-2e93-433d-909d-e1baa4e7bce7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 15, 'rx_bytes': 952, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 15, 'rx_bytes': 952, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289270, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 systemd-machined[215941]: New machine qemu-31-instance-0000001b.
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.367 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1392c125-7db7-4a34-bf76-79e73eb96e68]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289274, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289274, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.369 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.372 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:17 np0005532048 systemd[1]: Started Virtual Machine qemu-31-instance-0000001b.
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.404 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.405 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.405 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.405 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.407 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 5898357d-7112-429d-86c6-24932a2fc274 in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 bound to our chassis#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:17 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:17Z|00158|binding|INFO|Setting lport 5898357d-7112-429d-86c6-24932a2fc274 ovn-installed in OVS
Nov 22 04:12:17 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:17Z|00159|binding|INFO|Setting lport 5898357d-7112-429d-86c6-24932a2fc274 up in Southbound
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.409 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.410 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.428 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[79eb4f24-3d6e-443c-924f-b3cb389b3cf2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.429 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2abeeeb2-21 in ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.431 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2abeeeb2-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.432 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a21552-191d-4652-abde-e2f3b4f8fe0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.433 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1db24e-7762-4aaa-9dca-11e994c282e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.450 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e7233903-4f06-43a3-8e89-9c6c1e1eadfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.464 253665 DEBUG nova.objects.instance [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lazy-loading 'migration_context' on Instance uuid 5babe591-239b-4ef7-b193-6960c7313292 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.476 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.477 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Ensure instance console log exists: /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.477 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.478 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.479 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.481 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Start _get_guest_xml network_info=[{"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.484 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[058c277b-b817-4001-a11b-df0b7561a0ff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.491 253665 WARNING nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.496 253665 DEBUG nova.virt.libvirt.host [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.496 253665 DEBUG nova.virt.libvirt.host [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.500 253665 DEBUG nova.virt.libvirt.host [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.501 253665 DEBUG nova.virt.libvirt.host [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.501 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.501 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.502 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.502 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.502 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.502 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.502 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.503 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.503 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.503 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.503 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.504 253665 DEBUG nova.virt.hardware [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.507 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.525 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[969074dd-f1ce-4162-8355-cb6755432edd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 NetworkManager[48920]: <info>  [1763802737.5404] manager: (tap2abeeeb2-20): new Veth device (/org/freedesktop/NetworkManager/Devices/81)
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.539 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[109fd7da-9bda-4b3d-8857-2b00959912a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.589 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[240ccec1-8254-4244-9287-07e2af6618eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.595 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[63533498-54a7-46d4-a954-b2cd56319602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 NetworkManager[48920]: <info>  [1763802737.6333] device (tap2abeeeb2-20): carrier: link connected
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.642 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[97645460-8a48-47c9-a611-2a05fc84b71f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.660 253665 DEBUG nova.network.neutron [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Updated VIF entry in instance network info cache for port d3202009-ab9d-4ee2-a94d-0d05cc739658. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.662 253665 DEBUG nova.network.neutron [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Updating instance_info_cache with network_info: [{"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.666 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f12baaf6-1ea1-4266-b127-cba00f1de0bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558837, 'reachable_time': 36353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289310, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.678 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-5babe591-239b-4ef7-b193-6960c7313292" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.679 253665 DEBUG nova.compute.manager [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.679 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.680 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.680 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.680 253665 DEBUG nova.compute.manager [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.681 253665 WARNING nova.compute.manager [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received unexpected event network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with vm_state active and task_state rebuilding.#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.681 253665 DEBUG nova.compute.manager [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.681 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.682 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.682 253665 DEBUG oslo_concurrency.lockutils [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.682 253665 DEBUG nova.compute.manager [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.682 253665 WARNING nova.compute.manager [req-4835f7cd-334d-4980-936f-dda80557277b req-11df3868-126f-428f-b236-b3b2b355bed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received unexpected event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with vm_state active and task_state rebuilding.#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.688 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[71a23941-65cc-4ec0-a799-273b21a1ad81]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:bff7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558837, 'tstamp': 558837}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289321, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.714 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b54f668e-f26b-461b-878f-b2286b4cdd48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558837, 'reachable_time': 36353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289330, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.761 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1d5ac4a4-d3e4-4068-8c8d-86f1fc19a6c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.874 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[271c8ffd-b48b-4651-a5f7-2b458415e808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.880 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.881 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.882 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2abeeeb2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.885 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:17 np0005532048 NetworkManager[48920]: <info>  [1763802737.8865] manager: (tap2abeeeb2-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Nov 22 04:12:17 np0005532048 kernel: tap2abeeeb2-20: entered promiscuous mode
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.889 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.893 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2abeeeb2-20, col_values=(('external_ids', {'iface-id': '3249a299-7633-4c70-aa35-5f648ecb0d7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:17 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:17Z|00160|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.895 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.897 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.898 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b4d0a8fc-8182-4aac-88be-387c7a130e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.899 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:12:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:17.901 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'env', 'PROCESS_TAG=haproxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:12:17 np0005532048 nova_compute[253661]: 2025-11-22 09:12:17.914 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.032 253665 DEBUG nova.compute.manager [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.032 253665 DEBUG oslo_concurrency.lockutils [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.033 253665 DEBUG oslo_concurrency.lockutils [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.033 253665 DEBUG oslo_concurrency.lockutils [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.034 253665 DEBUG nova.compute.manager [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-plugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.035 253665 WARNING nova.compute.manager [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-plugged-b82d7759-7fa9-4919-9812-a4f5df6893a7 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.035 253665 DEBUG nova.compute.manager [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received event network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.035 253665 DEBUG oslo_concurrency.lockutils [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.036 253665 DEBUG oslo_concurrency.lockutils [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.036 253665 DEBUG oslo_concurrency.lockutils [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.036 253665 DEBUG nova.compute.manager [req-ef472489-5440-48a4-8019-a513ca650fda req-fcb40a40-2ecb-47a7-b3d5-7036ad246fa5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Processing event network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:12:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:12:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2577429672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.098 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.128 253665 DEBUG nova.storage.rbd_utils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5babe591-239b-4ef7-b193-6960c7313292_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.133 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.168 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802738.117068, 6e825024-ffe6-4fdb-abaa-0c99c65ac38b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.169 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] VM Started (Lifecycle Event)#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.173 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802723.1447194, b7c923dd-3ae9-4c51-8d6d-6305a71fe97f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.174 253665 INFO nova.compute.manager [-] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.175 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.187 253665 INFO nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deleting instance files /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_del#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.188 253665 INFO nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deletion of /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_del complete#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.194 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.197 253665 DEBUG nova.compute.manager [None req-90a250fb-f5ab-4bff-970f-b9e6d52afe23 - - - - - -] [instance: b7c923dd-3ae9-4c51-8d6d-6305a71fe97f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.197 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.204 253665 INFO nova.virt.libvirt.driver [-] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance spawned successfully.#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.205 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.208 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.223 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.223 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802738.117386, 6e825024-ffe6-4fdb-abaa-0c99c65ac38b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.224 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.226 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.227 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.227 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.227 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.228 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.228 253665 DEBUG nova.virt.libvirt.driver [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.231 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.259 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.263 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.265 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.265 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.316 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802738.179927, 6e825024-ffe6-4fdb-abaa-0c99c65ac38b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.317 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.338 253665 INFO nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Took 9.99 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.339 253665 DEBUG nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.347 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.354 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.382 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:12:18 np0005532048 podman[289447]: 2025-11-22 09:12:18.412124783 +0000 UTC m=+0.075689705 container create c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.439 253665 INFO nova.compute.manager [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Took 11.23 seconds to build instance.#033[00m
Nov 22 04:12:18 np0005532048 podman[289447]: 2025-11-22 09:12:18.36842781 +0000 UTC m=+0.031992762 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.469 253665 DEBUG oslo_concurrency.lockutils [None req-faa4e073-94e4-44d9-8cda-d7a7ca1955cc 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.474 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.474 253665 INFO nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating image(s)#033[00m
Nov 22 04:12:18 np0005532048 systemd[1]: Started libpod-conmon-c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928.scope.
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.504 253665 DEBUG nova.storage.rbd_utils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:18 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:12:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2cc462d771deebe4c19614685b97fc397e44751755ad5b1c8389e8f680dea6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.542 253665 DEBUG nova.storage.rbd_utils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:18 np0005532048 podman[289447]: 2025-11-22 09:12:18.551553484 +0000 UTC m=+0.215118436 container init c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:12:18 np0005532048 podman[289447]: 2025-11-22 09:12:18.558536363 +0000 UTC m=+0.222101295 container start c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:12:18 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[289481]: [NOTICE]   (289521) : New worker (289538) forked
Nov 22 04:12:18 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[289481]: [NOTICE]   (289521) : Loading success.
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.600 253665 DEBUG nova.storage.rbd_utils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.609 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:18 np0005532048 NetworkManager[48920]: <info>  [1763802738.7029] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Nov 22 04:12:18 np0005532048 NetworkManager[48920]: <info>  [1763802738.7036] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:12:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4122684865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.723 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.725 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.725 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.726 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.757 253665 DEBUG nova.storage.rbd_utils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.762 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.801 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.804 253665 DEBUG nova.virt.libvirt.vif [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:12:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-557765812',display_name='tempest-ImagesOneServerNegativeTestJSON-server-557765812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-557765812',id=28,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcedb2f9ed6e43dfa8ecc3854373b0b5',ramdisk_id='',reservation_id='r-siow6hfb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-251054159',owner_us
er_name='tempest-ImagesOneServerNegativeTestJSON-251054159-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:12:14Z,user_data=None,user_id='96cac95dc532449d964ffb3705dae943',uuid=5babe591-239b-4ef7-b193-6960c7313292,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.805 253665 DEBUG nova.network.os_vif_util [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converting VIF {"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.806 253665 DEBUG nova.network.os_vif_util [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:33:a1,bridge_name='br-int',has_traffic_filtering=True,id=d3202009-ab9d-4ee2-a94d-0d05cc739658,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3202009-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.807 253665 DEBUG nova.objects.instance [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5babe591-239b-4ef7-b193-6960c7313292 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:12:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:12:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4263467567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.827 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  <uuid>5babe591-239b-4ef7-b193-6960c7313292</uuid>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  <name>instance-0000001c</name>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-557765812</nova:name>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:12:17</nova:creationTime>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:12:18 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:        <nova:user uuid="96cac95dc532449d964ffb3705dae943">tempest-ImagesOneServerNegativeTestJSON-251054159-project-member</nova:user>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:        <nova:project uuid="dcedb2f9ed6e43dfa8ecc3854373b0b5">tempest-ImagesOneServerNegativeTestJSON-251054159</nova:project>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:        <nova:port uuid="d3202009-ab9d-4ee2-a94d-0d05cc739658">
Nov 22 04:12:18 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <entry name="serial">5babe591-239b-4ef7-b193-6960c7313292</entry>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <entry name="uuid">5babe591-239b-4ef7-b193-6960c7313292</entry>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/5babe591-239b-4ef7-b193-6960c7313292_disk">
Nov 22 04:12:18 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:12:18 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/5babe591-239b-4ef7-b193-6960c7313292_disk.config">
Nov 22 04:12:18 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:12:18 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:b2:33:a1"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <target dev="tapd3202009-ab"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292/console.log" append="off"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:12:18 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:12:18 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:12:18 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:12:18 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.827 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Preparing to wait for external event network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.828 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "5babe591-239b-4ef7-b193-6960c7313292-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.828 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.828 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.829 253665 DEBUG nova.virt.libvirt.vif [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:12:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-557765812',display_name='tempest-ImagesOneServerNegativeTestJSON-server-557765812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-557765812',id=28,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcedb2f9ed6e43dfa8ecc3854373b0b5',ramdisk_id='',reservation_id='r-siow6hfb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-251054159
',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-251054159-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:12:14Z,user_data=None,user_id='96cac95dc532449d964ffb3705dae943',uuid=5babe591-239b-4ef7-b193-6960c7313292,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.830 253665 DEBUG nova.network.os_vif_util [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converting VIF {"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.830 253665 DEBUG nova.network.os_vif_util [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:33:a1,bridge_name='br-int',has_traffic_filtering=True,id=d3202009-ab9d-4ee2-a94d-0d05cc739658,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3202009-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.831 253665 DEBUG os_vif [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:33:a1,bridge_name='br-int',has_traffic_filtering=True,id=d3202009-ab9d-4ee2-a94d-0d05cc739658,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3202009-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.831 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.832 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.833 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.837 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.838 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3202009-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.838 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3202009-ab, col_values=(('external_ids', {'iface-id': 'd3202009-ab9d-4ee2-a94d-0d05cc739658', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:33:a1', 'vm-uuid': '5babe591-239b-4ef7-b193-6960c7313292'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.840 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:18 np0005532048 NetworkManager[48920]: <info>  [1763802738.8412] manager: (tapd3202009-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.846 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.853 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.856 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.858 253665 INFO os_vif [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:33:a1,bridge_name='br-int',has_traffic_filtering=True,id=d3202009-ab9d-4ee2-a94d-0d05cc739658,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3202009-ab')#033[00m
Nov 22 04:12:18 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:18Z|00161|binding|INFO|Releasing lport 8bfa3e10-9d2e-4476-b427-92376a047c7f from this chassis (sb_readonly=0)
Nov 22 04:12:18 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:18Z|00162|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 04:12:18 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:18Z|00163|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.899 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.926 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.928 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.928 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No VIF found with MAC fa:16:3e:b2:33:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.929 253665 INFO nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Using config drive#033[00m
Nov 22 04:12:18 np0005532048 nova_compute[253661]: 2025-11-22 09:12:18.970 253665 DEBUG nova.storage.rbd_utils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5babe591-239b-4ef7-b193-6960c7313292_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.075 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.075 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.079 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.080 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.083 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.083 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.087 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.087 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.091 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.092 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.096 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.096 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.165 253665 DEBUG oslo_concurrency.lockutils [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.166 253665 DEBUG oslo_concurrency.lockutils [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.166 253665 DEBUG nova.compute.manager [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.172 253665 DEBUG nova.compute.manager [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.173 253665 DEBUG nova.objects.instance [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'flavor' on Instance uuid 6e825024-ffe6-4fdb-abaa-0c99c65ac38b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.200 253665 DEBUG nova.virt.libvirt.driver [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:12:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 448 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 794 KiB/s rd, 4.4 MiB/s wr, 140 op/s
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.206 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.281 253665 DEBUG nova.storage.rbd_utils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] resizing rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.321 253665 INFO nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Creating config drive at /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292/disk.config#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.328 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpciljblvs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.425 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.426 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Ensure instance console log exists: /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.427 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.427 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.428 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.431 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Start _get_guest_xml network_info=[{"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.437 253665 WARNING nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.444 253665 DEBUG nova.virt.libvirt.host [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.446 253665 DEBUG nova.virt.libvirt.host [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.453 253665 DEBUG nova.virt.libvirt.host [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.453 253665 DEBUG nova.virt.libvirt.host [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.454 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.454 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.455 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.455 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.455 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.455 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.455 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.456 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.456 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.456 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.456 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.456 253665 DEBUG nova.virt.hardware [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.457 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.473 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.508 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpciljblvs" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.544 253665 DEBUG nova.storage.rbd_utils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 5babe591-239b-4ef7-b193-6960c7313292_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.550 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292/disk.config 5babe591-239b-4ef7-b193-6960c7313292_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.699 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.794 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.795 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3538MB free_disk=59.754974365234375GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.796 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.796 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.810 253665 DEBUG oslo_concurrency.processutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292/disk.config 5babe591-239b-4ef7-b193-6960c7313292_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.810 253665 INFO nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Deleting local config drive /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292/disk.config because it was imported into RBD.#033[00m
Nov 22 04:12:19 np0005532048 kernel: tapd3202009-ab: entered promiscuous mode
Nov 22 04:12:19 np0005532048 NetworkManager[48920]: <info>  [1763802739.8650] manager: (tapd3202009-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/86)
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.866 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:19Z|00164|binding|INFO|Claiming lport d3202009-ab9d-4ee2-a94d-0d05cc739658 for this chassis.
Nov 22 04:12:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:19Z|00165|binding|INFO|d3202009-ab9d-4ee2-a94d-0d05cc739658: Claiming fa:16:3e:b2:33:a1 10.100.0.3
Nov 22 04:12:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:19Z|00166|binding|INFO|Setting lport d3202009-ab9d-4ee2-a94d-0d05cc739658 ovn-installed in OVS
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.891 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:19Z|00167|binding|INFO|Setting lport d3202009-ab9d-4ee2-a94d-0d05cc739658 up in Southbound
Nov 22 04:12:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.896 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:33:a1 10.100.0.3'], port_security=['fa:16:3e:b2:33:a1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5babe591-239b-4ef7-b193-6960c7313292', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcedb2f9ed6e43dfa8ecc3854373b0b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc00b739-f7be-45ec-82d1-43cf2c8c1544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d529718-199e-4cab-8a60-f03c6cb8db18, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=d3202009-ab9d-4ee2-a94d-0d05cc739658) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:12:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.898 162862 INFO neutron.agent.ovn.metadata.agent [-] Port d3202009-ab9d-4ee2-a94d-0d05cc739658 in datapath 691e79ad-da5d-4276-aa7d-732c2aaedbff bound to our chassis#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.895 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.902 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 691e79ad-da5d-4276-aa7d-732c2aaedbff#033[00m
Nov 22 04:12:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.919 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[65705b9c-9e04-41d7-ac6c-15652b69eeda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.920 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap691e79ad-d1 in ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:12:19 np0005532048 systemd-udevd[289759]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:12:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.923 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap691e79ad-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:12:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.923 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3bbfe9c8-6e1b-4b6b-b9c8-45760754ba15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.929 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[42b21f75-6313-4fe0-9a11-f320b03b087e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:19 np0005532048 systemd-machined[215941]: New machine qemu-32-instance-0000001c.
Nov 22 04:12:19 np0005532048 NetworkManager[48920]: <info>  [1763802739.9401] device (tapd3202009-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:12:19 np0005532048 NetworkManager[48920]: <info>  [1763802739.9414] device (tapd3202009-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:12:19 np0005532048 systemd[1]: Started Virtual Machine qemu-32-instance-0000001c.
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.950 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 3ae08a2f-348c-406b-8ffc-9acb8a542e1c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:12:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.950 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a06c5a55-cb57-4d3e-8177-2b93f47b1110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.951 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d99bd27b-0ff3-493e-a69c-6c7ec034aa81 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.951 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 96000606-0bc4-4cf1-9e33-360a640c2cb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.951 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance de145d76-062b-4362-bc82-09e09d2f9154 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.951 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 3c70b093-a92a-4781-8e32-2a7eefde4a43 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.951 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 6e825024-ffe6-4fdb-abaa-0c99c65ac38b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.953 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 5babe591-239b-4ef7-b193-6960c7313292 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.953 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:12:19 np0005532048 nova_compute[253661]: 2025-11-22 09:12:19.954 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1408MB phys_disk=59GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:12:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:19.971 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce0a70f-3060-407d-87b2-50ee66354f49]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.004 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6770ea6b-43f0-4c90-91e0-6949e5c359b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:20 np0005532048 systemd-udevd[289763]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:12:20 np0005532048 NetworkManager[48920]: <info>  [1763802740.0132] manager: (tap691e79ad-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/87)
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.011 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c9e36d36-a170-44b6-ad8c-d86936095c61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.060 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3996c406-1365-4358-a272-8be1d56c3f6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:12:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3888745374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.065 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef15fba-fabf-4424-89c6-fd6617dad3a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:20 np0005532048 NetworkManager[48920]: <info>  [1763802740.1179] device (tap691e79ad-d0): carrier: link connected
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.116 253665 DEBUG nova.compute.manager [req-1f7ad036-f4a8-4ef0-b28c-5d19dd836f02 req-b35b0765-478a-4819-b0c8-9b1d5f11ece1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received event network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.119 253665 DEBUG oslo_concurrency.lockutils [req-1f7ad036-f4a8-4ef0-b28c-5d19dd836f02 req-b35b0765-478a-4819-b0c8-9b1d5f11ece1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5babe591-239b-4ef7-b193-6960c7313292-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.120 253665 DEBUG oslo_concurrency.lockutils [req-1f7ad036-f4a8-4ef0-b28c-5d19dd836f02 req-b35b0765-478a-4819-b0c8-9b1d5f11ece1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.120 253665 DEBUG oslo_concurrency.lockutils [req-1f7ad036-f4a8-4ef0-b28c-5d19dd836f02 req-b35b0765-478a-4819-b0c8-9b1d5f11ece1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.121 253665 DEBUG nova.compute.manager [req-1f7ad036-f4a8-4ef0-b28c-5d19dd836f02 req-b35b0765-478a-4819-b0c8-9b1d5f11ece1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Processing event network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.121 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.126 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5406dee8-749f-4553-a6ab-9da3e1aa148f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.150 253665 DEBUG nova.storage.rbd_utils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.171 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.181 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[438bf922-80a5-4701-89c6-be498e6411c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap691e79ad-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:f9:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559085, 'reachable_time': 34526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289808, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.209 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.213 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3430e576-ff13-42f9-a12b-1bd76868d8a3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:f9e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 559085, 'tstamp': 559085}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289814, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.239 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd8c0d2-f3bb-42de-91cf-4a432dad1172]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap691e79ad-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:f9:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559085, 'reachable_time': 34526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289815, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.253 253665 DEBUG nova.compute.manager [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received event network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.254 253665 DEBUG oslo_concurrency.lockutils [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.255 253665 DEBUG oslo_concurrency.lockutils [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.255 253665 DEBUG oslo_concurrency.lockutils [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.256 253665 DEBUG nova.compute.manager [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] No waiting events found dispatching network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.256 253665 WARNING nova.compute.manager [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received unexpected event network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 for instance with vm_state active and task_state powering-off.#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.256 253665 DEBUG nova.compute.manager [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-changed-b82d7759-7fa9-4919-9812-a4f5df6893a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.257 253665 DEBUG nova.compute.manager [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing instance network info cache due to event network-changed-b82d7759-7fa9-4919-9812-a4f5df6893a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.257 253665 DEBUG oslo_concurrency.lockutils [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.258 253665 DEBUG oslo_concurrency.lockutils [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.258 253665 DEBUG nova.network.neutron [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing network info cache for port b82d7759-7fa9-4919-9812-a4f5df6893a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.290 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ac00a6-9ff3-46ee-a974-87282a4a5724]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.375 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e051443b-6ee7-4ac1-9757-932f76656c59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.377 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap691e79ad-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.377 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.378 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap691e79ad-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.380 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:20 np0005532048 NetworkManager[48920]: <info>  [1763802740.3814] manager: (tap691e79ad-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Nov 22 04:12:20 np0005532048 kernel: tap691e79ad-d0: entered promiscuous mode
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.384 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.385 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap691e79ad-d0, col_values=(('external_ids', {'iface-id': '6b990e4f-df30-4562-9550-e3e0ea811f07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.386 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:20Z|00168|binding|INFO|Releasing lport 6b990e4f-df30-4562-9550-e3e0ea811f07 from this chassis (sb_readonly=0)
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.408 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/691e79ad-da5d-4276-aa7d-732c2aaedbff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/691e79ad-da5d-4276-aa7d-732c2aaedbff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.409 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.409 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[effe8d96-295e-4767-ad16-db07cae38f61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.410 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-691e79ad-da5d-4276-aa7d-732c2aaedbff
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/691e79ad-da5d-4276-aa7d-732c2aaedbff.pid.haproxy
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 691e79ad-da5d-4276-aa7d-732c2aaedbff
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:12:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:20.411 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'env', 'PROCESS_TAG=haproxy-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/691e79ad-da5d-4276-aa7d-732c2aaedbff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.486 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.487 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802740.4856288, 5babe591-239b-4ef7-b193-6960c7313292 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.488 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] VM Started (Lifecycle Event)#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.504 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.513 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.520 253665 INFO nova.virt.libvirt.driver [-] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Instance spawned successfully.#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.521 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.524 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.545 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.545 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802740.4858258, 5babe591-239b-4ef7-b193-6960c7313292 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.545 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.559 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.560 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.561 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.561 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.561 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.562 253665 DEBUG nova.virt.libvirt.driver [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.566 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.570 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802740.5024066, 5babe591-239b-4ef7-b193-6960c7313292 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.570 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.593 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.597 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.614 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.655 253665 INFO nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Took 6.59 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.655 253665 DEBUG nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:12:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1093388408' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.767 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.768 253665 DEBUG nova.virt.libvirt.vif [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:11:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:12:18Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.769 253665 DEBUG nova.network.os_vif_util [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.769 253665 DEBUG nova.network.os_vif_util [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.772 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  <uuid>3ae08a2f-348c-406b-8ffc-9acb8a542e1c</uuid>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  <name>instance-00000012</name>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersAdminTestJSON-server-1439141870</nova:name>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:12:19</nova:creationTime>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:12:20 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:        <nova:user uuid="05cafdbce8334f9380b4dbd1d21f7d58">tempest-ServersAdminTestJSON-1985232284-project-member</nova:user>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:        <nova:project uuid="d78b26f20d674ae6a213d727050a50d1">tempest-ServersAdminTestJSON-1985232284</nova:project>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:        <nova:port uuid="716b716d-2ee2-44e7-9850-c10854634f77">
Nov 22 04:12:20 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <entry name="serial">3ae08a2f-348c-406b-8ffc-9acb8a542e1c</entry>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <entry name="uuid">3ae08a2f-348c-406b-8ffc-9acb8a542e1c</entry>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk">
Nov 22 04:12:20 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:12:20 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config">
Nov 22 04:12:20 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:12:20 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:12:20 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:47:7d:dd"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <target dev="tap716b716d-2e"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/console.log" append="off"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:12:20 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:12:20 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:12:20 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:12:20 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.773 253665 DEBUG nova.compute.manager [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Preparing to wait for external event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.773 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.773 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.773 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.774 253665 DEBUG nova.virt.libvirt.vif [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:11:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:12:18Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.775 253665 DEBUG nova.network.os_vif_util [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.775 253665 DEBUG nova.network.os_vif_util [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.775 253665 DEBUG os_vif [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.776 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.776 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.777 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.784 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.784 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap716b716d-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.785 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap716b716d-2e, col_values=(('external_ids', {'iface-id': '716b716d-2ee2-44e7-9850-c10854634f77', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:7d:dd', 'vm-uuid': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.787 253665 INFO nova.compute.manager [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Took 7.94 seconds to build instance.#033[00m
Nov 22 04:12:20 np0005532048 NetworkManager[48920]: <info>  [1763802740.7890] manager: (tap716b716d-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Nov 22 04:12:20 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.799 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:12:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3468576485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.811 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.816 253665 INFO os_vif [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e')#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.844 253665 DEBUG oslo_concurrency.lockutils [None req-40debc63-a994-4d10-bdec-c4549a4b89f2 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.860 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.877 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.894 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:12:20 np0005532048 podman[289932]: 2025-11-22 09:12:20.925094794 +0000 UTC m=+0.084455137 container create 328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.944 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.945 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.945 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] No VIF found with MAC fa:16:3e:47:7d:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:12:20 np0005532048 nova_compute[253661]: 2025-11-22 09:12:20.946 253665 INFO nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Using config drive#033[00m
Nov 22 04:12:20 np0005532048 podman[289932]: 2025-11-22 09:12:20.877850365 +0000 UTC m=+0.037210738 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:12:20 np0005532048 systemd[1]: Started libpod-conmon-328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da.scope.
Nov 22 04:12:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:12:21 np0005532048 nova_compute[253661]: 2025-11-22 09:12:21.008 253665 DEBUG nova.storage.rbd_utils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:21 np0005532048 podman[289941]: 2025-11-22 09:12:21.010497573 +0000 UTC m=+0.127335791 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:12:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff7de6cecd8d8be804ea66b96176eefccb4d3774ace81e67748b939ad84ffe4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:12:21 np0005532048 nova_compute[253661]: 2025-11-22 09:12:21.030 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:12:21 np0005532048 nova_compute[253661]: 2025-11-22 09:12:21.030 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:21 np0005532048 podman[289932]: 2025-11-22 09:12:21.033462676 +0000 UTC m=+0.192823049 container init 328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 04:12:21 np0005532048 podman[289932]: 2025-11-22 09:12:21.040152608 +0000 UTC m=+0.199512951 container start 328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:12:21 np0005532048 nova_compute[253661]: 2025-11-22 09:12:21.042 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:12:21 np0005532048 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[289977]: [NOTICE]   (289993) : New worker (289995) forked
Nov 22 04:12:21 np0005532048 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[289977]: [NOTICE]   (289993) : Loading success.
Nov 22 04:12:21 np0005532048 nova_compute[253661]: 2025-11-22 09:12:21.081 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'keypairs' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:12:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 448 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.028 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.028 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.244 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.244 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.245 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.351 253665 INFO nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Creating config drive at /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.358 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfz3372qv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.423 253665 DEBUG nova.compute.manager [req-09f44469-9ad0-4d50-991a-e9874791db27 req-a691a7e6-5745-4305-a807-299e8f4fc64f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received event network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.423 253665 DEBUG oslo_concurrency.lockutils [req-09f44469-9ad0-4d50-991a-e9874791db27 req-a691a7e6-5745-4305-a807-299e8f4fc64f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5babe591-239b-4ef7-b193-6960c7313292-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.424 253665 DEBUG oslo_concurrency.lockutils [req-09f44469-9ad0-4d50-991a-e9874791db27 req-a691a7e6-5745-4305-a807-299e8f4fc64f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.424 253665 DEBUG oslo_concurrency.lockutils [req-09f44469-9ad0-4d50-991a-e9874791db27 req-a691a7e6-5745-4305-a807-299e8f4fc64f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.424 253665 DEBUG nova.compute.manager [req-09f44469-9ad0-4d50-991a-e9874791db27 req-a691a7e6-5745-4305-a807-299e8f4fc64f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] No waiting events found dispatching network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.424 253665 WARNING nova.compute.manager [req-09f44469-9ad0-4d50-991a-e9874791db27 req-a691a7e6-5745-4305-a807-299e8f4fc64f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received unexpected event network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.506 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfz3372qv" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.533 253665 DEBUG nova.storage.rbd_utils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] rbd image 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.538 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.599 253665 DEBUG nova.network.neutron [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updated VIF entry in instance network info cache for port b82d7759-7fa9-4919-9812-a4f5df6893a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.600 253665 DEBUG nova.network.neutron [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.627 253665 DEBUG oslo_concurrency.lockutils [req-dfb6df99-6cca-465b-9c5d-713f67062068 req-d0e6acb3-d8d8-475c-951f-5f196352aa56 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:12:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:12:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:12:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:12:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:12:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:12:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.844 253665 DEBUG oslo_concurrency.processutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config 3ae08a2f-348c-406b-8ffc-9acb8a542e1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.845 253665 INFO nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deleting local config drive /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c/disk.config because it was imported into RBD.#033[00m
Nov 22 04:12:22 np0005532048 kernel: tap716b716d-2e: entered promiscuous mode
Nov 22 04:12:22 np0005532048 NetworkManager[48920]: <info>  [1763802742.9169] manager: (tap716b716d-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/90)
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.918 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:22Z|00169|binding|INFO|Claiming lport 716b716d-2ee2-44e7-9850-c10854634f77 for this chassis.
Nov 22 04:12:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:22Z|00170|binding|INFO|716b716d-2ee2-44e7-9850-c10854634f77: Claiming fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.945 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:22.946 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:7d:dd 10.100.0.8'], port_security=['fa:16:3e:47:7d:dd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '7', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=716b716d-2ee2-44e7-9850-c10854634f77) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:12:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:22.949 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 716b716d-2ee2-44e7-9850-c10854634f77 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a bound to our chassis#033[00m
Nov 22 04:12:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:22.952 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a#033[00m
Nov 22 04:12:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:22Z|00171|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 ovn-installed in OVS
Nov 22 04:12:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:22Z|00172|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 up in Southbound
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.955 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:22 np0005532048 nova_compute[253661]: 2025-11-22 09:12:22.956 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:22.977 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[caff061d-6bab-4a4c-983b-60ae8ed1ea56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:22 np0005532048 systemd-udevd[290057]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:12:22 np0005532048 systemd-machined[215941]: New machine qemu-33-instance-00000012.
Nov 22 04:12:23 np0005532048 NetworkManager[48920]: <info>  [1763802743.0038] device (tap716b716d-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:12:23 np0005532048 NetworkManager[48920]: <info>  [1763802743.0048] device (tap716b716d-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:12:23 np0005532048 systemd[1]: Started Virtual Machine qemu-33-instance-00000012.
Nov 22 04:12:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.039 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb65dca-a347-473e-b0d2-d211a54ac91a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.043 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3dcb1934-1255-458d-8b0e-fb3fcee9ac33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.084 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[87982d20-92fa-483f-bae1-64e42128035c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.117 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3529df-0f9c-409c-a1f6-7d493585cde3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 17, 'rx_bytes': 952, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 17, 'rx_bytes': 952, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290070, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:12:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.142 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cc66e031-3371-4deb-9d7e-3224e0e775a5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290072, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290072, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:12:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.145 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:12:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.149 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:12:23 np0005532048 nova_compute[253661]: 2025-11-22 09:12:23.151 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.150 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:12:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.150 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:12:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:23.151 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:12:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 426 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 201 op/s
Nov 22 04:12:23 np0005532048 nova_compute[253661]: 2025-11-22 09:12:23.731 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 3ae08a2f-348c-406b-8ffc-9acb8a542e1c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 04:12:23 np0005532048 nova_compute[253661]: 2025-11-22 09:12:23.732 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802743.7310624, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:12:23 np0005532048 nova_compute[253661]: 2025-11-22 09:12:23.732 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Started (Lifecycle Event)
Nov 22 04:12:23 np0005532048 nova_compute[253661]: 2025-11-22 09:12:23.750 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:12:23 np0005532048 nova_compute[253661]: 2025-11-22 09:12:23.758 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802743.7312176, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:12:23 np0005532048 nova_compute[253661]: 2025-11-22 09:12:23.759 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Paused (Lifecycle Event)
Nov 22 04:12:23 np0005532048 nova_compute[253661]: 2025-11-22 09:12:23.798 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:12:23 np0005532048 nova_compute[253661]: 2025-11-22 09:12:23.805 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:12:23 np0005532048 nova_compute[253661]: 2025-11-22 09:12:23.825 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 04:12:24 np0005532048 nova_compute[253661]: 2025-11-22 09:12:24.701 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 465 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.6 MiB/s wr, 312 op/s
Nov 22 04:12:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.396 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Updating instance_info_cache with network_info: [{"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.432 253665 DEBUG nova.compute.manager [req-dd141d13-93d9-433e-a87e-416422ec3889 req-b1feaa16-3d12-42b5-af73-21e2e8c119d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.433 253665 DEBUG oslo_concurrency.lockutils [req-dd141d13-93d9-433e-a87e-416422ec3889 req-b1feaa16-3d12-42b5-af73-21e2e8c119d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.433 253665 DEBUG oslo_concurrency.lockutils [req-dd141d13-93d9-433e-a87e-416422ec3889 req-b1feaa16-3d12-42b5-af73-21e2e8c119d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.433 253665 DEBUG oslo_concurrency.lockutils [req-dd141d13-93d9-433e-a87e-416422ec3889 req-b1feaa16-3d12-42b5-af73-21e2e8c119d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.433 253665 DEBUG nova.compute.manager [req-dd141d13-93d9-433e-a87e-416422ec3889 req-b1feaa16-3d12-42b5-af73-21e2e8c119d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Processing event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.434 253665 DEBUG nova.compute.manager [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.439 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802745.4388597, 3ae08a2f-348c-406b-8ffc-9acb8a542e1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.439 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] VM Resumed (Lifecycle Event)
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.441 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.445 253665 INFO nova.virt.libvirt.driver [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance spawned successfully.
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.445 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.464 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.472 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.478 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.479 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.479 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.480 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.480 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.480 253665 DEBUG nova.virt.libvirt.driver [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.506 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.507 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-d99bd27b-0ff3-493e-a69c-6c7ec034aa81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.507 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.508 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.508 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.508 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.508 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.508 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.509 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.568 253665 DEBUG nova.compute.manager [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.642 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.642 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.642 253665 DEBUG nova.objects.instance [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.697 253665 DEBUG oslo_concurrency.lockutils [None req-02487aa5-8b6b-45be-9553-753eb5d851e3 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:12:25 np0005532048 nova_compute[253661]: 2025-11-22 09:12:25.787 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 465 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.6 MiB/s wr, 320 op/s
Nov 22 04:12:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:27.956 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:12:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:27.956 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:12:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:27.958 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:12:28 np0005532048 nova_compute[253661]: 2025-11-22 09:12:28.330 253665 DEBUG nova.compute.manager [req-09aabf41-3674-4f7e-b186-5d899419dcdc req-3b215724-5099-416a-9726-f141a8ebd6ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:12:28 np0005532048 nova_compute[253661]: 2025-11-22 09:12:28.331 253665 DEBUG oslo_concurrency.lockutils [req-09aabf41-3674-4f7e-b186-5d899419dcdc req-3b215724-5099-416a-9726-f141a8ebd6ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:12:28 np0005532048 nova_compute[253661]: 2025-11-22 09:12:28.332 253665 DEBUG oslo_concurrency.lockutils [req-09aabf41-3674-4f7e-b186-5d899419dcdc req-3b215724-5099-416a-9726-f141a8ebd6ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:12:28 np0005532048 nova_compute[253661]: 2025-11-22 09:12:28.332 253665 DEBUG oslo_concurrency.lockutils [req-09aabf41-3674-4f7e-b186-5d899419dcdc req-3b215724-5099-416a-9726-f141a8ebd6ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:12:28 np0005532048 nova_compute[253661]: 2025-11-22 09:12:28.332 253665 DEBUG nova.compute.manager [req-09aabf41-3674-4f7e-b186-5d899419dcdc req-3b215724-5099-416a-9726-f141a8ebd6ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:12:28 np0005532048 nova_compute[253661]: 2025-11-22 09:12:28.333 253665 WARNING nova.compute.manager [req-09aabf41-3674-4f7e-b186-5d899419dcdc req-3b215724-5099-416a-9726-f141a8ebd6ff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received unexpected event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with vm_state active and task_state None.
Nov 22 04:12:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 465 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 2.8 MiB/s wr, 364 op/s
Nov 22 04:12:29 np0005532048 nova_compute[253661]: 2025-11-22 09:12:29.316 253665 DEBUG nova.virt.libvirt.driver [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 04:12:29 np0005532048 nova_compute[253661]: 2025-11-22 09:12:29.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:30 np0005532048 nova_compute[253661]: 2025-11-22 09:12:30.789 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 465 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 1.8 MiB/s wr, 294 op/s
Nov 22 04:12:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:31Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:78:3a:a5 10.100.0.12
Nov 22 04:12:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:31Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:78:3a:a5 10.100.0.12
Nov 22 04:12:31 np0005532048 nova_compute[253661]: 2025-11-22 09:12:31.903 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "de145d76-062b-4362-bc82-09e09d2f9154" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:12:31 np0005532048 nova_compute[253661]: 2025-11-22 09:12:31.903 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:12:31 np0005532048 nova_compute[253661]: 2025-11-22 09:12:31.904 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "de145d76-062b-4362-bc82-09e09d2f9154-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:12:31 np0005532048 nova_compute[253661]: 2025-11-22 09:12:31.904 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:12:31 np0005532048 nova_compute[253661]: 2025-11-22 09:12:31.904 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:12:31 np0005532048 nova_compute[253661]: 2025-11-22 09:12:31.906 253665 INFO nova.compute.manager [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Terminating instance
Nov 22 04:12:31 np0005532048 nova_compute[253661]: 2025-11-22 09:12:31.909 253665 DEBUG nova.compute.manager [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:12:32 np0005532048 nova_compute[253661]: 2025-11-22 09:12:32.426 253665 DEBUG nova.compute.manager [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:32 np0005532048 nova_compute[253661]: 2025-11-22 09:12:32.462 253665 INFO nova.compute.manager [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] instance snapshotting#033[00m
Nov 22 04:12:32 np0005532048 nova_compute[253661]: 2025-11-22 09:12:32.647 253665 INFO nova.virt.libvirt.driver [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Beginning live snapshot process#033[00m
Nov 22 04:12:32 np0005532048 nova_compute[253661]: 2025-11-22 09:12:32.792 253665 DEBUG nova.virt.libvirt.imagebackend [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 22 04:12:32 np0005532048 nova_compute[253661]: 2025-11-22 09:12:32.978 253665 DEBUG nova.storage.rbd_utils [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] creating snapshot(12639cd7c1ad4538a8186cb2d407fe9f) on rbd image(5babe591-239b-4ef7-b193-6960c7313292_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:12:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 476 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 2.6 MiB/s wr, 307 op/s
Nov 22 04:12:33 np0005532048 kernel: tapc048a826-73 (unregistering): left promiscuous mode
Nov 22 04:12:33 np0005532048 NetworkManager[48920]: <info>  [1763802753.4167] device (tapc048a826-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.425 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:33 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:33Z|00173|binding|INFO|Releasing lport c048a826-73ad-49d3-a29f-5d790d359e51 from this chassis (sb_readonly=0)
Nov 22 04:12:33 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:33Z|00174|binding|INFO|Setting lport c048a826-73ad-49d3-a29f-5d790d359e51 down in Southbound
Nov 22 04:12:33 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:33Z|00175|binding|INFO|Removing iface tapc048a826-73 ovn-installed in OVS
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.428 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.446 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.452 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:b7:42 10.100.0.7'], port_security=['fa:16:3e:8c:b7:42 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'de145d76-062b-4362-bc82-09e09d2f9154', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c048a826-73ad-49d3-a29f-5d790d359e51) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:12:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.454 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c048a826-73ad-49d3-a29f-5d790d359e51 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a unbound from our chassis#033[00m
Nov 22 04:12:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.459 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a#033[00m
Nov 22 04:12:33 np0005532048 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000017.scope: Deactivated successfully.
Nov 22 04:12:33 np0005532048 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000017.scope: Consumed 17.890s CPU time.
Nov 22 04:12:33 np0005532048 systemd-machined[215941]: Machine qemu-26-instance-00000017 terminated.
Nov 22 04:12:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.483 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6acbe84a-897e-4355-8070-8bce3e925b77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.525 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1dee8ee1-6059-4d4c-8a69-cd7d36430cee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.528 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1f77a57c-c213-47e3-9133-df2e9a67258b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.547 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.555 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.564 253665 INFO nova.virt.libvirt.driver [-] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Instance destroyed successfully.#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.564 253665 DEBUG nova.objects.instance [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'resources' on Instance uuid de145d76-062b-4362-bc82-09e09d2f9154 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:12:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.567 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0728b835-7120-45ef-bf59-d8e5cd9a43b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.581 253665 DEBUG nova.virt.libvirt.vif [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-27339221',display_name='tempest-ServersAdminTestJSON-server-27339221',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-27339221',id=23,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-xz93pz1e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:50Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=de145d76-062b-4362-bc82-09e09d2f9154,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.581 253665 DEBUG nova.network.os_vif_util [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "c048a826-73ad-49d3-a29f-5d790d359e51", "address": "fa:16:3e:8c:b7:42", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc048a826-73", "ovs_interfaceid": "c048a826-73ad-49d3-a29f-5d790d359e51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.582 253665 DEBUG nova.network.os_vif_util [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8c:b7:42,bridge_name='br-int',has_traffic_filtering=True,id=c048a826-73ad-49d3-a29f-5d790d359e51,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc048a826-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.583 253665 DEBUG os_vif [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:b7:42,bridge_name='br-int',has_traffic_filtering=True,id=c048a826-73ad-49d3-a29f-5d790d359e51,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc048a826-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.585 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.586 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc048a826-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.588 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.589 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.592 253665 INFO os_vif [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:b7:42,bridge_name='br-int',has_traffic_filtering=True,id=c048a826-73ad-49d3-a29f-5d790d359e51,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc048a826-73')#033[00m
Nov 22 04:12:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.604 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[93f475ed-6a0d-453c-8b97-3421c5e2bfbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 19, 'rx_bytes': 952, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 19, 'rx_bytes': 952, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290185, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.628 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d7bd3f9e-ca59-451b-8b18-ac0e4494149f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290201, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290201, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.630 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.632 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.634 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.634 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:12:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.634 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:33.635 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.639 253665 DEBUG nova.compute.manager [req-623b19df-58e1-4717-9a83-9072649d928e req-00211c34-9518-46a4-8c8f-ae3ebf4a282b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received event network-vif-unplugged-c048a826-73ad-49d3-a29f-5d790d359e51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.640 253665 DEBUG oslo_concurrency.lockutils [req-623b19df-58e1-4717-9a83-9072649d928e req-00211c34-9518-46a4-8c8f-ae3ebf4a282b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "de145d76-062b-4362-bc82-09e09d2f9154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.640 253665 DEBUG oslo_concurrency.lockutils [req-623b19df-58e1-4717-9a83-9072649d928e req-00211c34-9518-46a4-8c8f-ae3ebf4a282b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.641 253665 DEBUG oslo_concurrency.lockutils [req-623b19df-58e1-4717-9a83-9072649d928e req-00211c34-9518-46a4-8c8f-ae3ebf4a282b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.641 253665 DEBUG nova.compute.manager [req-623b19df-58e1-4717-9a83-9072649d928e req-00211c34-9518-46a4-8c8f-ae3ebf4a282b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] No waiting events found dispatching network-vif-unplugged-c048a826-73ad-49d3-a29f-5d790d359e51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:12:33 np0005532048 nova_compute[253661]: 2025-11-22 09:12:33.641 253665 DEBUG nova.compute.manager [req-623b19df-58e1-4717-9a83-9072649d928e req-00211c34-9518-46a4-8c8f-ae3ebf4a282b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received event network-vif-unplugged-c048a826-73ad-49d3-a29f-5d790d359e51 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:12:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Nov 22 04:12:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Nov 22 04:12:33 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Nov 22 04:12:34 np0005532048 nova_compute[253661]: 2025-11-22 09:12:34.264 253665 DEBUG nova.storage.rbd_utils [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] cloning vms/5babe591-239b-4ef7-b193-6960c7313292_disk@12639cd7c1ad4538a8186cb2d407fe9f to images/ffc4be20-c068-44ca-a572-d433657a200f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:12:34 np0005532048 nova_compute[253661]: 2025-11-22 09:12:34.707 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 498 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.5 MiB/s wr, 187 op/s
Nov 22 04:12:35 np0005532048 nova_compute[253661]: 2025-11-22 09:12:35.348 253665 DEBUG nova.storage.rbd_utils [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] flattening images/ffc4be20-c068-44ca-a572-d433657a200f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 22 04:12:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:35 np0005532048 nova_compute[253661]: 2025-11-22 09:12:35.719 253665 DEBUG nova.compute.manager [req-d7e3f3aa-6652-43c0-b48d-3c96608a78eb req-bdbacb17-09e0-4a29-b331-20bb9808d0e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received event network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:12:35 np0005532048 nova_compute[253661]: 2025-11-22 09:12:35.719 253665 DEBUG oslo_concurrency.lockutils [req-d7e3f3aa-6652-43c0-b48d-3c96608a78eb req-bdbacb17-09e0-4a29-b331-20bb9808d0e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "de145d76-062b-4362-bc82-09e09d2f9154-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:12:35 np0005532048 nova_compute[253661]: 2025-11-22 09:12:35.719 253665 DEBUG oslo_concurrency.lockutils [req-d7e3f3aa-6652-43c0-b48d-3c96608a78eb req-bdbacb17-09e0-4a29-b331-20bb9808d0e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:12:35 np0005532048 nova_compute[253661]: 2025-11-22 09:12:35.720 253665 DEBUG oslo_concurrency.lockutils [req-d7e3f3aa-6652-43c0-b48d-3c96608a78eb req-bdbacb17-09e0-4a29-b331-20bb9808d0e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:12:35 np0005532048 nova_compute[253661]: 2025-11-22 09:12:35.720 253665 DEBUG nova.compute.manager [req-d7e3f3aa-6652-43c0-b48d-3c96608a78eb req-bdbacb17-09e0-4a29-b331-20bb9808d0e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] No waiting events found dispatching network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:12:35 np0005532048 nova_compute[253661]: 2025-11-22 09:12:35.720 253665 WARNING nova.compute.manager [req-d7e3f3aa-6652-43c0-b48d-3c96608a78eb req-bdbacb17-09e0-4a29-b331-20bb9808d0e0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received unexpected event network-vif-plugged-c048a826-73ad-49d3-a29f-5d790d359e51 for instance with vm_state active and task_state deleting.
Nov 22 04:12:36 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:36Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:97:c6 10.100.0.3
Nov 22 04:12:36 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:36Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:97:c6 10.100.0.3
Nov 22 04:12:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 532 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 6.3 MiB/s wr, 230 op/s
Nov 22 04:12:37 np0005532048 nova_compute[253661]: 2025-11-22 09:12:37.235 253665 DEBUG nova.storage.rbd_utils [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] removing snapshot(12639cd7c1ad4538a8186cb2d407fe9f) on rbd image(5babe591-239b-4ef7-b193-6960c7313292_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 04:12:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Nov 22 04:12:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Nov 22 04:12:38 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Nov 22 04:12:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:38Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b2:33:a1 10.100.0.3
Nov 22 04:12:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:38Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b2:33:a1 10.100.0.3
Nov 22 04:12:38 np0005532048 nova_compute[253661]: 2025-11-22 09:12:38.479 253665 DEBUG nova.storage.rbd_utils [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] creating snapshot(snap) on rbd image(ffc4be20-c068-44ca-a572-d433657a200f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 04:12:38 np0005532048 nova_compute[253661]: 2025-11-22 09:12:38.590 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:38 np0005532048 nova_compute[253661]: 2025-11-22 09:12:38.909 253665 INFO nova.virt.libvirt.driver [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Deleting instance files /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154_del
Nov 22 04:12:38 np0005532048 nova_compute[253661]: 2025-11-22 09:12:38.910 253665 INFO nova.virt.libvirt.driver [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Deletion of /var/lib/nova/instances/de145d76-062b-4362-bc82-09e09d2f9154_del complete
Nov 22 04:12:38 np0005532048 nova_compute[253661]: 2025-11-22 09:12:38.993 253665 INFO nova.compute.manager [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Took 7.08 seconds to destroy the instance on the hypervisor.
Nov 22 04:12:38 np0005532048 nova_compute[253661]: 2025-11-22 09:12:38.995 253665 DEBUG oslo.service.loopingcall [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:12:38 np0005532048 nova_compute[253661]: 2025-11-22 09:12:38.995 253665 DEBUG nova.compute.manager [-] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:12:38 np0005532048 nova_compute[253661]: 2025-11-22 09:12:38.996 253665 DEBUG nova.network.neutron [-] [instance: de145d76-062b-4362-bc82-09e09d2f9154] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:12:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 525 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 MiB/s wr, 366 op/s
Nov 22 04:12:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Nov 22 04:12:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Nov 22 04:12:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Nov 22 04:12:39 np0005532048 nova_compute[253661]: 2025-11-22 09:12:39.776 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image ffc4be20-c068-44ca-a572-d433657a200f could not be found.
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID ffc4be20-c068-44ca-a572-d433657a200f
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver 
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver 
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image ffc4be20-c068-44ca-a572-d433657a200f could not be found.
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.018 253665 ERROR nova.virt.libvirt.driver 
Nov 22 04:12:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.546 253665 DEBUG nova.virt.libvirt.driver [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 04:12:40 np0005532048 nova_compute[253661]: 2025-11-22 09:12:40.682 253665 DEBUG nova.storage.rbd_utils [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] removing snapshot(snap) on rbd image(ffc4be20-c068-44ca-a572-d433657a200f) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 04:12:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:41.065 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:12:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:41.066 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:12:41 np0005532048 nova_compute[253661]: 2025-11-22 09:12:41.066 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:41.067 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:12:41 np0005532048 nova_compute[253661]: 2025-11-22 09:12:41.101 253665 DEBUG nova.network.neutron [-] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:12:41 np0005532048 nova_compute[253661]: 2025-11-22 09:12:41.118 253665 INFO nova.compute.manager [-] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Took 2.12 seconds to deallocate network for instance.
Nov 22 04:12:41 np0005532048 nova_compute[253661]: 2025-11-22 09:12:41.164 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:12:41 np0005532048 nova_compute[253661]: 2025-11-22 09:12:41.165 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:12:41 np0005532048 nova_compute[253661]: 2025-11-22 09:12:41.174 253665 DEBUG nova.compute.manager [req-2da275cd-d067-4538-b23f-0efb7801a67f req-eeb760cd-86f3-4044-bccc-730c586055cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Received event network-vif-deleted-c048a826-73ad-49d3-a29f-5d790d359e51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:12:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 525 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 8.4 MiB/s wr, 293 op/s
Nov 22 04:12:41 np0005532048 nova_compute[253661]: 2025-11-22 09:12:41.310 253665 DEBUG oslo_concurrency.processutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:12:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Nov 22 04:12:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Nov 22 04:12:41 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Nov 22 04:12:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:12:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2797358642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:12:41 np0005532048 nova_compute[253661]: 2025-11-22 09:12:41.788 253665 DEBUG oslo_concurrency.processutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:12:41 np0005532048 nova_compute[253661]: 2025-11-22 09:12:41.794 253665 DEBUG nova.compute.provider_tree [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:12:41 np0005532048 nova_compute[253661]: 2025-11-22 09:12:41.811 253665 DEBUG nova.scheduler.client.report [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:12:41 np0005532048 nova_compute[253661]: 2025-11-22 09:12:41.836 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:12:41 np0005532048 nova_compute[253661]: 2025-11-22 09:12:41.861 253665 INFO nova.scheduler.client.report [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Deleted allocations for instance de145d76-062b-4362-bc82-09e09d2f9154
Nov 22 04:12:41 np0005532048 nova_compute[253661]: 2025-11-22 09:12:41.959 253665 DEBUG oslo_concurrency.lockutils [None req-76ab3f1e-933f-4e0a-b2bb-28ea8130182a 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "de145d76-062b-4362-bc82-09e09d2f9154" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:12:42 np0005532048 nova_compute[253661]: 2025-11-22 09:12:42.553 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "96000606-0bc4-4cf1-9e33-360a640c2cb7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:12:42 np0005532048 nova_compute[253661]: 2025-11-22 09:12:42.554 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:12:42 np0005532048 nova_compute[253661]: 2025-11-22 09:12:42.554 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:12:42 np0005532048 nova_compute[253661]: 2025-11-22 09:12:42.555 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:12:42 np0005532048 nova_compute[253661]: 2025-11-22 09:12:42.555 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:12:42 np0005532048 nova_compute[253661]: 2025-11-22 09:12:42.556 253665 INFO nova.compute.manager [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Terminating instance
Nov 22 04:12:42 np0005532048 nova_compute[253661]: 2025-11-22 09:12:42.557 253665 DEBUG nova.compute.manager [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:12:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 538 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.4 MiB/s wr, 294 op/s
Nov 22 04:12:43 np0005532048 kernel: tap411035c7-ec (unregistering): left promiscuous mode
Nov 22 04:12:43 np0005532048 NetworkManager[48920]: <info>  [1763802763.4992] device (tap411035c7-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:12:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:43Z|00176|binding|INFO|Releasing lport 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa from this chassis (sb_readonly=0)
Nov 22 04:12:43 np0005532048 nova_compute[253661]: 2025-11-22 09:12:43.507 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:43Z|00177|binding|INFO|Setting lport 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa down in Southbound
Nov 22 04:12:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:43Z|00178|binding|INFO|Removing iface tap411035c7-ec ovn-installed in OVS
Nov 22 04:12:43 np0005532048 nova_compute[253661]: 2025-11-22 09:12:43.511 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.521 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:6f:23 10.100.0.10'], port_security=['fa:16:3e:cb:6f:23 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '96000606-0bc4-4cf1-9e33-360a640c2cb7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:12:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.523 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a unbound from our chassis
Nov 22 04:12:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.525 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a
Nov 22 04:12:43 np0005532048 nova_compute[253661]: 2025-11-22 09:12:43.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.546 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[123dc51e-53dc-4e8c-ae0f-3399740f1e6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:12:43 np0005532048 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000016.scope: Deactivated successfully.
Nov 22 04:12:43 np0005532048 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000016.scope: Consumed 22.579s CPU time.
Nov 22 04:12:43 np0005532048 systemd-machined[215941]: Machine qemu-25-instance-00000016 terminated.
Nov 22 04:12:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.587 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[59fc7443-58ba-4979-8cf8-abb87054c8d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:12:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.590 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[08198933-436e-4bdb-9e25-91a00aef8f50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:12:43 np0005532048 nova_compute[253661]: 2025-11-22 09:12:43.592 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.621 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[67ff4496-f59f-44d7-b4d2-fe43f6128607]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:12:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.643 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[61017171-e9e5-4df7-8037-5836ac251794]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 21, 'rx_bytes': 952, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 21, 'rx_bytes': 952, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290366, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:12:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.667 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a04e0c66-42ef-4705-a164-d624f74c035d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290367, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290367, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:12:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.669 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:12:43 np0005532048 nova_compute[253661]: 2025-11-22 09:12:43.670 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:43 np0005532048 nova_compute[253661]: 2025-11-22 09:12:43.675 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.676 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:12:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.676 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:12:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.676 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:12:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:43.677 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:12:43 np0005532048 nova_compute[253661]: 2025-11-22 09:12:43.798 253665 INFO nova.virt.libvirt.driver [-] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Instance destroyed successfully.
Nov 22 04:12:43 np0005532048 nova_compute[253661]: 2025-11-22 09:12:43.799 253665 DEBUG nova.objects.instance [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'resources' on Instance uuid 96000606-0bc4-4cf1-9e33-360a640c2cb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:12:43 np0005532048 nova_compute[253661]: 2025-11-22 09:12:43.817 253665 DEBUG nova.virt.libvirt.vif [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-70556130',display_name='tempest-ServersAdminTestJSON-server-70556130',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-70556130',id=22,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-74e7hdfl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:28Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=96000606-0bc4-4cf1-9e33-360a640c2cb7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:12:43 np0005532048 nova_compute[253661]: 2025-11-22 09:12:43.817 253665 DEBUG nova.network.os_vif_util [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "address": "fa:16:3e:cb:6f:23", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap411035c7-ec", "ovs_interfaceid": "411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:12:43 np0005532048 nova_compute[253661]: 2025-11-22 09:12:43.818 253665 DEBUG nova.network.os_vif_util [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:6f:23,bridge_name='br-int',has_traffic_filtering=True,id=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap411035c7-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:12:43 np0005532048 nova_compute[253661]: 2025-11-22 09:12:43.818 253665 DEBUG os_vif [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:6f:23,bridge_name='br-int',has_traffic_filtering=True,id=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap411035c7-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:12:43 np0005532048 nova_compute[253661]: 2025-11-22 09:12:43.820 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:43 np0005532048 nova_compute[253661]: 2025-11-22 09:12:43.821 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap411035c7-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:12:43 np0005532048 nova_compute[253661]: 2025-11-22 09:12:43.823 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:43 np0005532048 nova_compute[253661]: 2025-11-22 09:12:43.825 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:43 np0005532048 nova_compute[253661]: 2025-11-22 09:12:43.828 253665 INFO os_vif [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:6f:23,bridge_name='br-int',has_traffic_filtering=True,id=411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap411035c7-ec')
Nov 22 04:12:44 np0005532048 nova_compute[253661]: 2025-11-22 09:12:44.270 253665 DEBUG nova.compute.manager [req-f793b3e1-01ae-4b8b-aa78-7114d30a2a16 req-75becb4f-9681-4ede-b55f-ff35998cdee7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received event network-vif-unplugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:12:44 np0005532048 nova_compute[253661]: 2025-11-22 09:12:44.271 253665 DEBUG oslo_concurrency.lockutils [req-f793b3e1-01ae-4b8b-aa78-7114d30a2a16 req-75becb4f-9681-4ede-b55f-ff35998cdee7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:12:44 np0005532048 nova_compute[253661]: 2025-11-22 09:12:44.274 253665 DEBUG oslo_concurrency.lockutils [req-f793b3e1-01ae-4b8b-aa78-7114d30a2a16 req-75becb4f-9681-4ede-b55f-ff35998cdee7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:12:44 np0005532048 nova_compute[253661]: 2025-11-22 09:12:44.275 253665 DEBUG oslo_concurrency.lockutils [req-f793b3e1-01ae-4b8b-aa78-7114d30a2a16 req-75becb4f-9681-4ede-b55f-ff35998cdee7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:12:44 np0005532048 nova_compute[253661]: 2025-11-22 09:12:44.275 253665 DEBUG nova.compute.manager [req-f793b3e1-01ae-4b8b-aa78-7114d30a2a16 req-75becb4f-9681-4ede-b55f-ff35998cdee7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] No waiting events found dispatching network-vif-unplugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:12:44 np0005532048 nova_compute[253661]: 2025-11-22 09:12:44.276 253665 DEBUG nova.compute.manager [req-f793b3e1-01ae-4b8b-aa78-7114d30a2a16 req-75becb4f-9681-4ede-b55f-ff35998cdee7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received event network-vif-unplugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:12:44 np0005532048 nova_compute[253661]: 2025-11-22 09:12:44.780 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 547 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 7.9 MiB/s wr, 310 op/s
Nov 22 04:12:45 np0005532048 kernel: tap5898357d-71 (unregistering): left promiscuous mode
Nov 22 04:12:45 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:45Z|00179|binding|INFO|Releasing lport 5898357d-7112-429d-86c6-24932a2fc274 from this chassis (sb_readonly=0)
Nov 22 04:12:45 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:45Z|00180|binding|INFO|Setting lport 5898357d-7112-429d-86c6-24932a2fc274 down in Southbound
Nov 22 04:12:45 np0005532048 nova_compute[253661]: 2025-11-22 09:12:45.227 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:45 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:45Z|00181|binding|INFO|Removing iface tap5898357d-71 ovn-installed in OVS
Nov 22 04:12:45 np0005532048 NetworkManager[48920]: <info>  [1763802765.2301] device (tap5898357d-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:12:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:45.237 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:97:c6 10.100.0.3'], port_security=['fa:16:3e:b0:97:c6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6e825024-ffe6-4fdb-abaa-0c99c65ac38b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=5898357d-7112-429d-86c6-24932a2fc274) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:12:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:45.239 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 5898357d-7112-429d-86c6-24932a2fc274 in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 unbound from our chassis
Nov 22 04:12:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:45.243 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:12:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:45.245 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1c9859-a78b-4edc-8e31-0fc3378475cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:12:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:45.246 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace which is not needed anymore
Nov 22 04:12:45 np0005532048 nova_compute[253661]: 2025-11-22 09:12:45.261 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:45 np0005532048 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Nov 22 04:12:45 np0005532048 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Consumed 15.652s CPU time.
Nov 22 04:12:45 np0005532048 systemd-machined[215941]: Machine qemu-31-instance-0000001b terminated.
Nov 22 04:12:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:45 np0005532048 nova_compute[253661]: 2025-11-22 09:12:45.581 253665 INFO nova.virt.libvirt.driver [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance shutdown successfully after 26 seconds.
Nov 22 04:12:45 np0005532048 nova_compute[253661]: 2025-11-22 09:12:45.590 253665 INFO nova.virt.libvirt.driver [-] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance destroyed successfully.
Nov 22 04:12:45 np0005532048 nova_compute[253661]: 2025-11-22 09:12:45.591 253665 DEBUG nova.objects.instance [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6e825024-ffe6-4fdb-abaa-0c99c65ac38b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:12:45 np0005532048 nova_compute[253661]: 2025-11-22 09:12:45.605 253665 DEBUG nova.compute.manager [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:12:45 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[289481]: [NOTICE]   (289521) : haproxy version is 2.8.14-c23fe91
Nov 22 04:12:45 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[289481]: [NOTICE]   (289521) : path to executable is /usr/sbin/haproxy
Nov 22 04:12:45 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[289481]: [WARNING]  (289521) : Exiting Master process...
Nov 22 04:12:45 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[289481]: [ALERT]    (289521) : Current worker (289538) exited with code 143 (Terminated)
Nov 22 04:12:45 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[289481]: [WARNING]  (289521) : All workers exited. Exiting... (0)
Nov 22 04:12:45 np0005532048 systemd[1]: libpod-c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928.scope: Deactivated successfully.
Nov 22 04:12:45 np0005532048 podman[290420]: 2025-11-22 09:12:45.621246579 +0000 UTC m=+0.237832065 container died c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:12:45 np0005532048 nova_compute[253661]: 2025-11-22 09:12:45.655 253665 DEBUG oslo_concurrency.lockutils [None req-cccd58cf-07bc-46d8-87a8-063740ca88c1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 26.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.347 253665 DEBUG nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received event network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.348 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.348 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.348 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.348 253665 DEBUG nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] No waiting events found dispatching network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.349 253665 WARNING nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received unexpected event network-vif-plugged-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa for instance with vm_state active and task_state deleting.
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.349 253665 DEBUG nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received event network-vif-unplugged-5898357d-7112-429d-86c6-24932a2fc274 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.349 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.349 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.350 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.350 253665 DEBUG nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] No waiting events found dispatching network-vif-unplugged-5898357d-7112-429d-86c6-24932a2fc274 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.350 253665 WARNING nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received unexpected event network-vif-unplugged-5898357d-7112-429d-86c6-24932a2fc274 for instance with vm_state stopped and task_state None.#033[00m
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.351 253665 DEBUG nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received event network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.351 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.351 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.351 253665 DEBUG oslo_concurrency.lockutils [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.352 253665 DEBUG nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] No waiting events found dispatching network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:12:46 np0005532048 nova_compute[253661]: 2025-11-22 09:12:46.352 253665 WARNING nova.compute.manager [req-4789314b-335e-4470-b61c-486b325129e0 req-09a7a62c-e480-4601-bf3e-4091be0eff74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received unexpected event network-vif-plugged-5898357d-7112-429d-86c6-24932a2fc274 for instance with vm_state stopped and task_state None.#033[00m
Nov 22 04:12:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:46Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 04:12:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:46Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:7d:dd 10.100.0.8
Nov 22 04:12:46 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928-userdata-shm.mount: Deactivated successfully.
Nov 22 04:12:46 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1e2cc462d771deebe4c19614685b97fc397e44751755ad5b1c8389e8f680dea6-merged.mount: Deactivated successfully.
Nov 22 04:12:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 552 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 3.2 MiB/s wr, 131 op/s
Nov 22 04:12:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:12:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:12:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:12:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:12:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:12:47 np0005532048 podman[290420]: 2025-11-22 09:12:47.566564075 +0000 UTC m=+2.183149541 container cleanup c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 04:12:47 np0005532048 systemd[1]: libpod-conmon-c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928.scope: Deactivated successfully.
Nov 22 04:12:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:12:47 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev bcfce270-6d82-4a67-8111-907db03cc725 does not exist
Nov 22 04:12:47 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 212886a2-e170-4162-b3e5-4ceaf877cfa7 does not exist
Nov 22 04:12:47 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 03d3ebc4-e1be-4662-840b-459c6febf050 does not exist
Nov 22 04:12:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:12:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:12:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:12:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:12:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:12:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:12:47 np0005532048 nova_compute[253661]: 2025-11-22 09:12:47.790 253665 DEBUG nova.compute.manager [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:47 np0005532048 podman[290505]: 2025-11-22 09:12:47.824091043 +0000 UTC m=+1.748479952 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:12:47 np0005532048 podman[290506]: 2025-11-22 09:12:47.826155283 +0000 UTC m=+1.752359746 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 22 04:12:47 np0005532048 nova_compute[253661]: 2025-11-22 09:12:47.841 253665 INFO nova.compute.manager [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] instance snapshotting#033[00m
Nov 22 04:12:47 np0005532048 nova_compute[253661]: 2025-11-22 09:12:47.842 253665 WARNING nova.compute.manager [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Nov 22 04:12:48 np0005532048 nova_compute[253661]: 2025-11-22 09:12:48.068 253665 INFO nova.virt.libvirt.driver [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Beginning cold snapshot process#033[00m
Nov 22 04:12:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:12:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:12:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:12:48 np0005532048 podman[290614]: 2025-11-22 09:12:48.33626259 +0000 UTC m=+0.738989296 container remove c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:12:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.343 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f19fdb3d-448e-477c-a927-cadd8632255a]: (4, ('Sat Nov 22 09:12:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928)\nc001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928\nSat Nov 22 09:12:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (c001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928)\nc001ce936058bd34a821fddad8bbad3b2b0e9fb0c47fa79f64367e85f17fe928\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.345 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[893d8b17-f9d6-4dc7-8b2e-eb184682a7fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.346 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:48 np0005532048 kernel: tap2abeeeb2-20: left promiscuous mode
Nov 22 04:12:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.374 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9b4667d0-e4ba-4990-babb-b33d63288dc0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.388 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1b3d497d-37fa-4e7b-b4e3-ef16e482036a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.391 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1c535971-75e0-4e90-995f-73f12af37527]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:48 np0005532048 nova_compute[253661]: 2025-11-22 09:12:48.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.411 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9158061c-ce20-4311-9e31-68f7f9f47f50]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558825, 'reachable_time': 21815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290809, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:48 np0005532048 systemd[1]: run-netns-ovnmeta\x2d2abeeeb2\x2d24a5\x2d4ccd\x2d93c8\x2d05b42d3a1a51.mount: Deactivated successfully.
Nov 22 04:12:48 np0005532048 nova_compute[253661]: 2025-11-22 09:12:48.418 253665 DEBUG nova.virt.libvirt.imagebackend [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 22 04:12:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.416 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:12:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:48.425 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[1e83a419-1d7a-489e-bf17-1a203840d5fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:48 np0005532048 nova_compute[253661]: 2025-11-22 09:12:48.560 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802753.5599756, de145d76-062b-4362-bc82-09e09d2f9154 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:12:48 np0005532048 nova_compute[253661]: 2025-11-22 09:12:48.561 253665 INFO nova.compute.manager [-] [instance: de145d76-062b-4362-bc82-09e09d2f9154] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:12:48 np0005532048 nova_compute[253661]: 2025-11-22 09:12:48.579 253665 DEBUG nova.compute.manager [None req-489440ab-055c-4a24-8ab9-378e56502abe - - - - - -] [instance: de145d76-062b-4362-bc82-09e09d2f9154] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:48 np0005532048 podman[290823]: 2025-11-22 09:12:48.581786599 +0000 UTC m=+0.117544205 container create 339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:12:48 np0005532048 podman[290823]: 2025-11-22 09:12:48.488412439 +0000 UTC m=+0.024170075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:12:48 np0005532048 nova_compute[253661]: 2025-11-22 09:12:48.605 253665 DEBUG nova.storage.rbd_utils [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] creating snapshot(af237d8df3944bf985d1958a12fa2e46) on rbd image(6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:12:48 np0005532048 systemd[1]: Started libpod-conmon-339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023.scope.
Nov 22 04:12:48 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:12:48 np0005532048 nova_compute[253661]: 2025-11-22 09:12:48.971 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:48 np0005532048 podman[290823]: 2025-11-22 09:12:48.979786924 +0000 UTC m=+0.515544550 container init 339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 04:12:48 np0005532048 podman[290823]: 2025-11-22 09:12:48.989234071 +0000 UTC m=+0.524991677 container start 339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:12:48 np0005532048 zen_yonath[290857]: 167 167
Nov 22 04:12:48 np0005532048 systemd[1]: libpod-339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023.scope: Deactivated successfully.
Nov 22 04:12:48 np0005532048 conmon[290857]: conmon 339e7accbe7da6923b44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023.scope/container/memory.events
Nov 22 04:12:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 528 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 416 KiB/s rd, 2.7 MiB/s wr, 142 op/s
Nov 22 04:12:49 np0005532048 podman[290823]: 2025-11-22 09:12:49.278970956 +0000 UTC m=+0.814728562 container attach 339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:12:49 np0005532048 podman[290823]: 2025-11-22 09:12:49.281733622 +0000 UTC m=+0.817491268 container died 339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:12:49 np0005532048 nova_compute[253661]: 2025-11-22 09:12:49.782 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay-20198849129e2d44e7b3d580572dbc12383e0394c90d75e795ec2f2dfea1b10a-merged.mount: Deactivated successfully.
Nov 22 04:12:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Nov 22 04:12:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Nov 22 04:12:50 np0005532048 podman[290823]: 2025-11-22 09:12:50.685583625 +0000 UTC m=+2.221341261 container remove 339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 04:12:50 np0005532048 systemd[1]: libpod-conmon-339e7accbe7da6923b44a033bc83e6ccbd810808ea8d912515e15dbd16e2d023.scope: Deactivated successfully.
Nov 22 04:12:50 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Nov 22 04:12:51 np0005532048 podman[290882]: 2025-11-22 09:12:50.963876824 +0000 UTC m=+0.041312167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:12:51 np0005532048 podman[290882]: 2025-11-22 09:12:51.158625519 +0000 UTC m=+0.236060812 container create 494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 04:12:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 528 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 2.7 MiB/s wr, 144 op/s
Nov 22 04:12:51 np0005532048 systemd[1]: Started libpod-conmon-494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc.scope.
Nov 22 04:12:51 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:12:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2feeb9bf6293c3ee63737afe35baa625266d2a6085ae50bddfcda2d83255e486/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:12:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2feeb9bf6293c3ee63737afe35baa625266d2a6085ae50bddfcda2d83255e486/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:12:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2feeb9bf6293c3ee63737afe35baa625266d2a6085ae50bddfcda2d83255e486/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:12:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2feeb9bf6293c3ee63737afe35baa625266d2a6085ae50bddfcda2d83255e486/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:12:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2feeb9bf6293c3ee63737afe35baa625266d2a6085ae50bddfcda2d83255e486/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:12:51 np0005532048 nova_compute[253661]: 2025-11-22 09:12:51.410 253665 WARNING nova.compute.manager [None req-861f8488-9857-4c52-8d32-65d2ade5ac13 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Image not found during snapshot: nova.exception.ImageNotFound: Image ffc4be20-c068-44ca-a572-d433657a200f could not be found.#033[00m
Nov 22 04:12:51 np0005532048 podman[290896]: 2025-11-22 09:12:51.584123617 +0000 UTC m=+0.383601649 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 04:12:51 np0005532048 podman[290882]: 2025-11-22 09:12:51.584456795 +0000 UTC m=+0.661892118 container init 494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 04:12:51 np0005532048 podman[290882]: 2025-11-22 09:12:51.594473686 +0000 UTC m=+0.671908989 container start 494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:12:51 np0005532048 podman[290882]: 2025-11-22 09:12:51.839255267 +0000 UTC m=+0.916690580 container attach 494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 22 04:12:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:12:52
Nov 22 04:12:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:12:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:12:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'volumes', 'backups', 'default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta']
Nov 22 04:12:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:12:52 np0005532048 nova_compute[253661]: 2025-11-22 09:12:52.654 253665 DEBUG nova.storage.rbd_utils [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] cloning vms/6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk@af237d8df3944bf985d1958a12fa2e46 to images/c0a1f7fa-e570-4e82-9df8-99d640ef5df3 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:12:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:12:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:12:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:12:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:12:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:12:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:12:52 np0005532048 vigorous_benz[290916]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:12:52 np0005532048 vigorous_benz[290916]: --> relative data size: 1.0
Nov 22 04:12:52 np0005532048 vigorous_benz[290916]: --> All data devices are unavailable
Nov 22 04:12:52 np0005532048 systemd[1]: libpod-494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc.scope: Deactivated successfully.
Nov 22 04:12:52 np0005532048 systemd[1]: libpod-494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc.scope: Consumed 1.032s CPU time.
Nov 22 04:12:52 np0005532048 podman[290882]: 2025-11-22 09:12:52.763560469 +0000 UTC m=+1.840995762 container died 494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:12:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 489 MiB data, 646 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.2 MiB/s wr, 100 op/s
Nov 22 04:12:53 np0005532048 nova_compute[253661]: 2025-11-22 09:12:53.248 253665 DEBUG oslo_concurrency.lockutils [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:53 np0005532048 nova_compute[253661]: 2025-11-22 09:12:53.249 253665 DEBUG oslo_concurrency.lockutils [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:53 np0005532048 nova_compute[253661]: 2025-11-22 09:12:53.249 253665 DEBUG nova.objects.instance [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:12:53 np0005532048 nova_compute[253661]: 2025-11-22 09:12:53.275 253665 DEBUG nova.objects.instance [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'pci_requests' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:12:53 np0005532048 nova_compute[253661]: 2025-11-22 09:12:53.276 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "5babe591-239b-4ef7-b193-6960c7313292" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:53 np0005532048 nova_compute[253661]: 2025-11-22 09:12:53.276 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:53 np0005532048 nova_compute[253661]: 2025-11-22 09:12:53.277 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "5babe591-239b-4ef7-b193-6960c7313292-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:53 np0005532048 nova_compute[253661]: 2025-11-22 09:12:53.277 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:53 np0005532048 nova_compute[253661]: 2025-11-22 09:12:53.277 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:53 np0005532048 nova_compute[253661]: 2025-11-22 09:12:53.278 253665 INFO nova.compute.manager [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Terminating instance#033[00m
Nov 22 04:12:53 np0005532048 nova_compute[253661]: 2025-11-22 09:12:53.279 253665 DEBUG nova.compute.manager [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:12:53 np0005532048 nova_compute[253661]: 2025-11-22 09:12:53.285 253665 DEBUG nova.network.neutron [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:12:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2feeb9bf6293c3ee63737afe35baa625266d2a6085ae50bddfcda2d83255e486-merged.mount: Deactivated successfully.
Nov 22 04:12:53 np0005532048 nova_compute[253661]: 2025-11-22 09:12:53.973 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.220 253665 DEBUG nova.policy [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36a143bf132742b89e463c08d46df5c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:12:54 np0005532048 podman[290882]: 2025-11-22 09:12:54.264960044 +0000 UTC m=+3.342395337 container remove 494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_benz, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:12:54 np0005532048 systemd[1]: libpod-conmon-494dd17cefaed86b1572d6c9f0c361c44a3b10e4a3f32bfac4269db7d432e2dc.scope: Deactivated successfully.
Nov 22 04:12:54 np0005532048 kernel: tapd3202009-ab (unregistering): left promiscuous mode
Nov 22 04:12:54 np0005532048 NetworkManager[48920]: <info>  [1763802774.5413] device (tapd3202009-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:54 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:54Z|00182|binding|INFO|Releasing lport d3202009-ab9d-4ee2-a94d-0d05cc739658 from this chassis (sb_readonly=0)
Nov 22 04:12:54 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:54Z|00183|binding|INFO|Setting lport d3202009-ab9d-4ee2-a94d-0d05cc739658 down in Southbound
Nov 22 04:12:54 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:54Z|00184|binding|INFO|Removing iface tapd3202009-ab ovn-installed in OVS
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:54.565 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:33:a1 10.100.0.3'], port_security=['fa:16:3e:b2:33:a1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5babe591-239b-4ef7-b193-6960c7313292', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcedb2f9ed6e43dfa8ecc3854373b0b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc00b739-f7be-45ec-82d1-43cf2c8c1544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d529718-199e-4cab-8a60-f03c6cb8db18, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=d3202009-ab9d-4ee2-a94d-0d05cc739658) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:12:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:54.568 162862 INFO neutron.agent.ovn.metadata.agent [-] Port d3202009-ab9d-4ee2-a94d-0d05cc739658 in datapath 691e79ad-da5d-4276-aa7d-732c2aaedbff unbound from our chassis#033[00m
Nov 22 04:12:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:54.571 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 691e79ad-da5d-4276-aa7d-732c2aaedbff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:12:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:54.572 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[abf9d7f7-9f5d-4361-868a-ed9679fc4fcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.574 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:54.574 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff namespace which is not needed anymore#033[00m
Nov 22 04:12:54 np0005532048 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Nov 22 04:12:54 np0005532048 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Consumed 14.946s CPU time.
Nov 22 04:12:54 np0005532048 systemd-machined[215941]: Machine qemu-32-instance-0000001c terminated.
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.722 253665 INFO nova.virt.libvirt.driver [-] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Instance destroyed successfully.#033[00m
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.722 253665 DEBUG nova.objects.instance [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lazy-loading 'resources' on Instance uuid 5babe591-239b-4ef7-b193-6960c7313292 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.736 253665 DEBUG nova.virt.libvirt.vif [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-557765812',display_name='tempest-ImagesOneServerNegativeTestJSON-server-557765812',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-557765812',id=28,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcedb2f9ed6e43dfa8ecc3854373b0b5',ramdisk_id='',reservation_id='r-siow6hfb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-251054159',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-251054159-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:51Z,user_data=None,user_id='96cac95dc532449d964ffb3705dae943',uuid=5babe591-239b-4ef7-b193-6960c7313292,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.736 253665 DEBUG nova.network.os_vif_util [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converting VIF {"id": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "address": "fa:16:3e:b2:33:a1", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3202009-ab", "ovs_interfaceid": "d3202009-ab9d-4ee2-a94d-0d05cc739658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.737 253665 DEBUG nova.network.os_vif_util [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:33:a1,bridge_name='br-int',has_traffic_filtering=True,id=d3202009-ab9d-4ee2-a94d-0d05cc739658,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3202009-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.737 253665 DEBUG os_vif [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:33:a1,bridge_name='br-int',has_traffic_filtering=True,id=d3202009-ab9d-4ee2-a94d-0d05cc739658,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3202009-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.739 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.739 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3202009-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.741 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.743 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.747 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.750 253665 INFO os_vif [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:33:a1,bridge_name='br-int',has_traffic_filtering=True,id=d3202009-ab9d-4ee2-a94d-0d05cc739658,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3202009-ab')#033[00m
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.784 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:54 np0005532048 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[289977]: [NOTICE]   (289993) : haproxy version is 2.8.14-c23fe91
Nov 22 04:12:54 np0005532048 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[289977]: [NOTICE]   (289993) : path to executable is /usr/sbin/haproxy
Nov 22 04:12:54 np0005532048 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[289977]: [WARNING]  (289993) : Exiting Master process...
Nov 22 04:12:54 np0005532048 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[289977]: [WARNING]  (289993) : Exiting Master process...
Nov 22 04:12:54 np0005532048 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[289977]: [ALERT]    (289993) : Current worker (289995) exited with code 143 (Terminated)
Nov 22 04:12:54 np0005532048 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[289977]: [WARNING]  (289993) : All workers exited. Exiting... (0)
Nov 22 04:12:54 np0005532048 systemd[1]: libpod-328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da.scope: Deactivated successfully.
Nov 22 04:12:54 np0005532048 podman[291124]: 2025-11-22 09:12:54.837744053 +0000 UTC m=+0.157706753 container died 328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:12:54 np0005532048 nova_compute[253661]: 2025-11-22 09:12:54.872 253665 DEBUG nova.storage.rbd_utils [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] flattening images/c0a1f7fa-e570-4e82-9df8-99d640ef5df3 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 22 04:12:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:12:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:12:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:12:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:12:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:12:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:12:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:12:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:12:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:12:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:12:55 np0005532048 systemd[1]: var-lib-containers-storage-overlay-bff7de6cecd8d8be804ea66b96176eefccb4d3774ace81e67748b939ad84ffe4-merged.mount: Deactivated successfully.
Nov 22 04:12:55 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da-userdata-shm.mount: Deactivated successfully.
Nov 22 04:12:55 np0005532048 nova_compute[253661]: 2025-11-22 09:12:55.196 253665 DEBUG nova.network.neutron [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Successfully created port: f1f391af-c757-4aab-b0ce-ddad3dab55e7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:12:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 436 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 540 KiB/s wr, 93 op/s
Nov 22 04:12:55 np0005532048 nova_compute[253661]: 2025-11-22 09:12:55.429 253665 DEBUG nova.compute.manager [req-bb274f90-aad9-4848-a0d4-aab8f4197b3c req-ab071d5c-2aa7-432e-97ac-ad80eb8ba4ef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received event network-vif-unplugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:55 np0005532048 nova_compute[253661]: 2025-11-22 09:12:55.430 253665 DEBUG oslo_concurrency.lockutils [req-bb274f90-aad9-4848-a0d4-aab8f4197b3c req-ab071d5c-2aa7-432e-97ac-ad80eb8ba4ef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5babe591-239b-4ef7-b193-6960c7313292-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:55 np0005532048 nova_compute[253661]: 2025-11-22 09:12:55.430 253665 DEBUG oslo_concurrency.lockutils [req-bb274f90-aad9-4848-a0d4-aab8f4197b3c req-ab071d5c-2aa7-432e-97ac-ad80eb8ba4ef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:55 np0005532048 nova_compute[253661]: 2025-11-22 09:12:55.430 253665 DEBUG oslo_concurrency.lockutils [req-bb274f90-aad9-4848-a0d4-aab8f4197b3c req-ab071d5c-2aa7-432e-97ac-ad80eb8ba4ef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:55 np0005532048 nova_compute[253661]: 2025-11-22 09:12:55.430 253665 DEBUG nova.compute.manager [req-bb274f90-aad9-4848-a0d4-aab8f4197b3c req-ab071d5c-2aa7-432e-97ac-ad80eb8ba4ef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] No waiting events found dispatching network-vif-unplugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:12:55 np0005532048 nova_compute[253661]: 2025-11-22 09:12:55.430 253665 DEBUG nova.compute.manager [req-bb274f90-aad9-4848-a0d4-aab8f4197b3c req-ab071d5c-2aa7-432e-97ac-ad80eb8ba4ef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received event network-vif-unplugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:12:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:55 np0005532048 podman[291124]: 2025-11-22 09:12:55.647877614 +0000 UTC m=+0.967840294 container cleanup 328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 04:12:55 np0005532048 systemd[1]: libpod-conmon-328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da.scope: Deactivated successfully.
Nov 22 04:12:56 np0005532048 podman[291226]: 2025-11-22 09:12:56.338107373 +0000 UTC m=+0.662960382 container remove 328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:12:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.348 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[336541ad-0058-44d2-ac46-a2763afd7f34]: (4, ('Sat Nov 22 09:12:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff (328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da)\n328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da\nSat Nov 22 09:12:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff (328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da)\n328e06bee825e358c781c57ccdd5faedde0a4a098ef1372f27d8e5e6ce2734da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.351 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1e093c3f-3e38-4f77-a420-26c2954ec8a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.351 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap691e79ad-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:56 np0005532048 nova_compute[253661]: 2025-11-22 09:12:56.354 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:56 np0005532048 kernel: tap691e79ad-d0: left promiscuous mode
Nov 22 04:12:56 np0005532048 nova_compute[253661]: 2025-11-22 09:12:56.376 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.379 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[85dc2ac6-e7be-4f56-a799-d28dd820fce5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:56 np0005532048 nova_compute[253661]: 2025-11-22 09:12:56.385 253665 DEBUG nova.network.neutron [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Successfully updated port: f1f391af-c757-4aab-b0ce-ddad3dab55e7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:12:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.395 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8709bf4b-d174-4d95-b2f4-0c98a26389f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.396 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d52922a5-9776-40b7-b961-834ce1b484df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:56 np0005532048 nova_compute[253661]: 2025-11-22 09:12:56.400 253665 DEBUG oslo_concurrency.lockutils [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:12:56 np0005532048 nova_compute[253661]: 2025-11-22 09:12:56.401 253665 DEBUG oslo_concurrency.lockutils [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:12:56 np0005532048 nova_compute[253661]: 2025-11-22 09:12:56.402 253665 DEBUG nova.network.neutron [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:12:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.421 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cbec4509-794d-4cbc-baa5-f7a7415fffb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559073, 'reachable_time': 16379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291257, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:56 np0005532048 systemd[1]: run-netns-ovnmeta\x2d691e79ad\x2dda5d\x2d4276\x2daa7d\x2d732c2aaedbff.mount: Deactivated successfully.
Nov 22 04:12:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.423 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:12:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:56.424 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[9cfbaeae-8875-459b-ac38-49a13729dd30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:56 np0005532048 podman[291258]: 2025-11-22 09:12:56.462289307 +0000 UTC m=+0.027906173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:12:56 np0005532048 podman[291258]: 2025-11-22 09:12:56.822573913 +0000 UTC m=+0.388190779 container create cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:12:56 np0005532048 nova_compute[253661]: 2025-11-22 09:12:56.943 253665 WARNING nova.network.neutron [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] 5e2cd359-c68f-4256-90e8-0ad40aff8a00 already exists in list: networks containing: ['5e2cd359-c68f-4256-90e8-0ad40aff8a00']. ignoring it#033[00m
Nov 22 04:12:57 np0005532048 systemd[1]: Started libpod-conmon-cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4.scope.
Nov 22 04:12:57 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:12:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 442 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 544 KiB/s wr, 86 op/s
Nov 22 04:12:57 np0005532048 nova_compute[253661]: 2025-11-22 09:12:57.270 253665 DEBUG nova.compute.manager [req-8b272806-77f1-4edb-bfe9-b90466caf999 req-425febd9-82a6-4b17-bbca-ae29a6dd0252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-changed-f1f391af-c757-4aab-b0ce-ddad3dab55e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:57 np0005532048 nova_compute[253661]: 2025-11-22 09:12:57.271 253665 DEBUG nova.compute.manager [req-8b272806-77f1-4edb-bfe9-b90466caf999 req-425febd9-82a6-4b17-bbca-ae29a6dd0252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing instance network info cache due to event network-changed-f1f391af-c757-4aab-b0ce-ddad3dab55e7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:12:57 np0005532048 nova_compute[253661]: 2025-11-22 09:12:57.271 253665 DEBUG oslo_concurrency.lockutils [req-8b272806-77f1-4edb-bfe9-b90466caf999 req-425febd9-82a6-4b17-bbca-ae29a6dd0252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:12:57 np0005532048 podman[291258]: 2025-11-22 09:12:57.309066862 +0000 UTC m=+0.874683748 container init cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cannon, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:12:57 np0005532048 podman[291258]: 2025-11-22 09:12:57.317122625 +0000 UTC m=+0.882739491 container start cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cannon, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:12:57 np0005532048 xenodochial_cannon[291274]: 167 167
Nov 22 04:12:57 np0005532048 systemd[1]: libpod-cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4.scope: Deactivated successfully.
Nov 22 04:12:57 np0005532048 podman[291258]: 2025-11-22 09:12:57.539559427 +0000 UTC m=+1.105176323 container attach cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cannon, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 04:12:57 np0005532048 podman[291258]: 2025-11-22 09:12:57.540150952 +0000 UTC m=+1.105767818 container died cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 04:12:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay-700bf1d6dd92ec38ba68438a77216b66b278f877e758717ab16d00885b7dcf71-merged.mount: Deactivated successfully.
Nov 22 04:12:58 np0005532048 podman[291258]: 2025-11-22 09:12:58.219800926 +0000 UTC m=+1.785417792 container remove cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cannon, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.293 253665 DEBUG nova.compute.manager [req-dde8a436-7a73-43a9-9a54-a5eaac11496d req-6ec2d7b1-d9f5-47dc-b551-ff760a197ee6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received event network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.293 253665 DEBUG oslo_concurrency.lockutils [req-dde8a436-7a73-43a9-9a54-a5eaac11496d req-6ec2d7b1-d9f5-47dc-b551-ff760a197ee6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5babe591-239b-4ef7-b193-6960c7313292-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.293 253665 DEBUG oslo_concurrency.lockutils [req-dde8a436-7a73-43a9-9a54-a5eaac11496d req-6ec2d7b1-d9f5-47dc-b551-ff760a197ee6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.294 253665 DEBUG oslo_concurrency.lockutils [req-dde8a436-7a73-43a9-9a54-a5eaac11496d req-6ec2d7b1-d9f5-47dc-b551-ff760a197ee6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.294 253665 DEBUG nova.compute.manager [req-dde8a436-7a73-43a9-9a54-a5eaac11496d req-6ec2d7b1-d9f5-47dc-b551-ff760a197ee6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] No waiting events found dispatching network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.294 253665 WARNING nova.compute.manager [req-dde8a436-7a73-43a9-9a54-a5eaac11496d req-6ec2d7b1-d9f5-47dc-b551-ff760a197ee6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received unexpected event network-vif-plugged-d3202009-ab9d-4ee2-a94d-0d05cc739658 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:12:58 np0005532048 systemd[1]: libpod-conmon-cc7f2cc6e961f2c93fe833f19af8b067f4549320f8730f1ca458b8f474b39bf4.scope: Deactivated successfully.
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.326 253665 DEBUG nova.storage.rbd_utils [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] removing snapshot(af237d8df3944bf985d1958a12fa2e46) on rbd image(6e825024-ffe6-4fdb-abaa-0c99c65ac38b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.376 253665 INFO nova.virt.libvirt.driver [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Deleting instance files /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7_del#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.377 253665 INFO nova.virt.libvirt.driver [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Deletion of /var/lib/nova/instances/96000606-0bc4-4cf1-9e33-360a640c2cb7_del complete#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.438 253665 INFO nova.compute.manager [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Took 15.88 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.439 253665 DEBUG oslo.service.loopingcall [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.440 253665 DEBUG nova.compute.manager [-] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.440 253665 DEBUG nova.network.neutron [-] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:12:58 np0005532048 podman[291316]: 2025-11-22 09:12:58.442895603 +0000 UTC m=+0.060471388 container create 308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:12:58 np0005532048 systemd[1]: Started libpod-conmon-308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035.scope.
Nov 22 04:12:58 np0005532048 podman[291316]: 2025-11-22 09:12:58.416149079 +0000 UTC m=+0.033724894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:12:58 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:12:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e1396e78545d00a85f0e6f4c759eb3fe20b64e672022861ecddd54ad43ff8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:12:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e1396e78545d00a85f0e6f4c759eb3fe20b64e672022861ecddd54ad43ff8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:12:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e1396e78545d00a85f0e6f4c759eb3fe20b64e672022861ecddd54ad43ff8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:12:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e1396e78545d00a85f0e6f4c759eb3fe20b64e672022861ecddd54ad43ff8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:12:58 np0005532048 podman[291316]: 2025-11-22 09:12:58.544047593 +0000 UTC m=+0.161623408 container init 308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 04:12:58 np0005532048 podman[291316]: 2025-11-22 09:12:58.552349412 +0000 UTC m=+0.169925237 container start 308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 04:12:58 np0005532048 podman[291316]: 2025-11-22 09:12:58.556526043 +0000 UTC m=+0.174101858 container attach 308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 04:12:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.731 253665 INFO nova.virt.libvirt.driver [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Deleting instance files /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292_del#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.732 253665 INFO nova.virt.libvirt.driver [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Deletion of /var/lib/nova/instances/5babe591-239b-4ef7-b193-6960c7313292_del complete#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.739 253665 DEBUG nova.network.neutron [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:12:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Nov 22 04:12:58 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.774 253665 DEBUG oslo_concurrency.lockutils [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.775 253665 DEBUG oslo_concurrency.lockutils [req-8b272806-77f1-4edb-bfe9-b90466caf999 req-425febd9-82a6-4b17-bbca-ae29a6dd0252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.775 253665 DEBUG nova.network.neutron [req-8b272806-77f1-4edb-bfe9-b90466caf999 req-425febd9-82a6-4b17-bbca-ae29a6dd0252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing network info cache for port f1f391af-c757-4aab-b0ce-ddad3dab55e7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.779 253665 DEBUG nova.virt.libvirt.vif [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.779 253665 DEBUG nova.network.os_vif_util [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.780 253665 DEBUG nova.network.os_vif_util [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.780 253665 DEBUG os_vif [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.781 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.781 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.781 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.785 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.785 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1f391af-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.786 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf1f391af-c7, col_values=(('external_ids', {'iface-id': 'f1f391af-c757-4aab-b0ce-ddad3dab55e7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:0f:1c', 'vm-uuid': '3c70b093-a92a-4781-8e32-2a7eefde4a43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:58 np0005532048 NetworkManager[48920]: <info>  [1763802778.7942] manager: (tapf1f391af-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.800 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.974 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802763.7952714, 96000606-0bc4-4cf1-9e33-360a640c2cb7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.974 253665 INFO nova.compute.manager [-] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.976 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.978 253665 INFO os_vif [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7')#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.979 253665 DEBUG nova.virt.libvirt.vif [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.979 253665 DEBUG nova.network.os_vif_util [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.980 253665 DEBUG nova.network.os_vif_util [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:12:58 np0005532048 nova_compute[253661]: 2025-11-22 09:12:58.985 253665 DEBUG nova.storage.rbd_utils [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] creating snapshot(snap) on rbd image(c0a1f7fa-e570-4e82-9df8-99d640ef5df3) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.050 253665 DEBUG nova.compute.manager [None req-c769c8a3-72ad-4b1b-ab91-51610934b095 - - - - - -] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.053 253665 DEBUG nova.virt.libvirt.guest [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] attach device xml: <interface type="ethernet">
Nov 22 04:12:59 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:97:0f:1c"/>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:  <target dev="tapf1f391af-c7"/>
Nov 22 04:12:59 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:12:59 np0005532048 nova_compute[253661]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.056 253665 INFO nova.compute.manager [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Took 5.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.056 253665 DEBUG oslo.service.loopingcall [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.057 253665 DEBUG nova.compute.manager [-] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.057 253665 DEBUG nova.network.neutron [-] [instance: 5babe591-239b-4ef7-b193-6960c7313292] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.067 253665 DEBUG nova.network.neutron [-] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:12:59 np0005532048 kernel: tapf1f391af-c7: entered promiscuous mode
Nov 22 04:12:59 np0005532048 NetworkManager[48920]: <info>  [1763802779.0714] manager: (tapf1f391af-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Nov 22 04:12:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:59Z|00185|binding|INFO|Claiming lport f1f391af-c757-4aab-b0ce-ddad3dab55e7 for this chassis.
Nov 22 04:12:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:59Z|00186|binding|INFO|f1f391af-c757-4aab-b0ce-ddad3dab55e7: Claiming fa:16:3e:97:0f:1c 10.100.0.13
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.074 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.089 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:0f:1c 10.100.0.13'], port_security=['fa:16:3e:97:0f:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3c70b093-a92a-4781-8e32-2a7eefde4a43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f1f391af-c757-4aab-b0ce-ddad3dab55e7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:12:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.091 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f1f391af-c757-4aab-b0ce-ddad3dab55e7 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 bound to our chassis#033[00m
Nov 22 04:12:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.096 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.100 253665 INFO nova.compute.manager [-] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Took 0.66 seconds to deallocate network for instance.#033[00m
Nov 22 04:12:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:59Z|00187|binding|INFO|Setting lport f1f391af-c757-4aab-b0ce-ddad3dab55e7 ovn-installed in OVS
Nov 22 04:12:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:12:59Z|00188|binding|INFO|Setting lport f1f391af-c757-4aab-b0ce-ddad3dab55e7 up in Southbound
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.106 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.111 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:59 np0005532048 systemd-udevd[291368]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:12:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.122 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[981ba118-3733-40c3-94e2-2187b59008da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:59 np0005532048 NetworkManager[48920]: <info>  [1763802779.1342] device (tapf1f391af-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:12:59 np0005532048 NetworkManager[48920]: <info>  [1763802779.1351] device (tapf1f391af-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.160 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.161 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.164 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[18b88600-348e-416f-a067-1e770a9ec4f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.170 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d389fe6a-071a-4452-9a1e-19736f3c924c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.205 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fc1282-b5a5-415c-97bd-f8595dfb5cfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.209 253665 DEBUG nova.virt.libvirt.driver [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.209 253665 DEBUG nova.virt.libvirt.driver [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.210 253665 DEBUG nova.virt.libvirt.driver [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:78:3a:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.210 253665 DEBUG nova.virt.libvirt.driver [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:97:0f:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:12:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 499 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 4.7 MiB/s wr, 153 op/s
Nov 22 04:12:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.227 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[729c6531-da76-4f51-be6e-1e06197c6703]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558612, 'reachable_time': 36035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291377, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.241 253665 DEBUG nova.virt.libvirt.guest [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:12:59 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:  <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:12:59</nova:creationTime>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:12:59 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:    <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 04:12:59 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:    <nova:port uuid="f1f391af-c757-4aab-b0ce-ddad3dab55e7">
Nov 22 04:12:59 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:12:59 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:12:59 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:12:59 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 22 04:12:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.253 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[60fe7a64-15f3-496b-9450-b7aa17c1d163]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558628, 'tstamp': 558628}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291378, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558633, 'tstamp': 558633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291378, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:12:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.256 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.258 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.264 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.264 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:12:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.265 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.265 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:12:59.265 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.267 253665 DEBUG oslo_concurrency.lockutils [None req-8d629336-adcb-4fdf-a7c6-7a490215e887 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.356 253665 DEBUG oslo_concurrency.processutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]: {
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:    "0": [
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:        {
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "devices": [
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "/dev/loop3"
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            ],
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "lv_name": "ceph_lv0",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "lv_size": "21470642176",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "name": "ceph_lv0",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "tags": {
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.cluster_name": "ceph",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.crush_device_class": "",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.encrypted": "0",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.osd_id": "0",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.type": "block",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.vdo": "0"
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            },
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "type": "block",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "vg_name": "ceph_vg0"
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:        }
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:    ],
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:    "1": [
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:        {
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "devices": [
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "/dev/loop4"
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            ],
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "lv_name": "ceph_lv1",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "lv_size": "21470642176",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "name": "ceph_lv1",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "tags": {
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.cluster_name": "ceph",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.crush_device_class": "",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.encrypted": "0",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.osd_id": "1",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.type": "block",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.vdo": "0"
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            },
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "type": "block",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "vg_name": "ceph_vg1"
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:        }
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:    ],
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:    "2": [
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:        {
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "devices": [
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "/dev/loop5"
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            ],
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "lv_name": "ceph_lv2",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "lv_size": "21470642176",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "name": "ceph_lv2",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "tags": {
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.cluster_name": "ceph",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.crush_device_class": "",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.encrypted": "0",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.osd_id": "2",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.type": "block",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:                "ceph.vdo": "0"
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            },
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "type": "block",
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:            "vg_name": "ceph_vg2"
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:        }
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]:    ]
Nov 22 04:12:59 np0005532048 sleepy_buck[291334]: }
Nov 22 04:12:59 np0005532048 systemd[1]: libpod-308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035.scope: Deactivated successfully.
Nov 22 04:12:59 np0005532048 conmon[291334]: conmon 308d8108899240841c04 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035.scope/container/memory.events
Nov 22 04:12:59 np0005532048 podman[291316]: 2025-11-22 09:12:59.514794735 +0000 UTC m=+1.132370520 container died 308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:12:59 np0005532048 systemd[1]: var-lib-containers-storage-overlay-35e1396e78545d00a85f0e6f4c759eb3fe20b64e672022861ecddd54ad43ff8e-merged.mount: Deactivated successfully.
Nov 22 04:12:59 np0005532048 podman[291316]: 2025-11-22 09:12:59.598731488 +0000 UTC m=+1.216307273 container remove 308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:12:59 np0005532048 systemd[1]: libpod-conmon-308d8108899240841c04aac87188b793946a8657f68b3bb1dec3f412627be035.scope: Deactivated successfully.
Nov 22 04:12:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Nov 22 04:12:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.787 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:12:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Nov 22 04:12:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:12:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/62017300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.864 253665 DEBUG oslo_concurrency.processutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.872 253665 DEBUG nova.compute.provider_tree [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.883 253665 DEBUG nova.network.neutron [-] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.898 253665 DEBUG nova.scheduler.client.report [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.905 253665 INFO nova.compute.manager [-] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Took 0.85 seconds to deallocate network for instance.#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.930 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.958 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.959 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:12:59 np0005532048 nova_compute[253661]: 2025-11-22 09:12:59.968 253665 INFO nova.scheduler.client.report [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Deleted allocations for instance 96000606-0bc4-4cf1-9e33-360a640c2cb7#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.005 253665 DEBUG oslo_concurrency.lockutils [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.006 253665 DEBUG oslo_concurrency.lockutils [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.006 253665 DEBUG nova.objects.instance [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.047 253665 DEBUG oslo_concurrency.lockutils [None req-5bddf76f-f713-4a51-a2dc-12369f63a865 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "96000606-0bc4-4cf1-9e33-360a640c2cb7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 17.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.102 253665 DEBUG oslo_concurrency.processutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:00 np0005532048 podman[291557]: 2025-11-22 09:13:00.245413558 +0000 UTC m=+0.042615629 container create 90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hawking, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 04:13:00 np0005532048 systemd[1]: Started libpod-conmon-90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef.scope.
Nov 22 04:13:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:00Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:0f:1c 10.100.0.13
Nov 22 04:13:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:00Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:0f:1c 10.100.0.13
Nov 22 04:13:00 np0005532048 podman[291557]: 2025-11-22 09:13:00.224853872 +0000 UTC m=+0.022055973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:13:00 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:13:00 np0005532048 podman[291557]: 2025-11-22 09:13:00.354496178 +0000 UTC m=+0.151698269 container init 90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:13:00 np0005532048 podman[291557]: 2025-11-22 09:13:00.362835029 +0000 UTC m=+0.160037110 container start 90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 04:13:00 np0005532048 affectionate_hawking[291592]: 167 167
Nov 22 04:13:00 np0005532048 systemd[1]: libpod-90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef.scope: Deactivated successfully.
Nov 22 04:13:00 np0005532048 conmon[291592]: conmon 90d7ee2b41fe8c93e212 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef.scope/container/memory.events
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.369 253665 DEBUG nova.compute.manager [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 96000606-0bc4-4cf1-9e33-360a640c2cb7] Received event network-vif-deleted-411035c7-ec82-4bcc-bfb8-0ef8afc9dcaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.372 253665 DEBUG nova.compute.manager [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.372 253665 DEBUG oslo_concurrency.lockutils [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.372 253665 DEBUG oslo_concurrency.lockutils [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.373 253665 DEBUG oslo_concurrency.lockutils [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.373 253665 DEBUG nova.compute.manager [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-plugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.373 253665 WARNING nova.compute.manager [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-plugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.373 253665 DEBUG nova.compute.manager [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.374 253665 DEBUG oslo_concurrency.lockutils [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.374 253665 DEBUG oslo_concurrency.lockutils [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.374 253665 DEBUG oslo_concurrency.lockutils [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.374 253665 DEBUG nova.compute.manager [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-plugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:13:00 np0005532048 podman[291557]: 2025-11-22 09:13:00.372356258 +0000 UTC m=+0.169558339 container attach 90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hawking, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.374 253665 WARNING nova.compute.manager [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-plugged-f1f391af-c757-4aab-b0ce-ddad3dab55e7 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.374 253665 DEBUG nova.compute.manager [req-80c77d97-2c26-4e31-8c24-16e469dbac08 req-ab0964f6-b5c4-4a61-84f3-85783ef2853b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Received event network-vif-deleted-d3202009-ab9d-4ee2-a94d-0d05cc739658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:00 np0005532048 podman[291557]: 2025-11-22 09:13:00.375468053 +0000 UTC m=+0.172670124 container died 90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hawking, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 04:13:00 np0005532048 systemd[1]: var-lib-containers-storage-overlay-66baa077c0e7c25481ce55fa9e7b1c8ab42faa0933dce8ad2607d0b0437e8066-merged.mount: Deactivated successfully.
Nov 22 04:13:00 np0005532048 podman[291557]: 2025-11-22 09:13:00.44337858 +0000 UTC m=+0.240580651 container remove 90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_hawking, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:13:00 np0005532048 systemd[1]: libpod-conmon-90d7ee2b41fe8c93e212964389af6af96cd3aed1773789803c178ad9cd1d85ef.scope: Deactivated successfully.
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.478 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802765.4771852, 6e825024-ffe6-4fdb-abaa-0c99c65ac38b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.479 253665 INFO nova.compute.manager [-] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.513 253665 DEBUG nova.compute.manager [None req-d842b4e0-7595-40c6-96ee-8ae741a82b6d - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.520 253665 DEBUG nova.compute.manager [None req-d842b4e0-7595-40c6-96ee-8ae741a82b6d - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: image_uploading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.539 253665 INFO nova.compute.manager [None req-d842b4e0-7595-40c6-96ee-8ae741a82b6d - - - - - -] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] During sync_power_state the instance has a pending task (image_uploading). Skip.#033[00m
Nov 22 04:13:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:13:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:13:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3201247690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.614 253665 DEBUG oslo_concurrency.processutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.621 253665 DEBUG nova.compute.provider_tree [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.631 253665 DEBUG nova.scheduler.client.report [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:13:00 np0005532048 podman[291616]: 2025-11-22 09:13:00.635470251 +0000 UTC m=+0.051962584 container create 1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jackson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.642 253665 DEBUG nova.network.neutron [req-8b272806-77f1-4edb-bfe9-b90466caf999 req-425febd9-82a6-4b17-bbca-ae29a6dd0252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updated VIF entry in instance network info cache for port f1f391af-c757-4aab-b0ce-ddad3dab55e7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.643 253665 DEBUG nova.network.neutron [req-8b272806-77f1-4edb-bfe9-b90466caf999 req-425febd9-82a6-4b17-bbca-ae29a6dd0252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.657 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.660 253665 DEBUG oslo_concurrency.lockutils [req-8b272806-77f1-4edb-bfe9-b90466caf999 req-425febd9-82a6-4b17-bbca-ae29a6dd0252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.687 253665 INFO nova.scheduler.client.report [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Deleted allocations for instance 5babe591-239b-4ef7-b193-6960c7313292#033[00m
Nov 22 04:13:00 np0005532048 systemd[1]: Started libpod-conmon-1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557.scope.
Nov 22 04:13:00 np0005532048 podman[291616]: 2025-11-22 09:13:00.611215206 +0000 UTC m=+0.027707559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:13:00 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:13:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d4be17cfb40eada9034b12adae175ba6ec7bafe897d7e66e7f3daf6d4767c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d4be17cfb40eada9034b12adae175ba6ec7bafe897d7e66e7f3daf6d4767c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d4be17cfb40eada9034b12adae175ba6ec7bafe897d7e66e7f3daf6d4767c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d4be17cfb40eada9034b12adae175ba6ec7bafe897d7e66e7f3daf6d4767c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.758 253665 DEBUG oslo_concurrency.lockutils [None req-68e6e0f6-637b-4b49-ac78-2d2aa6efc5d4 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "5babe591-239b-4ef7-b193-6960c7313292" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:00 np0005532048 podman[291616]: 2025-11-22 09:13:00.773172131 +0000 UTC m=+0.189664464 container init 1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jackson, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 04:13:00 np0005532048 podman[291616]: 2025-11-22 09:13:00.781950272 +0000 UTC m=+0.198442605 container start 1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jackson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:13:00 np0005532048 podman[291616]: 2025-11-22 09:13:00.788538961 +0000 UTC m=+0.205031344 container attach 1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jackson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.827 253665 DEBUG nova.objects.instance [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'pci_requests' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:13:00 np0005532048 nova_compute[253661]: 2025-11-22 09:13:00.839 253665 DEBUG nova.network.neutron [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.005 253665 DEBUG nova.policy [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36a143bf132742b89e463c08d46df5c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:13:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 499 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 4.9 MiB/s wr, 153 op/s
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.260 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.261 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.262 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.263 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.263 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.266 253665 INFO nova.compute.manager [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Terminating instance#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.268 253665 DEBUG nova.compute.manager [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:13:01 np0005532048 kernel: tapa36e1a52-1f (unregistering): left promiscuous mode
Nov 22 04:13:01 np0005532048 NetworkManager[48920]: <info>  [1763802781.3394] device (tapa36e1a52-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.352 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:01Z|00189|binding|INFO|Releasing lport a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 from this chassis (sb_readonly=0)
Nov 22 04:13:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:01Z|00190|binding|INFO|Setting lport a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 down in Southbound
Nov 22 04:13:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:01Z|00191|binding|INFO|Removing iface tapa36e1a52-1f ovn-installed in OVS
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.360 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.375 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:01 np0005532048 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000013.scope: Deactivated successfully.
Nov 22 04:13:01 np0005532048 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000013.scope: Consumed 20.596s CPU time.
Nov 22 04:13:01 np0005532048 systemd-machined[215941]: Machine qemu-22-instance-00000013 terminated.
Nov 22 04:13:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.436 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:5a:f3 10.100.0.11'], port_security=['fa:16:3e:0c:5a:f3 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd99bd27b-0ff3-493e-a69c-6c7ec034aa81', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:13:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.439 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a unbound from our chassis#033[00m
Nov 22 04:13:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.442 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a#033[00m
Nov 22 04:13:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.467 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9084f498-35c8-4994-bde0-e6c0187bd6cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.510 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2b9a2351-0170-411a-9b81-6ff8ecc60b38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.515 253665 INFO nova.virt.libvirt.driver [-] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Instance destroyed successfully.#033[00m
Nov 22 04:13:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.515 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[23d353f2-ae68-4505-8a12-a670f1551d9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.517 253665 DEBUG nova.objects.instance [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'resources' on Instance uuid d99bd27b-0ff3-493e-a69c-6c7ec034aa81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.536 253665 DEBUG nova.virt.libvirt.vif [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:10:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1874754552',display_name='tempest-ServersAdminTestJSON-server-1874754552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1874754552',id=19,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:10:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-otgq40uh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:10:12Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=d99bd27b-0ff3-493e-a69c-6c7ec034aa81,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.537 253665 DEBUG nova.network.os_vif_util [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "address": "fa:16:3e:0c:5a:f3", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa36e1a52-1f", "ovs_interfaceid": "a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.538 253665 DEBUG nova.network.os_vif_util [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0c:5a:f3,bridge_name='br-int',has_traffic_filtering=True,id=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa36e1a52-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.538 253665 DEBUG os_vif [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:5a:f3,bridge_name='br-int',has_traffic_filtering=True,id=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa36e1a52-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.541 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.542 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa36e1a52-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.544 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.547 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.549 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.552 253665 INFO os_vif [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:5a:f3,bridge_name='br-int',has_traffic_filtering=True,id=a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa36e1a52-1f')#033[00m
Nov 22 04:13:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.556 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e123e8a0-3d9d-465a-b72c-4bc26688a1fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.578 253665 INFO nova.virt.libvirt.driver [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Snapshot image upload complete#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.578 253665 INFO nova.compute.manager [None req-895f1476-082c-48be-9e01-8d124f53ab99 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Took 13.73 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 22 04:13:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.580 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[50bcfd2c-594f-4743-bf46-9213d24fca58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514ab32c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:d9:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 23, 'rx_bytes': 1036, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 23, 'rx_bytes': 1036, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545926, 'reachable_time': 28314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291680, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.606 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f434ac-cf39-468b-81ae-307e71801987]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545939, 'tstamp': 545939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291695, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap514ab32c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545942, 'tstamp': 545942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291695, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.608 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.610 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:01 np0005532048 nova_compute[253661]: 2025-11-22 09:13:01.614 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.615 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514ab32c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.615 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:13:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.616 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514ab32c-30, col_values=(('external_ids', {'iface-id': '8bfa3e10-9d2e-4476-b427-92376a047c7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:01.616 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]: {
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:        "osd_id": 1,
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:        "type": "bluestore"
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:    },
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:        "osd_id": 0,
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:        "type": "bluestore"
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:    },
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:        "osd_id": 2,
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:        "type": "bluestore"
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]:    }
Nov 22 04:13:01 np0005532048 quirky_jackson[291635]: }
Nov 22 04:13:01 np0005532048 systemd[1]: libpod-1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557.scope: Deactivated successfully.
Nov 22 04:13:01 np0005532048 systemd[1]: libpod-1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557.scope: Consumed 1.025s CPU time.
Nov 22 04:13:01 np0005532048 podman[291616]: 2025-11-22 09:13:01.822600619 +0000 UTC m=+1.239092952 container died 1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jackson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:13:02 np0005532048 nova_compute[253661]: 2025-11-22 09:13:02.100 253665 DEBUG nova.network.neutron [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Successfully created port: 995224e6-d1ff-4d74-bca5-3996eb4d404d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003791913907338673 of space, bias 1.0, pg target 1.137574172201602 quantized to 32 (current 32)
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013400469859021944 of space, bias 1.0, pg target 0.40067404878475615 quantized to 32 (current 32)
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 22 04:13:02 np0005532048 systemd[1]: var-lib-containers-storage-overlay-98d4be17cfb40eada9034b12adae175ba6ec7bafe897d7e66e7f3daf6d4767c7-merged.mount: Deactivated successfully.
Nov 22 04:13:02 np0005532048 nova_compute[253661]: 2025-11-22 09:13:02.657 253665 DEBUG nova.compute.manager [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received event network-vif-unplugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:02 np0005532048 nova_compute[253661]: 2025-11-22 09:13:02.657 253665 DEBUG oslo_concurrency.lockutils [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:02 np0005532048 nova_compute[253661]: 2025-11-22 09:13:02.657 253665 DEBUG oslo_concurrency.lockutils [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:02 np0005532048 nova_compute[253661]: 2025-11-22 09:13:02.658 253665 DEBUG oslo_concurrency.lockutils [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:02 np0005532048 nova_compute[253661]: 2025-11-22 09:13:02.658 253665 DEBUG nova.compute.manager [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] No waiting events found dispatching network-vif-unplugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:13:02 np0005532048 nova_compute[253661]: 2025-11-22 09:13:02.658 253665 DEBUG nova.compute.manager [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received event network-vif-unplugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:13:02 np0005532048 nova_compute[253661]: 2025-11-22 09:13:02.658 253665 DEBUG nova.compute.manager [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received event network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:02 np0005532048 nova_compute[253661]: 2025-11-22 09:13:02.659 253665 DEBUG oslo_concurrency.lockutils [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:02 np0005532048 nova_compute[253661]: 2025-11-22 09:13:02.659 253665 DEBUG oslo_concurrency.lockutils [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:02 np0005532048 nova_compute[253661]: 2025-11-22 09:13:02.659 253665 DEBUG oslo_concurrency.lockutils [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:02 np0005532048 nova_compute[253661]: 2025-11-22 09:13:02.659 253665 DEBUG nova.compute.manager [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] No waiting events found dispatching network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:13:02 np0005532048 nova_compute[253661]: 2025-11-22 09:13:02.659 253665 WARNING nova.compute.manager [req-1ab0d2a4-c058-464a-a4e1-965487b351d9 req-fbe8dc41-5ebc-4b9d-8399-f2ede4be9634 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received unexpected event network-vif-plugged-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:13:03 np0005532048 podman[291616]: 2025-11-22 09:13:03.206790919 +0000 UTC m=+2.623283252 container remove 1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 04:13:03 np0005532048 systemd[1]: libpod-conmon-1813ed79fe96147955bc0979f7ec6aad32644a893d62022fa7f9a74d64af6557.scope: Deactivated successfully.
Nov 22 04:13:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 480 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.8 MiB/s wr, 153 op/s
Nov 22 04:13:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:13:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:13:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:13:03 np0005532048 nova_compute[253661]: 2025-11-22 09:13:03.417 253665 DEBUG nova.network.neutron [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Successfully updated port: 995224e6-d1ff-4d74-bca5-3996eb4d404d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:13:03 np0005532048 nova_compute[253661]: 2025-11-22 09:13:03.433 253665 DEBUG oslo_concurrency.lockutils [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:13:03 np0005532048 nova_compute[253661]: 2025-11-22 09:13:03.433 253665 DEBUG oslo_concurrency.lockutils [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:13:03 np0005532048 nova_compute[253661]: 2025-11-22 09:13:03.433 253665 DEBUG nova.network.neutron [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:13:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:13:03 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ba50615c-2ba6-498d-8fd2-54cff71cd9ca does not exist
Nov 22 04:13:03 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d0c69ac1-e3f2-4c07-ac95-6eb96affee3b does not exist
Nov 22 04:13:03 np0005532048 nova_compute[253661]: 2025-11-22 09:13:03.588 253665 WARNING nova.network.neutron [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] 5e2cd359-c68f-4256-90e8-0ad40aff8a00 already exists in list: networks containing: ['5e2cd359-c68f-4256-90e8-0ad40aff8a00']. ignoring it#033[00m
Nov 22 04:13:03 np0005532048 nova_compute[253661]: 2025-11-22 09:13:03.588 253665 WARNING nova.network.neutron [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] 5e2cd359-c68f-4256-90e8-0ad40aff8a00 already exists in list: networks containing: ['5e2cd359-c68f-4256-90e8-0ad40aff8a00']. ignoring it#033[00m
Nov 22 04:13:04 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:13:04 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:13:04 np0005532048 nova_compute[253661]: 2025-11-22 09:13:04.790 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 438 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.5 MiB/s wr, 169 op/s
Nov 22 04:13:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:13:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.772 253665 DEBUG nova.network.neutron [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.790 253665 DEBUG oslo_concurrency.lockutils [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.793 253665 DEBUG nova.virt.libvirt.vif [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.793 253665 DEBUG nova.network.os_vif_util [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.794 253665 DEBUG nova.network.os_vif_util [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:fb:57,bridge_name='br-int',has_traffic_filtering=True,id=995224e6-d1ff-4d74-bca5-3996eb4d404d,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap995224e6-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.795 253665 DEBUG os_vif [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:fb:57,bridge_name='br-int',has_traffic_filtering=True,id=995224e6-d1ff-4d74-bca5-3996eb4d404d,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap995224e6-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.795 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.796 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.796 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.799 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.800 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap995224e6-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.800 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap995224e6-d1, col_values=(('external_ids', {'iface-id': '995224e6-d1ff-4d74-bca5-3996eb4d404d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:fb:57', 'vm-uuid': '3c70b093-a92a-4781-8e32-2a7eefde4a43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.802 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:05 np0005532048 NetworkManager[48920]: <info>  [1763802785.8031] manager: (tap995224e6-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.807 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.813 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.814 253665 INFO os_vif [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:fb:57,bridge_name='br-int',has_traffic_filtering=True,id=995224e6-d1ff-4d74-bca5-3996eb4d404d,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap995224e6-d1')#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.815 253665 DEBUG nova.virt.libvirt.vif [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.816 253665 DEBUG nova.network.os_vif_util [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.817 253665 DEBUG nova.network.os_vif_util [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:fb:57,bridge_name='br-int',has_traffic_filtering=True,id=995224e6-d1ff-4d74-bca5-3996eb4d404d,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap995224e6-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.825 253665 DEBUG nova.virt.libvirt.guest [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] attach device xml: <interface type="ethernet">
Nov 22 04:13:05 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:a6:fb:57"/>
Nov 22 04:13:05 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:13:05 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:13:05 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:13:05 np0005532048 nova_compute[253661]:  <target dev="tap995224e6-d1"/>
Nov 22 04:13:05 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:13:05 np0005532048 nova_compute[253661]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 22 04:13:05 np0005532048 kernel: tap995224e6-d1: entered promiscuous mode
Nov 22 04:13:05 np0005532048 NetworkManager[48920]: <info>  [1763802785.8431] manager: (tap995224e6-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Nov 22 04:13:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:05Z|00192|binding|INFO|Claiming lport 995224e6-d1ff-4d74-bca5-3996eb4d404d for this chassis.
Nov 22 04:13:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:05Z|00193|binding|INFO|995224e6-d1ff-4d74-bca5-3996eb4d404d: Claiming fa:16:3e:a6:fb:57 10.100.0.11
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.847 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:05.856 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:fb:57 10.100.0.11'], port_security=['fa:16:3e:a6:fb:57 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3c70b093-a92a-4781-8e32-2a7eefde4a43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=995224e6-d1ff-4d74-bca5-3996eb4d404d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:13:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:05.858 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 995224e6-d1ff-4d74-bca5-3996eb4d404d in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 bound to our chassis#033[00m
Nov 22 04:13:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:05.859 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00#033[00m
Nov 22 04:13:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:05Z|00194|binding|INFO|Setting lport 995224e6-d1ff-4d74-bca5-3996eb4d404d ovn-installed in OVS
Nov 22 04:13:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:05Z|00195|binding|INFO|Setting lport 995224e6-d1ff-4d74-bca5-3996eb4d404d up in Southbound
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.872 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:05 np0005532048 nova_compute[253661]: 2025-11-22 09:13:05.880 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:05.885 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3305be6a-2317-446e-8653-e67b59171bf8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:05 np0005532048 systemd-udevd[291789]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:13:05 np0005532048 NetworkManager[48920]: <info>  [1763802785.9030] device (tap995224e6-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:13:05 np0005532048 NetworkManager[48920]: <info>  [1763802785.9037] device (tap995224e6-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:13:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:05.925 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbc5d91-b5d0-49df-96cc-63a471e06f35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:05.930 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a290f258-a2f4-4e67-8eeb-991c1406e296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:05.970 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ce07fed7-20a6-41e1-ae76-b4dd38b56b58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:06.002 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[48d848f2-f712-4e48-a8ce-09bae10d8b2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558612, 'reachable_time': 36035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291798, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:06.023 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[426cf276-1489-40ef-96ba-53e5d8e91e0a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558628, 'tstamp': 558628}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291799, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558633, 'tstamp': 558633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291799, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:06.025 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:06 np0005532048 nova_compute[253661]: 2025-11-22 09:13:06.028 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:06 np0005532048 nova_compute[253661]: 2025-11-22 09:13:06.035 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:06.035 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:06.036 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:13:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:06.036 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:06.036 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:13:06 np0005532048 nova_compute[253661]: 2025-11-22 09:13:06.178 253665 DEBUG nova.virt.libvirt.driver [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:13:06 np0005532048 nova_compute[253661]: 2025-11-22 09:13:06.179 253665 DEBUG nova.virt.libvirt.driver [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:13:06 np0005532048 nova_compute[253661]: 2025-11-22 09:13:06.179 253665 DEBUG nova.virt.libvirt.driver [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:78:3a:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:13:06 np0005532048 nova_compute[253661]: 2025-11-22 09:13:06.180 253665 DEBUG nova.virt.libvirt.driver [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:97:0f:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:13:06 np0005532048 nova_compute[253661]: 2025-11-22 09:13:06.180 253665 DEBUG nova.virt.libvirt.driver [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:a6:fb:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:13:06 np0005532048 nova_compute[253661]: 2025-11-22 09:13:06.204 253665 DEBUG nova.virt.libvirt.guest [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:13:06 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:  <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:13:06</nova:creationTime>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:13:06 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:    <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 04:13:06 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:    <nova:port uuid="f1f391af-c757-4aab-b0ce-ddad3dab55e7">
Nov 22 04:13:06 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:    <nova:port uuid="995224e6-d1ff-4d74-bca5-3996eb4d404d">
Nov 22 04:13:06 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:06 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:13:06 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:13:06 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 22 04:13:06 np0005532048 nova_compute[253661]: 2025-11-22 09:13:06.230 253665 DEBUG oslo_concurrency.lockutils [None req-147d2cac-8381-4afd-b1f9-a0d5522f1850 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Nov 22 04:13:06 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Nov 22 04:13:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 432 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.1 MiB/s wr, 87 op/s
Nov 22 04:13:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:07Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a6:fb:57 10.100.0.11
Nov 22 04:13:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:07Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a6:fb:57 10.100.0.11
Nov 22 04:13:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 392 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 919 KiB/s wr, 81 op/s
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.344 253665 INFO nova.virt.libvirt.driver [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Deleting instance files /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81_del#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.345 253665 INFO nova.virt.libvirt.driver [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Deletion of /var/lib/nova/instances/d99bd27b-0ff3-493e-a69c-6c7ec034aa81_del complete#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.399 253665 INFO nova.compute.manager [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Took 8.13 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.400 253665 DEBUG oslo.service.loopingcall [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.400 253665 DEBUG nova.compute.manager [-] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.400 253665 DEBUG nova.network.neutron [-] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.515 253665 DEBUG nova.compute.manager [req-431f4848-634c-4e96-aa1c-7d0ff848b086 req-84326922-427e-4cb6-968a-439f4e88fb3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-changed-995224e6-d1ff-4d74-bca5-3996eb4d404d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.515 253665 DEBUG nova.compute.manager [req-431f4848-634c-4e96-aa1c-7d0ff848b086 req-84326922-427e-4cb6-968a-439f4e88fb3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing instance network info cache due to event network-changed-995224e6-d1ff-4d74-bca5-3996eb4d404d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.516 253665 DEBUG oslo_concurrency.lockutils [req-431f4848-634c-4e96-aa1c-7d0ff848b086 req-84326922-427e-4cb6-968a-439f4e88fb3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.516 253665 DEBUG oslo_concurrency.lockutils [req-431f4848-634c-4e96-aa1c-7d0ff848b086 req-84326922-427e-4cb6-968a-439f4e88fb3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.516 253665 DEBUG nova.network.neutron [req-431f4848-634c-4e96-aa1c-7d0ff848b086 req-84326922-427e-4cb6-968a-439f4e88fb3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing network info cache for port 995224e6-d1ff-4d74-bca5-3996eb4d404d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.721 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802774.7174582, 5babe591-239b-4ef7-b193-6960c7313292 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.721 253665 INFO nova.compute.manager [-] [instance: 5babe591-239b-4ef7-b193-6960c7313292] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.745 253665 DEBUG nova.compute.manager [None req-db15371d-8bb0-4731-8de1-d36742f17e95 - - - - - -] [instance: 5babe591-239b-4ef7-b193-6960c7313292] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.792 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.979 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.979 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.979 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.980 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.980 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.982 253665 INFO nova.compute.manager [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Terminating instance#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.983 253665 DEBUG nova.compute.manager [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.989 253665 INFO nova.virt.libvirt.driver [-] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Instance destroyed successfully.#033[00m
Nov 22 04:13:09 np0005532048 nova_compute[253661]: 2025-11-22 09:13:09.990 253665 DEBUG nova.objects.instance [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'resources' on Instance uuid 6e825024-ffe6-4fdb-abaa-0c99c65ac38b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.007 253665 DEBUG nova.virt.libvirt.vif [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-137830058',display_name='tempest-ImagesTestJSON-server-137830058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-137830058',id=27,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-3brb40ng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:13:01Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=6e825024-ffe6-4fdb-abaa-0c99c65ac38b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.007 253665 DEBUG nova.network.os_vif_util [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "5898357d-7112-429d-86c6-24932a2fc274", "address": "fa:16:3e:b0:97:c6", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5898357d-71", "ovs_interfaceid": "5898357d-7112-429d-86c6-24932a2fc274", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.008 253665 DEBUG nova.network.os_vif_util [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:97:c6,bridge_name='br-int',has_traffic_filtering=True,id=5898357d-7112-429d-86c6-24932a2fc274,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5898357d-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.008 253665 DEBUG os_vif [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:97:c6,bridge_name='br-int',has_traffic_filtering=True,id=5898357d-7112-429d-86c6-24932a2fc274,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5898357d-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.009 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.010 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5898357d-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.011 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.013 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.015 253665 INFO os_vif [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:97:c6,bridge_name='br-int',has_traffic_filtering=True,id=5898357d-7112-429d-86c6-24932a2fc274,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5898357d-71')#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.247 253665 DEBUG nova.network.neutron [-] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.271 253665 INFO nova.compute.manager [-] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Took 0.87 seconds to deallocate network for instance.#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.319 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.320 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:13:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Nov 22 04:13:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Nov 22 04:13:10 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.712 253665 DEBUG oslo_concurrency.processutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.893 253665 DEBUG nova.network.neutron [req-431f4848-634c-4e96-aa1c-7d0ff848b086 req-84326922-427e-4cb6-968a-439f4e88fb3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updated VIF entry in instance network info cache for port 995224e6-d1ff-4d74-bca5-3996eb4d404d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.894 253665 DEBUG nova.network.neutron [req-431f4848-634c-4e96-aa1c-7d0ff848b086 req-84326922-427e-4cb6-968a-439f4e88fb3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.917 253665 DEBUG oslo_concurrency.lockutils [req-431f4848-634c-4e96-aa1c-7d0ff848b086 req-84326922-427e-4cb6-968a-439f4e88fb3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.979 253665 DEBUG oslo_concurrency.lockutils [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-07d520ca-fd4a-49e6-b52e-ee9e8208b902" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.980 253665 DEBUG oslo_concurrency.lockutils [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-07d520ca-fd4a-49e6-b52e-ee9e8208b902" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.981 253665 DEBUG nova.objects.instance [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.995 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "14600eae-75dc-4ffc-a15a-bdb234f164d0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:10 np0005532048 nova_compute[253661]: 2025-11-22 09:13:10.996 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:11 np0005532048 nova_compute[253661]: 2025-11-22 09:13:11.022 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:13:11 np0005532048 nova_compute[253661]: 2025-11-22 09:13:11.112 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 392 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 20 KiB/s wr, 53 op/s
Nov 22 04:13:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:13:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3449920996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:13:11 np0005532048 nova_compute[253661]: 2025-11-22 09:13:11.447 253665 DEBUG oslo_concurrency.processutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.736s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:11 np0005532048 nova_compute[253661]: 2025-11-22 09:13:11.455 253665 DEBUG nova.compute.provider_tree [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:13:11 np0005532048 nova_compute[253661]: 2025-11-22 09:13:11.469 253665 DEBUG nova.scheduler.client.report [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:13:11 np0005532048 nova_compute[253661]: 2025-11-22 09:13:11.680 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:11 np0005532048 nova_compute[253661]: 2025-11-22 09:13:11.682 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:11 np0005532048 nova_compute[253661]: 2025-11-22 09:13:11.690 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:13:11 np0005532048 nova_compute[253661]: 2025-11-22 09:13:11.691 253665 INFO nova.compute.claims [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:13:11 np0005532048 nova_compute[253661]: 2025-11-22 09:13:11.726 253665 INFO nova.scheduler.client.report [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Deleted allocations for instance d99bd27b-0ff3-493e-a69c-6c7ec034aa81#033[00m
Nov 22 04:13:11 np0005532048 nova_compute[253661]: 2025-11-22 09:13:11.802 253665 DEBUG oslo_concurrency.lockutils [None req-15d33e08-ddeb-4231-9ed1-a1086800efe6 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "d99bd27b-0ff3-493e-a69c-6c7ec034aa81" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:11 np0005532048 nova_compute[253661]: 2025-11-22 09:13:11.861 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.161 253665 DEBUG nova.objects.instance [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'pci_requests' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.177 253665 DEBUG nova.network.neutron [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:13:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:13:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4077584066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:13:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:13:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/474190434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:13:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:13:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/474190434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.447 253665 DEBUG nova.compute.manager [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-995224e6-d1ff-4d74-bca5-3996eb4d404d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.447 253665 DEBUG oslo_concurrency.lockutils [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.448 253665 DEBUG oslo_concurrency.lockutils [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.448 253665 DEBUG oslo_concurrency.lockutils [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.448 253665 DEBUG nova.compute.manager [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-plugged-995224e6-d1ff-4d74-bca5-3996eb4d404d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.449 253665 WARNING nova.compute.manager [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-plugged-995224e6-d1ff-4d74-bca5-3996eb4d404d for instance with vm_state active and task_state None.#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.449 253665 DEBUG nova.compute.manager [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-995224e6-d1ff-4d74-bca5-3996eb4d404d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.449 253665 DEBUG oslo_concurrency.lockutils [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.450 253665 DEBUG oslo_concurrency.lockutils [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.450 253665 DEBUG oslo_concurrency.lockutils [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.450 253665 DEBUG nova.compute.manager [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-plugged-995224e6-d1ff-4d74-bca5-3996eb4d404d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.450 253665 WARNING nova.compute.manager [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-plugged-995224e6-d1ff-4d74-bca5-3996eb4d404d for instance with vm_state active and task_state None.#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.450 253665 DEBUG nova.compute.manager [req-f9a0afcf-7384-4bd6-9e8c-348c2aa2c535 req-6ef77d78-de18-4667-aa32-724c4183180f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Received event network-vif-deleted-a36e1a52-1f3b-4ba4-8e92-d1dae8b54f19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.455 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.462 253665 DEBUG nova.compute.provider_tree [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.475 253665 DEBUG nova.scheduler.client.report [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.544 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.545 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.644 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.645 253665 DEBUG nova.network.neutron [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.719 253665 INFO nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.768 253665 INFO nova.virt.libvirt.driver [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Deleting instance files /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b_del#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.769 253665 INFO nova.virt.libvirt.driver [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Deletion of /var/lib/nova/instances/6e825024-ffe6-4fdb-abaa-0c99c65ac38b_del complete#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.793 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.941 253665 INFO nova.compute.manager [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Took 2.96 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.941 253665 DEBUG oslo.service.loopingcall [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.942 253665 DEBUG nova.compute.manager [-] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.942 253665 DEBUG nova.network.neutron [-] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:13:12 np0005532048 nova_compute[253661]: 2025-11-22 09:13:12.985 253665 DEBUG nova.policy [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96cac95dc532449d964ffb3705dae943', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dcedb2f9ed6e43dfa8ecc3854373b0b5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.028 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.029 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.030 253665 INFO nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Creating image(s)#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.052 253665 DEBUG nova.storage.rbd_utils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.078 253665 DEBUG nova.storage.rbd_utils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.116 253665 DEBUG nova.storage.rbd_utils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.127 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.190 253665 DEBUG nova.policy [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36a143bf132742b89e463c08d46df5c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.208 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.209 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.210 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.210 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.229 253665 DEBUG nova.storage.rbd_utils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:13:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 304 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 8.0 KiB/s wr, 43 op/s
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.233 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.610 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.611 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.611 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.611 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.611 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.613 253665 INFO nova.compute.manager [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Terminating instance#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.614 253665 DEBUG nova.compute.manager [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.659 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.659 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.677 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:13:13 np0005532048 kernel: tap716b716d-2e (unregistering): left promiscuous mode
Nov 22 04:13:13 np0005532048 NetworkManager[48920]: <info>  [1763802793.7681] device (tap716b716d-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:13:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:13Z|00196|binding|INFO|Releasing lport 716b716d-2ee2-44e7-9850-c10854634f77 from this chassis (sb_readonly=0)
Nov 22 04:13:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:13Z|00197|binding|INFO|Setting lport 716b716d-2ee2-44e7-9850-c10854634f77 down in Southbound
Nov 22 04:13:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:13Z|00198|binding|INFO|Removing iface tap716b716d-2e ovn-installed in OVS
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.776 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.777 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.778 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:13.792 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:7d:dd 10.100.0.8'], port_security=['fa:16:3e:47:7d:dd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3ae08a2f-348c-406b-8ffc-9acb8a542e1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd78b26f20d674ae6a213d727050a50d1', 'neutron:revision_number': '8', 'neutron:security_group_ids': '17b75003-3c1e-45e8-beb3-16a4e74ae1d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a2a9242-c3da-4509-a73f-cefb2238e3e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=716b716d-2ee2-44e7-9850-c10854634f77) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:13:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:13.793 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 716b716d-2ee2-44e7-9850-c10854634f77 in datapath 514ab32c-3e9b-4d95-81f8-6acc06be6d1a unbound from our chassis#033[00m
Nov 22 04:13:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:13.795 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 514ab32c-3e9b-4d95-81f8-6acc06be6d1a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.788 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.789 253665 INFO nova.compute.claims [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.792 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:13.796 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d703b1-40cc-4fba-9e37-24a5a7081da1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:13.797 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a namespace which is not needed anymore#033[00m
Nov 22 04:13:13 np0005532048 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 22 04:13:13 np0005532048 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000012.scope: Consumed 18.423s CPU time.
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.835 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:13 np0005532048 systemd-machined[215941]: Machine qemu-33-instance-00000012 terminated.
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.906 253665 DEBUG nova.storage.rbd_utils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] resizing rbd image 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:13:13 np0005532048 neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a[282424]: [NOTICE]   (282428) : haproxy version is 2.8.14-c23fe91
Nov 22 04:13:13 np0005532048 neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a[282424]: [NOTICE]   (282428) : path to executable is /usr/sbin/haproxy
Nov 22 04:13:13 np0005532048 neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a[282424]: [WARNING]  (282428) : Exiting Master process...
Nov 22 04:13:13 np0005532048 neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a[282424]: [WARNING]  (282428) : Exiting Master process...
Nov 22 04:13:13 np0005532048 neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a[282424]: [ALERT]    (282428) : Current worker (282430) exited with code 143 (Terminated)
Nov 22 04:13:13 np0005532048 neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a[282424]: [WARNING]  (282428) : All workers exited. Exiting... (0)
Nov 22 04:13:13 np0005532048 systemd[1]: libpod-af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6.scope: Deactivated successfully.
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.956 253665 DEBUG nova.network.neutron [-] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:13:13 np0005532048 conmon[282424]: conmon af2907c7193e7dfa3192 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6.scope/container/memory.events
Nov 22 04:13:13 np0005532048 podman[292017]: 2025-11-22 09:13:13.962886503 +0000 UTC m=+0.057051243 container died af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.973 253665 INFO nova.compute.manager [-] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Took 1.03 seconds to deallocate network for instance.#033[00m
Nov 22 04:13:13 np0005532048 nova_compute[253661]: 2025-11-22 09:13:13.981 253665 DEBUG nova.network.neutron [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Successfully created port: 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.026 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:14 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6-userdata-shm.mount: Deactivated successfully.
Nov 22 04:13:14 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d31e66055707f791e950c824f07df600e3199b957edd8fd29251cf5299b718d3-merged.mount: Deactivated successfully.
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.062 253665 INFO nova.virt.libvirt.driver [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Instance destroyed successfully.#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.062 253665 DEBUG nova.objects.instance [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lazy-loading 'resources' on Instance uuid 3ae08a2f-348c-406b-8ffc-9acb8a542e1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.071 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:14 np0005532048 podman[292017]: 2025-11-22 09:13:14.080717799 +0000 UTC m=+0.174882539 container cleanup af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:13:14 np0005532048 systemd[1]: libpod-conmon-af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6.scope: Deactivated successfully.
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.109 253665 DEBUG nova.virt.libvirt.vif [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:10:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1439141870',display_name='tempest-ServersAdminTestJSON-server-1439141870',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1439141870',id=18,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d78b26f20d674ae6a213d727050a50d1',ramdisk_id='',reservation_id='r-2osuzqui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1985232284',owner_user_name='tempest-ServersAdminTestJSON-1985232284-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:30Z,user_data=None,user_id='05cafdbce8334f9380b4dbd1d21f7d58',uuid=3ae08a2f-348c-406b-8ffc-9acb8a542e1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.110 253665 DEBUG nova.network.os_vif_util [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converting VIF {"id": "716b716d-2ee2-44e7-9850-c10854634f77", "address": "fa:16:3e:47:7d:dd", "network": {"id": "514ab32c-3e9b-4d95-81f8-6acc06be6d1a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1012807098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d78b26f20d674ae6a213d727050a50d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap716b716d-2e", "ovs_interfaceid": "716b716d-2ee2-44e7-9850-c10854634f77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.111 253665 DEBUG nova.network.os_vif_util [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.111 253665 DEBUG os_vif [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.145 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.146 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap716b716d-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.149 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.151 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.158 253665 DEBUG nova.objects.instance [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lazy-loading 'migration_context' on Instance uuid 14600eae-75dc-4ffc-a15a-bdb234f164d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.162 253665 INFO os_vif [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:7d:dd,bridge_name='br-int',has_traffic_filtering=True,id=716b716d-2ee2-44e7-9850-c10854634f77,network=Network(514ab32c-3e9b-4d95-81f8-6acc06be6d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap716b716d-2e')#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.182 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.182 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Ensure instance console log exists: /var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.184 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.184 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.184 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:14 np0005532048 podman[292075]: 2025-11-22 09:13:14.18453767 +0000 UTC m=+0.079972126 container remove af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:13:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.192 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc51f79-3b3b-4b36-a5f3-02b6aaa869e2]: (4, ('Sat Nov 22 09:13:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a (af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6)\naf2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6\nSat Nov 22 09:13:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a (af2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6)\naf2907c7193e7dfa319223738f0febeb318ea69d296371cc42dcb089f2f0a8e6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.194 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[632e28b9-181f-48e0-b39e-a6d9d378976b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.195 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514ab32c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.197 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:14 np0005532048 kernel: tap514ab32c-30: left promiscuous mode
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.213 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.215 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[81601035-2b21-49bd-a9f0-3869c22eed12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.234 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[80d75b39-5fe9-4eff-bb44-1e73a7a95804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.236 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b5bd2dc0-ddfe-450e-b3a4-443e08080125]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.253 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be7ce0d0-018d-4653-a3f3-dfbc9d238604]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545917, 'reachable_time': 32355, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292145, 'error': None, 'target': 'ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:14 np0005532048 systemd[1]: run-netns-ovnmeta\x2d514ab32c\x2d3e9b\x2d4d95\x2d81f8\x2d6acc06be6d1a.mount: Deactivated successfully.
Nov 22 04:13:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.255 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-514ab32c-3e9b-4d95-81f8-6acc06be6d1a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:13:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:14.256 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[ae790934-45e6-4df6-b7fa-31c203227787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.286 253665 DEBUG nova.network.neutron [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Successfully updated port: 07d520ca-fd4a-49e6-b52e-ee9e8208b902 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.299 253665 DEBUG nova.compute.manager [req-f9d5b89f-bba0-45a4-8f1c-f21128f451ea req-4bdacb9c-1e3b-43d1-9081-aa31e9388d4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e825024-ffe6-4fdb-abaa-0c99c65ac38b] Received event network-vif-deleted-5898357d-7112-429d-86c6-24932a2fc274 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.300 253665 DEBUG oslo_concurrency.lockutils [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.301 253665 DEBUG oslo_concurrency.lockutils [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.301 253665 DEBUG nova.network.neutron [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:13:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:13:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1563768531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.543 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.549 253665 DEBUG nova.compute.provider_tree [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.562 253665 DEBUG nova.scheduler.client.report [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.581 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.581 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.584 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.623 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.623 253665 DEBUG nova.network.neutron [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.638 253665 INFO nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.654 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.694 253665 DEBUG oslo_concurrency.processutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.769 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.771 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.772 253665 INFO nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Creating image(s)#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.794 253665 DEBUG nova.storage.rbd_utils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.823 253665 DEBUG nova.storage.rbd_utils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.844 253665 DEBUG nova.storage.rbd_utils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.848 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.877 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.884 253665 DEBUG nova.policy [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97872d7ce91947789de976821b771135', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.887 253665 WARNING nova.network.neutron [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] 5e2cd359-c68f-4256-90e8-0ad40aff8a00 already exists in list: networks containing: ['5e2cd359-c68f-4256-90e8-0ad40aff8a00']. ignoring it#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.887 253665 WARNING nova.network.neutron [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] 5e2cd359-c68f-4256-90e8-0ad40aff8a00 already exists in list: networks containing: ['5e2cd359-c68f-4256-90e8-0ad40aff8a00']. ignoring it#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.887 253665 WARNING nova.network.neutron [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] 5e2cd359-c68f-4256-90e8-0ad40aff8a00 already exists in list: networks containing: ['5e2cd359-c68f-4256-90e8-0ad40aff8a00']. ignoring it#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.897 253665 INFO nova.virt.libvirt.driver [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deleting instance files /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_del#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.898 253665 INFO nova.virt.libvirt.driver [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deletion of /var/lib/nova/instances/3ae08a2f-348c-406b-8ffc-9acb8a542e1c_del complete#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.923 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.923 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.924 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.924 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.944 253665 DEBUG nova.storage.rbd_utils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.947 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.981 253665 INFO nova.compute.manager [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Took 1.37 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.982 253665 DEBUG oslo.service.loopingcall [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.983 253665 DEBUG nova.compute.manager [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:13:14 np0005532048 nova_compute[253661]: 2025-11-22 09:13:14.983 253665 DEBUG nova.network.neutron [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.168 253665 DEBUG nova.network.neutron [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Successfully updated port: 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:13:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:13:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/609026723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.184 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "refresh_cache-14600eae-75dc-4ffc-a15a-bdb234f164d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.184 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquired lock "refresh_cache-14600eae-75dc-4ffc-a15a-bdb234f164d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.184 253665 DEBUG nova.network.neutron [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.200 253665 DEBUG oslo_concurrency.processutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.205 253665 DEBUG nova.compute.provider_tree [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.221 253665 DEBUG nova.scheduler.client.report [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:13:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 200 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 12 KiB/s wr, 78 op/s
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.243 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.269 253665 INFO nova.scheduler.client.report [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Deleted allocations for instance 6e825024-ffe6-4fdb-abaa-0c99c65ac38b#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.290 253665 DEBUG nova.compute.manager [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.291 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.291 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.291 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.291 253665 DEBUG nova.compute.manager [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.292 253665 DEBUG nova.compute.manager [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-unplugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.292 253665 DEBUG nova.compute.manager [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-changed-07d520ca-fd4a-49e6-b52e-ee9e8208b902 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.292 253665 DEBUG nova.compute.manager [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing instance network info cache due to event network-changed-07d520ca-fd4a-49e6-b52e-ee9e8208b902. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.293 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.328 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.355 253665 DEBUG nova.network.neutron [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.361 253665 DEBUG oslo_concurrency.lockutils [None req-150c51b6-1a21-4389-95b8-396d8273f56d 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "6e825024-ffe6-4fdb-abaa-0c99c65ac38b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.382s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.406 253665 DEBUG nova.storage.rbd_utils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] resizing rbd image fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.523 253665 DEBUG nova.objects.instance [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'migration_context' on Instance uuid fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.539 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.540 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Ensure instance console log exists: /var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.540 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.540 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.541 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:13:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Nov 22 04:13:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Nov 22 04:13:15 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.648 253665 DEBUG nova.network.neutron [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.663 253665 INFO nova.compute.manager [-] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Took 0.68 seconds to deallocate network for instance.#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.708 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.709 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.734 253665 DEBUG nova.network.neutron [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Successfully created port: 2c059df4-a5a0-4c31-8485-01ccdea02b01 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:13:15 np0005532048 nova_compute[253661]: 2025-11-22 09:13:15.821 253665 DEBUG oslo_concurrency.processutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:13:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/987270026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.284 253665 DEBUG oslo_concurrency.processutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.291 253665 DEBUG nova.compute.provider_tree [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.315 253665 DEBUG nova.scheduler.client.report [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.339 253665 DEBUG nova.network.neutron [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Updating instance_info_cache with network_info: [{"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.343 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.366 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Releasing lock "refresh_cache-14600eae-75dc-4ffc-a15a-bdb234f164d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.367 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Instance network_info: |[{"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.369 253665 INFO nova.scheduler.client.report [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Deleted allocations for instance 3ae08a2f-348c-406b-8ffc-9acb8a542e1c#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.373 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Start _get_guest_xml network_info=[{"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.378 253665 WARNING nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.385 253665 DEBUG nova.virt.libvirt.host [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.385 253665 DEBUG nova.virt.libvirt.host [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.392 253665 DEBUG nova.virt.libvirt.host [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.393 253665 DEBUG nova.virt.libvirt.host [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.394 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.394 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.394 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.395 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.395 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.395 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.395 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.396 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.396 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.396 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.396 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.397 253665 DEBUG nova.virt.hardware [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.401 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.439 253665 DEBUG oslo_concurrency.lockutils [None req-ebffc75e-4c9a-49f6-9661-af121b0fc8bf 05cafdbce8334f9380b4dbd1d21f7d58 d78b26f20d674ae6a213d727050a50d1 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.511 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802781.509268, d99bd27b-0ff3-493e-a69c-6c7ec034aa81 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.511 253665 INFO nova.compute.manager [-] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.529 253665 DEBUG nova.compute.manager [None req-2a8de323-d873-4f53-bd10-92291e2fb3b9 - - - - - -] [instance: d99bd27b-0ff3-493e-a69c-6c7ec034aa81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:13:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:13:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2421444660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.869 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.893 253665 DEBUG nova.storage.rbd_utils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:13:16 np0005532048 nova_compute[253661]: 2025-11-22 09:13:16.897 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.066 253665 DEBUG nova.network.neutron [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Successfully updated port: 2c059df4-a5a0-4c31-8485-01ccdea02b01 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.082 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "refresh_cache-fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.083 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquired lock "refresh_cache-fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.083 253665 DEBUG nova.network.neutron [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.219 253665 DEBUG nova.network.neutron [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:13:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 206 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 1.3 MiB/s wr, 106 op/s
Nov 22 04:13:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:13:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3862241399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.364 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.366 253665 DEBUG nova.virt.libvirt.vif [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:13:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1301305920',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1301305920',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1301305920',id=29,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcedb2f9ed6e43dfa8ecc3854373b0b5',ramdisk_id='',reservation_id='r-febfj4xt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-251054159',owner
_user_name='tempest-ImagesOneServerNegativeTestJSON-251054159-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:13:12Z,user_data=None,user_id='96cac95dc532449d964ffb3705dae943',uuid=14600eae-75dc-4ffc-a15a-bdb234f164d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.366 253665 DEBUG nova.network.os_vif_util [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converting VIF {"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.367 253665 DEBUG nova.network.os_vif_util [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:97:6f,bridge_name='br-int',has_traffic_filtering=True,id=2f6ebd6c-b451-455e-b4aa-19a0ccf66a44,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6ebd6c-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.369 253665 DEBUG nova.objects.instance [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 14600eae-75dc-4ffc-a15a-bdb234f164d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.381 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  <uuid>14600eae-75dc-4ffc-a15a-bdb234f164d0</uuid>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  <name>instance-0000001d</name>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1301305920</nova:name>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:13:16</nova:creationTime>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:13:17 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:        <nova:user uuid="96cac95dc532449d964ffb3705dae943">tempest-ImagesOneServerNegativeTestJSON-251054159-project-member</nova:user>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:        <nova:project uuid="dcedb2f9ed6e43dfa8ecc3854373b0b5">tempest-ImagesOneServerNegativeTestJSON-251054159</nova:project>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:        <nova:port uuid="2f6ebd6c-b451-455e-b4aa-19a0ccf66a44">
Nov 22 04:13:17 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <entry name="serial">14600eae-75dc-4ffc-a15a-bdb234f164d0</entry>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <entry name="uuid">14600eae-75dc-4ffc-a15a-bdb234f164d0</entry>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/14600eae-75dc-4ffc-a15a-bdb234f164d0_disk">
Nov 22 04:13:17 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:13:17 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/14600eae-75dc-4ffc-a15a-bdb234f164d0_disk.config">
Nov 22 04:13:17 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:13:17 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:13:97:6f"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <target dev="tap2f6ebd6c-b4"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0/console.log" append="off"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:13:17 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:13:17 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:13:17 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:13:17 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.382 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Preparing to wait for external event network-vif-plugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.383 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Acquiring lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.383 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.383 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.384 253665 DEBUG nova.virt.libvirt.vif [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:13:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1301305920',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1301305920',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1301305920',id=29,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcedb2f9ed6e43dfa8ecc3854373b0b5',ramdisk_id='',reservation_id='r-febfj4xt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-251054
159',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-251054159-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:13:12Z,user_data=None,user_id='96cac95dc532449d964ffb3705dae943',uuid=14600eae-75dc-4ffc-a15a-bdb234f164d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.384 253665 DEBUG nova.network.os_vif_util [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converting VIF {"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.385 253665 DEBUG nova.network.os_vif_util [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:97:6f,bridge_name='br-int',has_traffic_filtering=True,id=2f6ebd6c-b451-455e-b4aa-19a0ccf66a44,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6ebd6c-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.385 253665 DEBUG os_vif [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:97:6f,bridge_name='br-int',has_traffic_filtering=True,id=2f6ebd6c-b451-455e-b4aa-19a0ccf66a44,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6ebd6c-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.385 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.386 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.386 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.389 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f6ebd6c-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.389 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2f6ebd6c-b4, col_values=(('external_ids', {'iface-id': '2f6ebd6c-b451-455e-b4aa-19a0ccf66a44', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:97:6f', 'vm-uuid': '14600eae-75dc-4ffc-a15a-bdb234f164d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:17 np0005532048 NetworkManager[48920]: <info>  [1763802797.3931] manager: (tap2f6ebd6c-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.395 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.397 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.398 253665 INFO os_vif [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:97:6f,bridge_name='br-int',has_traffic_filtering=True,id=2f6ebd6c-b451-455e-b4aa-19a0ccf66a44,network=Network(691e79ad-da5d-4276-aa7d-732c2aaedbff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6ebd6c-b4')#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.447 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.448 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.448 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No VIF found with MAC fa:16:3e:13:97:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.449 253665 INFO nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Using config drive#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.471 253665 DEBUG nova.storage.rbd_utils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.498 253665 DEBUG nova.compute.manager [req-3268fe25-e900-4fc7-97d1-50838318d3c2 req-f0611aa7-2f6b-40ec-ae44-1581241ffc35 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Received event network-changed-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.499 253665 DEBUG nova.compute.manager [req-3268fe25-e900-4fc7-97d1-50838318d3c2 req-f0611aa7-2f6b-40ec-ae44-1581241ffc35 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Refreshing instance network info cache due to event network-changed-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.499 253665 DEBUG oslo_concurrency.lockutils [req-3268fe25-e900-4fc7-97d1-50838318d3c2 req-f0611aa7-2f6b-40ec-ae44-1581241ffc35 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-14600eae-75dc-4ffc-a15a-bdb234f164d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.499 253665 DEBUG oslo_concurrency.lockutils [req-3268fe25-e900-4fc7-97d1-50838318d3c2 req-f0611aa7-2f6b-40ec-ae44-1581241ffc35 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-14600eae-75dc-4ffc-a15a-bdb234f164d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.500 253665 DEBUG nova.network.neutron [req-3268fe25-e900-4fc7-97d1-50838318d3c2 req-f0611aa7-2f6b-40ec-ae44-1581241ffc35 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Refreshing network info cache for port 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:13:17 np0005532048 nova_compute[253661]: 2025-11-22 09:13:17.632 253665 DEBUG nova.compute.manager [req-2b04d1dc-d025-40dd-81c0-d00340a14b68 req-473de715-0cba-470d-9217-82bd80f303c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-deleted-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.117 253665 INFO nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Creating config drive at /var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0/disk.config#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.124 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1oz3vvrk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.240 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.240 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.270 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1oz3vvrk" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.296 253665 DEBUG nova.storage.rbd_utils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] rbd image 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.302 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0/disk.config 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:18 np0005532048 podman[292465]: 2025-11-22 09:13:18.375658991 +0000 UTC m=+0.063269026 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:13:18 np0005532048 podman[292453]: 2025-11-22 09:13:18.391834478 +0000 UTC m=+0.079703740 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.414 253665 DEBUG nova.network.neutron [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Updating instance_info_cache with network_info: [{"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.445 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Releasing lock "refresh_cache-fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.445 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Instance network_info: |[{"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.448 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Start _get_guest_xml network_info=[{"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.454 253665 WARNING nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.458 253665 DEBUG nova.virt.libvirt.host [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.459 253665 DEBUG nova.virt.libvirt.host [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.464 253665 DEBUG nova.virt.libvirt.host [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.465 253665 DEBUG nova.virt.libvirt.host [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.465 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.465 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.466 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.466 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.466 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.466 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.467 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.467 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.467 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.467 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.467 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.468 253665 DEBUG nova.virt.hardware [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.470 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.495 253665 DEBUG oslo_concurrency.processutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0/disk.config 14600eae-75dc-4ffc-a15a-bdb234f164d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.496 253665 INFO nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Deleting local config drive /var/lib/nova/instances/14600eae-75dc-4ffc-a15a-bdb234f164d0/disk.config because it was imported into RBD.#033[00m
Nov 22 04:13:18 np0005532048 kernel: tap2f6ebd6c-b4: entered promiscuous mode
Nov 22 04:13:18 np0005532048 NetworkManager[48920]: <info>  [1763802798.5517] manager: (tap2f6ebd6c-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Nov 22 04:13:18 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:18Z|00199|binding|INFO|Claiming lport 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 for this chassis.
Nov 22 04:13:18 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:18Z|00200|binding|INFO|2f6ebd6c-b451-455e-b4aa-19a0ccf66a44: Claiming fa:16:3e:13:97:6f 10.100.0.11
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.555 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:18 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:18Z|00201|binding|INFO|Setting lport 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 ovn-installed in OVS
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.572 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.576 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:18 np0005532048 systemd-machined[215941]: New machine qemu-34-instance-0000001d.
Nov 22 04:13:18 np0005532048 systemd[1]: Started Virtual Machine qemu-34-instance-0000001d.
Nov 22 04:13:18 np0005532048 systemd-udevd[292552]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.618 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:97:6f 10.100.0.11'], port_security=['fa:16:3e:13:97:6f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '14600eae-75dc-4ffc-a15a-bdb234f164d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcedb2f9ed6e43dfa8ecc3854373b0b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc00b739-f7be-45ec-82d1-43cf2c8c1544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d529718-199e-4cab-8a60-f03c6cb8db18, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2f6ebd6c-b451-455e-b4aa-19a0ccf66a44) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.619 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 in datapath 691e79ad-da5d-4276-aa7d-732c2aaedbff bound to our chassis#033[00m
Nov 22 04:13:18 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:18Z|00202|binding|INFO|Setting lport 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 up in Southbound
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.621 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 691e79ad-da5d-4276-aa7d-732c2aaedbff#033[00m
Nov 22 04:13:18 np0005532048 NetworkManager[48920]: <info>  [1763802798.6302] device (tap2f6ebd6c-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:13:18 np0005532048 NetworkManager[48920]: <info>  [1763802798.6311] device (tap2f6ebd6c-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.633 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[213ea8e8-12eb-4713-a196-7014fdbd14bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.635 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap691e79ad-d1 in ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.636 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap691e79ad-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.636 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[545cc5f4-b741-4a08-b7e5-0454c27d3d92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.637 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a184a5b9-18eb-46c8-8a6a-67b6e48af03a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.649 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[c82a2fee-05c8-44a7-94ec-c10bf597f2db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.679 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[03befbdf-b58d-4a87-b845-da125bb92d24]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.725 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fe334a07-8aa6-4354-972a-c8d9a0b5dbc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:18 np0005532048 NetworkManager[48920]: <info>  [1763802798.7361] manager: (tap691e79ad-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/97)
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.734 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3af42d95-b4a4-4575-8cae-0f859ef4d9bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.778 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[da52d9a4-dbca-4b5d-8794-21cb2cfe250d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.782 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[78f75cda-2b97-425d-b097-66066cf46284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:18 np0005532048 NetworkManager[48920]: <info>  [1763802798.8127] device (tap691e79ad-d0): carrier: link connected
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.819 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[99b21347-6225-4793-b04d-c743e3ff69c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.839 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be183257-1eef-461e-9047-a2aa148ee3be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap691e79ad-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:f9:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564955, 'reachable_time': 24605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292587, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.860 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3a45bcca-8db3-461c-ac47-79f5ef772a53]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:f9e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 564955, 'tstamp': 564955}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292588, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.878 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3d131c23-ac92-49d1-a710-427a07af4291]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap691e79ad-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:f9:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564955, 'reachable_time': 24605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292589, 'error': None, 'target': 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.920 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9a667a99-5909-405e-919a-b3fb828caa08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:13:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3378602376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.986 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c32e0b8a-b2d1-4663-b8b7-f90b8fa90ac5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.987 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap691e79ad-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.987 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.988 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap691e79ad-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.989 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:18 np0005532048 NetworkManager[48920]: <info>  [1763802798.9905] manager: (tap691e79ad-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Nov 22 04:13:18 np0005532048 kernel: tap691e79ad-d0: entered promiscuous mode
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.991 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.994 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap691e79ad-d0, col_values=(('external_ids', {'iface-id': '6b990e4f-df30-4562-9550-e3e0ea811f07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:18 np0005532048 nova_compute[253661]: 2025-11-22 09:13:18.996 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:18 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:18Z|00203|binding|INFO|Releasing lport 6b990e4f-df30-4562-9550-e3e0ea811f07 from this chassis (sb_readonly=0)
Nov 22 04:13:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:18.998 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/691e79ad-da5d-4276-aa7d-732c2aaedbff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/691e79ad-da5d-4276-aa7d-732c2aaedbff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:19.000 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e650f153-bf89-4aaa-bce8-1b7c6d594a9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:19.001 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-691e79ad-da5d-4276-aa7d-732c2aaedbff
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/691e79ad-da5d-4276-aa7d-732c2aaedbff.pid.haproxy
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 691e79ad-da5d-4276-aa7d-732c2aaedbff
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:13:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:19.002 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'env', 'PROCESS_TAG=haproxy-691e79ad-da5d-4276-aa7d-732c2aaedbff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/691e79ad-da5d-4276-aa7d-732c2aaedbff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.016 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.035 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.064 253665 DEBUG nova.storage.rbd_utils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.070 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 213 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 5.0 MiB/s wr, 196 op/s
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.238 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.239 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:13:19 np0005532048 podman[292660]: 2025-11-22 09:13:19.414545765 +0000 UTC m=+0.060382896 container create c850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 04:13:19 np0005532048 systemd[1]: Started libpod-conmon-c850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438.scope.
Nov 22 04:13:19 np0005532048 podman[292660]: 2025-11-22 09:13:19.378783126 +0000 UTC m=+0.024620277 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:13:19 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:13:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a101ddd0cbf37b355cc7ffa96e374d86e06f82052cc49d6d94999ac85daf5cc3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:19 np0005532048 podman[292660]: 2025-11-22 09:13:19.5376685 +0000 UTC m=+0.183505661 container init c850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 04:13:19 np0005532048 podman[292660]: 2025-11-22 09:13:19.546009876 +0000 UTC m=+0.191847007 container start c850021e6737f69f011da7912ccc66e094a01cfb39806c4d1ea96e6b36ee2438 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:13:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:13:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2709526242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:13:19 np0005532048 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[292693]: [NOTICE]   (292716) : New worker (292724) forked
Nov 22 04:13:19 np0005532048 neutron-haproxy-ovnmeta-691e79ad-da5d-4276-aa7d-732c2aaedbff[292693]: [NOTICE]   (292716) : Loading success.
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.580 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.582 253665 DEBUG nova.virt.libvirt.vif [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:13:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2140089311',display_name='tempest-ImagesTestJSON-server-2140089311',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2140089311',id=30,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-63rtx74t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:13:14Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=fd1f1ba2-6963-47bb-8d59-86e2ed015ad1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.582 253665 DEBUG nova.network.os_vif_util [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.583 253665 DEBUG nova.network.os_vif_util [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:82:bb,bridge_name='br-int',has_traffic_filtering=True,id=2c059df4-a5a0-4c31-8485-01ccdea02b01,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c059df4-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.584 253665 DEBUG nova.objects.instance [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'pci_devices' on Instance uuid fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.598 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  <uuid>fd1f1ba2-6963-47bb-8d59-86e2ed015ad1</uuid>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  <name>instance-0000001e</name>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <nova:name>tempest-ImagesTestJSON-server-2140089311</nova:name>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:13:18</nova:creationTime>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:13:19 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:        <nova:user uuid="97872d7ce91947789de976821b771135">tempest-ImagesTestJSON-1798612164-project-member</nova:user>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:        <nova:project uuid="d6a9a80b05bf4bb3acb99c5e55603a36">tempest-ImagesTestJSON-1798612164</nova:project>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:        <nova:port uuid="2c059df4-a5a0-4c31-8485-01ccdea02b01">
Nov 22 04:13:19 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <entry name="serial">fd1f1ba2-6963-47bb-8d59-86e2ed015ad1</entry>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <entry name="uuid">fd1f1ba2-6963-47bb-8d59-86e2ed015ad1</entry>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk">
Nov 22 04:13:19 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:13:19 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk.config">
Nov 22 04:13:19 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:13:19 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:43:82:bb"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <target dev="tap2c059df4-a5"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1/console.log" append="off"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:13:19 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:13:19 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:13:19 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:13:19 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.599 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Preparing to wait for external event network-vif-plugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.600 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.600 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.600 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.601 253665 DEBUG nova.virt.libvirt.vif [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:13:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2140089311',display_name='tempest-ImagesTestJSON-server-2140089311',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2140089311',id=30,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-63rtx74t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-mem
ber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:13:14Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=fd1f1ba2-6963-47bb-8d59-86e2ed015ad1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.601 253665 DEBUG nova.network.os_vif_util [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.602 253665 DEBUG nova.network.os_vif_util [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:82:bb,bridge_name='br-int',has_traffic_filtering=True,id=2c059df4-a5a0-4c31-8485-01ccdea02b01,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c059df4-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.602 253665 DEBUG os_vif [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:82:bb,bridge_name='br-int',has_traffic_filtering=True,id=2c059df4-a5a0-4c31-8485-01ccdea02b01,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c059df4-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.603 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.603 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.603 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.606 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.606 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c059df4-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.607 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c059df4-a5, col_values=(('external_ids', {'iface-id': '2c059df4-a5a0-4c31-8485-01ccdea02b01', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:82:bb', 'vm-uuid': 'fd1f1ba2-6963-47bb-8d59-86e2ed015ad1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.609 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:19 np0005532048 NetworkManager[48920]: <info>  [1763802799.6113] manager: (tap2c059df4-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.612 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.621 253665 INFO os_vif [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:82:bb,bridge_name='br-int',has_traffic_filtering=True,id=2c059df4-a5a0-4c31-8485-01ccdea02b01,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c059df4-a5')#033[00m
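The `vif={...}` payloads in the nova_compute debug lines above are valid JSON once isolated from the surrounding oslo.log text. A minimal, naive extraction helper (it assumes no braces inside string values, and the sample line is abbreviated from the "Converting VIF" entry above):

```python
import json

def extract_json_after(line: str, key: str) -> dict:
    """Brace-match the JSON object that immediately follows `key` in a log line."""
    start = line.index(key) + len(key)
    depth = 0
    for i, ch in enumerate(line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                # Parse exactly the balanced {...} span, ignoring trailing text.
                return json.loads(line[start:i + 1])
    raise ValueError("unbalanced JSON payload")

# Abbreviated sample from the "Converting VIF" debug line above.
sample = ('... Converting VIF {"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", '
          '"devname": "tap2c059df4-a5", "type": "ovs", '
          '"details": {"bridge_name": "br-int"}} nova_to_osvif_vif ...')
vif = extract_json_after(sample, "Converting VIF ")
print(vif["devname"], vif["details"]["bridge_name"])  # tap2c059df4-a5 br-int
```

The same helper works on the full payloads (the `vif={...}` and `Updating instance_info_cache with network_info: [...]` lines), since nova serializes them as JSON with double-quoted keys.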
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.681 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.682 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.683 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No VIF found with MAC fa:16:3e:43:82:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.684 253665 INFO nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Using config drive#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.707 253665 DEBUG nova.storage.rbd_utils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.714 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802799.6844804, 14600eae-75dc-4ffc-a15a-bdb234f164d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.715 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] VM Started (Lifecycle Event)#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.735 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.740 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802799.6848211, 14600eae-75dc-4ffc-a15a-bdb234f164d0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.740 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.758 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.769 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.789 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
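The numeric values in the `Synchronizing instance power state` lines above (`DB power_state: 0, VM power_state: 3`) follow nova's `nova.compute.power_state` constants. A small decoder, with the mapping mirrored from that module:

```python
# Mirrors nova/compute/power_state.py (values 2 and 5 are unused).
POWER_STATE = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}

# From the lifecycle-event sync above: the DB still shows NOSTATE while
# libvirt reports the guest as PAUSED mid-spawn, hence the "pending task" skip.
print(POWER_STATE[0], "->", POWER_STATE[3])  # NOSTATE -> PAUSED
```

The later sync at 09:13:20.250 reports `VM power_state: 1`, i.e. RUNNING, matching the "VM Resumed" lifecycle event.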
Nov 22 04:13:19 np0005532048 nova_compute[253661]: 2025-11-22 09:13:19.797 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.066 253665 DEBUG nova.network.neutron [req-3268fe25-e900-4fc7-97d1-50838318d3c2 req-f0611aa7-2f6b-40ec-ae44-1581241ffc35 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Updated VIF entry in instance network info cache for port 2f6ebd6c-b451-455e-b4aa-19a0ccf66a44. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.066 253665 DEBUG nova.network.neutron [req-3268fe25-e900-4fc7-97d1-50838318d3c2 req-f0611aa7-2f6b-40ec-ae44-1581241ffc35 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Updating instance_info_cache with network_info: [{"id": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "address": "fa:16:3e:13:97:6f", "network": {"id": "691e79ad-da5d-4276-aa7d-732c2aaedbff", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-859281337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcedb2f9ed6e43dfa8ecc3854373b0b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6ebd6c-b4", "ovs_interfaceid": "2f6ebd6c-b451-455e-b4aa-19a0ccf66a44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.082 253665 DEBUG oslo_concurrency.lockutils [req-3268fe25-e900-4fc7-97d1-50838318d3c2 req-f0611aa7-2f6b-40ec-ae44-1581241ffc35 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-14600eae-75dc-4ffc-a15a-bdb234f164d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.110 253665 INFO nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Creating config drive at /var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1/disk.config#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.116 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprd8diaau execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.156 253665 DEBUG nova.compute.manager [req-f65d83ad-748c-41e7-9965-7b7e958e2d44 req-54265a47-4d6c-44f7-80d9-8fbb03246ab7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Received event network-changed-2c059df4-a5a0-4c31-8485-01ccdea02b01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.157 253665 DEBUG nova.compute.manager [req-f65d83ad-748c-41e7-9965-7b7e958e2d44 req-54265a47-4d6c-44f7-80d9-8fbb03246ab7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Refreshing instance network info cache due to event network-changed-2c059df4-a5a0-4c31-8485-01ccdea02b01. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.158 253665 DEBUG oslo_concurrency.lockutils [req-f65d83ad-748c-41e7-9965-7b7e958e2d44 req-54265a47-4d6c-44f7-80d9-8fbb03246ab7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.158 253665 DEBUG oslo_concurrency.lockutils [req-f65d83ad-748c-41e7-9965-7b7e958e2d44 req-54265a47-4d6c-44f7-80d9-8fbb03246ab7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.159 253665 DEBUG nova.network.neutron [req-f65d83ad-748c-41e7-9965-7b7e958e2d44 req-54265a47-4d6c-44f7-80d9-8fbb03246ab7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Refreshing network info cache for port 2c059df4-a5a0-4c31-8485-01ccdea02b01 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.208 253665 DEBUG nova.compute.manager [req-9b8e8c42-912c-4a9b-a7e6-5ee0c22ef4e9 req-31cb1d64-0b40-40d3-84e8-f1a61c08bf7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Received event network-vif-plugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.209 253665 DEBUG oslo_concurrency.lockutils [req-9b8e8c42-912c-4a9b-a7e6-5ee0c22ef4e9 req-31cb1d64-0b40-40d3-84e8-f1a61c08bf7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.209 253665 DEBUG oslo_concurrency.lockutils [req-9b8e8c42-912c-4a9b-a7e6-5ee0c22ef4e9 req-31cb1d64-0b40-40d3-84e8-f1a61c08bf7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.209 253665 DEBUG oslo_concurrency.lockutils [req-9b8e8c42-912c-4a9b-a7e6-5ee0c22ef4e9 req-31cb1d64-0b40-40d3-84e8-f1a61c08bf7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.210 253665 DEBUG nova.compute.manager [req-9b8e8c42-912c-4a9b-a7e6-5ee0c22ef4e9 req-31cb1d64-0b40-40d3-84e8-f1a61c08bf7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Processing event network-vif-plugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.211 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.215 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802800.2150548, 14600eae-75dc-4ffc-a15a-bdb234f164d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.216 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.219 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.229 253665 INFO nova.virt.libvirt.driver [-] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Instance spawned successfully.#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.230 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.245 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.250 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.259 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprd8diaau" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.290 253665 DEBUG nova.storage.rbd_utils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.295 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1/disk.config fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.332 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.339 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.340 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.341 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.341 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.342 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.342 253665 DEBUG nova.virt.libvirt.driver [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.347 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.348 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.349 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.388 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.388 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.389 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.389 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.390 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.439 253665 INFO nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Took 7.41 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.440 253665 DEBUG nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.478 253665 DEBUG oslo_concurrency.processutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1/disk.config fd1f1ba2-6963-47bb-8d59-86e2ed015ad1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.479 253665 INFO nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Deleting local config drive /var/lib/nova/instances/fd1f1ba2-6963-47bb-8d59-86e2ed015ad1/disk.config because it was imported into RBD.#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.498 253665 INFO nova.compute.manager [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Took 9.41 seconds to build instance.#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.515 253665 DEBUG oslo_concurrency.lockutils [None req-bc66dc03-85fa-4c45-b2c1-5757b7ae656f 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:20 np0005532048 kernel: tap2c059df4-a5: entered promiscuous mode
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.539 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:20Z|00204|binding|INFO|Claiming lport 2c059df4-a5a0-4c31-8485-01ccdea02b01 for this chassis.
Nov 22 04:13:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:20Z|00205|binding|INFO|2c059df4-a5a0-4c31-8485-01ccdea02b01: Claiming fa:16:3e:43:82:bb 10.100.0.14
Nov 22 04:13:20 np0005532048 NetworkManager[48920]: <info>  [1763802800.5423] manager: (tap2c059df4-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/100)
Nov 22 04:13:20 np0005532048 systemd-udevd[292583]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.548 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:82:bb 10.100.0.14'], port_security=['fa:16:3e:43:82:bb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'fd1f1ba2-6963-47bb-8d59-86e2ed015ad1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2c059df4-a5a0-4c31-8485-01ccdea02b01) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.550 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2c059df4-a5a0-4c31-8485-01ccdea02b01 in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 bound to our chassis#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.552 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51#033[00m
Nov 22 04:13:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:20Z|00206|binding|INFO|Setting lport 2c059df4-a5a0-4c31-8485-01ccdea02b01 ovn-installed in OVS
Nov 22 04:13:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:20Z|00207|binding|INFO|Setting lport 2c059df4-a5a0-4c31-8485-01ccdea02b01 up in Southbound
Nov 22 04:13:20 np0005532048 NetworkManager[48920]: <info>  [1763802800.5604] device (tap2c059df4-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:13:20 np0005532048 NetworkManager[48920]: <info>  [1763802800.5613] device (tap2c059df4-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.565 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.572 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.576 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d549b8-a028-4d0a-9201-984fe137e92d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.580 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2abeeeb2-21 in ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.584 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2abeeeb2-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.585 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f3bca0f9-93a4-4a06-9fdf-70a57e650cdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.587 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5fb0a5-c681-4de7-bb2a-26c7b56dee01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.601 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8424f25c-17fa-4394-a2cb-14b3a8c1fe90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:20 np0005532048 systemd-machined[215941]: New machine qemu-35-instance-0000001e.
Nov 22 04:13:20 np0005532048 systemd[1]: Started Virtual Machine qemu-35-instance-0000001e.
Nov 22 04:13:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.629 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf32e0d-51be-44d9-96c7-df99eec6a23d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.679 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ac724161-4583-4899-b935-7d03635952af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:20 np0005532048 NetworkManager[48920]: <info>  [1763802800.6900] manager: (tap2abeeeb2-20): new Veth device (/org/freedesktop/NetworkManager/Devices/101)
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.689 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[87e6c37c-2e3d-4626-ba18-647864fe5407]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.745 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d31055-ea22-4f4e-80b0-6adad5c1cd5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.749 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f2c7cb32-7d4b-4250-a963-2eb9934b1b46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:20 np0005532048 NetworkManager[48920]: <info>  [1763802800.7773] device (tap2abeeeb2-20): carrier: link connected
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.782 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8439b8dd-1868-434b-b4ad-1390867c385f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.801 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b630501f-a917-472b-955d-4a6aec5f70d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565151, 'reachable_time': 33933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292843, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.825 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[643a6f30-e795-4a2f-9750-4afbdcf2a19a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:bff7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565151, 'tstamp': 565151}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292844, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.856 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[911ab794-5242-4727-8a53-2c1997e5585b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565151, 'reachable_time': 33933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292845, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:13:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3940237445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.896 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[74c7380c-ecf3-45a2-9e45-8a2fa1710ac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:20 np0005532048 nova_compute[253661]: 2025-11-22 09:13:20.921 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.987 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[69e90d3f-1c5d-43b1-9d13-3bc8eaff96e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.989 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.989 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:13:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:20.996 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2abeeeb2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.000 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.000 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.002 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:21 np0005532048 NetworkManager[48920]: <info>  [1763802801.0034] manager: (tap2abeeeb2-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Nov 22 04:13:21 np0005532048 kernel: tap2abeeeb2-20: entered promiscuous mode
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:21.008 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2abeeeb2-20, col_values=(('external_ids', {'iface-id': '3249a299-7633-4c70-aa35-5f648ecb0d7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.010 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.011 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.012 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:13:21 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:21Z|00208|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.012 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:21.014 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:21.016 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4850eb3a-5c5f-463b-a3af-97a2bab9be69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:21.017 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:13:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:21.018 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'env', 'PROCESS_TAG=haproxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.019 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.020 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.029 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.165 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802801.164695, fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.165 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] VM Started (Lifecycle Event)#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.188 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.191 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802801.165603, fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.191 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.205 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.208 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.225 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:13:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 213 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 4.3 MiB/s wr, 168 op/s
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.324 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.326 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4037MB free_disk=59.90109634399414GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.326 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.326 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.408 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 3c70b093-a92a-4781-8e32-2a7eefde4a43 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.408 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 14600eae-75dc-4ffc-a15a-bdb234f164d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.408 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.408 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.409 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:13:21 np0005532048 podman[292921]: 2025-11-22 09:13:21.484712485 +0000 UTC m=+0.087699496 container create 75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.492 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:13:21 np0005532048 podman[292921]: 2025-11-22 09:13:21.421336088 +0000 UTC m=+0.024323099 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:13:21 np0005532048 systemd[1]: Started libpod-conmon-75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad.scope.
Nov 22 04:13:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:13:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05db8838f8babda2674b21fdd9cba56d8e24a25007d525322ca9fed2068a5d9c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:21 np0005532048 podman[292921]: 2025-11-22 09:13:21.622217195 +0000 UTC m=+0.225204256 container init 75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:13:21 np0005532048 podman[292921]: 2025-11-22 09:13:21.628549071 +0000 UTC m=+0.231536082 container start 75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 04:13:21 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[292937]: [NOTICE]   (292942) : New worker (292960) forked
Nov 22 04:13:21 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[292937]: [NOTICE]   (292942) : Loading success.
Nov 22 04:13:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:13:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3587739031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.960 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.965 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:13:21 np0005532048 nova_compute[253661]: 2025-11-22 09:13:21.982 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.005 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.006 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.044 253665 DEBUG nova.network.neutron [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.064 253665 DEBUG oslo_concurrency.lockutils [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.066 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.066 253665 DEBUG nova.network.neutron [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Refreshing network info cache for port 07d520ca-fd4a-49e6-b52e-ee9e8208b902 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.070 253665 DEBUG nova.virt.libvirt.vif [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.070 253665 DEBUG nova.network.os_vif_util [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.071 253665 DEBUG nova.network.os_vif_util [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2d:f5,bridge_name='br-int',has_traffic_filtering=True,id=07d520ca-fd4a-49e6-b52e-ee9e8208b902,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07d520ca-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.071 253665 DEBUG os_vif [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2d:f5,bridge_name='br-int',has_traffic_filtering=True,id=07d520ca-fd4a-49e6-b52e-ee9e8208b902,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07d520ca-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.072 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.073 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.073 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.077 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.078 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07d520ca-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.079 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07d520ca-fd, col_values=(('external_ids', {'iface-id': '07d520ca-fd4a-49e6-b52e-ee9e8208b902', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:2d:f5', 'vm-uuid': '3c70b093-a92a-4781-8e32-2a7eefde4a43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:22 np0005532048 NetworkManager[48920]: <info>  [1763802802.0822] manager: (tap07d520ca-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.086 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.089 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.093 253665 INFO os_vif [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2d:f5,bridge_name='br-int',has_traffic_filtering=True,id=07d520ca-fd4a-49e6-b52e-ee9e8208b902,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07d520ca-fd')#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.094 253665 DEBUG nova.virt.libvirt.vif [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.095 253665 DEBUG nova.network.os_vif_util [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.095 253665 DEBUG nova.network.os_vif_util [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:2d:f5,bridge_name='br-int',has_traffic_filtering=True,id=07d520ca-fd4a-49e6-b52e-ee9e8208b902,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07d520ca-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.100 253665 DEBUG nova.virt.libvirt.guest [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] attach device xml: <interface type="ethernet">
Nov 22 04:13:22 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:cd:2d:f5"/>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:  <target dev="tap07d520ca-fd"/>
Nov 22 04:13:22 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:13:22 np0005532048 nova_compute[253661]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 22 04:13:22 np0005532048 kernel: tap07d520ca-fd: entered promiscuous mode
Nov 22 04:13:22 np0005532048 NetworkManager[48920]: <info>  [1763802802.1144] manager: (tap07d520ca-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.119 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:22Z|00209|binding|INFO|Claiming lport 07d520ca-fd4a-49e6-b52e-ee9e8208b902 for this chassis.
Nov 22 04:13:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:22Z|00210|binding|INFO|07d520ca-fd4a-49e6-b52e-ee9e8208b902: Claiming fa:16:3e:cd:2d:f5 10.100.0.8
Nov 22 04:13:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.130 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:2d:f5 10.100.0.8'], port_security=['fa:16:3e:cd:2d:f5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-202407542', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3c70b093-a92a-4781-8e32-2a7eefde4a43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-202407542', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=07d520ca-fd4a-49e6-b52e-ee9e8208b902) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:13:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.131 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 07d520ca-fd4a-49e6-b52e-ee9e8208b902 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 bound to our chassis#033[00m
Nov 22 04:13:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.133 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00#033[00m
Nov 22 04:13:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:22Z|00211|binding|INFO|Setting lport 07d520ca-fd4a-49e6-b52e-ee9e8208b902 ovn-installed in OVS
Nov 22 04:13:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:22Z|00212|binding|INFO|Setting lport 07d520ca-fd4a-49e6-b52e-ee9e8208b902 up in Southbound
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.149 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.155 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.156 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[44b03c80-e42c-4699-8be7-449ed9b68c44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:22 np0005532048 systemd-udevd[292989]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.171 253665 DEBUG nova.network.neutron [req-f65d83ad-748c-41e7-9965-7b7e958e2d44 req-54265a47-4d6c-44f7-80d9-8fbb03246ab7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Updated VIF entry in instance network info cache for port 2c059df4-a5a0-4c31-8485-01ccdea02b01. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.172 253665 DEBUG nova.network.neutron [req-f65d83ad-748c-41e7-9965-7b7e958e2d44 req-54265a47-4d6c-44f7-80d9-8fbb03246ab7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Updating instance_info_cache with network_info: [{"id": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "address": "fa:16:3e:43:82:bb", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c059df4-a5", "ovs_interfaceid": "2c059df4-a5a0-4c31-8485-01ccdea02b01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:13:22 np0005532048 NetworkManager[48920]: <info>  [1763802802.1897] device (tap07d520ca-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:13:22 np0005532048 NetworkManager[48920]: <info>  [1763802802.1906] device (tap07d520ca-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.196 253665 DEBUG oslo_concurrency.lockutils [req-f65d83ad-748c-41e7-9965-7b7e958e2d44 req-54265a47-4d6c-44f7-80d9-8fbb03246ab7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:13:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.202 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[97aeb580-e00c-4bce-a5aa-ac1d8c147f1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.206 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fe76e5de-06cc-4bd1-aba8-5c755178d5af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.215 253665 DEBUG nova.virt.libvirt.driver [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.216 253665 DEBUG nova.virt.libvirt.driver [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.216 253665 DEBUG nova.virt.libvirt.driver [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:78:3a:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.216 253665 DEBUG nova.virt.libvirt.driver [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:97:0f:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.216 253665 DEBUG nova.virt.libvirt.driver [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:a6:fb:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.216 253665 DEBUG nova.virt.libvirt.driver [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:cd:2d:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:13:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.240 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3e18a8b1-302e-4a0a-b4e2-22568d815f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.246 253665 DEBUG nova.virt.libvirt.guest [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:13:22 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:  <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:13:22</nova:creationTime>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:13:22 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:    <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 04:13:22 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:    <nova:port uuid="f1f391af-c757-4aab-b0ce-ddad3dab55e7">
Nov 22 04:13:22 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:    <nova:port uuid="995224e6-d1ff-4d74-bca5-3996eb4d404d">
Nov 22 04:13:22 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:    <nova:port uuid="07d520ca-fd4a-49e6-b52e-ee9e8208b902">
Nov 22 04:13:22 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:22 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:13:22 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:13:22 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 22 04:13:22 np0005532048 podman[292977]: 2025-11-22 09:13:22.261973269 +0000 UTC m=+0.114543537 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 04:13:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.264 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06701e47-eb0a-4643-8383-0690095ab53f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 698, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 698, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558612, 'reachable_time': 36035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293009, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.272 253665 DEBUG oslo_concurrency.lockutils [None req-1d211294-1043-4da6-90a7-90898283c483 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-07d520ca-fd4a-49e6-b52e-ee9e8208b902" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 11.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.285 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e63e2ce0-5bf1-43dd-a42c-166679ae12c5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558628, 'tstamp': 558628}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293011, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558633, 'tstamp': 558633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293011, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.287 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.289 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.290 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.291 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:13:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.291 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:22.291 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.293 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:13:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:13:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:13:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:13:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:13:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.886 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:13:22 np0005532048 nova_compute[253661]: 2025-11-22 09:13:22.886 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.042 253665 DEBUG nova.compute.manager [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Received event network-vif-plugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.043 253665 DEBUG oslo_concurrency.lockutils [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.043 253665 DEBUG oslo_concurrency.lockutils [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.043 253665 DEBUG oslo_concurrency.lockutils [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "14600eae-75dc-4ffc-a15a-bdb234f164d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.043 253665 DEBUG nova.compute.manager [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] No waiting events found dispatching network-vif-plugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.044 253665 WARNING nova.compute.manager [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Received unexpected event network-vif-plugged-2f6ebd6c-b451-455e-b4aa-19a0ccf66a44 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.044 253665 DEBUG nova.compute.manager [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Received event network-vif-plugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.044 253665 DEBUG oslo_concurrency.lockutils [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.044 253665 DEBUG oslo_concurrency.lockutils [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.044 253665 DEBUG oslo_concurrency.lockutils [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.044 253665 DEBUG nova.compute.manager [req-12223a02-09f6-48d5-af47-1f50cd25fad7 req-0425a448-e500-4180-a420-f43ed41ad18d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Processing event network-vif-plugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.045 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.051 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802803.0509188, fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.051 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.053 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.058 253665 INFO nova.virt.libvirt.driver [-] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Instance spawned successfully.#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.058 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.073 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.080 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.084 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.084 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.084 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.084 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.085 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.085 253665 DEBUG nova.virt.libvirt.driver [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.119 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.155 253665 INFO nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Took 8.38 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.156 253665 DEBUG nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.224 253665 INFO nova.compute.manager [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Took 9.48 seconds to build instance.#033[00m
Nov 22 04:13:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 213 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 313 KiB/s rd, 4.3 MiB/s wr, 169 op/s
Nov 22 04:13:23 np0005532048 nova_compute[253661]: 2025-11-22 09:13:23.239 253665 DEBUG oslo_concurrency.lockutils [None req-0bcb9aaa-d4b2-45e5-a8ca-abb929941ac3 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:24 np0005532048 nova_compute[253661]: 2025-11-22 09:13:24.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:13:24 np0005532048 nova_compute[253661]: 2025-11-22 09:13:24.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:13:24 np0005532048 nova_compute[253661]: 2025-11-22 09:13:24.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:13:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:24Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cd:2d:f5 10.100.0.8
Nov 22 04:13:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:24Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cd:2d:f5 10.100.0.8
Nov 22 04:13:24 np0005532048 nova_compute[253661]: 2025-11-22 09:13:24.801 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:24 np0005532048 nova_compute[253661]: 2025-11-22 09:13:24.855 253665 DEBUG nova.network.neutron [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updated VIF entry in instance network info cache for port 07d520ca-fd4a-49e6-b52e-ee9e8208b902. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:13:24 np0005532048 nova_compute[253661]: 2025-11-22 09:13:24.857 253665 DEBUG nova.network.neutron [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:13:24 np0005532048 nova_compute[253661]: 2025-11-22 09:13:24.878 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:13:24 np0005532048 nova_compute[253661]: 2025-11-22 09:13:24.879 253665 DEBUG nova.compute.manager [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:24 np0005532048 nova_compute[253661]: 2025-11-22 09:13:24.879 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:24 np0005532048 nova_compute[253661]: 2025-11-22 09:13:24.880 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:24 np0005532048 nova_compute[253661]: 2025-11-22 09:13:24.881 253665 DEBUG oslo_concurrency.lockutils [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3ae08a2f-348c-406b-8ffc-9acb8a542e1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:24 np0005532048 nova_compute[253661]: 2025-11-22 09:13:24.881 253665 DEBUG nova.compute.manager [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] No waiting events found dispatching network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:13:24 np0005532048 nova_compute[253661]: 2025-11-22 09:13:24.882 253665 WARNING nova.compute.manager [req-13346e61-75ed-456e-80f9-8c6503aa138e req-c764a82d-670f-4eef-9f94-a89a09c3809a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3ae08a2f-348c-406b-8ffc-9acb8a542e1c] Received unexpected event network-vif-plugged-716b716d-2ee2-44e7-9850-c10854634f77 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.142 253665 DEBUG oslo_concurrency.lockutils [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-f1f391af-c757-4aab-b0ce-ddad3dab55e7" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.143 253665 DEBUG oslo_concurrency.lockutils [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-3c70b093-a92a-4781-8e32-2a7eefde4a43-f1f391af-c757-4aab-b0ce-ddad3dab55e7" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.159 253665 DEBUG nova.objects.instance [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.181 253665 DEBUG nova.virt.libvirt.vif [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.183 253665 DEBUG nova.network.os_vif_util [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.184 253665 DEBUG nova.network.os_vif_util [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.190 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.193 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.197 253665 DEBUG nova.virt.libvirt.driver [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Attempting to detach device tapf1f391af-c7 from instance 3c70b093-a92a-4781-8e32-2a7eefde4a43 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.197 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] detach device xml: <interface type="ethernet">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:97:0f:1c"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <target dev="tapf1f391af-c7"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:13:25 np0005532048 nova_compute[253661]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 22 04:13:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 214 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 209 op/s
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.282 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.289 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface>not found in domain: <domain type='kvm' id='30'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <name>instance-0000001a</name>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <uuid>3c70b093-a92a-4781-8e32-2a7eefde4a43</uuid>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:13:22</nova:creationTime>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:port uuid="f1f391af-c757-4aab-b0ce-ddad3dab55e7">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:port uuid="995224e6-d1ff-4d74-bca5-3996eb4d404d">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:port uuid="07d520ca-fd4a-49e6-b52e-ee9e8208b902">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:13:25 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <resource>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <partition>/machine</partition>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </resource>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <entry name='serial'>3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <entry name='uuid'>3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <cpu mode='custom' match='exact' check='full'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <vendor>AMD</vendor>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='x2apic'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc-deadline'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='hypervisor'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc_adjust'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='spec-ctrl'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='stibp'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='ssbd'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='cmp_legacy'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='overflow-recov'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='succor'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='ibrs'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='amd-ssbd'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='virt-ssbd'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='lbrv'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='tsc-scale'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='vmcb-clean'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='flushbyasid'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pause-filter'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pfthreshold'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='xsaves'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svm'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='topoext'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='npt'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='nrip-save'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk' index='2'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='virtio-disk0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config' index='1'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='sata0-0-0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pcie.0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.1'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.2'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.3'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.4'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.5'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.6'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.7'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.8'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.9'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.10'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.11'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.12'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.13'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.14'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.15'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.16'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.17'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.18'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.19'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.20'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.21'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='22' port='0x25'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.22'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='23' port='0x26'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.23'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='24' port='0x27'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.24'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='25' port='0x28'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.25'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-pci-bridge'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.26'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='usb'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='sata' index='0'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='ide'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:78:3a:a5'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target dev='tapb82d7759-7f'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='net0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:97:0f:1c'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target dev='tapf1f391af-c7'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='net1'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:a6:fb:57'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target dev='tap995224e6-d1'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='net2'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:cd:2d:f5'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target dev='tap07d520ca-fd'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='net3'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <serial type='pty'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log' append='off'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target type='isa-serial' port='0'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:        <model name='isa-serial'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      </target>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <console type='pty' tty='/dev/pts/0'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log' append='off'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target type='serial' port='0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </console>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <input type='tablet' bus='usb'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='input0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='usb' bus='0' port='1'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <input type='mouse' bus='ps2'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='input1'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <input type='keyboard' bus='ps2'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='input2'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <listen type='address' address='::0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <audio id='1' type='none'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model type='virtio' heads='1' primary='yes'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='video0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <watchdog model='itco' action='reset'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='watchdog0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </watchdog>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <memballoon model='virtio'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <stats period='10'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='balloon0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <rng model='virtio'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <backend model='random'>/dev/urandom</backend>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='rng0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <label>system_u:system_r:svirt_t:s0:c214,c646</label>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c214,c646</imagelabel>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <label>+107:+107</label>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <imagelabel>+107:+107</imagelabel>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:13:25 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:13:25 np0005532048 nova_compute[253661]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.300 253665 INFO nova.virt.libvirt.driver [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully detached device tapf1f391af-c7 from instance 3c70b093-a92a-4781-8e32-2a7eefde4a43 from the persistent domain config.#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.300 253665 DEBUG nova.virt.libvirt.driver [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] (1/8): Attempting to detach device tapf1f391af-c7 with device alias net1 from instance 3c70b093-a92a-4781-8e32-2a7eefde4a43 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.301 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] detach device xml: <interface type="ethernet">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:97:0f:1c"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <target dev="tapf1f391af-c7"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:13:25 np0005532048 nova_compute[253661]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.318 253665 DEBUG nova.objects.instance [None req-4a4278ff-8271-4e79-a6af-308ddef6f082 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'pci_devices' on Instance uuid fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:13:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:25Z|00213|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 04:13:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:25Z|00214|binding|INFO|Releasing lport 6b990e4f-df30-4562-9550-e3e0ea811f07 from this chassis (sb_readonly=0)
Nov 22 04:13:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:25Z|00215|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.363 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802805.3435202, fd1f1ba2-6963-47bb-8d59-86e2ed015ad1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.365 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.371 253665 DEBUG nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Received event network-vif-plugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.372 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.372 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.372 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "fd1f1ba2-6963-47bb-8d59-86e2ed015ad1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.373 253665 DEBUG nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] No waiting events found dispatching network-vif-plugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.373 253665 WARNING nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Received unexpected event network-vif-plugged-2c059df4-a5a0-4c31-8485-01ccdea02b01 for instance with vm_state active and task_state suspending.#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.373 253665 DEBUG nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-07d520ca-fd4a-49e6-b52e-ee9e8208b902 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.373 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.373 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.374 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.374 253665 DEBUG nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-plugged-07d520ca-fd4a-49e6-b52e-ee9e8208b902 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.374 253665 WARNING nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-plugged-07d520ca-fd4a-49e6-b52e-ee9e8208b902 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.374 253665 DEBUG nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-plugged-07d520ca-fd4a-49e6-b52e-ee9e8208b902 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.375 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.375 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.375 253665 DEBUG oslo_concurrency.lockutils [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3c70b093-a92a-4781-8e32-2a7eefde4a43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.375 253665 DEBUG nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] No waiting events found dispatching network-vif-plugged-07d520ca-fd4a-49e6-b52e-ee9e8208b902 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.375 253665 WARNING nova.compute.manager [req-fa320de4-98ff-4a60-8852-7a22691edf3d req-26c52d2a-50e5-487c-8c4a-f192dbec53f7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received unexpected event network-vif-plugged-07d520ca-fd4a-49e6-b52e-ee9e8208b902 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.388 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.394 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.417 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.419 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 22 04:13:25 np0005532048 kernel: tapf1f391af-c7 (unregistering): left promiscuous mode
Nov 22 04:13:25 np0005532048 NetworkManager[48920]: <info>  [1763802805.4357] device (tapf1f391af-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:13:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:25Z|00216|binding|INFO|Releasing lport f1f391af-c757-4aab-b0ce-ddad3dab55e7 from this chassis (sb_readonly=0)
Nov 22 04:13:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:25Z|00217|binding|INFO|Setting lport f1f391af-c757-4aab-b0ce-ddad3dab55e7 down in Southbound
Nov 22 04:13:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:25Z|00218|binding|INFO|Removing iface tapf1f391af-c7 ovn-installed in OVS
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.448 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.453 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:0f:1c 10.100.0.13'], port_security=['fa:16:3e:97:0f:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3c70b093-a92a-4781-8e32-2a7eefde4a43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f1f391af-c757-4aab-b0ce-ddad3dab55e7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.455 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f1f391af-c757-4aab-b0ce-ddad3dab55e7 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.454 253665 DEBUG nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Received event <DeviceRemovedEvent: 1763802805.4527683, 3c70b093-a92a-4781-8e32-2a7eefde4a43 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.457 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.460 253665 DEBUG nova.virt.libvirt.driver [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Start waiting for the detach event from libvirt for device tapf1f391af-c7 with device alias net1 for instance 3c70b093-a92a-4781-8e32-2a7eefde4a43 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.461 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.466 253665 DEBUG nova.compute.manager [None req-a5301016-9ccf-43d9-b037-76730c1e5bde 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.467 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.474 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface>not found in domain: <domain type='kvm' id='30'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <name>instance-0000001a</name>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <uuid>3c70b093-a92a-4781-8e32-2a7eefde4a43</uuid>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:13:22</nova:creationTime>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:port uuid="f1f391af-c757-4aab-b0ce-ddad3dab55e7">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:port uuid="995224e6-d1ff-4d74-bca5-3996eb4d404d">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:port uuid="07d520ca-fd4a-49e6-b52e-ee9e8208b902">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:13:25 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <resource>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <partition>/machine</partition>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </resource>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <entry name='serial'>3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <entry name='uuid'>3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <cpu mode='custom' match='exact' check='full'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <vendor>AMD</vendor>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='x2apic'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc-deadline'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='hypervisor'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc_adjust'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='spec-ctrl'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='stibp'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='ssbd'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='cmp_legacy'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='overflow-recov'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='succor'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='ibrs'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='amd-ssbd'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='virt-ssbd'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='lbrv'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='tsc-scale'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='vmcb-clean'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='flushbyasid'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pause-filter'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pfthreshold'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='xsaves'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svm'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='require' name='topoext'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='npt'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <feature policy='disable' name='nrip-save'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk' index='2'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='virtio-disk0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config' index='1'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='sata0-0-0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pcie.0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.1'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.2'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.3'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.4'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.5'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.6'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.7'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.8'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.9'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.10'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.11'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.12'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.13'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.14'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.15'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.16'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.17'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.18'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.19'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.20'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.21'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='22' port='0x25'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.22'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='23' port='0x26'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.23'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='24' port='0x27'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.24'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target chassis='25' port='0x28'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.25'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model name='pcie-pci-bridge'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='pci.26'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='usb'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <controller type='sata' index='0'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='ide'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:78:3a:a5'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target dev='tapb82d7759-7f'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='net0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:a6:fb:57'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target dev='tap995224e6-d1'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='net2'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:cd:2d:f5'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target dev='tap07d520ca-fd'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='net3'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <serial type='pty'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log' append='off'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target type='isa-serial' port='0'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:        <model name='isa-serial'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      </target>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <console type='pty' tty='/dev/pts/0'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/3c70b093-a92a-4781-8e32-2a7eefde4a43/console.log' append='off'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <target type='serial' port='0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </console>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <input type='tablet' bus='usb'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='input0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='usb' bus='0' port='1'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <input type='mouse' bus='ps2'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='input1'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <input type='keyboard' bus='ps2'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='input2'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <listen type='address' address='::0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <audio id='1' type='none'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <model type='virtio' heads='1' primary='yes'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='video0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <watchdog model='itco' action='reset'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='watchdog0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </watchdog>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <memballoon model='virtio'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <stats period='10'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='balloon0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <rng model='virtio'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <backend model='random'>/dev/urandom</backend>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <alias name='rng0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <label>system_u:system_r:svirt_t:s0:c214,c646</label>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c214,c646</imagelabel>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <label>+107:+107</label>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <imagelabel>+107:+107</imagelabel>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:13:25 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:13:25 np0005532048 nova_compute[253661]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.474 253665 INFO nova.virt.libvirt.driver [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully detached device tapf1f391af-c7 from instance 3c70b093-a92a-4781-8e32-2a7eefde4a43 from the live domain config.#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.475 253665 DEBUG nova.virt.libvirt.vif [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.476 253665 DEBUG nova.network.os_vif_util [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.478 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4ab32280-efed-4f8d-90a2-2fd15578a4e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.480 253665 DEBUG nova.network.os_vif_util [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.480 253665 DEBUG os_vif [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.483 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.483 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1f391af-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.485 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.489 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.494 253665 INFO os_vif [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7')#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.495 253665 DEBUG nova.virt.libvirt.guest [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:13:25</nova:creationTime>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:port uuid="995224e6-d1ff-4d74-bca5-3996eb4d404d">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    <nova:port uuid="07d520ca-fd4a-49e6-b52e-ee9e8208b902">
Nov 22 04:13:25 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:25 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:13:25 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:13:25 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.525 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b027005c-ecbe-4481-afc9-eca3b317e7f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.528 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[740d982c-7a2f-40f1-b8c5-e32c73b71cca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.537 253665 INFO nova.compute.manager [None req-a5301016-9ccf-43d9-b037-76730c1e5bde 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] instance snapshotting#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.560 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[56591169-967e-4d37-a443-a22f35410a04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.580 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca2bd83-8ee8-4c73-b8f4-815b8a9dacdb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 782, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 782, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558612, 'reachable_time': 36035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293026, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.607 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[11f4cafb-9c5e-43b9-9ef4-b364a9999713]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558628, 'tstamp': 558628}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293027, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558633, 'tstamp': 558633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293027, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.609 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.611 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.614 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.614 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.614 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.614 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:13:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.766 253665 INFO nova.virt.libvirt.driver [None req-a5301016-9ccf-43d9-b037-76730c1e5bde 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] [instance: 14600eae-75dc-4ffc-a15a-bdb234f164d0] Beginning live snapshot process#033[00m
Nov 22 04:13:25 np0005532048 kernel: tap2c059df4-a5 (unregistering): left promiscuous mode
Nov 22 04:13:25 np0005532048 NetworkManager[48920]: <info>  [1763802805.8914] device (tap2c059df4-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:13:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:25Z|00219|binding|INFO|Releasing lport 2c059df4-a5a0-4c31-8485-01ccdea02b01 from this chassis (sb_readonly=0)
Nov 22 04:13:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:25Z|00220|binding|INFO|Setting lport 2c059df4-a5a0-4c31-8485-01ccdea02b01 down in Southbound
Nov 22 04:13:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:13:25Z|00221|binding|INFO|Removing iface tap2c059df4-a5 ovn-installed in OVS
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.907 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:82:bb 10.100.0.14'], port_security=['fa:16:3e:43:82:bb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'fd1f1ba2-6963-47bb-8d59-86e2ed015ad1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2c059df4-a5a0-4c31-8485-01ccdea02b01) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.908 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2c059df4-a5a0-4c31-8485-01ccdea02b01 in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 unbound from our chassis#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.909 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.910 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2e25e85e-806b-4f13-814e-01da99b43392]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:25.913 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace which is not needed anymore#033[00m
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.944 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:25 np0005532048 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Nov 22 04:13:25 np0005532048 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001e.scope: Consumed 2.862s CPU time.
Nov 22 04:13:25 np0005532048 systemd-machined[215941]: Machine qemu-35-instance-0000001e terminated.
Nov 22 04:13:25 np0005532048 nova_compute[253661]: 2025-11-22 09:13:25.959 253665 DEBUG nova.virt.libvirt.imagebackend [None req-a5301016-9ccf-43d9-b037-76730c1e5bde 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 22 04:13:26 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[292937]: [NOTICE]   (292942) : haproxy version is 2.8.14-c23fe91
Nov 22 04:13:26 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[292937]: [NOTICE]   (292942) : path to executable is /usr/sbin/haproxy
Nov 22 04:13:26 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[292937]: [WARNING]  (292942) : Exiting Master process...
Nov 22 04:13:26 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[292937]: [WARNING]  (292942) : Exiting Master process...
Nov 22 04:13:26 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[292937]: [ALERT]    (292942) : Current worker (292960) exited with code 143 (Terminated)
Nov 22 04:13:26 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[292937]: [WARNING]  (292942) : All workers exited. Exiting... (0)
Nov 22 04:13:26 np0005532048 systemd[1]: libpod-75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad.scope: Deactivated successfully.
Nov 22 04:13:26 np0005532048 podman[293083]: 2025-11-22 09:13:26.07960876 +0000 UTC m=+0.052218815 container died 75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.088 253665 DEBUG nova.compute.manager [None req-4a4278ff-8271-4e79-a6af-308ddef6f082 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: fd1f1ba2-6963-47bb-8d59-86e2ed015ad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:13:26 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad-userdata-shm.mount: Deactivated successfully.
Nov 22 04:13:26 np0005532048 systemd[1]: var-lib-containers-storage-overlay-05db8838f8babda2674b21fdd9cba56d8e24a25007d525322ca9fed2068a5d9c-merged.mount: Deactivated successfully.
Nov 22 04:13:26 np0005532048 podman[293083]: 2025-11-22 09:13:26.147342104 +0000 UTC m=+0.119952129 container cleanup 75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:13:26 np0005532048 systemd[1]: libpod-conmon-75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad.scope: Deactivated successfully.
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.159 253665 DEBUG oslo_concurrency.lockutils [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.160 253665 DEBUG oslo_concurrency.lockutils [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-3c70b093-a92a-4781-8e32-2a7eefde4a43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.160 253665 DEBUG nova.network.neutron [None req-f035f43c-8313-4119-b9f5-0cae000fb01b 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.217 253665 DEBUG nova.storage.rbd_utils [None req-a5301016-9ccf-43d9-b037-76730c1e5bde 96cac95dc532449d964ffb3705dae943 dcedb2f9ed6e43dfa8ecc3854373b0b5 - - default default] creating snapshot(a63ea19ccc764cfb863961ba6d076325) on rbd image(14600eae-75dc-4ffc-a15a-bdb234f164d0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:13:26 np0005532048 podman[293124]: 2025-11-22 09:13:26.231123053 +0000 UTC m=+0.054092601 container remove 75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:13:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.237 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1271770b-b3d5-42e8-a0da-926f9a326756]: (4, ('Sat Nov 22 09:13:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad)\n75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad\nSat Nov 22 09:13:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad)\n75e3622d1452c81a0b2b42cbfec6ceb44d6e084aaa74c3ce479a1f2ecc4451ad\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.239 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[12c48433-d620-4e74-b301-a4abc8c901ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.240 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:13:26 np0005532048 kernel: tap2abeeeb2-20: left promiscuous mode
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.253 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.254 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.256 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.258 253665 DEBUG nova.compute.manager [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Received event network-vif-deleted-f1f391af-c757-4aab-b0ce-ddad3dab55e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.259 253665 INFO nova.compute.manager [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Neutron deleted interface f1f391af-c757-4aab-b0ce-ddad3dab55e7; detaching it from the instance and deleting it from the info cache#033[00m
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.259 253665 DEBUG nova.network.neutron [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3c70b093-a92a-4781-8e32-2a7eefde4a43] Updating instance_info_cache with network_info: [{"id": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "address": "fa:16:3e:78:3a:a5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb82d7759-7f", "ovs_interfaceid": "b82d7759-7fa9-4919-9812-a4f5df6893a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "address": "fa:16:3e:a6:fb:57", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap995224e6-d1", "ovs_interfaceid": "995224e6-d1ff-4d74-bca5-3996eb4d404d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "address": "fa:16:3e:cd:2d:f5", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07d520ca-fd", "ovs_interfaceid": "07d520ca-fd4a-49e6-b52e-ee9e8208b902", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.262 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.263 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:13:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.266 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e3298ccb-7320-4445-ad40-52b2b342106f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.275 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 22 04:13:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.286 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a1bec1e8-730e-47a6-9eae-f7f5725045e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.289 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[49044ccf-0528-4191-b329-80396473393f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.291 253665 DEBUG nova.objects.instance [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'system_metadata' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:13:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.310 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b9bbce-0b0d-4777-9836-2c55316e0f87]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565140, 'reachable_time': 35029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293160, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:26 np0005532048 systemd[1]: run-netns-ovnmeta\x2d2abeeeb2\x2d24a5\x2d4ccd\x2d93c8\x2d05b42d3a1a51.mount: Deactivated successfully.
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.314 253665 DEBUG nova.objects.instance [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'flavor' on Instance uuid 3c70b093-a92a-4781-8e32-2a7eefde4a43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:13:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.313 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:13:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:13:26.313 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[803d3c8e-b30f-45e8-8aee-80f01b956cd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.338 253665 DEBUG nova.virt.libvirt.vif [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:12:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-604356135',display_name='tempest-AttachInterfacesTestJSON-server-604356135',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-604356135',id=26,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBJa7HiHbUXPuCD2RZr20LZ2SMN5t0gHgHUJ54qbo3Bkd1oxy9Q+1GcAJNNGFvOc5LPSlRy5Zi2vsdTNKtGAt2Jw6sHTu0bjG18rkGSz9dJgEDg+DWUisFqQ9vamd0BW8A==',key_name='tempest-keypair-1142498879',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-yd4r0bsf',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:12:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=3c70b093-a92a-4781-8e32-2a7eefde4a43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.338 253665 DEBUG nova.network.os_vif_util [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converting VIF {"id": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "address": "fa:16:3e:97:0f:1c", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f391af-c7", "ovs_interfaceid": "f1f391af-c757-4aab-b0ce-ddad3dab55e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.339 253665 DEBUG nova.network.os_vif_util [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:0f:1c,bridge_name='br-int',has_traffic_filtering=True,id=f1f391af-c757-4aab-b0ce-ddad3dab55e7,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f391af-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.345 253665 DEBUG nova.virt.libvirt.guest [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:13:26 np0005532048 nova_compute[253661]: 2025-11-22 09:13:26.349 253665 DEBUG nova.virt.libvirt.guest [req-5c2ce2d3-b952-4609-a304-8ee923d98649 req-f715fc64-acec-48af-abef-cd00a38c7e62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:97:0f:1c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf1f391af-c7"/></interface>not found in domain: <domain type='kvm' id='30'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <name>instance-0000001a</name>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <uuid>3c70b093-a92a-4781-8e32-2a7eefde4a43</uuid>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <nova:name>tempest-AttachInterfacesTestJSON-server-604356135</nova:name>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:13:25</nova:creationTime>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <nova:port uuid="b82d7759-7fa9-4919-9812-a4f5df6893a7">
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <nova:port uuid="995224e6-d1ff-4d74-bca5-3996eb4d404d">
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <nova:port uuid="07d520ca-fd4a-49e6-b52e-ee9e8208b902">
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:13:26 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <resource>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <partition>/machine</partition>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  </resource>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <entry name='serial'>3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <entry name='uuid'>3c70b093-a92a-4781-8e32-2a7eefde4a43</entry>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <cpu mode='custom' match='exact' check='full'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <vendor>AMD</vendor>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='require' name='x2apic'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc-deadline'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='require' name='hypervisor'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc_adjust'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='require' name='spec-ctrl'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='require' name='stibp'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='require' name='ssbd'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='require' name='cmp_legacy'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='require' name='overflow-recov'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='require' name='succor'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='require' name='ibrs'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='require' name='amd-ssbd'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='require' name='virt-ssbd'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='disable' name='lbrv'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='disable' name='tsc-scale'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='disable' name='vmcb-clean'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='disable' name='flushbyasid'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pause-filter'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pfthreshold'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='disable' name='xsaves'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svm'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='require' name='topoext'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='disable' name='npt'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <feature policy='disable' name='nrip-save'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk' index='2'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='virtio-disk0'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/3c70b093-a92a-4781-8e32-2a7eefde4a43_disk.config' index='1'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='sata0-0-0'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pcie.0'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.1'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.2'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.3'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.4'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.5'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.6'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.7'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.8'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.9'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.10'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.11'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.12'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.13'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.14'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.15'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.16'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.17'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.18'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.19'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <alias name='pci.20'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:13:26 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:14:29 np0005532048 nova_compute[253661]: 2025-11-22 09:14:29.053 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:29 np0005532048 nova_compute[253661]: 2025-11-22 09:14:29.062 253665 DEBUG nova.network.neutron [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:14:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 195 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 183 op/s
Nov 22 04:14:29 np0005532048 rsyslogd[1005]: imjournal: 4620 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 22 04:14:29 np0005532048 nova_compute[253661]: 2025-11-22 09:14:29.872 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.025 253665 DEBUG nova.network.neutron [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Updating instance_info_cache with network_info: [{"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.046 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Releasing lock "refresh_cache-264036ef-37a3-4681-9c7a-9dc70c4b5282" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.047 253665 DEBUG nova.compute.manager [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance network_info: |[{"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.050 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Start _get_guest_xml network_info=[{"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.056 253665 WARNING nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.062 253665 DEBUG nova.virt.libvirt.host [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.063 253665 DEBUG nova.virt.libvirt.host [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.066 253665 DEBUG nova.virt.libvirt.host [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.067 253665 DEBUG nova.virt.libvirt.host [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.068 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.068 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.069 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.069 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.069 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.069 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.070 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.071 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.071 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.071 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.072 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.072 253665 DEBUG nova.virt.hardware [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.075 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.124 253665 DEBUG nova.compute.manager [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-changed-250740a7-7283-491e-b03e-1e30171a9f3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.125 253665 DEBUG nova.compute.manager [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing instance network info cache due to event network-changed-250740a7-7283-491e-b03e-1e30171a9f3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.125 253665 DEBUG oslo_concurrency.lockutils [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.408 253665 DEBUG nova.network.neutron [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updating instance_info_cache with network_info: [{"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.424 253665 DEBUG oslo_concurrency.lockutils [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.425 253665 DEBUG oslo_concurrency.lockutils [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.425 253665 DEBUG nova.network.neutron [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing network info cache for port 250740a7-7283-491e-b03e-1e30171a9f3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.429 253665 DEBUG nova.virt.libvirt.vif [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:13:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1344454464',display_name='tempest-tempest.common.compute-instance-1344454464',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1344454464',id=35,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:13:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-von0l9xo',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:13:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=8b620ce3-1fc9-42ba-aafb-709cad3d65a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.429 253665 DEBUG nova.network.os_vif_util [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.430 253665 DEBUG nova.network.os_vif_util [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.430 253665 DEBUG os_vif [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.430 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.431 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.432 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.434 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.434 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d31cb94-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.434 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d31cb94-62, col_values=(('external_ids', {'iface-id': '1d31cb94-62b9-4490-a333-cbc7c9ea8f01', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:ce:ed', 'vm-uuid': '8b620ce3-1fc9-42ba-aafb-709cad3d65a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:30 np0005532048 NetworkManager[48920]: <info>  [1763802870.4368] manager: (tap1d31cb94-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.442 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.444 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.445 253665 INFO os_vif [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62')#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.445 253665 DEBUG nova.virt.libvirt.vif [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:13:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1344454464',display_name='tempest-tempest.common.compute-instance-1344454464',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1344454464',id=35,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:13:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-von0l9xo',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:13:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=8b620ce3-1fc9-42ba-aafb-709cad3d65a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.446 253665 DEBUG nova.network.os_vif_util [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.446 253665 DEBUG nova.network.os_vif_util [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.449 253665 DEBUG nova.virt.libvirt.guest [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] attach device xml: <interface type="ethernet">
Nov 22 04:14:30 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:a2:ce:ed"/>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:  <target dev="tap1d31cb94-62"/>
Nov 22 04:14:30 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:14:30 np0005532048 nova_compute[253661]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 22 04:14:30 np0005532048 kernel: tap1d31cb94-62: entered promiscuous mode
Nov 22 04:14:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:30Z|00283|binding|INFO|Claiming lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for this chassis.
Nov 22 04:14:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:30Z|00284|binding|INFO|1d31cb94-62b9-4490-a333-cbc7c9ea8f01: Claiming fa:16:3e:a2:ce:ed 10.100.0.3
Nov 22 04:14:30 np0005532048 NetworkManager[48920]: <info>  [1763802870.4680] manager: (tap1d31cb94-62): new Tun device (/org/freedesktop/NetworkManager/Devices/127)
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.469 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.472 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:ce:ed 10.100.0.3'], port_security=['fa:16:3e:a2:ce:ed 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1216307044', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8b620ce3-1fc9-42ba-aafb-709cad3d65a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1216307044', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1d31cb94-62b9-4490-a333-cbc7c9ea8f01) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:14:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.474 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 bound to our chassis#033[00m
Nov 22 04:14:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.475 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00#033[00m
Nov 22 04:14:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:30Z|00285|binding|INFO|Setting lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 ovn-installed in OVS
Nov 22 04:14:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:30Z|00286|binding|INFO|Setting lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 up in Southbound
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.502 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.507 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.508 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[98590acf-bc25-4832-89ff-9876d3525658]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:30Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:a2:dd 10.100.0.7
Nov 22 04:14:30 np0005532048 systemd-udevd[298430]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:14:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:14:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:30Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:a2:dd 10.100.0.7
Nov 22 04:14:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3441735348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:14:30 np0005532048 NetworkManager[48920]: <info>  [1763802870.5373] device (tap1d31cb94-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:14:30 np0005532048 NetworkManager[48920]: <info>  [1763802870.5380] device (tap1d31cb94-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:14:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.549 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0e4ef3-60c8-4ad2-a574-5a0ac7294721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.552 253665 DEBUG nova.virt.libvirt.driver [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:14:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.554 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4762b3-ed35-4d11-b6e5-1cc9aff11fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.553 253665 DEBUG nova.virt.libvirt.driver [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.555 253665 DEBUG nova.virt.libvirt.driver [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:0e:fa:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.555 253665 DEBUG nova.virt.libvirt.driver [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:a2:ce:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.559 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.587 253665 DEBUG nova.storage.rbd_utils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 264036ef-37a3-4681-9c7a-9dc70c4b5282_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:14:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.594 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d5665640-520c-4d9f-8e7e-f4c403f5df82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.599 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:14:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.623 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[893ce54f-0d02-4259-8536-6457bb76d432]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568640, 'reachable_time': 17138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298456, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.653 253665 DEBUG nova.virt.libvirt.guest [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:14:30 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:  <nova:name>tempest-tempest.common.compute-instance-1344454464</nova:name>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:14:30</nova:creationTime>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:14:30 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:    <nova:port uuid="250740a7-7283-491e-b03e-1e30171a9f3f">
Nov 22 04:14:30 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:    <nova:port uuid="1d31cb94-62b9-4490-a333-cbc7c9ea8f01">
Nov 22 04:14:30 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:14:30 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:14:30 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:14:30 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 22 04:14:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.654 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6db33cb3-9ac2-46ac-877b-9d83c1b08321]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568657, 'tstamp': 568657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298458, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568661, 'tstamp': 568661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298458, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.656 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.659 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.662 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.663 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:14:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.663 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:30.663 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:14:30 np0005532048 nova_compute[253661]: 2025-11-22 09:14:30.687 253665 DEBUG oslo_concurrency.lockutils [None req-32fd1848-a98d-43f1-82fd-c6f37fd185c2 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-8b620ce3-1fc9-42ba-aafb-709cad3d65a6-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:14:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/120644826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.100 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.101 253665 DEBUG nova.virt.libvirt.vif [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:14:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1900630937',display_name='tempest-ImagesTestJSON-server-1900630937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1900630937',id=38,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-059ygmd7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:14:27Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=264036ef-37a3-4681-9c7a-9dc70c4b5282,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.102 253665 DEBUG nova.network.os_vif_util [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.103 253665 DEBUG nova.network.os_vif_util [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:c1:a7,bridge_name='br-int',has_traffic_filtering=True,id=044e2e50-96f0-48f4-aae3-a5fce049c81f,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e2e50-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.104 253665 DEBUG nova.objects.instance [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'pci_devices' on Instance uuid 264036ef-37a3-4681-9c7a-9dc70c4b5282 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.116 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  <uuid>264036ef-37a3-4681-9c7a-9dc70c4b5282</uuid>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  <name>instance-00000026</name>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <nova:name>tempest-ImagesTestJSON-server-1900630937</nova:name>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:14:30</nova:creationTime>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:14:31 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:        <nova:user uuid="97872d7ce91947789de976821b771135">tempest-ImagesTestJSON-1798612164-project-member</nova:user>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:        <nova:project uuid="d6a9a80b05bf4bb3acb99c5e55603a36">tempest-ImagesTestJSON-1798612164</nova:project>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:        <nova:port uuid="044e2e50-96f0-48f4-aae3-a5fce049c81f">
Nov 22 04:14:31 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <entry name="serial">264036ef-37a3-4681-9c7a-9dc70c4b5282</entry>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <entry name="uuid">264036ef-37a3-4681-9c7a-9dc70c4b5282</entry>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/264036ef-37a3-4681-9c7a-9dc70c4b5282_disk">
Nov 22 04:14:31 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:14:31 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/264036ef-37a3-4681-9c7a-9dc70c4b5282_disk.config">
Nov 22 04:14:31 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:14:31 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:c2:c1:a7"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <target dev="tap044e2e50-96"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282/console.log" append="off"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:14:31 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:14:31 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:14:31 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:14:31 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.116 253665 DEBUG nova.compute.manager [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Preparing to wait for external event network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.117 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.117 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.117 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.118 253665 DEBUG nova.virt.libvirt.vif [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:14:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1900630937',display_name='tempest-ImagesTestJSON-server-1900630937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1900630937',id=38,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-059ygmd7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-mem
ber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:14:27Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=264036ef-37a3-4681-9c7a-9dc70c4b5282,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.118 253665 DEBUG nova.network.os_vif_util [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.119 253665 DEBUG nova.network.os_vif_util [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:c1:a7,bridge_name='br-int',has_traffic_filtering=True,id=044e2e50-96f0-48f4-aae3-a5fce049c81f,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e2e50-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.119 253665 DEBUG os_vif [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:c1:a7,bridge_name='br-int',has_traffic_filtering=True,id=044e2e50-96f0-48f4-aae3-a5fce049c81f,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e2e50-96') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.120 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.120 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.121 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.124 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.125 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap044e2e50-96, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.125 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap044e2e50-96, col_values=(('external_ids', {'iface-id': '044e2e50-96f0-48f4-aae3-a5fce049c81f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:c1:a7', 'vm-uuid': '264036ef-37a3-4681-9c7a-9dc70c4b5282'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:31 np0005532048 NetworkManager[48920]: <info>  [1763802871.1285] manager: (tap044e2e50-96): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.130 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.132 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.133 253665 INFO os_vif [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:c1:a7,bridge_name='br-int',has_traffic_filtering=True,id=044e2e50-96f0-48f4-aae3-a5fce049c81f,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e2e50-96')#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.196 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.196 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.197 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No VIF found with MAC fa:16:3e:c2:c1:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.197 253665 INFO nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Using config drive#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.217 253665 DEBUG nova.storage.rbd_utils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 264036ef-37a3-4681-9c7a-9dc70c4b5282_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:14:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 195 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 183 op/s
Nov 22 04:14:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Nov 22 04:14:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Nov 22 04:14:31 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.550 253665 INFO nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Creating config drive at /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282/disk.config#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.561 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn4k3rffo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.681 253665 DEBUG nova.network.neutron [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updated VIF entry in instance network info cache for port 250740a7-7283-491e-b03e-1e30171a9f3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.682 253665 DEBUG nova.network.neutron [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updating instance_info_cache with network_info: [{"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.698 253665 DEBUG nova.compute.manager [req-ddf86e26-b70a-4270-9a15-92efc994a59d req-28a9fda5-8c68-4928-98e5-a075b23fe5bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.699 253665 DEBUG oslo_concurrency.lockutils [req-ddf86e26-b70a-4270-9a15-92efc994a59d req-28a9fda5-8c68-4928-98e5-a075b23fe5bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.699 253665 DEBUG oslo_concurrency.lockutils [req-ddf86e26-b70a-4270-9a15-92efc994a59d req-28a9fda5-8c68-4928-98e5-a075b23fe5bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.700 253665 DEBUG oslo_concurrency.lockutils [req-ddf86e26-b70a-4270-9a15-92efc994a59d req-28a9fda5-8c68-4928-98e5-a075b23fe5bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.700 253665 DEBUG nova.compute.manager [req-ddf86e26-b70a-4270-9a15-92efc994a59d req-28a9fda5-8c68-4928-98e5-a075b23fe5bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] No waiting events found dispatching network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.700 253665 WARNING nova.compute.manager [req-ddf86e26-b70a-4270-9a15-92efc994a59d req-28a9fda5-8c68-4928-98e5-a075b23fe5bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received unexpected event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.702 253665 DEBUG oslo_concurrency.lockutils [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.702 253665 DEBUG nova.compute.manager [req-58ce5f35-0f27-4214-a19c-9bffe766e69e req-026e78ea-e69e-4f28-a0c0-c68c2e5515db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Received event network-vif-deleted-c1d9117b-dc46-4b02-a00a-2d78a8027873 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.722 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn4k3rffo" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.747 253665 DEBUG nova.storage.rbd_utils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] rbd image 264036ef-37a3-4681-9c7a-9dc70c4b5282_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.751 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282/disk.config 264036ef-37a3-4681-9c7a-9dc70c4b5282_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.901 253665 DEBUG oslo_concurrency.processutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282/disk.config 264036ef-37a3-4681-9c7a-9dc70c4b5282_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.902 253665 INFO nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Deleting local config drive /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282/disk.config because it was imported into RBD.#033[00m
Nov 22 04:14:31 np0005532048 kernel: tap044e2e50-96: entered promiscuous mode
Nov 22 04:14:31 np0005532048 NetworkManager[48920]: <info>  [1763802871.9576] manager: (tap044e2e50-96): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.959 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:31Z|00287|binding|INFO|Claiming lport 044e2e50-96f0-48f4-aae3-a5fce049c81f for this chassis.
Nov 22 04:14:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:31Z|00288|binding|INFO|044e2e50-96f0-48f4-aae3-a5fce049c81f: Claiming fa:16:3e:c2:c1:a7 10.100.0.6
Nov 22 04:14:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.966 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:c1:a7 10.100.0.6'], port_security=['fa:16:3e:c2:c1:a7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '264036ef-37a3-4681-9c7a-9dc70c4b5282', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=044e2e50-96f0-48f4-aae3-a5fce049c81f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:14:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.967 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 044e2e50-96f0-48f4-aae3-a5fce049c81f in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 bound to our chassis#033[00m
Nov 22 04:14:31 np0005532048 NetworkManager[48920]: <info>  [1763802871.9691] device (tap044e2e50-96): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:14:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.969 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51#033[00m
Nov 22 04:14:31 np0005532048 NetworkManager[48920]: <info>  [1763802871.9714] device (tap044e2e50-96): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:14:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:31Z|00289|binding|INFO|Setting lport 044e2e50-96f0-48f4-aae3-a5fce049c81f ovn-installed in OVS
Nov 22 04:14:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:31Z|00290|binding|INFO|Setting lport 044e2e50-96f0-48f4-aae3-a5fce049c81f up in Southbound
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.979 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.981 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[07c01ced-68c5-48f6-95a2-b1fe49845e63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.982 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2abeeeb2-21 in ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:14:31 np0005532048 nova_compute[253661]: 2025-11-22 09:14:31.983 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.984 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2abeeeb2-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:14:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.984 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d01d9234-6e57-4172-bbf2-0900be8486b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.985 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be0aee78-8b12-493a-a83a-8d1c774901af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:31.998 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[da69d23a-0e97-47ab-8006-dc733f5caf94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:32 np0005532048 systemd-machined[215941]: New machine qemu-43-instance-00000026.
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.023 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc05cc9e-b396-43e1-bd86-b5e27f72cf77]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:32 np0005532048 systemd[1]: Started Virtual Machine qemu-43-instance-00000026.
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.054 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8c2d04-5ffd-4ea6-8039-a6a541361978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.060 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6f2d2cfe-6d5a-4e70-ac4c-747afc7c28cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:32 np0005532048 NetworkManager[48920]: <info>  [1763802872.0615] manager: (tap2abeeeb2-20): new Veth device (/org/freedesktop/NetworkManager/Devices/130)
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.098 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8f8b51e2-ba7c-474e-a7bd-4d1ea429d664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.102 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[950296dc-15b3-408f-82a4-09d04dd51412]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:32 np0005532048 NetworkManager[48920]: <info>  [1763802872.1299] device (tap2abeeeb2-20): carrier: link connected
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.137 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5d2f14c9-c4aa-4db8-af6d-e553b26855ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.155 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[82e7a47e-4886-4d3c-858c-d1bbbfabc79a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572286, 'reachable_time': 29754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298582, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.171 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b911db27-a1f6-453a-95f9-7c218b3614c1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:bff7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572286, 'tstamp': 572286}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298583, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.188 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb160e6-c76b-4df0-95c7-9ed8f15c6741]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2abeeeb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:bf:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572286, 'reachable_time': 29754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298584, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.222 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e9486a-5c52-42f4-b789-d00620a6c371]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.300 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[43756ee3-337a-4798-9473-9c85c94c2fde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.302 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.302 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.302 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2abeeeb2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:32 np0005532048 NetworkManager[48920]: <info>  [1763802872.3049] manager: (tap2abeeeb2-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Nov 22 04:14:32 np0005532048 kernel: tap2abeeeb2-20: entered promiscuous mode
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.308 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2abeeeb2-20, col_values=(('external_ids', {'iface-id': '3249a299-7633-4c70-aa35-5f648ecb0d7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:32Z|00291|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.310 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.326 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.327 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.328 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f94375-4586-4aff-bb75-d9094ebc9267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.329 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.pid.haproxy
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.330 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'env', 'PROCESS_TAG=haproxy-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:14:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:32Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a2:ce:ed 10.100.0.3
Nov 22 04:14:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:32Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a2:ce:ed 10.100.0.3
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.432 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802872.4320843, 264036ef-37a3-4681-9c7a-9dc70c4b5282 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.433 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] VM Started (Lifecycle Event)#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.449 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.454 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802872.4322066, 264036ef-37a3-4681-9c7a-9dc70c4b5282 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.454 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.468 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.472 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.485 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:14:32 np0005532048 podman[298658]: 2025-11-22 09:14:32.678504548 +0000 UTC m=+0.027386264 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.779 253665 DEBUG oslo_concurrency.lockutils [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-8b620ce3-1fc9-42ba-aafb-709cad3d65a6-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.780 253665 DEBUG oslo_concurrency.lockutils [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-8b620ce3-1fc9-42ba-aafb-709cad3d65a6-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.794 253665 DEBUG nova.objects.instance [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.816 253665 DEBUG nova.virt.libvirt.vif [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:13:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1344454464',display_name='tempest-tempest.common.compute-instance-1344454464',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1344454464',id=35,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:13:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-von0l9xo',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:13:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=8b620ce3-1fc9-42ba-aafb-709cad3d65a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.817 253665 DEBUG nova.network.os_vif_util [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.817 253665 DEBUG nova.network.os_vif_util [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.821 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.823 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.825 253665 DEBUG nova.virt.libvirt.driver [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Attempting to detach device tap1d31cb94-62 from instance 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.826 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] detach device xml: <interface type="ethernet">
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:a2:ce:ed"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <target dev="tap1d31cb94-62"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:14:32 np0005532048 nova_compute[253661]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.828 253665 DEBUG nova.compute.manager [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-changed-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.829 253665 DEBUG nova.compute.manager [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing instance network info cache due to event network-changed-1d31cb94-62b9-4490-a333-cbc7c9ea8f01. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.829 253665 DEBUG oslo_concurrency.lockutils [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.829 253665 DEBUG oslo_concurrency.lockutils [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.829 253665 DEBUG nova.network.neutron [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing network info cache for port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.852 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.855 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface>not found in domain: <domain type='kvm' id='40'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <name>instance-00000023</name>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <uuid>8b620ce3-1fc9-42ba-aafb-709cad3d65a6</uuid>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <nova:name>tempest-tempest.common.compute-instance-1344454464</nova:name>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:14:30</nova:creationTime>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:port uuid="250740a7-7283-491e-b03e-1e30171a9f3f">
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:port uuid="1d31cb94-62b9-4490-a333-cbc7c9ea8f01">
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:14:32 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <resource>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <partition>/machine</partition>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </resource>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <entry name='serial'>8b620ce3-1fc9-42ba-aafb-709cad3d65a6</entry>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <entry name='uuid'>8b620ce3-1fc9-42ba-aafb-709cad3d65a6</entry>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <cpu mode='custom' match='exact' check='full'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <vendor>AMD</vendor>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='x2apic'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc-deadline'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='hypervisor'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc_adjust'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='spec-ctrl'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='stibp'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='ssbd'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='cmp_legacy'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='overflow-recov'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='succor'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='ibrs'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='amd-ssbd'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='virt-ssbd'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='lbrv'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='tsc-scale'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='vmcb-clean'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='flushbyasid'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pause-filter'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pfthreshold'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='xsaves'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svm'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='topoext'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='npt'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='nrip-save'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk' index='2'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='virtio-disk0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk.config' index='1'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='sata0-0-0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pcie.0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.1'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.2'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.3'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.4'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.5'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.6'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.7'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.8'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.9'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.10'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.11'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.12'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.13'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.14'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.15'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.16'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.17'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.18'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.19'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.20'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.21'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='22' port='0x25'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.22'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='23' port='0x26'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.23'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='24' port='0x27'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.24'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='25' port='0x28'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.25'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-pci-bridge'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.26'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='usb'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='sata' index='0'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='ide'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:0e:fa:90'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target dev='tap250740a7-72'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='net0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:a2:ce:ed'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target dev='tap1d31cb94-62'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='net1'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <serial type='pty'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <source path='/dev/pts/3'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6/console.log' append='off'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target type='isa-serial' port='0'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:        <model name='isa-serial'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      </target>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <console type='pty' tty='/dev/pts/3'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <source path='/dev/pts/3'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6/console.log' append='off'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target type='serial' port='0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </console>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <input type='tablet' bus='usb'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='input0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='usb' bus='0' port='1'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <input type='mouse' bus='ps2'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='input1'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <input type='keyboard' bus='ps2'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='input2'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <listen type='address' address='::0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <audio id='1' type='none'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model type='virtio' heads='1' primary='yes'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='video0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <watchdog model='itco' action='reset'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='watchdog0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </watchdog>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <memballoon model='virtio'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <stats period='10'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='balloon0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <rng model='virtio'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <backend model='random'>/dev/urandom</backend>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='rng0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <label>system_u:system_r:svirt_t:s0:c50,c423</label>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c50,c423</imagelabel>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <label>+107:+107</label>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <imagelabel>+107:+107</imagelabel>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:14:32 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:14:32 np0005532048 nova_compute[253661]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.855 253665 INFO nova.virt.libvirt.driver [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully detached device tap1d31cb94-62 from instance 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 from the persistent domain config.#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.855 253665 DEBUG nova.virt.libvirt.driver [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] (1/8): Attempting to detach device tap1d31cb94-62 with device alias net1 from instance 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.856 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] detach device xml: <interface type="ethernet">
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:a2:ce:ed"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <target dev="tap1d31cb94-62"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:14:32 np0005532048 nova_compute[253661]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 22 04:14:32 np0005532048 podman[298658]: 2025-11-22 09:14:32.897935291 +0000 UTC m=+0.246816987 container create a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:14:32 np0005532048 systemd[1]: Started libpod-conmon-a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09.scope.
Nov 22 04:14:32 np0005532048 kernel: tap1d31cb94-62 (unregistering): left promiscuous mode
Nov 22 04:14:32 np0005532048 NetworkManager[48920]: <info>  [1763802872.9655] device (tap1d31cb94-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:14:32 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:14:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:32Z|00292|binding|INFO|Releasing lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 from this chassis (sb_readonly=0)
Nov 22 04:14:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:32Z|00293|binding|INFO|Setting lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 down in Southbound
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.972 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:32Z|00294|binding|INFO|Removing iface tap1d31cb94-62 ovn-installed in OVS
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.974 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.977 253665 DEBUG nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Received event <DeviceRemovedEvent: 1763802872.977399, 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.980 253665 DEBUG nova.virt.libvirt.driver [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Start waiting for the detach event from libvirt for device tap1d31cb94-62 with device alias net1 for instance 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.980 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:14:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ef65a135aef4d51b10b011a564893fe453cc14eebdbfd2c6f1ff1ad066c4d17/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:14:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:32.984 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:ce:ed 10.100.0.3'], port_security=['fa:16:3e:a2:ce:ed 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1216307044', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8b620ce3-1fc9-42ba-aafb-709cad3d65a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1216307044', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1d31cb94-62b9-4490-a333-cbc7c9ea8f01) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:14:32 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.985 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface>not found in domain: <domain type='kvm' id='40'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <name>instance-00000023</name>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <uuid>8b620ce3-1fc9-42ba-aafb-709cad3d65a6</uuid>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <nova:name>tempest-tempest.common.compute-instance-1344454464</nova:name>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:14:30</nova:creationTime>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:port uuid="250740a7-7283-491e-b03e-1e30171a9f3f">
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <nova:port uuid="1d31cb94-62b9-4490-a333-cbc7c9ea8f01">
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:14:32 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <resource>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <partition>/machine</partition>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </resource>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <entry name='serial'>8b620ce3-1fc9-42ba-aafb-709cad3d65a6</entry>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <entry name='uuid'>8b620ce3-1fc9-42ba-aafb-709cad3d65a6</entry>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <cpu mode='custom' match='exact' check='full'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <vendor>AMD</vendor>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='x2apic'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc-deadline'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='hypervisor'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc_adjust'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='spec-ctrl'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='stibp'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='ssbd'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='cmp_legacy'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='overflow-recov'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='succor'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='ibrs'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='amd-ssbd'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='virt-ssbd'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='lbrv'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='tsc-scale'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='vmcb-clean'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='flushbyasid'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pause-filter'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pfthreshold'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='xsaves'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svm'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='require' name='topoext'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='npt'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <feature policy='disable' name='nrip-save'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk' index='2'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='virtio-disk0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/8b620ce3-1fc9-42ba-aafb-709cad3d65a6_disk.config' index='1'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='sata0-0-0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pcie.0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.1'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.2'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.3'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.4'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.5'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.6'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.7'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.8'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.9'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.10'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.11'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.12'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.13'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.14'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.15'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.16'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.17'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.18'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.19'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.20'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.21'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='22' port='0x25'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.22'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='23' port='0x26'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.23'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='24' port='0x27'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.24'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target chassis='25' port='0x28'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.25'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model name='pcie-pci-bridge'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='pci.26'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='usb'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <controller type='sata' index='0'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='ide'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:0e:fa:90'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target dev='tap250740a7-72'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='net0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    <serial type='pty'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <source path='/dev/pts/3'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6/console.log' append='off'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <target type='isa-serial' port='0'>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:        <model name='isa-serial'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      </target>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:14:32 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <console type='pty' tty='/dev/pts/3'>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <source path='/dev/pts/3'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6/console.log' append='off'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <target type='serial' port='0'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    </console>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <input type='tablet' bus='usb'>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <alias name='input0'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <address type='usb' bus='0' port='1'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <input type='mouse' bus='ps2'>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <alias name='input1'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <input type='keyboard' bus='ps2'>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <alias name='input2'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <listen type='address' address='::0'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <audio id='1' type='none'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <model type='virtio' heads='1' primary='yes'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <alias name='video0'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <watchdog model='itco' action='reset'>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <alias name='watchdog0'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    </watchdog>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <memballoon model='virtio'>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <stats period='10'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <alias name='balloon0'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <rng model='virtio'>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <backend model='random'>/dev/urandom</backend>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <alias name='rng0'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <label>system_u:system_r:svirt_t:s0:c50,c423</label>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c50,c423</imagelabel>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <label>+107:+107</label>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <imagelabel>+107:+107</imagelabel>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:14:33 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:14:33 np0005532048 nova_compute[253661]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.986 253665 INFO nova.virt.libvirt.driver [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully detached device tap1d31cb94-62 from instance 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 from the live domain config.
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.986 253665 DEBUG nova.virt.libvirt.vif [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:13:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1344454464',display_name='tempest-tempest.common.compute-instance-1344454464',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1344454464',id=35,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:13:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-von0l9xo',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:13:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=8b620ce3-1fc9-42ba-aafb-709cad3d65a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.986 253665 DEBUG nova.network.os_vif_util [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.987 253665 DEBUG nova.network.os_vif_util [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.987 253665 DEBUG os_vif [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.989 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.989 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d31cb94-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.990 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.992 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:33 np0005532048 podman[298658]: 2025-11-22 09:14:32.993664034 +0000 UTC m=+0.342545750 container init a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.994 253665 INFO os_vif [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62')#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:32.995 253665 DEBUG nova.virt.libvirt.guest [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:14:33 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:  <nova:name>tempest-tempest.common.compute-instance-1344454464</nova:name>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:14:32</nova:creationTime>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    <nova:port uuid="250740a7-7283-491e-b03e-1e30171a9f3f">
Nov 22 04:14:33 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:14:33 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:14:33 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:14:33 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 22 04:14:33 np0005532048 podman[298658]: 2025-11-22 09:14:33.000986164 +0000 UTC m=+0.349867870 container start a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 04:14:33 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[298673]: [NOTICE]   (298680) : New worker (298682) forked
Nov 22 04:14:33 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[298673]: [NOTICE]   (298680) : Loading success.
Nov 22 04:14:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.080 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis#033[00m
Nov 22 04:14:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.082 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00#033[00m
Nov 22 04:14:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.105 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5c7611-80f1-451b-a157-45a9cc99b71c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.146 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4386d293-8fab-47bb-8764-1cfae1175935]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.149 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1f19debe-8d43-4df2-b1dd-a28d5335e9dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.188 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[92dbcd80-1a6a-40e8-84fb-4c7bcf2ca7ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.213 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ac1daa-b472-43a6-b669-5ab03b202238]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568640, 'reachable_time': 17138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298696, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.233 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f0f65338-3084-4db5-aed4-0dd1380427b8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568657, 'tstamp': 568657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298697, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568661, 'tstamp': 568661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298697, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.236 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.239 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.240 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.241 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.242 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:14:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.243 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:33.243 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:14:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 223 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 240 KiB/s rd, 3.5 MiB/s wr, 124 op/s
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.398 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802858.3972633, 01e238b6-d7eb-43ed-b69e-507706f9d9f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.399 253665 INFO nova.compute.manager [-] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.421 253665 DEBUG nova.compute.manager [None req-801756db-d25a-4d88-834d-3c2f7dba887d - - - - - -] [instance: 01e238b6-d7eb-43ed-b69e-507706f9d9f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.831 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.831 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.832 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.832 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.833 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] No waiting events found dispatching network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.833 253665 WARNING nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received unexpected event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.833 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received event network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.833 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.833 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.834 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.834 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Processing event network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.834 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received event network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.834 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.834 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.835 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.835 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] No waiting events found dispatching network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.835 253665 WARNING nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received unexpected event network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.835 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-vif-unplugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.836 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.836 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.836 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.836 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] No waiting events found dispatching network-vif-unplugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.836 253665 WARNING nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received unexpected event network-vif-unplugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.837 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.837 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.837 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.837 253665 DEBUG oslo_concurrency.lockutils [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.837 253665 DEBUG nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] No waiting events found dispatching network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.837 253665 WARNING nova.compute.manager [req-cf434de5-041e-4292-b0b6-9ee284cd97fe req-8a3911c5-4da3-4919-a8fa-57d6012d4bcb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received unexpected event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.838 253665 DEBUG nova.compute.manager [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.841 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802873.8416646, 264036ef-37a3-4681-9c7a-9dc70c4b5282 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.842 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.843 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.846 253665 INFO nova.virt.libvirt.driver [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance spawned successfully.#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.846 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.858 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.865 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.867 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.868 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.868 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.868 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.869 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.869 253665 DEBUG nova.virt.libvirt.driver [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.891 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.922 253665 INFO nova.compute.manager [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Took 6.85 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.923 253665 DEBUG nova.compute.manager [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:14:33 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.984 253665 INFO nova.compute.manager [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Took 7.73 seconds to build instance.#033[00m
Nov 22 04:14:34 np0005532048 nova_compute[253661]: 2025-11-22 09:14:33.999 253665 DEBUG oslo_concurrency.lockutils [None req-5602bfd8-afad-4294-a1cc-91a8163873b1 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:34 np0005532048 nova_compute[253661]: 2025-11-22 09:14:34.405 253665 DEBUG nova.network.neutron [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updated VIF entry in instance network info cache for port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:14:34 np0005532048 nova_compute[253661]: 2025-11-22 09:14:34.406 253665 DEBUG nova.network.neutron [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updating instance_info_cache with network_info: [{"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:14:34 np0005532048 nova_compute[253661]: 2025-11-22 09:14:34.425 253665 DEBUG oslo_concurrency.lockutils [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:14:34 np0005532048 nova_compute[253661]: 2025-11-22 09:14:34.426 253665 DEBUG nova.compute.manager [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received event network-changed-044e2e50-96f0-48f4-aae3-a5fce049c81f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:34 np0005532048 nova_compute[253661]: 2025-11-22 09:14:34.426 253665 DEBUG nova.compute.manager [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Refreshing instance network info cache due to event network-changed-044e2e50-96f0-48f4-aae3-a5fce049c81f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:14:34 np0005532048 nova_compute[253661]: 2025-11-22 09:14:34.426 253665 DEBUG oslo_concurrency.lockutils [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-264036ef-37a3-4681-9c7a-9dc70c4b5282" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:14:34 np0005532048 nova_compute[253661]: 2025-11-22 09:14:34.426 253665 DEBUG oslo_concurrency.lockutils [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-264036ef-37a3-4681-9c7a-9dc70c4b5282" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:14:34 np0005532048 nova_compute[253661]: 2025-11-22 09:14:34.427 253665 DEBUG nova.network.neutron [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Refreshing network info cache for port 044e2e50-96f0-48f4-aae3-a5fce049c81f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:14:34 np0005532048 nova_compute[253661]: 2025-11-22 09:14:34.703 253665 DEBUG oslo_concurrency.lockutils [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:14:34 np0005532048 nova_compute[253661]: 2025-11-22 09:14:34.704 253665 DEBUG oslo_concurrency.lockutils [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:14:34 np0005532048 nova_compute[253661]: 2025-11-22 09:14:34.704 253665 DEBUG nova.network.neutron [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:14:34 np0005532048 nova_compute[253661]: 2025-11-22 09:14:34.874 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:35 np0005532048 nova_compute[253661]: 2025-11-22 09:14:35.210 253665 DEBUG nova.compute.manager [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:14:35 np0005532048 nova_compute[253661]: 2025-11-22 09:14:35.252 253665 INFO nova.compute.manager [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] instance snapshotting#033[00m
Nov 22 04:14:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 247 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 4.7 MiB/s wr, 156 op/s
Nov 22 04:14:35 np0005532048 nova_compute[253661]: 2025-11-22 09:14:35.480 253665 INFO nova.virt.libvirt.driver [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Beginning live snapshot process#033[00m
Nov 22 04:14:35 np0005532048 nova_compute[253661]: 2025-11-22 09:14:35.623 253665 DEBUG nova.virt.libvirt.imagebackend [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 22 04:14:35 np0005532048 nova_compute[253661]: 2025-11-22 09:14:35.709 253665 DEBUG nova.network.neutron [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Updated VIF entry in instance network info cache for port 044e2e50-96f0-48f4-aae3-a5fce049c81f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:14:35 np0005532048 nova_compute[253661]: 2025-11-22 09:14:35.710 253665 DEBUG nova.network.neutron [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Updating instance_info_cache with network_info: [{"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:14:35 np0005532048 nova_compute[253661]: 2025-11-22 09:14:35.727 253665 DEBUG oslo_concurrency.lockutils [req-10696bc4-f823-4494-8155-346a2e5c4fec req-47eb7371-d5f7-4217-8a8a-0f58e5854067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-264036ef-37a3-4681-9c7a-9dc70c4b5282" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:14:35 np0005532048 nova_compute[253661]: 2025-11-22 09:14:35.811 253665 DEBUG nova.storage.rbd_utils [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] creating snapshot(4b0cf3baa55b4bd3be59ef54adda1e52) on rbd image(264036ef-37a3-4681-9c7a-9dc70c4b5282_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:14:35 np0005532048 nova_compute[253661]: 2025-11-22 09:14:35.920 253665 INFO nova.network.neutron [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 22 04:14:35 np0005532048 nova_compute[253661]: 2025-11-22 09:14:35.920 253665 DEBUG nova.network.neutron [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updating instance_info_cache with network_info: [{"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:14:35 np0005532048 nova_compute[253661]: 2025-11-22 09:14:35.935 253665 DEBUG oslo_concurrency.lockutils [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:14:35 np0005532048 nova_compute[253661]: 2025-11-22 09:14:35.953 253665 DEBUG oslo_concurrency.lockutils [None req-577b53f4-1ae7-457f-803d-2ed8cf6c98c7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-8b620ce3-1fc9-42ba-aafb-709cad3d65a6-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Nov 22 04:14:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Nov 22 04:14:36 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Nov 22 04:14:36 np0005532048 nova_compute[253661]: 2025-11-22 09:14:36.875 253665 DEBUG nova.storage.rbd_utils [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] cloning vms/264036ef-37a3-4681-9c7a-9dc70c4b5282_disk@4b0cf3baa55b4bd3be59ef54adda1e52 to images/6c59e9c0-6dc8-47c7-8319-04839c1264af clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:14:36 np0005532048 nova_compute[253661]: 2025-11-22 09:14:36.983 253665 DEBUG nova.storage.rbd_utils [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] flattening images/6c59e9c0-6dc8-47c7-8319-04839c1264af flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 22 04:14:37 np0005532048 nova_compute[253661]: 2025-11-22 09:14:37.221 253665 DEBUG nova.storage.rbd_utils [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] removing snapshot(4b0cf3baa55b4bd3be59ef54adda1e52) on rbd image(264036ef-37a3-4681-9c7a-9dc70c4b5282_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 22 04:14:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 247 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.4 MiB/s wr, 172 op/s
Nov 22 04:14:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Nov 22 04:14:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Nov 22 04:14:37 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Nov 22 04:14:37 np0005532048 nova_compute[253661]: 2025-11-22 09:14:37.868 253665 DEBUG nova.storage.rbd_utils [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] creating snapshot(snap) on rbd image(6c59e9c0-6dc8-47c7-8319-04839c1264af) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:14:37 np0005532048 nova_compute[253661]: 2025-11-22 09:14:37.991 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.358 253665 DEBUG nova.compute.manager [req-831d3688-e275-4bda-97e7-956c661f881d req-fc027644-d108-4d9b-aca8-c44f2b9ea484 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-changed-250740a7-7283-491e-b03e-1e30171a9f3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.358 253665 DEBUG nova.compute.manager [req-831d3688-e275-4bda-97e7-956c661f881d req-fc027644-d108-4d9b-aca8-c44f2b9ea484 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing instance network info cache due to event network-changed-250740a7-7283-491e-b03e-1e30171a9f3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.359 253665 DEBUG oslo_concurrency.lockutils [req-831d3688-e275-4bda-97e7-956c661f881d req-fc027644-d108-4d9b-aca8-c44f2b9ea484 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.359 253665 DEBUG oslo_concurrency.lockutils [req-831d3688-e275-4bda-97e7-956c661f881d req-fc027644-d108-4d9b-aca8-c44f2b9ea484 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.359 253665 DEBUG nova.network.neutron [req-831d3688-e275-4bda-97e7-956c661f881d req-fc027644-d108-4d9b-aca8-c44f2b9ea484 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Refreshing network info cache for port 250740a7-7283-491e-b03e-1e30171a9f3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:14:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:38Z|00295|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 04:14:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:38Z|00296|binding|INFO|Releasing lport 3249a299-7633-4c70-aa35-5f648ecb0d7e from this chassis (sb_readonly=0)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.688 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Nov 22 04:14:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Nov 22 04:14:38 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 6c59e9c0-6dc8-47c7-8319-04839c1264af could not be found.
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 6c59e9c0-6dc8-47c7-8319-04839c1264af
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver 
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver 
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 6c59e9c0-6dc8-47c7-8319-04839c1264af could not be found.
Nov 22 04:14:38 np0005532048 nova_compute[253661]: 2025-11-22 09:14:38.979 253665 ERROR nova.virt.libvirt.driver #033[00m
Nov 22 04:14:39 np0005532048 nova_compute[253661]: 2025-11-22 09:14:39.020 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802864.0191896, 6c9b56d3-9edf-4e5a-88e4-c0470a193778 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:14:39 np0005532048 nova_compute[253661]: 2025-11-22 09:14:39.020 253665 INFO nova.compute.manager [-] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:14:39 np0005532048 nova_compute[253661]: 2025-11-22 09:14:39.022 253665 DEBUG nova.storage.rbd_utils [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] removing snapshot(snap) on rbd image(6c59e9c0-6dc8-47c7-8319-04839c1264af) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 22 04:14:39 np0005532048 nova_compute[253661]: 2025-11-22 09:14:39.035 253665 DEBUG nova.compute.manager [None req-fb63d959-75b7-4d34-b103-1f68f09f5265 - - - - - -] [instance: 6c9b56d3-9edf-4e5a-88e4-c0470a193778] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:14:39 np0005532048 nova_compute[253661]: 2025-11-22 09:14:39.043 253665 DEBUG oslo_concurrency.lockutils [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-bf96e20f-af8f-4db3-977f-cee93b1d7934-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:39 np0005532048 nova_compute[253661]: 2025-11-22 09:14:39.044 253665 DEBUG oslo_concurrency.lockutils [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-bf96e20f-af8f-4db3-977f-cee93b1d7934-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:39 np0005532048 nova_compute[253661]: 2025-11-22 09:14:39.044 253665 DEBUG nova.objects.instance [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid bf96e20f-af8f-4db3-977f-cee93b1d7934 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:14:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 270 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 4.3 MiB/s wr, 308 op/s
Nov 22 04:14:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Nov 22 04:14:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Nov 22 04:14:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Nov 22 04:14:39 np0005532048 nova_compute[253661]: 2025-11-22 09:14:39.877 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:40 np0005532048 nova_compute[253661]: 2025-11-22 09:14:40.223 253665 WARNING nova.compute.manager [None req-d6c3c534-cd63-4206-b119-99b8410ca353 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Image not found during snapshot: nova.exception.ImageNotFound: Image 6c59e9c0-6dc8-47c7-8319-04839c1264af could not be found.#033[00m
Nov 22 04:14:40 np0005532048 nova_compute[253661]: 2025-11-22 09:14:40.290 253665 DEBUG nova.objects.instance [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'pci_requests' on Instance uuid bf96e20f-af8f-4db3-977f-cee93b1d7934 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:14:40 np0005532048 nova_compute[253661]: 2025-11-22 09:14:40.302 253665 DEBUG nova.network.neutron [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:14:40 np0005532048 nova_compute[253661]: 2025-11-22 09:14:40.599 253665 DEBUG nova.policy [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36a143bf132742b89e463c08d46df5c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:14:40 np0005532048 nova_compute[253661]: 2025-11-22 09:14:40.768 253665 DEBUG nova.network.neutron [req-831d3688-e275-4bda-97e7-956c661f881d req-fc027644-d108-4d9b-aca8-c44f2b9ea484 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updated VIF entry in instance network info cache for port 250740a7-7283-491e-b03e-1e30171a9f3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:14:40 np0005532048 nova_compute[253661]: 2025-11-22 09:14:40.769 253665 DEBUG nova.network.neutron [req-831d3688-e275-4bda-97e7-956c661f881d req-fc027644-d108-4d9b-aca8-c44f2b9ea484 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updating instance_info_cache with network_info: [{"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:14:40 np0005532048 nova_compute[253661]: 2025-11-22 09:14:40.795 253665 DEBUG oslo_concurrency.lockutils [req-831d3688-e275-4bda-97e7-956c661f881d req-fc027644-d108-4d9b-aca8-c44f2b9ea484 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-8b620ce3-1fc9-42ba-aafb-709cad3d65a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:14:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 270 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 3.0 MiB/s wr, 194 op/s
Nov 22 04:14:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.060 253665 DEBUG nova.compute.manager [req-9db21721-e18d-43d3-ac4e-59ff31251197 req-7c80e2ad-1754-492c-8426-c7c7aae5941e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-changed-8c2fda4f-7fa8-479c-8573-592021820968 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.061 253665 DEBUG nova.compute.manager [req-9db21721-e18d-43d3-ac4e-59ff31251197 req-7c80e2ad-1754-492c-8426-c7c7aae5941e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Refreshing instance network info cache due to event network-changed-8c2fda4f-7fa8-479c-8573-592021820968. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.061 253665 DEBUG oslo_concurrency.lockutils [req-9db21721-e18d-43d3-ac4e-59ff31251197 req-7c80e2ad-1754-492c-8426-c7c7aae5941e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.062 253665 DEBUG oslo_concurrency.lockutils [req-9db21721-e18d-43d3-ac4e-59ff31251197 req-7c80e2ad-1754-492c-8426-c7c7aae5941e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.062 253665 DEBUG nova.network.neutron [req-9db21721-e18d-43d3-ac4e-59ff31251197 req-7c80e2ad-1754-492c-8426-c7c7aae5941e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Refreshing network info cache for port 8c2fda4f-7fa8-479c-8573-592021820968 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.251 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.251 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.252 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.252 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.252 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.330 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:14:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.333 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.394 253665 INFO nova.compute.manager [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Terminating instance#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.396 253665 DEBUG nova.compute.manager [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.396 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.398 253665 DEBUG nova.network.neutron [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Successfully updated port: 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.417 253665 DEBUG oslo_concurrency.lockutils [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:14:42 np0005532048 kernel: tap044e2e50-96 (unregistering): left promiscuous mode
Nov 22 04:14:42 np0005532048 NetworkManager[48920]: <info>  [1763802882.4359] device (tap044e2e50-96): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.441 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:42Z|00297|binding|INFO|Releasing lport 044e2e50-96f0-48f4-aae3-a5fce049c81f from this chassis (sb_readonly=0)
Nov 22 04:14:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:42Z|00298|binding|INFO|Setting lport 044e2e50-96f0-48f4-aae3-a5fce049c81f down in Southbound
Nov 22 04:14:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:42Z|00299|binding|INFO|Removing iface tap044e2e50-96 ovn-installed in OVS
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.442 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.448 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:c1:a7 10.100.0.6'], port_security=['fa:16:3e:c2:c1:a7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '264036ef-37a3-4681-9c7a-9dc70c4b5282', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6a9a80b05bf4bb3acb99c5e55603a36', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd9500f38-2058-4bd0-8772-507fa96530aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=38352d7a-6341-43b5-abf6-f3e29feaf5d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=044e2e50-96f0-48f4-aae3-a5fce049c81f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:14:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.449 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 044e2e50-96f0-48f4-aae3-a5fce049c81f in datapath 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 unbound from our chassis#033[00m
Nov 22 04:14:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.451 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:14:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.452 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc511aa3-8772-4a6b-929e-da1dee67de6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.452 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 namespace which is not needed anymore#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.475 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:42 np0005532048 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000026.scope: Deactivated successfully.
Nov 22 04:14:42 np0005532048 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000026.scope: Consumed 9.118s CPU time.
Nov 22 04:14:42 np0005532048 systemd-machined[215941]: Machine qemu-43-instance-00000026 terminated.
Nov 22 04:14:42 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[298673]: [NOTICE]   (298680) : haproxy version is 2.8.14-c23fe91
Nov 22 04:14:42 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[298673]: [NOTICE]   (298680) : path to executable is /usr/sbin/haproxy
Nov 22 04:14:42 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[298673]: [WARNING]  (298680) : Exiting Master process...
Nov 22 04:14:42 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[298673]: [ALERT]    (298680) : Current worker (298682) exited with code 143 (Terminated)
Nov 22 04:14:42 np0005532048 neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51[298673]: [WARNING]  (298680) : All workers exited. Exiting... (0)
Nov 22 04:14:42 np0005532048 systemd[1]: libpod-a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09.scope: Deactivated successfully.
Nov 22 04:14:42 np0005532048 podman[298901]: 2025-11-22 09:14:42.589122052 +0000 UTC m=+0.047936819 container died a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:14:42 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09-userdata-shm.mount: Deactivated successfully.
Nov 22 04:14:42 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1ef65a135aef4d51b10b011a564893fe453cc14eebdbfd2c6f1ff1ad066c4d17-merged.mount: Deactivated successfully.
Nov 22 04:14:42 np0005532048 podman[298901]: 2025-11-22 09:14:42.627642818 +0000 UTC m=+0.086457565 container cleanup a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.632 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.641 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.648 253665 INFO nova.virt.libvirt.driver [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance destroyed successfully.#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.648 253665 DEBUG nova.objects.instance [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'resources' on Instance uuid 264036ef-37a3-4681-9c7a-9dc70c4b5282 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:14:42 np0005532048 systemd[1]: libpod-conmon-a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09.scope: Deactivated successfully.
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.660 253665 DEBUG nova.virt.libvirt.vif [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:14:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1900630937',display_name='tempest-ImagesTestJSON-server-1900630937',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1900630937',id=38,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:14:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d6a9a80b05bf4bb3acb99c5e55603a36',ramdisk_id='',reservation_id='r-059ygmd7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1798612164',owner_user_name='tempest-ImagesTestJSON-1798612164-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:14:40Z,user_data=None,user_id='97872d7ce91947789de976821b771135',uuid=264036ef-37a3-4681-9c7a-9dc70c4b5282,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.661 253665 DEBUG nova.network.os_vif_util [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converting VIF {"id": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "address": "fa:16:3e:c2:c1:a7", "network": {"id": "2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51", "bridge": "br-int", "label": "tempest-ImagesTestJSON-878711130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6a9a80b05bf4bb3acb99c5e55603a36", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e2e50-96", "ovs_interfaceid": "044e2e50-96f0-48f4-aae3-a5fce049c81f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.662 253665 DEBUG nova.network.os_vif_util [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:c1:a7,bridge_name='br-int',has_traffic_filtering=True,id=044e2e50-96f0-48f4-aae3-a5fce049c81f,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e2e50-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.662 253665 DEBUG os_vif [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:c1:a7,bridge_name='br-int',has_traffic_filtering=True,id=044e2e50-96f0-48f4-aae3-a5fce049c81f,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e2e50-96') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.664 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.664 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap044e2e50-96, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.667 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.668 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.672 253665 INFO os_vif [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:c1:a7,bridge_name='br-int',has_traffic_filtering=True,id=044e2e50-96f0-48f4-aae3-a5fce049c81f,network=Network(2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e2e50-96')#033[00m
Nov 22 04:14:42 np0005532048 podman[298935]: 2025-11-22 09:14:42.7061841 +0000 UTC m=+0.051514508 container remove a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:14:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.714 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7a7c6b-4c04-4baf-859b-9981b63d9d64]: (4, ('Sat Nov 22 09:14:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09)\na5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09\nSat Nov 22 09:14:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 (a5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09)\na5d711918ba74759a148709877f9cfd0645c80a3df41b39b1e6e8b742edcfd09\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.719 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[76632135-e46c-4b2e-924e-0cff8732d4b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.720 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2abeeeb2-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.724 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:42 np0005532048 kernel: tap2abeeeb2-20: left promiscuous mode
Nov 22 04:14:42 np0005532048 nova_compute[253661]: 2025-11-22 09:14:42.746 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.751 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3d8f9e8b-f385-430c-98c3-ac35b6cd7afd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.768 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2b4afefb-2adb-4da1-8b18-f9895b7018b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.770 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc972ba-85d4-421b-ab68-6cda45435078]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.793 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3e948d0c-09d5-453e-942c-d6000e9b6a88]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572278, 'reachable_time': 41172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298973, 'error': None, 'target': 'ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.796 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2abeeeb2-24a5-4ccd-93c8-05b42d3a1a51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:14:42 np0005532048 systemd[1]: run-netns-ovnmeta\x2d2abeeeb2\x2d24a5\x2d4ccd\x2d93c8\x2d05b42d3a1a51.mount: Deactivated successfully.
Nov 22 04:14:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:42.796 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[1a07d7ad-ce66-44b1-9103-aa7e55879669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:43 np0005532048 nova_compute[253661]: 2025-11-22 09:14:43.100 253665 INFO nova.virt.libvirt.driver [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Deleting instance files /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282_del#033[00m
Nov 22 04:14:43 np0005532048 nova_compute[253661]: 2025-11-22 09:14:43.101 253665 INFO nova.virt.libvirt.driver [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Deletion of /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282_del complete#033[00m
Nov 22 04:14:43 np0005532048 nova_compute[253661]: 2025-11-22 09:14:43.148 253665 DEBUG nova.compute.manager [req-7144b9a4-1f36-484d-a39a-e6ad42d1eab2 req-4a623dcf-4b56-440f-999f-57a1d13bec2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-changed-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:43 np0005532048 nova_compute[253661]: 2025-11-22 09:14:43.148 253665 DEBUG nova.compute.manager [req-7144b9a4-1f36-484d-a39a-e6ad42d1eab2 req-4a623dcf-4b56-440f-999f-57a1d13bec2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Refreshing instance network info cache due to event network-changed-1d31cb94-62b9-4490-a333-cbc7c9ea8f01. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:14:43 np0005532048 nova_compute[253661]: 2025-11-22 09:14:43.149 253665 DEBUG oslo_concurrency.lockutils [req-7144b9a4-1f36-484d-a39a-e6ad42d1eab2 req-4a623dcf-4b56-440f-999f-57a1d13bec2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:14:43 np0005532048 nova_compute[253661]: 2025-11-22 09:14:43.168 253665 INFO nova.compute.manager [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:14:43 np0005532048 nova_compute[253661]: 2025-11-22 09:14:43.169 253665 DEBUG oslo.service.loopingcall [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:14:43 np0005532048 nova_compute[253661]: 2025-11-22 09:14:43.169 253665 DEBUG nova.compute.manager [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:14:43 np0005532048 nova_compute[253661]: 2025-11-22 09:14:43.170 253665 DEBUG nova.network.neutron [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:14:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 270 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 189 op/s
Nov 22 04:14:43 np0005532048 nova_compute[253661]: 2025-11-22 09:14:43.939 253665 DEBUG nova.network.neutron [req-9db21721-e18d-43d3-ac4e-59ff31251197 req-7c80e2ad-1754-492c-8426-c7c7aae5941e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updated VIF entry in instance network info cache for port 8c2fda4f-7fa8-479c-8573-592021820968. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:14:43 np0005532048 nova_compute[253661]: 2025-11-22 09:14:43.940 253665 DEBUG nova.network.neutron [req-9db21721-e18d-43d3-ac4e-59ff31251197 req-7c80e2ad-1754-492c-8426-c7c7aae5941e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updating instance_info_cache with network_info: [{"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:14:43 np0005532048 nova_compute[253661]: 2025-11-22 09:14:43.961 253665 DEBUG oslo_concurrency.lockutils [req-9db21721-e18d-43d3-ac4e-59ff31251197 req-7c80e2ad-1754-492c-8426-c7c7aae5941e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:14:43 np0005532048 nova_compute[253661]: 2025-11-22 09:14:43.962 253665 DEBUG oslo_concurrency.lockutils [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:14:43 np0005532048 nova_compute[253661]: 2025-11-22 09:14:43.963 253665 DEBUG nova.network.neutron [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.226 253665 WARNING nova.network.neutron [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] 5e2cd359-c68f-4256-90e8-0ad40aff8a00 already exists in list: networks containing: ['5e2cd359-c68f-4256-90e8-0ad40aff8a00']. ignoring it#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.276 253665 DEBUG nova.network.neutron [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.297 253665 INFO nova.compute.manager [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Took 1.13 seconds to deallocate network for instance.#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.347 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.348 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.446 253665 DEBUG oslo_concurrency.processutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.496 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.543 253665 DEBUG nova.compute.manager [req-cfc6a970-35ad-4d53-ab67-0c3447ddfca4 req-32584332-bfc5-4704-b7d0-d72cf660b31f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received event network-vif-deleted-044e2e50-96f0-48f4-aae3-a5fce049c81f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:14:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3463990540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.861 253665 DEBUG oslo_concurrency.processutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.867 253665 DEBUG nova.compute.provider_tree [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.878 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.881 253665 DEBUG nova.scheduler.client.report [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.899 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.924 253665 INFO nova.scheduler.client.report [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Deleted allocations for instance 264036ef-37a3-4681-9c7a-9dc70c4b5282#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.989 253665 DEBUG oslo_concurrency.lockutils [None req-28b29987-12d7-4908-81eb-0f9340871839 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.991 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.991 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.991 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.992 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.993 253665 INFO nova.compute.manager [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Terminating instance#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.993 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquiring lock "refresh_cache-264036ef-37a3-4681-9c7a-9dc70c4b5282" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.994 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Acquired lock "refresh_cache-264036ef-37a3-4681-9c7a-9dc70c4b5282" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:14:44 np0005532048 nova_compute[253661]: 2025-11-22 09:14:44.994 253665 DEBUG nova.network.neutron [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.014 253665 DEBUG nova.compute.utils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Can not refresh info_cache because instance was not found refresh_info_cache_for_instance /usr/lib/python3.9/site-packages/nova/compute/utils.py:1010#033[00m
Nov 22 04:14:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 219 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.9 MiB/s wr, 206 op/s
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.306 253665 DEBUG nova.network.neutron [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.521 253665 DEBUG nova.compute.manager [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received event network-vif-unplugged-044e2e50-96f0-48f4-aae3-a5fce049c81f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.523 253665 DEBUG oslo_concurrency.lockutils [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.524 253665 DEBUG oslo_concurrency.lockutils [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.525 253665 DEBUG oslo_concurrency.lockutils [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.525 253665 DEBUG nova.compute.manager [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] No waiting events found dispatching network-vif-unplugged-044e2e50-96f0-48f4-aae3-a5fce049c81f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.526 253665 DEBUG nova.compute.manager [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received event network-vif-unplugged-044e2e50-96f0-48f4-aae3-a5fce049c81f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.526 253665 DEBUG nova.compute.manager [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received event network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.527 253665 DEBUG oslo_concurrency.lockutils [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.528 253665 DEBUG oslo_concurrency.lockutils [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.529 253665 DEBUG oslo_concurrency.lockutils [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.530 253665 DEBUG nova.compute.manager [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] No waiting events found dispatching network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.530 253665 WARNING nova.compute.manager [req-f4f9bd4f-4316-4e45-b3ac-35a74ed25e6d req-bf041fb1-84cd-44dd-b2bc-c3ae5d2142c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Received unexpected event network-vif-plugged-044e2e50-96f0-48f4-aae3-a5fce049c81f for instance with vm_state deleted and task_state deleting.#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.584 253665 DEBUG nova.network.neutron [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.602 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Releasing lock "refresh_cache-264036ef-37a3-4681-9c7a-9dc70c4b5282" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.603 253665 DEBUG nova.compute.manager [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.610 253665 DEBUG nova.virt.libvirt.driver [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] During wait destroy, instance disappeared. _wait_for_destroy /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1527#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.610 253665 INFO nova.virt.libvirt.driver [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance destroyed successfully.#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.611 253665 DEBUG nova.objects.instance [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lazy-loading 'resources' on Instance uuid 264036ef-37a3-4681-9c7a-9dc70c4b5282 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.644 253665 INFO nova.virt.libvirt.driver [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Deletion of /var/lib/nova/instances/264036ef-37a3-4681-9c7a-9dc70c4b5282_del complete#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.684 253665 INFO nova.compute.manager [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Took 0.08 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.685 253665 DEBUG oslo.service.loopingcall [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.685 253665 DEBUG nova.compute.manager [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.685 253665 DEBUG nova.network.neutron [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.832 253665 DEBUG nova.network.neutron [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.844 253665 DEBUG nova.network.neutron [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.860 253665 INFO nova.compute.manager [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Took 0.17 seconds to deallocate network for instance.#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.889 253665 INFO nova.compute.manager [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Instance disappeared during terminate#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.890 253665 DEBUG oslo_concurrency.lockutils [None req-a5de926c-dc86-407b-a8f7-d75d08e8bae0 97872d7ce91947789de976821b771135 d6a9a80b05bf4bb3acb99c5e55603a36 - - default default] Lock "264036ef-37a3-4681-9c7a-9dc70c4b5282" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.960 253665 DEBUG nova.network.neutron [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updating instance_info_cache with network_info: [{"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.982 253665 DEBUG oslo_concurrency.lockutils [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.984 253665 DEBUG oslo_concurrency.lockutils [req-7144b9a4-1f36-484d-a39a-e6ad42d1eab2 req-4a623dcf-4b56-440f-999f-57a1d13bec2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.984 253665 DEBUG nova.network.neutron [req-7144b9a4-1f36-484d-a39a-e6ad42d1eab2 req-4a623dcf-4b56-440f-999f-57a1d13bec2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Refreshing network info cache for port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.987 253665 DEBUG nova.virt.libvirt.vif [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:14:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659535483',display_name='tempest-tempest.common.compute-instance-659535483',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659535483',id=36,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:14:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-kx67gr3y',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:14:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=bf96e20f-af8f-4db3-977f-cee93b1d7934,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.987 253665 DEBUG nova.network.os_vif_util [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.988 253665 DEBUG nova.network.os_vif_util [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.989 253665 DEBUG os_vif [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.989 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.989 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.990 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.993 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.993 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d31cb94-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:45 np0005532048 nova_compute[253661]: 2025-11-22 09:14:45.994 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d31cb94-62, col_values=(('external_ids', {'iface-id': '1d31cb94-62b9-4490-a333-cbc7c9ea8f01', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:ce:ed', 'vm-uuid': 'bf96e20f-af8f-4db3-977f-cee93b1d7934'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:46 np0005532048 NetworkManager[48920]: <info>  [1763802886.0204] manager: (tap1d31cb94-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.019 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.024 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.025 253665 INFO os_vif [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62')#033[00m
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.026 253665 DEBUG nova.virt.libvirt.vif [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:14:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659535483',display_name='tempest-tempest.common.compute-instance-659535483',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659535483',id=36,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:14:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-kx67gr3y',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:14:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=bf96e20f-af8f-4db3-977f-cee93b1d7934,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.028 253665 DEBUG nova.network.os_vif_util [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.029 253665 DEBUG nova.network.os_vif_util [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.032 253665 DEBUG nova.virt.libvirt.guest [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] attach device xml: <interface type="ethernet">
Nov 22 04:14:46 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:a2:ce:ed"/>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:  <target dev="tap1d31cb94-62"/>
Nov 22 04:14:46 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:14:46 np0005532048 nova_compute[253661]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 22 04:14:46 np0005532048 kernel: tap1d31cb94-62: entered promiscuous mode
Nov 22 04:14:46 np0005532048 NetworkManager[48920]: <info>  [1763802886.0453] manager: (tap1d31cb94-62): new Tun device (/org/freedesktop/NetworkManager/Devices/133)
Nov 22 04:14:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:46Z|00300|binding|INFO|Claiming lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for this chassis.
Nov 22 04:14:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:46Z|00301|binding|INFO|1d31cb94-62b9-4490-a333-cbc7c9ea8f01: Claiming fa:16:3e:a2:ce:ed 10.100.0.3
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.048 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.053 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:ce:ed 10.100.0.3'], port_security=['fa:16:3e:a2:ce:ed 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1216307044', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'bf96e20f-af8f-4db3-977f-cee93b1d7934', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1216307044', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1d31cb94-62b9-4490-a333-cbc7c9ea8f01) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:14:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.055 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 bound to our chassis#033[00m
Nov 22 04:14:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.056 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00#033[00m
Nov 22 04:14:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:46Z|00302|binding|INFO|Setting lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 ovn-installed in OVS
Nov 22 04:14:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:46Z|00303|binding|INFO|Setting lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 up in Southbound
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.068 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.070 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.074 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3d272a25-cb4b-451e-a15a-8da5db0a0866]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:46 np0005532048 systemd-udevd[299021]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:14:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.108 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[529d858c-06d0-4229-a61e-9bfbe5eaa0c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.114 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b2c9da-6826-4ab0-bdf9-0a9d1da1fb99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:46 np0005532048 NetworkManager[48920]: <info>  [1763802886.1159] device (tap1d31cb94-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:14:46 np0005532048 NetworkManager[48920]: <info>  [1763802886.1178] device (tap1d31cb94-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.126 253665 DEBUG nova.virt.libvirt.driver [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.126 253665 DEBUG nova.virt.libvirt.driver [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.126 253665 DEBUG nova.virt.libvirt.driver [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:48:a2:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.127 253665 DEBUG nova.virt.libvirt.driver [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] No VIF found with MAC fa:16:3e:a2:ce:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.150 253665 DEBUG nova.virt.libvirt.guest [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:14:46 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:  <nova:name>tempest-tempest.common.compute-instance-659535483</nova:name>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:14:46</nova:creationTime>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:14:46 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:    <nova:port uuid="8c2fda4f-7fa8-479c-8573-592021820968">
Nov 22 04:14:46 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:    <nova:port uuid="1d31cb94-62b9-4490-a333-cbc7c9ea8f01">
Nov 22 04:14:46 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:14:46 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:14:46 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:14:46 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 22 04:14:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.152 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[29315ca6-d01f-4eaf-84c7-332fc497860a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.172 253665 DEBUG oslo_concurrency.lockutils [None req-f346e51f-d893-4bd4-ac54-5d7956e77cc7 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-bf96e20f-af8f-4db3-977f-cee93b1d7934-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.173 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5d212591-0631-4c12-9df8-9e94bc2e721b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568640, 'reachable_time': 17138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299026, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.195 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2c7397-4012-4b86-9bad-23334e9beb98]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568657, 'tstamp': 568657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299027, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568661, 'tstamp': 568661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299027, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.197 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.199 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:46 np0005532048 nova_compute[253661]: 2025-11-22 09:14:46.200 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.201 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.201 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:14:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.201 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:46.201 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:14:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Nov 22 04:14:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Nov 22 04:14:46 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.092 253665 DEBUG oslo_concurrency.lockutils [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "interface-bf96e20f-af8f-4db3-977f-cee93b1d7934-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.093 253665 DEBUG oslo_concurrency.lockutils [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-bf96e20f-af8f-4db3-977f-cee93b1d7934-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.107 253665 DEBUG nova.objects.instance [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'flavor' on Instance uuid bf96e20f-af8f-4db3-977f-cee93b1d7934 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.125 253665 DEBUG nova.virt.libvirt.vif [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:14:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659535483',display_name='tempest-tempest.common.compute-instance-659535483',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659535483',id=36,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:14:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-kx67gr3y',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:14:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=bf96e20f-af8f-4db3-977f-cee93b1d7934,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.126 253665 DEBUG nova.network.os_vif_util [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.127 253665 DEBUG nova.network.os_vif_util [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.129 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.132 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.135 253665 DEBUG nova.virt.libvirt.driver [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Attempting to detach device tap1d31cb94-62 from instance bf96e20f-af8f-4db3-977f-cee93b1d7934 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.135 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] detach device xml: <interface type="ethernet">
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:a2:ce:ed"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <target dev="tap1d31cb94-62"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:14:47 np0005532048 nova_compute[253661]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.152 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.157 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface>not found in domain: <domain type='kvm' id='41'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <name>instance-00000024</name>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <uuid>bf96e20f-af8f-4db3-977f-cee93b1d7934</uuid>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:name>tempest-tempest.common.compute-instance-659535483</nova:name>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:14:46</nova:creationTime>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:port uuid="8c2fda4f-7fa8-479c-8573-592021820968">
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:port uuid="1d31cb94-62b9-4490-a333-cbc7c9ea8f01">
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:14:47 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <resource>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <partition>/machine</partition>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </resource>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <entry name='serial'>bf96e20f-af8f-4db3-977f-cee93b1d7934</entry>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <entry name='uuid'>bf96e20f-af8f-4db3-977f-cee93b1d7934</entry>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <cpu mode='custom' match='exact' check='full'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <vendor>AMD</vendor>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='x2apic'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc-deadline'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='hypervisor'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc_adjust'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='spec-ctrl'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='stibp'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='ssbd'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='cmp_legacy'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='overflow-recov'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='succor'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='ibrs'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='amd-ssbd'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='virt-ssbd'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='lbrv'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='tsc-scale'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='vmcb-clean'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='flushbyasid'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pause-filter'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pfthreshold'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='xsaves'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svm'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='topoext'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='npt'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='nrip-save'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/bf96e20f-af8f-4db3-977f-cee93b1d7934_disk' index='2'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='virtio-disk0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/bf96e20f-af8f-4db3-977f-cee93b1d7934_disk.config' index='1'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='sata0-0-0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pcie.0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.1'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.2'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.3'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.4'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.5'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.6'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.7'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.8'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.9'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.10'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.11'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.12'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.13'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.14'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.15'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.16'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.17'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.18'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.19'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.20'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.21'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='22' port='0x25'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.22'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='23' port='0x26'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.23'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='24' port='0x27'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.24'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='25' port='0x28'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.25'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-pci-bridge'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.26'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='usb'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='sata' index='0'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='ide'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:48:a2:dd'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target dev='tap8c2fda4f-7f'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='net0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:a2:ce:ed'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target dev='tap1d31cb94-62'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='net1'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <serial type='pty'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934/console.log' append='off'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target type='isa-serial' port='0'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:        <model name='isa-serial'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      </target>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <console type='pty' tty='/dev/pts/0'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934/console.log' append='off'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target type='serial' port='0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </console>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <input type='tablet' bus='usb'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='input0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='usb' bus='0' port='1'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <input type='mouse' bus='ps2'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='input1'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <input type='keyboard' bus='ps2'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='input2'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <listen type='address' address='::0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <audio id='1' type='none'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model type='virtio' heads='1' primary='yes'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='video0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <watchdog model='itco' action='reset'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='watchdog0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </watchdog>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <memballoon model='virtio'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <stats period='10'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='balloon0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <rng model='virtio'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <backend model='random'>/dev/urandom</backend>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='rng0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <label>system_u:system_r:svirt_t:s0:c565,c929</label>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c565,c929</imagelabel>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <label>+107:+107</label>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <imagelabel>+107:+107</imagelabel>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:14:47 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:14:47 np0005532048 nova_compute[253661]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.157 253665 INFO nova.virt.libvirt.driver [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully detached device tap1d31cb94-62 from instance bf96e20f-af8f-4db3-977f-cee93b1d7934 from the persistent domain config.
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.159 253665 DEBUG nova.virt.libvirt.driver [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] (1/8): Attempting to detach device tap1d31cb94-62 with device alias net1 from instance bf96e20f-af8f-4db3-977f-cee93b1d7934 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.159 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] detach device xml: <interface type="ethernet">
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:a2:ce:ed"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <target dev="tap1d31cb94-62"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:14:47 np0005532048 nova_compute[253661]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 04:14:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 200 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1001 KiB/s wr, 103 op/s
Nov 22 04:14:47 np0005532048 kernel: tap1d31cb94-62 (unregistering): left promiscuous mode
Nov 22 04:14:47 np0005532048 NetworkManager[48920]: <info>  [1763802887.2738] device (tap1d31cb94-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:14:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:47Z|00304|binding|INFO|Releasing lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 from this chassis (sb_readonly=0)
Nov 22 04:14:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:47Z|00305|binding|INFO|Setting lport 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 down in Southbound
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.284 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:47Z|00306|binding|INFO|Removing iface tap1d31cb94-62 ovn-installed in OVS
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.287 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.288 253665 DEBUG nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Received event <DeviceRemovedEvent: 1763802887.2880626, bf96e20f-af8f-4db3-977f-cee93b1d7934 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.289 253665 DEBUG nova.virt.libvirt.driver [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Start waiting for the detach event from libvirt for device tap1d31cb94-62 with device alias net1 for instance bf96e20f-af8f-4db3-977f-cee93b1d7934 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.290 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 04:14:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.292 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:ce:ed 10.100.0.3'], port_security=['fa:16:3e:a2:ce:ed 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1216307044', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'bf96e20f-af8f-4db3-977f-cee93b1d7934', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1216307044', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'e84a0ef5-f0e2-4cdc-b9e8-144c7479a44b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1d31cb94-62b9-4490-a333-cbc7c9ea8f01) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:14:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.294 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.295 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a2:ce:ed"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1d31cb94-62"/></interface>not found in domain: <domain type='kvm' id='41'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <name>instance-00000024</name>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <uuid>bf96e20f-af8f-4db3-977f-cee93b1d7934</uuid>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:name>tempest-tempest.common.compute-instance-659535483</nova:name>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:14:46</nova:creationTime>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:port uuid="8c2fda4f-7fa8-479c-8573-592021820968">
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:port uuid="1d31cb94-62b9-4490-a333-cbc7c9ea8f01">
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:14:47 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <resource>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <partition>/machine</partition>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </resource>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <entry name='serial'>bf96e20f-af8f-4db3-977f-cee93b1d7934</entry>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <entry name='uuid'>bf96e20f-af8f-4db3-977f-cee93b1d7934</entry>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <cpu mode='custom' match='exact' check='full'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <vendor>AMD</vendor>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='x2apic'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc-deadline'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='hypervisor'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc_adjust'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='spec-ctrl'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='stibp'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='ssbd'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='cmp_legacy'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='overflow-recov'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='succor'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='ibrs'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='amd-ssbd'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='virt-ssbd'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='lbrv'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='tsc-scale'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='vmcb-clean'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='flushbyasid'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pause-filter'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pfthreshold'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='xsaves'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svm'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='require' name='topoext'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='npt'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <feature policy='disable' name='nrip-save'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/bf96e20f-af8f-4db3-977f-cee93b1d7934_disk' index='2'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='virtio-disk0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/bf96e20f-af8f-4db3-977f-cee93b1d7934_disk.config' index='1'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='sata0-0-0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pcie.0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.1'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.2'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.3'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.4'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.5'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.6'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.7'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.8'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.9'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.10'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.11'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.12'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.13'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.14'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.15'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.16'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.17'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.18'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.19'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.20'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.21'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='22' port='0x25'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.22'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='23' port='0x26'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.23'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='24' port='0x27'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.24'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target chassis='25' port='0x28'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.25'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model name='pcie-pci-bridge'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='pci.26'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='usb'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <controller type='sata' index='0'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='ide'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:48:a2:dd'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target dev='tap8c2fda4f-7f'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='net0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <serial type='pty'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934/console.log' append='off'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target type='isa-serial' port='0'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:        <model name='isa-serial'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      </target>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <console type='pty' tty='/dev/pts/0'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934/console.log' append='off'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <target type='serial' port='0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </console>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <input type='tablet' bus='usb'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='input0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='usb' bus='0' port='1'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <input type='mouse' bus='ps2'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='input1'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <input type='keyboard' bus='ps2'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='input2'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <listen type='address' address='::0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <audio id='1' type='none'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <model type='virtio' heads='1' primary='yes'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='video0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <watchdog model='itco' action='reset'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='watchdog0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </watchdog>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <memballoon model='virtio'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <stats period='10'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='balloon0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <rng model='virtio'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <backend model='random'>/dev/urandom</backend>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <alias name='rng0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <label>system_u:system_r:svirt_t:s0:c565,c929</label>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c565,c929</imagelabel>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 04:14:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.297 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <label>+107:+107</label>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <imagelabel>+107:+107</imagelabel>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:14:47 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:14:47 np0005532048 nova_compute[253661]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.296 253665 INFO nova.virt.libvirt.driver [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully detached device tap1d31cb94-62 from instance bf96e20f-af8f-4db3-977f-cee93b1d7934 from the live domain config.#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.296 253665 DEBUG nova.virt.libvirt.vif [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:14:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659535483',display_name='tempest-tempest.common.compute-instance-659535483',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659535483',id=36,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:14:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-kx67gr3y',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:14:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=bf96e20f-af8f-4db3-977f-cee93b1d7934,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.297 253665 DEBUG nova.network.os_vif_util [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.298 253665 DEBUG nova.network.os_vif_util [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.298 253665 DEBUG os_vif [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.300 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.301 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d31cb94-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.303 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.304 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.308 253665 INFO os_vif [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62')#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.309 253665 DEBUG nova.virt.libvirt.guest [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:name>tempest-tempest.common.compute-instance-659535483</nova:name>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:14:47</nova:creationTime>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:user uuid="36a143bf132742b89e463c08d46df5c2">tempest-AttachInterfacesTestJSON-580246692-project-member</nova:user>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:project uuid="da9c9bd9364947fb9fa2712525f20b3d">tempest-AttachInterfacesTestJSON-580246692</nova:project>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    <nova:port uuid="8c2fda4f-7fa8-479c-8573-592021820968">
Nov 22 04:14:47 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:14:47 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:14:47 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:14:47 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 22 04:14:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.322 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a881ca58-05d5-444a-884e-85c594b59a70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.367 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9c553f1b-0bc1-4df2-a797-894e68ef890f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.371 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[995cfc05-b8cb-4a20-9d97-cc3d205ce78c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.410 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b23cdb-a8f3-44d5-a31c-f4710383d267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.430 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f84650be-9253-40b3-a219-4e7ccd6684a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568640, 'reachable_time': 17138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299037, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.446 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[01dc51e4-8b93-4d9d-9e94-c26400a3f327]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568657, 'tstamp': 568657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299038, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568661, 'tstamp': 568661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299038, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.449 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.451 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.452 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.452 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:14:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.453 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:47.453 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.583 253665 DEBUG nova.compute.manager [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.584 253665 DEBUG oslo_concurrency.lockutils [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.584 253665 DEBUG oslo_concurrency.lockutils [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.584 253665 DEBUG oslo_concurrency.lockutils [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.584 253665 DEBUG nova.compute.manager [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] No waiting events found dispatching network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.585 253665 WARNING nova.compute.manager [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received unexpected event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.585 253665 DEBUG nova.compute.manager [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.585 253665 DEBUG oslo_concurrency.lockutils [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.585 253665 DEBUG oslo_concurrency.lockutils [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.585 253665 DEBUG oslo_concurrency.lockutils [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.586 253665 DEBUG nova.compute.manager [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] No waiting events found dispatching network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:14:47 np0005532048 nova_compute[253661]: 2025-11-22 09:14:47.586 253665 WARNING nova.compute.manager [req-f5f3343e-ad07-4081-80a0-1e629bdd211a req-dfb6e809-92f0-4433-b029-d33f17556558 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received unexpected event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.048 253665 DEBUG nova.network.neutron [req-7144b9a4-1f36-484d-a39a-e6ad42d1eab2 req-4a623dcf-4b56-440f-999f-57a1d13bec2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updated VIF entry in instance network info cache for port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.049 253665 DEBUG nova.network.neutron [req-7144b9a4-1f36-484d-a39a-e6ad42d1eab2 req-4a623dcf-4b56-440f-999f-57a1d13bec2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updating instance_info_cache with network_info: [{"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.063 253665 DEBUG oslo_concurrency.lockutils [req-7144b9a4-1f36-484d-a39a-e6ad42d1eab2 req-4a623dcf-4b56-440f-999f-57a1d13bec2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:14:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 200 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 852 KiB/s wr, 88 op/s
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.667 253665 DEBUG nova.compute.manager [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-unplugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.668 253665 DEBUG oslo_concurrency.lockutils [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.668 253665 DEBUG oslo_concurrency.lockutils [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.668 253665 DEBUG oslo_concurrency.lockutils [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.668 253665 DEBUG nova.compute.manager [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] No waiting events found dispatching network-vif-unplugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.668 253665 WARNING nova.compute.manager [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received unexpected event network-vif-unplugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.669 253665 DEBUG nova.compute.manager [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.669 253665 DEBUG oslo_concurrency.lockutils [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.669 253665 DEBUG oslo_concurrency.lockutils [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.669 253665 DEBUG oslo_concurrency.lockutils [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.669 253665 DEBUG nova.compute.manager [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] No waiting events found dispatching network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.670 253665 WARNING nova.compute.manager [req-d3cd9262-31c3-4945-8f45-6aec6631572f req-23d883c1-968b-46e1-a234-cbf86e92889c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received unexpected event network-vif-plugged-1d31cb94-62b9-4490-a333-cbc7c9ea8f01 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.881 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.962 253665 DEBUG oslo_concurrency.lockutils [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.962 253665 DEBUG oslo_concurrency.lockutils [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquired lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:14:49 np0005532048 nova_compute[253661]: 2025-11-22 09:14:49.963 253665 DEBUG nova.network.neutron [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:14:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:50.335 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:50 np0005532048 podman[299040]: 2025-11-22 09:14:50.367797667 +0000 UTC m=+0.061413290 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:14:50 np0005532048 podman[299039]: 2025-11-22 09:14:50.389296305 +0000 UTC m=+0.082809356 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:14:50 np0005532048 nova_compute[253661]: 2025-11-22 09:14:50.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:50 np0005532048 nova_compute[253661]: 2025-11-22 09:14:50.994 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:50 np0005532048 nova_compute[253661]: 2025-11-22 09:14:50.995 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:50 np0005532048 nova_compute[253661]: 2025-11-22 09:14:50.995 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:50 np0005532048 nova_compute[253661]: 2025-11-22 09:14:50.996 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:50 np0005532048 nova_compute[253661]: 2025-11-22 09:14:50.996 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:50 np0005532048 nova_compute[253661]: 2025-11-22 09:14:50.997 253665 INFO nova.compute.manager [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Terminating instance#033[00m
Nov 22 04:14:50 np0005532048 nova_compute[253661]: 2025-11-22 09:14:50.998 253665 DEBUG nova.compute.manager [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:14:51 np0005532048 kernel: tap8c2fda4f-7f (unregistering): left promiscuous mode
Nov 22 04:14:51 np0005532048 NetworkManager[48920]: <info>  [1763802891.0478] device (tap8c2fda4f-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:14:51 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:51Z|00307|binding|INFO|Releasing lport 8c2fda4f-7fa8-479c-8573-592021820968 from this chassis (sb_readonly=0)
Nov 22 04:14:51 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:51Z|00308|binding|INFO|Setting lport 8c2fda4f-7fa8-479c-8573-592021820968 down in Southbound
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.056 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:51 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:51Z|00309|binding|INFO|Removing iface tap8c2fda4f-7f ovn-installed in OVS
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.058 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.062 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:a2:dd 10.100.0.7'], port_security=['fa:16:3e:48:a2:dd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bf96e20f-af8f-4db3-977f-cee93b1d7934', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccbdff20-588a-43ee-a362-2464b4cf13b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8c2fda4f-7fa8-479c-8573-592021820968) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:14:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.064 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8c2fda4f-7fa8-479c-8573-592021820968 in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis#033[00m
Nov 22 04:14:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.065 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.081 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.083 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6a7c196c-6ee6-4f9e-a0ed-71e674c00d5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:51 np0005532048 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000024.scope: Deactivated successfully.
Nov 22 04:14:51 np0005532048 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000024.scope: Consumed 14.502s CPU time.
Nov 22 04:14:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.110 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dc60ee3b-0c1c-4a3e-87cf-e5a44145d5c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:51 np0005532048 systemd-machined[215941]: Machine qemu-41-instance-00000024 terminated.
Nov 22 04:14:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.114 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[579fc746-9feb-4585-b44a-6a425fed0b28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.150 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6d09a195-ff6a-4b54-89a6-49261b3a6ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.169 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[57a4c176-ba47-412e-9640-13b3eb870a86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e2cd359-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:bd:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568640, 'reachable_time': 17138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299090, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.187 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bfac1866-1073-4f2d-8853-ba5ead78f1d6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568657, 'tstamp': 568657}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299091, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e2cd359-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568661, 'tstamp': 568661}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299091, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.192 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.195 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.199 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.200 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e2cd359-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.201 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:14:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.202 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e2cd359-c0, col_values=(('external_ids', {'iface-id': '8bfe00e9-2c65-4f8c-8588-d9f34b25ac93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:51.202 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.220 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.227 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.235 253665 INFO nova.virt.libvirt.driver [-] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Instance destroyed successfully.#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.235 253665 DEBUG nova.objects.instance [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'resources' on Instance uuid bf96e20f-af8f-4db3-977f-cee93b1d7934 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.250 253665 DEBUG nova.virt.libvirt.vif [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:14:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659535483',display_name='tempest-tempest.common.compute-instance-659535483',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659535483',id=36,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:14:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-kx67gr3y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:14:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=bf96e20f-af8f-4db3-977f-cee93b1d7934,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.252 253665 DEBUG nova.network.os_vif_util [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.253 253665 DEBUG nova.network.os_vif_util [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:a2:dd,bridge_name='br-int',has_traffic_filtering=True,id=8c2fda4f-7fa8-479c-8573-592021820968,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2fda4f-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.253 253665 DEBUG os_vif [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:a2:dd,bridge_name='br-int',has_traffic_filtering=True,id=8c2fda4f-7fa8-479c-8573-592021820968,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2fda4f-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.254 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.255 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c2fda4f-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.256 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.257 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.259 253665 INFO os_vif [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:a2:dd,bridge_name='br-int',has_traffic_filtering=True,id=8c2fda4f-7fa8-479c-8573-592021820968,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2fda4f-7f')#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.260 253665 DEBUG nova.virt.libvirt.vif [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:14:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659535483',display_name='tempest-tempest.common.compute-instance-659535483',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659535483',id=36,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:14:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-kx67gr3y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:14:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=bf96e20f-af8f-4db3-977f-cee93b1d7934,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.260 253665 DEBUG nova.network.os_vif_util [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "address": "fa:16:3e:a2:ce:ed", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d31cb94-62", "ovs_interfaceid": "1d31cb94-62b9-4490-a333-cbc7c9ea8f01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.261 253665 DEBUG nova.network.os_vif_util [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.261 253665 DEBUG os_vif [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.262 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.262 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d31cb94-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.262 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.264 253665 INFO os_vif [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:ce:ed,bridge_name='br-int',has_traffic_filtering=True,id=1d31cb94-62b9-4490-a333-cbc7c9ea8f01,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1d31cb94-62')#033[00m
Nov 22 04:14:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 200 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 998 KiB/s rd, 803 KiB/s wr, 83 op/s
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.328407) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802891328455, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2292, "num_deletes": 518, "total_data_size": 2935068, "memory_usage": 2982248, "flush_reason": "Manual Compaction"}
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802891342674, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1911362, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28100, "largest_seqno": 30391, "table_properties": {"data_size": 1903314, "index_size": 4098, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 23405, "raw_average_key_size": 20, "raw_value_size": 1883679, "raw_average_value_size": 1646, "num_data_blocks": 181, "num_entries": 1144, "num_filter_entries": 1144, "num_deletions": 518, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763802730, "oldest_key_time": 1763802730, "file_creation_time": 1763802891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 14322 microseconds, and 5975 cpu microseconds.
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.342724) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1911362 bytes OK
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.342747) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.345205) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.345217) EVENT_LOG_v1 {"time_micros": 1763802891345213, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.345234) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 2924170, prev total WAL file size 2924170, number of live WAL files 2.
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.346369) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(1866KB)], [62(8829KB)]
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802891346439, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 10952927, "oldest_snapshot_seqno": -1}
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5491 keys, 8517169 bytes, temperature: kUnknown
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802891392298, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 8517169, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8479703, "index_size": 22636, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 137498, "raw_average_key_size": 25, "raw_value_size": 8380109, "raw_average_value_size": 1526, "num_data_blocks": 929, "num_entries": 5491, "num_filter_entries": 5491, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763802891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.392625) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 8517169 bytes
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.394403) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 238.3 rd, 185.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 8.6 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(10.2) write-amplify(4.5) OK, records in: 6460, records dropped: 969 output_compression: NoCompression
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.394421) EVENT_LOG_v1 {"time_micros": 1763802891394413, "job": 34, "event": "compaction_finished", "compaction_time_micros": 45967, "compaction_time_cpu_micros": 24870, "output_level": 6, "num_output_files": 1, "total_output_size": 8517169, "num_input_records": 6460, "num_output_records": 5491, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802891394863, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802891396239, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.346215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.396522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.396532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.396534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.396536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:14:51 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:14:51.396537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.687 253665 INFO nova.virt.libvirt.driver [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Deleting instance files /var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934_del#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.688 253665 INFO nova.virt.libvirt.driver [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Deletion of /var/lib/nova/instances/bf96e20f-af8f-4db3-977f-cee93b1d7934_del complete#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.708 253665 INFO nova.network.neutron [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Port 1d31cb94-62b9-4490-a333-cbc7c9ea8f01 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.709 253665 DEBUG nova.network.neutron [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updating instance_info_cache with network_info: [{"id": "8c2fda4f-7fa8-479c-8573-592021820968", "address": "fa:16:3e:48:a2:dd", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2fda4f-7f", "ovs_interfaceid": "8c2fda4f-7fa8-479c-8573-592021820968", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.749 253665 DEBUG oslo_concurrency.lockutils [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Releasing lock "refresh_cache-bf96e20f-af8f-4db3-977f-cee93b1d7934" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.758 253665 INFO nova.compute.manager [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.758 253665 DEBUG oslo.service.loopingcall [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.759 253665 DEBUG nova.compute.manager [-] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.759 253665 DEBUG nova.network.neutron [-] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:14:51 np0005532048 nova_compute[253661]: 2025-11-22 09:14:51.782 253665 DEBUG oslo_concurrency.lockutils [None req-3ed35344-f053-4348-b050-014cf35e5152 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "interface-bf96e20f-af8f-4db3-977f-cee93b1d7934-1d31cb94-62b9-4490-a333-cbc7c9ea8f01" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:14:52
Nov 22 04:14:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:14:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:14:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'backups']
Nov 22 04:14:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:14:53 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:14:53 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:14:53 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:14:53 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:14:53 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:14:53 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:14:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 167 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 8.5 KiB/s wr, 63 op/s
Nov 22 04:14:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:53Z|00310|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 04:14:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:53Z|00311|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 04:14:53 np0005532048 nova_compute[253661]: 2025-11-22 09:14:53.769 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:53 np0005532048 nova_compute[253661]: 2025-11-22 09:14:53.849 253665 DEBUG nova.compute.manager [req-253bc7c0-9c1c-4b24-add8-5258098f817c req-0db0bdcd-be87-4a1f-b32e-1126e44bdd25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-unplugged-8c2fda4f-7fa8-479c-8573-592021820968 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:53 np0005532048 nova_compute[253661]: 2025-11-22 09:14:53.850 253665 DEBUG oslo_concurrency.lockutils [req-253bc7c0-9c1c-4b24-add8-5258098f817c req-0db0bdcd-be87-4a1f-b32e-1126e44bdd25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:53 np0005532048 nova_compute[253661]: 2025-11-22 09:14:53.851 253665 DEBUG oslo_concurrency.lockutils [req-253bc7c0-9c1c-4b24-add8-5258098f817c req-0db0bdcd-be87-4a1f-b32e-1126e44bdd25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:53 np0005532048 nova_compute[253661]: 2025-11-22 09:14:53.851 253665 DEBUG oslo_concurrency.lockutils [req-253bc7c0-9c1c-4b24-add8-5258098f817c req-0db0bdcd-be87-4a1f-b32e-1126e44bdd25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:53 np0005532048 nova_compute[253661]: 2025-11-22 09:14:53.851 253665 DEBUG nova.compute.manager [req-253bc7c0-9c1c-4b24-add8-5258098f817c req-0db0bdcd-be87-4a1f-b32e-1126e44bdd25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] No waiting events found dispatching network-vif-unplugged-8c2fda4f-7fa8-479c-8573-592021820968 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:14:53 np0005532048 nova_compute[253661]: 2025-11-22 09:14:53.852 253665 DEBUG nova.compute.manager [req-253bc7c0-9c1c-4b24-add8-5258098f817c req-0db0bdcd-be87-4a1f-b32e-1126e44bdd25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-unplugged-8c2fda4f-7fa8-479c-8573-592021820968 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:14:54 np0005532048 podman[299123]: 2025-11-22 09:14:54.441873591 +0000 UTC m=+0.129912114 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:14:54 np0005532048 nova_compute[253661]: 2025-11-22 09:14:54.486 253665 DEBUG nova.network.neutron [-] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:14:54 np0005532048 nova_compute[253661]: 2025-11-22 09:14:54.505 253665 INFO nova.compute.manager [-] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Took 2.75 seconds to deallocate network for instance.#033[00m
Nov 22 04:14:54 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:54Z|00312|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 04:14:54 np0005532048 nova_compute[253661]: 2025-11-22 09:14:54.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:54 np0005532048 nova_compute[253661]: 2025-11-22 09:14:54.558 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:54 np0005532048 nova_compute[253661]: 2025-11-22 09:14:54.559 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:54 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:54Z|00313|binding|INFO|Releasing lport 8bfe00e9-2c65-4f8c-8588-d9f34b25ac93 from this chassis (sb_readonly=0)
Nov 22 04:14:54 np0005532048 nova_compute[253661]: 2025-11-22 09:14:54.631 253665 DEBUG oslo_concurrency.processutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:14:54 np0005532048 nova_compute[253661]: 2025-11-22 09:14:54.665 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:14:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:14:54 np0005532048 nova_compute[253661]: 2025-11-22 09:14:54.884 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:14:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:14:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:14:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:14:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:14:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:14:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:14:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:14:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:14:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1271775981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:14:55 np0005532048 nova_compute[253661]: 2025-11-22 09:14:55.070 253665 DEBUG oslo_concurrency.processutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:14:55 np0005532048 nova_compute[253661]: 2025-11-22 09:14:55.078 253665 DEBUG nova.compute.provider_tree [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:14:55 np0005532048 nova_compute[253661]: 2025-11-22 09:14:55.093 253665 DEBUG nova.scheduler.client.report [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:14:55 np0005532048 nova_compute[253661]: 2025-11-22 09:14:55.127 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:55 np0005532048 nova_compute[253661]: 2025-11-22 09:14:55.155 253665 INFO nova.scheduler.client.report [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Deleted allocations for instance bf96e20f-af8f-4db3-977f-cee93b1d7934#033[00m
Nov 22 04:14:55 np0005532048 nova_compute[253661]: 2025-11-22 09:14:55.217 253665 DEBUG oslo_concurrency.lockutils [None req-74757975-702e-40c3-ad55-d76ed26ad389 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 121 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 7.2 KiB/s wr, 49 op/s
Nov 22 04:14:55 np0005532048 nova_compute[253661]: 2025-11-22 09:14:55.957 253665 DEBUG nova.compute.manager [req-cbf12e7d-6543-498b-8ae9-d81882dbdc57 req-ad1510e9-dfb9-4d38-b0c9-63775fb8492f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-plugged-8c2fda4f-7fa8-479c-8573-592021820968 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:55 np0005532048 nova_compute[253661]: 2025-11-22 09:14:55.958 253665 DEBUG oslo_concurrency.lockutils [req-cbf12e7d-6543-498b-8ae9-d81882dbdc57 req-ad1510e9-dfb9-4d38-b0c9-63775fb8492f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:55 np0005532048 nova_compute[253661]: 2025-11-22 09:14:55.958 253665 DEBUG oslo_concurrency.lockutils [req-cbf12e7d-6543-498b-8ae9-d81882dbdc57 req-ad1510e9-dfb9-4d38-b0c9-63775fb8492f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:55 np0005532048 nova_compute[253661]: 2025-11-22 09:14:55.958 253665 DEBUG oslo_concurrency.lockutils [req-cbf12e7d-6543-498b-8ae9-d81882dbdc57 req-ad1510e9-dfb9-4d38-b0c9-63775fb8492f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bf96e20f-af8f-4db3-977f-cee93b1d7934-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:55 np0005532048 nova_compute[253661]: 2025-11-22 09:14:55.958 253665 DEBUG nova.compute.manager [req-cbf12e7d-6543-498b-8ae9-d81882dbdc57 req-ad1510e9-dfb9-4d38-b0c9-63775fb8492f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] No waiting events found dispatching network-vif-plugged-8c2fda4f-7fa8-479c-8573-592021820968 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:14:55 np0005532048 nova_compute[253661]: 2025-11-22 09:14:55.959 253665 WARNING nova.compute.manager [req-cbf12e7d-6543-498b-8ae9-d81882dbdc57 req-ad1510e9-dfb9-4d38-b0c9-63775fb8492f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received unexpected event network-vif-plugged-8c2fda4f-7fa8-479c-8573-592021820968 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:14:55 np0005532048 nova_compute[253661]: 2025-11-22 09:14:55.959 253665 DEBUG nova.compute.manager [req-cbf12e7d-6543-498b-8ae9-d81882dbdc57 req-ad1510e9-dfb9-4d38-b0c9-63775fb8492f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Received event network-vif-deleted-8c2fda4f-7fa8-479c-8573-592021820968 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.045 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.046 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.046 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.046 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.047 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.048 253665 INFO nova.compute.manager [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Terminating instance#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.049 253665 DEBUG nova.compute.manager [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:14:56 np0005532048 kernel: tap250740a7-72 (unregistering): left promiscuous mode
Nov 22 04:14:56 np0005532048 NetworkManager[48920]: <info>  [1763802896.1173] device (tap250740a7-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:14:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:56Z|00314|binding|INFO|Releasing lport 250740a7-7283-491e-b03e-1e30171a9f3f from this chassis (sb_readonly=0)
Nov 22 04:14:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:56Z|00315|binding|INFO|Setting lport 250740a7-7283-491e-b03e-1e30171a9f3f down in Southbound
Nov 22 04:14:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:14:56Z|00316|binding|INFO|Removing iface tap250740a7-72 ovn-installed in OVS
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.125 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.139 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:fa:90 10.100.0.13'], port_security=['fa:16:3e:0e:fa:90 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8b620ce3-1fc9-42ba-aafb-709cad3d65a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da9c9bd9364947fb9fa2712525f20b3d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccbdff20-588a-43ee-a362-2464b4cf13b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5e5ba0a-cce2-46e3-adc6-3b4cfbf6ef43, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=250740a7-7283-491e-b03e-1e30171a9f3f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:14:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.140 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 250740a7-7283-491e-b03e-1e30171a9f3f in datapath 5e2cd359-c68f-4256-90e8-0ad40aff8a00 unbound from our chassis#033[00m
Nov 22 04:14:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.141 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e2cd359-c68f-4256-90e8-0ad40aff8a00, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:14:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.142 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[00be96b4-0190-48b3-be03-69230f0c0be2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.142 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 namespace which is not needed anymore#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.146 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:56 np0005532048 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000023.scope: Deactivated successfully.
Nov 22 04:14:56 np0005532048 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000023.scope: Consumed 15.819s CPU time.
Nov 22 04:14:56 np0005532048 systemd-machined[215941]: Machine qemu-40-instance-00000023 terminated.
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.256 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:56 np0005532048 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[295937]: [NOTICE]   (295942) : haproxy version is 2.8.14-c23fe91
Nov 22 04:14:56 np0005532048 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[295937]: [NOTICE]   (295942) : path to executable is /usr/sbin/haproxy
Nov 22 04:14:56 np0005532048 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[295937]: [WARNING]  (295942) : Exiting Master process...
Nov 22 04:14:56 np0005532048 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[295937]: [ALERT]    (295942) : Current worker (295944) exited with code 143 (Terminated)
Nov 22 04:14:56 np0005532048 neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00[295937]: [WARNING]  (295942) : All workers exited. Exiting... (0)
Nov 22 04:14:56 np0005532048 systemd[1]: libpod-e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88.scope: Deactivated successfully.
Nov 22 04:14:56 np0005532048 podman[299197]: 2025-11-22 09:14:56.290912757 +0000 UTC m=+0.051402235 container died e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.291 253665 INFO nova.virt.libvirt.driver [-] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Instance destroyed successfully.#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.292 253665 DEBUG nova.objects.instance [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lazy-loading 'resources' on Instance uuid 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.302 253665 DEBUG nova.virt.libvirt.vif [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:13:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1344454464',display_name='tempest-tempest.common.compute-instance-1344454464',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1344454464',id=35,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObVtZRrhd3TFfhUFA9XvCELf+HbchsS0ciAZmjOcu6pN1ieFJKlTdO+RsV4JW0nIm5syuI4OVWIlUlm/tcb/R5/KLhW4TMPu5F/7n3WScy3y+PfBzPaAKNL7Zs2coHNDg==',key_name='tempest-keypair-579796014',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:13:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da9c9bd9364947fb9fa2712525f20b3d',ramdisk_id='',reservation_id='r-von0l9xo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-580246692',owner_user_name='tempest-AttachInterfacesTestJSON-580246692-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:13:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='36a143bf132742b89e463c08d46df5c2',uuid=8b620ce3-1fc9-42ba-aafb-709cad3d65a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.303 253665 DEBUG nova.network.os_vif_util [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converting VIF {"id": "250740a7-7283-491e-b03e-1e30171a9f3f", "address": "fa:16:3e:0e:fa:90", "network": {"id": "5e2cd359-c68f-4256-90e8-0ad40aff8a00", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-622799446-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da9c9bd9364947fb9fa2712525f20b3d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap250740a7-72", "ovs_interfaceid": "250740a7-7283-491e-b03e-1e30171a9f3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.303 253665 DEBUG nova.network.os_vif_util [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:fa:90,bridge_name='br-int',has_traffic_filtering=True,id=250740a7-7283-491e-b03e-1e30171a9f3f,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap250740a7-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.304 253665 DEBUG os_vif [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:fa:90,bridge_name='br-int',has_traffic_filtering=True,id=250740a7-7283-491e-b03e-1e30171a9f3f,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap250740a7-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.305 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.306 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap250740a7-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.329 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.331 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.333 253665 INFO os_vif [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:fa:90,bridge_name='br-int',has_traffic_filtering=True,id=250740a7-7283-491e-b03e-1e30171a9f3f,network=Network(5e2cd359-c68f-4256-90e8-0ad40aff8a00),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap250740a7-72')#033[00m
Nov 22 04:14:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c33cf72e08b1ddd2978dc2e03da03122704dc3ba1214ec5d21f8fe0f1527d854-merged.mount: Deactivated successfully.
Nov 22 04:14:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88-userdata-shm.mount: Deactivated successfully.
Nov 22 04:14:56 np0005532048 podman[299197]: 2025-11-22 09:14:56.358109359 +0000 UTC m=+0.118598827 container cleanup e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 04:14:56 np0005532048 systemd[1]: libpod-conmon-e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88.scope: Deactivated successfully.
Nov 22 04:14:56 np0005532048 podman[299254]: 2025-11-22 09:14:56.42896142 +0000 UTC m=+0.047885148 container remove e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 04:14:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.435 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[97717ee3-fd77-4a7d-b25d-5a15337043b8]: (4, ('Sat Nov 22 09:14:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 (e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88)\ne4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88\nSat Nov 22 09:14:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 (e4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88)\ne4d652abfcd4233a13c5d937bcfe289e2bdbdfdfed9155f3695d04d912d6ec88\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.437 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[410caa76-b15c-4820-b149-58deb2e783ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.438 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e2cd359-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.440 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:56 np0005532048 kernel: tap5e2cd359-c0: left promiscuous mode
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.458 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:14:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.461 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c65a4b1a-ac11-43d0-83cb-5853f536019a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.471 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b13c7ceb-5302-4a67-8aea-cd2f961dc37f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.473 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9dc00de0-50cc-4b2e-9c69-87ffa57734fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.495 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eaadce06-56b5-4d0f-82d6-10f4d7c63ab2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568630, 'reachable_time': 15097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299272, 'error': None, 'target': 'ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.497 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5e2cd359-c68f-4256-90e8-0ad40aff8a00 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:14:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:14:56.498 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d8a9d3f1-ba73-4a2d-8d78-025f660afe34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:14:56 np0005532048 systemd[1]: run-netns-ovnmeta\x2d5e2cd359\x2dc68f\x2d4256\x2d90e8\x2d0ad40aff8a00.mount: Deactivated successfully.
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.788 253665 INFO nova.virt.libvirt.driver [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Deleting instance files /var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6_del#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.789 253665 INFO nova.virt.libvirt.driver [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Deletion of /var/lib/nova/instances/8b620ce3-1fc9-42ba-aafb-709cad3d65a6_del complete#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.844 253665 INFO nova.compute.manager [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.845 253665 DEBUG oslo.service.loopingcall [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.845 253665 DEBUG nova.compute.manager [-] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:14:56 np0005532048 nova_compute[253661]: 2025-11-22 09:14:56.845 253665 DEBUG nova.network.neutron [-] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:14:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 96 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 6.9 KiB/s wr, 47 op/s
Nov 22 04:14:57 np0005532048 nova_compute[253661]: 2025-11-22 09:14:57.645 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802882.6444323, 264036ef-37a3-4681-9c7a-9dc70c4b5282 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:14:57 np0005532048 nova_compute[253661]: 2025-11-22 09:14:57.646 253665 INFO nova.compute.manager [-] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:14:57 np0005532048 nova_compute[253661]: 2025-11-22 09:14:57.670 253665 DEBUG nova.compute.manager [None req-77b94f36-ee54-49df-a2e9-9d8911fcb0f9 - - - - - -] [instance: 264036ef-37a3-4681-9c7a-9dc70c4b5282] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:14:58 np0005532048 nova_compute[253661]: 2025-11-22 09:14:58.278 253665 DEBUG nova.network.neutron [-] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:14:58 np0005532048 nova_compute[253661]: 2025-11-22 09:14:58.300 253665 INFO nova.compute.manager [-] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Took 1.45 seconds to deallocate network for instance.#033[00m
Nov 22 04:14:58 np0005532048 nova_compute[253661]: 2025-11-22 09:14:58.338 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:14:58 np0005532048 nova_compute[253661]: 2025-11-22 09:14:58.339 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:14:58 np0005532048 nova_compute[253661]: 2025-11-22 09:14:58.380 253665 DEBUG oslo_concurrency.processutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:14:58 np0005532048 nova_compute[253661]: 2025-11-22 09:14:58.500 253665 DEBUG nova.compute.manager [req-bb5575f0-5c09-4ec0-82b0-b02417bcdf12 req-106e575f-1327-4a8e-8b6e-5f2ae2fc5d07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Received event network-vif-deleted-250740a7-7283-491e-b03e-1e30171a9f3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:14:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:14:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2765326782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:14:58 np0005532048 nova_compute[253661]: 2025-11-22 09:14:58.894 253665 DEBUG oslo_concurrency.processutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:14:58 np0005532048 nova_compute[253661]: 2025-11-22 09:14:58.899 253665 DEBUG nova.compute.provider_tree [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:14:58 np0005532048 nova_compute[253661]: 2025-11-22 09:14:58.916 253665 DEBUG nova.scheduler.client.report [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:14:58 np0005532048 nova_compute[253661]: 2025-11-22 09:14:58.941 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:58 np0005532048 nova_compute[253661]: 2025-11-22 09:14:58.994 253665 INFO nova.scheduler.client.report [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Deleted allocations for instance 8b620ce3-1fc9-42ba-aafb-709cad3d65a6#033[00m
Nov 22 04:14:59 np0005532048 nova_compute[253661]: 2025-11-22 09:14:59.154 253665 DEBUG oslo_concurrency.lockutils [None req-22315b7f-0803-4cb6-93b7-3be6ca13c47f 36a143bf132742b89e463c08d46df5c2 da9c9bd9364947fb9fa2712525f20b3d - - default default] Lock "8b620ce3-1fc9-42ba-aafb-709cad3d65a6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:14:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 42 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 4.4 KiB/s wr, 55 op/s
Nov 22 04:14:59 np0005532048 nova_compute[253661]: 2025-11-22 09:14:59.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:00 np0005532048 nova_compute[253661]: 2025-11-22 09:15:00.545 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:00 np0005532048 nova_compute[253661]: 2025-11-22 09:15:00.546 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:00 np0005532048 nova_compute[253661]: 2025-11-22 09:15:00.619 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:15:00 np0005532048 nova_compute[253661]: 2025-11-22 09:15:00.755 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:00 np0005532048 nova_compute[253661]: 2025-11-22 09:15:00.755 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:00 np0005532048 nova_compute[253661]: 2025-11-22 09:15:00.767 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:15:00 np0005532048 nova_compute[253661]: 2025-11-22 09:15:00.768 253665 INFO nova.compute.claims [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:15:01 np0005532048 nova_compute[253661]: 2025-11-22 09:15:01.139 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 42 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 54 op/s
Nov 22 04:15:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:01 np0005532048 nova_compute[253661]: 2025-11-22 09:15:01.331 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:15:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241279518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:15:01 np0005532048 nova_compute[253661]: 2025-11-22 09:15:01.610 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:15:01 np0005532048 nova_compute[253661]: 2025-11-22 09:15:01.619 253665 DEBUG nova.compute.provider_tree [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:15:01 np0005532048 nova_compute[253661]: 2025-11-22 09:15:01.637 253665 DEBUG nova.scheduler.client.report [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:15:01 np0005532048 nova_compute[253661]: 2025-11-22 09:15:01.689 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:01 np0005532048 nova_compute[253661]: 2025-11-22 09:15:01.690 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:15:01 np0005532048 nova_compute[253661]: 2025-11-22 09:15:01.768 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:15:01 np0005532048 nova_compute[253661]: 2025-11-22 09:15:01.768 253665 DEBUG nova.network.neutron [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:15:01 np0005532048 nova_compute[253661]: 2025-11-22 09:15:01.817 253665 INFO nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:15:01 np0005532048 nova_compute[253661]: 2025-11-22 09:15:01.874 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:15:02 np0005532048 nova_compute[253661]: 2025-11-22 09:15:02.049 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:15:02 np0005532048 nova_compute[253661]: 2025-11-22 09:15:02.051 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:15:02 np0005532048 nova_compute[253661]: 2025-11-22 09:15:02.051 253665 INFO nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Creating image(s)
Nov 22 04:15:02 np0005532048 nova_compute[253661]: 2025-11-22 09:15:02.078 253665 DEBUG nova.storage.rbd_utils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] rbd image 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:15:02 np0005532048 nova_compute[253661]: 2025-11-22 09:15:02.103 253665 DEBUG nova.storage.rbd_utils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] rbd image 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:15:02 np0005532048 nova_compute[253661]: 2025-11-22 09:15:02.132 253665 DEBUG nova.storage.rbd_utils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] rbd image 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:15:02 np0005532048 nova_compute[253661]: 2025-11-22 09:15:02.137 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:15:02 np0005532048 nova_compute[253661]: 2025-11-22 09:15:02.225 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:15:02 np0005532048 nova_compute[253661]: 2025-11-22 09:15:02.226 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:02 np0005532048 nova_compute[253661]: 2025-11-22 09:15:02.227 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:02 np0005532048 nova_compute[253661]: 2025-11-22 09:15:02.227 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:02 np0005532048 nova_compute[253661]: 2025-11-22 09:15:02.318 253665 DEBUG nova.storage.rbd_utils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] rbd image 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:15:02 np0005532048 nova_compute[253661]: 2025-11-22 09:15:02.324 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:15:02 np0005532048 nova_compute[253661]: 2025-11-22 09:15:02.373 253665 DEBUG nova.policy [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e5b221447624e728e9eb5442b5238d1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6fc32fb5484840b1b6654dffb70595ef', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.7344004362831283e-06 of space, bias 1.0, pg target 0.0008203201308849384 quantized to 32 (current 32)
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:15:02 np0005532048 nova_compute[253661]: 2025-11-22 09:15:02.887 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:15:02 np0005532048 nova_compute[253661]: 2025-11-22 09:15:02.961 253665 DEBUG nova.storage.rbd_utils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] resizing rbd image 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:15:03 np0005532048 nova_compute[253661]: 2025-11-22 09:15:03.074 253665 DEBUG nova.objects.instance [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lazy-loading 'migration_context' on Instance uuid 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:15:03 np0005532048 nova_compute[253661]: 2025-11-22 09:15:03.087 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:15:03 np0005532048 nova_compute[253661]: 2025-11-22 09:15:03.087 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Ensure instance console log exists: /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:15:03 np0005532048 nova_compute[253661]: 2025-11-22 09:15:03.088 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:03 np0005532048 nova_compute[253661]: 2025-11-22 09:15:03.088 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:03 np0005532048 nova_compute[253661]: 2025-11-22 09:15:03.089 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 41 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.0 KiB/s wr, 55 op/s
Nov 22 04:15:04 np0005532048 nova_compute[253661]: 2025-11-22 09:15:04.761 253665 DEBUG nova.network.neutron [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Successfully created port: 50e75895-e769-4e23-b607-7d52eb14fb62 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:15:04 np0005532048 nova_compute[253661]: 2025-11-22 09:15:04.889 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 61 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 1.0 MiB/s wr, 73 op/s
Nov 22 04:15:06 np0005532048 nova_compute[253661]: 2025-11-22 09:15:06.233 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802891.232201, bf96e20f-af8f-4db3-977f-cee93b1d7934 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:15:06 np0005532048 nova_compute[253661]: 2025-11-22 09:15:06.234 253665 INFO nova.compute.manager [-] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] VM Stopped (Lifecycle Event)
Nov 22 04:15:06 np0005532048 nova_compute[253661]: 2025-11-22 09:15:06.251 253665 DEBUG nova.compute.manager [None req-a43f9ab2-c44f-44d4-9476-6be8acb5a8d2 - - - - - -] [instance: bf96e20f-af8f-4db3-977f-cee93b1d7934] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:15:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:06 np0005532048 nova_compute[253661]: 2025-11-22 09:15:06.336 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Nov 22 04:15:08 np0005532048 nova_compute[253661]: 2025-11-22 09:15:08.061 253665 DEBUG nova.network.neutron [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Successfully updated port: 50e75895-e769-4e23-b607-7d52eb14fb62 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:15:08 np0005532048 nova_compute[253661]: 2025-11-22 09:15:08.076 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:15:08 np0005532048 nova_compute[253661]: 2025-11-22 09:15:08.077 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquired lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:15:08 np0005532048 nova_compute[253661]: 2025-11-22 09:15:08.077 253665 DEBUG nova.network.neutron [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:15:08 np0005532048 nova_compute[253661]: 2025-11-22 09:15:08.206 253665 DEBUG nova.compute.manager [req-58918838-813a-4122-956e-348273beb18d req-e66b0878-b2af-4c6a-9200-2ea2c93ba1c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-changed-50e75895-e769-4e23-b607-7d52eb14fb62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:15:08 np0005532048 nova_compute[253661]: 2025-11-22 09:15:08.207 253665 DEBUG nova.compute.manager [req-58918838-813a-4122-956e-348273beb18d req-e66b0878-b2af-4c6a-9200-2ea2c93ba1c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Refreshing instance network info cache due to event network-changed-50e75895-e769-4e23-b607-7d52eb14fb62. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:15:08 np0005532048 nova_compute[253661]: 2025-11-22 09:15:08.207 253665 DEBUG oslo_concurrency.lockutils [req-58918838-813a-4122-956e-348273beb18d req-e66b0878-b2af-4c6a-9200-2ea2c93ba1c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:15:08 np0005532048 nova_compute[253661]: 2025-11-22 09:15:08.297 253665 DEBUG nova.network.neutron [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:15:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 22 04:15:09 np0005532048 nova_compute[253661]: 2025-11-22 09:15:09.890 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.185 253665 DEBUG nova.network.neutron [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Updating instance_info_cache with network_info: [{"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.220 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Releasing lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.221 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Instance network_info: |[{"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.222 253665 DEBUG oslo_concurrency.lockutils [req-58918838-813a-4122-956e-348273beb18d req-e66b0878-b2af-4c6a-9200-2ea2c93ba1c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.222 253665 DEBUG nova.network.neutron [req-58918838-813a-4122-956e-348273beb18d req-e66b0878-b2af-4c6a-9200-2ea2c93ba1c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Refreshing network info cache for port 50e75895-e769-4e23-b607-7d52eb14fb62 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.225 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Start _get_guest_xml network_info=[{"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.231 253665 WARNING nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.237 253665 DEBUG nova.virt.libvirt.host [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.237 253665 DEBUG nova.virt.libvirt.host [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.247 253665 DEBUG nova.virt.libvirt.host [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.248 253665 DEBUG nova.virt.libvirt.host [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.248 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.249 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.249 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.250 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.250 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.250 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.250 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.251 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.251 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.252 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.252 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.252 253665 DEBUG nova.virt.hardware [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.255 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.286 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:15:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1513364440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.749 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.780 253665 DEBUG nova.storage.rbd_utils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] rbd image 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:15:10 np0005532048 nova_compute[253661]: 2025-11-22 09:15:10.786 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:15:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:15:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2379157159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.248 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.250 253665 DEBUG nova.virt.libvirt.vif [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-467780681',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-467780681',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-467780681',id=39,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6fc32fb5484840b1b6654dffb70595ef',ramdisk_id='',reservation_id='r-072h7wv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1334234428',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1334234428-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:15:01Z,user_data=None,user_id='0e5b221447624e728e9eb5442b5238d1',uuid=9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.251 253665 DEBUG nova.network.os_vif_util [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Converting VIF {"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.252 253665 DEBUG nova.network.os_vif_util [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:ef:1b,bridge_name='br-int',has_traffic_filtering=True,id=50e75895-e769-4e23-b607-7d52eb14fb62,network=Network(35d4669f-adae-4ff8-9cc1-a890f0b28c31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e75895-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.253 253665 DEBUG nova.objects.instance [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lazy-loading 'pci_devices' on Instance uuid 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.270 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  <uuid>9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1</uuid>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  <name>instance-00000027</name>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-467780681</nova:name>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:15:10</nova:creationTime>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:15:11 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:        <nova:user uuid="0e5b221447624e728e9eb5442b5238d1">tempest-FloatingIPsAssociationNegativeTestJSON-1334234428-project-member</nova:user>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:        <nova:project uuid="6fc32fb5484840b1b6654dffb70595ef">tempest-FloatingIPsAssociationNegativeTestJSON-1334234428</nova:project>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:        <nova:port uuid="50e75895-e769-4e23-b607-7d52eb14fb62">
Nov 22 04:15:11 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <entry name="serial">9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1</entry>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <entry name="uuid">9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1</entry>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk">
Nov 22 04:15:11 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:15:11 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk.config">
Nov 22 04:15:11 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:15:11 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:07:ef:1b"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <target dev="tap50e75895-e7"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1/console.log" append="off"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:15:11 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:15:11 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:15:11 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:15:11 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.272 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Preparing to wait for external event network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.272 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.273 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.273 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.274 253665 DEBUG nova.virt.libvirt.vif [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-467780681',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-467780681',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-467780681',id=39,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6fc32fb5484840b1b6654dffb70595ef',ramdisk_id='',reservation_id='r-072h7wv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1334234428',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1334234428-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:15:01Z,user_data=None,user_id='0e5b221447624e728e9eb5442b5238d1',uuid=9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.275 253665 DEBUG nova.network.os_vif_util [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Converting VIF {"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.276 253665 DEBUG nova.network.os_vif_util [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:ef:1b,bridge_name='br-int',has_traffic_filtering=True,id=50e75895-e769-4e23-b607-7d52eb14fb62,network=Network(35d4669f-adae-4ff8-9cc1-a890f0b28c31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e75895-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.277 253665 DEBUG os_vif [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:ef:1b,bridge_name='br-int',has_traffic_filtering=True,id=50e75895-e769-4e23-b607-7d52eb14fb62,network=Network(35d4669f-adae-4ff8-9cc1-a890f0b28c31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e75895-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.278 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.278 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.279 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:15:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.283 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.284 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50e75895-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.285 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap50e75895-e7, col_values=(('external_ids', {'iface-id': '50e75895-e769-4e23-b607-7d52eb14fb62', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:ef:1b', 'vm-uuid': '9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:11 np0005532048 NetworkManager[48920]: <info>  [1763802911.2881] manager: (tap50e75895-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.290 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802896.2880502, 8b620ce3-1fc9-42ba-aafb-709cad3d65a6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.290 253665 INFO nova.compute.manager [-] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.293 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.295 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.296 253665 INFO os_vif [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:ef:1b,bridge_name='br-int',has_traffic_filtering=True,id=50e75895-e769-4e23-b607-7d52eb14fb62,network=Network(35d4669f-adae-4ff8-9cc1-a890f0b28c31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e75895-e7')#033[00m
Nov 22 04:15:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.467 253665 DEBUG nova.compute.manager [None req-36c53732-f07c-4dc3-becc-e23540e3cdd4 - - - - - -] [instance: 8b620ce3-1fc9-42ba-aafb-709cad3d65a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.487 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.488 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.488 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] No VIF found with MAC fa:16:3e:07:ef:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.489 253665 INFO nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Using config drive#033[00m
Nov 22 04:15:11 np0005532048 nova_compute[253661]: 2025-11-22 09:15:11.513 253665 DEBUG nova.storage.rbd_utils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] rbd image 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:15:12 np0005532048 nova_compute[253661]: 2025-11-22 09:15:12.018 253665 INFO nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Creating config drive at /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1/disk.config#033[00m
Nov 22 04:15:12 np0005532048 nova_compute[253661]: 2025-11-22 09:15:12.024 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgdl6zihk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:12 np0005532048 nova_compute[253661]: 2025-11-22 09:15:12.167 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgdl6zihk" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:15:12 np0005532048 nova_compute[253661]: 2025-11-22 09:15:12.204 253665 DEBUG nova.storage.rbd_utils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] rbd image 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:15:12 np0005532048 nova_compute[253661]: 2025-11-22 09:15:12.209 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1/disk.config 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:15:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1869376563' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:15:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:15:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1869376563' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:15:12 np0005532048 nova_compute[253661]: 2025-11-22 09:15:12.707 253665 DEBUG oslo_concurrency.processutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1/disk.config 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:15:12 np0005532048 nova_compute[253661]: 2025-11-22 09:15:12.708 253665 INFO nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Deleting local config drive /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1/disk.config because it was imported into RBD.#033[00m
Nov 22 04:15:12 np0005532048 nova_compute[253661]: 2025-11-22 09:15:12.773 253665 DEBUG nova.network.neutron [req-58918838-813a-4122-956e-348273beb18d req-e66b0878-b2af-4c6a-9200-2ea2c93ba1c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Updated VIF entry in instance network info cache for port 50e75895-e769-4e23-b607-7d52eb14fb62. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:15:12 np0005532048 nova_compute[253661]: 2025-11-22 09:15:12.774 253665 DEBUG nova.network.neutron [req-58918838-813a-4122-956e-348273beb18d req-e66b0878-b2af-4c6a-9200-2ea2c93ba1c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Updating instance_info_cache with network_info: [{"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:15:12 np0005532048 nova_compute[253661]: 2025-11-22 09:15:12.790 253665 DEBUG oslo_concurrency.lockutils [req-58918838-813a-4122-956e-348273beb18d req-e66b0878-b2af-4c6a-9200-2ea2c93ba1c2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:15:12 np0005532048 kernel: tap50e75895-e7: entered promiscuous mode
Nov 22 04:15:12 np0005532048 NetworkManager[48920]: <info>  [1763802912.8041] manager: (tap50e75895-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Nov 22 04:15:12 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:12Z|00317|binding|INFO|Claiming lport 50e75895-e769-4e23-b607-7d52eb14fb62 for this chassis.
Nov 22 04:15:12 np0005532048 nova_compute[253661]: 2025-11-22 09:15:12.804 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:12 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:12Z|00318|binding|INFO|50e75895-e769-4e23-b607-7d52eb14fb62: Claiming fa:16:3e:07:ef:1b 10.100.0.9
Nov 22 04:15:12 np0005532048 nova_compute[253661]: 2025-11-22 09:15:12.815 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:12 np0005532048 systemd-udevd[299620]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:15:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.837 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:ef:1b 10.100.0.9'], port_security=['fa:16:3e:07:ef:1b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35d4669f-adae-4ff8-9cc1-a890f0b28c31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fc32fb5484840b1b6654dffb70595ef', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1f093055-0f73-4edf-a345-d9278a345d48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7266d51a-8673-408f-8e3f-05b71c491331, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=50e75895-e769-4e23-b607-7d52eb14fb62) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:15:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.838 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 50e75895-e769-4e23-b607-7d52eb14fb62 in datapath 35d4669f-adae-4ff8-9cc1-a890f0b28c31 bound to our chassis#033[00m
Nov 22 04:15:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.839 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35d4669f-adae-4ff8-9cc1-a890f0b28c31#033[00m
Nov 22 04:15:12 np0005532048 NetworkManager[48920]: <info>  [1763802912.8499] device (tap50e75895-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:15:12 np0005532048 NetworkManager[48920]: <info>  [1763802912.8511] device (tap50e75895-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:15:12 np0005532048 systemd-machined[215941]: New machine qemu-44-instance-00000027.
Nov 22 04:15:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.855 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[116c533d-f821-4125-9e78-9bb9e38b2db2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.856 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap35d4669f-a1 in ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:15:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.858 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap35d4669f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:15:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.858 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc29ec3-8aa7-4c66-9f53-02d1a1c035e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.859 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8382ea0f-3e2a-4b0d-84c5-0b6633d7607a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.875 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[0c818a86-bdfc-4efd-aff8-5a0c319c1701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:12 np0005532048 systemd[1]: Started Virtual Machine qemu-44-instance-00000027.
Nov 22 04:15:12 np0005532048 nova_compute[253661]: 2025-11-22 09:15:12.904 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.907 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e5297c63-01dc-4db2-8de3-94a7c060c584]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:12 np0005532048 nova_compute[253661]: 2025-11-22 09:15:12.909 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:12 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:12Z|00319|binding|INFO|Setting lport 50e75895-e769-4e23-b607-7d52eb14fb62 ovn-installed in OVS
Nov 22 04:15:12 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:12Z|00320|binding|INFO|Setting lport 50e75895-e769-4e23-b607-7d52eb14fb62 up in Southbound
Nov 22 04:15:12 np0005532048 nova_compute[253661]: 2025-11-22 09:15:12.916 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.950 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1d862f1a-0e0e-498f-b877-0d18c276da2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:12 np0005532048 NetworkManager[48920]: <info>  [1763802912.9604] manager: (tap35d4669f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/136)
Nov 22 04:15:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.959 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[174b025c-0319-41ab-8e6b-222f3d709bf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.994 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7918d848-7ce6-4c74-af46-7c4156293809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:12.997 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a533447f-e0cc-4fc5-9180-ce1dfe352422]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:13 np0005532048 NetworkManager[48920]: <info>  [1763802913.0238] device (tap35d4669f-a0): carrier: link connected
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.033 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[96339895-4771-4b0d-af35-8668fdb3267f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.055 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bcab0dd0-9d07-4ab9-aaed-918c03149876]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35d4669f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:da:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576376, 'reachable_time': 22389, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299654, 'error': None, 'target': 'ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.075 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be5ee53c-4f0c-46c0-a45b-20d33a884861]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe57:da4b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576376, 'tstamp': 576376}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299655, 'error': None, 'target': 'ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.092 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[46ec4f4f-8022-4597-b634-5d563581fb3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35d4669f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:da:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576376, 'reachable_time': 22389, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299656, 'error': None, 'target': 'ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.128 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[074112bb-9015-472f-9675-884d4550c382]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.158 253665 DEBUG nova.compute.manager [req-df450ec0-cad4-4020-b045-e09edcb743b3 req-5734c326-b4bf-43fe-8bc0-875da29e7f91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.159 253665 DEBUG oslo_concurrency.lockutils [req-df450ec0-cad4-4020-b045-e09edcb743b3 req-5734c326-b4bf-43fe-8bc0-875da29e7f91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.159 253665 DEBUG oslo_concurrency.lockutils [req-df450ec0-cad4-4020-b045-e09edcb743b3 req-5734c326-b4bf-43fe-8bc0-875da29e7f91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.160 253665 DEBUG oslo_concurrency.lockutils [req-df450ec0-cad4-4020-b045-e09edcb743b3 req-5734c326-b4bf-43fe-8bc0-875da29e7f91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.160 253665 DEBUG nova.compute.manager [req-df450ec0-cad4-4020-b045-e09edcb743b3 req-5734c326-b4bf-43fe-8bc0-875da29e7f91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Processing event network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.203 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[46791a56-3352-40b0-a60d-cb6939f2602c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.204 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35d4669f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.204 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.204 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35d4669f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.206 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:13 np0005532048 NetworkManager[48920]: <info>  [1763802913.2071] manager: (tap35d4669f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Nov 22 04:15:13 np0005532048 kernel: tap35d4669f-a0: entered promiscuous mode
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.210 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.211 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35d4669f-a0, col_values=(('external_ids', {'iface-id': 'd5d5c0f3-ca4b-44d4-9294-d8da8d674dc8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.211 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:13Z|00321|binding|INFO|Releasing lport d5d5c0f3-ca4b-44d4-9294-d8da8d674dc8 from this chassis (sb_readonly=0)
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.229 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.233 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.234 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/35d4669f-adae-4ff8-9cc1-a890f0b28c31.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/35d4669f-adae-4ff8-9cc1-a890f0b28c31.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.235 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd205b4-197c-4d42-b39a-c78979e096ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.235 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-35d4669f-adae-4ff8-9cc1-a890f0b28c31
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/35d4669f-adae-4ff8-9cc1-a890f0b28c31.pid.haproxy
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 35d4669f-adae-4ff8-9cc1-a890f0b28c31
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:15:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:13.236 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31', 'env', 'PROCESS_TAG=haproxy-35d4669f-adae-4ff8-9cc1-a890f0b28c31', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/35d4669f-adae-4ff8-9cc1-a890f0b28c31.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:15:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.460 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802913.460091, 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.462 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] VM Started (Lifecycle Event)#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.464 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.468 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.473 253665 INFO nova.virt.libvirt.driver [-] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Instance spawned successfully.#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.474 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.480 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.484 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.496 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.497 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.498 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.498 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.499 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.500 253665 DEBUG nova.virt.libvirt.driver [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.504 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.505 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802913.4603581, 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.505 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.533 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.539 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802913.467998, 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.540 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.574 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.582 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.612 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.627 253665 INFO nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Took 11.58 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.628 253665 DEBUG nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.690 253665 INFO nova.compute.manager [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Took 12.95 seconds to build instance.#033[00m
Nov 22 04:15:13 np0005532048 podman[299729]: 2025-11-22 09:15:13.710133262 +0000 UTC m=+0.098137053 container create 2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:15:13 np0005532048 nova_compute[253661]: 2025-11-22 09:15:13.716 253665 DEBUG oslo_concurrency.lockutils [None req-10773566-4e4a-4735-ac7f-952902c1306a 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:15:13 np0005532048 podman[299729]: 2025-11-22 09:15:13.643190646 +0000 UTC m=+0.031194457 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:15:13 np0005532048 systemd[1]: Started libpod-conmon-2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb.scope.
Nov 22 04:15:13 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:15:13 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd3306a5b7915795f7c52c8141867f19aeb43af9b9e1b85d687973702260cc6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:13 np0005532048 podman[299729]: 2025-11-22 09:15:13.82522749 +0000 UTC m=+0.213231311 container init 2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 04:15:13 np0005532048 podman[299729]: 2025-11-22 09:15:13.832325595 +0000 UTC m=+0.220329386 container start 2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 22 04:15:13 np0005532048 neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31[299744]: [NOTICE]   (299748) : New worker (299750) forked
Nov 22 04:15:13 np0005532048 neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31[299744]: [NOTICE]   (299748) : Loading success.
Nov 22 04:15:14 np0005532048 nova_compute[253661]: 2025-11-22 09:15:14.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:15 np0005532048 nova_compute[253661]: 2025-11-22 09:15:15.261 253665 DEBUG nova.compute.manager [req-abcb8a09-96db-49a1-86ef-03f041af2b62 req-a9ce9a56-0980-4fb9-87f4-661658d51781 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:15:15 np0005532048 nova_compute[253661]: 2025-11-22 09:15:15.261 253665 DEBUG oslo_concurrency.lockutils [req-abcb8a09-96db-49a1-86ef-03f041af2b62 req-a9ce9a56-0980-4fb9-87f4-661658d51781 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:15 np0005532048 nova_compute[253661]: 2025-11-22 09:15:15.262 253665 DEBUG oslo_concurrency.lockutils [req-abcb8a09-96db-49a1-86ef-03f041af2b62 req-a9ce9a56-0980-4fb9-87f4-661658d51781 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:15 np0005532048 nova_compute[253661]: 2025-11-22 09:15:15.262 253665 DEBUG oslo_concurrency.lockutils [req-abcb8a09-96db-49a1-86ef-03f041af2b62 req-a9ce9a56-0980-4fb9-87f4-661658d51781 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:15:15 np0005532048 nova_compute[253661]: 2025-11-22 09:15:15.263 253665 DEBUG nova.compute.manager [req-abcb8a09-96db-49a1-86ef-03f041af2b62 req-a9ce9a56-0980-4fb9-87f4-661658d51781 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] No waiting events found dispatching network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:15:15 np0005532048 nova_compute[253661]: 2025-11-22 09:15:15.263 253665 WARNING nova.compute.manager [req-abcb8a09-96db-49a1-86ef-03f041af2b62 req-a9ce9a56-0980-4fb9-87f4-661658d51781 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received unexpected event network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:15:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 22 04:15:16 np0005532048 nova_compute[253661]: 2025-11-22 09:15:16.289 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.727111) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802916727185, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 465, "num_deletes": 251, "total_data_size": 386966, "memory_usage": 397096, "flush_reason": "Manual Compaction"}
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802916732620, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 383190, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30392, "largest_seqno": 30856, "table_properties": {"data_size": 380547, "index_size": 679, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6385, "raw_average_key_size": 18, "raw_value_size": 375335, "raw_average_value_size": 1107, "num_data_blocks": 31, "num_entries": 339, "num_filter_entries": 339, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763802891, "oldest_key_time": 1763802891, "file_creation_time": 1763802916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 5570 microseconds, and 2616 cpu microseconds.
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.732683) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 383190 bytes OK
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.732711) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.734283) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.734300) EVENT_LOG_v1 {"time_micros": 1763802916734295, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.734329) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 384180, prev total WAL file size 384180, number of live WAL files 2.
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.734806) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(374KB)], [65(8317KB)]
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802916734845, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 8900359, "oldest_snapshot_seqno": -1}
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5320 keys, 7222833 bytes, temperature: kUnknown
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802916787120, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7222833, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7187830, "index_size": 20621, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 134640, "raw_average_key_size": 25, "raw_value_size": 7092483, "raw_average_value_size": 1333, "num_data_blocks": 836, "num_entries": 5320, "num_filter_entries": 5320, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763802916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.787551) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7222833 bytes
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.788956) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.8 rd, 137.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 8.1 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(42.1) write-amplify(18.8) OK, records in: 5830, records dropped: 510 output_compression: NoCompression
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.788981) EVENT_LOG_v1 {"time_micros": 1763802916788968, "job": 36, "event": "compaction_finished", "compaction_time_micros": 52412, "compaction_time_cpu_micros": 17945, "output_level": 6, "num_output_files": 1, "total_output_size": 7222833, "num_input_records": 5830, "num_output_records": 5320, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802916789245, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763802916790931, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.734701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.791016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.791022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.791024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.791025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:15:16.791027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:15:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:15:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 882 KiB/s rd, 801 KiB/s wr, 42 op/s
Nov 22 04:15:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:15:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:15:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:15:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:15:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:15:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:15:17 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c763b04c-5aae-4842-8d06-dfcc8fe86cd7 does not exist
Nov 22 04:15:17 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 568958b2-50ce-4539-a8f7-8e9a0d952aad does not exist
Nov 22 04:15:17 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 766f4205-894a-4170-8880-7df53836c9c5 does not exist
Nov 22 04:15:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:15:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:15:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:15:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:15:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:15:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:15:17 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:15:17 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:15:17 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:15:17 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:15:17 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:15:18 np0005532048 podman[300148]: 2025-11-22 09:15:18.502717394 +0000 UTC m=+0.049891917 container create cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 04:15:18 np0005532048 podman[300148]: 2025-11-22 09:15:18.477519445 +0000 UTC m=+0.024693988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:15:18 np0005532048 systemd[1]: Started libpod-conmon-cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704.scope.
Nov 22 04:15:18 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:15:18 np0005532048 podman[300148]: 2025-11-22 09:15:18.701567552 +0000 UTC m=+0.248742085 container init cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:15:18 np0005532048 podman[300148]: 2025-11-22 09:15:18.713196937 +0000 UTC m=+0.260371460 container start cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:15:18 np0005532048 great_faraday[300164]: 167 167
Nov 22 04:15:18 np0005532048 systemd[1]: libpod-cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704.scope: Deactivated successfully.
Nov 22 04:15:18 np0005532048 podman[300148]: 2025-11-22 09:15:18.72671276 +0000 UTC m=+0.273887293 container attach cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_faraday, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 04:15:18 np0005532048 podman[300148]: 2025-11-22 09:15:18.72877482 +0000 UTC m=+0.275949343 container died cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 04:15:18 np0005532048 systemd[1]: var-lib-containers-storage-overlay-03e60de34f06bb88de27e2bf1ab62e5020244f0235333c974b80c74c826fbff8-merged.mount: Deactivated successfully.
Nov 22 04:15:19 np0005532048 podman[300148]: 2025-11-22 09:15:19.125175974 +0000 UTC m=+0.672350507 container remove cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_faraday, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 04:15:19 np0005532048 systemd[1]: libpod-conmon-cf30025c31aacfd4fc275f9e796f6506edc59349e12eeda669ecd11f1aa0a704.scope: Deactivated successfully.
Nov 22 04:15:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:15:19 np0005532048 podman[300191]: 2025-11-22 09:15:19.34272364 +0000 UTC m=+0.061736258 container create 348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 04:15:19 np0005532048 systemd[1]: Started libpod-conmon-348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb.scope.
Nov 22 04:15:19 np0005532048 podman[300191]: 2025-11-22 09:15:19.315631895 +0000 UTC m=+0.034644533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:15:19 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:15:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38ef69bebc14a24628800c005874c9cf171d721867cd3458b667aabe69dec97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38ef69bebc14a24628800c005874c9cf171d721867cd3458b667aabe69dec97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38ef69bebc14a24628800c005874c9cf171d721867cd3458b667aabe69dec97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38ef69bebc14a24628800c005874c9cf171d721867cd3458b667aabe69dec97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38ef69bebc14a24628800c005874c9cf171d721867cd3458b667aabe69dec97/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:19 np0005532048 podman[300191]: 2025-11-22 09:15:19.481068721 +0000 UTC m=+0.200081359 container init 348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 04:15:19 np0005532048 podman[300191]: 2025-11-22 09:15:19.491097747 +0000 UTC m=+0.210110365 container start 348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 04:15:19 np0005532048 podman[300191]: 2025-11-22 09:15:19.514998204 +0000 UTC m=+0.234010842 container attach 348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 04:15:19 np0005532048 nova_compute[253661]: 2025-11-22 09:15:19.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:20 np0005532048 nova_compute[253661]: 2025-11-22 09:15:20.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:15:20 np0005532048 nova_compute[253661]: 2025-11-22 09:15:20.431 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "45051f55-4273-48ff-b5be-72501a74d560" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:20 np0005532048 nova_compute[253661]: 2025-11-22 09:15:20.431 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:20 np0005532048 nova_compute[253661]: 2025-11-22 09:15:20.519 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:15:20 np0005532048 compassionate_blackwell[300207]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:15:20 np0005532048 compassionate_blackwell[300207]: --> relative data size: 1.0
Nov 22 04:15:20 np0005532048 compassionate_blackwell[300207]: --> All data devices are unavailable
Nov 22 04:15:20 np0005532048 nova_compute[253661]: 2025-11-22 09:15:20.693 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:20 np0005532048 NetworkManager[48920]: <info>  [1763802920.6943] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Nov 22 04:15:20 np0005532048 NetworkManager[48920]: <info>  [1763802920.6957] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Nov 22 04:15:20 np0005532048 podman[300191]: 2025-11-22 09:15:20.710107477 +0000 UTC m=+1.429120105 container died 348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:15:20 np0005532048 systemd[1]: libpod-348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb.scope: Deactivated successfully.
Nov 22 04:15:20 np0005532048 systemd[1]: libpod-348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb.scope: Consumed 1.116s CPU time.
Nov 22 04:15:20 np0005532048 nova_compute[253661]: 2025-11-22 09:15:20.714 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:20 np0005532048 nova_compute[253661]: 2025-11-22 09:15:20.716 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:20 np0005532048 nova_compute[253661]: 2025-11-22 09:15:20.743 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:15:20 np0005532048 nova_compute[253661]: 2025-11-22 09:15:20.744 253665 INFO nova.compute.claims [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:15:20 np0005532048 nova_compute[253661]: 2025-11-22 09:15:20.865 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:20Z|00322|binding|INFO|Releasing lport d5d5c0f3-ca4b-44d4-9294-d8da8d674dc8 from this chassis (sb_readonly=0)
Nov 22 04:15:20 np0005532048 nova_compute[253661]: 2025-11-22 09:15:20.878 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:20 np0005532048 nova_compute[253661]: 2025-11-22 09:15:20.916 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:21 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e38ef69bebc14a24628800c005874c9cf171d721867cd3458b667aabe69dec97-merged.mount: Deactivated successfully.
Nov 22 04:15:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.292 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:21 np0005532048 podman[300191]: 2025-11-22 09:15:21.33961025 +0000 UTC m=+2.058622868 container remove 348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:15:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:15:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2920842919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.412 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:15:21 np0005532048 systemd[1]: libpod-conmon-348badc47e93917053c04cfa1ef6374b932b17b2730e3cd96a72a7ea79b8fdfb.scope: Deactivated successfully.
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.422 253665 DEBUG nova.compute.provider_tree [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.436 253665 DEBUG nova.scheduler.client.report [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.457 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.458 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:15:21 np0005532048 podman[300244]: 2025-11-22 09:15:21.483471236 +0000 UTC m=+0.720868639 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 22 04:15:21 np0005532048 podman[300237]: 2025-11-22 09:15:21.500881763 +0000 UTC m=+0.740879610 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.529 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.531 253665 DEBUG nova.network.neutron [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.551 253665 INFO nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.571 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.666 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.667 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.668 253665 INFO nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Creating image(s)#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.695 253665 DEBUG nova.storage.rbd_utils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] rbd image 45051f55-4273-48ff-b5be-72501a74d560_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.727 253665 DEBUG nova.storage.rbd_utils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] rbd image 45051f55-4273-48ff-b5be-72501a74d560_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.753 253665 DEBUG nova.storage.rbd_utils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] rbd image 45051f55-4273-48ff-b5be-72501a74d560_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.758 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.829 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.830 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.830 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.831 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.881 253665 DEBUG nova.storage.rbd_utils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] rbd image 45051f55-4273-48ff-b5be-72501a74d560_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.895 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 45051f55-4273-48ff-b5be-72501a74d560_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:21 np0005532048 nova_compute[253661]: 2025-11-22 09:15:21.938 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:22 np0005532048 podman[300521]: 2025-11-22 09:15:22.00524488 +0000 UTC m=+0.028971843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:15:22 np0005532048 nova_compute[253661]: 2025-11-22 09:15:22.129 253665 DEBUG nova.policy [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fefecdd1a6a94e3ea3896308da03d91b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc07b24fb9ba4101a34be65493a83a22', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:15:22 np0005532048 podman[300521]: 2025-11-22 09:15:22.174011328 +0000 UTC m=+0.197738251 container create 39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 04:15:22 np0005532048 nova_compute[253661]: 2025-11-22 09:15:22.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:15:22 np0005532048 nova_compute[253661]: 2025-11-22 09:15:22.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:15:22 np0005532048 systemd[1]: Started libpod-conmon-39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e.scope.
Nov 22 04:15:22 np0005532048 nova_compute[253661]: 2025-11-22 09:15:22.263 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:22 np0005532048 nova_compute[253661]: 2025-11-22 09:15:22.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:22 np0005532048 nova_compute[253661]: 2025-11-22 09:15:22.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:15:22 np0005532048 nova_compute[253661]: 2025-11-22 09:15:22.265 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:15:22 np0005532048 nova_compute[253661]: 2025-11-22 09:15:22.266 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:22 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:15:22 np0005532048 podman[300521]: 2025-11-22 09:15:22.312411 +0000 UTC m=+0.336137933 container init 39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 04:15:22 np0005532048 podman[300521]: 2025-11-22 09:15:22.327183592 +0000 UTC m=+0.350910505 container start 39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:15:22 np0005532048 zen_jackson[300552]: 167 167
Nov 22 04:15:22 np0005532048 systemd[1]: libpod-39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e.scope: Deactivated successfully.
Nov 22 04:15:22 np0005532048 podman[300521]: 2025-11-22 09:15:22.391884403 +0000 UTC m=+0.415611346 container attach 39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 22 04:15:22 np0005532048 podman[300521]: 2025-11-22 09:15:22.392677402 +0000 UTC m=+0.416404315 container died 39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:15:22 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a9fa2c8a6744e1671ada74f5be31ecd67ae5a552762d8f5cdbb7406eda9f63a2-merged.mount: Deactivated successfully.
Nov 22 04:15:22 np0005532048 podman[300521]: 2025-11-22 09:15:22.64200364 +0000 UTC m=+0.665730553 container remove 39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 04:15:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:15:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:15:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:15:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:15:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:15:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:15:22 np0005532048 systemd[1]: libpod-conmon-39ae4876e7906fe2725e6b39b294f89137c41e87c0e32acf09f0bb792ae9666e.scope: Deactivated successfully.
Nov 22 04:15:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:15:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2118184208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:15:22 np0005532048 nova_compute[253661]: 2025-11-22 09:15:22.836 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:15:22 np0005532048 nova_compute[253661]: 2025-11-22 09:15:22.904 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:15:22 np0005532048 nova_compute[253661]: 2025-11-22 09:15:22.905 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:15:22 np0005532048 podman[300598]: 2025-11-22 09:15:22.845723727 +0000 UTC m=+0.026797089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.073 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.076 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4022MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.076 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.076 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:23 np0005532048 podman[300598]: 2025-11-22 09:15:23.088664388 +0000 UTC m=+0.269737740 container create c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.167 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.167 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 45051f55-4273-48ff-b5be-72501a74d560 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.168 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.168 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:15:23 np0005532048 systemd[1]: Started libpod-conmon-c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29.scope.
Nov 22 04:15:23 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:15:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad084ae0aa10272221bac580da8e7f5fad105d354b5741eee977af95816c15b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad084ae0aa10272221bac580da8e7f5fad105d354b5741eee977af95816c15b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad084ae0aa10272221bac580da8e7f5fad105d354b5741eee977af95816c15b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad084ae0aa10272221bac580da8e7f5fad105d354b5741eee977af95816c15b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.216 253665 DEBUG nova.compute.manager [req-59509b5e-3ab1-4508-99e8-b4286b0656be req-3db18dd9-3b6b-44ea-a36e-b9c5377250dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-changed-50e75895-e769-4e23-b607-7d52eb14fb62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.216 253665 DEBUG nova.compute.manager [req-59509b5e-3ab1-4508-99e8-b4286b0656be req-3db18dd9-3b6b-44ea-a36e-b9c5377250dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Refreshing instance network info cache due to event network-changed-50e75895-e769-4e23-b607-7d52eb14fb62. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.217 253665 DEBUG oslo_concurrency.lockutils [req-59509b5e-3ab1-4508-99e8-b4286b0656be req-3db18dd9-3b6b-44ea-a36e-b9c5377250dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.217 253665 DEBUG oslo_concurrency.lockutils [req-59509b5e-3ab1-4508-99e8-b4286b0656be req-3db18dd9-3b6b-44ea-a36e-b9c5377250dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.217 253665 DEBUG nova.network.neutron [req-59509b5e-3ab1-4508-99e8-b4286b0656be req-3db18dd9-3b6b-44ea-a36e-b9c5377250dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Refreshing network info cache for port 50e75895-e769-4e23-b607-7d52eb14fb62 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.245 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:15:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 88 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 22 04:15:23 np0005532048 podman[300598]: 2025-11-22 09:15:23.302153776 +0000 UTC m=+0.483227158 container init c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:15:23 np0005532048 podman[300598]: 2025-11-22 09:15:23.311933946 +0000 UTC m=+0.493007298 container start c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.344 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 45051f55-4273-48ff-b5be-72501a74d560_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:15:23 np0005532048 podman[300598]: 2025-11-22 09:15:23.356103981 +0000 UTC m=+0.537177363 container attach c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.537 253665 DEBUG nova.storage.rbd_utils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] resizing rbd image 45051f55-4273-48ff-b5be-72501a74d560_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:15:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:15:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3638808918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.822 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.830 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.848 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.952 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:15:23 np0005532048 nova_compute[253661]: 2025-11-22 09:15:23.953 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]: {
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:    "0": [
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:        {
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "devices": [
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "/dev/loop3"
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            ],
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "lv_name": "ceph_lv0",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "lv_size": "21470642176",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "name": "ceph_lv0",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "tags": {
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.cluster_name": "ceph",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.crush_device_class": "",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.encrypted": "0",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.osd_id": "0",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.type": "block",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.vdo": "0"
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            },
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "type": "block",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "vg_name": "ceph_vg0"
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:        }
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:    ],
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:    "1": [
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:        {
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "devices": [
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "/dev/loop4"
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            ],
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "lv_name": "ceph_lv1",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "lv_size": "21470642176",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "name": "ceph_lv1",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "tags": {
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.cluster_name": "ceph",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.crush_device_class": "",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.encrypted": "0",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.osd_id": "1",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.type": "block",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.vdo": "0"
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            },
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "type": "block",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "vg_name": "ceph_vg1"
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:        }
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:    ],
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:    "2": [
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:        {
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "devices": [
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "/dev/loop5"
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            ],
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "lv_name": "ceph_lv2",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "lv_size": "21470642176",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "name": "ceph_lv2",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "tags": {
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.cluster_name": "ceph",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.crush_device_class": "",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.encrypted": "0",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.osd_id": "2",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.type": "block",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:                "ceph.vdo": "0"
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            },
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "type": "block",
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:            "vg_name": "ceph_vg2"
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:        }
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]:    ]
Nov 22 04:15:24 np0005532048 crazy_yonath[300615]: }
Nov 22 04:15:24 np0005532048 systemd[1]: libpod-c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29.scope: Deactivated successfully.
Nov 22 04:15:24 np0005532048 podman[300598]: 2025-11-22 09:15:24.151519711 +0000 UTC m=+1.332593103 container died c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.295 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.298 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.400 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:15:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ad084ae0aa10272221bac580da8e7f5fad105d354b5741eee977af95816c15b9-merged.mount: Deactivated successfully.
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.547 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.549 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.558 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.558 253665 INFO nova.compute.claims [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.662 253665 DEBUG nova.network.neutron [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Successfully created port: 1da58540-88e4-4125-96c0-62be7cec281d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.678 253665 DEBUG nova.objects.instance [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lazy-loading 'migration_context' on Instance uuid 45051f55-4273-48ff-b5be-72501a74d560 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.690 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.691 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Ensure instance console log exists: /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.692 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.692 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:24 np0005532048 podman[300598]: 2025-11-22 09:15:24.69301194 +0000 UTC m=+1.874085302 container remove c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.693 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:24 np0005532048 systemd[1]: libpod-conmon-c64a7474c54894270ba0645de81baaae0253b85f594224e9ac9111f677268a29.scope: Deactivated successfully.
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.770 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:15:24 np0005532048 podman[300732]: 2025-11-22 09:15:24.876338086 +0000 UTC m=+0.121149128 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.897 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.945 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.945 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.946 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.999 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.999 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:24 np0005532048 nova_compute[253661]: 2025-11-22 09:15:24.999 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:25 np0005532048 nova_compute[253661]: 2025-11-22 09:15:25.220 253665 DEBUG nova.network.neutron [req-59509b5e-3ab1-4508-99e8-b4286b0656be req-3db18dd9-3b6b-44ea-a36e-b9c5377250dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Updated VIF entry in instance network info cache for port 50e75895-e769-4e23-b607-7d52eb14fb62. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:15:25 np0005532048 nova_compute[253661]: 2025-11-22 09:15:25.221 253665 DEBUG nova.network.neutron [req-59509b5e-3ab1-4508-99e8-b4286b0656be req-3db18dd9-3b6b-44ea-a36e-b9c5377250dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Updating instance_info_cache with network_info: [{"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:15:25 np0005532048 nova_compute[253661]: 2025-11-22 09:15:25.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:25 np0005532048 nova_compute[253661]: 2025-11-22 09:15:25.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:25 np0005532048 nova_compute[253661]: 2025-11-22 09:15:25.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:15:25 np0005532048 nova_compute[253661]: 2025-11-22 09:15:25.235 253665 DEBUG oslo_concurrency.lockutils [req-59509b5e-3ab1-4508-99e8-b4286b0656be req-3db18dd9-3b6b-44ea-a36e-b9c5377250dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:15:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:15:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1038955148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:15:25 np0005532048 nova_compute[253661]: 2025-11-22 09:15:25.273 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:15:25 np0005532048 nova_compute[253661]: 2025-11-22 09:15:25.280 253665 DEBUG nova.compute.provider_tree [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:15:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 126 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 97 op/s
Nov 22 04:15:25 np0005532048 nova_compute[253661]: 2025-11-22 09:15:25.292 253665 DEBUG nova.scheduler.client.report [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:15:25 np0005532048 podman[300920]: 2025-11-22 09:15:25.309090152 +0000 UTC m=+0.025191220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:15:25 np0005532048 nova_compute[253661]: 2025-11-22 09:15:25.423 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:25 np0005532048 nova_compute[253661]: 2025-11-22 09:15:25.424 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:15:25 np0005532048 nova_compute[253661]: 2025-11-22 09:15:25.548 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:15:25 np0005532048 nova_compute[253661]: 2025-11-22 09:15:25.548 253665 DEBUG nova.network.neutron [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:15:25 np0005532048 podman[300920]: 2025-11-22 09:15:25.549786438 +0000 UTC m=+0.265887476 container create aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 04:15:25 np0005532048 nova_compute[253661]: 2025-11-22 09:15:25.571 253665 INFO nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:15:25 np0005532048 systemd[1]: Started libpod-conmon-aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f.scope.
Nov 22 04:15:25 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:15:25 np0005532048 nova_compute[253661]: 2025-11-22 09:15:25.665 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:15:25 np0005532048 nova_compute[253661]: 2025-11-22 09:15:25.762 253665 DEBUG nova.policy [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'db8ccc99aef946c58a2604bc21e0ef23', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ad111e77e47541688eda72c9090309e9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:15:25 np0005532048 podman[300920]: 2025-11-22 09:15:25.778283514 +0000 UTC m=+0.494384562 container init aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:15:25 np0005532048 podman[300920]: 2025-11-22 09:15:25.787922641 +0000 UTC m=+0.504023699 container start aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:15:25 np0005532048 xenodochial_wiles[300936]: 167 167
Nov 22 04:15:25 np0005532048 systemd[1]: libpod-aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f.scope: Deactivated successfully.
Nov 22 04:15:25 np0005532048 podman[300920]: 2025-11-22 09:15:25.910853623 +0000 UTC m=+0.626954871 container attach aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 04:15:25 np0005532048 podman[300920]: 2025-11-22 09:15:25.911897568 +0000 UTC m=+0.627998606 container died aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wiles, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 04:15:26 np0005532048 nova_compute[253661]: 2025-11-22 09:15:26.297 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:26 np0005532048 systemd[1]: var-lib-containers-storage-overlay-533109b4ac836828aafc0a360d806ded20bfe72878438a9c9a4f0d1e488e86b9-merged.mount: Deactivated successfully.
Nov 22 04:15:26 np0005532048 nova_compute[253661]: 2025-11-22 09:15:26.479 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:15:26 np0005532048 nova_compute[253661]: 2025-11-22 09:15:26.482 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:15:26 np0005532048 nova_compute[253661]: 2025-11-22 09:15:26.483 253665 INFO nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Creating image(s)
Nov 22 04:15:26 np0005532048 nova_compute[253661]: 2025-11-22 09:15:26.517 253665 DEBUG nova.storage.rbd_utils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] rbd image aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:15:26 np0005532048 nova_compute[253661]: 2025-11-22 09:15:26.556 253665 DEBUG nova.storage.rbd_utils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] rbd image aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:15:26 np0005532048 nova_compute[253661]: 2025-11-22 09:15:26.589 253665 DEBUG nova.storage.rbd_utils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] rbd image aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:15:26 np0005532048 nova_compute[253661]: 2025-11-22 09:15:26.595 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:15:26 np0005532048 nova_compute[253661]: 2025-11-22 09:15:26.683 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:15:26 np0005532048 nova_compute[253661]: 2025-11-22 09:15:26.685 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:26 np0005532048 nova_compute[253661]: 2025-11-22 09:15:26.685 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:26 np0005532048 nova_compute[253661]: 2025-11-22 09:15:26.686 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:26 np0005532048 nova_compute[253661]: 2025-11-22 09:15:26.715 253665 DEBUG nova.storage.rbd_utils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] rbd image aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:15:26 np0005532048 nova_compute[253661]: 2025-11-22 09:15:26.721 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:15:27 np0005532048 podman[300920]: 2025-11-22 09:15:27.237006386 +0000 UTC m=+1.953107434 container remove aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wiles, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:15:27 np0005532048 systemd[1]: libpod-conmon-aa95ad359f021992fe38a83467a2b437db7de6832345b75922e6bd08b678d42f.scope: Deactivated successfully.
Nov 22 04:15:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 134 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Nov 22 04:15:27 np0005532048 podman[301053]: 2025-11-22 09:15:27.407376954 +0000 UTC m=+0.037825531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:15:27 np0005532048 podman[301053]: 2025-11-22 09:15:27.700702543 +0000 UTC m=+0.331151100 container create a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_rosalind, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:15:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:27.959 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:27.959 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:27.960 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:28 np0005532048 systemd[1]: Started libpod-conmon-a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b.scope.
Nov 22 04:15:28 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:15:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add950bbd960fb1b1486a6ed07f039a8539ecb2eee75c31e8a2ae0fe84becbdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add950bbd960fb1b1486a6ed07f039a8539ecb2eee75c31e8a2ae0fe84becbdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add950bbd960fb1b1486a6ed07f039a8539ecb2eee75c31e8a2ae0fe84becbdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add950bbd960fb1b1486a6ed07f039a8539ecb2eee75c31e8a2ae0fe84becbdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:28 np0005532048 nova_compute[253661]: 2025-11-22 09:15:28.222 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:28 np0005532048 podman[301053]: 2025-11-22 09:15:28.504387276 +0000 UTC m=+1.134835883 container init a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 04:15:28 np0005532048 podman[301053]: 2025-11-22 09:15:28.520629055 +0000 UTC m=+1.151077582 container start a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_rosalind, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 04:15:28 np0005532048 nova_compute[253661]: 2025-11-22 09:15:28.860 253665 DEBUG nova.network.neutron [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Successfully created port: 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:15:28 np0005532048 podman[301053]: 2025-11-22 09:15:28.899197369 +0000 UTC m=+1.529645936 container attach a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_rosalind, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:15:29 np0005532048 nova_compute[253661]: 2025-11-22 09:15:29.023 253665 DEBUG nova.network.neutron [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Successfully updated port: 1da58540-88e4-4125-96c0-62be7cec281d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:15:29 np0005532048 nova_compute[253661]: 2025-11-22 09:15:29.126 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:15:29 np0005532048 nova_compute[253661]: 2025-11-22 09:15:29.127 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquired lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:15:29 np0005532048 nova_compute[253661]: 2025-11-22 09:15:29.128 253665 DEBUG nova.network.neutron [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:15:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 141 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.4 MiB/s wr, 96 op/s
Nov 22 04:15:29 np0005532048 nova_compute[253661]: 2025-11-22 09:15:29.355 253665 DEBUG nova.compute.manager [req-4c4f7c4e-fa45-4fc2-85c3-4e78dde555c8 req-5dfcef96-a418-4666-99c4-8b4b17154806 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received event network-changed-1da58540-88e4-4125-96c0-62be7cec281d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:15:29 np0005532048 nova_compute[253661]: 2025-11-22 09:15:29.356 253665 DEBUG nova.compute.manager [req-4c4f7c4e-fa45-4fc2-85c3-4e78dde555c8 req-5dfcef96-a418-4666-99c4-8b4b17154806 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Refreshing instance network info cache due to event network-changed-1da58540-88e4-4125-96c0-62be7cec281d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:15:29 np0005532048 nova_compute[253661]: 2025-11-22 09:15:29.356 253665 DEBUG oslo_concurrency.lockutils [req-4c4f7c4e-fa45-4fc2-85c3-4e78dde555c8 req-5dfcef96-a418-4666-99c4-8b4b17154806 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]: {
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:        "osd_id": 1,
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:        "type": "bluestore"
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:    },
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:        "osd_id": 0,
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:        "type": "bluestore"
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:    },
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:        "osd_id": 2,
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:        "type": "bluestore"
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]:    }
Nov 22 04:15:29 np0005532048 quizzical_rosalind[301069]: }
Nov 22 04:15:29 np0005532048 systemd[1]: libpod-a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b.scope: Deactivated successfully.
Nov 22 04:15:29 np0005532048 systemd[1]: libpod-a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b.scope: Consumed 1.051s CPU time.
Nov 22 04:15:29 np0005532048 podman[301053]: 2025-11-22 09:15:29.582288698 +0000 UTC m=+2.212737255 container died a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_rosalind, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 04:15:29 np0005532048 nova_compute[253661]: 2025-11-22 09:15:29.897 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:30 np0005532048 nova_compute[253661]: 2025-11-22 09:15:30.126 253665 DEBUG nova.network.neutron [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:15:30 np0005532048 systemd[1]: var-lib-containers-storage-overlay-add950bbd960fb1b1486a6ed07f039a8539ecb2eee75c31e8a2ae0fe84becbdd-merged.mount: Deactivated successfully.
Nov 22 04:15:30 np0005532048 nova_compute[253661]: 2025-11-22 09:15:30.455 253665 DEBUG nova.network.neutron [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Successfully updated port: 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:15:30 np0005532048 nova_compute[253661]: 2025-11-22 09:15:30.468 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "refresh_cache-aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:15:30 np0005532048 nova_compute[253661]: 2025-11-22 09:15:30.469 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquired lock "refresh_cache-aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:15:30 np0005532048 nova_compute[253661]: 2025-11-22 09:15:30.469 253665 DEBUG nova.network.neutron [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:15:30 np0005532048 nova_compute[253661]: 2025-11-22 09:15:30.625 253665 DEBUG nova.network.neutron [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:15:30 np0005532048 podman[301053]: 2025-11-22 09:15:30.961667862 +0000 UTC m=+3.592116399 container remove a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:15:30 np0005532048 systemd[1]: libpod-conmon-a4834bce28a5367b7c6666ce07603b7708de0d882dec41cba97d70942be65f7b.scope: Deactivated successfully.
Nov 22 04:15:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:15:31 np0005532048 nova_compute[253661]: 2025-11-22 09:15:31.116 253665 DEBUG nova.compute.manager [req-17f2cdb2-3f37-4c30-9c25-5e4b912e2f30 req-426a558e-a49b-44e4-87fa-13f04c61925e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received event network-changed-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:15:31 np0005532048 nova_compute[253661]: 2025-11-22 09:15:31.117 253665 DEBUG nova.compute.manager [req-17f2cdb2-3f37-4c30-9c25-5e4b912e2f30 req-426a558e-a49b-44e4-87fa-13f04c61925e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Refreshing instance network info cache due to event network-changed-14eb6b64-11d1-4c6f-9c3c-e24463c899c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:15:31 np0005532048 nova_compute[253661]: 2025-11-22 09:15:31.117 253665 DEBUG oslo_concurrency.lockutils [req-17f2cdb2-3f37-4c30-9c25-5e4b912e2f30 req-426a558e-a49b-44e4-87fa-13f04c61925e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:15:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:15:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:15:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 141 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 2.4 MiB/s wr, 61 op/s
Nov 22 04:15:31 np0005532048 nova_compute[253661]: 2025-11-22 09:15:31.300 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:15:31 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 161e868c-971b-462c-8688-625899dc2d0f does not exist
Nov 22 04:15:31 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c89e39f5-e36b-4d06-a87c-ab5d4da87495 does not exist
Nov 22 04:15:32 np0005532048 nova_compute[253661]: 2025-11-22 09:15:32.086 253665 DEBUG nova.network.neutron [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Updating instance_info_cache with network_info: [{"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:15:32 np0005532048 nova_compute[253661]: 2025-11-22 09:15:32.101 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Releasing lock "refresh_cache-aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:15:32 np0005532048 nova_compute[253661]: 2025-11-22 09:15:32.102 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Instance network_info: |[{"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:15:32 np0005532048 nova_compute[253661]: 2025-11-22 09:15:32.102 253665 DEBUG oslo_concurrency.lockutils [req-17f2cdb2-3f37-4c30-9c25-5e4b912e2f30 req-426a558e-a49b-44e4-87fa-13f04c61925e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:15:32 np0005532048 nova_compute[253661]: 2025-11-22 09:15:32.102 253665 DEBUG nova.network.neutron [req-17f2cdb2-3f37-4c30-9c25-5e4b912e2f30 req-426a558e-a49b-44e4-87fa-13f04c61925e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Refreshing network info cache for port 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:15:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:15:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.198 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.275 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.277 253665 DEBUG nova.network.neutron [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Updating instance_info_cache with network_info: [{"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:15:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 151 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.4 MiB/s wr, 68 op/s
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.306 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Releasing lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.307 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Instance network_info: |[{"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.308 253665 DEBUG oslo_concurrency.lockutils [req-4c4f7c4e-fa45-4fc2-85c3-4e78dde555c8 req-5dfcef96-a418-4666-99c4-8b4b17154806 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.309 253665 DEBUG nova.network.neutron [req-4c4f7c4e-fa45-4fc2-85c3-4e78dde555c8 req-5dfcef96-a418-4666-99c4-8b4b17154806 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Refreshing network info cache for port 1da58540-88e4-4125-96c0-62be7cec281d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.313 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Start _get_guest_xml network_info=[{"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.321 253665 DEBUG nova.storage.rbd_utils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] resizing rbd image aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.391 253665 WARNING nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.400 253665 DEBUG nova.virt.libvirt.host [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.400 253665 DEBUG nova.virt.libvirt.host [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.403 253665 DEBUG nova.virt.libvirt.host [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.404 253665 DEBUG nova.virt.libvirt.host [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.405 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.405 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.405 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.406 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.406 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.406 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.407 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.407 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.407 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.407 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.408 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.408 253665 DEBUG nova.virt.hardware [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.411 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.593 253665 DEBUG nova.objects.instance [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lazy-loading 'migration_context' on Instance uuid aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.608 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.608 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Ensure instance console log exists: /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.609 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.609 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.610 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.614 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Start _get_guest_xml network_info=[{"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.620 253665 WARNING nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.625 253665 DEBUG nova.virt.libvirt.host [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.626 253665 DEBUG nova.virt.libvirt.host [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.630 253665 DEBUG nova.virt.libvirt.host [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.630 253665 DEBUG nova.virt.libvirt.host [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.630 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.631 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.632 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.632 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.632 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.632 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.633 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.633 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.633 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.633 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.634 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.634 253665 DEBUG nova.virt.hardware [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.637 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:15:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1951426530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.874 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.895 253665 DEBUG nova.storage.rbd_utils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] rbd image 45051f55-4273-48ff-b5be-72501a74d560_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:15:33 np0005532048 nova_compute[253661]: 2025-11-22 09:15:33.900 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:15:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2296681367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:15:34 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:34Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:07:ef:1b 10.100.0.9
Nov 22 04:15:34 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:34Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:07:ef:1b 10.100.0.9
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.086 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.108 253665 DEBUG nova.storage.rbd_utils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] rbd image aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.112 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.183 253665 DEBUG nova.network.neutron [req-17f2cdb2-3f37-4c30-9c25-5e4b912e2f30 req-426a558e-a49b-44e4-87fa-13f04c61925e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Updated VIF entry in instance network info cache for port 14eb6b64-11d1-4c6f-9c3c-e24463c899c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.184 253665 DEBUG nova.network.neutron [req-17f2cdb2-3f37-4c30-9c25-5e4b912e2f30 req-426a558e-a49b-44e4-87fa-13f04c61925e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Updating instance_info_cache with network_info: [{"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.205 253665 DEBUG oslo_concurrency.lockutils [req-17f2cdb2-3f37-4c30-9c25-5e4b912e2f30 req-426a558e-a49b-44e4-87fa-13f04c61925e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:15:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:15:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/479638859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.374 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.376 253665 DEBUG nova.virt.libvirt.vif [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:15:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1927732921',display_name='tempest-ServersTestManualDisk-server-1927732921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1927732921',id=40,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGBX0yjbHKSpcMTELYvbrtlV9HnVJ+VN3g8rkd9TCKWMPUjySXweCS4cpqzW/ksedFJ/34L4Xm/tZKO9hmn9Qms+oHuE0viyLQ9MdGgB+HYr9JkLrXZ9hRmwZrKPRvprMA==',key_name='tempest-keypair-581094436',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc07b24fb9ba4101a34be65493a83a22',ramdisk_id='',reservation_id='r-tu3melt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-357496739',owner_user_name='tempest-ServersTestManualDisk-357496739-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:15:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fefecdd1a6a94e3ea3896308da03d91b',uuid=45051f55-4273-48ff-b5be-72501a74d560,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.376 253665 DEBUG nova.network.os_vif_util [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Converting VIF {"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.377 253665 DEBUG nova.network.os_vif_util [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:3e:da,bridge_name='br-int',has_traffic_filtering=True,id=1da58540-88e4-4125-96c0-62be7cec281d,network=Network(1a784673-76a0-4c6e-a5bb-2fe1d4413dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1da58540-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.378 253665 DEBUG nova.objects.instance [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45051f55-4273-48ff-b5be-72501a74d560 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.392 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <uuid>45051f55-4273-48ff-b5be-72501a74d560</uuid>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <name>instance-00000028</name>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersTestManualDisk-server-1927732921</nova:name>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:15:33</nova:creationTime>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <nova:user uuid="fefecdd1a6a94e3ea3896308da03d91b">tempest-ServersTestManualDisk-357496739-project-member</nova:user>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <nova:project uuid="dc07b24fb9ba4101a34be65493a83a22">tempest-ServersTestManualDisk-357496739</nova:project>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <nova:port uuid="1da58540-88e4-4125-96c0-62be7cec281d">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <entry name="serial">45051f55-4273-48ff-b5be-72501a74d560</entry>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <entry name="uuid">45051f55-4273-48ff-b5be-72501a74d560</entry>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/45051f55-4273-48ff-b5be-72501a74d560_disk">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/45051f55-4273-48ff-b5be-72501a74d560_disk.config">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:23:3e:da"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <target dev="tap1da58540-88"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560/console.log" append="off"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:15:34 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:15:34 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.393 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Preparing to wait for external event network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.394 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "45051f55-4273-48ff-b5be-72501a74d560-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.394 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.394 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.395 253665 DEBUG nova.virt.libvirt.vif [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:15:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1927732921',display_name='tempest-ServersTestManualDisk-server-1927732921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1927732921',id=40,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGBX0yjbHKSpcMTELYvbrtlV9HnVJ+VN3g8rkd9TCKWMPUjySXweCS4cpqzW/ksedFJ/34L4Xm/tZKO9hmn9Qms+oHuE0viyLQ9MdGgB+HYr9JkLrXZ9hRmwZrKPRvprMA==',key_name='tempest-keypair-581094436',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc07b24fb9ba4101a34be65493a83a22',ramdisk_id='',reservation_id='r-tu3melt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-357496739',owner_user_name='tempest-ServersTestManualDisk-357496739-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:15:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fefecdd1a6a94e3ea3896308da03d91b',uuid=45051f55-4273-48ff-b5be-72501a74d560,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.395 253665 DEBUG nova.network.os_vif_util [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Converting VIF {"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.396 253665 DEBUG nova.network.os_vif_util [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:3e:da,bridge_name='br-int',has_traffic_filtering=True,id=1da58540-88e4-4125-96c0-62be7cec281d,network=Network(1a784673-76a0-4c6e-a5bb-2fe1d4413dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1da58540-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.396 253665 DEBUG os_vif [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:3e:da,bridge_name='br-int',has_traffic_filtering=True,id=1da58540-88e4-4125-96c0-62be7cec281d,network=Network(1a784673-76a0-4c6e-a5bb-2fe1d4413dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1da58540-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.397 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.397 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.398 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.401 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.402 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1da58540-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.402 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1da58540-88, col_values=(('external_ids', {'iface-id': '1da58540-88e4-4125-96c0-62be7cec281d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:3e:da', 'vm-uuid': '45051f55-4273-48ff-b5be-72501a74d560'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:34 np0005532048 NetworkManager[48920]: <info>  [1763802934.4054] manager: (tap1da58540-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.404 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.407 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.413 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.414 253665 INFO os_vif [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:3e:da,bridge_name='br-int',has_traffic_filtering=True,id=1da58540-88e4-4125-96c0-62be7cec281d,network=Network(1a784673-76a0-4c6e-a5bb-2fe1d4413dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1da58540-88')#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.458 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.459 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.459 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] No VIF found with MAC fa:16:3e:23:3e:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.460 253665 INFO nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Using config drive#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.482 253665 DEBUG nova.storage.rbd_utils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] rbd image 45051f55-4273-48ff-b5be-72501a74d560_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:15:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:15:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2845922941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.603 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.605 253665 DEBUG nova.virt.libvirt.vif [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:15:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-121050772',display_name='tempest-ImagesOneServerTestJSON-server-121050772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-121050772',id=41,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad111e77e47541688eda72c9090309e9',ramdisk_id='',reservation_id='r-820nx03b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1578797770',owner_user_name='tempest-ImagesOneServerTestJSON-1578797770-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:15:25Z,user_data=None,user_id='db8ccc99aef946c58a2604bc21e0ef23',uuid=aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.605 253665 DEBUG nova.network.os_vif_util [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Converting VIF {"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.606 253665 DEBUG nova.network.os_vif_util [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a3:a6,bridge_name='br-int',has_traffic_filtering=True,id=14eb6b64-11d1-4c6f-9c3c-e24463c899c9,network=Network(4ca459bc-d9ea-444a-9677-3a7c12339ffd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14eb6b64-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.607 253665 DEBUG nova.objects.instance [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.619 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <uuid>aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad</uuid>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <name>instance-00000029</name>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <nova:name>tempest-ImagesOneServerTestJSON-server-121050772</nova:name>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:15:33</nova:creationTime>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <nova:user uuid="db8ccc99aef946c58a2604bc21e0ef23">tempest-ImagesOneServerTestJSON-1578797770-project-member</nova:user>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <nova:project uuid="ad111e77e47541688eda72c9090309e9">tempest-ImagesOneServerTestJSON-1578797770</nova:project>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <nova:port uuid="14eb6b64-11d1-4c6f-9c3c-e24463c899c9">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <entry name="serial">aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad</entry>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <entry name="uuid">aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad</entry>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk.config">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:f8:a3:a6"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <target dev="tap14eb6b64-11"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad/console.log" append="off"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:15:34 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:15:34 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:15:34 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:15:34 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.620 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Preparing to wait for external event network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.621 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.621 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.621 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.622 253665 DEBUG nova.virt.libvirt.vif [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:15:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-121050772',display_name='tempest-ImagesOneServerTestJSON-server-121050772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-121050772',id=41,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad111e77e47541688eda72c9090309e9',ramdisk_id='',reservation_id='r-820nx03b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1578797770',owner_user_name='tempest-ImagesOneServerTestJSON-1578797770-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:15:25Z,user_data=None,user_id='db8ccc99aef946c58a2604bc21e0ef23',uuid=aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.622 253665 DEBUG nova.network.os_vif_util [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Converting VIF {"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.623 253665 DEBUG nova.network.os_vif_util [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a3:a6,bridge_name='br-int',has_traffic_filtering=True,id=14eb6b64-11d1-4c6f-9c3c-e24463c899c9,network=Network(4ca459bc-d9ea-444a-9677-3a7c12339ffd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14eb6b64-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.625 253665 DEBUG os_vif [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a3:a6,bridge_name='br-int',has_traffic_filtering=True,id=14eb6b64-11d1-4c6f-9c3c-e24463c899c9,network=Network(4ca459bc-d9ea-444a-9677-3a7c12339ffd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14eb6b64-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.625 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.626 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.627 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.629 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.630 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14eb6b64-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.630 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap14eb6b64-11, col_values=(('external_ids', {'iface-id': '14eb6b64-11d1-4c6f-9c3c-e24463c899c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:a3:a6', 'vm-uuid': 'aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.631 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:34 np0005532048 NetworkManager[48920]: <info>  [1763802934.6329] manager: (tap14eb6b64-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.634 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.641 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.643 253665 INFO os_vif [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a3:a6,bridge_name='br-int',has_traffic_filtering=True,id=14eb6b64-11d1-4c6f-9c3c-e24463c899c9,network=Network(4ca459bc-d9ea-444a-9677-3a7c12339ffd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14eb6b64-11')#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.683 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.684 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.684 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] No VIF found with MAC fa:16:3e:f8:a3:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.685 253665 INFO nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Using config drive#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.708 253665 DEBUG nova.storage.rbd_utils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] rbd image aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:15:34 np0005532048 nova_compute[253661]: 2025-11-22 09:15:34.898 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.100 253665 INFO nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Creating config drive at /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad/disk.config#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.109 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp45xa9b7v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.138 253665 DEBUG nova.network.neutron [req-4c4f7c4e-fa45-4fc2-85c3-4e78dde555c8 req-5dfcef96-a418-4666-99c4-8b4b17154806 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Updated VIF entry in instance network info cache for port 1da58540-88e4-4125-96c0-62be7cec281d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.139 253665 DEBUG nova.network.neutron [req-4c4f7c4e-fa45-4fc2-85c3-4e78dde555c8 req-5dfcef96-a418-4666-99c4-8b4b17154806 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Updating instance_info_cache with network_info: [{"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.145 253665 INFO nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Creating config drive at /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560/disk.config#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.151 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0wk_gxsh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.189 253665 DEBUG oslo_concurrency.lockutils [req-4c4f7c4e-fa45-4fc2-85c3-4e78dde555c8 req-5dfcef96-a418-4666-99c4-8b4b17154806 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.248 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp45xa9b7v" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.273 253665 DEBUG nova.storage.rbd_utils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] rbd image aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.279 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad/disk.config aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 194 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 141 KiB/s rd, 5.2 MiB/s wr, 102 op/s
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.321 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0wk_gxsh" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.346 253665 DEBUG nova.storage.rbd_utils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] rbd image 45051f55-4273-48ff-b5be-72501a74d560_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.350 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560/disk.config 45051f55-4273-48ff-b5be-72501a74d560_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.482 253665 DEBUG oslo_concurrency.processutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad/disk.config aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.483 253665 INFO nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Deleting local config drive /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad/disk.config because it was imported into RBD.#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.527 253665 DEBUG oslo_concurrency.processutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560/disk.config 45051f55-4273-48ff-b5be-72501a74d560_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.527 253665 INFO nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Deleting local config drive /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560/disk.config because it was imported into RBD.#033[00m
Nov 22 04:15:35 np0005532048 NetworkManager[48920]: <info>  [1763802935.5565] manager: (tap14eb6b64-11): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Nov 22 04:15:35 np0005532048 kernel: tap14eb6b64-11: entered promiscuous mode
Nov 22 04:15:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:35Z|00323|binding|INFO|Claiming lport 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 for this chassis.
Nov 22 04:15:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:35Z|00324|binding|INFO|14eb6b64-11d1-4c6f-9c3c-e24463c899c9: Claiming fa:16:3e:f8:a3:a6 10.100.0.14
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.563 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:35Z|00325|binding|INFO|Setting lport 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 ovn-installed in OVS
Nov 22 04:15:35 np0005532048 systemd-udevd[301500]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.590 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.593 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:35 np0005532048 NetworkManager[48920]: <info>  [1763802935.6084] device (tap14eb6b64-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:15:35 np0005532048 NetworkManager[48920]: <info>  [1763802935.6094] device (tap14eb6b64-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.613 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:35 np0005532048 NetworkManager[48920]: <info>  [1763802935.6441] manager: (tap1da58540-88): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Nov 22 04:15:35 np0005532048 kernel: tap1da58540-88: entered promiscuous mode
Nov 22 04:15:35 np0005532048 systemd-udevd[301506]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:15:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:35Z|00326|if_status|INFO|Not updating pb chassis for 1da58540-88e4-4125-96c0-62be7cec281d now as sb is readonly
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.648 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:35 np0005532048 systemd-machined[215941]: New machine qemu-45-instance-00000029.
Nov 22 04:15:35 np0005532048 NetworkManager[48920]: <info>  [1763802935.6568] device (tap1da58540-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:15:35 np0005532048 NetworkManager[48920]: <info>  [1763802935.6598] device (tap1da58540-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.670 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:35 np0005532048 systemd[1]: Started Virtual Machine qemu-45-instance-00000029.
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.673 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:35 np0005532048 systemd-machined[215941]: New machine qemu-46-instance-00000028.
Nov 22 04:15:35 np0005532048 systemd[1]: Started Virtual Machine qemu-46-instance-00000028.
Nov 22 04:15:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:35Z|00327|binding|INFO|Claiming lport 1da58540-88e4-4125-96c0-62be7cec281d for this chassis.
Nov 22 04:15:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:35Z|00328|binding|INFO|1da58540-88e4-4125-96c0-62be7cec281d: Claiming fa:16:3e:23:3e:da 10.100.0.5
Nov 22 04:15:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:35Z|00329|binding|INFO|Setting lport 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 up in Southbound
Nov 22 04:15:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:35Z|00330|binding|INFO|Setting lport 1da58540-88e4-4125-96c0-62be7cec281d ovn-installed in OVS
Nov 22 04:15:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.781 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:a3:a6 10.100.0.14'], port_security=['fa:16:3e:f8:a3:a6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca459bc-d9ea-444a-9677-3a7c12339ffd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad111e77e47541688eda72c9090309e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '848a987a-5baf-4ba8-9981-79089e68d473', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f043f8a-2814-434a-a39b-7e1b32dc2849, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=14eb6b64-11d1-4c6f-9c3c-e24463c899c9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:15:35 np0005532048 nova_compute[253661]: 2025-11-22 09:15:35.783 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.785 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 in datapath 4ca459bc-d9ea-444a-9677-3a7c12339ffd bound to our chassis#033[00m
Nov 22 04:15:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.789 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ca459bc-d9ea-444a-9677-3a7c12339ffd#033[00m
Nov 22 04:15:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.804 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1a756e13-b377-414e-8c9c-3f653dcbcffe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.805 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4ca459bc-d1 in ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:15:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.808 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4ca459bc-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:15:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.808 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc88ddce-b198-477f-ac73-44c5cf8661b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.810 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6c0a70-68e1-45fd-9f5f-593309c15011]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.823 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[0867097a-9b62-4ab3-acde-35c711b20576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.847 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:3e:da 10.100.0.5'], port_security=['fa:16:3e:23:3e:da 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '45051f55-4273-48ff-b5be-72501a74d560', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a784673-76a0-4c6e-a5bb-2fe1d4413dea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc07b24fb9ba4101a34be65493a83a22', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82819533-0bf5-47c8-9437-4b645122166d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36ec89d9-135e-42eb-84d1-00a3805c21a1, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1da58540-88e4-4125-96c0-62be7cec281d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:15:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:35Z|00331|binding|INFO|Setting lport 1da58540-88e4-4125-96c0-62be7cec281d up in Southbound
Nov 22 04:15:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.848 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba37940-7268-48df-bd04-0373832f84e2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.884 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[29fba930-861a-49bd-8eca-75ee5af6673e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.891 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e94831b3-8ffc-4bd5-8fd7-ab623d40d197]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:35 np0005532048 NetworkManager[48920]: <info>  [1763802935.8926] manager: (tap4ca459bc-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/144)
Nov 22 04:15:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.935 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[87e2eba3-87a1-4d53-9807-26668229b889]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.939 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d843547b-e236-4b48-903c-37e18fc21857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:35 np0005532048 NetworkManager[48920]: <info>  [1763802935.9676] device (tap4ca459bc-d0): carrier: link connected
Nov 22 04:15:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.974 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4b49ddf4-97eb-456e-9431-0b4242397a32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:35.995 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c7552866-ecbe-46ff-8554-19f9a8116c91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca459bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c2:89:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578670, 'reachable_time': 16065, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301555, 'error': None, 'target': 'ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.014 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c6487cd5-2397-4e70-af3e-02783bb471d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec2:893c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 578670, 'tstamp': 578670}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301556, 'error': None, 'target': 'ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.036 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b461d02e-a5ad-4a02-b579-4ea43cba2993]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca459bc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c2:89:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578670, 'reachable_time': 16065, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301557, 'error': None, 'target': 'ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.079 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[09fd131a-2087-41e8-9416-8dc6bd00eab4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.162 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e19b9bea-3027-4d07-bab8-7a8de4473ddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.165 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca459bc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.166 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.166 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ca459bc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:36 np0005532048 kernel: tap4ca459bc-d0: entered promiscuous mode
Nov 22 04:15:36 np0005532048 NetworkManager[48920]: <info>  [1763802936.1708] manager: (tap4ca459bc-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.176 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.183 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ca459bc-d0, col_values=(('external_ids', {'iface-id': '113a1272-74c8-4666-96b6-8dbb3f235854'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:36 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:36Z|00332|binding|INFO|Releasing lport 113a1272-74c8-4666-96b6-8dbb3f235854 from this chassis (sb_readonly=0)
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.194 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4ca459bc-d9ea-444a-9677-3a7c12339ffd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4ca459bc-d9ea-444a-9677-3a7c12339ffd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.195 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ee41a379-0f83-4452-b8f8-6f27eedd2974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.196 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-4ca459bc-d9ea-444a-9677-3a7c12339ffd
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/4ca459bc-d9ea-444a-9677-3a7c12339ffd.pid.haproxy
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 4ca459bc-d9ea-444a-9677-3a7c12339ffd
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.198 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd', 'env', 'PROCESS_TAG=haproxy-4ca459bc-d9ea-444a-9677-3a7c12339ffd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4ca459bc-d9ea-444a-9677-3a7c12339ffd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.203 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.365 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802936.3636937, aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.365 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] VM Started (Lifecycle Event)#033[00m
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.385 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.390 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802936.363955, aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.391 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.405 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.409 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.427 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.615 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802936.6145773, 45051f55-4273-48ff-b5be-72501a74d560 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.617 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] VM Started (Lifecycle Event)#033[00m
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.635 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.640 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802936.6164296, 45051f55-4273-48ff-b5be-72501a74d560 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.641 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:15:36 np0005532048 podman[301674]: 2025-11-22 09:15:36.677468256 +0000 UTC m=+0.084877508 container create 39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:15:36 np0005532048 systemd[1]: Started libpod-conmon-39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0.scope.
Nov 22 04:15:36 np0005532048 podman[301674]: 2025-11-22 09:15:36.64229334 +0000 UTC m=+0.049702692 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:15:36 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:15:36 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d134d16ff507fa45a351af632330afd5dc37dd1d818793e1cf8013d20243ecc4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:36 np0005532048 podman[301674]: 2025-11-22 09:15:36.77367518 +0000 UTC m=+0.181084452 container init 39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:15:36 np0005532048 podman[301674]: 2025-11-22 09:15:36.786592528 +0000 UTC m=+0.194001780 container start 39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:15:36 np0005532048 neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd[301689]: [NOTICE]   (301693) : New worker (301695) forked
Nov 22 04:15:36 np0005532048 neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd[301689]: [NOTICE]   (301693) : Loading success.
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.842 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.849 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.851 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1da58540-88e4-4125-96c0-62be7cec281d in datapath 1a784673-76a0-4c6e-a5bb-2fe1d4413dea unbound from our chassis#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.854 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1a784673-76a0-4c6e-a5bb-2fe1d4413dea#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.867 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb05832-ae04-43f6-865e-fa5402b0d416]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.868 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1a784673-71 in ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.871 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1a784673-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.871 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac9e6a8-f3eb-4363-b304-b8e57059262b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.873 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b814e0f2-e351-4b55-ab81-408141a8e1eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.890 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[4abf3d60-dc5b-4440-8c11-14f3d70067b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.908 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1b1c86b1-e259-403b-a082-3383ecc2940b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:36 np0005532048 nova_compute[253661]: 2025-11-22 09:15:36.927 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.943 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa12588-df7b-4d9c-b581-9c6f8aa781a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:36 np0005532048 systemd-udevd[301541]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.952 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[abec80c6-32a2-4599-9bb2-2bb5840f25bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:36 np0005532048 NetworkManager[48920]: <info>  [1763802936.9545] manager: (tap1a784673-70): new Veth device (/org/freedesktop/NetworkManager/Devices/146)
Nov 22 04:15:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:36.996 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a685692a-d53d-4d44-960d-306f65d00f3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.002 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[35ae62a9-029c-48af-9088-3171d1cdeb09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:37 np0005532048 NetworkManager[48920]: <info>  [1763802937.0352] device (tap1a784673-70): carrier: link connected
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.043 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[28889c55-94fb-4346-aed7-3cd0e30ea693]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.065 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5286fc04-c3d7-4913-b3ac-c122e76ea7c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1a784673-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:03:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578777, 'reachable_time': 41542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301714, 'error': None, 'target': 'ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.081 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2b3953-72df-4446-a4df-df34b785257c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:354'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 578777, 'tstamp': 578777}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301715, 'error': None, 'target': 'ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.105 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1613c536-fd1c-4894-b5ea-00fc246e9cca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1a784673-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:03:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578777, 'reachable_time': 41542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301716, 'error': None, 'target': 'ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.145 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[27ec54c3-dfc2-4550-ba6d-4e63f6bf282c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.219 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[96dae545-10b6-4dce-a181-15006ffe652a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.222 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a784673-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.222 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.223 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a784673-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.225 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:37 np0005532048 kernel: tap1a784673-70: entered promiscuous mode
Nov 22 04:15:37 np0005532048 NetworkManager[48920]: <info>  [1763802937.2262] manager: (tap1a784673-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.228 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.228 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1a784673-70, col_values=(('external_ids', {'iface-id': '779c5ded-a0c7-4d1b-a9a9-ea6d3ab61012'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.229 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:37 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:37Z|00333|binding|INFO|Releasing lport 779c5ded-a0c7-4d1b-a9a9-ea6d3ab61012 from this chassis (sb_readonly=0)
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.254 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.255 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.256 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1a784673-76a0-4c6e-a5bb-2fe1d4413dea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1a784673-76a0-4c6e-a5bb-2fe1d4413dea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.257 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[21bbfc6b-22c9-406e-80c9-9dce089ff3bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.258 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-1a784673-76a0-4c6e-a5bb-2fe1d4413dea
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/1a784673-76a0-4c6e-a5bb-2fe1d4413dea.pid.haproxy
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 1a784673-76a0-4c6e-a5bb-2fe1d4413dea
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:15:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:37.259 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea', 'env', 'PROCESS_TAG=haproxy-1a784673-76a0-4c6e-a5bb-2fe1d4413dea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1a784673-76a0-4c6e-a5bb-2fe1d4413dea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:15:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 208 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 315 KiB/s rd, 4.2 MiB/s wr, 121 op/s
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.311 253665 DEBUG nova.compute.manager [req-265103c9-4861-4ae4-aecb-0e10ba93c160 req-0d1af624-25bf-4c21-bf07-60d9c853290a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received event network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.311 253665 DEBUG oslo_concurrency.lockutils [req-265103c9-4861-4ae4-aecb-0e10ba93c160 req-0d1af624-25bf-4c21-bf07-60d9c853290a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.312 253665 DEBUG oslo_concurrency.lockutils [req-265103c9-4861-4ae4-aecb-0e10ba93c160 req-0d1af624-25bf-4c21-bf07-60d9c853290a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.312 253665 DEBUG oslo_concurrency.lockutils [req-265103c9-4861-4ae4-aecb-0e10ba93c160 req-0d1af624-25bf-4c21-bf07-60d9c853290a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.312 253665 DEBUG nova.compute.manager [req-265103c9-4861-4ae4-aecb-0e10ba93c160 req-0d1af624-25bf-4c21-bf07-60d9c853290a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Processing event network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.313 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.318 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802937.317737, aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.318 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] VM Resumed (Lifecycle Event)
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.321 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.326 253665 INFO nova.virt.libvirt.driver [-] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Instance spawned successfully.
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.327 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.358 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.368 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.373 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.373 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.374 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.375 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.375 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.375 253665 DEBUG nova.virt.libvirt.driver [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.414 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.613 253665 INFO nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Took 11.13 seconds to spawn the instance on the hypervisor.
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.615 253665 DEBUG nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:15:37 np0005532048 podman[301746]: 2025-11-22 09:15:37.681942263 +0000 UTC m=+0.053358682 container create a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 04:15:37 np0005532048 systemd[1]: Started libpod-conmon-a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0.scope.
Nov 22 04:15:37 np0005532048 podman[301746]: 2025-11-22 09:15:37.656487898 +0000 UTC m=+0.027904337 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:15:37 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:15:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/024c7edb50aa40522fe8ae29b9f05311d4a6ea55593e5b288aee9bfe5a69618c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.780 253665 INFO nova.compute.manager [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Took 13.26 seconds to build instance.
Nov 22 04:15:37 np0005532048 podman[301746]: 2025-11-22 09:15:37.794851179 +0000 UTC m=+0.166267618 container init a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 04:15:37 np0005532048 podman[301746]: 2025-11-22 09:15:37.804563618 +0000 UTC m=+0.175980037 container start a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 04:15:37 np0005532048 neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea[301761]: [NOTICE]   (301765) : New worker (301767) forked
Nov 22 04:15:37 np0005532048 neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea[301761]: [NOTICE]   (301765) : Loading success.
Nov 22 04:15:37 np0005532048 nova_compute[253661]: 2025-11-22 09:15:37.837 253665 DEBUG oslo_concurrency.lockutils [None req-a3d4c80e-7a2b-47c4-a0ad-57d46e457f58 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 214 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 197 op/s
Nov 22 04:15:39 np0005532048 nova_compute[253661]: 2025-11-22 09:15:39.635 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:39 np0005532048 nova_compute[253661]: 2025-11-22 09:15:39.902 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.094 253665 DEBUG nova.compute.manager [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received event network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.094 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.094 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.094 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.095 253665 DEBUG nova.compute.manager [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] No waiting events found dispatching network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.095 253665 WARNING nova.compute.manager [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received unexpected event network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 for instance with vm_state active and task_state None.
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.095 253665 DEBUG nova.compute.manager [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received event network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.095 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "45051f55-4273-48ff-b5be-72501a74d560-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.095 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.096 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.096 253665 DEBUG nova.compute.manager [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Processing event network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.096 253665 DEBUG nova.compute.manager [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received event network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.096 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "45051f55-4273-48ff-b5be-72501a74d560-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.097 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.097 253665 DEBUG oslo_concurrency.lockutils [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.097 253665 DEBUG nova.compute.manager [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] No waiting events found dispatching network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.097 253665 WARNING nova.compute.manager [req-61ab642d-ba51-44e2-943a-8815aa045539 req-7e649fcd-b7bd-47e2-89eb-4c1ddb9c6568 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received unexpected event network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d for instance with vm_state building and task_state spawning.
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.098 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.103 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802941.1022315, 45051f55-4273-48ff-b5be-72501a74d560 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.104 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] VM Resumed (Lifecycle Event)
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.106 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.111 253665 INFO nova.virt.libvirt.driver [-] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Instance spawned successfully.
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.112 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.130 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.138 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.141 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.142 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.142 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.143 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.143 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.144 253665 DEBUG nova.virt.libvirt.driver [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.168 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:15:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 214 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.3 MiB/s wr, 179 op/s
Nov 22 04:15:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.468 253665 INFO nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Took 19.80 seconds to spawn the instance on the hypervisor.
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.469 253665 DEBUG nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.524 253665 INFO nova.compute.manager [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Took 20.84 seconds to build instance.
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.550 253665 DEBUG oslo_concurrency.lockutils [None req-45db96a1-be15-443a-a8ee-63f6896455a2 fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.865 253665 DEBUG nova.compute.manager [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:15:41 np0005532048 nova_compute[253661]: 2025-11-22 09:15:41.903 253665 INFO nova.compute.manager [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] instance snapshotting
Nov 22 04:15:42 np0005532048 nova_compute[253661]: 2025-11-22 09:15:42.141 253665 INFO nova.virt.libvirt.driver [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Beginning live snapshot process
Nov 22 04:15:42 np0005532048 nova_compute[253661]: 2025-11-22 09:15:42.747 253665 DEBUG nova.virt.libvirt.imagebackend [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 04:15:42 np0005532048 nova_compute[253661]: 2025-11-22 09:15:42.918 253665 DEBUG nova.storage.rbd_utils [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] creating snapshot(edc3549a16324003bc4517f29f0dcf23) on rbd image(aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 04:15:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Nov 22 04:15:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Nov 22 04:15:43 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Nov 22 04:15:43 np0005532048 nova_compute[253661]: 2025-11-22 09:15:43.237 253665 DEBUG nova.storage.rbd_utils [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] cloning vms/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk@edc3549a16324003bc4517f29f0dcf23 to images/dfd5ab81-737b-4b61-b64f-2eae89761a6b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 04:15:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 214 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 219 op/s
Nov 22 04:15:43 np0005532048 nova_compute[253661]: 2025-11-22 09:15:43.338 253665 DEBUG nova.compute.manager [req-addfd6ec-7a33-4765-b6a8-76cdad921d63 req-6fc77c51-93ff-420b-9da4-6c2f2dad3057 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-changed-50e75895-e769-4e23-b607-7d52eb14fb62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:15:43 np0005532048 nova_compute[253661]: 2025-11-22 09:15:43.339 253665 DEBUG nova.compute.manager [req-addfd6ec-7a33-4765-b6a8-76cdad921d63 req-6fc77c51-93ff-420b-9da4-6c2f2dad3057 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Refreshing instance network info cache due to event network-changed-50e75895-e769-4e23-b607-7d52eb14fb62. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:15:43 np0005532048 nova_compute[253661]: 2025-11-22 09:15:43.339 253665 DEBUG oslo_concurrency.lockutils [req-addfd6ec-7a33-4765-b6a8-76cdad921d63 req-6fc77c51-93ff-420b-9da4-6c2f2dad3057 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:15:43 np0005532048 nova_compute[253661]: 2025-11-22 09:15:43.339 253665 DEBUG oslo_concurrency.lockutils [req-addfd6ec-7a33-4765-b6a8-76cdad921d63 req-6fc77c51-93ff-420b-9da4-6c2f2dad3057 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:15:43 np0005532048 nova_compute[253661]: 2025-11-22 09:15:43.339 253665 DEBUG nova.network.neutron [req-addfd6ec-7a33-4765-b6a8-76cdad921d63 req-6fc77c51-93ff-420b-9da4-6c2f2dad3057 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Refreshing network info cache for port 50e75895-e769-4e23-b607-7d52eb14fb62 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:15:43 np0005532048 nova_compute[253661]: 2025-11-22 09:15:43.434 253665 DEBUG nova.storage.rbd_utils [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] flattening images/dfd5ab81-737b-4b61-b64f-2eae89761a6b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 04:15:44 np0005532048 nova_compute[253661]: 2025-11-22 09:15:44.265 253665 DEBUG nova.storage.rbd_utils [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] removing snapshot(edc3549a16324003bc4517f29f0dcf23) on rbd image(aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 04:15:44 np0005532048 nova_compute[253661]: 2025-11-22 09:15:44.639 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:44 np0005532048 nova_compute[253661]: 2025-11-22 09:15:44.904 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Nov 22 04:15:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Nov 22 04:15:45 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Nov 22 04:15:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 214 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 104 KiB/s wr, 281 op/s
Nov 22 04:15:45 np0005532048 nova_compute[253661]: 2025-11-22 09:15:45.315 253665 DEBUG nova.storage.rbd_utils [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] creating snapshot(snap) on rbd image(dfd5ab81-737b-4b61-b64f-2eae89761a6b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:15:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Nov 22 04:15:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Nov 22 04:15:46 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Nov 22 04:15:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:46 np0005532048 nova_compute[253661]: 2025-11-22 09:15:46.697 253665 DEBUG nova.network.neutron [req-addfd6ec-7a33-4765-b6a8-76cdad921d63 req-6fc77c51-93ff-420b-9da4-6c2f2dad3057 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Updated VIF entry in instance network info cache for port 50e75895-e769-4e23-b607-7d52eb14fb62. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:15:46 np0005532048 nova_compute[253661]: 2025-11-22 09:15:46.698 253665 DEBUG nova.network.neutron [req-addfd6ec-7a33-4765-b6a8-76cdad921d63 req-6fc77c51-93ff-420b-9da4-6c2f2dad3057 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Updating instance_info_cache with network_info: [{"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:15:46 np0005532048 nova_compute[253661]: 2025-11-22 09:15:46.713 253665 DEBUG oslo_concurrency.lockutils [req-addfd6ec-7a33-4765-b6a8-76cdad921d63 req-6fc77c51-93ff-420b-9da4-6c2f2dad3057 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:15:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 238 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 1.9 MiB/s wr, 261 op/s
Nov 22 04:15:47 np0005532048 nova_compute[253661]: 2025-11-22 09:15:47.801 253665 INFO nova.virt.libvirt.driver [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Snapshot image upload complete#033[00m
Nov 22 04:15:47 np0005532048 nova_compute[253661]: 2025-11-22 09:15:47.802 253665 INFO nova.compute.manager [None req-57d5c1e3-60bd-4e87-958a-128e8addf4c2 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Took 5.90 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 22 04:15:48 np0005532048 nova_compute[253661]: 2025-11-22 09:15:48.006 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "b45c203c-7ae1-436b-86d3-bfc0146dd536" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:48 np0005532048 nova_compute[253661]: 2025-11-22 09:15:48.006 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:48 np0005532048 nova_compute[253661]: 2025-11-22 09:15:48.375 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:15:48 np0005532048 nova_compute[253661]: 2025-11-22 09:15:48.385 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:48.390 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:15:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:48.396 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:15:48 np0005532048 nova_compute[253661]: 2025-11-22 09:15:48.680 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:48 np0005532048 nova_compute[253661]: 2025-11-22 09:15:48.681 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:48 np0005532048 nova_compute[253661]: 2025-11-22 09:15:48.698 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:15:48 np0005532048 nova_compute[253661]: 2025-11-22 09:15:48.699 253665 INFO nova.compute.claims [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:15:48 np0005532048 nova_compute[253661]: 2025-11-22 09:15:48.981 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 260 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 8.1 MiB/s rd, 3.5 MiB/s wr, 266 op/s
Nov 22 04:15:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:15:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3039658547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:15:49 np0005532048 nova_compute[253661]: 2025-11-22 09:15:49.508 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:15:49 np0005532048 nova_compute[253661]: 2025-11-22 09:15:49.515 253665 DEBUG nova.compute.provider_tree [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:15:49 np0005532048 nova_compute[253661]: 2025-11-22 09:15:49.531 253665 DEBUG nova.scheduler.client.report [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:15:49 np0005532048 nova_compute[253661]: 2025-11-22 09:15:49.640 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:49 np0005532048 nova_compute[253661]: 2025-11-22 09:15:49.907 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:50Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:a3:a6 10.100.0.14
Nov 22 04:15:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:50Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:a3:a6 10.100.0.14
Nov 22 04:15:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 260 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 2.7 MiB/s wr, 204 op/s
Nov 22 04:15:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Nov 22 04:15:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Nov 22 04:15:51 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Nov 22 04:15:51 np0005532048 nova_compute[253661]: 2025-11-22 09:15:51.918 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:15:51 np0005532048 nova_compute[253661]: 2025-11-22 09:15:51.920 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:15:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:15:52
Nov 22 04:15:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:15:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:15:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'backups', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'images']
Nov 22 04:15:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:15:52 np0005532048 podman[301940]: 2025-11-22 09:15:52.392629196 +0000 UTC m=+0.069627053 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:15:52 np0005532048 podman[301941]: 2025-11-22 09:15:52.406458155 +0000 UTC m=+0.071935639 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 04:15:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:15:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:15:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:15:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:15:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:15:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:15:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 272 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.7 MiB/s wr, 102 op/s
Nov 22 04:15:54 np0005532048 nova_compute[253661]: 2025-11-22 09:15:54.646 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:15:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:15:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:15:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:15:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:15:54 np0005532048 nova_compute[253661]: 2025-11-22 09:15:54.910 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:15:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:15:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:15:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:15:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:15:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 297 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 6.0 MiB/s wr, 183 op/s
Nov 22 04:15:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:55.398 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:55 np0005532048 podman[301978]: 2025-11-22 09:15:55.430442499 +0000 UTC m=+0.111730698 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:15:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:55Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:3e:da 10.100.0.5
Nov 22 04:15:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:55Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:3e:da 10.100.0.5
Nov 22 04:15:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:56 np0005532048 nova_compute[253661]: 2025-11-22 09:15:56.772 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:15:56 np0005532048 nova_compute[253661]: 2025-11-22 09:15:56.773 253665 DEBUG nova.network.neutron [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:15:56 np0005532048 nova_compute[253661]: 2025-11-22 09:15:56.794 253665 INFO nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:15:56 np0005532048 nova_compute[253661]: 2025-11-22 09:15:56.820 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:15:56 np0005532048 nova_compute[253661]: 2025-11-22 09:15:56.934 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:15:56 np0005532048 nova_compute[253661]: 2025-11-22 09:15:56.935 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:15:56 np0005532048 nova_compute[253661]: 2025-11-22 09:15:56.935 253665 INFO nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Creating image(s)#033[00m
Nov 22 04:15:56 np0005532048 nova_compute[253661]: 2025-11-22 09:15:56.958 253665 DEBUG nova.storage.rbd_utils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] rbd image b45c203c-7ae1-436b-86d3-bfc0146dd536_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:15:56 np0005532048 nova_compute[253661]: 2025-11-22 09:15:56.980 253665 DEBUG nova.storage.rbd_utils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] rbd image b45c203c-7ae1-436b-86d3-bfc0146dd536_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.002 253665 DEBUG nova.storage.rbd_utils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] rbd image b45c203c-7ae1-436b-86d3-bfc0146dd536_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.006 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.035 253665 DEBUG nova.policy [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7b394acfc2f44ed180b65249224f2788', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2f74a0d8c2374c07a9c9cd48b42318c3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.070 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.072 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.072 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.072 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.092 253665 DEBUG nova.storage.rbd_utils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] rbd image b45c203c-7ae1-436b-86d3-bfc0146dd536_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.096 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b45c203c-7ae1-436b-86d3-bfc0146dd536_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:15:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 303 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.8 MiB/s wr, 134 op/s
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.456 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b45c203c-7ae1-436b-86d3-bfc0146dd536_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.514 253665 DEBUG nova.storage.rbd_utils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] resizing rbd image b45c203c-7ae1-436b-86d3-bfc0146dd536_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.559 253665 DEBUG nova.compute.manager [req-821339bd-2198-47af-b1cb-fb5120c5a6df req-a858341b-1e62-4d69-b762-543119aad6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received event network-changed-1da58540-88e4-4125-96c0-62be7cec281d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.559 253665 DEBUG nova.compute.manager [req-821339bd-2198-47af-b1cb-fb5120c5a6df req-a858341b-1e62-4d69-b762-543119aad6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Refreshing instance network info cache due to event network-changed-1da58540-88e4-4125-96c0-62be7cec281d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.559 253665 DEBUG oslo_concurrency.lockutils [req-821339bd-2198-47af-b1cb-fb5120c5a6df req-a858341b-1e62-4d69-b762-543119aad6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.560 253665 DEBUG oslo_concurrency.lockutils [req-821339bd-2198-47af-b1cb-fb5120c5a6df req-a858341b-1e62-4d69-b762-543119aad6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.560 253665 DEBUG nova.network.neutron [req-821339bd-2198-47af-b1cb-fb5120c5a6df req-a858341b-1e62-4d69-b762-543119aad6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Refreshing network info cache for port 1da58540-88e4-4125-96c0-62be7cec281d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.645 253665 DEBUG nova.objects.instance [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lazy-loading 'migration_context' on Instance uuid b45c203c-7ae1-436b-86d3-bfc0146dd536 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.657 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.657 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Ensure instance console log exists: /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.658 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.658 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:57 np0005532048 nova_compute[253661]: 2025-11-22 09:15:57.658 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.347 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.347 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.348 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.348 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.348 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.350 253665 INFO nova.compute.manager [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Terminating instance#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.351 253665 DEBUG nova.compute.manager [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:15:58 np0005532048 kernel: tap50e75895-e7 (unregistering): left promiscuous mode
Nov 22 04:15:58 np0005532048 NetworkManager[48920]: <info>  [1763802958.4609] device (tap50e75895-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:15:58 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:58Z|00334|binding|INFO|Releasing lport 50e75895-e769-4e23-b607-7d52eb14fb62 from this chassis (sb_readonly=0)
Nov 22 04:15:58 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:58Z|00335|binding|INFO|Setting lport 50e75895-e769-4e23-b607-7d52eb14fb62 down in Southbound
Nov 22 04:15:58 np0005532048 ovn_controller[152872]: 2025-11-22T09:15:58Z|00336|binding|INFO|Removing iface tap50e75895-e7 ovn-installed in OVS
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.512 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.514 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.523 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:ef:1b 10.100.0.9'], port_security=['fa:16:3e:07:ef:1b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35d4669f-adae-4ff8-9cc1-a890f0b28c31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fc32fb5484840b1b6654dffb70595ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1f093055-0f73-4edf-a345-d9278a345d48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7266d51a-8673-408f-8e3f-05b71c491331, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=50e75895-e769-4e23-b607-7d52eb14fb62) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:15:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.524 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 50e75895-e769-4e23-b607-7d52eb14fb62 in datapath 35d4669f-adae-4ff8-9cc1-a890f0b28c31 unbound from our chassis#033[00m
Nov 22 04:15:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.526 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 35d4669f-adae-4ff8-9cc1-a890f0b28c31, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:15:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.527 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d915c0-d7e4-4ccf-a66e-c6b10dde11b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.528 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31 namespace which is not needed anymore#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.532 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:58 np0005532048 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000027.scope: Deactivated successfully.
Nov 22 04:15:58 np0005532048 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000027.scope: Consumed 15.278s CPU time.
Nov 22 04:15:58 np0005532048 systemd-machined[215941]: Machine qemu-44-instance-00000027 terminated.
Nov 22 04:15:58 np0005532048 neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31[299744]: [NOTICE]   (299748) : haproxy version is 2.8.14-c23fe91
Nov 22 04:15:58 np0005532048 neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31[299744]: [NOTICE]   (299748) : path to executable is /usr/sbin/haproxy
Nov 22 04:15:58 np0005532048 neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31[299744]: [WARNING]  (299748) : Exiting Master process...
Nov 22 04:15:58 np0005532048 neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31[299744]: [WARNING]  (299748) : Exiting Master process...
Nov 22 04:15:58 np0005532048 neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31[299744]: [ALERT]    (299748) : Current worker (299750) exited with code 143 (Terminated)
Nov 22 04:15:58 np0005532048 neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31[299744]: [WARNING]  (299748) : All workers exited. Exiting... (0)
Nov 22 04:15:58 np0005532048 systemd[1]: libpod-2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb.scope: Deactivated successfully.
Nov 22 04:15:58 np0005532048 podman[302193]: 2025-11-22 09:15:58.66787515 +0000 UTC m=+0.048365390 container died 2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:15:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay-dbd3306a5b7915795f7c52c8141867f19aeb43af9b9e1b85d687973702260cc6-merged.mount: Deactivated successfully.
Nov 22 04:15:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb-userdata-shm.mount: Deactivated successfully.
Nov 22 04:15:58 np0005532048 podman[302193]: 2025-11-22 09:15:58.707223658 +0000 UTC m=+0.087713878 container cleanup 2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:15:58 np0005532048 systemd[1]: libpod-conmon-2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb.scope: Deactivated successfully.
Nov 22 04:15:58 np0005532048 podman[302223]: 2025-11-22 09:15:58.775898326 +0000 UTC m=+0.046342780 container remove 2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:15:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.783 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9c93b00d-fb3c-4fc6-a655-68892ddf049b]: (4, ('Sat Nov 22 09:15:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31 (2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb)\n2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb\nSat Nov 22 09:15:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31 (2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb)\n2a2d329c77eed0841ec27bc9c448ad38b4b5c07eecd06fb2a3d0568caa8208fb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.785 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[47e1b9ba-1942-4fb3-9f58-1c0165328b30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.786 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35d4669f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.788 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:58 np0005532048 kernel: tap35d4669f-a0: left promiscuous mode
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.792 253665 INFO nova.virt.libvirt.driver [-] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Instance destroyed successfully.#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.793 253665 DEBUG nova.objects.instance [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lazy-loading 'resources' on Instance uuid 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.807 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.810 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[090716c2-dbc4-4360-9d0a-c665bc748863]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.813 253665 DEBUG nova.virt.libvirt.vif [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-467780681',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-467780681',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-467780681',id=39,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:15:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6fc32fb5484840b1b6654dffb70595ef',ramdisk_id='',reservation_id='r-072h7wv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1334234428',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1334234428-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:15:13Z,user_data=None,user_id='0e5b221447624e728e9eb5442b5238d1',uuid=9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.813 253665 DEBUG nova.network.os_vif_util [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Converting VIF {"id": "50e75895-e769-4e23-b607-7d52eb14fb62", "address": "fa:16:3e:07:ef:1b", "network": {"id": "35d4669f-adae-4ff8-9cc1-a890f0b28c31", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-2032165637-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fc32fb5484840b1b6654dffb70595ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50e75895-e7", "ovs_interfaceid": "50e75895-e769-4e23-b607-7d52eb14fb62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.814 253665 DEBUG nova.network.os_vif_util [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:07:ef:1b,bridge_name='br-int',has_traffic_filtering=True,id=50e75895-e769-4e23-b607-7d52eb14fb62,network=Network(35d4669f-adae-4ff8-9cc1-a890f0b28c31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e75895-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.815 253665 DEBUG os_vif [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:ef:1b,bridge_name='br-int',has_traffic_filtering=True,id=50e75895-e769-4e23-b607-7d52eb14fb62,network=Network(35d4669f-adae-4ff8-9cc1-a890f0b28c31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e75895-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.816 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.817 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50e75895-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.818 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.820 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:15:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.821 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[36e8068a-83f7-4926-954d-71aaa8a2af6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.822 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b379a088-7b44-4ef4-ac31-a57c94a4c8d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.825 253665 INFO os_vif [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:07:ef:1b,bridge_name='br-int',has_traffic_filtering=True,id=50e75895-e769-4e23-b607-7d52eb14fb62,network=Network(35d4669f-adae-4ff8-9cc1-a890f0b28c31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50e75895-e7')#033[00m
Nov 22 04:15:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.844 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[634ad8d2-6921-437a-b928-7756da5899d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576368, 'reachable_time': 31150, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302253, 'error': None, 'target': 'ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.847 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-35d4669f-adae-4ff8-9cc1-a890f0b28c31 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:15:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:15:58.847 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec46165-9dd5-4a43-8d07-1d2deed9bc0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:15:58 np0005532048 systemd[1]: run-netns-ovnmeta\x2d35d4669f\x2dadae\x2d4ff8\x2d9cc1\x2da890f0b28c31.mount: Deactivated successfully.
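The `\x2d` runs in the mount-unit name above come from systemd's path escaping: `/` becomes `-`, so a literal `-` in the netns name must be hex-escaped. A minimal sketch of that encoding (the real `systemd-escape` also handles leading dots and other corner cases):

```python
def systemd_escape(path: str) -> str:
    # Sketch of systemd's unit-name path escaping:
    # '/' -> '-', and any non [a-zA-Z0-9_.] byte -> '\xNN'.
    out = []
    for ch in path.lstrip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out) or "-"

print(systemd_escape("run/netns/ovnmeta-35d4669f"))
# -> run-netns-ovnmeta\x2d35d4669f
```

This is why the unit for `/run/netns/ovnmeta-35d4669f-...` appears as `run-netns-ovnmeta\x2d35d4669f\x2d....mount`.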
Nov 22 04:15:58 np0005532048 nova_compute[253661]: 2025-11-22 09:15:58.890 253665 DEBUG nova.network.neutron [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Successfully created port: 91a0d7d2-517a-4636-a7fd-86f4d72aed04 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:15:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 358 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 783 KiB/s rd, 6.7 MiB/s wr, 191 op/s
Nov 22 04:15:59 np0005532048 nova_compute[253661]: 2025-11-22 09:15:59.314 253665 INFO nova.virt.libvirt.driver [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Deleting instance files /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_del#033[00m
Nov 22 04:15:59 np0005532048 nova_compute[253661]: 2025-11-22 09:15:59.315 253665 INFO nova.virt.libvirt.driver [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Deletion of /var/lib/nova/instances/9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1_del complete#033[00m
Nov 22 04:15:59 np0005532048 nova_compute[253661]: 2025-11-22 09:15:59.372 253665 INFO nova.compute.manager [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Took 1.02 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:15:59 np0005532048 nova_compute[253661]: 2025-11-22 09:15:59.373 253665 DEBUG oslo.service.loopingcall [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:15:59 np0005532048 nova_compute[253661]: 2025-11-22 09:15:59.373 253665 DEBUG nova.compute.manager [-] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:15:59 np0005532048 nova_compute[253661]: 2025-11-22 09:15:59.373 253665 DEBUG nova.network.neutron [-] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:15:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Nov 22 04:15:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Nov 22 04:15:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Nov 22 04:15:59 np0005532048 nova_compute[253661]: 2025-11-22 09:15:59.913 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:15:59 np0005532048 nova_compute[253661]: 2025-11-22 09:15:59.994 253665 DEBUG nova.network.neutron [req-821339bd-2198-47af-b1cb-fb5120c5a6df req-a858341b-1e62-4d69-b762-543119aad6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Updated VIF entry in instance network info cache for port 1da58540-88e4-4125-96c0-62be7cec281d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:15:59 np0005532048 nova_compute[253661]: 2025-11-22 09:15:59.995 253665 DEBUG nova.network.neutron [req-821339bd-2198-47af-b1cb-fb5120c5a6df req-a858341b-1e62-4d69-b762-543119aad6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Updating instance_info_cache with network_info: [{"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
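The `network_info` payload logged above is a JSON list of VIF entries, each nesting subnets and per-IP floating-IP associations. A hypothetical helper (structure inferred from this log line, not from Nova's API contract) that pulls the fixed and floating addresses out of such a cache entry:

```python
import json

def extract_ips(network_info):
    """Collect fixed and floating addresses from a Nova network_info list
    shaped like the cache entry in the log above (assumed structure)."""
    fixed, floating = [], []
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fixed.append(ip["address"])
                floating.extend(f["address"] for f in ip.get("floating_ips", []))
    return fixed, floating

cache = json.loads('''[{"id": "1da58540", "network": {"subnets": [{"ips":
  [{"address": "10.100.0.5", "floating_ips": [{"address": "192.168.122.175"}]}]}]}}]''')
print(extract_ips(cache))  # (['10.100.0.5'], ['192.168.122.175'])
```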
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.025 253665 DEBUG oslo_concurrency.lockutils [req-821339bd-2198-47af-b1cb-fb5120c5a6df req-a858341b-1e62-4d69-b762-543119aad6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-45051f55-4273-48ff-b5be-72501a74d560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.090 253665 DEBUG nova.compute.manager [req-d0bd1d53-6a2e-4c36-aef0-a1c9c4febc17 req-c06828e7-4a7d-4ae3-ba61-564e11b13855 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-vif-unplugged-50e75895-e769-4e23-b607-7d52eb14fb62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.091 253665 DEBUG oslo_concurrency.lockutils [req-d0bd1d53-6a2e-4c36-aef0-a1c9c4febc17 req-c06828e7-4a7d-4ae3-ba61-564e11b13855 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.092 253665 DEBUG oslo_concurrency.lockutils [req-d0bd1d53-6a2e-4c36-aef0-a1c9c4febc17 req-c06828e7-4a7d-4ae3-ba61-564e11b13855 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.092 253665 DEBUG oslo_concurrency.lockutils [req-d0bd1d53-6a2e-4c36-aef0-a1c9c4febc17 req-c06828e7-4a7d-4ae3-ba61-564e11b13855 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.093 253665 DEBUG nova.compute.manager [req-d0bd1d53-6a2e-4c36-aef0-a1c9c4febc17 req-c06828e7-4a7d-4ae3-ba61-564e11b13855 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] No waiting events found dispatching network-vif-unplugged-50e75895-e769-4e23-b607-7d52eb14fb62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.093 253665 DEBUG nova.compute.manager [req-d0bd1d53-6a2e-4c36-aef0-a1c9c4febc17 req-c06828e7-4a7d-4ae3-ba61-564e11b13855 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-vif-unplugged-50e75895-e769-4e23-b607-7d52eb14fb62 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.342 253665 DEBUG nova.network.neutron [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Successfully updated port: 91a0d7d2-517a-4636-a7fd-86f4d72aed04 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.357 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.358 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquired lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.358 253665 DEBUG nova.network.neutron [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.551 253665 DEBUG nova.network.neutron [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.588 253665 DEBUG nova.network.neutron [-] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.634 253665 INFO nova.compute.manager [-] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Took 1.26 seconds to deallocate network for instance.#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.652 253665 DEBUG nova.compute.manager [req-a497c4e8-7132-4b87-aed0-2caf5753a489 req-b09be5a1-68aa-4ece-985e-e4510c058e11 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-vif-deleted-50e75895-e769-4e23-b607-7d52eb14fb62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.682 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.682 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.725 253665 DEBUG nova.scheduler.client.report [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.750 253665 DEBUG nova.scheduler.client.report [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.751 253665 DEBUG nova.compute.provider_tree [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.774 253665 DEBUG nova.scheduler.client.report [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.779 253665 DEBUG nova.compute.manager [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.800 253665 DEBUG nova.scheduler.client.report [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.826 253665 INFO nova.compute.manager [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] instance snapshotting#033[00m
Nov 22 04:16:00 np0005532048 nova_compute[253661]: 2025-11-22 09:16:00.898 253665 DEBUG oslo_concurrency.processutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
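Nova shells out to `ceph df --format=json` (as logged above) to learn pool capacity. A sketch of running and parsing that command; the `stats` field names follow the `ceph df` JSON output, but verify them against your Ceph release:

```python
import json
import subprocess

def ceph_free_gib(conf="/etc/ceph/ceph.conf", client="openstack"):
    """Run the same command Nova logs above and return free space in GiB."""
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", client, "--conf", conf])
    stats = json.loads(out)["stats"]
    return stats["total_avail_bytes"] / 1024 ** 3

# Parsing only, with a canned payload matching the ceph-mgr pgmap line above
# (59 GiB avail of 60 GiB total):
payload = '{"stats": {"total_bytes": 64424509440, "total_avail_bytes": 63350767616}}'
print(json.loads(payload)["stats"]["total_avail_bytes"] / 1024 ** 3)  # -> 59.0
```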
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.079 253665 INFO nova.virt.libvirt.driver [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Beginning live snapshot process#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.233 253665 DEBUG nova.virt.libvirt.imagebackend [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 22 04:16:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 358 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 787 KiB/s rd, 6.7 MiB/s wr, 192 op/s
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.306 253665 DEBUG nova.network.neutron [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.323 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Releasing lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.324 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Instance network_info: |[{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.330 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Start _get_guest_xml network_info=[{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.335 253665 WARNING nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:16:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.342 253665 DEBUG nova.virt.libvirt.host [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.343 253665 DEBUG nova.virt.libvirt.host [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.348 253665 DEBUG nova.virt.libvirt.host [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.349 253665 DEBUG nova.virt.libvirt.host [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
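The two probes above show this is a cgroups-v2 host: the v1 `cpu` controller is absent, but v2 advertises it. On cgroup v2 the enabled controllers are listed in a single `cgroup.controllers` file; a sketch of that check (file layout per the kernel's cgroup-v2 interface, not Nova's exact code):

```python
from pathlib import Path

def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
    """Return True if the unified cgroup hierarchy at `root` exposes the
    'cpu' controller, i.e. it appears in cgroup.controllers."""
    ctrl_file = Path(root) / "cgroup.controllers"
    if not ctrl_file.exists():
        return False  # not a cgroup-v2 (unified) mount
    return "cpu" in ctrl_file.read_text().split()

print(has_cgroupsv2_cpu_controller())
```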
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.350 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:16:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:16:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/687046343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.350 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.351 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.351 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.351 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.351 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.352 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.352 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.352 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.353 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.353 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.353 253665 DEBUG nova.virt.hardware [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.358 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.402 253665 DEBUG oslo_concurrency.processutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.411 253665 DEBUG nova.compute.provider_tree [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.422 253665 DEBUG nova.storage.rbd_utils [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] creating snapshot(7613fd608e534b8ca1bfd4aed4e348b2) on rbd image(aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.459 253665 DEBUG nova.scheduler.client.report [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.487 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.515 253665 INFO nova.scheduler.client.report [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Deleted allocations for instance 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1#033[00m
Nov 22 04:16:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Nov 22 04:16:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Nov 22 04:16:01 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.588 253665 DEBUG nova.storage.rbd_utils [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] cloning vms/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk@7613fd608e534b8ca1bfd4aed4e348b2 to images/cf47a6a1-d9bd-4443-bd1d-cb349f9fcfe4 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.629 253665 DEBUG oslo_concurrency.lockutils [None req-395a1a39-8fd3-4798-b5ee-5807344e6e8c 0e5b221447624e728e9eb5442b5238d1 6fc32fb5484840b1b6654dffb70595ef - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.713 253665 DEBUG nova.storage.rbd_utils [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] flattening images/cf47a6a1-d9bd-4443-bd1d-cb349f9fcfe4 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 22 04:16:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:16:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3692037821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.875 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.904 253665 DEBUG nova.storage.rbd_utils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] rbd image b45c203c-7ae1-436b-86d3-bfc0146dd536_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:16:01 np0005532048 nova_compute[253661]: 2025-11-22 09:16:01.909 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.364 253665 DEBUG nova.compute.manager [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received event network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.366 253665 DEBUG oslo_concurrency.lockutils [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.366 253665 DEBUG oslo_concurrency.lockutils [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.366 253665 DEBUG oslo_concurrency.lockutils [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.366 253665 DEBUG nova.compute.manager [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] No waiting events found dispatching network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.368 253665 WARNING nova.compute.manager [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Received unexpected event network-vif-plugged-50e75895-e769-4e23-b607-7d52eb14fb62 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.368 253665 DEBUG nova.compute.manager [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-changed-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.368 253665 DEBUG nova.compute.manager [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Refreshing instance network info cache due to event network-changed-91a0d7d2-517a-4636-a7fd-86f4d72aed04. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.369 253665 DEBUG oslo_concurrency.lockutils [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.369 253665 DEBUG oslo_concurrency.lockutils [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.369 253665 DEBUG nova.network.neutron [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Refreshing network info cache for port 91a0d7d2-517a-4636-a7fd-86f4d72aed04 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:16:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:16:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/297915625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.439 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.441 253665 DEBUG nova.virt.libvirt.vif [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1942299149',display_name='tempest-AttachInterfacesUnderV243Test-server-1942299149',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1942299149',id=42,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPJjKzxQ6a+OuJML0HHQQYvCuHT4o36Pe0HTJXEDf/t0kK24QNwKu6PCguH+C6XVYn+ibPKaOztSJwRFEDsoyxOxItcOZetU3VENvv82U9z5y/gmG/qHovd9IPqkeCrJiA==',key_name='tempest-keypair-1712592069',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2f74a0d8c2374c07a9c9cd48b42318c3',ramdisk_id='',reservation_id='r-2g9ce9k3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-776663851',owner_user_name='tempest-AttachInterfacesUnderV243Test-776663851-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:15:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7b394acfc2f44ed180b65249224f2788',uuid=b45c203c-7ae1-436b-86d3-bfc0146dd536,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.441 253665 DEBUG nova.network.os_vif_util [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Converting VIF {"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.442 253665 DEBUG nova.network.os_vif_util [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:22:b3,bridge_name='br-int',has_traffic_filtering=True,id=91a0d7d2-517a-4636-a7fd-86f4d72aed04,network=Network(a8ceec0c-2cf6-459a-a4d7-aaf770041b6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91a0d7d2-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.443 253665 DEBUG nova.objects.instance [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid b45c203c-7ae1-436b-86d3-bfc0146dd536 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.455 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  <uuid>b45c203c-7ae1-436b-86d3-bfc0146dd536</uuid>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  <name>instance-0000002a</name>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-1942299149</nova:name>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:16:01</nova:creationTime>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:16:02 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:        <nova:user uuid="7b394acfc2f44ed180b65249224f2788">tempest-AttachInterfacesUnderV243Test-776663851-project-member</nova:user>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:        <nova:project uuid="2f74a0d8c2374c07a9c9cd48b42318c3">tempest-AttachInterfacesUnderV243Test-776663851</nova:project>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:        <nova:port uuid="91a0d7d2-517a-4636-a7fd-86f4d72aed04">
Nov 22 04:16:02 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <entry name="serial">b45c203c-7ae1-436b-86d3-bfc0146dd536</entry>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <entry name="uuid">b45c203c-7ae1-436b-86d3-bfc0146dd536</entry>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/b45c203c-7ae1-436b-86d3-bfc0146dd536_disk">
Nov 22 04:16:02 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:16:02 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/b45c203c-7ae1-436b-86d3-bfc0146dd536_disk.config">
Nov 22 04:16:02 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:16:02 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:d8:22:b3"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <target dev="tap91a0d7d2-51"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536/console.log" append="off"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:16:02 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:16:02 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:16:02 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:16:02 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.455 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Preparing to wait for external event network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.455 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.455 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.455 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.456 253665 DEBUG nova.virt.libvirt.vif [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1942299149',display_name='tempest-AttachInterfacesUnderV243Test-server-1942299149',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1942299149',id=42,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPJjKzxQ6a+OuJML0HHQQYvCuHT4o36Pe0HTJXEDf/t0kK24QNwKu6PCguH+C6XVYn+ibPKaOztSJwRFEDsoyxOxItcOZetU3VENvv82U9z5y/gmG/qHovd9IPqkeCrJiA==',key_name='tempest-keypair-1712592069',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2f74a0d8c2374c07a9c9cd48b42318c3',ramdisk_id='',reservation_id='r-2g9ce9k3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-776663851',owner_user_name='tempest-AttachInterfacesUnderV243Test-776663851-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:15:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7b394acfc2f44ed180b65249224f2788',uuid=b45c203c-7ae1-436b-86d3-bfc0146dd536,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.456 253665 DEBUG nova.network.os_vif_util [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Converting VIF {"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.457 253665 DEBUG nova.network.os_vif_util [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:22:b3,bridge_name='br-int',has_traffic_filtering=True,id=91a0d7d2-517a-4636-a7fd-86f4d72aed04,network=Network(a8ceec0c-2cf6-459a-a4d7-aaf770041b6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91a0d7d2-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.457 253665 DEBUG os_vif [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:22:b3,bridge_name='br-int',has_traffic_filtering=True,id=91a0d7d2-517a-4636-a7fd-86f4d72aed04,network=Network(a8ceec0c-2cf6-459a-a4d7-aaf770041b6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91a0d7d2-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.457 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.458 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.458 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.461 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.461 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91a0d7d2-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.461 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap91a0d7d2-51, col_values=(('external_ids', {'iface-id': '91a0d7d2-517a-4636-a7fd-86f4d72aed04', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:22:b3', 'vm-uuid': 'b45c203c-7ae1-436b-86d3-bfc0146dd536'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.462 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:02 np0005532048 NetworkManager[48920]: <info>  [1763802962.4641] manager: (tap91a0d7d2-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.466 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.467 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.470 253665 INFO os_vif [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:22:b3,bridge_name='br-int',has_traffic_filtering=True,id=91a0d7d2-517a-4636-a7fd-86f4d72aed04,network=Network(a8ceec0c-2cf6-459a-a4d7-aaf770041b6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91a0d7d2-51')#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.597 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.598 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.598 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] No VIF found with MAC fa:16:3e:d8:22:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.599 253665 INFO nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Using config drive#033[00m
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0025243603283518915 of space, bias 1.0, pg target 0.7573080985055675 quantized to 32 (current 32)
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010121097056716806 of space, bias 1.0, pg target 0.3036329117015042 quantized to 32 (current 32)
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:16:02 np0005532048 nova_compute[253661]: 2025-11-22 09:16:02.679 253665 DEBUG nova.storage.rbd_utils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] rbd image b45c203c-7ae1-436b-86d3-bfc0146dd536_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.151 253665 DEBUG nova.storage.rbd_utils [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] removing snapshot(7613fd608e534b8ca1bfd4aed4e348b2) on rbd image(aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.286 253665 INFO nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Creating config drive at /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536/disk.config#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.300 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpztn3fn_9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:16:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 304 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 508 KiB/s rd, 4.9 MiB/s wr, 165 op/s
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.475 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpztn3fn_9" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.511 253665 DEBUG nova.storage.rbd_utils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] rbd image b45c203c-7ae1-436b-86d3-bfc0146dd536_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.517 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536/disk.config b45c203c-7ae1-436b-86d3-bfc0146dd536_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:16:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Nov 22 04:16:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Nov 22 04:16:03 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.740 253665 DEBUG nova.storage.rbd_utils [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] creating snapshot(snap) on rbd image(cf47a6a1-d9bd-4443-bd1d-cb349f9fcfe4) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.885 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "45051f55-4273-48ff-b5be-72501a74d560" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.885 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.886 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "45051f55-4273-48ff-b5be-72501a74d560-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.886 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.886 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.888 253665 INFO nova.compute.manager [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Terminating instance#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.889 253665 DEBUG nova.compute.manager [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.938 253665 DEBUG nova.network.neutron [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updated VIF entry in instance network info cache for port 91a0d7d2-517a-4636-a7fd-86f4d72aed04. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.939 253665 DEBUG nova.network.neutron [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.947 253665 DEBUG oslo_concurrency.processutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536/disk.config b45c203c-7ae1-436b-86d3-bfc0146dd536_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.948 253665 INFO nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Deleting local config drive /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536/disk.config because it was imported into RBD.#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.959 253665 DEBUG oslo_concurrency.lockutils [req-df88aca2-aa88-4776-af9d-2d33acf34042 req-d8f5c5aa-8c3a-4f5f-b83a-bff96b5b7d16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:16:03 np0005532048 kernel: tap1da58540-88 (unregistering): left promiscuous mode
Nov 22 04:16:03 np0005532048 NetworkManager[48920]: <info>  [1763802963.9860] device (tap1da58540-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:16:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:03Z|00337|binding|INFO|Releasing lport 1da58540-88e4-4125-96c0-62be7cec281d from this chassis (sb_readonly=0)
Nov 22 04:16:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:03Z|00338|binding|INFO|Setting lport 1da58540-88e4-4125-96c0-62be7cec281d down in Southbound
Nov 22 04:16:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:03Z|00339|binding|INFO|Removing iface tap1da58540-88 ovn-installed in OVS
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.993 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:03 np0005532048 nova_compute[253661]: 2025-11-22 09:16:03.995 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:03.999 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:3e:da 10.100.0.5'], port_security=['fa:16:3e:23:3e:da 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '45051f55-4273-48ff-b5be-72501a74d560', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a784673-76a0-4c6e-a5bb-2fe1d4413dea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc07b24fb9ba4101a34be65493a83a22', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82819533-0bf5-47c8-9437-4b645122166d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36ec89d9-135e-42eb-84d1-00a3805c21a1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1da58540-88e4-4125-96c0-62be7cec281d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.001 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1da58540-88e4-4125-96c0-62be7cec281d in datapath 1a784673-76a0-4c6e-a5bb-2fe1d4413dea unbound from our chassis
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.003 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1a784673-76a0-4c6e-a5bb-2fe1d4413dea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.004 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1d144f61-df18-45f8-baa7-807d2ec36fb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.005 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea namespace which is not needed anymore
Nov 22 04:16:04 np0005532048 kernel: tap91a0d7d2-51: entered promiscuous mode
Nov 22 04:16:04 np0005532048 NetworkManager[48920]: <info>  [1763802964.0228] manager: (tap91a0d7d2-51): new Tun device (/org/freedesktop/NetworkManager/Devices/149)
Nov 22 04:16:04 np0005532048 systemd-udevd[302574]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:16:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:04Z|00340|binding|INFO|Claiming lport 91a0d7d2-517a-4636-a7fd-86f4d72aed04 for this chassis.
Nov 22 04:16:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:04Z|00341|binding|INFO|91a0d7d2-517a-4636-a7fd-86f4d72aed04: Claiming fa:16:3e:d8:22:b3 10.100.0.5
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.068 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.079 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:22:b3 10.100.0.5'], port_security=['fa:16:3e:d8:22:b3 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b45c203c-7ae1-436b-86d3-bfc0146dd536', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2f74a0d8c2374c07a9c9cd48b42318c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7459b4dc-5141-4001-a5b6-0d7256031901', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba63bb0-30f0-4e31-af74-7247ce34941d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=91a0d7d2-517a-4636-a7fd-86f4d72aed04) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:16:04 np0005532048 NetworkManager[48920]: <info>  [1763802964.0827] device (tap91a0d7d2-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:16:04 np0005532048 NetworkManager[48920]: <info>  [1763802964.0836] device (tap91a0d7d2-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:16:04 np0005532048 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000028.scope: Deactivated successfully.
Nov 22 04:16:04 np0005532048 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000028.scope: Consumed 14.615s CPU time.
Nov 22 04:16:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:04Z|00342|binding|INFO|Setting lport 91a0d7d2-517a-4636-a7fd-86f4d72aed04 ovn-installed in OVS
Nov 22 04:16:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:04Z|00343|binding|INFO|Setting lport 91a0d7d2-517a-4636-a7fd-86f4d72aed04 up in Southbound
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.091 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:04 np0005532048 systemd-machined[215941]: Machine qemu-46-instance-00000028 terminated.
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.093 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:04 np0005532048 systemd-machined[215941]: New machine qemu-47-instance-0000002a.
Nov 22 04:16:04 np0005532048 systemd[1]: Started Virtual Machine qemu-47-instance-0000002a.
Nov 22 04:16:04 np0005532048 NetworkManager[48920]: <info>  [1763802964.1225] manager: (tap1da58540-88): new Tun device (/org/freedesktop/NetworkManager/Devices/150)
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.123 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.132 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.140 253665 INFO nova.virt.libvirt.driver [-] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Instance destroyed successfully.
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.142 253665 DEBUG nova.objects.instance [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lazy-loading 'resources' on Instance uuid 45051f55-4273-48ff-b5be-72501a74d560 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.156 253665 DEBUG nova.virt.libvirt.vif [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:15:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1927732921',display_name='tempest-ServersTestManualDisk-server-1927732921',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1927732921',id=40,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGBX0yjbHKSpcMTELYvbrtlV9HnVJ+VN3g8rkd9TCKWMPUjySXweCS4cpqzW/ksedFJ/34L4Xm/tZKO9hmn9Qms+oHuE0viyLQ9MdGgB+HYr9JkLrXZ9hRmwZrKPRvprMA==',key_name='tempest-keypair-581094436',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:15:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc07b24fb9ba4101a34be65493a83a22',ramdisk_id='',reservation_id='r-tu3melt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-357496739',owner_user_name='tempest-ServersTestManualDisk-357496739-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:15:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fefecdd1a6a94e3ea3896308da03d91b',uuid=45051f55-4273-48ff-b5be-72501a74d560,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.156 253665 DEBUG nova.network.os_vif_util [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Converting VIF {"id": "1da58540-88e4-4125-96c0-62be7cec281d", "address": "fa:16:3e:23:3e:da", "network": {"id": "1a784673-76a0-4c6e-a5bb-2fe1d4413dea", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-168773556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc07b24fb9ba4101a34be65493a83a22", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1da58540-88", "ovs_interfaceid": "1da58540-88e4-4125-96c0-62be7cec281d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.157 253665 DEBUG nova.network.os_vif_util [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:23:3e:da,bridge_name='br-int',has_traffic_filtering=True,id=1da58540-88e4-4125-96c0-62be7cec281d,network=Network(1a784673-76a0-4c6e-a5bb-2fe1d4413dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1da58540-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.157 253665 DEBUG os_vif [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:3e:da,bridge_name='br-int',has_traffic_filtering=True,id=1da58540-88e4-4125-96c0-62be7cec281d,network=Network(1a784673-76a0-4c6e-a5bb-2fe1d4413dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1da58540-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.158 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.159 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1da58540-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.160 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.162 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.177 253665 INFO os_vif [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:3e:da,bridge_name='br-int',has_traffic_filtering=True,id=1da58540-88e4-4125-96c0-62be7cec281d,network=Network(1a784673-76a0-4c6e-a5bb-2fe1d4413dea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1da58540-88')
Nov 22 04:16:04 np0005532048 neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea[301761]: [NOTICE]   (301765) : haproxy version is 2.8.14-c23fe91
Nov 22 04:16:04 np0005532048 neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea[301761]: [NOTICE]   (301765) : path to executable is /usr/sbin/haproxy
Nov 22 04:16:04 np0005532048 neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea[301761]: [WARNING]  (301765) : Exiting Master process...
Nov 22 04:16:04 np0005532048 neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea[301761]: [WARNING]  (301765) : Exiting Master process...
Nov 22 04:16:04 np0005532048 neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea[301761]: [ALERT]    (301765) : Current worker (301767) exited with code 143 (Terminated)
Nov 22 04:16:04 np0005532048 neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea[301761]: [WARNING]  (301765) : All workers exited. Exiting... (0)
Nov 22 04:16:04 np0005532048 systemd[1]: libpod-a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0.scope: Deactivated successfully.
Nov 22 04:16:04 np0005532048 podman[302606]: 2025-11-22 09:16:04.200869601 +0000 UTC m=+0.058357635 container died a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:16:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0-userdata-shm.mount: Deactivated successfully.
Nov 22 04:16:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay-024c7edb50aa40522fe8ae29b9f05311d4a6ea55593e5b288aee9bfe5a69618c-merged.mount: Deactivated successfully.
Nov 22 04:16:04 np0005532048 podman[302606]: 2025-11-22 09:16:04.256965239 +0000 UTC m=+0.114453263 container cleanup a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:16:04 np0005532048 systemd[1]: libpod-conmon-a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0.scope: Deactivated successfully.
Nov 22 04:16:04 np0005532048 podman[302665]: 2025-11-22 09:16:04.34365708 +0000 UTC m=+0.052876891 container remove a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.353 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dd025800-7c3c-4faf-a80c-ba5c1aebba40]: (4, ('Sat Nov 22 09:16:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea (a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0)\na60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0\nSat Nov 22 09:16:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea (a60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0)\na60226df25e66e16dca173910b851b0435bffcc52c4d4e40ed79d4e6a4213fc0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.355 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eaeba918-08d0-424d-bbc4-35fac193b5a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.357 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a784673-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.360 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:04 np0005532048 kernel: tap1a784673-70: left promiscuous mode
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.378 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.381 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[644062dc-e4d4-4018-8044-8ab0afae5e3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.396 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[088fc0f1-8b75-49ea-8019-c6be8635bde2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.398 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a78bcec9-4ef9-4e0a-97c1-5d6f88b2615f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.422 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f59f3535-4fd1-42cf-88dc-65b4c5a6fc66]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578767, 'reachable_time': 15716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302678, 'error': None, 'target': 'ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:04 np0005532048 systemd[1]: run-netns-ovnmeta\x2d1a784673\x2d76a0\x2d4c6e\x2da5bb\x2d2fe1d4413dea.mount: Deactivated successfully.
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.428 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1a784673-76a0-4c6e-a5bb-2fe1d4413dea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.428 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[ad3b9029-4e6e-4de8-84f6-b2f4d74b0bf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.430 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 91a0d7d2-517a-4636-a7fd-86f4d72aed04 in datapath a8ceec0c-2cf6-459a-a4d7-aaf770041b6c unbound from our chassis
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.431 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a8ceec0c-2cf6-459a-a4d7-aaf770041b6c
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.442 253665 DEBUG nova.compute.manager [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received event network-vif-unplugged-1da58540-88e4-4125-96c0-62be7cec281d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.443 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "45051f55-4273-48ff-b5be-72501a74d560-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.443 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.443 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.444 253665 DEBUG nova.compute.manager [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] No waiting events found dispatching network-vif-unplugged-1da58540-88e4-4125-96c0-62be7cec281d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.444 253665 DEBUG nova.compute.manager [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received event network-vif-unplugged-1da58540-88e4-4125-96c0-62be7cec281d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.444 253665 DEBUG nova.compute.manager [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received event network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.444 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "45051f55-4273-48ff-b5be-72501a74d560-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.444 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.444 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.445 253665 DEBUG nova.compute.manager [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] No waiting events found dispatching network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.445 253665 WARNING nova.compute.manager [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received unexpected event network-vif-plugged-1da58540-88e4-4125-96c0-62be7cec281d for instance with vm_state active and task_state deleting.
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.445 253665 DEBUG nova.compute.manager [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.445 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.445 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.445 253665 DEBUG oslo_concurrency.lockutils [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.445 253665 DEBUG nova.compute.manager [req-b4e939cc-7d91-435e-813c-d9b919a7130a req-9f371d36-8627-4afd-b48f-631999f9cdb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Processing event network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.448 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[338cf403-6cf3-45b1-bf6c-4a87cf792bb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.449 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa8ceec0c-21 in ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.451 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa8ceec0c-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.451 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ebb846-d1e0-4160-b010-df762ce456ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.452 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a544db52-661a-4124-b320-16e83d459d66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.467 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[33879aac-9468-455b-b224-70c52387438a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.495 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dda00fa5-7af1-4060-a1d2-85e98754b5d2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.539 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1b224940-a11b-48de-b79b-fdfd7317acd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:04 np0005532048 NetworkManager[48920]: <info>  [1763802964.5539] manager: (tapa8ceec0c-20): new Veth device (/org/freedesktop/NetworkManager/Devices/151)
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.552 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9eef6d04-57b6-4570-9dc1-3e8252799713]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.598 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ddac8033-e8ad-422e-ab48-79437d26c9e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.601 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[43c5dfd8-6370-4a7a-883e-bd202520d78e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:04 np0005532048 NetworkManager[48920]: <info>  [1763802964.6297] device (tapa8ceec0c-20): carrier: link connected
Nov 22 04:16:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.638 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[49cd6113-9f9c-4558-94e8-16f81b264c08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Nov 22 04:16:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.664 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6afbb932-67dd-43ce-a8eb-e6c690df3ad7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8ceec0c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:0f:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581536, 'reachable_time': 31837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302706, 'error': None, 'target': 'ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.691 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4d1b55cd-7594-49f8-9717-219600c38a6c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:fe7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 581536, 'tstamp': 581536}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302707, 'error': None, 'target': 'ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.712 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b52179f5-0b95-4c04-992a-19b5fc08a25d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8ceec0c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:0f:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581536, 'reachable_time': 31837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302708, 'error': None, 'target': 'ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.752 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a1aa35c8-5db7-4079-ace4-117da5e47486]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.841 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[666bc81e-1220-4bce-8cb1-60cd2bb1992a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.843 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8ceec0c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.844 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.844 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8ceec0c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:04 np0005532048 NetworkManager[48920]: <info>  [1763802964.8479] manager: (tapa8ceec0c-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Nov 22 04:16:04 np0005532048 kernel: tapa8ceec0c-20: entered promiscuous mode
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.847 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.850 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.852 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa8ceec0c-20, col_values=(('external_ids', {'iface-id': '73744eaa-7d97-4c21-9fb3-4378f10417f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.853 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:04Z|00344|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.870 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.871 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a8ceec0c-2cf6-459a-a4d7-aaf770041b6c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a8ceec0c-2cf6-459a-a4d7-aaf770041b6c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.872 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d9b377-c312-4490-9a9d-929c3cd685f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.873 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/a8ceec0c-2cf6-459a-a4d7-aaf770041b6c.pid.haproxy
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID a8ceec0c-2cf6-459a-a4d7-aaf770041b6c
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:16:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:04.874 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'env', 'PROCESS_TAG=haproxy-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a8ceec0c-2cf6-459a-a4d7-aaf770041b6c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.915 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.928 253665 INFO nova.virt.libvirt.driver [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Deleting instance files /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560_del#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.929 253665 INFO nova.virt.libvirt.driver [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Deletion of /var/lib/nova/instances/45051f55-4273-48ff-b5be-72501a74d560_del complete#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.981 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802964.9809964, b45c203c-7ae1-436b-86d3-bfc0146dd536 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.982 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] VM Started (Lifecycle Event)#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.986 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.990 253665 INFO nova.compute.manager [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Took 1.10 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.990 253665 DEBUG oslo.service.loopingcall [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.991 253665 DEBUG nova.compute.manager [-] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.991 253665 DEBUG nova.network.neutron [-] [instance: 45051f55-4273-48ff-b5be-72501a74d560] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:16:04 np0005532048 nova_compute[253661]: 2025-11-22 09:16:04.995 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.001 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.002 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.005 253665 INFO nova.virt.libvirt.driver [-] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Instance spawned successfully.
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.005 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.018 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.018 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802964.9812398, b45c203c-7ae1-436b-86d3-bfc0146dd536 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.019 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] VM Paused (Lifecycle Event)
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.026 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.027 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.027 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.028 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.028 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.028 253665 DEBUG nova.virt.libvirt.driver [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.035 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.038 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763802964.988516, b45c203c-7ae1-436b-86d3-bfc0146dd536 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.038 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] VM Resumed (Lifecycle Event)
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.061 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.065 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.082 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.091 253665 INFO nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Took 8.16 seconds to spawn the instance on the hypervisor.
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.092 253665 DEBUG nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.196 253665 INFO nova.compute.manager [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Took 16.55 seconds to build instance.
Nov 22 04:16:05 np0005532048 nova_compute[253661]: 2025-11-22 09:16:05.212 253665 DEBUG oslo_concurrency.lockutils [None req-f826e5cb-b51e-4751-bd4f-bf3ef6640bd0 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 307 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.7 MiB/s wr, 201 op/s
Nov 22 04:16:05 np0005532048 podman[302781]: 2025-11-22 09:16:05.306482965 +0000 UTC m=+0.087330248 container create 7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 22 04:16:05 np0005532048 podman[302781]: 2025-11-22 09:16:05.253274057 +0000 UTC m=+0.034121360 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:16:05 np0005532048 systemd[1]: Started libpod-conmon-7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557.scope.
Nov 22 04:16:05 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:16:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014918619f881c93f9b6124aa0acf97a639f1327a1c3280fd54ede139d565034/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:05 np0005532048 podman[302781]: 2025-11-22 09:16:05.425121091 +0000 UTC m=+0.205968404 container init 7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 04:16:05 np0005532048 podman[302781]: 2025-11-22 09:16:05.431959148 +0000 UTC m=+0.212806431 container start 7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:16:05 np0005532048 neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c[302796]: [NOTICE]   (302802) : New worker (302804) forked
Nov 22 04:16:05 np0005532048 neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c[302796]: [NOTICE]   (302802) : Loading success.
Nov 22 04:16:06 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:06Z|00345|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 04:16:06 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:06Z|00346|binding|INFO|Releasing lport 113a1272-74c8-4666-96b6-8dbb3f235854 from this chassis (sb_readonly=0)
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.105 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.206 253665 INFO nova.virt.libvirt.driver [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Snapshot image upload complete
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.209 253665 INFO nova.compute.manager [None req-ff44c03e-2711-418f-a041-ea0ed2228ec1 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Took 5.38 seconds to snapshot the instance on the hypervisor.
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.236 253665 DEBUG nova.network.neutron [-] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:16:06 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:06Z|00347|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 04:16:06 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:06Z|00348|binding|INFO|Releasing lport 113a1272-74c8-4666-96b6-8dbb3f235854 from this chassis (sb_readonly=0)
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.265 253665 INFO nova.compute.manager [-] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Took 1.27 seconds to deallocate network for instance.
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.268 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.318 253665 DEBUG nova.compute.manager [req-1e333cd7-c7f8-4f59-a2ca-ca8774cba4d1 req-1fe13229-9cd3-4864-b910-51a593e1817c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Received event network-vif-deleted-1da58540-88e4-4125-96c0-62be7cec281d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.329 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.330 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.423 253665 DEBUG oslo_concurrency.processutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:16:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Nov 22 04:16:06 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.732 253665 DEBUG nova.compute.manager [req-19c9bf84-3213-416b-8c2f-eb44d7282bb9 req-4e5c99e2-e46d-4460-907a-6f405cec05ca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.733 253665 DEBUG oslo_concurrency.lockutils [req-19c9bf84-3213-416b-8c2f-eb44d7282bb9 req-4e5c99e2-e46d-4460-907a-6f405cec05ca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.733 253665 DEBUG oslo_concurrency.lockutils [req-19c9bf84-3213-416b-8c2f-eb44d7282bb9 req-4e5c99e2-e46d-4460-907a-6f405cec05ca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.733 253665 DEBUG oslo_concurrency.lockutils [req-19c9bf84-3213-416b-8c2f-eb44d7282bb9 req-4e5c99e2-e46d-4460-907a-6f405cec05ca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.733 253665 DEBUG nova.compute.manager [req-19c9bf84-3213-416b-8c2f-eb44d7282bb9 req-4e5c99e2-e46d-4460-907a-6f405cec05ca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] No waiting events found dispatching network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.734 253665 WARNING nova.compute.manager [req-19c9bf84-3213-416b-8c2f-eb44d7282bb9 req-4e5c99e2-e46d-4460-907a-6f405cec05ca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received unexpected event network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 for instance with vm_state active and task_state None.
Nov 22 04:16:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:16:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4160454603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.900 253665 DEBUG oslo_concurrency.processutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.909 253665 DEBUG nova.compute.provider_tree [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.923 253665 DEBUG nova.scheduler.client.report [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.946 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:06 np0005532048 nova_compute[253661]: 2025-11-22 09:16:06.975 253665 INFO nova.scheduler.client.report [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Deleted allocations for instance 45051f55-4273-48ff-b5be-72501a74d560
Nov 22 04:16:07 np0005532048 nova_compute[253661]: 2025-11-22 09:16:07.059 253665 DEBUG oslo_concurrency.lockutils [None req-8d533c84-6089-4f0e-b85d-a4183e774feb fefecdd1a6a94e3ea3896308da03d91b dc07b24fb9ba4101a34be65493a83a22 - - default default] Lock "45051f55-4273-48ff-b5be-72501a74d560" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 311 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 9.1 MiB/s wr, 328 op/s
Nov 22 04:16:08 np0005532048 nova_compute[253661]: 2025-11-22 09:16:08.122 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:08 np0005532048 NetworkManager[48920]: <info>  [1763802968.1262] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Nov 22 04:16:08 np0005532048 NetworkManager[48920]: <info>  [1763802968.1282] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Nov 22 04:16:08 np0005532048 nova_compute[253661]: 2025-11-22 09:16:08.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:08Z|00349|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 04:16:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:08Z|00350|binding|INFO|Releasing lport 113a1272-74c8-4666-96b6-8dbb3f235854 from this chassis (sb_readonly=0)
Nov 22 04:16:08 np0005532048 nova_compute[253661]: 2025-11-22 09:16:08.217 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Nov 22 04:16:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Nov 22 04:16:08 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Nov 22 04:16:09 np0005532048 nova_compute[253661]: 2025-11-22 09:16:09.001 253665 DEBUG nova.compute.manager [req-6b82fd57-696b-4243-b883-731eeea804e8 req-2657d410-11db-4c9d-a3dc-fe9c6f755802 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-changed-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:16:09 np0005532048 nova_compute[253661]: 2025-11-22 09:16:09.001 253665 DEBUG nova.compute.manager [req-6b82fd57-696b-4243-b883-731eeea804e8 req-2657d410-11db-4c9d-a3dc-fe9c6f755802 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Refreshing instance network info cache due to event network-changed-91a0d7d2-517a-4636-a7fd-86f4d72aed04. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:16:09 np0005532048 nova_compute[253661]: 2025-11-22 09:16:09.001 253665 DEBUG oslo_concurrency.lockutils [req-6b82fd57-696b-4243-b883-731eeea804e8 req-2657d410-11db-4c9d-a3dc-fe9c6f755802 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:16:09 np0005532048 nova_compute[253661]: 2025-11-22 09:16:09.002 253665 DEBUG oslo_concurrency.lockutils [req-6b82fd57-696b-4243-b883-731eeea804e8 req-2657d410-11db-4c9d-a3dc-fe9c6f755802 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:16:09 np0005532048 nova_compute[253661]: 2025-11-22 09:16:09.002 253665 DEBUG nova.network.neutron [req-6b82fd57-696b-4243-b883-731eeea804e8 req-2657d410-11db-4c9d-a3dc-fe9c6f755802 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Refreshing network info cache for port 91a0d7d2-517a-4636-a7fd-86f4d72aed04 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:16:09 np0005532048 nova_compute[253661]: 2025-11-22 09:16:09.161 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 246 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 6.0 MiB/s wr, 408 op/s
Nov 22 04:16:09 np0005532048 nova_compute[253661]: 2025-11-22 09:16:09.819 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:09 np0005532048 nova_compute[253661]: 2025-11-22 09:16:09.819 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:09 np0005532048 nova_compute[253661]: 2025-11-22 09:16:09.820 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:09 np0005532048 nova_compute[253661]: 2025-11-22 09:16:09.820 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:09 np0005532048 nova_compute[253661]: 2025-11-22 09:16:09.820 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:09 np0005532048 nova_compute[253661]: 2025-11-22 09:16:09.821 253665 INFO nova.compute.manager [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Terminating instance
Nov 22 04:16:09 np0005532048 nova_compute[253661]: 2025-11-22 09:16:09.822 253665 DEBUG nova.compute.manager [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:16:09 np0005532048 kernel: tap14eb6b64-11 (unregistering): left promiscuous mode
Nov 22 04:16:09 np0005532048 NetworkManager[48920]: <info>  [1763802969.8980] device (tap14eb6b64-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:16:09 np0005532048 nova_compute[253661]: 2025-11-22 09:16:09.906 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:09 np0005532048 nova_compute[253661]: 2025-11-22 09:16:09.908 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:09 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:09Z|00351|binding|INFO|Releasing lport 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 from this chassis (sb_readonly=0)
Nov 22 04:16:09 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:09Z|00352|binding|INFO|Setting lport 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 down in Southbound
Nov 22 04:16:09 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:09Z|00353|binding|INFO|Removing iface tap14eb6b64-11 ovn-installed in OVS
Nov 22 04:16:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:09.913 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:a3:a6 10.100.0.14'], port_security=['fa:16:3e:f8:a3:a6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca459bc-d9ea-444a-9677-3a7c12339ffd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad111e77e47541688eda72c9090309e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '848a987a-5baf-4ba8-9981-79089e68d473', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f043f8a-2814-434a-a39b-7e1b32dc2849, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=14eb6b64-11d1-4c6f-9c3c-e24463c899c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:16:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:09.915 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 14eb6b64-11d1-4c6f-9c3c-e24463c899c9 in datapath 4ca459bc-d9ea-444a-9677-3a7c12339ffd unbound from our chassis
Nov 22 04:16:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:09.916 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ca459bc-d9ea-444a-9677-3a7c12339ffd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:16:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:09.918 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe881a87-fc26-4c0d-87ae-c91a8d516286]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:09.918 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd namespace which is not needed anymore
Nov 22 04:16:09 np0005532048 nova_compute[253661]: 2025-11-22 09:16:09.941 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:09 np0005532048 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Deactivated successfully.
Nov 22 04:16:09 np0005532048 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Consumed 14.502s CPU time.
Nov 22 04:16:09 np0005532048 systemd-machined[215941]: Machine qemu-45-instance-00000029 terminated.
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.050 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.061 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.069 253665 INFO nova.virt.libvirt.driver [-] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Instance destroyed successfully.#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.069 253665 DEBUG nova.objects.instance [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lazy-loading 'resources' on Instance uuid aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:16:10 np0005532048 neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd[301689]: [NOTICE]   (301693) : haproxy version is 2.8.14-c23fe91
Nov 22 04:16:10 np0005532048 neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd[301689]: [NOTICE]   (301693) : path to executable is /usr/sbin/haproxy
Nov 22 04:16:10 np0005532048 neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd[301689]: [WARNING]  (301693) : Exiting Master process...
Nov 22 04:16:10 np0005532048 neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd[301689]: [WARNING]  (301693) : Exiting Master process...
Nov 22 04:16:10 np0005532048 neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd[301689]: [ALERT]    (301693) : Current worker (301695) exited with code 143 (Terminated)
Nov 22 04:16:10 np0005532048 neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd[301689]: [WARNING]  (301693) : All workers exited. Exiting... (0)
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.082 253665 DEBUG nova.virt.libvirt.vif [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:15:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-121050772',display_name='tempest-ImagesOneServerTestJSON-server-121050772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-121050772',id=41,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:15:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ad111e77e47541688eda72c9090309e9',ramdisk_id='',reservation_id='r-820nx03b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-1578797770',owner_user_name='tempest-ImagesOneServerTestJSON-1578797770-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:16:06Z,user_data=None,user_id='db8ccc99aef946c58a2604bc21e0ef23',uuid=aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.082 253665 DEBUG nova.network.os_vif_util [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Converting VIF {"id": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "address": "fa:16:3e:f8:a3:a6", "network": {"id": "4ca459bc-d9ea-444a-9677-3a7c12339ffd", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-588518196-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad111e77e47541688eda72c9090309e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14eb6b64-11", "ovs_interfaceid": "14eb6b64-11d1-4c6f-9c3c-e24463c899c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.083 253665 DEBUG nova.network.os_vif_util [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a3:a6,bridge_name='br-int',has_traffic_filtering=True,id=14eb6b64-11d1-4c6f-9c3c-e24463c899c9,network=Network(4ca459bc-d9ea-444a-9677-3a7c12339ffd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14eb6b64-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.083 253665 DEBUG os_vif [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a3:a6,bridge_name='br-int',has_traffic_filtering=True,id=14eb6b64-11d1-4c6f-9c3c-e24463c899c9,network=Network(4ca459bc-d9ea-444a-9677-3a7c12339ffd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14eb6b64-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:16:10 np0005532048 systemd[1]: libpod-39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0.scope: Deactivated successfully.
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.087 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.088 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14eb6b64-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:10 np0005532048 podman[302858]: 2025-11-22 09:16:10.088728153 +0000 UTC m=+0.060701693 container died 39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.095 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.098 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.101 253665 INFO os_vif [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a3:a6,bridge_name='br-int',has_traffic_filtering=True,id=14eb6b64-11d1-4c6f-9c3c-e24463c899c9,network=Network(4ca459bc-d9ea-444a-9677-3a7c12339ffd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14eb6b64-11')#033[00m
Nov 22 04:16:10 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0-userdata-shm.mount: Deactivated successfully.
Nov 22 04:16:10 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d134d16ff507fa45a351af632330afd5dc37dd1d818793e1cf8013d20243ecc4-merged.mount: Deactivated successfully.
Nov 22 04:16:10 np0005532048 podman[302858]: 2025-11-22 09:16:10.127197179 +0000 UTC m=+0.099170719 container cleanup 39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:16:10 np0005532048 systemd[1]: libpod-conmon-39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0.scope: Deactivated successfully.
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.191 253665 DEBUG nova.compute.manager [req-921aa8e8-a117-4ab2-86f7-07a025297ba5 req-fa2eb7fa-bcaa-45bb-b321-f8493f449af9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received event network-vif-unplugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.192 253665 DEBUG oslo_concurrency.lockutils [req-921aa8e8-a117-4ab2-86f7-07a025297ba5 req-fa2eb7fa-bcaa-45bb-b321-f8493f449af9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.192 253665 DEBUG oslo_concurrency.lockutils [req-921aa8e8-a117-4ab2-86f7-07a025297ba5 req-fa2eb7fa-bcaa-45bb-b321-f8493f449af9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.192 253665 DEBUG oslo_concurrency.lockutils [req-921aa8e8-a117-4ab2-86f7-07a025297ba5 req-fa2eb7fa-bcaa-45bb-b321-f8493f449af9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.193 253665 DEBUG nova.compute.manager [req-921aa8e8-a117-4ab2-86f7-07a025297ba5 req-fa2eb7fa-bcaa-45bb-b321-f8493f449af9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] No waiting events found dispatching network-vif-unplugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.193 253665 DEBUG nova.compute.manager [req-921aa8e8-a117-4ab2-86f7-07a025297ba5 req-fa2eb7fa-bcaa-45bb-b321-f8493f449af9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received event network-vif-unplugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:16:10 np0005532048 podman[302915]: 2025-11-22 09:16:10.211124532 +0000 UTC m=+0.058232493 container remove 39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:16:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.217 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[46c24c35-578c-4d24-b8a5-66618269469e]: (4, ('Sat Nov 22 09:16:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd (39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0)\n39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0\nSat Nov 22 09:16:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd (39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0)\n39c6fcbb9c8aafbc11b00c8ae2401b2c6f25f7fb7fd0fe81b61de77ab5621dd0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.220 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[435513fd-e823-4b54-87fa-19b2d0db1d15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.221 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca459bc-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.223 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:10 np0005532048 kernel: tap4ca459bc-d0: left promiscuous mode
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.227 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.228 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4c3f5b89-8b0b-45ce-9740-86ff7fb6269f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.248 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.249 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73a50dc2-7504-4437-a0e6-6859584a1c84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.250 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1eefbf7c-35da-4cbf-8897-2dcba0c4da5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.277 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ad603bab-58a6-4e93-a5f0-176bfeeed60b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578661, 'reachable_time': 17100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302931, 'error': None, 'target': 'ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:10 np0005532048 systemd[1]: run-netns-ovnmeta\x2d4ca459bc\x2dd9ea\x2d444a\x2d9677\x2d3a7c12339ffd.mount: Deactivated successfully.
Nov 22 04:16:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.284 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4ca459bc-d9ea-444a-9677-3a7c12339ffd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:16:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:10.285 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2df8d5-6bd0-4463-9e42-1c18e9bccb45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.560 253665 DEBUG nova.network.neutron [req-6b82fd57-696b-4243-b883-731eeea804e8 req-2657d410-11db-4c9d-a3dc-fe9c6f755802 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updated VIF entry in instance network info cache for port 91a0d7d2-517a-4636-a7fd-86f4d72aed04. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.561 253665 DEBUG nova.network.neutron [req-6b82fd57-696b-4243-b883-731eeea804e8 req-2657d410-11db-4c9d-a3dc-fe9c6f755802 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.572 253665 INFO nova.virt.libvirt.driver [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Deleting instance files /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_del#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.572 253665 INFO nova.virt.libvirt.driver [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Deletion of /var/lib/nova/instances/aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad_del complete#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.592 253665 DEBUG oslo_concurrency.lockutils [req-6b82fd57-696b-4243-b883-731eeea804e8 req-2657d410-11db-4c9d-a3dc-fe9c6f755802 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.621 253665 INFO nova.compute.manager [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.622 253665 DEBUG oslo.service.loopingcall [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.622 253665 DEBUG nova.compute.manager [-] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:16:10 np0005532048 nova_compute[253661]: 2025-11-22 09:16:10.622 253665 DEBUG nova.network.neutron [-] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:16:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 246 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 2.1 MiB/s wr, 267 op/s
Nov 22 04:16:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Nov 22 04:16:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Nov 22 04:16:11 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Nov 22 04:16:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:11Z|00354|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 04:16:11 np0005532048 nova_compute[253661]: 2025-11-22 09:16:11.534 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:11 np0005532048 nova_compute[253661]: 2025-11-22 09:16:11.832 253665 DEBUG nova.network.neutron [-] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:16:11 np0005532048 nova_compute[253661]: 2025-11-22 09:16:11.851 253665 INFO nova.compute.manager [-] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Took 1.23 seconds to deallocate network for instance.#033[00m
Nov 22 04:16:11 np0005532048 nova_compute[253661]: 2025-11-22 09:16:11.897 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:11 np0005532048 nova_compute[253661]: 2025-11-22 09:16:11.898 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:11 np0005532048 nova_compute[253661]: 2025-11-22 09:16:11.916 253665 DEBUG nova.compute.manager [req-c3fd46cb-92bf-46a1-8813-e7dfc735ca89 req-a76ed072-26a7-43f3-a54d-1dd06401414d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received event network-vif-deleted-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:11 np0005532048 nova_compute[253661]: 2025-11-22 09:16:11.993 253665 DEBUG oslo_concurrency.processutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:16:12 np0005532048 nova_compute[253661]: 2025-11-22 09:16:12.275 253665 DEBUG nova.compute.manager [req-eb5edc6d-1eab-4eaa-8744-f065a35e2a3e req-54604d6b-db2d-416e-9f77-16752ef66d0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received event network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:12 np0005532048 nova_compute[253661]: 2025-11-22 09:16:12.275 253665 DEBUG oslo_concurrency.lockutils [req-eb5edc6d-1eab-4eaa-8744-f065a35e2a3e req-54604d6b-db2d-416e-9f77-16752ef66d0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:12 np0005532048 nova_compute[253661]: 2025-11-22 09:16:12.276 253665 DEBUG oslo_concurrency.lockutils [req-eb5edc6d-1eab-4eaa-8744-f065a35e2a3e req-54604d6b-db2d-416e-9f77-16752ef66d0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:12 np0005532048 nova_compute[253661]: 2025-11-22 09:16:12.276 253665 DEBUG oslo_concurrency.lockutils [req-eb5edc6d-1eab-4eaa-8744-f065a35e2a3e req-54604d6b-db2d-416e-9f77-16752ef66d0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:12 np0005532048 nova_compute[253661]: 2025-11-22 09:16:12.276 253665 DEBUG nova.compute.manager [req-eb5edc6d-1eab-4eaa-8744-f065a35e2a3e req-54604d6b-db2d-416e-9f77-16752ef66d0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] No waiting events found dispatching network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:16:12 np0005532048 nova_compute[253661]: 2025-11-22 09:16:12.276 253665 WARNING nova.compute.manager [req-eb5edc6d-1eab-4eaa-8744-f065a35e2a3e req-54604d6b-db2d-416e-9f77-16752ef66d0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Received unexpected event network-vif-plugged-14eb6b64-11d1-4c6f-9c3c-e24463c899c9 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:16:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:16:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3882136957' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:16:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:16:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3882136957' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:16:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:16:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3059825303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:16:12 np0005532048 nova_compute[253661]: 2025-11-22 09:16:12.483 253665 DEBUG oslo_concurrency.processutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:16:12 np0005532048 nova_compute[253661]: 2025-11-22 09:16:12.491 253665 DEBUG nova.compute.provider_tree [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:16:12 np0005532048 nova_compute[253661]: 2025-11-22 09:16:12.510 253665 DEBUG nova.scheduler.client.report [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:16:12 np0005532048 nova_compute[253661]: 2025-11-22 09:16:12.533 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:12 np0005532048 nova_compute[253661]: 2025-11-22 09:16:12.576 253665 INFO nova.scheduler.client.report [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Deleted allocations for instance aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad#033[00m
Nov 22 04:16:12 np0005532048 nova_compute[253661]: 2025-11-22 09:16:12.639 253665 DEBUG oslo_concurrency.lockutils [None req-4d8cb4e4-f5d9-4ed3-bf7e-afb471988da6 db8ccc99aef946c58a2604bc21e0ef23 ad111e77e47541688eda72c9090309e9 - - default default] Lock "aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 208 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 2.0 MiB/s wr, 287 op/s
Nov 22 04:16:13 np0005532048 nova_compute[253661]: 2025-11-22 09:16:13.791 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802958.790629, 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:16:13 np0005532048 nova_compute[253661]: 2025-11-22 09:16:13.792 253665 INFO nova.compute.manager [-] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:16:13 np0005532048 nova_compute[253661]: 2025-11-22 09:16:13.810 253665 DEBUG nova.compute.manager [None req-33f9528d-8391-4f16-a5f9-dbfbc617d596 - - - - - -] [instance: 9ccdc1a5-8e01-4a77-bb72-cc6acfca6ed1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:16:14 np0005532048 nova_compute[253661]: 2025-11-22 09:16:14.954 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:15 np0005532048 nova_compute[253661]: 2025-11-22 09:16:15.089 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 27 KiB/s wr, 200 op/s
Nov 22 04:16:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:15Z|00355|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 04:16:15 np0005532048 nova_compute[253661]: 2025-11-22 09:16:15.941 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Nov 22 04:16:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Nov 22 04:16:16 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Nov 22 04:16:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 4.4 KiB/s wr, 69 op/s
Nov 22 04:16:17 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 22 04:16:18 np0005532048 nova_compute[253661]: 2025-11-22 09:16:18.509 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:19 np0005532048 nova_compute[253661]: 2025-11-22 09:16:19.097 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:19 np0005532048 nova_compute[253661]: 2025-11-22 09:16:19.139 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802964.1378357, 45051f55-4273-48ff-b5be-72501a74d560 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:16:19 np0005532048 nova_compute[253661]: 2025-11-22 09:16:19.140 253665 INFO nova.compute.manager [-] [instance: 45051f55-4273-48ff-b5be-72501a74d560] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:16:19 np0005532048 nova_compute[253661]: 2025-11-22 09:16:19.160 253665 DEBUG nova.compute.manager [None req-a2a0b78d-cf98-4ee4-995d-a4bc30c84f06 - - - - - -] [instance: 45051f55-4273-48ff-b5be-72501a74d560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:16:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 105 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 406 KiB/s rd, 2.4 MiB/s wr, 125 op/s
Nov 22 04:16:19 np0005532048 nova_compute[253661]: 2025-11-22 09:16:19.956 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:20Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d8:22:b3 10.100.0.5
Nov 22 04:16:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:20Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d8:22:b3 10.100.0.5
Nov 22 04:16:20 np0005532048 nova_compute[253661]: 2025-11-22 09:16:20.091 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:21 np0005532048 nova_compute[253661]: 2025-11-22 09:16:21.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:16:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 105 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 1.9 MiB/s wr, 101 op/s
Nov 22 04:16:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:21 np0005532048 nova_compute[253661]: 2025-11-22 09:16:21.525 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:22 np0005532048 nova_compute[253661]: 2025-11-22 09:16:22.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:16:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:16:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:16:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:16:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:16:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:16:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:16:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:22Z|00356|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 04:16:22 np0005532048 nova_compute[253661]: 2025-11-22 09:16:22.884 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:23 np0005532048 nova_compute[253661]: 2025-11-22 09:16:23.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:16:23 np0005532048 nova_compute[253661]: 2025-11-22 09:16:23.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:16:23 np0005532048 nova_compute[253661]: 2025-11-22 09:16:23.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:16:23 np0005532048 nova_compute[253661]: 2025-11-22 09:16:23.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:16:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 112 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 2.5 MiB/s wr, 95 op/s
Nov 22 04:16:23 np0005532048 podman[302955]: 2025-11-22 09:16:23.374506483 +0000 UTC m=+0.063029820 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:16:23 np0005532048 podman[302956]: 2025-11-22 09:16:23.379425884 +0000 UTC m=+0.064703141 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 04:16:23 np0005532048 nova_compute[253661]: 2025-11-22 09:16:23.466 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:16:23 np0005532048 nova_compute[253661]: 2025-11-22 09:16:23.467 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:16:23 np0005532048 nova_compute[253661]: 2025-11-22 09:16:23.467 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:16:23 np0005532048 nova_compute[253661]: 2025-11-22 09:16:23.467 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b45c203c-7ae1-436b-86d3-bfc0146dd536 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:16:23 np0005532048 nova_compute[253661]: 2025-11-22 09:16:23.654 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:24 np0005532048 nova_compute[253661]: 2025-11-22 09:16:24.958 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:25 np0005532048 nova_compute[253661]: 2025-11-22 09:16:25.065 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763802970.0640647, aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:16:25 np0005532048 nova_compute[253661]: 2025-11-22 09:16:25.065 253665 INFO nova.compute.manager [-] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:16:25 np0005532048 nova_compute[253661]: 2025-11-22 09:16:25.084 253665 DEBUG nova.compute.manager [None req-e1f0e237-9bc3-4b8f-b459-c766f62b8578 - - - - - -] [instance: aa80db2e-73f3-47f4-bdc3-3d0b1bf5edad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:16:25 np0005532048 nova_compute[253661]: 2025-11-22 09:16:25.094 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 121 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 472 KiB/s rd, 2.6 MiB/s wr, 78 op/s
Nov 22 04:16:25 np0005532048 nova_compute[253661]: 2025-11-22 09:16:25.871 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:16:25 np0005532048 nova_compute[253661]: 2025-11-22 09:16:25.885 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:16:25 np0005532048 nova_compute[253661]: 2025-11-22 09:16:25.886 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:16:25 np0005532048 nova_compute[253661]: 2025-11-22 09:16:25.887 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:16:25 np0005532048 nova_compute[253661]: 2025-11-22 09:16:25.887 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:16:25 np0005532048 nova_compute[253661]: 2025-11-22 09:16:25.887 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:16:25 np0005532048 nova_compute[253661]: 2025-11-22 09:16:25.909 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:25 np0005532048 nova_compute[253661]: 2025-11-22 09:16:25.909 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:25 np0005532048 nova_compute[253661]: 2025-11-22 09:16:25.909 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:25 np0005532048 nova_compute[253661]: 2025-11-22 09:16:25.909 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:16:25 np0005532048 nova_compute[253661]: 2025-11-22 09:16:25.910 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:16:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:16:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2066135902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:16:26 np0005532048 nova_compute[253661]: 2025-11-22 09:16:26.348 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:16:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:26 np0005532048 nova_compute[253661]: 2025-11-22 09:16:26.428 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:16:26 np0005532048 nova_compute[253661]: 2025-11-22 09:16:26.429 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:16:26 np0005532048 podman[303015]: 2025-11-22 09:16:26.444344204 +0000 UTC m=+0.139417838 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 22 04:16:26 np0005532048 nova_compute[253661]: 2025-11-22 09:16:26.585 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:16:26 np0005532048 nova_compute[253661]: 2025-11-22 09:16:26.587 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4010MB free_disk=59.9428596496582GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:16:26 np0005532048 nova_compute[253661]: 2025-11-22 09:16:26.587 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:26 np0005532048 nova_compute[253661]: 2025-11-22 09:16:26.587 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:26 np0005532048 nova_compute[253661]: 2025-11-22 09:16:26.697 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance b45c203c-7ae1-436b-86d3-bfc0146dd536 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:16:26 np0005532048 nova_compute[253661]: 2025-11-22 09:16:26.697 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:16:26 np0005532048 nova_compute[253661]: 2025-11-22 09:16:26.698 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:16:26 np0005532048 nova_compute[253661]: 2025-11-22 09:16:26.742 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:16:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:16:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1121506405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:16:27 np0005532048 nova_compute[253661]: 2025-11-22 09:16:27.224 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:16:27 np0005532048 nova_compute[253661]: 2025-11-22 09:16:27.233 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:16:27 np0005532048 nova_compute[253661]: 2025-11-22 09:16:27.263 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:16:27 np0005532048 nova_compute[253661]: 2025-11-22 09:16:27.281 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:16:27 np0005532048 nova_compute[253661]: 2025-11-22 09:16:27.282 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 121 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 433 KiB/s rd, 2.4 MiB/s wr, 72 op/s
Nov 22 04:16:27 np0005532048 nova_compute[253661]: 2025-11-22 09:16:27.624 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:16:27 np0005532048 nova_compute[253661]: 2025-11-22 09:16:27.625 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:16:27 np0005532048 nova_compute[253661]: 2025-11-22 09:16:27.626 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:16:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Nov 22 04:16:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Nov 22 04:16:27 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Nov 22 04:16:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:27.959 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:27.960 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:27.961 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:29 np0005532048 nova_compute[253661]: 2025-11-22 09:16:29.257 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 121 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 192 KiB/s rd, 675 KiB/s wr, 44 op/s
Nov 22 04:16:29 np0005532048 nova_compute[253661]: 2025-11-22 09:16:29.959 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:30 np0005532048 nova_compute[253661]: 2025-11-22 09:16:30.095 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 121 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 192 KiB/s rd, 675 KiB/s wr, 44 op/s
Nov 22 04:16:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:31 np0005532048 nova_compute[253661]: 2025-11-22 09:16:31.562 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:31 np0005532048 nova_compute[253661]: 2025-11-22 09:16:31.563 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:31 np0005532048 nova_compute[253661]: 2025-11-22 09:16:31.579 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:16:31 np0005532048 nova_compute[253661]: 2025-11-22 09:16:31.658 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:31 np0005532048 nova_compute[253661]: 2025-11-22 09:16:31.659 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:31 np0005532048 nova_compute[253661]: 2025-11-22 09:16:31.669 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:16:31 np0005532048 nova_compute[253661]: 2025-11-22 09:16:31.670 253665 INFO nova.compute.claims [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:16:31 np0005532048 nova_compute[253661]: 2025-11-22 09:16:31.809 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2236845316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.257 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.266 253665 DEBUG nova.compute.provider_tree [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.280 253665 DEBUG nova.scheduler.client.report [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.310 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.311 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.382 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.382 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.408 253665 INFO nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.429 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.515 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.516 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.517 253665 INFO nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Creating image(s)#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.544 253665 DEBUG nova.storage.rbd_utils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.568 253665 DEBUG nova.storage.rbd_utils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:16:32 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 58d7eab2-c467-4842-8976-f07ff5ee6e00 does not exist
Nov 22 04:16:32 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ad118c96-d7a0-4800-968f-2a7dd491eebf does not exist
Nov 22 04:16:32 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev da6849e0-b24e-4dc6-8a89-92a6201342d7 does not exist
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.598 253665 DEBUG nova.storage.rbd_utils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.603 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.681 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.683 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.683 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.684 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.709 253665 DEBUG nova.storage.rbd_utils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:16:32 np0005532048 nova_compute[253661]: 2025-11-22 09:16:32.714 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:16:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:16:33 np0005532048 nova_compute[253661]: 2025-11-22 09:16:33.105 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:16:33 np0005532048 nova_compute[253661]: 2025-11-22 09:16:33.136 253665 DEBUG nova.policy [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5ae8af2cc9f40e083473a191ddd445f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:16:33 np0005532048 nova_compute[253661]: 2025-11-22 09:16:33.186 253665 DEBUG nova.storage.rbd_utils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] resizing rbd image 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:16:33 np0005532048 podman[303472]: 2025-11-22 09:16:33.214563425 +0000 UTC m=+0.056828787 container create fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:16:33 np0005532048 podman[303472]: 2025-11-22 09:16:33.182158959 +0000 UTC m=+0.024424351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:16:33 np0005532048 systemd[1]: Started libpod-conmon-fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477.scope.
Nov 22 04:16:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 121 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 147 KiB/s rd, 78 KiB/s wr, 35 op/s
Nov 22 04:16:33 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:16:33 np0005532048 podman[303472]: 2025-11-22 09:16:33.444784854 +0000 UTC m=+0.287050236 container init fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pike, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 04:16:33 np0005532048 podman[303472]: 2025-11-22 09:16:33.460985772 +0000 UTC m=+0.303251134 container start fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 04:16:33 np0005532048 gracious_pike[303524]: 167 167
Nov 22 04:16:33 np0005532048 systemd[1]: libpod-fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477.scope: Deactivated successfully.
Nov 22 04:16:33 np0005532048 podman[303472]: 2025-11-22 09:16:33.551407665 +0000 UTC m=+0.393673037 container attach fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:16:33 np0005532048 podman[303472]: 2025-11-22 09:16:33.552556413 +0000 UTC m=+0.394821775 container died fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pike, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 22 04:16:33 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c5023cc3e7f4737db0cb89a154e8adf10969638e8d99de612a8c8646724af487-merged.mount: Deactivated successfully.
Nov 22 04:16:33 np0005532048 nova_compute[253661]: 2025-11-22 09:16:33.772 253665 DEBUG nova.objects.instance [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:16:33 np0005532048 nova_compute[253661]: 2025-11-22 09:16:33.785 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:16:33 np0005532048 nova_compute[253661]: 2025-11-22 09:16:33.785 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Ensure instance console log exists: /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:16:33 np0005532048 nova_compute[253661]: 2025-11-22 09:16:33.786 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:33 np0005532048 nova_compute[253661]: 2025-11-22 09:16:33.786 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:33 np0005532048 nova_compute[253661]: 2025-11-22 09:16:33.787 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:33 np0005532048 podman[303472]: 2025-11-22 09:16:33.813176328 +0000 UTC m=+0.655441690 container remove fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:16:33 np0005532048 systemd[1]: libpod-conmon-fa537c9b452f8e7623b1e197bd40c677aa1611187b179558a2824817f5ca4477.scope: Deactivated successfully.
Nov 22 04:16:33 np0005532048 podman[303566]: 2025-11-22 09:16:33.991397068 +0000 UTC m=+0.046945345 container create 80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 04:16:34 np0005532048 podman[303566]: 2025-11-22 09:16:33.969714816 +0000 UTC m=+0.025263133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:16:34 np0005532048 systemd[1]: Started libpod-conmon-80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b.scope.
Nov 22 04:16:34 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:16:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e0d82016fddeb617460c7c897294d12c0b338225ecdb34a314c0100f98bf60b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e0d82016fddeb617460c7c897294d12c0b338225ecdb34a314c0100f98bf60b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e0d82016fddeb617460c7c897294d12c0b338225ecdb34a314c0100f98bf60b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e0d82016fddeb617460c7c897294d12c0b338225ecdb34a314c0100f98bf60b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e0d82016fddeb617460c7c897294d12c0b338225ecdb34a314c0100f98bf60b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:34 np0005532048 podman[303566]: 2025-11-22 09:16:34.171413463 +0000 UTC m=+0.226961760 container init 80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 04:16:34 np0005532048 podman[303566]: 2025-11-22 09:16:34.178870866 +0000 UTC m=+0.234419143 container start 80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 04:16:34 np0005532048 podman[303566]: 2025-11-22 09:16:34.216750247 +0000 UTC m=+0.272298614 container attach 80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:16:34 np0005532048 nova_compute[253661]: 2025-11-22 09:16:34.219 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Successfully created port: dc08e15e-7d04-4fac-8489-61a2d7b5a642 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:16:34 np0005532048 nova_compute[253661]: 2025-11-22 09:16:34.652 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Successfully created port: babfaba0-2c0a-4eb0-adc0-3473b0b80a08 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:16:34 np0005532048 nova_compute[253661]: 2025-11-22 09:16:34.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:35 np0005532048 nova_compute[253661]: 2025-11-22 09:16:35.097 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 151 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.3 MiB/s wr, 34 op/s
Nov 22 04:16:35 np0005532048 great_matsumoto[303582]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:16:35 np0005532048 great_matsumoto[303582]: --> relative data size: 1.0
Nov 22 04:16:35 np0005532048 great_matsumoto[303582]: --> All data devices are unavailable
Nov 22 04:16:35 np0005532048 systemd[1]: libpod-80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b.scope: Deactivated successfully.
Nov 22 04:16:35 np0005532048 systemd[1]: libpod-80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b.scope: Consumed 1.133s CPU time.
Nov 22 04:16:35 np0005532048 podman[303566]: 2025-11-22 09:16:35.370804281 +0000 UTC m=+1.426352578 container died 80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 04:16:35 np0005532048 nova_compute[253661]: 2025-11-22 09:16:35.382 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Successfully created port: 3c735a93-ffc0-4525-bd7d-7db35fe17769 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:16:35 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1e0d82016fddeb617460c7c897294d12c0b338225ecdb34a314c0100f98bf60b-merged.mount: Deactivated successfully.
Nov 22 04:16:35 np0005532048 podman[303566]: 2025-11-22 09:16:35.693185795 +0000 UTC m=+1.748734062 container remove 80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:16:35 np0005532048 systemd[1]: libpod-conmon-80e637046d5a9149ed20b17c49ef78bce95aecee3be7b4d7686d1f606057ae1b.scope: Deactivated successfully.
Nov 22 04:16:36 np0005532048 nova_compute[253661]: 2025-11-22 09:16:36.092 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Successfully updated port: dc08e15e-7d04-4fac-8489-61a2d7b5a642 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:16:36 np0005532048 nova_compute[253661]: 2025-11-22 09:16:36.252 253665 DEBUG nova.compute.manager [req-4140bea0-4d04-42b3-b883-7a35eda7b2e4 req-a44f9a39-a183-41ef-8a14-3d2b8e948da0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-changed-dc08e15e-7d04-4fac-8489-61a2d7b5a642 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:36 np0005532048 nova_compute[253661]: 2025-11-22 09:16:36.253 253665 DEBUG nova.compute.manager [req-4140bea0-4d04-42b3-b883-7a35eda7b2e4 req-a44f9a39-a183-41ef-8a14-3d2b8e948da0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Refreshing instance network info cache due to event network-changed-dc08e15e-7d04-4fac-8489-61a2d7b5a642. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:16:36 np0005532048 nova_compute[253661]: 2025-11-22 09:16:36.253 253665 DEBUG oslo_concurrency.lockutils [req-4140bea0-4d04-42b3-b883-7a35eda7b2e4 req-a44f9a39-a183-41ef-8a14-3d2b8e948da0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:16:36 np0005532048 nova_compute[253661]: 2025-11-22 09:16:36.253 253665 DEBUG oslo_concurrency.lockutils [req-4140bea0-4d04-42b3-b883-7a35eda7b2e4 req-a44f9a39-a183-41ef-8a14-3d2b8e948da0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:16:36 np0005532048 nova_compute[253661]: 2025-11-22 09:16:36.254 253665 DEBUG nova.network.neutron [req-4140bea0-4d04-42b3-b883-7a35eda7b2e4 req-a44f9a39-a183-41ef-8a14-3d2b8e948da0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Refreshing network info cache for port dc08e15e-7d04-4fac-8489-61a2d7b5a642 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:16:36 np0005532048 podman[303762]: 2025-11-22 09:16:36.401497794 +0000 UTC m=+0.046147085 container create 51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 04:16:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:36 np0005532048 systemd[1]: Started libpod-conmon-51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7.scope.
Nov 22 04:16:36 np0005532048 nova_compute[253661]: 2025-11-22 09:16:36.449 253665 DEBUG nova.network.neutron [req-4140bea0-4d04-42b3-b883-7a35eda7b2e4 req-a44f9a39-a183-41ef-8a14-3d2b8e948da0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:16:36 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:16:36 np0005532048 podman[303762]: 2025-11-22 09:16:36.380892028 +0000 UTC m=+0.025541349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:16:36 np0005532048 podman[303762]: 2025-11-22 09:16:36.495238038 +0000 UTC m=+0.139887359 container init 51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 04:16:36 np0005532048 podman[303762]: 2025-11-22 09:16:36.504798143 +0000 UTC m=+0.149447444 container start 51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 04:16:36 np0005532048 podman[303762]: 2025-11-22 09:16:36.509512798 +0000 UTC m=+0.154162209 container attach 51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:16:36 np0005532048 dazzling_nightingale[303778]: 167 167
Nov 22 04:16:36 np0005532048 systemd[1]: libpod-51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7.scope: Deactivated successfully.
Nov 22 04:16:36 np0005532048 podman[303762]: 2025-11-22 09:16:36.511780925 +0000 UTC m=+0.156430226 container died 51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:16:36 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c2085170234b579f1d9a3a27e4767b0934d88101fb9bfbf18dab73ae81721aa6-merged.mount: Deactivated successfully.
Nov 22 04:16:36 np0005532048 podman[303762]: 2025-11-22 09:16:36.576493245 +0000 UTC m=+0.221142556 container remove 51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 04:16:36 np0005532048 systemd[1]: libpod-conmon-51cee29398fcab5258987abf44a891cd2c5eb45fb778ace787a52d053f331ef7.scope: Deactivated successfully.
Nov 22 04:16:36 np0005532048 podman[303800]: 2025-11-22 09:16:36.748985155 +0000 UTC m=+0.042019335 container create a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lichterman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 04:16:36 np0005532048 systemd[1]: Started libpod-conmon-a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f.scope.
Nov 22 04:16:36 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:16:36 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3a2040269b058bb7ea17f91d0a0fb8d095a77311716e5417c1c2eaf5c6cb53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:36 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3a2040269b058bb7ea17f91d0a0fb8d095a77311716e5417c1c2eaf5c6cb53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:36 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3a2040269b058bb7ea17f91d0a0fb8d095a77311716e5417c1c2eaf5c6cb53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:36 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3a2040269b058bb7ea17f91d0a0fb8d095a77311716e5417c1c2eaf5c6cb53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:36 np0005532048 podman[303800]: 2025-11-22 09:16:36.732652403 +0000 UTC m=+0.025686603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:16:36 np0005532048 podman[303800]: 2025-11-22 09:16:36.837175052 +0000 UTC m=+0.130209252 container init a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lichterman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:16:36 np0005532048 podman[303800]: 2025-11-22 09:16:36.843459817 +0000 UTC m=+0.136493997 container start a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lichterman, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:16:36 np0005532048 podman[303800]: 2025-11-22 09:16:36.85011356 +0000 UTC m=+0.143147740 container attach a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 04:16:37 np0005532048 nova_compute[253661]: 2025-11-22 09:16:37.142 253665 DEBUG nova.network.neutron [req-4140bea0-4d04-42b3-b883-7a35eda7b2e4 req-a44f9a39-a183-41ef-8a14-3d2b8e948da0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:16:37 np0005532048 nova_compute[253661]: 2025-11-22 09:16:37.160 253665 DEBUG oslo_concurrency.lockutils [req-4140bea0-4d04-42b3-b883-7a35eda7b2e4 req-a44f9a39-a183-41ef-8a14-3d2b8e948da0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:16:37 np0005532048 nova_compute[253661]: 2025-11-22 09:16:37.250 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Successfully updated port: babfaba0-2c0a-4eb0-adc0-3473b0b80a08 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:16:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 167 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]: {
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:    "0": [
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:        {
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "devices": [
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "/dev/loop3"
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            ],
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "lv_name": "ceph_lv0",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "lv_size": "21470642176",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "name": "ceph_lv0",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "tags": {
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.cluster_name": "ceph",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.crush_device_class": "",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.encrypted": "0",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.osd_id": "0",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.type": "block",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.vdo": "0"
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            },
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "type": "block",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "vg_name": "ceph_vg0"
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:        }
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:    ],
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:    "1": [
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:        {
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "devices": [
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "/dev/loop4"
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            ],
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "lv_name": "ceph_lv1",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "lv_size": "21470642176",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "name": "ceph_lv1",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "tags": {
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.cluster_name": "ceph",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.crush_device_class": "",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.encrypted": "0",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.osd_id": "1",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.type": "block",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.vdo": "0"
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            },
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "type": "block",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "vg_name": "ceph_vg1"
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:        }
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:    ],
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:    "2": [
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:        {
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "devices": [
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "/dev/loop5"
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            ],
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "lv_name": "ceph_lv2",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "lv_size": "21470642176",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "name": "ceph_lv2",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "tags": {
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.cluster_name": "ceph",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.crush_device_class": "",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.encrypted": "0",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.osd_id": "2",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.type": "block",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:                "ceph.vdo": "0"
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            },
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "type": "block",
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:            "vg_name": "ceph_vg2"
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:        }
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]:    ]
Nov 22 04:16:37 np0005532048 crazy_lichterman[303817]: }
Nov 22 04:16:37 np0005532048 systemd[1]: libpod-a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f.scope: Deactivated successfully.
Nov 22 04:16:37 np0005532048 podman[303800]: 2025-11-22 09:16:37.650709258 +0000 UTC m=+0.943743458 container died a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lichterman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 04:16:37 np0005532048 systemd[1]: var-lib-containers-storage-overlay-cd3a2040269b058bb7ea17f91d0a0fb8d095a77311716e5417c1c2eaf5c6cb53-merged.mount: Deactivated successfully.
Nov 22 04:16:37 np0005532048 podman[303800]: 2025-11-22 09:16:37.736953417 +0000 UTC m=+1.029987597 container remove a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lichterman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 04:16:37 np0005532048 systemd[1]: libpod-conmon-a241571ba2d35e06e9842484565487e8733d9754ed456fda909a36ccba909b7f.scope: Deactivated successfully.
Nov 22 04:16:38 np0005532048 nova_compute[253661]: 2025-11-22 09:16:38.223 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Successfully updated port: 3c735a93-ffc0-4525-bd7d-7db35fe17769 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:16:38 np0005532048 nova_compute[253661]: 2025-11-22 09:16:38.234 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:16:38 np0005532048 nova_compute[253661]: 2025-11-22 09:16:38.235 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquired lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:16:38 np0005532048 nova_compute[253661]: 2025-11-22 09:16:38.235 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:16:38 np0005532048 nova_compute[253661]: 2025-11-22 09:16:38.368 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:16:38 np0005532048 nova_compute[253661]: 2025-11-22 09:16:38.373 253665 DEBUG nova.compute.manager [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-changed-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:16:38 np0005532048 nova_compute[253661]: 2025-11-22 09:16:38.373 253665 DEBUG nova.compute.manager [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Refreshing instance network info cache due to event network-changed-babfaba0-2c0a-4eb0-adc0-3473b0b80a08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:16:38 np0005532048 nova_compute[253661]: 2025-11-22 09:16:38.373 253665 DEBUG oslo_concurrency.lockutils [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:16:38 np0005532048 podman[303981]: 2025-11-22 09:16:38.449984482 +0000 UTC m=+0.052005209 container create 44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 04:16:38 np0005532048 systemd[1]: Started libpod-conmon-44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240.scope.
Nov 22 04:16:38 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:16:38 np0005532048 podman[303981]: 2025-11-22 09:16:38.425180143 +0000 UTC m=+0.027200850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:16:38 np0005532048 podman[303981]: 2025-11-22 09:16:38.537712528 +0000 UTC m=+0.139733235 container init 44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 04:16:38 np0005532048 podman[303981]: 2025-11-22 09:16:38.547188091 +0000 UTC m=+0.149208778 container start 44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 04:16:38 np0005532048 crazy_goodall[303997]: 167 167
Nov 22 04:16:38 np0005532048 systemd[1]: libpod-44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240.scope: Deactivated successfully.
Nov 22 04:16:38 np0005532048 podman[303981]: 2025-11-22 09:16:38.550924712 +0000 UTC m=+0.152945399 container attach 44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:16:38 np0005532048 conmon[303997]: conmon 44b425e4b0339ac00bb7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240.scope/container/memory.events
Nov 22 04:16:38 np0005532048 podman[303981]: 2025-11-22 09:16:38.555659019 +0000 UTC m=+0.157679706 container died 44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:16:38 np0005532048 systemd[1]: var-lib-containers-storage-overlay-faa7f143f749beb83b340bbf9c7f195a66dc36f92217024753c11ae824c5f6df-merged.mount: Deactivated successfully.
Nov 22 04:16:38 np0005532048 podman[303981]: 2025-11-22 09:16:38.593858958 +0000 UTC m=+0.195879665 container remove 44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:16:38 np0005532048 systemd[1]: libpod-conmon-44b425e4b0339ac00bb7609ff128f1eda374fce63a167dbf1b2ed7c5740a7240.scope: Deactivated successfully.
Nov 22 04:16:38 np0005532048 podman[304020]: 2025-11-22 09:16:38.772883648 +0000 UTC m=+0.050580164 container create 8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:16:38 np0005532048 systemd[1]: Started libpod-conmon-8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85.scope.
Nov 22 04:16:38 np0005532048 podman[304020]: 2025-11-22 09:16:38.751140563 +0000 UTC m=+0.028837099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:16:38 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:16:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b39fccce47e537399c9671d10bd7edef07c802c61ca8b33672915d39d67ce33a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b39fccce47e537399c9671d10bd7edef07c802c61ca8b33672915d39d67ce33a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b39fccce47e537399c9671d10bd7edef07c802c61ca8b33672915d39d67ce33a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b39fccce47e537399c9671d10bd7edef07c802c61ca8b33672915d39d67ce33a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:38 np0005532048 podman[304020]: 2025-11-22 09:16:38.869679107 +0000 UTC m=+0.147375653 container init 8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:16:38 np0005532048 podman[304020]: 2025-11-22 09:16:38.883535568 +0000 UTC m=+0.161232094 container start 8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Nov 22 04:16:38 np0005532048 podman[304020]: 2025-11-22 09:16:38.88725697 +0000 UTC m=+0.164953516 container attach 8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 04:16:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 22 04:16:39 np0005532048 nova_compute[253661]: 2025-11-22 09:16:39.963 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:39 np0005532048 brave_euler[304037]: {
Nov 22 04:16:39 np0005532048 brave_euler[304037]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:16:39 np0005532048 brave_euler[304037]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:16:39 np0005532048 brave_euler[304037]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:16:39 np0005532048 brave_euler[304037]:        "osd_id": 1,
Nov 22 04:16:39 np0005532048 brave_euler[304037]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:16:39 np0005532048 brave_euler[304037]:        "type": "bluestore"
Nov 22 04:16:39 np0005532048 brave_euler[304037]:    },
Nov 22 04:16:39 np0005532048 brave_euler[304037]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:16:39 np0005532048 brave_euler[304037]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:16:39 np0005532048 brave_euler[304037]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:16:39 np0005532048 brave_euler[304037]:        "osd_id": 0,
Nov 22 04:16:39 np0005532048 brave_euler[304037]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:16:39 np0005532048 brave_euler[304037]:        "type": "bluestore"
Nov 22 04:16:39 np0005532048 brave_euler[304037]:    },
Nov 22 04:16:39 np0005532048 brave_euler[304037]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:16:39 np0005532048 brave_euler[304037]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:16:39 np0005532048 brave_euler[304037]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:16:39 np0005532048 brave_euler[304037]:        "osd_id": 2,
Nov 22 04:16:39 np0005532048 brave_euler[304037]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:16:39 np0005532048 brave_euler[304037]:        "type": "bluestore"
Nov 22 04:16:39 np0005532048 brave_euler[304037]:    }
Nov 22 04:16:39 np0005532048 brave_euler[304037]: }
Nov 22 04:16:40 np0005532048 systemd[1]: libpod-8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85.scope: Deactivated successfully.
Nov 22 04:16:40 np0005532048 podman[304020]: 2025-11-22 09:16:40.028236903 +0000 UTC m=+1.305933419 container died 8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:16:40 np0005532048 systemd[1]: libpod-8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85.scope: Consumed 1.146s CPU time.
Nov 22 04:16:40 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b39fccce47e537399c9671d10bd7edef07c802c61ca8b33672915d39d67ce33a-merged.mount: Deactivated successfully.
Nov 22 04:16:40 np0005532048 podman[304020]: 2025-11-22 09:16:40.091839676 +0000 UTC m=+1.369536192 container remove 8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_euler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:16:40 np0005532048 nova_compute[253661]: 2025-11-22 09:16:40.099 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:40 np0005532048 systemd[1]: libpod-conmon-8d2f9101920472850556b5d6ad9465a1506661e2b8f7de16db54249c97c07b85.scope: Deactivated successfully.
Nov 22 04:16:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:16:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:16:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:16:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:16:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev bb376bef-3335-4d6d-ac13-cf66ff798504 does not exist
Nov 22 04:16:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 72b66b13-f5c7-4861-9de6-832d5d658a93 does not exist
Nov 22 04:16:41 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:16:41 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:16:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 22 04:16:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.570 253665 DEBUG nova.network.neutron [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updating instance_info_cache with network_info: [{"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.598 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Releasing lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.598 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Instance network_info: |[{"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.599 253665 DEBUG oslo_concurrency.lockutils [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.600 253665 DEBUG nova.network.neutron [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Refreshing network info cache for port babfaba0-2c0a-4eb0-adc0-3473b0b80a08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.607 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Start _get_guest_xml network_info=[{"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.614 253665 WARNING nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.621 253665 DEBUG nova.virt.libvirt.host [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.622 253665 DEBUG nova.virt.libvirt.host [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.625 253665 DEBUG nova.virt.libvirt.host [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.626 253665 DEBUG nova.virt.libvirt.host [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.626 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.627 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.627 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.627 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.628 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.628 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.628 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.628 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.628 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.629 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.629 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.629 253665 DEBUG nova.virt.hardware [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:16:41 np0005532048 nova_compute[253661]: 2025-11-22 09:16:41.632 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:16:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:16:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3304591467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.132 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:16:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Nov 22 04:16:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Nov 22 04:16:42 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.169 253665 DEBUG nova.storage.rbd_utils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.176 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.310 253665 DEBUG nova.objects.instance [None req-2f3ee601-d6e8-4ab8-9d12-7d61417feee3 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lazy-loading 'flavor' on Instance uuid b45c203c-7ae1-436b-86d3-bfc0146dd536 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.335 253665 DEBUG oslo_concurrency.lockutils [None req-2f3ee601-d6e8-4ab8-9d12-7d61417feee3 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.336 253665 DEBUG oslo_concurrency.lockutils [None req-2f3ee601-d6e8-4ab8-9d12-7d61417feee3 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquired lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:16:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:16:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/495675620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.669 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.671 253665 DEBUG nova.virt.libvirt.vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:16:32Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.672 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.673 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:74:e4,bridge_name='br-int',has_traffic_filtering=True,id=dc08e15e-7d04-4fac-8489-61a2d7b5a642,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc08e15e-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.674 253665 DEBUG nova.virt.libvirt.vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:16:32Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.675 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.675 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:00:b1,bridge_name='br-int',has_traffic_filtering=True,id=babfaba0-2c0a-4eb0-adc0-3473b0b80a08,network=Network(2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbabfaba0-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.676 253665 DEBUG nova.virt.libvirt.vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:16:32Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.676 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.677 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:17:78,bridge_name='br-int',has_traffic_filtering=True,id=3c735a93-ffc0-4525-bd7d-7db35fe17769,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c735a93-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.678 253665 DEBUG nova.objects.instance [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.693 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  <uuid>4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e</uuid>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  <name>instance-0000002b</name>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersTestMultiNic-server-619809161</nova:name>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:16:41</nova:creationTime>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:        <nova:user uuid="c5ae8af2cc9f40e083473a191ddd445f">tempest-ServersTestMultiNic-1064785551-project-member</nova:user>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:        <nova:project uuid="2d156ca65e214b4aacdf111fd47dc4f6">tempest-ServersTestMultiNic-1064785551</nova:project>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:        <nova:port uuid="dc08e15e-7d04-4fac-8489-61a2d7b5a642">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.172" ipVersion="4"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:        <nova:port uuid="babfaba0-2c0a-4eb0-adc0-3473b0b80a08">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.1.80" ipVersion="4"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:        <nova:port uuid="3c735a93-ffc0-4525-bd7d-7db35fe17769">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.229" ipVersion="4"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <entry name="serial">4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e</entry>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <entry name="uuid">4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e</entry>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk.config">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:fb:74:e4"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <target dev="tapdc08e15e-7d"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:10:00:b1"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <target dev="tapbabfaba0-2c"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:eb:17:78"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <target dev="tap3c735a93-ff"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e/console.log" append="off"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:16:42 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:16:42 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:16:42 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:16:42 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.695 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Preparing to wait for external event network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.696 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.696 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.697 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.697 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Preparing to wait for external event network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.697 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.697 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.697 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.698 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Preparing to wait for external event network-vif-plugged-3c735a93-ffc0-4525-bd7d-7db35fe17769 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.698 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.698 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.698 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.699 253665 DEBUG nova.virt.libvirt.vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-
1064785551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:16:32Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.699 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.700 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:74:e4,bridge_name='br-int',has_traffic_filtering=True,id=dc08e15e-7d04-4fac-8489-61a2d7b5a642,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc08e15e-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.700 253665 DEBUG os_vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:74:e4,bridge_name='br-int',has_traffic_filtering=True,id=dc08e15e-7d04-4fac-8489-61a2d7b5a642,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc08e15e-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.701 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.702 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.702 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.707 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.707 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc08e15e-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.707 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc08e15e-7d, col_values=(('external_ids', {'iface-id': 'dc08e15e-7d04-4fac-8489-61a2d7b5a642', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:74:e4', 'vm-uuid': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:42 np0005532048 NetworkManager[48920]: <info>  [1763803002.7110] manager: (tapdc08e15e-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.720 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.722 253665 INFO os_vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:74:e4,bridge_name='br-int',has_traffic_filtering=True,id=dc08e15e-7d04-4fac-8489-61a2d7b5a642,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc08e15e-7d')#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.723 253665 DEBUG nova.virt.libvirt.vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-
1064785551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:16:32Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.723 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.724 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:00:b1,bridge_name='br-int',has_traffic_filtering=True,id=babfaba0-2c0a-4eb0-adc0-3473b0b80a08,network=Network(2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbabfaba0-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.724 253665 DEBUG os_vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:00:b1,bridge_name='br-int',has_traffic_filtering=True,id=babfaba0-2c0a-4eb0-adc0-3473b0b80a08,network=Network(2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbabfaba0-2c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.725 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.725 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.725 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.728 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.729 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbabfaba0-2c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.729 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbabfaba0-2c, col_values=(('external_ids', {'iface-id': 'babfaba0-2c0a-4eb0-adc0-3473b0b80a08', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:00:b1', 'vm-uuid': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.731 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:42 np0005532048 NetworkManager[48920]: <info>  [1763803002.7321] manager: (tapbabfaba0-2c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.733 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.741 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.743 253665 INFO os_vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:00:b1,bridge_name='br-int',has_traffic_filtering=True,id=babfaba0-2c0a-4eb0-adc0-3473b0b80a08,network=Network(2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbabfaba0-2c')#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.745 253665 DEBUG nova.virt.libvirt.vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-
1064785551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:16:32Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.745 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.746 253665 DEBUG nova.network.os_vif_util [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:17:78,bridge_name='br-int',has_traffic_filtering=True,id=3c735a93-ffc0-4525-bd7d-7db35fe17769,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c735a93-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.746 253665 DEBUG os_vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:17:78,bridge_name='br-int',has_traffic_filtering=True,id=3c735a93-ffc0-4525-bd7d-7db35fe17769,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c735a93-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.746 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.747 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.747 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.749 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.749 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c735a93-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.749 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c735a93-ff, col_values=(('external_ids', {'iface-id': '3c735a93-ffc0-4525-bd7d-7db35fe17769', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:17:78', 'vm-uuid': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.751 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:42 np0005532048 NetworkManager[48920]: <info>  [1763803002.7521] manager: (tap3c735a93-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.753 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.762 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.763 253665 INFO os_vif [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:17:78,bridge_name='br-int',has_traffic_filtering=True,id=3c735a93-ffc0-4525-bd7d-7db35fe17769,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c735a93-ff')#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.819 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.820 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.820 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No VIF found with MAC fa:16:3e:fb:74:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.820 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No VIF found with MAC fa:16:3e:10:00:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.820 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No VIF found with MAC fa:16:3e:eb:17:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.821 253665 INFO nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Using config drive#033[00m
Nov 22 04:16:42 np0005532048 nova_compute[253661]: 2025-11-22 09:16:42.844 253665 DEBUG nova.storage.rbd_utils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:16:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Nov 22 04:16:43 np0005532048 nova_compute[253661]: 2025-11-22 09:16:43.507 253665 INFO nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Creating config drive at /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e/disk.config#033[00m
Nov 22 04:16:43 np0005532048 nova_compute[253661]: 2025-11-22 09:16:43.514 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpang54t2k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:16:43 np0005532048 nova_compute[253661]: 2025-11-22 09:16:43.667 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpang54t2k" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:16:43 np0005532048 nova_compute[253661]: 2025-11-22 09:16:43.698 253665 DEBUG nova.storage.rbd_utils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:16:43 np0005532048 nova_compute[253661]: 2025-11-22 09:16:43.702 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e/disk.config 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:16:43 np0005532048 nova_compute[253661]: 2025-11-22 09:16:43.913 253665 DEBUG oslo_concurrency.processutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e/disk.config 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:16:43 np0005532048 nova_compute[253661]: 2025-11-22 09:16:43.914 253665 INFO nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Deleting local config drive /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e/disk.config because it was imported into RBD.#033[00m
Nov 22 04:16:43 np0005532048 NetworkManager[48920]: <info>  [1763803003.9870] manager: (tapdc08e15e-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/158)
Nov 22 04:16:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:43Z|00357|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 04:16:43 np0005532048 kernel: tapdc08e15e-7d: entered promiscuous mode
Nov 22 04:16:43 np0005532048 nova_compute[253661]: 2025-11-22 09:16:43.996 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:43Z|00358|binding|INFO|Claiming lport dc08e15e-7d04-4fac-8489-61a2d7b5a642 for this chassis.
Nov 22 04:16:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:43Z|00359|binding|INFO|dc08e15e-7d04-4fac-8489-61a2d7b5a642: Claiming fa:16:3e:fb:74:e4 10.100.0.172
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.005 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:74:e4 10.100.0.172'], port_security=['fa:16:3e:fb:74:e4 10.100.0.172'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.172/24', 'neutron:device_id': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d4cba21-42dc-4923-abab-98063b71666c, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=dc08e15e-7d04-4fac-8489-61a2d7b5a642) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.007 162862 INFO neutron.agent.ovn.metadata.agent [-] Port dc08e15e-7d04-4fac-8489-61a2d7b5a642 in datapath c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 bound to our chassis#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.009 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6#033[00m
Nov 22 04:16:44 np0005532048 NetworkManager[48920]: <info>  [1763803004.0157] manager: (tapbabfaba0-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/159)
Nov 22 04:16:44 np0005532048 systemd-udevd[304278]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:16:44 np0005532048 systemd-udevd[304280]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.028 253665 DEBUG nova.network.neutron [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updated VIF entry in instance network info cache for port babfaba0-2c0a-4eb0-adc0-3473b0b80a08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.029 253665 DEBUG nova.network.neutron [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updating instance_info_cache with network_info: [{"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.027 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b5900503-5b9f-43ed-bc27-2750243ef208]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.028 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc43f3b8e-c1 in ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.030 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc43f3b8e-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.031 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ccb47d9-8ad9-43cd-9add-4699b55b319b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.032 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f6424717-259d-4302-a1ba-c9a09b784b8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:44 np0005532048 NetworkManager[48920]: <info>  [1763803004.0372] manager: (tap3c735a93-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/160)
Nov 22 04:16:44 np0005532048 systemd-udevd[304286]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.039 253665 DEBUG nova.network.neutron [None req-2f3ee601-d6e8-4ab8-9d12-7d61417feee3 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.043 253665 DEBUG oslo_concurrency.lockutils [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.043 253665 DEBUG nova.compute.manager [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-changed-3c735a93-ffc0-4525-bd7d-7db35fe17769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.043 253665 DEBUG nova.compute.manager [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Refreshing instance network info cache due to event network-changed-3c735a93-ffc0-4525-bd7d-7db35fe17769. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.043 253665 DEBUG oslo_concurrency.lockutils [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.044 253665 DEBUG oslo_concurrency.lockutils [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.044 253665 DEBUG nova.network.neutron [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Refreshing network info cache for port 3c735a93-ffc0-4525-bd7d-7db35fe17769 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:16:44 np0005532048 kernel: tapbabfaba0-2c: entered promiscuous mode
Nov 22 04:16:44 np0005532048 kernel: tap3c735a93-ff: entered promiscuous mode
Nov 22 04:16:44 np0005532048 NetworkManager[48920]: <info>  [1763803004.0481] device (tapdc08e15e-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:16:44 np0005532048 NetworkManager[48920]: <info>  [1763803004.0544] device (tapdc08e15e-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:16:44 np0005532048 NetworkManager[48920]: <info>  [1763803004.0552] device (tapbabfaba0-2c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:16:44 np0005532048 NetworkManager[48920]: <info>  [1763803004.0585] device (tapbabfaba0-2c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:16:44 np0005532048 NetworkManager[48920]: <info>  [1763803004.0599] device (tap3c735a93-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.054 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[df8b9d52-41c9-44d9-b9d2-6c899ef1c922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.058 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:44Z|00360|binding|INFO|Claiming lport babfaba0-2c0a-4eb0-adc0-3473b0b80a08 for this chassis.
Nov 22 04:16:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:44Z|00361|binding|INFO|babfaba0-2c0a-4eb0-adc0-3473b0b80a08: Claiming fa:16:3e:10:00:b1 10.100.1.80
Nov 22 04:16:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:44Z|00362|binding|INFO|Claiming lport 3c735a93-ffc0-4525-bd7d-7db35fe17769 for this chassis.
Nov 22 04:16:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:44Z|00363|binding|INFO|3c735a93-ffc0-4525-bd7d-7db35fe17769: Claiming fa:16:3e:eb:17:78 10.100.0.229
Nov 22 04:16:44 np0005532048 NetworkManager[48920]: <info>  [1763803004.0639] device (tap3c735a93-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.067 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.068 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:17:78 10.100.0.229'], port_security=['fa:16:3e:eb:17:78 10.100.0.229'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.229/24', 'neutron:device_id': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d4cba21-42dc-4923-abab-98063b71666c, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=3c735a93-ffc0-4525-bd7d-7db35fe17769) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.069 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:00:b1 10.100.1.80'], port_security=['fa:16:3e:10:00:b1 10.100.1.80'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.80/24', 'neutron:device_id': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=529c2359-2293-4f9c-a99f-590f8fe2f28e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=babfaba0-2c0a-4eb0-adc0-3473b0b80a08) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.087 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dffaeb43-bb68-4abe-8d93-6402614cc386]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:44 np0005532048 systemd-machined[215941]: New machine qemu-48-instance-0000002b.
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.098 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:44Z|00364|binding|INFO|Releasing lport 73744eaa-7d97-4c21-9fb3-4378f10417f6 from this chassis (sb_readonly=0)
Nov 22 04:16:44 np0005532048 systemd[1]: Started Virtual Machine qemu-48-instance-0000002b.
Nov 22 04:16:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:44Z|00365|binding|INFO|Setting lport dc08e15e-7d04-4fac-8489-61a2d7b5a642 ovn-installed in OVS
Nov 22 04:16:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:44Z|00366|binding|INFO|Setting lport dc08e15e-7d04-4fac-8489-61a2d7b5a642 up in Southbound
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.110 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.129 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[130c8af9-bf90-4867-b343-8d154af8cd32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:44 np0005532048 NetworkManager[48920]: <info>  [1763803004.1384] manager: (tapc43f3b8e-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/161)
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.140 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7a2e6895-250d-4e59-b5eb-a3ff87df64a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.148 253665 DEBUG nova.compute.manager [req-a23f1aab-bc91-44b6-aa04-7436dba2e0c8 req-bc78ceb1-7c6a-41a3-8796-8a7d5e57eacf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-changed-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.148 253665 DEBUG nova.compute.manager [req-a23f1aab-bc91-44b6-aa04-7436dba2e0c8 req-bc78ceb1-7c6a-41a3-8796-8a7d5e57eacf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Refreshing instance network info cache due to event network-changed-91a0d7d2-517a-4636-a7fd-86f4d72aed04. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.149 253665 DEBUG oslo_concurrency.lockutils [req-a23f1aab-bc91-44b6-aa04-7436dba2e0c8 req-bc78ceb1-7c6a-41a3-8796-8a7d5e57eacf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:16:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:44Z|00367|binding|INFO|Setting lport babfaba0-2c0a-4eb0-adc0-3473b0b80a08 ovn-installed in OVS
Nov 22 04:16:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:44Z|00368|binding|INFO|Setting lport babfaba0-2c0a-4eb0-adc0-3473b0b80a08 up in Southbound
Nov 22 04:16:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:44Z|00369|binding|INFO|Setting lport 3c735a93-ffc0-4525-bd7d-7db35fe17769 ovn-installed in OVS
Nov 22 04:16:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:44Z|00370|binding|INFO|Setting lport 3c735a93-ffc0-4525-bd7d-7db35fe17769 up in Southbound
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.164 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.182 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[52c476f4-8eb3-4462-ba22-488c4d8b65a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.184 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[20616cdd-e0bf-4f29-870a-a8f726565e6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:44 np0005532048 NetworkManager[48920]: <info>  [1763803004.2095] device (tapc43f3b8e-c0): carrier: link connected
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.216 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6f797977-b96d-4e31-a5a7-b6bef105cbbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.236 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[00434859-482f-40dd-a61c-b8c8c97878f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc43f3b8e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:07:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585494, 'reachable_time': 25498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304319, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.254 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[38bbf224-611b-43f2-a4a0-a5ec28304124]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5e:70a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 585494, 'tstamp': 585494}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304320, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.273 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ce923e1e-fcc6-4060-954a-e96fd1418eea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc43f3b8e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:07:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585494, 'reachable_time': 25498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304321, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.308 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[58aa705c-0d18-4416-8720-176db68a8e0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.373 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d20514f7-0b42-464b-98b5-a704f5d36782]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.375 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc43f3b8e-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.375 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.376 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc43f3b8e-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.378 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:44 np0005532048 NetworkManager[48920]: <info>  [1763803004.3790] manager: (tapc43f3b8e-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Nov 22 04:16:44 np0005532048 kernel: tapc43f3b8e-c0: entered promiscuous mode
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.383 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.391 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc43f3b8e-c0, col_values=(('external_ids', {'iface-id': '61d4e08f-2ccc-4601-a2e7-6cb33cc906ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:44Z|00371|binding|INFO|Releasing lport 61d4e08f-2ccc-4601-a2e7-6cb33cc906ee from this chassis (sb_readonly=0)
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.393 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.408 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.408 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.409 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6b76b669-f70f-4acd-9df1-95a2f3e54b04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.410 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6.pid.haproxy
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.410 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'env', 'PROCESS_TAG=haproxy-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:16:44 np0005532048 podman[304396]: 2025-11-22 09:16:44.779152922 +0000 UTC m=+0.052752128 container create ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.793 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803004.7922537, 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.793 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] VM Started (Lifecycle Event)#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.810 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.816 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803004.7930264, 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.816 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:16:44 np0005532048 systemd[1]: Started libpod-conmon-ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e.scope.
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.832 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.836 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:16:44 np0005532048 podman[304396]: 2025-11-22 09:16:44.75021716 +0000 UTC m=+0.023816386 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.850 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:16:44 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:16:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c249f8811210d708abd429b642e58a08ae9985262779c5dbd5ea97093529a7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:44 np0005532048 podman[304396]: 2025-11-22 09:16:44.876973256 +0000 UTC m=+0.150572502 container init ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 04:16:44 np0005532048 podman[304396]: 2025-11-22 09:16:44.884356637 +0000 UTC m=+0.157955843 container start ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:16:44 np0005532048 neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6[304412]: [NOTICE]   (304416) : New worker (304418) forked
Nov 22 04:16:44 np0005532048 neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6[304412]: [NOTICE]   (304416) : Loading success.
Nov 22 04:16:44 np0005532048 nova_compute[253661]: 2025-11-22 09:16:44.969 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.984 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 3c735a93-ffc0-4525-bd7d-7db35fe17769 in datapath c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 unbound from our chassis#033[00m
Nov 22 04:16:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:44.986 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.005 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a3674444-d4db-4de2-af71-3adfd63114bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.042 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd5a3c3-3650-496f-94e6-cfe823519734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.047 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[abd8861a-d31d-4879-ba43-b2cb18aa47ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.088 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fa3efefb-80dc-483a-8952-d41fd27c3fcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.109 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3c45e715-1d6a-4aac-b867-1ed0d59d0fc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc43f3b8e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:07:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 6, 'rx_bytes': 90, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 6, 'rx_bytes': 90, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585494, 'reachable_time': 25498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304432, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.129 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[726a18a0-3d61-4e4e-8221-ac07e913e26a]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tapc43f3b8e-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 585507, 'tstamp': 585507}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304433, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc43f3b8e-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 585510, 'tstamp': 585510}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304433, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.131 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc43f3b8e-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:45 np0005532048 nova_compute[253661]: 2025-11-22 09:16:45.133 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.136 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc43f3b8e-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.137 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.137 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc43f3b8e-c0, col_values=(('external_ids', {'iface-id': '61d4e08f-2ccc-4601-a2e7-6cb33cc906ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.137 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.138 162862 INFO neutron.agent.ovn.metadata.agent [-] Port babfaba0-2c0a-4eb0-adc0-3473b0b80a08 in datapath 2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c unbound from our chassis#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.139 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.153 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[51dc0a0f-f4a3-421e-b0cf-bb3735cc8ca6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.154 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2d8f3ef5-81 in ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.156 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2d8f3ef5-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.156 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dc767047-bb5f-4837-83b9-fc4b8917631f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.157 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc12a0e1-d603-4bf6-9e68-39d38c3ae17b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.170 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[fb038518-da90-46af-a1d2-b5b6fd612fe1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.198 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9f565f71-6808-4b05-9b8e-02f1c9200c3b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.237 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[94c91ddd-d78b-4bb0-a5dd-47c8e95192c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.243 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[585414e4-a5dd-413d-9643-f0ae773b3d9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 NetworkManager[48920]: <info>  [1763803005.2445] manager: (tap2d8f3ef5-80): new Veth device (/org/freedesktop/NetworkManager/Devices/163)
Nov 22 04:16:45 np0005532048 systemd-udevd[304309]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.284 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[03ec2fb7-bddf-4c7f-bbbb-773bbac66dc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.288 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[30d434a1-e23a-491a-92e8-87d56bc84329]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 NetworkManager[48920]: <info>  [1763803005.3141] device (tap2d8f3ef5-80): carrier: link connected
Nov 22 04:16:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 884 KiB/s wr, 34 op/s
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.322 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[89dcd2f8-1884-4727-9d21-05bff8f7eb40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.344 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cd16f70c-4699-4099-bd9f-2365a4b0c238]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d8f3ef5-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:dd:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585605, 'reachable_time': 24107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304444, 'error': None, 'target': 'ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.365 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be0e5068-b26e-426d-a641-92db3c5c4354]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:ddc7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 585605, 'tstamp': 585605}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304445, 'error': None, 'target': 'ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.385 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3e663a-3fdc-4f1e-94fe-bf2b9dcccc71]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d8f3ef5-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:dd:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585605, 'reachable_time': 24107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304446, 'error': None, 'target': 'ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.424 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e72f5e-ae05-4218-a3cb-4b0552e31263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.504 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0f261481-e6ba-4b92-ad15-b6c7c48d5110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.505 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d8f3ef5-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.505 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.506 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d8f3ef5-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:45 np0005532048 nova_compute[253661]: 2025-11-22 09:16:45.508 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:45 np0005532048 NetworkManager[48920]: <info>  [1763803005.5091] manager: (tap2d8f3ef5-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/164)
Nov 22 04:16:45 np0005532048 kernel: tap2d8f3ef5-80: entered promiscuous mode
Nov 22 04:16:45 np0005532048 nova_compute[253661]: 2025-11-22 09:16:45.510 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.511 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2d8f3ef5-80, col_values=(('external_ids', {'iface-id': '4afb8ac6-882e-4ceb-b8f8-84b902cc9bc9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:45 np0005532048 nova_compute[253661]: 2025-11-22 09:16:45.512 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:45 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:45Z|00372|binding|INFO|Releasing lport 4afb8ac6-882e-4ceb-b8f8-84b902cc9bc9 from this chassis (sb_readonly=0)
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.536 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:16:45 np0005532048 nova_compute[253661]: 2025-11-22 09:16:45.537 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.537 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[21c855dc-280a-4392-bc72-b95e9b22997f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.537 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c.pid.haproxy
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:16:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:45.538 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c', 'env', 'PROCESS_TAG=haproxy-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:16:45 np0005532048 podman[304478]: 2025-11-22 09:16:45.954187981 +0000 UTC m=+0.063942133 container create b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 04:16:45 np0005532048 systemd[1]: Started libpod-conmon-b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f.scope.
Nov 22 04:16:46 np0005532048 podman[304478]: 2025-11-22 09:16:45.914479216 +0000 UTC m=+0.024233388 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:16:46 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:16:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/558a2bf28aa725adc01b2257b2fa48656e975a746fc020a7ec5246f4f99d866b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:46 np0005532048 podman[304478]: 2025-11-22 09:16:46.040660127 +0000 UTC m=+0.150414279 container init b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:16:46 np0005532048 podman[304478]: 2025-11-22 09:16:46.048261863 +0000 UTC m=+0.158016015 container start b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 22 04:16:46 np0005532048 neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c[304493]: [NOTICE]   (304497) : New worker (304499) forked
Nov 22 04:16:46 np0005532048 neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c[304493]: [NOTICE]   (304497) : Loading success.
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.315 253665 DEBUG nova.compute.manager [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.317 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.317 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.318 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.318 253665 DEBUG nova.compute.manager [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Processing event network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.318 253665 DEBUG nova.compute.manager [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.318 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.319 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.319 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.319 253665 DEBUG nova.compute.manager [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] No event matching network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 in dict_keys([('network-vif-plugged', 'babfaba0-2c0a-4eb0-adc0-3473b0b80a08'), ('network-vif-plugged', '3c735a93-ffc0-4525-bd7d-7db35fe17769')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.319 253665 WARNING nova.compute.manager [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received unexpected event network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.320 253665 DEBUG nova.compute.manager [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.320 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.320 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.320 253665 DEBUG oslo_concurrency.lockutils [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.321 253665 DEBUG nova.compute.manager [req-27a0927e-5369-4180-b801-47a161259356 req-fea52e0d-7124-44ce-89c9-1654ea4e79ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Processing event network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.380 253665 DEBUG nova.network.neutron [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updated VIF entry in instance network info cache for port 3c735a93-ffc0-4525-bd7d-7db35fe17769. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.381 253665 DEBUG nova.network.neutron [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updating instance_info_cache with network_info: [{"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.402 253665 DEBUG oslo_concurrency.lockutils [req-de9bfa07-6bbf-4c79-93da-d3f7669362ef req-e57973df-1b5f-4d1c-aa01-5e01b594f385 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:16:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.604 253665 DEBUG nova.network.neutron [None req-2f3ee601-d6e8-4ab8-9d12-7d61417feee3 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.625 253665 DEBUG oslo_concurrency.lockutils [None req-2f3ee601-d6e8-4ab8-9d12-7d61417feee3 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Releasing lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.625 253665 DEBUG nova.compute.manager [None req-2f3ee601-d6e8-4ab8-9d12-7d61417feee3 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.626 253665 DEBUG nova.compute.manager [None req-2f3ee601-d6e8-4ab8-9d12-7d61417feee3 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] network_info to inject: |[{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.629 253665 DEBUG oslo_concurrency.lockutils [req-a23f1aab-bc91-44b6-aa04-7436dba2e0c8 req-bc78ceb1-7c6a-41a3-8796-8a7d5e57eacf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:16:46 np0005532048 nova_compute[253661]: 2025-11-22 09:16:46.629 253665 DEBUG nova.network.neutron [req-a23f1aab-bc91-44b6-aa04-7436dba2e0c8 req-bc78ceb1-7c6a-41a3-8796-8a7d5e57eacf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Refreshing network info cache for port 91a0d7d2-517a-4636-a7fd-86f4d72aed04 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:16:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 3.6 KiB/s wr, 31 op/s
Nov 22 04:16:47 np0005532048 nova_compute[253661]: 2025-11-22 09:16:47.709 253665 DEBUG nova.objects.instance [None req-9c98fa34-d63f-447d-94fb-a14c36e90255 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lazy-loading 'flavor' on Instance uuid b45c203c-7ae1-436b-86d3-bfc0146dd536 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:16:47 np0005532048 nova_compute[253661]: 2025-11-22 09:16:47.728 253665 DEBUG oslo_concurrency.lockutils [None req-9c98fa34-d63f-447d-94fb-a14c36e90255 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:16:47 np0005532048 nova_compute[253661]: 2025-11-22 09:16:47.751 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.163 253665 DEBUG nova.network.neutron [req-a23f1aab-bc91-44b6-aa04-7436dba2e0c8 req-bc78ceb1-7c6a-41a3-8796-8a7d5e57eacf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updated VIF entry in instance network info cache for port 91a0d7d2-517a-4636-a7fd-86f4d72aed04. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.164 253665 DEBUG nova.network.neutron [req-a23f1aab-bc91-44b6-aa04-7436dba2e0c8 req-bc78ceb1-7c6a-41a3-8796-8a7d5e57eacf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.176 253665 DEBUG oslo_concurrency.lockutils [req-a23f1aab-bc91-44b6-aa04-7436dba2e0c8 req-bc78ceb1-7c6a-41a3-8796-8a7d5e57eacf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.177 253665 DEBUG oslo_concurrency.lockutils [None req-9c98fa34-d63f-447d-94fb-a14c36e90255 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquired lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.461 253665 DEBUG nova.compute.manager [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.461 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.462 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.463 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.463 253665 DEBUG nova.compute.manager [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] No event matching network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 in dict_keys([('network-vif-plugged', '3c735a93-ffc0-4525-bd7d-7db35fe17769')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.463 253665 WARNING nova.compute.manager [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received unexpected event network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.464 253665 DEBUG nova.compute.manager [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-plugged-3c735a93-ffc0-4525-bd7d-7db35fe17769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.464 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.464 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.465 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.465 253665 DEBUG nova.compute.manager [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Processing event network-vif-plugged-3c735a93-ffc0-4525-bd7d-7db35fe17769 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.465 253665 DEBUG nova.compute.manager [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-plugged-3c735a93-ffc0-4525-bd7d-7db35fe17769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.466 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.466 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.466 253665 DEBUG oslo_concurrency.lockutils [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.466 253665 DEBUG nova.compute.manager [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] No waiting events found dispatching network-vif-plugged-3c735a93-ffc0-4525-bd7d-7db35fe17769 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.467 253665 WARNING nova.compute.manager [req-53b7d3f5-5ad3-4772-a07c-9d1cd6d70656 req-2c973dc8-972e-4d4a-85ef-6dd11dfe4ac5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received unexpected event network-vif-plugged-3c735a93-ffc0-4525-bd7d-7db35fe17769 for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.468 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.474 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803008.4741724, 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.474 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.477 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.480 253665 INFO nova.virt.libvirt.driver [-] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Instance spawned successfully.#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.480 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.506 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.512 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.512 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.513 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.514 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.514 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.515 253665 DEBUG nova.virt.libvirt.driver [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.521 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.557 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.589 253665 INFO nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Took 16.07 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.590 253665 DEBUG nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.668 253665 INFO nova.compute.manager [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Took 17.04 seconds to build instance.#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.685 253665 DEBUG oslo_concurrency.lockutils [None req-c7dda407-9503-4b1a-a1fb-9b748d100ef7 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:48 np0005532048 nova_compute[253661]: 2025-11-22 09:16:48.952 253665 DEBUG nova.network.neutron [None req-9c98fa34-d63f-447d-94fb-a14c36e90255 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:16:49 np0005532048 nova_compute[253661]: 2025-11-22 09:16:49.041 253665 DEBUG nova.compute.manager [req-392dd015-0619-4538-b082-3d11365c95d7 req-8ebb389c-ca6f-4bd7-ae7a-b17ae2398869 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-changed-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:49 np0005532048 nova_compute[253661]: 2025-11-22 09:16:49.041 253665 DEBUG nova.compute.manager [req-392dd015-0619-4538-b082-3d11365c95d7 req-8ebb389c-ca6f-4bd7-ae7a-b17ae2398869 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Refreshing instance network info cache due to event network-changed-91a0d7d2-517a-4636-a7fd-86f4d72aed04. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:16:49 np0005532048 nova_compute[253661]: 2025-11-22 09:16:49.042 253665 DEBUG oslo_concurrency.lockutils [req-392dd015-0619-4538-b082-3d11365c95d7 req-8ebb389c-ca6f-4bd7-ae7a-b17ae2398869 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:16:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:49.054 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:16:49 np0005532048 nova_compute[253661]: 2025-11-22 09:16:49.056 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:49.056 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:16:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 18 KiB/s wr, 37 op/s
Nov 22 04:16:49 np0005532048 nova_compute[253661]: 2025-11-22 09:16:49.839 253665 DEBUG nova.network.neutron [None req-9c98fa34-d63f-447d-94fb-a14c36e90255 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:16:49 np0005532048 nova_compute[253661]: 2025-11-22 09:16:49.853 253665 DEBUG oslo_concurrency.lockutils [None req-9c98fa34-d63f-447d-94fb-a14c36e90255 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Releasing lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:16:49 np0005532048 nova_compute[253661]: 2025-11-22 09:16:49.854 253665 DEBUG nova.compute.manager [None req-9c98fa34-d63f-447d-94fb-a14c36e90255 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 22 04:16:49 np0005532048 nova_compute[253661]: 2025-11-22 09:16:49.854 253665 DEBUG nova.compute.manager [None req-9c98fa34-d63f-447d-94fb-a14c36e90255 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] network_info to inject: |[{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 22 04:16:49 np0005532048 nova_compute[253661]: 2025-11-22 09:16:49.860 253665 DEBUG oslo_concurrency.lockutils [req-392dd015-0619-4538-b082-3d11365c95d7 req-8ebb389c-ca6f-4bd7-ae7a-b17ae2398869 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:16:49 np0005532048 nova_compute[253661]: 2025-11-22 09:16:49.861 253665 DEBUG nova.network.neutron [req-392dd015-0619-4538-b082-3d11365c95d7 req-8ebb389c-ca6f-4bd7-ae7a-b17ae2398869 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Refreshing network info cache for port 91a0d7d2-517a-4636-a7fd-86f4d72aed04 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:16:49 np0005532048 nova_compute[253661]: 2025-11-22 09:16:49.969 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:49 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.071 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "b45c203c-7ae1-436b-86d3-bfc0146dd536" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.071 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.072 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.072 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.072 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.073 253665 INFO nova.compute.manager [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Terminating instance#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.075 253665 DEBUG nova.compute.manager [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:16:50 np0005532048 kernel: tap91a0d7d2-51 (unregistering): left promiscuous mode
Nov 22 04:16:50 np0005532048 NetworkManager[48920]: <info>  [1763803010.1424] device (tap91a0d7d2-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.148 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:50Z|00373|binding|INFO|Releasing lport 91a0d7d2-517a-4636-a7fd-86f4d72aed04 from this chassis (sb_readonly=0)
Nov 22 04:16:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:50Z|00374|binding|INFO|Setting lport 91a0d7d2-517a-4636-a7fd-86f4d72aed04 down in Southbound
Nov 22 04:16:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:50Z|00375|binding|INFO|Removing iface tap91a0d7d2-51 ovn-installed in OVS
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.151 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.158 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:22:b3 10.100.0.5'], port_security=['fa:16:3e:d8:22:b3 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b45c203c-7ae1-436b-86d3-bfc0146dd536', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2f74a0d8c2374c07a9c9cd48b42318c3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7459b4dc-5141-4001-a5b6-0d7256031901', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba63bb0-30f0-4e31-af74-7247ce34941d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=91a0d7d2-517a-4636-a7fd-86f4d72aed04) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.160 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 91a0d7d2-517a-4636-a7fd-86f4d72aed04 in datapath a8ceec0c-2cf6-459a-a4d7-aaf770041b6c unbound from our chassis#033[00m
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.163 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8ceec0c-2cf6-459a-a4d7-aaf770041b6c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.168 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[203c6820-1983-41d6-aa51-796e35681b56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.170 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c namespace which is not needed anymore#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.176 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:50 np0005532048 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Nov 22 04:16:50 np0005532048 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Consumed 15.808s CPU time.
Nov 22 04:16:50 np0005532048 systemd-machined[215941]: Machine qemu-47-instance-0000002a terminated.
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.321 253665 INFO nova.virt.libvirt.driver [-] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Instance destroyed successfully.#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.322 253665 DEBUG nova.objects.instance [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lazy-loading 'resources' on Instance uuid b45c203c-7ae1-436b-86d3-bfc0146dd536 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.334 253665 DEBUG nova.virt.libvirt.vif [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1942299149',display_name='tempest-AttachInterfacesUnderV243Test-server-1942299149',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1942299149',id=42,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPJjKzxQ6a+OuJML0HHQQYvCuHT4o36Pe0HTJXEDf/t0kK24QNwKu6PCguH+C6XVYn+ibPKaOztSJwRFEDsoyxOxItcOZetU3VENvv82U9z5y/gmG/qHovd9IPqkeCrJiA==',key_name='tempest-keypair-1712592069',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:16:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2f74a0d8c2374c07a9c9cd48b42318c3',ramdisk_id='',reservation_id='r-2g9ce9k3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-776663851',owner_user_name='tempest-AttachInterfacesUnderV243Test-776663851-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:16:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7b394acfc2f44ed180b65249224f2788',uuid=b45c203c-7ae1-436b-86d3-bfc0146dd536,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.334 253665 DEBUG nova.network.os_vif_util [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Converting VIF {"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.336 253665 DEBUG nova.network.os_vif_util [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:22:b3,bridge_name='br-int',has_traffic_filtering=True,id=91a0d7d2-517a-4636-a7fd-86f4d72aed04,network=Network(a8ceec0c-2cf6-459a-a4d7-aaf770041b6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91a0d7d2-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.337 253665 DEBUG os_vif [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:22:b3,bridge_name='br-int',has_traffic_filtering=True,id=91a0d7d2-517a-4636-a7fd-86f4d72aed04,network=Network(a8ceec0c-2cf6-459a-a4d7-aaf770041b6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91a0d7d2-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:16:50 np0005532048 neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c[302796]: [NOTICE]   (302802) : haproxy version is 2.8.14-c23fe91
Nov 22 04:16:50 np0005532048 neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c[302796]: [NOTICE]   (302802) : path to executable is /usr/sbin/haproxy
Nov 22 04:16:50 np0005532048 neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c[302796]: [WARNING]  (302802) : Exiting Master process...
Nov 22 04:16:50 np0005532048 neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c[302796]: [ALERT]    (302802) : Current worker (302804) exited with code 143 (Terminated)
Nov 22 04:16:50 np0005532048 neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c[302796]: [WARNING]  (302802) : All workers exited. Exiting... (0)
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.341 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.341 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91a0d7d2-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.343 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:50 np0005532048 systemd[1]: libpod-7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557.scope: Deactivated successfully.
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.345 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.346 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.349 253665 INFO os_vif [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:22:b3,bridge_name='br-int',has_traffic_filtering=True,id=91a0d7d2-517a-4636-a7fd-86f4d72aed04,network=Network(a8ceec0c-2cf6-459a-a4d7-aaf770041b6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91a0d7d2-51')#033[00m
Nov 22 04:16:50 np0005532048 podman[304532]: 2025-11-22 09:16:50.351332685 +0000 UTC m=+0.061847701 container died 7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 04:16:50 np0005532048 systemd[1]: var-lib-containers-storage-overlay-014918619f881c93f9b6124aa0acf97a639f1327a1c3280fd54ede139d565034-merged.mount: Deactivated successfully.
Nov 22 04:16:50 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557-userdata-shm.mount: Deactivated successfully.
Nov 22 04:16:50 np0005532048 podman[304532]: 2025-11-22 09:16:50.427547088 +0000 UTC m=+0.138062104 container cleanup 7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 04:16:50 np0005532048 systemd[1]: libpod-conmon-7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557.scope: Deactivated successfully.
Nov 22 04:16:50 np0005532048 podman[304590]: 2025-11-22 09:16:50.513776708 +0000 UTC m=+0.059803141 container remove 7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.520 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eeed5dea-4c6d-4c76-aa4e-8f4803dbc78f]: (4, ('Sat Nov 22 09:16:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c (7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557)\n7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557\nSat Nov 22 09:16:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c (7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557)\n7bd0afb8ec3fe3b14e73034a9f72a149aee4736c706701d3e5263b2bfbba2557\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.523 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6e174701-8754-4ae3-a903-7815c82b4940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.525 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8ceec0c-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.527 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:50 np0005532048 kernel: tapa8ceec0c-20: left promiscuous mode
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.546 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.550 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8c59bbe6-bf96-4f06-a6f7-953d30021faf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.568 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c2856f4d-323d-4e29-a8a2-6b8670e95627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.570 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb18198-947d-485c-bb72-5277502ee062]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.598 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f5987fb4-2cc1-474f-9b39-fde2078dcb7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581526, 'reachable_time': 31766, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304606, 'error': None, 'target': 'ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:50 np0005532048 systemd[1]: run-netns-ovnmeta\x2da8ceec0c\x2d2cf6\x2d459a\x2da4d7\x2daaf770041b6c.mount: Deactivated successfully.
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.607 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.608 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbea081-c48e-4491-a4b1-11165c6b00b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.669 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.858 253665 INFO nova.virt.libvirt.driver [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Deleting instance files /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536_del#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.859 253665 INFO nova.virt.libvirt.driver [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Deletion of /var/lib/nova/instances/b45c203c-7ae1-436b-86d3-bfc0146dd536_del complete#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.909 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.909 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.910 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.910 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.910 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.911 253665 INFO nova.compute.manager [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Terminating instance#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.912 253665 DEBUG nova.compute.manager [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.916 253665 INFO nova.compute.manager [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.916 253665 DEBUG oslo.service.loopingcall [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.917 253665 DEBUG nova.compute.manager [-] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.917 253665 DEBUG nova.network.neutron [-] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:16:50 np0005532048 kernel: tapdc08e15e-7d (unregistering): left promiscuous mode
Nov 22 04:16:50 np0005532048 NetworkManager[48920]: <info>  [1763803010.9550] device (tapdc08e15e-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:16:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:50Z|00376|binding|INFO|Releasing lport dc08e15e-7d04-4fac-8489-61a2d7b5a642 from this chassis (sb_readonly=0)
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.958 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:50Z|00377|binding|INFO|Setting lport dc08e15e-7d04-4fac-8489-61a2d7b5a642 down in Southbound
Nov 22 04:16:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:50Z|00378|binding|INFO|Removing iface tapdc08e15e-7d ovn-installed in OVS
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.965 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:74:e4 10.100.0.172'], port_security=['fa:16:3e:fb:74:e4 10.100.0.172'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.172/24', 'neutron:device_id': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d4cba21-42dc-4923-abab-98063b71666c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=dc08e15e-7d04-4fac-8489-61a2d7b5a642) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.967 162862 INFO neutron.agent.ovn.metadata.agent [-] Port dc08e15e-7d04-4fac-8489-61a2d7b5a642 in datapath c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 unbound from our chassis#033[00m
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.968 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6#033[00m
Nov 22 04:16:50 np0005532048 kernel: tapbabfaba0-2c (unregistering): left promiscuous mode
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.977 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:50 np0005532048 NetworkManager[48920]: <info>  [1763803010.9828] device (tapbabfaba0-2c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.987 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f1026d72-2a6e-4040-9c30-5114d0c18273]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:50Z|00379|binding|INFO|Releasing lport babfaba0-2c0a-4eb0-adc0-3473b0b80a08 from this chassis (sb_readonly=0)
Nov 22 04:16:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:50Z|00380|binding|INFO|Setting lport babfaba0-2c0a-4eb0-adc0-3473b0b80a08 down in Southbound
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.990 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:50Z|00381|binding|INFO|Removing iface tapbabfaba0-2c ovn-installed in OVS
Nov 22 04:16:50 np0005532048 nova_compute[253661]: 2025-11-22 09:16:50.992 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:50.996 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:00:b1 10.100.1.80'], port_security=['fa:16:3e:10:00:b1 10.100.1.80'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.80/24', 'neutron:device_id': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=529c2359-2293-4f9c-a99f-590f8fe2f28e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=babfaba0-2c0a-4eb0-adc0-3473b0b80a08) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.006 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 kernel: tap3c735a93-ff (unregistering): left promiscuous mode
Nov 22 04:16:51 np0005532048 NetworkManager[48920]: <info>  [1763803011.0173] device (tap3c735a93-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.029 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.037 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bd6954e6-2754-46ec-8a31-ff0ec3714a2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.041 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc4ea39-9c58-4783-8d64-c5b15752d94a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:51Z|00382|binding|INFO|Releasing lport 3c735a93-ffc0-4525-bd7d-7db35fe17769 from this chassis (sb_readonly=0)
Nov 22 04:16:51 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:51Z|00383|binding|INFO|Setting lport 3c735a93-ffc0-4525-bd7d-7db35fe17769 down in Southbound
Nov 22 04:16:51 np0005532048 ovn_controller[152872]: 2025-11-22T09:16:51Z|00384|binding|INFO|Removing iface tap3c735a93-ff ovn-installed in OVS
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.042 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.045 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.058 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.061 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:17:78 10.100.0.229'], port_security=['fa:16:3e:eb:17:78 10.100.0.229'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.229/24', 'neutron:device_id': '4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d4cba21-42dc-4923-abab-98063b71666c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=3c735a93-ffc0-4525-bd7d-7db35fe17769) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.080 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[28449923-ac2f-40b6-a69a-f369d1a1c7b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Nov 22 04:16:51 np0005532048 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002b.scope: Consumed 3.230s CPU time.
Nov 22 04:16:51 np0005532048 systemd-machined[215941]: Machine qemu-48-instance-0000002b terminated.
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.102 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[18de082a-dd71-45a6-b88e-a9dcbf246c9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc43f3b8e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:07:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585494, 'reachable_time': 25498, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304628, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.123 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe9c762-9eb7-4a69-8a12-346a3745ebb5]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tapc43f3b8e-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 585507, 'tstamp': 585507}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304629, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc43f3b8e-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 585510, 'tstamp': 585510}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304629, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.126 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc43f3b8e-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.128 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.136 253665 DEBUG nova.compute.manager [req-a6e827cf-0dc4-45ac-a9bb-3543a53fefc9 req-b4e92284-93a0-48b9-8163-370bb9a1e9c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-vif-unplugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.136 253665 DEBUG oslo_concurrency.lockutils [req-a6e827cf-0dc4-45ac-a9bb-3543a53fefc9 req-b4e92284-93a0-48b9-8163-370bb9a1e9c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.137 253665 DEBUG oslo_concurrency.lockutils [req-a6e827cf-0dc4-45ac-a9bb-3543a53fefc9 req-b4e92284-93a0-48b9-8163-370bb9a1e9c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.137 253665 DEBUG oslo_concurrency.lockutils [req-a6e827cf-0dc4-45ac-a9bb-3543a53fefc9 req-b4e92284-93a0-48b9-8163-370bb9a1e9c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.137 253665 DEBUG nova.compute.manager [req-a6e827cf-0dc4-45ac-a9bb-3543a53fefc9 req-b4e92284-93a0-48b9-8163-370bb9a1e9c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] No waiting events found dispatching network-vif-unplugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.137 253665 DEBUG nova.compute.manager [req-a6e827cf-0dc4-45ac-a9bb-3543a53fefc9 req-b4e92284-93a0-48b9-8163-370bb9a1e9c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-vif-unplugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:16:51 np0005532048 NetworkManager[48920]: <info>  [1763803011.1392] manager: (tapdc08e15e-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/165)
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.142 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.145 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc43f3b8e-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.146 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.146 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc43f3b8e-c0, col_values=(('external_ids', {'iface-id': '61d4e08f-2ccc-4601-a2e7-6cb33cc906ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.146 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.147 162862 INFO neutron.agent.ovn.metadata.agent [-] Port babfaba0-2c0a-4eb0-adc0-3473b0b80a08 in datapath 2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c unbound from our chassis#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.150 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.152 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f67f046f-5fef-4fc0-a3f0-8e3b673b1d82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.153 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c namespace which is not needed anymore#033[00m
Nov 22 04:16:51 np0005532048 NetworkManager[48920]: <info>  [1763803011.1546] manager: (tapbabfaba0-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/166)
Nov 22 04:16:51 np0005532048 NetworkManager[48920]: <info>  [1763803011.1686] manager: (tap3c735a93-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/167)
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.184 253665 INFO nova.virt.libvirt.driver [-] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Instance destroyed successfully.#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.185 253665 DEBUG nova.objects.instance [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lazy-loading 'resources' on Instance uuid 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.204 253665 DEBUG nova.virt.libvirt.vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:16:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:16:48Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.204 253665 DEBUG nova.network.os_vif_util [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.205 253665 DEBUG nova.network.os_vif_util [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:74:e4,bridge_name='br-int',has_traffic_filtering=True,id=dc08e15e-7d04-4fac-8489-61a2d7b5a642,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc08e15e-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.205 253665 DEBUG os_vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:74:e4,bridge_name='br-int',has_traffic_filtering=True,id=dc08e15e-7d04-4fac-8489-61a2d7b5a642,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc08e15e-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.207 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc08e15e-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.209 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.212 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.217 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.219 253665 INFO os_vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:74:e4,bridge_name='br-int',has_traffic_filtering=True,id=dc08e15e-7d04-4fac-8489-61a2d7b5a642,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc08e15e-7d')#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.220 253665 DEBUG nova.virt.libvirt.vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:16:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:16:48Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.220 253665 DEBUG nova.network.os_vif_util [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.221 253665 DEBUG nova.network.os_vif_util [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:00:b1,bridge_name='br-int',has_traffic_filtering=True,id=babfaba0-2c0a-4eb0-adc0-3473b0b80a08,network=Network(2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbabfaba0-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.221 253665 DEBUG os_vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:00:b1,bridge_name='br-int',has_traffic_filtering=True,id=babfaba0-2c0a-4eb0-adc0-3473b0b80a08,network=Network(2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbabfaba0-2c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.223 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.223 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbabfaba0-2c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.226 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.229 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.230 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.232 253665 INFO os_vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:00:b1,bridge_name='br-int',has_traffic_filtering=True,id=babfaba0-2c0a-4eb0-adc0-3473b0b80a08,network=Network(2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbabfaba0-2c')#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.233 253665 DEBUG nova.virt.libvirt.vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:16:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-619809161',display_name='tempest-ServersTestMultiNic-server-619809161',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-619809161',id=43,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:16:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-gzecmxkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:16:48Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.233 253665 DEBUG nova.network.os_vif_util [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "address": "fa:16:3e:eb:17:78", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.229", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c735a93-ff", "ovs_interfaceid": "3c735a93-ffc0-4525-bd7d-7db35fe17769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.234 253665 DEBUG nova.network.os_vif_util [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:17:78,bridge_name='br-int',has_traffic_filtering=True,id=3c735a93-ffc0-4525-bd7d-7db35fe17769,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c735a93-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.234 253665 DEBUG os_vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:17:78,bridge_name='br-int',has_traffic_filtering=True,id=3c735a93-ffc0-4525-bd7d-7db35fe17769,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c735a93-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.235 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.236 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c735a93-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.237 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.238 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.240 253665 INFO os_vif [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:17:78,bridge_name='br-int',has_traffic_filtering=True,id=3c735a93-ffc0-4525-bd7d-7db35fe17769,network=Network(c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c735a93-ff')#033[00m
Nov 22 04:16:51 np0005532048 neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c[304493]: [NOTICE]   (304497) : haproxy version is 2.8.14-c23fe91
Nov 22 04:16:51 np0005532048 neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c[304493]: [NOTICE]   (304497) : path to executable is /usr/sbin/haproxy
Nov 22 04:16:51 np0005532048 neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c[304493]: [WARNING]  (304497) : Exiting Master process...
Nov 22 04:16:51 np0005532048 neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c[304493]: [ALERT]    (304497) : Current worker (304499) exited with code 143 (Terminated)
Nov 22 04:16:51 np0005532048 neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c[304493]: [WARNING]  (304497) : All workers exited. Exiting... (0)
Nov 22 04:16:51 np0005532048 systemd[1]: libpod-b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f.scope: Deactivated successfully.
Nov 22 04:16:51 np0005532048 podman[304686]: 2025-11-22 09:16:51.318897936 +0000 UTC m=+0.058327034 container died b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:16:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 167 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 18 KiB/s wr, 37 op/s
Nov 22 04:16:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f-userdata-shm.mount: Deactivated successfully.
Nov 22 04:16:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay-558a2bf28aa725adc01b2257b2fa48656e975a746fc020a7ec5246f4f99d866b-merged.mount: Deactivated successfully.
Nov 22 04:16:51 np0005532048 podman[304686]: 2025-11-22 09:16:51.373745424 +0000 UTC m=+0.113174492 container cleanup b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 04:16:51 np0005532048 systemd[1]: libpod-conmon-b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f.scope: Deactivated successfully.
Nov 22 04:16:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Nov 22 04:16:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Nov 22 04:16:51 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Nov 22 04:16:51 np0005532048 podman[304734]: 2025-11-22 09:16:51.46149825 +0000 UTC m=+0.060308343 container remove b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.468 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[df7e3a97-4072-4903-8f34-c5eef8d20e01]: (4, ('Sat Nov 22 09:16:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c (b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f)\nb69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f\nSat Nov 22 09:16:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c (b69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f)\nb69dae00c7a918328ee79fd7723bfccd4280317fbcb477706cfbfb8ccd06331f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.469 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[98856ec8-584a-4053-ba59-6c6ca302c0c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.470 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d8f3ef5-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.491 253665 DEBUG nova.compute.manager [req-cd717a0b-18d3-4d48-8cb8-da35021a1f7e req-4bbbaef9-8889-4d13-8066-434cf5ffce26 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-unplugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.491 253665 DEBUG oslo_concurrency.lockutils [req-cd717a0b-18d3-4d48-8cb8-da35021a1f7e req-4bbbaef9-8889-4d13-8066-434cf5ffce26 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.491 253665 DEBUG oslo_concurrency.lockutils [req-cd717a0b-18d3-4d48-8cb8-da35021a1f7e req-4bbbaef9-8889-4d13-8066-434cf5ffce26 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.491 253665 DEBUG oslo_concurrency.lockutils [req-cd717a0b-18d3-4d48-8cb8-da35021a1f7e req-4bbbaef9-8889-4d13-8066-434cf5ffce26 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.492 253665 DEBUG nova.compute.manager [req-cd717a0b-18d3-4d48-8cb8-da35021a1f7e req-4bbbaef9-8889-4d13-8066-434cf5ffce26 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] No waiting events found dispatching network-vif-unplugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.492 253665 DEBUG nova.compute.manager [req-cd717a0b-18d3-4d48-8cb8-da35021a1f7e req-4bbbaef9-8889-4d13-8066-434cf5ffce26 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-unplugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.505 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 kernel: tap2d8f3ef5-80: left promiscuous mode
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.520 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.524 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4875b6-30f0-4529-b88a-e012ad3bd7cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.545 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[520cd2eb-e028-4ff3-9a5b-e41e3e32ceb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.547 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[904242a8-ea9f-4df6-bbc7-444e0064da05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.573 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c67fe838-be82-4ff5-86bc-70213cc2d285]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585596, 'reachable_time': 20045, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304748, 'error': None, 'target': 'ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 systemd[1]: run-netns-ovnmeta\x2d2d8f3ef5\x2d84f0\x2d49e9\x2d9b4b\x2d24a885a3aa6c.mount: Deactivated successfully.
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.576 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.577 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[2299b565-ea6d-408b-b2fe-59c55de1e1b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.578 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 3c735a93-ffc0-4525-bd7d-7db35fe17769 in datapath c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 unbound from our chassis#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.579 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.580 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c03b1fcf-3143-42a3-bbf6-4eae0bd914d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.581 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 namespace which is not needed anymore#033[00m
Nov 22 04:16:51 np0005532048 neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6[304412]: [NOTICE]   (304416) : haproxy version is 2.8.14-c23fe91
Nov 22 04:16:51 np0005532048 neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6[304412]: [NOTICE]   (304416) : path to executable is /usr/sbin/haproxy
Nov 22 04:16:51 np0005532048 neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6[304412]: [WARNING]  (304416) : Exiting Master process...
Nov 22 04:16:51 np0005532048 neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6[304412]: [ALERT]    (304416) : Current worker (304418) exited with code 143 (Terminated)
Nov 22 04:16:51 np0005532048 neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6[304412]: [WARNING]  (304416) : All workers exited. Exiting... (0)
Nov 22 04:16:51 np0005532048 systemd[1]: libpod-ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e.scope: Deactivated successfully.
Nov 22 04:16:51 np0005532048 podman[304766]: 2025-11-22 09:16:51.747036708 +0000 UTC m=+0.059895132 container died ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.750 253665 INFO nova.virt.libvirt.driver [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Deleting instance files /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_del#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.751 253665 INFO nova.virt.libvirt.driver [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Deletion of /var/lib/nova/instances/4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e_del complete#033[00m
Nov 22 04:16:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e-userdata-shm.mount: Deactivated successfully.
Nov 22 04:16:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay-19c249f8811210d708abd429b642e58a08ae9985262779c5dbd5ea97093529a7-merged.mount: Deactivated successfully.
Nov 22 04:16:51 np0005532048 podman[304766]: 2025-11-22 09:16:51.78980233 +0000 UTC m=+0.102660764 container cleanup ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.808 253665 INFO nova.compute.manager [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.809 253665 DEBUG oslo.service.loopingcall [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.809 253665 DEBUG nova.compute.manager [-] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.810 253665 DEBUG nova.network.neutron [-] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:16:51 np0005532048 systemd[1]: libpod-conmon-ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e.scope: Deactivated successfully.
Nov 22 04:16:51 np0005532048 podman[304794]: 2025-11-22 09:16:51.869917259 +0000 UTC m=+0.052780259 container remove ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.877 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ab643b-5947-409c-a739-496f3fa030aa]: (4, ('Sat Nov 22 09:16:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 (ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e)\nca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e\nSat Nov 22 09:16:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 (ca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e)\nca83631999ba95534a3c36532c99c85e200b0884e935dd6c4ff7d984bb19624e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.879 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[572c3e53-f5cd-4d10-82f1-f3623f353848]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.880 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc43f3b8e-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:51 np0005532048 kernel: tapc43f3b8e-c0: left promiscuous mode
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 nova_compute[253661]: 2025-11-22 09:16:51.900 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.904 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1e95474b-e8cf-4580-83b0-2d0240ce3632]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.919 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[359a034a-d87e-4310-a181-55f19dfc7d29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.921 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b8742ec1-327c-4b73-98ff-4d744646383a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.947 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[51614735-881d-45ae-b9e4-67d0f595a227]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585485, 'reachable_time': 41183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304808, 'error': None, 'target': 'ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.950 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:16:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:51.950 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[595a980f-504a-4896-a648-5a1ae5590904]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:16:51 np0005532048 systemd[1]: run-netns-ovnmeta\x2dc43f3b8e\x2dc53d\x2d47a3\x2dab2b\x2dbfb152b82dd6.mount: Deactivated successfully.
Nov 22 04:16:52 np0005532048 nova_compute[253661]: 2025-11-22 09:16:52.045 253665 DEBUG nova.network.neutron [req-392dd015-0619-4538-b082-3d11365c95d7 req-8ebb389c-ca6f-4bd7-ae7a-b17ae2398869 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updated VIF entry in instance network info cache for port 91a0d7d2-517a-4636-a7fd-86f4d72aed04. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:16:52 np0005532048 nova_compute[253661]: 2025-11-22 09:16:52.045 253665 DEBUG nova.network.neutron [req-392dd015-0619-4538-b082-3d11365c95d7 req-8ebb389c-ca6f-4bd7-ae7a-b17ae2398869 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [{"id": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "address": "fa:16:3e:d8:22:b3", "network": {"id": "a8ceec0c-2cf6-459a-a4d7-aaf770041b6c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-812150159-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2f74a0d8c2374c07a9c9cd48b42318c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91a0d7d2-51", "ovs_interfaceid": "91a0d7d2-517a-4636-a7fd-86f4d72aed04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:16:52 np0005532048 nova_compute[253661]: 2025-11-22 09:16:52.059 253665 DEBUG oslo_concurrency.lockutils [req-392dd015-0619-4538-b082-3d11365c95d7 req-8ebb389c-ca6f-4bd7-ae7a-b17ae2398869 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b45c203c-7ae1-436b-86d3-bfc0146dd536" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:16:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:16:52
Nov 22 04:16:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:16:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:16:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'vms', 'volumes', 'images', '.mgr', 'backups', 'default.rgw.control']
Nov 22 04:16:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:16:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:16:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:16:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:16:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:16:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:16:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.119 253665 DEBUG nova.network.neutron [-] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.140 253665 INFO nova.compute.manager [-] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Took 2.22 seconds to deallocate network for instance.#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.192 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.192 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.243 253665 DEBUG nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.244 253665 DEBUG oslo_concurrency.lockutils [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.244 253665 DEBUG oslo_concurrency.lockutils [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.245 253665 DEBUG oslo_concurrency.lockutils [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.245 253665 DEBUG nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] No waiting events found dispatching network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.246 253665 WARNING nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received unexpected event network-vif-plugged-91a0d7d2-517a-4636-a7fd-86f4d72aed04 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.246 253665 DEBUG nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-deleted-3c735a93-ffc0-4525-bd7d-7db35fe17769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.247 253665 INFO nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Neutron deleted interface 3c735a93-ffc0-4525-bd7d-7db35fe17769; detaching it from the instance and deleting it from the info cache#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.247 253665 DEBUG nova.network.neutron [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updating instance_info_cache with network_info: [{"id": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "address": "fa:16:3e:fb:74:e4", "network": {"id": "c43f3b8e-c53d-47a3-ab2b-bfb152b82dd6", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1078920500", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.172", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc08e15e-7d", "ovs_interfaceid": "dc08e15e-7d04-4fac-8489-61a2d7b5a642", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.286 253665 DEBUG nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Detach interface failed, port_id=3c735a93-ffc0-4525-bd7d-7db35fe17769, reason: Instance 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.287 253665 DEBUG nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-deleted-dc08e15e-7d04-4fac-8489-61a2d7b5a642 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.287 253665 INFO nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Neutron deleted interface dc08e15e-7d04-4fac-8489-61a2d7b5a642; detaching it from the instance and deleting it from the info cache#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.287 253665 DEBUG nova.network.neutron [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updating instance_info_cache with network_info: [{"id": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "address": "fa:16:3e:10:00:b1", "network": {"id": "2d8f3ef5-84f0-49e9-9b4b-24a885a3aa6c", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1290740387", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.80", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbabfaba0-2c", "ovs_interfaceid": "babfaba0-2c0a-4eb0-adc0-3473b0b80a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.301 253665 DEBUG oslo_concurrency.processutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:16:53 np0005532048 ceph-mgr[75315]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1636168236
Nov 22 04:16:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 130 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 20 KiB/s wr, 95 op/s
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.357 253665 DEBUG nova.compute.manager [req-fad5e2ff-6775-4127-b492-0227cad0bcdd req-04171347-f999-4565-a29c-57fc0bc8a3da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Detach interface failed, port_id=dc08e15e-7d04-4fac-8489-61a2d7b5a642, reason: Instance 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.684 253665 DEBUG nova.compute.manager [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.685 253665 DEBUG oslo_concurrency.lockutils [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.685 253665 DEBUG oslo_concurrency.lockutils [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.686 253665 DEBUG oslo_concurrency.lockutils [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.686 253665 DEBUG nova.compute.manager [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] No waiting events found dispatching network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.687 253665 WARNING nova.compute.manager [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received unexpected event network-vif-plugged-dc08e15e-7d04-4fac-8489-61a2d7b5a642 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.687 253665 DEBUG nova.compute.manager [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-unplugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.688 253665 DEBUG oslo_concurrency.lockutils [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.689 253665 DEBUG oslo_concurrency.lockutils [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.690 253665 DEBUG oslo_concurrency.lockutils [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.690 253665 DEBUG nova.compute.manager [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] No waiting events found dispatching network-vif-unplugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.691 253665 DEBUG nova.compute.manager [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-unplugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.691 253665 DEBUG nova.compute.manager [req-6aa7b91b-f5f5-40df-923a-b95afb50b897 req-f0bfb0c7-6b4e-47a9-b26b-8b3a8e29cc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Received event network-vif-deleted-91a0d7d2-517a-4636-a7fd-86f4d72aed04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.694 253665 DEBUG nova.network.neutron [-] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.714 253665 INFO nova.compute.manager [-] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Took 1.90 seconds to deallocate network for instance.#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.752 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:16:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/851078135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.818 253665 DEBUG oslo_concurrency.processutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.826 253665 DEBUG nova.compute.provider_tree [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.843 253665 DEBUG nova.scheduler.client.report [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.870 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.873 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.903 253665 INFO nova.scheduler.client.report [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Deleted allocations for instance b45c203c-7ae1-436b-86d3-bfc0146dd536#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.927 253665 DEBUG oslo_concurrency.processutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:16:53 np0005532048 nova_compute[253661]: 2025-11-22 09:16:53.974 253665 DEBUG oslo_concurrency.lockutils [None req-b01e0231-c6a7-4bb6-a967-5b630113a2e4 7b394acfc2f44ed180b65249224f2788 2f74a0d8c2374c07a9c9cd48b42318c3 - - default default] Lock "b45c203c-7ae1-436b-86d3-bfc0146dd536" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:16:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2933749736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:16:54 np0005532048 podman[304851]: 2025-11-22 09:16:54.384545024 +0000 UTC m=+0.070254758 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 04:16:54 np0005532048 podman[304852]: 2025-11-22 09:16:54.39335925 +0000 UTC m=+0.075714802 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 04:16:54 np0005532048 nova_compute[253661]: 2025-11-22 09:16:54.397 253665 DEBUG oslo_concurrency.processutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:16:54 np0005532048 nova_compute[253661]: 2025-11-22 09:16:54.405 253665 DEBUG nova.compute.provider_tree [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:16:54 np0005532048 nova_compute[253661]: 2025-11-22 09:16:54.416 253665 DEBUG nova.scheduler.client.report [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:16:54 np0005532048 nova_compute[253661]: 2025-11-22 09:16:54.436 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:54 np0005532048 nova_compute[253661]: 2025-11-22 09:16:54.459 253665 INFO nova.scheduler.client.report [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Deleted allocations for instance 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e#033[00m
Nov 22 04:16:54 np0005532048 nova_compute[253661]: 2025-11-22 09:16:54.510 253665 DEBUG oslo_concurrency.lockutils [None req-bd101875-3ccc-4085-a19e-986308545728 c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:16:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:16:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:16:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:16:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:16:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:16:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:16:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:16:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:16:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:16:54 np0005532048 nova_compute[253661]: 2025-11-22 09:16:54.971 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:55 np0005532048 nova_compute[253661]: 2025-11-22 09:16:55.317 253665 DEBUG nova.compute.manager [req-e1741556-fdfb-452c-b9dd-053d1197584a req-fe93a008-ee34-4921-81a7-a3654030923f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-deleted-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 41 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 157 op/s
Nov 22 04:16:55 np0005532048 nova_compute[253661]: 2025-11-22 09:16:55.573 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:55 np0005532048 nova_compute[253661]: 2025-11-22 09:16:55.781 253665 DEBUG nova.compute.manager [req-778fcc56-2ffc-45ad-bdaf-b72b0f5abcce req-631dc7d5-d066-4f2d-889b-02e89b3f1094 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received event network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:16:55 np0005532048 nova_compute[253661]: 2025-11-22 09:16:55.782 253665 DEBUG oslo_concurrency.lockutils [req-778fcc56-2ffc-45ad-bdaf-b72b0f5abcce req-631dc7d5-d066-4f2d-889b-02e89b3f1094 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:16:55 np0005532048 nova_compute[253661]: 2025-11-22 09:16:55.782 253665 DEBUG oslo_concurrency.lockutils [req-778fcc56-2ffc-45ad-bdaf-b72b0f5abcce req-631dc7d5-d066-4f2d-889b-02e89b3f1094 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:16:55 np0005532048 nova_compute[253661]: 2025-11-22 09:16:55.782 253665 DEBUG oslo_concurrency.lockutils [req-778fcc56-2ffc-45ad-bdaf-b72b0f5abcce req-631dc7d5-d066-4f2d-889b-02e89b3f1094 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:16:55 np0005532048 nova_compute[253661]: 2025-11-22 09:16:55.783 253665 DEBUG nova.compute.manager [req-778fcc56-2ffc-45ad-bdaf-b72b0f5abcce req-631dc7d5-d066-4f2d-889b-02e89b3f1094 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] No waiting events found dispatching network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:16:55 np0005532048 nova_compute[253661]: 2025-11-22 09:16:55.783 253665 WARNING nova.compute.manager [req-778fcc56-2ffc-45ad-bdaf-b72b0f5abcce req-631dc7d5-d066-4f2d-889b-02e89b3f1094 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Received unexpected event network-vif-plugged-babfaba0-2c0a-4eb0-adc0-3473b0b80a08 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:16:56 np0005532048 nova_compute[253661]: 2025-11-22 09:16:56.238 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:56 np0005532048 nova_compute[253661]: 2025-11-22 09:16:56.624 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:56 np0005532048 nova_compute[253661]: 2025-11-22 09:16:56.851 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:16:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:16:57.059 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:16:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 41 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 146 op/s
Nov 22 04:16:57 np0005532048 podman[304892]: 2025-11-22 09:16:57.482391444 +0000 UTC m=+0.166218227 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 22 04:16:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 41 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 141 op/s
Nov 22 04:16:59 np0005532048 nova_compute[253661]: 2025-11-22 09:16:59.972 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:01 np0005532048 nova_compute[253661]: 2025-11-22 09:17:01.242 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:01 np0005532048 nova_compute[253661]: 2025-11-22 09:17:01.327 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:01 np0005532048 nova_compute[253661]: 2025-11-22 09:17:01.328 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 41 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 141 op/s
Nov 22 04:17:01 np0005532048 nova_compute[253661]: 2025-11-22 09:17:01.340 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:17:01 np0005532048 nova_compute[253661]: 2025-11-22 09:17:01.412 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:01 np0005532048 nova_compute[253661]: 2025-11-22 09:17:01.413 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:01 np0005532048 nova_compute[253661]: 2025-11-22 09:17:01.421 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:17:01 np0005532048 nova_compute[253661]: 2025-11-22 09:17:01.422 253665 INFO nova.compute.claims [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:17:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:01 np0005532048 nova_compute[253661]: 2025-11-22 09:17:01.507 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:17:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801285361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.007 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.014 253665 DEBUG nova.compute.provider_tree [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.032 253665 DEBUG nova.scheduler.client.report [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.052 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.053 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.106 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.107 253665 DEBUG nova.network.neutron [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.130 253665 INFO nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.153 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.254 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.255 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.256 253665 INFO nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Creating image(s)#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.282 253665 DEBUG nova.storage.rbd_utils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 8dafc0d0-bd93-4080-b51e-36887936ea66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.307 253665 DEBUG nova.storage.rbd_utils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 8dafc0d0-bd93-4080-b51e-36887936ea66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.339 253665 DEBUG nova.storage.rbd_utils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 8dafc0d0-bd93-4080-b51e-36887936ea66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.343 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.395 253665 DEBUG nova.policy [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '790eaa89f1a74325b81291d8beca6d38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.407 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "4589c5da-d558-41a1-bf54-30746991be9e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.407 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.421 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.441 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.443 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.444 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.444 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.471 253665 DEBUG nova.storage.rbd_utils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 8dafc0d0-bd93-4080-b51e-36887936ea66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.476 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 8dafc0d0-bd93-4080-b51e-36887936ea66_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.541 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.542 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.551 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.552 253665 INFO nova.compute.claims [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:17:02 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.668 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.882 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 8dafc0d0-bd93-4080-b51e-36887936ea66_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:02 np0005532048 nova_compute[253661]: 2025-11-22 09:17:02.949 253665 DEBUG nova.storage.rbd_utils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] resizing rbd image 8dafc0d0-bd93-4080-b51e-36887936ea66_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.078 253665 DEBUG nova.objects.instance [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'migration_context' on Instance uuid 8dafc0d0-bd93-4080-b51e-36887936ea66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.109 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.110 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Ensure instance console log exists: /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.111 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.112 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.112 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:17:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1495696526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.140 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.147 253665 DEBUG nova.compute.provider_tree [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.174 253665 DEBUG nova.scheduler.client.report [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.203 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.204 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.252 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.252 253665 DEBUG nova.network.neutron [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.274 253665 INFO nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.294 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:17:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 41 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 KiB/s wr, 118 op/s
Nov 22 04:17:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:03.361 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:0f:e7'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2f74a0d8c2374c07a9c9cd48b42318c3', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba63bb0-30f0-4e31-af74-7247ce34941d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=73744eaa-7d97-4c21-9fb3-4378f10417f6) old=Port_Binding(mac=['fa:16:3e:96:0f:e7 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8ceec0c-2cf6-459a-a4d7-aaf770041b6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2f74a0d8c2374c07a9c9cd48b42318c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:03.364 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 73744eaa-7d97-4c21-9fb3-4378f10417f6 in datapath a8ceec0c-2cf6-459a-a4d7-aaf770041b6c updated#033[00m
Nov 22 04:17:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:03.366 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a8ceec0c-2cf6-459a-a4d7-aaf770041b6c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:17:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:03.368 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3fc9b402-e563-4c32-ab43-86ea30a84ccc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.377 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.378 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.379 253665 INFO nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Creating image(s)#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.400 253665 DEBUG nova.storage.rbd_utils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] rbd image 4589c5da-d558-41a1-bf54-30746991be9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.423 253665 DEBUG nova.storage.rbd_utils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] rbd image 4589c5da-d558-41a1-bf54-30746991be9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.447 253665 DEBUG nova.storage.rbd_utils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] rbd image 4589c5da-d558-41a1-bf54-30746991be9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.451 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.490 253665 DEBUG nova.network.neutron [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Successfully created port: f72c6b7d-0ba5-4d25-a08a-e2518c2a479d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.496 253665 DEBUG nova.policy [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ca3a4f3a44014ad7a069b7dbdffb7c04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ce05fa5ad2745dab1909b0954fb83d6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.529 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.531 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.531 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.532 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.557 253665 DEBUG nova.storage.rbd_utils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] rbd image 4589c5da-d558-41a1-bf54-30746991be9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.562 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4589c5da-d558-41a1-bf54-30746991be9e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.909 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4589c5da-d558-41a1-bf54-30746991be9e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:03 np0005532048 nova_compute[253661]: 2025-11-22 09:17:03.984 253665 DEBUG nova.storage.rbd_utils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] resizing rbd image 4589c5da-d558-41a1-bf54-30746991be9e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:17:04 np0005532048 nova_compute[253661]: 2025-11-22 09:17:04.106 253665 DEBUG nova.objects.instance [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lazy-loading 'migration_context' on Instance uuid 4589c5da-d558-41a1-bf54-30746991be9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:04 np0005532048 nova_compute[253661]: 2025-11-22 09:17:04.119 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:17:04 np0005532048 nova_compute[253661]: 2025-11-22 09:17:04.119 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Ensure instance console log exists: /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:17:04 np0005532048 nova_compute[253661]: 2025-11-22 09:17:04.120 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:04 np0005532048 nova_compute[253661]: 2025-11-22 09:17:04.120 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:04 np0005532048 nova_compute[253661]: 2025-11-22 09:17:04.120 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:04 np0005532048 nova_compute[253661]: 2025-11-22 09:17:04.386 253665 DEBUG nova.network.neutron [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Successfully updated port: f72c6b7d-0ba5-4d25-a08a-e2518c2a479d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:17:04 np0005532048 nova_compute[253661]: 2025-11-22 09:17:04.405 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "refresh_cache-8dafc0d0-bd93-4080-b51e-36887936ea66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:17:04 np0005532048 nova_compute[253661]: 2025-11-22 09:17:04.405 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquired lock "refresh_cache-8dafc0d0-bd93-4080-b51e-36887936ea66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:17:04 np0005532048 nova_compute[253661]: 2025-11-22 09:17:04.406 253665 DEBUG nova.network.neutron [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:17:04 np0005532048 nova_compute[253661]: 2025-11-22 09:17:04.446 253665 DEBUG nova.network.neutron [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Successfully created port: 79319cd8-59bd-43b2-a72b-a88f70eb5570 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:17:04 np0005532048 nova_compute[253661]: 2025-11-22 09:17:04.568 253665 DEBUG nova.compute.manager [req-0919c9d2-22e0-4692-abbf-4a1f516a7f4a req-d48486ec-13e6-4c15-aa2d-e8e179b47508 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-changed-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:04 np0005532048 nova_compute[253661]: 2025-11-22 09:17:04.568 253665 DEBUG nova.compute.manager [req-0919c9d2-22e0-4692-abbf-4a1f516a7f4a req-d48486ec-13e6-4c15-aa2d-e8e179b47508 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Refreshing instance network info cache due to event network-changed-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:17:04 np0005532048 nova_compute[253661]: 2025-11-22 09:17:04.568 253665 DEBUG oslo_concurrency.lockutils [req-0919c9d2-22e0-4692-abbf-4a1f516a7f4a req-d48486ec-13e6-4c15-aa2d-e8e179b47508 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-8dafc0d0-bd93-4080-b51e-36887936ea66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:17:04 np0005532048 nova_compute[253661]: 2025-11-22 09:17:04.958 253665 DEBUG nova.network.neutron [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:17:04 np0005532048 nova_compute[253661]: 2025-11-22 09:17:04.973 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:05 np0005532048 nova_compute[253661]: 2025-11-22 09:17:05.319 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803010.318197, b45c203c-7ae1-436b-86d3-bfc0146dd536 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:05 np0005532048 nova_compute[253661]: 2025-11-22 09:17:05.320 253665 INFO nova.compute.manager [-] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:17:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 68 MiB data, 456 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 84 op/s
Nov 22 04:17:05 np0005532048 nova_compute[253661]: 2025-11-22 09:17:05.341 253665 DEBUG nova.compute.manager [None req-6869de07-e164-4892-86d3-f3bd5f608b3f - - - - - -] [instance: b45c203c-7ae1-436b-86d3-bfc0146dd536] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:05 np0005532048 nova_compute[253661]: 2025-11-22 09:17:05.749 253665 DEBUG nova.network.neutron [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Successfully updated port: 79319cd8-59bd-43b2-a72b-a88f70eb5570 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:17:05 np0005532048 nova_compute[253661]: 2025-11-22 09:17:05.766 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "refresh_cache-4589c5da-d558-41a1-bf54-30746991be9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:17:05 np0005532048 nova_compute[253661]: 2025-11-22 09:17:05.766 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquired lock "refresh_cache-4589c5da-d558-41a1-bf54-30746991be9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:17:05 np0005532048 nova_compute[253661]: 2025-11-22 09:17:05.766 253665 DEBUG nova.network.neutron [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.035 253665 DEBUG nova.network.neutron [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.183 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803011.181592, 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.183 253665 INFO nova.compute.manager [-] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.193 253665 DEBUG nova.network.neutron [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Updating instance_info_cache with network_info: [{"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.204 253665 DEBUG nova.compute.manager [None req-a44e40fa-2bb2-4830-a8ef-a8089817de0d - - - - - -] [instance: 4a3bba8d-a5ac-41ca-bc8a-0eb656103f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.235 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Releasing lock "refresh_cache-8dafc0d0-bd93-4080-b51e-36887936ea66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.236 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Instance network_info: |[{"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.237 253665 DEBUG oslo_concurrency.lockutils [req-0919c9d2-22e0-4692-abbf-4a1f516a7f4a req-d48486ec-13e6-4c15-aa2d-e8e179b47508 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-8dafc0d0-bd93-4080-b51e-36887936ea66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.237 253665 DEBUG nova.network.neutron [req-0919c9d2-22e0-4692-abbf-4a1f516a7f4a req-d48486ec-13e6-4c15-aa2d-e8e179b47508 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Refreshing network info cache for port f72c6b7d-0ba5-4d25-a08a-e2518c2a479d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.243 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Start _get_guest_xml network_info=[{"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.244 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.250 253665 WARNING nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.260 253665 DEBUG nova.virt.libvirt.host [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.261 253665 DEBUG nova.virt.libvirt.host [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.265 253665 DEBUG nova.virt.libvirt.host [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.266 253665 DEBUG nova.virt.libvirt.host [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.267 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.267 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.268 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.268 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.268 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.269 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.269 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.269 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.269 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.270 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.270 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.270 253665 DEBUG nova.virt.hardware [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.273 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:17:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2025858630' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.773 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.797 253665 DEBUG nova.storage.rbd_utils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 8dafc0d0-bd93-4080-b51e-36887936ea66_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:06 np0005532048 nova_compute[253661]: 2025-11-22 09:17:06.801 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:17:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3211189844' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.281 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.283 253665 DEBUG nova.virt.libvirt.vif [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1053734599',display_name='tempest-DeleteServersTestJSON-server-1053734599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1053734599',id=44,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-w9qj9zgo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:02Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=8dafc0d0-bd93-4080-b51e-36887936ea66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.283 253665 DEBUG nova.network.os_vif_util [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.284 253665 DEBUG nova.network.os_vif_util [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:d5:8c,bridge_name='br-int',has_traffic_filtering=True,id=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72c6b7d-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.285 253665 DEBUG nova.objects.instance [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8dafc0d0-bd93-4080-b51e-36887936ea66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.308 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  <uuid>8dafc0d0-bd93-4080-b51e-36887936ea66</uuid>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  <name>instance-0000002c</name>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <nova:name>tempest-DeleteServersTestJSON-server-1053734599</nova:name>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:17:06</nova:creationTime>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:17:07 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:        <nova:user uuid="790eaa89f1a74325b81291d8beca6d38">tempest-DeleteServersTestJSON-487469072-project-member</nova:user>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:        <nova:project uuid="d4fe4f74353442a9a8042d29dcf6274e">tempest-DeleteServersTestJSON-487469072</nova:project>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:        <nova:port uuid="f72c6b7d-0ba5-4d25-a08a-e2518c2a479d">
Nov 22 04:17:07 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <entry name="serial">8dafc0d0-bd93-4080-b51e-36887936ea66</entry>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <entry name="uuid">8dafc0d0-bd93-4080-b51e-36887936ea66</entry>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/8dafc0d0-bd93-4080-b51e-36887936ea66_disk">
Nov 22 04:17:07 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:17:07 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/8dafc0d0-bd93-4080-b51e-36887936ea66_disk.config">
Nov 22 04:17:07 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:17:07 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:73:d5:8c"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <target dev="tapf72c6b7d-0b"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66/console.log" append="off"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:17:07 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:17:07 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:17:07 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:17:07 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.309 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Preparing to wait for external event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.310 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.310 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.310 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.311 253665 DEBUG nova.virt.libvirt.vif [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1053734599',display_name='tempest-DeleteServersTestJSON-server-1053734599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1053734599',id=44,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-w9qj9zgo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:02Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=8dafc0d0-bd93-4080-b51e-36887936ea66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.311 253665 DEBUG nova.network.os_vif_util [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.312 253665 DEBUG nova.network.os_vif_util [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:d5:8c,bridge_name='br-int',has_traffic_filtering=True,id=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72c6b7d-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.312 253665 DEBUG os_vif [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:d5:8c,bridge_name='br-int',has_traffic_filtering=True,id=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72c6b7d-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.313 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.314 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.314 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.319 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.319 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf72c6b7d-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.320 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf72c6b7d-0b, col_values=(('external_ids', {'iface-id': 'f72c6b7d-0ba5-4d25-a08a-e2518c2a479d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:d5:8c', 'vm-uuid': '8dafc0d0-bd93-4080-b51e-36887936ea66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.321 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:07 np0005532048 NetworkManager[48920]: <info>  [1763803027.3226] manager: (tapf72c6b7d-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/168)
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.325 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.329 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.330 253665 INFO os_vif [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:d5:8c,bridge_name='br-int',has_traffic_filtering=True,id=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72c6b7d-0b')#033[00m
Nov 22 04:17:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 113 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.0 MiB/s wr, 33 op/s
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.379 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.380 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.380 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No VIF found with MAC fa:16:3e:73:d5:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.381 253665 INFO nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Using config drive#033[00m
Nov 22 04:17:07 np0005532048 nova_compute[253661]: 2025-11-22 09:17:07.401 253665 DEBUG nova.storage.rbd_utils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 8dafc0d0-bd93-4080-b51e-36887936ea66_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.294 253665 INFO nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Creating config drive at /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66/disk.config#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.299 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpayjm6mzv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.427 253665 DEBUG nova.compute.manager [req-868f02d5-7d5b-47d6-8bd7-ee1d380ea00e req-c7e3098b-eddc-4aa4-beca-f1d51691e322 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received event network-changed-79319cd8-59bd-43b2-a72b-a88f70eb5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.428 253665 DEBUG nova.compute.manager [req-868f02d5-7d5b-47d6-8bd7-ee1d380ea00e req-c7e3098b-eddc-4aa4-beca-f1d51691e322 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Refreshing instance network info cache due to event network-changed-79319cd8-59bd-43b2-a72b-a88f70eb5570. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.429 253665 DEBUG oslo_concurrency.lockutils [req-868f02d5-7d5b-47d6-8bd7-ee1d380ea00e req-c7e3098b-eddc-4aa4-beca-f1d51691e322 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4589c5da-d558-41a1-bf54-30746991be9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.455 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpayjm6mzv" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.485 253665 DEBUG nova.storage.rbd_utils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 8dafc0d0-bd93-4080-b51e-36887936ea66_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.491 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66/disk.config 8dafc0d0-bd93-4080-b51e-36887936ea66_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.666 253665 DEBUG oslo_concurrency.processutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66/disk.config 8dafc0d0-bd93-4080-b51e-36887936ea66_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.667 253665 INFO nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Deleting local config drive /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66/disk.config because it was imported into RBD.#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.671 253665 DEBUG nova.network.neutron [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Updating instance_info_cache with network_info: [{"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.697 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Releasing lock "refresh_cache-4589c5da-d558-41a1-bf54-30746991be9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.698 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Instance network_info: |[{"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.699 253665 DEBUG oslo_concurrency.lockutils [req-868f02d5-7d5b-47d6-8bd7-ee1d380ea00e req-c7e3098b-eddc-4aa4-beca-f1d51691e322 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4589c5da-d558-41a1-bf54-30746991be9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.699 253665 DEBUG nova.network.neutron [req-868f02d5-7d5b-47d6-8bd7-ee1d380ea00e req-c7e3098b-eddc-4aa4-beca-f1d51691e322 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Refreshing network info cache for port 79319cd8-59bd-43b2-a72b-a88f70eb5570 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.704 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Start _get_guest_xml network_info=[{"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.709 253665 WARNING nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.726 253665 DEBUG nova.virt.libvirt.host [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.727 253665 DEBUG nova.virt.libvirt.host [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.733 253665 DEBUG nova.virt.libvirt.host [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.735 253665 DEBUG nova.virt.libvirt.host [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.736 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.736 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.737 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.737 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.737 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.738 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.738 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.738 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.738 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.738 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.739 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.739 253665 DEBUG nova.virt.hardware [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:17:08 np0005532048 kernel: tapf72c6b7d-0b: entered promiscuous mode
Nov 22 04:17:08 np0005532048 NetworkManager[48920]: <info>  [1763803028.7407] manager: (tapf72c6b7d-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/169)
Nov 22 04:17:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:08Z|00385|binding|INFO|Claiming lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for this chassis.
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.743 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:08Z|00386|binding|INFO|f72c6b7d-0ba5-4d25-a08a-e2518c2a479d: Claiming fa:16:3e:73:d5:8c 10.100.0.11
Nov 22 04:17:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.758 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:d5:8c 10.100.0.11'], port_security=['fa:16:3e:73:d5:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8dafc0d0-bd93-4080-b51e-36887936ea66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.760 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f72c6b7d-0ba5-4d25-a08a-e2518c2a479d in datapath d93e3720-b00d-41f5-8283-164e9f857d24 bound to our chassis#033[00m
Nov 22 04:17:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.762 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d93e3720-b00d-41f5-8283-164e9f857d24#033[00m
Nov 22 04:17:08 np0005532048 systemd-udevd[305432]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:17:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.778 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5450a02f-cc49-4857-9429-d3cea4a24cbf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.779 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd93e3720-b1 in ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:17:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.784 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd93e3720-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:17:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.784 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[17d3e0dd-4ff9-49bd-9a50-60df92b80dbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.786 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4697fd11-a580-4ad8-b006-9fd26b5e4228]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.788 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:08 np0005532048 NetworkManager[48920]: <info>  [1763803028.7942] device (tapf72c6b7d-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:17:08 np0005532048 NetworkManager[48920]: <info>  [1763803028.7951] device (tapf72c6b7d-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:17:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.802 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa8a244-9a22-45e4-b8f8-31f946c83603]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:08 np0005532048 systemd-machined[215941]: New machine qemu-49-instance-0000002c.
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.821 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.823 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0fba26c0-5e42-4c23-a232-586c63054f7f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:08 np0005532048 systemd[1]: Started Virtual Machine qemu-49-instance-0000002c.
Nov 22 04:17:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:08Z|00387|binding|INFO|Setting lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d ovn-installed in OVS
Nov 22 04:17:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:08Z|00388|binding|INFO|Setting lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d up in Southbound
Nov 22 04:17:08 np0005532048 nova_compute[253661]: 2025-11-22 09:17:08.828 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.882 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[57fd41b6-9ff8-4b35-92ae-82cad1c4ff4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:08 np0005532048 NetworkManager[48920]: <info>  [1763803028.8925] manager: (tapd93e3720-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/170)
Nov 22 04:17:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.892 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[57f54984-d630-4bb3-9379-271a1ac3b097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.931 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1559187b-bf54-4cca-8f1a-07518fc6bae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.935 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe2780a-bbe5-4ed1-9916-7b97249a18f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:08 np0005532048 NetworkManager[48920]: <info>  [1763803028.9638] device (tapd93e3720-b0): carrier: link connected
Nov 22 04:17:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.970 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[877c51d6-48e0-4755-b010-5017fa93bf45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:08.991 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e328b820-0ebc-493f-84a4-5e3b59823101]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587970, 'reachable_time': 44482, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305484, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.013 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[036399fa-c4d0-4c79-8169-e7f30d1fea57]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:9b56'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587970, 'tstamp': 587970}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305485, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.033 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a5811fe9-0f43-4747-9fbc-39f095033cc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587970, 'reachable_time': 44482, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305486, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.071 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[48285153-c526-49db-9187-2ecf2a974979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.078 253665 DEBUG nova.compute.manager [req-23e492f0-fdc4-49c4-80d6-d30749fcfaaf req-4aab6b06-8001-43e6-8954-bd4be36751e8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.078 253665 DEBUG oslo_concurrency.lockutils [req-23e492f0-fdc4-49c4-80d6-d30749fcfaaf req-4aab6b06-8001-43e6-8954-bd4be36751e8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.079 253665 DEBUG oslo_concurrency.lockutils [req-23e492f0-fdc4-49c4-80d6-d30749fcfaaf req-4aab6b06-8001-43e6-8954-bd4be36751e8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.079 253665 DEBUG oslo_concurrency.lockutils [req-23e492f0-fdc4-49c4-80d6-d30749fcfaaf req-4aab6b06-8001-43e6-8954-bd4be36751e8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.079 253665 DEBUG nova.compute.manager [req-23e492f0-fdc4-49c4-80d6-d30749fcfaaf req-4aab6b06-8001-43e6-8954-bd4be36751e8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Processing event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.142 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0d702278-ba03-44ba-9c9c-943aa9e3683a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.144 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.144 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.144 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd93e3720-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.146 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:09 np0005532048 NetworkManager[48920]: <info>  [1763803029.1476] manager: (tapd93e3720-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Nov 22 04:17:09 np0005532048 kernel: tapd93e3720-b0: entered promiscuous mode
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.152 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd93e3720-b0, col_values=(('external_ids', {'iface-id': '956ab441-c5ef-4e3d-a7c6-6129a5260345'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.153 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:09 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:09Z|00389|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.156 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.157 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d07f27e9-8b43-4618-93f3-77890ccc3d8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.158 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:17:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:09.159 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'env', 'PROCESS_TAG=haproxy-d93e3720-b00d-41f5-8283-164e9f857d24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d93e3720-b00d-41f5-8283-164e9f857d24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.164 253665 DEBUG nova.network.neutron [req-0919c9d2-22e0-4692-abbf-4a1f516a7f4a req-d48486ec-13e6-4c15-aa2d-e8e179b47508 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Updated VIF entry in instance network info cache for port f72c6b7d-0ba5-4d25-a08a-e2518c2a479d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.165 253665 DEBUG nova.network.neutron [req-0919c9d2-22e0-4692-abbf-4a1f516a7f4a req-d48486ec-13e6-4c15-aa2d-e8e179b47508 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Updating instance_info_cache with network_info: [{"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.168 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.182 253665 DEBUG oslo_concurrency.lockutils [req-0919c9d2-22e0-4692-abbf-4a1f516a7f4a req-d48486ec-13e6-4c15-aa2d-e8e179b47508 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-8dafc0d0-bd93-4080-b51e-36887936ea66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:17:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:17:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3717109147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.266 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.298 253665 DEBUG nova.storage.rbd_utils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] rbd image 4589c5da-d558-41a1-bf54-30746991be9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.306 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 134 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.432 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803029.432045, 8dafc0d0-bd93-4080-b51e-36887936ea66 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.433 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] VM Started (Lifecycle Event)#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.438 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.443 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.447 253665 INFO nova.virt.libvirt.driver [-] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Instance spawned successfully.#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.448 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.451 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.456 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.467 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.468 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.469 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.470 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.471 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.471 253665 DEBUG nova.virt.libvirt.driver [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.477 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.478 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803029.432413, 8dafc0d0-bd93-4080-b51e-36887936ea66 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.478 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.501 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.516 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803029.4419057, 8dafc0d0-bd93-4080-b51e-36887936ea66 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.517 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.532 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.536 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.548 253665 INFO nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Took 7.29 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.549 253665 DEBUG nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:09 np0005532048 podman[305599]: 2025-11-22 09:17:09.557170266 +0000 UTC m=+0.054009475 container create 6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.561 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:17:09 np0005532048 systemd[1]: Started libpod-conmon-6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889.scope.
Nov 22 04:17:09 np0005532048 podman[305599]: 2025-11-22 09:17:09.526197517 +0000 UTC m=+0.023036746 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:17:09 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.625 253665 INFO nova.compute.manager [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Took 8.23 seconds to build instance.#033[00m
Nov 22 04:17:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ebc4bd6ad0fd49fe8ea4cf8e7431f6025e366defa6658f1cd2635875334b6b7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.635 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.636 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:09 np0005532048 podman[305599]: 2025-11-22 09:17:09.64264704 +0000 UTC m=+0.139486249 container init 6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.645 253665 DEBUG oslo_concurrency.lockutils [None req-d13dc84c-62f4-4206-a27a-6e0cd60cfd5b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:09 np0005532048 podman[305599]: 2025-11-22 09:17:09.650213032 +0000 UTC m=+0.147052241 container start 6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.653 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:17:09 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[305615]: [NOTICE]   (305619) : New worker (305621) forked
Nov 22 04:17:09 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[305615]: [NOTICE]   (305619) : Loading success.
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.729 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.730 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.740 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.741 253665 INFO nova.compute.claims [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:17:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:17:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1816249735' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.826 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.828 253665 DEBUG nova.virt.libvirt.vif [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-293583865',display_name='tempest-ImagesNegativeTestJSON-server-293583865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-293583865',id=45,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ce05fa5ad2745dab1909b0954fb83d6',ramdisk_id='',reservation_id='r-cc0nf0zk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1501914571',owner_user_name='tempest-ImagesNegativeTest
JSON-1501914571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:03Z,user_data=None,user_id='ca3a4f3a44014ad7a069b7dbdffb7c04',uuid=4589c5da-d558-41a1-bf54-30746991be9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.828 253665 DEBUG nova.network.os_vif_util [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Converting VIF {"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.829 253665 DEBUG nova.network.os_vif_util [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:e8:cd,bridge_name='br-int',has_traffic_filtering=True,id=79319cd8-59bd-43b2-a72b-a88f70eb5570,network=Network(e716dd72-0105-4eae-a6f7-a94546350d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79319cd8-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.830 253665 DEBUG nova.objects.instance [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4589c5da-d558-41a1-bf54-30746991be9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.843 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  <uuid>4589c5da-d558-41a1-bf54-30746991be9e</uuid>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  <name>instance-0000002d</name>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <nova:name>tempest-ImagesNegativeTestJSON-server-293583865</nova:name>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:17:08</nova:creationTime>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:17:09 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:        <nova:user uuid="ca3a4f3a44014ad7a069b7dbdffb7c04">tempest-ImagesNegativeTestJSON-1501914571-project-member</nova:user>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:        <nova:project uuid="4ce05fa5ad2745dab1909b0954fb83d6">tempest-ImagesNegativeTestJSON-1501914571</nova:project>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:        <nova:port uuid="79319cd8-59bd-43b2-a72b-a88f70eb5570">
Nov 22 04:17:09 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <entry name="serial">4589c5da-d558-41a1-bf54-30746991be9e</entry>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <entry name="uuid">4589c5da-d558-41a1-bf54-30746991be9e</entry>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/4589c5da-d558-41a1-bf54-30746991be9e_disk">
Nov 22 04:17:09 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:17:09 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/4589c5da-d558-41a1-bf54-30746991be9e_disk.config">
Nov 22 04:17:09 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:17:09 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:14:e8:cd"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <target dev="tap79319cd8-59"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e/console.log" append="off"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:17:09 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:17:09 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:17:09 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:17:09 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.850 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Preparing to wait for external event network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.850 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "4589c5da-d558-41a1-bf54-30746991be9e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.851 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.851 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.852 253665 DEBUG nova.virt.libvirt.vif [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-293583865',display_name='tempest-ImagesNegativeTestJSON-server-293583865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-293583865',id=45,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ce05fa5ad2745dab1909b0954fb83d6',ramdisk_id='',reservation_id='r-cc0nf0zk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1501914571',owner_user_name='tempest-ImagesNe
gativeTestJSON-1501914571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:03Z,user_data=None,user_id='ca3a4f3a44014ad7a069b7dbdffb7c04',uuid=4589c5da-d558-41a1-bf54-30746991be9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.852 253665 DEBUG nova.network.os_vif_util [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Converting VIF {"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.853 253665 DEBUG nova.network.os_vif_util [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:e8:cd,bridge_name='br-int',has_traffic_filtering=True,id=79319cd8-59bd-43b2-a72b-a88f70eb5570,network=Network(e716dd72-0105-4eae-a6f7-a94546350d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79319cd8-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.854 253665 DEBUG os_vif [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:e8:cd,bridge_name='br-int',has_traffic_filtering=True,id=79319cd8-59bd-43b2-a72b-a88f70eb5570,network=Network(e716dd72-0105-4eae-a6f7-a94546350d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79319cd8-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.855 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.855 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.856 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.863 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.864 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79319cd8-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.865 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79319cd8-59, col_values=(('external_ids', {'iface-id': '79319cd8-59bd-43b2-a72b-a88f70eb5570', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:e8:cd', 'vm-uuid': '4589c5da-d558-41a1-bf54-30746991be9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:09 np0005532048 NetworkManager[48920]: <info>  [1763803029.8680] manager: (tap79319cd8-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.874 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.876 253665 INFO os_vif [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:e8:cd,bridge_name='br-int',has_traffic_filtering=True,id=79319cd8-59bd-43b2-a72b-a88f70eb5570,network=Network(e716dd72-0105-4eae-a6f7-a94546350d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79319cd8-59')#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.930 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.932 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.932 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] No VIF found with MAC fa:16:3e:14:e8:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.933 253665 INFO nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Using config drive#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.956 253665 DEBUG nova.storage.rbd_utils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] rbd image 4589c5da-d558-41a1-bf54-30746991be9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:09 np0005532048 nova_compute[253661]: 2025-11-22 09:17:09.965 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.025 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.070 253665 DEBUG nova.network.neutron [req-868f02d5-7d5b-47d6-8bd7-ee1d380ea00e req-c7e3098b-eddc-4aa4-beca-f1d51691e322 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Updated VIF entry in instance network info cache for port 79319cd8-59bd-43b2-a72b-a88f70eb5570. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.082 253665 DEBUG nova.network.neutron [req-868f02d5-7d5b-47d6-8bd7-ee1d380ea00e req-c7e3098b-eddc-4aa4-beca-f1d51691e322 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Updating instance_info_cache with network_info: [{"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.102 253665 DEBUG oslo_concurrency.lockutils [req-868f02d5-7d5b-47d6-8bd7-ee1d380ea00e req-c7e3098b-eddc-4aa4-beca-f1d51691e322 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4589c5da-d558-41a1-bf54-30746991be9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.330 253665 INFO nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Creating config drive at /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e/disk.config#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.335 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgyady1zy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.482 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgyady1zy" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:17:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1385645448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.512 253665 DEBUG nova.storage.rbd_utils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] rbd image 4589c5da-d558-41a1-bf54-30746991be9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.519 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e/disk.config 4589c5da-d558-41a1-bf54-30746991be9e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.554 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.561 253665 DEBUG nova.compute.provider_tree [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.579 253665 DEBUG nova.scheduler.client.report [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.611 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.612 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.663 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.663 253665 DEBUG nova.network.neutron [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.680 253665 INFO nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.695 253665 DEBUG oslo_concurrency.processutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e/disk.config 4589c5da-d558-41a1-bf54-30746991be9e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.696 253665 INFO nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Deleting local config drive /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e/disk.config because it was imported into RBD.#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.703 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:17:10 np0005532048 kernel: tap79319cd8-59: entered promiscuous mode
Nov 22 04:17:10 np0005532048 NetworkManager[48920]: <info>  [1763803030.7636] manager: (tap79319cd8-59): new Tun device (/org/freedesktop/NetworkManager/Devices/173)
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.764 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:10Z|00390|binding|INFO|Claiming lport 79319cd8-59bd-43b2-a72b-a88f70eb5570 for this chassis.
Nov 22 04:17:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:10Z|00391|binding|INFO|79319cd8-59bd-43b2-a72b-a88f70eb5570: Claiming fa:16:3e:14:e8:cd 10.100.0.9
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.767 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:10 np0005532048 systemd-udevd[305460]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:17:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.778 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:e8:cd 10.100.0.9'], port_security=['fa:16:3e:14:e8:cd 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4589c5da-d558-41a1-bf54-30746991be9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e716dd72-0105-4eae-a6f7-a94546350d4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce05fa5ad2745dab1909b0954fb83d6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f974b244-516a-40d5-add2-959691d72108', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83728acd-5828-4942-8843-484dbd1b47c1, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=79319cd8-59bd-43b2-a72b-a88f70eb5570) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.780 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 79319cd8-59bd-43b2-a72b-a88f70eb5570 in datapath e716dd72-0105-4eae-a6f7-a94546350d4d bound to our chassis#033[00m
Nov 22 04:17:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.781 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e716dd72-0105-4eae-a6f7-a94546350d4d#033[00m
Nov 22 04:17:10 np0005532048 NetworkManager[48920]: <info>  [1763803030.7932] device (tap79319cd8-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:17:10 np0005532048 NetworkManager[48920]: <info>  [1763803030.7942] device (tap79319cd8-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.797 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:17:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.799 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9764ad1b-cd28-471a-a837-6ae191bb2475]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.800 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape716dd72-01 in ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.799 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.799 253665 INFO nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Creating image(s)#033[00m
Nov 22 04:17:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.802 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape716dd72-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:17:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.802 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[19fc8905-fcab-48d6-8697-7fc4aff53d70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.803 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c88109b-06c3-4613-9413-c31ec8cbfa08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:10 np0005532048 systemd-machined[215941]: New machine qemu-50-instance-0000002d.
Nov 22 04:17:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.820 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[69d6987c-13dc-4e48-8c48-1c7053fa2f57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:10 np0005532048 systemd[1]: Started Virtual Machine qemu-50-instance-0000002d.
Nov 22 04:17:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:10Z|00392|binding|INFO|Setting lport 79319cd8-59bd-43b2-a72b-a88f70eb5570 ovn-installed in OVS
Nov 22 04:17:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:10Z|00393|binding|INFO|Setting lport 79319cd8-59bd-43b2-a72b-a88f70eb5570 up in Southbound
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.844 253665 DEBUG nova.storage.rbd_utils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image a63d88a0-884c-4328-a21c-6bedf9264f2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.845 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[117fe6ac-60bd-4ec8-a7cd-de59fcf582e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.889 253665 DEBUG nova.storage.rbd_utils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image a63d88a0-884c-4328-a21c-6bedf9264f2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.889 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b9cd455a-91d6-4eba-8413-03d3baf6dc63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:10 np0005532048 NetworkManager[48920]: <info>  [1763803030.8985] manager: (tape716dd72-00): new Veth device (/org/freedesktop/NetworkManager/Devices/174)
Nov 22 04:17:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.897 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e5e318-c5ec-450e-aa48-e94118d04209]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.944 253665 DEBUG nova.storage.rbd_utils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image a63d88a0-884c-4328-a21c-6bedf9264f2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.947 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8eecb07a-8aeb-4101-ae4a-629e062fe921]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.951 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[75d7791d-2778-41ca-a066-e28a065734cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.952 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:10 np0005532048 NetworkManager[48920]: <info>  [1763803030.9789] device (tape716dd72-00): carrier: link connected
Nov 22 04:17:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:10.988 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[23f71793-1d1a-4948-8c72-f7cd71c5cc1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.993 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:10 np0005532048 nova_compute[253661]: 2025-11-22 09:17:10.998 253665 DEBUG nova.policy [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5ae8af2cc9f40e083473a191ddd445f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.008 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[90c6eca2-2241-4a1a-82ad-11e8fad8b4a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape716dd72-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:a7:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588171, 'reachable_time': 23987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305798, 'error': None, 'target': 'ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.019 253665 DEBUG nova.compute.manager [req-b3bbc8f3-fb98-4c03-89f0-d509424726d0 req-1ef92268-6775-4dc9-bf05-3d278ed68b9b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received event network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.020 253665 DEBUG oslo_concurrency.lockutils [req-b3bbc8f3-fb98-4c03-89f0-d509424726d0 req-1ef92268-6775-4dc9-bf05-3d278ed68b9b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4589c5da-d558-41a1-bf54-30746991be9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.020 253665 DEBUG oslo_concurrency.lockutils [req-b3bbc8f3-fb98-4c03-89f0-d509424726d0 req-1ef92268-6775-4dc9-bf05-3d278ed68b9b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.020 253665 DEBUG oslo_concurrency.lockutils [req-b3bbc8f3-fb98-4c03-89f0-d509424726d0 req-1ef92268-6775-4dc9-bf05-3d278ed68b9b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.021 253665 DEBUG nova.compute.manager [req-b3bbc8f3-fb98-4c03-89f0-d509424726d0 req-1ef92268-6775-4dc9-bf05-3d278ed68b9b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Processing event network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.027 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e95e75-4859-4e22-9bde-6dc30eb749de]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:a77c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588171, 'tstamp': 588171}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305799, 'error': None, 'target': 'ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.041 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.042 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.042 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.043 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.044 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7933775e-769a-4507-a214-e529a8cb63f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape716dd72-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:a7:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588171, 'reachable_time': 23987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305800, 'error': None, 'target': 'ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.069 253665 DEBUG nova.storage.rbd_utils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image a63d88a0-884c-4328-a21c-6bedf9264f2e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.076 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a63d88a0-884c-4328-a21c-6bedf9264f2e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.082 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[632edd66-1760-4bd4-ad0f-a7461d8d32a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.146 253665 DEBUG nova.compute.manager [req-9b52bee1-be3a-43e2-bc94-f299be711ba7 req-5d67952d-0ff3-4136-bdd7-cafb31d5c424 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.147 253665 DEBUG oslo_concurrency.lockutils [req-9b52bee1-be3a-43e2-bc94-f299be711ba7 req-5d67952d-0ff3-4136-bdd7-cafb31d5c424 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.148 253665 DEBUG oslo_concurrency.lockutils [req-9b52bee1-be3a-43e2-bc94-f299be711ba7 req-5d67952d-0ff3-4136-bdd7-cafb31d5c424 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.148 253665 DEBUG oslo_concurrency.lockutils [req-9b52bee1-be3a-43e2-bc94-f299be711ba7 req-5d67952d-0ff3-4136-bdd7-cafb31d5c424 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.148 253665 DEBUG nova.compute.manager [req-9b52bee1-be3a-43e2-bc94-f299be711ba7 req-5d67952d-0ff3-4136-bdd7-cafb31d5c424 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] No waiting events found dispatching network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.149 253665 WARNING nova.compute.manager [req-9b52bee1-be3a-43e2-bc94-f299be711ba7 req-5d67952d-0ff3-4136-bdd7-cafb31d5c424 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received unexpected event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for instance with vm_state active and task_state None.#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.158 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[179266bc-5ea9-4d54-8a72-0dbeb2866ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.161 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape716dd72-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.162 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.164 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape716dd72-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:11 np0005532048 kernel: tape716dd72-00: entered promiscuous mode
Nov 22 04:17:11 np0005532048 NetworkManager[48920]: <info>  [1763803031.1671] manager: (tape716dd72-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.169 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.172 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape716dd72-00, col_values=(('external_ids', {'iface-id': 'a6bc6da3-64a7-49d4-9920-2ddf5959e212'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:11Z|00394|binding|INFO|Releasing lport a6bc6da3-64a7-49d4-9920-2ddf5959e212 from this chassis (sb_readonly=0)
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.174 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.177 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e716dd72-0105-4eae-a6f7-a94546350d4d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e716dd72-0105-4eae-a6f7-a94546350d4d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.178 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc263ffc-bbdf-4eb1-b330-6acb9228cd95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.179 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-e716dd72-0105-4eae-a6f7-a94546350d4d
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/e716dd72-0105-4eae-a6f7-a94546350d4d.pid.haproxy
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID e716dd72-0105-4eae-a6f7-a94546350d4d
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.181 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d', 'env', 'PROCESS_TAG=haproxy-e716dd72-0105-4eae-a6f7-a94546350d4d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e716dd72-0105-4eae-a6f7-a94546350d4d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.190 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 134 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.391 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.392 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.392 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.392 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.393 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.395 253665 INFO nova.compute.manager [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Terminating instance#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.396 253665 DEBUG nova.compute.manager [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.427 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a63d88a0-884c-4328-a21c-6bedf9264f2e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:11 np0005532048 kernel: tapf72c6b7d-0b (unregistering): left promiscuous mode
Nov 22 04:17:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:11 np0005532048 NetworkManager[48920]: <info>  [1763803031.4426] device (tapf72c6b7d-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:17:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:11Z|00395|binding|INFO|Releasing lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d from this chassis (sb_readonly=0)
Nov 22 04:17:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:11Z|00396|binding|INFO|Setting lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d down in Southbound
Nov 22 04:17:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:11Z|00397|binding|INFO|Removing iface tapf72c6b7d-0b ovn-installed in OVS
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.458 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:d5:8c 10.100.0.11'], port_security=['fa:16:3e:73:d5:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8dafc0d0-bd93-4080-b51e-36887936ea66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.462 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.499 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:11 np0005532048 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Nov 22 04:17:11 np0005532048 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Consumed 2.519s CPU time.
Nov 22 04:17:11 np0005532048 systemd-machined[215941]: Machine qemu-49-instance-0000002c terminated.
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.508 253665 DEBUG nova.storage.rbd_utils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] resizing rbd image a63d88a0-884c-4328-a21c-6bedf9264f2e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.551 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803031.5505958, 4589c5da-d558-41a1-bf54-30746991be9e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.551 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] VM Started (Lifecycle Event)#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.554 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.560 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.565 253665 INFO nova.virt.libvirt.driver [-] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Instance spawned successfully.#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.565 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.570 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.580 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:17:11 np0005532048 kernel: tapf72c6b7d-0b: entered promiscuous mode
Nov 22 04:17:11 np0005532048 NetworkManager[48920]: <info>  [1763803031.6228] manager: (tapf72c6b7d-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/176)
Nov 22 04:17:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:11Z|00398|binding|INFO|Claiming lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for this chassis.
Nov 22 04:17:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:11Z|00399|binding|INFO|f72c6b7d-0ba5-4d25-a08a-e2518c2a479d: Claiming fa:16:3e:73:d5:8c 10.100.0.11
Nov 22 04:17:11 np0005532048 kernel: tapf72c6b7d-0b (unregistering): left promiscuous mode
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.632 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.632 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:d5:8c 10.100.0.11'], port_security=['fa:16:3e:73:d5:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8dafc0d0-bd93-4080-b51e-36887936ea66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.638 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.639 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803031.5536268, 4589c5da-d558-41a1-bf54-30746991be9e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.639 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:17:11 np0005532048 podman[305970]: 2025-11-22 09:17:11.645278784 +0000 UTC m=+0.070289568 container create 8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 04:17:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:11Z|00400|binding|INFO|Setting lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d ovn-installed in OVS
Nov 22 04:17:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:11Z|00401|binding|INFO|Setting lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d up in Southbound
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.660 253665 DEBUG nova.objects.instance [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lazy-loading 'migration_context' on Instance uuid a63d88a0-884c-4328-a21c-6bedf9264f2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:11Z|00402|binding|INFO|Releasing lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d from this chassis (sb_readonly=0)
Nov 22 04:17:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:11Z|00403|binding|INFO|Setting lport f72c6b7d-0ba5-4d25-a08a-e2518c2a479d down in Southbound
Nov 22 04:17:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:11Z|00404|binding|INFO|Removing iface tapf72c6b7d-0b ovn-installed in OVS
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.664 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.666 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.668 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.668 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.669 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.669 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.669 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.670 253665 DEBUG nova.virt.libvirt.driver [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.670 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:d5:8c 10.100.0.11'], port_security=['fa:16:3e:73:d5:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8dafc0d0-bd93-4080-b51e-36887936ea66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.681 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803031.5600092, 4589c5da-d558-41a1-bf54-30746991be9e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.682 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.687 253665 INFO nova.virt.libvirt.driver [-] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Instance destroyed successfully.#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.687 253665 DEBUG nova.objects.instance [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'resources' on Instance uuid 8dafc0d0-bd93-4080-b51e-36887936ea66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.691 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:11 np0005532048 systemd[1]: Started libpod-conmon-8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47.scope.
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.697 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.697 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Ensure instance console log exists: /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.698 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.698 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.699 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:11 np0005532048 podman[305970]: 2025-11-22 09:17:11.604835308 +0000 UTC m=+0.029846112 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.718 253665 DEBUG nova.virt.libvirt.vif [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1053734599',display_name='tempest-DeleteServersTestJSON-server-1053734599',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1053734599',id=44,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-w9qj9zgo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:09Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=8dafc0d0-bd93-4080-b51e-36887936ea66,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.719 253665 DEBUG nova.network.os_vif_util [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "address": "fa:16:3e:73:d5:8c", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72c6b7d-0b", "ovs_interfaceid": "f72c6b7d-0ba5-4d25-a08a-e2518c2a479d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:11 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.721 253665 DEBUG nova.network.os_vif_util [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:d5:8c,bridge_name='br-int',has_traffic_filtering=True,id=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72c6b7d-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.722 253665 DEBUG os_vif [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:d5:8c,bridge_name='br-int',has_traffic_filtering=True,id=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72c6b7d-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.725 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f3cb4a182fbe93ee3d0d58357102266c7d0274a0d3457d7bc4d58a8aed79f9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.726 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.726 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf72c6b7d-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.731 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.734 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.737 253665 INFO os_vif [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:d5:8c,bridge_name='br-int',has_traffic_filtering=True,id=f72c6b7d-0ba5-4d25-a08a-e2518c2a479d,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72c6b7d-0b')#033[00m
Nov 22 04:17:11 np0005532048 podman[305970]: 2025-11-22 09:17:11.742670055 +0000 UTC m=+0.167680859 container init 8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:17:11 np0005532048 podman[305970]: 2025-11-22 09:17:11.750001743 +0000 UTC m=+0.175012527 container start 8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.761 253665 INFO nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Took 8.38 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.762 253665 DEBUG nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.763 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:17:11 np0005532048 neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d[306005]: [NOTICE]   (306016) : New worker (306026) forked
Nov 22 04:17:11 np0005532048 neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d[306005]: [NOTICE]   (306016) : Loading success.
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.797 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.831 253665 INFO nova.compute.manager [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Took 9.31 seconds to build instance.#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.837 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f72c6b7d-0ba5-4d25-a08a-e2518c2a479d in datapath d93e3720-b00d-41f5-8283-164e9f857d24 unbound from our chassis#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.839 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d93e3720-b00d-41f5-8283-164e9f857d24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.840 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[564b2cb7-28e2-4f3b-b50d-a3a4f15b3a0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:11.840 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace which is not needed anymore#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.860 253665 DEBUG oslo_concurrency.lockutils [None req-e69b4d93-44a9-4910-b834-51426e9e8d29 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:11 np0005532048 nova_compute[253661]: 2025-11-22 09:17:11.866 253665 DEBUG nova.network.neutron [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Successfully created port: 1f84d052-9d22-469d-b43d-259c9b54bcaf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:17:11 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[305615]: [NOTICE]   (305619) : haproxy version is 2.8.14-c23fe91
Nov 22 04:17:11 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[305615]: [NOTICE]   (305619) : path to executable is /usr/sbin/haproxy
Nov 22 04:17:11 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[305615]: [WARNING]  (305619) : Exiting Master process...
Nov 22 04:17:11 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[305615]: [WARNING]  (305619) : Exiting Master process...
Nov 22 04:17:11 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[305615]: [ALERT]    (305619) : Current worker (305621) exited with code 143 (Terminated)
Nov 22 04:17:11 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[305615]: [WARNING]  (305619) : All workers exited. Exiting... (0)
Nov 22 04:17:11 np0005532048 systemd[1]: libpod-6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889.scope: Deactivated successfully.
Nov 22 04:17:12 np0005532048 podman[306055]: 2025-11-22 09:17:12.000467161 +0000 UTC m=+0.045900430 container died 6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:17:12 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889-userdata-shm.mount: Deactivated successfully.
Nov 22 04:17:12 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5ebc4bd6ad0fd49fe8ea4cf8e7431f6025e366defa6658f1cd2635875334b6b7-merged.mount: Deactivated successfully.
Nov 22 04:17:12 np0005532048 podman[306055]: 2025-11-22 09:17:12.062610501 +0000 UTC m=+0.108043770 container cleanup 6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 04:17:12 np0005532048 systemd[1]: libpod-conmon-6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889.scope: Deactivated successfully.
Nov 22 04:17:12 np0005532048 podman[306086]: 2025-11-22 09:17:12.134487336 +0000 UTC m=+0.048232645 container remove 6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 04:17:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.142 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[10c0dfa0-1b58-4b13-bbfd-5a6e9e46be79]: (4, ('Sat Nov 22 09:17:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889)\n6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889\nSat Nov 22 09:17:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889)\n6e20d07acee02399d931f72110e30ef2b444ce4a8cb13ba05c3c5c12b8e08889\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.144 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8e4714aa-9c29-48db-8137-783879aa229e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.146 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:12 np0005532048 nova_compute[253661]: 2025-11-22 09:17:12.148 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:12 np0005532048 kernel: tapd93e3720-b0: left promiscuous mode
Nov 22 04:17:12 np0005532048 nova_compute[253661]: 2025-11-22 09:17:12.152 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.156 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b17058-f9e1-4092-92ad-0fb987ba8f4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:12 np0005532048 nova_compute[253661]: 2025-11-22 09:17:12.174 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.172 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[392593cd-1d72-4ad0-949e-77f556043921]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.177 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[22f6cf61-d123-4ea5-b57a-618d8284d824]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.199 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1a22d1-774b-40c8-9700-4e7dcdc98b5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587960, 'reachable_time': 32377, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306099, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:12 np0005532048 systemd[1]: run-netns-ovnmeta\x2dd93e3720\x2db00d\x2d41f5\x2d8283\x2d164e9f857d24.mount: Deactivated successfully.
Nov 22 04:17:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.203 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:17:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.203 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[34bbd71f-7179-487b-b518-92cb014fd289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.204 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f72c6b7d-0ba5-4d25-a08a-e2518c2a479d in datapath d93e3720-b00d-41f5-8283-164e9f857d24 unbound from our chassis#033[00m
Nov 22 04:17:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.206 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d93e3720-b00d-41f5-8283-164e9f857d24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:17:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.207 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[39f41eb2-31bd-4f5e-9e7e-2b617a3b6016]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.207 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f72c6b7d-0ba5-4d25-a08a-e2518c2a479d in datapath d93e3720-b00d-41f5-8283-164e9f857d24 unbound from our chassis#033[00m
Nov 22 04:17:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.209 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d93e3720-b00d-41f5-8283-164e9f857d24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:17:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:12.209 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8b43550d-ce6a-406a-90fa-fc68efdff3dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:12 np0005532048 nova_compute[253661]: 2025-11-22 09:17:12.224 253665 INFO nova.virt.libvirt.driver [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Deleting instance files /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66_del#033[00m
Nov 22 04:17:12 np0005532048 nova_compute[253661]: 2025-11-22 09:17:12.224 253665 INFO nova.virt.libvirt.driver [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Deletion of /var/lib/nova/instances/8dafc0d0-bd93-4080-b51e-36887936ea66_del complete#033[00m
Nov 22 04:17:12 np0005532048 nova_compute[253661]: 2025-11-22 09:17:12.275 253665 INFO nova.compute.manager [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:17:12 np0005532048 nova_compute[253661]: 2025-11-22 09:17:12.275 253665 DEBUG oslo.service.loopingcall [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:17:12 np0005532048 nova_compute[253661]: 2025-11-22 09:17:12.276 253665 DEBUG nova.compute.manager [-] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:17:12 np0005532048 nova_compute[253661]: 2025-11-22 09:17:12.277 253665 DEBUG nova.network.neutron [-] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:17:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:17:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1467955315' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:17:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:17:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1467955315' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:17:12 np0005532048 nova_compute[253661]: 2025-11-22 09:17:12.827 253665 DEBUG nova.network.neutron [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Successfully created port: b12fa008-cd82-4cf3-8abd-a89e90fb9e4c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.283 253665 DEBUG nova.network.neutron [-] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.306 253665 INFO nova.compute.manager [-] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Took 1.03 seconds to deallocate network for instance.#033[00m
Nov 22 04:17:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 146 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 809 KiB/s rd, 3.9 MiB/s wr, 109 op/s
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.361 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.362 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.453 253665 DEBUG oslo_concurrency.processutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.516 253665 DEBUG nova.compute.manager [req-76b45607-b44c-4569-89b6-313684852bd8 req-50f5ece6-8be8-4415-a7f4-fe4f86b83912 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received event network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.517 253665 DEBUG oslo_concurrency.lockutils [req-76b45607-b44c-4569-89b6-313684852bd8 req-50f5ece6-8be8-4415-a7f4-fe4f86b83912 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4589c5da-d558-41a1-bf54-30746991be9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.517 253665 DEBUG oslo_concurrency.lockutils [req-76b45607-b44c-4569-89b6-313684852bd8 req-50f5ece6-8be8-4415-a7f4-fe4f86b83912 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.518 253665 DEBUG oslo_concurrency.lockutils [req-76b45607-b44c-4569-89b6-313684852bd8 req-50f5ece6-8be8-4415-a7f4-fe4f86b83912 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.518 253665 DEBUG nova.compute.manager [req-76b45607-b44c-4569-89b6-313684852bd8 req-50f5ece6-8be8-4415-a7f4-fe4f86b83912 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] No waiting events found dispatching network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.518 253665 WARNING nova.compute.manager [req-76b45607-b44c-4569-89b6-313684852bd8 req-50f5ece6-8be8-4415-a7f4-fe4f86b83912 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received unexpected event network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.594 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-unplugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.595 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.595 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.595 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.596 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] No waiting events found dispatching network-vif-unplugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.596 253665 WARNING nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received unexpected event network-vif-unplugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.596 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.597 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.597 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.597 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.597 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] No waiting events found dispatching network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.598 253665 WARNING nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received unexpected event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.598 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.598 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.599 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.599 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.599 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] No waiting events found dispatching network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.599 253665 WARNING nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received unexpected event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.600 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.600 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.600 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.601 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.601 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] No waiting events found dispatching network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.601 253665 WARNING nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received unexpected event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.601 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-unplugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.602 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.602 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.602 253665 DEBUG oslo_concurrency.lockutils [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.603 253665 DEBUG nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] No waiting events found dispatching network-vif-unplugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.603 253665 WARNING nova.compute.manager [req-10ff0391-598b-41d1-830b-5f6a9f4e4dd6 req-c293e8f9-0941-4a1c-9575-8e5975b02d8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received unexpected event network-vif-unplugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.721 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "4589c5da-d558-41a1-bf54-30746991be9e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.722 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.722 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "4589c5da-d558-41a1-bf54-30746991be9e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.723 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.723 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.725 253665 INFO nova.compute.manager [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Terminating instance#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.726 253665 DEBUG nova.compute.manager [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:17:13 np0005532048 kernel: tap79319cd8-59 (unregistering): left promiscuous mode
Nov 22 04:17:13 np0005532048 NetworkManager[48920]: <info>  [1763803033.7771] device (tap79319cd8-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:17:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:13Z|00405|binding|INFO|Releasing lport 79319cd8-59bd-43b2-a72b-a88f70eb5570 from this chassis (sb_readonly=0)
Nov 22 04:17:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:13Z|00406|binding|INFO|Setting lport 79319cd8-59bd-43b2-a72b-a88f70eb5570 down in Southbound
Nov 22 04:17:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:13Z|00407|binding|INFO|Removing iface tap79319cd8-59 ovn-installed in OVS
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.800 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:13.809 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:e8:cd 10.100.0.9'], port_security=['fa:16:3e:14:e8:cd 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4589c5da-d558-41a1-bf54-30746991be9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e716dd72-0105-4eae-a6f7-a94546350d4d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ce05fa5ad2745dab1909b0954fb83d6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f974b244-516a-40d5-add2-959691d72108', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83728acd-5828-4942-8843-484dbd1b47c1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=79319cd8-59bd-43b2-a72b-a88f70eb5570) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:13.812 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 79319cd8-59bd-43b2-a72b-a88f70eb5570 in datapath e716dd72-0105-4eae-a6f7-a94546350d4d unbound from our chassis#033[00m
Nov 22 04:17:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:13.814 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e716dd72-0105-4eae-a6f7-a94546350d4d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:17:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:13.815 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d682f7b0-c69b-482f-a4d1-605341999053]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:13.816 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d namespace which is not needed anymore#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.824 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:13 np0005532048 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Nov 22 04:17:13 np0005532048 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002d.scope: Consumed 2.867s CPU time.
Nov 22 04:17:13 np0005532048 systemd-machined[215941]: Machine qemu-50-instance-0000002d terminated.
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.955 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:13 np0005532048 neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d[306005]: [NOTICE]   (306016) : haproxy version is 2.8.14-c23fe91
Nov 22 04:17:13 np0005532048 neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d[306005]: [NOTICE]   (306016) : path to executable is /usr/sbin/haproxy
Nov 22 04:17:13 np0005532048 neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d[306005]: [WARNING]  (306016) : Exiting Master process...
Nov 22 04:17:13 np0005532048 neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d[306005]: [ALERT]    (306016) : Current worker (306026) exited with code 143 (Terminated)
Nov 22 04:17:13 np0005532048 neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d[306005]: [WARNING]  (306016) : All workers exited. Exiting... (0)
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:17:13 np0005532048 systemd[1]: libpod-8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47.scope: Deactivated successfully.
Nov 22 04:17:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/114348137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:13 np0005532048 podman[306146]: 2025-11-22 09:17:13.966214305 +0000 UTC m=+0.054332383 container died 8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.977 253665 INFO nova.virt.libvirt.driver [-] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Instance destroyed successfully.#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.978 253665 DEBUG nova.objects.instance [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lazy-loading 'resources' on Instance uuid 4589c5da-d558-41a1-bf54-30746991be9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:13 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.987 253665 DEBUG oslo_concurrency.processutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:14 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47-userdata-shm.mount: Deactivated successfully.
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.994 253665 DEBUG nova.virt.libvirt.vif [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-293583865',display_name='tempest-ImagesNegativeTestJSON-server-293583865',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-293583865',id=45,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ce05fa5ad2745dab1909b0954fb83d6',ramdisk_id='',reservation_id='r-cc0nf0zk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-1501914571',owner_user_name='tempest-ImagesNegativeTestJSON-1501914571-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:11Z,user_data=None,user_id='ca3a4f3a44014ad7a069b7dbdffb7c04',uuid=4589c5da-d558-41a1-bf54-30746991be9e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.995 253665 DEBUG nova.network.os_vif_util [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Converting VIF {"id": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "address": "fa:16:3e:14:e8:cd", "network": {"id": "e716dd72-0105-4eae-a6f7-a94546350d4d", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-134401430-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ce05fa5ad2745dab1909b0954fb83d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79319cd8-59", "ovs_interfaceid": "79319cd8-59bd-43b2-a72b-a88f70eb5570", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.997 253665 DEBUG nova.network.os_vif_util [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:e8:cd,bridge_name='br-int',has_traffic_filtering=True,id=79319cd8-59bd-43b2-a72b-a88f70eb5570,network=Network(e716dd72-0105-4eae-a6f7-a94546350d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79319cd8-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:13.997 253665 DEBUG os_vif [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:e8:cd,bridge_name='br-int',has_traffic_filtering=True,id=79319cd8-59bd-43b2-a72b-a88f70eb5570,network=Network(e716dd72-0105-4eae-a6f7-a94546350d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79319cd8-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.001 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.002 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79319cd8-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:14 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1f3cb4a182fbe93ee3d0d58357102266c7d0274a0d3457d7bc4d58a8aed79f9e-merged.mount: Deactivated successfully.
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.004 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.012 253665 DEBUG nova.network.neutron [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Successfully updated port: 1f84d052-9d22-469d-b43d-259c9b54bcaf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.014 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.018 253665 DEBUG nova.compute.provider_tree [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.021 253665 INFO os_vif [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:e8:cd,bridge_name='br-int',has_traffic_filtering=True,id=79319cd8-59bd-43b2-a72b-a88f70eb5570,network=Network(e716dd72-0105-4eae-a6f7-a94546350d4d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79319cd8-59')#033[00m
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.038 253665 DEBUG nova.scheduler.client.report [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:17:14 np0005532048 podman[306146]: 2025-11-22 09:17:14.047893287 +0000 UTC m=+0.136011355 container cleanup 8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:17:14 np0005532048 systemd[1]: libpod-conmon-8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47.scope: Deactivated successfully.
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.062 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.100 253665 INFO nova.scheduler.client.report [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Deleted allocations for instance 8dafc0d0-bd93-4080-b51e-36887936ea66#033[00m
Nov 22 04:17:14 np0005532048 podman[306198]: 2025-11-22 09:17:14.149331176 +0000 UTC m=+0.075484494 container remove 8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:17:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.156 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa336cc-3a70-4a86-83dd-aefb393046c0]: (4, ('Sat Nov 22 09:17:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d (8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47)\n8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47\nSat Nov 22 09:17:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d (8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47)\n8af527868436682f2ec45f82497284b50c79528475f6e0dcff8cb5ebd472ea47\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.158 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4af1dfe5-5489-4008-96bb-df655f548bd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.159 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape716dd72-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:14 np0005532048 kernel: tape716dd72-00: left promiscuous mode
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.162 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.172 253665 DEBUG oslo_concurrency.lockutils [None req-2e992ee2-72c2-4f79-b50f-df32044003fd 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.177 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.181 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[072f18b1-7370-4758-b76c-c89e5b45b5db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.195 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[16cde236-ba3e-40f0-9be6-7560ecc35560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.197 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06f2d6f2-a53c-4f96-8385-1aa72d1294a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.214 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea4c2d8-f35d-4ce2-817f-900253768de3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588162, 'reachable_time': 33661, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306216, 'error': None, 'target': 'ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.217 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e716dd72-0105-4eae-a6f7-a94546350d4d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:17:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:14.217 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[2460ee48-9d41-48a5-a160-40d86befe3c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:14 np0005532048 systemd[1]: run-netns-ovnmeta\x2de716dd72\x2d0105\x2d4eae\x2da6f7\x2da94546350d4d.mount: Deactivated successfully.
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.902 253665 DEBUG nova.network.neutron [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Successfully updated port: b12fa008-cd82-4cf3-8abd-a89e90fb9e4c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.921 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.922 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquired lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.922 253665 DEBUG nova.network.neutron [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:17:14 np0005532048 nova_compute[253661]: 2025-11-22 09:17:14.977 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:15 np0005532048 nova_compute[253661]: 2025-11-22 09:17:15.100 253665 DEBUG nova.network.neutron [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:17:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 159 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.3 MiB/s wr, 204 op/s
Nov 22 04:17:15 np0005532048 nova_compute[253661]: 2025-11-22 09:17:15.374 253665 INFO nova.virt.libvirt.driver [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Deleting instance files /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e_del#033[00m
Nov 22 04:17:15 np0005532048 nova_compute[253661]: 2025-11-22 09:17:15.375 253665 INFO nova.virt.libvirt.driver [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Deletion of /var/lib/nova/instances/4589c5da-d558-41a1-bf54-30746991be9e_del complete#033[00m
Nov 22 04:17:15 np0005532048 nova_compute[253661]: 2025-11-22 09:17:15.432 253665 INFO nova.compute.manager [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Took 1.71 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:17:15 np0005532048 nova_compute[253661]: 2025-11-22 09:17:15.433 253665 DEBUG oslo.service.loopingcall [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:17:15 np0005532048 nova_compute[253661]: 2025-11-22 09:17:15.434 253665 DEBUG nova.compute.manager [-] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:17:15 np0005532048 nova_compute[253661]: 2025-11-22 09:17:15.434 253665 DEBUG nova.network.neutron [-] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.102 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-deleted-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.103 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received event network-vif-unplugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.103 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4589c5da-d558-41a1-bf54-30746991be9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.104 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.104 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.104 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] No waiting events found dispatching network-vif-unplugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.104 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received event network-vif-unplugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.104 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-changed-1f84d052-9d22-469d-b43d-259c9b54bcaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.105 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Refreshing instance network info cache due to event network-changed-1f84d052-9d22-469d-b43d-259c9b54bcaf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.105 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.167 253665 DEBUG nova.compute.manager [req-be4b4f81-5862-4ecb-9abe-dc36d7770622 req-faf8c701-ca80-48e1-825e-2643d6908d0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.168 253665 DEBUG oslo_concurrency.lockutils [req-be4b4f81-5862-4ecb-9abe-dc36d7770622 req-faf8c701-ca80-48e1-825e-2643d6908d0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.168 253665 DEBUG oslo_concurrency.lockutils [req-be4b4f81-5862-4ecb-9abe-dc36d7770622 req-faf8c701-ca80-48e1-825e-2643d6908d0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.168 253665 DEBUG oslo_concurrency.lockutils [req-be4b4f81-5862-4ecb-9abe-dc36d7770622 req-faf8c701-ca80-48e1-825e-2643d6908d0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8dafc0d0-bd93-4080-b51e-36887936ea66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.169 253665 DEBUG nova.compute.manager [req-be4b4f81-5862-4ecb-9abe-dc36d7770622 req-faf8c701-ca80-48e1-825e-2643d6908d0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] No waiting events found dispatching network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.169 253665 WARNING nova.compute.manager [req-be4b4f81-5862-4ecb-9abe-dc36d7770622 req-faf8c701-ca80-48e1-825e-2643d6908d0d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Received unexpected event network-vif-plugged-f72c6b7d-0ba5-4d25-a08a-e2518c2a479d for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:17:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.670 253665 DEBUG nova.network.neutron [-] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.682 253665 INFO nova.compute.manager [-] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Took 1.25 seconds to deallocate network for instance.#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.722 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.722 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:16 np0005532048 nova_compute[253661]: 2025-11-22 09:17:16.787 253665 DEBUG oslo_concurrency.processutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.127 253665 DEBUG nova.network.neutron [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Updating instance_info_cache with network_info: [{"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.144 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Releasing lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.145 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Instance network_info: |[{"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.146 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.146 253665 DEBUG nova.network.neutron [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Refreshing network info cache for port 1f84d052-9d22-469d-b43d-259c9b54bcaf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.152 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Start _get_guest_xml network_info=[{"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.160 253665 WARNING nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.171 253665 DEBUG nova.virt.libvirt.host [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.172 253665 DEBUG nova.virt.libvirt.host [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.179 253665 DEBUG nova.virt.libvirt.host [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.180 253665 DEBUG nova.virt.libvirt.host [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.181 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.181 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.181 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.182 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.182 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.182 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.182 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.182 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.183 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.183 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.183 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.183 253665 DEBUG nova.virt.hardware [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.186 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:17:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2961703626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.254 253665 DEBUG oslo_concurrency.processutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.261 253665 DEBUG nova.compute.provider_tree [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.276 253665 DEBUG nova.scheduler.client.report [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.299 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.330 253665 INFO nova.scheduler.client.report [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Deleted allocations for instance 4589c5da-d558-41a1-bf54-30746991be9e#033[00m
Nov 22 04:17:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 114 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.0 MiB/s wr, 243 op/s
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.386 253665 DEBUG oslo_concurrency.lockutils [None req-fde20df7-fb25-46ca-89d8-fb77646637e3 ca3a4f3a44014ad7a069b7dbdffb7c04 4ce05fa5ad2745dab1909b0954fb83d6 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:17:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1151482682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.690 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.718 253665 DEBUG nova.storage.rbd_utils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image a63d88a0-884c-4328-a21c-6bedf9264f2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:17 np0005532048 nova_compute[253661]: 2025-11-22 09:17:17.723 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:17:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2202826143' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.227 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.229 253665 DEBUG nova.virt.libvirt.vif [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1847576078',display_name='tempest-ServersTestMultiNic-server-1847576078',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1847576078',id=46,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-wpglyrxg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:10Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=a63d88a0-884c-4328-a21c-6bedf9264f2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.229 253665 DEBUG nova.network.os_vif_util [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.230 253665 DEBUG nova.network.os_vif_util [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:e5:b4,bridge_name='br-int',has_traffic_filtering=True,id=1f84d052-9d22-469d-b43d-259c9b54bcaf,network=Network(843f0308-8d5e-40fc-9082-c0a02b73f832),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f84d052-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.231 253665 DEBUG nova.virt.libvirt.vif [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1847576078',display_name='tempest-ServersTestMultiNic-server-1847576078',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1847576078',id=46,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-wpglyrxg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:10Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=a63d88a0-884c-4328-a21c-6bedf9264f2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.231 253665 DEBUG nova.network.os_vif_util [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.232 253665 DEBUG nova.network.os_vif_util [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:be:fa,bridge_name='br-int',has_traffic_filtering=True,id=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c,network=Network(5d251c03-e62d-4f4a-933e-92ba86d2f7be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb12fa008-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.233 253665 DEBUG nova.objects.instance [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid a63d88a0-884c-4328-a21c-6bedf9264f2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.249 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  <uuid>a63d88a0-884c-4328-a21c-6bedf9264f2e</uuid>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  <name>instance-0000002e</name>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersTestMultiNic-server-1847576078</nova:name>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:17:17</nova:creationTime>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:        <nova:user uuid="c5ae8af2cc9f40e083473a191ddd445f">tempest-ServersTestMultiNic-1064785551-project-member</nova:user>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:        <nova:project uuid="2d156ca65e214b4aacdf111fd47dc4f6">tempest-ServersTestMultiNic-1064785551</nova:project>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:        <nova:port uuid="1f84d052-9d22-469d-b43d-259c9b54bcaf">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.82" ipVersion="4"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:        <nova:port uuid="b12fa008-cd82-4cf3-8abd-a89e90fb9e4c">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.1.147" ipVersion="4"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <entry name="serial">a63d88a0-884c-4328-a21c-6bedf9264f2e</entry>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <entry name="uuid">a63d88a0-884c-4328-a21c-6bedf9264f2e</entry>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/a63d88a0-884c-4328-a21c-6bedf9264f2e_disk">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/a63d88a0-884c-4328-a21c-6bedf9264f2e_disk.config">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:ca:e5:b4"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <target dev="tap1f84d052-9d"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:f9:be:fa"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <target dev="tapb12fa008-cd"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e/console.log" append="off"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:17:18 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:17:18 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:17:18 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:17:18 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.251 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Preparing to wait for external event network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.251 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.252 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.252 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.252 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Preparing to wait for external event network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.253 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.253 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.253 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.254 253665 DEBUG nova.virt.libvirt.vif [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1847576078',display_name='tempest-ServersTestMultiNic-server-1847576078',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1847576078',id=46,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-wpglyrxg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiN
ic-1064785551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:10Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=a63d88a0-884c-4328-a21c-6bedf9264f2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.254 253665 DEBUG nova.network.os_vif_util [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.255 253665 DEBUG nova.network.os_vif_util [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:e5:b4,bridge_name='br-int',has_traffic_filtering=True,id=1f84d052-9d22-469d-b43d-259c9b54bcaf,network=Network(843f0308-8d5e-40fc-9082-c0a02b73f832),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f84d052-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.256 253665 DEBUG os_vif [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:e5:b4,bridge_name='br-int',has_traffic_filtering=True,id=1f84d052-9d22-469d-b43d-259c9b54bcaf,network=Network(843f0308-8d5e-40fc-9082-c0a02b73f832),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f84d052-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.256 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.257 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.258 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.262 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.263 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f84d052-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.264 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1f84d052-9d, col_values=(('external_ids', {'iface-id': '1f84d052-9d22-469d-b43d-259c9b54bcaf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ca:e5:b4', 'vm-uuid': 'a63d88a0-884c-4328-a21c-6bedf9264f2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:18 np0005532048 NetworkManager[48920]: <info>  [1763803038.2680] manager: (tap1f84d052-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.268 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.274 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.276 253665 INFO os_vif [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:e5:b4,bridge_name='br-int',has_traffic_filtering=True,id=1f84d052-9d22-469d-b43d-259c9b54bcaf,network=Network(843f0308-8d5e-40fc-9082-c0a02b73f832),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f84d052-9d')#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.277 253665 DEBUG nova.virt.libvirt.vif [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1847576078',display_name='tempest-ServersTestMultiNic-server-1847576078',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1847576078',id=46,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-wpglyrxg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiN
ic-1064785551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:10Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=a63d88a0-884c-4328-a21c-6bedf9264f2e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.277 253665 DEBUG nova.network.os_vif_util [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.278 253665 DEBUG nova.network.os_vif_util [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:be:fa,bridge_name='br-int',has_traffic_filtering=True,id=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c,network=Network(5d251c03-e62d-4f4a-933e-92ba86d2f7be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb12fa008-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.278 253665 DEBUG os_vif [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:be:fa,bridge_name='br-int',has_traffic_filtering=True,id=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c,network=Network(5d251c03-e62d-4f4a-933e-92ba86d2f7be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb12fa008-cd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.279 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.279 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.279 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.282 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.282 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb12fa008-cd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.283 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb12fa008-cd, col_values=(('external_ids', {'iface-id': 'b12fa008-cd82-4cf3-8abd-a89e90fb9e4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:be:fa', 'vm-uuid': 'a63d88a0-884c-4328-a21c-6bedf9264f2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.284 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:18 np0005532048 NetworkManager[48920]: <info>  [1763803038.2852] manager: (tapb12fa008-cd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.286 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.292 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.293 253665 INFO os_vif [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:be:fa,bridge_name='br-int',has_traffic_filtering=True,id=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c,network=Network(5d251c03-e62d-4f4a-933e-92ba86d2f7be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb12fa008-cd')#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.348 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.349 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.349 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No VIF found with MAC fa:16:3e:ca:e5:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.350 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] No VIF found with MAC fa:16:3e:f9:be:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.350 253665 INFO nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Using config drive#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.374 253665 DEBUG nova.storage.rbd_utils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image a63d88a0-884c-4328-a21c-6bedf9264f2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.884 253665 DEBUG nova.network.neutron [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Updated VIF entry in instance network info cache for port 1f84d052-9d22-469d-b43d-259c9b54bcaf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.886 253665 DEBUG nova.network.neutron [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Updating instance_info_cache with network_info: [{"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.903 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.904 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received event network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.904 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4589c5da-d558-41a1-bf54-30746991be9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.905 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.905 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4589c5da-d558-41a1-bf54-30746991be9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.906 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] No waiting events found dispatching network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.906 253665 WARNING nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received unexpected event network-vif-plugged-79319cd8-59bd-43b2-a72b-a88f70eb5570 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.906 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-changed-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.907 253665 DEBUG nova.compute.manager [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Refreshing instance network info cache due to event network-changed-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.907 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.908 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.908 253665 DEBUG nova.network.neutron [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Refreshing network info cache for port b12fa008-cd82-4cf3-8abd-a89e90fb9e4c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.975 253665 INFO nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Creating config drive at /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e/disk.config#033[00m
Nov 22 04:17:18 np0005532048 nova_compute[253661]: 2025-11-22 09:17:18.980 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmply7ndxa0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.129 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmply7ndxa0" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.152 253665 DEBUG nova.storage.rbd_utils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] rbd image a63d88a0-884c-4328-a21c-6bedf9264f2e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.156 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e/disk.config a63d88a0-884c-4328-a21c-6bedf9264f2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.213 253665 DEBUG nova.compute.manager [req-e5c9c00a-dea6-4439-83ef-6c47dcc43ba5 req-6a1a42af-9a96-4dd6-81de-ef9fa270b321 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Received event network-vif-deleted-79319cd8-59bd-43b2-a72b-a88f70eb5570 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 248 op/s
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.508 253665 DEBUG oslo_concurrency.processutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e/disk.config a63d88a0-884c-4328-a21c-6bedf9264f2e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.508 253665 INFO nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Deleting local config drive /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e/disk.config because it was imported into RBD.#033[00m
Nov 22 04:17:19 np0005532048 NetworkManager[48920]: <info>  [1763803039.5678] manager: (tap1f84d052-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/179)
Nov 22 04:17:19 np0005532048 kernel: tap1f84d052-9d: entered promiscuous mode
Nov 22 04:17:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:19Z|00408|binding|INFO|Claiming lport 1f84d052-9d22-469d-b43d-259c9b54bcaf for this chassis.
Nov 22 04:17:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:19Z|00409|binding|INFO|1f84d052-9d22-469d-b43d-259c9b54bcaf: Claiming fa:16:3e:ca:e5:b4 10.100.0.82
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.571 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.581 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:e5:b4 10.100.0.82'], port_security=['fa:16:3e:ca:e5:b4 10.100.0.82'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.82/24', 'neutron:device_id': 'a63d88a0-884c-4328-a21c-6bedf9264f2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-843f0308-8d5e-40fc-9082-c0a02b73f832', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0cd90fe-647e-4a9f-911e-14c3221ee262, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1f84d052-9d22-469d-b43d-259c9b54bcaf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.582 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1f84d052-9d22-469d-b43d-259c9b54bcaf in datapath 843f0308-8d5e-40fc-9082-c0a02b73f832 bound to our chassis#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.584 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 843f0308-8d5e-40fc-9082-c0a02b73f832#033[00m
Nov 22 04:17:19 np0005532048 NetworkManager[48920]: <info>  [1763803039.5847] manager: (tapb12fa008-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/180)
Nov 22 04:17:19 np0005532048 systemd-udevd[306381]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:17:19 np0005532048 systemd-udevd[306382]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.597 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[677e77ea-bc71-4f30-a75b-05ca13ff21c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.598 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap843f0308-81 in ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.599 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap843f0308-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.600 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73f08e0e-c01b-4a18-b270-1a899d52933d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.600 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[de922be5-2c55-41d9-873f-07adec6d73a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.612 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[92a53cba-a9e3-44fd-8803-bab7edc7df47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:19 np0005532048 NetworkManager[48920]: <info>  [1763803039.6196] device (tap1f84d052-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:17:19 np0005532048 NetworkManager[48920]: <info>  [1763803039.6208] device (tap1f84d052-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:17:19 np0005532048 systemd-machined[215941]: New machine qemu-51-instance-0000002e.
Nov 22 04:17:19 np0005532048 NetworkManager[48920]: <info>  [1763803039.6387] device (tapb12fa008-cd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:17:19 np0005532048 kernel: tapb12fa008-cd: entered promiscuous mode
Nov 22 04:17:19 np0005532048 NetworkManager[48920]: <info>  [1763803039.6405] device (tapb12fa008-cd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:17:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:19Z|00410|binding|INFO|Claiming lport b12fa008-cd82-4cf3-8abd-a89e90fb9e4c for this chassis.
Nov 22 04:17:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:19Z|00411|binding|INFO|b12fa008-cd82-4cf3-8abd-a89e90fb9e4c: Claiming fa:16:3e:f9:be:fa 10.100.1.147
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.641 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.643 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c6c2c7f-988a-4aed-b2e3-e3f1fb9c7091]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:19 np0005532048 systemd[1]: Started Virtual Machine qemu-51-instance-0000002e.
Nov 22 04:17:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:19Z|00412|binding|INFO|Setting lport 1f84d052-9d22-469d-b43d-259c9b54bcaf ovn-installed in OVS
Nov 22 04:17:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:19Z|00413|binding|INFO|Setting lport 1f84d052-9d22-469d-b43d-259c9b54bcaf up in Southbound
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.649 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:be:fa 10.100.1.147'], port_security=['fa:16:3e:f9:be:fa 10.100.1.147'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.147/24', 'neutron:device_id': 'a63d88a0-884c-4328-a21c-6bedf9264f2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d251c03-e62d-4f4a-933e-92ba86d2f7be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f8b641-eec2-42fb-ae80-bc7afe5817fe, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.650 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.678 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d08cf387-6e0c-459c-979d-e189b9ff2094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.686 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[071f7af5-ff94-465f-a1e0-4124fb549a17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:19 np0005532048 NetworkManager[48920]: <info>  [1763803039.6904] manager: (tap843f0308-80): new Veth device (/org/freedesktop/NetworkManager/Devices/181)
Nov 22 04:17:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:19Z|00414|binding|INFO|Setting lport b12fa008-cd82-4cf3-8abd-a89e90fb9e4c ovn-installed in OVS
Nov 22 04:17:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:19Z|00415|binding|INFO|Setting lport b12fa008-cd82-4cf3-8abd-a89e90fb9e4c up in Southbound
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.695 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.722 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8d5fca09-56df-4979-b931-6bab69711c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.726 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[96c202a0-5303-4a09-9a89-4e1893590d0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:19 np0005532048 NetworkManager[48920]: <info>  [1763803039.7546] device (tap843f0308-80): carrier: link connected
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.760 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7c36185c-54f5-4948-a25c-e5029850963d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.780 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[13b510ab-7ea9-45c5-a16c-7bf7d5d85110]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap843f0308-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:17:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589049, 'reachable_time': 44401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306417, 'error': None, 'target': 'ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.798 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1cd60e-76e0-4db8-964f-c0aa122fd2b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:17d2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589049, 'tstamp': 589049}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306418, 'error': None, 'target': 'ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.814 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1fdaa2fd-3e2a-4b8a-92fa-d64f51d4fb0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap843f0308-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:17:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589049, 'reachable_time': 44401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306419, 'error': None, 'target': 'ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.849 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd909d8-2fc3-4213-a1c2-9b764d331e9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.857 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.857 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.877 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.917 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[172e705a-6789-4b29-bf72-09860a663d23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.919 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap843f0308-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.919 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.920 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap843f0308-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:19 np0005532048 NetworkManager[48920]: <info>  [1763803039.9226] manager: (tap843f0308-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Nov 22 04:17:19 np0005532048 kernel: tap843f0308-80: entered promiscuous mode
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.924 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.928 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap843f0308-80, col_values=(('external_ids', {'iface-id': '421f3205-68de-408e-8920-e7fb640c7177'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.929 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:19Z|00416|binding|INFO|Releasing lport 421f3205-68de-408e-8920-e7fb640c7177 from this chassis (sb_readonly=0)
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.930 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.933 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/843f0308-8d5e-40fc-9082-c0a02b73f832.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/843f0308-8d5e-40fc-9082-c0a02b73f832.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.934 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[48cba49c-0914-4dd3-a1d9-31351f7c7faf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.935 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-843f0308-8d5e-40fc-9082-c0a02b73f832
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/843f0308-8d5e-40fc-9082-c0a02b73f832.pid.haproxy
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 843f0308-8d5e-40fc-9082-c0a02b73f832
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:17:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:19.936 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832', 'env', 'PROCESS_TAG=haproxy-843f0308-8d5e-40fc-9082-c0a02b73f832', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/843f0308-8d5e-40fc-9082-c0a02b73f832.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.948 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.954 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.954 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.964 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.964 253665 INFO nova.compute.claims [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.981 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.983 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2964b30c-ab3b-4bab-8f11-2492007f83ac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:19 np0005532048 nova_compute[253661]: 2025-11-22 09:17:19.983 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.000 253665 DEBUG nova.compute.manager [req-f24cb1f4-ca4c-45bb-9699-4dc8cf865c8d req-8a962b71-2477-498e-ab34-83ab6f5ba212 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.000 253665 DEBUG oslo_concurrency.lockutils [req-f24cb1f4-ca4c-45bb-9699-4dc8cf865c8d req-8a962b71-2477-498e-ab34-83ab6f5ba212 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.001 253665 DEBUG oslo_concurrency.lockutils [req-f24cb1f4-ca4c-45bb-9699-4dc8cf865c8d req-8a962b71-2477-498e-ab34-83ab6f5ba212 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.001 253665 DEBUG oslo_concurrency.lockutils [req-f24cb1f4-ca4c-45bb-9699-4dc8cf865c8d req-8a962b71-2477-498e-ab34-83ab6f5ba212 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.001 253665 DEBUG nova.compute.manager [req-f24cb1f4-ca4c-45bb-9699-4dc8cf865c8d req-8a962b71-2477-498e-ab34-83ab6f5ba212 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Processing event network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.017 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.095 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.167 253665 DEBUG nova.network.neutron [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Updated VIF entry in instance network info cache for port b12fa008-cd82-4cf3-8abd-a89e90fb9e4c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.167 253665 DEBUG nova.network.neutron [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Updating instance_info_cache with network_info: [{"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.170 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803040.170243, a63d88a0-884c-4328-a21c-6bedf9264f2e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.171 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] VM Started (Lifecycle Event)#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.173 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.209 253665 DEBUG oslo_concurrency.lockutils [req-a6f08292-7490-4d8e-80f9-01896be7e285 req-9d87bce7-6140-4a03-8224-2450075c3711 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.221 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.225 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803040.172484, a63d88a0-884c-4328-a21c-6bedf9264f2e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.226 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.240 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.244 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.258 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:17:20 np0005532048 podman[306495]: 2025-11-22 09:17:20.367687483 +0000 UTC m=+0.060986824 container create 24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 04:17:20 np0005532048 systemd[1]: Started libpod-conmon-24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb.scope.
Nov 22 04:17:20 np0005532048 podman[306495]: 2025-11-22 09:17:20.338832496 +0000 UTC m=+0.032131847 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:17:20 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:17:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862f2ca2fa37b1a4dd8d301fd07f294fa6ad5aa4c9c004f76d711540cd99b7fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:20 np0005532048 podman[306495]: 2025-11-22 09:17:20.588396561 +0000 UTC m=+0.281695932 container init 24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 04:17:20 np0005532048 podman[306495]: 2025-11-22 09:17:20.602198855 +0000 UTC m=+0.295498226 container start 24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 04:17:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:17:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002589410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:20 np0005532048 neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832[306529]: [NOTICE]   (306533) : New worker (306537) forked
Nov 22 04:17:20 np0005532048 neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832[306529]: [NOTICE]   (306533) : Loading success.
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.640 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.647 253665 DEBUG nova.compute.provider_tree [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.666 253665 DEBUG nova.scheduler.client.report [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.686 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.687 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.690 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.697 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.698 253665 INFO nova.compute.claims [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.763 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.763 253665 DEBUG nova.network.neutron [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.790 253665 INFO nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.808 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.872 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.893 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b12fa008-cd82-4cf3-8abd-a89e90fb9e4c in datapath 5d251c03-e62d-4f4a-933e-92ba86d2f7be unbound from our chassis#033[00m
Nov 22 04:17:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.895 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d251c03-e62d-4f4a-933e-92ba86d2f7be#033[00m
Nov 22 04:17:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.909 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[61117f24-2dca-41ca-ab0f-ccff1534e594]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.911 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d251c03-e1 in ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.910 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.912 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.913 253665 INFO nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Creating image(s)#033[00m
Nov 22 04:17:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.913 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d251c03-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:17:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.913 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7030b2de-9613-48b1-88bc-952168f90e8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.915 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[55787abb-5d28-4a15-b862-551a50ac9aeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.931 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0de55a-f75a-4266-996d-f1ac5b981aba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.947 253665 DEBUG nova.storage.rbd_utils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] rbd image 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.959 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1438f9-f200-4ed6-9077-a019979c62d0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:20 np0005532048 nova_compute[253661]: 2025-11-22 09:17:20.979 253665 DEBUG nova.storage.rbd_utils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] rbd image 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:20.993 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[368d88bd-80b1-40dc-b3ea-d52c0dcb77f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.000 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f08a80e1-5072-41d1-a318-931710c2d776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:21 np0005532048 systemd-udevd[306402]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:17:21 np0005532048 NetworkManager[48920]: <info>  [1763803041.0022] manager: (tap5d251c03-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/183)
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.021 253665 DEBUG nova.storage.rbd_utils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] rbd image 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.028 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.040 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d85738-c134-45c0-b0f1-1067ec911936]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.045 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4469aad2-d3e9-4330-b052-97b81061d383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:21 np0005532048 NetworkManager[48920]: <info>  [1763803041.0735] device (tap5d251c03-e0): carrier: link connected
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.080 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6583e4-9292-4fd7-a2c3-cbf377a2f2d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.086 253665 DEBUG nova.policy [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31a3f645b946468d9e6fe3b907dfdc0b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0b711aaafbb94138a8f95e1e15d0f0a4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.104 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[135446c6-d83d-4ad1-812a-b9a5be70c170]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d251c03-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:aa:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589181, 'reachable_time': 40318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306631, 'error': None, 'target': 'ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.130 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d28ee165-b1f7-43d0-844c-9f3599cb53ec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:aa98'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589181, 'tstamp': 589181}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306632, 'error': None, 'target': 'ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.130 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.131 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.132 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.132 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.152 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0e4bf4d7-675a-4631-a014-2e8f4d309dcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d251c03-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:aa:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589181, 'reachable_time': 40318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306635, 'error': None, 'target': 'ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.159 253665 DEBUG nova.storage.rbd_utils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] rbd image 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.171 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.195 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8d4f1a-1f18-45ef-b990-889df5ef021d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.280 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1a07fa63-0022-413f-91b7-c220082a8adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.283 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d251c03-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.283 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.284 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d251c03-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:21 np0005532048 NetworkManager[48920]: <info>  [1763803041.2871] manager: (tap5d251c03-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Nov 22 04:17:21 np0005532048 kernel: tap5d251c03-e0: entered promiscuous mode
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.289 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d251c03-e0, col_values=(('external_ids', {'iface-id': '70a7c301-9e7f-4dbe-8105-d63ada0ee0cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:21 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:21Z|00417|binding|INFO|Releasing lport 70a7c301-9e7f-4dbe-8105-d63ada0ee0cc from this chassis (sb_readonly=0)
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.291 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.311 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d251c03-e62d-4f4a-933e-92ba86d2f7be.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d251c03-e62d-4f4a-933e-92ba86d2f7be.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.312 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8448ca6f-6534-4602-a5ce-c72dec885848]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.313 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-5d251c03-e62d-4f4a-933e-92ba86d2f7be
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/5d251c03-e62d-4f4a-933e-92ba86d2f7be.pid.haproxy
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 5d251c03-e62d-4f4a-933e-92ba86d2f7be
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.312 253665 DEBUG nova.compute.manager [req-e93febbc-2604-4c0a-aeb8-1cbd14dbba1f req-5156d33c-a8f9-41b0-9cf0-d5e5d5045af1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.313 253665 DEBUG oslo_concurrency.lockutils [req-e93febbc-2604-4c0a-aeb8-1cbd14dbba1f req-5156d33c-a8f9-41b0-9cf0-d5e5d5045af1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.314 253665 DEBUG oslo_concurrency.lockutils [req-e93febbc-2604-4c0a-aeb8-1cbd14dbba1f req-5156d33c-a8f9-41b0-9cf0-d5e5d5045af1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.314 253665 DEBUG oslo_concurrency.lockutils [req-e93febbc-2604-4c0a-aeb8-1cbd14dbba1f req-5156d33c-a8f9-41b0-9cf0-d5e5d5045af1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.314 253665 DEBUG nova.compute.manager [req-e93febbc-2604-4c0a-aeb8-1cbd14dbba1f req-5156d33c-a8f9-41b0-9cf0-d5e5d5045af1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Processing event network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.314 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:21.315 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be', 'env', 'PROCESS_TAG=haproxy-5d251c03-e62d-4f4a-933e-92ba86d2f7be', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d251c03-e62d-4f4a-933e-92ba86d2f7be.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.315 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.319 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803041.3192072, a63d88a0-884c-4328-a21c-6bedf9264f2e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.319 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.322 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.325 253665 INFO nova.virt.libvirt.driver [-] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Instance spawned successfully.#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.325 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:17:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 226 op/s
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.344 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.353 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.353 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.354 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.354 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.355 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.355 253665 DEBUG nova.virt.libvirt.driver [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.361 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:17:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:17:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/94112967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.390 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.407 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.422 253665 DEBUG nova.compute.provider_tree [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.431 253665 INFO nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Took 10.63 seconds to spawn the instance on the hypervisor.
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.432 253665 DEBUG nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.435 253665 DEBUG nova.scheduler.client.report [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:17:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.467 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.468 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.512 253665 INFO nova.compute.manager [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Took 11.81 seconds to build instance.
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.519 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.520 253665 DEBUG nova.network.neutron [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.533 253665 DEBUG oslo_concurrency.lockutils [None req-4eb36677-c407-4fea-8931-783c912fa44e c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.536 253665 INFO nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.551 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.586 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.669 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.671 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.672 253665 INFO nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Creating image(s)
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.695 253665 DEBUG nova.storage.rbd_utils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.719 253665 DEBUG nova.storage.rbd_utils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:17:21 np0005532048 podman[306741]: 2025-11-22 09:17:21.738708607 +0000 UTC m=+0.055480311 container create f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.752 253665 DEBUG nova.storage.rbd_utils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.757 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:17:21 np0005532048 systemd[1]: Started libpod-conmon-f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d.scope.
Nov 22 04:17:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.802 253665 DEBUG nova.policy [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '790eaa89f1a74325b81291d8beca6d38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:17:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c652dea7d569ad4907db1e9365d3ee319c62c3437d1edcb0b89d72aa63a1823/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:21 np0005532048 podman[306741]: 2025-11-22 09:17:21.709228125 +0000 UTC m=+0.025999849 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.817 253665 DEBUG nova.storage.rbd_utils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] resizing rbd image 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:17:21 np0005532048 podman[306741]: 2025-11-22 09:17:21.819071827 +0000 UTC m=+0.135843521 container init f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:17:21 np0005532048 podman[306741]: 2025-11-22 09:17:21.827260104 +0000 UTC m=+0.144031798 container start f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:17:21 np0005532048 neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be[306810]: [NOTICE]   (306829) : New worker (306837) forked
Nov 22 04:17:21 np0005532048 neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be[306810]: [NOTICE]   (306829) : Loading success.
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.858 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.859 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.860 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.860 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.885 253665 DEBUG nova.storage.rbd_utils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.890 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:17:21 np0005532048 nova_compute[253661]: 2025-11-22 09:17:21.995 253665 DEBUG nova.objects.instance [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lazy-loading 'migration_context' on Instance uuid 971e37bd-eb33-42b7-b5c7-86eff88cb700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.012 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.013 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Ensure instance console log exists: /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.014 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.014 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.014 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.086 253665 DEBUG nova.compute.manager [req-0ba675a0-0591-442f-a662-2013ea23dae5 req-3f910b4c-5df8-4577-8b14-a0b4cf4dbea9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.086 253665 DEBUG oslo_concurrency.lockutils [req-0ba675a0-0591-442f-a662-2013ea23dae5 req-3f910b4c-5df8-4577-8b14-a0b4cf4dbea9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.086 253665 DEBUG oslo_concurrency.lockutils [req-0ba675a0-0591-442f-a662-2013ea23dae5 req-3f910b4c-5df8-4577-8b14-a0b4cf4dbea9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.087 253665 DEBUG oslo_concurrency.lockutils [req-0ba675a0-0591-442f-a662-2013ea23dae5 req-3f910b4c-5df8-4577-8b14-a0b4cf4dbea9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.087 253665 DEBUG nova.compute.manager [req-0ba675a0-0591-442f-a662-2013ea23dae5 req-3f910b4c-5df8-4577-8b14-a0b4cf4dbea9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] No waiting events found dispatching network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.087 253665 WARNING nova.compute.manager [req-0ba675a0-0591-442f-a662-2013ea23dae5 req-3f910b4c-5df8-4577-8b14-a0b4cf4dbea9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received unexpected event network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf for instance with vm_state active and task_state None.
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.294 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.332 253665 DEBUG nova.network.neutron [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Successfully created port: e8eabe8a-7cdb-44e6-9266-09d08038b4ea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.338 253665 DEBUG nova.network.neutron [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Successfully created port: a029f6c5-4597-4645-9974-c282b8014824 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.391 253665 DEBUG nova.storage.rbd_utils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] resizing rbd image 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.563 253665 DEBUG nova.objects.instance [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'migration_context' on Instance uuid 2964b30c-ab3b-4bab-8f11-2492007f83ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.576 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.576 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Ensure instance console log exists: /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.577 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.577 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:17:22 np0005532048 nova_compute[253661]: 2025-11-22 09:17:22.577 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:17:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:17:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:17:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:17:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:17:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:17:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.254 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.255 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.265 253665 DEBUG nova.network.neutron [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Successfully updated port: e8eabe8a-7cdb-44e6-9266-09d08038b4ea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.277 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "refresh_cache-2964b30c-ab3b-4bab-8f11-2492007f83ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.278 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquired lock "refresh_cache-2964b30c-ab3b-4bab-8f11-2492007f83ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.278 253665 DEBUG nova.network.neutron [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.284 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 95 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 240 op/s
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.416 253665 DEBUG nova.compute.manager [req-a39bbee4-5d8a-4ecb-8d0b-efee804678c1 req-c3858579-a199-4abd-9350-0e82a08944c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.416 253665 DEBUG oslo_concurrency.lockutils [req-a39bbee4-5d8a-4ecb-8d0b-efee804678c1 req-c3858579-a199-4abd-9350-0e82a08944c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.417 253665 DEBUG oslo_concurrency.lockutils [req-a39bbee4-5d8a-4ecb-8d0b-efee804678c1 req-c3858579-a199-4abd-9350-0e82a08944c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.417 253665 DEBUG oslo_concurrency.lockutils [req-a39bbee4-5d8a-4ecb-8d0b-efee804678c1 req-c3858579-a199-4abd-9350-0e82a08944c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.417 253665 DEBUG nova.compute.manager [req-a39bbee4-5d8a-4ecb-8d0b-efee804678c1 req-c3858579-a199-4abd-9350-0e82a08944c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] No waiting events found dispatching network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.417 253665 WARNING nova.compute.manager [req-a39bbee4-5d8a-4ecb-8d0b-efee804678c1 req-c3858579-a199-4abd-9350-0e82a08944c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received unexpected event network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c for instance with vm_state active and task_state None.#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.472 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.472 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.472 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.473 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a63d88a0-884c-4328-a21c-6bedf9264f2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.513 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.514 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.514 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.514 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.514 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.515 253665 INFO nova.compute.manager [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Terminating instance#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.516 253665 DEBUG nova.compute.manager [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.553 253665 DEBUG nova.network.neutron [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Successfully updated port: a029f6c5-4597-4645-9974-c282b8014824 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:17:23 np0005532048 kernel: tap1f84d052-9d (unregistering): left promiscuous mode
Nov 22 04:17:23 np0005532048 NetworkManager[48920]: <info>  [1763803043.5601] device (tap1f84d052-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.566 253665 DEBUG nova.network.neutron [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:17:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:23Z|00418|binding|INFO|Releasing lport 1f84d052-9d22-469d-b43d-259c9b54bcaf from this chassis (sb_readonly=0)
Nov 22 04:17:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:23Z|00419|binding|INFO|Setting lport 1f84d052-9d22-469d-b43d-259c9b54bcaf down in Southbound
Nov 22 04:17:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:23Z|00420|binding|INFO|Removing iface tap1f84d052-9d ovn-installed in OVS
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.576 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.577 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquired lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.577 253665 DEBUG nova.network.neutron [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.578 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:23.584 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:e5:b4 10.100.0.82'], port_security=['fa:16:3e:ca:e5:b4 10.100.0.82'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.82/24', 'neutron:device_id': 'a63d88a0-884c-4328-a21c-6bedf9264f2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-843f0308-8d5e-40fc-9082-c0a02b73f832', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0cd90fe-647e-4a9f-911e-14c3221ee262, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1f84d052-9d22-469d-b43d-259c9b54bcaf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:23.586 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1f84d052-9d22-469d-b43d-259c9b54bcaf in datapath 843f0308-8d5e-40fc-9082-c0a02b73f832 unbound from our chassis#033[00m
Nov 22 04:17:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:23.589 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 843f0308-8d5e-40fc-9082-c0a02b73f832, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:17:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:23.590 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[307e8f44-1d9f-4046-ac73-b546618954ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:23.590 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832 namespace which is not needed anymore#033[00m
Nov 22 04:17:23 np0005532048 kernel: tapb12fa008-cd (unregistering): left promiscuous mode
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.593 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:23 np0005532048 NetworkManager[48920]: <info>  [1763803043.5973] device (tapb12fa008-cd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.601 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.604 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:23Z|00421|binding|INFO|Releasing lport b12fa008-cd82-4cf3-8abd-a89e90fb9e4c from this chassis (sb_readonly=0)
Nov 22 04:17:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:23Z|00422|binding|INFO|Setting lport b12fa008-cd82-4cf3-8abd-a89e90fb9e4c down in Southbound
Nov 22 04:17:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:23Z|00423|binding|INFO|Removing iface tapb12fa008-cd ovn-installed in OVS
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.606 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:23.616 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:be:fa 10.100.1.147'], port_security=['fa:16:3e:f9:be:fa 10.100.1.147'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.147/24', 'neutron:device_id': 'a63d88a0-884c-4328-a21c-6bedf9264f2e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d251c03-e62d-4f4a-933e-92ba86d2f7be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d156ca65e214b4aacdf111fd47dc4f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e15c6eeb-4672-4ad5-8d91-2edc2ba003b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e6f8b641-eec2-42fb-ae80-bc7afe5817fe, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.621 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:23 np0005532048 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Nov 22 04:17:23 np0005532048 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002e.scope: Consumed 2.795s CPU time.
Nov 22 04:17:23 np0005532048 systemd-machined[215941]: Machine qemu-51-instance-0000002e terminated.
Nov 22 04:17:23 np0005532048 neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832[306529]: [NOTICE]   (306533) : haproxy version is 2.8.14-c23fe91
Nov 22 04:17:23 np0005532048 neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832[306529]: [NOTICE]   (306533) : path to executable is /usr/sbin/haproxy
Nov 22 04:17:23 np0005532048 neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832[306529]: [WARNING]  (306533) : Exiting Master process...
Nov 22 04:17:23 np0005532048 neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832[306529]: [WARNING]  (306533) : Exiting Master process...
Nov 22 04:17:23 np0005532048 neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832[306529]: [ALERT]    (306533) : Current worker (306537) exited with code 143 (Terminated)
Nov 22 04:17:23 np0005532048 neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832[306529]: [WARNING]  (306533) : All workers exited. Exiting... (0)
Nov 22 04:17:23 np0005532048 systemd[1]: libpod-24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb.scope: Deactivated successfully.
Nov 22 04:17:23 np0005532048 podman[307001]: 2025-11-22 09:17:23.75036747 +0000 UTC m=+0.053230997 container died 24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.765 253665 INFO nova.virt.libvirt.driver [-] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Instance destroyed successfully.#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.766 253665 DEBUG nova.objects.instance [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lazy-loading 'resources' on Instance uuid a63d88a0-884c-4328-a21c-6bedf9264f2e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.780 253665 DEBUG nova.virt.libvirt.vif [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1847576078',display_name='tempest-ServersTestMultiNic-server-1847576078',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1847576078',id=46,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-wpglyrxg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:21Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=a63d88a0-884c-4328-a21c-6bedf9264f2e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.781 253665 DEBUG nova.network.os_vif_util [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.782 253665 DEBUG nova.network.os_vif_util [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:e5:b4,bridge_name='br-int',has_traffic_filtering=True,id=1f84d052-9d22-469d-b43d-259c9b54bcaf,network=Network(843f0308-8d5e-40fc-9082-c0a02b73f832),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f84d052-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.782 253665 DEBUG os_vif [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:e5:b4,bridge_name='br-int',has_traffic_filtering=True,id=1f84d052-9d22-469d-b43d-259c9b54bcaf,network=Network(843f0308-8d5e-40fc-9082-c0a02b73f832),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f84d052-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.784 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.785 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f84d052-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.786 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.788 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.791 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.793 253665 INFO os_vif [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:e5:b4,bridge_name='br-int',has_traffic_filtering=True,id=1f84d052-9d22-469d-b43d-259c9b54bcaf,network=Network(843f0308-8d5e-40fc-9082-c0a02b73f832),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f84d052-9d')#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.794 253665 DEBUG nova.virt.libvirt.vif [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1847576078',display_name='tempest-ServersTestMultiNic-server-1847576078',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1847576078',id=46,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d156ca65e214b4aacdf111fd47dc4f6',ramdisk_id='',reservation_id='r-wpglyrxg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1064785551',owner_user_name='tempest-ServersTestMultiNic-1064785551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:21Z,user_data=None,user_id='c5ae8af2cc9f40e083473a191ddd445f',uuid=a63d88a0-884c-4328-a21c-6bedf9264f2e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.794 253665 DEBUG nova.network.os_vif_util [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converting VIF {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.795 253665 DEBUG nova.network.os_vif_util [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:be:fa,bridge_name='br-int',has_traffic_filtering=True,id=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c,network=Network(5d251c03-e62d-4f4a-933e-92ba86d2f7be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb12fa008-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.795 253665 DEBUG os_vif [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:be:fa,bridge_name='br-int',has_traffic_filtering=True,id=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c,network=Network(5d251c03-e62d-4f4a-933e-92ba86d2f7be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb12fa008-cd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.796 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.796 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb12fa008-cd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.797 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.798 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.799 253665 INFO os_vif [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:be:fa,bridge_name='br-int',has_traffic_filtering=True,id=b12fa008-cd82-4cf3-8abd-a89e90fb9e4c,network=Network(5d251c03-e62d-4f4a-933e-92ba86d2f7be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb12fa008-cd')
Nov 22 04:17:23 np0005532048 nova_compute[253661]: 2025-11-22 09:17:23.952 253665 DEBUG nova.network.neutron [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:17:23 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb-userdata-shm.mount: Deactivated successfully.
Nov 22 04:17:23 np0005532048 systemd[1]: var-lib-containers-storage-overlay-862f2ca2fa37b1a4dd8d301fd07f294fa6ad5aa4c9c004f76d711540cd99b7fa-merged.mount: Deactivated successfully.
Nov 22 04:17:24 np0005532048 podman[307001]: 2025-11-22 09:17:24.061388579 +0000 UTC m=+0.364252106 container cleanup 24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:17:24 np0005532048 systemd[1]: libpod-conmon-24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb.scope: Deactivated successfully.
Nov 22 04:17:24 np0005532048 podman[307074]: 2025-11-22 09:17:24.139109236 +0000 UTC m=+0.052749064 container remove 24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.147 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[da04969e-3e4f-47a4-87cf-3a6e3edd2487]: (4, ('Sat Nov 22 09:17:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832 (24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb)\n24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb\nSat Nov 22 09:17:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832 (24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb)\n24c9f22c1bbf35b63f9b6820d4ef3cc5096a941e904300360f969ce0dd7637bb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.151 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d554a8c1-6329-4665-bda4-34d4ae6b4189]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.153 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap843f0308-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.155 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:24 np0005532048 kernel: tap843f0308-80: left promiscuous mode
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.176 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.181 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5366f98e-3bcc-49f6-a32f-a1e3548a28ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.202 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e17449-fd94-4e47-8f95-1162488dbcd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.203 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[74e44c12-e6a4-46f5-8cd9-e148c77c28ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.222 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[69ca2ba5-054d-40ee-80dd-1e45fdae0ebb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589040, 'reachable_time': 43561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307090, 'error': None, 'target': 'ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:24 np0005532048 systemd[1]: run-netns-ovnmeta\x2d843f0308\x2d8d5e\x2d40fc\x2d9082\x2dc0a02b73f832.mount: Deactivated successfully.
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.225 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-843f0308-8d5e-40fc-9082-c0a02b73f832 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.226 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[23db5614-8f06-418d-92e8-eb845229dce0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.227 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b12fa008-cd82-4cf3-8abd-a89e90fb9e4c in datapath 5d251c03-e62d-4f4a-933e-92ba86d2f7be unbound from our chassis
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.229 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d251c03-e62d-4f4a-933e-92ba86d2f7be, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.230 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a882db5c-7918-4bf8-a994-196c8efd5083]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.231 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be namespace which is not needed anymore
Nov 22 04:17:24 np0005532048 neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be[306810]: [NOTICE]   (306829) : haproxy version is 2.8.14-c23fe91
Nov 22 04:17:24 np0005532048 neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be[306810]: [NOTICE]   (306829) : path to executable is /usr/sbin/haproxy
Nov 22 04:17:24 np0005532048 neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be[306810]: [WARNING]  (306829) : Exiting Master process...
Nov 22 04:17:24 np0005532048 neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be[306810]: [ALERT]    (306829) : Current worker (306837) exited with code 143 (Terminated)
Nov 22 04:17:24 np0005532048 neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be[306810]: [WARNING]  (306829) : All workers exited. Exiting... (0)
Nov 22 04:17:24 np0005532048 systemd[1]: libpod-f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d.scope: Deactivated successfully.
Nov 22 04:17:24 np0005532048 podman[307109]: 2025-11-22 09:17:24.388096498 +0000 UTC m=+0.056625878 container died f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 04:17:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5c652dea7d569ad4907db1e9365d3ee319c62c3437d1edcb0b89d72aa63a1823-merged.mount: Deactivated successfully.
Nov 22 04:17:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d-userdata-shm.mount: Deactivated successfully.
Nov 22 04:17:24 np0005532048 podman[307109]: 2025-11-22 09:17:24.437346717 +0000 UTC m=+0.105876087 container cleanup f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.452 253665 INFO nova.virt.libvirt.driver [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Deleting instance files /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e_del
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.453 253665 INFO nova.virt.libvirt.driver [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Deletion of /var/lib/nova/instances/a63d88a0-884c-4328-a21c-6bedf9264f2e_del complete
Nov 22 04:17:24 np0005532048 systemd[1]: libpod-conmon-f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d.scope: Deactivated successfully.
Nov 22 04:17:24 np0005532048 podman[307136]: 2025-11-22 09:17:24.542933917 +0000 UTC m=+0.098426629 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:17:24 np0005532048 podman[307138]: 2025-11-22 09:17:24.548618134 +0000 UTC m=+0.104012723 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 22 04:17:24 np0005532048 podman[307150]: 2025-11-22 09:17:24.570086532 +0000 UTC m=+0.104896004 container remove f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.576 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ec474fe8-4437-4524-ba3d-32372aa48829]: (4, ('Sat Nov 22 09:17:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be (f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d)\nf0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d\nSat Nov 22 09:17:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be (f0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d)\nf0afe5d9e6c4ffdccefe1aa21a87d4f7485b555f35acb6bbbdcdc9212ab98f7d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.578 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[99b01e21-fd05-4ee7-8712-ea092eb91808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.579 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d251c03-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:17:24 np0005532048 kernel: tap5d251c03-e0: left promiscuous mode
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.581 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.595 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.599 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[abb6a939-9f95-49c3-bd02-891cf5e2e75f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.613 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ce721e8f-22c2-45de-b537-3ab1f2cec4c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.615 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cc1a0295-a200-451d-a1bf-7d62bb4432a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.634 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d76ac3a3-0af0-44a0-96da-d5eff6aeb66f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589172, 'reachable_time': 21768, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307186, 'error': None, 'target': 'ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.637 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d251c03-e62d-4f4a-933e-92ba86d2f7be deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:17:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:24.638 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d59b13dc-a238-4f83-a5e5-8fb60eb23f16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:24 np0005532048 systemd[1]: run-netns-ovnmeta\x2d5d251c03\x2de62d\x2d4f4a\x2d933e\x2d92ba86d2f7be.mount: Deactivated successfully.
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.643 253665 INFO nova.compute.manager [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Took 1.13 seconds to destroy the instance on the hypervisor.
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.643 253665 DEBUG oslo.service.loopingcall [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.643 253665 DEBUG nova.compute.manager [-] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.644 253665 DEBUG nova.network.neutron [-] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.709 253665 DEBUG nova.compute.manager [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received event network-changed-e8eabe8a-7cdb-44e6-9266-09d08038b4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.710 253665 DEBUG nova.compute.manager [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Refreshing instance network info cache due to event network-changed-e8eabe8a-7cdb-44e6-9266-09d08038b4ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.710 253665 DEBUG oslo_concurrency.lockutils [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2964b30c-ab3b-4bab-8f11-2492007f83ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.731 253665 DEBUG nova.network.neutron [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Updating instance_info_cache with network_info: [{"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.787 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Releasing lock "refresh_cache-2964b30c-ab3b-4bab-8f11-2492007f83ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.787 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Instance network_info: |[{"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.788 253665 DEBUG oslo_concurrency.lockutils [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2964b30c-ab3b-4bab-8f11-2492007f83ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.788 253665 DEBUG nova.network.neutron [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Refreshing network info cache for port e8eabe8a-7cdb-44e6-9266-09d08038b4ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.793 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Start _get_guest_xml network_info=[{"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.801 253665 WARNING nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.812 253665 DEBUG nova.virt.libvirt.host [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.814 253665 DEBUG nova.virt.libvirt.host [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.817 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.819 253665 DEBUG nova.virt.libvirt.host [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.819 253665 DEBUG nova.virt.libvirt.host [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.820 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.820 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.820 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.820 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.821 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.821 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.821 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.821 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.821 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.822 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.822 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.822 253665 DEBUG nova.virt.hardware [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.825 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.981 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:24 np0005532048 nova_compute[253661]: 2025-11-22 09:17:24.988 253665 DEBUG nova.network.neutron [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Updating instance_info_cache with network_info: [{"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.032 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Releasing lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.033 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Instance network_info: |[{"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.036 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Start _get_guest_xml network_info=[{"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.040 253665 WARNING nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.045 253665 DEBUG nova.virt.libvirt.host [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.046 253665 DEBUG nova.virt.libvirt.host [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.048 253665 DEBUG nova.virt.libvirt.host [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.048 253665 DEBUG nova.virt.libvirt.host [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.049 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.049 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.049 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.050 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.050 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.050 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.050 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.050 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.050 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.051 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.051 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.051 253665 DEBUG nova.virt.hardware [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.054 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:17:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3035658854' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.307 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.331 253665 DEBUG nova.storage.rbd_utils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.336 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 164 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.3 MiB/s wr, 268 op/s
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.494 253665 DEBUG nova.compute.manager [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-changed-a029f6c5-4597-4645-9974-c282b8014824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.495 253665 DEBUG nova.compute.manager [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Refreshing instance network info cache due to event network-changed-a029f6c5-4597-4645-9974-c282b8014824. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.495 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.495 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.496 253665 DEBUG nova.network.neutron [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Refreshing network info cache for port a029f6c5-4597-4645-9974-c282b8014824 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:17:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:17:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1077379852' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.633 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.665 253665 DEBUG nova.storage.rbd_utils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] rbd image 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.671 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:17:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2551848851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.821 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.824 253665 DEBUG nova.virt.libvirt.vif [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1581347636',display_name='tempest-DeleteServersTestJSON-server-1581347636',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1581347636',id=48,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-3v2ufiuv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSO
N-487469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:21Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=2964b30c-ab3b-4bab-8f11-2492007f83ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.824 253665 DEBUG nova.network.os_vif_util [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.826 253665 DEBUG nova.network.os_vif_util [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:82:17,bridge_name='br-int',has_traffic_filtering=True,id=e8eabe8a-7cdb-44e6-9266-09d08038b4ea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8eabe8a-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.828 253665 DEBUG nova.objects.instance [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'pci_devices' on Instance uuid 2964b30c-ab3b-4bab-8f11-2492007f83ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.844 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  <uuid>2964b30c-ab3b-4bab-8f11-2492007f83ac</uuid>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  <name>instance-00000030</name>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <nova:name>tempest-DeleteServersTestJSON-server-1581347636</nova:name>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:17:24</nova:creationTime>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:17:25 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:        <nova:user uuid="790eaa89f1a74325b81291d8beca6d38">tempest-DeleteServersTestJSON-487469072-project-member</nova:user>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:        <nova:project uuid="d4fe4f74353442a9a8042d29dcf6274e">tempest-DeleteServersTestJSON-487469072</nova:project>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:        <nova:port uuid="e8eabe8a-7cdb-44e6-9266-09d08038b4ea">
Nov 22 04:17:25 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <entry name="serial">2964b30c-ab3b-4bab-8f11-2492007f83ac</entry>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <entry name="uuid">2964b30c-ab3b-4bab-8f11-2492007f83ac</entry>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/2964b30c-ab3b-4bab-8f11-2492007f83ac_disk">
Nov 22 04:17:25 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:17:25 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/2964b30c-ab3b-4bab-8f11-2492007f83ac_disk.config">
Nov 22 04:17:25 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:17:25 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:80:82:17"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <target dev="tape8eabe8a-7c"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac/console.log" append="off"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:17:25 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:17:25 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:17:25 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:17:25 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.846 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Preparing to wait for external event network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.846 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.847 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.847 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.847 253665 DEBUG nova.virt.libvirt.vif [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1581347636',display_name='tempest-DeleteServersTestJSON-server-1581347636',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1581347636',id=48,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-3v2ufiuv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServ
ersTestJSON-487469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:21Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=2964b30c-ab3b-4bab-8f11-2492007f83ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.848 253665 DEBUG nova.network.os_vif_util [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.848 253665 DEBUG nova.network.os_vif_util [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:82:17,bridge_name='br-int',has_traffic_filtering=True,id=e8eabe8a-7cdb-44e6-9266-09d08038b4ea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8eabe8a-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.849 253665 DEBUG os_vif [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:82:17,bridge_name='br-int',has_traffic_filtering=True,id=e8eabe8a-7cdb-44e6-9266-09d08038b4ea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8eabe8a-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.850 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.850 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.851 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.854 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.854 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8eabe8a-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.855 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape8eabe8a-7c, col_values=(('external_ids', {'iface-id': 'e8eabe8a-7cdb-44e6-9266-09d08038b4ea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:82:17', 'vm-uuid': '2964b30c-ab3b-4bab-8f11-2492007f83ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.857 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:25 np0005532048 NetworkManager[48920]: <info>  [1763803045.8588] manager: (tape8eabe8a-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.859 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.864 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.865 253665 INFO os_vif [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:82:17,bridge_name='br-int',has_traffic_filtering=True,id=e8eabe8a-7cdb-44e6-9266-09d08038b4ea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8eabe8a-7c')#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.922 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.923 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.923 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No VIF found with MAC fa:16:3e:80:82:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.924 253665 INFO nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Using config drive#033[00m
Nov 22 04:17:25 np0005532048 nova_compute[253661]: 2025-11-22 09:17:25.949 253665 DEBUG nova.storage.rbd_utils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:17:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1735200162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.149 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.150 253665 DEBUG nova.virt.libvirt.vif [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1595917141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1595917141',id=47,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0b711aaafbb94138a8f95e1e15d0f0a4',ramdisk_id='',reservation_id='r-q7exj5e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1454989706',owner_user_name='tempest-AttachInterfacesV270Test-1454989706-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:20Z,user_data=None,user_id='31a3f645b946468d9e6fe3b907dfdc0b',uuid=971e37bd-eb33-42b7-b5c7-86eff88cb700,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.151 253665 DEBUG nova.network.os_vif_util [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converting VIF {"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.151 253665 DEBUG nova.network.os_vif_util [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:8e:01,bridge_name='br-int',has_traffic_filtering=True,id=a029f6c5-4597-4645-9974-c282b8014824,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa029f6c5-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.152 253665 DEBUG nova.objects.instance [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 971e37bd-eb33-42b7-b5c7-86eff88cb700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.173 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  <uuid>971e37bd-eb33-42b7-b5c7-86eff88cb700</uuid>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  <name>instance-0000002f</name>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <nova:name>tempest-AttachInterfacesV270Test-server-1595917141</nova:name>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:17:25</nova:creationTime>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:17:26 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:        <nova:user uuid="31a3f645b946468d9e6fe3b907dfdc0b">tempest-AttachInterfacesV270Test-1454989706-project-member</nova:user>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:        <nova:project uuid="0b711aaafbb94138a8f95e1e15d0f0a4">tempest-AttachInterfacesV270Test-1454989706</nova:project>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:        <nova:port uuid="a029f6c5-4597-4645-9974-c282b8014824">
Nov 22 04:17:26 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <entry name="serial">971e37bd-eb33-42b7-b5c7-86eff88cb700</entry>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <entry name="uuid">971e37bd-eb33-42b7-b5c7-86eff88cb700</entry>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/971e37bd-eb33-42b7-b5c7-86eff88cb700_disk">
Nov 22 04:17:26 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:17:26 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/971e37bd-eb33-42b7-b5c7-86eff88cb700_disk.config">
Nov 22 04:17:26 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:17:26 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:85:8e:01"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <target dev="tapa029f6c5-45"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700/console.log" append="off"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:17:26 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:17:26 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:17:26 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:17:26 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.174 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Preparing to wait for external event network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.175 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.175 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.175 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.176 253665 DEBUG nova.virt.libvirt.vif [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1595917141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1595917141',id=47,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0b711aaafbb94138a8f95e1e15d0f0a4',ramdisk_id='',reservation_id='r-q7exj5e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1454989706',owner_user_name='tempest-AttachInterfacesV270Test-1454989706-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:20Z,user_data=None,user_id='31a3f645b946468d9e6fe3b907dfdc0b',uuid=971e37bd-eb33-42b7-b5c7-86eff88cb700,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.176 253665 DEBUG nova.network.os_vif_util [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converting VIF {"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.177 253665 DEBUG nova.network.os_vif_util [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:8e:01,bridge_name='br-int',has_traffic_filtering=True,id=a029f6c5-4597-4645-9974-c282b8014824,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa029f6c5-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.177 253665 DEBUG os_vif [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:8e:01,bridge_name='br-int',has_traffic_filtering=True,id=a029f6c5-4597-4645-9974-c282b8014824,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa029f6c5-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.177 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.178 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.178 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.180 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.180 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa029f6c5-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.181 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa029f6c5-45, col_values=(('external_ids', {'iface-id': 'a029f6c5-4597-4645-9974-c282b8014824', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:8e:01', 'vm-uuid': '971e37bd-eb33-42b7-b5c7-86eff88cb700'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:26 np0005532048 NetworkManager[48920]: <info>  [1763803046.2245] manager: (tapa029f6c5-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.224 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.228 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.232 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.233 253665 INFO os_vif [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:8e:01,bridge_name='br-int',has_traffic_filtering=True,id=a029f6c5-4597-4645-9974-c282b8014824,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa029f6c5-45')#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.298 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.299 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.299 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] No VIF found with MAC fa:16:3e:85:8e:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.300 253665 INFO nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Using config drive#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.325 253665 DEBUG nova.storage.rbd_utils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] rbd image 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.624 253665 DEBUG nova.network.neutron [-] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.648 253665 INFO nova.compute.manager [-] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Took 2.00 seconds to deallocate network for instance.#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.654 253665 INFO nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Creating config drive at /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac/disk.config#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.659 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb277ybpr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.744 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.745 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.759 253665 DEBUG nova.network.neutron [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Updated VIF entry in instance network info cache for port e8eabe8a-7cdb-44e6-9266-09d08038b4ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.759 253665 DEBUG nova.network.neutron [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Updating instance_info_cache with network_info: [{"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.774 253665 DEBUG oslo_concurrency.lockutils [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2964b30c-ab3b-4bab-8f11-2492007f83ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.775 253665 DEBUG nova.compute.manager [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-unplugged-1f84d052-9d22-469d-b43d-259c9b54bcaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.775 253665 DEBUG oslo_concurrency.lockutils [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.775 253665 DEBUG oslo_concurrency.lockutils [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.775 253665 DEBUG oslo_concurrency.lockutils [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.775 253665 DEBUG nova.compute.manager [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] No waiting events found dispatching network-vif-unplugged-1f84d052-9d22-469d-b43d-259c9b54bcaf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.775 253665 DEBUG nova.compute.manager [req-a9d846f6-2851-469b-91f1-60466741e88b req-4eb1846b-25f9-467a-99fa-5999be7ba148 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-unplugged-1f84d052-9d22-469d-b43d-259c9b54bcaf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.799 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803031.6466799, 8dafc0d0-bd93-4080-b51e-36887936ea66 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.800 253665 INFO nova.compute.manager [-] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.806 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb277ybpr" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.832 253665 DEBUG nova.storage.rbd_utils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.836 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac/disk.config 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.887 253665 DEBUG nova.compute.manager [None req-d98b4322-b187-491d-a121-13a1a03a14a1 - - - - - -] [instance: 8dafc0d0-bd93-4080-b51e-36887936ea66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.889 253665 DEBUG nova.compute.manager [req-176d886a-0394-4da7-af0c-e4477ed3cccb req-59354aa8-df53-46ce-b281-e18c96c64084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.889 253665 DEBUG oslo_concurrency.lockutils [req-176d886a-0394-4da7-af0c-e4477ed3cccb req-59354aa8-df53-46ce-b281-e18c96c64084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.890 253665 DEBUG oslo_concurrency.lockutils [req-176d886a-0394-4da7-af0c-e4477ed3cccb req-59354aa8-df53-46ce-b281-e18c96c64084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.890 253665 DEBUG oslo_concurrency.lockutils [req-176d886a-0394-4da7-af0c-e4477ed3cccb req-59354aa8-df53-46ce-b281-e18c96c64084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.890 253665 DEBUG nova.compute.manager [req-176d886a-0394-4da7-af0c-e4477ed3cccb req-59354aa8-df53-46ce-b281-e18c96c64084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] No waiting events found dispatching network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.891 253665 WARNING nova.compute.manager [req-176d886a-0394-4da7-af0c-e4477ed3cccb req-59354aa8-df53-46ce-b281-e18c96c64084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received unexpected event network-vif-plugged-1f84d052-9d22-469d-b43d-259c9b54bcaf for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.891 253665 DEBUG nova.compute.manager [req-176d886a-0394-4da7-af0c-e4477ed3cccb req-59354aa8-df53-46ce-b281-e18c96c64084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-deleted-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.891 253665 DEBUG nova.compute.manager [req-176d886a-0394-4da7-af0c-e4477ed3cccb req-59354aa8-df53-46ce-b281-e18c96c64084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-deleted-1f84d052-9d22-469d-b43d-259c9b54bcaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:26 np0005532048 nova_compute[253661]: 2025-11-22 09:17:26.933 253665 DEBUG oslo_concurrency.processutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.250 253665 INFO nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Creating config drive at /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700/disk.config#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.255 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xt9tp0z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.311 253665 DEBUG oslo_concurrency.processutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac/disk.config 2964b30c-ab3b-4bab-8f11-2492007f83ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.312 253665 INFO nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Deleting local config drive /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac/disk.config because it was imported into RBD.#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.340 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Updating instance_info_cache with network_info: [{"id": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "address": "fa:16:3e:ca:e5:b4", "network": {"id": "843f0308-8d5e-40fc-9082-c0a02b73f832", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1404006807", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.82", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f84d052-9d", "ovs_interfaceid": "1f84d052-9d22-469d-b43d-259c9b54bcaf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "address": "fa:16:3e:f9:be:fa", "network": {"id": "5d251c03-e62d-4f4a-933e-92ba86d2f7be", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1894602210", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.147", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d156ca65e214b4aacdf111fd47dc4f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb12fa008-cd", "ovs_interfaceid": "b12fa008-cd82-4cf3-8abd-a89e90fb9e4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 168 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.6 MiB/s wr, 220 op/s
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.362 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-a63d88a0-884c-4328-a21c-6bedf9264f2e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.362 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.363 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.363 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.363 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.363 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.364 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.364 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:17:27 np0005532048 NetworkManager[48920]: <info>  [1763803047.3720] manager: (tape8eabe8a-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/187)
Nov 22 04:17:27 np0005532048 kernel: tape8eabe8a-7c: entered promiscuous mode
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.378 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:27Z|00424|binding|INFO|Claiming lport e8eabe8a-7cdb-44e6-9266-09d08038b4ea for this chassis.
Nov 22 04:17:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:27Z|00425|binding|INFO|e8eabe8a-7cdb-44e6-9266-09d08038b4ea: Claiming fa:16:3e:80:82:17 10.100.0.12
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.391 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:82:17 10.100.0.12'], port_security=['fa:16:3e:80:82:17 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2964b30c-ab3b-4bab-8f11-2492007f83ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=e8eabe8a-7cdb-44e6-9266-09d08038b4ea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.392 162862 INFO neutron.agent.ovn.metadata.agent [-] Port e8eabe8a-7cdb-44e6-9266-09d08038b4ea in datapath d93e3720-b00d-41f5-8283-164e9f857d24 bound to our chassis#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.394 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d93e3720-b00d-41f5-8283-164e9f857d24#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.395 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.400 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6xt9tp0z" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:17:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028693184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.411 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f75f9046-2cfe-434a-98a3-e725d7260be9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.412 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd93e3720-b1 in ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.414 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd93e3720-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.414 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d86acd0e-75f2-41c6-972c-c62df11a321c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.415 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2a57c98d-a56f-4290-b661-b8264cb868ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:27 np0005532048 systemd-machined[215941]: New machine qemu-52-instance-00000030.
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.429 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e293849e-251e-42eb-a508-d2ce7cfba72e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.433 253665 DEBUG nova.storage.rbd_utils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] rbd image 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.436 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700/disk.config 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:27 np0005532048 systemd[1]: Started Virtual Machine qemu-52-instance-00000030.
Nov 22 04:17:27 np0005532048 systemd-udevd[307453]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.458 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[808206d9-2d07-484f-8172-5af0b4070f85]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:27 np0005532048 NetworkManager[48920]: <info>  [1763803047.4739] device (tape8eabe8a-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:17:27 np0005532048 NetworkManager[48920]: <info>  [1763803047.4758] device (tape8eabe8a-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:17:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:27Z|00426|binding|INFO|Setting lport e8eabe8a-7cdb-44e6-9266-09d08038b4ea ovn-installed in OVS
Nov 22 04:17:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:27Z|00427|binding|INFO|Setting lport e8eabe8a-7cdb-44e6-9266-09d08038b4ea up in Southbound
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.478 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.488 253665 DEBUG oslo_concurrency.processutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.497 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[067df6bc-5b0b-4e11-a2ae-ad269bcb4cd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:27 np0005532048 systemd-udevd[307459]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:17:27 np0005532048 NetworkManager[48920]: <info>  [1763803047.5048] manager: (tapd93e3720-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/188)
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.506 253665 DEBUG nova.compute.provider_tree [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.503 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[790f8856-6b4a-4d70-aeae-4462af576711]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.522 253665 DEBUG nova.scheduler.client.report [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.543 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[23170e48-bfbb-4845-a829-4567dfa5139f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.546 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7bee5dc0-3aec-4373-bc16-2de0e043ed67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.551 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.555 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.556 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.556 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.556 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:27 np0005532048 NetworkManager[48920]: <info>  [1763803047.5768] device (tapd93e3720-b0): carrier: link connected
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.586 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5f31cedf-3f8d-4c5e-ad70-1d7a52caa05a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.605 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b4fc1a-15f8-4616-b0fb-73944fbfed02]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589831, 'reachable_time': 33339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307523, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.627 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a102cee2-fd5f-44d2-b9c1-768bd193c6ef]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:9b56'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589831, 'tstamp': 589831}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307528, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.628 253665 INFO nova.scheduler.client.report [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Deleted allocations for instance a63d88a0-884c-4328-a21c-6bedf9264f2e#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.650 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f77f93a9-a297-4afa-b405-3143d256cf11]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589831, 'reachable_time': 33339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307534, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.655 253665 DEBUG oslo_concurrency.processutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700/disk.config 971e37bd-eb33-42b7-b5c7-86eff88cb700_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.656 253665 INFO nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Deleting local config drive /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700/disk.config because it was imported into RBD.#033[00m
Nov 22 04:17:27 np0005532048 podman[307479]: 2025-11-22 09:17:27.668668989 +0000 UTC m=+0.126327581 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.692 253665 DEBUG oslo_concurrency.lockutils [None req-953fca54-4402-48a5-a5ba-1333d219237a c5ae8af2cc9f40e083473a191ddd445f 2d156ca65e214b4aacdf111fd47dc4f6 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.698 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[051d6987-3829-499a-82e9-ef443e319193]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:27 np0005532048 kernel: tapa029f6c5-45: entered promiscuous mode
Nov 22 04:17:27 np0005532048 NetworkManager[48920]: <info>  [1763803047.7174] manager: (tapa029f6c5-45): new Tun device (/org/freedesktop/NetworkManager/Devices/189)
Nov 22 04:17:27 np0005532048 systemd-udevd[307476]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:17:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:27Z|00428|binding|INFO|Claiming lport a029f6c5-4597-4645-9974-c282b8014824 for this chassis.
Nov 22 04:17:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:27Z|00429|binding|INFO|a029f6c5-4597-4645-9974-c282b8014824: Claiming fa:16:3e:85:8e:01 10.100.0.11
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.718 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:27 np0005532048 NetworkManager[48920]: <info>  [1763803047.7325] device (tapa029f6c5-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.733 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:8e:01 10.100.0.11'], port_security=['fa:16:3e:85:8e:01 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '971e37bd-eb33-42b7-b5c7-86eff88cb700', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b711aaafbb94138a8f95e1e15d0f0a4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82b2fc18-34ac-4d00-9742-ce510a84048d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=808e605d-2f47-40d3-afaa-9b5d699201f0, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a029f6c5-4597-4645-9974-c282b8014824) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:27 np0005532048 NetworkManager[48920]: <info>  [1763803047.7338] device (tapa029f6c5-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:17:27 np0005532048 systemd-machined[215941]: New machine qemu-53-instance-0000002f.
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.791 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:27 np0005532048 systemd[1]: Started Virtual Machine qemu-53-instance-0000002f.
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.794 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a90c1e89-b18e-4c64-90bc-687f1822af0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.796 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.797 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.797 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd93e3720-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:27Z|00430|binding|INFO|Setting lport a029f6c5-4597-4645-9974-c282b8014824 ovn-installed in OVS
Nov 22 04:17:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:27Z|00431|binding|INFO|Setting lport a029f6c5-4597-4645-9974-c282b8014824 up in Southbound
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.799 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:27 np0005532048 NetworkManager[48920]: <info>  [1763803047.8002] manager: (tapd93e3720-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Nov 22 04:17:27 np0005532048 kernel: tapd93e3720-b0: entered promiscuous mode
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.808 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd93e3720-b0, col_values=(('external_ids', {'iface-id': '956ab441-c5ef-4e3d-a7c6-6129a5260345'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:27Z|00432|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.810 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.811 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.812 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.813 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[81fdb9cd-00a4-451d-95bc-15dec2fe97a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.814 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.815 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'env', 'PROCESS_TAG=haproxy-d93e3720-b00d-41f5-8283-164e9f857d24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d93e3720-b00d-41f5-8283-164e9f857d24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:17:27 np0005532048 nova_compute[253661]: 2025-11-22 09:17:27.831 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.960 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.962 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:27.962 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:17:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4177867885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.092 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803048.090278, 2964b30c-ab3b-4bab-8f11-2492007f83ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.093 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] VM Started (Lifecycle Event)#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.095 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.121 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.131 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803048.091522, 2964b30c-ab3b-4bab-8f11-2492007f83ac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.133 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.156 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.160 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.191 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.198 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.199 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.204 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.204 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.205 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803048.203779, 971e37bd-eb33-42b7-b5c7-86eff88cb700 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.205 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] VM Started (Lifecycle Event)#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.217 253665 DEBUG nova.compute.manager [req-92011986-81cf-4384-96db-16ceacc8752a req-d9f95b4e-5ab0-4e8c-a68b-cbcbb68d6d97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received event network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.217 253665 DEBUG oslo_concurrency.lockutils [req-92011986-81cf-4384-96db-16ceacc8752a req-d9f95b4e-5ab0-4e8c-a68b-cbcbb68d6d97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.218 253665 DEBUG oslo_concurrency.lockutils [req-92011986-81cf-4384-96db-16ceacc8752a req-d9f95b4e-5ab0-4e8c-a68b-cbcbb68d6d97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.218 253665 DEBUG oslo_concurrency.lockutils [req-92011986-81cf-4384-96db-16ceacc8752a req-d9f95b4e-5ab0-4e8c-a68b-cbcbb68d6d97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.218 253665 DEBUG nova.compute.manager [req-92011986-81cf-4384-96db-16ceacc8752a req-d9f95b4e-5ab0-4e8c-a68b-cbcbb68d6d97 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Processing event network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.219 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.229 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.231 253665 DEBUG nova.network.neutron [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Updated VIF entry in instance network info cache for port a029f6c5-4597-4645-9974-c282b8014824. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.231 253665 DEBUG nova.network.neutron [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Updating instance_info_cache with network_info: [{"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.235 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.239 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803048.203899, 971e37bd-eb33-42b7-b5c7-86eff88cb700 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.239 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.242 253665 INFO nova.virt.libvirt.driver [-] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Instance spawned successfully.#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.243 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.245 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.246 253665 DEBUG nova.compute.manager [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-unplugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.246 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.246 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.246 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.246 253665 DEBUG nova.compute.manager [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] No waiting events found dispatching network-vif-unplugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.247 253665 DEBUG nova.compute.manager [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-unplugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.247 253665 DEBUG nova.compute.manager [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received event network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.247 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.247 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.247 253665 DEBUG oslo_concurrency.lockutils [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a63d88a0-884c-4328-a21c-6bedf9264f2e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.248 253665 DEBUG nova.compute.manager [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] No waiting events found dispatching network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.248 253665 WARNING nova.compute.manager [req-d6ec03af-eebc-448c-a4cd-dd72ba119b49 req-4ebf1472-1724-446a-9017-163f1f58171e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Received unexpected event network-vif-plugged-b12fa008-cd82-4cf3-8abd-a89e90fb9e4c for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.259 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.268 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.272 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.272 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.273 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.273 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.274 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.274 253665 DEBUG nova.virt.libvirt.driver [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:28 np0005532048 podman[307691]: 2025-11-22 09:17:28.281786523 +0000 UTC m=+0.077032051 container create 03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.298 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.299 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803048.2268512, 2964b30c-ab3b-4bab-8f11-2492007f83ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.299 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.322 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.326 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:17:28 np0005532048 podman[307691]: 2025-11-22 09:17:28.240668651 +0000 UTC m=+0.035914179 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.333 253665 INFO nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Took 6.66 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.334 253665 DEBUG nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.343 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:17:28 np0005532048 systemd[1]: Started libpod-conmon-03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688.scope.
Nov 22 04:17:28 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:17:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff0afd4b1c1c84ce2231b8246fcd82b5ce225f423297f3fe953c53d2600a88c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.401 253665 INFO nova.compute.manager [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Took 8.33 seconds to build instance.#033[00m
Nov 22 04:17:28 np0005532048 podman[307691]: 2025-11-22 09:17:28.420131964 +0000 UTC m=+0.215377512 container init 03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.422 253665 DEBUG oslo_concurrency.lockutils [None req-55d566c6-fc24-4bee-963e-01a0ac95bac1 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:28 np0005532048 podman[307691]: 2025-11-22 09:17:28.428027764 +0000 UTC m=+0.223273292 container start 03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:17:28 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[307706]: [NOTICE]   (307710) : New worker (307712) forked
Nov 22 04:17:28 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[307706]: [NOTICE]   (307710) : Loading success.
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.520 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.521 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4101MB free_disk=59.933631896972656GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.521 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.521 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.532 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a029f6c5-4597-4645-9974-c282b8014824 in datapath d29d4d6f-5bba-4588-9a6e-d6174b2f2613 unbound from our chassis#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.535 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d29d4d6f-5bba-4588-9a6e-d6174b2f2613#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.551 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[daecf899-360d-44ad-8ea4-5720f3c5cdd5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.552 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd29d4d6f-51 in ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.555 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd29d4d6f-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.555 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dce610b3-0ee7-4dfc-b911-0aaa7e43293e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.556 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[07d5c3c3-c4c1-47e5-b45b-79c043a13d98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.569 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[5668bb20-a337-4c51-9f80-9b99cded4e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.586 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[885a6c08-97b7-41ce-845a-ee7fd801367c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.588 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 971e37bd-eb33-42b7-b5c7-86eff88cb700 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.588 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 2964b30c-ab3b-4bab-8f11-2492007f83ac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.588 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.589 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.625 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c2738df5-ccbd-4ef8-83e8-e60a922f7e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:28 np0005532048 NetworkManager[48920]: <info>  [1763803048.6348] manager: (tapd29d4d6f-50): new Veth device (/org/freedesktop/NetworkManager/Devices/191)
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.636 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4361a0-8cc0-49fe-8b39-5df73782e721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.640 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.679 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8819d0e8-3fac-43b4-b1c1-fabdee950bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.684 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3588ee82-35b2-4a53-a738-80d82f51e37e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:28 np0005532048 NetworkManager[48920]: <info>  [1763803048.7137] device (tapd29d4d6f-50): carrier: link connected
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.720 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[caee8f1c-5977-4699-a62b-da10200e94f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.742 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[12d4a634-6505-4040-9f09-61fd488551fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd29d4d6f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:fd:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589945, 'reachable_time': 17368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307732, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.765 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb5160b-2d1f-4820-9511-c93a4b68f235]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe10:fd66'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589945, 'tstamp': 589945}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307733, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.786 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4a54e7-343c-438b-950d-bb962ea4ccde]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd29d4d6f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:fd:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589945, 'reachable_time': 17368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307735, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.821 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[308134b8-89d1-4246-94eb-1b2203772855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.895 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[972a00ae-74d2-45d1-bbd7-dde992434173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.899 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd29d4d6f-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.899 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.900 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd29d4d6f-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:28 np0005532048 NetworkManager[48920]: <info>  [1763803048.9032] manager: (tapd29d4d6f-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Nov 22 04:17:28 np0005532048 kernel: tapd29d4d6f-50: entered promiscuous mode
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.902 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.906 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd29d4d6f-50, col_values=(('external_ids', {'iface-id': 'c20b47b0-28aa-4f40-a285-cb154afca96a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:28 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:28Z|00433|binding|INFO|Releasing lport c20b47b0-28aa-4f40-a285-cb154afca96a from this chassis (sb_readonly=0)
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.924 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d29d4d6f-5bba-4588-9a6e-d6174b2f2613.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d29d4d6f-5bba-4588-9a6e-d6174b2f2613.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.925 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d2def3f0-771e-4d85-9d8c-429ea92d5992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.926 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-d29d4d6f-5bba-4588-9a6e-d6174b2f2613
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/d29d4d6f-5bba-4588-9a6e-d6174b2f2613.pid.haproxy
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID d29d4d6f-5bba-4588-9a6e-d6174b2f2613
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:28.927 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'env', 'PROCESS_TAG=haproxy-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d29d4d6f-5bba-4588-9a6e-d6174b2f2613.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.929 253665 DEBUG nova.compute.manager [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.930 253665 DEBUG oslo_concurrency.lockutils [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.931 253665 DEBUG oslo_concurrency.lockutils [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.932 253665 DEBUG oslo_concurrency.lockutils [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.932 253665 DEBUG nova.compute.manager [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Processing event network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.933 253665 DEBUG nova.compute.manager [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.934 253665 DEBUG oslo_concurrency.lockutils [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.935 253665 DEBUG oslo_concurrency.lockutils [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.935 253665 DEBUG oslo_concurrency.lockutils [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.936 253665 DEBUG nova.compute.manager [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] No waiting events found dispatching network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.936 253665 WARNING nova.compute.manager [req-f7696a31-a5d4-478f-9f14-7c247af92374 req-b050926f-c79b-463b-999a-49a671471b7e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received unexpected event network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.937 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.939 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.944 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803048.9438565, 971e37bd-eb33-42b7-b5c7-86eff88cb700 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.945 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.948 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.955 253665 INFO nova.virt.libvirt.driver [-] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Instance spawned successfully.#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.957 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.969 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803033.9682946, 4589c5da-d558-41a1-bf54-30746991be9e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.970 253665 INFO nova.compute.manager [-] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.976 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.987 253665 DEBUG nova.compute.manager [None req-06a5be85-6028-4699-9254-031adc6feaaf - - - - - -] [instance: 4589c5da-d558-41a1-bf54-30746991be9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.989 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.992 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.993 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.994 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.994 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.994 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:28 np0005532048 nova_compute[253661]: 2025-11-22 09:17:28.995 253665 DEBUG nova.virt.libvirt.driver [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.026 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.059 253665 INFO nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Took 8.15 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.060 253665 DEBUG nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:17:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1589551161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.123 253665 INFO nova.compute.manager [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Took 9.19 seconds to build instance.#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.140 253665 DEBUG oslo_concurrency.lockutils [None req-7728b66b-33ed-4751-b7db-4d2320b242ce 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.142 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.147 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.174 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.202 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.202 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 134 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 188 op/s
Nov 22 04:17:29 np0005532048 podman[307788]: 2025-11-22 09:17:29.372889288 +0000 UTC m=+0.063891343 container create 4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 04:17:29 np0005532048 systemd[1]: Started libpod-conmon-4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30.scope.
Nov 22 04:17:29 np0005532048 podman[307788]: 2025-11-22 09:17:29.335497206 +0000 UTC m=+0.026499281 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.442 253665 INFO nova.compute.manager [None req-e547e8aa-6b04-4f8c-a7aa-aa4de0fce2e3 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Pausing#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.443 253665 DEBUG nova.objects.instance [None req-e547e8aa-6b04-4f8c-a7aa-aa4de0fce2e3 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'flavor' on Instance uuid 2964b30c-ab3b-4bab-8f11-2492007f83ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:29 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:17:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad2c1a91593403ac1a5427beee5092aa718b4ae7eb2816e33351712b63d9327/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:29 np0005532048 podman[307788]: 2025-11-22 09:17:29.466854607 +0000 UTC m=+0.157856662 container init 4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.467 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803049.4672964, 2964b30c-ab3b-4bab-8f11-2492007f83ac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.468 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.470 253665 DEBUG nova.compute.manager [None req-e547e8aa-6b04-4f8c-a7aa-aa4de0fce2e3 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:29 np0005532048 podman[307788]: 2025-11-22 09:17:29.473200621 +0000 UTC m=+0.164202666 container start 4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.491 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.496 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:17:29 np0005532048 neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613[307803]: [NOTICE]   (307807) : New worker (307809) forked
Nov 22 04:17:29 np0005532048 neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613[307803]: [NOTICE]   (307807) : Loading success.
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.518 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.961 253665 DEBUG oslo_concurrency.lockutils [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "interface-971e37bd-eb33-42b7-b5c7-86eff88cb700-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.962 253665 DEBUG oslo_concurrency.lockutils [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "interface-971e37bd-eb33-42b7-b5c7-86eff88cb700-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.963 253665 DEBUG nova.objects.instance [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lazy-loading 'flavor' on Instance uuid 971e37bd-eb33-42b7-b5c7-86eff88cb700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.983 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:29 np0005532048 nova_compute[253661]: 2025-11-22 09:17:29.990 253665 DEBUG nova.objects.instance [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lazy-loading 'pci_requests' on Instance uuid 971e37bd-eb33-42b7-b5c7-86eff88cb700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:30 np0005532048 nova_compute[253661]: 2025-11-22 09:17:30.011 253665 DEBUG nova.network.neutron [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:17:30 np0005532048 nova_compute[253661]: 2025-11-22 09:17:30.369 253665 DEBUG nova.compute.manager [req-d4ca2a26-c9a2-4f9f-b4b1-d2bf033c42e5 req-42619e22-409a-40a6-8101-8c9ebf7670a1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received event network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:30 np0005532048 nova_compute[253661]: 2025-11-22 09:17:30.369 253665 DEBUG oslo_concurrency.lockutils [req-d4ca2a26-c9a2-4f9f-b4b1-d2bf033c42e5 req-42619e22-409a-40a6-8101-8c9ebf7670a1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:30 np0005532048 nova_compute[253661]: 2025-11-22 09:17:30.370 253665 DEBUG oslo_concurrency.lockutils [req-d4ca2a26-c9a2-4f9f-b4b1-d2bf033c42e5 req-42619e22-409a-40a6-8101-8c9ebf7670a1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:30 np0005532048 nova_compute[253661]: 2025-11-22 09:17:30.370 253665 DEBUG oslo_concurrency.lockutils [req-d4ca2a26-c9a2-4f9f-b4b1-d2bf033c42e5 req-42619e22-409a-40a6-8101-8c9ebf7670a1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:30 np0005532048 nova_compute[253661]: 2025-11-22 09:17:30.370 253665 DEBUG nova.compute.manager [req-d4ca2a26-c9a2-4f9f-b4b1-d2bf033c42e5 req-42619e22-409a-40a6-8101-8c9ebf7670a1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] No waiting events found dispatching network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:30 np0005532048 nova_compute[253661]: 2025-11-22 09:17:30.371 253665 WARNING nova.compute.manager [req-d4ca2a26-c9a2-4f9f-b4b1-d2bf033c42e5 req-42619e22-409a-40a6-8101-8c9ebf7670a1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received unexpected event network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea for instance with vm_state paused and task_state None.#033[00m
Nov 22 04:17:30 np0005532048 nova_compute[253661]: 2025-11-22 09:17:30.554 253665 DEBUG nova.policy [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31a3f645b946468d9e6fe3b907dfdc0b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0b711aaafbb94138a8f95e1e15d0f0a4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:17:31 np0005532048 nova_compute[253661]: 2025-11-22 09:17:31.224 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 134 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Nov 22 04:17:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:31 np0005532048 nova_compute[253661]: 2025-11-22 09:17:31.511 253665 DEBUG nova.network.neutron [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Successfully created port: 0368daf2-eb08-4459-8b7b-e5a565dbb954 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:17:31 np0005532048 nova_compute[253661]: 2025-11-22 09:17:31.930 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2964b30c-ab3b-4bab-8f11-2492007f83ac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:31 np0005532048 nova_compute[253661]: 2025-11-22 09:17:31.931 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:31 np0005532048 nova_compute[253661]: 2025-11-22 09:17:31.931 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:31 np0005532048 nova_compute[253661]: 2025-11-22 09:17:31.932 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:31 np0005532048 nova_compute[253661]: 2025-11-22 09:17:31.932 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:31 np0005532048 nova_compute[253661]: 2025-11-22 09:17:31.934 253665 INFO nova.compute.manager [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Terminating instance#033[00m
Nov 22 04:17:31 np0005532048 nova_compute[253661]: 2025-11-22 09:17:31.935 253665 DEBUG nova.compute.manager [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:17:32 np0005532048 kernel: tape8eabe8a-7c (unregistering): left promiscuous mode
Nov 22 04:17:32 np0005532048 NetworkManager[48920]: <info>  [1763803052.0874] device (tape8eabe8a-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.100 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:32Z|00434|binding|INFO|Releasing lport e8eabe8a-7cdb-44e6-9266-09d08038b4ea from this chassis (sb_readonly=0)
Nov 22 04:17:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:32Z|00435|binding|INFO|Setting lport e8eabe8a-7cdb-44e6-9266-09d08038b4ea down in Southbound
Nov 22 04:17:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:32Z|00436|binding|INFO|Removing iface tape8eabe8a-7c ovn-installed in OVS
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.107 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.111 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:82:17 10.100.0.12'], port_security=['fa:16:3e:80:82:17 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2964b30c-ab3b-4bab-8f11-2492007f83ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=e8eabe8a-7cdb-44e6-9266-09d08038b4ea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.113 162862 INFO neutron.agent.ovn.metadata.agent [-] Port e8eabe8a-7cdb-44e6-9266-09d08038b4ea in datapath d93e3720-b00d-41f5-8283-164e9f857d24 unbound from our chassis#033[00m
Nov 22 04:17:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.115 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d93e3720-b00d-41f5-8283-164e9f857d24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:17:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.116 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ce9350-4829-4e37-a22f-348c84c480e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.117 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace which is not needed anymore#033[00m
Nov 22 04:17:32 np0005532048 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000030.scope: Deactivated successfully.
Nov 22 04:17:32 np0005532048 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000030.scope: Consumed 1.804s CPU time.
Nov 22 04:17:32 np0005532048 systemd-machined[215941]: Machine qemu-52-instance-00000030 terminated.
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.140 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.174 253665 INFO nova.virt.libvirt.driver [-] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Instance destroyed successfully.#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.175 253665 DEBUG nova.objects.instance [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'resources' on Instance uuid 2964b30c-ab3b-4bab-8f11-2492007f83ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.189 253665 DEBUG nova.virt.libvirt.vif [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1581347636',display_name='tempest-DeleteServersTestJSON-server-1581347636',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1581347636',id=48,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-3v2ufiuv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:29Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=2964b30c-ab3b-4bab-8f11-2492007f83ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.189 253665 DEBUG nova.network.os_vif_util [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "address": "fa:16:3e:80:82:17", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8eabe8a-7c", "ovs_interfaceid": "e8eabe8a-7cdb-44e6-9266-09d08038b4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.190 253665 DEBUG nova.network.os_vif_util [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:82:17,bridge_name='br-int',has_traffic_filtering=True,id=e8eabe8a-7cdb-44e6-9266-09d08038b4ea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8eabe8a-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.191 253665 DEBUG os_vif [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:82:17,bridge_name='br-int',has_traffic_filtering=True,id=e8eabe8a-7cdb-44e6-9266-09d08038b4ea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8eabe8a-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.192 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.193 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8eabe8a-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.196 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.200 253665 INFO os_vif [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:82:17,bridge_name='br-int',has_traffic_filtering=True,id=e8eabe8a-7cdb-44e6-9266-09d08038b4ea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8eabe8a-7c')#033[00m
Nov 22 04:17:32 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[307706]: [NOTICE]   (307710) : haproxy version is 2.8.14-c23fe91
Nov 22 04:17:32 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[307706]: [NOTICE]   (307710) : path to executable is /usr/sbin/haproxy
Nov 22 04:17:32 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[307706]: [WARNING]  (307710) : Exiting Master process...
Nov 22 04:17:32 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[307706]: [WARNING]  (307710) : Exiting Master process...
Nov 22 04:17:32 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[307706]: [ALERT]    (307710) : Current worker (307712) exited with code 143 (Terminated)
Nov 22 04:17:32 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[307706]: [WARNING]  (307710) : All workers exited. Exiting... (0)
Nov 22 04:17:32 np0005532048 systemd[1]: libpod-03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688.scope: Deactivated successfully.
Nov 22 04:17:32 np0005532048 podman[307853]: 2025-11-22 09:17:32.331965097 +0000 UTC m=+0.106445987 container died 03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:17:32 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688-userdata-shm.mount: Deactivated successfully.
Nov 22 04:17:32 np0005532048 systemd[1]: var-lib-containers-storage-overlay-eff0afd4b1c1c84ce2231b8246fcd82b5ce225f423297f3fe953c53d2600a88c-merged.mount: Deactivated successfully.
Nov 22 04:17:32 np0005532048 podman[307853]: 2025-11-22 09:17:32.572790169 +0000 UTC m=+0.347271039 container cleanup 03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:17:32 np0005532048 systemd[1]: libpod-conmon-03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688.scope: Deactivated successfully.
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.679 253665 DEBUG nova.network.neutron [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Successfully updated port: 0368daf2-eb08-4459-8b7b-e5a565dbb954 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.694 253665 DEBUG oslo_concurrency.lockutils [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.694 253665 DEBUG oslo_concurrency.lockutils [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquired lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.694 253665 DEBUG nova.network.neutron [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:17:32 np0005532048 podman[307900]: 2025-11-22 09:17:32.830707946 +0000 UTC m=+0.234155381 container remove 03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:17:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.844 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[69800ddd-6079-46a4-96b9-374bfde07ca7]: (4, ('Sat Nov 22 09:17:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688)\n03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688\nSat Nov 22 09:17:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688)\n03b5455d5411cba85e292b84c6d2830e282c8003d4764bcef61f337d43808688\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.847 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7d772891-e102-40a4-86c5-407a004a5e89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.849 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:32 np0005532048 kernel: tapd93e3720-b0: left promiscuous mode
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.852 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.869 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.872 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[487e1fd3-e65c-4c51-947b-2fd42d4e7198]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.887 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[781af5c9-fc18-4911-aded-e5df27023638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.889 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c094be-5cce-46e4-b5f1-4658e05d41be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.901 253665 WARNING nova.network.neutron [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] d29d4d6f-5bba-4588-9a6e-d6174b2f2613 already exists in list: networks containing: ['d29d4d6f-5bba-4588-9a6e-d6174b2f2613']. ignoring it#033[00m
Nov 22 04:17:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.913 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2b3ced-2e39-4114-835f-221005bf5c82]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589822, 'reachable_time': 40933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307914, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.916 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:17:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:32.916 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[51791ef9-adea-4cf8-9fa9-8fd02d63ba4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:32 np0005532048 systemd[1]: run-netns-ovnmeta\x2dd93e3720\x2db00d\x2d41f5\x2d8283\x2d164e9f857d24.mount: Deactivated successfully.
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.996 253665 DEBUG nova.compute.manager [req-ae36be97-e87c-4bb9-9baa-6264ab5484be req-3753245f-fe53-470d-a8cd-dd1b12632575 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received event network-vif-unplugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.996 253665 DEBUG oslo_concurrency.lockutils [req-ae36be97-e87c-4bb9-9baa-6264ab5484be req-3753245f-fe53-470d-a8cd-dd1b12632575 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.997 253665 DEBUG oslo_concurrency.lockutils [req-ae36be97-e87c-4bb9-9baa-6264ab5484be req-3753245f-fe53-470d-a8cd-dd1b12632575 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.997 253665 DEBUG oslo_concurrency.lockutils [req-ae36be97-e87c-4bb9-9baa-6264ab5484be req-3753245f-fe53-470d-a8cd-dd1b12632575 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.997 253665 DEBUG nova.compute.manager [req-ae36be97-e87c-4bb9-9baa-6264ab5484be req-3753245f-fe53-470d-a8cd-dd1b12632575 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] No waiting events found dispatching network-vif-unplugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:32 np0005532048 nova_compute[253661]: 2025-11-22 09:17:32.997 253665 DEBUG nova.compute.manager [req-ae36be97-e87c-4bb9-9baa-6264ab5484be req-3753245f-fe53-470d-a8cd-dd1b12632575 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received event network-vif-unplugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:17:33 np0005532048 nova_compute[253661]: 2025-11-22 09:17:33.055 253665 DEBUG nova.compute.manager [req-47196d67-4aa5-4ccd-91bf-96c945239259 req-10dd9321-c902-4720-983f-30fe118289fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-changed-0368daf2-eb08-4459-8b7b-e5a565dbb954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:33 np0005532048 nova_compute[253661]: 2025-11-22 09:17:33.055 253665 DEBUG nova.compute.manager [req-47196d67-4aa5-4ccd-91bf-96c945239259 req-10dd9321-c902-4720-983f-30fe118289fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Refreshing instance network info cache due to event network-changed-0368daf2-eb08-4459-8b7b-e5a565dbb954. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:17:33 np0005532048 nova_compute[253661]: 2025-11-22 09:17:33.056 253665 DEBUG oslo_concurrency.lockutils [req-47196d67-4aa5-4ccd-91bf-96c945239259 req-10dd9321-c902-4720-983f-30fe118289fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:17:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 134 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 200 op/s
Nov 22 04:17:34 np0005532048 nova_compute[253661]: 2025-11-22 09:17:34.264 253665 INFO nova.virt.libvirt.driver [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Deleting instance files /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac_del#033[00m
Nov 22 04:17:34 np0005532048 nova_compute[253661]: 2025-11-22 09:17:34.265 253665 INFO nova.virt.libvirt.driver [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Deletion of /var/lib/nova/instances/2964b30c-ab3b-4bab-8f11-2492007f83ac_del complete#033[00m
Nov 22 04:17:34 np0005532048 nova_compute[253661]: 2025-11-22 09:17:34.308 253665 INFO nova.compute.manager [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Took 2.37 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:17:34 np0005532048 nova_compute[253661]: 2025-11-22 09:17:34.309 253665 DEBUG oslo.service.loopingcall [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:17:34 np0005532048 nova_compute[253661]: 2025-11-22 09:17:34.309 253665 DEBUG nova.compute.manager [-] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:17:34 np0005532048 nova_compute[253661]: 2025-11-22 09:17:34.309 253665 DEBUG nova.network.neutron [-] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:17:34 np0005532048 nova_compute[253661]: 2025-11-22 09:17:34.986 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.119 253665 DEBUG nova.compute.manager [req-e2d95b01-9c88-48be-b37d-c5016abb3f46 req-2d047e93-3372-4cc2-b25a-f7e8cd2605fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received event network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.120 253665 DEBUG oslo_concurrency.lockutils [req-e2d95b01-9c88-48be-b37d-c5016abb3f46 req-2d047e93-3372-4cc2-b25a-f7e8cd2605fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.120 253665 DEBUG oslo_concurrency.lockutils [req-e2d95b01-9c88-48be-b37d-c5016abb3f46 req-2d047e93-3372-4cc2-b25a-f7e8cd2605fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.120 253665 DEBUG oslo_concurrency.lockutils [req-e2d95b01-9c88-48be-b37d-c5016abb3f46 req-2d047e93-3372-4cc2-b25a-f7e8cd2605fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.121 253665 DEBUG nova.compute.manager [req-e2d95b01-9c88-48be-b37d-c5016abb3f46 req-2d047e93-3372-4cc2-b25a-f7e8cd2605fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] No waiting events found dispatching network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.121 253665 WARNING nova.compute.manager [req-e2d95b01-9c88-48be-b37d-c5016abb3f46 req-2d047e93-3372-4cc2-b25a-f7e8cd2605fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received unexpected event network-vif-plugged-e8eabe8a-7cdb-44e6-9266-09d08038b4ea for instance with vm_state paused and task_state deleting.#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.195 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.252 253665 DEBUG nova.network.neutron [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Updating instance_info_cache with network_info: [{"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.270 253665 DEBUG oslo_concurrency.lockutils [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Releasing lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.271 253665 DEBUG oslo_concurrency.lockutils [req-47196d67-4aa5-4ccd-91bf-96c945239259 req-10dd9321-c902-4720-983f-30fe118289fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.271 253665 DEBUG nova.network.neutron [req-47196d67-4aa5-4ccd-91bf-96c945239259 req-10dd9321-c902-4720-983f-30fe118289fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Refreshing network info cache for port 0368daf2-eb08-4459-8b7b-e5a565dbb954 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.276 253665 DEBUG nova.virt.libvirt.vif [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1595917141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1595917141',id=47,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0b711aaafbb94138a8f95e1e15d0f0a4',ramdisk_id='',reservation_id='r-q7exj5e8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ra
m='0',owner_project_name='tempest-AttachInterfacesV270Test-1454989706',owner_user_name='tempest-AttachInterfacesV270Test-1454989706-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:29Z,user_data=None,user_id='31a3f645b946468d9e6fe3b907dfdc0b',uuid=971e37bd-eb33-42b7-b5c7-86eff88cb700,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.277 253665 DEBUG nova.network.os_vif_util [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converting VIF {"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.278 253665 DEBUG nova.network.os_vif_util [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:86:4f,bridge_name='br-int',has_traffic_filtering=True,id=0368daf2-eb08-4459-8b7b-e5a565dbb954,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0368daf2-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.278 253665 DEBUG os_vif [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:86:4f,bridge_name='br-int',has_traffic_filtering=True,id=0368daf2-eb08-4459-8b7b-e5a565dbb954,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0368daf2-eb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.279 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.279 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.280 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.287 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.288 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0368daf2-eb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.288 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0368daf2-eb, col_values=(('external_ids', {'iface-id': '0368daf2-eb08-4459-8b7b-e5a565dbb954', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:86:4f', 'vm-uuid': '971e37bd-eb33-42b7-b5c7-86eff88cb700'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:35 np0005532048 NetworkManager[48920]: <info>  [1763803055.2915] manager: (tap0368daf2-eb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.295 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.297 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.298 253665 INFO os_vif [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:86:4f,bridge_name='br-int',has_traffic_filtering=True,id=0368daf2-eb08-4459-8b7b-e5a565dbb954,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0368daf2-eb')#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.299 253665 DEBUG nova.virt.libvirt.vif [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1595917141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1595917141',id=47,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0b711aaafbb94138a8f95e1e15d0f0a4',ramdisk_id='',reservation_id='r-q7exj5e8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ra
m='0',owner_project_name='tempest-AttachInterfacesV270Test-1454989706',owner_user_name='tempest-AttachInterfacesV270Test-1454989706-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:29Z,user_data=None,user_id='31a3f645b946468d9e6fe3b907dfdc0b',uuid=971e37bd-eb33-42b7-b5c7-86eff88cb700,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.300 253665 DEBUG nova.network.os_vif_util [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converting VIF {"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.300 253665 DEBUG nova.network.os_vif_util [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:86:4f,bridge_name='br-int',has_traffic_filtering=True,id=0368daf2-eb08-4459-8b7b-e5a565dbb954,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0368daf2-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.305 253665 DEBUG nova.virt.libvirt.guest [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] attach device xml: <interface type="ethernet">
Nov 22 04:17:35 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:15:86:4f"/>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:  <target dev="tap0368daf2-eb"/>
Nov 22 04:17:35 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:17:35 np0005532048 nova_compute[253661]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 22 04:17:35 np0005532048 kernel: tap0368daf2-eb: entered promiscuous mode
Nov 22 04:17:35 np0005532048 NetworkManager[48920]: <info>  [1763803055.3271] manager: (tap0368daf2-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/194)
Nov 22 04:17:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:35Z|00437|binding|INFO|Claiming lport 0368daf2-eb08-4459-8b7b-e5a565dbb954 for this chassis.
Nov 22 04:17:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:35Z|00438|binding|INFO|0368daf2-eb08-4459-8b7b-e5a565dbb954: Claiming fa:16:3e:15:86:4f 10.100.0.13
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.334 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:86:4f 10.100.0.13'], port_security=['fa:16:3e:15:86:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '971e37bd-eb33-42b7-b5c7-86eff88cb700', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b711aaafbb94138a8f95e1e15d0f0a4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82b2fc18-34ac-4d00-9742-ce510a84048d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=808e605d-2f47-40d3-afaa-9b5d699201f0, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0368daf2-eb08-4459-8b7b-e5a565dbb954) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.335 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0368daf2-eb08-4459-8b7b-e5a565dbb954 in datapath d29d4d6f-5bba-4588-9a6e-d6174b2f2613 bound to our chassis#033[00m
Nov 22 04:17:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.337 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d29d4d6f-5bba-4588-9a6e-d6174b2f2613#033[00m
Nov 22 04:17:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 121 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.2 MiB/s wr, 264 op/s
Nov 22 04:17:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:35Z|00439|binding|INFO|Setting lport 0368daf2-eb08-4459-8b7b-e5a565dbb954 ovn-installed in OVS
Nov 22 04:17:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:35Z|00440|binding|INFO|Setting lport 0368daf2-eb08-4459-8b7b-e5a565dbb954 up in Southbound
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.351 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.354 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:35 np0005532048 systemd-udevd[307923]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:17:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.363 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[340bfab1-a3cc-4c5d-a5f4-d9573125d561]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:35 np0005532048 NetworkManager[48920]: <info>  [1763803055.3910] device (tap0368daf2-eb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:17:35 np0005532048 NetworkManager[48920]: <info>  [1763803055.3937] device (tap0368daf2-eb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.409 253665 DEBUG nova.virt.libvirt.driver [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.409 253665 DEBUG nova.virt.libvirt.driver [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.410 253665 DEBUG nova.virt.libvirt.driver [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] No VIF found with MAC fa:16:3e:85:8e:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.410 253665 DEBUG nova.virt.libvirt.driver [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] No VIF found with MAC fa:16:3e:15:86:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:17:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.411 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e921f4f7-14f1-4c20-9dd0-9ca69ace7cfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.415 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[cab2c15a-b5bc-4062-8218-ab2abbe5c044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.436 253665 DEBUG nova.virt.libvirt.guest [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:17:35 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:  <nova:name>tempest-AttachInterfacesV270Test-server-1595917141</nova:name>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:17:35</nova:creationTime>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:17:35 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:    <nova:user uuid="31a3f645b946468d9e6fe3b907dfdc0b">tempest-AttachInterfacesV270Test-1454989706-project-member</nova:user>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:    <nova:project uuid="0b711aaafbb94138a8f95e1e15d0f0a4">tempest-AttachInterfacesV270Test-1454989706</nova:project>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:    <nova:port uuid="a029f6c5-4597-4645-9974-c282b8014824">
Nov 22 04:17:35 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:    <nova:port uuid="0368daf2-eb08-4459-8b7b-e5a565dbb954">
Nov 22 04:17:35 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:17:35 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:17:35 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:17:35 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 22 04:17:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.452 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c62d095e-d0f0-40e3-b298-fe862945fa74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.469 253665 DEBUG oslo_concurrency.lockutils [None req-f77edf8f-391a-4f52-9072-6a826a7a1cb7 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "interface-971e37bd-eb33-42b7-b5c7-86eff88cb700-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.472 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2d59858b-3c04-49aa-8709-2726f5e7ec68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd29d4d6f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:fd:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589945, 'reachable_time': 17368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307930, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.494 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4af937a1-0ef3-4210-a727-42266e8b7c45]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd29d4d6f-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589959, 'tstamp': 589959}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307931, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd29d4d6f-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589962, 'tstamp': 589962}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307931, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.500 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd29d4d6f-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:35 np0005532048 nova_compute[253661]: 2025-11-22 09:17:35.517 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.518 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd29d4d6f-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.519 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.519 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd29d4d6f-50, col_values=(('external_ids', {'iface-id': 'c20b47b0-28aa-4f40-a285-cb154afca96a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:35.520 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:36 np0005532048 nova_compute[253661]: 2025-11-22 09:17:36.449 253665 DEBUG nova.network.neutron [-] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:36 np0005532048 nova_compute[253661]: 2025-11-22 09:17:36.464 253665 INFO nova.compute.manager [-] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Took 2.16 seconds to deallocate network for instance.#033[00m
Nov 22 04:17:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:36 np0005532048 nova_compute[253661]: 2025-11-22 09:17:36.506 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:36 np0005532048 nova_compute[253661]: 2025-11-22 09:17:36.507 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:36 np0005532048 nova_compute[253661]: 2025-11-22 09:17:36.575 253665 DEBUG oslo_concurrency.processutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:17:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/249014273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.023 253665 DEBUG oslo_concurrency.processutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.029 253665 DEBUG nova.compute.provider_tree [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.043 253665 DEBUG nova.scheduler.client.report [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.072 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.095 253665 INFO nova.scheduler.client.report [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Deleted allocations for instance 2964b30c-ab3b-4bab-8f11-2492007f83ac#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.156 253665 DEBUG oslo_concurrency.lockutils [None req-42579cee-c439-49e4-99ea-e2f96b4764b4 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2964b30c-ab3b-4bab-8f11-2492007f83ac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.201 253665 DEBUG nova.network.neutron [req-47196d67-4aa5-4ccd-91bf-96c945239259 req-10dd9321-c902-4720-983f-30fe118289fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Updated VIF entry in instance network info cache for port 0368daf2-eb08-4459-8b7b-e5a565dbb954. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.202 253665 DEBUG nova.network.neutron [req-47196d67-4aa5-4ccd-91bf-96c945239259 req-10dd9321-c902-4720-983f-30fe118289fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Updating instance_info_cache with network_info: [{"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.220 253665 DEBUG oslo_concurrency.lockutils [req-47196d67-4aa5-4ccd-91bf-96c945239259 req-10dd9321-c902-4720-983f-30fe118289fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-971e37bd-eb33-42b7-b5c7-86eff88cb700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.295 253665 DEBUG nova.compute.manager [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.296 253665 DEBUG oslo_concurrency.lockutils [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.296 253665 DEBUG oslo_concurrency.lockutils [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.297 253665 DEBUG oslo_concurrency.lockutils [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.297 253665 DEBUG nova.compute.manager [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] No waiting events found dispatching network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.297 253665 WARNING nova.compute.manager [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received unexpected event network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.297 253665 DEBUG nova.compute.manager [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Received event network-vif-deleted-e8eabe8a-7cdb-44e6-9266-09d08038b4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.298 253665 DEBUG nova.compute.manager [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.298 253665 DEBUG oslo_concurrency.lockutils [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.298 253665 DEBUG oslo_concurrency.lockutils [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.298 253665 DEBUG oslo_concurrency.lockutils [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.299 253665 DEBUG nova.compute.manager [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] No waiting events found dispatching network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.299 253665 WARNING nova.compute.manager [req-feadb07e-eb94-4f66-b527-7b6c43b38b6b req-254a546b-2d30-46a3-adb5-9f2524825e13 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received unexpected event network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:17:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 105 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 705 KiB/s wr, 194 op/s
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.409 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.410 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.410 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.410 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.411 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.412 253665 INFO nova.compute.manager [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Terminating instance#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.413 253665 DEBUG nova.compute.manager [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:17:37 np0005532048 kernel: tapa029f6c5-45 (unregistering): left promiscuous mode
Nov 22 04:17:37 np0005532048 NetworkManager[48920]: <info>  [1763803057.6425] device (tapa029f6c5-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.685 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:37 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:37Z|00441|binding|INFO|Releasing lport a029f6c5-4597-4645-9974-c282b8014824 from this chassis (sb_readonly=0)
Nov 22 04:17:37 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:37Z|00442|binding|INFO|Setting lport a029f6c5-4597-4645-9974-c282b8014824 down in Southbound
Nov 22 04:17:37 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:37Z|00443|binding|INFO|Removing iface tapa029f6c5-45 ovn-installed in OVS
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.687 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.692 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:8e:01 10.100.0.11'], port_security=['fa:16:3e:85:8e:01 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '971e37bd-eb33-42b7-b5c7-86eff88cb700', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b711aaafbb94138a8f95e1e15d0f0a4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82b2fc18-34ac-4d00-9742-ce510a84048d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=808e605d-2f47-40d3-afaa-9b5d699201f0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a029f6c5-4597-4645-9974-c282b8014824) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.693 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a029f6c5-4597-4645-9974-c282b8014824 in datapath d29d4d6f-5bba-4588-9a6e-d6174b2f2613 unbound from our chassis#033[00m
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.695 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d29d4d6f-5bba-4588-9a6e-d6174b2f2613#033[00m
Nov 22 04:17:37 np0005532048 kernel: tap0368daf2-eb (unregistering): left promiscuous mode
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:37 np0005532048 NetworkManager[48920]: <info>  [1763803057.7076] device (tap0368daf2-eb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:17:37 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:37Z|00444|binding|INFO|Releasing lport 0368daf2-eb08-4459-8b7b-e5a565dbb954 from this chassis (sb_readonly=0)
Nov 22 04:17:37 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:37Z|00445|binding|INFO|Setting lport 0368daf2-eb08-4459-8b7b-e5a565dbb954 down in Southbound
Nov 22 04:17:37 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:37Z|00446|binding|INFO|Removing iface tap0368daf2-eb ovn-installed in OVS
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.718 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e187871c-dbbd-40ef-8a0f-d872ba5bbfbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.716 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.719 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.724 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:86:4f 10.100.0.13'], port_security=['fa:16:3e:15:86:4f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '971e37bd-eb33-42b7-b5c7-86eff88cb700', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b711aaafbb94138a8f95e1e15d0f0a4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82b2fc18-34ac-4d00-9742-ce510a84048d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=808e605d-2f47-40d3-afaa-9b5d699201f0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0368daf2-eb08-4459-8b7b-e5a565dbb954) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.738 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.754 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[58e41b19-07f3-4624-92a1-72055c1b6a56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.758 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3d5c25-0378-40b1-a0fd-6ab4092ce4f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:37 np0005532048 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Nov 22 04:17:37 np0005532048 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002f.scope: Consumed 8.925s CPU time.
Nov 22 04:17:37 np0005532048 systemd-machined[215941]: Machine qemu-53-instance-0000002f terminated.
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.791 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6272218e-368a-4e82-9d18-4409037452bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.812 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed1ad42-b03a-4294-87e7-e8bd3067121f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd29d4d6f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:fd:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589945, 'reachable_time': 17368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307970, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.838 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9a024bb1-f4fc-4a93-ab65-43f571ce8735]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd29d4d6f-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589959, 'tstamp': 589959}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307971, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd29d4d6f-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589962, 'tstamp': 589962}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307971, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:37 np0005532048 NetworkManager[48920]: <info>  [1763803057.8399] manager: (tapa029f6c5-45): new Tun device (/org/freedesktop/NetworkManager/Devices/195)
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.840 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd29d4d6f-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:37 np0005532048 NetworkManager[48920]: <info>  [1763803057.8556] manager: (tap0368daf2-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.858 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.859 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd29d4d6f-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.859 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.860 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd29d4d6f-50, col_values=(('external_ids', {'iface-id': 'c20b47b0-28aa-4f40-a285-cb154afca96a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.860 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.862 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0368daf2-eb08-4459-8b7b-e5a565dbb954 in datapath d29d4d6f-5bba-4588-9a6e-d6174b2f2613 unbound from our chassis#033[00m
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.863 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d29d4d6f-5bba-4588-9a6e-d6174b2f2613, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.864 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[74813c28-f19d-43df-ae95-37b325600b7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:37.864 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613 namespace which is not needed anymore#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.875 253665 INFO nova.virt.libvirt.driver [-] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Instance destroyed successfully.#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.876 253665 DEBUG nova.objects.instance [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lazy-loading 'resources' on Instance uuid 971e37bd-eb33-42b7-b5c7-86eff88cb700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.891 253665 DEBUG nova.virt.libvirt.vif [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1595917141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1595917141',id=47,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0b711aaafbb94138a8f95e1e15d0f0a4',ramdisk_id='',reservation_id='r-q7exj5e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_pr
oject_name='tempest-AttachInterfacesV270Test-1454989706',owner_user_name='tempest-AttachInterfacesV270Test-1454989706-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:29Z,user_data=None,user_id='31a3f645b946468d9e6fe3b907dfdc0b',uuid=971e37bd-eb33-42b7-b5c7-86eff88cb700,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.891 253665 DEBUG nova.network.os_vif_util [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converting VIF {"id": "a029f6c5-4597-4645-9974-c282b8014824", "address": "fa:16:3e:85:8e:01", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa029f6c5-45", "ovs_interfaceid": "a029f6c5-4597-4645-9974-c282b8014824", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.893 253665 DEBUG nova.network.os_vif_util [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:8e:01,bridge_name='br-int',has_traffic_filtering=True,id=a029f6c5-4597-4645-9974-c282b8014824,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa029f6c5-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.893 253665 DEBUG os_vif [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:8e:01,bridge_name='br-int',has_traffic_filtering=True,id=a029f6c5-4597-4645-9974-c282b8014824,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa029f6c5-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.895 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.897 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa029f6c5-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.898 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.904 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.908 253665 INFO os_vif [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:8e:01,bridge_name='br-int',has_traffic_filtering=True,id=a029f6c5-4597-4645-9974-c282b8014824,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa029f6c5-45')#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.909 253665 DEBUG nova.virt.libvirt.vif [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1595917141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1595917141',id=47,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0b711aaafbb94138a8f95e1e15d0f0a4',ramdisk_id='',reservation_id='r-q7exj5e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_pr
oject_name='tempest-AttachInterfacesV270Test-1454989706',owner_user_name='tempest-AttachInterfacesV270Test-1454989706-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:17:29Z,user_data=None,user_id='31a3f645b946468d9e6fe3b907dfdc0b',uuid=971e37bd-eb33-42b7-b5c7-86eff88cb700,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.910 253665 DEBUG nova.network.os_vif_util [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converting VIF {"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.910 253665 DEBUG nova.network.os_vif_util [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:86:4f,bridge_name='br-int',has_traffic_filtering=True,id=0368daf2-eb08-4459-8b7b-e5a565dbb954,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0368daf2-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.911 253665 DEBUG os_vif [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:86:4f,bridge_name='br-int',has_traffic_filtering=True,id=0368daf2-eb08-4459-8b7b-e5a565dbb954,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0368daf2-eb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.912 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.912 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0368daf2-eb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.914 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.915 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:37 np0005532048 nova_compute[253661]: 2025-11-22 09:17:37.918 253665 INFO os_vif [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:86:4f,bridge_name='br-int',has_traffic_filtering=True,id=0368daf2-eb08-4459-8b7b-e5a565dbb954,network=Network(d29d4d6f-5bba-4588-9a6e-d6174b2f2613),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0368daf2-eb')#033[00m
Nov 22 04:17:38 np0005532048 neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613[307803]: [NOTICE]   (307807) : haproxy version is 2.8.14-c23fe91
Nov 22 04:17:38 np0005532048 neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613[307803]: [NOTICE]   (307807) : path to executable is /usr/sbin/haproxy
Nov 22 04:17:38 np0005532048 neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613[307803]: [WARNING]  (307807) : Exiting Master process...
Nov 22 04:17:38 np0005532048 neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613[307803]: [WARNING]  (307807) : Exiting Master process...
Nov 22 04:17:38 np0005532048 neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613[307803]: [ALERT]    (307807) : Current worker (307809) exited with code 143 (Terminated)
Nov 22 04:17:38 np0005532048 neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613[307803]: [WARNING]  (307807) : All workers exited. Exiting... (0)
Nov 22 04:17:38 np0005532048 systemd[1]: libpod-4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30.scope: Deactivated successfully.
Nov 22 04:17:38 np0005532048 podman[308027]: 2025-11-22 09:17:38.039161592 +0000 UTC m=+0.055592232 container died 4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:17:38 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30-userdata-shm.mount: Deactivated successfully.
Nov 22 04:17:38 np0005532048 systemd[1]: var-lib-containers-storage-overlay-aad2c1a91593403ac1a5427beee5092aa718b4ae7eb2816e33351712b63d9327-merged.mount: Deactivated successfully.
Nov 22 04:17:38 np0005532048 podman[308027]: 2025-11-22 09:17:38.086082222 +0000 UTC m=+0.102512852 container cleanup 4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:17:38 np0005532048 nova_compute[253661]: 2025-11-22 09:17:38.092 253665 DEBUG nova.compute.manager [req-12db3bac-1555-4861-ae17-f19f1f8adda0 req-44974eda-19de-4f55-8e1c-4e2ce18ad98c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-unplugged-a029f6c5-4597-4645-9974-c282b8014824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:38 np0005532048 nova_compute[253661]: 2025-11-22 09:17:38.093 253665 DEBUG oslo_concurrency.lockutils [req-12db3bac-1555-4861-ae17-f19f1f8adda0 req-44974eda-19de-4f55-8e1c-4e2ce18ad98c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:38 np0005532048 nova_compute[253661]: 2025-11-22 09:17:38.093 253665 DEBUG oslo_concurrency.lockutils [req-12db3bac-1555-4861-ae17-f19f1f8adda0 req-44974eda-19de-4f55-8e1c-4e2ce18ad98c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:38 np0005532048 nova_compute[253661]: 2025-11-22 09:17:38.093 253665 DEBUG oslo_concurrency.lockutils [req-12db3bac-1555-4861-ae17-f19f1f8adda0 req-44974eda-19de-4f55-8e1c-4e2ce18ad98c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:38 np0005532048 nova_compute[253661]: 2025-11-22 09:17:38.093 253665 DEBUG nova.compute.manager [req-12db3bac-1555-4861-ae17-f19f1f8adda0 req-44974eda-19de-4f55-8e1c-4e2ce18ad98c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] No waiting events found dispatching network-vif-unplugged-a029f6c5-4597-4645-9974-c282b8014824 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:38 np0005532048 nova_compute[253661]: 2025-11-22 09:17:38.094 253665 DEBUG nova.compute.manager [req-12db3bac-1555-4861-ae17-f19f1f8adda0 req-44974eda-19de-4f55-8e1c-4e2ce18ad98c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-unplugged-a029f6c5-4597-4645-9974-c282b8014824 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:17:38 np0005532048 systemd[1]: libpod-conmon-4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30.scope: Deactivated successfully.
Nov 22 04:17:38 np0005532048 podman[308056]: 2025-11-22 09:17:38.164892978 +0000 UTC m=+0.051788220 container remove 4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:17:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.172 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b6d08f-9176-4451-8230-9278b79e0eeb]: (4, ('Sat Nov 22 09:17:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613 (4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30)\n4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30\nSat Nov 22 09:17:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613 (4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30)\n4f60f743c300724974acf66ec25d186cb80503c27f3b822ee10625550fbd1e30\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.174 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[33beb765-f26d-4bfb-b2a3-8ab62bcfb2bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.176 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd29d4d6f-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:38 np0005532048 nova_compute[253661]: 2025-11-22 09:17:38.178 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:38 np0005532048 kernel: tapd29d4d6f-50: left promiscuous mode
Nov 22 04:17:38 np0005532048 nova_compute[253661]: 2025-11-22 09:17:38.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.198 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dac9d2b1-2f1b-4ee6-8366-79ee4cc678ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.217 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1685d9f8-fb5a-43e9-87a8-ce305c7e7e5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.218 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4bbbee29-bc2e-4a1c-97fa-41f9fa36ba43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.239 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06eaa753-6b9d-4ff1-ae3f-aa9f4dc57d38]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589935, 'reachable_time': 33253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308071, 'error': None, 'target': 'ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:38 np0005532048 systemd[1]: run-netns-ovnmeta\x2dd29d4d6f\x2d5bba\x2d4588\x2d9a6e\x2dd6174b2f2613.mount: Deactivated successfully.
Nov 22 04:17:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.243 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d29d4d6f-5bba-4588-9a6e-d6174b2f2613 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:17:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:38.244 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[c787b0f1-b562-4f29-82a7-e6c316f35462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:38 np0005532048 nova_compute[253661]: 2025-11-22 09:17:38.340 253665 INFO nova.virt.libvirt.driver [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Deleting instance files /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700_del#033[00m
Nov 22 04:17:38 np0005532048 nova_compute[253661]: 2025-11-22 09:17:38.341 253665 INFO nova.virt.libvirt.driver [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Deletion of /var/lib/nova/instances/971e37bd-eb33-42b7-b5c7-86eff88cb700_del complete#033[00m
Nov 22 04:17:38 np0005532048 nova_compute[253661]: 2025-11-22 09:17:38.384 253665 INFO nova.compute.manager [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Took 0.97 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:17:38 np0005532048 nova_compute[253661]: 2025-11-22 09:17:38.384 253665 DEBUG oslo.service.loopingcall [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:17:38 np0005532048 nova_compute[253661]: 2025-11-22 09:17:38.385 253665 DEBUG nova.compute.manager [-] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:17:38 np0005532048 nova_compute[253661]: 2025-11-22 09:17:38.385 253665 DEBUG nova.network.neutron [-] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:17:38 np0005532048 nova_compute[253661]: 2025-11-22 09:17:38.764 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803043.7633476, a63d88a0-884c-4328-a21c-6bedf9264f2e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:38 np0005532048 nova_compute[253661]: 2025-11-22 09:17:38.765 253665 INFO nova.compute.manager [-] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:17:38 np0005532048 nova_compute[253661]: 2025-11-22 09:17:38.783 253665 DEBUG nova.compute.manager [None req-73c3a6d0-5c1b-4f7c-b36f-a19bc71c3cc4 - - - - - -] [instance: a63d88a0-884c-4328-a21c-6bedf9264f2e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 49 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 26 KiB/s wr, 176 op/s
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.425 253665 DEBUG nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-unplugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.425 253665 DEBUG oslo_concurrency.lockutils [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.426 253665 DEBUG oslo_concurrency.lockutils [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.426 253665 DEBUG oslo_concurrency.lockutils [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.426 253665 DEBUG nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] No waiting events found dispatching network-vif-unplugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.427 253665 DEBUG nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-unplugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.427 253665 DEBUG nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.427 253665 DEBUG oslo_concurrency.lockutils [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.427 253665 DEBUG oslo_concurrency.lockutils [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.428 253665 DEBUG oslo_concurrency.lockutils [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.428 253665 DEBUG nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] No waiting events found dispatching network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.428 253665 WARNING nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received unexpected event network-vif-plugged-0368daf2-eb08-4459-8b7b-e5a565dbb954 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.429 253665 DEBUG nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-deleted-a029f6c5-4597-4645-9974-c282b8014824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.429 253665 INFO nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Neutron deleted interface a029f6c5-4597-4645-9974-c282b8014824; detaching it from the instance and deleting it from the info cache#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.429 253665 DEBUG nova.network.neutron [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Updating instance_info_cache with network_info: [{"id": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "address": "fa:16:3e:15:86:4f", "network": {"id": "d29d4d6f-5bba-4588-9a6e-d6174b2f2613", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1553490545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0b711aaafbb94138a8f95e1e15d0f0a4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0368daf2-eb", "ovs_interfaceid": "0368daf2-eb08-4459-8b7b-e5a565dbb954", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.452 253665 DEBUG nova.compute.manager [req-6c6881e2-fe6e-46bd-bc3a-59900642e5a2 req-73206304-8b2c-4cab-bf6d-872070fdec94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Detach interface failed, port_id=a029f6c5-4597-4645-9974-c282b8014824, reason: Instance 971e37bd-eb33-42b7-b5c7-86eff88cb700 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.788 253665 DEBUG nova.network.neutron [-] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.806 253665 INFO nova.compute.manager [-] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Took 1.42 seconds to deallocate network for instance.#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.852 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.853 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.904 253665 DEBUG oslo_concurrency.processutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:39 np0005532048 nova_compute[253661]: 2025-11-22 09:17:39.988 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:40 np0005532048 nova_compute[253661]: 2025-11-22 09:17:40.185 253665 DEBUG nova.compute.manager [req-87213f2b-23e6-45e2-9256-dc8b74458c6f req-1bc8516d-556b-4670-a86c-9fecbfd63c84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:40 np0005532048 nova_compute[253661]: 2025-11-22 09:17:40.186 253665 DEBUG oslo_concurrency.lockutils [req-87213f2b-23e6-45e2-9256-dc8b74458c6f req-1bc8516d-556b-4670-a86c-9fecbfd63c84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:40 np0005532048 nova_compute[253661]: 2025-11-22 09:17:40.186 253665 DEBUG oslo_concurrency.lockutils [req-87213f2b-23e6-45e2-9256-dc8b74458c6f req-1bc8516d-556b-4670-a86c-9fecbfd63c84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:40 np0005532048 nova_compute[253661]: 2025-11-22 09:17:40.186 253665 DEBUG oslo_concurrency.lockutils [req-87213f2b-23e6-45e2-9256-dc8b74458c6f req-1bc8516d-556b-4670-a86c-9fecbfd63c84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:40 np0005532048 nova_compute[253661]: 2025-11-22 09:17:40.187 253665 DEBUG nova.compute.manager [req-87213f2b-23e6-45e2-9256-dc8b74458c6f req-1bc8516d-556b-4670-a86c-9fecbfd63c84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] No waiting events found dispatching network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:40 np0005532048 nova_compute[253661]: 2025-11-22 09:17:40.187 253665 WARNING nova.compute.manager [req-87213f2b-23e6-45e2-9256-dc8b74458c6f req-1bc8516d-556b-4670-a86c-9fecbfd63c84 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received unexpected event network-vif-plugged-a029f6c5-4597-4645-9974-c282b8014824 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:17:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:17:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4272310526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:40 np0005532048 nova_compute[253661]: 2025-11-22 09:17:40.414 253665 DEBUG oslo_concurrency.processutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:40 np0005532048 nova_compute[253661]: 2025-11-22 09:17:40.421 253665 DEBUG nova.compute.provider_tree [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:17:40 np0005532048 nova_compute[253661]: 2025-11-22 09:17:40.445 253665 DEBUG nova.scheduler.client.report [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:17:40 np0005532048 nova_compute[253661]: 2025-11-22 09:17:40.475 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:40 np0005532048 nova_compute[253661]: 2025-11-22 09:17:40.502 253665 INFO nova.scheduler.client.report [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Deleted allocations for instance 971e37bd-eb33-42b7-b5c7-86eff88cb700#033[00m
Nov 22 04:17:40 np0005532048 nova_compute[253661]: 2025-11-22 09:17:40.563 253665 DEBUG oslo_concurrency.lockutils [None req-ec58b0bd-1536-4098-90f2-99aa7f143a3e 31a3f645b946468d9e6fe3b907dfdc0b 0b711aaafbb94138a8f95e1e15d0f0a4 - - default default] Lock "971e37bd-eb33-42b7-b5c7-86eff88cb700" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:40 np0005532048 nova_compute[253661]: 2025-11-22 09:17:40.947 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:40 np0005532048 nova_compute[253661]: 2025-11-22 09:17:40.948 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:40 np0005532048 nova_compute[253661]: 2025-11-22 09:17:40.966 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.026 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.027 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.033 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.034 253665 INFO nova.compute.claims [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.117 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:41 np0005532048 podman[308267]: 2025-11-22 09:17:41.144184688 +0000 UTC m=+0.099125239 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 04:17:41 np0005532048 podman[308267]: 2025-11-22 09:17:41.256881617 +0000 UTC m=+0.211822158 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:17:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 49 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.8 KiB/s wr, 152 op/s
Nov 22 04:17:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.576 253665 DEBUG nova.compute.manager [req-745f209a-110a-4b4b-9bf5-7d659bcfb22f req-1b848f30-4cfc-43b2-b3cb-002fc9311dee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Received event network-vif-deleted-0368daf2-eb08-4459-8b7b-e5a565dbb954 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:17:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3787652747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.651 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.661 253665 DEBUG nova.compute.provider_tree [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.676 253665 DEBUG nova.scheduler.client.report [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.700 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.702 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.755 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.756 253665 DEBUG nova.network.neutron [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.774 253665 INFO nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.789 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.873 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.875 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.876 253665 INFO nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Creating image(s)#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.903 253665 DEBUG nova.storage.rbd_utils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.933 253665 DEBUG nova.storage.rbd_utils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.958 253665 DEBUG nova.storage.rbd_utils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:41 np0005532048 nova_compute[253661]: 2025-11-22 09:17:41.964 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:17:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:17:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:17:42 np0005532048 nova_compute[253661]: 2025-11-22 09:17:42.040 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:42 np0005532048 nova_compute[253661]: 2025-11-22 09:17:42.041 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:42 np0005532048 nova_compute[253661]: 2025-11-22 09:17:42.042 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:42 np0005532048 nova_compute[253661]: 2025-11-22 09:17:42.042 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:42 np0005532048 nova_compute[253661]: 2025-11-22 09:17:42.064 253665 DEBUG nova.storage.rbd_utils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:42 np0005532048 nova_compute[253661]: 2025-11-22 09:17:42.073 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:42 np0005532048 nova_compute[253661]: 2025-11-22 09:17:42.240 253665 DEBUG nova.policy [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '790eaa89f1a74325b81291d8beca6d38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:17:42 np0005532048 nova_compute[253661]: 2025-11-22 09:17:42.381 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:42 np0005532048 nova_compute[253661]: 2025-11-22 09:17:42.450 253665 DEBUG nova.storage.rbd_utils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] resizing rbd image ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:17:42 np0005532048 nova_compute[253661]: 2025-11-22 09:17:42.557 253665 DEBUG nova.objects.instance [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'migration_context' on Instance uuid ee68ed8e-d5b3-4069-ac90-f7e94430ed0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:42 np0005532048 nova_compute[253661]: 2025-11-22 09:17:42.569 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:17:42 np0005532048 nova_compute[253661]: 2025-11-22 09:17:42.570 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Ensure instance console log exists: /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:17:42 np0005532048 nova_compute[253661]: 2025-11-22 09:17:42.570 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:42 np0005532048 nova_compute[253661]: 2025-11-22 09:17:42.571 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:42 np0005532048 nova_compute[253661]: 2025-11-22 09:17:42.571 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:17:42 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 27244405-b9fd-4433-b14c-1c6922c09a66 does not exist
Nov 22 04:17:42 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 125286ea-17ec-4c31-9e34-69eed71bd558 does not exist
Nov 22 04:17:42 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 61458e94-e40f-411f-b760-397bacb922f8 does not exist
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:17:42 np0005532048 nova_compute[253661]: 2025-11-22 09:17:42.915 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:17:42 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:17:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 41 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.0 KiB/s wr, 160 op/s
Nov 22 04:17:43 np0005532048 podman[308881]: 2025-11-22 09:17:43.489758212 +0000 UTC m=+0.045645320 container create adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 04:17:43 np0005532048 systemd[1]: Started libpod-conmon-adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63.scope.
Nov 22 04:17:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:17:43 np0005532048 podman[308881]: 2025-11-22 09:17:43.469167832 +0000 UTC m=+0.025054970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:17:43 np0005532048 podman[308881]: 2025-11-22 09:17:43.569780577 +0000 UTC m=+0.125667705 container init adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 04:17:43 np0005532048 podman[308881]: 2025-11-22 09:17:43.583231563 +0000 UTC m=+0.139118671 container start adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:17:43 np0005532048 podman[308881]: 2025-11-22 09:17:43.586765679 +0000 UTC m=+0.142652887 container attach adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:17:43 np0005532048 elegant_chaum[308898]: 167 167
Nov 22 04:17:43 np0005532048 systemd[1]: libpod-adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63.scope: Deactivated successfully.
Nov 22 04:17:43 np0005532048 conmon[308898]: conmon adacc5a305454d6a0b56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63.scope/container/memory.events
Nov 22 04:17:43 np0005532048 podman[308881]: 2025-11-22 09:17:43.594980429 +0000 UTC m=+0.150867557 container died adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:17:43 np0005532048 systemd[1]: var-lib-containers-storage-overlay-111287d53fd15aee228c08531a6b9ea6e1763e817b4f1104945e11ea90d11eb5-merged.mount: Deactivated successfully.
Nov 22 04:17:43 np0005532048 podman[308881]: 2025-11-22 09:17:43.640245078 +0000 UTC m=+0.196132186 container remove adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 04:17:43 np0005532048 nova_compute[253661]: 2025-11-22 09:17:43.645 253665 DEBUG nova.network.neutron [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Successfully created port: 085e3bcc-2e77-4c2e-8298-872aac04e65e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:17:43 np0005532048 systemd[1]: libpod-conmon-adacc5a305454d6a0b56fa1b814f894bb1640a3e23e3304664f60cc9d9fc8d63.scope: Deactivated successfully.
Nov 22 04:17:43 np0005532048 podman[308922]: 2025-11-22 09:17:43.82301133 +0000 UTC m=+0.057926369 container create e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:17:43 np0005532048 systemd[1]: Started libpod-conmon-e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990.scope.
Nov 22 04:17:43 np0005532048 podman[308922]: 2025-11-22 09:17:43.795213424 +0000 UTC m=+0.030128503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:17:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:17:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8974004c34f92301e3d55a1eefa57eb9e7458c0d0abecddcf85ef7c5c61a669/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8974004c34f92301e3d55a1eefa57eb9e7458c0d0abecddcf85ef7c5c61a669/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8974004c34f92301e3d55a1eefa57eb9e7458c0d0abecddcf85ef7c5c61a669/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8974004c34f92301e3d55a1eefa57eb9e7458c0d0abecddcf85ef7c5c61a669/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8974004c34f92301e3d55a1eefa57eb9e7458c0d0abecddcf85ef7c5c61a669/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:43 np0005532048 podman[308922]: 2025-11-22 09:17:43.931497005 +0000 UTC m=+0.166412034 container init e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 04:17:43 np0005532048 podman[308922]: 2025-11-22 09:17:43.938726301 +0000 UTC m=+0.173641300 container start e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:17:43 np0005532048 podman[308922]: 2025-11-22 09:17:43.942071893 +0000 UTC m=+0.176986912 container attach e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:17:44 np0005532048 nova_compute[253661]: 2025-11-22 09:17:44.989 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:45 np0005532048 romantic_khorana[308938]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:17:45 np0005532048 romantic_khorana[308938]: --> relative data size: 1.0
Nov 22 04:17:45 np0005532048 romantic_khorana[308938]: --> All data devices are unavailable
Nov 22 04:17:45 np0005532048 systemd[1]: libpod-e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990.scope: Deactivated successfully.
Nov 22 04:17:45 np0005532048 systemd[1]: libpod-e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990.scope: Consumed 1.060s CPU time.
Nov 22 04:17:45 np0005532048 conmon[308938]: conmon e144e79918758160f326 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990.scope/container/memory.events
Nov 22 04:17:45 np0005532048 podman[308922]: 2025-11-22 09:17:45.040500332 +0000 UTC m=+1.275415351 container died e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:17:45 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f8974004c34f92301e3d55a1eefa57eb9e7458c0d0abecddcf85ef7c5c61a669-merged.mount: Deactivated successfully.
Nov 22 04:17:45 np0005532048 podman[308922]: 2025-11-22 09:17:45.11489612 +0000 UTC m=+1.349811129 container remove e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 04:17:45 np0005532048 systemd[1]: libpod-conmon-e144e79918758160f3265333f2ede04fb7b20959ae86f5e30aa903739febb990.scope: Deactivated successfully.
Nov 22 04:17:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 71 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 142 op/s
Nov 22 04:17:45 np0005532048 podman[309116]: 2025-11-22 09:17:45.791373248 +0000 UTC m=+0.047086266 container create a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 04:17:45 np0005532048 systemd[1]: Started libpod-conmon-a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c.scope.
Nov 22 04:17:45 np0005532048 podman[309116]: 2025-11-22 09:17:45.770738966 +0000 UTC m=+0.026451984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:17:45 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:17:45 np0005532048 podman[309116]: 2025-11-22 09:17:45.892559866 +0000 UTC m=+0.148272914 container init a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:17:45 np0005532048 podman[309116]: 2025-11-22 09:17:45.904157938 +0000 UTC m=+0.159870956 container start a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 04:17:45 np0005532048 podman[309116]: 2025-11-22 09:17:45.909488647 +0000 UTC m=+0.165201685 container attach a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:17:45 np0005532048 goofy_archimedes[309132]: 167 167
Nov 22 04:17:45 np0005532048 systemd[1]: libpod-a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c.scope: Deactivated successfully.
Nov 22 04:17:45 np0005532048 podman[309116]: 2025-11-22 09:17:45.911916577 +0000 UTC m=+0.167629595 container died a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 22 04:17:45 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d42c87cba9eb76046cd31fc8a7f1895d0f9dd9773c94d1bc14a20ac4a128751b-merged.mount: Deactivated successfully.
Nov 22 04:17:45 np0005532048 podman[309116]: 2025-11-22 09:17:45.955424764 +0000 UTC m=+0.211137782 container remove a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_archimedes, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:17:45 np0005532048 nova_compute[253661]: 2025-11-22 09:17:45.966 253665 DEBUG nova.network.neutron [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Successfully updated port: 085e3bcc-2e77-4c2e-8298-872aac04e65e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:17:45 np0005532048 systemd[1]: libpod-conmon-a375409b0c226f8ef5641fbd0b9878676147742a26a6a21275c2499c79f2a25c.scope: Deactivated successfully.
Nov 22 04:17:46 np0005532048 nova_compute[253661]: 2025-11-22 09:17:46.009 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:17:46 np0005532048 nova_compute[253661]: 2025-11-22 09:17:46.010 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquired lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:17:46 np0005532048 nova_compute[253661]: 2025-11-22 09:17:46.011 253665 DEBUG nova.network.neutron [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:17:46 np0005532048 nova_compute[253661]: 2025-11-22 09:17:46.093 253665 DEBUG nova.compute.manager [req-6ba054cd-fed4-4ef2-a0b2-faa79b55d11a req-557ba3b0-5136-4336-9f03-56622082e4dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received event network-changed-085e3bcc-2e77-4c2e-8298-872aac04e65e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:17:46 np0005532048 nova_compute[253661]: 2025-11-22 09:17:46.093 253665 DEBUG nova.compute.manager [req-6ba054cd-fed4-4ef2-a0b2-faa79b55d11a req-557ba3b0-5136-4336-9f03-56622082e4dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Refreshing instance network info cache due to event network-changed-085e3bcc-2e77-4c2e-8298-872aac04e65e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:17:46 np0005532048 nova_compute[253661]: 2025-11-22 09:17:46.094 253665 DEBUG oslo_concurrency.lockutils [req-6ba054cd-fed4-4ef2-a0b2-faa79b55d11a req-557ba3b0-5136-4336-9f03-56622082e4dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:17:46 np0005532048 podman[309155]: 2025-11-22 09:17:46.149837788 +0000 UTC m=+0.047840384 container create cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_fermat, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 04:17:46 np0005532048 systemd[1]: Started libpod-conmon-cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9.scope.
Nov 22 04:17:46 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:17:46 np0005532048 podman[309155]: 2025-11-22 09:17:46.13266732 +0000 UTC m=+0.030669936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:17:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40205f4a30e3ff99335c344b2f57e20100661d04b86538ea3956f286a9e1703f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40205f4a30e3ff99335c344b2f57e20100661d04b86538ea3956f286a9e1703f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40205f4a30e3ff99335c344b2f57e20100661d04b86538ea3956f286a9e1703f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40205f4a30e3ff99335c344b2f57e20100661d04b86538ea3956f286a9e1703f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:46 np0005532048 podman[309155]: 2025-11-22 09:17:46.244281442 +0000 UTC m=+0.142284058 container init cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:17:46 np0005532048 nova_compute[253661]: 2025-11-22 09:17:46.247 253665 DEBUG nova.network.neutron [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:17:46 np0005532048 podman[309155]: 2025-11-22 09:17:46.257410801 +0000 UTC m=+0.155413397 container start cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_fermat, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 04:17:46 np0005532048 podman[309155]: 2025-11-22 09:17:46.261963232 +0000 UTC m=+0.159965828 container attach cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 04:17:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.172 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803052.1709483, 2964b30c-ab3b-4bab-8f11-2492007f83ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.174 253665 INFO nova.compute.manager [-] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] VM Stopped (Lifecycle Event)
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]: {
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:    "0": [
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:        {
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "devices": [
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "/dev/loop3"
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            ],
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "lv_name": "ceph_lv0",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "lv_size": "21470642176",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "name": "ceph_lv0",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "tags": {
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.cluster_name": "ceph",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.crush_device_class": "",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.encrypted": "0",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.osd_id": "0",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.type": "block",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.vdo": "0"
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            },
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "type": "block",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "vg_name": "ceph_vg0"
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:        }
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:    ],
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:    "1": [
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:        {
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "devices": [
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "/dev/loop4"
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            ],
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "lv_name": "ceph_lv1",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "lv_size": "21470642176",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "name": "ceph_lv1",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "tags": {
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.cluster_name": "ceph",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.crush_device_class": "",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.encrypted": "0",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.osd_id": "1",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.type": "block",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.vdo": "0"
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            },
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "type": "block",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "vg_name": "ceph_vg1"
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:        }
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:    ],
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.187 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:    "2": [
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:        {
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "devices": [
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "/dev/loop5"
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            ],
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "lv_name": "ceph_lv2",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "lv_size": "21470642176",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "name": "ceph_lv2",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "tags": {
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.cluster_name": "ceph",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.crush_device_class": "",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.encrypted": "0",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.osd_id": "2",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.type": "block",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:                "ceph.vdo": "0"
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            },
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "type": "block",
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:            "vg_name": "ceph_vg2"
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:        }
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]:    ]
Nov 22 04:17:47 np0005532048 heuristic_fermat[309172]: }
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.198 253665 DEBUG nova.compute.manager [None req-5c81062d-5d68-44fc-8f7d-4a9c1da2dd4d - - - - - -] [instance: 2964b30c-ab3b-4bab-8f11-2492007f83ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:17:47 np0005532048 systemd[1]: libpod-cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9.scope: Deactivated successfully.
Nov 22 04:17:47 np0005532048 podman[309155]: 2025-11-22 09:17:47.247279563 +0000 UTC m=+1.145282169 container died cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_fermat, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:17:47 np0005532048 systemd[1]: var-lib-containers-storage-overlay-40205f4a30e3ff99335c344b2f57e20100661d04b86538ea3956f286a9e1703f-merged.mount: Deactivated successfully.
Nov 22 04:17:47 np0005532048 podman[309155]: 2025-11-22 09:17:47.330227199 +0000 UTC m=+1.228229795 container remove cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:17:47 np0005532048 systemd[1]: libpod-conmon-cee1eec02a64a87b924319998f9915a0fe1a00eebe255894d4960e29d00ed6b9.scope: Deactivated successfully.
Nov 22 04:17:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.735 253665 DEBUG nova.network.neutron [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Updating instance_info_cache with network_info: [{"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.765 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Releasing lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.765 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance network_info: |[{"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.765 253665 DEBUG oslo_concurrency.lockutils [req-6ba054cd-fed4-4ef2-a0b2-faa79b55d11a req-557ba3b0-5136-4336-9f03-56622082e4dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.766 253665 DEBUG nova.network.neutron [req-6ba054cd-fed4-4ef2-a0b2-faa79b55d11a req-557ba3b0-5136-4336-9f03-56622082e4dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Refreshing network info cache for port 085e3bcc-2e77-4c2e-8298-872aac04e65e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.770 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Start _get_guest_xml network_info=[{"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.776 253665 WARNING nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.784 253665 DEBUG nova.virt.libvirt.host [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.785 253665 DEBUG nova.virt.libvirt.host [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.789 253665 DEBUG nova.virt.libvirt.host [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.790 253665 DEBUG nova.virt.libvirt.host [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.791 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.791 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.792 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.792 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.792 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.792 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.792 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.793 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.793 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.793 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.793 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.793 253665 DEBUG nova.virt.hardware [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.797 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:47 np0005532048 nova_compute[253661]: 2025-11-22 09:17:47.918 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:48 np0005532048 podman[309354]: 2025-11-22 09:17:48.042104446 +0000 UTC m=+0.049875023 container create dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:17:48 np0005532048 systemd[1]: Started libpod-conmon-dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f.scope.
Nov 22 04:17:48 np0005532048 podman[309354]: 2025-11-22 09:17:48.019393685 +0000 UTC m=+0.027164272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:17:48 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:17:48 np0005532048 podman[309354]: 2025-11-22 09:17:48.152330055 +0000 UTC m=+0.160100642 container init dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 04:17:48 np0005532048 podman[309354]: 2025-11-22 09:17:48.161049587 +0000 UTC m=+0.168820154 container start dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 04:17:48 np0005532048 podman[309354]: 2025-11-22 09:17:48.165572636 +0000 UTC m=+0.173343223 container attach dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 04:17:48 np0005532048 vigorous_lalande[309370]: 167 167
Nov 22 04:17:48 np0005532048 systemd[1]: libpod-dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f.scope: Deactivated successfully.
Nov 22 04:17:48 np0005532048 podman[309354]: 2025-11-22 09:17:48.168928208 +0000 UTC m=+0.176698785 container died dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 04:17:48 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4e632415c437e028d5b1318311830c66ce196bb1ed3813d59d092115e0ed66c2-merged.mount: Deactivated successfully.
Nov 22 04:17:48 np0005532048 podman[309354]: 2025-11-22 09:17:48.224537229 +0000 UTC m=+0.232307796 container remove dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 04:17:48 np0005532048 systemd[1]: libpod-conmon-dfc7a25a2d6d29406b03923e349809bc35bb319d74da75deb157fe59549f7a8f.scope: Deactivated successfully.
Nov 22 04:17:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:17:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2607910177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.323 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.352 253665 DEBUG nova.storage.rbd_utils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.358 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:48 np0005532048 podman[309411]: 2025-11-22 09:17:48.413560902 +0000 UTC m=+0.050712663 container create 896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_meitner, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 04:17:48 np0005532048 systemd[1]: Started libpod-conmon-896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3.scope.
Nov 22 04:17:48 np0005532048 podman[309411]: 2025-11-22 09:17:48.389671271 +0000 UTC m=+0.026823052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:17:48 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:17:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a832730a670bc5081fff7c61cf8145dc60b46df9a7d88d495232ed489f91ea8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a832730a670bc5081fff7c61cf8145dc60b46df9a7d88d495232ed489f91ea8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a832730a670bc5081fff7c61cf8145dc60b46df9a7d88d495232ed489f91ea8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a832730a670bc5081fff7c61cf8145dc60b46df9a7d88d495232ed489f91ea8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:48 np0005532048 podman[309411]: 2025-11-22 09:17:48.532037131 +0000 UTC m=+0.169188922 container init 896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:17:48 np0005532048 podman[309411]: 2025-11-22 09:17:48.540077916 +0000 UTC m=+0.177229677 container start 896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_meitner, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:17:48 np0005532048 podman[309411]: 2025-11-22 09:17:48.547611159 +0000 UTC m=+0.184762940 container attach 896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 04:17:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:17:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/708910595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.894 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.896 253665 DEBUG nova.virt.libvirt.vif [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1186810200',display_name='tempest-DeleteServersTestJSON-server-1186810200',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1186810200',id=49,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-bkwxncnu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:41Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=ee68ed8e-d5b3-4069-ac90-f7e94430ed0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.897 253665 DEBUG nova.network.os_vif_util [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.898 253665 DEBUG nova.network.os_vif_util [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:a1:a9,bridge_name='br-int',has_traffic_filtering=True,id=085e3bcc-2e77-4c2e-8298-872aac04e65e,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap085e3bcc-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
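The conversion above turns Nova's internal VIF dict into an os-vif `VIFOpenVSwitch` object; note that the tap device name (`tap085e3bcc-2e`) is derived from the Neutron port UUID, truncated to fit the kernel's 15-character interface-name limit. A minimal sketch of that derivation (the constant and helper name are illustrative, mirroring Nova's convention rather than quoting its source):

```python
# Sketch: Nova-style tap device names are "tap" + the first 11 characters
# of the Neutron port UUID, keeping the result under the kernel's IFNAMSIZ
# interface-name limit. NIC_NAME_LEN is an assumed name for the cap.
NIC_NAME_LEN = 14  # 3 ("tap") + 11 UUID characters


def tap_device_name(port_id: str) -> str:
    """Return the tap interface name for a given port UUID."""
    return ("tap" + port_id)[:NIC_NAME_LEN]


if __name__ == "__main__":
    port = "085e3bcc-2e77-4c2e-8298-872aac04e65e"
    print(tap_device_name(port))  # tap085e3bcc-2e
```

Applied to the port in this log, the helper reproduces the `devname` seen in both the VIF dict and the libvirt `<target dev=...>` element.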
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.899 253665 DEBUG nova.objects.instance [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'pci_devices' on Instance uuid ee68ed8e-d5b3-4069-ac90-f7e94430ed0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.916 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  <uuid>ee68ed8e-d5b3-4069-ac90-f7e94430ed0d</uuid>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  <name>instance-00000031</name>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <nova:name>tempest-DeleteServersTestJSON-server-1186810200</nova:name>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:17:47</nova:creationTime>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:17:48 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:        <nova:user uuid="790eaa89f1a74325b81291d8beca6d38">tempest-DeleteServersTestJSON-487469072-project-member</nova:user>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:        <nova:project uuid="d4fe4f74353442a9a8042d29dcf6274e">tempest-DeleteServersTestJSON-487469072</nova:project>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:        <nova:port uuid="085e3bcc-2e77-4c2e-8298-872aac04e65e">
Nov 22 04:17:48 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <entry name="serial">ee68ed8e-d5b3-4069-ac90-f7e94430ed0d</entry>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <entry name="uuid">ee68ed8e-d5b3-4069-ac90-f7e94430ed0d</entry>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk">
Nov 22 04:17:48 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:17:48 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk.config">
Nov 22 04:17:48 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:17:48 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:52:a1:a9"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <target dev="tap085e3bcc-2e"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d/console.log" append="off"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:17:48 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:17:48 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:17:48 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:17:48 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
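`_get_guest_xml` logs the complete libvirt domain definition at debug level; when troubleshooting, it is often useful to pull fields back out of that XML programmatically. A sketch using the stdlib parser against an abbreviated copy of the domain above (the snippet is trimmed to a few elements; the `nova` metadata namespace URI is taken verbatim from the log):

```python
import xml.etree.ElementTree as ET

# Abbreviated copy of the guest XML logged by _get_guest_xml above.
DOMAIN_XML = """<domain type="kvm">
  <uuid>ee68ed8e-d5b3-4069-ac90-f7e94430ed0d</uuid>
  <name>instance-00000031</name>
  <memory>131072</memory>
  <vcpu>1</vcpu>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
      <nova:name>tempest-DeleteServersTestJSON-server-1186810200</nova:name>
    </nova:instance>
  </metadata>
</domain>"""

# Namespace map for the Nova-specific metadata elements.
NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

root = ET.fromstring(DOMAIN_XML)
memory_kib = int(root.findtext("memory"))
display_name = root.findtext(
    "metadata/nova:instance/nova:name", namespaces=NOVA_NS)
print(memory_kib, display_name)
```

The `<memory>` element is in KiB (131072 KiB matches the flavor's 128 MB), and the Nova display name lives under the namespaced `<nova:instance>` metadata block rather than the libvirt `<name>`.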
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.917 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Preparing to wait for external event network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.917 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.918 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.918 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.919 253665 DEBUG nova.virt.libvirt.vif [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1186810200',display_name='tempest-DeleteServersTestJSON-server-1186810200',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1186810200',id=49,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-bkwxncnu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServ
ersTestJSON-487469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:41Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=ee68ed8e-d5b3-4069-ac90-f7e94430ed0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.919 253665 DEBUG nova.network.os_vif_util [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.919 253665 DEBUG nova.network.os_vif_util [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:a1:a9,bridge_name='br-int',has_traffic_filtering=True,id=085e3bcc-2e77-4c2e-8298-872aac04e65e,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap085e3bcc-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.920 253665 DEBUG os_vif [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:a1:a9,bridge_name='br-int',has_traffic_filtering=True,id=085e3bcc-2e77-4c2e-8298-872aac04e65e,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap085e3bcc-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.920 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.921 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.921 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.924 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.925 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap085e3bcc-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.925 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap085e3bcc-2e, col_values=(('external_ids', {'iface-id': '085e3bcc-2e77-4c2e-8298-872aac04e65e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:a1:a9', 'vm-uuid': 'ee68ed8e-d5b3-4069-ac90-f7e94430ed0d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:48 np0005532048 NetworkManager[48920]: <info>  [1763803068.9284] manager: (tap085e3bcc-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.932 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.938 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:48 np0005532048 nova_compute[253661]: 2025-11-22 09:17:48.939 253665 INFO os_vif [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:a1:a9,bridge_name='br-int',has_traffic_filtering=True,id=085e3bcc-2e77-4c2e-8298-872aac04e65e,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap085e3bcc-2e')#033[00m
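The plug above runs through the ovsdbapp IDL (`AddBridgeCommand`, `AddPortCommand`, `DbSetCommand`): add the tap port to `br-int`, then set the `external_ids` that OVN uses to bind the logical port. An equivalent `ovs-vsctl` invocation can be sketched as argv construction (this is an equivalent for inspection, not what os-vif actually executes):

```python
def ovs_plug_argv(bridge, port, iface_id, mac, vm_uuid):
    """Build an ovs-vsctl command equivalent to the IDL transaction above:
    idempotently add the tap port and tag it with the external_ids
    (iface-id, attached-mac, vm-uuid) that OVN matches to bind the port."""
    return [
        "ovs-vsctl",
        "--may-exist", "add-port", bridge, port,
        "--", "set", "Interface", port,
        f"external_ids:iface-id={iface_id}",
        "external_ids:iface-status=active",
        f"external_ids:attached-mac={mac}",
        f"external_ids:vm-uuid={vm_uuid}",
    ]


argv = ovs_plug_argv(
    "br-int", "tap085e3bcc-2e",
    "085e3bcc-2e77-4c2e-8298-872aac04e65e",
    "fa:16:3e:52:a1:a9",
    "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d",
)
```

`--may-exist` matches the `may_exist=True` in both logged commands, which is why the earlier `AddBridgeCommand` reported "Transaction caused no change": `br-int` already existed.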
Nov 22 04:17:49 np0005532048 nova_compute[253661]: 2025-11-22 09:17:49.007 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:17:49 np0005532048 nova_compute[253661]: 2025-11-22 09:17:49.007 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:17:49 np0005532048 nova_compute[253661]: 2025-11-22 09:17:49.008 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No VIF found with MAC fa:16:3e:52:a1:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:17:49 np0005532048 nova_compute[253661]: 2025-11-22 09:17:49.008 253665 INFO nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Using config drive#033[00m
Nov 22 04:17:49 np0005532048 nova_compute[253661]: 2025-11-22 09:17:49.033 253665 DEBUG nova.storage.rbd_utils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Nov 22 04:17:49 np0005532048 nova_compute[253661]: 2025-11-22 09:17:49.488 253665 INFO nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Creating config drive at /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d/disk.config#033[00m
Nov 22 04:17:49 np0005532048 nova_compute[253661]: 2025-11-22 09:17:49.492 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl3yy5hhw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:49.610 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:49 np0005532048 nova_compute[253661]: 2025-11-22 09:17:49.611 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:49.613 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:17:49 np0005532048 nova_compute[253661]: 2025-11-22 09:17:49.650 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl3yy5hhw" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
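The config drive is built with `mkisofs` using the flags shown in the CMD line; note that although the `-publisher` value appears unquoted in the log, it is passed to the subprocess as a single argument. A sketch reconstructing that argv from the logged command:

```python
def config_drive_argv(output_path, staging_dir, publisher):
    """mkisofs invocation matching the logged command: a Joliet (-J) /
    Rock Ridge (-r) ISO labelled "config-2", the volume label cloud-init
    looks for when detecting a config drive."""
    return [
        "/usr/bin/mkisofs",
        "-o", output_path,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", publisher,  # one argv element despite the spaces
        "-quiet", "-J", "-r",
        "-V", "config-2",
        staging_dir,
    ]


argv = config_drive_argv(
    "/var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d/disk.config",
    "/tmp/tmpl3yy5hhw",
    "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
)
```

The staging directory is a temp dir holding the metadata tree; the resulting ISO is then imported into RBD as `<uuid>_disk.config`, which is why the driver first checks that the RBD image does not already exist.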
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]: {
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:        "osd_id": 1,
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:        "type": "bluestore"
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:    },
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:        "osd_id": 0,
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:        "type": "bluestore"
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:    },
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:        "osd_id": 2,
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:        "type": "bluestore"
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]:    }
Nov 22 04:17:49 np0005532048 hardcore_meitner[309432]: }
Nov 22 04:17:49 np0005532048 nova_compute[253661]: 2025-11-22 09:17:49.679 253665 DEBUG nova.storage.rbd_utils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:49 np0005532048 nova_compute[253661]: 2025-11-22 09:17:49.687 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d/disk.config ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:49 np0005532048 systemd[1]: libpod-896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3.scope: Deactivated successfully.
Nov 22 04:17:49 np0005532048 systemd[1]: libpod-896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3.scope: Consumed 1.179s CPU time.
Nov 22 04:17:49 np0005532048 podman[309411]: 2025-11-22 09:17:49.72843288 +0000 UTC m=+1.365584661 container died 896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_meitner, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:17:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6a832730a670bc5081fff7c61cf8145dc60b46df9a7d88d495232ed489f91ea8-merged.mount: Deactivated successfully.
Nov 22 04:17:49 np0005532048 podman[309411]: 2025-11-22 09:17:49.815905996 +0000 UTC m=+1.453057757 container remove 896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 04:17:49 np0005532048 systemd[1]: libpod-conmon-896ad2593e8e567ddd096d36ca409e8f97e523e537c147925954a5856e902cd3.scope: Deactivated successfully.
Nov 22 04:17:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:17:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:17:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:17:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:17:49 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 56dc1e41-5ade-4a59-ad75-d6c419cb1a5c does not exist
Nov 22 04:17:49 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 3005e85d-41da-4122-8bb4-e5d0c5b87b49 does not exist
Nov 22 04:17:49 np0005532048 nova_compute[253661]: 2025-11-22 09:17:49.911 253665 DEBUG oslo_concurrency.processutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d/disk.config ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:49 np0005532048 nova_compute[253661]: 2025-11-22 09:17:49.912 253665 INFO nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Deleting local config drive /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d/disk.config because it was imported into RBD.#033[00m
Nov 22 04:17:49 np0005532048 nova_compute[253661]: 2025-11-22 09:17:49.947 253665 DEBUG nova.network.neutron [req-6ba054cd-fed4-4ef2-a0b2-faa79b55d11a req-557ba3b0-5136-4336-9f03-56622082e4dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Updated VIF entry in instance network info cache for port 085e3bcc-2e77-4c2e-8298-872aac04e65e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:17:49 np0005532048 nova_compute[253661]: 2025-11-22 09:17:49.948 253665 DEBUG nova.network.neutron [req-6ba054cd-fed4-4ef2-a0b2-faa79b55d11a req-557ba3b0-5136-4336-9f03-56622082e4dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Updating instance_info_cache with network_info: [{"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:17:49 np0005532048 nova_compute[253661]: 2025-11-22 09:17:49.969 253665 DEBUG oslo_concurrency.lockutils [req-6ba054cd-fed4-4ef2-a0b2-faa79b55d11a req-557ba3b0-5136-4336-9f03-56622082e4dd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:17:49 np0005532048 nova_compute[253661]: 2025-11-22 09:17:49.991 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:50 np0005532048 kernel: tap085e3bcc-2e: entered promiscuous mode
Nov 22 04:17:50 np0005532048 NetworkManager[48920]: <info>  [1763803070.0096] manager: (tap085e3bcc-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/198)
Nov 22 04:17:50 np0005532048 nova_compute[253661]: 2025-11-22 09:17:50.009 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:50Z|00447|binding|INFO|Claiming lport 085e3bcc-2e77-4c2e-8298-872aac04e65e for this chassis.
Nov 22 04:17:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:50Z|00448|binding|INFO|085e3bcc-2e77-4c2e-8298-872aac04e65e: Claiming fa:16:3e:52:a1:a9 10.100.0.10
Nov 22 04:17:50 np0005532048 nova_compute[253661]: 2025-11-22 09:17:50.016 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.027 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:a1:a9 10.100.0.10'], port_security=['fa:16:3e:52:a1:a9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ee68ed8e-d5b3-4069-ac90-f7e94430ed0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=085e3bcc-2e77-4c2e-8298-872aac04e65e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.028 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 085e3bcc-2e77-4c2e-8298-872aac04e65e in datapath d93e3720-b00d-41f5-8283-164e9f857d24 bound to our chassis#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.030 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d93e3720-b00d-41f5-8283-164e9f857d24#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.049 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2e9a1e-eade-4607-8464-4ccda9b912ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.051 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd93e3720-b1 in ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.054 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd93e3720-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.054 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c878191e-4ea7-4687-a3f3-e31add845f36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:50 np0005532048 systemd-udevd[309622]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.055 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[918a2b7d-10bf-4b6c-95ad-8ce1de65adc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:50 np0005532048 systemd-machined[215941]: New machine qemu-54-instance-00000031.
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.073 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[77641050-9fdd-4d30-8cec-c655c7e22cab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:50 np0005532048 NetworkManager[48920]: <info>  [1763803070.0811] device (tap085e3bcc-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:17:50 np0005532048 NetworkManager[48920]: <info>  [1763803070.0826] device (tap085e3bcc-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:17:50 np0005532048 systemd[1]: Started Virtual Machine qemu-54-instance-00000031.
Nov 22 04:17:50 np0005532048 nova_compute[253661]: 2025-11-22 09:17:50.090 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:50Z|00449|binding|INFO|Setting lport 085e3bcc-2e77-4c2e-8298-872aac04e65e ovn-installed in OVS
Nov 22 04:17:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:50Z|00450|binding|INFO|Setting lport 085e3bcc-2e77-4c2e-8298-872aac04e65e up in Southbound
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.096 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[60b013d8-b3e0-425f-bd21-ea1baf2acba2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:50 np0005532048 nova_compute[253661]: 2025-11-22 09:17:50.097 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.144 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c55d4a2f-27ec-48d6-ae91-6508141208b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:50 np0005532048 NetworkManager[48920]: <info>  [1763803070.1524] manager: (tapd93e3720-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/199)
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.151 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ba9dbd08-a9ed-4e3f-ae2c-a366f16ecdc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:50 np0005532048 systemd-udevd[309627]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.189 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[02155866-8a2d-4754-89b1-8a69e41b834e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.194 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9ba968-4e91-4b14-b59e-1ec76a35fd2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:50 np0005532048 NetworkManager[48920]: <info>  [1763803070.2270] device (tapd93e3720-b0): carrier: link connected
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.235 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[af0e6f24-c177-49ae-891c-5a07c24be81d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.266 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3a8524-7037-46e5-9255-43a9dd4f605e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592096, 'reachable_time': 39365, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309656, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.291 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[49c048f4-7559-4bfe-9e2f-c07ac8f30b37]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:9b56'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592096, 'tstamp': 592096}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309657, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.319 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9d3e40b7-1017-4056-a120-758065b059cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592096, 'reachable_time': 39365, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309658, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.365 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[00c8b873-0531-493e-bc43-c5f2de4269ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.447 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e1de7ea1-72c0-4bab-9dfa-e0a96791a8b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.450 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.451 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.451 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd93e3720-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:50 np0005532048 kernel: tapd93e3720-b0: entered promiscuous mode
Nov 22 04:17:50 np0005532048 nova_compute[253661]: 2025-11-22 09:17:50.453 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:50 np0005532048 NetworkManager[48920]: <info>  [1763803070.4546] manager: (tapd93e3720-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Nov 22 04:17:50 np0005532048 nova_compute[253661]: 2025-11-22 09:17:50.456 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.460 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd93e3720-b0, col_values=(('external_ids', {'iface-id': '956ab441-c5ef-4e3d-a7c6-6129a5260345'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:50 np0005532048 nova_compute[253661]: 2025-11-22 09:17:50.462 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:17:50Z|00451|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 04:17:50 np0005532048 nova_compute[253661]: 2025-11-22 09:17:50.463 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.464 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.466 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f8329976-87e2-422d-9b37-d5c27e8ec0bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.467 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
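The config dump above is written by neutron's `create_config_file` (driver.py:107) before haproxy is launched in the network namespace. As a rough illustration only — the function name `render_metadata_proxy_cfg` and the abridged template below are hypothetical, not neutron's actual code — the step amounts to substituting the network UUID into a fixed haproxy template:

```python
def render_metadata_proxy_cfg(network_id: str,
                              pid_dir: str = "/var/lib/neutron/external/pids") -> str:
    """Substitute the network UUID into an abridged haproxy template.

    Illustrative sketch only; neutron's real template (see the log dump
    above) also carries the defaults/timeout stanzas.
    """
    return (
        "global\n"
        f"    log-tag     haproxy-metadata-proxy-{network_id}\n"
        f"    pidfile     {pid_dir}/{network_id}.pid.haproxy\n"
        "    daemon\n"
        "\n"
        "listen listener\n"
        "    bind 169.254.169.254:80\n"
        "    server metadata /var/lib/neutron/metadata_proxy\n"
        f"    http-request add-header X-OVN-Network-ID {network_id}\n"
    )

# Render the config for the network seen in this log.
cfg = render_metadata_proxy_cfg("d93e3720-b00d-41f5-8283-164e9f857d24")
```

The rendered text is what the subsequent rootwrap command passes to haproxy with `-f`.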
Nov 22 04:17:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:50.468 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'env', 'PROCESS_TAG=haproxy-d93e3720-b00d-41f5-8283-164e9f857d24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d93e3720-b00d-41f5-8283-164e9f857d24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:17:50 np0005532048 nova_compute[253661]: 2025-11-22 09:17:50.479 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:50 np0005532048 nova_compute[253661]: 2025-11-22 09:17:50.539 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803070.5387995, ee68ed8e-d5b3-4069-ac90-f7e94430ed0d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:50 np0005532048 nova_compute[253661]: 2025-11-22 09:17:50.539 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] VM Started (Lifecycle Event)#033[00m
Nov 22 04:17:50 np0005532048 nova_compute[253661]: 2025-11-22 09:17:50.559 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:50 np0005532048 nova_compute[253661]: 2025-11-22 09:17:50.564 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803070.5391448, ee68ed8e-d5b3-4069-ac90-f7e94430ed0d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:50 np0005532048 nova_compute[253661]: 2025-11-22 09:17:50.564 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:17:50 np0005532048 nova_compute[253661]: 2025-11-22 09:17:50.581 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:50 np0005532048 nova_compute[253661]: 2025-11-22 09:17:50.587 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:17:50 np0005532048 nova_compute[253661]: 2025-11-22 09:17:50.609 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
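The numeric states in the sync message above ("current DB power_state: 0, VM power_state: 3") come from nova's `nova.compute.power_state` constants: the database has no recorded state yet while libvirt reports the guest paused, which is expected mid-spawn. A small lookup table for reading these log lines (values per nova's power_state module; the `decode` helper is just for illustration):

```python
# Numeric instance power states as defined in nova.compute.power_state.
# In the log above, DB power_state 0 vs VM power_state 3 decodes to
# NOSTATE (not yet recorded) vs PAUSED (guest created, not yet resumed).
POWER_STATES = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}

def decode(state: int) -> str:
    """Translate a numeric power state into its nova constant name."""
    return POWER_STATES.get(state, f"UNKNOWN({state})")
```

Later in this capture the same instance reports "VM power_state: 1" (RUNNING) once VIF plugging completes and the guest is resumed.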
Nov 22 04:17:50 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:17:50 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:17:50 np0005532048 podman[309731]: 2025-11-22 09:17:50.899047915 +0000 UTC m=+0.061506255 container create 2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:17:50 np0005532048 systemd[1]: Started libpod-conmon-2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424.scope.
Nov 22 04:17:50 np0005532048 podman[309731]: 2025-11-22 09:17:50.864779062 +0000 UTC m=+0.027237432 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:17:50 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:17:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4b6fe6d0e31e82f90921939e8d033bb084c60d0f39f2ba0237780fbbf52430/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:51 np0005532048 podman[309731]: 2025-11-22 09:17:51.000235783 +0000 UTC m=+0.162694153 container init 2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:17:51 np0005532048 podman[309731]: 2025-11-22 09:17:51.007130951 +0000 UTC m=+0.169589291 container start 2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 04:17:51 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[309746]: [NOTICE]   (309750) : New worker (309752) forked
Nov 22 04:17:51 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[309746]: [NOTICE]   (309750) : Loading success.
Nov 22 04:17:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 22 04:17:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:17:52
Nov 22 04:17:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:17:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:17:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['vms', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root']
Nov 22 04:17:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:17:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:17:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:17:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:17:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:17:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:17:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:17:52 np0005532048 nova_compute[253661]: 2025-11-22 09:17:52.873 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803057.8727126, 971e37bd-eb33-42b7-b5c7-86eff88cb700 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:52 np0005532048 nova_compute[253661]: 2025-11-22 09:17:52.874 253665 INFO nova.compute.manager [-] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:17:52 np0005532048 nova_compute[253661]: 2025-11-22 09:17:52.894 253665 DEBUG nova.compute.manager [None req-a2a017e0-40d1-4124-9d9e-cd71c158066f - - - - - -] [instance: 971e37bd-eb33-42b7-b5c7-86eff88cb700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 22 04:17:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:17:53.617 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:17:53 np0005532048 nova_compute[253661]: 2025-11-22 09:17:53.928 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:17:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:17:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:17:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:17:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:17:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:17:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:17:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:17:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:17:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:17:54 np0005532048 nova_compute[253661]: 2025-11-22 09:17:54.995 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 22 04:17:55 np0005532048 podman[309762]: 2025-11-22 09:17:55.384717638 +0000 UTC m=+0.070711059 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 22 04:17:55 np0005532048 podman[309761]: 2025-11-22 09:17:55.400327577 +0000 UTC m=+0.088979842 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 04:17:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.917 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.917 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.923 253665 DEBUG nova.compute.manager [req-36a60c58-793d-460f-bfc5-080838b811cd req-caad74f5-26b7-4f20-8c5f-dbe7f3341c3b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received event network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.924 253665 DEBUG oslo_concurrency.lockutils [req-36a60c58-793d-460f-bfc5-080838b811cd req-caad74f5-26b7-4f20-8c5f-dbe7f3341c3b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.924 253665 DEBUG oslo_concurrency.lockutils [req-36a60c58-793d-460f-bfc5-080838b811cd req-caad74f5-26b7-4f20-8c5f-dbe7f3341c3b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.924 253665 DEBUG oslo_concurrency.lockutils [req-36a60c58-793d-460f-bfc5-080838b811cd req-caad74f5-26b7-4f20-8c5f-dbe7f3341c3b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.924 253665 DEBUG nova.compute.manager [req-36a60c58-793d-460f-bfc5-080838b811cd req-caad74f5-26b7-4f20-8c5f-dbe7f3341c3b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Processing event network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.925 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.931 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803076.9312377, ee68ed8e-d5b3-4069-ac90-f7e94430ed0d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.932 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.935 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.938 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.945 253665 INFO nova.virt.libvirt.driver [-] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance spawned successfully.#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.946 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.964 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.972 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.980 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.981 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.982 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.982 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.983 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.984 253665 DEBUG nova.virt.libvirt.driver [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:17:56 np0005532048 nova_compute[253661]: 2025-11-22 09:17:56.993 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.052 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.053 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.057 253665 INFO nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Took 15.18 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.058 253665 DEBUG nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.068 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.068 253665 INFO nova.compute.claims [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.172 253665 INFO nova.compute.manager [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Took 16.17 seconds to build instance.#033[00m
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.196 253665 DEBUG oslo_concurrency.lockutils [None req-fc7a6a56-1b19-4c30-a3e0-a25e33ea0639 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.266 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 590 KiB/s wr, 22 op/s
Nov 22 04:17:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:17:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1991763992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.831 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.839 253665 DEBUG nova.compute.provider_tree [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.856 253665 DEBUG nova.scheduler.client.report [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.880 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.882 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.929 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.931 253665 DEBUG nova.network.neutron [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.952 253665 INFO nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:17:57 np0005532048 nova_compute[253661]: 2025-11-22 09:17:57.976 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:17:58 np0005532048 nova_compute[253661]: 2025-11-22 09:17:58.067 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:17:58 np0005532048 nova_compute[253661]: 2025-11-22 09:17:58.069 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:17:58 np0005532048 nova_compute[253661]: 2025-11-22 09:17:58.070 253665 INFO nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Creating image(s)#033[00m
Nov 22 04:17:58 np0005532048 nova_compute[253661]: 2025-11-22 09:17:58.105 253665 DEBUG nova.storage.rbd_utils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image 636b1046-fff8-4a45-8a14-04010b2f282e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:58 np0005532048 nova_compute[253661]: 2025-11-22 09:17:58.147 253665 DEBUG nova.storage.rbd_utils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image 636b1046-fff8-4a45-8a14-04010b2f282e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:58 np0005532048 nova_compute[253661]: 2025-11-22 09:17:58.180 253665 DEBUG nova.storage.rbd_utils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image 636b1046-fff8-4a45-8a14-04010b2f282e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:58 np0005532048 nova_compute[253661]: 2025-11-22 09:17:58.186 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:58 np0005532048 nova_compute[253661]: 2025-11-22 09:17:58.245 253665 DEBUG nova.policy [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '559fd7e00a0a468797efe4955caffc4a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:17:58 np0005532048 nova_compute[253661]: 2025-11-22 09:17:58.291 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:17:58 np0005532048 nova_compute[253661]: 2025-11-22 09:17:58.293 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:58 np0005532048 nova_compute[253661]: 2025-11-22 09:17:58.294 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:58 np0005532048 nova_compute[253661]: 2025-11-22 09:17:58.294 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:58 np0005532048 nova_compute[253661]: 2025-11-22 09:17:58.327 253665 DEBUG nova.storage.rbd_utils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image 636b1046-fff8-4a45-8a14-04010b2f282e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:17:58 np0005532048 nova_compute[253661]: 2025-11-22 09:17:58.334 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 636b1046-fff8-4a45-8a14-04010b2f282e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:17:58 np0005532048 podman[309875]: 2025-11-22 09:17:58.418256749 +0000 UTC m=+0.107868722 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 04:17:58 np0005532048 nova_compute[253661]: 2025-11-22 09:17:58.931 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:17:59 np0005532048 nova_compute[253661]: 2025-11-22 09:17:59.001 253665 DEBUG nova.compute.manager [req-9622cc96-686e-4344-a8ee-0dc584df20f1 req-de28c57d-1f28-4cc2-af19-edf6727e7829 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received event network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:17:59 np0005532048 nova_compute[253661]: 2025-11-22 09:17:59.002 253665 DEBUG oslo_concurrency.lockutils [req-9622cc96-686e-4344-a8ee-0dc584df20f1 req-de28c57d-1f28-4cc2-af19-edf6727e7829 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:59 np0005532048 nova_compute[253661]: 2025-11-22 09:17:59.002 253665 DEBUG oslo_concurrency.lockutils [req-9622cc96-686e-4344-a8ee-0dc584df20f1 req-de28c57d-1f28-4cc2-af19-edf6727e7829 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:59 np0005532048 nova_compute[253661]: 2025-11-22 09:17:59.003 253665 DEBUG oslo_concurrency.lockutils [req-9622cc96-686e-4344-a8ee-0dc584df20f1 req-de28c57d-1f28-4cc2-af19-edf6727e7829 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:17:59 np0005532048 nova_compute[253661]: 2025-11-22 09:17:59.003 253665 DEBUG nova.compute.manager [req-9622cc96-686e-4344-a8ee-0dc584df20f1 req-de28c57d-1f28-4cc2-af19-edf6727e7829 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] No waiting events found dispatching network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:17:59 np0005532048 nova_compute[253661]: 2025-11-22 09:17:59.003 253665 WARNING nova.compute.manager [req-9622cc96-686e-4344-a8ee-0dc584df20f1 req-de28c57d-1f28-4cc2-af19-edf6727e7829 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received unexpected event network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e for instance with vm_state active and task_state shelving.#033[00m
Nov 22 04:17:59 np0005532048 nova_compute[253661]: 2025-11-22 09:17:59.261 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:17:59 np0005532048 nova_compute[253661]: 2025-11-22 09:17:59.263 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:17:59 np0005532048 nova_compute[253661]: 2025-11-22 09:17:59.263 253665 INFO nova.compute.manager [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Shelving#033[00m
Nov 22 04:17:59 np0005532048 nova_compute[253661]: 2025-11-22 09:17:59.282 253665 DEBUG nova.virt.libvirt.driver [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:17:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 51 op/s
Nov 22 04:17:59 np0005532048 nova_compute[253661]: 2025-11-22 09:17:59.476 253665 DEBUG nova.network.neutron [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Successfully created port: a288a5e5-7b57-4be8-9617-3271ea1e210f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:18:00 np0005532048 nova_compute[253661]: 2025-11-22 09:18:00.054 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:00 np0005532048 nova_compute[253661]: 2025-11-22 09:18:00.387 253665 DEBUG nova.network.neutron [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Successfully updated port: a288a5e5-7b57-4be8-9617-3271ea1e210f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:18:00 np0005532048 nova_compute[253661]: 2025-11-22 09:18:00.400 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:18:00 np0005532048 nova_compute[253661]: 2025-11-22 09:18:00.400 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquired lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:18:00 np0005532048 nova_compute[253661]: 2025-11-22 09:18:00.401 253665 DEBUG nova.network.neutron [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:18:00 np0005532048 nova_compute[253661]: 2025-11-22 09:18:00.495 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 636b1046-fff8-4a45-8a14-04010b2f282e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:00 np0005532048 nova_compute[253661]: 2025-11-22 09:18:00.555 253665 DEBUG nova.storage.rbd_utils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] resizing rbd image 636b1046-fff8-4a45-8a14-04010b2f282e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:18:00 np0005532048 nova_compute[253661]: 2025-11-22 09:18:00.805 253665 DEBUG nova.network.neutron [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:18:01 np0005532048 nova_compute[253661]: 2025-11-22 09:18:01.115 253665 DEBUG nova.objects.instance [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'migration_context' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:01 np0005532048 nova_compute[253661]: 2025-11-22 09:18:01.128 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:18:01 np0005532048 nova_compute[253661]: 2025-11-22 09:18:01.129 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Ensure instance console log exists: /var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:18:01 np0005532048 nova_compute[253661]: 2025-11-22 09:18:01.129 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:01 np0005532048 nova_compute[253661]: 2025-11-22 09:18:01.130 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:01 np0005532048 nova_compute[253661]: 2025-11-22 09:18:01.130 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 88 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 51 op/s
Nov 22 04:18:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:01 np0005532048 nova_compute[253661]: 2025-11-22 09:18:01.730 253665 DEBUG nova.compute.manager [req-bb74b686-a23d-4dc8-aece-fc35932d8af9 req-10a70fdd-281e-4c18-8ca3-4d8b84dccd74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-changed-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:01 np0005532048 nova_compute[253661]: 2025-11-22 09:18:01.731 253665 DEBUG nova.compute.manager [req-bb74b686-a23d-4dc8-aece-fc35932d8af9 req-10a70fdd-281e-4c18-8ca3-4d8b84dccd74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Refreshing instance network info cache due to event network-changed-a288a5e5-7b57-4be8-9617-3271ea1e210f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:18:01 np0005532048 nova_compute[253661]: 2025-11-22 09:18:01.731 253665 DEBUG oslo_concurrency.lockutils [req-bb74b686-a23d-4dc8-aece-fc35932d8af9 req-10a70fdd-281e-4c18-8ca3-4d8b84dccd74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.580 253665 DEBUG nova.network.neutron [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.618 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Releasing lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.619 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance network_info: |[{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.620 253665 DEBUG oslo_concurrency.lockutils [req-bb74b686-a23d-4dc8-aece-fc35932d8af9 req-10a70fdd-281e-4c18-8ca3-4d8b84dccd74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.620 253665 DEBUG nova.network.neutron [req-bb74b686-a23d-4dc8-aece-fc35932d8af9 req-10a70fdd-281e-4c18-8ca3-4d8b84dccd74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Refreshing network info cache for port a288a5e5-7b57-4be8-9617-3271ea1e210f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.625 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Start _get_guest_xml network_info=[{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.629 253665 WARNING nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.652 253665 DEBUG nova.virt.libvirt.host [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.653 253665 DEBUG nova.virt.libvirt.host [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.661 253665 DEBUG nova.virt.libvirt.host [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.662 253665 DEBUG nova.virt.libvirt.host [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.662 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.663 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.663 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.664 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.664 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.664 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.665 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.665 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.665 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.666 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.666 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.666 253665 DEBUG nova.virt.hardware [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:18:02 np0005532048 nova_compute[253661]: 2025-11-22 09:18:02.669 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:18:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/583057949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.165 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.195 253665 DEBUG nova.storage.rbd_utils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image 636b1046-fff8-4a45-8a14-04010b2f282e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.200 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 106 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 531 KiB/s wr, 74 op/s
Nov 22 04:18:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:18:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3007183746' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.685 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.688 253665 DEBUG nova.virt.libvirt.vif [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.688 253665 DEBUG nova.network.os_vif_util [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.689 253665 DEBUG nova.network.os_vif_util [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.690 253665 DEBUG nova.objects.instance [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.708 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  <uuid>636b1046-fff8-4a45-8a14-04010b2f282e</uuid>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  <name>instance-00000032</name>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerActionsTestJSON-server-149918095</nova:name>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:18:02</nova:creationTime>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:18:03 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:        <nova:user uuid="559fd7e00a0a468797efe4955caffc4a">tempest-ServerActionsTestJSON-1918756964-project-member</nova:user>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:        <nova:project uuid="d9601c2d2b97440483ffc0bf4f598e73">tempest-ServerActionsTestJSON-1918756964</nova:project>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:        <nova:port uuid="a288a5e5-7b57-4be8-9617-3271ea1e210f">
Nov 22 04:18:03 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <entry name="serial">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <entry name="uuid">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk">
Nov 22 04:18:03 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:18:03 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk.config">
Nov 22 04:18:03 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:18:03 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:70:38:8e"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <target dev="tapa288a5e5-7b"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/console.log" append="off"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:18:03 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:18:03 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:18:03 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:18:03 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.710 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Preparing to wait for external event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.710 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.710 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.711 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.712 253665 DEBUG nova.virt.libvirt.vif [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:17:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.712 253665 DEBUG nova.network.os_vif_util [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.713 253665 DEBUG nova.network.os_vif_util [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.713 253665 DEBUG os_vif [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.714 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.715 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.715 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.719 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.719 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa288a5e5-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.720 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa288a5e5-7b, col_values=(('external_ids', {'iface-id': 'a288a5e5-7b57-4be8-9617-3271ea1e210f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:38:8e', 'vm-uuid': '636b1046-fff8-4a45-8a14-04010b2f282e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.759 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:03 np0005532048 NetworkManager[48920]: <info>  [1763803083.7618] manager: (tapa288a5e5-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.762 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.771 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.773 253665 INFO os_vif [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.837 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.838 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.838 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] No VIF found with MAC fa:16:3e:70:38:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.839 253665 INFO nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Using config drive#033[00m
Nov 22 04:18:03 np0005532048 nova_compute[253661]: 2025-11-22 09:18:03.861 253665 DEBUG nova.storage.rbd_utils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image 636b1046-fff8-4a45-8a14-04010b2f282e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:04 np0005532048 nova_compute[253661]: 2025-11-22 09:18:04.220 253665 DEBUG nova.network.neutron [req-bb74b686-a23d-4dc8-aece-fc35932d8af9 req-10a70fdd-281e-4c18-8ca3-4d8b84dccd74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updated VIF entry in instance network info cache for port a288a5e5-7b57-4be8-9617-3271ea1e210f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:18:04 np0005532048 nova_compute[253661]: 2025-11-22 09:18:04.221 253665 DEBUG nova.network.neutron [req-bb74b686-a23d-4dc8-aece-fc35932d8af9 req-10a70fdd-281e-4c18-8ca3-4d8b84dccd74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:18:04 np0005532048 nova_compute[253661]: 2025-11-22 09:18:04.241 253665 DEBUG oslo_concurrency.lockutils [req-bb74b686-a23d-4dc8-aece-fc35932d8af9 req-10a70fdd-281e-4c18-8ca3-4d8b84dccd74 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:18:04 np0005532048 nova_compute[253661]: 2025-11-22 09:18:04.314 253665 INFO nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Creating config drive at /var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/disk.config#033[00m
Nov 22 04:18:04 np0005532048 nova_compute[253661]: 2025-11-22 09:18:04.321 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwjwiek9t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:04 np0005532048 nova_compute[253661]: 2025-11-22 09:18:04.478 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwjwiek9t" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:04 np0005532048 nova_compute[253661]: 2025-11-22 09:18:04.511 253665 DEBUG nova.storage.rbd_utils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image 636b1046-fff8-4a45-8a14-04010b2f282e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:04 np0005532048 nova_compute[253661]: 2025-11-22 09:18:04.522 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/disk.config 636b1046-fff8-4a45-8a14-04010b2f282e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:05 np0005532048 nova_compute[253661]: 2025-11-22 09:18:05.058 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 134 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Nov 22 04:18:05 np0005532048 nova_compute[253661]: 2025-11-22 09:18:05.360 253665 DEBUG oslo_concurrency.processutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/disk.config 636b1046-fff8-4a45-8a14-04010b2f282e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.838s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:05 np0005532048 nova_compute[253661]: 2025-11-22 09:18:05.361 253665 INFO nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Deleting local config drive /var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/disk.config because it was imported into RBD.#033[00m
Nov 22 04:18:05 np0005532048 kernel: tapa288a5e5-7b: entered promiscuous mode
Nov 22 04:18:05 np0005532048 nova_compute[253661]: 2025-11-22 09:18:05.421 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:05 np0005532048 NetworkManager[48920]: <info>  [1763803085.4250] manager: (tapa288a5e5-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/202)
Nov 22 04:18:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:05Z|00452|binding|INFO|Claiming lport a288a5e5-7b57-4be8-9617-3271ea1e210f for this chassis.
Nov 22 04:18:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:05Z|00453|binding|INFO|a288a5e5-7b57-4be8-9617-3271ea1e210f: Claiming fa:16:3e:70:38:8e 10.100.0.4
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.435 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.436 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 bound to our chassis#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.437 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebc42408-7b03-480c-a016-1e5bb2ebcc93#033[00m
Nov 22 04:18:05 np0005532048 systemd-udevd[310148]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.453 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a6b238-3f77-4d78-a5c1-08d3a5e5054f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.455 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapebc42408-71 in ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.458 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapebc42408-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.458 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a10991-63af-4c72-a13d-f28c3caa3a0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:05 np0005532048 NetworkManager[48920]: <info>  [1763803085.4651] device (tapa288a5e5-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:18:05 np0005532048 NetworkManager[48920]: <info>  [1763803085.4659] device (tapa288a5e5-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:18:05 np0005532048 systemd-machined[215941]: New machine qemu-55-instance-00000032.
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.461 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[211e57dd-6862-4a64-b46a-47c49818c309]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.479 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b104baf9-5d0f-4134-97d8-51e079f79393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:05 np0005532048 systemd[1]: Started Virtual Machine qemu-55-instance-00000032.
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.501 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[497afcaa-984f-4df9-a59f-2e17d1c97dd9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:05 np0005532048 nova_compute[253661]: 2025-11-22 09:18:05.503 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:05Z|00454|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f ovn-installed in OVS
Nov 22 04:18:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:05Z|00455|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f up in Southbound
Nov 22 04:18:05 np0005532048 nova_compute[253661]: 2025-11-22 09:18:05.508 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.543 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8d164450-9c39-4e3f-beb1-6f040ff806eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.553 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2bff0a23-c5bb-425a-b78a-3acbb5f389f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:05 np0005532048 NetworkManager[48920]: <info>  [1763803085.5545] manager: (tapebc42408-70): new Veth device (/org/freedesktop/NetworkManager/Devices/203)
Nov 22 04:18:05 np0005532048 systemd-udevd[310151]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.620 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdeae6a-85c5-4375-9982-5dd276f90518]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.628 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[318c48f4-91f1-45ec-a521-3010d1b9f16f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:05 np0005532048 NetworkManager[48920]: <info>  [1763803085.6616] device (tapebc42408-70): carrier: link connected
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.669 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c04dc445-edab-487b-9f55-1b805a8d912c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.698 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6f228402-8bf5-4df8-b479-f8ba760bfacd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593639, 'reachable_time': 29944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310181, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.721 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[db0ebb0b-cbe9-47bb-a820-64762e00cb31]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:e3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 593639, 'tstamp': 593639}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310182, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.751 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7562b77f-5790-469c-88b0-6332b1eb6544]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593639, 'reachable_time': 29944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310183, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.796 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c22cad1-9bec-4d80-a9af-1a9a96b73404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.897 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4353da41-d287-408d-814e-1a9c14d5f4eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.901 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.901 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.902 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebc42408-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:05 np0005532048 nova_compute[253661]: 2025-11-22 09:18:05.904 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:05 np0005532048 NetworkManager[48920]: <info>  [1763803085.9050] manager: (tapebc42408-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Nov 22 04:18:05 np0005532048 kernel: tapebc42408-70: entered promiscuous mode
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.910 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebc42408-70, col_values=(('external_ids', {'iface-id': 'efc8861c-ffa7-41c8-9325-c43c7271007f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:05 np0005532048 nova_compute[253661]: 2025-11-22 09:18:05.911 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:05Z|00456|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.915 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.917 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7b3b30-4a0d-4af8-9a62-f882e3f0e41c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.917 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:18:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:05.919 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'env', 'PROCESS_TAG=haproxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ebc42408-7b03-480c-a016-1e5bb2ebcc93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:18:05 np0005532048 nova_compute[253661]: 2025-11-22 09:18:05.969 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:06 np0005532048 podman[310248]: 2025-11-22 09:18:06.289073007 +0000 UTC m=+0.027870459 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:18:06 np0005532048 nova_compute[253661]: 2025-11-22 09:18:06.436 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803086.4356837, 636b1046-fff8-4a45-8a14-04010b2f282e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:18:06 np0005532048 nova_compute[253661]: 2025-11-22 09:18:06.438 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Started (Lifecycle Event)#033[00m
Nov 22 04:18:06 np0005532048 nova_compute[253661]: 2025-11-22 09:18:06.458 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:06 np0005532048 nova_compute[253661]: 2025-11-22 09:18:06.467 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803086.437159, 636b1046-fff8-4a45-8a14-04010b2f282e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:18:06 np0005532048 nova_compute[253661]: 2025-11-22 09:18:06.468 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:18:06 np0005532048 nova_compute[253661]: 2025-11-22 09:18:06.483 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:06 np0005532048 podman[310248]: 2025-11-22 09:18:06.484416493 +0000 UTC m=+0.223213925 container create 17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 04:18:06 np0005532048 nova_compute[253661]: 2025-11-22 09:18:06.487 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:18:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:06 np0005532048 nova_compute[253661]: 2025-11-22 09:18:06.505 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:18:06 np0005532048 systemd[1]: Started libpod-conmon-17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de.scope.
Nov 22 04:18:06 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:18:06 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5234ae273265799e05869e908957591393f90ceac631303829d1658fcdf1825/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:06 np0005532048 podman[310248]: 2025-11-22 09:18:06.683566362 +0000 UTC m=+0.422363824 container init 17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:18:06 np0005532048 podman[310248]: 2025-11-22 09:18:06.690249945 +0000 UTC m=+0.429047377 container start 17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 04:18:06 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[310272]: [NOTICE]   (310276) : New worker (310278) forked
Nov 22 04:18:06 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[310272]: [NOTICE]   (310276) : Loading success.
Nov 22 04:18:06 np0005532048 nova_compute[253661]: 2025-11-22 09:18:06.799 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:06 np0005532048 nova_compute[253661]: 2025-11-22 09:18:06.800 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:06 np0005532048 nova_compute[253661]: 2025-11-22 09:18:06.826 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:18:06 np0005532048 nova_compute[253661]: 2025-11-22 09:18:06.898 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:06 np0005532048 nova_compute[253661]: 2025-11-22 09:18:06.899 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:06 np0005532048 nova_compute[253661]: 2025-11-22 09:18:06.907 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:18:06 np0005532048 nova_compute[253661]: 2025-11-22 09:18:06.908 253665 INFO nova.compute.claims [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.040 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 134 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Nov 22 04:18:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:18:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2104638163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.537 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.545 253665 DEBUG nova.compute.provider_tree [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.561 253665 DEBUG nova.scheduler.client.report [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.587 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.589 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.632 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.634 253665 DEBUG nova.network.neutron [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.652 253665 INFO nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.668 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.765 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.767 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.767 253665 INFO nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Creating image(s)
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.789 253665 DEBUG nova.storage.rbd_utils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] rbd image d583bf52-8135-4fca-a3f4-cf6efd88f497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.813 253665 DEBUG nova.storage.rbd_utils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] rbd image d583bf52-8135-4fca-a3f4-cf6efd88f497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.836 253665 DEBUG nova.storage.rbd_utils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] rbd image d583bf52-8135-4fca-a3f4-cf6efd88f497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.841 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.884 253665 DEBUG nova.policy [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e3344198c364c67aa73008f33323a4d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '377c148737af4a5fb70d3e00de87fcd3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.925 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.927 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.927 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.928 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.947 253665 DEBUG nova.storage.rbd_utils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] rbd image d583bf52-8135-4fca-a3f4-cf6efd88f497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:18:07 np0005532048 nova_compute[253661]: 2025-11-22 09:18:07.950 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d583bf52-8135-4fca-a3f4-cf6efd88f497_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:08 np0005532048 nova_compute[253661]: 2025-11-22 09:18:08.484 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d583bf52-8135-4fca-a3f4-cf6efd88f497_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:08 np0005532048 nova_compute[253661]: 2025-11-22 09:18:08.555 253665 DEBUG nova.storage.rbd_utils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] resizing rbd image d583bf52-8135-4fca-a3f4-cf6efd88f497_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:18:08 np0005532048 nova_compute[253661]: 2025-11-22 09:18:08.760 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:08 np0005532048 nova_compute[253661]: 2025-11-22 09:18:08.832 253665 DEBUG nova.objects.instance [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lazy-loading 'migration_context' on Instance uuid d583bf52-8135-4fca-a3f4-cf6efd88f497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:18:08 np0005532048 nova_compute[253661]: 2025-11-22 09:18:08.851 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:18:08 np0005532048 nova_compute[253661]: 2025-11-22 09:18:08.852 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Ensure instance console log exists: /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:18:08 np0005532048 nova_compute[253661]: 2025-11-22 09:18:08.852 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:08 np0005532048 nova_compute[253661]: 2025-11-22 09:18:08.853 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:08 np0005532048 nova_compute[253661]: 2025-11-22 09:18:08.853 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.089 253665 DEBUG nova.network.neutron [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Successfully created port: 68713fec-01b1-463b-861c-b96beeb4381a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.333 253665 DEBUG nova.virt.libvirt.driver [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 04:18:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 154 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 113 op/s
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.626 253665 DEBUG nova.compute.manager [req-4d27a30b-8adf-4c81-90d4-b2901d6cf261 req-507874fc-bed0-4c4c-b59b-c9886ba9c6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.626 253665 DEBUG oslo_concurrency.lockutils [req-4d27a30b-8adf-4c81-90d4-b2901d6cf261 req-507874fc-bed0-4c4c-b59b-c9886ba9c6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.626 253665 DEBUG oslo_concurrency.lockutils [req-4d27a30b-8adf-4c81-90d4-b2901d6cf261 req-507874fc-bed0-4c4c-b59b-c9886ba9c6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.627 253665 DEBUG oslo_concurrency.lockutils [req-4d27a30b-8adf-4c81-90d4-b2901d6cf261 req-507874fc-bed0-4c4c-b59b-c9886ba9c6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.627 253665 DEBUG nova.compute.manager [req-4d27a30b-8adf-4c81-90d4-b2901d6cf261 req-507874fc-bed0-4c4c-b59b-c9886ba9c6f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Processing event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.628 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.632 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803089.6317296, 636b1046-fff8-4a45-8a14-04010b2f282e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.632 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Resumed (Lifecycle Event)
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.635 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.639 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance spawned successfully.
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.640 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.655 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.661 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.664 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.664 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.666 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.666 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.666 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.668 253665 DEBUG nova.virt.libvirt.driver [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.697 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.731 253665 INFO nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Took 11.66 seconds to spawn the instance on the hypervisor.
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.732 253665 DEBUG nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.809 253665 INFO nova.compute.manager [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Took 12.80 seconds to build instance.
Nov 22 04:18:09 np0005532048 nova_compute[253661]: 2025-11-22 09:18:09.830 253665 DEBUG oslo_concurrency.lockutils [None req-f1f400f8-aedb-44b3-9cbc-a6f60cb76d60 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:10 np0005532048 nova_compute[253661]: 2025-11-22 09:18:10.059 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:10 np0005532048 nova_compute[253661]: 2025-11-22 09:18:10.643 253665 DEBUG nova.network.neutron [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Successfully updated port: 68713fec-01b1-463b-861c-b96beeb4381a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:18:10 np0005532048 nova_compute[253661]: 2025-11-22 09:18:10.666 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:18:10 np0005532048 nova_compute[253661]: 2025-11-22 09:18:10.666 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquired lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:18:10 np0005532048 nova_compute[253661]: 2025-11-22 09:18:10.667 253665 DEBUG nova.network.neutron [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:18:10 np0005532048 nova_compute[253661]: 2025-11-22 09:18:10.955 253665 DEBUG nova.network.neutron [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:18:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 154 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 688 KiB/s rd, 2.4 MiB/s wr, 71 op/s
Nov 22 04:18:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:11Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:52:a1:a9 10.100.0.10
Nov 22 04:18:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:11Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:52:a1:a9 10.100.0.10
Nov 22 04:18:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:11 np0005532048 nova_compute[253661]: 2025-11-22 09:18:11.954 253665 DEBUG nova.compute.manager [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:18:11 np0005532048 nova_compute[253661]: 2025-11-22 09:18:11.954 253665 DEBUG oslo_concurrency.lockutils [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:11 np0005532048 nova_compute[253661]: 2025-11-22 09:18:11.955 253665 DEBUG oslo_concurrency.lockutils [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:11 np0005532048 nova_compute[253661]: 2025-11-22 09:18:11.955 253665 DEBUG oslo_concurrency.lockutils [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:11 np0005532048 nova_compute[253661]: 2025-11-22 09:18:11.955 253665 DEBUG nova.compute.manager [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:18:11 np0005532048 nova_compute[253661]: 2025-11-22 09:18:11.956 253665 WARNING nova.compute.manager [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.
Nov 22 04:18:11 np0005532048 nova_compute[253661]: 2025-11-22 09:18:11.956 253665 DEBUG nova.compute.manager [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received event network-changed-68713fec-01b1-463b-861c-b96beeb4381a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:18:11 np0005532048 nova_compute[253661]: 2025-11-22 09:18:11.956 253665 DEBUG nova.compute.manager [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Refreshing instance network info cache due to event network-changed-68713fec-01b1-463b-861c-b96beeb4381a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:18:11 np0005532048 nova_compute[253661]: 2025-11-22 09:18:11.956 253665 DEBUG oslo_concurrency.lockutils [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:18:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:18:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/683596592' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:18:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:18:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/683596592' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.383 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:12 np0005532048 NetworkManager[48920]: <info>  [1763803092.3859] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Nov 22 04:18:12 np0005532048 NetworkManager[48920]: <info>  [1763803092.3868] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.491 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:12 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:12Z|00457|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 04:18:12 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:12Z|00458|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.508 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.546 253665 DEBUG nova.network.neutron [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Updating instance_info_cache with network_info: [{"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.566 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Releasing lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.566 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Instance network_info: |[{"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.566 253665 DEBUG oslo_concurrency.lockutils [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.567 253665 DEBUG nova.network.neutron [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Refreshing network info cache for port 68713fec-01b1-463b-861c-b96beeb4381a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.571 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Start _get_guest_xml network_info=[{"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.576 253665 WARNING nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.581 253665 DEBUG nova.virt.libvirt.host [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.582 253665 DEBUG nova.virt.libvirt.host [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.590 253665 DEBUG nova.virt.libvirt.host [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.591 253665 DEBUG nova.virt.libvirt.host [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.591 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.591 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.592 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.592 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.592 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.592 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.593 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.593 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.593 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.593 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.593 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.594 253665 DEBUG nova.virt.hardware [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.597 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.654 253665 DEBUG nova.compute.manager [req-b08b1129-15cf-4edd-8333-ff71b55100a3 req-8adb6736-9c21-4c19-91df-dd8c9acff964 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-changed-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.654 253665 DEBUG nova.compute.manager [req-b08b1129-15cf-4edd-8333-ff71b55100a3 req-8adb6736-9c21-4c19-91df-dd8c9acff964 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Refreshing instance network info cache due to event network-changed-a288a5e5-7b57-4be8-9617-3271ea1e210f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.655 253665 DEBUG oslo_concurrency.lockutils [req-b08b1129-15cf-4edd-8333-ff71b55100a3 req-8adb6736-9c21-4c19-91df-dd8c9acff964 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.655 253665 DEBUG oslo_concurrency.lockutils [req-b08b1129-15cf-4edd-8333-ff71b55100a3 req-8adb6736-9c21-4c19-91df-dd8c9acff964 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:18:12 np0005532048 nova_compute[253661]: 2025-11-22 09:18:12.655 253665 DEBUG nova.network.neutron [req-b08b1129-15cf-4edd-8333-ff71b55100a3 req-8adb6736-9c21-4c19-91df-dd8c9acff964 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Refreshing network info cache for port a288a5e5-7b57-4be8-9617-3271ea1e210f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:18:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:18:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2395612675' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.104 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.128 253665 DEBUG nova.storage.rbd_utils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] rbd image d583bf52-8135-4fca-a3f4-cf6efd88f497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.135 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 196 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.2 MiB/s wr, 125 op/s
Nov 22 04:18:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:18:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3174716448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.629 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.631 253665 DEBUG nova.virt.libvirt.vif [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1695394731',display_name='tempest-InstanceActionsTestJSON-server-1695394731',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1695394731',id=51,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='377c148737af4a5fb70d3e00de87fcd3',ramdisk_id='',reservation_id='r-9m6ivfpr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-371860100',owner_user_name='tempest-InstanceActionsTestJSON-371860100-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:07Z,user_data=None,user_id='8e3344198c364c67aa73008f33323a4d',uuid=d583bf52-8135-4fca-a3f4-cf6efd88f497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.632 253665 DEBUG nova.network.os_vif_util [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converting VIF {"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.633 253665 DEBUG nova.network.os_vif_util [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.634 253665 DEBUG nova.objects.instance [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lazy-loading 'pci_devices' on Instance uuid d583bf52-8135-4fca-a3f4-cf6efd88f497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.650 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  <uuid>d583bf52-8135-4fca-a3f4-cf6efd88f497</uuid>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  <name>instance-00000033</name>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <nova:name>tempest-InstanceActionsTestJSON-server-1695394731</nova:name>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:18:12</nova:creationTime>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:18:13 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:        <nova:user uuid="8e3344198c364c67aa73008f33323a4d">tempest-InstanceActionsTestJSON-371860100-project-member</nova:user>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:        <nova:project uuid="377c148737af4a5fb70d3e00de87fcd3">tempest-InstanceActionsTestJSON-371860100</nova:project>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:        <nova:port uuid="68713fec-01b1-463b-861c-b96beeb4381a">
Nov 22 04:18:13 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <entry name="serial">d583bf52-8135-4fca-a3f4-cf6efd88f497</entry>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <entry name="uuid">d583bf52-8135-4fca-a3f4-cf6efd88f497</entry>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d583bf52-8135-4fca-a3f4-cf6efd88f497_disk">
Nov 22 04:18:13 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:18:13 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d583bf52-8135-4fca-a3f4-cf6efd88f497_disk.config">
Nov 22 04:18:13 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:18:13 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:89:be:dc"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <target dev="tap68713fec-01"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/console.log" append="off"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:18:13 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:18:13 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:18:13 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:18:13 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.651 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Preparing to wait for external event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.651 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.651 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.651 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.652 253665 DEBUG nova.virt.libvirt.vif [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1695394731',display_name='tempest-InstanceActionsTestJSON-server-1695394731',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1695394731',id=51,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='377c148737af4a5fb70d3e00de87fcd3',ramdisk_id='',reservation_id='r-9m6ivfpr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-371860100',owner_user_name='tempest-InstanceActionsTestJSON-371860100-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:07Z,user_data=None,user_id='8e3344198c364c67aa73008f33323a4d',uuid=d583bf52-8135-4fca-a3f4-cf6efd88f497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.652 253665 DEBUG nova.network.os_vif_util [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converting VIF {"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.653 253665 DEBUG nova.network.os_vif_util [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.653 253665 DEBUG os_vif [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.654 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.654 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.655 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.657 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.657 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68713fec-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.658 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68713fec-01, col_values=(('external_ids', {'iface-id': '68713fec-01b1-463b-861c-b96beeb4381a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:be:dc', 'vm-uuid': 'd583bf52-8135-4fca-a3f4-cf6efd88f497'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.659 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:13 np0005532048 NetworkManager[48920]: <info>  [1763803093.6608] manager: (tap68713fec-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.670 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.670 253665 INFO os_vif [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01')#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.724 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.724 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.724 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] No VIF found with MAC fa:16:3e:89:be:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.725 253665 INFO nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Using config drive#033[00m
Nov 22 04:18:13 np0005532048 nova_compute[253661]: 2025-11-22 09:18:13.746 253665 DEBUG nova.storage.rbd_utils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] rbd image d583bf52-8135-4fca-a3f4-cf6efd88f497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:14 np0005532048 nova_compute[253661]: 2025-11-22 09:18:14.034 253665 INFO nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Creating config drive at /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/disk.config#033[00m
Nov 22 04:18:14 np0005532048 nova_compute[253661]: 2025-11-22 09:18:14.041 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv11zrjko execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:14 np0005532048 nova_compute[253661]: 2025-11-22 09:18:14.191 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv11zrjko" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:14 np0005532048 nova_compute[253661]: 2025-11-22 09:18:14.221 253665 DEBUG nova.storage.rbd_utils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] rbd image d583bf52-8135-4fca-a3f4-cf6efd88f497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:14 np0005532048 nova_compute[253661]: 2025-11-22 09:18:14.226 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/disk.config d583bf52-8135-4fca-a3f4-cf6efd88f497_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:14 np0005532048 nova_compute[253661]: 2025-11-22 09:18:14.717 253665 DEBUG oslo_concurrency.processutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/disk.config d583bf52-8135-4fca-a3f4-cf6efd88f497_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:14 np0005532048 nova_compute[253661]: 2025-11-22 09:18:14.718 253665 INFO nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Deleting local config drive /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/disk.config because it was imported into RBD.#033[00m
Nov 22 04:18:14 np0005532048 kernel: tap68713fec-01: entered promiscuous mode
Nov 22 04:18:14 np0005532048 NetworkManager[48920]: <info>  [1763803094.7749] manager: (tap68713fec-01): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Nov 22 04:18:14 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:14Z|00459|binding|INFO|Claiming lport 68713fec-01b1-463b-861c-b96beeb4381a for this chassis.
Nov 22 04:18:14 np0005532048 nova_compute[253661]: 2025-11-22 09:18:14.776 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:14 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:14Z|00460|binding|INFO|68713fec-01b1-463b-861c-b96beeb4381a: Claiming fa:16:3e:89:be:dc 10.100.0.12
Nov 22 04:18:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.784 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:be:dc 10.100.0.12'], port_security=['fa:16:3e:89:be:dc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd583bf52-8135-4fca-a3f4-cf6efd88f497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '377c148737af4a5fb70d3e00de87fcd3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24b1af78-e337-4ff8-adc9-262229584365', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f2f4f0a9-7cb3-4409-b976-e7e8b221c96a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=68713fec-01b1-463b-861c-b96beeb4381a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:18:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.786 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 68713fec-01b1-463b-861c-b96beeb4381a in datapath 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e bound to our chassis#033[00m
Nov 22 04:18:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.788 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e#033[00m
Nov 22 04:18:14 np0005532048 nova_compute[253661]: 2025-11-22 09:18:14.806 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.806 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea416d2d-8285-462d-82b7-6c842c33da63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.807 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8f7cdf45-21 in ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:18:14 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:14Z|00461|binding|INFO|Setting lport 68713fec-01b1-463b-861c-b96beeb4381a ovn-installed in OVS
Nov 22 04:18:14 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:14Z|00462|binding|INFO|Setting lport 68713fec-01b1-463b-861c-b96beeb4381a up in Southbound
Nov 22 04:18:14 np0005532048 nova_compute[253661]: 2025-11-22 09:18:14.811 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:14 np0005532048 systemd-udevd[310612]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:18:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.811 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8f7cdf45-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:18:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.812 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06d8ffd1-eeab-4778-a24c-e988f377ccf7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.814 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ce1750c4-6ded-4829-ae3d-356a463594ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:14 np0005532048 systemd-machined[215941]: New machine qemu-56-instance-00000033.
Nov 22 04:18:14 np0005532048 NetworkManager[48920]: <info>  [1763803094.8354] device (tap68713fec-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:18:14 np0005532048 NetworkManager[48920]: <info>  [1763803094.8367] device (tap68713fec-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:18:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.840 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[3131394d-09de-444f-94ae-1b12881603ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:14 np0005532048 systemd[1]: Started Virtual Machine qemu-56-instance-00000033.
Nov 22 04:18:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.875 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3190129d-b7e3-49e5-b702-8eb198cc37cb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.908 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[44cab5ad-9501-431a-a174-af981bf57600]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:14 np0005532048 NetworkManager[48920]: <info>  [1763803094.9167] manager: (tap8f7cdf45-20): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Nov 22 04:18:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.918 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[53f3b0bf-7cf3-4ed8-a9ee-d75b88743c4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.958 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d972aa25-2a05-44cb-b6d2-ecf560504aaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:14 np0005532048 nova_compute[253661]: 2025-11-22 09:18:14.962 253665 DEBUG nova.network.neutron [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Updated VIF entry in instance network info cache for port 68713fec-01b1-463b-861c-b96beeb4381a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:18:14 np0005532048 nova_compute[253661]: 2025-11-22 09:18:14.963 253665 DEBUG nova.network.neutron [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Updating instance_info_cache with network_info: [{"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:18:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:14.967 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a2f945-aa84-4a8b-bb63-549882beeb3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:14 np0005532048 nova_compute[253661]: 2025-11-22 09:18:14.982 253665 DEBUG oslo_concurrency.lockutils [req-85011d61-57c1-49bb-933d-ed8493734f47 req-72035322-ac87-41f9-b334-7e502baef396 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:18:14 np0005532048 NetworkManager[48920]: <info>  [1763803094.9964] device (tap8f7cdf45-20): carrier: link connected
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.001 253665 DEBUG nova.network.neutron [req-b08b1129-15cf-4edd-8333-ff71b55100a3 req-8adb6736-9c21-4c19-91df-dd8c9acff964 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updated VIF entry in instance network info cache for port a288a5e5-7b57-4be8-9617-3271ea1e210f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.002 253665 DEBUG nova.network.neutron [req-b08b1129-15cf-4edd-8333-ff71b55100a3 req-8adb6736-9c21-4c19-91df-dd8c9acff964 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.008 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9c381425-c96b-433d-9116-8c8dc405d7bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.022 253665 DEBUG nova.compute.manager [req-3920146b-4c65-4996-93a0-541e4a7d65e5 req-ff4f2cab-2622-44c8-bf42-0dbd401d1bbb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.023 253665 DEBUG oslo_concurrency.lockutils [req-3920146b-4c65-4996-93a0-541e4a7d65e5 req-ff4f2cab-2622-44c8-bf42-0dbd401d1bbb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.023 253665 DEBUG oslo_concurrency.lockutils [req-3920146b-4c65-4996-93a0-541e4a7d65e5 req-ff4f2cab-2622-44c8-bf42-0dbd401d1bbb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.023 253665 DEBUG oslo_concurrency.lockutils [req-3920146b-4c65-4996-93a0-541e4a7d65e5 req-ff4f2cab-2622-44c8-bf42-0dbd401d1bbb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.023 253665 DEBUG nova.compute.manager [req-3920146b-4c65-4996-93a0-541e4a7d65e5 req-ff4f2cab-2622-44c8-bf42-0dbd401d1bbb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Processing event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.025 253665 DEBUG oslo_concurrency.lockutils [req-b08b1129-15cf-4edd-8333-ff71b55100a3 req-8adb6736-9c21-4c19-91df-dd8c9acff964 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.034 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2855da4c-a50a-4a94-98ff-817bb8380abe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f7cdf45-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:09:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594573, 'reachable_time': 41566, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310645, 'error': None, 'target': 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.054 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e168a46d-7488-4245-aa73-6c5892a298e9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea5:99e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 594573, 'tstamp': 594573}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310646, 'error': None, 'target': 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.066 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.082 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[50090949-06e5-4add-9631-a4a97aca4ee4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f7cdf45-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:09:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594573, 'reachable_time': 41566, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310647, 'error': None, 'target': 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.126 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[792af2a0-f84c-4ab6-ad86-0122cc02e7f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.197 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[277e5ecd-a722-4fe6-9eda-dab60273392e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.199 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f7cdf45-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.199 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.200 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f7cdf45-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:15 np0005532048 NetworkManager[48920]: <info>  [1763803095.2032] manager: (tap8f7cdf45-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Nov 22 04:18:15 np0005532048 kernel: tap8f7cdf45-20: entered promiscuous mode
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.205 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.207 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f7cdf45-20, col_values=(('external_ids', {'iface-id': '9041b29d-074d-4855-9e30-a4e5a849535d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:15Z|00463|binding|INFO|Releasing lport 9041b29d-074d-4855-9e30-a4e5a849535d from this chassis (sb_readonly=0)
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.211 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.217 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[648244f0-e7f4-4375-a396-29e2956157a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.218 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e.pid.haproxy
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:18:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:15.220 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'env', 'PROCESS_TAG=haproxy-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.227 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 213 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.2 MiB/s wr, 189 op/s
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.456 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803095.456058, d583bf52-8135-4fca-a3f4-cf6efd88f497 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.457 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] VM Started (Lifecycle Event)#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.460 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.465 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.469 253665 INFO nova.virt.libvirt.driver [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Instance spawned successfully.#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.470 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.475 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.479 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.491 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.492 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.493 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.493 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.494 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.494 253665 DEBUG nova.virt.libvirt.driver [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.499 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.499 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803095.456177, d583bf52-8135-4fca-a3f4-cf6efd88f497 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.499 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.521 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.526 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803095.4637592, d583bf52-8135-4fca-a3f4-cf6efd88f497 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.526 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.544 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.550 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.559 253665 INFO nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Took 7.79 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.560 253665 DEBUG nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.572 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.618 253665 INFO nova.compute.manager [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Took 8.75 seconds to build instance.#033[00m
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.633 253665 DEBUG oslo_concurrency.lockutils [None req-c6aad457-87d7-45d4-856d-67b416a9c428 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:15 np0005532048 podman[310720]: 2025-11-22 09:18:15.639029115 +0000 UTC m=+0.064065748 container create 47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:18:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:15Z|00464|binding|INFO|Releasing lport 9041b29d-074d-4855-9e30-a4e5a849535d from this chassis (sb_readonly=0)
Nov 22 04:18:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:15Z|00465|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 04:18:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:15Z|00466|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 04:18:15 np0005532048 systemd[1]: Started libpod-conmon-47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694.scope.
Nov 22 04:18:15 np0005532048 podman[310720]: 2025-11-22 09:18:15.60795924 +0000 UTC m=+0.032995893 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:18:15 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:18:15 np0005532048 nova_compute[253661]: 2025-11-22 09:18:15.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:15 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c75362a33d3428f67980ba20a070be98c841833d0506f9fa1c0a3666ede05df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:15 np0005532048 podman[310720]: 2025-11-22 09:18:15.734655718 +0000 UTC m=+0.159692351 container init 47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:18:15 np0005532048 podman[310720]: 2025-11-22 09:18:15.741176137 +0000 UTC m=+0.166212770 container start 47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 22 04:18:15 np0005532048 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[310735]: [NOTICE]   (310739) : New worker (310741) forked
Nov 22 04:18:15 np0005532048 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[310735]: [NOTICE]   (310739) : Loading success.
Nov 22 04:18:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:17 np0005532048 nova_compute[253661]: 2025-11-22 09:18:17.136 253665 DEBUG nova.compute.manager [req-f55d3a91-effb-4aee-9104-cc5b93a15a7f req-d7365eba-5c16-40d2-87e8-84d037053a23 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:17 np0005532048 nova_compute[253661]: 2025-11-22 09:18:17.139 253665 DEBUG oslo_concurrency.lockutils [req-f55d3a91-effb-4aee-9104-cc5b93a15a7f req-d7365eba-5c16-40d2-87e8-84d037053a23 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:17 np0005532048 nova_compute[253661]: 2025-11-22 09:18:17.139 253665 DEBUG oslo_concurrency.lockutils [req-f55d3a91-effb-4aee-9104-cc5b93a15a7f req-d7365eba-5c16-40d2-87e8-84d037053a23 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:17 np0005532048 nova_compute[253661]: 2025-11-22 09:18:17.139 253665 DEBUG oslo_concurrency.lockutils [req-f55d3a91-effb-4aee-9104-cc5b93a15a7f req-d7365eba-5c16-40d2-87e8-84d037053a23 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:17 np0005532048 nova_compute[253661]: 2025-11-22 09:18:17.140 253665 DEBUG nova.compute.manager [req-f55d3a91-effb-4aee-9104-cc5b93a15a7f req-d7365eba-5c16-40d2-87e8-84d037053a23 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] No waiting events found dispatching network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:18:17 np0005532048 nova_compute[253661]: 2025-11-22 09:18:17.140 253665 WARNING nova.compute.manager [req-f55d3a91-effb-4aee-9104-cc5b93a15a7f req-d7365eba-5c16-40d2-87e8-84d037053a23 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received unexpected event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a for instance with vm_state active and task_state None.#033[00m
Nov 22 04:18:17 np0005532048 nova_compute[253661]: 2025-11-22 09:18:17.271 253665 DEBUG oslo_concurrency.lockutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:17 np0005532048 nova_compute[253661]: 2025-11-22 09:18:17.272 253665 DEBUG oslo_concurrency.lockutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:17 np0005532048 nova_compute[253661]: 2025-11-22 09:18:17.272 253665 INFO nova.compute.manager [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Rebooting instance#033[00m
Nov 22 04:18:17 np0005532048 nova_compute[253661]: 2025-11-22 09:18:17.289 253665 DEBUG oslo_concurrency.lockutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:18:17 np0005532048 nova_compute[253661]: 2025-11-22 09:18:17.289 253665 DEBUG oslo_concurrency.lockutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquired lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:18:17 np0005532048 nova_compute[253661]: 2025-11-22 09:18:17.290 253665 DEBUG nova.network.neutron [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:18:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 213 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 197 op/s
Nov 22 04:18:18 np0005532048 nova_compute[253661]: 2025-11-22 09:18:18.661 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.206 253665 DEBUG nova.network.neutron [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Updating instance_info_cache with network_info: [{"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.232 253665 DEBUG oslo_concurrency.lockutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Releasing lock "refresh_cache-d583bf52-8135-4fca-a3f4-cf6efd88f497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.233 253665 DEBUG nova.compute.manager [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:19 np0005532048 kernel: tap68713fec-01 (unregistering): left promiscuous mode
Nov 22 04:18:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 214 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 220 op/s
Nov 22 04:18:19 np0005532048 NetworkManager[48920]: <info>  [1763803099.3691] device (tap68713fec-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.381 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:19Z|00467|binding|INFO|Releasing lport 68713fec-01b1-463b-861c-b96beeb4381a from this chassis (sb_readonly=0)
Nov 22 04:18:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:19Z|00468|binding|INFO|Setting lport 68713fec-01b1-463b-861c-b96beeb4381a down in Southbound
Nov 22 04:18:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:19Z|00469|binding|INFO|Removing iface tap68713fec-01 ovn-installed in OVS
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.386 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.390 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:be:dc 10.100.0.12'], port_security=['fa:16:3e:89:be:dc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd583bf52-8135-4fca-a3f4-cf6efd88f497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '377c148737af4a5fb70d3e00de87fcd3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24b1af78-e337-4ff8-adc9-262229584365', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f2f4f0a9-7cb3-4409-b976-e7e8b221c96a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=68713fec-01b1-463b-861c-b96beeb4381a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:18:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.392 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 68713fec-01b1-463b-861c-b96beeb4381a in datapath 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e unbound from our chassis#033[00m
Nov 22 04:18:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.394 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:18:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.396 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c659b11-5f0c-4b39-8440-762b372c8ccf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.397 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e namespace which is not needed anymore#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:19 np0005532048 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000033.scope: Deactivated successfully.
Nov 22 04:18:19 np0005532048 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000033.scope: Consumed 4.535s CPU time.
Nov 22 04:18:19 np0005532048 systemd-machined[215941]: Machine qemu-56-instance-00000033 terminated.
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.536 253665 DEBUG nova.compute.manager [req-a60d61b8-68c8-4996-b96a-3c26081f89f0 req-03eced03-46ed-41d4-a06d-00803524db94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received event network-vif-unplugged-68713fec-01b1-463b-861c-b96beeb4381a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.538 253665 DEBUG oslo_concurrency.lockutils [req-a60d61b8-68c8-4996-b96a-3c26081f89f0 req-03eced03-46ed-41d4-a06d-00803524db94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.538 253665 DEBUG oslo_concurrency.lockutils [req-a60d61b8-68c8-4996-b96a-3c26081f89f0 req-03eced03-46ed-41d4-a06d-00803524db94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.539 253665 DEBUG oslo_concurrency.lockutils [req-a60d61b8-68c8-4996-b96a-3c26081f89f0 req-03eced03-46ed-41d4-a06d-00803524db94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.539 253665 DEBUG nova.compute.manager [req-a60d61b8-68c8-4996-b96a-3c26081f89f0 req-03eced03-46ed-41d4-a06d-00803524db94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] No waiting events found dispatching network-vif-unplugged-68713fec-01b1-463b-861c-b96beeb4381a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.540 253665 WARNING nova.compute.manager [req-a60d61b8-68c8-4996-b96a-3c26081f89f0 req-03eced03-46ed-41d4-a06d-00803524db94 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received unexpected event network-vif-unplugged-68713fec-01b1-463b-861c-b96beeb4381a for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 22 04:18:19 np0005532048 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[310735]: [NOTICE]   (310739) : haproxy version is 2.8.14-c23fe91
Nov 22 04:18:19 np0005532048 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[310735]: [NOTICE]   (310739) : path to executable is /usr/sbin/haproxy
Nov 22 04:18:19 np0005532048 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[310735]: [WARNING]  (310739) : Exiting Master process...
Nov 22 04:18:19 np0005532048 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[310735]: [ALERT]    (310739) : Current worker (310741) exited with code 143 (Terminated)
Nov 22 04:18:19 np0005532048 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[310735]: [WARNING]  (310739) : All workers exited. Exiting... (0)
Nov 22 04:18:19 np0005532048 systemd[1]: libpod-47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694.scope: Deactivated successfully.
Nov 22 04:18:19 np0005532048 podman[310774]: 2025-11-22 09:18:19.584398431 +0000 UTC m=+0.072283248 container died 47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.591 253665 INFO nova.virt.libvirt.driver [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Instance destroyed successfully.#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.592 253665 DEBUG nova.objects.instance [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lazy-loading 'resources' on Instance uuid d583bf52-8135-4fca-a3f4-cf6efd88f497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.617 253665 DEBUG nova.virt.libvirt.vif [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1695394731',display_name='tempest-InstanceActionsTestJSON-server-1695394731',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1695394731',id=51,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='377c148737af4a5fb70d3e00de87fcd3',ramdisk_id='',reservation_id='r-9m6ivfpr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-371860100',owner_user_name='tempest-InstanceActionsTestJSON-371860100-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:19Z,user_data=None,user_id='8e3344198c364c67aa73008f33323a4d',uuid=d583bf52-8135-4fca-a3f4-cf6efd88f497,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.618 253665 DEBUG nova.network.os_vif_util [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converting VIF {"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.621 253665 DEBUG nova.network.os_vif_util [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.622 253665 DEBUG os_vif [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.627 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.627 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68713fec-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.630 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:19 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694-userdata-shm.mount: Deactivated successfully.
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.633 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:18:19 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8c75362a33d3428f67980ba20a070be98c841833d0506f9fa1c0a3666ede05df-merged.mount: Deactivated successfully.
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.639 253665 INFO os_vif [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01')#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.653 253665 DEBUG nova.virt.libvirt.driver [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Start _get_guest_xml network_info=[{"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:18:19 np0005532048 podman[310774]: 2025-11-22 09:18:19.660143461 +0000 UTC m=+0.148028248 container cleanup 47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.659 253665 WARNING nova.virt.libvirt.driver [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:18:19 np0005532048 systemd[1]: libpod-conmon-47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694.scope: Deactivated successfully.
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.669 253665 DEBUG nova.virt.libvirt.host [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.671 253665 DEBUG nova.virt.libvirt.host [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.676 253665 DEBUG nova.virt.libvirt.host [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.676 253665 DEBUG nova.virt.libvirt.host [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.677 253665 DEBUG nova.virt.libvirt.driver [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.677 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.678 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.679 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.680 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.680 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.681 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.682 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.682 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.682 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.683 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.683 253665 DEBUG nova.virt.hardware [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.683 253665 DEBUG nova.objects.instance [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d583bf52-8135-4fca-a3f4-cf6efd88f497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.707 253665 DEBUG oslo_concurrency.processutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:19 np0005532048 podman[310814]: 2025-11-22 09:18:19.74938845 +0000 UTC m=+0.054974486 container remove 47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:18:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.759 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6a17d47f-bb00-47c2-a825-4ca8c70cd3a5]: (4, ('Sat Nov 22 09:18:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e (47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694)\n47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694\nSat Nov 22 09:18:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e (47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694)\n47dff57d8087723ba423301c28ec3aa000185e555ac80e23962d73e32d8f5694\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.761 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[18e8ee15-0ba0-45fc-9adb-08092ad1b121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.762 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f7cdf45-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:19 np0005532048 kernel: tap8f7cdf45-20: left promiscuous mode
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.765 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:19 np0005532048 nova_compute[253661]: 2025-11-22 09:18:19.779 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.783 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[476da982-6363-40eb-ab35-f75911cdcdf8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.811 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe133437-3ae9-4fe8-aa8e-57ed32f40b3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.834 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7b202e2a-64f9-43f9-aad0-282c8320cc9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.857 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[337f7226-82ab-4db6-ab10-d59e793708d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594564, 'reachable_time': 24326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310829, 'error': None, 'target': 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:19 np0005532048 systemd[1]: run-netns-ovnmeta\x2d8f7cdf45\x2d2d9c\x2d4e24\x2d9818\x2d3c9ecbf1b21e.mount: Deactivated successfully.
Nov 22 04:18:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.864 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:18:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:19.865 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[5a2b0055-11ff-4759-bbd1-485c93191998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.106 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:18:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085474209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.239 253665 DEBUG oslo_concurrency.processutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.281 253665 DEBUG oslo_concurrency.processutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.392 253665 DEBUG nova.virt.libvirt.driver [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 22 04:18:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:18:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2653312510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.753 253665 DEBUG oslo_concurrency.processutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.755 253665 DEBUG nova.virt.libvirt.vif [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1695394731',display_name='tempest-InstanceActionsTestJSON-server-1695394731',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1695394731',id=51,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='377c148737af4a5fb70d3e00de87fcd3',ramdisk_id='',reservation_id='r-9m6ivfpr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-371860100',owner_user_name='tempest-InstanceActionsTestJSON-371860100-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:19Z,user_data=None,user_id='8e3344198c364c67aa73008f33323a4d',uuid=d583bf52-8135-4fca-a3f4-cf6efd88f497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.755 253665 DEBUG nova.network.os_vif_util [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converting VIF {"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.756 253665 DEBUG nova.network.os_vif_util [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.759 253665 DEBUG nova.objects.instance [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lazy-loading 'pci_devices' on Instance uuid d583bf52-8135-4fca-a3f4-cf6efd88f497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.776 253665 DEBUG nova.virt.libvirt.driver [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  <uuid>d583bf52-8135-4fca-a3f4-cf6efd88f497</uuid>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  <name>instance-00000033</name>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <nova:name>tempest-InstanceActionsTestJSON-server-1695394731</nova:name>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:18:19</nova:creationTime>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:18:20 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:        <nova:user uuid="8e3344198c364c67aa73008f33323a4d">tempest-InstanceActionsTestJSON-371860100-project-member</nova:user>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:        <nova:project uuid="377c148737af4a5fb70d3e00de87fcd3">tempest-InstanceActionsTestJSON-371860100</nova:project>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:        <nova:port uuid="68713fec-01b1-463b-861c-b96beeb4381a">
Nov 22 04:18:20 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <entry name="serial">d583bf52-8135-4fca-a3f4-cf6efd88f497</entry>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <entry name="uuid">d583bf52-8135-4fca-a3f4-cf6efd88f497</entry>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d583bf52-8135-4fca-a3f4-cf6efd88f497_disk">
Nov 22 04:18:20 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:18:20 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d583bf52-8135-4fca-a3f4-cf6efd88f497_disk.config">
Nov 22 04:18:20 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:18:20 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:89:be:dc"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <target dev="tap68713fec-01"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497/console.log" append="off"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <input type="keyboard" bus="usb"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:18:20 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:18:20 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:18:20 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:18:20 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.780 253665 DEBUG nova.virt.libvirt.driver [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.780 253665 DEBUG nova.virt.libvirt.driver [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.781 253665 DEBUG nova.virt.libvirt.vif [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1695394731',display_name='tempest-InstanceActionsTestJSON-server-1695394731',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1695394731',id=51,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='377c148737af4a5fb70d3e00de87fcd3',ramdisk_id='',reservation_id='r-9m6ivfpr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-371860100',owner_user_name='tempest-InstanceActionsTestJSON-371860100-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:19Z,user_data=None,user_id='8e3344198c364c67aa73008f33323a4d',uuid=d583bf52-8135-4fca-a3f4-cf6efd88f497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.781 253665 DEBUG nova.network.os_vif_util [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converting VIF {"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.786 253665 DEBUG nova.network.os_vif_util [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.787 253665 DEBUG os_vif [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.787 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.788 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.789 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.794 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.794 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68713fec-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.794 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68713fec-01, col_values=(('external_ids', {'iface-id': '68713fec-01b1-463b-861c-b96beeb4381a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:be:dc', 'vm-uuid': 'd583bf52-8135-4fca-a3f4-cf6efd88f497'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.797 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:20 np0005532048 NetworkManager[48920]: <info>  [1763803100.7982] manager: (tap68713fec-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.800 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.804 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.805 253665 INFO os_vif [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01')#033[00m
Nov 22 04:18:20 np0005532048 kernel: tap68713fec-01: entered promiscuous mode
Nov 22 04:18:20 np0005532048 systemd-udevd[310754]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:18:20 np0005532048 NetworkManager[48920]: <info>  [1763803100.8980] manager: (tap68713fec-01): new Tun device (/org/freedesktop/NetworkManager/Devices/212)
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.900 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:20Z|00470|binding|INFO|Claiming lport 68713fec-01b1-463b-861c-b96beeb4381a for this chassis.
Nov 22 04:18:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:20Z|00471|binding|INFO|68713fec-01b1-463b-861c-b96beeb4381a: Claiming fa:16:3e:89:be:dc 10.100.0.12
Nov 22 04:18:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.908 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:be:dc 10.100.0.12'], port_security=['fa:16:3e:89:be:dc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd583bf52-8135-4fca-a3f4-cf6efd88f497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '377c148737af4a5fb70d3e00de87fcd3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '24b1af78-e337-4ff8-adc9-262229584365', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f2f4f0a9-7cb3-4409-b976-e7e8b221c96a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=68713fec-01b1-463b-861c-b96beeb4381a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:18:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.909 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 68713fec-01b1-463b-861c-b96beeb4381a in datapath 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e bound to our chassis#033[00m
Nov 22 04:18:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.911 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e#033[00m
Nov 22 04:18:20 np0005532048 NetworkManager[48920]: <info>  [1763803100.9153] device (tap68713fec-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:18:20 np0005532048 NetworkManager[48920]: <info>  [1763803100.9167] device (tap68713fec-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:18:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:20Z|00472|binding|INFO|Setting lport 68713fec-01b1-463b-861c-b96beeb4381a ovn-installed in OVS
Nov 22 04:18:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:20Z|00473|binding|INFO|Setting lport 68713fec-01b1-463b-861c-b96beeb4381a up in Southbound
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.921 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:20 np0005532048 nova_compute[253661]: 2025-11-22 09:18:20.925 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.928 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[502f30a0-e985-4a43-87a0-d3bed5472f83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.930 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8f7cdf45-21 in ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:18:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.932 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8f7cdf45-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:18:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.932 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee82c25-a9cd-4abd-a6a7-3db5ca31a313]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.935 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b83794-9297-4211-8f18-01a0af9d2afd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:20 np0005532048 systemd-machined[215941]: New machine qemu-57-instance-00000033.
Nov 22 04:18:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.953 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[fc2c0114-da78-4be6-9e24-44fba6937cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:20 np0005532048 systemd[1]: Started Virtual Machine qemu-57-instance-00000033.
Nov 22 04:18:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:20.982 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[093eeb6f-3585-47cf-9ce1-76fa4f9221d7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.033 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[01d00f54-3204-43ee-81e4-d4d3f630866e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.044 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e2bb1b-4c77-4c78-b7d3-7f5a5091b152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:21 np0005532048 NetworkManager[48920]: <info>  [1763803101.0461] manager: (tap8f7cdf45-20): new Veth device (/org/freedesktop/NetworkManager/Devices/213)
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.089 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[768239d8-affa-4fef-b010-5b79f9830172]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.093 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a50c0e-e607-48eb-a026-b6a8f5499eb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:21 np0005532048 NetworkManager[48920]: <info>  [1763803101.1234] device (tap8f7cdf45-20): carrier: link connected
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.130 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[69733b7b-39f9-484b-bdd0-7f5e7e7a46b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.155 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[82e7452f-6a8f-4231-bd78-ed0b4c7b2c95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f7cdf45-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:09:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595186, 'reachable_time': 30358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310938, 'error': None, 'target': 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.177 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a7d4c9-77d6-444c-8c5f-5912dff93464]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea5:99e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 595186, 'tstamp': 595186}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310939, 'error': None, 'target': 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.198 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1d0d4e-3199-4e0a-9488-efa23a4eddb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f7cdf45-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:09:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595186, 'reachable_time': 30358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310940, 'error': None, 'target': 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.239 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c5505ab5-2430-4581-98b4-e88b7d0e3a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:21 np0005532048 nova_compute[253661]: 2025-11-22 09:18:21.240 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.317 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7539f4-2010-4ad0-9b54-9bc1f6fe57c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.318 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f7cdf45-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.319 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.319 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f7cdf45-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:21 np0005532048 NetworkManager[48920]: <info>  [1763803101.3222] manager: (tap8f7cdf45-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Nov 22 04:18:21 np0005532048 kernel: tap8f7cdf45-20: entered promiscuous mode
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.324 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f7cdf45-20, col_values=(('external_ids', {'iface-id': '9041b29d-074d-4855-9e30-a4e5a849535d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:21 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:21Z|00474|binding|INFO|Releasing lport 9041b29d-074d-4855-9e30-a4e5a849535d from this chassis (sb_readonly=0)
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.328 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.329 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb9836f-55ef-4d7e-b954-c6d94ed1eb4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:21 np0005532048 nova_compute[253661]: 2025-11-22 09:18:21.329 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.330 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e.pid.haproxy
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:18:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:21.331 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'env', 'PROCESS_TAG=haproxy-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:18:21 np0005532048 nova_compute[253661]: 2025-11-22 09:18:21.349 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 214 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.3 MiB/s wr, 199 op/s
Nov 22 04:18:21 np0005532048 nova_compute[253661]: 2025-11-22 09:18:21.428 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for d583bf52-8135-4fca-a3f4-cf6efd88f497 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:18:21 np0005532048 nova_compute[253661]: 2025-11-22 09:18:21.429 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803101.4282403, d583bf52-8135-4fca-a3f4-cf6efd88f497 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:18:21 np0005532048 nova_compute[253661]: 2025-11-22 09:18:21.429 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:18:21 np0005532048 nova_compute[253661]: 2025-11-22 09:18:21.432 253665 DEBUG nova.compute.manager [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:18:21 np0005532048 nova_compute[253661]: 2025-11-22 09:18:21.442 253665 INFO nova.virt.libvirt.driver [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Instance rebooted successfully.#033[00m
Nov 22 04:18:21 np0005532048 nova_compute[253661]: 2025-11-22 09:18:21.443 253665 DEBUG nova.compute.manager [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:21 np0005532048 nova_compute[253661]: 2025-11-22 09:18:21.450 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:21 np0005532048 nova_compute[253661]: 2025-11-22 09:18:21.457 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:18:21 np0005532048 nova_compute[253661]: 2025-11-22 09:18:21.486 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 22 04:18:21 np0005532048 nova_compute[253661]: 2025-11-22 09:18:21.487 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803101.4316216, d583bf52-8135-4fca-a3f4-cf6efd88f497 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:18:21 np0005532048 nova_compute[253661]: 2025-11-22 09:18:21.487 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] VM Started (Lifecycle Event)#033[00m
Nov 22 04:18:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:21 np0005532048 nova_compute[253661]: 2025-11-22 09:18:21.507 253665 DEBUG oslo_concurrency.lockutils [None req-2380840f-8797-4318-9729-2d4028aec542 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:21 np0005532048 nova_compute[253661]: 2025-11-22 09:18:21.509 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:21 np0005532048 nova_compute[253661]: 2025-11-22 09:18:21.514 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:18:21 np0005532048 podman[311014]: 2025-11-22 09:18:21.753161138 +0000 UTC m=+0.057187750 container create 4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:18:21 np0005532048 systemd[1]: Started libpod-conmon-4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4.scope.
Nov 22 04:18:21 np0005532048 podman[311014]: 2025-11-22 09:18:21.725709981 +0000 UTC m=+0.029736613 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:18:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:18:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f73e94104a9b40a1c2c8be8af5cf7f587f3e059d0fe257d63eefedbd0fc6b01/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:21 np0005532048 podman[311014]: 2025-11-22 09:18:21.842888468 +0000 UTC m=+0.146915100 container init 4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 04:18:21 np0005532048 podman[311014]: 2025-11-22 09:18:21.84870727 +0000 UTC m=+0.152733892 container start 4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:18:21 np0005532048 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[311029]: [NOTICE]   (311033) : New worker (311035) forked
Nov 22 04:18:21 np0005532048 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[311029]: [NOTICE]   (311033) : Loading success.
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.455 253665 DEBUG nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.455 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.456 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.456 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.457 253665 DEBUG nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] No waiting events found dispatching network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.457 253665 WARNING nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received unexpected event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a for instance with vm_state active and task_state None.#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.457 253665 DEBUG nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.458 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.458 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.458 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.458 253665 DEBUG nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] No waiting events found dispatching network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.459 253665 WARNING nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received unexpected event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a for instance with vm_state active and task_state None.#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.459 253665 DEBUG nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.460 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.460 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.461 253665 DEBUG oslo_concurrency.lockutils [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.461 253665 DEBUG nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] No waiting events found dispatching network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.461 253665 WARNING nova.compute.manager [req-9f0eef61-8465-4dac-8daa-6fe819e87916 req-61777862-ca43-4468-86d1-d60395e510aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received unexpected event network-vif-plugged-68713fec-01b1-463b-861c-b96beeb4381a for instance with vm_state active and task_state None.#033[00m
Nov 22 04:18:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:18:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:18:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:18:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:18:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:18:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:18:22 np0005532048 kernel: tap085e3bcc-2e (unregistering): left promiscuous mode
Nov 22 04:18:22 np0005532048 NetworkManager[48920]: <info>  [1763803102.8346] device (tap085e3bcc-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.846 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:22Z|00475|binding|INFO|Releasing lport 085e3bcc-2e77-4c2e-8298-872aac04e65e from this chassis (sb_readonly=0)
Nov 22 04:18:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:22Z|00476|binding|INFO|Setting lport 085e3bcc-2e77-4c2e-8298-872aac04e65e down in Southbound
Nov 22 04:18:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:22Z|00477|binding|INFO|Removing iface tap085e3bcc-2e ovn-installed in OVS
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.848 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:22.857 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:a1:a9 10.100.0.10'], port_security=['fa:16:3e:52:a1:a9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ee68ed8e-d5b3-4069-ac90-f7e94430ed0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=085e3bcc-2e77-4c2e-8298-872aac04e65e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:18:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:22.858 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 085e3bcc-2e77-4c2e-8298-872aac04e65e in datapath d93e3720-b00d-41f5-8283-164e9f857d24 unbound from our chassis#033[00m
Nov 22 04:18:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:22.860 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d93e3720-b00d-41f5-8283-164e9f857d24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:18:22 np0005532048 nova_compute[253661]: 2025-11-22 09:18:22.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:22.864 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c838948-4f8e-42ab-b757-adbfe6c2ad2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:22.874 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace which is not needed anymore#033[00m
Nov 22 04:18:22 np0005532048 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Deactivated successfully.
Nov 22 04:18:22 np0005532048 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Consumed 14.835s CPU time.
Nov 22 04:18:22 np0005532048 systemd-machined[215941]: Machine qemu-54-instance-00000031 terminated.
Nov 22 04:18:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:23Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:38:8e 10.100.0.4
Nov 22 04:18:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:23Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:38:8e 10.100.0.4
Nov 22 04:18:23 np0005532048 nova_compute[253661]: 2025-11-22 09:18:23.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:18:23 np0005532048 nova_compute[253661]: 2025-11-22 09:18:23.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:18:23 np0005532048 nova_compute[253661]: 2025-11-22 09:18:23.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:18:23 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[309746]: [NOTICE]   (309750) : haproxy version is 2.8.14-c23fe91
Nov 22 04:18:23 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[309746]: [NOTICE]   (309750) : path to executable is /usr/sbin/haproxy
Nov 22 04:18:23 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[309746]: [WARNING]  (309750) : Exiting Master process...
Nov 22 04:18:23 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[309746]: [ALERT]    (309750) : Current worker (309752) exited with code 143 (Terminated)
Nov 22 04:18:23 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[309746]: [WARNING]  (309750) : All workers exited. Exiting... (0)
Nov 22 04:18:23 np0005532048 nova_compute[253661]: 2025-11-22 09:18:23.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:18:23 np0005532048 nova_compute[253661]: 2025-11-22 09:18:23.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:18:23 np0005532048 nova_compute[253661]: 2025-11-22 09:18:23.252 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:18:23 np0005532048 nova_compute[253661]: 2025-11-22 09:18:23.252 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ee68ed8e-d5b3-4069-ac90-f7e94430ed0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:23 np0005532048 systemd[1]: libpod-2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424.scope: Deactivated successfully.
Nov 22 04:18:23 np0005532048 podman[311067]: 2025-11-22 09:18:23.2590915 +0000 UTC m=+0.278789956 container died 2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:18:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 214 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.3 MiB/s wr, 222 op/s
Nov 22 04:18:23 np0005532048 nova_compute[253661]: 2025-11-22 09:18:23.412 253665 INFO nova.virt.libvirt.driver [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance shutdown successfully after 24 seconds.#033[00m
Nov 22 04:18:23 np0005532048 nova_compute[253661]: 2025-11-22 09:18:23.420 253665 INFO nova.virt.libvirt.driver [-] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance destroyed successfully.#033[00m
Nov 22 04:18:23 np0005532048 nova_compute[253661]: 2025-11-22 09:18:23.421 253665 DEBUG nova.objects.instance [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'numa_topology' on Instance uuid ee68ed8e-d5b3-4069-ac90-f7e94430ed0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:23 np0005532048 nova_compute[253661]: 2025-11-22 09:18:23.773 253665 INFO nova.virt.libvirt.driver [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Beginning cold snapshot process#033[00m
Nov 22 04:18:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424-userdata-shm.mount: Deactivated successfully.
Nov 22 04:18:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7e4b6fe6d0e31e82f90921939e8d033bb084c60d0f39f2ba0237780fbbf52430-merged.mount: Deactivated successfully.
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.163 253665 DEBUG nova.virt.libvirt.imagebackend [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.332 253665 DEBUG nova.storage.rbd_utils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] creating snapshot(a4d1fb214b444dfa90abb7feb6651e29) on rbd image(ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:18:24 np0005532048 podman[311067]: 2025-11-22 09:18:24.511283336 +0000 UTC m=+1.530981782 container cleanup 2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:18:24 np0005532048 systemd[1]: libpod-conmon-2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424.scope: Deactivated successfully.
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.524 253665 DEBUG nova.compute.manager [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received event network-vif-unplugged-085e3bcc-2e77-4c2e-8298-872aac04e65e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.525 253665 DEBUG oslo_concurrency.lockutils [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.525 253665 DEBUG oslo_concurrency.lockutils [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.526 253665 DEBUG oslo_concurrency.lockutils [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.526 253665 DEBUG nova.compute.manager [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] No waiting events found dispatching network-vif-unplugged-085e3bcc-2e77-4c2e-8298-872aac04e65e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.526 253665 WARNING nova.compute.manager [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received unexpected event network-vif-unplugged-085e3bcc-2e77-4c2e-8298-872aac04e65e for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.527 253665 DEBUG nova.compute.manager [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received event network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.527 253665 DEBUG oslo_concurrency.lockutils [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.527 253665 DEBUG oslo_concurrency.lockutils [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.527 253665 DEBUG oslo_concurrency.lockutils [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.528 253665 DEBUG nova.compute.manager [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] No waiting events found dispatching network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.528 253665 WARNING nova.compute.manager [req-9f63b6a6-afff-42f1-9e72-7ed28042f5db req-58bf2aad-82ae-4b48-9bd1-5bd1243dd579 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received unexpected event network-vif-plugged-085e3bcc-2e77-4c2e-8298-872aac04e65e for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.641 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.813 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.813 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.814 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.814 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.814 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.815 253665 INFO nova.compute.manager [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Terminating instance#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.816 253665 DEBUG nova.compute.manager [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:18:24 np0005532048 podman[311158]: 2025-11-22 09:18:24.940221008 +0000 UTC m=+0.400938532 container remove 2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:18:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:24.946 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2a18505f-cadb-4ffa-a798-619978f64366]: (4, ('Sat Nov 22 09:18:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424)\n2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424\nSat Nov 22 09:18:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424)\n2901b96037cd1a838ce9aa24321ba2104c69ae5a127d89f31eeb2565d4c60424\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:24.949 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[981c5da3-702f-444f-8927-5ad87bd459d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:24.950 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:24 np0005532048 kernel: tapd93e3720-b0: left promiscuous mode
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.953 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:24 np0005532048 kernel: tap68713fec-01 (unregistering): left promiscuous mode
Nov 22 04:18:24 np0005532048 NetworkManager[48920]: <info>  [1763803104.9614] device (tap68713fec-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.974 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:24.979 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a0762110-3c67-416b-b685-7048d260c212]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.987 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:24Z|00478|binding|INFO|Releasing lport 68713fec-01b1-463b-861c-b96beeb4381a from this chassis (sb_readonly=0)
Nov 22 04:18:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:24Z|00479|binding|INFO|Setting lport 68713fec-01b1-463b-861c-b96beeb4381a down in Southbound
Nov 22 04:18:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:24Z|00480|binding|INFO|Removing iface tap68713fec-01 ovn-installed in OVS
Nov 22 04:18:24 np0005532048 nova_compute[253661]: 2025-11-22 09:18:24.990 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:24.995 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:be:dc 10.100.0.12'], port_security=['fa:16:3e:89:be:dc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd583bf52-8135-4fca-a3f4-cf6efd88f497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '377c148737af4a5fb70d3e00de87fcd3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '24b1af78-e337-4ff8-adc9-262229584365', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f2f4f0a9-7cb3-4409-b976-e7e8b221c96a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=68713fec-01b1-463b-861c-b96beeb4381a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.004 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9992b44e-c74f-45bf-bd60-a9385adbacce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.006 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e40154ec-b6f8-42e5-99dc-892297c418f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.009 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:25 np0005532048 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Deactivated successfully.
Nov 22 04:18:25 np0005532048 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Consumed 3.878s CPU time.
Nov 22 04:18:25 np0005532048 systemd-machined[215941]: Machine qemu-57-instance-00000033 terminated.
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.028 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b0f857-15a3-46db-beaf-7a1d2605d3e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592087, 'reachable_time': 42591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311181, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:25 np0005532048 systemd[1]: run-netns-ovnmeta\x2dd93e3720\x2db00d\x2d41f5\x2d8283\x2d164e9f857d24.mount: Deactivated successfully.
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.035 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.036 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[9a73d2d9-f98d-4f8d-94b8-03e2c3fba420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.037 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 68713fec-01b1-463b-861c-b96beeb4381a in datapath 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e unbound from our chassis#033[00m
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.039 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.040 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca2d63d-5110-45bd-89ba-e34c1a0c0c7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.041 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e namespace which is not needed anymore#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.061 253665 INFO nova.virt.libvirt.driver [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Instance destroyed successfully.#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.061 253665 DEBUG nova.objects.instance [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lazy-loading 'resources' on Instance uuid d583bf52-8135-4fca-a3f4-cf6efd88f497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.073 253665 DEBUG nova.virt.libvirt.vif [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1695394731',display_name='tempest-InstanceActionsTestJSON-server-1695394731',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1695394731',id=51,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='377c148737af4a5fb70d3e00de87fcd3',ramdisk_id='',reservation_id='r-9m6ivfpr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-371860100',owner_user_name='tempest-InstanceActionsTestJSON-371860100-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:21Z,user_data=None,user_id='8e3344198c364c67aa73008f33323a4d',uuid=d583bf52-8135-4fca-a3f4-cf6efd88f497,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.074 253665 DEBUG nova.network.os_vif_util [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converting VIF {"id": "68713fec-01b1-463b-861c-b96beeb4381a", "address": "fa:16:3e:89:be:dc", "network": {"id": "8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-173243537-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "377c148737af4a5fb70d3e00de87fcd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68713fec-01", "ovs_interfaceid": "68713fec-01b1-463b-861c-b96beeb4381a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.075 253665 DEBUG nova.network.os_vif_util [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.077 253665 DEBUG os_vif [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.081 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.081 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68713fec-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.083 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.086 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.089 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.092 253665 INFO os_vif [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:be:dc,bridge_name='br-int',has_traffic_filtering=True,id=68713fec-01b1-463b-861c-b96beeb4381a,network=Network(8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68713fec-01')#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.109 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:25 np0005532048 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[311029]: [NOTICE]   (311033) : haproxy version is 2.8.14-c23fe91
Nov 22 04:18:25 np0005532048 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[311029]: [NOTICE]   (311033) : path to executable is /usr/sbin/haproxy
Nov 22 04:18:25 np0005532048 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[311029]: [WARNING]  (311033) : Exiting Master process...
Nov 22 04:18:25 np0005532048 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[311029]: [ALERT]    (311033) : Current worker (311035) exited with code 143 (Terminated)
Nov 22 04:18:25 np0005532048 neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e[311029]: [WARNING]  (311033) : All workers exited. Exiting... (0)
Nov 22 04:18:25 np0005532048 systemd[1]: libpod-4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4.scope: Deactivated successfully.
Nov 22 04:18:25 np0005532048 podman[311232]: 2025-11-22 09:18:25.206628551 +0000 UTC m=+0.047569786 container died 4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 04:18:25 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4-userdata-shm.mount: Deactivated successfully.
Nov 22 04:18:25 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6f73e94104a9b40a1c2c8be8af5cf7f587f3e059d0fe257d63eefedbd0fc6b01-merged.mount: Deactivated successfully.
Nov 22 04:18:25 np0005532048 podman[311232]: 2025-11-22 09:18:25.269777537 +0000 UTC m=+0.110718762 container cleanup 4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:18:25 np0005532048 systemd[1]: libpod-conmon-4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4.scope: Deactivated successfully.
Nov 22 04:18:25 np0005532048 podman[311264]: 2025-11-22 09:18:25.367161702 +0000 UTC m=+0.069634303 container remove 4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:18:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 225 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 223 op/s
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.378 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[45183ce7-cabe-4927-8cf1-835776a88740]: (4, ('Sat Nov 22 09:18:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e (4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4)\n4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4\nSat Nov 22 09:18:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e (4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4)\n4e7ce30115039c855a338605bdaf4e00551f7c93927fd50918941582931b65a4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.381 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a04c33a3-b260-489d-b70b-08bed836df2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.382 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f7cdf45-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.384 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:25 np0005532048 kernel: tap8f7cdf45-20: left promiscuous mode
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.404 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.407 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f86190ba-6b0c-41c0-a561-daa7e664039a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.411 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Updating instance_info_cache with network_info: [{"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.421 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[41a88a03-9f04-4021-8f95-d6970b78afd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.422 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[87101c45-a070-471c-8bb1-a9bbcfee2047]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.426 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.427 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.427 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.441 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[24074353-f1ec-46ab-a390-1b74d87a4b90]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595176, 'reachable_time': 28739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311283, 'error': None, 'target': 'ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:25 np0005532048 systemd[1]: run-netns-ovnmeta\x2d8f7cdf45\x2d2d9c\x2d4e24\x2d9818\x2d3c9ecbf1b21e.mount: Deactivated successfully.
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.445 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8f7cdf45-2d9c-4e24-9818-3c9ecbf1b21e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:18:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:25.445 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[25cac3eb-f47f-4c98-948e-fe61d029e154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:25 np0005532048 podman[311281]: 2025-11-22 09:18:25.513015906 +0000 UTC m=+0.069387036 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 04:18:25 np0005532048 podman[311284]: 2025-11-22 09:18:25.520124539 +0000 UTC m=+0.071015616 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 22 04:18:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Nov 22 04:18:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Nov 22 04:18:25 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Nov 22 04:18:25 np0005532048 nova_compute[253661]: 2025-11-22 09:18:25.860 253665 DEBUG nova.storage.rbd_utils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] cloning vms/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk@a4d1fb214b444dfa90abb7feb6651e29 to images/c410abb5-ca6a-4ea8-bbe5-04c76de12b91 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.031 253665 DEBUG nova.storage.rbd_utils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] flattening images/c410abb5-ca6a-4ea8-bbe5-04c76de12b91 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.154 253665 INFO nova.virt.libvirt.driver [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Deleting instance files /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497_del#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.154 253665 INFO nova.virt.libvirt.driver [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Deletion of /var/lib/nova/instances/d583bf52-8135-4fca-a3f4-cf6efd88f497_del complete#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.220 253665 INFO nova.compute.manager [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Took 1.40 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.221 253665 DEBUG oslo.service.loopingcall [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.221 253665 DEBUG nova.compute.manager [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.222 253665 DEBUG nova.network.neutron [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.255 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.686 253665 DEBUG nova.storage.rbd_utils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] removing snapshot(a4d1fb214b444dfa90abb7feb6651e29) on rbd image(ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 22 04:18:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:18:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/324048381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.781 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Nov 22 04:18:26 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.853 253665 DEBUG nova.storage.rbd_utils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] creating snapshot(snap) on rbd image(c410abb5-ca6a-4ea8-bbe5-04c76de12b91) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.914 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.914 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.919 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:18:26 np0005532048 nova_compute[253661]: 2025-11-22 09:18:26.919 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.082 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.083 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3908MB free_disk=59.8883171081543GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.083 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.084 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.261 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance ee68ed8e-d5b3-4069-ac90-f7e94430ed0d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.261 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 636b1046-fff8-4a45-8a14-04010b2f282e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.261 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d583bf52-8135-4fca-a3f4-cf6efd88f497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.262 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.262 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:18:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 245 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.8 MiB/s wr, 319 op/s
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.433 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Nov 22 04:18:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Nov 22 04:18:27 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.860 253665 DEBUG nova.network.neutron [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.875 253665 INFO nova.compute.manager [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Took 1.65 seconds to deallocate network for instance.#033[00m
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.920 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:18:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1566281401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.950 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.957 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:18:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:27.960 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:27.961 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:27.961 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.965 253665 DEBUG nova.compute.manager [req-39436e8e-906e-4f27-8ddd-8f1a82a5de3e req-4af51820-619b-4a45-b851-aee93ef96c81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Received event network-vif-deleted-68713fec-01b1-463b-861c-b96beeb4381a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.970 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.990 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.991 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:27 np0005532048 nova_compute[253661]: 2025-11-22 09:18:27.991 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:28 np0005532048 nova_compute[253661]: 2025-11-22 09:18:28.068 253665 DEBUG oslo_concurrency.processutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:28 np0005532048 nova_compute[253661]: 2025-11-22 09:18:28.484 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:18:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/715458066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:28 np0005532048 nova_compute[253661]: 2025-11-22 09:18:28.551 253665 DEBUG oslo_concurrency.processutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:28 np0005532048 nova_compute[253661]: 2025-11-22 09:18:28.558 253665 DEBUG nova.compute.provider_tree [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:18:28 np0005532048 nova_compute[253661]: 2025-11-22 09:18:28.573 253665 DEBUG nova.scheduler.client.report [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:18:28 np0005532048 nova_compute[253661]: 2025-11-22 09:18:28.596 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:28 np0005532048 nova_compute[253661]: 2025-11-22 09:18:28.625 253665 INFO nova.scheduler.client.report [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Deleted allocations for instance d583bf52-8135-4fca-a3f4-cf6efd88f497#033[00m
Nov 22 04:18:28 np0005532048 nova_compute[253661]: 2025-11-22 09:18:28.699 253665 DEBUG oslo_concurrency.lockutils [None req-f3482b43-3b55-4063-8afb-1a44c39f1f94 8e3344198c364c67aa73008f33323a4d 377c148737af4a5fb70d3e00de87fcd3 - - default default] Lock "d583bf52-8135-4fca-a3f4-cf6efd88f497" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:28 np0005532048 nova_compute[253661]: 2025-11-22 09:18:28.989 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:18:28 np0005532048 nova_compute[253661]: 2025-11-22 09:18:28.990 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:18:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 279 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 453 op/s
Nov 22 04:18:29 np0005532048 podman[311481]: 2025-11-22 09:18:29.39531159 +0000 UTC m=+0.082784792 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible)
Nov 22 04:18:29 np0005532048 nova_compute[253661]: 2025-11-22 09:18:29.653 253665 DEBUG oslo_concurrency.lockutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:29 np0005532048 nova_compute[253661]: 2025-11-22 09:18:29.654 253665 DEBUG oslo_concurrency.lockutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:29 np0005532048 nova_compute[253661]: 2025-11-22 09:18:29.654 253665 INFO nova.compute.manager [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Rebooting instance#033[00m
Nov 22 04:18:29 np0005532048 nova_compute[253661]: 2025-11-22 09:18:29.667 253665 DEBUG oslo_concurrency.lockutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:18:29 np0005532048 nova_compute[253661]: 2025-11-22 09:18:29.668 253665 DEBUG oslo_concurrency.lockutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquired lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:18:29 np0005532048 nova_compute[253661]: 2025-11-22 09:18:29.668 253665 DEBUG nova.network.neutron [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:18:29 np0005532048 nova_compute[253661]: 2025-11-22 09:18:29.687 253665 INFO nova.virt.libvirt.driver [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Snapshot image upload complete#033[00m
Nov 22 04:18:29 np0005532048 nova_compute[253661]: 2025-11-22 09:18:29.687 253665 DEBUG nova.compute.manager [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:29 np0005532048 nova_compute[253661]: 2025-11-22 09:18:29.737 253665 INFO nova.compute.manager [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Shelve offloading#033[00m
Nov 22 04:18:29 np0005532048 nova_compute[253661]: 2025-11-22 09:18:29.744 253665 INFO nova.virt.libvirt.driver [-] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance destroyed successfully.#033[00m
Nov 22 04:18:29 np0005532048 nova_compute[253661]: 2025-11-22 09:18:29.744 253665 DEBUG nova.compute.manager [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:29 np0005532048 nova_compute[253661]: 2025-11-22 09:18:29.746 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:18:29 np0005532048 nova_compute[253661]: 2025-11-22 09:18:29.746 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquired lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:18:29 np0005532048 nova_compute[253661]: 2025-11-22 09:18:29.746 253665 DEBUG nova.network.neutron [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:18:30 np0005532048 nova_compute[253661]: 2025-11-22 09:18:30.128 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:31 np0005532048 nova_compute[253661]: 2025-11-22 09:18:31.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:18:31 np0005532048 nova_compute[253661]: 2025-11-22 09:18:31.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 22 04:18:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 279 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 9.8 MiB/s wr, 344 op/s
Nov 22 04:18:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:31 np0005532048 nova_compute[253661]: 2025-11-22 09:18:31.954 253665 DEBUG nova.network.neutron [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:18:31 np0005532048 nova_compute[253661]: 2025-11-22 09:18:31.957 253665 DEBUG nova.network.neutron [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Updating instance_info_cache with network_info: [{"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:18:31 np0005532048 nova_compute[253661]: 2025-11-22 09:18:31.976 253665 DEBUG oslo_concurrency.lockutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Releasing lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:18:31 np0005532048 nova_compute[253661]: 2025-11-22 09:18:31.978 253665 DEBUG nova.compute.manager [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:31 np0005532048 nova_compute[253661]: 2025-11-22 09:18:31.978 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Releasing lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
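The two `update_instance_cache_with_nw_info` entries above embed the entire `network_info` list as a JSON array inside the log message, between `network_info: ` and the trailing function/path suffix. A minimal sketch for recovering that payload from such a line — a hypothetical helper, assuming the array is the last `}]`-terminated span in the message, which holds for the entries shown here:

```python
import json

def extract_network_info(line: str):
    """Pull the network_info JSON list out of a nova cache-refresh debug line.

    Assumes the payload starts right after 'network_info: ' and ends at the
    last '}]' in the message (the function name and source path follow it).
    """
    start = line.index("network_info: ") + len("network_info: ")
    end = line.rindex("}]") + 2  # include the closing of the JSON list
    return json.loads(line[start:end])

# Hypothetical sample, trimmed to a few of the fields seen in the log above.
sample = ('[instance: 636b1046] Updating instance_info_cache with '
          'network_info: [{"id": "a288a5e5", "address": "fa:16:3e:70:38:8e", '
          '"active": true}] update_instance_cache_with_nw_info '
          '/usr/lib/python3.9/site-packages/nova/network/neutron.py:116')
ports = extract_network_info(sample)
```

The same approach works on the full entries above, where each port dict also carries the subnet, floating IP, and OVN binding details.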
Nov 22 04:18:32 np0005532048 kernel: tapa288a5e5-7b (unregistering): left promiscuous mode
Nov 22 04:18:32 np0005532048 NetworkManager[48920]: <info>  [1763803112.7714] device (tapa288a5e5-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.774 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:32Z|00481|binding|INFO|Releasing lport a288a5e5-7b57-4be8-9617-3271ea1e210f from this chassis (sb_readonly=0)
Nov 22 04:18:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:32Z|00482|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f down in Southbound
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.783 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:32Z|00483|binding|INFO|Removing iface tapa288a5e5-7b ovn-installed in OVS
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.785 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:32.789 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:18:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:32.790 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 unbound from our chassis#033[00m
Nov 22 04:18:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:32.792 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ebc42408-7b03-480c-a016-1e5bb2ebcc93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:18:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:32.795 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6725983d-fd18-4f19-8b93-8ea625f16bf9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:32.796 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace which is not needed anymore#033[00m
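The namespace being cleaned up here follows a fixed convention visible in the log: the Neutron network UUID prefixed with `ovnmeta-`. A small sketch of that mapping, inferred from the names in this log rather than taken from the agent's code:

```python
NS_PREFIX = "ovnmeta-"

def metadata_namespace(network_id: str) -> str:
    """Per-network metadata namespace name, as seen in the teardown entry."""
    return NS_PREFIX + network_id

def network_from_namespace(ns: str) -> str:
    """Inverse mapping, e.g. for matching `ip netns list` output."""
    assert ns.startswith(NS_PREFIX), "not an ovnmeta namespace"
    return ns[len(NS_PREFIX):]
```

This is why the agent can tear the namespace down as soon as the last VIF port on network `ebc42408-7b03-480c-a016-1e5bb2ebcc93` is unbound: the namespace exists purely per network, not per port.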
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.801 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:32 np0005532048 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 22 04:18:32 np0005532048 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000032.scope: Consumed 14.927s CPU time.
Nov 22 04:18:32 np0005532048 systemd-machined[215941]: Machine qemu-55-instance-00000032 terminated.
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.906 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance destroyed successfully.#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.907 253665 DEBUG nova.objects.instance [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'resources' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.921 253665 DEBUG nova.virt.libvirt.vif [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.922 253665 DEBUG nova.network.os_vif_util [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.922 253665 DEBUG nova.network.os_vif_util [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.923 253665 DEBUG os_vif [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.925 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.925 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa288a5e5-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.930 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.933 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.936 253665 INFO os_vif [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')#033[00m
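The `devname` that the `DelPortCommand` above removed from `br-int` (`tapa288a5e5-7b`) is the Neutron port UUID truncated to fit the kernel's 15-byte interface-name limit. A sketch of that naming convention — the `tap` prefix and 11-character truncation are inferred from the device names appearing in this log:

```python
TAP_PREFIX = "tap"
DEV_NAME_LEN = 14  # "tap" + 11 chars of the port UUID, under IFNAMSIZ (16 incl. NUL)

def tap_device_name(port_id: str) -> str:
    """Derive the tap interface name for a Neutron port, as seen in the log."""
    return (TAP_PREFIX + port_id)[:DEV_NAME_LEN]
```

The truncated name is still unique enough in practice for correlation: note how `tapa288a5e5-7b` ties together the kernel's "left promiscuous mode" line, NetworkManager's unmanage event, the OVN `Removing iface ... ovn-installed` entry, and the nova `DelPortCommand`.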
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.946 253665 DEBUG nova.virt.libvirt.driver [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Start _get_guest_xml network_info=[{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.950 253665 WARNING nova.virt.libvirt.driver [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.957 253665 DEBUG nova.virt.libvirt.host [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.958 253665 DEBUG nova.virt.libvirt.host [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.962 253665 DEBUG nova.virt.libvirt.host [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.962 253665 DEBUG nova.virt.libvirt.host [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.963 253665 DEBUG nova.virt.libvirt.driver [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.963 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.963 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.963 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.963 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.964 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.964 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.964 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.964 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.964 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.964 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.965 253665 DEBUG nova.virt.hardware [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
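The `nova.virt.hardware` entries above enumerate candidate (sockets, cores, threads) topologies for the flavor's vCPU count under the default 65536-per-dimension limits, then sort them by preference; with one vCPU and no flavor or image constraints, the only candidate is 1:1:1, which is what gets chosen. A simplified sketch of that enumeration — not nova's actual implementation, and it keeps only exact factorizations of the vCPU count:

```python
import itertools

MAX_PER_DIM = 65536  # the default limit reported in the log entries above

def possible_topologies(vcpus: int):
    """All (sockets, cores, threads) triples whose product is exactly vcpus."""
    found = []
    for sockets, cores in itertools.product(range(1, vcpus + 1), repeat=2):
        if vcpus % (sockets * cores) != 0:
            continue
        threads = vcpus // (sockets * cores)
        if max(sockets, cores, threads) <= MAX_PER_DIM:
            found.append((sockets, cores, threads))
    return found
```

For `vcpus=1` this yields exactly one topology, matching the "Got 1 possible topologies" and "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]" lines.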
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.965 253665 DEBUG nova.objects.instance [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:32Z|00484|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 04:18:32 np0005532048 nova_compute[253661]: 2025-11-22 09:18:32.984 253665 DEBUG oslo_concurrency.processutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
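Here nova shells out to `ceph mon dump --format=json` to refresh the Ceph monitor addresses used for RBD-backed disks before rebuilding the guest XML. A sketch of extracting monitor addresses from that JSON — the sample document is hypothetical and trimmed to the fields the sketch reads; real `mon dump` output carries many more keys:

```python
import json

def monitor_addresses(mon_dump_json: str):
    """Collect one address per monitor from `ceph mon dump --format=json` output.

    Assumes each entry under "mons" carries a "public_addr" (or legacy "addr")
    field; hedged with .get() since field names vary across Ceph releases.
    """
    doc = json.loads(mon_dump_json)
    addrs = []
    for mon in doc.get("mons", []):
        addr = mon.get("public_addr") or mon.get("addr")
        if addr:
            addrs.append(addr)
    return addrs

# Hypothetical, heavily trimmed sample document.
sample = '{"epoch": 1, "mons": [{"name": "a", "public_addr": "192.0.2.10:6789/0"}]}'
```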
Nov 22 04:18:33 np0005532048 nova_compute[253661]: 2025-11-22 09:18:33.033 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:33 np0005532048 nova_compute[253661]: 2025-11-22 09:18:33.233 253665 INFO nova.virt.libvirt.driver [-] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Instance destroyed successfully.#033[00m
Nov 22 04:18:33 np0005532048 nova_compute[253661]: 2025-11-22 09:18:33.233 253665 DEBUG nova.objects.instance [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'resources' on Instance uuid ee68ed8e-d5b3-4069-ac90-f7e94430ed0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:33 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:33Z|00485|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 04:18:33 np0005532048 nova_compute[253661]: 2025-11-22 09:18:33.242 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:33 np0005532048 nova_compute[253661]: 2025-11-22 09:18:33.247 253665 DEBUG nova.virt.libvirt.vif [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1186810200',display_name='tempest-DeleteServersTestJSON-server-1186810200',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1186810200',id=49,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:17:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-bkwxncnu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member',shelved_at='2025-11-22T09:18:29.687740',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='c410abb5-ca6a-4ea8-bbe5-04c76de12b91'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:23Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=ee68ed8e-d5b3-4069-ac90-f7e94430ed0d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:18:33 np0005532048 nova_compute[253661]: 2025-11-22 09:18:33.247 253665 DEBUG nova.network.os_vif_util [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:18:33 np0005532048 nova_compute[253661]: 2025-11-22 09:18:33.248 253665 DEBUG nova.network.os_vif_util [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:a1:a9,bridge_name='br-int',has_traffic_filtering=True,id=085e3bcc-2e77-4c2e-8298-872aac04e65e,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap085e3bcc-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:18:33 np0005532048 nova_compute[253661]: 2025-11-22 09:18:33.248 253665 DEBUG os_vif [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:a1:a9,bridge_name='br-int',has_traffic_filtering=True,id=085e3bcc-2e77-4c2e-8298-872aac04e65e,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap085e3bcc-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:18:33 np0005532048 nova_compute[253661]: 2025-11-22 09:18:33.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:33 np0005532048 nova_compute[253661]: 2025-11-22 09:18:33.250 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap085e3bcc-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:33 np0005532048 nova_compute[253661]: 2025-11-22 09:18:33.252 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:18:33 np0005532048 nova_compute[253661]: 2025-11-22 09:18:33.253 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:33 np0005532048 nova_compute[253661]: 2025-11-22 09:18:33.256 253665 INFO os_vif [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:a1:a9,bridge_name='br-int',has_traffic_filtering=True,id=085e3bcc-2e77-4c2e-8298-872aac04e65e,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap085e3bcc-2e')#033[00m
Nov 22 04:18:33 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[310272]: [NOTICE]   (310276) : haproxy version is 2.8.14-c23fe91
Nov 22 04:18:33 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[310272]: [NOTICE]   (310276) : path to executable is /usr/sbin/haproxy
Nov 22 04:18:33 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[310272]: [WARNING]  (310276) : Exiting Master process...
Nov 22 04:18:33 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[310272]: [WARNING]  (310276) : Exiting Master process...
Nov 22 04:18:33 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[310272]: [ALERT]    (310276) : Current worker (310278) exited with code 143 (Terminated)
Nov 22 04:18:33 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[310272]: [WARNING]  (310276) : All workers exited. Exiting... (0)
Nov 22 04:18:33 np0005532048 systemd[1]: libpod-17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de.scope: Deactivated successfully.
Nov 22 04:18:33 np0005532048 podman[311536]: 2025-11-22 09:18:33.327303032 +0000 UTC m=+0.428670777 container died 17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 04:18:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 279 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 7.7 MiB/s wr, 297 op/s
Nov 22 04:18:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:18:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4041587310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:18:33 np0005532048 nova_compute[253661]: 2025-11-22 09:18:33.508 253665 DEBUG oslo_concurrency.processutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:33 np0005532048 nova_compute[253661]: 2025-11-22 09:18:33.540 253665 DEBUG oslo_concurrency.processutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:33 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de-userdata-shm.mount: Deactivated successfully.
Nov 22 04:18:33 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d5234ae273265799e05869e908957591393f90ceac631303829d1658fcdf1825-merged.mount: Deactivated successfully.
Nov 22 04:18:33 np0005532048 podman[311536]: 2025-11-22 09:18:33.877362057 +0000 UTC m=+0.978729782 container cleanup 17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 04:18:33 np0005532048 systemd[1]: libpod-conmon-17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de.scope: Deactivated successfully.
Nov 22 04:18:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:18:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1201504531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.010 253665 DEBUG oslo_concurrency.processutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.013 253665 DEBUG nova.virt.libvirt.vif [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.013 253665 DEBUG nova.network.os_vif_util [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.015 253665 DEBUG nova.network.os_vif_util [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.017 253665 DEBUG nova.objects.instance [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.024660) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803114024746, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2184, "num_deletes": 256, "total_data_size": 3394928, "memory_usage": 3458816, "flush_reason": "Manual Compaction"}
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.030 253665 DEBUG nova.virt.libvirt.driver [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  <uuid>636b1046-fff8-4a45-8a14-04010b2f282e</uuid>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  <name>instance-00000032</name>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerActionsTestJSON-server-149918095</nova:name>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:18:32</nova:creationTime>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:18:34 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:        <nova:user uuid="559fd7e00a0a468797efe4955caffc4a">tempest-ServerActionsTestJSON-1918756964-project-member</nova:user>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:        <nova:project uuid="d9601c2d2b97440483ffc0bf4f598e73">tempest-ServerActionsTestJSON-1918756964</nova:project>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:        <nova:port uuid="a288a5e5-7b57-4be8-9617-3271ea1e210f">
Nov 22 04:18:34 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <entry name="serial">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <entry name="uuid">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk">
Nov 22 04:18:34 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:18:34 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk.config">
Nov 22 04:18:34 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:18:34 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:70:38:8e"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <target dev="tapa288a5e5-7b"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/console.log" append="off"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <input type="keyboard" bus="usb"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:18:34 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:18:34 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:18:34 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:18:34 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.031 253665 DEBUG nova.virt.libvirt.driver [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.031 253665 DEBUG nova.virt.libvirt.driver [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.032 253665 DEBUG nova.virt.libvirt.vif [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.033 253665 DEBUG nova.network.os_vif_util [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.033 253665 DEBUG nova.network.os_vif_util [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.034 253665 DEBUG os_vif [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.035 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.035 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.036 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.039 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.039 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa288a5e5-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.040 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa288a5e5-7b, col_values=(('external_ids', {'iface-id': 'a288a5e5-7b57-4be8-9617-3271ea1e210f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:38:8e', 'vm-uuid': '636b1046-fff8-4a45-8a14-04010b2f282e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:34 np0005532048 NetworkManager[48920]: <info>  [1763803114.0431] manager: (tapa288a5e5-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.044 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.048 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.049 253665 INFO os_vif [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')#033[00m
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803114075585, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3313731, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30857, "largest_seqno": 33040, "table_properties": {"data_size": 3303792, "index_size": 6305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21045, "raw_average_key_size": 20, "raw_value_size": 3283705, "raw_average_value_size": 3235, "num_data_blocks": 277, "num_entries": 1015, "num_filter_entries": 1015, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763802916, "oldest_key_time": 1763802916, "file_creation_time": 1763803114, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 50966 microseconds, and 9434 cpu microseconds.
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.075635) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3313731 bytes OK
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.075667) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.114344) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.114393) EVENT_LOG_v1 {"time_micros": 1763803114114381, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.114420) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3385624, prev total WAL file size 3385624, number of live WAL files 2.
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.115693) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3236KB)], [68(7053KB)]
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803114115820, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10536564, "oldest_snapshot_seqno": -1}
Nov 22 04:18:34 np0005532048 podman[311656]: 2025-11-22 09:18:34.142061269 +0000 UTC m=+0.240788372 container remove 17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.149 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8023a022-8841-4744-a0a1-ddf933707d1c]: (4, ('Sat Nov 22 09:18:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de)\n17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de\nSat Nov 22 09:18:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de)\n17cbfbf230d0bc9fb444d62783da3ae5ea93c5a7167310e25c180637421f55de\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.150 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ebfe44a0-70a2-4136-80db-38736d85d306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.152 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.154 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:34 np0005532048 kernel: tapebc42408-70: left promiscuous mode
Nov 22 04:18:34 np0005532048 NetworkManager[48920]: <info>  [1763803114.1730] manager: (tapa288a5e5-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/216)
Nov 22 04:18:34 np0005532048 systemd-udevd[311514]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.177 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:34 np0005532048 kernel: tapa288a5e5-7b: entered promiscuous mode
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.182 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6bce74d3-f27f-4501-9342-a17ed3752e9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:34Z|00486|binding|INFO|Claiming lport a288a5e5-7b57-4be8-9617-3271ea1e210f for this chassis.
Nov 22 04:18:34 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:34Z|00487|binding|INFO|a288a5e5-7b57-4be8-9617-3271ea1e210f: Claiming fa:16:3e:70:38:8e 10.100.0.4
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.185 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:34 np0005532048 NetworkManager[48920]: <info>  [1763803114.1921] device (tapa288a5e5-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:18:34 np0005532048 NetworkManager[48920]: <info>  [1763803114.1933] device (tapa288a5e5-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.196 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d97deaac-6300-427a-8398-749068545771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 NetworkManager[48920]: <info>  [1763803114.2000] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.199 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:34 np0005532048 NetworkManager[48920]: <info>  [1763803114.2007] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.198 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3e987a5d-ee05-44fe-912f-4273ef350e58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.203 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5810 keys, 8876187 bytes, temperature: kUnknown
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803114205487, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8876187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8836163, "index_size": 24426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 145821, "raw_average_key_size": 25, "raw_value_size": 8730446, "raw_average_value_size": 1502, "num_data_blocks": 993, "num_entries": 5810, "num_filter_entries": 5810, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803114, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.205994) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8876187 bytes
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.209359) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.1 rd, 98.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 6.9 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 6335, records dropped: 525 output_compression: NoCompression
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.209401) EVENT_LOG_v1 {"time_micros": 1763803114209384, "job": 38, "event": "compaction_finished", "compaction_time_micros": 89957, "compaction_time_cpu_micros": 24225, "output_level": 6, "num_output_files": 1, "total_output_size": 8876187, "num_input_records": 6335, "num_output_records": 5810, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803114210693, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803114212053, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.115547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.212360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.212365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.212367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.212369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:18:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:18:34.212370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:18:34 np0005532048 systemd-machined[215941]: New machine qemu-58-instance-00000032.
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.223 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bde3761d-8a15-4164-8c82-bb71f72cb7dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593627, 'reachable_time': 28985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311688, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.226 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.226 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e640f1f7-531e-4bff-a579-240e541d44b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.227 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 bound to our chassis#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.228 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebc42408-7b03-480c-a016-1e5bb2ebcc93#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.242 253665 DEBUG nova.compute.manager [req-881105b4-8d93-4f5d-bfe9-b15c7ec5be90 req-956f3eaa-ddfe-4143-b052-8fd0630f29cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Received event network-changed-085e3bcc-2e77-4c2e-8298-872aac04e65e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.243 253665 DEBUG nova.compute.manager [req-881105b4-8d93-4f5d-bfe9-b15c7ec5be90 req-956f3eaa-ddfe-4143-b052-8fd0630f29cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Refreshing instance network info cache due to event network-changed-085e3bcc-2e77-4c2e-8298-872aac04e65e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.243 253665 DEBUG oslo_concurrency.lockutils [req-881105b4-8d93-4f5d-bfe9-b15c7ec5be90 req-956f3eaa-ddfe-4143-b052-8fd0630f29cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.243 253665 DEBUG oslo_concurrency.lockutils [req-881105b4-8d93-4f5d-bfe9-b15c7ec5be90 req-956f3eaa-ddfe-4143-b052-8fd0630f29cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.244 253665 DEBUG nova.network.neutron [req-881105b4-8d93-4f5d-bfe9-b15c7ec5be90 req-956f3eaa-ddfe-4143-b052-8fd0630f29cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Refreshing network info cache for port 085e3bcc-2e77-4c2e-8298-872aac04e65e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.243 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[854e36ab-3164-47b4-a9e0-d036bd8a3d0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.245 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapebc42408-71 in ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.246 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapebc42408-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.247 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1b10f95a-4da5-48c8-b8f8-a4bda5103b75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.247 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[509d590a-ea77-4414-96d5-b660d529b117]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.262 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[00627682-b433-43c6-b530-8c30ca496b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 systemd[1]: Started Virtual Machine qemu-58-instance-00000032.
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.287 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0ce46d-4995-4e0c-ac8f-4440d357fb1c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.323 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[133acd3a-3980-405c-ac5c-a5a4b2a69a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.330 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[788a90a5-f767-41e0-933d-242d9ff2e5cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 NetworkManager[48920]: <info>  [1763803114.3336] manager: (tapebc42408-70): new Veth device (/org/freedesktop/NetworkManager/Devices/219)
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.376 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[576020b2-5ad8-4c38-b67f-5b7585b24b12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.379 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[573a387c-d600-46df-9690-3f8e27c83aee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 NetworkManager[48920]: <info>  [1763803114.4019] device (tapebc42408-70): carrier: link connected
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.407 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9b7941-5014-46f6-bc4a-8ed84e74b278]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.418 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.433 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[68be5a7b-cdfd-48f3-a22a-4e544514e729]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596513, 'reachable_time': 27218, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311719, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.442 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:34 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:34Z|00488|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f ovn-installed in OVS
Nov 22 04:18:34 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:34Z|00489|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f up in Southbound
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.453 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.453 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4781e2b1-372e-4fbf-8abd-42571e1f6449]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:e3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596513, 'tstamp': 596513}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311720, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.477 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1605adbf-8ceb-4790-bea8-5cc074d1c88c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596513, 'reachable_time': 27218, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311721, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.525 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[594abb1e-ac37-4851-8ad5-01d725ca8b16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.592 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c605a2e-e9b7-4e53-9d90-956577b07e89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.593 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.593 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.594 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebc42408-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:34 np0005532048 NetworkManager[48920]: <info>  [1763803114.5966] manager: (tapebc42408-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Nov 22 04:18:34 np0005532048 kernel: tapebc42408-70: entered promiscuous mode
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.596 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.598 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebc42408-70, col_values=(('external_ids', {'iface-id': 'efc8861c-ffa7-41c8-9325-c43c7271007f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:34 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:34Z|00490|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.599 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.614 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.616 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.617 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f0f05f-ede2-4777-8bf1-6f74ae7eba87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.618 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:18:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:34.618 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'env', 'PROCESS_TAG=haproxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ebc42408-7b03-480c-a016-1e5bb2ebcc93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.955 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 636b1046-fff8-4a45-8a14-04010b2f282e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.955 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803114.954769, 636b1046-fff8-4a45-8a14-04010b2f282e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.955 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.958 253665 DEBUG nova.compute.manager [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.962 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance rebooted successfully.#033[00m
Nov 22 04:18:34 np0005532048 nova_compute[253661]: 2025-11-22 09:18:34.962 253665 DEBUG nova.compute.manager [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.006 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.012 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.042 253665 DEBUG oslo_concurrency.lockutils [None req-26c1b4e6-be4a-4b97-8c4a-4c95157b6fc1 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.045 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803114.9576468, 636b1046-fff8-4a45-8a14-04010b2f282e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.045 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Started (Lifecycle Event)#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.065 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.069 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:18:35 np0005532048 podman[311795]: 2025-11-22 09:18:34.977493849 +0000 UTC m=+0.027538601 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.130 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.222 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "dec3a0c0-4e66-47fb-845c-42748f871bd3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.222 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.234 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.302 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.302 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.309 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.309 253665 INFO nova.compute.claims [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:18:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 279 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.1 MiB/s wr, 80 op/s
Nov 22 04:18:35 np0005532048 podman[311795]: 2025-11-22 09:18:35.380593923 +0000 UTC m=+0.430638665 container create 3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.430 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:35 np0005532048 systemd[1]: Started libpod-conmon-3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb.scope.
Nov 22 04:18:35 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:18:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/999d845ec8bd02cd92c7e0cd63eb74e8c346a62a6f4728db53982c6a161b0456/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:35 np0005532048 podman[311795]: 2025-11-22 09:18:35.659042579 +0000 UTC m=+0.709087321 container init 3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:18:35 np0005532048 podman[311795]: 2025-11-22 09:18:35.665859505 +0000 UTC m=+0.715904247 container start 3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 04:18:35 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[311811]: [NOTICE]   (311834) : New worker (311836) forked
Nov 22 04:18:35 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[311811]: [NOTICE]   (311834) : Loading success.
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.720 253665 DEBUG nova.network.neutron [req-881105b4-8d93-4f5d-bfe9-b15c7ec5be90 req-956f3eaa-ddfe-4143-b052-8fd0630f29cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Updated VIF entry in instance network info cache for port 085e3bcc-2e77-4c2e-8298-872aac04e65e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.721 253665 DEBUG nova.network.neutron [req-881105b4-8d93-4f5d-bfe9-b15c7ec5be90 req-956f3eaa-ddfe-4143-b052-8fd0630f29cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Updating instance_info_cache with network_info: [{"id": "085e3bcc-2e77-4c2e-8298-872aac04e65e", "address": "fa:16:3e:52:a1:a9", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": null, "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap085e3bcc-2e", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.729 253665 INFO nova.compute.manager [None req-0e1493bb-09fb-4bc6-befa-a524253c3e15 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Get console output#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.738 253665 INFO oslo.privsep.daemon [None req-0e1493bb-09fb-4bc6-befa-a524253c3e15 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpw8tso7nc/privsep.sock']#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.740 253665 DEBUG oslo_concurrency.lockutils [req-881105b4-8d93-4f5d-bfe9-b15c7ec5be90 req-956f3eaa-ddfe-4143-b052-8fd0630f29cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:18:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:18:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583672798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.984 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:35 np0005532048 nova_compute[253661]: 2025-11-22 09:18:35.991 253665 DEBUG nova.compute.provider_tree [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.005 253665 DEBUG nova.scheduler.client.report [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.024 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.025 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.071 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.071 253665 DEBUG nova.network.neutron [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.089 253665 INFO nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.104 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.186 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.187 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.188 253665 INFO nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Creating image(s)#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.211 253665 DEBUG nova.storage.rbd_utils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] rbd image dec3a0c0-4e66-47fb-845c-42748f871bd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.238 253665 DEBUG nova.storage.rbd_utils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] rbd image dec3a0c0-4e66-47fb-845c-42748f871bd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.351 253665 DEBUG nova.storage.rbd_utils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] rbd image dec3a0c0-4e66-47fb-845c-42748f871bd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.356 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.449 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.450 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.451 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.451 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.513 253665 DEBUG nova.storage.rbd_utils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] rbd image dec3a0c0-4e66-47fb-845c-42748f871bd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.519 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 dec3a0c0-4e66-47fb-845c-42748f871bd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.570 253665 DEBUG nova.policy [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '22a23d70ca814c9597ead334e32c08a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9d1054fa34554ffa8a188984d2db6a60', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:18:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.799 253665 DEBUG nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.800 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.800 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.800 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.801 253665 DEBUG nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.801 253665 WARNING nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.801 253665 DEBUG nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.801 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.801 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.802 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.802 253665 DEBUG nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.802 253665 WARNING nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.802 253665 DEBUG nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.802 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.802 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.803 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.803 253665 DEBUG nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.803 253665 WARNING nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.803 253665 DEBUG nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.803 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.803 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.804 253665 DEBUG oslo_concurrency.lockutils [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.804 253665 DEBUG nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:18:36 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.804 253665 WARNING nova.compute.manager [req-ed8ec6f2-d763-4ec5-8738-a4f147b469a3 req-6a8f26c2-8d88-4d44-a6cb-7aab00c220cd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:18:36 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Nov 22 04:18:37 np0005532048 nova_compute[253661]: 2025-11-22 09:18:37.026 253665 INFO oslo.privsep.daemon [None req-0e1493bb-09fb-4bc6-befa-a524253c3e15 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 22 04:18:37 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.865 311943 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 22 04:18:37 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.868 311943 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 22 04:18:37 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.870 311943 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 22 04:18:37 np0005532048 nova_compute[253661]: 2025-11-22 09:18:36.870 311943 INFO oslo.privsep.daemon [-] privsep daemon running as pid 311943#033[00m
Nov 22 04:18:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 247 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.8 MiB/s wr, 109 op/s
Nov 22 04:18:38 np0005532048 nova_compute[253661]: 2025-11-22 09:18:38.087 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803103.0860217, ee68ed8e-d5b3-4069-ac90-f7e94430ed0d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:18:38 np0005532048 nova_compute[253661]: 2025-11-22 09:18:38.088 253665 INFO nova.compute.manager [-] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:18:38 np0005532048 nova_compute[253661]: 2025-11-22 09:18:38.104 253665 DEBUG nova.compute.manager [None req-1b93036c-2ede-4c0a-9c0d-4f791728e7b4 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:38 np0005532048 nova_compute[253661]: 2025-11-22 09:18:38.109 253665 DEBUG nova.compute.manager [None req-1b93036c-2ede-4c0a-9c0d-4f791728e7b4 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:18:38 np0005532048 nova_compute[253661]: 2025-11-22 09:18:38.125 253665 INFO nova.compute.manager [None req-1b93036c-2ede-4c0a-9c0d-4f791728e7b4 - - - - - -] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
Nov 22 04:18:38 np0005532048 nova_compute[253661]: 2025-11-22 09:18:38.383 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 dec3a0c0-4e66-47fb-845c-42748f871bd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.864s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:38 np0005532048 nova_compute[253661]: 2025-11-22 09:18:38.449 253665 DEBUG nova.storage.rbd_utils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] resizing rbd image dec3a0c0-4e66-47fb-845c-42748f871bd3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:18:38 np0005532048 nova_compute[253661]: 2025-11-22 09:18:38.852 253665 DEBUG nova.network.neutron [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Successfully created port: c10e771b-271b-4855-9004-fe8ee858ec5d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:18:39 np0005532048 nova_compute[253661]: 2025-11-22 09:18:39.043 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:39 np0005532048 nova_compute[253661]: 2025-11-22 09:18:39.241 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:18:39 np0005532048 nova_compute[253661]: 2025-11-22 09:18:39.242 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 22 04:18:39 np0005532048 nova_compute[253661]: 2025-11-22 09:18:39.257 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 22 04:18:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 233 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 136 op/s
Nov 22 04:18:39 np0005532048 nova_compute[253661]: 2025-11-22 09:18:39.503 253665 DEBUG nova.objects.instance [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lazy-loading 'migration_context' on Instance uuid dec3a0c0-4e66-47fb-845c-42748f871bd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:39 np0005532048 nova_compute[253661]: 2025-11-22 09:18:39.516 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:18:39 np0005532048 nova_compute[253661]: 2025-11-22 09:18:39.517 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Ensure instance console log exists: /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:18:39 np0005532048 nova_compute[253661]: 2025-11-22 09:18:39.518 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:39 np0005532048 nova_compute[253661]: 2025-11-22 09:18:39.518 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:39 np0005532048 nova_compute[253661]: 2025-11-22 09:18:39.519 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:39 np0005532048 nova_compute[253661]: 2025-11-22 09:18:39.560 253665 INFO nova.virt.libvirt.driver [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Deleting instance files /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_del#033[00m
Nov 22 04:18:39 np0005532048 nova_compute[253661]: 2025-11-22 09:18:39.561 253665 INFO nova.virt.libvirt.driver [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: ee68ed8e-d5b3-4069-ac90-f7e94430ed0d] Deletion of /var/lib/nova/instances/ee68ed8e-d5b3-4069-ac90-f7e94430ed0d_del complete#033[00m
Nov 22 04:18:39 np0005532048 nova_compute[253661]: 2025-11-22 09:18:39.657 253665 INFO nova.scheduler.client.report [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Deleted allocations for instance ee68ed8e-d5b3-4069-ac90-f7e94430ed0d#033[00m
Nov 22 04:18:39 np0005532048 nova_compute[253661]: 2025-11-22 09:18:39.715 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:39 np0005532048 nova_compute[253661]: 2025-11-22 09:18:39.716 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:39 np0005532048 nova_compute[253661]: 2025-11-22 09:18:39.826 253665 DEBUG oslo_concurrency.processutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:40 np0005532048 nova_compute[253661]: 2025-11-22 09:18:40.059 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803105.0584335, d583bf52-8135-4fca-a3f4-cf6efd88f497 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:18:40 np0005532048 nova_compute[253661]: 2025-11-22 09:18:40.060 253665 INFO nova.compute.manager [-] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:18:40 np0005532048 nova_compute[253661]: 2025-11-22 09:18:40.083 253665 DEBUG nova.compute.manager [None req-d601ede0-b2a9-4ee3-baaf-a222543b3f22 - - - - - -] [instance: d583bf52-8135-4fca-a3f4-cf6efd88f497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:40 np0005532048 nova_compute[253661]: 2025-11-22 09:18:40.131 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:40 np0005532048 nova_compute[253661]: 2025-11-22 09:18:40.237 253665 DEBUG nova.network.neutron [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Successfully updated port: c10e771b-271b-4855-9004-fe8ee858ec5d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:18:40 np0005532048 nova_compute[253661]: 2025-11-22 09:18:40.253 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:18:40 np0005532048 nova_compute[253661]: 2025-11-22 09:18:40.254 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquired lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:18:40 np0005532048 nova_compute[253661]: 2025-11-22 09:18:40.254 253665 DEBUG nova.network.neutron [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:18:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:18:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3906110231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:40 np0005532048 nova_compute[253661]: 2025-11-22 09:18:40.321 253665 DEBUG oslo_concurrency.processutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:40 np0005532048 nova_compute[253661]: 2025-11-22 09:18:40.326 253665 DEBUG nova.compute.provider_tree [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:18:40 np0005532048 nova_compute[253661]: 2025-11-22 09:18:40.341 253665 DEBUG nova.scheduler.client.report [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:18:40 np0005532048 nova_compute[253661]: 2025-11-22 09:18:40.385 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:40 np0005532048 nova_compute[253661]: 2025-11-22 09:18:40.451 253665 DEBUG oslo_concurrency.lockutils [None req-414807e2-b288-4b5f-a944-ceca70e3ae7b 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "ee68ed8e-d5b3-4069-ac90-f7e94430ed0d" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 41.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:40 np0005532048 nova_compute[253661]: 2025-11-22 09:18:40.821 253665 DEBUG nova.network.neutron [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:18:40 np0005532048 nova_compute[253661]: 2025-11-22 09:18:40.963 253665 DEBUG nova.compute.manager [req-cd589270-e8d3-4a62-a0af-81ceb571bb21 req-15e1d60a-2169-449f-9d95-8d7bdf2bcd47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received event network-changed-c10e771b-271b-4855-9004-fe8ee858ec5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:40 np0005532048 nova_compute[253661]: 2025-11-22 09:18:40.963 253665 DEBUG nova.compute.manager [req-cd589270-e8d3-4a62-a0af-81ceb571bb21 req-15e1d60a-2169-449f-9d95-8d7bdf2bcd47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Refreshing instance network info cache due to event network-changed-c10e771b-271b-4855-9004-fe8ee858ec5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:18:40 np0005532048 nova_compute[253661]: 2025-11-22 09:18:40.964 253665 DEBUG oslo_concurrency.lockutils [req-cd589270-e8d3-4a62-a0af-81ceb571bb21 req-15e1d60a-2169-449f-9d95-8d7bdf2bcd47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:18:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 233 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 136 op/s
Nov 22 04:18:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Nov 22 04:18:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Nov 22 04:18:42 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Nov 22 04:18:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 246 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 183 op/s
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.405 253665 DEBUG nova.network.neutron [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Updating instance_info_cache with network_info: [{"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.424 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Releasing lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.425 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Instance network_info: |[{"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.425 253665 DEBUG oslo_concurrency.lockutils [req-cd589270-e8d3-4a62-a0af-81ceb571bb21 req-15e1d60a-2169-449f-9d95-8d7bdf2bcd47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.425 253665 DEBUG nova.network.neutron [req-cd589270-e8d3-4a62-a0af-81ceb571bb21 req-15e1d60a-2169-449f-9d95-8d7bdf2bcd47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Refreshing network info cache for port c10e771b-271b-4855-9004-fe8ee858ec5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.428 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Start _get_guest_xml network_info=[{"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.434 253665 WARNING nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.458 253665 DEBUG nova.virt.libvirt.host [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.459 253665 DEBUG nova.virt.libvirt.host [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.466 253665 DEBUG nova.virt.libvirt.host [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.467 253665 DEBUG nova.virt.libvirt.host [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.467 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.467 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.468 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.468 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.469 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.469 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.469 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.470 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.470 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.470 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.471 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.471 253665 DEBUG nova.virt.hardware [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.475 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:18:43 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2131222639' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.973 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:43 np0005532048 nova_compute[253661]: 2025-11-22 09:18:43.998 253665 DEBUG nova.storage.rbd_utils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] rbd image dec3a0c0-4e66-47fb-845c-42748f871bd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.003 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.047 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:18:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192174460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.498 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.500 253665 DEBUG nova.virt.libvirt.vif [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1266648346',display_name='tempest-ServersTestJSON-server-1266648346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1266648346',id=52,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlHqesJZ81rHIrLZzqDDZqmgjYu5MzxRRBun28RXCGOItUHcjpLw69lsrxKRvDbiIeTcAfAS0eY1jM4zBK+YEZ0Fqn+yA8iBWGS3Ng7czuJICvlXeiMEyvgNWSqN1n7cw==',key_name='tempest-keypair-330217895',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9d1054fa34554ffa8a188984d2db6a60',ramdisk_id='',reservation_id='r-562p3oi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1156873673',owner_user_name='tempest-ServersTestJSON-1156873673-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22a23d70ca814c9597ead334e32c08a1',uuid=dec3a0c0-4e66-47fb-845c-42748f871bd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.500 253665 DEBUG nova.network.os_vif_util [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Converting VIF {"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.501 253665 DEBUG nova.network.os_vif_util [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f1:f2:e5,bridge_name='br-int',has_traffic_filtering=True,id=c10e771b-271b-4855-9004-fe8ee858ec5d,network=Network(af1599cd-9805-40cb-9d20-ed7982b07412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10e771b-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.503 253665 DEBUG nova.objects.instance [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lazy-loading 'pci_devices' on Instance uuid dec3a0c0-4e66-47fb-845c-42748f871bd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.517 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  <uuid>dec3a0c0-4e66-47fb-845c-42748f871bd3</uuid>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  <name>instance-00000034</name>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersTestJSON-server-1266648346</nova:name>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:18:43</nova:creationTime>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:18:44 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:        <nova:user uuid="22a23d70ca814c9597ead334e32c08a1">tempest-ServersTestJSON-1156873673-project-member</nova:user>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:        <nova:project uuid="9d1054fa34554ffa8a188984d2db6a60">tempest-ServersTestJSON-1156873673</nova:project>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:        <nova:port uuid="c10e771b-271b-4855-9004-fe8ee858ec5d">
Nov 22 04:18:44 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <entry name="serial">dec3a0c0-4e66-47fb-845c-42748f871bd3</entry>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <entry name="uuid">dec3a0c0-4e66-47fb-845c-42748f871bd3</entry>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/dec3a0c0-4e66-47fb-845c-42748f871bd3_disk">
Nov 22 04:18:44 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:18:44 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/dec3a0c0-4e66-47fb-845c-42748f871bd3_disk.config">
Nov 22 04:18:44 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:18:44 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:f1:f2:e5"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <target dev="tapc10e771b-27"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3/console.log" append="off"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:18:44 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:18:44 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:18:44 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:18:44 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.518 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Preparing to wait for external event network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.518 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.519 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.519 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.520 253665 DEBUG nova.virt.libvirt.vif [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1266648346',display_name='tempest-ServersTestJSON-server-1266648346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1266648346',id=52,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlHqesJZ81rHIrLZzqDDZqmgjYu5MzxRRBun28RXCGOItUHcjpLw69lsrxKRvDbiIeTcAfAS0eY1jM4zBK+YEZ0Fqn+yA8iBWGS3Ng7czuJICvlXeiMEyvgNWSqN1n7cw==',key_name='tempest-keypair-330217895',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9d1054fa34554ffa8a188984d2db6a60',ramdisk_id='',reservation_id='r-562p3oi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1156873673',owner_user_name='tempest-ServersTestJSON-1156873673-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22a23d70ca814c9597ead334e32c08a1',uuid=dec3a0c0-4e66-47fb-845c-42748f871bd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.520 253665 DEBUG nova.network.os_vif_util [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Converting VIF {"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.521 253665 DEBUG nova.network.os_vif_util [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f1:f2:e5,bridge_name='br-int',has_traffic_filtering=True,id=c10e771b-271b-4855-9004-fe8ee858ec5d,network=Network(af1599cd-9805-40cb-9d20-ed7982b07412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10e771b-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.521 253665 DEBUG os_vif [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f1:f2:e5,bridge_name='br-int',has_traffic_filtering=True,id=c10e771b-271b-4855-9004-fe8ee858ec5d,network=Network(af1599cd-9805-40cb-9d20-ed7982b07412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10e771b-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.522 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.522 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.523 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.526 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc10e771b-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.527 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc10e771b-27, col_values=(('external_ids', {'iface-id': 'c10e771b-271b-4855-9004-fe8ee858ec5d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f1:f2:e5', 'vm-uuid': 'dec3a0c0-4e66-47fb-845c-42748f871bd3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.528 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:44 np0005532048 NetworkManager[48920]: <info>  [1763803124.5309] manager: (tapc10e771b-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.531 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.537 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.538 253665 INFO os_vif [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f1:f2:e5,bridge_name='br-int',has_traffic_filtering=True,id=c10e771b-271b-4855-9004-fe8ee858ec5d,network=Network(af1599cd-9805-40cb-9d20-ed7982b07412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10e771b-27')#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.589 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.589 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.589 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] No VIF found with MAC fa:16:3e:f1:f2:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.590 253665 INFO nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Using config drive#033[00m
Nov 22 04:18:44 np0005532048 nova_compute[253661]: 2025-11-22 09:18:44.610 253665 DEBUG nova.storage.rbd_utils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] rbd image dec3a0c0-4e66-47fb-845c-42748f871bd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.082 253665 INFO nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Creating config drive at /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3/disk.config#033[00m
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.089 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprg4lfp3s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.134 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.245 253665 DEBUG oslo_concurrency.lockutils [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.246 253665 DEBUG oslo_concurrency.lockutils [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.246 253665 DEBUG nova.compute.manager [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.247 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprg4lfp3s" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.274 253665 DEBUG nova.storage.rbd_utils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] rbd image dec3a0c0-4e66-47fb-845c-42748f871bd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.279 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3/disk.config dec3a0c0-4e66-47fb-845c-42748f871bd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.333 253665 DEBUG nova.compute.manager [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.335 253665 DEBUG nova.objects.instance [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'flavor' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.365 253665 DEBUG nova.virt.libvirt.driver [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:18:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 246 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 136 op/s
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.440 253665 DEBUG nova.network.neutron [req-cd589270-e8d3-4a62-a0af-81ceb571bb21 req-15e1d60a-2169-449f-9d95-8d7bdf2bcd47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Updated VIF entry in instance network info cache for port c10e771b-271b-4855-9004-fe8ee858ec5d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.441 253665 DEBUG nova.network.neutron [req-cd589270-e8d3-4a62-a0af-81ceb571bb21 req-15e1d60a-2169-449f-9d95-8d7bdf2bcd47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Updating instance_info_cache with network_info: [{"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.453 253665 DEBUG oslo_concurrency.lockutils [req-cd589270-e8d3-4a62-a0af-81ceb571bb21 req-15e1d60a-2169-449f-9d95-8d7bdf2bcd47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.496 253665 DEBUG oslo_concurrency.processutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3/disk.config dec3a0c0-4e66-47fb-845c-42748f871bd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.497 253665 INFO nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Deleting local config drive /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3/disk.config because it was imported into RBD.#033[00m
Nov 22 04:18:45 np0005532048 kernel: tapc10e771b-27: entered promiscuous mode
Nov 22 04:18:45 np0005532048 NetworkManager[48920]: <info>  [1763803125.5611] manager: (tapc10e771b-27): new Tun device (/org/freedesktop/NetworkManager/Devices/222)
Nov 22 04:18:45 np0005532048 systemd-udevd[312176]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:18:45 np0005532048 NetworkManager[48920]: <info>  [1763803125.6048] device (tapc10e771b-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:18:45 np0005532048 NetworkManager[48920]: <info>  [1763803125.6061] device (tapc10e771b-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.609 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.617 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:f2:e5 10.100.0.3'], port_security=['fa:16:3e:f1:f2:e5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dec3a0c0-4e66-47fb-845c-42748f871bd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af1599cd-9805-40cb-9d20-ed7982b07412', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9d1054fa34554ffa8a188984d2db6a60', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ac0f6fad-418e-4cf8-9b02-babdac3fb88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64f6ab83-a798-4bd9-aa90-a1cb3d63c1c0, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c10e771b-271b-4855-9004-fe8ee858ec5d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:18:45 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:45Z|00491|binding|INFO|Claiming lport c10e771b-271b-4855-9004-fe8ee858ec5d for this chassis.
Nov 22 04:18:45 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:45Z|00492|binding|INFO|c10e771b-271b-4855-9004-fe8ee858ec5d: Claiming fa:16:3e:f1:f2:e5 10.100.0.3
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.619 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c10e771b-271b-4855-9004-fe8ee858ec5d in datapath af1599cd-9805-40cb-9d20-ed7982b07412 bound to our chassis#033[00m
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.621 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network af1599cd-9805-40cb-9d20-ed7982b07412#033[00m
Nov 22 04:18:45 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:45Z|00493|binding|INFO|Setting lport c10e771b-271b-4855-9004-fe8ee858ec5d ovn-installed in OVS
Nov 22 04:18:45 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:45Z|00494|binding|INFO|Setting lport c10e771b-271b-4855-9004-fe8ee858ec5d up in Southbound
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.638 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[993a8fa3-6c6f-4ed1-835a-2a2c61805cf1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.639 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaf1599cd-91 in ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:18:45 np0005532048 nova_compute[253661]: 2025-11-22 09:18:45.636 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.642 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaf1599cd-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.642 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1d81abd7-6401-4b39-b809-6b6da519e8e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.643 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c90cb6d5-841a-4e33-96f7-703f03522518]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.656 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb4b16d-2719-41c8-bab0-bfedb90ca60f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:45 np0005532048 systemd-machined[215941]: New machine qemu-59-instance-00000034.
Nov 22 04:18:45 np0005532048 systemd[1]: Started Virtual Machine qemu-59-instance-00000034.
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.686 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf9c9f7-b2e1-4dce-af8a-fe16b0f6b19d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.728 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[750602b7-52ac-4def-b3ce-026edff813c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:45 np0005532048 NetworkManager[48920]: <info>  [1763803125.7358] manager: (tapaf1599cd-90): new Veth device (/org/freedesktop/NetworkManager/Devices/223)
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.734 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2e8bc325-d697-41ae-953e-4d83889bd910]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.780 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd7d2d8-f3d1-4a76-9e10-ada6e780df3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.784 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e878785a-51b8-4731-8f32-1749d5ebf42b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:45 np0005532048 NetworkManager[48920]: <info>  [1763803125.8148] device (tapaf1599cd-90): carrier: link connected
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.820 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[577ed49b-59de-4579-9b10-21f98c6d988b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.841 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e422ec6f-cdfd-4972-81c1-9512277d1b40]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaf1599cd-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:da:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597655, 'reachable_time': 30270, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312213, 'error': None, 'target': 'ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.865 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[40a26ab5-0836-4836-a6b5-15b9875fbd66]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:dad5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597655, 'tstamp': 597655}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312214, 'error': None, 'target': 'ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.888 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1f098414-e96a-49a4-9a3d-cb2c887f1ae5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaf1599cd-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:da:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597655, 'reachable_time': 30270, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312215, 'error': None, 'target': 'ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:45.926 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[761bc251-9071-4e33-8d01-ff44ed25802a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.008 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[64d8b908-eccc-4ff7-8238-c738db5cfffb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.009 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf1599cd-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.009 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.010 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaf1599cd-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.012 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:46 np0005532048 NetworkManager[48920]: <info>  [1763803126.0127] manager: (tapaf1599cd-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Nov 22 04:18:46 np0005532048 kernel: tapaf1599cd-90: entered promiscuous mode
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.027 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.029 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaf1599cd-90, col_values=(('external_ids', {'iface-id': 'b4f427ac-62da-49fd-b335-ef7e4dcae695'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.030 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:46Z|00495|binding|INFO|Releasing lport b4f427ac-62da-49fd-b335-ef7e4dcae695 from this chassis (sb_readonly=0)
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.031 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.032 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/af1599cd-9805-40cb-9d20-ed7982b07412.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/af1599cd-9805-40cb-9d20-ed7982b07412.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.033 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8cb93f-3269-49c4-aa28-61e586cc44fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.033 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-af1599cd-9805-40cb-9d20-ed7982b07412
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/af1599cd-9805-40cb-9d20-ed7982b07412.pid.haproxy
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID af1599cd-9805-40cb-9d20-ed7982b07412
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:18:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:46.034 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412', 'env', 'PROCESS_TAG=haproxy-af1599cd-9805-40cb-9d20-ed7982b07412', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/af1599cd-9805-40cb-9d20-ed7982b07412.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.046 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.165 253665 DEBUG nova.compute.manager [req-0767c9a3-1078-4bfd-886d-8c6a6a97f9a0 req-de88f81d-620a-489f-bb81-31470199bf0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received event network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.167 253665 DEBUG oslo_concurrency.lockutils [req-0767c9a3-1078-4bfd-886d-8c6a6a97f9a0 req-de88f81d-620a-489f-bb81-31470199bf0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.167 253665 DEBUG oslo_concurrency.lockutils [req-0767c9a3-1078-4bfd-886d-8c6a6a97f9a0 req-de88f81d-620a-489f-bb81-31470199bf0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.167 253665 DEBUG oslo_concurrency.lockutils [req-0767c9a3-1078-4bfd-886d-8c6a6a97f9a0 req-de88f81d-620a-489f-bb81-31470199bf0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.168 253665 DEBUG nova.compute.manager [req-0767c9a3-1078-4bfd-886d-8c6a6a97f9a0 req-de88f81d-620a-489f-bb81-31470199bf0f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Processing event network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.323 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.325 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803126.3243744, dec3a0c0-4e66-47fb-845c-42748f871bd3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.325 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] VM Started (Lifecycle Event)
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.328 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.331 253665 INFO nova.virt.libvirt.driver [-] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Instance spawned successfully.
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.331 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.348 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.354 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.357 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.358 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.358 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.359 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.359 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.359 253665 DEBUG nova.virt.libvirt.driver [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.396 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.397 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803126.3244557, dec3a0c0-4e66-47fb-845c-42748f871bd3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.397 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] VM Paused (Lifecycle Event)
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.417 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.426 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803126.3268192, dec3a0c0-4e66-47fb-845c-42748f871bd3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.426 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] VM Resumed (Lifecycle Event)
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.453 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.456 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.465 253665 INFO nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Took 10.28 seconds to spawn the instance on the hypervisor.
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.465 253665 DEBUG nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.489 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.525 253665 INFO nova.compute.manager [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Took 11.25 seconds to build instance.
Nov 22 04:18:46 np0005532048 podman[312290]: 2025-11-22 09:18:46.435185191 +0000 UTC m=+0.028076643 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:18:46 np0005532048 nova_compute[253661]: 2025-11-22 09:18:46.540 253665 DEBUG oslo_concurrency.lockutils [None req-765fcc72-6138-45a2-97fb-dffa85a37f99 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:46 np0005532048 podman[312290]: 2025-11-22 09:18:46.621042067 +0000 UTC m=+0.213933499 container create d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:18:46 np0005532048 systemd[1]: Started libpod-conmon-d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324.scope.
Nov 22 04:18:46 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:18:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c46800e6c5ca462086d92c20678c674e5c07106b89d77525644fa067c6c4bcd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:46 np0005532048 podman[312290]: 2025-11-22 09:18:46.747295445 +0000 UTC m=+0.340186897 container init d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:18:46 np0005532048 podman[312290]: 2025-11-22 09:18:46.753423504 +0000 UTC m=+0.346314936 container start d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:18:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:46 np0005532048 neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412[312305]: [NOTICE]   (312309) : New worker (312311) forked
Nov 22 04:18:46 np0005532048 neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412[312305]: [NOTICE]   (312309) : Loading success.
Nov 22 04:18:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 210 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 147 op/s
Nov 22 04:18:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:47Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:38:8e 10.100.0.4
Nov 22 04:18:48 np0005532048 nova_compute[253661]: 2025-11-22 09:18:48.376 253665 DEBUG nova.compute.manager [req-2b38a664-aab3-40e3-99d2-6b00fd4752cc req-719dd4be-5179-4f5e-b60e-056d8a413105 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received event network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:18:48 np0005532048 nova_compute[253661]: 2025-11-22 09:18:48.376 253665 DEBUG oslo_concurrency.lockutils [req-2b38a664-aab3-40e3-99d2-6b00fd4752cc req-719dd4be-5179-4f5e-b60e-056d8a413105 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:48 np0005532048 nova_compute[253661]: 2025-11-22 09:18:48.377 253665 DEBUG oslo_concurrency.lockutils [req-2b38a664-aab3-40e3-99d2-6b00fd4752cc req-719dd4be-5179-4f5e-b60e-056d8a413105 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:48 np0005532048 nova_compute[253661]: 2025-11-22 09:18:48.377 253665 DEBUG oslo_concurrency.lockutils [req-2b38a664-aab3-40e3-99d2-6b00fd4752cc req-719dd4be-5179-4f5e-b60e-056d8a413105 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:48 np0005532048 nova_compute[253661]: 2025-11-22 09:18:48.377 253665 DEBUG nova.compute.manager [req-2b38a664-aab3-40e3-99d2-6b00fd4752cc req-719dd4be-5179-4f5e-b60e-056d8a413105 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] No waiting events found dispatching network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:18:48 np0005532048 nova_compute[253661]: 2025-11-22 09:18:48.377 253665 WARNING nova.compute.manager [req-2b38a664-aab3-40e3-99d2-6b00fd4752cc req-719dd4be-5179-4f5e-b60e-056d8a413105 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received unexpected event network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d for instance with vm_state active and task_state None.
Nov 22 04:18:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 167 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 741 KiB/s wr, 164 op/s
Nov 22 04:18:49 np0005532048 nova_compute[253661]: 2025-11-22 09:18:49.510 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:49 np0005532048 nova_compute[253661]: 2025-11-22 09:18:49.511 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:49 np0005532048 nova_compute[253661]: 2025-11-22 09:18:49.529 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:18:49 np0005532048 nova_compute[253661]: 2025-11-22 09:18:49.531 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:49 np0005532048 nova_compute[253661]: 2025-11-22 09:18:49.618 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:49 np0005532048 nova_compute[253661]: 2025-11-22 09:18:49.619 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:49 np0005532048 nova_compute[253661]: 2025-11-22 09:18:49.628 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:18:49 np0005532048 nova_compute[253661]: 2025-11-22 09:18:49.629 253665 INFO nova.compute.claims [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:18:49 np0005532048 nova_compute[253661]: 2025-11-22 09:18:49.794 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:49.891 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:18:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:49.892 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:18:49 np0005532048 nova_compute[253661]: 2025-11-22 09:18:49.892 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:18:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/300004298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.264 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.270 253665 DEBUG nova.compute.provider_tree [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.286 253665 DEBUG nova.scheduler.client.report [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.304 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.305 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.353 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.354 253665 DEBUG nova.network.neutron [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.376 253665 INFO nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.392 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.502 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.504 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.505 253665 INFO nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Creating image(s)
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.532 253665 DEBUG nova.storage.rbd_utils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.565 253665 DEBUG nova.storage.rbd_utils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.592 253665 DEBUG nova.storage.rbd_utils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.597 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.650 253665 DEBUG nova.policy [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '790eaa89f1a74325b81291d8beca6d38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.688 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.690 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.691 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.691 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.721 253665 DEBUG nova.storage.rbd_utils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.727 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:18:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:18:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:18:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.950 253665 DEBUG nova.compute.manager [req-a82d535c-7e13-4f72-b56a-768343b849d7 req-abf5fad1-77a0-4adf-bb7b-d1730630db47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received event network-changed-c10e771b-271b-4855-9004-fe8ee858ec5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.951 253665 DEBUG nova.compute.manager [req-a82d535c-7e13-4f72-b56a-768343b849d7 req-abf5fad1-77a0-4adf-bb7b-d1730630db47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Refreshing instance network info cache due to event network-changed-c10e771b-271b-4855-9004-fe8ee858ec5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.951 253665 DEBUG oslo_concurrency.lockutils [req-a82d535c-7e13-4f72-b56a-768343b849d7 req-abf5fad1-77a0-4adf-bb7b-d1730630db47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.951 253665 DEBUG oslo_concurrency.lockutils [req-a82d535c-7e13-4f72-b56a-768343b849d7 req-abf5fad1-77a0-4adf-bb7b-d1730630db47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:18:50 np0005532048 nova_compute[253661]: 2025-11-22 09:18:50.951 253665 DEBUG nova.network.neutron [req-a82d535c-7e13-4f72-b56a-768343b849d7 req-abf5fad1-77a0-4adf-bb7b-d1730630db47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Refreshing network info cache for port c10e771b-271b-4855-9004-fe8ee858ec5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:18:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:18:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:18:51 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d1471e07-2751-439c-8b48-e83808a9788e does not exist
Nov 22 04:18:51 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 0575880f-278d-4a0d-a466-6f311d7fdc7a does not exist
Nov 22 04:18:51 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 8822cc69-3995-4e70-ac0b-7b6528e53fdd does not exist
Nov 22 04:18:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:18:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:18:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:18:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:18:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:18:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:18:51 np0005532048 nova_compute[253661]: 2025-11-22 09:18:51.123 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:51 np0005532048 nova_compute[253661]: 2025-11-22 09:18:51.197 253665 DEBUG nova.storage.rbd_utils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] resizing rbd image 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:18:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 167 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 741 KiB/s wr, 164 op/s
Nov 22 04:18:51 np0005532048 nova_compute[253661]: 2025-11-22 09:18:51.407 253665 DEBUG nova.objects.instance [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'migration_context' on Instance uuid 2f0d9dce-1900-41c4-9b69-7e46f34dde81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:18:51 np0005532048 nova_compute[253661]: 2025-11-22 09:18:51.431 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:18:51 np0005532048 nova_compute[253661]: 2025-11-22 09:18:51.432 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Ensure instance console log exists: /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:18:51 np0005532048 nova_compute[253661]: 2025-11-22 09:18:51.432 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:51 np0005532048 nova_compute[253661]: 2025-11-22 09:18:51.432 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:51 np0005532048 nova_compute[253661]: 2025-11-22 09:18:51.432 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:51 np0005532048 podman[312778]: 2025-11-22 09:18:51.668784769 +0000 UTC m=+0.038863455 container create c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keller, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 04:18:51 np0005532048 systemd[1]: Started libpod-conmon-c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8.scope.
Nov 22 04:18:51 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:18:51 np0005532048 podman[312778]: 2025-11-22 09:18:51.650573817 +0000 UTC m=+0.020652523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:18:51 np0005532048 podman[312778]: 2025-11-22 09:18:51.775071512 +0000 UTC m=+0.145150218 container init c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keller, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 04:18:51 np0005532048 podman[312778]: 2025-11-22 09:18:51.785632439 +0000 UTC m=+0.155711115 container start c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keller, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 04:18:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Nov 22 04:18:51 np0005532048 podman[312778]: 2025-11-22 09:18:51.790552608 +0000 UTC m=+0.160631324 container attach c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keller, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:18:51 np0005532048 keen_keller[312795]: 167 167
Nov 22 04:18:51 np0005532048 systemd[1]: libpod-c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8.scope: Deactivated successfully.
Nov 22 04:18:51 np0005532048 podman[312778]: 2025-11-22 09:18:51.793623172 +0000 UTC m=+0.163701868 container died c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 04:18:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Nov 22 04:18:51 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Nov 22 04:18:51 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:18:51 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:18:51 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:18:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e0af812a6933bc8243815a941d50c66c341d6b54995f6cee3aa38957180d2185-merged.mount: Deactivated successfully.
Nov 22 04:18:51 np0005532048 podman[312778]: 2025-11-22 09:18:51.881474197 +0000 UTC m=+0.251552883 container remove c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 04:18:51 np0005532048 nova_compute[253661]: 2025-11-22 09:18:51.888 253665 DEBUG nova.network.neutron [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Successfully created port: 8d030734-0e50-4fca-a432-cc2d1c2c9dea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:18:51 np0005532048 systemd[1]: libpod-conmon-c8ef7f9acd95fb0b701a77ea4af305e0fdb16c7149912e63fb5d22726a174fb8.scope: Deactivated successfully.
Nov 22 04:18:51 np0005532048 nova_compute[253661]: 2025-11-22 09:18:51.897 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:52 np0005532048 podman[312818]: 2025-11-22 09:18:52.0861542 +0000 UTC m=+0.053865299 container create e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 04:18:52 np0005532048 systemd[1]: Started libpod-conmon-e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019.scope.
Nov 22 04:18:52 np0005532048 podman[312818]: 2025-11-22 09:18:52.061253395 +0000 UTC m=+0.028964294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:18:52 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:18:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0c398d7910fd8d4eb177ea3f10d7a54f3a26d6c18088dd6d8a99adb34e30da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0c398d7910fd8d4eb177ea3f10d7a54f3a26d6c18088dd6d8a99adb34e30da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0c398d7910fd8d4eb177ea3f10d7a54f3a26d6c18088dd6d8a99adb34e30da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0c398d7910fd8d4eb177ea3f10d7a54f3a26d6c18088dd6d8a99adb34e30da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0c398d7910fd8d4eb177ea3f10d7a54f3a26d6c18088dd6d8a99adb34e30da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:52 np0005532048 podman[312818]: 2025-11-22 09:18:52.189671176 +0000 UTC m=+0.157382075 container init e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:18:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:18:52
Nov 22 04:18:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:18:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:18:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'volumes', '.mgr', '.rgw.root', 'images']
Nov 22 04:18:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:18:52 np0005532048 podman[312818]: 2025-11-22 09:18:52.200688004 +0000 UTC m=+0.168398883 container start e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 04:18:52 np0005532048 podman[312818]: 2025-11-22 09:18:52.204765843 +0000 UTC m=+0.172476722 container attach e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:18:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:18:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:18:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:18:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:18:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:18:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:18:53 np0005532048 nova_compute[253661]: 2025-11-22 09:18:53.258 253665 DEBUG nova.network.neutron [req-a82d535c-7e13-4f72-b56a-768343b849d7 req-abf5fad1-77a0-4adf-bb7b-d1730630db47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Updated VIF entry in instance network info cache for port c10e771b-271b-4855-9004-fe8ee858ec5d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:18:53 np0005532048 nova_compute[253661]: 2025-11-22 09:18:53.258 253665 DEBUG nova.network.neutron [req-a82d535c-7e13-4f72-b56a-768343b849d7 req-abf5fad1-77a0-4adf-bb7b-d1730630db47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Updating instance_info_cache with network_info: [{"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:18:53 np0005532048 nova_compute[253661]: 2025-11-22 09:18:53.279 253665 DEBUG oslo_concurrency.lockutils [req-a82d535c-7e13-4f72-b56a-768343b849d7 req-abf5fad1-77a0-4adf-bb7b-d1730630db47 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-dec3a0c0-4e66-47fb-845c-42748f871bd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:18:53 np0005532048 quirky_booth[312835]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:18:53 np0005532048 quirky_booth[312835]: --> relative data size: 1.0
Nov 22 04:18:53 np0005532048 quirky_booth[312835]: --> All data devices are unavailable
Nov 22 04:18:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 180 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 663 KiB/s wr, 184 op/s
Nov 22 04:18:53 np0005532048 systemd[1]: libpod-e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019.scope: Deactivated successfully.
Nov 22 04:18:53 np0005532048 podman[312818]: 2025-11-22 09:18:53.389556061 +0000 UTC m=+1.357266970 container died e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 04:18:53 np0005532048 systemd[1]: libpod-e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019.scope: Consumed 1.087s CPU time.
Nov 22 04:18:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay-cb0c398d7910fd8d4eb177ea3f10d7a54f3a26d6c18088dd6d8a99adb34e30da-merged.mount: Deactivated successfully.
Nov 22 04:18:53 np0005532048 podman[312818]: 2025-11-22 09:18:53.633023067 +0000 UTC m=+1.600733946 container remove e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:18:53 np0005532048 systemd[1]: libpod-conmon-e96e17e658c0c5874366a5131263d28e7c79657dd64b45532d1632c81543e019.scope: Deactivated successfully.
Nov 22 04:18:54 np0005532048 nova_compute[253661]: 2025-11-22 09:18:54.239 253665 DEBUG nova.network.neutron [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Successfully updated port: 8d030734-0e50-4fca-a432-cc2d1c2c9dea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:18:54 np0005532048 podman[313016]: 2025-11-22 09:18:54.243136652 +0000 UTC m=+0.044420641 container create 09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:18:54 np0005532048 nova_compute[253661]: 2025-11-22 09:18:54.252 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "refresh_cache-2f0d9dce-1900-41c4-9b69-7e46f34dde81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:18:54 np0005532048 nova_compute[253661]: 2025-11-22 09:18:54.253 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquired lock "refresh_cache-2f0d9dce-1900-41c4-9b69-7e46f34dde81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:18:54 np0005532048 nova_compute[253661]: 2025-11-22 09:18:54.253 253665 DEBUG nova.network.neutron [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:18:54 np0005532048 systemd[1]: Started libpod-conmon-09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656.scope.
Nov 22 04:18:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:18:54 np0005532048 podman[313016]: 2025-11-22 09:18:54.221165038 +0000 UTC m=+0.022449047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:18:54 np0005532048 podman[313016]: 2025-11-22 09:18:54.332125864 +0000 UTC m=+0.133409883 container init 09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 04:18:54 np0005532048 podman[313016]: 2025-11-22 09:18:54.338640042 +0000 UTC m=+0.139924041 container start 09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 04:18:54 np0005532048 naughty_franklin[313033]: 167 167
Nov 22 04:18:54 np0005532048 systemd[1]: libpod-09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656.scope: Deactivated successfully.
Nov 22 04:18:54 np0005532048 podman[313016]: 2025-11-22 09:18:54.35209765 +0000 UTC m=+0.153381649 container attach 09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 04:18:54 np0005532048 podman[313016]: 2025-11-22 09:18:54.353090013 +0000 UTC m=+0.154374022 container died 09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 04:18:54 np0005532048 nova_compute[253661]: 2025-11-22 09:18:54.364 253665 DEBUG nova.compute.manager [req-a90f7450-c813-4eff-a5da-c23b3b8e6337 req-6989ae13-9c3b-4dae-b8cd-412cfb9eaf1f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received event network-changed-8d030734-0e50-4fca-a432-cc2d1c2c9dea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:54 np0005532048 nova_compute[253661]: 2025-11-22 09:18:54.366 253665 DEBUG nova.compute.manager [req-a90f7450-c813-4eff-a5da-c23b3b8e6337 req-6989ae13-9c3b-4dae-b8cd-412cfb9eaf1f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Refreshing instance network info cache due to event network-changed-8d030734-0e50-4fca-a432-cc2d1c2c9dea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:18:54 np0005532048 nova_compute[253661]: 2025-11-22 09:18:54.366 253665 DEBUG oslo_concurrency.lockutils [req-a90f7450-c813-4eff-a5da-c23b3b8e6337 req-6989ae13-9c3b-4dae-b8cd-412cfb9eaf1f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2f0d9dce-1900-41c4-9b69-7e46f34dde81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:18:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0c9e6704ca143a8c678a8d289cf4759964765c68e4ea3989247cfc095cd2c3a0-merged.mount: Deactivated successfully.
Nov 22 04:18:54 np0005532048 nova_compute[253661]: 2025-11-22 09:18:54.415 253665 DEBUG nova.network.neutron [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:18:54 np0005532048 podman[313016]: 2025-11-22 09:18:54.436005398 +0000 UTC m=+0.237289387 container remove 09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:18:54 np0005532048 systemd[1]: libpod-conmon-09c2d78e5cd27e073c112d301c9afe9f961182e0d4c35ca50b4363bcfd03e656.scope: Deactivated successfully.
Nov 22 04:18:54 np0005532048 nova_compute[253661]: 2025-11-22 09:18:54.534 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:54 np0005532048 podman[313058]: 2025-11-22 09:18:54.613152183 +0000 UTC m=+0.027377017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:18:54 np0005532048 podman[313058]: 2025-11-22 09:18:54.800221928 +0000 UTC m=+0.214446752 container create 117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:18:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:18:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:18:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:18:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:18:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:18:54 np0005532048 systemd[1]: Started libpod-conmon-117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb.scope.
Nov 22 04:18:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:18:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:18:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:18:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:18:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:18:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:18:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4508037bd160a237e2ec1fd77f68806f17adb5d4d9792b699a571dc32825ad4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4508037bd160a237e2ec1fd77f68806f17adb5d4d9792b699a571dc32825ad4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4508037bd160a237e2ec1fd77f68806f17adb5d4d9792b699a571dc32825ad4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4508037bd160a237e2ec1fd77f68806f17adb5d4d9792b699a571dc32825ad4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:54 np0005532048 podman[313058]: 2025-11-22 09:18:54.988271037 +0000 UTC m=+0.402495861 container init 117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 04:18:54 np0005532048 podman[313058]: 2025-11-22 09:18:54.996800595 +0000 UTC m=+0.411025439 container start 117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 04:18:55 np0005532048 podman[313058]: 2025-11-22 09:18:55.008533859 +0000 UTC m=+0.422758813 container attach 117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.140 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.362 253665 DEBUG nova.network.neutron [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Updating instance_info_cache with network_info: [{"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:18:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 215 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 198 op/s
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.404 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Releasing lock "refresh_cache-2f0d9dce-1900-41c4-9b69-7e46f34dde81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.405 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance network_info: |[{"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.406 253665 DEBUG oslo_concurrency.lockutils [req-a90f7450-c813-4eff-a5da-c23b3b8e6337 req-6989ae13-9c3b-4dae-b8cd-412cfb9eaf1f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2f0d9dce-1900-41c4-9b69-7e46f34dde81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.406 253665 DEBUG nova.network.neutron [req-a90f7450-c813-4eff-a5da-c23b3b8e6337 req-6989ae13-9c3b-4dae-b8cd-412cfb9eaf1f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Refreshing network info cache for port 8d030734-0e50-4fca-a432-cc2d1c2c9dea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.410 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Start _get_guest_xml network_info=[{"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.416 253665 WARNING nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.426 253665 DEBUG nova.virt.libvirt.host [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.427 253665 DEBUG nova.virt.libvirt.host [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.432 253665 DEBUG nova.virt.libvirt.host [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.433 253665 DEBUG nova.virt.libvirt.host [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.433 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.434 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.434 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.434 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.435 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.435 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.435 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.435 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.435 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.436 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.436 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.436 253665 DEBUG nova.virt.hardware [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.440 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.490 253665 DEBUG nova.virt.libvirt.driver [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.800 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.802 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.820 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.839 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "b4a5932d-6547-4c01-9c71-0907c65247a1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.840 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.862 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]: {
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:    "0": [
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:        {
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "devices": [
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "/dev/loop3"
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            ],
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "lv_name": "ceph_lv0",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "lv_size": "21470642176",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "name": "ceph_lv0",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "tags": {
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.cluster_name": "ceph",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.crush_device_class": "",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.encrypted": "0",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.osd_id": "0",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.type": "block",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.vdo": "0"
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            },
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "type": "block",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "vg_name": "ceph_vg0"
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:        }
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:    ],
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:    "1": [
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:        {
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "devices": [
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "/dev/loop4"
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            ],
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "lv_name": "ceph_lv1",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "lv_size": "21470642176",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "name": "ceph_lv1",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "tags": {
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.cluster_name": "ceph",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.crush_device_class": "",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.encrypted": "0",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.osd_id": "1",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.type": "block",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.vdo": "0"
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            },
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "type": "block",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "vg_name": "ceph_vg1"
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:        }
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:    ],
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:    "2": [
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:        {
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "devices": [
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "/dev/loop5"
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            ],
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "lv_name": "ceph_lv2",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "lv_size": "21470642176",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "name": "ceph_lv2",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "tags": {
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.cluster_name": "ceph",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.crush_device_class": "",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.encrypted": "0",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.osd_id": "2",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.type": "block",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:                "ceph.vdo": "0"
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            },
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "type": "block",
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:            "vg_name": "ceph_vg2"
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:        }
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]:    ]
Nov 22 04:18:55 np0005532048 cool_hypatia[313075]: }
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.878 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "64142c1c-95e0-4db4-b743-bb94c85a208f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.878 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.893 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.894 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:55 np0005532048 systemd[1]: libpod-117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb.scope: Deactivated successfully.
Nov 22 04:18:55 np0005532048 podman[313058]: 2025-11-22 09:18:55.897697564 +0000 UTC m=+1.311922378 container died 117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.901 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.901 253665 INFO nova.compute.claims [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.904 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:18:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:18:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2870768261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:18:55 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a4508037bd160a237e2ec1fd77f68806f17adb5d4d9792b699a571dc32825ad4-merged.mount: Deactivated successfully.
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.942 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.960 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:55 np0005532048 podman[313058]: 2025-11-22 09:18:55.965522363 +0000 UTC m=+1.379747177 container remove 117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hypatia, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 04:18:55 np0005532048 systemd[1]: libpod-conmon-117202fbc46d26ed523725caf4e19a9c775df8490c70fdb9c94b663628cdfdeb.scope: Deactivated successfully.
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.993 253665 DEBUG nova.storage.rbd_utils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:55 np0005532048 nova_compute[253661]: 2025-11-22 09:18:55.996 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:56 np0005532048 podman[313112]: 2025-11-22 09:18:56.014823761 +0000 UTC m=+0.088584383 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 04:18:56 np0005532048 podman[313105]: 2025-11-22 09:18:56.047529355 +0000 UTC m=+0.124320851 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.059 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.180 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:18:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4128944253' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.440 253665 DEBUG nova.network.neutron [req-a90f7450-c813-4eff-a5da-c23b3b8e6337 req-6989ae13-9c3b-4dae-b8cd-412cfb9eaf1f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Updated VIF entry in instance network info cache for port 8d030734-0e50-4fca-a432-cc2d1c2c9dea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.441 253665 DEBUG nova.network.neutron [req-a90f7450-c813-4eff-a5da-c23b3b8e6337 req-6989ae13-9c3b-4dae-b8cd-412cfb9eaf1f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Updating instance_info_cache with network_info: [{"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.457 253665 DEBUG oslo_concurrency.lockutils [req-a90f7450-c813-4eff-a5da-c23b3b8e6337 req-6989ae13-9c3b-4dae-b8cd-412cfb9eaf1f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2f0d9dce-1900-41c4-9b69-7e46f34dde81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.470 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.472 253665 DEBUG nova.virt.libvirt.vif [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1545163364',display_name='tempest-DeleteServersTestJSON-server-1545163364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1545163364',id=53,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-0l9kcni8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:50Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=2f0d9dce-1900-41c4-9b69-7e46f34dde81,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.473 253665 DEBUG nova.network.os_vif_util [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.476 253665 DEBUG nova.network.os_vif_util [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:66:f7,bridge_name='br-int',has_traffic_filtering=True,id=8d030734-0e50-4fca-a432-cc2d1c2c9dea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d030734-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.478 253665 DEBUG nova.objects.instance [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f0d9dce-1900-41c4-9b69-7e46f34dde81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.493 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  <uuid>2f0d9dce-1900-41c4-9b69-7e46f34dde81</uuid>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  <name>instance-00000035</name>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <nova:name>tempest-DeleteServersTestJSON-server-1545163364</nova:name>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:18:55</nova:creationTime>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:18:56 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:        <nova:user uuid="790eaa89f1a74325b81291d8beca6d38">tempest-DeleteServersTestJSON-487469072-project-member</nova:user>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:        <nova:project uuid="d4fe4f74353442a9a8042d29dcf6274e">tempest-DeleteServersTestJSON-487469072</nova:project>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:        <nova:port uuid="8d030734-0e50-4fca-a432-cc2d1c2c9dea">
Nov 22 04:18:56 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <entry name="serial">2f0d9dce-1900-41c4-9b69-7e46f34dde81</entry>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <entry name="uuid">2f0d9dce-1900-41c4-9b69-7e46f34dde81</entry>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk">
Nov 22 04:18:56 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:18:56 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk.config">
Nov 22 04:18:56 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:18:56 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:8b:66:f7"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <target dev="tap8d030734-0e"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81/console.log" append="off"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:18:56 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:18:56 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:18:56 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:18:56 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.494 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Preparing to wait for external event network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.494 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.494 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.494 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.495 253665 DEBUG nova.virt.libvirt.vif [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1545163364',display_name='tempest-DeleteServersTestJSON-server-1545163364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1545163364',id=53,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-0l9kcni8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:50Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=2f0d9dce-1900-41c4-9b69-7e46f34dde81,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.496 253665 DEBUG nova.network.os_vif_util [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.496 253665 DEBUG nova.network.os_vif_util [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:66:f7,bridge_name='br-int',has_traffic_filtering=True,id=8d030734-0e50-4fca-a432-cc2d1c2c9dea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d030734-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.497 253665 DEBUG os_vif [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:66:f7,bridge_name='br-int',has_traffic_filtering=True,id=8d030734-0e50-4fca-a432-cc2d1c2c9dea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d030734-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.500 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.500 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.501 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.511 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.512 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d030734-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.512 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d030734-0e, col_values=(('external_ids', {'iface-id': '8d030734-0e50-4fca-a432-cc2d1c2c9dea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:66:f7', 'vm-uuid': '2f0d9dce-1900-41c4-9b69-7e46f34dde81'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:56 np0005532048 NetworkManager[48920]: <info>  [1763803136.5155] manager: (tap8d030734-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.518 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.526 253665 INFO os_vif [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:66:f7,bridge_name='br-int',has_traffic_filtering=True,id=8d030734-0e50-4fca-a432-cc2d1c2c9dea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d030734-0e')#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.595 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.595 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.595 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No VIF found with MAC fa:16:3e:8b:66:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.596 253665 INFO nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Using config drive#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.620 253665 DEBUG nova.storage.rbd_utils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:18:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2296007028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.673 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.681 253665 DEBUG nova.compute.provider_tree [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.693 253665 DEBUG nova.scheduler.client.report [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:18:56 np0005532048 podman[313358]: 2025-11-22 09:18:56.613708333 +0000 UTC m=+0.025123421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.713 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.714 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.716 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.727 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.728 253665 INFO nova.compute.claims [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:18:56 np0005532048 podman[313358]: 2025-11-22 09:18:56.731642388 +0000 UTC m=+0.143057456 container create c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 04:18:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:56 np0005532048 systemd[1]: Started libpod-conmon-c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da.scope.
Nov 22 04:18:56 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:18:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:56.894 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.917 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.918 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.935 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:18:56 np0005532048 nova_compute[253661]: 2025-11-22 09:18:56.948 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:18:56 np0005532048 podman[313358]: 2025-11-22 09:18:56.979913591 +0000 UTC m=+0.391328679 container init c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:18:56 np0005532048 podman[313358]: 2025-11-22 09:18:56.987994337 +0000 UTC m=+0.399409405 container start c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 04:18:56 np0005532048 confident_morse[313393]: 167 167
Nov 22 04:18:56 np0005532048 systemd[1]: libpod-c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da.scope: Deactivated successfully.
Nov 22 04:18:57 np0005532048 conmon[313393]: conmon c9d82b0649d181c741f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da.scope/container/memory.events
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.029 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.030 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.030 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Creating image(s)
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.065 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:18:57 np0005532048 podman[313358]: 2025-11-22 09:18:57.081009278 +0000 UTC m=+0.492424376 container attach c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:18:57 np0005532048 podman[313358]: 2025-11-22 09:18:57.081547001 +0000 UTC m=+0.492962089 container died c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.093 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.114 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.117 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.167 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.212 253665 DEBUG nova.policy [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5f9c3cac3ab4d74a7aeffd50c07da03', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1d5505406f294eb4a17d4137cad567f1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.213 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.214 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.215 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.215 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.239 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.245 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay-466c37357c3b2bcca86c1007c60932cd1431a4328f3f0957c3c471c86fc71f50-merged.mount: Deactivated successfully.
Nov 22 04:18:57 np0005532048 podman[313358]: 2025-11-22 09:18:57.304828485 +0000 UTC m=+0.716243553 container remove c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_morse, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 04:18:57 np0005532048 systemd[1]: libpod-conmon-c9d82b0649d181c741f4d3e62dd8b12de58a627eafea1a31b59efa5748fb79da.scope: Deactivated successfully.
Nov 22 04:18:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 215 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 169 op/s
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.422 253665 INFO nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Creating config drive at /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81/disk.config
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.427 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4n7pran3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:57 np0005532048 podman[313532]: 2025-11-22 09:18:57.557992387 +0000 UTC m=+0.082812323 container create f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.566 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4n7pran3" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.596 253665 DEBUG nova.storage.rbd_utils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:18:57 np0005532048 podman[313532]: 2025-11-22 09:18:57.505759808 +0000 UTC m=+0.030579794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.602 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81/disk.config 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:57 np0005532048 systemd[1]: Started libpod-conmon-f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86.scope.
Nov 22 04:18:57 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:18:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:18:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1505920369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.677 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ece057e0313525934cca61451493e13d908f4cc1d68032aa6594ca8b6e3352b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ece057e0313525934cca61451493e13d908f4cc1d68032aa6594ca8b6e3352b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ece057e0313525934cca61451493e13d908f4cc1d68032aa6594ca8b6e3352b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ece057e0313525934cca61451493e13d908f4cc1d68032aa6594ca8b6e3352b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:57 np0005532048 podman[313532]: 2025-11-22 09:18:57.724442932 +0000 UTC m=+0.249262868 container init f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.725 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:57 np0005532048 podman[313532]: 2025-11-22 09:18:57.735188803 +0000 UTC m=+0.260008739 container start f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 04:18:57 np0005532048 podman[313532]: 2025-11-22 09:18:57.748473915 +0000 UTC m=+0.273293851 container attach f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.771 253665 DEBUG nova.compute.provider_tree [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.776 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] resizing rbd image 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.851 253665 DEBUG nova.scheduler.client.report [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.872 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.873 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.876 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.884 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.885 253665 INFO nova.compute.claims [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:18:57 np0005532048 kernel: tapa288a5e5-7b (unregistering): left promiscuous mode
Nov 22 04:18:57 np0005532048 NetworkManager[48920]: <info>  [1763803137.9353] device (tapa288a5e5-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.938 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.939 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:18:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:57Z|00496|binding|INFO|Releasing lport a288a5e5-7b57-4be8-9617-3271ea1e210f from this chassis (sb_readonly=0)
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.946 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:57Z|00497|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f down in Southbound
Nov 22 04:18:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:57Z|00498|binding|INFO|Removing iface tapa288a5e5-7b ovn-installed in OVS
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.958 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:18:57 np0005532048 nova_compute[253661]: 2025-11-22 09:18:57.967 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:57.969 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:18:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:57.970 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 unbound from our chassis
Nov 22 04:18:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:57.972 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ebc42408-7b03-480c-a016-1e5bb2ebcc93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:18:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:57.974 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ddba8c-2ad8-4df5-9617-0687271f9cb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:57.976 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace which is not needed anymore
Nov 22 04:18:57 np0005532048 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 22 04:18:57 np0005532048 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000032.scope: Consumed 13.967s CPU time.
Nov 22 04:18:57 np0005532048 systemd-machined[215941]: Machine qemu-58-instance-00000032 terminated.
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.023 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.025 253665 DEBUG oslo_concurrency.processutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81/disk.config 2f0d9dce-1900-41c4-9b69-7e46f34dde81_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.026 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.029 253665 INFO nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Deleting local config drive /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81/disk.config because it was imported into RBD.#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.047 253665 DEBUG nova.objects.instance [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'migration_context' on Instance uuid 045614f9-cfb4-4a52-996e-e880cbdf7dcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.068 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.068 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Ensure instance console log exists: /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.069 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.069 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.069 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:58 np0005532048 NetworkManager[48920]: <info>  [1763803138.0911] manager: (tap8d030734-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Nov 22 04:18:58 np0005532048 kernel: tap8d030734-0e: entered promiscuous mode
Nov 22 04:18:58 np0005532048 systemd-udevd[313656]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:18:58 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:58Z|00499|binding|INFO|Claiming lport 8d030734-0e50-4fca-a432-cc2d1c2c9dea for this chassis.
Nov 22 04:18:58 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:58Z|00500|binding|INFO|8d030734-0e50-4fca-a432-cc2d1c2c9dea: Claiming fa:16:3e:8b:66:f7 10.100.0.10
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.101 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.108 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:66:f7 10.100.0.10'], port_security=['fa:16:3e:8b:66:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2f0d9dce-1900-41c4-9b69-7e46f34dde81', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8d030734-0e50-4fca-a432-cc2d1c2c9dea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:18:58 np0005532048 NetworkManager[48920]: <info>  [1763803138.1103] device (tap8d030734-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:18:58 np0005532048 NetworkManager[48920]: <info>  [1763803138.1114] device (tap8d030734-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.121 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.123 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.123 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Creating image(s)#033[00m
Nov 22 04:18:58 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:58Z|00501|binding|INFO|Setting lport 8d030734-0e50-4fca-a432-cc2d1c2c9dea ovn-installed in OVS
Nov 22 04:18:58 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:58Z|00502|binding|INFO|Setting lport 8d030734-0e50-4fca-a432-cc2d1c2c9dea up in Southbound
Nov 22 04:18:58 np0005532048 systemd-machined[215941]: New machine qemu-60-instance-00000035.
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.150 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image b4a5932d-6547-4c01-9c71-0907c65247a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:58 np0005532048 systemd[1]: Started Virtual Machine qemu-60-instance-00000035.
Nov 22 04:18:58 np0005532048 NetworkManager[48920]: <info>  [1763803138.1631] manager: (tapa288a5e5-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/227)
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.195 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image b4a5932d-6547-4c01-9c71-0907c65247a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.254 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image b4a5932d-6547-4c01-9c71-0907c65247a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:58 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[311811]: [NOTICE]   (311834) : haproxy version is 2.8.14-c23fe91
Nov 22 04:18:58 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[311811]: [NOTICE]   (311834) : path to executable is /usr/sbin/haproxy
Nov 22 04:18:58 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[311811]: [WARNING]  (311834) : Exiting Master process...
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.259 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:58 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[311811]: [ALERT]    (311834) : Current worker (311836) exited with code 143 (Terminated)
Nov 22 04:18:58 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[311811]: [WARNING]  (311834) : All workers exited. Exiting... (0)
Nov 22 04:18:58 np0005532048 systemd[1]: libpod-3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb.scope: Deactivated successfully.
Nov 22 04:18:58 np0005532048 podman[313702]: 2025-11-22 09:18:58.270610813 +0000 UTC m=+0.176107801 container died 3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.303 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.310 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Successfully created port: 88cfebb7-b545-4137-8094-3fa68a13f42b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.313 253665 DEBUG nova.policy [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5f9c3cac3ab4d74a7aeffd50c07da03', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1d5505406f294eb4a17d4137cad567f1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.356 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.356 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.357 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.357 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.381 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image b4a5932d-6547-4c01-9c71-0907c65247a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.384 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b4a5932d-6547-4c01-9c71-0907c65247a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.418 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb-userdata-shm.mount: Deactivated successfully.
Nov 22 04:18:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay-999d845ec8bd02cd92c7e0cd63eb74e8c346a62a6f4728db53982c6a161b0456-merged.mount: Deactivated successfully.
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.482 253665 DEBUG nova.compute.manager [req-504cd77f-910a-4ff7-925f-915f34c7c5f0 req-1bfe2051-46f4-47f0-84d3-49c76c22dca2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received event network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.483 253665 DEBUG oslo_concurrency.lockutils [req-504cd77f-910a-4ff7-925f-915f34c7c5f0 req-1bfe2051-46f4-47f0-84d3-49c76c22dca2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.484 253665 DEBUG oslo_concurrency.lockutils [req-504cd77f-910a-4ff7-925f-915f34c7c5f0 req-1bfe2051-46f4-47f0-84d3-49c76c22dca2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.484 253665 DEBUG oslo_concurrency.lockutils [req-504cd77f-910a-4ff7-925f-915f34c7c5f0 req-1bfe2051-46f4-47f0-84d3-49c76c22dca2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.485 253665 DEBUG nova.compute.manager [req-504cd77f-910a-4ff7-925f-915f34c7c5f0 req-1bfe2051-46f4-47f0-84d3-49c76c22dca2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Processing event network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.528 253665 INFO nova.virt.libvirt.driver [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance shutdown successfully after 13 seconds.#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.535 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance destroyed successfully.#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.535 253665 DEBUG nova.objects.instance [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'numa_topology' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.541 253665 DEBUG nova.compute.manager [req-e27c7805-c370-41e1-9791-f63aca69fb17 req-2ff2b9b9-0610-439f-9c14-985b67c93522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.541 253665 DEBUG oslo_concurrency.lockutils [req-e27c7805-c370-41e1-9791-f63aca69fb17 req-2ff2b9b9-0610-439f-9c14-985b67c93522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.542 253665 DEBUG oslo_concurrency.lockutils [req-e27c7805-c370-41e1-9791-f63aca69fb17 req-2ff2b9b9-0610-439f-9c14-985b67c93522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.542 253665 DEBUG oslo_concurrency.lockutils [req-e27c7805-c370-41e1-9791-f63aca69fb17 req-2ff2b9b9-0610-439f-9c14-985b67c93522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.542 253665 DEBUG nova.compute.manager [req-e27c7805-c370-41e1-9791-f63aca69fb17 req-2ff2b9b9-0610-439f-9c14-985b67c93522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.542 253665 WARNING nova.compute.manager [req-e27c7805-c370-41e1-9791-f63aca69fb17 req-2ff2b9b9-0610-439f-9c14-985b67c93522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state powering-off.#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.546 253665 DEBUG nova.compute.manager [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.602 253665 DEBUG oslo_concurrency.lockutils [None req-4a230986-00b4-4cb1-ab08-3bf7510c5ad9 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:58 np0005532048 podman[313702]: 2025-11-22 09:18:58.636857742 +0000 UTC m=+0.542354730 container cleanup 3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 04:18:58 np0005532048 systemd[1]: libpod-conmon-3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb.scope: Deactivated successfully.
Nov 22 04:18:58 np0005532048 podman[313893]: 2025-11-22 09:18:58.776943476 +0000 UTC m=+0.104408778 container remove 3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.785 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2b90f6c1-a5fe-463f-b50f-24082167ae74]: (4, ('Sat Nov 22 09:18:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb)\n3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb\nSat Nov 22 09:18:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb)\n3f844f5387522ea1ce8e976e9422e9722c3b7a8efeeee40b5e6b61efd66a3bcb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.787 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dc38c6f0-df94-4148-be72-f729aa824071]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.789 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.793 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:58 np0005532048 kernel: tapebc42408-70: left promiscuous mode
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.810 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.814 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.817 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4f53e97a-a061-49f0-8541-69a189482bde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.835 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2e4fdb-c152-420f-b222-f2201a613308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.840 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5d0ca317-5999-492b-a57b-eff472fb05e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.862 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[11bf2405-f3d9-4c0c-9e34-22c4459104c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596505, 'reachable_time': 26711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313956, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.868 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:18:58 np0005532048 systemd[1]: run-netns-ovnmeta\x2debc42408\x2d7b03\x2d480c\x2da016\x2d1e5bb2ebcc93.mount: Deactivated successfully.
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]: {
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:        "osd_id": 1,
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:        "type": "bluestore"
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:    },
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:        "osd_id": 0,
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:        "type": "bluestore"
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:    },
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:        "osd_id": 2,
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:        "type": "bluestore"
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]:    }
Nov 22 04:18:58 np0005532048 gracious_shockley[313574]: }
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.868 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[30faf1ef-6170-4868-b35d-ed8e114a3764]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.873 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8d030734-0e50-4fca-a432-cc2d1c2c9dea in datapath d93e3720-b00d-41f5-8283-164e9f857d24 unbound from our chassis#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.874 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d93e3720-b00d-41f5-8283-164e9f857d24#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.886 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803138.8864884, 2f0d9dce-1900-41c4-9b69-7e46f34dde81 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.887 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] VM Started (Lifecycle Event)#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.888 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[155405cf-84d8-466d-8ab3-f886f483f7a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.890 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd93e3720-b1 in ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.890 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.895 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.897 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd93e3720-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.897 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7455d4d9-cd11-4393-8bb6-7ccdf5a3359c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.898 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[850fd33b-37ce-4c1e-92c3-dbdb5ee07ccf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.899 253665 INFO nova.virt.libvirt.driver [-] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance spawned successfully.#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.899 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.904 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.912 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[36bac71d-da25-4006-8b2c-305a8664ec3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:58 np0005532048 systemd[1]: libpod-f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86.scope: Deactivated successfully.
Nov 22 04:18:58 np0005532048 systemd[1]: libpod-f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86.scope: Consumed 1.108s CPU time.
Nov 22 04:18:58 np0005532048 podman[313532]: 2025-11-22 09:18:58.92155905 +0000 UTC m=+1.446379006 container died f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.924 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.933 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.934 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.934 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.935 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.935 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.936 253665 DEBUG nova.virt.libvirt.driver [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.943 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7d8c28-8713-4e24-82a3-f3f49166fe30]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.944 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.945 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803138.889366, 2f0d9dce-1900-41c4-9b69-7e46f34dde81 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.945 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:18:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:18:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3604372826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.981 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.990 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7c328baa-5c59-4c27-a3f2-8e508e5b8d88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:58.997 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8a853816-7985-4f66-bcc8-fbff13437645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:58 np0005532048 NetworkManager[48920]: <info>  [1763803138.9991] manager: (tapd93e3720-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/228)
Nov 22 04:18:58 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.998 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803138.8954682, 2f0d9dce-1900-41c4-9b69-7e46f34dde81 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:58.999 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.008 253665 INFO nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Took 8.51 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.009 253665 DEBUG nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:59 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ece057e0313525934cca61451493e13d908f4cc1d68032aa6594ca8b6e3352b1-merged.mount: Deactivated successfully.
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.025 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.038 253665 DEBUG nova.compute.provider_tree [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.041 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.045 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.048 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[599d8a5d-d113-4112-a728-474265a63ad9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.052 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[696fa02a-9cf4-43d9-8738-133ae90a599a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.074 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:18:59 np0005532048 NetworkManager[48920]: <info>  [1763803139.0767] device (tapd93e3720-b0): carrier: link connected
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.080 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[cd646734-8466-4551-9280-04395a669a9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.082 253665 DEBUG nova.scheduler.client.report [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.107 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[83da51f2-9a06-426b-95de-715593823679]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598981, 'reachable_time': 18964, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314002, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.111 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.112 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.116 253665 INFO nova.compute.manager [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Took 9.53 seconds to build instance.#033[00m
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.126 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d198bb59-be08-4456-8450-42d48e9c23a0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:9b56'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598981, 'tstamp': 598981}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314003, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:59 np0005532048 podman[313532]: 2025-11-22 09:18:59.135095599 +0000 UTC m=+1.659915545 container remove f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.147 253665 DEBUG oslo_concurrency.lockutils [None req-e66b2126-c5d9-4c90-9429-6d880b5f1edf 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:59 np0005532048 systemd[1]: libpod-conmon-f42c710597a2440a95d2bc86b6dd6809451a3e320b963fb253c4028f2ceb4b86.scope: Deactivated successfully.
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.158 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b53f76-f0e0-4294-9655-4b98015790f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598981, 'reachable_time': 18964, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314004, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.169 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.170 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:18:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:18:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:18:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.194 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.203 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b4a5932d-6547-4c01-9c71-0907c65247a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.819s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:18:59 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c4cb6b3d-54cc-4daa-8069-dd830fd9885f does not exist
Nov 22 04:18:59 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev cb2f250d-18c4-4c34-bbd0-6d089fb7d7d9 does not exist
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.211 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f7ce6fcc-468d-4ba4-a978-ca86cf26ebee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.246 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.315 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c13f8f-c7cb-4395-9789-676430f0906a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.318 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.318 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.319 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd93e3720-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:59 np0005532048 NetworkManager[48920]: <info>  [1763803139.3221] manager: (tapd93e3720-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Nov 22 04:18:59 np0005532048 kernel: tapd93e3720-b0: entered promiscuous mode
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.324 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.330 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd93e3720-b0, col_values=(('external_ids', {'iface-id': '956ab441-c5ef-4e3d-a7c6-6129a5260345'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:18:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:18:59Z|00503|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.353 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] resizing rbd image b4a5932d-6547-4c01-9c71-0907c65247a1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.355 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.356 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf0bc01-8527-43c8-9f1d-6850fbfd3b57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.358 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:18:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:18:59.359 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'env', 'PROCESS_TAG=haproxy-d93e3720-b00d-41f5-8283-164e9f857d24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d93e3720-b00d-41f5-8283-164e9f857d24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:18:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 249 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 118 op/s
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.400 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.403 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.404 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.404 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Creating image(s)#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.430 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 64142c1c-95e0-4db4-b743-bb94c85a208f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.464 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 64142c1c-95e0-4db4-b743-bb94c85a208f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.495 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 64142c1c-95e0-4db4-b743-bb94c85a208f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.503 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.567 253665 DEBUG nova.objects.instance [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'flavor' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.580 253665 DEBUG nova.policy [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5f9c3cac3ab4d74a7aeffd50c07da03', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1d5505406f294eb4a17d4137cad567f1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.628 253665 DEBUG oslo_concurrency.lockutils [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.629 253665 DEBUG oslo_concurrency.lockutils [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquired lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.629 253665 DEBUG nova.network.neutron [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.629 253665 DEBUG nova.objects.instance [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'info_cache' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.631 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.632 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.632 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.633 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.655 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 64142c1c-95e0-4db4-b743-bb94c85a208f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.661 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 64142c1c-95e0-4db4-b743-bb94c85a208f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.707 253665 DEBUG nova.objects.instance [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'migration_context' on Instance uuid b4a5932d-6547-4c01-9c71-0907c65247a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.726 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.727 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Ensure instance console log exists: /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.727 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.728 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:18:59 np0005532048 nova_compute[253661]: 2025-11-22 09:18:59.728 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:18:59 np0005532048 podman[314250]: 2025-11-22 09:18:59.816792013 +0000 UTC m=+0.063674049 container create 0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 04:18:59 np0005532048 systemd[1]: Started libpod-conmon-0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19.scope.
Nov 22 04:18:59 np0005532048 podman[314250]: 2025-11-22 09:18:59.778163514 +0000 UTC m=+0.025045550 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:18:59 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:18:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e78995b3caad89fb07d376941b58acfb371f548ff3ea9aaf066112a811b999c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:59 np0005532048 podman[314250]: 2025-11-22 09:18:59.951095995 +0000 UTC m=+0.197978041 container init 0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:18:59 np0005532048 podman[314266]: 2025-11-22 09:18:59.957536682 +0000 UTC m=+0.102694426 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:18:59 np0005532048 podman[314250]: 2025-11-22 09:18:59.958721941 +0000 UTC m=+0.205603977 container start 0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:18:59 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[314275]: [NOTICE]   (314299) : New worker (314301) forked
Nov 22 04:18:59 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[314275]: [NOTICE]   (314299) : Loading success.
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.053 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 64142c1c-95e0-4db4-b743-bb94c85a208f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.114 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] resizing rbd image 64142c1c-95e0-4db4-b743-bb94c85a208f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.149 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:19:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.247 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Successfully created port: 7612be10-c22f-4d60-89f7-232e865b6524 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.252 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Successfully updated port: 88cfebb7-b545-4137-8094-3fa68a13f42b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.261 253665 DEBUG nova.objects.instance [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'migration_context' on Instance uuid 64142c1c-95e0-4db4-b743-bb94c85a208f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.271 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "refresh_cache-045614f9-cfb4-4a52-996e-e880cbdf7dcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.272 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquired lock "refresh_cache-045614f9-cfb4-4a52-996e-e880cbdf7dcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.272 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.274 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.274 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Ensure instance console log exists: /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.274 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.275 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.275 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.464 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.893 253665 DEBUG nova.compute.manager [req-956c52e3-790d-41ff-8377-ab07922cec71 req-eeb0333b-6724-4653-b5a6-11c121f79da6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received event network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.894 253665 DEBUG oslo_concurrency.lockutils [req-956c52e3-790d-41ff-8377-ab07922cec71 req-eeb0333b-6724-4653-b5a6-11c121f79da6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.894 253665 DEBUG oslo_concurrency.lockutils [req-956c52e3-790d-41ff-8377-ab07922cec71 req-eeb0333b-6724-4653-b5a6-11c121f79da6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.894 253665 DEBUG oslo_concurrency.lockutils [req-956c52e3-790d-41ff-8377-ab07922cec71 req-eeb0333b-6724-4653-b5a6-11c121f79da6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.895 253665 DEBUG nova.compute.manager [req-956c52e3-790d-41ff-8377-ab07922cec71 req-eeb0333b-6724-4653-b5a6-11c121f79da6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] No waiting events found dispatching network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.895 253665 WARNING nova.compute.manager [req-956c52e3-790d-41ff-8377-ab07922cec71 req-eeb0333b-6724-4653-b5a6-11c121f79da6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received unexpected event network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea for instance with vm_state active and task_state None.#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.952 253665 DEBUG nova.compute.manager [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.953 253665 DEBUG oslo_concurrency.lockutils [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.953 253665 DEBUG oslo_concurrency.lockutils [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.954 253665 DEBUG oslo_concurrency.lockutils [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.954 253665 DEBUG nova.compute.manager [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.955 253665 WARNING nova.compute.manager [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.955 253665 DEBUG nova.compute.manager [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received event network-changed-88cfebb7-b545-4137-8094-3fa68a13f42b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.955 253665 DEBUG nova.compute.manager [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Refreshing instance network info cache due to event network-changed-88cfebb7-b545-4137-8094-3fa68a13f42b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:19:00 np0005532048 nova_compute[253661]: 2025-11-22 09:19:00.956 253665 DEBUG oslo_concurrency.lockutils [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-045614f9-cfb4-4a52-996e-e880cbdf7dcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.056 253665 DEBUG oslo_concurrency.lockutils [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.056 253665 DEBUG oslo_concurrency.lockutils [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.057 253665 DEBUG nova.compute.manager [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.059 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Successfully created port: 11f926cd-f731-4de9-861e-5842f91f48df _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.064 253665 DEBUG nova.compute.manager [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.065 253665 DEBUG nova.objects.instance [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'flavor' on Instance uuid 2f0d9dce-1900-41c4-9b69-7e46f34dde81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.086 253665 DEBUG nova.virt.libvirt.driver [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.287 253665 DEBUG nova.network.neutron [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:01Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f1:f2:e5 10.100.0.3
Nov 22 04:19:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:01Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f1:f2:e5 10.100.0.3
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.318 253665 DEBUG oslo_concurrency.lockutils [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Releasing lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.344 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance destroyed successfully.#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.344 253665 DEBUG nova.objects.instance [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'numa_topology' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.356 253665 DEBUG nova.objects.instance [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'resources' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.367 253665 DEBUG nova.virt.libvirt.vif [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.370 253665 DEBUG nova.network.os_vif_util [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.371 253665 DEBUG nova.network.os_vif_util [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.371 253665 DEBUG os_vif [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.373 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.374 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa288a5e5-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.376 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.378 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.381 253665 INFO os_vif [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')#033[00m
Nov 22 04:19:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 249 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 118 op/s
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.389 253665 DEBUG nova.virt.libvirt.driver [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Start _get_guest_xml network_info=[{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.394 253665 WARNING nova.virt.libvirt.driver [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.398 253665 DEBUG nova.virt.libvirt.host [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.399 253665 DEBUG nova.virt.libvirt.host [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.402 253665 DEBUG nova.virt.libvirt.host [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.402 253665 DEBUG nova.virt.libvirt.host [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.403 253665 DEBUG nova.virt.libvirt.driver [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.403 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.403 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.404 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.404 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.404 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.404 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.405 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.405 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.405 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.405 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.406 253665 DEBUG nova.virt.hardware [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.406 253665 DEBUG nova.objects.instance [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.418 253665 DEBUG oslo_concurrency.processutils [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.780 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Successfully updated port: 7612be10-c22f-4d60-89f7-232e865b6524 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:19:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.793 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Updating instance_info_cache with network_info: [{"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.795 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "refresh_cache-b4a5932d-6547-4c01-9c71-0907c65247a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.796 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquired lock "refresh_cache-b4a5932d-6547-4c01-9c71-0907c65247a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.796 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.824 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Releasing lock "refresh_cache-045614f9-cfb4-4a52-996e-e880cbdf7dcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.825 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Instance network_info: |[{"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.825 253665 DEBUG oslo_concurrency.lockutils [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-045614f9-cfb4-4a52-996e-e880cbdf7dcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.825 253665 DEBUG nova.network.neutron [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Refreshing network info cache for port 88cfebb7-b545-4137-8094-3fa68a13f42b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.829 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Start _get_guest_xml network_info=[{"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.834 253665 WARNING nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.840 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.840 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.847 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.847 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.848 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.848 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.848 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.849 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.849 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.849 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.849 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.849 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.849 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.849 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.850 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.850 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.853 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3084654943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.939 253665 DEBUG oslo_concurrency.processutils [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:01 np0005532048 nova_compute[253661]: 2025-11-22 09:19:01.971 253665 DEBUG oslo_concurrency.processutils [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.219 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:19:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3083887834' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.366 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.419 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439856252' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.427 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.486 253665 DEBUG oslo_concurrency.processutils [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.488 253665 DEBUG nova.virt.libvirt.vif [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.488 253665 DEBUG nova.network.os_vif_util [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.489 253665 DEBUG nova.network.os_vif_util [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.491 253665 DEBUG nova.objects.instance [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.505 253665 DEBUG nova.virt.libvirt.driver [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <uuid>636b1046-fff8-4a45-8a14-04010b2f282e</uuid>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <name>instance-00000032</name>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerActionsTestJSON-server-149918095</nova:name>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:19:01</nova:creationTime>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <nova:user uuid="559fd7e00a0a468797efe4955caffc4a">tempest-ServerActionsTestJSON-1918756964-project-member</nova:user>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <nova:project uuid="d9601c2d2b97440483ffc0bf4f598e73">tempest-ServerActionsTestJSON-1918756964</nova:project>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <nova:port uuid="a288a5e5-7b57-4be8-9617-3271ea1e210f">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <entry name="serial">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <entry name="uuid">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk.config">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:70:38:8e"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <target dev="tapa288a5e5-7b"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/console.log" append="off"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <input type="keyboard" bus="usb"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:19:02 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:19:02 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.506 253665 DEBUG nova.virt.libvirt.driver [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.506 253665 DEBUG nova.virt.libvirt.driver [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.506 253665 DEBUG nova.virt.libvirt.vif [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.507 253665 DEBUG nova.network.os_vif_util [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.507 253665 DEBUG nova.network.os_vif_util [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.508 253665 DEBUG os_vif [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.508 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.508 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.509 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.513 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.514 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa288a5e5-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.514 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa288a5e5-7b, col_values=(('external_ids', {'iface-id': 'a288a5e5-7b57-4be8-9617-3271ea1e210f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:38:8e', 'vm-uuid': '636b1046-fff8-4a45-8a14-04010b2f282e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:02 np0005532048 NetworkManager[48920]: <info>  [1763803142.5174] manager: (tapa288a5e5-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.523 253665 INFO os_vif [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')#033[00m
Nov 22 04:19:02 np0005532048 NetworkManager[48920]: <info>  [1763803142.6243] manager: (tapa288a5e5-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/231)
Nov 22 04:19:02 np0005532048 kernel: tapa288a5e5-7b: entered promiscuous mode
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.629 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:02Z|00504|binding|INFO|Claiming lport a288a5e5-7b57-4be8-9617-3271ea1e210f for this chassis.
Nov 22 04:19:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:02Z|00505|binding|INFO|a288a5e5-7b57-4be8-9617-3271ea1e210f: Claiming fa:16:3e:70:38:8e 10.100.0.4
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.641 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.643 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 bound to our chassis#033[00m
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.645 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebc42408-7b03-480c-a016-1e5bb2ebcc93#033[00m
Nov 22 04:19:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:02Z|00506|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f ovn-installed in OVS
Nov 22 04:19:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:02Z|00507|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f up in Southbound
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:19:02 np0005532048 systemd-udevd[314518]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.662 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73a1d481-710c-4ab7-b190-2635f3ed966e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.663 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapebc42408-71 in ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.666 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapebc42408-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017137695757634954 of space, bias 1.0, pg target 0.5141308727290487 quantized to 32 (current 32)
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.666 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe0a4bb-b5c8-41c6-b43c-f9e236a184e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.668 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf30bf4-a84b-463a-a962-17bfdc946616]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:02 np0005532048 NetworkManager[48920]: <info>  [1763803142.6725] device (tapa288a5e5-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:19:02 np0005532048 NetworkManager[48920]: <info>  [1763803142.6735] device (tapa288a5e5-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:19:02 np0005532048 systemd-machined[215941]: New machine qemu-61-instance-00000032.
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.682 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[4b29ee8a-8f8e-4cf4-b699-dec17c456df5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:02 np0005532048 systemd[1]: Started Virtual Machine qemu-61-instance-00000032.
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.705 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f9fca989-5518-4458-978e-7277a5cd745c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.744 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Successfully updated port: 11f926cd-f731-4de9-861e-5842f91f48df _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.746 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[438b3c6a-80bd-4b2a-8d51-5e55a2ab6329]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.752 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[59c79439-664b-43ad-92c3-73dbc9dfc19b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:02 np0005532048 NetworkManager[48920]: <info>  [1763803142.7535] manager: (tapebc42408-70): new Veth device (/org/freedesktop/NetworkManager/Devices/232)
Nov 22 04:19:02 np0005532048 systemd-udevd[314523]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.770 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "refresh_cache-64142c1c-95e0-4db4-b743-bb94c85a208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.770 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquired lock "refresh_cache-64142c1c-95e0-4db4-b743-bb94c85a208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.771 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.805 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c634a370-2a94-44ae-98ec-81f4eba28962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.810 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[15655f4a-e260-41a0-9bf1-96e8c945c46a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:02 np0005532048 NetworkManager[48920]: <info>  [1763803142.8388] device (tapebc42408-70): carrier: link connected
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.848 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c26ecdc7-7712-4412-9c52-36f47456628b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.870 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e6385263-5cad-47aa-93ea-93074fa02358]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599357, 'reachable_time': 37278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314552, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.896 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f63db4e5-a38a-4853-a078-4bd2f262e873]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:e3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599357, 'tstamp': 599357}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314553, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.916 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[46249882-8d0b-43da-94b6-130ca86ca14b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599357, 'reachable_time': 37278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314554, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/126397494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:02.949 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7021e375-7427-4b24-9fcf-05972c71064e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.956 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.958 253665 DEBUG nova.virt.libvirt.vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-1',id=54,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user_name='tem
pest-ListServersNegativeTestJSON-920959944-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:56Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=045614f9-cfb4-4a52-996e-e880cbdf7dcd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.959 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.960 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:8a:3b,bridge_name='br-int',has_traffic_filtering=True,id=88cfebb7-b545-4137-8094-3fa68a13f42b,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cfebb7-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.961 253665 DEBUG nova.objects.instance [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 045614f9-cfb4-4a52-996e-e880cbdf7dcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.974 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <uuid>045614f9-cfb4-4a52-996e-e880cbdf7dcd</uuid>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <name>instance-00000036</name>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <nova:name>tempest-ListServersNegativeTestJSON-server-724322260-1</nova:name>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:19:01</nova:creationTime>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <nova:user uuid="e5f9c3cac3ab4d74a7aeffd50c07da03">tempest-ListServersNegativeTestJSON-920959944-project-member</nova:user>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <nova:project uuid="1d5505406f294eb4a17d4137cad567f1">tempest-ListServersNegativeTestJSON-920959944</nova:project>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <nova:port uuid="88cfebb7-b545-4137-8094-3fa68a13f42b">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <entry name="serial">045614f9-cfb4-4a52-996e-e880cbdf7dcd</entry>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <entry name="uuid">045614f9-cfb4-4a52-996e-e880cbdf7dcd</entry>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk.config">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:54:8a:3b"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <target dev="tap88cfebb7-b5"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd/console.log" append="off"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:19:02 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:19:02 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:19:02 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:19:02 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.979 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Preparing to wait for external event network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.980 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.980 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.980 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.981 253665 DEBUG nova.virt.libvirt.vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-1',id=54,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user
_name='tempest-ListServersNegativeTestJSON-920959944-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:56Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=045614f9-cfb4-4a52-996e-e880cbdf7dcd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.981 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.982 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:8a:3b,bridge_name='br-int',has_traffic_filtering=True,id=88cfebb7-b545-4137-8094-3fa68a13f42b,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cfebb7-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.983 253665 DEBUG os_vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:8a:3b,bridge_name='br-int',has_traffic_filtering=True,id=88cfebb7-b545-4137-8094-3fa68a13f42b,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cfebb7-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.983 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.984 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.984 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.986 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.986 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88cfebb7-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.987 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88cfebb7-b5, col_values=(('external_ids', {'iface-id': '88cfebb7-b545-4137-8094-3fa68a13f42b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:8a:3b', 'vm-uuid': '045614f9-cfb4-4a52-996e-e880cbdf7dcd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.988 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:02 np0005532048 NetworkManager[48920]: <info>  [1763803142.9895] manager: (tap88cfebb7-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/233)
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.992 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.994 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:02 np0005532048 nova_compute[253661]: 2025-11-22 09:19:02.995 253665 INFO os_vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:8a:3b,bridge_name='br-int',has_traffic_filtering=True,id=88cfebb7-b545-4137-8094-3fa68a13f42b,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cfebb7-b5')#033[00m
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.031 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9b12c5fc-4e27-42f5-910f-734b92f71483]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.035 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.035 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.036 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebc42408-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:03 np0005532048 NetworkManager[48920]: <info>  [1763803143.0673] manager: (tapebc42408-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Nov 22 04:19:03 np0005532048 kernel: tapebc42408-70: entered promiscuous mode
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.069 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.074 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebc42408-70, col_values=(('external_ids', {'iface-id': 'efc8861c-ffa7-41c8-9325-c43c7271007f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.075 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:03Z|00508|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.078 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.078 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.079 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No VIF found with MAC fa:16:3e:54:8a:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.079 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Using config drive#033[00m
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.101 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.102 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab0408a-a822-4555-a5b8-3cc8a275c00b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.103 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.105 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'env', 'PROCESS_TAG=haproxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ebc42408-7b03-480c-a016-1e5bb2ebcc93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.102 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.114 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.228 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:19:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 308 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 782 KiB/s rd, 5.8 MiB/s wr, 132 op/s
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.390 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Creating config drive at /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd/disk.config#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.396 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwnfd5fy7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.450 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 636b1046-fff8-4a45-8a14-04010b2f282e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.451 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803143.3879097, 636b1046-fff8-4a45-8a14-04010b2f282e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.451 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.462 253665 DEBUG nova.compute.manager [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.464 253665 DEBUG nova.compute.manager [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received event network-changed-7612be10-c22f-4d60-89f7-232e865b6524 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.464 253665 DEBUG nova.compute.manager [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Refreshing instance network info cache due to event network-changed-7612be10-c22f-4d60-89f7-232e865b6524. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.465 253665 DEBUG oslo_concurrency.lockutils [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b4a5932d-6547-4c01-9c71-0907c65247a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.471 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance rebooted successfully.#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.471 253665 DEBUG nova.compute.manager [None req-ae252051-dd52-4c7e-bcaf-ca904564ac63 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.476 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.488 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.514 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.514 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803143.425389, 636b1046-fff8-4a45-8a14-04010b2f282e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.515 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Started (Lifecycle Event)#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.547 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.550 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:19:03 np0005532048 podman[314655]: 2025-11-22 09:19:03.557217199 +0000 UTC m=+0.057373775 container create 96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.569 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwnfd5fy7" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:03 np0005532048 systemd[1]: Started libpod-conmon-96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79.scope.
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.597 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.603 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd/disk.config 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:03 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:19:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1467195d1769cb3d2de1fc7c53fe46b5d8e844ec9d0e75fe4e8f0d6486282a0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:03 np0005532048 podman[314655]: 2025-11-22 09:19:03.53174693 +0000 UTC m=+0.031903526 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:19:03 np0005532048 podman[314655]: 2025-11-22 09:19:03.643110285 +0000 UTC m=+0.143266871 container init 96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 04:19:03 np0005532048 podman[314655]: 2025-11-22 09:19:03.648862815 +0000 UTC m=+0.149019391 container start 96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:19:03 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[314686]: [NOTICE]   (314693) : New worker (314695) forked
Nov 22 04:19:03 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[314686]: [NOTICE]   (314693) : Loading success.
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.766 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Updating instance_info_cache with network_info: [{"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.783 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Releasing lock "refresh_cache-b4a5932d-6547-4c01-9c71-0907c65247a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.784 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Instance network_info: |[{"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.784 253665 DEBUG oslo_concurrency.lockutils [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b4a5932d-6547-4c01-9c71-0907c65247a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.785 253665 DEBUG nova.network.neutron [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Refreshing network info cache for port 7612be10-c22f-4d60-89f7-232e865b6524 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.788 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Start _get_guest_xml network_info=[{"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.795 253665 WARNING nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.801 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.802 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.803 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd/disk.config 045614f9-cfb4-4a52-996e-e880cbdf7dcd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.804 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Deleting local config drive /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd/disk.config because it was imported into RBD.#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.813 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.814 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.814 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.815 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.815 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.816 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.816 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.816 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.816 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.817 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.817 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.817 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.817 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.818 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.821 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:03 np0005532048 NetworkManager[48920]: <info>  [1763803143.8723] manager: (tap88cfebb7-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/235)
Nov 22 04:19:03 np0005532048 kernel: tap88cfebb7-b5: entered promiscuous mode
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.871 253665 DEBUG nova.network.neutron [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Updated VIF entry in instance network info cache for port 88cfebb7-b545-4137-8094-3fa68a13f42b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.872 253665 DEBUG nova.network.neutron [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Updating instance_info_cache with network_info: [{"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:03Z|00509|binding|INFO|Claiming lport 88cfebb7-b545-4137-8094-3fa68a13f42b for this chassis.
Nov 22 04:19:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:03Z|00510|binding|INFO|88cfebb7-b545-4137-8094-3fa68a13f42b: Claiming fa:16:3e:54:8a:3b 10.100.0.10
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.888 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:8a:3b 10.100.0.10'], port_security=['fa:16:3e:54:8a:3b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '045614f9-cfb4-4a52-996e-e880cbdf7dcd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6001c81-6c53-4678-b8e8-39c35706be23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d5505406f294eb4a17d4137cad567f1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3862dd9b-c79c-4d35-9b56-39c3500165f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0915442-85c3-4100-bdd2-9075f16a0456, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=88cfebb7-b545-4137-8094-3fa68a13f42b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:19:03 np0005532048 NetworkManager[48920]: <info>  [1763803143.8922] device (tap88cfebb7-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:19:03 np0005532048 NetworkManager[48920]: <info>  [1763803143.8933] device (tap88cfebb7-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.895 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 88cfebb7-b545-4137-8094-3fa68a13f42b in datapath f6001c81-6c53-4678-b8e8-39c35706be23 bound to our chassis#033[00m
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.899 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6001c81-6c53-4678-b8e8-39c35706be23#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:03Z|00511|binding|INFO|Setting lport 88cfebb7-b545-4137-8094-3fa68a13f42b ovn-installed in OVS
Nov 22 04:19:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:03Z|00512|binding|INFO|Setting lport 88cfebb7-b545-4137-8094-3fa68a13f42b up in Southbound
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.903 253665 DEBUG oslo_concurrency.lockutils [req-16053700-5244-44d3-9cb2-d8ec5b8133f6 req-70a0b7ca-e5c7-456f-a3e5-b071f4c694e4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-045614f9-cfb4-4a52-996e-e880cbdf7dcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:19:03 np0005532048 nova_compute[253661]: 2025-11-22 09:19:03.906 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.916 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[27d70246-7024-48bb-8412-3892cf21564c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.919 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf6001c81-61 in ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.922 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf6001c81-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.922 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e795bcbc-05ed-40ce-811f-4f83bfb8195d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.923 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[178bd60f-9f38-4b09-9d46-48cd6b8d2caa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:03 np0005532048 systemd-machined[215941]: New machine qemu-62-instance-00000036.
Nov 22 04:19:03 np0005532048 systemd[1]: Started Virtual Machine qemu-62-instance-00000036.
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.941 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8a70bac4-58b4-4323-b376-5f98b53b3602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:03.959 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f731f8-3331-4b68-9340-a22248a3f143]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.013 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[80416119-4b3c-4202-b6d1-dd3bbc977569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:04 np0005532048 NetworkManager[48920]: <info>  [1763803144.0210] manager: (tapf6001c81-60): new Veth device (/org/freedesktop/NetworkManager/Devices/236)
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.019 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1fdb5f32-1030-4a7a-b9ac-6b2d7ba65e16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.079 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5514d509-89a1-40d8-9f18-d610ef56982f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.085 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6addf0cc-27c8-4ca5-803d-3ad294cf25be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:04 np0005532048 NetworkManager[48920]: <info>  [1763803144.1186] device (tapf6001c81-60): carrier: link connected
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.127 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e2ce2832-ed26-4606-a81f-a2575532ed99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.151 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1f749a7c-596c-43da-a050-d0052b0e9800]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6001c81-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:70:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599485, 'reachable_time': 40749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314772, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.173 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cc8fef96-d4d7-4c5a-84e6-f05f8bf93351]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:70c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599485, 'tstamp': 599485}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314773, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.198 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af915c29-7475-4a4b-85a1-85fe038f4510]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6001c81-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:70:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599485, 'reachable_time': 40749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314774, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.233 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[223f51f2-64b3-403e-8db1-bed57e125a83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.273 253665 DEBUG nova.compute.manager [req-8db92f25-6b28-42f2-a904-d40c285e2cb9 req-a46b9833-c5d7-4e2e-ab7c-bbd43e86b1cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received event network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.273 253665 DEBUG oslo_concurrency.lockutils [req-8db92f25-6b28-42f2-a904-d40c285e2cb9 req-a46b9833-c5d7-4e2e-ab7c-bbd43e86b1cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.277 253665 DEBUG oslo_concurrency.lockutils [req-8db92f25-6b28-42f2-a904-d40c285e2cb9 req-a46b9833-c5d7-4e2e-ab7c-bbd43e86b1cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.281 253665 DEBUG oslo_concurrency.lockutils [req-8db92f25-6b28-42f2-a904-d40c285e2cb9 req-a46b9833-c5d7-4e2e-ab7c-bbd43e86b1cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.282 253665 DEBUG nova.compute.manager [req-8db92f25-6b28-42f2-a904-d40c285e2cb9 req-a46b9833-c5d7-4e2e-ab7c-bbd43e86b1cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Processing event network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:19:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/962065208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.316 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdcd05c-1d0f-4632-ae3f-2a558d216181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.319 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6001c81-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.319 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.320 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6001c81-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:04 np0005532048 NetworkManager[48920]: <info>  [1763803144.3234] manager: (tapf6001c81-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Nov 22 04:19:04 np0005532048 kernel: tapf6001c81-60: entered promiscuous mode
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.323 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.326 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6001c81-60, col_values=(('external_ids', {'iface-id': 'a0af7d9b-5116-431a-ad00-3df7641dc72f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.326 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:04Z|00513|binding|INFO|Releasing lport a0af7d9b-5116-431a-ad00-3df7641dc72f from this chassis (sb_readonly=0)
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.328 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.341 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.351 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f6001c81-6c53-4678-b8e8-39c35706be23.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f6001c81-6c53-4678-b8e8-39c35706be23.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.353 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[40dd5594-7d72-4b3a-ae32-740c6b29797b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.354 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-f6001c81-6c53-4678-b8e8-39c35706be23
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/f6001c81-6c53-4678-b8e8-39c35706be23.pid.haproxy
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID f6001c81-6c53-4678-b8e8-39c35706be23
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:19:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:04.356 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'env', 'PROCESS_TAG=haproxy-f6001c81-6c53-4678-b8e8-39c35706be23', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f6001c81-6c53-4678-b8e8-39c35706be23.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.386 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image b4a5932d-6547-4c01-9c71-0907c65247a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.393 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.453 253665 DEBUG nova.network.neutron [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Updating instance_info_cache with network_info: [{"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.462 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.491 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Releasing lock "refresh_cache-64142c1c-95e0-4db4-b743-bb94c85a208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.492 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Instance network_info: |[{"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.495 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Start _get_guest_xml network_info=[{"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.502 253665 WARNING nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.511 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.512 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.515 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.515 253665 DEBUG nova.virt.libvirt.host [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.515 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.516 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.516 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.516 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.516 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.516 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.517 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.517 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.517 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.517 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.517 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.517 253665 DEBUG nova.virt.hardware [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.521 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:04 np0005532048 podman[314882]: 2025-11-22 09:19:04.832003553 +0000 UTC m=+0.074242125 container create ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:19:04 np0005532048 systemd[1]: Started libpod-conmon-ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9.scope.
Nov 22 04:19:04 np0005532048 podman[314882]: 2025-11-22 09:19:04.794294178 +0000 UTC m=+0.036532760 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:19:04 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:19:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1708818679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aea65242036820cb6858ded2c2732f7a57b937b65a8c1c3d87746de9de9af3b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.909 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803144.9060185, 045614f9-cfb4-4a52-996e-e880cbdf7dcd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.909 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] VM Started (Lifecycle Event)#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.911 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.915 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:19:04 np0005532048 podman[314882]: 2025-11-22 09:19:04.919555071 +0000 UTC m=+0.161793663 container init ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.919 253665 INFO nova.virt.libvirt.driver [-] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Instance spawned successfully.#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.920 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:19:04 np0005532048 podman[314882]: 2025-11-22 09:19:04.9277477 +0000 UTC m=+0.169986272 container start ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.927 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.949 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:19:04 np0005532048 neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23[314920]: [NOTICE]   (314926) : New worker (314928) forked
Nov 22 04:19:04 np0005532048 neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23[314920]: [NOTICE]   (314926) : Loading success.
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.959 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.960 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.960 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.961 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.961 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.962 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.966 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.967 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.967 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803144.9062157, 045614f9-cfb4-4a52-996e-e880cbdf7dcd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.967 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.969 253665 DEBUG nova.virt.libvirt.vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-2',id=55,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user_name='tempest-ListServersNegativeTestJSON-920959944-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:58Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=b4a5932d-6547-4c01-9c71-0907c65247a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.969 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.970 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:cc:cd,bridge_name='br-int',has_traffic_filtering=True,id=7612be10-c22f-4d60-89f7-232e865b6524,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7612be10-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.971 253665 DEBUG nova.objects.instance [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4a5932d-6547-4c01-9c71-0907c65247a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.989 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  <uuid>b4a5932d-6547-4c01-9c71-0907c65247a1</uuid>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  <name>instance-00000037</name>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <nova:name>tempest-ListServersNegativeTestJSON-server-724322260-2</nova:name>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:19:03</nova:creationTime>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:19:04 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:        <nova:user uuid="e5f9c3cac3ab4d74a7aeffd50c07da03">tempest-ListServersNegativeTestJSON-920959944-project-member</nova:user>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:        <nova:project uuid="1d5505406f294eb4a17d4137cad567f1">tempest-ListServersNegativeTestJSON-920959944</nova:project>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:        <nova:port uuid="7612be10-c22f-4d60-89f7-232e865b6524">
Nov 22 04:19:04 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <entry name="serial">b4a5932d-6547-4c01-9c71-0907c65247a1</entry>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <entry name="uuid">b4a5932d-6547-4c01-9c71-0907c65247a1</entry>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/b4a5932d-6547-4c01-9c71-0907c65247a1_disk">
Nov 22 04:19:04 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:04 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/b4a5932d-6547-4c01-9c71-0907c65247a1_disk.config">
Nov 22 04:19:04 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:04 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:b3:cc:cd"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <target dev="tap7612be10-c2"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1/console.log" append="off"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:19:04 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:19:04 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:19:04 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:19:04 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.990 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Preparing to wait for external event network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.990 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.990 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.991 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.991 253665 DEBUG nova.virt.libvirt.vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-2',id=55,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user_name='tempest-ListServersNegativeTestJSON-920959944-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:58Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=b4a5932d-6547-4c01-9c71-0907c65247a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.991 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.992 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:cc:cd,bridge_name='br-int',has_traffic_filtering=True,id=7612be10-c22f-4d60-89f7-232e865b6524,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7612be10-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.992 253665 DEBUG os_vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:cc:cd,bridge_name='br-int',has_traffic_filtering=True,id=7612be10-c22f-4d60-89f7-232e865b6524,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7612be10-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.993 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.993 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.994 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.997 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.998 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.998 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7612be10-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:04 np0005532048 nova_compute[253661]: 2025-11-22 09:19:04.999 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7612be10-c2, col_values=(('external_ids', {'iface-id': '7612be10-c22f-4d60-89f7-232e865b6524', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:cc:cd', 'vm-uuid': 'b4a5932d-6547-4c01-9c71-0907c65247a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.000 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:05 np0005532048 NetworkManager[48920]: <info>  [1763803145.0025] manager: (tap7612be10-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.003 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.009 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.010 253665 INFO os_vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:cc:cd,bridge_name='br-int',has_traffic_filtering=True,id=7612be10-c22f-4d60-89f7-232e865b6524,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7612be10-c2')#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.011 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803144.9147544, 045614f9-cfb4-4a52-996e-e880cbdf7dcd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.011 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.015 253665 INFO nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Took 7.99 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.015 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.027 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.034 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:19:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3140546305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.054 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.061 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.089 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 64142c1c-95e0-4db4-b743-bb94c85a208f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.096 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.198 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.212 253665 INFO nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Took 9.35 seconds to build instance.#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.222 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.223 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.223 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No VIF found with MAC fa:16:3e:b3:cc:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.223 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Using config drive#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.249 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image b4a5932d-6547-4c01-9c71-0907c65247a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.262 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.353 253665 DEBUG nova.network.neutron [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Updated VIF entry in instance network info cache for port 7612be10-c22f-4d60-89f7-232e865b6524. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.353 253665 DEBUG nova.network.neutron [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Updating instance_info_cache with network_info: [{"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.365 253665 DEBUG oslo_concurrency.lockutils [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b4a5932d-6547-4c01-9c71-0907c65247a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.366 253665 DEBUG nova.compute.manager [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received event network-changed-11f926cd-f731-4de9-861e-5842f91f48df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.366 253665 DEBUG nova.compute.manager [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Refreshing instance network info cache due to event network-changed-11f926cd-f731-4de9-861e-5842f91f48df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.367 253665 DEBUG oslo_concurrency.lockutils [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-64142c1c-95e0-4db4-b743-bb94c85a208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.367 253665 DEBUG oslo_concurrency.lockutils [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-64142c1c-95e0-4db4-b743-bb94c85a208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.367 253665 DEBUG nova.network.neutron [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Refreshing network info cache for port 11f926cd-f731-4de9-861e-5842f91f48df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:19:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 388 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 8.8 MiB/s wr, 252 op/s
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.561 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Creating config drive at /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1/disk.config#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.568 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkh4whl1d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2554906053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.630 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.633 253665 DEBUG nova.virt.libvirt.vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-3',id=56,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user_name='tempest-ListServersNegativeTestJSON-920959944-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:59Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=64142c1c-95e0-4db4-b743-bb94c85a208f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.634 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.635 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:54:ea,bridge_name='br-int',has_traffic_filtering=True,id=11f926cd-f731-4de9-861e-5842f91f48df,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f926cd-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.636 253665 DEBUG nova.objects.instance [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 64142c1c-95e0-4db4-b743-bb94c85a208f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.649 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  <uuid>64142c1c-95e0-4db4-b743-bb94c85a208f</uuid>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  <name>instance-00000038</name>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <nova:name>tempest-ListServersNegativeTestJSON-server-724322260-3</nova:name>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:19:04</nova:creationTime>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:19:05 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:        <nova:user uuid="e5f9c3cac3ab4d74a7aeffd50c07da03">tempest-ListServersNegativeTestJSON-920959944-project-member</nova:user>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:        <nova:project uuid="1d5505406f294eb4a17d4137cad567f1">tempest-ListServersNegativeTestJSON-920959944</nova:project>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:        <nova:port uuid="11f926cd-f731-4de9-861e-5842f91f48df">
Nov 22 04:19:05 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <entry name="serial">64142c1c-95e0-4db4-b743-bb94c85a208f</entry>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <entry name="uuid">64142c1c-95e0-4db4-b743-bb94c85a208f</entry>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/64142c1c-95e0-4db4-b743-bb94c85a208f_disk">
Nov 22 04:19:05 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:05 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/64142c1c-95e0-4db4-b743-bb94c85a208f_disk.config">
Nov 22 04:19:05 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:05 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:41:54:ea"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <target dev="tap11f926cd-f7"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f/console.log" append="off"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:19:05 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:19:05 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:19:05 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:19:05 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.650 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Preparing to wait for external event network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.650 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.650 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.651 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.652 253665 DEBUG nova.virt.libvirt.vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-3',id=56,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user
_name='tempest-ListServersNegativeTestJSON-920959944-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:18:59Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=64142c1c-95e0-4db4-b743-bb94c85a208f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.652 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.653 253665 DEBUG nova.network.os_vif_util [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:54:ea,bridge_name='br-int',has_traffic_filtering=True,id=11f926cd-f731-4de9-861e-5842f91f48df,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f926cd-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.654 253665 DEBUG os_vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:54:ea,bridge_name='br-int',has_traffic_filtering=True,id=11f926cd-f731-4de9-861e-5842f91f48df,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f926cd-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.654 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.655 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.656 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.664 253665 DEBUG nova.compute.manager [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.664 253665 DEBUG oslo_concurrency.lockutils [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.664 253665 DEBUG oslo_concurrency.lockutils [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.665 253665 DEBUG oslo_concurrency.lockutils [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.665 253665 DEBUG nova.compute.manager [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.665 253665 WARNING nova.compute.manager [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.665 253665 DEBUG nova.compute.manager [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.666 253665 DEBUG oslo_concurrency.lockutils [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.666 253665 DEBUG oslo_concurrency.lockutils [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.666 253665 DEBUG oslo_concurrency.lockutils [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.666 253665 DEBUG nova.compute.manager [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.667 253665 WARNING nova.compute.manager [req-a0c07f71-f017-4cfa-80d7-ffbc0e312cb5 req-03754ef9-c53f-46db-9a54-75898e6509b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.667 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.668 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11f926cd-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.668 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap11f926cd-f7, col_values=(('external_ids', {'iface-id': '11f926cd-f731-4de9-861e-5842f91f48df', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:54:ea', 'vm-uuid': '64142c1c-95e0-4db4-b743-bb94c85a208f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.670 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:05 np0005532048 NetworkManager[48920]: <info>  [1763803145.6717] manager: (tap11f926cd-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.672 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.680 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.680 253665 INFO os_vif [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:54:ea,bridge_name='br-int',has_traffic_filtering=True,id=11f926cd-f731-4de9-861e-5842f91f48df,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f926cd-f7')#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.725 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.725 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.725 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] No VIF found with MAC fa:16:3e:41:54:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.726 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Using config drive#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.753 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 64142c1c-95e0-4db4-b743-bb94c85a208f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.761 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkh4whl1d" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.795 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image b4a5932d-6547-4c01-9c71-0907c65247a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:05 np0005532048 nova_compute[253661]: 2025-11-22 09:19:05.803 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1/disk.config b4a5932d-6547-4c01-9c71-0907c65247a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.063 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1/disk.config b4a5932d-6547-4c01-9c71-0907c65247a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.064 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Deleting local config drive /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1/disk.config because it was imported into RBD.#033[00m
Nov 22 04:19:06 np0005532048 kernel: tap7612be10-c2: entered promiscuous mode
Nov 22 04:19:06 np0005532048 NetworkManager[48920]: <info>  [1763803146.1157] manager: (tap7612be10-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/240)
Nov 22 04:19:06 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:06Z|00514|binding|INFO|Claiming lport 7612be10-c22f-4d60-89f7-232e865b6524 for this chassis.
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.121 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:06 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:06Z|00515|binding|INFO|7612be10-c22f-4d60-89f7-232e865b6524: Claiming fa:16:3e:b3:cc:cd 10.100.0.9
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.129 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:cc:cd 10.100.0.9'], port_security=['fa:16:3e:b3:cc:cd 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b4a5932d-6547-4c01-9c71-0907c65247a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6001c81-6c53-4678-b8e8-39c35706be23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d5505406f294eb4a17d4137cad567f1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3862dd9b-c79c-4d35-9b56-39c3500165f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0915442-85c3-4100-bdd2-9075f16a0456, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7612be10-c22f-4d60-89f7-232e865b6524) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.131 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7612be10-c22f-4d60-89f7-232e865b6524 in datapath f6001c81-6c53-4678-b8e8-39c35706be23 bound to our chassis#033[00m
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.133 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6001c81-6c53-4678-b8e8-39c35706be23#033[00m
Nov 22 04:19:06 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:06Z|00516|binding|INFO|Setting lport 7612be10-c22f-4d60-89f7-232e865b6524 ovn-installed in OVS
Nov 22 04:19:06 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:06Z|00517|binding|INFO|Setting lport 7612be10-c22f-4d60-89f7-232e865b6524 up in Southbound
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.152 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.158 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9a582a09-88e2-414f-977f-6f05339ed75a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.159 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:06 np0005532048 systemd-machined[215941]: New machine qemu-63-instance-00000037.
Nov 22 04:19:06 np0005532048 systemd[1]: Started Virtual Machine qemu-63-instance-00000037.
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.193 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[921b2b7a-5099-4385-906a-ef6319a3513f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:06 np0005532048 systemd-udevd[315077]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.197 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a2833e-ac9a-4f11-b1e9-6fed9c50bd05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:06 np0005532048 NetworkManager[48920]: <info>  [1763803146.2115] device (tap7612be10-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:19:06 np0005532048 NetworkManager[48920]: <info>  [1763803146.2124] device (tap7612be10-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.246 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4181948a-ce90-4b63-afeb-58a0fb598e33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.263 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb8d359-19bc-4c35-9c98-4ba8b3592e78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6001c81-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:70:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 5, 'tx_packets': 5, 'rx_bytes': 442, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 5, 'tx_packets': 5, 'rx_bytes': 442, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599485, 'reachable_time': 40749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 5, 'inoctets': 372, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 5, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 372, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 5, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315087, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.277 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7fe806-d86e-45ad-bd31-163ea3a4b10f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6001c81-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599500, 'tstamp': 599500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315089, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6001c81-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599504, 'tstamp': 599504}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315089, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.279 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6001c81-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.320 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.328 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6001c81-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.329 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.329 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6001c81-60, col_values=(('external_ids', {'iface-id': 'a0af7d9b-5116-431a-ad00-3df7641dc72f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.329 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.379 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Creating config drive at /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f/disk.config#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.385 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuz91_n2f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.435 253665 DEBUG nova.compute.manager [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received event network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.435 253665 DEBUG oslo_concurrency.lockutils [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.436 253665 DEBUG oslo_concurrency.lockutils [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.436 253665 DEBUG oslo_concurrency.lockutils [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.436 253665 DEBUG nova.compute.manager [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] No waiting events found dispatching network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.436 253665 WARNING nova.compute.manager [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received unexpected event network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b for instance with vm_state active and task_state None.#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.437 253665 DEBUG nova.compute.manager [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received event network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.437 253665 DEBUG oslo_concurrency.lockutils [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.437 253665 DEBUG oslo_concurrency.lockutils [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.437 253665 DEBUG oslo_concurrency.lockutils [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.437 253665 DEBUG nova.compute.manager [req-94340f59-ce79-4528-acc1-0e83fc908032 req-190073ca-0a24-4b3b-ba00-0a116ecea2ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Processing event network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.538 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuz91_n2f" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.569 253665 DEBUG nova.storage.rbd_utils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] rbd image 64142c1c-95e0-4db4-b743-bb94c85a208f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.576 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f/disk.config 64142c1c-95e0-4db4-b743-bb94c85a208f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.645 253665 DEBUG nova.network.neutron [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Updated VIF entry in instance network info cache for port 11f926cd-f731-4de9-861e-5842f91f48df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.646 253665 DEBUG nova.network.neutron [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Updating instance_info_cache with network_info: [{"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.660 253665 DEBUG oslo_concurrency.lockutils [req-cd556cc5-0f1b-463b-8823-eb1037ceb1af req-e85c01ef-fe6e-4781-a524-c5f623319606 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-64142c1c-95e0-4db4-b743-bb94c85a208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.715 253665 INFO nova.compute.manager [None req-a55cbe0e-e992-469b-a194-fc4a1a6993a2 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Pausing#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.716 253665 DEBUG nova.objects.instance [None req-a55cbe0e-e992-469b-a194-fc4a1a6993a2 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'flavor' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.742 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803146.7418752, 636b1046-fff8-4a45-8a14-04010b2f282e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.742 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.745 253665 DEBUG nova.compute.manager [None req-a55cbe0e-e992-469b-a194-fc4a1a6993a2 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.749 253665 DEBUG oslo_concurrency.processutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f/disk.config 64142c1c-95e0-4db4-b743-bb94c85a208f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.750 253665 INFO nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Deleting local config drive /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f/disk.config because it was imported into RBD.#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.766 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.773 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:19:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:06 np0005532048 kernel: tap11f926cd-f7: entered promiscuous mode
Nov 22 04:19:06 np0005532048 systemd-udevd[315081]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:19:06 np0005532048 NetworkManager[48920]: <info>  [1763803146.8148] manager: (tap11f926cd-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/241)
Nov 22 04:19:06 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:06Z|00518|binding|INFO|Claiming lport 11f926cd-f731-4de9-861e-5842f91f48df for this chassis.
Nov 22 04:19:06 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:06Z|00519|binding|INFO|11f926cd-f731-4de9-861e-5842f91f48df: Claiming fa:16:3e:41:54:ea 10.100.0.5
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:06 np0005532048 NetworkManager[48920]: <info>  [1763803146.8286] device (tap11f926cd-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:19:06 np0005532048 NetworkManager[48920]: <info>  [1763803146.8294] device (tap11f926cd-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.832 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:54:ea 10.100.0.5'], port_security=['fa:16:3e:41:54:ea 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '64142c1c-95e0-4db4-b743-bb94c85a208f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6001c81-6c53-4678-b8e8-39c35706be23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d5505406f294eb4a17d4137cad567f1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3862dd9b-c79c-4d35-9b56-39c3500165f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0915442-85c3-4100-bdd2-9075f16a0456, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=11f926cd-f731-4de9-861e-5842f91f48df) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.833 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 11f926cd-f731-4de9-861e-5842f91f48df in datapath f6001c81-6c53-4678-b8e8-39c35706be23 bound to our chassis#033[00m
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.836 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6001c81-6c53-4678-b8e8-39c35706be23#033[00m
Nov 22 04:19:06 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:06Z|00520|binding|INFO|Setting lport 11f926cd-f731-4de9-861e-5842f91f48df ovn-installed in OVS
Nov 22 04:19:06 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:06Z|00521|binding|INFO|Setting lport 11f926cd-f731-4de9-861e-5842f91f48df up in Southbound
Nov 22 04:19:06 np0005532048 nova_compute[253661]: 2025-11-22 09:19:06.841 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.853 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94865e7a-374c-4b2e-a0fe-bd76e2108be6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:06 np0005532048 systemd-machined[215941]: New machine qemu-64-instance-00000038.
Nov 22 04:19:06 np0005532048 systemd[1]: Started Virtual Machine qemu-64-instance-00000038.
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.895 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4e30dd-93e0-4e27-858b-2d4d4c0b52b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.899 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[06f50ee8-7e02-4ef0-8949-d5db41c24ffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.950 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[df13161b-c96a-4932-b6ee-72a1a2573814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:06.972 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e08c85-6c3c-4696-8117-e86ada17d9e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6001c81-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:70:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599485, 'reachable_time': 40749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315154, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.000 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ec5491-7315-4b5b-a95f-c7dc65ea7bcd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6001c81-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599500, 'tstamp': 599500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315156, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6001c81-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599504, 'tstamp': 599504}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315156, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.003 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6001c81-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.005 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.006 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.009 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6001c81-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.010 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.010 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6001c81-60, col_values=(('external_ids', {'iface-id': 'a0af7d9b-5116-431a-ad00-3df7641dc72f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.011 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 388 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.5 MiB/s wr, 279 op/s
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.512 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803147.512294, 64142c1c-95e0-4db4-b743-bb94c85a208f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.513 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] VM Started (Lifecycle Event)#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.530 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.544 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803147.5127084, 64142c1c-95e0-4db4-b743-bb94c85a208f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.545 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.561 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.565 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.580 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.594 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.595 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803147.5936224, b4a5932d-6547-4c01-9c71-0907c65247a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.595 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] VM Started (Lifecycle Event)#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.600 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.605 253665 INFO nova.virt.libvirt.driver [-] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Instance spawned successfully.#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.605 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.610 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.612 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.630 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.630 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803147.5938897, b4a5932d-6547-4c01-9c71-0907c65247a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.630 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.646 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.652 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.652 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.653 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.655 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.655 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.655 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.660 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803147.5993876, b4a5932d-6547-4c01-9c71-0907c65247a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.661 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.684 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.688 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.713 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.726 253665 INFO nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Took 9.60 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.726 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.737 253665 DEBUG nova.compute.manager [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received event network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.737 253665 DEBUG oslo_concurrency.lockutils [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.737 253665 DEBUG oslo_concurrency.lockutils [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.738 253665 DEBUG oslo_concurrency.lockutils [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.738 253665 DEBUG nova.compute.manager [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Processing event network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.738 253665 DEBUG nova.compute.manager [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received event network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.738 253665 DEBUG oslo_concurrency.lockutils [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.739 253665 DEBUG oslo_concurrency.lockutils [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.739 253665 DEBUG oslo_concurrency.lockutils [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.739 253665 DEBUG nova.compute.manager [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] No waiting events found dispatching network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.739 253665 WARNING nova.compute.manager [req-72a67d32-118b-4d87-9cd1-32d6b507a5c2 req-4dd1d6e4-b55d-49af-bf9f-7e2f3e1c4e52 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received unexpected event network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.740 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.751 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803147.7440078, 64142c1c-95e0-4db4-b743-bb94c85a208f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.751 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.763 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.795 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.809 253665 INFO nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Took 11.89 seconds to build instance.#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.811 253665 INFO nova.virt.libvirt.driver [-] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Instance spawned successfully.#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.812 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.814 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.824 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "dec3a0c0-4e66-47fb-845c-42748f871bd3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.824 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.824 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.824 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.825 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.826 253665 INFO nova.compute.manager [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Terminating instance#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.826 253665 DEBUG nova.compute.manager [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.834 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.838 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.840 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.840 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.841 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.841 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.841 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.842 253665 DEBUG nova.virt.libvirt.driver [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:07 np0005532048 kernel: tapc10e771b-27 (unregistering): left promiscuous mode
Nov 22 04:19:07 np0005532048 NetworkManager[48920]: <info>  [1763803147.8849] device (tapc10e771b-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:19:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:07Z|00522|binding|INFO|Releasing lport c10e771b-271b-4855-9004-fe8ee858ec5d from this chassis (sb_readonly=0)
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:07Z|00523|binding|INFO|Setting lport c10e771b-271b-4855-9004-fe8ee858ec5d down in Southbound
Nov 22 04:19:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:07Z|00524|binding|INFO|Removing iface tapc10e771b-27 ovn-installed in OVS
Nov 22 04:19:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.908 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:f2:e5 10.100.0.3'], port_security=['fa:16:3e:f1:f2:e5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dec3a0c0-4e66-47fb-845c-42748f871bd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af1599cd-9805-40cb-9d20-ed7982b07412', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9d1054fa34554ffa8a188984d2db6a60', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac0f6fad-418e-4cf8-9b02-babdac3fb88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64f6ab83-a798-4bd9-aa90-a1cb3d63c1c0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c10e771b-271b-4855-9004-fe8ee858ec5d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:19:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.910 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c10e771b-271b-4855-9004-fe8ee858ec5d in datapath af1599cd-9805-40cb-9d20-ed7982b07412 unbound from our chassis#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.910 253665 INFO nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Took 8.51 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.911 253665 DEBUG nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.912 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network af1599cd-9805-40cb-9d20-ed7982b07412, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:19:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.913 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2224df67-6de1-4c88-b4d1-48ef44947d73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:07.917 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412 namespace which is not needed anymore#033[00m
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.925 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:07 np0005532048 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000034.scope: Deactivated successfully.
Nov 22 04:19:07 np0005532048 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000034.scope: Consumed 14.150s CPU time.
Nov 22 04:19:07 np0005532048 systemd-machined[215941]: Machine qemu-59-instance-00000034 terminated.
Nov 22 04:19:07 np0005532048 nova_compute[253661]: 2025-11-22 09:19:07.982 253665 INFO nova.compute.manager [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Took 11.94 seconds to build instance.#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.004 253665 DEBUG oslo_concurrency.lockutils [None req-8b4ea150-a0d4-41d1-abfa-c9b7334eccef e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:08 np0005532048 NetworkManager[48920]: <info>  [1763803148.0484] manager: (tapc10e771b-27): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.052 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.059 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:08 np0005532048 neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412[312305]: [NOTICE]   (312309) : haproxy version is 2.8.14-c23fe91
Nov 22 04:19:08 np0005532048 neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412[312305]: [NOTICE]   (312309) : path to executable is /usr/sbin/haproxy
Nov 22 04:19:08 np0005532048 neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412[312305]: [WARNING]  (312309) : Exiting Master process...
Nov 22 04:19:08 np0005532048 neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412[312305]: [WARNING]  (312309) : Exiting Master process...
Nov 22 04:19:08 np0005532048 neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412[312305]: [ALERT]    (312309) : Current worker (312311) exited with code 143 (Terminated)
Nov 22 04:19:08 np0005532048 neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412[312305]: [WARNING]  (312309) : All workers exited. Exiting... (0)
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.067 253665 INFO nova.virt.libvirt.driver [-] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Instance destroyed successfully.#033[00m
Nov 22 04:19:08 np0005532048 systemd[1]: libpod-d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324.scope: Deactivated successfully.
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.069 253665 DEBUG nova.objects.instance [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lazy-loading 'resources' on Instance uuid dec3a0c0-4e66-47fb-845c-42748f871bd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:08 np0005532048 podman[315260]: 2025-11-22 09:19:08.077742081 +0000 UTC m=+0.061673300 container died d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.081 253665 DEBUG nova.virt.libvirt.vif [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1266648346',display_name='tempest-ServersTestJSON-server-1266648346',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1266648346',id=52,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlHqesJZ81rHIrLZzqDDZqmgjYu5MzxRRBun28RXCGOItUHcjpLw69lsrxKRvDbiIeTcAfAS0eY1jM4zBK+YEZ0Fqn+yA8iBWGS3Ng7czuJICvlXeiMEyvgNWSqN1n7cw==',key_name='tempest-keypair-330217895',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9d1054fa34554ffa8a188984d2db6a60',ramdisk_id='',reservation_id='r-562p3oi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1156873673',owner_user_name='tempest-ServersTestJSON-1156873673-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:18:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='22a23d70ca814c9597ead334e32c08a1',uuid=dec3a0c0-4e66-47fb-845c-42748f871bd3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.082 253665 DEBUG nova.network.os_vif_util [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Converting VIF {"id": "c10e771b-271b-4855-9004-fe8ee858ec5d", "address": "fa:16:3e:f1:f2:e5", "network": {"id": "af1599cd-9805-40cb-9d20-ed7982b07412", "bridge": "br-int", "label": "tempest-ServersTestJSON-652179727-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d1054fa34554ffa8a188984d2db6a60", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10e771b-27", "ovs_interfaceid": "c10e771b-271b-4855-9004-fe8ee858ec5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.083 253665 DEBUG nova.network.os_vif_util [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f1:f2:e5,bridge_name='br-int',has_traffic_filtering=True,id=c10e771b-271b-4855-9004-fe8ee858ec5d,network=Network(af1599cd-9805-40cb-9d20-ed7982b07412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10e771b-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.083 253665 DEBUG os_vif [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f1:f2:e5,bridge_name='br-int',has_traffic_filtering=True,id=c10e771b-271b-4855-9004-fe8ee858ec5d,network=Network(af1599cd-9805-40cb-9d20-ed7982b07412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10e771b-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.085 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.085 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc10e771b-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.087 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.088 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.090 253665 INFO os_vif [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f1:f2:e5,bridge_name='br-int',has_traffic_filtering=True,id=c10e771b-271b-4855-9004-fe8ee858ec5d,network=Network(af1599cd-9805-40cb-9d20-ed7982b07412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10e771b-27')#033[00m
Nov 22 04:19:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9c46800e6c5ca462086d92c20678c674e5c07106b89d77525644fa067c6c4bcd-merged.mount: Deactivated successfully.
Nov 22 04:19:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324-userdata-shm.mount: Deactivated successfully.
Nov 22 04:19:08 np0005532048 podman[315260]: 2025-11-22 09:19:08.13781281 +0000 UTC m=+0.121744019 container cleanup d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 04:19:08 np0005532048 systemd[1]: libpod-conmon-d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324.scope: Deactivated successfully.
Nov 22 04:19:08 np0005532048 podman[315310]: 2025-11-22 09:19:08.221660587 +0000 UTC m=+0.054630729 container remove d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:19:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.232 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d604c4e1-69cf-45af-9506-aadb75dedd7d]: (4, ('Sat Nov 22 09:19:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412 (d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324)\nd237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324\nSat Nov 22 09:19:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412 (d237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324)\nd237023638750b9cd20fbe8243f8d191acaa2c9e269a61abf5b469189fb33324\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.234 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[17604fcb-25f5-4bbe-ba6f-86ce65d0d5fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.235 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf1599cd-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.237 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:08 np0005532048 kernel: tapaf1599cd-90: left promiscuous mode
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.257 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.261 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a1ddb2f0-3091-4d7b-839f-344258415d32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.277 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d2f952-d0df-4282-81c7-c3d29c8c84b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.278 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[01f67cf9-06c3-462d-8e83-8ef825c153ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.298 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[df7d101e-fe84-467b-ae89-1969c2f6c63c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597645, 'reachable_time': 22521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315326, 'error': None, 'target': 'ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.303 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-af1599cd-9805-40cb-9d20-ed7982b07412 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:19:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:08.304 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[68d09468-7b28-463d-b7fc-56208a60db57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:08 np0005532048 systemd[1]: run-netns-ovnmeta\x2daf1599cd\x2d9805\x2d40cb\x2d9d20\x2ded7982b07412.mount: Deactivated successfully.
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.575 253665 INFO nova.virt.libvirt.driver [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Deleting instance files /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3_del#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.576 253665 INFO nova.virt.libvirt.driver [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Deletion of /var/lib/nova/instances/dec3a0c0-4e66-47fb-845c-42748f871bd3_del complete#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.621 253665 INFO nova.compute.manager [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.622 253665 DEBUG oslo.service.loopingcall [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.623 253665 DEBUG nova.compute.manager [-] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.623 253665 DEBUG nova.network.neutron [-] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.696 253665 DEBUG nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received event network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.697 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.697 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.698 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.698 253665 DEBUG nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] No waiting events found dispatching network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.698 253665 WARNING nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received unexpected event network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.699 253665 DEBUG nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received event network-vif-unplugged-c10e771b-271b-4855-9004-fe8ee858ec5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.699 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.699 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.699 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.700 253665 DEBUG nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] No waiting events found dispatching network-vif-unplugged-c10e771b-271b-4855-9004-fe8ee858ec5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.700 253665 DEBUG nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received event network-vif-unplugged-c10e771b-271b-4855-9004-fe8ee858ec5d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.700 253665 DEBUG nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received event network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.701 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.701 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.702 253665 DEBUG oslo_concurrency.lockutils [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.702 253665 DEBUG nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] No waiting events found dispatching network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:08 np0005532048 nova_compute[253661]: 2025-11-22 09:19:08.702 253665 WARNING nova.compute.manager [req-8e0ea65b-0190-4eae-95b2-2ab73556712f req-8fccab31-05fe-48a1-bceb-fa25d3e11c5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received unexpected event network-vif-plugged-c10e771b-271b-4855-9004-fe8ee858ec5d for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.098 253665 INFO nova.compute.manager [None req-2819563a-46c9-4dbf-8da2-eb6e4f495104 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Unpausing#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.099 253665 DEBUG nova.objects.instance [None req-2819563a-46c9-4dbf-8da2-eb6e4f495104 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'flavor' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.134 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803149.1339312, 636b1046-fff8-4a45-8a14-04010b2f282e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.134 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:19:09 np0005532048 virtqemud[254229]: argument unsupported: QEMU guest agent is not configured
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.143 253665 DEBUG nova.virt.libvirt.guest [None req-2819563a-46c9-4dbf-8da2-eb6e4f495104 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.143 253665 DEBUG nova.compute.manager [None req-2819563a-46c9-4dbf-8da2-eb6e4f495104 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.153 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.156 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.188 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.188 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.188 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.188 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.189 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.190 253665 INFO nova.compute.manager [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Terminating instance#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.190 253665 DEBUG nova.compute.manager [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.191 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Nov 22 04:19:09 np0005532048 kernel: tap88cfebb7-b5 (unregistering): left promiscuous mode
Nov 22 04:19:09 np0005532048 NetworkManager[48920]: <info>  [1763803149.2360] device (tap88cfebb7-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:19:09 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:09Z|00525|binding|INFO|Releasing lport 88cfebb7-b545-4137-8094-3fa68a13f42b from this chassis (sb_readonly=0)
Nov 22 04:19:09 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:09Z|00526|binding|INFO|Setting lport 88cfebb7-b545-4137-8094-3fa68a13f42b down in Southbound
Nov 22 04:19:09 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:09Z|00527|binding|INFO|Removing iface tap88cfebb7-b5 ovn-installed in OVS
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.251 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.257 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:8a:3b 10.100.0.10'], port_security=['fa:16:3e:54:8a:3b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '045614f9-cfb4-4a52-996e-e880cbdf7dcd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6001c81-6c53-4678-b8e8-39c35706be23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d5505406f294eb4a17d4137cad567f1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3862dd9b-c79c-4d35-9b56-39c3500165f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0915442-85c3-4100-bdd2-9075f16a0456, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=88cfebb7-b545-4137-8094-3fa68a13f42b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:19:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.258 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 88cfebb7-b545-4137-8094-3fa68a13f42b in datapath f6001c81-6c53-4678-b8e8-39c35706be23 unbound from our chassis#033[00m
Nov 22 04:19:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.264 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6001c81-6c53-4678-b8e8-39c35706be23#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.274 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:09 np0005532048 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000036.scope: Deactivated successfully.
Nov 22 04:19:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.288 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb67f76-635a-41ac-9752-9a2fdfbc7637]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:09 np0005532048 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000036.scope: Consumed 5.088s CPU time.
Nov 22 04:19:09 np0005532048 systemd-machined[215941]: Machine qemu-62-instance-00000036 terminated.
Nov 22 04:19:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.330 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[efa6b128-0b71-4c8e-b62f-5bfef6110837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.334 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[84f13511-5753-4219-9bd0-807804064b6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.382 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9495b50b-4eff-4ee4-b467-42871806381a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 343 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 7.5 MiB/s wr, 396 op/s
Nov 22 04:19:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.414 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5b21bf68-b360-4ed2-8fea-ec8c14c028f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6001c81-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:70:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599485, 'reachable_time': 36494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315338, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.429 253665 DEBUG nova.network.neutron [-] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.432 253665 INFO nova.virt.libvirt.driver [-] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Instance destroyed successfully.#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.433 253665 DEBUG nova.objects.instance [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'resources' on Instance uuid 045614f9-cfb4-4a52-996e-e880cbdf7dcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.449 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f87fea75-90ea-4e80-870f-6e495fb8c587]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6001c81-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599500, 'tstamp': 599500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315348, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6001c81-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599504, 'tstamp': 599504}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315348, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.452 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6001c81-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.452 253665 DEBUG nova.virt.libvirt.vif [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-1',id=54,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:19:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user_name='tempest-ListServersNegativeTestJSON-920959944-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:19:05Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=045614f9-cfb4-4a52-996e-e880cbdf7dcd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.453 253665 DEBUG nova.network.os_vif_util [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "88cfebb7-b545-4137-8094-3fa68a13f42b", "address": "fa:16:3e:54:8a:3b", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88cfebb7-b5", "ovs_interfaceid": "88cfebb7-b545-4137-8094-3fa68a13f42b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.454 253665 DEBUG nova.network.os_vif_util [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:8a:3b,bridge_name='br-int',has_traffic_filtering=True,id=88cfebb7-b545-4137-8094-3fa68a13f42b,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cfebb7-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.455 253665 DEBUG os_vif [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:8a:3b,bridge_name='br-int',has_traffic_filtering=True,id=88cfebb7-b545-4137-8094-3fa68a13f42b,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cfebb7-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.457 253665 INFO nova.compute.manager [-] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Took 0.83 seconds to deallocate network for instance.#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.457 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.458 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88cfebb7-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.459 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.463 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.464 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.467 253665 INFO os_vif [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:8a:3b,bridge_name='br-int',has_traffic_filtering=True,id=88cfebb7-b545-4137-8094-3fa68a13f42b,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88cfebb7-b5')#033[00m
Nov 22 04:19:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.472 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6001c81-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.472 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.473 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6001c81-60, col_values=(('external_ids', {'iface-id': 'a0af7d9b-5116-431a-ad00-3df7641dc72f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:09.473 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.517 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.518 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.678 253665 DEBUG oslo_concurrency.processutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.970 253665 INFO nova.virt.libvirt.driver [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Deleting instance files /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd_del#033[00m
Nov 22 04:19:09 np0005532048 nova_compute[253661]: 2025-11-22 09:19:09.971 253665 INFO nova.virt.libvirt.driver [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Deletion of /var/lib/nova/instances/045614f9-cfb4-4a52-996e-e880cbdf7dcd_del complete#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.053 253665 INFO nova.compute.manager [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.053 253665 DEBUG oslo.service.loopingcall [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.054 253665 DEBUG nova.compute.manager [-] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.054 253665 DEBUG nova.network.neutron [-] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.192 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:19:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4070905071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.231 253665 DEBUG oslo_concurrency.processutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.237 253665 DEBUG nova.compute.provider_tree [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.251 253665 DEBUG nova.scheduler.client.report [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.276 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.304 253665 INFO nova.scheduler.client.report [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Deleted allocations for instance dec3a0c0-4e66-47fb-845c-42748f871bd3#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.389 253665 DEBUG oslo_concurrency.lockutils [None req-9521a228-671a-4a6b-8c7a-1fc2dd40039c 22a23d70ca814c9597ead334e32c08a1 9d1054fa34554ffa8a188984d2db6a60 - - default default] Lock "dec3a0c0-4e66-47fb-845c-42748f871bd3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.763 253665 DEBUG nova.network.neutron [-] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.789 253665 INFO nova.compute.manager [-] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Took 0.73 seconds to deallocate network for instance.#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.839 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.840 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.915 253665 DEBUG nova.compute.manager [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Received event network-vif-deleted-c10e771b-271b-4855-9004-fe8ee858ec5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.915 253665 DEBUG nova.compute.manager [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received event network-vif-unplugged-88cfebb7-b545-4137-8094-3fa68a13f42b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.915 253665 DEBUG oslo_concurrency.lockutils [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.916 253665 DEBUG oslo_concurrency.lockutils [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.916 253665 DEBUG oslo_concurrency.lockutils [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.916 253665 DEBUG nova.compute.manager [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] No waiting events found dispatching network-vif-unplugged-88cfebb7-b545-4137-8094-3fa68a13f42b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.917 253665 WARNING nova.compute.manager [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received unexpected event network-vif-unplugged-88cfebb7-b545-4137-8094-3fa68a13f42b for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.917 253665 DEBUG nova.compute.manager [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received event network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.917 253665 DEBUG oslo_concurrency.lockutils [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.917 253665 DEBUG oslo_concurrency.lockutils [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.918 253665 DEBUG oslo_concurrency.lockutils [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.918 253665 DEBUG nova.compute.manager [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] No waiting events found dispatching network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.918 253665 WARNING nova.compute.manager [req-7d3dc77d-9046-4abf-8ace-6bb77463dd8a req-55ae7da4-d960-4e19-b569-7de35b5eceeb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received unexpected event network-vif-plugged-88cfebb7-b545-4137-8094-3fa68a13f42b for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.943 253665 DEBUG oslo_concurrency.processutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:10 np0005532048 nova_compute[253661]: 2025-11-22 09:19:10.987 253665 DEBUG nova.compute.manager [req-ed7bfede-62e6-480b-8614-aa7b702c562b req-cd6e4fb3-5385-4ae3-9411-3c037c90df89 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Received event network-vif-deleted-88cfebb7-b545-4137-8094-3fa68a13f42b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:11 np0005532048 nova_compute[253661]: 2025-11-22 09:19:11.250 253665 DEBUG nova.virt.libvirt.driver [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 22 04:19:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 343 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 6.2 MiB/s wr, 361 op/s
Nov 22 04:19:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:19:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1414933995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:11 np0005532048 nova_compute[253661]: 2025-11-22 09:19:11.410 253665 DEBUG oslo_concurrency.processutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:11 np0005532048 nova_compute[253661]: 2025-11-22 09:19:11.415 253665 DEBUG nova.compute.provider_tree [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:19:11 np0005532048 nova_compute[253661]: 2025-11-22 09:19:11.433 253665 DEBUG nova.scheduler.client.report [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:19:11 np0005532048 nova_compute[253661]: 2025-11-22 09:19:11.453 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:11 np0005532048 nova_compute[253661]: 2025-11-22 09:19:11.489 253665 INFO nova.scheduler.client.report [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Deleted allocations for instance 045614f9-cfb4-4a52-996e-e880cbdf7dcd#033[00m
Nov 22 04:19:11 np0005532048 nova_compute[253661]: 2025-11-22 09:19:11.560 253665 DEBUG oslo_concurrency.lockutils [None req-5397a8e6-98f2-4097-a52c-ad7ab6f93df9 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "045614f9-cfb4-4a52-996e-e880cbdf7dcd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:12 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 22 04:19:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:19:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3512549868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:19:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:19:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3512549868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:19:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 299 MiB data, 638 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 7.0 MiB/s wr, 486 op/s
Nov 22 04:19:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:13Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:66:f7 10.100.0.10
Nov 22 04:19:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:13Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:66:f7 10.100.0.10
Nov 22 04:19:14 np0005532048 nova_compute[253661]: 2025-11-22 09:19:14.460 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.196 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.381 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "b4a5932d-6547-4c01-9c71-0907c65247a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.382 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.382 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.383 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.383 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.384 253665 INFO nova.compute.manager [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Terminating instance#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.385 253665 DEBUG nova.compute.manager [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:19:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 285 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 9.5 MiB/s rd, 4.8 MiB/s wr, 500 op/s
Nov 22 04:19:15 np0005532048 kernel: tap7612be10-c2 (unregistering): left promiscuous mode
Nov 22 04:19:15 np0005532048 NetworkManager[48920]: <info>  [1763803155.4257] device (tap7612be10-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:19:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:15Z|00528|binding|INFO|Releasing lport 7612be10-c22f-4d60-89f7-232e865b6524 from this chassis (sb_readonly=0)
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.435 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:15Z|00529|binding|INFO|Setting lport 7612be10-c22f-4d60-89f7-232e865b6524 down in Southbound
Nov 22 04:19:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:15Z|00530|binding|INFO|Removing iface tap7612be10-c2 ovn-installed in OVS
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.439 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.443 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:cc:cd 10.100.0.9'], port_security=['fa:16:3e:b3:cc:cd 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b4a5932d-6547-4c01-9c71-0907c65247a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6001c81-6c53-4678-b8e8-39c35706be23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d5505406f294eb4a17d4137cad567f1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3862dd9b-c79c-4d35-9b56-39c3500165f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0915442-85c3-4100-bdd2-9075f16a0456, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7612be10-c22f-4d60-89f7-232e865b6524) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.444 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7612be10-c22f-4d60-89f7-232e865b6524 in datapath f6001c81-6c53-4678-b8e8-39c35706be23 unbound from our chassis#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.445 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6001c81-6c53-4678-b8e8-39c35706be23#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.475 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a08bc5f5-3c16-45e2-aadd-1011a6634572]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.475 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:15 np0005532048 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000037.scope: Deactivated successfully.
Nov 22 04:19:15 np0005532048 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000037.scope: Consumed 9.091s CPU time.
Nov 22 04:19:15 np0005532048 systemd-machined[215941]: Machine qemu-63-instance-00000037 terminated.
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.511 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "64142c1c-95e0-4db4-b743-bb94c85a208f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.511 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.511 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.512 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.512 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.513 253665 INFO nova.compute.manager [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Terminating instance#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.514 253665 DEBUG nova.compute.manager [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.523 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[63b5f52d-10de-46fa-86ae-d987e455fb2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.526 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3adb0aae-6c63-4e51-9e9f-c6b382f94f63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.553 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[aac76173-fe44-49ac-bdbe-bc395f0713c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:15 np0005532048 kernel: tap11f926cd-f7 (unregistering): left promiscuous mode
Nov 22 04:19:15 np0005532048 NetworkManager[48920]: <info>  [1763803155.5629] device (tap11f926cd-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.571 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[52f13c03-8998-40c8-bafb-a750963cd18b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6001c81-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:70:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599485, 'reachable_time': 36494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315429, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:15Z|00531|binding|INFO|Releasing lport 11f926cd-f731-4de9-861e-5842f91f48df from this chassis (sb_readonly=0)
Nov 22 04:19:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:15Z|00532|binding|INFO|Setting lport 11f926cd-f731-4de9-861e-5842f91f48df down in Southbound
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.577 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:15Z|00533|binding|INFO|Removing iface tap11f926cd-f7 ovn-installed in OVS
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.585 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:54:ea 10.100.0.5'], port_security=['fa:16:3e:41:54:ea 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '64142c1c-95e0-4db4-b743-bb94c85a208f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6001c81-6c53-4678-b8e8-39c35706be23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d5505406f294eb4a17d4137cad567f1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3862dd9b-c79c-4d35-9b56-39c3500165f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0915442-85c3-4100-bdd2-9075f16a0456, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=11f926cd-f731-4de9-861e-5842f91f48df) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.595 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.596 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8f4367-7d1a-4d00-96b3-3469518e9270]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6001c81-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599500, 'tstamp': 599500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315434, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6001c81-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 599504, 'tstamp': 599504}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315434, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.598 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6001c81-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.599 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:15 np0005532048 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000038.scope: Deactivated successfully.
Nov 22 04:19:15 np0005532048 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000038.scope: Consumed 8.215s CPU time.
Nov 22 04:19:15 np0005532048 systemd-machined[215941]: Machine qemu-64-instance-00000038 terminated.
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.615 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.616 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6001c81-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.616 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.617 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6001c81-60, col_values=(('external_ids', {'iface-id': 'a0af7d9b-5116-431a-ad00-3df7641dc72f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.617 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.618 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 11f926cd-f731-4de9-861e-5842f91f48df in datapath f6001c81-6c53-4678-b8e8-39c35706be23 unbound from our chassis#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.620 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f6001c81-6c53-4678-b8e8-39c35706be23, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.620 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[798ab1ca-0188-4085-9295-709ea2a88134]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.622 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23 namespace which is not needed anymore#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.626 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.627 253665 INFO nova.virt.libvirt.driver [-] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Instance destroyed successfully.#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.627 253665 DEBUG nova.objects.instance [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'resources' on Instance uuid b4a5932d-6547-4c01-9c71-0907c65247a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.638 253665 DEBUG nova.virt.libvirt.vif [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-2',id=55,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-22T09:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user_name='tempest-ListServersNegativeTestJSON-920959944-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:19:07Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=b4a5932d-6547-4c01-9c71-0907c65247a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.638 253665 DEBUG nova.network.os_vif_util [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "7612be10-c22f-4d60-89f7-232e865b6524", "address": "fa:16:3e:b3:cc:cd", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7612be10-c2", "ovs_interfaceid": "7612be10-c22f-4d60-89f7-232e865b6524", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.639 253665 DEBUG nova.network.os_vif_util [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:cc:cd,bridge_name='br-int',has_traffic_filtering=True,id=7612be10-c22f-4d60-89f7-232e865b6524,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7612be10-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.640 253665 DEBUG os_vif [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:cc:cd,bridge_name='br-int',has_traffic_filtering=True,id=7612be10-c22f-4d60-89f7-232e865b6524,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7612be10-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.642 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7612be10-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.649 253665 DEBUG nova.compute.manager [req-228d8c7a-e3b8-430c-93f1-3f6d176178c1 req-53bfce5b-29d6-4b92-b4b9-7e806e2287ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received event network-vif-unplugged-7612be10-c22f-4d60-89f7-232e865b6524 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.649 253665 DEBUG oslo_concurrency.lockutils [req-228d8c7a-e3b8-430c-93f1-3f6d176178c1 req-53bfce5b-29d6-4b92-b4b9-7e806e2287ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.650 253665 DEBUG oslo_concurrency.lockutils [req-228d8c7a-e3b8-430c-93f1-3f6d176178c1 req-53bfce5b-29d6-4b92-b4b9-7e806e2287ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.650 253665 DEBUG oslo_concurrency.lockutils [req-228d8c7a-e3b8-430c-93f1-3f6d176178c1 req-53bfce5b-29d6-4b92-b4b9-7e806e2287ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.650 253665 DEBUG nova.compute.manager [req-228d8c7a-e3b8-430c-93f1-3f6d176178c1 req-53bfce5b-29d6-4b92-b4b9-7e806e2287ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] No waiting events found dispatching network-vif-unplugged-7612be10-c22f-4d60-89f7-232e865b6524 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.651 253665 DEBUG nova.compute.manager [req-228d8c7a-e3b8-430c-93f1-3f6d176178c1 req-53bfce5b-29d6-4b92-b4b9-7e806e2287ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received event network-vif-unplugged-7612be10-c22f-4d60-89f7-232e865b6524 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.654 253665 INFO os_vif [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:cc:cd,bridge_name='br-int',has_traffic_filtering=True,id=7612be10-c22f-4d60-89f7-232e865b6524,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7612be10-c2')#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.742 253665 INFO nova.virt.libvirt.driver [-] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Instance destroyed successfully.#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.743 253665 DEBUG nova.objects.instance [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lazy-loading 'resources' on Instance uuid 64142c1c-95e0-4db4-b743-bb94c85a208f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.757 253665 DEBUG nova.virt.libvirt.vif [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-724322260',display_name='tempest-ListServersNegativeTestJSON-server-724322260-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-724322260-3',id=56,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-11-22T09:19:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1d5505406f294eb4a17d4137cad567f1',ramdisk_id='',reservation_id='r-jiur89cu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-920959944',owner_user_name='tempest-ListServersNegativeTestJSON-920959944-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:19:07Z,user_data=None,user_id='e5f9c3cac3ab4d74a7aeffd50c07da03',uuid=64142c1c-95e0-4db4-b743-bb94c85a208f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.758 253665 DEBUG nova.network.os_vif_util [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converting VIF {"id": "11f926cd-f731-4de9-861e-5842f91f48df", "address": "fa:16:3e:41:54:ea", "network": {"id": "f6001c81-6c53-4678-b8e8-39c35706be23", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1768893950-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1d5505406f294eb4a17d4137cad567f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11f926cd-f7", "ovs_interfaceid": "11f926cd-f731-4de9-861e-5842f91f48df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.759 253665 DEBUG nova.network.os_vif_util [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:54:ea,bridge_name='br-int',has_traffic_filtering=True,id=11f926cd-f731-4de9-861e-5842f91f48df,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f926cd-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.759 253665 DEBUG os_vif [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:54:ea,bridge_name='br-int',has_traffic_filtering=True,id=11f926cd-f731-4de9-861e-5842f91f48df,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f926cd-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.761 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.761 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11f926cd-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.764 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.767 253665 DEBUG nova.compute.manager [req-fecab983-d7eb-4ca3-a437-fe5e80d2d6ad req-64b9a375-7590-44fd-b900-43ae4a0b7d4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received event network-vif-unplugged-11f926cd-f731-4de9-861e-5842f91f48df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.768 253665 DEBUG oslo_concurrency.lockutils [req-fecab983-d7eb-4ca3-a437-fe5e80d2d6ad req-64b9a375-7590-44fd-b900-43ae4a0b7d4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.768 253665 DEBUG oslo_concurrency.lockutils [req-fecab983-d7eb-4ca3-a437-fe5e80d2d6ad req-64b9a375-7590-44fd-b900-43ae4a0b7d4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.768 253665 DEBUG oslo_concurrency.lockutils [req-fecab983-d7eb-4ca3-a437-fe5e80d2d6ad req-64b9a375-7590-44fd-b900-43ae4a0b7d4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.769 253665 DEBUG nova.compute.manager [req-fecab983-d7eb-4ca3-a437-fe5e80d2d6ad req-64b9a375-7590-44fd-b900-43ae4a0b7d4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] No waiting events found dispatching network-vif-unplugged-11f926cd-f731-4de9-861e-5842f91f48df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.769 253665 DEBUG nova.compute.manager [req-fecab983-d7eb-4ca3-a437-fe5e80d2d6ad req-64b9a375-7590-44fd-b900-43ae4a0b7d4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received event network-vif-unplugged-11f926cd-f731-4de9-861e-5842f91f48df for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.770 253665 INFO os_vif [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:54:ea,bridge_name='br-int',has_traffic_filtering=True,id=11f926cd-f731-4de9-861e-5842f91f48df,network=Network(f6001c81-6c53-4678-b8e8-39c35706be23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11f926cd-f7')#033[00m
Nov 22 04:19:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:15Z|00534|binding|INFO|Releasing lport a0af7d9b-5116-431a-ad00-3df7641dc72f from this chassis (sb_readonly=0)
Nov 22 04:19:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:15Z|00535|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 04:19:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:15Z|00536|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 04:19:15 np0005532048 neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23[314920]: [NOTICE]   (314926) : haproxy version is 2.8.14-c23fe91
Nov 22 04:19:15 np0005532048 neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23[314920]: [NOTICE]   (314926) : path to executable is /usr/sbin/haproxy
Nov 22 04:19:15 np0005532048 neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23[314920]: [WARNING]  (314926) : Exiting Master process...
Nov 22 04:19:15 np0005532048 neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23[314920]: [ALERT]    (314926) : Current worker (314928) exited with code 143 (Terminated)
Nov 22 04:19:15 np0005532048 neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23[314920]: [WARNING]  (314926) : All workers exited. Exiting... (0)
Nov 22 04:19:15 np0005532048 systemd[1]: libpod-ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9.scope: Deactivated successfully.
Nov 22 04:19:15 np0005532048 podman[315484]: 2025-11-22 09:19:15.795971619 +0000 UTC m=+0.062283914 container died ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:19:15 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9-userdata-shm.mount: Deactivated successfully.
Nov 22 04:19:15 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0aea65242036820cb6858ded2c2732f7a57b937b65a8c1c3d87746de9de9af3b-merged.mount: Deactivated successfully.
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.863 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:15 np0005532048 podman[315484]: 2025-11-22 09:19:15.865107069 +0000 UTC m=+0.131419364 container cleanup ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:19:15 np0005532048 systemd[1]: libpod-conmon-ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9.scope: Deactivated successfully.
Nov 22 04:19:15 np0005532048 podman[315543]: 2025-11-22 09:19:15.948893445 +0000 UTC m=+0.057788115 container remove ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.955 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae1a867-6cf1-4bc5-9628-ffef9f867218]: (4, ('Sat Nov 22 09:19:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23 (ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9)\nee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9\nSat Nov 22 09:19:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23 (ee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9)\nee5bfe1464718cb830b1d56fe18e627343e8ea318be2aa8ec5fd384cb4633fd9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.957 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a70f84-2d5c-47c2-a44c-574eb3b289e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.958 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6001c81-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:15 np0005532048 kernel: tapf6001c81-60: left promiscuous mode
Nov 22 04:19:15 np0005532048 nova_compute[253661]: 2025-11-22 09:19:15.978 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.983 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5f9e2a0a-1a8d-40eb-9877-2d3a52328e3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:15.999 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c503a568-a95b-46d6-9936-a43ad13b13c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:16.000 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[873f4760-909b-4827-b6bc-5bc8dd075b81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:16.015 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[08278100-ec31-4231-a982-0decbfc53a53]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599474, 'reachable_time': 27445, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315559, 'error': None, 'target': 'ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:16 np0005532048 systemd[1]: run-netns-ovnmeta\x2df6001c81\x2d6c53\x2d4678\x2db8e8\x2d39c35706be23.mount: Deactivated successfully.
Nov 22 04:19:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:16.019 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f6001c81-6c53-4678-b8e8-39c35706be23 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:19:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:16.019 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[714b9bdd-4bb1-48f9-8778-63de7d790e25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:16 np0005532048 nova_compute[253661]: 2025-11-22 09:19:16.194 253665 INFO nova.virt.libvirt.driver [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Deleting instance files /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1_del#033[00m
Nov 22 04:19:16 np0005532048 nova_compute[253661]: 2025-11-22 09:19:16.195 253665 INFO nova.virt.libvirt.driver [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Deletion of /var/lib/nova/instances/b4a5932d-6547-4c01-9c71-0907c65247a1_del complete#033[00m
Nov 22 04:19:16 np0005532048 nova_compute[253661]: 2025-11-22 09:19:16.241 253665 INFO nova.compute.manager [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:19:16 np0005532048 nova_compute[253661]: 2025-11-22 09:19:16.242 253665 DEBUG oslo.service.loopingcall [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:19:16 np0005532048 nova_compute[253661]: 2025-11-22 09:19:16.243 253665 DEBUG nova.compute.manager [-] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:19:16 np0005532048 nova_compute[253661]: 2025-11-22 09:19:16.243 253665 DEBUG nova.network.neutron [-] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:19:16 np0005532048 nova_compute[253661]: 2025-11-22 09:19:16.291 253665 INFO nova.virt.libvirt.driver [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Deleting instance files /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f_del#033[00m
Nov 22 04:19:16 np0005532048 nova_compute[253661]: 2025-11-22 09:19:16.292 253665 INFO nova.virt.libvirt.driver [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Deletion of /var/lib/nova/instances/64142c1c-95e0-4db4-b743-bb94c85a208f_del complete#033[00m
Nov 22 04:19:16 np0005532048 nova_compute[253661]: 2025-11-22 09:19:16.351 253665 INFO nova.compute.manager [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:19:16 np0005532048 nova_compute[253661]: 2025-11-22 09:19:16.352 253665 DEBUG oslo.service.loopingcall [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:19:16 np0005532048 nova_compute[253661]: 2025-11-22 09:19:16.352 253665 DEBUG nova.compute.manager [-] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:19:16 np0005532048 nova_compute[253661]: 2025-11-22 09:19:16.352 253665 DEBUG nova.network.neutron [-] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:19:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 257 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 2.1 MiB/s wr, 407 op/s
Nov 22 04:19:18 np0005532048 nova_compute[253661]: 2025-11-22 09:19:18.248 253665 DEBUG nova.compute.manager [req-35b243d5-979f-4a3a-880f-3e91e69c6ab2 req-05a9b164-4281-4a79-a2ca-311bce3d06ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received event network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:18 np0005532048 nova_compute[253661]: 2025-11-22 09:19:18.249 253665 DEBUG oslo_concurrency.lockutils [req-35b243d5-979f-4a3a-880f-3e91e69c6ab2 req-05a9b164-4281-4a79-a2ca-311bce3d06ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:18 np0005532048 nova_compute[253661]: 2025-11-22 09:19:18.250 253665 DEBUG oslo_concurrency.lockutils [req-35b243d5-979f-4a3a-880f-3e91e69c6ab2 req-05a9b164-4281-4a79-a2ca-311bce3d06ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:18 np0005532048 nova_compute[253661]: 2025-11-22 09:19:18.250 253665 DEBUG oslo_concurrency.lockutils [req-35b243d5-979f-4a3a-880f-3e91e69c6ab2 req-05a9b164-4281-4a79-a2ca-311bce3d06ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:18 np0005532048 nova_compute[253661]: 2025-11-22 09:19:18.251 253665 DEBUG nova.compute.manager [req-35b243d5-979f-4a3a-880f-3e91e69c6ab2 req-05a9b164-4281-4a79-a2ca-311bce3d06ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] No waiting events found dispatching network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:18 np0005532048 nova_compute[253661]: 2025-11-22 09:19:18.251 253665 WARNING nova.compute.manager [req-35b243d5-979f-4a3a-880f-3e91e69c6ab2 req-05a9b164-4281-4a79-a2ca-311bce3d06ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received unexpected event network-vif-plugged-7612be10-c22f-4d60-89f7-232e865b6524 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:19:18 np0005532048 nova_compute[253661]: 2025-11-22 09:19:18.324 253665 DEBUG nova.compute.manager [req-2aa309de-fb27-4ad4-95e5-d194e46b0d95 req-2b16e250-8fa6-4f54-a50a-c44827da28a9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received event network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:18 np0005532048 nova_compute[253661]: 2025-11-22 09:19:18.324 253665 DEBUG oslo_concurrency.lockutils [req-2aa309de-fb27-4ad4-95e5-d194e46b0d95 req-2b16e250-8fa6-4f54-a50a-c44827da28a9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:18 np0005532048 nova_compute[253661]: 2025-11-22 09:19:18.325 253665 DEBUG oslo_concurrency.lockutils [req-2aa309de-fb27-4ad4-95e5-d194e46b0d95 req-2b16e250-8fa6-4f54-a50a-c44827da28a9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:18 np0005532048 nova_compute[253661]: 2025-11-22 09:19:18.325 253665 DEBUG oslo_concurrency.lockutils [req-2aa309de-fb27-4ad4-95e5-d194e46b0d95 req-2b16e250-8fa6-4f54-a50a-c44827da28a9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:18 np0005532048 nova_compute[253661]: 2025-11-22 09:19:18.325 253665 DEBUG nova.compute.manager [req-2aa309de-fb27-4ad4-95e5-d194e46b0d95 req-2b16e250-8fa6-4f54-a50a-c44827da28a9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] No waiting events found dispatching network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:19:18 np0005532048 nova_compute[253661]: 2025-11-22 09:19:18.325 253665 WARNING nova.compute.manager [req-2aa309de-fb27-4ad4-95e5-d194e46b0d95 req-2b16e250-8fa6-4f54-a50a-c44827da28a9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received unexpected event network-vif-plugged-11f926cd-f731-4de9-861e-5842f91f48df for instance with vm_state active and task_state deleting.
Nov 22 04:19:18 np0005532048 nova_compute[253661]: 2025-11-22 09:19:18.884 253665 DEBUG nova.network.neutron [-] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:19:18 np0005532048 nova_compute[253661]: 2025-11-22 09:19:18.901 253665 INFO nova.compute.manager [-] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Took 2.55 seconds to deallocate network for instance.
Nov 22 04:19:18 np0005532048 nova_compute[253661]: 2025-11-22 09:19:18.943 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:19:18 np0005532048 nova_compute[253661]: 2025-11-22 09:19:18.944 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:19:19 np0005532048 nova_compute[253661]: 2025-11-22 09:19:19.026 253665 DEBUG nova.network.neutron [-] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:19:19 np0005532048 nova_compute[253661]: 2025-11-22 09:19:19.040 253665 INFO nova.compute.manager [-] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Took 2.80 seconds to deallocate network for instance.
Nov 22 04:19:19 np0005532048 nova_compute[253661]: 2025-11-22 09:19:19.057 253665 DEBUG oslo_concurrency.processutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:19:19 np0005532048 nova_compute[253661]: 2025-11-22 09:19:19.097 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:19:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 202 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 2.2 MiB/s wr, 413 op/s
Nov 22 04:19:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:19:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3399072842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:19 np0005532048 nova_compute[253661]: 2025-11-22 09:19:19.573 253665 DEBUG oslo_concurrency.processutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:19:19 np0005532048 nova_compute[253661]: 2025-11-22 09:19:19.580 253665 DEBUG nova.compute.provider_tree [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:19:19 np0005532048 nova_compute[253661]: 2025-11-22 09:19:19.594 253665 DEBUG nova.scheduler.client.report [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:19:19 np0005532048 nova_compute[253661]: 2025-11-22 09:19:19.617 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:19 np0005532048 nova_compute[253661]: 2025-11-22 09:19:19.620 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:19:19 np0005532048 nova_compute[253661]: 2025-11-22 09:19:19.677 253665 INFO nova.scheduler.client.report [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Deleted allocations for instance 64142c1c-95e0-4db4-b743-bb94c85a208f
Nov 22 04:19:19 np0005532048 nova_compute[253661]: 2025-11-22 09:19:19.724 253665 DEBUG oslo_concurrency.processutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:19:19 np0005532048 nova_compute[253661]: 2025-11-22 09:19:19.760 253665 DEBUG oslo_concurrency.lockutils [None req-a8f833a1-f1ce-43b2-bc6c-b9030f47b6b5 e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "64142c1c-95e0-4db4-b743-bb94c85a208f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:19:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1825166211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:20 np0005532048 nova_compute[253661]: 2025-11-22 09:19:20.187 253665 DEBUG oslo_concurrency.processutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:19:20 np0005532048 nova_compute[253661]: 2025-11-22 09:19:20.193 253665 DEBUG nova.compute.provider_tree [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:19:20 np0005532048 nova_compute[253661]: 2025-11-22 09:19:20.198 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:20 np0005532048 nova_compute[253661]: 2025-11-22 09:19:20.206 253665 DEBUG nova.scheduler.client.report [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:19:20 np0005532048 nova_compute[253661]: 2025-11-22 09:19:20.222 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:20 np0005532048 nova_compute[253661]: 2025-11-22 09:19:20.248 253665 INFO nova.scheduler.client.report [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Deleted allocations for instance b4a5932d-6547-4c01-9c71-0907c65247a1
Nov 22 04:19:20 np0005532048 nova_compute[253661]: 2025-11-22 09:19:20.337 253665 DEBUG oslo_concurrency.lockutils [None req-7bce4473-eb59-4a04-add6-3d6e8587e62e e5f9c3cac3ab4d74a7aeffd50c07da03 1d5505406f294eb4a17d4137cad567f1 - - default default] Lock "b4a5932d-6547-4c01-9c71-0907c65247a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:20Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:38:8e 10.100.0.4
Nov 22 04:19:20 np0005532048 nova_compute[253661]: 2025-11-22 09:19:20.429 253665 DEBUG nova.compute.manager [req-26f570e4-da80-4871-b3a9-9be8d4d4d623 req-1b838af3-fbaa-479d-a88c-5df653826789 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Received event network-vif-deleted-7612be10-c22f-4d60-89f7-232e865b6524 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:19:20 np0005532048 nova_compute[253661]: 2025-11-22 09:19:20.430 253665 DEBUG nova.compute.manager [req-3469098a-c9ad-401a-a91d-4401cd10105d req-f95f8c14-e27a-48bb-835e-c5e4c7174e3f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Received event network-vif-deleted-11f926cd-f731-4de9-861e-5842f91f48df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:19:20 np0005532048 nova_compute[253661]: 2025-11-22 09:19:20.763 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:21 np0005532048 nova_compute[253661]: 2025-11-22 09:19:21.259 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:19:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 202 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 295 op/s
Nov 22 04:19:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:22 np0005532048 nova_compute[253661]: 2025-11-22 09:19:22.297 253665 DEBUG nova.virt.libvirt.driver [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 04:19:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:19:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:19:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:19:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:19:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:19:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:19:23 np0005532048 nova_compute[253661]: 2025-11-22 09:19:23.062 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803148.0608265, dec3a0c0-4e66-47fb-845c-42748f871bd3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:19:23 np0005532048 nova_compute[253661]: 2025-11-22 09:19:23.062 253665 INFO nova.compute.manager [-] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] VM Stopped (Lifecycle Event)
Nov 22 04:19:23 np0005532048 nova_compute[253661]: 2025-11-22 09:19:23.081 253665 DEBUG nova.compute.manager [None req-a39fc479-51cd-47bf-a80b-b2c957638f21 - - - - - -] [instance: dec3a0c0-4e66-47fb-845c-42748f871bd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:19:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 202 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 2.2 MiB/s wr, 321 op/s
Nov 22 04:19:24 np0005532048 nova_compute[253661]: 2025-11-22 09:19:24.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:19:24 np0005532048 nova_compute[253661]: 2025-11-22 09:19:24.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:19:24 np0005532048 nova_compute[253661]: 2025-11-22 09:19:24.424 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803149.422696, 045614f9-cfb4-4a52-996e-e880cbdf7dcd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:19:24 np0005532048 nova_compute[253661]: 2025-11-22 09:19:24.424 253665 INFO nova.compute.manager [-] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] VM Stopped (Lifecycle Event)
Nov 22 04:19:24 np0005532048 nova_compute[253661]: 2025-11-22 09:19:24.435 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:19:24 np0005532048 nova_compute[253661]: 2025-11-22 09:19:24.436 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:19:24 np0005532048 nova_compute[253661]: 2025-11-22 09:19:24.436 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 04:19:24 np0005532048 nova_compute[253661]: 2025-11-22 09:19:24.452 253665 DEBUG nova.compute.manager [None req-ebec4ebf-1989-4978-a43c-7e797b5ab535 - - - - - -] [instance: 045614f9-cfb4-4a52-996e-e880cbdf7dcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:19:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:24Z|00537|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 04:19:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:24Z|00538|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 04:19:24 np0005532048 nova_compute[253661]: 2025-11-22 09:19:24.530 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:24 np0005532048 kernel: tap8d030734-0e (unregistering): left promiscuous mode
Nov 22 04:19:24 np0005532048 NetworkManager[48920]: <info>  [1763803164.6047] device (tap8d030734-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:19:24 np0005532048 nova_compute[253661]: 2025-11-22 09:19:24.614 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:24Z|00539|binding|INFO|Releasing lport 8d030734-0e50-4fca-a432-cc2d1c2c9dea from this chassis (sb_readonly=0)
Nov 22 04:19:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:24Z|00540|binding|INFO|Setting lport 8d030734-0e50-4fca-a432-cc2d1c2c9dea down in Southbound
Nov 22 04:19:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:24Z|00541|binding|INFO|Removing iface tap8d030734-0e ovn-installed in OVS
Nov 22 04:19:24 np0005532048 nova_compute[253661]: 2025-11-22 09:19:24.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.625 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:66:f7 10.100.0.10'], port_security=['fa:16:3e:8b:66:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2f0d9dce-1900-41c4-9b69-7e46f34dde81', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8d030734-0e50-4fca-a432-cc2d1c2c9dea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:19:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.627 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8d030734-0e50-4fca-a432-cc2d1c2c9dea in datapath d93e3720-b00d-41f5-8283-164e9f857d24 unbound from our chassis
Nov 22 04:19:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.628 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d93e3720-b00d-41f5-8283-164e9f857d24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:19:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.630 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[781d85fe-750b-410b-af5f-be0899059fcd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.631 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace which is not needed anymore
Nov 22 04:19:24 np0005532048 nova_compute[253661]: 2025-11-22 09:19:24.640 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:24 np0005532048 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000035.scope: Deactivated successfully.
Nov 22 04:19:24 np0005532048 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000035.scope: Consumed 14.643s CPU time.
Nov 22 04:19:24 np0005532048 systemd-machined[215941]: Machine qemu-60-instance-00000035 terminated.
Nov 22 04:19:24 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[314275]: [NOTICE]   (314299) : haproxy version is 2.8.14-c23fe91
Nov 22 04:19:24 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[314275]: [NOTICE]   (314299) : path to executable is /usr/sbin/haproxy
Nov 22 04:19:24 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[314275]: [WARNING]  (314299) : Exiting Master process...
Nov 22 04:19:24 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[314275]: [WARNING]  (314299) : Exiting Master process...
Nov 22 04:19:24 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[314275]: [ALERT]    (314299) : Current worker (314301) exited with code 143 (Terminated)
Nov 22 04:19:24 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[314275]: [WARNING]  (314299) : All workers exited. Exiting... (0)
Nov 22 04:19:24 np0005532048 systemd[1]: libpod-0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19.scope: Deactivated successfully.
Nov 22 04:19:24 np0005532048 podman[315627]: 2025-11-22 09:19:24.806018768 +0000 UTC m=+0.052861555 container died 0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:19:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19-userdata-shm.mount: Deactivated successfully.
Nov 22 04:19:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7e78995b3caad89fb07d376941b58acfb371f548ff3ea9aaf066112a811b999c-merged.mount: Deactivated successfully.
Nov 22 04:19:24 np0005532048 nova_compute[253661]: 2025-11-22 09:19:24.841 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:24 np0005532048 podman[315627]: 2025-11-22 09:19:24.856414323 +0000 UTC m=+0.103257070 container cleanup 0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:19:24 np0005532048 systemd[1]: libpod-conmon-0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19.scope: Deactivated successfully.
Nov 22 04:19:24 np0005532048 podman[315667]: 2025-11-22 09:19:24.932910182 +0000 UTC m=+0.048417398 container remove 0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:19:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.941 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a978dffd-255e-4af8-a31c-a524a74d6aa2]: (4, ('Sat Nov 22 09:19:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19)\n0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19\nSat Nov 22 09:19:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19)\n0b595248591a5c4f794e208bc6399b69bcb6272f36fe3d65d5d9b8e370673d19\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.943 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[89796764-3892-4924-afc5-5fbb15981bdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.945 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:19:24 np0005532048 nova_compute[253661]: 2025-11-22 09:19:24.946 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:24 np0005532048 kernel: tapd93e3720-b0: left promiscuous mode
Nov 22 04:19:24 np0005532048 nova_compute[253661]: 2025-11-22 09:19:24.966 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.970 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a873c037-e934-4e4f-aea8-de465e703e51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.993 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eca22bb1-21af-4abc-8b52-e5451b056aa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:24.994 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ec252189-6ad4-4595-84e3-46cfc282d99b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:25.013 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[acd85d4e-66d5-4c3e-9e33-aa2a3e9bebe1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598972, 'reachable_time': 20149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315687, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:25 np0005532048 systemd[1]: run-netns-ovnmeta\x2dd93e3720\x2db00d\x2d41f5\x2d8283\x2d164e9f857d24.mount: Deactivated successfully.
Nov 22 04:19:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:25.016 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:19:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:25.016 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[277d640e-63ea-4da3-89d6-59a3b92ca09d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:25 np0005532048 nova_compute[253661]: 2025-11-22 09:19:25.133 253665 DEBUG nova.compute.manager [req-88fd167d-d31f-4ef9-ab02-4526f31fac3d req-3718a320-c695-41a3-a61e-c6bc573f4355 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received event network-vif-unplugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:25 np0005532048 nova_compute[253661]: 2025-11-22 09:19:25.133 253665 DEBUG oslo_concurrency.lockutils [req-88fd167d-d31f-4ef9-ab02-4526f31fac3d req-3718a320-c695-41a3-a61e-c6bc573f4355 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:25 np0005532048 nova_compute[253661]: 2025-11-22 09:19:25.133 253665 DEBUG oslo_concurrency.lockutils [req-88fd167d-d31f-4ef9-ab02-4526f31fac3d req-3718a320-c695-41a3-a61e-c6bc573f4355 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:25 np0005532048 nova_compute[253661]: 2025-11-22 09:19:25.134 253665 DEBUG oslo_concurrency.lockutils [req-88fd167d-d31f-4ef9-ab02-4526f31fac3d req-3718a320-c695-41a3-a61e-c6bc573f4355 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:25 np0005532048 nova_compute[253661]: 2025-11-22 09:19:25.134 253665 DEBUG nova.compute.manager [req-88fd167d-d31f-4ef9-ab02-4526f31fac3d req-3718a320-c695-41a3-a61e-c6bc573f4355 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] No waiting events found dispatching network-vif-unplugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:25 np0005532048 nova_compute[253661]: 2025-11-22 09:19:25.134 253665 WARNING nova.compute.manager [req-88fd167d-d31f-4ef9-ab02-4526f31fac3d req-3718a320-c695-41a3-a61e-c6bc573f4355 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received unexpected event network-vif-unplugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea for instance with vm_state active and task_state powering-off.#033[00m
Nov 22 04:19:25 np0005532048 nova_compute[253661]: 2025-11-22 09:19:25.200 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:25 np0005532048 nova_compute[253661]: 2025-11-22 09:19:25.314 253665 INFO nova.virt.libvirt.driver [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance shutdown successfully after 24 seconds.#033[00m
Nov 22 04:19:25 np0005532048 nova_compute[253661]: 2025-11-22 09:19:25.319 253665 INFO nova.virt.libvirt.driver [-] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance destroyed successfully.#033[00m
Nov 22 04:19:25 np0005532048 nova_compute[253661]: 2025-11-22 09:19:25.319 253665 DEBUG nova.objects.instance [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'numa_topology' on Instance uuid 2f0d9dce-1900-41c4-9b69-7e46f34dde81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:25 np0005532048 nova_compute[253661]: 2025-11-22 09:19:25.330 253665 DEBUG nova.compute.manager [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:25 np0005532048 nova_compute[253661]: 2025-11-22 09:19:25.371 253665 DEBUG oslo_concurrency.lockutils [None req-bde7f2a3-8d7c-4b48-88ae-9c6a01c497d7 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 202 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 215 op/s
Nov 22 04:19:25 np0005532048 nova_compute[253661]: 2025-11-22 09:19:25.764 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:25 np0005532048 nova_compute[253661]: 2025-11-22 09:19:25.969 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:25 np0005532048 nova_compute[253661]: 2025-11-22 09:19:25.989 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:19:25 np0005532048 nova_compute[253661]: 2025-11-22 09:19:25.990 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:19:25 np0005532048 nova_compute[253661]: 2025-11-22 09:19:25.990 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.265 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.265 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.265 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.266 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.307 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:26 np0005532048 podman[315688]: 2025-11-22 09:19:26.383688813 +0000 UTC m=+0.061088056 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 04:19:26 np0005532048 podman[315690]: 2025-11-22 09:19:26.416327446 +0000 UTC m=+0.090842849 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1984988463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.760 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.801816) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803166801867, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 777, "num_deletes": 259, "total_data_size": 873398, "memory_usage": 887832, "flush_reason": "Manual Compaction"}
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803166811209, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 864208, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33041, "largest_seqno": 33817, "table_properties": {"data_size": 860269, "index_size": 1655, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9443, "raw_average_key_size": 19, "raw_value_size": 852003, "raw_average_value_size": 1771, "num_data_blocks": 73, "num_entries": 481, "num_filter_entries": 481, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803114, "oldest_key_time": 1763803114, "file_creation_time": 1763803166, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 9432 microseconds, and 4483 cpu microseconds.
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.811250) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 864208 bytes OK
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.811275) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.812909) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.812926) EVENT_LOG_v1 {"time_micros": 1763803166812919, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.812951) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 869368, prev total WAL file size 869368, number of live WAL files 2.
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.813620) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303032' seq:72057594037927935, type:22 .. '6C6F676D0031323535' seq:0, type:0; will stop at (end)
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(843KB)], [71(8668KB)]
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803166813700, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9740395, "oldest_snapshot_seqno": -1}
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.860 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.860 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.867 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.867 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5756 keys, 9631814 bytes, temperature: kUnknown
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803166886559, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9631814, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9590700, "index_size": 25644, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 145743, "raw_average_key_size": 25, "raw_value_size": 9484453, "raw_average_value_size": 1647, "num_data_blocks": 1042, "num_entries": 5756, "num_filter_entries": 5756, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803166, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.886893) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9631814 bytes
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.888551) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.5 rd, 132.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.5 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(22.4) write-amplify(11.1) OK, records in: 6291, records dropped: 535 output_compression: NoCompression
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.888570) EVENT_LOG_v1 {"time_micros": 1763803166888561, "job": 40, "event": "compaction_finished", "compaction_time_micros": 72970, "compaction_time_cpu_micros": 33065, "output_level": 6, "num_output_files": 1, "total_output_size": 9631814, "num_input_records": 6291, "num_output_records": 5756, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803166888845, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803166890688, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.813524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.890719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.890725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.890726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.890728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:19:26 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:19:26.890729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.977 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.978 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.978 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.978 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.979 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.980 253665 INFO nova.compute.manager [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Terminating instance#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.981 253665 DEBUG nova.compute.manager [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.986 253665 INFO nova.virt.libvirt.driver [-] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Instance destroyed successfully.#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.987 253665 DEBUG nova.objects.instance [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'resources' on Instance uuid 2f0d9dce-1900-41c4-9b69-7e46f34dde81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.998 253665 DEBUG nova.virt.libvirt.vif [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:18:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1545163364',display_name='tempest-DeleteServersTestJSON-server-1545163364',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1545163364',id=53,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-0l9kcni8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:19:25Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=2f0d9dce-1900-41c4-9b69-7e46f34dde81,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:19:26 np0005532048 nova_compute[253661]: 2025-11-22 09:19:26.999 253665 DEBUG nova.network.os_vif_util [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "address": "fa:16:3e:8b:66:f7", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d030734-0e", "ovs_interfaceid": "8d030734-0e50-4fca-a432-cc2d1c2c9dea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.000 253665 DEBUG nova.network.os_vif_util [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:66:f7,bridge_name='br-int',has_traffic_filtering=True,id=8d030734-0e50-4fca-a432-cc2d1c2c9dea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d030734-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.000 253665 DEBUG os_vif [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:66:f7,bridge_name='br-int',has_traffic_filtering=True,id=8d030734-0e50-4fca-a432-cc2d1c2c9dea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d030734-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.003 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.003 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d030734-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.005 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.006 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.007 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.009 253665 INFO os_vif [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:66:f7,bridge_name='br-int',has_traffic_filtering=True,id=8d030734-0e50-4fca-a432-cc2d1c2c9dea,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d030734-0e')#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.084 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.086 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3844MB free_disk=59.89710998535156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.086 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.086 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.167 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 636b1046-fff8-4a45-8a14-04010b2f282e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.168 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 2f0d9dce-1900-41c4-9b69-7e46f34dde81 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.168 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.168 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.243 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 202 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 726 KiB/s rd, 594 KiB/s wr, 126 op/s
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.452 253665 DEBUG nova.compute.manager [req-69e7e659-69e8-4a8c-b6d0-200ac6bb184d req-604c4081-615f-45e6-b606-61a7e5484619 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received event network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.453 253665 DEBUG oslo_concurrency.lockutils [req-69e7e659-69e8-4a8c-b6d0-200ac6bb184d req-604c4081-615f-45e6-b606-61a7e5484619 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.454 253665 DEBUG oslo_concurrency.lockutils [req-69e7e659-69e8-4a8c-b6d0-200ac6bb184d req-604c4081-615f-45e6-b606-61a7e5484619 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.454 253665 DEBUG oslo_concurrency.lockutils [req-69e7e659-69e8-4a8c-b6d0-200ac6bb184d req-604c4081-615f-45e6-b606-61a7e5484619 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.454 253665 DEBUG nova.compute.manager [req-69e7e659-69e8-4a8c-b6d0-200ac6bb184d req-604c4081-615f-45e6-b606-61a7e5484619 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] No waiting events found dispatching network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.454 253665 WARNING nova.compute.manager [req-69e7e659-69e8-4a8c-b6d0-200ac6bb184d req-604c4081-615f-45e6-b606-61a7e5484619 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received unexpected event network-vif-plugged-8d030734-0e50-4fca-a432-cc2d1c2c9dea for instance with vm_state stopped and task_state deleting.#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.523 253665 INFO nova.virt.libvirt.driver [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Deleting instance files /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81_del#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.524 253665 INFO nova.virt.libvirt.driver [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Deletion of /var/lib/nova/instances/2f0d9dce-1900-41c4-9b69-7e46f34dde81_del complete#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.595 253665 INFO nova.compute.manager [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Took 0.61 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.596 253665 DEBUG oslo.service.loopingcall [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.596 253665 DEBUG nova.compute.manager [-] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.596 253665 DEBUG nova.network.neutron [-] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:19:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:19:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3422326734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.723 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.730 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.744 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.790 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:19:27 np0005532048 nova_compute[253661]: 2025-11-22 09:19:27.790 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:27.961 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:27.962 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:27.964 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:28 np0005532048 nova_compute[253661]: 2025-11-22 09:19:28.436 253665 DEBUG nova.network.neutron [-] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:28 np0005532048 nova_compute[253661]: 2025-11-22 09:19:28.451 253665 INFO nova.compute.manager [-] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Took 0.85 seconds to deallocate network for instance.#033[00m
Nov 22 04:19:28 np0005532048 nova_compute[253661]: 2025-11-22 09:19:28.502 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:28 np0005532048 nova_compute[253661]: 2025-11-22 09:19:28.503 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:28 np0005532048 nova_compute[253661]: 2025-11-22 09:19:28.576 253665 DEBUG oslo_concurrency.processutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:28 np0005532048 nova_compute[253661]: 2025-11-22 09:19:28.790 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:19:28 np0005532048 nova_compute[253661]: 2025-11-22 09:19:28.791 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:19:28 np0005532048 nova_compute[253661]: 2025-11-22 09:19:28.792 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:19:28 np0005532048 nova_compute[253661]: 2025-11-22 09:19:28.802 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:28 np0005532048 nova_compute[253661]: 2025-11-22 09:19:28.834 253665 DEBUG oslo_concurrency.lockutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:28 np0005532048 nova_compute[253661]: 2025-11-22 09:19:28.835 253665 DEBUG oslo_concurrency.lockutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:28 np0005532048 nova_compute[253661]: 2025-11-22 09:19:28.835 253665 INFO nova.compute.manager [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Rebooting instance#033[00m
Nov 22 04:19:28 np0005532048 nova_compute[253661]: 2025-11-22 09:19:28.849 253665 DEBUG oslo_concurrency.lockutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:19:28 np0005532048 nova_compute[253661]: 2025-11-22 09:19:28.849 253665 DEBUG oslo_concurrency.lockutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquired lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:19:28 np0005532048 nova_compute[253661]: 2025-11-22 09:19:28.850 253665 DEBUG nova.network.neutron [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:19:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:19:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1364095659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:29 np0005532048 nova_compute[253661]: 2025-11-22 09:19:29.103 253665 DEBUG oslo_concurrency.processutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:29 np0005532048 nova_compute[253661]: 2025-11-22 09:19:29.109 253665 DEBUG nova.compute.provider_tree [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:19:29 np0005532048 nova_compute[253661]: 2025-11-22 09:19:29.129 253665 DEBUG nova.scheduler.client.report [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:19:29 np0005532048 nova_compute[253661]: 2025-11-22 09:19:29.151 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:29 np0005532048 nova_compute[253661]: 2025-11-22 09:19:29.222 253665 INFO nova.scheduler.client.report [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Deleted allocations for instance 2f0d9dce-1900-41c4-9b69-7e46f34dde81#033[00m
Nov 22 04:19:29 np0005532048 nova_compute[253661]: 2025-11-22 09:19:29.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:19:29 np0005532048 nova_compute[253661]: 2025-11-22 09:19:29.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:19:29 np0005532048 nova_compute[253661]: 2025-11-22 09:19:29.343 253665 DEBUG oslo_concurrency.lockutils [None req-5aaf4c2e-b868-45ef-8867-8414dcd028f9 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "2f0d9dce-1900-41c4-9b69-7e46f34dde81" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.365s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 146 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 574 KiB/s rd, 95 KiB/s wr, 105 op/s
Nov 22 04:19:29 np0005532048 nova_compute[253661]: 2025-11-22 09:19:29.517 253665 DEBUG nova.compute.manager [req-e4191d2d-dc55-4ba9-8a27-471b43dca9e7 req-c7632a6c-1846-42f4-9c60-7220840053c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Received event network-vif-deleted-8d030734-0e50-4fca-a432-cc2d1c2c9dea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:29 np0005532048 nova_compute[253661]: 2025-11-22 09:19:29.614 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquiring lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:29 np0005532048 nova_compute[253661]: 2025-11-22 09:19:29.615 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:29 np0005532048 nova_compute[253661]: 2025-11-22 09:19:29.639 253665 DEBUG nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:19:29 np0005532048 nova_compute[253661]: 2025-11-22 09:19:29.709 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:29 np0005532048 nova_compute[253661]: 2025-11-22 09:19:29.710 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:29 np0005532048 nova_compute[253661]: 2025-11-22 09:19:29.718 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:19:29 np0005532048 nova_compute[253661]: 2025-11-22 09:19:29.718 253665 INFO nova.compute.claims [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:19:29 np0005532048 nova_compute[253661]: 2025-11-22 09:19:29.836 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.195 253665 DEBUG nova.network.neutron [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Updating instance_info_cache with network_info: [{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.201 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.209 253665 DEBUG oslo_concurrency.lockutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Releasing lock "refresh_cache-636b1046-fff8-4a45-8a14-04010b2f282e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.212 253665 DEBUG nova.compute.manager [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:19:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:19:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3394375110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.321 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.328 253665 DEBUG nova.compute.provider_tree [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.343 253665 DEBUG nova.scheduler.client.report [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.385 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.386 253665 DEBUG nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:19:30 np0005532048 kernel: tapa288a5e5-7b (unregistering): left promiscuous mode
Nov 22 04:19:30 np0005532048 podman[315831]: 2025-11-22 09:19:30.408100119 +0000 UTC m=+0.094558638 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 04:19:30 np0005532048 NetworkManager[48920]: <info>  [1763803170.4135] device (tapa288a5e5-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:19:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:30Z|00542|binding|INFO|Releasing lport a288a5e5-7b57-4be8-9617-3271ea1e210f from this chassis (sb_readonly=0)
Nov 22 04:19:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:30Z|00543|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f down in Southbound
Nov 22 04:19:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:30Z|00544|binding|INFO|Removing iface tapa288a5e5-7b ovn-installed in OVS
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.416 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.419 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.443 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:19:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.445 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 unbound from our chassis#033[00m
Nov 22 04:19:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.446 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ebc42408-7b03-480c-a016-1e5bb2ebcc93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.447 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.448 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[21334c29-b559-4501-b498-2db17d5541da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.449 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace which is not needed anymore#033[00m
Nov 22 04:19:30 np0005532048 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 22 04:19:30 np0005532048 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000032.scope: Consumed 14.616s CPU time.
Nov 22 04:19:30 np0005532048 systemd-machined[215941]: Machine qemu-61-instance-00000032 terminated.
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.476 253665 DEBUG nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.502 253665 INFO nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.547 253665 DEBUG nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:19:30 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[314686]: [NOTICE]   (314693) : haproxy version is 2.8.14-c23fe91
Nov 22 04:19:30 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[314686]: [NOTICE]   (314693) : path to executable is /usr/sbin/haproxy
Nov 22 04:19:30 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[314686]: [WARNING]  (314693) : Exiting Master process...
Nov 22 04:19:30 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[314686]: [ALERT]    (314693) : Current worker (314695) exited with code 143 (Terminated)
Nov 22 04:19:30 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[314686]: [WARNING]  (314693) : All workers exited. Exiting... (0)
Nov 22 04:19:30 np0005532048 systemd[1]: libpod-96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79.scope: Deactivated successfully.
Nov 22 04:19:30 np0005532048 podman[315884]: 2025-11-22 09:19:30.603073457 +0000 UTC m=+0.052954468 container died 96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.608 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance destroyed successfully.#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.608 253665 DEBUG nova.objects.instance [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'resources' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.622 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803155.620987, b4a5932d-6547-4c01-9c71-0907c65247a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.622 253665 INFO nova.compute.manager [-] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.625 253665 DEBUG nova.virt.libvirt.vif [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:19:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.625 253665 DEBUG nova.network.os_vif_util [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.627 253665 DEBUG nova.network.os_vif_util [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.627 253665 DEBUG os_vif [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.629 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.629 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa288a5e5-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.646 253665 DEBUG nova.compute.manager [None req-743c4d9e-bfae-463c-9751-4b1771437bf2 - - - - - -] [instance: b4a5932d-6547-4c01-9c71-0907c65247a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.677 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:30 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79-userdata-shm.mount: Deactivated successfully.
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.683 253665 DEBUG nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:19:30 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e1467195d1769cb3d2de1fc7c53fe46b5d8e844ec9d0e75fe4e8f0d6486282a0-merged.mount: Deactivated successfully.
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.685 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.686 253665 INFO nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Creating image(s)#033[00m
Nov 22 04:19:30 np0005532048 podman[315884]: 2025-11-22 09:19:30.691226038 +0000 UTC m=+0.141107039 container cleanup 96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.710 253665 DEBUG nova.storage.rbd_utils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] rbd image 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:30 np0005532048 systemd[1]: libpod-conmon-96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79.scope: Deactivated successfully.
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.744 253665 DEBUG nova.storage.rbd_utils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] rbd image 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.775 253665 DEBUG nova.storage.rbd_utils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] rbd image 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:30 np0005532048 podman[315936]: 2025-11-22 09:19:30.77606014 +0000 UTC m=+0.054414353 container remove 96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.783 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.783 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6eeb4b18-a884-460d-9667-402d8768cbe9]: (4, ('Sat Nov 22 09:19:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79)\n96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79\nSat Nov 22 09:19:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 (96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79)\n96df698c97b92f18ba78f2975d8a7504aa58082e60447e225703d3dbc0606a79\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.788 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6c512257-7d56-41d1-9686-9e4cba08f98f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.790 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:30 np0005532048 kernel: tapebc42408-70: left promiscuous mode
Nov 22 04:19:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.813 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b716c5-eb11-44df-845d-4b9dc6d7ad2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.827 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.830 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94fcc511-46d3-416e-b4cb-788714e893e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.831 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd87446-d9a2-4af4-bf10-cd8f2a439f8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.831 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803155.7391326, 64142c1c-95e0-4db4-b743-bb94c85a208f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.832 253665 INFO nova.compute.manager [-] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.837 253665 INFO os_vif [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.846 253665 DEBUG nova.virt.libvirt.driver [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Start _get_guest_xml network_info=[{"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.851 253665 DEBUG nova.compute.manager [None req-96879638-cbb2-44b6-9172-bcf0d327e678 - - - - - -] [instance: 64142c1c-95e0-4db4-b743-bb94c85a208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.854 253665 WARNING nova.virt.libvirt.driver [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:19:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.854 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[82425e75-4126-44e2-94c0-bbad3c8bcffd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 599347, 'reachable_time': 39752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315991, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:30 np0005532048 systemd[1]: run-netns-ovnmeta\x2debc42408\x2d7b03\x2d480c\x2da016\x2d1e5bb2ebcc93.mount: Deactivated successfully.
Nov 22 04:19:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.859 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.860 253665 DEBUG nova.virt.libvirt.host [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:19:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:30.859 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d62379-52d3-4bdc-9e51-3065f9cee6a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.861 253665 DEBUG nova.virt.libvirt.host [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.865 253665 DEBUG nova.virt.libvirt.host [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.865 253665 DEBUG nova.virt.libvirt.host [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.866 253665 DEBUG nova.virt.libvirt.driver [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.866 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.866 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.867 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.867 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.867 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.867 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.867 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.868 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.868 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.868 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.868 253665 DEBUG nova.virt.hardware [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.869 253665 DEBUG nova.objects.instance [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.880 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.880 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.881 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.881 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.903 253665 DEBUG nova.storage.rbd_utils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] rbd image 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.907 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:30 np0005532048 nova_compute[253661]: 2025-11-22 09:19:30.964 253665 DEBUG oslo_concurrency.processutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.292 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.369 253665 DEBUG nova.storage.rbd_utils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] resizing rbd image 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:19:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 146 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 537 KiB/s rd, 51 KiB/s wr, 57 op/s
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.475 253665 DEBUG nova.objects.instance [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lazy-loading 'migration_context' on Instance uuid 18709ea6-4d81-4329-8bbc-2d62e5344ef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.487 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.488 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Ensure instance console log exists: /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.488 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.489 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.489 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.490 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.494 253665 WARNING nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.499 253665 DEBUG nova.virt.libvirt.host [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.499 253665 DEBUG nova.virt.libvirt.host [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.502 253665 DEBUG nova.virt.libvirt.host [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.502 253665 DEBUG nova.virt.libvirt.host [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.503 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.503 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.504 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.504 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.504 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.504 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.504 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.505 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.505 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.505 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.505 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.506 253665 DEBUG nova.virt.hardware [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.509 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3985709953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.590 253665 DEBUG oslo_concurrency.processutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.630 253665 DEBUG oslo_concurrency.processutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.690 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.691 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.712 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.796 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.797 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.809 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.809 253665 INFO nova.compute.claims [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:19:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3616682928' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:31 np0005532048 nova_compute[253661]: 2025-11-22 09:19:31.970 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.010 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.032 253665 DEBUG nova.storage.rbd_utils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] rbd image 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.035 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/989007554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.117 253665 DEBUG oslo_concurrency.processutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.119 253665 DEBUG nova.virt.libvirt.vif [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:19:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.120 253665 DEBUG nova.network.os_vif_util [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.121 253665 DEBUG nova.network.os_vif_util [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.122 253665 DEBUG nova.objects.instance [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 636b1046-fff8-4a45-8a14-04010b2f282e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.144 253665 DEBUG nova.virt.libvirt.driver [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <uuid>636b1046-fff8-4a45-8a14-04010b2f282e</uuid>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <name>instance-00000032</name>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerActionsTestJSON-server-149918095</nova:name>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:19:30</nova:creationTime>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <nova:user uuid="559fd7e00a0a468797efe4955caffc4a">tempest-ServerActionsTestJSON-1918756964-project-member</nova:user>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <nova:project uuid="d9601c2d2b97440483ffc0bf4f598e73">tempest-ServerActionsTestJSON-1918756964</nova:project>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <nova:port uuid="a288a5e5-7b57-4be8-9617-3271ea1e210f">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <entry name="serial">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <entry name="uuid">636b1046-fff8-4a45-8a14-04010b2f282e</entry>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/636b1046-fff8-4a45-8a14-04010b2f282e_disk.config">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:70:38:8e"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <target dev="tapa288a5e5-7b"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/636b1046-fff8-4a45-8a14-04010b2f282e/console.log" append="off"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <input type="keyboard" bus="usb"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:19:32 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:19:32 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.146 253665 DEBUG nova.virt.libvirt.driver [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.146 253665 DEBUG nova.virt.libvirt.driver [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.147 253665 DEBUG nova.virt.libvirt.vif [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-149918095',display_name='tempest-ServerActionsTestJSON-server-149918095',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-149918095',id=50,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBMQnGeXgHp4cI5M12JaL8Zo2ZL/4zoaCwuCH2q4+kTomkNQdn/UBj3thq4PDoKPTbe3972juHAoGHaArFBzNTezRVmmKvumeQbx2q0Ky2UPckB/x2HJ6rYlPBo/AIPxUg==',key_name='tempest-keypair-2069777806',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:18:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-8x9old9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:19:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='559fd7e00a0a468797efe4955caffc4a',uuid=636b1046-fff8-4a45-8a14-04010b2f282e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.147 253665 DEBUG nova.network.os_vif_util [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "address": "fa:16:3e:70:38:8e", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa288a5e5-7b", "ovs_interfaceid": "a288a5e5-7b57-4be8-9617-3271ea1e210f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.148 253665 DEBUG nova.network.os_vif_util [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.148 253665 DEBUG os_vif [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.149 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.149 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.150 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.152 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.153 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa288a5e5-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.153 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa288a5e5-7b, col_values=(('external_ids', {'iface-id': 'a288a5e5-7b57-4be8-9617-3271ea1e210f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:38:8e', 'vm-uuid': '636b1046-fff8-4a45-8a14-04010b2f282e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.155 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:32 np0005532048 NetworkManager[48920]: <info>  [1763803172.1571] manager: (tapa288a5e5-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.157 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.163 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.165 253665 INFO os_vif [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:38:8e,bridge_name='br-int',has_traffic_filtering=True,id=a288a5e5-7b57-4be8-9617-3271ea1e210f,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa288a5e5-7b')#033[00m
Nov 22 04:19:32 np0005532048 kernel: tapa288a5e5-7b: entered promiscuous mode
Nov 22 04:19:32 np0005532048 systemd-udevd[315864]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:19:32 np0005532048 NetworkManager[48920]: <info>  [1763803172.2683] manager: (tapa288a5e5-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Nov 22 04:19:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:32Z|00545|binding|INFO|Claiming lport a288a5e5-7b57-4be8-9617-3271ea1e210f for this chassis.
Nov 22 04:19:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:32Z|00546|binding|INFO|a288a5e5-7b57-4be8-9617-3271ea1e210f: Claiming fa:16:3e:70:38:8e 10.100.0.4
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.270 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:32 np0005532048 NetworkManager[48920]: <info>  [1763803172.2834] device (tapa288a5e5-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:19:32 np0005532048 NetworkManager[48920]: <info>  [1763803172.2846] device (tapa288a5e5-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:19:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:32Z|00547|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f ovn-installed in OVS
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.297 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:32Z|00548|binding|INFO|Setting lport a288a5e5-7b57-4be8-9617-3271ea1e210f up in Southbound
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.304 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.305 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:38:8e 10.100.0.4'], port_security=['fa:16:3e:70:38:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '636b1046-fff8-4a45-8a14-04010b2f282e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'dbbf657f-109f-4bb0-a44e-e31728cbbf3a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a288a5e5-7b57-4be8-9617-3271ea1e210f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.307 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a288a5e5-7b57-4be8-9617-3271ea1e210f in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 bound to our chassis#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.309 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebc42408-7b03-480c-a016-1e5bb2ebcc93#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.329 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a8365356-4b5d-4ce5-8e4b-8d1623236150]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.330 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapebc42408-71 in ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.332 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapebc42408-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.332 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa3c4ca-bf94-4812-9649-fa270e2ba21c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.333 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea59460f-20b2-49e5-9f49-ccec20ddc569]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:32 np0005532048 systemd-machined[215941]: New machine qemu-65-instance-00000032.
Nov 22 04:19:32 np0005532048 systemd[1]: Started Virtual Machine qemu-65-instance-00000032.
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.353 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1a2c49-372a-492d-b327-8d7261ee75a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.372 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[704e5cad-104c-4dbe-9e50-598a160ae8e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.415 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a0dd5031-af71-414f-89d7-6eba38b463e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:32 np0005532048 NetworkManager[48920]: <info>  [1763803172.4252] manager: (tapebc42408-70): new Veth device (/org/freedesktop/NetworkManager/Devices/245)
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.424 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d22b42bc-5ec8-4875-a6a6-79fc6bb5f22c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:19:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/839126014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.468 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[50b5f28d-b67e-4770-a183-577b5f2e31db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.472 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4f3acea2-fafe-40b7-b47b-b1bcd3d4032f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3928460180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.498 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:32 np0005532048 NetworkManager[48920]: <info>  [1763803172.5068] device (tapebc42408-70): carrier: link connected
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.512 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[abcf9551-b239-4f3f-9a8f-bf781ca116e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.514 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.515 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.524 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.526 253665 DEBUG nova.objects.instance [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lazy-loading 'pci_devices' on Instance uuid 18709ea6-4d81-4329-8bbc-2d62e5344ef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.528 253665 DEBUG nova.compute.provider_tree [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.536 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4454b320-4d14-42e8-940a-d0fd367175c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602324, 'reachable_time': 41771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316294, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.543 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.548 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <uuid>18709ea6-4d81-4329-8bbc-2d62e5344ef5</uuid>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <name>instance-00000039</name>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersAaction247Test-server-673972960</nova:name>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:19:31</nova:creationTime>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <nova:user uuid="7a7411d90d324909b49644bc8eef8e0f">tempest-ServersAaction247Test-1280298349-project-member</nova:user>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <nova:project uuid="ef6ee0eea0834ec7853091ff44562661">tempest-ServersAaction247Test-1280298349</nova:project>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <entry name="serial">18709ea6-4d81-4329-8bbc-2d62e5344ef5</entry>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <entry name="uuid">18709ea6-4d81-4329-8bbc-2d62e5344ef5</entry>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk.config">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5/console.log" append="off"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:19:32 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:19:32 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:19:32 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:19:32 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.554 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a78d45ee-843e-4dff-ba51-2552775b29a1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:e3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602324, 'tstamp': 602324}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316295, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.559 253665 DEBUG nova.scheduler.client.report [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.571 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7cad1a03-e531-488e-92e2-343fa53aab42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602324, 'reachable_time': 41771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316296, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.598 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fa433f32-2306-4da9-9c9d-1d486b2232f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.611 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.612 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.618 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.618 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.619 253665 INFO nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Using config drive#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.645 253665 DEBUG nova.storage.rbd_utils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] rbd image 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.671 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[172d980e-e8ea-41a1-af9a-5e0fc4e150b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.673 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.673 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.674 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebc42408-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:32 np0005532048 kernel: tapebc42408-70: entered promiscuous mode
Nov 22 04:19:32 np0005532048 NetworkManager[48920]: <info>  [1763803172.6774] manager: (tapebc42408-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.677 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.679 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebc42408-70, col_values=(('external_ids', {'iface-id': 'efc8861c-ffa7-41c8-9325-c43c7271007f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:32Z|00549|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.684 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.684 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.686 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.689 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.689 253665 DEBUG nova.network.neutron [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.699 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.701 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.702 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.704 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[162b28d7-18cf-4900-88a8-4e7076f8e414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.705 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/ebc42408-7b03-480c-a016-1e5bb2ebcc93.pid.haproxy
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID ebc42408-7b03-480c-a016-1e5bb2ebcc93
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:19:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:32.707 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'env', 'PROCESS_TAG=haproxy-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ebc42408-7b03-480c-a016-1e5bb2ebcc93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.709 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.709 253665 INFO nova.compute.claims [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.731 253665 INFO nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.777 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.907 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.908 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.909 253665 INFO nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Creating image(s)#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.936 253665 DEBUG nova.storage.rbd_utils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.963 253665 DEBUG nova.storage.rbd_utils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:32 np0005532048 nova_compute[253661]: 2025-11-22 09:19:32.996 253665 DEBUG nova.storage.rbd_utils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.003 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.056 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.107 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.109 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.110 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.111 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.141 253665 DEBUG nova.storage.rbd_utils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.147 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:33 np0005532048 podman[316442]: 2025-11-22 09:19:33.153833936 +0000 UTC m=+0.078555300 container create 719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.198 253665 INFO nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Creating config drive at /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5/disk.config#033[00m
Nov 22 04:19:33 np0005532048 podman[316442]: 2025-11-22 09:19:33.107171802 +0000 UTC m=+0.031893076 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.203 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy8c3aqfo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:33 np0005532048 systemd[1]: Started libpod-conmon-719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9.scope.
Nov 22 04:19:33 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:19:33 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e741de3208f7891e80bf1534293c26fdb1a7bf352ddd9a294c705d9b5426cd1b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:33 np0005532048 podman[316442]: 2025-11-22 09:19:33.258230613 +0000 UTC m=+0.182951897 container init 719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.257 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 636b1046-fff8-4a45-8a14-04010b2f282e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.259 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803173.1139796, 636b1046-fff8-4a45-8a14-04010b2f282e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.259 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Resumed (Lifecycle Event)
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.262 253665 DEBUG nova.compute.manager [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:19:33 np0005532048 podman[316442]: 2025-11-22 09:19:33.265413947 +0000 UTC m=+0.190135211 container start 719d2ec28c394aecf44a6ae53e4f3ccde431055474065871e410faa516ff2de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.267 253665 DEBUG nova.policy [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5352d2182544454aab03bd4a74160247', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.281 253665 INFO nova.virt.libvirt.driver [-] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Instance rebooted successfully.
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.282 253665 DEBUG nova.compute.manager [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.286 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:19:33 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[316491]: [NOTICE]   (316521) : New worker (316526) forked
Nov 22 04:19:33 np0005532048 neutron-haproxy-ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93[316491]: [NOTICE]   (316521) : Loading success.
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.299 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.326 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.327 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803173.114166, 636b1046-fff8-4a45-8a14-04010b2f282e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.330 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] VM Started (Lifecycle Event)
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.355 253665 DEBUG oslo_concurrency.lockutils [None req-a75ed6da-493f-437d-a05f-ed7905792970 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.358 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.368 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.371 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy8c3aqfo" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:19:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 151 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 563 KiB/s rd, 1.1 MiB/s wr, 93 op/s
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.416 253665 DEBUG nova.storage.rbd_utils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] rbd image 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.428 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5/disk.config 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.531 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:19:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:19:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1444495597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.613 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.616 253665 DEBUG oslo_concurrency.processutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5/disk.config 18709ea6-4d81-4329-8bbc-2d62e5344ef5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.616 253665 INFO nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Deleting local config drive /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5/disk.config because it was imported into RBD.
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.630 253665 DEBUG nova.storage.rbd_utils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] resizing rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.681 253665 DEBUG nova.compute.provider_tree [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.694 253665 DEBUG nova.scheduler.client.report [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:19:33 np0005532048 systemd-machined[215941]: New machine qemu-66-instance-00000039.
Nov 22 04:19:33 np0005532048 systemd[1]: Started Virtual Machine qemu-66-instance-00000039.
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.757 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.758 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.767 253665 DEBUG nova.objects.instance [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'migration_context' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.781 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.781 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Ensure instance console log exists: /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.782 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.782 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.782 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.807 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.809 253665 DEBUG nova.network.neutron [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.824 253665 INFO nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.840 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.920 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.922 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.922 253665 INFO nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Creating image(s)
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.944 253665 DEBUG nova.storage.rbd_utils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:19:33 np0005532048 nova_compute[253661]: 2025-11-22 09:19:33.970 253665 DEBUG nova.storage.rbd_utils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.002 253665 DEBUG nova.storage.rbd_utils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.009 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.106 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.107 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.108 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.109 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.138 253665 DEBUG nova.storage.rbd_utils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.143 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.186 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803174.1211925, 18709ea6-4d81-4329-8bbc-2d62e5344ef5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.188 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] VM Resumed (Lifecycle Event)
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.192 253665 DEBUG nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.192 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.197 253665 INFO nova.virt.libvirt.driver [-] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Instance spawned successfully.
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.198 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.208 253665 DEBUG nova.network.neutron [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Successfully created port: 2a28300e-6b6b-4513-831f-e30f3694fbcd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.212 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.218 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.224 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.224 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.225 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.225 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.226 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.226 253665 DEBUG nova.virt.libvirt.driver [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.240 253665 DEBUG nova.policy [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '790eaa89f1a74325b81291d8beca6d38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.247 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.247 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803174.1371229, 18709ea6-4d81-4329-8bbc-2d62e5344ef5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.247 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] VM Started (Lifecycle Event)
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.269 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.274 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.281 253665 INFO nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Took 3.60 seconds to spawn the instance on the hypervisor.
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.282 253665 DEBUG nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.294 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.344 253665 INFO nova.compute.manager [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Took 4.65 seconds to build instance.
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.363 253665 DEBUG oslo_concurrency.lockutils [None req-64e8a906-b99a-4b4f-812c-e589d68db918 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.446 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.501 253665 DEBUG nova.storage.rbd_utils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] resizing rbd image 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.594 253665 DEBUG nova.objects.instance [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'migration_context' on Instance uuid 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.607 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.608 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Ensure instance console log exists: /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.608 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.608 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.609 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.685 253665 DEBUG nova.compute.manager [req-7e99f9f3-79b3-49cd-a895-44ba781a7de4 req-c67df32f-f048-4a6a-ada5-c6a30ea1c086 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.686 253665 DEBUG oslo_concurrency.lockutils [req-7e99f9f3-79b3-49cd-a895-44ba781a7de4 req-c67df32f-f048-4a6a-ada5-c6a30ea1c086 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.686 253665 DEBUG oslo_concurrency.lockutils [req-7e99f9f3-79b3-49cd-a895-44ba781a7de4 req-c67df32f-f048-4a6a-ada5-c6a30ea1c086 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.686 253665 DEBUG oslo_concurrency.lockutils [req-7e99f9f3-79b3-49cd-a895-44ba781a7de4 req-c67df32f-f048-4a6a-ada5-c6a30ea1c086 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.687 253665 DEBUG nova.compute.manager [req-7e99f9f3-79b3-49cd-a895-44ba781a7de4 req-c67df32f-f048-4a6a-ada5-c6a30ea1c086 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:34 np0005532048 nova_compute[253661]: 2025-11-22 09:19:34.687 253665 WARNING nova.compute.manager [req-7e99f9f3-79b3-49cd-a895-44ba781a7de4 req-c67df32f-f048-4a6a-ada5-c6a30ea1c086 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-unplugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.076 253665 DEBUG nova.compute.manager [None req-af36180e-9f1c-46d3-9683-c75f17adbec7 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.112 253665 DEBUG nova.network.neutron [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Successfully updated port: 2a28300e-6b6b-4513-831f-e30f3694fbcd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.116 253665 INFO nova.compute.manager [None req-af36180e-9f1c-46d3-9683-c75f17adbec7 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] instance snapshotting#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.117 253665 DEBUG nova.objects.instance [None req-af36180e-9f1c-46d3-9683-c75f17adbec7 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lazy-loading 'flavor' on Instance uuid 18709ea6-4d81-4329-8bbc-2d62e5344ef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.133 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "refresh_cache-6fc1c0e4-3bd1-44c5-a722-9a30961fc545" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.136 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquired lock "refresh_cache-6fc1c0e4-3bd1-44c5-a722-9a30961fc545" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.138 253665 DEBUG nova.network.neutron [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.191 253665 DEBUG nova.network.neutron [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Successfully created port: dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.205 253665 DEBUG nova.compute.manager [req-861370e8-2d14-4090-a077-87b6017f2cd9 req-cfe994f9-9a04-4898-8537-eb7882919c95 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-changed-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.205 253665 DEBUG nova.compute.manager [req-861370e8-2d14-4090-a077-87b6017f2cd9 req-cfe994f9-9a04-4898-8537-eb7882919c95 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Refreshing instance network info cache due to event network-changed-2a28300e-6b6b-4513-831f-e30f3694fbcd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.206 253665 DEBUG oslo_concurrency.lockutils [req-861370e8-2d14-4090-a077-87b6017f2cd9 req-cfe994f9-9a04-4898-8537-eb7882919c95 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-6fc1c0e4-3bd1-44c5-a722-9a30961fc545" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.206 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.223 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquiring lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.223 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.223 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquiring lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.224 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.224 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.225 253665 INFO nova.compute.manager [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Terminating instance#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.226 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquiring lock "refresh_cache-18709ea6-4d81-4329-8bbc-2d62e5344ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.226 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquired lock "refresh_cache-18709ea6-4d81-4329-8bbc-2d62e5344ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.226 253665 DEBUG nova.network.neutron [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.337 253665 DEBUG nova.network.neutron [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:19:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 173 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.423 253665 DEBUG nova.network.neutron [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.446 253665 INFO nova.virt.libvirt.driver [None req-af36180e-9f1c-46d3-9683-c75f17adbec7 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Beginning live snapshot process#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.494 253665 DEBUG nova.compute.manager [None req-af36180e-9f1c-46d3-9683-c75f17adbec7 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.695 253665 DEBUG nova.network.neutron [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.708 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Releasing lock "refresh_cache-18709ea6-4d81-4329-8bbc-2d62e5344ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.708 253665 DEBUG nova.compute.manager [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:19:35 np0005532048 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000039.scope: Deactivated successfully.
Nov 22 04:19:35 np0005532048 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000039.scope: Consumed 1.970s CPU time.
Nov 22 04:19:35 np0005532048 systemd-machined[215941]: Machine qemu-66-instance-00000039 terminated.
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.933 253665 INFO nova.virt.libvirt.driver [-] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Instance destroyed successfully.#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.937 253665 DEBUG nova.objects.instance [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lazy-loading 'resources' on Instance uuid 18709ea6-4d81-4329-8bbc-2d62e5344ef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:35 np0005532048 nova_compute[253661]: 2025-11-22 09:19:35.978 253665 DEBUG nova.compute.manager [None req-af36180e-9f1c-46d3-9683-c75f17adbec7 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.124 253665 DEBUG nova.network.neutron [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Successfully updated port: dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.136 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "refresh_cache-5a489088-2d5b-49b6-8280-e2d86fa4fbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.136 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquired lock "refresh_cache-5a489088-2d5b-49b6-8280-e2d86fa4fbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.136 253665 DEBUG nova.network.neutron [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.226 253665 DEBUG nova.network.neutron [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Updating instance_info_cache with network_info: [{"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.244 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Releasing lock "refresh_cache-6fc1c0e4-3bd1-44c5-a722-9a30961fc545" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.245 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance network_info: |[{"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.245 253665 DEBUG oslo_concurrency.lockutils [req-861370e8-2d14-4090-a077-87b6017f2cd9 req-cfe994f9-9a04-4898-8537-eb7882919c95 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-6fc1c0e4-3bd1-44c5-a722-9a30961fc545" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.246 253665 DEBUG nova.network.neutron [req-861370e8-2d14-4090-a077-87b6017f2cd9 req-cfe994f9-9a04-4898-8537-eb7882919c95 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Refreshing network info cache for port 2a28300e-6b6b-4513-831f-e30f3694fbcd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.249 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Start _get_guest_xml network_info=[{"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.254 253665 WARNING nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.267 253665 DEBUG nova.virt.libvirt.host [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.268 253665 DEBUG nova.virt.libvirt.host [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.270 253665 DEBUG nova.network.neutron [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.278 253665 DEBUG nova.virt.libvirt.host [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.279 253665 DEBUG nova.virt.libvirt.host [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.279 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.280 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.280 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.281 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.281 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.281 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.281 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.282 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.282 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.282 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.283 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.283 253665 DEBUG nova.virt.hardware [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.286 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.389 253665 INFO nova.virt.libvirt.driver [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Deleting instance files /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5_del#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.391 253665 INFO nova.virt.libvirt.driver [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Deletion of /var/lib/nova/instances/18709ea6-4d81-4329-8bbc-2d62e5344ef5_del complete#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.442 253665 INFO nova.compute.manager [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.443 253665 DEBUG oslo.service.loopingcall [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.443 253665 DEBUG nova.compute.manager [-] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.443 253665 DEBUG nova.network.neutron [-] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.547 253665 DEBUG nova.network.neutron [-] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.559 253665 DEBUG nova.network.neutron [-] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.572 253665 INFO nova.compute.manager [-] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Took 0.13 seconds to deallocate network for instance.#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.621 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.622 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/568750702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.713 253665 DEBUG oslo_concurrency.processutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.792 253665 DEBUG nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.793 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.794 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.795 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.796 253665 DEBUG nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.797 253665 WARNING nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.797 253665 DEBUG nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.798 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.799 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.800 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.800 253665 DEBUG nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.801 253665 WARNING nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.802 253665 DEBUG nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.802 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.803 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.805 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "636b1046-fff8-4a45-8a14-04010b2f282e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.806 253665 DEBUG nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] No waiting events found dispatching network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.806 253665 WARNING nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 636b1046-fff8-4a45-8a14-04010b2f282e] Received unexpected event network-vif-plugged-a288a5e5-7b57-4be8-9617-3271ea1e210f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.806 253665 DEBUG nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received event network-changed-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.807 253665 DEBUG nova.compute.manager [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Refreshing instance network info cache due to event network-changed-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.807 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-5a489088-2d5b-49b6-8280-e2d86fa4fbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.809 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.845 253665 DEBUG nova.storage.rbd_utils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:36 np0005532048 nova_compute[253661]: 2025-11-22 09:19:36.851 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.161 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.192 253665 DEBUG nova.network.neutron [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Updating instance_info_cache with network_info: [{"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.215 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Releasing lock "refresh_cache-5a489088-2d5b-49b6-8280-e2d86fa4fbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.217 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Instance network_info: |[{"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.218 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-5a489088-2d5b-49b6-8280-e2d86fa4fbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.219 253665 DEBUG nova.network.neutron [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Refreshing network info cache for port dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.226 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Start _get_guest_xml network_info=[{"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:19:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:19:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3083126307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.235 253665 WARNING nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.249 253665 DEBUG nova.virt.libvirt.host [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.251 253665 DEBUG nova.virt.libvirt.host [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.257 253665 DEBUG nova.virt.libvirt.host [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.258 253665 DEBUG nova.virt.libvirt.host [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.259 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.260 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.261 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.262 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.263 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.263 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.264 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.265 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.266 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.266 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.267 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.267 253665 DEBUG nova.virt.hardware [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.275 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905178655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.325 253665 DEBUG oslo_concurrency.processutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.329 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.337 253665 DEBUG nova.virt.libvirt.vif [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:19:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1392829761',display_name='tempest-ServerDiskConfigTestJSON-server-1392829761',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1392829761',id=58,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-4205cvpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:32Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=6fc1c0e4-3bd1-44c5-a722-9a30961fc545,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.338 253665 DEBUG nova.network.os_vif_util [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.340 253665 DEBUG nova.network.os_vif_util [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.342 253665 DEBUG nova.objects.instance [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'pci_devices' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.351 253665 DEBUG nova.compute.provider_tree [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.360 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  <uuid>6fc1c0e4-3bd1-44c5-a722-9a30961fc545</uuid>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  <name>instance-0000003a</name>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1392829761</nova:name>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:19:36</nova:creationTime>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:19:37 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:        <nova:user uuid="5352d2182544454aab03bd4a74160247">tempest-ServerDiskConfigTestJSON-1778643933-project-member</nova:user>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:        <nova:project uuid="a29f2c834c7a4a2ea6c4fc6dea996a8e">tempest-ServerDiskConfigTestJSON-1778643933</nova:project>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:        <nova:port uuid="2a28300e-6b6b-4513-831f-e30f3694fbcd">
Nov 22 04:19:37 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <entry name="serial">6fc1c0e4-3bd1-44c5-a722-9a30961fc545</entry>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <entry name="uuid">6fc1c0e4-3bd1-44c5-a722-9a30961fc545</entry>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk">
Nov 22 04:19:37 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:37 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config">
Nov 22 04:19:37 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:37 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:7c:af:ec"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <target dev="tap2a28300e-6b"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/console.log" append="off"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:19:37 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:19:37 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:19:37 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:19:37 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.369 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Preparing to wait for external event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.370 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.370 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.371 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.372 253665 DEBUG nova.virt.libvirt.vif [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:19:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1392829761',display_name='tempest-ServerDiskConfigTestJSON-server-1392829761',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1392829761',id=58,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-4205cvpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:32Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=6fc1c0e4-3bd1-44c5-a722-9a30961fc545,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.373 253665 DEBUG nova.network.os_vif_util [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.374 253665 DEBUG nova.network.os_vif_util [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.375 253665 DEBUG os_vif [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.382 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.383 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.384 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.386 253665 DEBUG nova.scheduler.client.report [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.393 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.394 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a28300e-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.395 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a28300e-6b, col_values=(('external_ids', {'iface-id': '2a28300e-6b6b-4513-831f-e30f3694fbcd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:af:ec', 'vm-uuid': '6fc1c0e4-3bd1-44c5-a722-9a30961fc545'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.397 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:37 np0005532048 NetworkManager[48920]: <info>  [1763803177.3989] manager: (tap2a28300e-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Nov 22 04:19:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 177 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.2 MiB/s wr, 148 op/s
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.404 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.410 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.413 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.419 253665 INFO os_vif [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b')#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.451 253665 INFO nova.scheduler.client.report [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Deleted allocations for instance 18709ea6-4d81-4329-8bbc-2d62e5344ef5#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.492 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.492 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.492 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No VIF found with MAC fa:16:3e:7c:af:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.493 253665 INFO nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Using config drive#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.513 253665 DEBUG nova.storage.rbd_utils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.520 253665 DEBUG nova.network.neutron [req-861370e8-2d14-4090-a077-87b6017f2cd9 req-cfe994f9-9a04-4898-8537-eb7882919c95 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Updated VIF entry in instance network info cache for port 2a28300e-6b6b-4513-831f-e30f3694fbcd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.521 253665 DEBUG nova.network.neutron [req-861370e8-2d14-4090-a077-87b6017f2cd9 req-cfe994f9-9a04-4898-8537-eb7882919c95 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Updating instance_info_cache with network_info: [{"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.523 253665 DEBUG oslo_concurrency.lockutils [None req-5e74a0da-8801-4171-8174-b62f7385556f 7a7411d90d324909b49644bc8eef8e0f ef6ee0eea0834ec7853091ff44562661 - - default default] Lock "18709ea6-4d81-4329-8bbc-2d62e5344ef5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.546 253665 DEBUG oslo_concurrency.lockutils [req-861370e8-2d14-4090-a077-87b6017f2cd9 req-cfe994f9-9a04-4898-8537-eb7882919c95 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-6fc1c0e4-3bd1-44c5-a722-9a30961fc545" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.757 253665 INFO nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Creating config drive at /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.763 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmeawbfsl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1642242874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.812 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.838 253665 DEBUG nova.storage.rbd_utils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.842 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.915 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmeawbfsl" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.938 253665 DEBUG nova.storage.rbd_utils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:37 np0005532048 nova_compute[253661]: 2025-11-22 09:19:37.941 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.115 253665 DEBUG oslo_concurrency.processutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.116 253665 INFO nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Deleting local config drive /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config because it was imported into RBD.#033[00m
Nov 22 04:19:38 np0005532048 kernel: tap2a28300e-6b: entered promiscuous mode
Nov 22 04:19:38 np0005532048 NetworkManager[48920]: <info>  [1763803178.1741] manager: (tap2a28300e-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/248)
Nov 22 04:19:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:38Z|00550|binding|INFO|Claiming lport 2a28300e-6b6b-4513-831f-e30f3694fbcd for this chassis.
Nov 22 04:19:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:38Z|00551|binding|INFO|2a28300e-6b6b-4513-831f-e30f3694fbcd: Claiming fa:16:3e:7c:af:ec 10.100.0.12
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.180 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.189 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:af:ec 10.100.0.12'], port_security=['fa:16:3e:7c:af:ec 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6fc1c0e4-3bd1-44c5-a722-9a30961fc545', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9acc6289-82af-49fc-aec4-129a3648eb3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e23b009d-efb8-4598-83cc-050b9cf1ce0d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2a28300e-6b6b-4513-831f-e30f3694fbcd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.191 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2a28300e-6b6b-4513-831f-e30f3694fbcd in datapath 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd bound to our chassis#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.195 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd#033[00m
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.200 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:38Z|00552|binding|INFO|Setting lport 2a28300e-6b6b-4513-831f-e30f3694fbcd ovn-installed in OVS
Nov 22 04:19:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:38Z|00553|binding|INFO|Setting lport 2a28300e-6b6b-4513-831f-e30f3694fbcd up in Southbound
Nov 22 04:19:38 np0005532048 systemd-udevd[317108]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.214 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.214 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aea7b08a-d03d-450f-a1b8-360c560c157c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.216 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01d1bce2-e1 in ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:19:38 np0005532048 systemd-machined[215941]: New machine qemu-67-instance-0000003a.
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.221 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01d1bce2-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.221 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6e80b8-e8f7-4bd4-9a2e-08dcb9994cb8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.224 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3b9901-30e5-4d63-b360-cc3e100d64ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:38 np0005532048 systemd[1]: Started Virtual Machine qemu-67-instance-0000003a.
Nov 22 04:19:38 np0005532048 NetworkManager[48920]: <info>  [1763803178.2312] device (tap2a28300e-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:19:38 np0005532048 NetworkManager[48920]: <info>  [1763803178.2322] device (tap2a28300e-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.246 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a98bc613-d36e-4907-b40f-5c70d5d648d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.272 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b7948337-f483-4b16-9026-6ac9ebc7fef0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.311 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[09a278db-4dc2-4fcf-84a2-d96809225c50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:38 np0005532048 systemd-udevd[317112]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.320 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb52144-d941-47f0-8c16-0eaacedad424]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:38 np0005532048 NetworkManager[48920]: <info>  [1763803178.3216] manager: (tap01d1bce2-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/249)
Nov 22 04:19:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/761882775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.354 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f8a367c4-dcc4-4916-aae8-79e41989c857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.357 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[134cdd8f-79f4-4dce-9926-197d31693133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.371 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.372 253665 DEBUG nova.virt.libvirt.vif [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:19:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2092993764',display_name='tempest-DeleteServersTestJSON-server-2092993764',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2092993764',id=59,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-c64g70hs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSO
N-487469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:33Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=5a489088-2d5b-49b6-8280-e2d86fa4fbf3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.373 253665 DEBUG nova.network.os_vif_util [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.374 253665 DEBUG nova.network.os_vif_util [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:20:2a,bridge_name='br-int',has_traffic_filtering=True,id=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdae1e68a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.375 253665 DEBUG nova.objects.instance [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.389 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  <uuid>5a489088-2d5b-49b6-8280-e2d86fa4fbf3</uuid>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  <name>instance-0000003b</name>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <nova:name>tempest-DeleteServersTestJSON-server-2092993764</nova:name>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:19:37</nova:creationTime>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:19:38 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:        <nova:user uuid="790eaa89f1a74325b81291d8beca6d38">tempest-DeleteServersTestJSON-487469072-project-member</nova:user>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:        <nova:project uuid="d4fe4f74353442a9a8042d29dcf6274e">tempest-DeleteServersTestJSON-487469072</nova:project>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:        <nova:port uuid="dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38">
Nov 22 04:19:38 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <entry name="serial">5a489088-2d5b-49b6-8280-e2d86fa4fbf3</entry>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <entry name="uuid">5a489088-2d5b-49b6-8280-e2d86fa4fbf3</entry>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk">
Nov 22 04:19:38 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:38 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk.config">
Nov 22 04:19:38 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:38 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:61:20:2a"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <target dev="tapdae1e68a-69"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3/console.log" append="off"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:19:38 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:19:38 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:19:38 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:19:38 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.394 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Preparing to wait for external event network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.395 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.395 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.395 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:38 np0005532048 NetworkManager[48920]: <info>  [1763803178.3966] device (tap01d1bce2-e0): carrier: link connected
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.396 253665 DEBUG nova.virt.libvirt.vif [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:19:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2092993764',display_name='tempest-DeleteServersTestJSON-server-2092993764',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2092993764',id=59,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-c64g70hs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServ
ersTestJSON-487469072-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:33Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=5a489088-2d5b-49b6-8280-e2d86fa4fbf3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.396 253665 DEBUG nova.network.os_vif_util [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.397 253665 DEBUG nova.network.os_vif_util [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:20:2a,bridge_name='br-int',has_traffic_filtering=True,id=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdae1e68a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.397 253665 DEBUG os_vif [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:20:2a,bridge_name='br-int',has_traffic_filtering=True,id=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdae1e68a-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.398 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.398 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.399 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.401 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.401 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdae1e68a-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.402 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdae1e68a-69, col_values=(('external_ids', {'iface-id': 'dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:20:2a', 'vm-uuid': '5a489088-2d5b-49b6-8280-e2d86fa4fbf3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.403 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[972d0c8c-5869-4c62-9b88-25723dbec5c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.424 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[56c888ba-0212-468b-896c-9817affaf82d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d1bce2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:22:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602913, 'reachable_time': 31761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317143, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:38 np0005532048 NetworkManager[48920]: <info>  [1763803178.4467] manager: (tapdae1e68a-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.446 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.448 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[efb73dee-0a03-4c6c-ad6f-4ae57660f073]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:2279'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602913, 'tstamp': 602913}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317144, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.450 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.453 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.453 253665 INFO os_vif [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:20:2a,bridge_name='br-int',has_traffic_filtering=True,id=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdae1e68a-69')
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.474 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[13b59599-a7bb-4dd9-9482-8e8b50291ab4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d1bce2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:22:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602913, 'reachable_time': 31761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317146, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.509 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[30b3996b-f2af-4cda-a66a-d3e87463122d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.523 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.523 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.523 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] No VIF found with MAC fa:16:3e:61:20:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.524 253665 INFO nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Using config drive
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.554 253665 DEBUG nova.storage.rbd_utils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.615 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[63161003-2911-4ac1-b6c5-986200de48d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.618 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d1bce2-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.618 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.619 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d1bce2-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:38 np0005532048 NetworkManager[48920]: <info>  [1763803178.6218] manager: (tap01d1bce2-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.621 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:38 np0005532048 kernel: tap01d1bce2-e0: entered promiscuous mode
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.626 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.628 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d1bce2-e0, col_values=(('external_ids', {'iface-id': '23aa3d02-a12d-464a-8395-5aa8724c0fd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.629 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:38Z|00554|binding|INFO|Releasing lport 23aa3d02-a12d-464a-8395-5aa8724c0fd4 from this chassis (sb_readonly=0)
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.653 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.655 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.660 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f5697c4f-3454-40fd-81d3-d44e62cd2b7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.662 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:19:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:38.662 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'env', 'PROCESS_TAG=haproxy-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.971 253665 DEBUG nova.network.neutron [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Updated VIF entry in instance network info cache for port dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.972 253665 DEBUG nova.network.neutron [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Updating instance_info_cache with network_info: [{"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.987 253665 DEBUG oslo_concurrency.lockutils [req-a55f0990-be28-459e-8a25-8282ad152b91 req-b499eb86-4529-41f0-a032-5180982b7d6a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-5a489088-2d5b-49b6-8280-e2d86fa4fbf3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:19:38 np0005532048 nova_compute[253661]: 2025-11-22 09:19:38.998 253665 INFO nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Creating config drive at /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3/disk.config#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.004 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7sr2tzsj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:39 np0005532048 podman[317199]: 2025-11-22 09:19:39.109007397 +0000 UTC m=+0.055100540 container create 13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.117 253665 DEBUG nova.compute.manager [req-4230f22f-9c4f-4782-b916-8774de811ca3 req-e2f63d7a-705b-4d16-a081-dc7b6836cfe9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.118 253665 DEBUG oslo_concurrency.lockutils [req-4230f22f-9c4f-4782-b916-8774de811ca3 req-e2f63d7a-705b-4d16-a081-dc7b6836cfe9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.118 253665 DEBUG oslo_concurrency.lockutils [req-4230f22f-9c4f-4782-b916-8774de811ca3 req-e2f63d7a-705b-4d16-a081-dc7b6836cfe9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.118 253665 DEBUG oslo_concurrency.lockutils [req-4230f22f-9c4f-4782-b916-8774de811ca3 req-e2f63d7a-705b-4d16-a081-dc7b6836cfe9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.119 253665 DEBUG nova.compute.manager [req-4230f22f-9c4f-4782-b916-8774de811ca3 req-e2f63d7a-705b-4d16-a081-dc7b6836cfe9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Processing event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:19:39 np0005532048 systemd[1]: Started libpod-conmon-13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03.scope.
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.152 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7sr2tzsj" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:39 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:19:39 np0005532048 podman[317199]: 2025-11-22 09:19:39.081645362 +0000 UTC m=+0.027738495 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:19:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1241bc236f075e40e2ab81b6d24373f4bf0bae4ad4e81373a81f5c0654f230/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.186 253665 DEBUG nova.storage.rbd_utils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] rbd image 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:39 np0005532048 podman[317199]: 2025-11-22 09:19:39.192297351 +0000 UTC m=+0.138390514 container init 13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.192 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3/disk.config 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:39 np0005532048 podman[317199]: 2025-11-22 09:19:39.199861464 +0000 UTC m=+0.145954597 container start 13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 22 04:19:39 np0005532048 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[317216]: [NOTICE]   (317238) : New worker (317241) forked
Nov 22 04:19:39 np0005532048 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[317216]: [NOTICE]   (317238) : Loading success.
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.366 253665 DEBUG oslo_concurrency.processutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3/disk.config 5a489088-2d5b-49b6-8280-e2d86fa4fbf3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.367 253665 INFO nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Deleting local config drive /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3/disk.config because it was imported into RBD.#033[00m
Nov 22 04:19:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 215 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.4 MiB/s wr, 286 op/s
Nov 22 04:19:39 np0005532048 kernel: tapdae1e68a-69: entered promiscuous mode
Nov 22 04:19:39 np0005532048 NetworkManager[48920]: <info>  [1763803179.4312] manager: (tapdae1e68a-69): new Tun device (/org/freedesktop/NetworkManager/Devices/252)
Nov 22 04:19:39 np0005532048 systemd-udevd[317137]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:19:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:39Z|00555|binding|INFO|Claiming lport dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 for this chassis.
Nov 22 04:19:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:39Z|00556|binding|INFO|dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38: Claiming fa:16:3e:61:20:2a 10.100.0.3
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.435 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.441 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:20:2a 10.100.0.3'], port_security=['fa:16:3e:61:20:2a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5a489088-2d5b-49b6-8280-e2d86fa4fbf3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.443 162862 INFO neutron.agent.ovn.metadata.agent [-] Port dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 in datapath d93e3720-b00d-41f5-8283-164e9f857d24 bound to our chassis#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.444 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d93e3720-b00d-41f5-8283-164e9f857d24#033[00m
Nov 22 04:19:39 np0005532048 NetworkManager[48920]: <info>  [1763803179.4487] device (tapdae1e68a-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:19:39 np0005532048 NetworkManager[48920]: <info>  [1763803179.4500] device (tapdae1e68a-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:19:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:39Z|00557|binding|INFO|Setting lport dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 ovn-installed in OVS
Nov 22 04:19:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:39Z|00558|binding|INFO|Setting lport dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 up in Southbound
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.460 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.461 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[732f88b6-a03b-466a-bdf5-53953da0e9c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.462 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd93e3720-b1 in ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.465 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd93e3720-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.465 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d10a0a3f-7bc8-4ea8-82a9-3b737809bf5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.467 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa698dc-ce24-40f1-9244-32fc2a5f8a51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.480 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e2effba5-a4d5-4e3e-ab17-0aaacc40ce94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:39 np0005532048 systemd-machined[215941]: New machine qemu-68-instance-0000003b.
Nov 22 04:19:39 np0005532048 systemd[1]: Started Virtual Machine qemu-68-instance-0000003b.
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.500 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce80eb5-d7b2-4c35-96a8-277dfe310173]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.516 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803179.5152586, 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.516 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] VM Started (Lifecycle Event)#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.521 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.533 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.539 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.544 253665 INFO nova.virt.libvirt.driver [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance spawned successfully.#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.545 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.549 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.548 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[44f4c2a5-8910-4ae5-8ef8-da5cd6b94524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:39 np0005532048 NetworkManager[48920]: <info>  [1763803179.5623] manager: (tapd93e3720-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/253)
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.561 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5f8d5c56-c6c5-43a6-b84c-0cdb3b3373d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.572 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.573 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803179.5202582, 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.573 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.590 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.591 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.591 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.591 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.592 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.592 253665 DEBUG nova.virt.libvirt.driver [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.596 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.600 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803179.5287998, 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.601 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.604 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[42c8b8de-7b42-4bb4-947c-e9c367084ba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.607 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d747be05-06ce-4604-8c11-241b3649757a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.625 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.627 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:19:39 np0005532048 NetworkManager[48920]: <info>  [1763803179.6323] device (tapd93e3720-b0): carrier: link connected
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.639 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[10d3fb62-165a-4ba6-aaba-8305481b2e74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.646 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.653 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f29c8a-610c-4b91-a95b-7a97a4ae573e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603037, 'reachable_time': 32531, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317343, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.659 253665 INFO nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Took 6.75 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.659 253665 DEBUG nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.669 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f659b8b0-d87a-4583-9770-ce1467fe5cce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:9b56'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603037, 'tstamp': 603037}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317344, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.695 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[838e92de-e128-4b15-875c-4d63a32809df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd93e3720-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:9b:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603037, 'reachable_time': 32531, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317345, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.711 253665 INFO nova.compute.manager [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Took 7.93 seconds to build instance.#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.726 253665 DEBUG oslo_concurrency.lockutils [None req-24a24e11-68c5-460d-b9a1-acae3fa2d62d 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.739 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[88e8400d-453f-4386-b46d-fcbbbeca3e91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.825 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[39ead8b7-4d51-48e8-87d5-f466c8632878]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.826 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.826 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.827 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd93e3720-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.828 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:39 np0005532048 NetworkManager[48920]: <info>  [1763803179.8291] manager: (tapd93e3720-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Nov 22 04:19:39 np0005532048 kernel: tapd93e3720-b0: entered promiscuous mode
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.837 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd93e3720-b0, col_values=(('external_ids', {'iface-id': '956ab441-c5ef-4e3d-a7c6-6129a5260345'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.838 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:39Z|00559|binding|INFO|Releasing lport 956ab441-c5ef-4e3d-a7c6-6129a5260345 from this chassis (sb_readonly=0)
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.850 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803164.8495457, 2f0d9dce-1900-41c4-9b69-7e46f34dde81 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.851 253665 INFO nova.compute.manager [-] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.851 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.852 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5b488f-2a01-42c0-918a-c210771e33ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.853 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/d93e3720-b00d-41f5-8283-164e9f857d24.pid.haproxy
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID d93e3720-b00d-41f5-8283-164e9f857d24
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:19:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:39.854 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'env', 'PROCESS_TAG=haproxy-d93e3720-b00d-41f5-8283-164e9f857d24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d93e3720-b00d-41f5-8283-164e9f857d24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.854 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.867 253665 DEBUG nova.compute.manager [None req-172b08c9-0d42-4898-b55e-9472a2edb81f - - - - - -] [instance: 2f0d9dce-1900-41c4-9b69-7e46f34dde81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.904 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803179.9036124, 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.904 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] VM Started (Lifecycle Event)#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.921 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.925 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803179.906118, 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.926 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.941 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.945 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:19:39 np0005532048 nova_compute[253661]: 2025-11-22 09:19:39.963 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.206 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:40 np0005532048 podman[317416]: 2025-11-22 09:19:40.288272861 +0000 UTC m=+0.054859423 container create 48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 04:19:40 np0005532048 systemd[1]: Started libpod-conmon-48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8.scope.
Nov 22 04:19:40 np0005532048 podman[317416]: 2025-11-22 09:19:40.262361251 +0000 UTC m=+0.028947833 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:19:40 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:19:40 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5334c5af23dba16e3d478964a076fed62cfc26886b13bd29e456185b8274415e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:40 np0005532048 podman[317416]: 2025-11-22 09:19:40.378904114 +0000 UTC m=+0.145490696 container init 48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:19:40 np0005532048 podman[317416]: 2025-11-22 09:19:40.385550205 +0000 UTC m=+0.152136767 container start 48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:19:40 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[317428]: [NOTICE]   (317432) : New worker (317434) forked
Nov 22 04:19:40 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[317428]: [NOTICE]   (317432) : Loading success.
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.945 253665 DEBUG nova.compute.manager [req-cfa77d17-41e4-44d9-bef3-84ca8564b238 req-e87d8e9f-e736-48ce-a2dc-1789f07179d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received event network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.945 253665 DEBUG oslo_concurrency.lockutils [req-cfa77d17-41e4-44d9-bef3-84ca8564b238 req-e87d8e9f-e736-48ce-a2dc-1789f07179d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.946 253665 DEBUG oslo_concurrency.lockutils [req-cfa77d17-41e4-44d9-bef3-84ca8564b238 req-e87d8e9f-e736-48ce-a2dc-1789f07179d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.946 253665 DEBUG oslo_concurrency.lockutils [req-cfa77d17-41e4-44d9-bef3-84ca8564b238 req-e87d8e9f-e736-48ce-a2dc-1789f07179d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.946 253665 DEBUG nova.compute.manager [req-cfa77d17-41e4-44d9-bef3-84ca8564b238 req-e87d8e9f-e736-48ce-a2dc-1789f07179d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Processing event network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.947 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.956 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803180.951288, 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.956 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] VM Resumed (Lifecycle Event)
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.958 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.962 253665 INFO nova.virt.libvirt.driver [-] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Instance spawned successfully.
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.962 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.974 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.979 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.982 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.983 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.983 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.984 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.984 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:19:40 np0005532048 nova_compute[253661]: 2025-11-22 09:19:40.985 253665 DEBUG nova.virt.libvirt.driver [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:19:41 np0005532048 nova_compute[253661]: 2025-11-22 09:19:41.006 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:19:41 np0005532048 nova_compute[253661]: 2025-11-22 09:19:41.037 253665 INFO nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Took 7.12 seconds to spawn the instance on the hypervisor.
Nov 22 04:19:41 np0005532048 nova_compute[253661]: 2025-11-22 09:19:41.038 253665 DEBUG nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:19:41 np0005532048 nova_compute[253661]: 2025-11-22 09:19:41.094 253665 INFO nova.compute.manager [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Took 8.43 seconds to build instance.
Nov 22 04:19:41 np0005532048 nova_compute[253661]: 2025-11-22 09:19:41.110 253665 DEBUG oslo_concurrency.lockutils [None req-c412d776-9811-4bbe-b6cf-c4c518c4b966 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:41 np0005532048 nova_compute[253661]: 2025-11-22 09:19:41.213 253665 DEBUG nova.compute.manager [req-707d65e9-13e9-4584-a472-1f19b971fb0f req-090dfd6e-27cd-472c-ac71-03e6e1225690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:19:41 np0005532048 nova_compute[253661]: 2025-11-22 09:19:41.214 253665 DEBUG oslo_concurrency.lockutils [req-707d65e9-13e9-4584-a472-1f19b971fb0f req-090dfd6e-27cd-472c-ac71-03e6e1225690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:19:41 np0005532048 nova_compute[253661]: 2025-11-22 09:19:41.214 253665 DEBUG oslo_concurrency.lockutils [req-707d65e9-13e9-4584-a472-1f19b971fb0f req-090dfd6e-27cd-472c-ac71-03e6e1225690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:19:41 np0005532048 nova_compute[253661]: 2025-11-22 09:19:41.214 253665 DEBUG oslo_concurrency.lockutils [req-707d65e9-13e9-4584-a472-1f19b971fb0f req-090dfd6e-27cd-472c-ac71-03e6e1225690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:41 np0005532048 nova_compute[253661]: 2025-11-22 09:19:41.215 253665 DEBUG nova.compute.manager [req-707d65e9-13e9-4584-a472-1f19b971fb0f req-090dfd6e-27cd-472c-ac71-03e6e1225690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] No waiting events found dispatching network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:19:41 np0005532048 nova_compute[253661]: 2025-11-22 09:19:41.215 253665 WARNING nova.compute.manager [req-707d65e9-13e9-4584-a472-1f19b971fb0f req-090dfd6e-27cd-472c-ac71-03e6e1225690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received unexpected event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd for instance with vm_state active and task_state None.
Nov 22 04:19:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 215 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 276 op/s
Nov 22 04:19:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:42 np0005532048 nova_compute[253661]: 2025-11-22 09:19:42.480 253665 INFO nova.compute.manager [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Rebuilding instance
Nov 22 04:19:42 np0005532048 nova_compute[253661]: 2025-11-22 09:19:42.719 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:19:42 np0005532048 nova_compute[253661]: 2025-11-22 09:19:42.735 253665 DEBUG nova.compute.manager [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:19:42 np0005532048 nova_compute[253661]: 2025-11-22 09:19:42.775 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'pci_requests' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:19:42 np0005532048 nova_compute[253661]: 2025-11-22 09:19:42.785 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'pci_devices' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:19:42 np0005532048 nova_compute[253661]: 2025-11-22 09:19:42.794 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'resources' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:19:42 np0005532048 nova_compute[253661]: 2025-11-22 09:19:42.809 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'migration_context' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:19:42 np0005532048 nova_compute[253661]: 2025-11-22 09:19:42.832 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 04:19:42 np0005532048 nova_compute[253661]: 2025-11-22 09:19:42.838 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 04:19:43 np0005532048 nova_compute[253661]: 2025-11-22 09:19:43.277 253665 DEBUG nova.compute.manager [req-1bd69b4c-b14f-4b9d-b914-18e63a2c4b62 req-330e14e0-cce6-4c91-8a8f-1a992d0d6e4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received event network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:19:43 np0005532048 nova_compute[253661]: 2025-11-22 09:19:43.278 253665 DEBUG oslo_concurrency.lockutils [req-1bd69b4c-b14f-4b9d-b914-18e63a2c4b62 req-330e14e0-cce6-4c91-8a8f-1a992d0d6e4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:19:43 np0005532048 nova_compute[253661]: 2025-11-22 09:19:43.278 253665 DEBUG oslo_concurrency.lockutils [req-1bd69b4c-b14f-4b9d-b914-18e63a2c4b62 req-330e14e0-cce6-4c91-8a8f-1a992d0d6e4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:19:43 np0005532048 nova_compute[253661]: 2025-11-22 09:19:43.278 253665 DEBUG oslo_concurrency.lockutils [req-1bd69b4c-b14f-4b9d-b914-18e63a2c4b62 req-330e14e0-cce6-4c91-8a8f-1a992d0d6e4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:43 np0005532048 nova_compute[253661]: 2025-11-22 09:19:43.279 253665 DEBUG nova.compute.manager [req-1bd69b4c-b14f-4b9d-b914-18e63a2c4b62 req-330e14e0-cce6-4c91-8a8f-1a992d0d6e4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] No waiting events found dispatching network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:19:43 np0005532048 nova_compute[253661]: 2025-11-22 09:19:43.279 253665 WARNING nova.compute.manager [req-1bd69b4c-b14f-4b9d-b914-18e63a2c4b62 req-330e14e0-cce6-4c91-8a8f-1a992d0d6e4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received unexpected event network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 for instance with vm_state active and task_state None.
Nov 22 04:19:43 np0005532048 nova_compute[253661]: 2025-11-22 09:19:43.380 253665 DEBUG nova.objects.instance [None req-400b75c6-a44d-48fd-ae34-077406dbd5e3 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:19:43 np0005532048 nova_compute[253661]: 2025-11-22 09:19:43.401 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803183.4008892, 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:19:43 np0005532048 nova_compute[253661]: 2025-11-22 09:19:43.401 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] VM Paused (Lifecycle Event)
Nov 22 04:19:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 216 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 5.4 MiB/s wr, 374 op/s
Nov 22 04:19:43 np0005532048 nova_compute[253661]: 2025-11-22 09:19:43.416 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:19:43 np0005532048 nova_compute[253661]: 2025-11-22 09:19:43.428 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:19:43 np0005532048 nova_compute[253661]: 2025-11-22 09:19:43.445 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:43 np0005532048 nova_compute[253661]: 2025-11-22 09:19:43.451 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 22 04:19:43 np0005532048 kernel: tapdae1e68a-69 (unregistering): left promiscuous mode
Nov 22 04:19:43 np0005532048 NetworkManager[48920]: <info>  [1763803183.8516] device (tapdae1e68a-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:19:43 np0005532048 nova_compute[253661]: 2025-11-22 09:19:43.868 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:43Z|00560|binding|INFO|Releasing lport dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 from this chassis (sb_readonly=0)
Nov 22 04:19:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:43Z|00561|binding|INFO|Setting lport dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 down in Southbound
Nov 22 04:19:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:43Z|00562|binding|INFO|Removing iface tapdae1e68a-69 ovn-installed in OVS
Nov 22 04:19:43 np0005532048 nova_compute[253661]: 2025-11-22 09:19:43.879 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:43.885 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:20:2a 10.100.0.3'], port_security=['fa:16:3e:61:20:2a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5a489088-2d5b-49b6-8280-e2d86fa4fbf3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d93e3720-b00d-41f5-8283-164e9f857d24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4fe4f74353442a9a8042d29dcf6274e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815c5d1d-1b6e-45f9-b701-a476430eeb72', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f96831c1-7eec-48e6-a84b-cf09edd17625, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:19:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:43.887 162862 INFO neutron.agent.ovn.metadata.agent [-] Port dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 in datapath d93e3720-b00d-41f5-8283-164e9f857d24 unbound from our chassis
Nov 22 04:19:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:43.889 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d93e3720-b00d-41f5-8283-164e9f857d24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:19:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:43.890 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5104e2-d9b3-4150-98be-279825dfc605]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:43.895 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 namespace which is not needed anymore
Nov 22 04:19:43 np0005532048 nova_compute[253661]: 2025-11-22 09:19:43.905 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:43 np0005532048 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Nov 22 04:19:43 np0005532048 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003b.scope: Consumed 2.967s CPU time.
Nov 22 04:19:43 np0005532048 systemd-machined[215941]: Machine qemu-68-instance-0000003b terminated.
Nov 22 04:19:44 np0005532048 nova_compute[253661]: 2025-11-22 09:19:44.018 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:44 np0005532048 nova_compute[253661]: 2025-11-22 09:19:44.027 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:44 np0005532048 nova_compute[253661]: 2025-11-22 09:19:44.029 253665 DEBUG nova.compute.manager [None req-400b75c6-a44d-48fd-ae34-077406dbd5e3 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:19:44 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[317428]: [NOTICE]   (317432) : haproxy version is 2.8.14-c23fe91
Nov 22 04:19:44 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[317428]: [NOTICE]   (317432) : path to executable is /usr/sbin/haproxy
Nov 22 04:19:44 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[317428]: [WARNING]  (317432) : Exiting Master process...
Nov 22 04:19:44 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[317428]: [WARNING]  (317432) : Exiting Master process...
Nov 22 04:19:44 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[317428]: [ALERT]    (317432) : Current worker (317434) exited with code 143 (Terminated)
Nov 22 04:19:44 np0005532048 neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24[317428]: [WARNING]  (317432) : All workers exited. Exiting... (0)
Nov 22 04:19:44 np0005532048 systemd[1]: libpod-48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8.scope: Deactivated successfully.
Nov 22 04:19:44 np0005532048 podman[317470]: 2025-11-22 09:19:44.060791737 +0000 UTC m=+0.059206149 container died 48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 04:19:44 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8-userdata-shm.mount: Deactivated successfully.
Nov 22 04:19:44 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5334c5af23dba16e3d478964a076fed62cfc26886b13bd29e456185b8274415e-merged.mount: Deactivated successfully.
Nov 22 04:19:44 np0005532048 podman[317470]: 2025-11-22 09:19:44.136280221 +0000 UTC m=+0.134694633 container cleanup 48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:19:44 np0005532048 systemd[1]: libpod-conmon-48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8.scope: Deactivated successfully.
Nov 22 04:19:44 np0005532048 podman[317505]: 2025-11-22 09:19:44.211045859 +0000 UTC m=+0.043281273 container remove 48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:19:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.217 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2356e2c7-087f-4314-82ce-75082ede1ab9]: (4, ('Sat Nov 22 09:19:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8)\n48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8\nSat Nov 22 09:19:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 (48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8)\n48ea9a7bddcfec51cb3b45cf35866bb57a71cdfa17ac1f9d47eb3e9d196748c8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.219 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7e99416d-57ca-4e1d-8cb5-808e552483ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.220 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd93e3720-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:19:44 np0005532048 nova_compute[253661]: 2025-11-22 09:19:44.222 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:44 np0005532048 kernel: tapd93e3720-b0: left promiscuous mode
Nov 22 04:19:44 np0005532048 nova_compute[253661]: 2025-11-22 09:19:44.243 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.246 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0a349315-8a09-4cd6-957d-887b91b2948b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.268 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a39320f6-e476-46e8-b760-8fb4cd70c487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.269 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[38cae474-abcc-49dc-8317-aba9b659602e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.291 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ee148e-7851-48cd-990c-cb16ab7e4521]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603026, 'reachable_time': 27729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317524, 'error': None, 'target': 'ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:44 np0005532048 systemd[1]: run-netns-ovnmeta\x2dd93e3720\x2db00d\x2d41f5\x2d8283\x2d164e9f857d24.mount: Deactivated successfully.
Nov 22 04:19:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.296 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d93e3720-b00d-41f5-8283-164e9f857d24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:19:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:44.296 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[af8d26eb-977d-4121-9657-4a4d8ce2cdfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:45 np0005532048 nova_compute[253661]: 2025-11-22 09:19:45.208 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:45 np0005532048 nova_compute[253661]: 2025-11-22 09:19:45.360 253665 DEBUG nova.compute.manager [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received event network-vif-unplugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:45 np0005532048 nova_compute[253661]: 2025-11-22 09:19:45.361 253665 DEBUG oslo_concurrency.lockutils [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:45 np0005532048 nova_compute[253661]: 2025-11-22 09:19:45.362 253665 DEBUG oslo_concurrency.lockutils [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:45 np0005532048 nova_compute[253661]: 2025-11-22 09:19:45.363 253665 DEBUG oslo_concurrency.lockutils [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:45 np0005532048 nova_compute[253661]: 2025-11-22 09:19:45.363 253665 DEBUG nova.compute.manager [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] No waiting events found dispatching network-vif-unplugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:45 np0005532048 nova_compute[253661]: 2025-11-22 09:19:45.364 253665 WARNING nova.compute.manager [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received unexpected event network-vif-unplugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 for instance with vm_state suspended and task_state None.#033[00m
Nov 22 04:19:45 np0005532048 nova_compute[253661]: 2025-11-22 09:19:45.364 253665 DEBUG nova.compute.manager [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received event network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:45 np0005532048 nova_compute[253661]: 2025-11-22 09:19:45.364 253665 DEBUG oslo_concurrency.lockutils [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:45 np0005532048 nova_compute[253661]: 2025-11-22 09:19:45.365 253665 DEBUG oslo_concurrency.lockutils [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:45 np0005532048 nova_compute[253661]: 2025-11-22 09:19:45.365 253665 DEBUG oslo_concurrency.lockutils [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:45 np0005532048 nova_compute[253661]: 2025-11-22 09:19:45.366 253665 DEBUG nova.compute.manager [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] No waiting events found dispatching network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:45 np0005532048 nova_compute[253661]: 2025-11-22 09:19:45.366 253665 WARNING nova.compute.manager [req-508cd7eb-e58e-46fe-9d75-46fef90ff543 req-b26822db-157b-4274-aa5e-eb87b20709f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received unexpected event network-vif-plugged-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 for instance with vm_state suspended and task_state None.#033[00m
Nov 22 04:19:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 216 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 4.4 MiB/s wr, 380 op/s
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.222 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.223 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.224 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.224 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.224 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.226 253665 INFO nova.compute.manager [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Terminating instance#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.227 253665 DEBUG nova.compute.manager [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.231 253665 INFO nova.virt.libvirt.driver [-] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Instance destroyed successfully.#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.232 253665 DEBUG nova.objects.instance [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lazy-loading 'resources' on Instance uuid 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.244 253665 DEBUG nova.virt.libvirt.vif [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:19:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2092993764',display_name='tempest-DeleteServersTestJSON-server-2092993764',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2092993764',id=59,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:19:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d4fe4f74353442a9a8042d29dcf6274e',ramdisk_id='',reservation_id='r-c64g70hs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-487469072',owner_user_name='tempest-DeleteServersTestJSON-487469072-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:19:44Z,user_data=None,user_id='790eaa89f1a74325b81291d8beca6d38',uuid=5a489088-2d5b-49b6-8280-e2d86fa4fbf3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.245 253665 DEBUG nova.network.os_vif_util [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converting VIF {"id": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "address": "fa:16:3e:61:20:2a", "network": {"id": "d93e3720-b00d-41f5-8283-164e9f857d24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-92539411-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4fe4f74353442a9a8042d29dcf6274e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdae1e68a-69", "ovs_interfaceid": "dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.246 253665 DEBUG nova.network.os_vif_util [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:20:2a,bridge_name='br-int',has_traffic_filtering=True,id=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdae1e68a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.246 253665 DEBUG os_vif [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:20:2a,bridge_name='br-int',has_traffic_filtering=True,id=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdae1e68a-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.248 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.248 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdae1e68a-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.251 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.253 253665 INFO os_vif [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:20:2a,bridge_name='br-int',has_traffic_filtering=True,id=dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38,network=Network(d93e3720-b00d-41f5-8283-164e9f857d24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdae1e68a-69')#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.690 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.795 253665 INFO nova.virt.libvirt.driver [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Deleting instance files /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3_del#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.797 253665 INFO nova.virt.libvirt.driver [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Deletion of /var/lib/nova/instances/5a489088-2d5b-49b6-8280-e2d86fa4fbf3_del complete#033[00m
Nov 22 04:19:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.864 253665 INFO nova.compute.manager [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.865 253665 DEBUG oslo.service.loopingcall [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.865 253665 DEBUG nova.compute.manager [-] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:19:46 np0005532048 nova_compute[253661]: 2025-11-22 09:19:46.865 253665 DEBUG nova.network.neutron [-] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:19:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:47Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:38:8e 10.100.0.4
Nov 22 04:19:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 206 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 3.3 MiB/s wr, 377 op/s
Nov 22 04:19:47 np0005532048 nova_compute[253661]: 2025-11-22 09:19:47.531 253665 DEBUG nova.network.neutron [-] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:47 np0005532048 nova_compute[253661]: 2025-11-22 09:19:47.550 253665 INFO nova.compute.manager [-] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Took 0.68 seconds to deallocate network for instance.#033[00m
Nov 22 04:19:47 np0005532048 nova_compute[253661]: 2025-11-22 09:19:47.599 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:47 np0005532048 nova_compute[253661]: 2025-11-22 09:19:47.599 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:47 np0005532048 nova_compute[253661]: 2025-11-22 09:19:47.639 253665 DEBUG nova.compute.manager [req-5fec05f0-e2a9-47b9-aafd-4362c88f32d7 req-d0367400-344d-4c63-9be2-8020703e400f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Received event network-vif-deleted-dae1e68a-6925-4b4b-bd59-2c1d7cd0fe38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:47 np0005532048 nova_compute[253661]: 2025-11-22 09:19:47.701 253665 DEBUG oslo_concurrency.processutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:19:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3271679337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:48 np0005532048 nova_compute[253661]: 2025-11-22 09:19:48.197 253665 DEBUG oslo_concurrency.processutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:48 np0005532048 nova_compute[253661]: 2025-11-22 09:19:48.202 253665 DEBUG nova.compute.provider_tree [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:19:48 np0005532048 nova_compute[253661]: 2025-11-22 09:19:48.219 253665 DEBUG nova.scheduler.client.report [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:19:48 np0005532048 nova_compute[253661]: 2025-11-22 09:19:48.243 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:48 np0005532048 nova_compute[253661]: 2025-11-22 09:19:48.271 253665 INFO nova.scheduler.client.report [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Deleted allocations for instance 5a489088-2d5b-49b6-8280-e2d86fa4fbf3#033[00m
Nov 22 04:19:48 np0005532048 nova_compute[253661]: 2025-11-22 09:19:48.325 253665 DEBUG oslo_concurrency.lockutils [None req-151909ea-7b3c-4be4-9c5a-29540eac463d 790eaa89f1a74325b81291d8beca6d38 d4fe4f74353442a9a8042d29dcf6274e - - default default] Lock "5a489088-2d5b-49b6-8280-e2d86fa4fbf3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 169 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 2.2 MiB/s wr, 352 op/s
Nov 22 04:19:49 np0005532048 nova_compute[253661]: 2025-11-22 09:19:49.460 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:50 np0005532048 nova_compute[253661]: 2025-11-22 09:19:50.211 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:50 np0005532048 nova_compute[253661]: 2025-11-22 09:19:50.931 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803175.9306934, 18709ea6-4d81-4329-8bbc-2d62e5344ef5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:50 np0005532048 nova_compute[253661]: 2025-11-22 09:19:50.931 253665 INFO nova.compute.manager [-] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:19:50 np0005532048 nova_compute[253661]: 2025-11-22 09:19:50.956 253665 DEBUG nova.compute.manager [None req-cb6e4164-755c-4fe4-ab0a-44a34e2cb1e4 - - - - - -] [instance: 18709ea6-4d81-4329-8bbc-2d62e5344ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:51 np0005532048 nova_compute[253661]: 2025-11-22 09:19:51.251 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 169 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 35 KiB/s wr, 212 op/s
Nov 22 04:19:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:19:52
Nov 22 04:19:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:19:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:19:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'backups', 'default.rgw.log', 'default.rgw.meta']
Nov 22 04:19:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:19:52 np0005532048 nova_compute[253661]: 2025-11-22 09:19:52.456 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "87fbaa81-3eae-4dac-9613-700a29ab0daf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:52 np0005532048 nova_compute[253661]: 2025-11-22 09:19:52.456 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:52 np0005532048 nova_compute[253661]: 2025-11-22 09:19:52.470 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:19:52 np0005532048 nova_compute[253661]: 2025-11-22 09:19:52.529 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:52 np0005532048 nova_compute[253661]: 2025-11-22 09:19:52.529 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:52 np0005532048 nova_compute[253661]: 2025-11-22 09:19:52.539 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:19:52 np0005532048 nova_compute[253661]: 2025-11-22 09:19:52.539 253665 INFO nova.compute.claims [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:19:52 np0005532048 nova_compute[253661]: 2025-11-22 09:19:52.660 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:19:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:19:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:19:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:19:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:19:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:19:52 np0005532048 nova_compute[253661]: 2025-11-22 09:19:52.895 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 22 04:19:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:52Z|00563|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 04:19:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:52Z|00564|binding|INFO|Releasing lport 23aa3d02-a12d-464a-8395-5aa8724c0fd4 from this chassis (sb_readonly=0)
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.021 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:19:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2724326761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.140 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.146 253665 DEBUG nova.compute.provider_tree [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.161 253665 DEBUG nova.scheduler.client.report [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.197 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.199 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.244 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.246 253665 DEBUG nova.network.neutron [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.271 253665 INFO nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.290 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.377 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.379 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.379 253665 INFO nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Creating image(s)#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.401 253665 DEBUG nova.storage.rbd_utils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] rbd image 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 188 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 1.9 MiB/s wr, 242 op/s
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.427 253665 DEBUG nova.storage.rbd_utils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] rbd image 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.450 253665 DEBUG nova.storage.rbd_utils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] rbd image 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.453 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.489 253665 DEBUG nova.policy [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2c92c50f03874da0a9bd18e66157708e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dd04e58a339948e6b219ee858ce56620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.527 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.528 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.530 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.530 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.554 253665 DEBUG nova.storage.rbd_utils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] rbd image 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.557 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:53.732 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:19:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:53.734 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:19:53 np0005532048 nova_compute[253661]: 2025-11-22 09:19:53.733 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:54 np0005532048 nova_compute[253661]: 2025-11-22 09:19:54.149 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:54 np0005532048 nova_compute[253661]: 2025-11-22 09:19:54.202 253665 DEBUG nova.storage.rbd_utils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] resizing rbd image 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:19:54 np0005532048 nova_compute[253661]: 2025-11-22 09:19:54.227 253665 DEBUG nova.network.neutron [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Successfully created port: 95d3860d-a485-46b6-8875-35bb61ae7e9d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:19:54 np0005532048 nova_compute[253661]: 2025-11-22 09:19:54.289 253665 DEBUG nova.objects.instance [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lazy-loading 'migration_context' on Instance uuid 87fbaa81-3eae-4dac-9613-700a29ab0daf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:54 np0005532048 nova_compute[253661]: 2025-11-22 09:19:54.300 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:19:54 np0005532048 nova_compute[253661]: 2025-11-22 09:19:54.301 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Ensure instance console log exists: /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:19:54 np0005532048 nova_compute[253661]: 2025-11-22 09:19:54.301 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:54 np0005532048 nova_compute[253661]: 2025-11-22 09:19:54.301 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:54 np0005532048 nova_compute[253661]: 2025-11-22 09:19:54.302 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:19:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:19:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:19:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:19:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:19:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:19:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:19:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:19:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:19:54 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.212 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.215 253665 DEBUG nova.network.neutron [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Successfully updated port: 95d3860d-a485-46b6-8875-35bb61ae7e9d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.230 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "refresh_cache-87fbaa81-3eae-4dac-9613-700a29ab0daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.230 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquired lock "refresh_cache-87fbaa81-3eae-4dac-9613-700a29ab0daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.230 253665 DEBUG nova.network.neutron [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:19:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 196 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 161 op/s
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.481 253665 DEBUG nova.compute.manager [req-fdfa4a12-b388-4b07-9342-32198517d1a4 req-f25de4fe-5d66-493d-9988-d1fa980567f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received event network-changed-95d3860d-a485-46b6-8875-35bb61ae7e9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.481 253665 DEBUG nova.compute.manager [req-fdfa4a12-b388-4b07-9342-32198517d1a4 req-f25de4fe-5d66-493d-9988-d1fa980567f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Refreshing instance network info cache due to event network-changed-95d3860d-a485-46b6-8875-35bb61ae7e9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.482 253665 DEBUG oslo_concurrency.lockutils [req-fdfa4a12-b388-4b07-9342-32198517d1a4 req-f25de4fe-5d66-493d-9988-d1fa980567f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-87fbaa81-3eae-4dac-9613-700a29ab0daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.521 253665 DEBUG nova.network.neutron [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:19:55 np0005532048 kernel: tap2a28300e-6b (unregistering): left promiscuous mode
Nov 22 04:19:55 np0005532048 NetworkManager[48920]: <info>  [1763803195.6907] device (tap2a28300e-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:19:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:55Z|00565|binding|INFO|Releasing lport 2a28300e-6b6b-4513-831f-e30f3694fbcd from this chassis (sb_readonly=0)
Nov 22 04:19:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:55Z|00566|binding|INFO|Setting lport 2a28300e-6b6b-4513-831f-e30f3694fbcd down in Southbound
Nov 22 04:19:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:55Z|00567|binding|INFO|Removing iface tap2a28300e-6b ovn-installed in OVS
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.702 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:55.706 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:af:ec 10.100.0.12'], port_security=['fa:16:3e:7c:af:ec 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6fc1c0e4-3bd1-44c5-a722-9a30961fc545', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9acc6289-82af-49fc-aec4-129a3648eb3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e23b009d-efb8-4598-83cc-050b9cf1ce0d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2a28300e-6b6b-4513-831f-e30f3694fbcd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:19:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:55.707 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2a28300e-6b6b-4513-831f-e30f3694fbcd in datapath 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd unbound from our chassis#033[00m
Nov 22 04:19:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:55.709 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:19:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:55.710 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[259809f2-fdec-422b-ad18-23a85e962bba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:55.710 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd namespace which is not needed anymore#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.717 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:55 np0005532048 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Nov 22 04:19:55 np0005532048 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003a.scope: Consumed 14.399s CPU time.
Nov 22 04:19:55 np0005532048 systemd-machined[215941]: Machine qemu-67-instance-0000003a terminated.
Nov 22 04:19:55 np0005532048 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[317216]: [NOTICE]   (317238) : haproxy version is 2.8.14-c23fe91
Nov 22 04:19:55 np0005532048 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[317216]: [NOTICE]   (317238) : path to executable is /usr/sbin/haproxy
Nov 22 04:19:55 np0005532048 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[317216]: [WARNING]  (317238) : Exiting Master process...
Nov 22 04:19:55 np0005532048 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[317216]: [ALERT]    (317238) : Current worker (317241) exited with code 143 (Terminated)
Nov 22 04:19:55 np0005532048 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[317216]: [WARNING]  (317238) : All workers exited. Exiting... (0)
Nov 22 04:19:55 np0005532048 systemd[1]: libpod-13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03.scope: Deactivated successfully.
Nov 22 04:19:55 np0005532048 podman[317779]: 2025-11-22 09:19:55.863668238 +0000 UTC m=+0.074671825 container died 13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.913 253665 DEBUG nova.compute.manager [req-3f5e2410-7707-4c36-ae9f-fdceb17351bc req-a16c5d0e-2b2d-4b7d-af7a-4e0f83bb1ef3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-unplugged-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.913 253665 DEBUG oslo_concurrency.lockutils [req-3f5e2410-7707-4c36-ae9f-fdceb17351bc req-a16c5d0e-2b2d-4b7d-af7a-4e0f83bb1ef3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.913 253665 DEBUG oslo_concurrency.lockutils [req-3f5e2410-7707-4c36-ae9f-fdceb17351bc req-a16c5d0e-2b2d-4b7d-af7a-4e0f83bb1ef3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.914 253665 DEBUG oslo_concurrency.lockutils [req-3f5e2410-7707-4c36-ae9f-fdceb17351bc req-a16c5d0e-2b2d-4b7d-af7a-4e0f83bb1ef3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.914 253665 DEBUG nova.compute.manager [req-3f5e2410-7707-4c36-ae9f-fdceb17351bc req-a16c5d0e-2b2d-4b7d-af7a-4e0f83bb1ef3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] No waiting events found dispatching network-vif-unplugged-2a28300e-6b6b-4513-831f-e30f3694fbcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.914 253665 WARNING nova.compute.manager [req-3f5e2410-7707-4c36-ae9f-fdceb17351bc req-a16c5d0e-2b2d-4b7d-af7a-4e0f83bb1ef3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received unexpected event network-vif-unplugged-2a28300e-6b6b-4513-831f-e30f3694fbcd for instance with vm_state active and task_state rebuilding.#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.928 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.933 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.941 253665 INFO nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance shutdown successfully after 13 seconds.#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.949 253665 INFO nova.virt.libvirt.driver [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance destroyed successfully.#033[00m
Nov 22 04:19:55 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03-userdata-shm.mount: Deactivated successfully.
Nov 22 04:19:55 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0c1241bc236f075e40e2ab81b6d24373f4bf0bae4ad4e81373a81f5c0654f230-merged.mount: Deactivated successfully.
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.960 253665 INFO nova.virt.libvirt.driver [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance destroyed successfully.#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.961 253665 DEBUG nova.virt.libvirt.vif [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:19:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1392829761',display_name='tempest-ServerDiskConfigTestJSON-server-1392829761',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1392829761',id=58,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:19:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-4205cvpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:42Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=6fc1c0e4-3bd1-44c5-a722-9a30961fc545,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.961 253665 DEBUG nova.network.os_vif_util [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.962 253665 DEBUG nova.network.os_vif_util [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.962 253665 DEBUG os_vif [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.964 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.964 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a28300e-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.965 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.967 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:55 np0005532048 nova_compute[253661]: 2025-11-22 09:19:55.969 253665 INFO os_vif [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b')#033[00m
Nov 22 04:19:56 np0005532048 podman[317779]: 2025-11-22 09:19:56.113093448 +0000 UTC m=+0.324097035 container cleanup 13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:19:56 np0005532048 systemd[1]: libpod-conmon-13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03.scope: Deactivated successfully.
Nov 22 04:19:56 np0005532048 podman[317836]: 2025-11-22 09:19:56.363725568 +0000 UTC m=+0.227809887 container remove 13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:19:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.370 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6caee83b-abdd-454e-ba61-2ab0f7d2db54]: (4, ('Sat Nov 22 09:19:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd (13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03)\n13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03\nSat Nov 22 09:19:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd (13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03)\n13b973f9146c6ea95321cf2c7475aca8774339f4d87d07ac7f8eac0d7c9e7a03\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.373 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4cfe2c-8edb-49cc-bd8f-0e121962e71e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.374 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d1bce2-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:56 np0005532048 kernel: tap01d1bce2-e0: left promiscuous mode
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.376 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.395 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f2231c-7b66-43c2-aef6-d4fc4f9b4ba8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.398 253665 DEBUG nova.network.neutron [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Updating instance_info_cache with network_info: [{"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.407 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[227cbc3e-e43e-48f6-aa36-8eeb08e7b214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.409 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7fc10925-e45f-4ecf-92d4-a498ad283f17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.417 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Releasing lock "refresh_cache-87fbaa81-3eae-4dac-9613-700a29ab0daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.418 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Instance network_info: |[{"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.418 253665 DEBUG oslo_concurrency.lockutils [req-fdfa4a12-b388-4b07-9342-32198517d1a4 req-f25de4fe-5d66-493d-9988-d1fa980567f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-87fbaa81-3eae-4dac-9613-700a29ab0daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.419 253665 DEBUG nova.network.neutron [req-fdfa4a12-b388-4b07-9342-32198517d1a4 req-f25de4fe-5d66-493d-9988-d1fa980567f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Refreshing network info cache for port 95d3860d-a485-46b6-8875-35bb61ae7e9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.421 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Start _get_guest_xml network_info=[{"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:19:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.424 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[673d13bb-910b-4557-b9a8-3e699588a615]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602904, 'reachable_time': 32759, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317851, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:56 np0005532048 systemd[1]: run-netns-ovnmeta\x2d01d1bce2\x2def3d\x2d44bf\x2da3f9\x2d13dc692c2ddd.mount: Deactivated successfully.
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.428 253665 WARNING nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:19:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.428 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:19:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:56.429 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c4e080-f1ab-4b04-b744-5ea42e5feb18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.434 253665 DEBUG nova.virt.libvirt.host [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.435 253665 DEBUG nova.virt.libvirt.host [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.440 253665 DEBUG nova.virt.libvirt.host [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.441 253665 DEBUG nova.virt.libvirt.host [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.441 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.441 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.442 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.442 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.442 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.443 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.443 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.443 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.443 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.443 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.444 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.444 253665 DEBUG nova.virt.hardware [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.447 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:56 np0005532048 podman[317849]: 2025-11-22 09:19:56.514451041 +0000 UTC m=+0.090380788 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 22 04:19:56 np0005532048 podman[317857]: 2025-11-22 09:19:56.515990748 +0000 UTC m=+0.060570103 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:19:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2377928327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.878 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.905 253665 DEBUG nova.storage.rbd_utils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] rbd image 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:56 np0005532048 nova_compute[253661]: 2025-11-22 09:19:56.909 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.293 253665 INFO nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Deleting instance files /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545_del#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.294 253665 INFO nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Deletion of /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545_del complete#033[00m
Nov 22 04:19:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1392090696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 221 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 950 KiB/s rd, 3.2 MiB/s wr, 144 op/s
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.418 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.419 253665 DEBUG nova.virt.libvirt.vif [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:19:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1641561282',display_name='tempest-ServerAddressesTestJSON-server-1641561282',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1641561282',id=60,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd04e58a339948e6b219ee858ce56620',ramdisk_id='',reservation_id='r-idyfcrg0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1270862588',owner_user_name='tempest-ServerAddre
ssesTestJSON-1270862588-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:53Z,user_data=None,user_id='2c92c50f03874da0a9bd18e66157708e',uuid=87fbaa81-3eae-4dac-9613-700a29ab0daf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.419 253665 DEBUG nova.network.os_vif_util [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Converting VIF {"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.420 253665 DEBUG nova.network.os_vif_util [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:69:07,bridge_name='br-int',has_traffic_filtering=True,id=95d3860d-a485-46b6-8875-35bb61ae7e9d,network=Network(209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d3860d-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.421 253665 DEBUG nova.objects.instance [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lazy-loading 'pci_devices' on Instance uuid 87fbaa81-3eae-4dac-9613-700a29ab0daf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.424 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.424 253665 INFO nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Creating image(s)#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.442 253665 DEBUG nova.storage.rbd_utils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.461 253665 DEBUG nova.storage.rbd_utils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.480 253665 DEBUG nova.storage.rbd_utils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.484 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.522 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  <uuid>87fbaa81-3eae-4dac-9613-700a29ab0daf</uuid>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  <name>instance-0000003c</name>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerAddressesTestJSON-server-1641561282</nova:name>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:19:56</nova:creationTime>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:19:57 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:        <nova:user uuid="2c92c50f03874da0a9bd18e66157708e">tempest-ServerAddressesTestJSON-1270862588-project-member</nova:user>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:        <nova:project uuid="dd04e58a339948e6b219ee858ce56620">tempest-ServerAddressesTestJSON-1270862588</nova:project>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:        <nova:port uuid="95d3860d-a485-46b6-8875-35bb61ae7e9d">
Nov 22 04:19:57 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <entry name="serial">87fbaa81-3eae-4dac-9613-700a29ab0daf</entry>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <entry name="uuid">87fbaa81-3eae-4dac-9613-700a29ab0daf</entry>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/87fbaa81-3eae-4dac-9613-700a29ab0daf_disk">
Nov 22 04:19:57 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:57 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/87fbaa81-3eae-4dac-9613-700a29ab0daf_disk.config">
Nov 22 04:19:57 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:57 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:22:69:07"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <target dev="tap95d3860d-a4"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf/console.log" append="off"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:19:57 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:19:57 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:19:57 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:19:57 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.523 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Preparing to wait for external event network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.523 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.524 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.524 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.524 253665 DEBUG nova.virt.libvirt.vif [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:19:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1641561282',display_name='tempest-ServerAddressesTestJSON-server-1641561282',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1641561282',id=60,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd04e58a339948e6b219ee858ce56620',ramdisk_id='',reservation_id='r-idyfcrg0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1270862588',owner_user_name='tempest-S
erverAddressesTestJSON-1270862588-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:53Z,user_data=None,user_id='2c92c50f03874da0a9bd18e66157708e',uuid=87fbaa81-3eae-4dac-9613-700a29ab0daf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.525 253665 DEBUG nova.network.os_vif_util [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Converting VIF {"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.525 253665 DEBUG nova.network.os_vif_util [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:69:07,bridge_name='br-int',has_traffic_filtering=True,id=95d3860d-a485-46b6-8875-35bb61ae7e9d,network=Network(209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d3860d-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.526 253665 DEBUG os_vif [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:69:07,bridge_name='br-int',has_traffic_filtering=True,id=95d3860d-a485-46b6-8875-35bb61ae7e9d,network=Network(209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d3860d-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.527 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.527 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.527 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.529 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.530 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95d3860d-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.530 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap95d3860d-a4, col_values=(('external_ids', {'iface-id': '95d3860d-a485-46b6-8875-35bb61ae7e9d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:69:07', 'vm-uuid': '87fbaa81-3eae-4dac-9613-700a29ab0daf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.531 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:57 np0005532048 NetworkManager[48920]: <info>  [1763803197.5323] manager: (tap95d3860d-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.534 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.536 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.537 253665 INFO os_vif [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:69:07,bridge_name='br-int',has_traffic_filtering=True,id=95d3860d-a485-46b6-8875-35bb61ae7e9d,network=Network(209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d3860d-a4')#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.561 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.562 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.562 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.563 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.586 253665 DEBUG nova.storage.rbd_utils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.589 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.643 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.644 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.644 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] No VIF found with MAC fa:16:3e:22:69:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.645 253665 INFO nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Using config drive#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.667 253665 DEBUG nova.storage.rbd_utils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] rbd image 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.884 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.932 253665 DEBUG nova.storage.rbd_utils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] resizing rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.994 253665 DEBUG nova.compute.manager [req-9dcaed7b-1190-49a6-8158-487e6340e64d req-f506e576-bcc2-43df-9847-9916c5a139c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.994 253665 DEBUG oslo_concurrency.lockutils [req-9dcaed7b-1190-49a6-8158-487e6340e64d req-f506e576-bcc2-43df-9847-9916c5a139c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.995 253665 DEBUG oslo_concurrency.lockutils [req-9dcaed7b-1190-49a6-8158-487e6340e64d req-f506e576-bcc2-43df-9847-9916c5a139c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.995 253665 DEBUG oslo_concurrency.lockutils [req-9dcaed7b-1190-49a6-8158-487e6340e64d req-f506e576-bcc2-43df-9847-9916c5a139c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.996 253665 DEBUG nova.compute.manager [req-9dcaed7b-1190-49a6-8158-487e6340e64d req-f506e576-bcc2-43df-9847-9916c5a139c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] No waiting events found dispatching network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:19:57 np0005532048 nova_compute[253661]: 2025-11-22 09:19:57.996 253665 WARNING nova.compute.manager [req-9dcaed7b-1190-49a6-8158-487e6340e64d req-f506e576-bcc2-43df-9847-9916c5a139c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received unexpected event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.030 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.031 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Ensure instance console log exists: /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.031 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.031 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.032 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.033 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Start _get_guest_xml network_info=[{"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.037 253665 WARNING nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.042 253665 DEBUG nova.virt.libvirt.host [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.042 253665 DEBUG nova.virt.libvirt.host [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.045 253665 DEBUG nova.virt.libvirt.host [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.045 253665 DEBUG nova.virt.libvirt.host [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.045 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.045 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.046 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.046 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.046 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.046 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.047 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.047 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.047 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.047 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.047 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.048 253665 DEBUG nova.virt.hardware [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.048 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.063 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2701328283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.537 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.561 253665 DEBUG nova.storage.rbd_utils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.565 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.932 253665 INFO nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Creating config drive at /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf/disk.config#033[00m
Nov 22 04:19:58 np0005532048 nova_compute[253661]: 2025-11-22 09:19:58.937 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj5jc7pgw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:19:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.009 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.011 253665 DEBUG nova.virt.libvirt.vif [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:19:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1392829761',display_name='tempest-ServerDiskConfigTestJSON-server-1392829761',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1392829761',id=58,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:19:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-4205cvpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:57Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=6fc1c0e4-3bd1-44c5-a722-9a30961fc545,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.011 253665 DEBUG nova.network.os_vif_util [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.012 253665 DEBUG nova.network.os_vif_util [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.015 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  <uuid>6fc1c0e4-3bd1-44c5-a722-9a30961fc545</uuid>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  <name>instance-0000003a</name>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1392829761</nova:name>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:19:58</nova:creationTime>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:19:59 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:        <nova:user uuid="5352d2182544454aab03bd4a74160247">tempest-ServerDiskConfigTestJSON-1778643933-project-member</nova:user>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:        <nova:project uuid="a29f2c834c7a4a2ea6c4fc6dea996a8e">tempest-ServerDiskConfigTestJSON-1778643933</nova:project>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:        <nova:port uuid="2a28300e-6b6b-4513-831f-e30f3694fbcd">
Nov 22 04:19:59 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <entry name="serial">6fc1c0e4-3bd1-44c5-a722-9a30961fc545</entry>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <entry name="uuid">6fc1c0e4-3bd1-44c5-a722-9a30961fc545</entry>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk">
Nov 22 04:19:59 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:59 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config">
Nov 22 04:19:59 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:19:59 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:7c:af:ec"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <target dev="tap2a28300e-6b"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/console.log" append="off"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:19:59 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:19:59 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:19:59 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:19:59 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.015 253665 DEBUG nova.compute.manager [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Preparing to wait for external event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.015 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.015 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.016 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.016 253665 DEBUG nova.virt.libvirt.vif [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:19:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1392829761',display_name='tempest-ServerDiskConfigTestJSON-server-1392829761',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1392829761',id=58,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:19:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-4205cvpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:19:57Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=6fc1c0e4-3bd1-44c5-a722-9a30961fc545,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.016 253665 DEBUG nova.network.os_vif_util [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.017 253665 DEBUG nova.network.os_vif_util [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.017 253665 DEBUG os_vif [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.018 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.018 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.018 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.020 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.020 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a28300e-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.021 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a28300e-6b, col_values=(('external_ids', {'iface-id': '2a28300e-6b6b-4513-831f-e30f3694fbcd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:af:ec', 'vm-uuid': '6fc1c0e4-3bd1-44c5-a722-9a30961fc545'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.022 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:59 np0005532048 NetworkManager[48920]: <info>  [1763803199.0234] manager: (tap2a28300e-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.024 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.029 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.030 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803184.0254579, 5a489088-2d5b-49b6-8280-e2d86fa4fbf3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.030 253665 INFO nova.compute.manager [-] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.032 253665 INFO os_vif [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b')#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.049 253665 DEBUG nova.compute.manager [None req-8103c6d8-e501-45f1-b770-5d2a87a95db4 - - - - - -] [instance: 5a489088-2d5b-49b6-8280-e2d86fa4fbf3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.067 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.067 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.068 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No VIF found with MAC fa:16:3e:7c:af:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.068 253665 INFO nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Using config drive#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.093 253665 DEBUG nova.storage.rbd_utils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.099 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj5jc7pgw" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.118 253665 DEBUG nova.storage.rbd_utils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] rbd image 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.121 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf/disk.config 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.157 253665 DEBUG nova.network.neutron [req-fdfa4a12-b388-4b07-9342-32198517d1a4 req-f25de4fe-5d66-493d-9988-d1fa980567f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Updated VIF entry in instance network info cache for port 95d3860d-a485-46b6-8875-35bb61ae7e9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.157 253665 DEBUG nova.network.neutron [req-fdfa4a12-b388-4b07-9342-32198517d1a4 req-f25de4fe-5d66-493d-9988-d1fa980567f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Updating instance_info_cache with network_info: [{"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.160 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.202 253665 DEBUG oslo_concurrency.lockutils [req-fdfa4a12-b388-4b07-9342-32198517d1a4 req-f25de4fe-5d66-493d-9988-d1fa980567f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-87fbaa81-3eae-4dac-9613-700a29ab0daf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.203 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'keypairs' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.293 253665 DEBUG oslo_concurrency.processutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf/disk.config 87fbaa81-3eae-4dac-9613-700a29ab0daf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.294 253665 INFO nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Deleting local config drive /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf/disk.config because it was imported into RBD.#033[00m
Nov 22 04:19:59 np0005532048 kernel: tap95d3860d-a4: entered promiscuous mode
Nov 22 04:19:59 np0005532048 NetworkManager[48920]: <info>  [1763803199.3542] manager: (tap95d3860d-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/257)
Nov 22 04:19:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:59Z|00568|binding|INFO|Claiming lport 95d3860d-a485-46b6-8875-35bb61ae7e9d for this chassis.
Nov 22 04:19:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:59Z|00569|binding|INFO|95d3860d-a485-46b6-8875-35bb61ae7e9d: Claiming fa:16:3e:22:69:07 10.100.0.3
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.359 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:59Z|00570|binding|INFO|Setting lport 95d3860d-a485-46b6-8875-35bb61ae7e9d ovn-installed in OVS
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.385 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.389 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:59 np0005532048 systemd-udevd[318274]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:19:59 np0005532048 systemd-machined[215941]: New machine qemu-69-instance-0000003c.
Nov 22 04:19:59 np0005532048 systemd[1]: Started Virtual Machine qemu-69-instance-0000003c.
Nov 22 04:19:59 np0005532048 NetworkManager[48920]: <info>  [1763803199.4035] device (tap95d3860d-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:19:59 np0005532048 NetworkManager[48920]: <info>  [1763803199.4052] device (tap95d3860d-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:19:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:59Z|00571|binding|INFO|Setting lport 95d3860d-a485-46b6-8875-35bb61ae7e9d up in Southbound
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.409 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:69:07 10.100.0.3'], port_security=['fa:16:3e:22:69:07 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '87fbaa81-3eae-4dac-9613-700a29ab0daf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd04e58a339948e6b219ee858ce56620', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dfe1d73e-9743-4e1d-a71d-46f13de720cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=923cf162-d21a-49b5-93fe-032ba9e780ee, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=95d3860d-a485-46b6-8875-35bb61ae7e9d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.411 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 95d3860d-a485-46b6-8875-35bb61ae7e9d in datapath 209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4 bound to our chassis#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.412 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4#033[00m
Nov 22 04:19:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 208 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 691 KiB/s rd, 4.2 MiB/s wr, 181 op/s
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.428 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[19e2a395-d1b9-4e5b-ac46-7c0df3660879]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.429 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap209ca7a4-91 in ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.432 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap209ca7a4-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.432 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[160ed07d-0fc6-4b0e-b8db-20b95df4b467]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.433 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[30b419db-48b9-4486-a42d-60684833c876]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.457 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[045553f5-f3b9-4a92-88c0-992a2a971562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.486 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad4bcbb-c4eb-4794-aa82-270b4846eafa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.518 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0483e96c-9497-40f5-9ec2-cc92cb26c30a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:59 np0005532048 systemd-udevd[318285]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:19:59 np0005532048 NetworkManager[48920]: <info>  [1763803199.5263] manager: (tap209ca7a4-90): new Veth device (/org/freedesktop/NetworkManager/Devices/258)
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.525 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[26dcfd40-4241-4135-b725-356f395ac3e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.565 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[19de77e7-6fc8-408f-8d84-6fff01b41ede]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.570 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a24cdf69-1990-4133-af59-2c7f78a36294]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:59 np0005532048 NetworkManager[48920]: <info>  [1763803199.6022] device (tap209ca7a4-90): carrier: link connected
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.607 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[148eec53-e3af-47ce-a305-81ec79d439de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.628 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[56bbf093-efad-41c8-a8ff-db90e231e72a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap209ca7a4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:6a:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605034, 'reachable_time': 24390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318384, 'error': None, 'target': 'ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.649 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f96877a2-9c0c-45b3-89df-8d42a970a269]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:6aa6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605034, 'tstamp': 605034}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318389, 'error': None, 'target': 'ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.670 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94053dcd-aced-456e-9dd9-2433c824f84c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap209ca7a4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:6a:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605034, 'reachable_time': 24390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318405, 'error': None, 'target': 'ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.711 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7de7a075-b61f-4b27-83a7-7351275742a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.786 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[972a5437-846b-49f0-b4b3-4fa1e5734872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.787 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap209ca7a4-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.788 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.788 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap209ca7a4-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.790 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:59 np0005532048 kernel: tap209ca7a4-90: entered promiscuous mode
Nov 22 04:19:59 np0005532048 NetworkManager[48920]: <info>  [1763803199.7913] manager: (tap209ca7a4-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.796 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.798 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap209ca7a4-90, col_values=(('external_ids', {'iface-id': '38e24d56-d793-475f-b75d-30c0f92d5222'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.799 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:19:59Z|00572|binding|INFO|Releasing lport 38e24d56-d793-475f-b75d-30c0f92d5222 from this chassis (sb_readonly=0)
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.819 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:59 np0005532048 nova_compute[253661]: 2025-11-22 09:19:59.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.823 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.824 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[10ed19ed-42d4-41c1-ac1e-d279b5d5c15d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.825 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4.pid.haproxy
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:19:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:19:59.825 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4', 'env', 'PROCESS_TAG=haproxy-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:20:00 np0005532048 podman[318476]: 2025-11-22 09:20:00.214733651 +0000 UTC m=+0.051925923 container create 0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:20:00 np0005532048 nova_compute[253661]: 2025-11-22 09:20:00.216 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:20:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:20:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:20:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:20:00 np0005532048 systemd[1]: Started libpod-conmon-0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6.scope.
Nov 22 04:20:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:20:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:20:00 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 6f0eadfb-20f3-4eb9-adfd-ab612bfe70f9 does not exist
Nov 22 04:20:00 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c3f2e8a0-3273-464d-9e0d-cdbbfa5463b8 does not exist
Nov 22 04:20:00 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ce7ee678-ba3a-49c2-8a00-1c0fdcefd216 does not exist
Nov 22 04:20:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:20:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:20:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:20:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:20:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:20:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:20:00 np0005532048 podman[318476]: 2025-11-22 09:20:00.188938344 +0000 UTC m=+0.026130636 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:20:00 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:20:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1960c85f8448b6f749a91f07ab39044bd5c3bd65ae0acbccc14832bb6734a46a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:00 np0005532048 podman[318476]: 2025-11-22 09:20:00.321434993 +0000 UTC m=+0.158627295 container init 0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 04:20:00 np0005532048 podman[318476]: 2025-11-22 09:20:00.328846673 +0000 UTC m=+0.166038945 container start 0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 04:20:00 np0005532048 neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4[318532]: [NOTICE]   (318563) : New worker (318566) forked
Nov 22 04:20:00 np0005532048 neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4[318532]: [NOTICE]   (318563) : Loading success.
Nov 22 04:20:00 np0005532048 nova_compute[253661]: 2025-11-22 09:20:00.376 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803200.374917, 87fbaa81-3eae-4dac-9613-700a29ab0daf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:20:00 np0005532048 nova_compute[253661]: 2025-11-22 09:20:00.377 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] VM Started (Lifecycle Event)#033[00m
Nov 22 04:20:00 np0005532048 nova_compute[253661]: 2025-11-22 09:20:00.413 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:00 np0005532048 nova_compute[253661]: 2025-11-22 09:20:00.418 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803200.375687, 87fbaa81-3eae-4dac-9613-700a29ab0daf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:20:00 np0005532048 nova_compute[253661]: 2025-11-22 09:20:00.418 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:20:00 np0005532048 nova_compute[253661]: 2025-11-22 09:20:00.439 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:00 np0005532048 nova_compute[253661]: 2025-11-22 09:20:00.443 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:20:00 np0005532048 nova_compute[253661]: 2025-11-22 09:20:00.463 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:20:00 np0005532048 podman[318601]: 2025-11-22 09:20:00.566346985 +0000 UTC m=+0.103158258 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 04:20:00 np0005532048 podman[318718]: 2025-11-22 09:20:00.903842955 +0000 UTC m=+0.041296045 container create 802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 04:20:00 np0005532048 systemd[1]: Started libpod-conmon-802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42.scope.
Nov 22 04:20:00 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:20:00 np0005532048 podman[318718]: 2025-11-22 09:20:00.88593353 +0000 UTC m=+0.023386650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:20:00 np0005532048 podman[318718]: 2025-11-22 09:20:00.982099047 +0000 UTC m=+0.119552157 container init 802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:20:00 np0005532048 podman[318718]: 2025-11-22 09:20:00.989129057 +0000 UTC m=+0.126582147 container start 802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:20:00 np0005532048 dreamy_bassi[318734]: 167 167
Nov 22 04:20:00 np0005532048 systemd[1]: libpod-802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42.scope: Deactivated successfully.
Nov 22 04:20:00 np0005532048 conmon[318734]: conmon 802f32f1310d8a72bdee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42.scope/container/memory.events
Nov 22 04:20:00 np0005532048 podman[318718]: 2025-11-22 09:20:00.995142553 +0000 UTC m=+0.132595673 container attach 802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 04:20:00 np0005532048 podman[318718]: 2025-11-22 09:20:00.996031985 +0000 UTC m=+0.133485075 container died 802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:20:01 np0005532048 systemd[1]: var-lib-containers-storage-overlay-005f70835b4ba88eccf3c8f8e3e1896613a23767cd414d9d885050bf447f221d-merged.mount: Deactivated successfully.
Nov 22 04:20:01 np0005532048 podman[318718]: 2025-11-22 09:20:01.036123789 +0000 UTC m=+0.173576869 container remove 802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 04:20:01 np0005532048 systemd[1]: libpod-conmon-802f32f1310d8a72bdee403e868b2a8f68885952ebcb58f218f2fbc64cdd1c42.scope: Deactivated successfully.
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.075 253665 INFO nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Creating config drive at /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.080 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptrbdcnc0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:20:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:20:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.127 253665 DEBUG nova.compute.manager [req-5c169edc-9e06-4d9e-a01a-a657467f2191 req-70bb2c42-a1e9-41a4-8827-a9f5a304f6ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received event network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.128 253665 DEBUG oslo_concurrency.lockutils [req-5c169edc-9e06-4d9e-a01a-a657467f2191 req-70bb2c42-a1e9-41a4-8827-a9f5a304f6ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.129 253665 DEBUG oslo_concurrency.lockutils [req-5c169edc-9e06-4d9e-a01a-a657467f2191 req-70bb2c42-a1e9-41a4-8827-a9f5a304f6ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.129 253665 DEBUG oslo_concurrency.lockutils [req-5c169edc-9e06-4d9e-a01a-a657467f2191 req-70bb2c42-a1e9-41a4-8827-a9f5a304f6ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.130 253665 DEBUG nova.compute.manager [req-5c169edc-9e06-4d9e-a01a-a657467f2191 req-70bb2c42-a1e9-41a4-8827-a9f5a304f6ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Processing event network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.131 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.135 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803201.1348798, 87fbaa81-3eae-4dac-9613-700a29ab0daf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.135 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.137 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.142 253665 INFO nova.virt.libvirt.driver [-] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Instance spawned successfully.#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.142 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.171 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.176 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.177 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.177 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.177 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.178 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.178 253665 DEBUG nova.virt.libvirt.driver [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.182 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.221 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.222 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptrbdcnc0" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:01 np0005532048 podman[318760]: 2025-11-22 09:20:01.223622635 +0000 UTC m=+0.044435071 container create 761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.249 253665 DEBUG nova.storage.rbd_utils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.256 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:01 np0005532048 systemd[1]: Started libpod-conmon-761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f.scope.
Nov 22 04:20:01 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:20:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9ddcb46290f12173890c96ff2c510b2d0193b140e036159b79448f6ff62289/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9ddcb46290f12173890c96ff2c510b2d0193b140e036159b79448f6ff62289/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9ddcb46290f12173890c96ff2c510b2d0193b140e036159b79448f6ff62289/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9ddcb46290f12173890c96ff2c510b2d0193b140e036159b79448f6ff62289/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9ddcb46290f12173890c96ff2c510b2d0193b140e036159b79448f6ff62289/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.291 253665 INFO nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Took 7.91 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.292 253665 DEBUG nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:01 np0005532048 podman[318760]: 2025-11-22 09:20:01.298541276 +0000 UTC m=+0.119353732 container init 761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:20:01 np0005532048 podman[318760]: 2025-11-22 09:20:01.203794103 +0000 UTC m=+0.024606559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:20:01 np0005532048 podman[318760]: 2025-11-22 09:20:01.304786397 +0000 UTC m=+0.125598833 container start 761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 04:20:01 np0005532048 podman[318760]: 2025-11-22 09:20:01.308423056 +0000 UTC m=+0.129235512 container attach 761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.355 253665 INFO nova.compute.manager [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Took 8.84 seconds to build instance.#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.377 253665 DEBUG oslo_concurrency.lockutils [None req-d1e402b3-1878-4812-bdff-ae2bd90f1f55 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 208 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 4.2 MiB/s wr, 135 op/s
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.434 253665 DEBUG oslo_concurrency.processutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config 6fc1c0e4-3bd1-44c5-a722-9a30961fc545_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.435 253665 INFO nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Deleting local config drive /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545/disk.config because it was imported into RBD.#033[00m
Nov 22 04:20:01 np0005532048 NetworkManager[48920]: <info>  [1763803201.4949] manager: (tap2a28300e-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Nov 22 04:20:01 np0005532048 kernel: tap2a28300e-6b: entered promiscuous mode
Nov 22 04:20:01 np0005532048 systemd-udevd[318351]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:20:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:01Z|00573|binding|INFO|Claiming lport 2a28300e-6b6b-4513-831f-e30f3694fbcd for this chassis.
Nov 22 04:20:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:01Z|00574|binding|INFO|2a28300e-6b6b-4513-831f-e30f3694fbcd: Claiming fa:16:3e:7c:af:ec 10.100.0.12
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.496 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.504 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:af:ec 10.100.0.12'], port_security=['fa:16:3e:7c:af:ec 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6fc1c0e4-3bd1-44c5-a722-9a30961fc545', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9acc6289-82af-49fc-aec4-129a3648eb3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e23b009d-efb8-4598-83cc-050b9cf1ce0d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2a28300e-6b6b-4513-831f-e30f3694fbcd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.506 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2a28300e-6b6b-4513-831f-e30f3694fbcd in datapath 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd bound to our chassis#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.508 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd#033[00m
Nov 22 04:20:01 np0005532048 NetworkManager[48920]: <info>  [1763803201.5128] device (tap2a28300e-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:20:01 np0005532048 NetworkManager[48920]: <info>  [1763803201.5135] device (tap2a28300e-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.521 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[66652709-b2c1-494f-b3cc-0cac3e11b2af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.522 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01d1bce2-e1 in ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.522 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:01Z|00575|binding|INFO|Setting lport 2a28300e-6b6b-4513-831f-e30f3694fbcd ovn-installed in OVS
Nov 22 04:20:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:01Z|00576|binding|INFO|Setting lport 2a28300e-6b6b-4513-831f-e30f3694fbcd up in Southbound
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.525 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01d1bce2-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.525 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[11e227c7-c80b-42d6-b989-9c1cbc7237d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.526 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0f4e5d-538f-4425-9be0-07568f0ee52e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.539 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a0757bf6-6bf0-490a-97da-e924fe379bd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:01 np0005532048 systemd-machined[215941]: New machine qemu-70-instance-0000003a.
Nov 22 04:20:01 np0005532048 systemd[1]: Started Virtual Machine qemu-70-instance-0000003a.
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.564 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e7d646-c576-4487-9d9f-fed3799e1f61]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.598 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1933da21-b61f-4da7-9b1e-76d1058ef229]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:01 np0005532048 NetworkManager[48920]: <info>  [1763803201.6058] manager: (tap01d1bce2-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/261)
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.608 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[09a75e49-2a8e-4c97-94c8-35ad61320add]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.648 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef6a28a-659b-450c-bc88-1a488d984293]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.655 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[46b97107-9c32-414d-9c3c-235afae47354]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:01 np0005532048 NetworkManager[48920]: <info>  [1763803201.6810] device (tap01d1bce2-e0): carrier: link connected
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.687 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b0b48e-94ba-4c59-b5df-c1df7c3277cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.710 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[41d3fb08-6924-4c32-90ab-8fc7c62322c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d1bce2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:22:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605241, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318849, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.734 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[581d6ca7-e95c-4a1c-9253-75c6eb18cc3d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:2279'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605241, 'tstamp': 605241}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318850, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.736 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.759 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d358b67a-d4ce-4e9c-a8d6-05f7dcbe8093]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d1bce2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:22:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605241, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318851, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.802 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b205bff9-899b-4534-bd68-2368d7714a8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.868 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[445b30fe-14e1-4837-8da2-59b21f204578]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.869 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d1bce2-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.869 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.870 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d1bce2-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.871 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:01 np0005532048 NetworkManager[48920]: <info>  [1763803201.8722] manager: (tap01d1bce2-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/262)
Nov 22 04:20:01 np0005532048 kernel: tap01d1bce2-e0: entered promiscuous mode
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d1bce2-e0, col_values=(('external_ids', {'iface-id': '23aa3d02-a12d-464a-8395-5aa8724c0fd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.875 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.877 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.877 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:20:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:01Z|00577|binding|INFO|Releasing lport 23aa3d02-a12d-464a-8395-5aa8724c0fd4 from this chassis (sb_readonly=0)
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.899 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec6bab8-18e7-4524-8ad8-e5410cfc2132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.901 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:20:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:01.902 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'env', 'PROCESS_TAG=haproxy-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:20:01 np0005532048 nova_compute[253661]: 2025-11-22 09:20:01.903 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:02 np0005532048 podman[318894]: 2025-11-22 09:20:02.342638915 +0000 UTC m=+0.056267638 container create a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 04:20:02 np0005532048 systemd[1]: Started libpod-conmon-a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb.scope.
Nov 22 04:20:02 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:20:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba224bf444db80dea210a4884aaac2259dd20704596313fa2367533ab6ec9709/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:02 np0005532048 podman[318894]: 2025-11-22 09:20:02.312667187 +0000 UTC m=+0.026295930 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:20:02 np0005532048 podman[318894]: 2025-11-22 09:20:02.427788194 +0000 UTC m=+0.141416937 container init a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 04:20:02 np0005532048 podman[318894]: 2025-11-22 09:20:02.436622458 +0000 UTC m=+0.150251181 container start a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:20:02 np0005532048 sad_chaum[318794]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:20:02 np0005532048 sad_chaum[318794]: --> relative data size: 1.0
Nov 22 04:20:02 np0005532048 sad_chaum[318794]: --> All data devices are unavailable
Nov 22 04:20:02 np0005532048 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[318916]: [NOTICE]   (318923) : New worker (318926) forked
Nov 22 04:20:02 np0005532048 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[318916]: [NOTICE]   (318923) : Loading success.
Nov 22 04:20:02 np0005532048 systemd[1]: libpod-761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f.scope: Deactivated successfully.
Nov 22 04:20:02 np0005532048 systemd[1]: libpod-761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f.scope: Consumed 1.089s CPU time.
Nov 22 04:20:02 np0005532048 conmon[318794]: conmon 761de1e76a5a2236d39b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f.scope/container/memory.events
Nov 22 04:20:02 np0005532048 podman[318760]: 2025-11-22 09:20:02.50128392 +0000 UTC m=+1.322096356 container died 761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:20:02 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6c9ddcb46290f12173890c96ff2c510b2d0193b140e036159b79448f6ff62289-merged.mount: Deactivated successfully.
Nov 22 04:20:02 np0005532048 podman[318760]: 2025-11-22 09:20:02.578971387 +0000 UTC m=+1.399783853 container remove 761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:20:02 np0005532048 systemd[1]: libpod-conmon-761de1e76a5a2236d39ba9b911ec9ccb07d6fb170adb2a3bd0398429627aee2f.scope: Deactivated successfully.
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014442085653122354 of space, bias 1.0, pg target 0.4332625695936706 quantized to 32 (current 32)
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:20:02 np0005532048 nova_compute[253661]: 2025-11-22 09:20:02.997 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:20:02 np0005532048 nova_compute[253661]: 2025-11-22 09:20:02.998 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803202.9971855, 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:20:02 np0005532048 nova_compute[253661]: 2025-11-22 09:20:02.998 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] VM Started (Lifecycle Event)#033[00m
Nov 22 04:20:03 np0005532048 nova_compute[253661]: 2025-11-22 09:20:03.158 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:03 np0005532048 nova_compute[253661]: 2025-11-22 09:20:03.165 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803202.9973536, 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:20:03 np0005532048 nova_compute[253661]: 2025-11-22 09:20:03.165 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:20:03 np0005532048 nova_compute[253661]: 2025-11-22 09:20:03.182 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:03 np0005532048 nova_compute[253661]: 2025-11-22 09:20:03.187 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:20:03 np0005532048 nova_compute[253661]: 2025-11-22 09:20:03.206 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 22 04:20:03 np0005532048 podman[319131]: 2025-11-22 09:20:03.261468131 +0000 UTC m=+0.043377535 container create 6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 04:20:03 np0005532048 systemd[1]: Started libpod-conmon-6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637.scope.
Nov 22 04:20:03 np0005532048 podman[319131]: 2025-11-22 09:20:03.24168389 +0000 UTC m=+0.023593314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:20:03 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:20:03 np0005532048 podman[319131]: 2025-11-22 09:20:03.375955323 +0000 UTC m=+0.157864747 container init 6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:20:03 np0005532048 podman[319131]: 2025-11-22 09:20:03.386498199 +0000 UTC m=+0.168407603 container start 6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:20:03 np0005532048 podman[319131]: 2025-11-22 09:20:03.390547768 +0000 UTC m=+0.172457192 container attach 6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:20:03 np0005532048 flamboyant_ride[319146]: 167 167
Nov 22 04:20:03 np0005532048 systemd[1]: libpod-6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637.scope: Deactivated successfully.
Nov 22 04:20:03 np0005532048 podman[319131]: 2025-11-22 09:20:03.39476194 +0000 UTC m=+0.176671344 container died 6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:20:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 215 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 916 KiB/s rd, 5.7 MiB/s wr, 170 op/s
Nov 22 04:20:03 np0005532048 systemd[1]: var-lib-containers-storage-overlay-bf1e966886c63e8fa4ccb361aed3bed2944012bd61408b396d5f844d58c9c5a4-merged.mount: Deactivated successfully.
Nov 22 04:20:03 np0005532048 podman[319131]: 2025-11-22 09:20:03.443085994 +0000 UTC m=+0.224995388 container remove 6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 04:20:03 np0005532048 systemd[1]: libpod-conmon-6a4cada93426875af53a2c51adc3dca60182baf677aec63c767fd0aab46cf637.scope: Deactivated successfully.
Nov 22 04:20:03 np0005532048 podman[319170]: 2025-11-22 09:20:03.668672965 +0000 UTC m=+0.064224381 container create 1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ardinghelli, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:20:03 np0005532048 systemd[1]: Started libpod-conmon-1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec.scope.
Nov 22 04:20:03 np0005532048 podman[319170]: 2025-11-22 09:20:03.632296501 +0000 UTC m=+0.027847947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:20:03 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:20:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bef9c2cc118d52b32d54b1db05fa4a651b0ca99bf3ded5b91963af0f58c427d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bef9c2cc118d52b32d54b1db05fa4a651b0ca99bf3ded5b91963af0f58c427d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bef9c2cc118d52b32d54b1db05fa4a651b0ca99bf3ded5b91963af0f58c427d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bef9c2cc118d52b32d54b1db05fa4a651b0ca99bf3ded5b91963af0f58c427d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:03 np0005532048 podman[319170]: 2025-11-22 09:20:03.770796767 +0000 UTC m=+0.166348193 container init 1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ardinghelli, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 04:20:03 np0005532048 podman[319170]: 2025-11-22 09:20:03.779429917 +0000 UTC m=+0.174981333 container start 1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:20:03 np0005532048 podman[319170]: 2025-11-22 09:20:03.783400033 +0000 UTC m=+0.178951469 container attach 1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.023 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.288 253665 DEBUG nova.compute.manager [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received event network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.289 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.290 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.290 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.290 253665 DEBUG nova.compute.manager [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] No waiting events found dispatching network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.290 253665 WARNING nova.compute.manager [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received unexpected event network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d for instance with vm_state active and task_state None.#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.291 253665 DEBUG nova.compute.manager [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.291 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.291 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.291 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.292 253665 DEBUG nova.compute.manager [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Processing event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.292 253665 DEBUG nova.compute.manager [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.292 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.292 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.293 253665 DEBUG oslo_concurrency.lockutils [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.293 253665 DEBUG nova.compute.manager [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] No waiting events found dispatching network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.293 253665 WARNING nova.compute.manager [req-9a8f83b3-6142-4653-980f-6b9e24554682 req-e7a7416a-ac41-4b25-8266-f5341a5ab494 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received unexpected event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.294 253665 DEBUG nova.compute.manager [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.300 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803204.2999496, 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.300 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.302 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.307 253665 INFO nova.virt.libvirt.driver [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance spawned successfully.#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.307 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.320 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.325 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.328 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.329 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.329 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.330 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.330 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.330 253665 DEBUG nova.virt.libvirt.driver [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.353 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.382 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.386 253665 DEBUG nova.compute.manager [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.438 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.440 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.441 253665 DEBUG nova.objects.instance [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.501 253665 DEBUG oslo_concurrency.lockutils [None req-adb74dbf-68d9-4b5b-a099-d188b2e78157 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.619 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "87fbaa81-3eae-4dac-9613-700a29ab0daf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.620 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.621 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.621 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.621 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.623 253665 INFO nova.compute.manager [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Terminating instance#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.624 253665 DEBUG nova.compute.manager [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]: {
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:    "0": [
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:        {
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "devices": [
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "/dev/loop3"
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            ],
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "lv_name": "ceph_lv0",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "lv_size": "21470642176",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "name": "ceph_lv0",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "tags": {
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.cluster_name": "ceph",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.crush_device_class": "",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.encrypted": "0",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.osd_id": "0",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.type": "block",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.vdo": "0"
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            },
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "type": "block",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "vg_name": "ceph_vg0"
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:        }
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:    ],
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:    "1": [
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:        {
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "devices": [
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "/dev/loop4"
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            ],
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "lv_name": "ceph_lv1",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "lv_size": "21470642176",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "name": "ceph_lv1",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "tags": {
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.cluster_name": "ceph",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.crush_device_class": "",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.encrypted": "0",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.osd_id": "1",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.type": "block",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.vdo": "0"
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            },
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "type": "block",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "vg_name": "ceph_vg1"
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:        }
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:    ],
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:    "2": [
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:        {
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "devices": [
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "/dev/loop5"
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            ],
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "lv_name": "ceph_lv2",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "lv_size": "21470642176",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "name": "ceph_lv2",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "tags": {
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.cluster_name": "ceph",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.crush_device_class": "",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.encrypted": "0",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.osd_id": "2",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.type": "block",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:                "ceph.vdo": "0"
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            },
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "type": "block",
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:            "vg_name": "ceph_vg2"
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:        }
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]:    ]
Nov 22 04:20:04 np0005532048 serene_ardinghelli[319187]: }
Nov 22 04:20:04 np0005532048 systemd[1]: libpod-1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec.scope: Deactivated successfully.
Nov 22 04:20:04 np0005532048 podman[319170]: 2025-11-22 09:20:04.667372233 +0000 UTC m=+1.062923669 container died 1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ardinghelli, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:20:04 np0005532048 kernel: tap95d3860d-a4 (unregistering): left promiscuous mode
Nov 22 04:20:04 np0005532048 NetworkManager[48920]: <info>  [1763803204.6757] device (tap95d3860d-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:20:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:04Z|00578|binding|INFO|Releasing lport 95d3860d-a485-46b6-8875-35bb61ae7e9d from this chassis (sb_readonly=0)
Nov 22 04:20:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:04Z|00579|binding|INFO|Setting lport 95d3860d-a485-46b6-8875-35bb61ae7e9d down in Southbound
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.691 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:04Z|00580|binding|INFO|Removing iface tap95d3860d-a4 ovn-installed in OVS
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.693 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay-3bef9c2cc118d52b32d54b1db05fa4a651b0ca99bf3ded5b91963af0f58c427d-merged.mount: Deactivated successfully.
Nov 22 04:20:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:04.708 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:69:07 10.100.0.3'], port_security=['fa:16:3e:22:69:07 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '87fbaa81-3eae-4dac-9613-700a29ab0daf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd04e58a339948e6b219ee858ce56620', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dfe1d73e-9743-4e1d-a71d-46f13de720cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=923cf162-d21a-49b5-93fe-032ba9e780ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=95d3860d-a485-46b6-8875-35bb61ae7e9d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:20:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:04.710 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 95d3860d-a485-46b6-8875-35bb61ae7e9d in datapath 209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4 unbound from our chassis#033[00m
Nov 22 04:20:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:04.711 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:20:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:04.713 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[016e3db7-c6da-4ed8-b428-93edbc965bad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:04.714 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4 namespace which is not needed anymore#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:04 np0005532048 podman[319170]: 2025-11-22 09:20:04.734536494 +0000 UTC m=+1.130087910 container remove 1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 04:20:04 np0005532048 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Nov 22 04:20:04 np0005532048 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003c.scope: Consumed 4.376s CPU time.
Nov 22 04:20:04 np0005532048 systemd[1]: libpod-conmon-1e7bb271dc57c75dcb89e2edf32ce4ec6b8c443beac4cd50da6eea570887e9ec.scope: Deactivated successfully.
Nov 22 04:20:04 np0005532048 systemd-machined[215941]: Machine qemu-69-instance-0000003c terminated.
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.875 253665 INFO nova.virt.libvirt.driver [-] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Instance destroyed successfully.#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.877 253665 DEBUG nova.objects.instance [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lazy-loading 'resources' on Instance uuid 87fbaa81-3eae-4dac-9613-700a29ab0daf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:04 np0005532048 neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4[318532]: [NOTICE]   (318563) : haproxy version is 2.8.14-c23fe91
Nov 22 04:20:04 np0005532048 neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4[318532]: [NOTICE]   (318563) : path to executable is /usr/sbin/haproxy
Nov 22 04:20:04 np0005532048 neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4[318532]: [WARNING]  (318563) : Exiting Master process...
Nov 22 04:20:04 np0005532048 neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4[318532]: [ALERT]    (318563) : Current worker (318566) exited with code 143 (Terminated)
Nov 22 04:20:04 np0005532048 neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4[318532]: [WARNING]  (318563) : All workers exited. Exiting... (0)
Nov 22 04:20:04 np0005532048 systemd[1]: libpod-0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6.scope: Deactivated successfully.
Nov 22 04:20:04 np0005532048 podman[319230]: 2025-11-22 09:20:04.898795155 +0000 UTC m=+0.069794367 container died 0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.902 253665 DEBUG nova.virt.libvirt.vif [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:19:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1641561282',display_name='tempest-ServerAddressesTestJSON-server-1641561282',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1641561282',id=60,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:20:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd04e58a339948e6b219ee858ce56620',ramdisk_id='',reservation_id='r-idyfcrg0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1270862588',owner_user_name='tempest-ServerAddressesTestJSON-1270862588-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:20:01Z,user_data=None,user_id='2c92c50f03874da0a9bd18e66157708e',uuid=87fbaa81-3eae-4dac-9613-700a29ab0daf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.904 253665 DEBUG nova.network.os_vif_util [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Converting VIF {"id": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "address": "fa:16:3e:22:69:07", "network": {"id": "209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1867510668-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd04e58a339948e6b219ee858ce56620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d3860d-a4", "ovs_interfaceid": "95d3860d-a485-46b6-8875-35bb61ae7e9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.905 253665 DEBUG nova.network.os_vif_util [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:69:07,bridge_name='br-int',has_traffic_filtering=True,id=95d3860d-a485-46b6-8875-35bb61ae7e9d,network=Network(209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d3860d-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.905 253665 DEBUG os_vif [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:69:07,bridge_name='br-int',has_traffic_filtering=True,id=95d3860d-a485-46b6-8875-35bb61ae7e9d,network=Network(209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d3860d-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.908 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.908 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95d3860d-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.910 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.912 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:04 np0005532048 nova_compute[253661]: 2025-11-22 09:20:04.915 253665 INFO os_vif [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:69:07,bridge_name='br-int',has_traffic_filtering=True,id=95d3860d-a485-46b6-8875-35bb61ae7e9d,network=Network(209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d3860d-a4')#033[00m
Nov 22 04:20:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6-userdata-shm.mount: Deactivated successfully.
Nov 22 04:20:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1960c85f8448b6f749a91f07ab39044bd5c3bd65ae0acbccc14832bb6734a46a-merged.mount: Deactivated successfully.
Nov 22 04:20:04 np0005532048 podman[319230]: 2025-11-22 09:20:04.94875923 +0000 UTC m=+0.119758442 container cleanup 0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 04:20:04 np0005532048 systemd[1]: libpod-conmon-0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6.scope: Deactivated successfully.
Nov 22 04:20:05 np0005532048 podman[319331]: 2025-11-22 09:20:05.034938833 +0000 UTC m=+0.056943954 container remove 0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:20:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.043 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[456e367e-44a6-4e84-b5a2-56cad1c72095]: (4, ('Sat Nov 22 09:20:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4 (0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6)\n0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6\nSat Nov 22 09:20:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4 (0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6)\n0e8d452af7d83a50fa7083f15c4718030779fc429fbd8a1455b199fae5a19bc6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.049 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef592f4-2719-4276-be05-40b0564276c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.050 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap209ca7a4-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:05 np0005532048 kernel: tap209ca7a4-90: left promiscuous mode
Nov 22 04:20:05 np0005532048 nova_compute[253661]: 2025-11-22 09:20:05.057 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:05 np0005532048 nova_compute[253661]: 2025-11-22 09:20:05.072 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.078 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[df5b743b-101f-4a28-bfe4-09d5b6ed4f91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.092 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[98e57bd1-d572-4bd0-bd8c-b8de5d5be87b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.094 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[214b57d7-4bcf-497d-9524-94ea06159622]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.117 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[005cd859-8409-4dd1-9464-cdbeee77b21c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605025, 'reachable_time': 35782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319394, 'error': None, 'target': 'ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.122 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-209ca7a4-9a63-439a-9ff5-4a96e0ff3cf4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:20:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:05.122 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b660061a-e336-4787-98b2-3663ec756128]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:05 np0005532048 systemd[1]: run-netns-ovnmeta\x2d209ca7a4\x2d9a63\x2d439a\x2d9ff5\x2d4a96e0ff3cf4.mount: Deactivated successfully.
Nov 22 04:20:05 np0005532048 nova_compute[253661]: 2025-11-22 09:20:05.219 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 216 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 833 KiB/s rd, 3.9 MiB/s wr, 160 op/s
Nov 22 04:20:05 np0005532048 nova_compute[253661]: 2025-11-22 09:20:05.435 253665 INFO nova.virt.libvirt.driver [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Deleting instance files /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf_del#033[00m
Nov 22 04:20:05 np0005532048 nova_compute[253661]: 2025-11-22 09:20:05.436 253665 INFO nova.virt.libvirt.driver [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Deletion of /var/lib/nova/instances/87fbaa81-3eae-4dac-9613-700a29ab0daf_del complete#033[00m
Nov 22 04:20:05 np0005532048 podman[319436]: 2025-11-22 09:20:05.536967892 +0000 UTC m=+0.046880401 container create 5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:20:05 np0005532048 systemd[1]: Started libpod-conmon-5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b.scope.
Nov 22 04:20:05 np0005532048 podman[319436]: 2025-11-22 09:20:05.516368381 +0000 UTC m=+0.026280910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:20:05 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:20:05 np0005532048 nova_compute[253661]: 2025-11-22 09:20:05.633 253665 INFO nova.compute.manager [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Took 1.01 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:20:05 np0005532048 nova_compute[253661]: 2025-11-22 09:20:05.635 253665 DEBUG oslo.service.loopingcall [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:20:05 np0005532048 nova_compute[253661]: 2025-11-22 09:20:05.636 253665 DEBUG nova.compute.manager [-] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:20:05 np0005532048 nova_compute[253661]: 2025-11-22 09:20:05.636 253665 DEBUG nova.network.neutron [-] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:20:05 np0005532048 podman[319436]: 2025-11-22 09:20:05.657269225 +0000 UTC m=+0.167181754 container init 5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:20:05 np0005532048 podman[319436]: 2025-11-22 09:20:05.667391561 +0000 UTC m=+0.177304070 container start 5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 04:20:05 np0005532048 podman[319436]: 2025-11-22 09:20:05.671812838 +0000 UTC m=+0.181725347 container attach 5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brown, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 04:20:05 np0005532048 fervent_brown[319453]: 167 167
Nov 22 04:20:05 np0005532048 systemd[1]: libpod-5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b.scope: Deactivated successfully.
Nov 22 04:20:05 np0005532048 podman[319458]: 2025-11-22 09:20:05.750720295 +0000 UTC m=+0.035436752 container died 5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brown, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 04:20:05 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4eeb6349cadd98f62d260a4797f4916ff424595bab0f0815a12f81ddfe912665-merged.mount: Deactivated successfully.
Nov 22 04:20:05 np0005532048 podman[319458]: 2025-11-22 09:20:05.805205259 +0000 UTC m=+0.089921716 container remove 5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brown, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:20:05 np0005532048 systemd[1]: libpod-conmon-5a9655e1498f82e0de7fd118ee6675241c810504e5b3c9a4083c37062b7f762b.scope: Deactivated successfully.
Nov 22 04:20:06 np0005532048 podman[319481]: 2025-11-22 09:20:06.024209771 +0000 UTC m=+0.046339477 container create fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 04:20:06 np0005532048 systemd[1]: Started libpod-conmon-fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e.scope.
Nov 22 04:20:06 np0005532048 podman[319481]: 2025-11-22 09:20:06.003840976 +0000 UTC m=+0.025970702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:20:06 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:20:06 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71037f2badbcb87a13cade527ecdcc1a3b2de5bcd32e59ebc1eb7b2754089b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:06 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71037f2badbcb87a13cade527ecdcc1a3b2de5bcd32e59ebc1eb7b2754089b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:06 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71037f2badbcb87a13cade527ecdcc1a3b2de5bcd32e59ebc1eb7b2754089b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:06 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71037f2badbcb87a13cade527ecdcc1a3b2de5bcd32e59ebc1eb7b2754089b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:06 np0005532048 podman[319481]: 2025-11-22 09:20:06.13860802 +0000 UTC m=+0.160737736 container init fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 04:20:06 np0005532048 podman[319481]: 2025-11-22 09:20:06.145875958 +0000 UTC m=+0.168005664 container start fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:20:06 np0005532048 podman[319481]: 2025-11-22 09:20:06.149529066 +0000 UTC m=+0.171658792 container attach fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 04:20:06 np0005532048 nova_compute[253661]: 2025-11-22 09:20:06.383 253665 DEBUG nova.compute.manager [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received event network-vif-unplugged-95d3860d-a485-46b6-8875-35bb61ae7e9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:06 np0005532048 nova_compute[253661]: 2025-11-22 09:20:06.386 253665 DEBUG oslo_concurrency.lockutils [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:06 np0005532048 nova_compute[253661]: 2025-11-22 09:20:06.387 253665 DEBUG oslo_concurrency.lockutils [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:06 np0005532048 nova_compute[253661]: 2025-11-22 09:20:06.387 253665 DEBUG oslo_concurrency.lockutils [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:06 np0005532048 nova_compute[253661]: 2025-11-22 09:20:06.387 253665 DEBUG nova.compute.manager [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] No waiting events found dispatching network-vif-unplugged-95d3860d-a485-46b6-8875-35bb61ae7e9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:20:06 np0005532048 nova_compute[253661]: 2025-11-22 09:20:06.388 253665 DEBUG nova.compute.manager [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received event network-vif-unplugged-95d3860d-a485-46b6-8875-35bb61ae7e9d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:20:06 np0005532048 nova_compute[253661]: 2025-11-22 09:20:06.388 253665 DEBUG nova.compute.manager [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received event network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:06 np0005532048 nova_compute[253661]: 2025-11-22 09:20:06.388 253665 DEBUG oslo_concurrency.lockutils [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:06 np0005532048 nova_compute[253661]: 2025-11-22 09:20:06.389 253665 DEBUG oslo_concurrency.lockutils [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:06 np0005532048 nova_compute[253661]: 2025-11-22 09:20:06.389 253665 DEBUG oslo_concurrency.lockutils [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:06 np0005532048 nova_compute[253661]: 2025-11-22 09:20:06.389 253665 DEBUG nova.compute.manager [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] No waiting events found dispatching network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:20:06 np0005532048 nova_compute[253661]: 2025-11-22 09:20:06.390 253665 WARNING nova.compute.manager [req-4e4e153f-79e3-473c-b5a9-ff1b5b727ced req-73e08d66-7d09-4f73-b558-5453c9c4b8c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received unexpected event network-vif-plugged-95d3860d-a485-46b6-8875-35bb61ae7e9d for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:20:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]: {
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:        "osd_id": 1,
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:        "type": "bluestore"
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:    },
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:        "osd_id": 0,
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:        "type": "bluestore"
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:    },
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:        "osd_id": 2,
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:        "type": "bluestore"
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]:    }
Nov 22 04:20:07 np0005532048 naughty_hoover[319498]: }
Nov 22 04:20:07 np0005532048 systemd[1]: libpod-fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e.scope: Deactivated successfully.
Nov 22 04:20:07 np0005532048 podman[319481]: 2025-11-22 09:20:07.234009347 +0000 UTC m=+1.256139053 container died fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 04:20:07 np0005532048 systemd[1]: libpod-fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e.scope: Consumed 1.093s CPU time.
Nov 22 04:20:07 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b71037f2badbcb87a13cade527ecdcc1a3b2de5bcd32e59ebc1eb7b2754089b0-merged.mount: Deactivated successfully.
Nov 22 04:20:07 np0005532048 podman[319481]: 2025-11-22 09:20:07.303152277 +0000 UTC m=+1.325281993 container remove fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 04:20:07 np0005532048 systemd[1]: libpod-conmon-fe098747cd88c763410e124073ebac2260df15ec4dfd06ac4256ce95c114581e.scope: Deactivated successfully.
Nov 22 04:20:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:20:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:20:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:20:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:20:07 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev b73037fc-4de3-43d6-9fa3-6fcaf166a918 does not exist
Nov 22 04:20:07 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev e1ded6ec-620f-4829-8f42-857f92634a08 does not exist
Nov 22 04:20:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 194 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.7 MiB/s wr, 233 op/s
Nov 22 04:20:07 np0005532048 nova_compute[253661]: 2025-11-22 09:20:07.539 253665 DEBUG nova.network.neutron [-] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:20:07 np0005532048 nova_compute[253661]: 2025-11-22 09:20:07.560 253665 INFO nova.compute.manager [-] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Took 1.92 seconds to deallocate network for instance.#033[00m
Nov 22 04:20:07 np0005532048 nova_compute[253661]: 2025-11-22 09:20:07.612 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:07 np0005532048 nova_compute[253661]: 2025-11-22 09:20:07.613 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:07 np0005532048 nova_compute[253661]: 2025-11-22 09:20:07.719 253665 DEBUG oslo_concurrency.processutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:07 np0005532048 nova_compute[253661]: 2025-11-22 09:20:07.783 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:07 np0005532048 nova_compute[253661]: 2025-11-22 09:20:07.785 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:07 np0005532048 nova_compute[253661]: 2025-11-22 09:20:07.786 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:07 np0005532048 nova_compute[253661]: 2025-11-22 09:20:07.786 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:07 np0005532048 nova_compute[253661]: 2025-11-22 09:20:07.786 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:07 np0005532048 nova_compute[253661]: 2025-11-22 09:20:07.789 253665 INFO nova.compute.manager [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Terminating instance#033[00m
Nov 22 04:20:07 np0005532048 nova_compute[253661]: 2025-11-22 09:20:07.791 253665 DEBUG nova.compute.manager [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:20:07 np0005532048 nova_compute[253661]: 2025-11-22 09:20:07.828 253665 DEBUG nova.compute.manager [req-6e0fcf3b-914f-4248-ab5e-88deabc37c38 req-9232b74f-542e-42d4-b2ff-209af984e721 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Received event network-vif-deleted-95d3860d-a485-46b6-8875-35bb61ae7e9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:07 np0005532048 kernel: tap2a28300e-6b (unregistering): left promiscuous mode
Nov 22 04:20:07 np0005532048 NetworkManager[48920]: <info>  [1763803207.8435] device (tap2a28300e-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:20:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:07Z|00581|binding|INFO|Releasing lport 2a28300e-6b6b-4513-831f-e30f3694fbcd from this chassis (sb_readonly=0)
Nov 22 04:20:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:07Z|00582|binding|INFO|Setting lport 2a28300e-6b6b-4513-831f-e30f3694fbcd down in Southbound
Nov 22 04:20:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:07Z|00583|binding|INFO|Removing iface tap2a28300e-6b ovn-installed in OVS
Nov 22 04:20:07 np0005532048 nova_compute[253661]: 2025-11-22 09:20:07.858 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:07.868 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:af:ec 10.100.0.12'], port_security=['fa:16:3e:7c:af:ec 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6fc1c0e4-3bd1-44c5-a722-9a30961fc545', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9acc6289-82af-49fc-aec4-129a3648eb3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e23b009d-efb8-4598-83cc-050b9cf1ce0d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2a28300e-6b6b-4513-831f-e30f3694fbcd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:20:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:07.870 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2a28300e-6b6b-4513-831f-e30f3694fbcd in datapath 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd unbound from our chassis#033[00m
Nov 22 04:20:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:07.872 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:20:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:07.873 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[38f08ced-d268-4935-9afc-5c63c41e665d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:07.873 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd namespace which is not needed anymore#033[00m
Nov 22 04:20:07 np0005532048 nova_compute[253661]: 2025-11-22 09:20:07.877 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:07 np0005532048 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Nov 22 04:20:07 np0005532048 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003a.scope: Consumed 4.890s CPU time.
Nov 22 04:20:07 np0005532048 systemd-machined[215941]: Machine qemu-70-instance-0000003a terminated.
Nov 22 04:20:08 np0005532048 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[318916]: [NOTICE]   (318923) : haproxy version is 2.8.14-c23fe91
Nov 22 04:20:08 np0005532048 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[318916]: [NOTICE]   (318923) : path to executable is /usr/sbin/haproxy
Nov 22 04:20:08 np0005532048 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[318916]: [WARNING]  (318923) : Exiting Master process...
Nov 22 04:20:08 np0005532048 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[318916]: [ALERT]    (318923) : Current worker (318926) exited with code 143 (Terminated)
Nov 22 04:20:08 np0005532048 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[318916]: [WARNING]  (318923) : All workers exited. Exiting... (0)
Nov 22 04:20:08 np0005532048 systemd[1]: libpod-a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb.scope: Deactivated successfully.
Nov 22 04:20:08 np0005532048 podman[319638]: 2025-11-22 09:20:08.040517074 +0000 UTC m=+0.059663771 container died a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.043 253665 INFO nova.virt.libvirt.driver [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Instance destroyed successfully.#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.044 253665 DEBUG nova.objects.instance [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'resources' on Instance uuid 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.065 253665 DEBUG nova.virt.libvirt.vif [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:19:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1392829761',display_name='tempest-ServerDiskConfigTestJSON-server-1392829761',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1392829761',id=58,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:20:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-4205cvpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDiskConfigTestJSON-1778643933-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:20:04Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=6fc1c0e4-3bd1-44c5-a722-9a30961fc545,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.066 253665 DEBUG nova.network.os_vif_util [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "address": "fa:16:3e:7c:af:ec", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a28300e-6b", "ovs_interfaceid": "2a28300e-6b6b-4513-831f-e30f3694fbcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.066 253665 DEBUG nova.network.os_vif_util [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.067 253665 DEBUG os_vif [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.069 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.069 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a28300e-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.071 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.073 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.076 253665 INFO os_vif [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:af:ec,bridge_name='br-int',has_traffic_filtering=True,id=2a28300e-6b6b-4513-831f-e30f3694fbcd,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a28300e-6b')#033[00m
Nov 22 04:20:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb-userdata-shm.mount: Deactivated successfully.
Nov 22 04:20:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ba224bf444db80dea210a4884aaac2259dd20704596313fa2367533ab6ec9709-merged.mount: Deactivated successfully.
Nov 22 04:20:08 np0005532048 podman[319638]: 2025-11-22 09:20:08.15680745 +0000 UTC m=+0.175954137 container cleanup a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:20:08 np0005532048 systemd[1]: libpod-conmon-a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb.scope: Deactivated successfully.
Nov 22 04:20:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:20:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1094984022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.220 253665 DEBUG oslo_concurrency.processutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.227 253665 DEBUG nova.compute.provider_tree [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:20:08 np0005532048 podman[319697]: 2025-11-22 09:20:08.236862015 +0000 UTC m=+0.051608385 container remove a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.241 253665 DEBUG nova.scheduler.client.report [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:20:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.242 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ed1cd920-1cac-4a81-bc80-749366ca8c39]: (4, ('Sat Nov 22 09:20:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd (a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb)\na2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb\nSat Nov 22 09:20:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd (a2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb)\na2d1b6df348420ef64ac97e924e2fac673f446dffc8d9182392adf9aa95d53fb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.244 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2556c1f5-d63a-42fc-97a5-8f83f92601bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.244 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d1bce2-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.246 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:08 np0005532048 kernel: tap01d1bce2-e0: left promiscuous mode
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.263 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.267 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.268 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d397bf-195d-4156-bd9d-f47644a3d4a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.282 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb95d16-79a6-4250-b22e-40bddd7cb222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.283 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c49a91e7-410d-42df-9e06-9cfcb0d9c854]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.296 253665 INFO nova.scheduler.client.report [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Deleted allocations for instance 87fbaa81-3eae-4dac-9613-700a29ab0daf#033[00m
Nov 22 04:20:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.302 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1037863c-9872-421f-aee7-65dae1411fd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605233, 'reachable_time': 27853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319714, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:08 np0005532048 systemd[1]: run-netns-ovnmeta\x2d01d1bce2\x2def3d\x2d44bf\x2da3f9\x2d13dc692c2ddd.mount: Deactivated successfully.
Nov 22 04:20:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.306 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:20:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:08.307 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[bf6cc8d8-76aa-4307-9232-cf4e35f16de5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.373 253665 DEBUG oslo_concurrency.lockutils [None req-6172074a-0009-4eb6-8342-33e8554ac711 2c92c50f03874da0a9bd18e66157708e dd04e58a339948e6b219ee858ce56620 - - default default] Lock "87fbaa81-3eae-4dac-9613-700a29ab0daf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:20:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.524 253665 INFO nova.virt.libvirt.driver [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Deleting instance files /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545_del#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.525 253665 INFO nova.virt.libvirt.driver [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Deletion of /var/lib/nova/instances/6fc1c0e4-3bd1-44c5-a722-9a30961fc545_del complete#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.571 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.582 253665 INFO nova.compute.manager [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.583 253665 DEBUG oslo.service.loopingcall [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.583 253665 DEBUG nova.compute.manager [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.584 253665 DEBUG nova.network.neutron [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.781 253665 DEBUG nova.compute.manager [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-unplugged-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.782 253665 DEBUG oslo_concurrency.lockutils [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.782 253665 DEBUG oslo_concurrency.lockutils [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.783 253665 DEBUG oslo_concurrency.lockutils [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.783 253665 DEBUG nova.compute.manager [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] No waiting events found dispatching network-vif-unplugged-2a28300e-6b6b-4513-831f-e30f3694fbcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.783 253665 DEBUG nova.compute.manager [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-unplugged-2a28300e-6b6b-4513-831f-e30f3694fbcd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.783 253665 DEBUG nova.compute.manager [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.783 253665 DEBUG oslo_concurrency.lockutils [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.783 253665 DEBUG oslo_concurrency.lockutils [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.784 253665 DEBUG oslo_concurrency.lockutils [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.784 253665 DEBUG nova.compute.manager [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] No waiting events found dispatching network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:20:08 np0005532048 nova_compute[253661]: 2025-11-22 09:20:08.784 253665 WARNING nova.compute.manager [req-9a962823-985c-434b-81d9-95d1c417e7dc req-003e5af0-1e6c-47c8-879d-ae17fb2e1214 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received unexpected event network-vif-plugged-2a28300e-6b6b-4513-831f-e30f3694fbcd for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:20:09 np0005532048 nova_compute[253661]: 2025-11-22 09:20:09.157 253665 DEBUG nova.network.neutron [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:20:09 np0005532048 nova_compute[253661]: 2025-11-22 09:20:09.173 253665 INFO nova.compute.manager [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Took 0.59 seconds to deallocate network for instance.#033[00m
Nov 22 04:20:09 np0005532048 nova_compute[253661]: 2025-11-22 09:20:09.222 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:09 np0005532048 nova_compute[253661]: 2025-11-22 09:20:09.223 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:09 np0005532048 nova_compute[253661]: 2025-11-22 09:20:09.297 253665 DEBUG oslo_concurrency.processutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 157 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 266 op/s
Nov 22 04:20:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:20:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1428577081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:20:09 np0005532048 nova_compute[253661]: 2025-11-22 09:20:09.808 253665 DEBUG oslo_concurrency.processutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:09 np0005532048 nova_compute[253661]: 2025-11-22 09:20:09.817 253665 DEBUG nova.compute.provider_tree [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:20:09 np0005532048 nova_compute[253661]: 2025-11-22 09:20:09.846 253665 DEBUG nova.scheduler.client.report [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:20:09 np0005532048 nova_compute[253661]: 2025-11-22 09:20:09.866 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:09 np0005532048 nova_compute[253661]: 2025-11-22 09:20:09.892 253665 INFO nova.scheduler.client.report [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Deleted allocations for instance 6fc1c0e4-3bd1-44c5-a722-9a30961fc545
Nov 22 04:20:09 np0005532048 nova_compute[253661]: 2025-11-22 09:20:09.961 253665 DEBUG oslo_concurrency.lockutils [None req-6586ea7c-4648-4398-9ba3-5d8ab3f7d030 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "6fc1c0e4-3bd1-44c5-a722-9a30961fc545" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:10 np0005532048 nova_compute[253661]: 2025-11-22 09:20:10.222 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:10 np0005532048 nova_compute[253661]: 2025-11-22 09:20:10.381 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "aadc298c-a1ba-41ca-9015-0a4d08420487" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:10 np0005532048 nova_compute[253661]: 2025-11-22 09:20:10.382 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:10 np0005532048 nova_compute[253661]: 2025-11-22 09:20:10.396 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:20:10 np0005532048 nova_compute[253661]: 2025-11-22 09:20:10.458 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:10 np0005532048 nova_compute[253661]: 2025-11-22 09:20:10.459 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:10 np0005532048 nova_compute[253661]: 2025-11-22 09:20:10.466 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:20:10 np0005532048 nova_compute[253661]: 2025-11-22 09:20:10.466 253665 INFO nova.compute.claims [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:20:10 np0005532048 nova_compute[253661]: 2025-11-22 09:20:10.580 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:20:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1705343458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.052 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.060 253665 DEBUG nova.compute.manager [req-1a80dde3-06fd-4eb8-8889-48945e166964 req-e0b7b123-65b7-4c85-9687-433a7f5d862f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Received event network-vif-deleted-2a28300e-6b6b-4513-831f-e30f3694fbcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.066 253665 DEBUG nova.compute.provider_tree [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.088 253665 DEBUG nova.scheduler.client.report [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.110 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.111 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.164 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.164 253665 DEBUG nova.network.neutron [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.181 253665 INFO nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.196 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.286 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.288 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.288 253665 INFO nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Creating image(s)
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.316 253665 DEBUG nova.storage.rbd_utils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.345 253665 DEBUG nova.storage.rbd_utils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.371 253665 DEBUG nova.storage.rbd_utils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.376 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 157 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 203 op/s
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.419 253665 DEBUG nova.policy [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5352d2182544454aab03bd4a74160247', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.467 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.467 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.468 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.468 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.509 253665 DEBUG nova.storage.rbd_utils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.515 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 aadc298c-a1ba-41ca-9015-0a4d08420487_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.689 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "30c09d44-c691-4f03-a20d-2e86a0d0a762" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.690 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.695 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.696 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.712 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.716 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.804 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.805 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.813 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.814 253665 INFO nova.compute.claims [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.818 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:11 np0005532048 nova_compute[253661]: 2025-11-22 09:20:11.958 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.080 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 aadc298c-a1ba-41ca-9015-0a4d08420487_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.157 253665 DEBUG nova.storage.rbd_utils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] resizing rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:20:12 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:12Z|00584|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.252 253665 DEBUG nova.network.neutron [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Successfully created port: 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.300 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:20:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1790294290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:20:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:20:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1790294290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.347 253665 DEBUG nova.objects.instance [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'migration_context' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.360 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.360 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Ensure instance console log exists: /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.361 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.361 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.361 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:20:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/294044666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.475 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.484 253665 DEBUG nova.compute.provider_tree [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.505 253665 DEBUG nova.scheduler.client.report [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.526 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.527 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.529 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.539 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.539 253665 INFO nova.compute.claims [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.596 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.597 253665 DEBUG nova.network.neutron [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.615 253665 INFO nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.630 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.709 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.768 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.773 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.774 253665 INFO nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Creating image(s)#033[00m
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.805 253665 DEBUG nova.storage.rbd_utils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] rbd image 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.835 253665 DEBUG nova.storage.rbd_utils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] rbd image 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.864 253665 DEBUG nova.storage.rbd_utils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] rbd image 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.870 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.922 253665 DEBUG nova.policy [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3457ea0f757244e8a49e3e224d581e8a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8120d22470024f2197238c7c48c5ba0e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.964 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.965 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.966 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.966 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:12 np0005532048 nova_compute[253661]: 2025-11-22 09:20:12.995 253665 DEBUG nova.storage.rbd_utils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] rbd image 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.000 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.071 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:20:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3737140166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.222 253665 DEBUG nova.network.neutron [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Successfully updated port: 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.237 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "refresh_cache-aadc298c-a1ba-41ca-9015-0a4d08420487" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.238 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquired lock "refresh_cache-aadc298c-a1ba-41ca-9015-0a4d08420487" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.238 253665 DEBUG nova.network.neutron [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.252 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.259 253665 DEBUG nova.compute.provider_tree [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.275 253665 DEBUG nova.scheduler.client.report [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.298 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.300 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.356 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.357 253665 DEBUG nova.network.neutron [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.377 253665 INFO nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.395 253665 DEBUG nova.compute.manager [req-3e18e005-1cce-46d3-9a30-b66d8382225f req-74e42190-8458-43eb-aba4-8f593106515a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received event network-changed-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.396 253665 DEBUG nova.compute.manager [req-3e18e005-1cce-46d3-9a30-b66d8382225f req-74e42190-8458-43eb-aba4-8f593106515a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Refreshing instance network info cache due to event network-changed-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.396 253665 DEBUG oslo_concurrency.lockutils [req-3e18e005-1cce-46d3-9a30-b66d8382225f req-74e42190-8458-43eb-aba4-8f593106515a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-aadc298c-a1ba-41ca-9015-0a4d08420487" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.397 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:20:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 140 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 223 op/s
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.486 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.487 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.488 253665 INFO nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Creating image(s)#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.523 253665 DEBUG nova.storage.rbd_utils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.585 253665 DEBUG nova.storage.rbd_utils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.752 253665 DEBUG nova.storage.rbd_utils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.759 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.841 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.842 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.843 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.843 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.870 253665 DEBUG nova.storage.rbd_utils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:13 np0005532048 nova_compute[253661]: 2025-11-22 09:20:13.874 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:14 np0005532048 nova_compute[253661]: 2025-11-22 09:20:14.242 253665 DEBUG nova.network.neutron [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:20:14 np0005532048 nova_compute[253661]: 2025-11-22 09:20:14.255 253665 DEBUG nova.policy [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '559fd7e00a0a468797efe4955caffc4a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:20:14 np0005532048 nova_compute[253661]: 2025-11-22 09:20:14.306 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:14 np0005532048 nova_compute[253661]: 2025-11-22 09:20:14.380 253665 DEBUG nova.storage.rbd_utils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] resizing rbd image 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:20:14 np0005532048 nova_compute[253661]: 2025-11-22 09:20:14.867 253665 DEBUG nova.network.neutron [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Successfully created port: 0c106a61-dc2d-42f2-9c81-9c68f52ce123 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:20:15 np0005532048 nova_compute[253661]: 2025-11-22 09:20:15.272 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 154 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.3 MiB/s wr, 195 op/s
Nov 22 04:20:15 np0005532048 nova_compute[253661]: 2025-11-22 09:20:15.456 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:15 np0005532048 nova_compute[253661]: 2025-11-22 09:20:15.518 253665 DEBUG nova.objects.instance [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lazy-loading 'migration_context' on Instance uuid 30c09d44-c691-4f03-a20d-2e86a0d0a762 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:15 np0005532048 nova_compute[253661]: 2025-11-22 09:20:15.526 253665 DEBUG nova.storage.rbd_utils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] resizing rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:20:15 np0005532048 nova_compute[253661]: 2025-11-22 09:20:15.581 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:20:15 np0005532048 nova_compute[253661]: 2025-11-22 09:20:15.582 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Ensure instance console log exists: /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:20:15 np0005532048 nova_compute[253661]: 2025-11-22 09:20:15.582 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:15 np0005532048 nova_compute[253661]: 2025-11-22 09:20:15.583 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:15 np0005532048 nova_compute[253661]: 2025-11-22 09:20:15.583 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:15 np0005532048 nova_compute[253661]: 2025-11-22 09:20:15.697 253665 DEBUG nova.objects.instance [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'migration_context' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:15 np0005532048 nova_compute[253661]: 2025-11-22 09:20:15.720 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:20:15 np0005532048 nova_compute[253661]: 2025-11-22 09:20:15.721 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Ensure instance console log exists: /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:20:15 np0005532048 nova_compute[253661]: 2025-11-22 09:20:15.721 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:15 np0005532048 nova_compute[253661]: 2025-11-22 09:20:15.721 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:15 np0005532048 nova_compute[253661]: 2025-11-22 09:20:15.722 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.082 253665 DEBUG nova.network.neutron [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Successfully created port: 43cec84a-e6cc-4492-8869-806f677f3026 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.130 253665 DEBUG nova.network.neutron [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Updating instance_info_cache with network_info: [{"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.154 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Releasing lock "refresh_cache-aadc298c-a1ba-41ca-9015-0a4d08420487" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.154 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Instance network_info: |[{"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.155 253665 DEBUG oslo_concurrency.lockutils [req-3e18e005-1cce-46d3-9a30-b66d8382225f req-74e42190-8458-43eb-aba4-8f593106515a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-aadc298c-a1ba-41ca-9015-0a4d08420487" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.155 253665 DEBUG nova.network.neutron [req-3e18e005-1cce-46d3-9a30-b66d8382225f req-74e42190-8458-43eb-aba4-8f593106515a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Refreshing network info cache for port 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.159 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Start _get_guest_xml network_info=[{"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.166 253665 WARNING nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.174 253665 DEBUG nova.virt.libvirt.host [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.175 253665 DEBUG nova.virt.libvirt.host [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.178 253665 DEBUG nova.virt.libvirt.host [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.179 253665 DEBUG nova.virt.libvirt.host [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.179 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.181 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.181 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.181 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.181 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.181 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.182 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.182 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.182 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.182 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.182 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.182 253665 DEBUG nova.virt.hardware [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.185 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.404 253665 DEBUG nova.network.neutron [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Successfully updated port: 0c106a61-dc2d-42f2-9c81-9c68f52ce123 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.418 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "refresh_cache-30c09d44-c691-4f03-a20d-2e86a0d0a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.418 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquired lock "refresh_cache-30c09d44-c691-4f03-a20d-2e86a0d0a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.418 253665 DEBUG nova.network.neutron [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.567 253665 DEBUG nova.compute.manager [req-f044c729-fb86-4ef7-befa-ac4a0946173e req-7edcd3f3-0cbe-43f1-a262-0e4fec523f0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received event network-changed-0c106a61-dc2d-42f2-9c81-9c68f52ce123 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.567 253665 DEBUG nova.compute.manager [req-f044c729-fb86-4ef7-befa-ac4a0946173e req-7edcd3f3-0cbe-43f1-a262-0e4fec523f0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Refreshing instance network info cache due to event network-changed-0c106a61-dc2d-42f2-9c81-9c68f52ce123. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.568 253665 DEBUG oslo_concurrency.lockutils [req-f044c729-fb86-4ef7-befa-ac4a0946173e req-7edcd3f3-0cbe-43f1-a262-0e4fec523f0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-30c09d44-c691-4f03-a20d-2e86a0d0a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:20:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:20:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1865413845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.658 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.681 253665 DEBUG nova.storage.rbd_utils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.686 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.725 253665 DEBUG nova.network.neutron [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:20:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:16 np0005532048 nova_compute[253661]: 2025-11-22 09:20:16.995 253665 DEBUG nova.network.neutron [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Successfully updated port: 43cec84a-e6cc-4492-8869-806f677f3026 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.017 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "refresh_cache-d364f1c2-d606-448a-b3bd-00f1d5c1b858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.017 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquired lock "refresh_cache-d364f1c2-d606-448a-b3bd-00f1d5c1b858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.017 253665 DEBUG nova.network.neutron [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:20:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:20:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/799981972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.157 253665 DEBUG nova.compute.manager [req-8348fba2-a096-4b7a-b4a8-471d1876a5f1 req-af78b05e-54d7-4136-9653-12bcf3129389 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received event network-changed-43cec84a-e6cc-4492-8869-806f677f3026 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.157 253665 DEBUG nova.compute.manager [req-8348fba2-a096-4b7a-b4a8-471d1876a5f1 req-af78b05e-54d7-4136-9653-12bcf3129389 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Refreshing instance network info cache due to event network-changed-43cec84a-e6cc-4492-8869-806f677f3026. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.158 253665 DEBUG oslo_concurrency.lockutils [req-8348fba2-a096-4b7a-b4a8-471d1876a5f1 req-af78b05e-54d7-4136-9653-12bcf3129389 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d364f1c2-d606-448a-b3bd-00f1d5c1b858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.160 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.161 253665 DEBUG nova.virt.libvirt.vif [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-257400181',display_name='tempest-ServerDiskConfigTestJSON-server-257400181',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-257400181',id=61,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-wzpxdi7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-ServerDisk
ConfigTestJSON-1778643933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:11Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=aadc298c-a1ba-41ca-9015-0a4d08420487,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.162 253665 DEBUG nova.network.os_vif_util [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.163 253665 DEBUG nova.network.os_vif_util [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.164 253665 DEBUG nova.objects.instance [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'pci_devices' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.177 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  <uuid>aadc298c-a1ba-41ca-9015-0a4d08420487</uuid>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  <name>instance-0000003d</name>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-257400181</nova:name>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:20:16</nova:creationTime>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:20:17 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:        <nova:user uuid="5352d2182544454aab03bd4a74160247">tempest-ServerDiskConfigTestJSON-1778643933-project-member</nova:user>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:        <nova:project uuid="a29f2c834c7a4a2ea6c4fc6dea996a8e">tempest-ServerDiskConfigTestJSON-1778643933</nova:project>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:        <nova:port uuid="27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643">
Nov 22 04:20:17 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <entry name="serial">aadc298c-a1ba-41ca-9015-0a4d08420487</entry>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <entry name="uuid">aadc298c-a1ba-41ca-9015-0a4d08420487</entry>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/aadc298c-a1ba-41ca-9015-0a4d08420487_disk">
Nov 22 04:20:17 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:20:17 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config">
Nov 22 04:20:17 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:20:17 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:84:ea:a6"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <target dev="tap27b3ab6b-d0"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/console.log" append="off"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:20:17 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:20:17 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:20:17 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:20:17 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.177 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Preparing to wait for external event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.178 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Acquiring lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.178 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.178 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.179 253665 DEBUG nova.virt.libvirt.vif [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-257400181',display_name='tempest-ServerDiskConfigTestJSON-server-257400181',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-257400181',id=61,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a29f2c834c7a4a2ea6c4fc6dea996a8e',ramdisk_id='',reservation_id='r-wzpxdi7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1778643933',owner_user_name='tempest-
ServerDiskConfigTestJSON-1778643933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:11Z,user_data=None,user_id='5352d2182544454aab03bd4a74160247',uuid=aadc298c-a1ba-41ca-9015-0a4d08420487,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.179 253665 DEBUG nova.network.os_vif_util [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converting VIF {"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.180 253665 DEBUG nova.network.os_vif_util [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.180 253665 DEBUG os_vif [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.181 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.181 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.181 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.185 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.185 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27b3ab6b-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.186 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27b3ab6b-d0, col_values=(('external_ids', {'iface-id': '27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:ea:a6', 'vm-uuid': 'aadc298c-a1ba-41ca-9015-0a4d08420487'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.187 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:17 np0005532048 NetworkManager[48920]: <info>  [1763803217.1890] manager: (tap27b3ab6b-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.189 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.197 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.198 253665 INFO os_vif [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:ea:a6,bridge_name='br-int',has_traffic_filtering=True,id=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643,network=Network(01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27b3ab6b-d0')#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.200 253665 DEBUG nova.network.neutron [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.253 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.253 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.253 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] No VIF found with MAC fa:16:3e:84:ea:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.255 253665 INFO nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Using config drive#033[00m
Nov 22 04:20:17 np0005532048 nova_compute[253661]: 2025-11-22 09:20:17.288 253665 DEBUG nova.storage.rbd_utils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 204 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.4 MiB/s wr, 213 op/s
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.262 253665 INFO nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Creating config drive at /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.266 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgvg01j3d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.301 253665 DEBUG nova.network.neutron [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Updating instance_info_cache with network_info: [{"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.326 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Releasing lock "refresh_cache-30c09d44-c691-4f03-a20d-2e86a0d0a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.327 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Instance network_info: |[{"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.328 253665 DEBUG oslo_concurrency.lockutils [req-f044c729-fb86-4ef7-befa-ac4a0946173e req-7edcd3f3-0cbe-43f1-a262-0e4fec523f0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-30c09d44-c691-4f03-a20d-2e86a0d0a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.329 253665 DEBUG nova.network.neutron [req-f044c729-fb86-4ef7-befa-ac4a0946173e req-7edcd3f3-0cbe-43f1-a262-0e4fec523f0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Refreshing network info cache for port 0c106a61-dc2d-42f2-9c81-9c68f52ce123 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.332 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Start _get_guest_xml network_info=[{"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.337 253665 WARNING nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.342 253665 DEBUG nova.virt.libvirt.host [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.342 253665 DEBUG nova.virt.libvirt.host [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.351 253665 DEBUG nova.virt.libvirt.host [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.352 253665 DEBUG nova.virt.libvirt.host [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.352 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.353 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.353 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.354 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.354 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.354 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.354 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.354 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.355 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.355 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.355 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.355 253665 DEBUG nova.virt.hardware [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.360 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.411 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgvg01j3d" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.443 253665 DEBUG nova.storage.rbd_utils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] rbd image aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.449 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.539 253665 DEBUG nova.network.neutron [req-3e18e005-1cce-46d3-9a30-b66d8382225f req-74e42190-8458-43eb-aba4-8f593106515a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Updated VIF entry in instance network info cache for port 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.540 253665 DEBUG nova.network.neutron [req-3e18e005-1cce-46d3-9a30-b66d8382225f req-74e42190-8458-43eb-aba4-8f593106515a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Updating instance_info_cache with network_info: [{"id": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "address": "fa:16:3e:84:ea:a6", "network": {"id": "01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2134562737-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a29f2c834c7a4a2ea6c4fc6dea996a8e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27b3ab6b-d0", "ovs_interfaceid": "27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.553 253665 DEBUG oslo_concurrency.lockutils [req-3e18e005-1cce-46d3-9a30-b66d8382225f req-74e42190-8458-43eb-aba4-8f593106515a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-aadc298c-a1ba-41ca-9015-0a4d08420487" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.576 253665 DEBUG nova.network.neutron [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Updating instance_info_cache with network_info: [{"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.591 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Releasing lock "refresh_cache-d364f1c2-d606-448a-b3bd-00f1d5c1b858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.592 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Instance network_info: |[{"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.592 253665 DEBUG oslo_concurrency.lockutils [req-8348fba2-a096-4b7a-b4a8-471d1876a5f1 req-af78b05e-54d7-4136-9653-12bcf3129389 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d364f1c2-d606-448a-b3bd-00f1d5c1b858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.592 253665 DEBUG nova.network.neutron [req-8348fba2-a096-4b7a-b4a8-471d1876a5f1 req-af78b05e-54d7-4136-9653-12bcf3129389 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Refreshing network info cache for port 43cec84a-e6cc-4492-8869-806f677f3026 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.595 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Start _get_guest_xml network_info=[{"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.600 253665 WARNING nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.605 253665 DEBUG nova.virt.libvirt.host [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.606 253665 DEBUG nova.virt.libvirt.host [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.615 253665 DEBUG nova.virt.libvirt.host [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.616 253665 DEBUG nova.virt.libvirt.host [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.617 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.617 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.618 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.618 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.618 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.618 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.619 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.619 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.619 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.619 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.620 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.620 253665 DEBUG nova.virt.hardware [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.624 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.673 253665 DEBUG oslo_concurrency.processutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config aadc298c-a1ba-41ca-9015-0a4d08420487_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.674 253665 INFO nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Deleting local config drive /var/lib/nova/instances/aadc298c-a1ba-41ca-9015-0a4d08420487/disk.config because it was imported into RBD.#033[00m
Nov 22 04:20:18 np0005532048 kernel: tap27b3ab6b-d0: entered promiscuous mode
Nov 22 04:20:18 np0005532048 NetworkManager[48920]: <info>  [1763803218.7645] manager: (tap27b3ab6b-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Nov 22 04:20:18 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:18Z|00585|binding|INFO|Claiming lport 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 for this chassis.
Nov 22 04:20:18 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:18Z|00586|binding|INFO|27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643: Claiming fa:16:3e:84:ea:a6 10.100.0.14
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.765 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.771 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:ea:a6 10.100.0.14'], port_security=['fa:16:3e:84:ea:a6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'aadc298c-a1ba-41ca-9015-0a4d08420487', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a29f2c834c7a4a2ea6c4fc6dea996a8e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9acc6289-82af-49fc-aec4-129a3648eb3e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e23b009d-efb8-4598-83cc-050b9cf1ce0d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:20:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.773 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 in datapath 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd bound to our chassis#033[00m
Nov 22 04:20:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.775 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd#033[00m
Nov 22 04:20:18 np0005532048 systemd-udevd[320469]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:20:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.787 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cf9c13cc-9769-4310-9526-cbf1ab09a500]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.789 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01d1bce2-e1 in ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.790 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.792 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01d1bce2-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:20:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.792 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d830098d-688e-4a51-b872-5abbe10c54c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:18 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:18Z|00587|binding|INFO|Setting lport 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 ovn-installed in OVS
Nov 22 04:20:18 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:18Z|00588|binding|INFO|Setting lport 27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 up in Southbound
Nov 22 04:20:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.795 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c14a7d93-c618-48b2-8969-d010f33ecd7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.796 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:18 np0005532048 NetworkManager[48920]: <info>  [1763803218.8087] device (tap27b3ab6b-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:20:18 np0005532048 NetworkManager[48920]: <info>  [1763803218.8099] device (tap27b3ab6b-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:20:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.815 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d4390402-9e10-401c-88dc-5d8355c9b028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:18 np0005532048 systemd-machined[215941]: New machine qemu-71-instance-0000003d.
Nov 22 04:20:18 np0005532048 systemd[1]: Started Virtual Machine qemu-71-instance-0000003d.
Nov 22 04:20:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.835 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef21546-e0da-4f94-813c-b7082f098171]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.876 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e00962-f299-479f-af38-f6a9473d6688]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:18 np0005532048 systemd-udevd[320484]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:20:18 np0005532048 NetworkManager[48920]: <info>  [1763803218.8843] manager: (tap01d1bce2-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/265)
Nov 22 04:20:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:20:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1343977246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:20:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.881 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[545f21b3-1801-4914-95d2-0829d4d887da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.919 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.925 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[704c3e72-981e-47a6-92d8-d9b28d51dcfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.929 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5af68e5a-db9b-4a40-a569-9a3a344b36d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.945 253665 DEBUG nova.storage.rbd_utils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] rbd image 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:18 np0005532048 nova_compute[253661]: 2025-11-22 09:20:18.949 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:18 np0005532048 NetworkManager[48920]: <info>  [1763803218.9522] device (tap01d1bce2-e0): carrier: link connected
Nov 22 04:20:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.959 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4b0388d8-dbce-4dc3-bdbd-768b3bd4c88c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.977 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b71938e8-9bc5-4d4e-b6ec-51192f4e087f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d1bce2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:22:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606968, 'reachable_time': 27022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320534, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:18 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:18.994 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bf167f47-4863-4a60-bccd-21ccae2416e7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:2279'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606968, 'tstamp': 606968}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320535, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.019 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea52952-ef2b-4257-947f-d1804a49c44c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01d1bce2-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:22:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606968, 'reachable_time': 27022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320536, 'error': None, 'target': 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.059 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a287022d-1e82-47cb-8380-7a7137e5cc35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:20:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1474094733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.146 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[287ed762-22b7-489d-a1a3-47d029eadf33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.147 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01d1bce2-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.148 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.149 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01d1bce2-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:19 np0005532048 kernel: tap01d1bce2-e0: entered promiscuous mode
Nov 22 04:20:19 np0005532048 NetworkManager[48920]: <info>  [1763803219.1525] manager: (tap01d1bce2-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/266)
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.154 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01d1bce2-e0, col_values=(('external_ids', {'iface-id': '23aa3d02-a12d-464a-8395-5aa8724c0fd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.151 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:19Z|00589|binding|INFO|Releasing lport 23aa3d02-a12d-464a-8395-5aa8724c0fd4 from this chassis (sb_readonly=0)
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.159 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.175 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.176 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[58324faf-38f7-412d-a139-69e92f80135d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.177 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.pid.haproxy
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:20:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:19.178 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'env', 'PROCESS_TAG=haproxy-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.195 253665 DEBUG nova.storage.rbd_utils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.201 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.258 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.265 253665 DEBUG nova.compute.manager [req-ab4e0559-439a-452a-abb7-492eefb5effb req-0b7b4a68-29d6-4621-98f8-de380bae06fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.265 253665 DEBUG oslo_concurrency.lockutils [req-ab4e0559-439a-452a-abb7-492eefb5effb req-0b7b4a68-29d6-4621-98f8-de380bae06fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.266 253665 DEBUG oslo_concurrency.lockutils [req-ab4e0559-439a-452a-abb7-492eefb5effb req-0b7b4a68-29d6-4621-98f8-de380bae06fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.266 253665 DEBUG oslo_concurrency.lockutils [req-ab4e0559-439a-452a-abb7-492eefb5effb req-0b7b4a68-29d6-4621-98f8-de380bae06fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.266 253665 DEBUG nova.compute.manager [req-ab4e0559-439a-452a-abb7-492eefb5effb req-0b7b4a68-29d6-4621-98f8-de380bae06fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Processing event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.375 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.383 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803219.3826275, aadc298c-a1ba-41ca-9015-0a4d08420487 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.383 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] VM Started (Lifecycle Event)#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.387 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.391 253665 INFO nova.virt.libvirt.driver [-] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Instance spawned successfully.#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.392 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.407 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.412 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.417 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.418 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.418 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.419 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.419 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.420 253665 DEBUG nova.virt.libvirt.driver [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 262 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 785 KiB/s rd, 5.3 MiB/s wr, 150 op/s
Nov 22 04:20:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:20:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2702820486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.447 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.447 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803219.382842, aadc298c-a1ba-41ca-9015-0a4d08420487 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.447 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] VM Paused (Lifecycle Event)
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.466 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.468 253665 DEBUG nova.virt.libvirt.vif [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1373824792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1373824792',id=62,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8120d22470024f2197238c7c48c5ba0e',ramdisk_id='',reservation_id='r-q5mmcejy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-563581713',owner_user_name='tempest-InstanceActionsV221TestJSON-563581713-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:12Z,user_data=None,user_id='3457ea0f757244e8a49e3e224d581e8a',uuid=30c09d44-c691-4f03-a20d-2e86a0d0a762,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.468 253665 DEBUG nova.network.os_vif_util [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Converting VIF {"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.469 253665 DEBUG nova.network.os_vif_util [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0c106a61-dc2d-42f2-9c81-9c68f52ce123,network=Network(6e6525e9-2fbb-452c-a3eb-9774aebbdb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c106a61-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.470 253665 DEBUG nova.objects.instance [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lazy-loading 'pci_devices' on Instance uuid 30c09d44-c691-4f03-a20d-2e86a0d0a762 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.473 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.474 253665 DEBUG nova.network.neutron [req-f044c729-fb86-4ef7-befa-ac4a0946173e req-7edcd3f3-0cbe-43f1-a262-0e4fec523f0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Updated VIF entry in instance network info cache for port 0c106a61-dc2d-42f2-9c81-9c68f52ce123. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.475 253665 DEBUG nova.network.neutron [req-f044c729-fb86-4ef7-befa-ac4a0946173e req-7edcd3f3-0cbe-43f1-a262-0e4fec523f0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Updating instance_info_cache with network_info: [{"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.479 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803219.3863351, aadc298c-a1ba-41ca-9015-0a4d08420487 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.479 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] VM Resumed (Lifecycle Event)
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.493 253665 INFO nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Took 8.21 seconds to spawn the instance on the hypervisor.
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.493 253665 DEBUG nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.495 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <uuid>30c09d44-c691-4f03-a20d-2e86a0d0a762</uuid>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <name>instance-0000003e</name>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <nova:name>tempest-InstanceActionsV221TestJSON-server-1373824792</nova:name>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:20:18</nova:creationTime>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <nova:user uuid="3457ea0f757244e8a49e3e224d581e8a">tempest-InstanceActionsV221TestJSON-563581713-project-member</nova:user>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <nova:project uuid="8120d22470024f2197238c7c48c5ba0e">tempest-InstanceActionsV221TestJSON-563581713</nova:project>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <nova:port uuid="0c106a61-dc2d-42f2-9c81-9c68f52ce123">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <entry name="serial">30c09d44-c691-4f03-a20d-2e86a0d0a762</entry>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <entry name="uuid">30c09d44-c691-4f03-a20d-2e86a0d0a762</entry>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/30c09d44-c691-4f03-a20d-2e86a0d0a762_disk">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/30c09d44-c691-4f03-a20d-2e86a0d0a762_disk.config">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:06:5d:4f"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <target dev="tap0c106a61-dc"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762/console.log" append="off"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:20:19 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:20:19 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.495 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Preparing to wait for external event network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.495 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.496 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.496 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.496 253665 DEBUG nova.virt.libvirt.vif [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1373824792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1373824792',id=62,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8120d22470024f2197238c7c48c5ba0e',ramdisk_id='',reservation_id='r-q5mmcejy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-563581713',owner_user_name='tempest-InstanceActionsV221TestJSON-563581713-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:12Z,user_data=None,user_id='3457ea0f757244e8a49e3e224d581e8a',uuid=30c09d44-c691-4f03-a20d-2e86a0d0a762,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.497 253665 DEBUG nova.network.os_vif_util [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Converting VIF {"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.497 253665 DEBUG nova.network.os_vif_util [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0c106a61-dc2d-42f2-9c81-9c68f52ce123,network=Network(6e6525e9-2fbb-452c-a3eb-9774aebbdb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c106a61-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.497 253665 DEBUG os_vif [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0c106a61-dc2d-42f2-9c81-9c68f52ce123,network=Network(6e6525e9-2fbb-452c-a3eb-9774aebbdb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c106a61-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.498 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.498 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.498 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.503 253665 DEBUG oslo_concurrency.lockutils [req-f044c729-fb86-4ef7-befa-ac4a0946173e req-7edcd3f3-0cbe-43f1-a262-0e4fec523f0c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-30c09d44-c691-4f03-a20d-2e86a0d0a762" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.503 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.504 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.504 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c106a61-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.505 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0c106a61-dc, col_values=(('external_ids', {'iface-id': '0c106a61-dc2d-42f2-9c81-9c68f52ce123', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:5d:4f', 'vm-uuid': '30c09d44-c691-4f03-a20d-2e86a0d0a762'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:20:19 np0005532048 NetworkManager[48920]: <info>  [1763803219.5079] manager: (tap0c106a61-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.508 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.511 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.513 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.515 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.516 253665 INFO os_vif [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0c106a61-dc2d-42f2-9c81-9c68f52ce123,network=Network(6e6525e9-2fbb-452c-a3eb-9774aebbdb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c106a61-dc')#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.553 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.571 253665 INFO nova.compute.manager [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Took 9.13 seconds to build instance.#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.576 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.576 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.576 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] No VIF found with MAC fa:16:3e:06:5d:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.577 253665 INFO nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Using config drive#033[00m
Nov 22 04:20:19 np0005532048 podman[320672]: 2025-11-22 09:20:19.583074788 +0000 UTC m=+0.058386240 container create 4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.609 253665 DEBUG nova.storage.rbd_utils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] rbd image 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.626 253665 DEBUG oslo_concurrency.lockutils [None req-1e9772bc-dde9-47dd-a242-6995fb67574c 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.244s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:19 np0005532048 systemd[1]: Started libpod-conmon-4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a.scope.
Nov 22 04:20:19 np0005532048 podman[320672]: 2025-11-22 09:20:19.550123667 +0000 UTC m=+0.025435139 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:20:19 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:20:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/165dbfb6280e5c3468bc4607011907ecf91b892441f109cec428a46150992cc3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:19 np0005532048 podman[320672]: 2025-11-22 09:20:19.676781825 +0000 UTC m=+0.152093297 container init 4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:20:19 np0005532048 podman[320672]: 2025-11-22 09:20:19.683421306 +0000 UTC m=+0.158732758 container start 4d482d1a43f271a497f8fe2483c607843ac25d4399c9c62d7eb8a13c1a537c1a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 04:20:19 np0005532048 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[320706]: [NOTICE]   (320710) : New worker (320712) forked
Nov 22 04:20:19 np0005532048 neutron-haproxy-ovnmeta-01d1bce2-ef3d-44bf-a3f9-13dc692c2ddd[320706]: [NOTICE]   (320710) : Loading success.
Nov 22 04:20:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:20:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/341290301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.746 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.747 253665 DEBUG nova.virt.libvirt.vif [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-783668956',display_name='tempest-tempest.common.compute-instance-783668956',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-783668956',id=63,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-k5tayptk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:13Z,user_data=None,user_id='559fd7e00a0a468797efe4955caffc4a',uuid=d364f1c2-d606-448a-b3bd-00f1d5c1b858,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.748 253665 DEBUG nova.network.os_vif_util [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.748 253665 DEBUG nova.network.os_vif_util [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.749 253665 DEBUG nova.objects.instance [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'pci_devices' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.764 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <uuid>d364f1c2-d606-448a-b3bd-00f1d5c1b858</uuid>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <name>instance-0000003f</name>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <nova:name>tempest-tempest.common.compute-instance-783668956</nova:name>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:20:18</nova:creationTime>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <nova:user uuid="559fd7e00a0a468797efe4955caffc4a">tempest-ServerActionsTestJSON-1918756964-project-member</nova:user>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <nova:project uuid="d9601c2d2b97440483ffc0bf4f598e73">tempest-ServerActionsTestJSON-1918756964</nova:project>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <nova:port uuid="43cec84a-e6cc-4492-8869-806f677f3026">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <entry name="serial">d364f1c2-d606-448a-b3bd-00f1d5c1b858</entry>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <entry name="uuid">d364f1c2-d606-448a-b3bd-00f1d5c1b858</entry>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:29:62:0d"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <target dev="tap43cec84a-e6"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/console.log" append="off"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:20:19 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:20:19 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:20:19 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:20:19 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.764 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Preparing to wait for external event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.764 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Acquiring lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.764 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.765 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.765 253665 DEBUG nova.virt.libvirt.vif [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-783668956',display_name='tempest-tempest.common.compute-instance-783668956',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-783668956',id=63,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9601c2d2b97440483ffc0bf4f598e73',ramdisk_id='',reservation_id='r-k5tayptk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1918756964',owner_user_name='tempest-ServerActionsTestJSON-1918756964-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:13Z,user_data=None,user_id='559fd7e00a0a468797efe4955caffc4a',uuid=d364f1c2-d606-448a-b3bd-00f1d5c1b858,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.765 253665 DEBUG nova.network.os_vif_util [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converting VIF {"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.766 253665 DEBUG nova.network.os_vif_util [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.767 253665 DEBUG os_vif [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.767 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.767 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.768 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.770 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.771 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43cec84a-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.771 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43cec84a-e6, col_values=(('external_ids', {'iface-id': '43cec84a-e6cc-4492-8869-806f677f3026', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:29:62:0d', 'vm-uuid': 'd364f1c2-d606-448a-b3bd-00f1d5c1b858'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.773 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:19 np0005532048 NetworkManager[48920]: <info>  [1763803219.7741] manager: (tap43cec84a-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.775 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.780 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.781 253665 INFO os_vif [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:62:0d,bridge_name='br-int',has_traffic_filtering=True,id=43cec84a-e6cc-4492-8869-806f677f3026,network=Network(ebc42408-7b03-480c-a016-1e5bb2ebcc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43cec84a-e6')
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.827 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.827 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.828 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] No VIF found with MAC fa:16:3e:29:62:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.828 253665 INFO nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Using config drive
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.849 253665 DEBUG nova.storage.rbd_utils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.873 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803204.8704858, 87fbaa81-3eae-4dac-9613-700a29ab0daf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.873 253665 INFO nova.compute.manager [-] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] VM Stopped (Lifecycle Event)
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.892 253665 DEBUG nova.compute.manager [None req-0bb5fb31-49e4-46d6-8e54-6f5cd95994ab - - - - - -] [instance: 87fbaa81-3eae-4dac-9613-700a29ab0daf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.905 253665 DEBUG nova.network.neutron [req-8348fba2-a096-4b7a-b4a8-471d1876a5f1 req-af78b05e-54d7-4136-9653-12bcf3129389 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Updated VIF entry in instance network info cache for port 43cec84a-e6cc-4492-8869-806f677f3026. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.906 253665 DEBUG nova.network.neutron [req-8348fba2-a096-4b7a-b4a8-471d1876a5f1 req-af78b05e-54d7-4136-9653-12bcf3129389 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Updating instance_info_cache with network_info: [{"id": "43cec84a-e6cc-4492-8869-806f677f3026", "address": "fa:16:3e:29:62:0d", "network": {"id": "ebc42408-7b03-480c-a016-1e5bb2ebcc93", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1126339976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9601c2d2b97440483ffc0bf4f598e73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43cec84a-e6", "ovs_interfaceid": "43cec84a-e6cc-4492-8869-806f677f3026", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.919 253665 DEBUG oslo_concurrency.lockutils [req-8348fba2-a096-4b7a-b4a8-471d1876a5f1 req-af78b05e-54d7-4136-9653-12bcf3129389 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d364f1c2-d606-448a-b3bd-00f1d5c1b858" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.951 253665 INFO nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Creating config drive at /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762/disk.config
Nov 22 04:20:19 np0005532048 nova_compute[253661]: 2025-11-22 09:20:19.956 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxjut6r0o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.099 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxjut6r0o" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.133 253665 DEBUG nova.storage.rbd_utils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] rbd image 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.139 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762/disk.config 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.195 253665 INFO nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Creating config drive at /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.202 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdqd9jltl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.275 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.342 253665 DEBUG oslo_concurrency.processutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762/disk.config 30c09d44-c691-4f03-a20d-2e86a0d0a762_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.344 253665 INFO nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Deleting local config drive /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762/disk.config because it was imported into RBD.
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.368 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdqd9jltl" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.403 253665 DEBUG nova.storage.rbd_utils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] rbd image d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.409 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:20 np0005532048 NetworkManager[48920]: <info>  [1763803220.4160] manager: (tap0c106a61-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/269)
Nov 22 04:20:20 np0005532048 systemd-udevd[320508]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:20:20 np0005532048 kernel: tap0c106a61-dc: entered promiscuous mode
Nov 22 04:20:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:20Z|00590|binding|INFO|Claiming lport 0c106a61-dc2d-42f2-9c81-9c68f52ce123 for this chassis.
Nov 22 04:20:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:20Z|00591|binding|INFO|0c106a61-dc2d-42f2-9c81-9c68f52ce123: Claiming fa:16:3e:06:5d:4f 10.100.0.9
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.436 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:5d:4f 10.100.0.9'], port_security=['fa:16:3e:06:5d:4f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '30c09d44-c691-4f03-a20d-2e86a0d0a762', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e6525e9-2fbb-452c-a3eb-9774aebbdb59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8120d22470024f2197238c7c48c5ba0e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ab0be74d-f44a-43fd-be23-3e0ac42b6c84', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e425ea4f-8728-48a7-950a-425a7d828903, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0c106a61-dc2d-42f2-9c81-9c68f52ce123) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.437 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0c106a61-dc2d-42f2-9c81-9c68f52ce123 in datapath 6e6525e9-2fbb-452c-a3eb-9774aebbdb59 bound to our chassis
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.439 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e6525e9-2fbb-452c-a3eb-9774aebbdb59
Nov 22 04:20:20 np0005532048 NetworkManager[48920]: <info>  [1763803220.4466] device (tap0c106a61-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:20:20 np0005532048 NetworkManager[48920]: <info>  [1763803220.4481] device (tap0c106a61-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:20:20 np0005532048 systemd-machined[215941]: New machine qemu-72-instance-0000003e.
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.461 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:20Z|00592|binding|INFO|Setting lport 0c106a61-dc2d-42f2-9c81-9c68f52ce123 ovn-installed in OVS
Nov 22 04:20:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:20Z|00593|binding|INFO|Setting lport 0c106a61-dc2d-42f2-9c81-9c68f52ce123 up in Southbound
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.463 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[68f77aef-21b0-4a29-b404-0fc93cef462a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.464 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6e6525e9-21 in ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.465 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.466 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6e6525e9-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.466 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e61406-fa0e-4b28-b376-ac298c6cde3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.469 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ae35bce6-8323-49f3-8d50-b55d81be8356]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:20 np0005532048 systemd[1]: Started Virtual Machine qemu-72-instance-0000003e.
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.489 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[de69dbbd-a245-41d8-b00c-37ce67ac1bdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.508 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[36fe66fc-bcc7-438d-b400-f650b4136235]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.547 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[333cc51f-caf2-4a5f-941d-bfc94c771efd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.553 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6eec84db-5d98-4465-a486-3fff72eba95e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:20 np0005532048 NetworkManager[48920]: <info>  [1763803220.5551] manager: (tap6e6525e9-20): new Veth device (/org/freedesktop/NetworkManager/Devices/270)
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.603 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[44c06f0a-84e0-4e30-88a6-b6a0a1f0923e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.607 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[90e58b9a-0379-4a68-9343-5723223dbf4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.633 253665 DEBUG oslo_concurrency.processutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config d364f1c2-d606-448a-b3bd-00f1d5c1b858_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.634 253665 INFO nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Deleting local config drive /var/lib/nova/instances/d364f1c2-d606-448a-b3bd-00f1d5c1b858/disk.config because it was imported into RBD.
Nov 22 04:20:20 np0005532048 NetworkManager[48920]: <info>  [1763803220.6453] device (tap6e6525e9-20): carrier: link connected
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.653 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[733847c6-31f3-40a2-92dd-8f451fc9f9ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.679 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[083d4210-d262-4868-b8f5-aa660f6f8e80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e6525e9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:45:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607138, 'reachable_time': 18500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320861, 'error': None, 'target': 'ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:20 np0005532048 NetworkManager[48920]: <info>  [1763803220.6815] manager: (tap43cec84a-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Nov 22 04:20:20 np0005532048 kernel: tap43cec84a-e6: entered promiscuous mode
Nov 22 04:20:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:20Z|00594|binding|INFO|Claiming lport 43cec84a-e6cc-4492-8869-806f677f3026 for this chassis.
Nov 22 04:20:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:20Z|00595|binding|INFO|43cec84a-e6cc-4492-8869-806f677f3026: Claiming fa:16:3e:29:62:0d 10.100.0.10
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.686 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:20 np0005532048 NetworkManager[48920]: <info>  [1763803220.6949] device (tap43cec84a-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:20:20 np0005532048 NetworkManager[48920]: <info>  [1763803220.6958] device (tap43cec84a-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.694 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:62:0d 10.100.0.10'], port_security=['fa:16:3e:29:62:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd364f1c2-d606-448a-b3bd-00f1d5c1b858', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9601c2d2b97440483ffc0bf4f598e73', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c62f4ce9-5b21-4154-83ce-fbb32299e500', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0765ecd7-99f5-46ea-a582-183132d36653, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=43cec84a-e6cc-4492-8869-806f677f3026) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.703 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0689e63e-70cc-4618-9dda-b2d06a1f4126]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:45d2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607138, 'tstamp': 607138}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320868, 'error': None, 'target': 'ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:20Z|00596|binding|INFO|Setting lport 43cec84a-e6cc-4492-8869-806f677f3026 ovn-installed in OVS
Nov 22 04:20:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:20Z|00597|binding|INFO|Setting lport 43cec84a-e6cc-4492-8869-806f677f3026 up in Southbound
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:20 np0005532048 systemd-machined[215941]: New machine qemu-73-instance-0000003f.
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.733 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8292422e-b577-4d31-9e51-ce355ac71d8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e6525e9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:45:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607138, 'reachable_time': 18500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320871, 'error': None, 'target': 'ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:20 np0005532048 systemd[1]: Started Virtual Machine qemu-73-instance-0000003f.
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.780 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[43a15588-df63-4513-b7b6-8ff25c1c1ae0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.885 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[024b3602-1399-499d-b36e-d5974b990652]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.887 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e6525e9-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.887 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.888 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e6525e9-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:20 np0005532048 NetworkManager[48920]: <info>  [1763803220.8913] manager: (tap6e6525e9-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/272)
Nov 22 04:20:20 np0005532048 kernel: tap6e6525e9-20: entered promiscuous mode
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.897 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e6525e9-20, col_values=(('external_ids', {'iface-id': '7cdf9637-bfab-4d96-a02e-638779af4eb2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:20Z|00598|binding|INFO|Releasing lport 7cdf9637-bfab-4d96-a02e-638779af4eb2 from this chassis (sb_readonly=0)
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.900 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.917 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.919 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6e6525e9-2fbb-452c-a3eb-9774aebbdb59.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6e6525e9-2fbb-452c-a3eb-9774aebbdb59.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.920 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[415eb3c8-b448-41c8-b58e-712e75630672]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.921 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-6e6525e9-2fbb-452c-a3eb-9774aebbdb59
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/6e6525e9-2fbb-452c-a3eb-9774aebbdb59.pid.haproxy
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 6e6525e9-2fbb-452c-a3eb-9774aebbdb59
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:20:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:20.922 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59', 'env', 'PROCESS_TAG=haproxy-6e6525e9-2fbb-452c-a3eb-9774aebbdb59', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6e6525e9-2fbb-452c-a3eb-9774aebbdb59.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.973 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803220.9692056, 30c09d44-c691-4f03-a20d-2e86a0d0a762 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.974 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] VM Started (Lifecycle Event)#033[00m
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.978 253665 DEBUG nova.compute.manager [req-80126f7a-5039-4446-bb6e-bb61402f2566 req-e0a922fc-f23c-4253-b829-bf832c5f3b39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.979 253665 DEBUG oslo_concurrency.lockutils [req-80126f7a-5039-4446-bb6e-bb61402f2566 req-e0a922fc-f23c-4253-b829-bf832c5f3b39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.979 253665 DEBUG oslo_concurrency.lockutils [req-80126f7a-5039-4446-bb6e-bb61402f2566 req-e0a922fc-f23c-4253-b829-bf832c5f3b39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.979 253665 DEBUG oslo_concurrency.lockutils [req-80126f7a-5039-4446-bb6e-bb61402f2566 req-e0a922fc-f23c-4253-b829-bf832c5f3b39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:20 np0005532048 nova_compute[253661]: 2025-11-22 09:20:20.980 253665 DEBUG nova.compute.manager [req-80126f7a-5039-4446-bb6e-bb61402f2566 req-e0a922fc-f23c-4253-b829-bf832c5f3b39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Processing event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.001 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.008 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803220.9698238, 30c09d44-c691-4f03-a20d-2e86a0d0a762 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.008 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.029 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.039 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.057 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.220 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.221 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803221.2214315, d364f1c2-d606-448a-b3bd-00f1d5c1b858 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.221 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] VM Started (Lifecycle Event)#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.224 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.228 253665 INFO nova.virt.libvirt.driver [-] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Instance spawned successfully.#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.228 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.242 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.249 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.255 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.256 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.257 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.258 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.258 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.259 253665 DEBUG nova.virt.libvirt.driver [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.281 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.281 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803221.2239003, d364f1c2-d606-448a-b3bd-00f1d5c1b858 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.281 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.322 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.325 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803221.2240658, d364f1c2-d606-448a-b3bd-00f1d5c1b858 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.326 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.347 253665 INFO nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Took 7.86 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.348 253665 DEBUG nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.359 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:21 np0005532048 podman[320993]: 2025-11-22 09:20:21.363432928 +0000 UTC m=+0.061350212 container create 48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.363 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:20:21 np0005532048 systemd[1]: Started libpod-conmon-48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685.scope.
Nov 22 04:20:21 np0005532048 podman[320993]: 2025-11-22 09:20:21.330569679 +0000 UTC m=+0.028486993 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:20:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 262 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 5.3 MiB/s wr, 93 op/s
Nov 22 04:20:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:20:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3bb9c65833804f518fd940b222e6ab1d587a39e3be2ff7560dea3f1c278cb5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:21 np0005532048 podman[320993]: 2025-11-22 09:20:21.469070135 +0000 UTC m=+0.166987419 container init 48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:20:21 np0005532048 podman[320993]: 2025-11-22 09:20:21.475646194 +0000 UTC m=+0.173563478 container start 48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 04:20:21 np0005532048 neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59[321008]: [NOTICE]   (321012) : New worker (321014) forked
Nov 22 04:20:21 np0005532048 neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59[321008]: [NOTICE]   (321012) : Loading success.
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.505 253665 INFO nova.compute.manager [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Took 9.73 seconds to build instance.#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.523 253665 DEBUG oslo_concurrency.lockutils [None req-e0a9bdd5-fa7c-4e36-a27d-5ea3f13af3f4 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.572 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 43cec84a-e6cc-4492-8869-806f677f3026 in datapath ebc42408-7b03-480c-a016-1e5bb2ebcc93 unbound from our chassis#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.572 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.574 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebc42408-7b03-480c-a016-1e5bb2ebcc93#033[00m
Nov 22 04:20:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.594 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccd9ec4-59c6-4171-803d-0844d7b56202]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.637 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f50ea833-b756-4fb9-ae15-68ffa8b172f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.640 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[583edd04-ed72-4cf0-a715-7e8b6fa38818]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.683 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7833be-d100-48b3-a5de-8bdc1b4e06a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.685 253665 DEBUG nova.compute.manager [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.685 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.686 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.686 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aadc298c-a1ba-41ca-9015-0a4d08420487-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.686 253665 DEBUG nova.compute.manager [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] No waiting events found dispatching network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.687 253665 WARNING nova.compute.manager [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Received unexpected event network-vif-plugged-27b3ab6b-d0f2-4ad1-aeb2-64d3ff16b643 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.687 253665 DEBUG nova.compute.manager [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received event network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.687 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.687 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.687 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.688 253665 DEBUG nova.compute.manager [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Processing event network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.688 253665 DEBUG nova.compute.manager [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received event network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.688 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.688 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.689 253665 DEBUG oslo_concurrency.lockutils [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.689 253665 DEBUG nova.compute.manager [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] No waiting events found dispatching network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.689 253665 WARNING nova.compute.manager [req-653ad209-5698-4718-be55-72a8527dd8d0 req-01110eb8-f7d2-459e-8e76-6415da8d6e8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received unexpected event network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.690 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.696 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.698 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803221.6981463, 30c09d44-c691-4f03-a20d-2e86a0d0a762 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.698 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:20:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.701 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4a2340-478b-48f8-8d0c-573abcb852cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebc42408-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:e3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 602324, 'reachable_time': 41771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321028, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.703 253665 INFO nova.virt.libvirt.driver [-] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Instance spawned successfully.#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.704 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.718 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.720 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0987a573-93b8-49b0-903a-aa8260a7fbe9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapebc42408-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602336, 'tstamp': 602336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321029, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapebc42408-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 602340, 'tstamp': 602340}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321029, 'error': None, 'target': 'ovnmeta-ebc42408-7b03-480c-a016-1e5bb2ebcc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.723 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebc42408-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.725 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:20:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.727 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebc42408-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.727 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:20:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.727 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebc42408-70, col_values=(('external_ids', {'iface-id': 'efc8861c-ffa7-41c8-9325-c43c7271007f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:21.728 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.734 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.736 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.737 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.737 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.738 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.738 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.739 253665 DEBUG nova.virt.libvirt.driver [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.772 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.812 253665 INFO nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Took 9.04 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.812 253665 DEBUG nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.885 253665 INFO nova.compute.manager [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Took 10.12 seconds to build instance.#033[00m
Nov 22 04:20:21 np0005532048 nova_compute[253661]: 2025-11-22 09:20:21.902 253665 DEBUG oslo_concurrency.lockutils [None req-3ecc650e-0e69-4345-ba2f-512aea310625 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:22 np0005532048 nova_compute[253661]: 2025-11-22 09:20:22.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:20:22 np0005532048 nova_compute[253661]: 2025-11-22 09:20:22.506 253665 INFO nova.compute.manager [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Rebuilding instance#033[00m
Nov 22 04:20:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:20:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:20:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:20:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:20:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:20:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:20:22 np0005532048 nova_compute[253661]: 2025-11-22 09:20:22.774 253665 DEBUG nova.objects.instance [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'trusted_certs' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:22 np0005532048 nova_compute[253661]: 2025-11-22 09:20:22.788 253665 DEBUG nova.compute.manager [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:22 np0005532048 nova_compute[253661]: 2025-11-22 09:20:22.826 253665 DEBUG nova.objects.instance [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'pci_requests' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:22 np0005532048 nova_compute[253661]: 2025-11-22 09:20:22.836 253665 DEBUG nova.objects.instance [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'pci_devices' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:22 np0005532048 nova_compute[253661]: 2025-11-22 09:20:22.846 253665 DEBUG nova.objects.instance [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'resources' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:22 np0005532048 nova_compute[253661]: 2025-11-22 09:20:22.855 253665 DEBUG nova.objects.instance [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] Lazy-loading 'migration_context' on Instance uuid aadc298c-a1ba-41ca-9015-0a4d08420487 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:22 np0005532048 nova_compute[253661]: 2025-11-22 09:20:22.865 253665 DEBUG nova.objects.instance [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 22 04:20:22 np0005532048 nova_compute[253661]: 2025-11-22 09:20:22.869 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:20:23 np0005532048 nova_compute[253661]: 2025-11-22 09:20:23.041 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803208.0392523, 6fc1c0e4-3bd1-44c5-a722-9a30961fc545 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:20:23 np0005532048 nova_compute[253661]: 2025-11-22 09:20:23.042 253665 INFO nova.compute.manager [-] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:20:23 np0005532048 nova_compute[253661]: 2025-11-22 09:20:23.066 253665 DEBUG nova.compute.manager [req-67d001b1-6e83-4633-a015-7834472c5e36 req-6540290b-9e59-4c56-aba6-f5ec4b539ec5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:23 np0005532048 nova_compute[253661]: 2025-11-22 09:20:23.067 253665 DEBUG oslo_concurrency.lockutils [req-67d001b1-6e83-4633-a015-7834472c5e36 req-6540290b-9e59-4c56-aba6-f5ec4b539ec5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:23 np0005532048 nova_compute[253661]: 2025-11-22 09:20:23.067 253665 DEBUG oslo_concurrency.lockutils [req-67d001b1-6e83-4633-a015-7834472c5e36 req-6540290b-9e59-4c56-aba6-f5ec4b539ec5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:23 np0005532048 nova_compute[253661]: 2025-11-22 09:20:23.068 253665 DEBUG oslo_concurrency.lockutils [req-67d001b1-6e83-4633-a015-7834472c5e36 req-6540290b-9e59-4c56-aba6-f5ec4b539ec5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d364f1c2-d606-448a-b3bd-00f1d5c1b858-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:23 np0005532048 nova_compute[253661]: 2025-11-22 09:20:23.068 253665 DEBUG nova.compute.manager [req-67d001b1-6e83-4633-a015-7834472c5e36 req-6540290b-9e59-4c56-aba6-f5ec4b539ec5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] No waiting events found dispatching network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:20:23 np0005532048 nova_compute[253661]: 2025-11-22 09:20:23.069 253665 WARNING nova.compute.manager [req-67d001b1-6e83-4633-a015-7834472c5e36 req-6540290b-9e59-4c56-aba6-f5ec4b539ec5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Received unexpected event network-vif-plugged-43cec84a-e6cc-4492-8869-806f677f3026 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:20:23 np0005532048 nova_compute[253661]: 2025-11-22 09:20:23.084 253665 DEBUG nova.compute.manager [None req-c648a8ac-6f3a-48e1-9e80-c8f346071d08 - - - - - -] [instance: 6fc1c0e4-3bd1-44c5-a722-9a30961fc545] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 262 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.4 MiB/s wr, 244 op/s
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.356 253665 INFO nova.compute.manager [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Rebuilding instance#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.562 253665 DEBUG nova.objects.instance [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.576 253665 DEBUG nova.compute.manager [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.719 253665 DEBUG nova.objects.instance [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'pci_requests' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.722 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "30c09d44-c691-4f03-a20d-2e86a0d0a762" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.722 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.723 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.723 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.723 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.724 253665 INFO nova.compute.manager [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Terminating instance#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.725 253665 DEBUG nova.compute.manager [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.738 253665 DEBUG nova.objects.instance [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'pci_devices' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.751 253665 DEBUG nova.objects.instance [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'resources' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.762 253665 DEBUG nova.objects.instance [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] Lazy-loading 'migration_context' on Instance uuid d364f1c2-d606-448a-b3bd-00f1d5c1b858 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.773 253665 DEBUG nova.objects.instance [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 22 04:20:24 np0005532048 kernel: tap0c106a61-dc (unregistering): left promiscuous mode
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.774 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:24 np0005532048 NetworkManager[48920]: <info>  [1763803224.7825] device (tap0c106a61-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.788 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.792 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:24Z|00599|binding|INFO|Releasing lport 0c106a61-dc2d-42f2-9c81-9c68f52ce123 from this chassis (sb_readonly=0)
Nov 22 04:20:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:24Z|00600|binding|INFO|Setting lport 0c106a61-dc2d-42f2-9c81-9c68f52ce123 down in Southbound
Nov 22 04:20:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:24Z|00601|binding|INFO|Removing iface tap0c106a61-dc ovn-installed in OVS
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.795 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.818 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:24.839 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:5d:4f 10.100.0.9'], port_security=['fa:16:3e:06:5d:4f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '30c09d44-c691-4f03-a20d-2e86a0d0a762', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e6525e9-2fbb-452c-a3eb-9774aebbdb59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8120d22470024f2197238c7c48c5ba0e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ab0be74d-f44a-43fd-be23-3e0ac42b6c84', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e425ea4f-8728-48a7-950a-425a7d828903, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0c106a61-dc2d-42f2-9c81-9c68f52ce123) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:20:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:24.840 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0c106a61-dc2d-42f2-9c81-9c68f52ce123 in datapath 6e6525e9-2fbb-452c-a3eb-9774aebbdb59 unbound from our chassis#033[00m
Nov 22 04:20:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:24.842 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6e6525e9-2fbb-452c-a3eb-9774aebbdb59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:20:24 np0005532048 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Nov 22 04:20:24 np0005532048 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003e.scope: Consumed 3.552s CPU time.
Nov 22 04:20:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:24.843 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[11900ac9-baf6-4240-ae37-9128fc190599]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:24 np0005532048 systemd-machined[215941]: Machine qemu-72-instance-0000003e terminated.
Nov 22 04:20:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:24.847 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59 namespace which is not needed anymore#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.954 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.960 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.971 253665 INFO nova.virt.libvirt.driver [-] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Instance destroyed successfully.#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.972 253665 DEBUG nova.objects.instance [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lazy-loading 'resources' on Instance uuid 30c09d44-c691-4f03-a20d-2e86a0d0a762 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.986 253665 DEBUG nova.virt.libvirt.vif [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:20:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1373824792',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1373824792',id=62,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:20:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8120d22470024f2197238c7c48c5ba0e',ramdisk_id='',reservation_id='r-q5mmcejy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsV221TestJSON-563581713',owner_user_name='tempest-InstanceActionsV221TestJSON-563581713-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:20:21Z,user_data=None,user_id='3457ea0f757244e8a49e3e224d581e8a',uuid=30c09d44-c691-4f03-a20d-2e86a0d0a762,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.987 253665 DEBUG nova.network.os_vif_util [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Converting VIF {"id": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "address": "fa:16:3e:06:5d:4f", "network": {"id": "6e6525e9-2fbb-452c-a3eb-9774aebbdb59", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1503502526-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8120d22470024f2197238c7c48c5ba0e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c106a61-dc", "ovs_interfaceid": "0c106a61-dc2d-42f2-9c81-9c68f52ce123", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.988 253665 DEBUG nova.network.os_vif_util [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0c106a61-dc2d-42f2-9c81-9c68f52ce123,network=Network(6e6525e9-2fbb-452c-a3eb-9774aebbdb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c106a61-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.988 253665 DEBUG os_vif [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0c106a61-dc2d-42f2-9c81-9c68f52ce123,network=Network(6e6525e9-2fbb-452c-a3eb-9774aebbdb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c106a61-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.991 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.991 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c106a61-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.993 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.995 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:24 np0005532048 nova_compute[253661]: 2025-11-22 09:20:24.997 253665 INFO os_vif [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:4f,bridge_name='br-int',has_traffic_filtering=True,id=0c106a61-dc2d-42f2-9c81-9c68f52ce123,network=Network(6e6525e9-2fbb-452c-a3eb-9774aebbdb59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c106a61-dc')#033[00m
Nov 22 04:20:25 np0005532048 neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59[321008]: [NOTICE]   (321012) : haproxy version is 2.8.14-c23fe91
Nov 22 04:20:25 np0005532048 neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59[321008]: [NOTICE]   (321012) : path to executable is /usr/sbin/haproxy
Nov 22 04:20:25 np0005532048 neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59[321008]: [WARNING]  (321012) : Exiting Master process...
Nov 22 04:20:25 np0005532048 neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59[321008]: [WARNING]  (321012) : Exiting Master process...
Nov 22 04:20:25 np0005532048 neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59[321008]: [ALERT]    (321012) : Current worker (321014) exited with code 143 (Terminated)
Nov 22 04:20:25 np0005532048 neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59[321008]: [WARNING]  (321012) : All workers exited. Exiting... (0)
Nov 22 04:20:25 np0005532048 systemd[1]: libpod-48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685.scope: Deactivated successfully.
Nov 22 04:20:25 np0005532048 podman[321053]: 2025-11-22 09:20:25.023663845 +0000 UTC m=+0.067849129 container died 48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:20:25 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685-userdata-shm.mount: Deactivated successfully.
Nov 22 04:20:25 np0005532048 systemd[1]: var-lib-containers-storage-overlay-3a3bb9c65833804f518fd940b222e6ab1d587a39e3be2ff7560dea3f1c278cb5-merged.mount: Deactivated successfully.
Nov 22 04:20:25 np0005532048 podman[321053]: 2025-11-22 09:20:25.082581247 +0000 UTC m=+0.126766531 container cleanup 48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.086 253665 DEBUG nova.compute.manager [req-a8064dad-dc58-461e-9e6b-9bc0cf0f753d req-cf733650-78b4-42f2-ab21-ba19cfd240da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received event network-vif-unplugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.086 253665 DEBUG oslo_concurrency.lockutils [req-a8064dad-dc58-461e-9e6b-9bc0cf0f753d req-cf733650-78b4-42f2-ab21-ba19cfd240da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.087 253665 DEBUG oslo_concurrency.lockutils [req-a8064dad-dc58-461e-9e6b-9bc0cf0f753d req-cf733650-78b4-42f2-ab21-ba19cfd240da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.087 253665 DEBUG oslo_concurrency.lockutils [req-a8064dad-dc58-461e-9e6b-9bc0cf0f753d req-cf733650-78b4-42f2-ab21-ba19cfd240da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.087 253665 DEBUG nova.compute.manager [req-a8064dad-dc58-461e-9e6b-9bc0cf0f753d req-cf733650-78b4-42f2-ab21-ba19cfd240da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] No waiting events found dispatching network-vif-unplugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.087 253665 DEBUG nova.compute.manager [req-a8064dad-dc58-461e-9e6b-9bc0cf0f753d req-cf733650-78b4-42f2-ab21-ba19cfd240da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received event network-vif-unplugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:20:25 np0005532048 systemd[1]: libpod-conmon-48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685.scope: Deactivated successfully.
Nov 22 04:20:25 np0005532048 podman[321108]: 2025-11-22 09:20:25.180185218 +0000 UTC m=+0.063343660 container remove 48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 04:20:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.186 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d3e8e866-6653-4b2e-add8-8c3a1cbaffd4]: (4, ('Sat Nov 22 09:20:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59 (48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685)\n48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685\nSat Nov 22 09:20:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59 (48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685)\n48718be155f44a8c0ba53750850a7d77bb1294abd375e81315b3b439464c6685\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.188 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[87dd580a-d45c-46df-8dae-a2566f4fa2be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.188 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e6525e9-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:25 np0005532048 kernel: tap6e6525e9-20: left promiscuous mode
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.190 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.213 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[179c65af-a170-4edd-a674-887132829d49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:20:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.230 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[318a5911-53a6-4bd8-a7ac-68f21f007f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.232 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ad12a490-cfaf-45e5-aeff-a1cbf18d8bb9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.244 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:20:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.256 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8b573331-e09a-4b30-a9e8-6b2f98430168]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607128, 'reachable_time': 28724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321122, 'error': None, 'target': 'ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:25 np0005532048 systemd[1]: run-netns-ovnmeta\x2d6e6525e9\x2d2fbb\x2d452c\x2da3eb\x2d9774aebbdb59.mount: Deactivated successfully.
Nov 22 04:20:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.263 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6e6525e9-2fbb-452c-a3eb-9774aebbdb59 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:20:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:25.263 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[df3502ff-6651-4216-bdea-fe559e22032b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.277 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 262 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.8 MiB/s wr, 225 op/s
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.577 253665 INFO nova.virt.libvirt.driver [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Deleting instance files /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762_del#033[00m
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.577 253665 INFO nova.virt.libvirt.driver [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Deletion of /var/lib/nova/instances/30c09d44-c691-4f03-a20d-2e86a0d0a762_del complete#033[00m
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.649 253665 INFO nova.compute.manager [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Took 0.92 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.650 253665 DEBUG oslo.service.loopingcall [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.650 253665 DEBUG nova.compute.manager [-] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:20:25 np0005532048 nova_compute[253661]: 2025-11-22 09:20:25.650 253665 DEBUG nova.network.neutron [-] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:20:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:27 np0005532048 podman[321125]: 2025-11-22 09:20:27.371381941 +0000 UTC m=+0.063573036 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:20:27 np0005532048 podman[321124]: 2025-11-22 09:20:27.386359405 +0000 UTC m=+0.083562222 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:20:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 249 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.1 MiB/s wr, 279 op/s
Nov 22 04:20:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:27.962 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:27.963 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:20:27.965 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.257 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.257 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:20:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3613503102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.821 253665 DEBUG nova.compute.manager [req-f96659d4-6949-406a-bd7b-86a7509e1c2d req-7e215582-c6fb-4324-ad96-7dec7fd5d46c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received event network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.822 253665 DEBUG oslo_concurrency.lockutils [req-f96659d4-6949-406a-bd7b-86a7509e1c2d req-7e215582-c6fb-4324-ad96-7dec7fd5d46c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.822 253665 DEBUG oslo_concurrency.lockutils [req-f96659d4-6949-406a-bd7b-86a7509e1c2d req-7e215582-c6fb-4324-ad96-7dec7fd5d46c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.823 253665 DEBUG oslo_concurrency.lockutils [req-f96659d4-6949-406a-bd7b-86a7509e1c2d req-7e215582-c6fb-4324-ad96-7dec7fd5d46c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.823 253665 DEBUG nova.compute.manager [req-f96659d4-6949-406a-bd7b-86a7509e1c2d req-7e215582-c6fb-4324-ad96-7dec7fd5d46c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] No waiting events found dispatching network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.824 253665 WARNING nova.compute.manager [req-f96659d4-6949-406a-bd7b-86a7509e1c2d req-7e215582-c6fb-4324-ad96-7dec7fd5d46c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received unexpected event network-vif-plugged-0c106a61-dc2d-42f2-9c81-9c68f52ce123 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.827 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.930 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.932 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.946 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.947 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.956 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:20:28 np0005532048 nova_compute[253661]: 2025-11-22 09:20:28.956 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.151 253665 DEBUG nova.network.neutron [-] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.175 253665 INFO nova.compute.manager [-] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Took 3.53 seconds to deallocate network for instance.#033[00m
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.228 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.229 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.243 253665 DEBUG nova.compute.manager [req-224a03e3-be1c-438c-9259-548742e90232 req-ca3db9ef-3e96-4395-af0a-b9533395bff1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Received event network-vif-deleted-0c106a61-dc2d-42f2-9c81-9c68f52ce123 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.248 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.250 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3578MB free_disk=59.886558532714844GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.319 253665 DEBUG oslo_concurrency.processutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 216 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.0 MiB/s wr, 275 op/s
Nov 22 04:20:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:20:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/352036944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.776 253665 DEBUG oslo_concurrency.processutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.789 253665 DEBUG nova.compute.provider_tree [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.817 253665 DEBUG nova.scheduler.client.report [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.851 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.857 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.887 253665 INFO nova.scheduler.client.report [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Deleted allocations for instance 30c09d44-c691-4f03-a20d-2e86a0d0a762
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.974 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 636b1046-fff8-4a45-8a14-04010b2f282e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.975 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance aadc298c-a1ba-41ca-9015-0a4d08420487 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.975 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d364f1c2-d606-448a-b3bd-00f1d5c1b858 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.975 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.975 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.986 253665 DEBUG oslo_concurrency.lockutils [None req-f09a2fcb-05fb-41c8-9495-95e61da3c2d5 3457ea0f757244e8a49e3e224d581e8a 8120d22470024f2197238c7c48c5ba0e - - default default] Lock "30c09d44-c691-4f03-a20d-2e86a0d0a762" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:29 np0005532048 nova_compute[253661]: 2025-11-22 09:20:29.994 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:30 np0005532048 nova_compute[253661]: 2025-11-22 09:20:30.023 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:30 np0005532048 nova_compute[253661]: 2025-11-22 09:20:30.070 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:30 np0005532048 nova_compute[253661]: 2025-11-22 09:20:30.278 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:20:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2550048957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:20:30 np0005532048 nova_compute[253661]: 2025-11-22 09:20:30.536 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:30 np0005532048 nova_compute[253661]: 2025-11-22 09:20:30.542 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:20:30 np0005532048 nova_compute[253661]: 2025-11-22 09:20:30.557 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:20:30 np0005532048 nova_compute[253661]: 2025-11-22 09:20:30.656 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:20:30 np0005532048 nova_compute[253661]: 2025-11-22 09:20:30.657 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:31 np0005532048 podman[321226]: 2025-11-22 09:20:31.406538979 +0000 UTC m=+0.102329878 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:20:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 216 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 39 KiB/s wr, 247 op/s
Nov 22 04:20:31 np0005532048 nova_compute[253661]: 2025-11-22 09:20:31.656 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:20:31 np0005532048 nova_compute[253661]: 2025-11-22 09:20:31.657 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:20:31 np0005532048 nova_compute[253661]: 2025-11-22 09:20:31.657 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:20:31 np0005532048 nova_compute[253661]: 2025-11-22 09:20:31.657 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:20:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:32 np0005532048 nova_compute[253661]: 2025-11-22 09:20:32.956 253665 DEBUG nova.virt.libvirt.driver [None req-a25f0ec0-c3fc-4a4e-ac11-b5272b58a750 5352d2182544454aab03bd4a74160247 a29f2c834c7a4a2ea6c4fc6dea996a8e - - default default] [instance: aadc298c-a1ba-41ca-9015-0a4d08420487] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 04:20:33 np0005532048 nova_compute[253661]: 2025-11-22 09:20:33.238 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:33 np0005532048 nova_compute[253661]: 2025-11-22 09:20:33.239 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:33 np0005532048 nova_compute[253661]: 2025-11-22 09:20:33.252 253665 DEBUG nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:20:33 np0005532048 nova_compute[253661]: 2025-11-22 09:20:33.313 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:33 np0005532048 nova_compute[253661]: 2025-11-22 09:20:33.313 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:33 np0005532048 nova_compute[253661]: 2025-11-22 09:20:33.323 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:20:33 np0005532048 nova_compute[253661]: 2025-11-22 09:20:33.324 253665 INFO nova.compute.claims [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:20:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 220 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 640 KiB/s wr, 264 op/s
Nov 22 04:20:33 np0005532048 nova_compute[253661]: 2025-11-22 09:20:33.489 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:33 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:33Z|00602|binding|INFO|Releasing lport efc8861c-ffa7-41c8-9325-c43c7271007f from this chassis (sb_readonly=0)
Nov 22 04:20:33 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:33Z|00603|binding|INFO|Releasing lport 23aa3d02-a12d-464a-8395-5aa8724c0fd4 from this chassis (sb_readonly=0)
Nov 22 04:20:33 np0005532048 nova_compute[253661]: 2025-11-22 09:20:33.786 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:20:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/818487124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:20:33 np0005532048 nova_compute[253661]: 2025-11-22 09:20:33.978 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:33 np0005532048 nova_compute[253661]: 2025-11-22 09:20:33.983 253665 DEBUG nova.compute.provider_tree [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:20:33 np0005532048 nova_compute[253661]: 2025-11-22 09:20:33.995 253665 DEBUG nova.scheduler.client.report [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.011 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.012 253665 DEBUG nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.053 253665 DEBUG nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.054 253665 DEBUG nova.network.neutron [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.072 253665 INFO nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.086 253665 DEBUG nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.180 253665 DEBUG nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.182 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.183 253665 INFO nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Creating image(s)
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.206 253665 DEBUG nova.storage.rbd_utils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.232 253665 DEBUG nova.storage.rbd_utils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.254 253665 DEBUG nova.storage.rbd_utils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.260 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.296 253665 DEBUG nova.policy [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7fc7bde5e89f466d88e469ac1f35a435', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '933c51626a49465db409069a1b3eb7be', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.332 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.333 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.333 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.334 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.367 253665 DEBUG nova.storage.rbd_utils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.372 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.863 253665 DEBUG nova.virt.libvirt.driver [None req-dc046360-8f98-4425-a50b-ed4c1e53adfe 559fd7e00a0a468797efe4955caffc4a d9601c2d2b97440483ffc0bf4f598e73 - - default default] [instance: d364f1c2-d606-448a-b3bd-00f1d5c1b858] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.990 253665 DEBUG nova.network.neutron [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Successfully created port: 1c553ce7-b95a-447b-9fed-01b378014028 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:20:34 np0005532048 nova_compute[253661]: 2025-11-22 09:20:34.997 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:35 np0005532048 nova_compute[253661]: 2025-11-22 09:20:35.281 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 220 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 603 KiB/s wr, 112 op/s
Nov 22 04:20:35 np0005532048 nova_compute[253661]: 2025-11-22 09:20:35.700 253665 DEBUG nova.network.neutron [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Successfully updated port: 1c553ce7-b95a-447b-9fed-01b378014028 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:20:35 np0005532048 nova_compute[253661]: 2025-11-22 09:20:35.702 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:35 np0005532048 nova_compute[253661]: 2025-11-22 09:20:35.702 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:35 np0005532048 nova_compute[253661]: 2025-11-22 09:20:35.718 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "refresh_cache-094a1e4e-c6c0-4994-907c-aae7c2cdbe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:20:35 np0005532048 nova_compute[253661]: 2025-11-22 09:20:35.718 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquired lock "refresh_cache-094a1e4e-c6c0-4994-907c-aae7c2cdbe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:20:35 np0005532048 nova_compute[253661]: 2025-11-22 09:20:35.718 253665 DEBUG nova.network.neutron [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:20:35 np0005532048 nova_compute[253661]: 2025-11-22 09:20:35.720 253665 DEBUG nova.compute.manager [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:20:35 np0005532048 nova_compute[253661]: 2025-11-22 09:20:35.788 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:35 np0005532048 nova_compute[253661]: 2025-11-22 09:20:35.789 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:35 np0005532048 nova_compute[253661]: 2025-11-22 09:20:35.797 253665 DEBUG nova.virt.hardware [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:20:35 np0005532048 nova_compute[253661]: 2025-11-22 09:20:35.797 253665 INFO nova.compute.claims [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:20:35 np0005532048 nova_compute[253661]: 2025-11-22 09:20:35.944 253665 DEBUG nova.compute.manager [req-4c5f96d2-14f4-4f69-925f-5208f43b9216 req-4b02fcc6-b45f-4863-8c40-c8b7a7d0db8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Received event network-changed-1c553ce7-b95a-447b-9fed-01b378014028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:20:35 np0005532048 nova_compute[253661]: 2025-11-22 09:20:35.945 253665 DEBUG nova.compute.manager [req-4c5f96d2-14f4-4f69-925f-5208f43b9216 req-4b02fcc6-b45f-4863-8c40-c8b7a7d0db8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Refreshing instance network info cache due to event network-changed-1c553ce7-b95a-447b-9fed-01b378014028. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:20:35 np0005532048 nova_compute[253661]: 2025-11-22 09:20:35.945 253665 DEBUG oslo_concurrency.lockutils [req-4c5f96d2-14f4-4f69-925f-5208f43b9216 req-4b02fcc6-b45f-4863-8c40-c8b7a7d0db8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-094a1e4e-c6c0-4994-907c-aae7c2cdbe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:20:35 np0005532048 nova_compute[253661]: 2025-11-22 09:20:35.955 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:35 np0005532048 nova_compute[253661]: 2025-11-22 09:20:35.995 253665 DEBUG nova.network.neutron [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.153 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.781s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.221 253665 DEBUG nova.storage.rbd_utils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] resizing rbd image 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:20:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:20:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2156729813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.440 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.446 253665 DEBUG nova.compute.provider_tree [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.460 253665 DEBUG nova.scheduler.client.report [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.481 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.482 253665 DEBUG nova.compute.manager [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.524 253665 DEBUG nova.compute.manager [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.524 253665 DEBUG nova.network.neutron [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.541 253665 INFO nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.560 253665 DEBUG nova.compute.manager [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.635 253665 DEBUG nova.compute.manager [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.637 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.638 253665 INFO nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Creating image(s)#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.665 253665 DEBUG nova.storage.rbd_utils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.696 253665 DEBUG nova.storage.rbd_utils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.731 253665 DEBUG nova.storage.rbd_utils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.737 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:36 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:36Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:29:62:0d 10.100.0.10
Nov 22 04:20:36 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:36Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:29:62:0d 10.100.0.10
Nov 22 04:20:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.825 253665 DEBUG nova.objects.instance [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'migration_context' on Instance uuid 094a1e4e-c6c0-4994-907c-aae7c2cdbe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.832 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.834 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.835 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.835 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.862 253665 DEBUG nova.storage.rbd_utils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.868 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.911 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.912 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Ensure instance console log exists: /var/lib/nova/instances/094a1e4e-c6c0-4994-907c-aae7c2cdbe36/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.913 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.913 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:36 np0005532048 nova_compute[253661]: 2025-11-22 09:20:36.914 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.070 253665 DEBUG nova.policy [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7fc7bde5e89f466d88e469ac1f35a435', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '933c51626a49465db409069a1b3eb7be', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:20:37 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:37Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:ea:a6 10.100.0.14
Nov 22 04:20:37 np0005532048 ovn_controller[152872]: 2025-11-22T09:20:37Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:ea:a6 10.100.0.14
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.323 253665 DEBUG nova.network.neutron [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Updating instance_info_cache with network_info: [{"id": "1c553ce7-b95a-447b-9fed-01b378014028", "address": "fa:16:3e:62:09:c9", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c553ce7-b9", "ovs_interfaceid": "1c553ce7-b95a-447b-9fed-01b378014028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.350 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Releasing lock "refresh_cache-094a1e4e-c6c0-4994-907c-aae7c2cdbe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.352 253665 DEBUG nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Instance network_info: |[{"id": "1c553ce7-b95a-447b-9fed-01b378014028", "address": "fa:16:3e:62:09:c9", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c553ce7-b9", "ovs_interfaceid": "1c553ce7-b95a-447b-9fed-01b378014028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.353 253665 DEBUG oslo_concurrency.lockutils [req-4c5f96d2-14f4-4f69-925f-5208f43b9216 req-4b02fcc6-b45f-4863-8c40-c8b7a7d0db8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-094a1e4e-c6c0-4994-907c-aae7c2cdbe36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.354 253665 DEBUG nova.network.neutron [req-4c5f96d2-14f4-4f69-925f-5208f43b9216 req-4b02fcc6-b45f-4863-8c40-c8b7a7d0db8c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Refreshing network info cache for port 1c553ce7-b95a-447b-9fed-01b378014028 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.356 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Start _get_guest_xml network_info=[{"id": "1c553ce7-b95a-447b-9fed-01b378014028", "address": "fa:16:3e:62:09:c9", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c553ce7-b9", "ovs_interfaceid": "1c553ce7-b95a-447b-9fed-01b378014028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.363 253665 WARNING nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.372 253665 DEBUG nova.virt.libvirt.host [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.372 253665 DEBUG nova.virt.libvirt.host [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.388 253665 DEBUG oslo_concurrency.processutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.389 253665 DEBUG nova.virt.libvirt.host [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.390 253665 DEBUG nova.virt.libvirt.host [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.390 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.390 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.391 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.391 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.391 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.391 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.391 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.392 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.392 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.392 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.393 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.393 253665 DEBUG nova.virt.hardware [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.396 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 241 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 151 op/s
Nov 22 04:20:37 np0005532048 nova_compute[253661]: 2025-11-22 09:20:37.520 253665 DEBUG nova.storage.rbd_utils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] resizing rbd image f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:20:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 328 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.8 MiB/s wr, 177 op/s
Nov 22 04:20:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:20:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012472494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:20:39 np0005532048 nova_compute[253661]: 2025-11-22 09:20:39.684 253665 DEBUG nova.network.neutron [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Successfully created port: e043dc2b-6062-4cda-bf32-37ab692618c1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:20:39 np0005532048 nova_compute[253661]: 2025-11-22 09:20:39.693 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:39 np0005532048 nova_compute[253661]: 2025-11-22 09:20:39.749 253665 DEBUG nova.storage.rbd_utils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:39 np0005532048 nova_compute[253661]: 2025-11-22 09:20:39.755 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:20:39 np0005532048 nova_compute[253661]: 2025-11-22 09:20:39.967 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803224.966233, 30c09d44-c691-4f03-a20d-2e86a0d0a762 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:20:39 np0005532048 nova_compute[253661]: 2025-11-22 09:20:39.967 253665 INFO nova.compute.manager [-] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:20:39 np0005532048 nova_compute[253661]: 2025-11-22 09:20:39.989 253665 DEBUG nova.compute.manager [None req-d8c063cf-0ce7-4c9c-913e-cbaecc4d76d9 - - - - - -] [instance: 30c09d44-c691-4f03-a20d-2e86a0d0a762] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.002 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.102 253665 DEBUG nova.objects.instance [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'migration_context' on Instance uuid f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.112 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.112 253665 DEBUG nova.virt.libvirt.driver [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Ensure instance console log exists: /var/lib/nova/instances/f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.113 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.113 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.114 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:20:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2519355073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.284 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.299 253665 DEBUG oslo_concurrency.processutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.301 253665 DEBUG nova.virt.libvirt.vif [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1168981248',display_name='tempest-ServerRescueNegativeTestJSON-server-1168981248',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1168981248',id=64,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='933c51626a49465db409069a1b3eb7be',ramdisk_id='',reservation_id='r-4kcyxfno',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1742140611',owner_user_name=
'tempest-ServerRescueNegativeTestJSON-1742140611-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:34Z,user_data=None,user_id='7fc7bde5e89f466d88e469ac1f35a435',uuid=094a1e4e-c6c0-4994-907c-aae7c2cdbe36,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c553ce7-b95a-447b-9fed-01b378014028", "address": "fa:16:3e:62:09:c9", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c553ce7-b9", "ovs_interfaceid": "1c553ce7-b95a-447b-9fed-01b378014028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.301 253665 DEBUG nova.network.os_vif_util [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converting VIF {"id": "1c553ce7-b95a-447b-9fed-01b378014028", "address": "fa:16:3e:62:09:c9", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c553ce7-b9", "ovs_interfaceid": "1c553ce7-b95a-447b-9fed-01b378014028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.302 253665 DEBUG nova.network.os_vif_util [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:09:c9,bridge_name='br-int',has_traffic_filtering=True,id=1c553ce7-b95a-447b-9fed-01b378014028,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c553ce7-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.303 253665 DEBUG nova.objects.instance [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lazy-loading 'pci_devices' on Instance uuid 094a1e4e-c6c0-4994-907c-aae7c2cdbe36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.317 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  <uuid>094a1e4e-c6c0-4994-907c-aae7c2cdbe36</uuid>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  <name>instance-00000040</name>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1168981248</nova:name>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:20:37</nova:creationTime>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:20:40 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:        <nova:user uuid="7fc7bde5e89f466d88e469ac1f35a435">tempest-ServerRescueNegativeTestJSON-1742140611-project-member</nova:user>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:        <nova:project uuid="933c51626a49465db409069a1b3eb7be">tempest-ServerRescueNegativeTestJSON-1742140611</nova:project>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:        <nova:port uuid="1c553ce7-b95a-447b-9fed-01b378014028">
Nov 22 04:20:40 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <entry name="serial">094a1e4e-c6c0-4994-907c-aae7c2cdbe36</entry>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <entry name="uuid">094a1e4e-c6c0-4994-907c-aae7c2cdbe36</entry>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk">
Nov 22 04:20:40 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:20:40 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk.config">
Nov 22 04:20:40 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:20:40 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:62:09:c9"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <target dev="tap1c553ce7-b9"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/094a1e4e-c6c0-4994-907c-aae7c2cdbe36/console.log" append="off"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:20:40 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:20:40 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:20:40 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:20:40 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.318 253665 DEBUG nova.compute.manager [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Preparing to wait for external event network-vif-plugged-1c553ce7-b95a-447b-9fed-01b378014028 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.318 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.319 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.319 253665 DEBUG oslo_concurrency.lockutils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Lock "094a1e4e-c6c0-4994-907c-aae7c2cdbe36-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.319 253665 DEBUG nova.virt.libvirt.vif [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:20:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1168981248',display_name='tempest-ServerRescueNegativeTestJSON-server-1168981248',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1168981248',id=64,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='933c51626a49465db409069a1b3eb7be',ramdisk_id='',reservation_id='r-4kcyxfno',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1742140611',owner_
user_name='tempest-ServerRescueNegativeTestJSON-1742140611-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:20:34Z,user_data=None,user_id='7fc7bde5e89f466d88e469ac1f35a435',uuid=094a1e4e-c6c0-4994-907c-aae7c2cdbe36,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c553ce7-b95a-447b-9fed-01b378014028", "address": "fa:16:3e:62:09:c9", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c553ce7-b9", "ovs_interfaceid": "1c553ce7-b95a-447b-9fed-01b378014028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.320 253665 DEBUG nova.network.os_vif_util [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converting VIF {"id": "1c553ce7-b95a-447b-9fed-01b378014028", "address": "fa:16:3e:62:09:c9", "network": {"id": "851968f3-2dc6-498b-a08c-7b5b2c2383d4", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-997092733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "933c51626a49465db409069a1b3eb7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c553ce7-b9", "ovs_interfaceid": "1c553ce7-b95a-447b-9fed-01b378014028", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.320 253665 DEBUG nova.network.os_vif_util [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:09:c9,bridge_name='br-int',has_traffic_filtering=True,id=1c553ce7-b95a-447b-9fed-01b378014028,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c553ce7-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.321 253665 DEBUG os_vif [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:09:c9,bridge_name='br-int',has_traffic_filtering=True,id=1c553ce7-b95a-447b-9fed-01b378014028,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c553ce7-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.321 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.322 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.322 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.326 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.326 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c553ce7-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.326 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c553ce7-b9, col_values=(('external_ids', {'iface-id': '1c553ce7-b95a-447b-9fed-01b378014028', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:09:c9', 'vm-uuid': '094a1e4e-c6c0-4994-907c-aae7c2cdbe36'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.329 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.330 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:20:40 np0005532048 NetworkManager[48920]: <info>  [1763803240.3304] manager: (tap1c553ce7-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.336 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.337 253665 INFO os_vif [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:09:c9,bridge_name='br-int',has_traffic_filtering=True,id=1c553ce7-b95a-447b-9fed-01b378014028,network=Network(851968f3-2dc6-498b-a08c-7b5b2c2383d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c553ce7-b9')#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.381 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.381 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.381 253665 DEBUG nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] No VIF found with MAC fa:16:3e:62:09:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.382 253665 INFO nova.virt.libvirt.driver [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: 094a1e4e-c6c0-4994-907c-aae7c2cdbe36] Using config drive#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.404 253665 DEBUG nova.storage.rbd_utils [None req-60306d0c-5635-435a-b9d0-f18456bab5e0 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] rbd image 094a1e4e-c6c0-4994-907c-aae7c2cdbe36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.475 253665 DEBUG nova.network.neutron [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] [instance: f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5] Successfully updated port: e043dc2b-6062-4cda-bf32-37ab692618c1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:20:40 np0005532048 nova_compute[253661]: 2025-11-22 09:20:40.491 253665 DEBUG oslo_concurrency.lockutils [None req-fe703e62-1ffb-4876-9bbd-94e109d67722 7fc7bde5e89f466d88e469ac1f35a435 933c51626a49465db409069a1b3eb7be - - default default] Acquiring lock "refresh_cache-f4b7f2ae-ea2d-4751-aeab-25baf1bd67a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <entry name="serial">a207d8c4-4fce-4fe6-9ba5-548a92e757ac</entry>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <entry name="uuid">a207d8c4-4fce-4fe6-9ba5-548a92e757ac</entry>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.rescue">
Nov 22 04:24:29 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:24:29 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk">
Nov 22 04:24:29 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:24:29 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <target dev="vdb" bus="virtio"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.config.rescue">
Nov 22 04:24:29 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:24:29 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:86:73:cb"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <target dev="tap15bf0e02-e0"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/console.log" append="off"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:24:29 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:24:29 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:24:29 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:24:29 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:24:29 np0005532048 nova_compute[253661]: 2025-11-22 09:24:29.915 253665 INFO nova.virt.libvirt.driver [-] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Instance destroyed successfully.#033[00m
Nov 22 04:24:29 np0005532048 nova_compute[253661]: 2025-11-22 09:24:29.995 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:30 np0005532048 rsyslogd[1005]: imjournal: 11394 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.002 253665 DEBUG nova.compute.manager [req-3432a5be-66fb-4afa-961c-7b94fbb9d646 req-e0ed34c4-34ed-4f13-9600-a611caf62ce6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Received event network-vif-plugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.002 253665 DEBUG oslo_concurrency.lockutils [req-3432a5be-66fb-4afa-961c-7b94fbb9d646 req-e0ed34c4-34ed-4f13-9600-a611caf62ce6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.003 253665 DEBUG oslo_concurrency.lockutils [req-3432a5be-66fb-4afa-961c-7b94fbb9d646 req-e0ed34c4-34ed-4f13-9600-a611caf62ce6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.003 253665 DEBUG oslo_concurrency.lockutils [req-3432a5be-66fb-4afa-961c-7b94fbb9d646 req-e0ed34c4-34ed-4f13-9600-a611caf62ce6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.004 253665 DEBUG nova.compute.manager [req-3432a5be-66fb-4afa-961c-7b94fbb9d646 req-e0ed34c4-34ed-4f13-9600-a611caf62ce6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] No waiting events found dispatching network-vif-plugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.004 253665 WARNING nova.compute.manager [req-3432a5be-66fb-4afa-961c-7b94fbb9d646 req-e0ed34c4-34ed-4f13-9600-a611caf62ce6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Received unexpected event network-vif-plugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.012 253665 DEBUG nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.012 253665 DEBUG nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.012 253665 DEBUG nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.013 253665 DEBUG nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] No VIF found with MAC fa:16:3e:86:73:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.014 253665 INFO nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Using config drive#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.046 253665 DEBUG nova.storage.rbd_utils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] rbd image a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.068 253665 DEBUG nova.objects.instance [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lazy-loading 'ec2_ids' on Instance uuid a207d8c4-4fce-4fe6-9ba5-548a92e757ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.099 253665 DEBUG nova.objects.instance [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lazy-loading 'keypairs' on Instance uuid a207d8c4-4fce-4fe6-9ba5-548a92e757ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.295 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Updating instance_info_cache with network_info: [{"id": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "address": "fa:16:3e:e0:3e:fb", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdb3a4e-ac", "ovs_interfaceid": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.313 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-9096405c-eb66-4d27-abbb-e709b767afea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.314 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.315 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.315 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.459 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.667 253665 INFO nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Creating config drive at /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config.rescue#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.674 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgo4oo9ae execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.843 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgo4oo9ae" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.875 253665 DEBUG nova.storage.rbd_utils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] rbd image a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:24:30 np0005532048 nova_compute[253661]: 2025-11-22 09:24:30.879 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config.rescue a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.365 253665 DEBUG nova.compute.manager [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received event network-vif-unplugged-edd81944-578b-4533-9db7-f17a3fb84211 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.366 253665 DEBUG oslo_concurrency.lockutils [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.366 253665 DEBUG oslo_concurrency.lockutils [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.366 253665 DEBUG oslo_concurrency.lockutils [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.366 253665 DEBUG nova.compute.manager [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] No waiting events found dispatching network-vif-unplugged-edd81944-578b-4533-9db7-f17a3fb84211 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.367 253665 WARNING nova.compute.manager [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received unexpected event network-vif-unplugged-edd81944-578b-4533-9db7-f17a3fb84211 for instance with vm_state stopped and task_state None.#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.367 253665 DEBUG nova.compute.manager [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received event network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.367 253665 DEBUG oslo_concurrency.lockutils [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.367 253665 DEBUG oslo_concurrency.lockutils [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.368 253665 DEBUG oslo_concurrency.lockutils [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.368 253665 DEBUG nova.compute.manager [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] No waiting events found dispatching network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.368 253665 WARNING nova.compute.manager [req-506440ae-dfa2-4ca8-8ae3-3a6db56447ee req-1b53ede4-1e69-4105-ae80-bd7b5c94414c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received unexpected event network-vif-plugged-edd81944-578b-4533-9db7-f17a3fb84211 for instance with vm_state stopped and task_state None.#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.400 253665 DEBUG oslo_concurrency.processutils [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config.rescue a207d8c4-4fce-4fe6-9ba5-548a92e757ac_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.401 253665 INFO nova.virt.libvirt.driver [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Deleting local config drive /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac/disk.config.rescue because it was imported into RBD.#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.442 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "99df8fe4-a61a-40d5-b089-90de5d98050f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.442 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.443 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.443 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.443 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.445 253665 INFO nova.compute.manager [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Terminating instance#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.446 253665 DEBUG nova.compute.manager [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:24:31 np0005532048 kernel: tap15bf0e02-e0: entered promiscuous mode
Nov 22 04:24:31 np0005532048 systemd-udevd[335719]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:24:31 np0005532048 NetworkManager[48920]: <info>  [1763803471.4729] manager: (tap15bf0e02-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/350)
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.476 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:24:31Z|00813|binding|INFO|Claiming lport 15bf0e02-e093-4f45-995f-abb925d1cf71 for this chassis.
Nov 22 04:24:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:24:31Z|00814|binding|INFO|15bf0e02-e093-4f45-995f-abb925d1cf71: Claiming fa:16:3e:86:73:cb 10.100.0.14
Nov 22 04:24:31 np0005532048 NetworkManager[48920]: <info>  [1763803471.4851] device (tap15bf0e02-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:24:31 np0005532048 NetworkManager[48920]: <info>  [1763803471.4860] device (tap15bf0e02-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.488 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:73:cb 10.100.0.14'], port_security=['fa:16:3e:86:73:cb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a207d8c4-4fce-4fe6-9ba5-548a92e757ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fcc3ab0c-697f-4983-ad7d-7f2a44c0b653', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e78196ec949a45cf803d3e585b603558', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'acde5338-5012-4a35-a74f-7e2170896be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70ae9d50-6442-4aca-8fcc-29daad21c977, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=15bf0e02-e093-4f45-995f-abb925d1cf71) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.489 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 15bf0e02-e093-4f45-995f-abb925d1cf71 in datapath fcc3ab0c-697f-4983-ad7d-7f2a44c0b653 bound to our chassis#033[00m
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.490 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fcc3ab0c-697f-4983-ad7d-7f2a44c0b653 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.491 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[220d0148-04f7-44b3-bed6-8a86e62c76b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.497 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:24:31Z|00815|binding|INFO|Setting lport 15bf0e02-e093-4f45-995f-abb925d1cf71 ovn-installed in OVS
Nov 22 04:24:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:24:31Z|00816|binding|INFO|Setting lport 15bf0e02-e093-4f45-995f-abb925d1cf71 up in Southbound
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.503 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.508 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:31 np0005532048 systemd-machined[215941]: New machine qemu-98-instance-0000004e.
Nov 22 04:24:31 np0005532048 systemd[1]: Started Virtual Machine qemu-98-instance-0000004e.
Nov 22 04:24:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 367 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 5.8 MiB/s wr, 134 op/s
Nov 22 04:24:31 np0005532048 kernel: tap56dc3604-53 (unregistering): left promiscuous mode
Nov 22 04:24:31 np0005532048 NetworkManager[48920]: <info>  [1763803471.6453] device (tap56dc3604-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:24:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:24:31Z|00817|binding|INFO|Releasing lport 56dc3604-5308-40cd-a3a8-f768a68f4ef6 from this chassis (sb_readonly=0)
Nov 22 04:24:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:24:31Z|00818|binding|INFO|Setting lport 56dc3604-5308-40cd-a3a8-f768a68f4ef6 down in Southbound
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.649 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:24:31Z|00819|binding|INFO|Removing iface tap56dc3604-53 ovn-installed in OVS
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.652 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.659 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:03:d2 10.100.0.14'], port_security=['fa:16:3e:26:03:d2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '99df8fe4-a61a-40d5-b089-90de5d98050f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=56dc3604-5308-40cd-a3a8-f768a68f4ef6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.660 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 56dc3604-5308-40cd-a3a8-f768a68f4ef6 in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 unbound from our chassis#033[00m
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.662 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.674 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.684 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[abca885b-a5f3-4036-b8c9-d689e9ad616a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:24:31 np0005532048 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000050.scope: Deactivated successfully.
Nov 22 04:24:31 np0005532048 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000050.scope: Consumed 3.626s CPU time.
Nov 22 04:24:31 np0005532048 systemd-machined[215941]: Machine qemu-97-instance-00000050 terminated.
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.718 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8afc8a-38af-4884-a4a9-99330cfc5d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.723 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d06e1a62-0b15-4caa-af82-03e6cec936d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.762 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6ecf6f3e-5a2c-4c76-9b7f-4e0ceb10bcf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.787 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7d5d89e8-969a-4157-b875-37dbf94beef6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335854, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.812 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5de39eee-95f9-45b8-958a-c6bd3e278a2f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335855, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335855, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.814 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.816 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.822 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.823 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.823 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:24:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:31.823 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.888 253665 INFO nova.virt.libvirt.driver [-] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Instance destroyed successfully.#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.889 253665 DEBUG nova.objects.instance [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'resources' on Instance uuid 99df8fe4-a61a-40d5-b089-90de5d98050f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.905 253665 DEBUG nova.virt.libvirt.vif [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:24:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-2075369',display_name='tempest-ServersTestJSON-server-2075369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-2075369',id=80,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:24:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-gztn8gsw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_
min_ram='0',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:24:29Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=99df8fe4-a61a-40d5-b089-90de5d98050f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "address": "fa:16:3e:26:03:d2", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56dc3604-53", "ovs_interfaceid": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.906 253665 DEBUG nova.network.os_vif_util [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "address": "fa:16:3e:26:03:d2", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56dc3604-53", "ovs_interfaceid": "56dc3604-5308-40cd-a3a8-f768a68f4ef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.908 253665 DEBUG nova.network.os_vif_util [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:03:d2,bridge_name='br-int',has_traffic_filtering=True,id=56dc3604-5308-40cd-a3a8-f768a68f4ef6,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56dc3604-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.909 253665 DEBUG os_vif [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:03:d2,bridge_name='br-int',has_traffic_filtering=True,id=56dc3604-5308-40cd-a3a8-f768a68f4ef6,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56dc3604-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.911 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.911 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56dc3604-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.958 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.964 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:31 np0005532048 nova_compute[253661]: 2025-11-22 09:24:31.967 253665 INFO os_vif [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:03:d2,bridge_name='br-int',has_traffic_filtering=True,id=56dc3604-5308-40cd-a3a8-f768a68f4ef6,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56dc3604-53')#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.062 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for a207d8c4-4fce-4fe6-9ba5-548a92e757ac due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.063 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803472.0626485, a207d8c4-4fce-4fe6-9ba5-548a92e757ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.063 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.067 253665 DEBUG nova.compute.manager [None req-f7dd329c-9e46-4ef8-8eb3-da38719f557b 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.095 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.097 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.102 253665 DEBUG nova.compute.manager [req-7a54921d-158b-4815-b646-adf2c4598781 req-7cd89bf6-7401-4a0d-80cf-14de31663cfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Received event network-vif-unplugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.102 253665 DEBUG oslo_concurrency.lockutils [req-7a54921d-158b-4815-b646-adf2c4598781 req-7cd89bf6-7401-4a0d-80cf-14de31663cfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.102 253665 DEBUG oslo_concurrency.lockutils [req-7a54921d-158b-4815-b646-adf2c4598781 req-7cd89bf6-7401-4a0d-80cf-14de31663cfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.102 253665 DEBUG oslo_concurrency.lockutils [req-7a54921d-158b-4815-b646-adf2c4598781 req-7cd89bf6-7401-4a0d-80cf-14de31663cfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.102 253665 DEBUG nova.compute.manager [req-7a54921d-158b-4815-b646-adf2c4598781 req-7cd89bf6-7401-4a0d-80cf-14de31663cfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] No waiting events found dispatching network-vif-unplugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.103 253665 DEBUG nova.compute.manager [req-7a54921d-158b-4815-b646-adf2c4598781 req-7cd89bf6-7401-4a0d-80cf-14de31663cfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Received event network-vif-unplugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.125 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.125 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803472.0638218, a207d8c4-4fce-4fe6-9ba5-548a92e757ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.125 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] VM Started (Lifecycle Event)#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.146 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.151 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.245 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.246 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.246 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.246 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.246 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:24:32 np0005532048 podman[335935]: 2025-11-22 09:24:32.381571491 +0000 UTC m=+0.065478144 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:24:32 np0005532048 podman[335934]: 2025-11-22 09:24:32.400282673 +0000 UTC m=+0.084184786 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.433 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.433 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.434 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.434 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.434 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.436 253665 INFO nova.compute.manager [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Terminating instance#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.436 253665 DEBUG nova.compute.manager [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.447 253665 INFO nova.virt.libvirt.driver [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Instance destroyed successfully.#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.447 253665 DEBUG nova.objects.instance [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'resources' on Instance uuid 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.460 253665 DEBUG nova.virt.libvirt.vif [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:23:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-566833626',display_name='tempest-tempest.common.compute-instance-566833626',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-566833626',id=77,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:24:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='9b06c711e582499ab500917d85e27e3c',ramdisk_id='',reservation_id='r-3abserjc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1527475006',owner_user_name='tempest-ServerActionsTestOtherA-1527475006-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:24:29Z,user_data=None,user_id='7e5709393702478dbf0bd566dc94d7fe',uuid=9cef4b12-b28c-47df-9af2-a0bf9934e4d7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.460 253665 DEBUG nova.network.os_vif_util [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converting VIF {"id": "edd81944-578b-4533-9db7-f17a3fb84211", "address": "fa:16:3e:70:b6:3f", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedd81944-57", "ovs_interfaceid": "edd81944-578b-4533-9db7-f17a3fb84211", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.461 253665 DEBUG nova.network.os_vif_util [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:b6:3f,bridge_name='br-int',has_traffic_filtering=True,id=edd81944-578b-4533-9db7-f17a3fb84211,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd81944-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.461 253665 DEBUG os_vif [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:b6:3f,bridge_name='br-int',has_traffic_filtering=True,id=edd81944-578b-4533-9db7-f17a3fb84211,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd81944-57') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.463 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.463 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedd81944-57, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.465 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.467 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.470 253665 INFO os_vif [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:b6:3f,bridge_name='br-int',has_traffic_filtering=True,id=edd81944-578b-4533-9db7-f17a3fb84211,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedd81944-57')#033[00m
Nov 22 04:24:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:24:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4227198430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:24:32 np0005532048 nova_compute[253661]: 2025-11-22 09:24:32.957 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.711s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.089 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.090 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.090 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.093 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.093 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.097 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.097 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.099 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.099 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.102 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.103 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.292 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.293 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3467MB free_disk=59.80893325805664GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.294 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.294 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.367 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 9096405c-eb66-4d27-abbb-e709b767afea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.367 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.367 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance a207d8c4-4fce-4fe6-9ba5-548a92e757ac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.367 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance e0b05f62-6966-4bf3-aee5-e4d2137a6cfc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.368 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 99df8fe4-a61a-40d5-b089-90de5d98050f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.368 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.368 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.467 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:24:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 413 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.5 MiB/s wr, 208 op/s
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.589 253665 DEBUG nova.compute.manager [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.590 253665 DEBUG oslo_concurrency.lockutils [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.590 253665 DEBUG oslo_concurrency.lockutils [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.590 253665 DEBUG oslo_concurrency.lockutils [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.590 253665 DEBUG nova.compute.manager [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] No waiting events found dispatching network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.591 253665 WARNING nova.compute.manager [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received unexpected event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 for instance with vm_state rescued and task_state None.#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.591 253665 DEBUG nova.compute.manager [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.591 253665 DEBUG oslo_concurrency.lockutils [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.591 253665 DEBUG oslo_concurrency.lockutils [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.591 253665 DEBUG oslo_concurrency.lockutils [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.591 253665 DEBUG nova.compute.manager [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] No waiting events found dispatching network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.592 253665 WARNING nova.compute.manager [req-2ce63068-9ddb-4aa5-baab-7f5bd0d7efe6 req-2ac00fe8-d1c5-4c5a-9a1d-62f7f9dcafcd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received unexpected event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 for instance with vm_state rescued and task_state None.#033[00m
Nov 22 04:24:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:24:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2502384094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.978 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:24:33 np0005532048 nova_compute[253661]: 2025-11-22 09:24:33.984 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:24:34 np0005532048 nova_compute[253661]: 2025-11-22 09:24:34.001 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:24:34 np0005532048 nova_compute[253661]: 2025-11-22 09:24:34.022 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:24:34 np0005532048 nova_compute[253661]: 2025-11-22 09:24:34.023 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:34 np0005532048 nova_compute[253661]: 2025-11-22 09:24:34.205 253665 DEBUG nova.compute.manager [req-db20f231-4291-4502-a57a-35947bbfd850 req-22f66485-6401-453c-bcb0-5301758be9af 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Received event network-vif-plugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:24:34 np0005532048 nova_compute[253661]: 2025-11-22 09:24:34.206 253665 DEBUG oslo_concurrency.lockutils [req-db20f231-4291-4502-a57a-35947bbfd850 req-22f66485-6401-453c-bcb0-5301758be9af 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:34 np0005532048 nova_compute[253661]: 2025-11-22 09:24:34.206 253665 DEBUG oslo_concurrency.lockutils [req-db20f231-4291-4502-a57a-35947bbfd850 req-22f66485-6401-453c-bcb0-5301758be9af 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:34 np0005532048 nova_compute[253661]: 2025-11-22 09:24:34.207 253665 DEBUG oslo_concurrency.lockutils [req-db20f231-4291-4502-a57a-35947bbfd850 req-22f66485-6401-453c-bcb0-5301758be9af 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:34 np0005532048 nova_compute[253661]: 2025-11-22 09:24:34.207 253665 DEBUG nova.compute.manager [req-db20f231-4291-4502-a57a-35947bbfd850 req-22f66485-6401-453c-bcb0-5301758be9af 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] No waiting events found dispatching network-vif-plugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:24:34 np0005532048 nova_compute[253661]: 2025-11-22 09:24:34.207 253665 WARNING nova.compute.manager [req-db20f231-4291-4502-a57a-35947bbfd850 req-22f66485-6401-453c-bcb0-5301758be9af 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Received unexpected event network-vif-plugged-56dc3604-5308-40cd-a3a8-f768a68f4ef6 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:24:35 np0005532048 nova_compute[253661]: 2025-11-22 09:24:35.493 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 419 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.5 MiB/s wr, 235 op/s
Nov 22 04:24:36 np0005532048 nova_compute[253661]: 2025-11-22 09:24:36.023 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:24:36 np0005532048 nova_compute[253661]: 2025-11-22 09:24:36.024 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:24:36 np0005532048 nova_compute[253661]: 2025-11-22 09:24:36.024 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:24:36 np0005532048 nova_compute[253661]: 2025-11-22 09:24:36.042 253665 DEBUG nova.compute.manager [req-00a9579d-c8b0-49b2-9253-be908169f836 req-12f74ee1-7b4e-4be3-b8ee-216ac529fddc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:24:36 np0005532048 nova_compute[253661]: 2025-11-22 09:24:36.042 253665 DEBUG nova.compute.manager [req-00a9579d-c8b0-49b2-9253-be908169f836 req-12f74ee1-7b4e-4be3-b8ee-216ac529fddc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing instance network info cache due to event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:24:36 np0005532048 nova_compute[253661]: 2025-11-22 09:24:36.042 253665 DEBUG oslo_concurrency.lockutils [req-00a9579d-c8b0-49b2-9253-be908169f836 req-12f74ee1-7b4e-4be3-b8ee-216ac529fddc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:24:36 np0005532048 nova_compute[253661]: 2025-11-22 09:24:36.043 253665 DEBUG oslo_concurrency.lockutils [req-00a9579d-c8b0-49b2-9253-be908169f836 req-12f74ee1-7b4e-4be3-b8ee-216ac529fddc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:24:36 np0005532048 nova_compute[253661]: 2025-11-22 09:24:36.043 253665 DEBUG nova.network.neutron [req-00a9579d-c8b0-49b2-9253-be908169f836 req-12f74ee1-7b4e-4be3-b8ee-216ac529fddc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:24:36 np0005532048 nova_compute[253661]: 2025-11-22 09:24:36.689 253665 DEBUG nova.compute.manager [req-1cc4e63e-d4b2-4ec2-8bb7-7f64867dc468 req-555e734b-5cd1-4ccf-ad49-97dca850f4b0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:24:36 np0005532048 nova_compute[253661]: 2025-11-22 09:24:36.690 253665 DEBUG nova.compute.manager [req-1cc4e63e-d4b2-4ec2-8bb7-7f64867dc468 req-555e734b-5cd1-4ccf-ad49-97dca850f4b0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing instance network info cache due to event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:24:36 np0005532048 nova_compute[253661]: 2025-11-22 09:24:36.690 253665 DEBUG oslo_concurrency.lockutils [req-1cc4e63e-d4b2-4ec2-8bb7-7f64867dc468 req-555e734b-5cd1-4ccf-ad49-97dca850f4b0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:24:37 np0005532048 nova_compute[253661]: 2025-11-22 09:24:37.467 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 373 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 248 op/s
Nov 22 04:24:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:37 np0005532048 nova_compute[253661]: 2025-11-22 09:24:37.911 253665 DEBUG nova.network.neutron [req-00a9579d-c8b0-49b2-9253-be908169f836 req-12f74ee1-7b4e-4be3-b8ee-216ac529fddc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updated VIF entry in instance network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:24:37 np0005532048 nova_compute[253661]: 2025-11-22 09:24:37.912 253665 DEBUG nova.network.neutron [req-00a9579d-c8b0-49b2-9253-be908169f836 req-12f74ee1-7b4e-4be3-b8ee-216ac529fddc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updating instance_info_cache with network_info: [{"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:24:37 np0005532048 nova_compute[253661]: 2025-11-22 09:24:37.925 253665 DEBUG oslo_concurrency.lockutils [req-00a9579d-c8b0-49b2-9253-be908169f836 req-12f74ee1-7b4e-4be3-b8ee-216ac529fddc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:24:37 np0005532048 nova_compute[253661]: 2025-11-22 09:24:37.926 253665 DEBUG oslo_concurrency.lockutils [req-1cc4e63e-d4b2-4ec2-8bb7-7f64867dc468 req-555e734b-5cd1-4ccf-ad49-97dca850f4b0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:24:37 np0005532048 nova_compute[253661]: 2025-11-22 09:24:37.926 253665 DEBUG nova.network.neutron [req-1cc4e63e-d4b2-4ec2-8bb7-7f64867dc468 req-555e734b-5cd1-4ccf-ad49-97dca850f4b0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:24:38 np0005532048 podman[336034]: 2025-11-22 09:24:38.468161401 +0000 UTC m=+0.144670648 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:24:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 326 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.5 MiB/s wr, 268 op/s
Nov 22 04:24:39 np0005532048 nova_compute[253661]: 2025-11-22 09:24:39.701 253665 DEBUG nova.network.neutron [req-1cc4e63e-d4b2-4ec2-8bb7-7f64867dc468 req-555e734b-5cd1-4ccf-ad49-97dca850f4b0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updated VIF entry in instance network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:24:39 np0005532048 nova_compute[253661]: 2025-11-22 09:24:39.702 253665 DEBUG nova.network.neutron [req-1cc4e63e-d4b2-4ec2-8bb7-7f64867dc468 req-555e734b-5cd1-4ccf-ad49-97dca850f4b0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updating instance_info_cache with network_info: [{"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:24:39 np0005532048 nova_compute[253661]: 2025-11-22 09:24:39.726 253665 DEBUG oslo_concurrency.lockutils [req-1cc4e63e-d4b2-4ec2-8bb7-7f64867dc468 req-555e734b-5cd1-4ccf-ad49-97dca850f4b0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:24:39 np0005532048 nova_compute[253661]: 2025-11-22 09:24:39.848 253665 DEBUG nova.compute.manager [req-913bf776-849b-42ff-9405-db563f2d2cd1 req-8ffef3c0-23be-4bfe-ac9f-fe0c165be371 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:24:39 np0005532048 nova_compute[253661]: 2025-11-22 09:24:39.849 253665 DEBUG nova.compute.manager [req-913bf776-849b-42ff-9405-db563f2d2cd1 req-8ffef3c0-23be-4bfe-ac9f-fe0c165be371 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing instance network info cache due to event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:24:39 np0005532048 nova_compute[253661]: 2025-11-22 09:24:39.849 253665 DEBUG oslo_concurrency.lockutils [req-913bf776-849b-42ff-9405-db563f2d2cd1 req-8ffef3c0-23be-4bfe-ac9f-fe0c165be371 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:24:39 np0005532048 nova_compute[253661]: 2025-11-22 09:24:39.850 253665 DEBUG oslo_concurrency.lockutils [req-913bf776-849b-42ff-9405-db563f2d2cd1 req-8ffef3c0-23be-4bfe-ac9f-fe0c165be371 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:24:39 np0005532048 nova_compute[253661]: 2025-11-22 09:24:39.850 253665 DEBUG nova.network.neutron [req-913bf776-849b-42ff-9405-db563f2d2cd1 req-8ffef3c0-23be-4bfe-ac9f-fe0c165be371 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:24:40 np0005532048 nova_compute[253661]: 2025-11-22 09:24:40.494 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:41 np0005532048 nova_compute[253661]: 2025-11-22 09:24:41.362 253665 INFO nova.virt.libvirt.driver [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Deleting instance files /var/lib/nova/instances/99df8fe4-a61a-40d5-b089-90de5d98050f_del#033[00m
Nov 22 04:24:41 np0005532048 nova_compute[253661]: 2025-11-22 09:24:41.363 253665 INFO nova.virt.libvirt.driver [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Deletion of /var/lib/nova/instances/99df8fe4-a61a-40d5-b089-90de5d98050f_del complete#033[00m
Nov 22 04:24:41 np0005532048 nova_compute[253661]: 2025-11-22 09:24:41.423 253665 INFO nova.compute.manager [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Took 9.98 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:24:41 np0005532048 nova_compute[253661]: 2025-11-22 09:24:41.424 253665 DEBUG oslo.service.loopingcall [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:24:41 np0005532048 nova_compute[253661]: 2025-11-22 09:24:41.424 253665 DEBUG nova.compute.manager [-] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:24:41 np0005532048 nova_compute[253661]: 2025-11-22 09:24:41.424 253665 DEBUG nova.network.neutron [-] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:24:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 326 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.7 MiB/s wr, 207 op/s
Nov 22 04:24:41 np0005532048 nova_compute[253661]: 2025-11-22 09:24:41.736 253665 INFO nova.virt.libvirt.driver [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Deleting instance files /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7_del#033[00m
Nov 22 04:24:41 np0005532048 nova_compute[253661]: 2025-11-22 09:24:41.737 253665 INFO nova.virt.libvirt.driver [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Deletion of /var/lib/nova/instances/9cef4b12-b28c-47df-9af2-a0bf9934e4d7_del complete#033[00m
Nov 22 04:24:41 np0005532048 nova_compute[253661]: 2025-11-22 09:24:41.796 253665 INFO nova.compute.manager [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Took 9.36 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:24:41 np0005532048 nova_compute[253661]: 2025-11-22 09:24:41.797 253665 DEBUG oslo.service.loopingcall [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:24:41 np0005532048 nova_compute[253661]: 2025-11-22 09:24:41.798 253665 DEBUG nova.compute.manager [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:24:41 np0005532048 nova_compute[253661]: 2025-11-22 09:24:41.798 253665 DEBUG nova.network.neutron [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:24:41 np0005532048 nova_compute[253661]: 2025-11-22 09:24:41.908 253665 DEBUG nova.network.neutron [req-913bf776-849b-42ff-9405-db563f2d2cd1 req-8ffef3c0-23be-4bfe-ac9f-fe0c165be371 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updated VIF entry in instance network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:24:41 np0005532048 nova_compute[253661]: 2025-11-22 09:24:41.909 253665 DEBUG nova.network.neutron [req-913bf776-849b-42ff-9405-db563f2d2cd1 req-8ffef3c0-23be-4bfe-ac9f-fe0c165be371 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updating instance_info_cache with network_info: [{"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:24:41 np0005532048 nova_compute[253661]: 2025-11-22 09:24:41.931 253665 DEBUG oslo_concurrency.lockutils [req-913bf776-849b-42ff-9405-db563f2d2cd1 req-8ffef3c0-23be-4bfe-ac9f-fe0c165be371 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:24:42 np0005532048 nova_compute[253661]: 2025-11-22 09:24:42.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:42 np0005532048 nova_compute[253661]: 2025-11-22 09:24:42.744 253665 DEBUG nova.network.neutron [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:24:42 np0005532048 nova_compute[253661]: 2025-11-22 09:24:42.758 253665 DEBUG nova.network.neutron [-] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:24:42 np0005532048 nova_compute[253661]: 2025-11-22 09:24:42.761 253665 INFO nova.compute.manager [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Took 0.96 seconds to deallocate network for instance.#033[00m
Nov 22 04:24:42 np0005532048 nova_compute[253661]: 2025-11-22 09:24:42.789 253665 INFO nova.compute.manager [-] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Took 1.36 seconds to deallocate network for instance.#033[00m
Nov 22 04:24:42 np0005532048 nova_compute[253661]: 2025-11-22 09:24:42.833 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:42 np0005532048 nova_compute[253661]: 2025-11-22 09:24:42.834 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:42 np0005532048 nova_compute[253661]: 2025-11-22 09:24:42.852 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:42 np0005532048 nova_compute[253661]: 2025-11-22 09:24:42.872 253665 DEBUG nova.compute.manager [req-3f2a8d9d-4316-4682-9b34-3d0eb4f2f17a req-2a54d1aa-3929-4a84-b18e-52a52debe7ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Received event network-vif-deleted-edd81944-578b-4533-9db7-f17a3fb84211 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:24:42 np0005532048 nova_compute[253661]: 2025-11-22 09:24:42.959 253665 DEBUG oslo_concurrency.processutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:24:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:24:43 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/959503324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:24:43 np0005532048 nova_compute[253661]: 2025-11-22 09:24:43.498 253665 DEBUG oslo_concurrency.processutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:24:43 np0005532048 nova_compute[253661]: 2025-11-22 09:24:43.505 253665 DEBUG nova.compute.provider_tree [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:24:43 np0005532048 nova_compute[253661]: 2025-11-22 09:24:43.519 253665 DEBUG nova.scheduler.client.report [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:24:43 np0005532048 nova_compute[253661]: 2025-11-22 09:24:43.541 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:43 np0005532048 nova_compute[253661]: 2025-11-22 09:24:43.545 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 326 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.7 MiB/s wr, 223 op/s
Nov 22 04:24:43 np0005532048 nova_compute[253661]: 2025-11-22 09:24:43.578 253665 INFO nova.scheduler.client.report [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Deleted allocations for instance 9cef4b12-b28c-47df-9af2-a0bf9934e4d7#033[00m
Nov 22 04:24:43 np0005532048 nova_compute[253661]: 2025-11-22 09:24:43.648 253665 DEBUG oslo_concurrency.lockutils [None req-52ce365a-6a9c-4bb0-aef6-f2fab0ba8ffb 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9cef4b12-b28c-47df-9af2-a0bf9934e4d7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:43 np0005532048 nova_compute[253661]: 2025-11-22 09:24:43.695 253665 DEBUG oslo_concurrency.processutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:24:43 np0005532048 nova_compute[253661]: 2025-11-22 09:24:43.749 253665 DEBUG nova.compute.manager [req-5028dd52-4207-49d2-9dc0-c9287014a000 req-82065a50-74f2-4526-9a62-1abe17d0df76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Received event network-vif-deleted-56dc3604-5308-40cd-a3a8-f768a68f4ef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:24:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:24:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/998847628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:24:44 np0005532048 nova_compute[253661]: 2025-11-22 09:24:44.201 253665 DEBUG oslo_concurrency.processutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:24:44 np0005532048 nova_compute[253661]: 2025-11-22 09:24:44.209 253665 DEBUG nova.compute.provider_tree [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:24:44 np0005532048 nova_compute[253661]: 2025-11-22 09:24:44.225 253665 DEBUG nova.scheduler.client.report [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:24:44 np0005532048 nova_compute[253661]: 2025-11-22 09:24:44.249 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:44 np0005532048 nova_compute[253661]: 2025-11-22 09:24:44.273 253665 INFO nova.scheduler.client.report [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Deleted allocations for instance 99df8fe4-a61a-40d5-b089-90de5d98050f#033[00m
Nov 22 04:24:44 np0005532048 nova_compute[253661]: 2025-11-22 09:24:44.331 253665 DEBUG oslo_concurrency.lockutils [None req-83c121b1-3a0a-484d-a636-3f114fe97ad4 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "99df8fe4-a61a-40d5-b089-90de5d98050f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:44 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 22 04:24:44 np0005532048 nova_compute[253661]: 2025-11-22 09:24:44.609 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803469.6081023, 9cef4b12-b28c-47df-9af2-a0bf9934e4d7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:24:44 np0005532048 nova_compute[253661]: 2025-11-22 09:24:44.609 253665 INFO nova.compute.manager [-] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:24:44 np0005532048 nova_compute[253661]: 2025-11-22 09:24:44.642 253665 DEBUG nova.compute.manager [None req-8a90239e-3e0a-4cd8-ba5a-269bcf6ed3ac - - - - - -] [instance: 9cef4b12-b28c-47df-9af2-a0bf9934e4d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:24:45 np0005532048 nova_compute[253661]: 2025-11-22 09:24:45.068 253665 DEBUG nova.compute.manager [req-3f65d4fa-08e6-47db-acf4-2a9889065572 req-c9556d4e-2ffa-497a-93c7-6d71f837457b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:24:45 np0005532048 nova_compute[253661]: 2025-11-22 09:24:45.068 253665 DEBUG nova.compute.manager [req-3f65d4fa-08e6-47db-acf4-2a9889065572 req-c9556d4e-2ffa-497a-93c7-6d71f837457b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing instance network info cache due to event network-changed-15bf0e02-e093-4f45-995f-abb925d1cf71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:24:45 np0005532048 nova_compute[253661]: 2025-11-22 09:24:45.068 253665 DEBUG oslo_concurrency.lockutils [req-3f65d4fa-08e6-47db-acf4-2a9889065572 req-c9556d4e-2ffa-497a-93c7-6d71f837457b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:24:45 np0005532048 nova_compute[253661]: 2025-11-22 09:24:45.069 253665 DEBUG oslo_concurrency.lockutils [req-3f65d4fa-08e6-47db-acf4-2a9889065572 req-c9556d4e-2ffa-497a-93c7-6d71f837457b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:24:45 np0005532048 nova_compute[253661]: 2025-11-22 09:24:45.069 253665 DEBUG nova.network.neutron [req-3f65d4fa-08e6-47db-acf4-2a9889065572 req-c9556d4e-2ffa-497a-93c7-6d71f837457b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Refreshing network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:24:45 np0005532048 nova_compute[253661]: 2025-11-22 09:24:45.495 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 326 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 34 KiB/s wr, 151 op/s
Nov 22 04:24:46 np0005532048 nova_compute[253661]: 2025-11-22 09:24:46.886 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803471.8844786, 99df8fe4-a61a-40d5-b089-90de5d98050f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:24:46 np0005532048 nova_compute[253661]: 2025-11-22 09:24:46.887 253665 INFO nova.compute.manager [-] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:24:46 np0005532048 nova_compute[253661]: 2025-11-22 09:24:46.913 253665 DEBUG nova.compute.manager [None req-c9f6297d-0647-4d78-b07d-f781cd36e47d - - - - - -] [instance: 99df8fe4-a61a-40d5-b089-90de5d98050f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:24:47 np0005532048 nova_compute[253661]: 2025-11-22 09:24:47.351 253665 DEBUG nova.network.neutron [req-3f65d4fa-08e6-47db-acf4-2a9889065572 req-c9556d4e-2ffa-497a-93c7-6d71f837457b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updated VIF entry in instance network info cache for port 15bf0e02-e093-4f45-995f-abb925d1cf71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:24:47 np0005532048 nova_compute[253661]: 2025-11-22 09:24:47.352 253665 DEBUG nova.network.neutron [req-3f65d4fa-08e6-47db-acf4-2a9889065572 req-c9556d4e-2ffa-497a-93c7-6d71f837457b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updating instance_info_cache with network_info: [{"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:24:47 np0005532048 nova_compute[253661]: 2025-11-22 09:24:47.368 253665 DEBUG oslo_concurrency.lockutils [req-3f65d4fa-08e6-47db-acf4-2a9889065572 req-c9556d4e-2ffa-497a-93c7-6d71f837457b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-a207d8c4-4fce-4fe6-9ba5-548a92e757ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:24:47 np0005532048 nova_compute[253661]: 2025-11-22 09:24:47.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 326 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 17 KiB/s wr, 137 op/s
Nov 22 04:24:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 326 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 828 KiB/s rd, 14 KiB/s wr, 106 op/s
Nov 22 04:24:50 np0005532048 nova_compute[253661]: 2025-11-22 09:24:50.498 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:50 np0005532048 nova_compute[253661]: 2025-11-22 09:24:50.537 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "672288f2-2f9b-4643-9ebf-a949ad316298" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:50 np0005532048 nova_compute[253661]: 2025-11-22 09:24:50.537 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:50 np0005532048 nova_compute[253661]: 2025-11-22 09:24:50.555 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:24:50 np0005532048 nova_compute[253661]: 2025-11-22 09:24:50.633 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:50 np0005532048 nova_compute[253661]: 2025-11-22 09:24:50.634 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:50 np0005532048 nova_compute[253661]: 2025-11-22 09:24:50.640 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:24:50 np0005532048 nova_compute[253661]: 2025-11-22 09:24:50.640 253665 INFO nova.compute.claims [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:24:50 np0005532048 nova_compute[253661]: 2025-11-22 09:24:50.769 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.000 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.002 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.003 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.004 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.004 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.007 253665 INFO nova.compute.manager [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Terminating instance#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.008 253665 DEBUG nova.compute.manager [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:24:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:24:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1747420391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.278 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.286 253665 DEBUG nova.compute.provider_tree [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.328 253665 DEBUG nova.scheduler.client.report [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.350 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.351 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.405 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.405 253665 DEBUG nova.network.neutron [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.419 253665 INFO nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.443 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:24:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 326 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 625 KiB/s rd, 12 KiB/s wr, 77 op/s
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.552 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.553 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.554 253665 INFO nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Creating image(s)#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.580 253665 DEBUG nova.storage.rbd_utils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 672288f2-2f9b-4643-9ebf-a949ad316298_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.606 253665 DEBUG nova.storage.rbd_utils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 672288f2-2f9b-4643-9ebf-a949ad316298_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.631 253665 DEBUG nova.storage.rbd_utils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 672288f2-2f9b-4643-9ebf-a949ad316298_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.636 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.677 253665 DEBUG nova.policy [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9517b176edf1498d8cf7afc439fc7f04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4426b820f0e4f21a32402b443ca6282', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.729 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.730 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.731 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.732 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.756 253665 DEBUG nova.storage.rbd_utils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 672288f2-2f9b-4643-9ebf-a949ad316298_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:24:51 np0005532048 nova_compute[253661]: 2025-11-22 09:24:51.762 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 672288f2-2f9b-4643-9ebf-a949ad316298_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:24:52 np0005532048 kernel: tap15bf0e02-e0 (unregistering): left promiscuous mode
Nov 22 04:24:52 np0005532048 NetworkManager[48920]: <info>  [1763803492.0307] device (tap15bf0e02-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:24:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:24:52Z|00820|binding|INFO|Releasing lport 15bf0e02-e093-4f45-995f-abb925d1cf71 from this chassis (sb_readonly=0)
Nov 22 04:24:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:24:52Z|00821|binding|INFO|Setting lport 15bf0e02-e093-4f45-995f-abb925d1cf71 down in Southbound
Nov 22 04:24:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:24:52Z|00822|binding|INFO|Removing iface tap15bf0e02-e0 ovn-installed in OVS
Nov 22 04:24:52 np0005532048 nova_compute[253661]: 2025-11-22 09:24:52.040 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:52 np0005532048 nova_compute[253661]: 2025-11-22 09:24:52.043 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:52.050 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:73:cb 10.100.0.14'], port_security=['fa:16:3e:86:73:cb 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a207d8c4-4fce-4fe6-9ba5-548a92e757ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fcc3ab0c-697f-4983-ad7d-7f2a44c0b653', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e78196ec949a45cf803d3e585b603558', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'acde5338-5012-4a35-a74f-7e2170896be1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70ae9d50-6442-4aca-8fcc-29daad21c977, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=15bf0e02-e093-4f45-995f-abb925d1cf71) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:24:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:52.051 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 15bf0e02-e093-4f45-995f-abb925d1cf71 in datapath fcc3ab0c-697f-4983-ad7d-7f2a44c0b653 unbound from our chassis#033[00m
Nov 22 04:24:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:52.052 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fcc3ab0c-697f-4983-ad7d-7f2a44c0b653 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:24:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:24:52.053 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ac671fcc-a7a1-495b-ac05-4c484c42c9a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:24:52 np0005532048 nova_compute[253661]: 2025-11-22 09:24:52.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:24:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:24:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:24:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:24:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:24:52 np0005532048 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Nov 22 04:24:52 np0005532048 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d0000004e.scope: Consumed 13.945s CPU time.
Nov 22 04:24:52 np0005532048 systemd-machined[215941]: Machine qemu-98-instance-0000004e terminated.
Nov 22 04:24:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:24:52
Nov 22 04:24:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:24:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:24:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['backups', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'default.rgw.meta']
Nov 22 04:24:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:24:52 np0005532048 nova_compute[253661]: 2025-11-22 09:24:52.239 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:52 np0005532048 nova_compute[253661]: 2025-11-22 09:24:52.246 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:52 np0005532048 nova_compute[253661]: 2025-11-22 09:24:52.253 253665 INFO nova.virt.libvirt.driver [-] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Instance destroyed successfully.#033[00m
Nov 22 04:24:52 np0005532048 nova_compute[253661]: 2025-11-22 09:24:52.254 253665 DEBUG nova.objects.instance [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lazy-loading 'resources' on Instance uuid a207d8c4-4fce-4fe6-9ba5-548a92e757ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:24:52 np0005532048 nova_compute[253661]: 2025-11-22 09:24:52.266 253665 DEBUG nova.virt.libvirt.vif [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:23:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-533420499',display_name='tempest-ServerRescueTestJSONUnderV235-server-533420499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-533420499',id=78,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:24:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e78196ec949a45cf803d3e585b603558',ramdisk_id='',reservation_id='r-v4cum4j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1716369832',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1716369832-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:24:32Z,user_data=None,user_id='3f7dbcc13af740b491f0498f4ddec69d',uuid=a207d8c4-4fce-4fe6-9ba5-548a92e757ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:24:52 np0005532048 nova_compute[253661]: 2025-11-22 09:24:52.266 253665 DEBUG nova.network.os_vif_util [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Converting VIF {"id": "15bf0e02-e093-4f45-995f-abb925d1cf71", "address": "fa:16:3e:86:73:cb", "network": {"id": "fcc3ab0c-697f-4983-ad7d-7f2a44c0b653", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1643155409-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e78196ec949a45cf803d3e585b603558", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15bf0e02-e0", "ovs_interfaceid": "15bf0e02-e093-4f45-995f-abb925d1cf71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:24:52 np0005532048 nova_compute[253661]: 2025-11-22 09:24:52.267 253665 DEBUG nova.network.os_vif_util [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=15bf0e02-e093-4f45-995f-abb925d1cf71,network=Network(fcc3ab0c-697f-4983-ad7d-7f2a44c0b653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15bf0e02-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:24:52 np0005532048 nova_compute[253661]: 2025-11-22 09:24:52.267 253665 DEBUG os_vif [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=15bf0e02-e093-4f45-995f-abb925d1cf71,network=Network(fcc3ab0c-697f-4983-ad7d-7f2a44c0b653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15bf0e02-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:24:52 np0005532048 nova_compute[253661]: 2025-11-22 09:24:52.269 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:52 np0005532048 nova_compute[253661]: 2025-11-22 09:24:52.269 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15bf0e02-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:24:52 np0005532048 nova_compute[253661]: 2025-11-22 09:24:52.271 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:52 np0005532048 nova_compute[253661]: 2025-11-22 09:24:52.274 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:24:52 np0005532048 nova_compute[253661]: 2025-11-22 09:24:52.276 253665 INFO os_vif [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=15bf0e02-e093-4f45-995f-abb925d1cf71,network=Network(fcc3ab0c-697f-4983-ad7d-7f2a44c0b653),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15bf0e02-e0')#033[00m
Nov 22 04:24:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:24:52 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 1d61e125-cec0-4738-bd98-311ab0a02730 does not exist
Nov 22 04:24:52 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 5f77a31c-9489-4091-b3d5-0351d6d040af does not exist
Nov 22 04:24:52 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 6463a12b-0a2d-4980-ad94-c1776bb62029 does not exist
Nov 22 04:24:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:24:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:24:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:24:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:24:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:24:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:24:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:24:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:24:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:24:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:24:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:24:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:24:52 np0005532048 nova_compute[253661]: 2025-11-22 09:24:52.937 253665 DEBUG nova.network.neutron [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Successfully created port: b173a545-d888-43c0-a1fb-2969a871663c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:24:53 np0005532048 podman[336524]: 2025-11-22 09:24:53.011517633 +0000 UTC m=+0.035852127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:24:53 np0005532048 podman[336524]: 2025-11-22 09:24:53.153762392 +0000 UTC m=+0.178096876 container create 6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jemison, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:24:53 np0005532048 systemd[1]: Started libpod-conmon-6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0.scope.
Nov 22 04:24:53 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:24:53 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:24:53 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:24:53 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:24:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 328 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 627 KiB/s rd, 22 KiB/s wr, 81 op/s
Nov 22 04:24:53 np0005532048 podman[336524]: 2025-11-22 09:24:53.593931284 +0000 UTC m=+0.618265788 container init 6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 04:24:53 np0005532048 podman[336524]: 2025-11-22 09:24:53.609246954 +0000 UTC m=+0.633581408 container start 6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jemison, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 04:24:53 np0005532048 systemd[1]: libpod-6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0.scope: Deactivated successfully.
Nov 22 04:24:53 np0005532048 romantic_jemison[336540]: 167 167
Nov 22 04:24:53 np0005532048 conmon[336540]: conmon 6ee7d9bc1dd75794731a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0.scope/container/memory.events
Nov 22 04:24:53 np0005532048 podman[336524]: 2025-11-22 09:24:53.687579288 +0000 UTC m=+0.711913732 container attach 6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 04:24:53 np0005532048 podman[336524]: 2025-11-22 09:24:53.688545102 +0000 UTC m=+0.712879586 container died 6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 04:24:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay-85652c12bed1bcf6410be7b162be0f36e8b6692d5e24286977287447f29fa7a5-merged.mount: Deactivated successfully.
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.020 253665 DEBUG nova.network.neutron [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Successfully updated port: b173a545-d888-43c0-a1fb-2969a871663c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.040 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 672288f2-2f9b-4643-9ebf-a949ad316298_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.073 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "refresh_cache-672288f2-2f9b-4643-9ebf-a949ad316298" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.074 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquired lock "refresh_cache-672288f2-2f9b-4643-9ebf-a949ad316298" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.074 253665 DEBUG nova.network.neutron [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.124 253665 DEBUG nova.storage.rbd_utils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] resizing rbd image 672288f2-2f9b-4643-9ebf-a949ad316298_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.341 253665 DEBUG nova.compute.manager [req-b7ca6c53-2950-4a0d-bf3c-2734912b6e46 req-b4623745-766c-4659-8252-1eca9dd3918b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received event network-changed-b173a545-d888-43c0-a1fb-2969a871663c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.342 253665 DEBUG nova.compute.manager [req-b7ca6c53-2950-4a0d-bf3c-2734912b6e46 req-b4623745-766c-4659-8252-1eca9dd3918b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Refreshing instance network info cache due to event network-changed-b173a545-d888-43c0-a1fb-2969a871663c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.342 253665 DEBUG oslo_concurrency.lockutils [req-b7ca6c53-2950-4a0d-bf3c-2734912b6e46 req-b4623745-766c-4659-8252-1eca9dd3918b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-672288f2-2f9b-4643-9ebf-a949ad316298" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:24:54 np0005532048 podman[336524]: 2025-11-22 09:24:54.350210657 +0000 UTC m=+1.374545141 container remove 6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jemison, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 04:24:54 np0005532048 systemd[1]: libpod-conmon-6ee7d9bc1dd75794731a983d92d13cda889cbb81fde6e1a516daa11c5d4355b0.scope: Deactivated successfully.
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.515 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.517 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.542 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.548 253665 DEBUG nova.network.neutron [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:24:54 np0005532048 podman[336620]: 2025-11-22 09:24:54.544679389 +0000 UTC m=+0.024435952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.641 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.642 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:24:54 np0005532048 podman[336620]: 2025-11-22 09:24:54.646718566 +0000 UTC m=+0.126475099 container create 7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_keller, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.652 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.653 253665 INFO nova.compute.claims [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:24:54 np0005532048 systemd[1]: Started libpod-conmon-7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1.scope.
Nov 22 04:24:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:24:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a36edd85aa9e40dd7ef32445d42ef0bfdf92322d1e74f98fca88e4a34168fba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a36edd85aa9e40dd7ef32445d42ef0bfdf92322d1e74f98fca88e4a34168fba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a36edd85aa9e40dd7ef32445d42ef0bfdf92322d1e74f98fca88e4a34168fba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a36edd85aa9e40dd7ef32445d42ef0bfdf92322d1e74f98fca88e4a34168fba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a36edd85aa9e40dd7ef32445d42ef0bfdf92322d1e74f98fca88e4a34168fba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:54 np0005532048 nova_compute[253661]: 2025-11-22 09:24:54.831 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:24:54 np0005532048 podman[336620]: 2025-11-22 09:24:54.954984808 +0000 UTC m=+0.434741361 container init 7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_keller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 04:24:54 np0005532048 podman[336620]: 2025-11-22 09:24:54.963628607 +0000 UTC m=+0.443385140 container start 7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:24:55 np0005532048 podman[336620]: 2025-11-22 09:24:55.073991905 +0000 UTC m=+0.553748518 container attach 7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_keller, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.164 253665 DEBUG nova.objects.instance [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'migration_context' on Instance uuid 672288f2-2f9b-4643-9ebf-a949ad316298 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.178 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.179 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Ensure instance console log exists: /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.180 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.180 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.180 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:24:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:24:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3161226656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.272 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.280 253665 DEBUG nova.compute.provider_tree [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.292 253665 DEBUG nova.scheduler.client.report [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.312 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.313 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.363 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.363 253665 DEBUG nova.network.neutron [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.385 253665 INFO nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.408 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.501 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.505 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.507 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.507 253665 INFO nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Creating image(s)
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.535 253665 DEBUG nova.storage.rbd_utils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:24:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 340 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 620 KiB/s rd, 504 KiB/s wr, 72 op/s
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.570 253665 DEBUG nova.storage.rbd_utils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:24:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:24:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:24:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:24:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:24:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:24:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:24:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:24:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:24:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:24:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.607 253665 DEBUG nova.storage.rbd_utils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.611 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.654 253665 DEBUG nova.policy [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7e5709393702478dbf0bd566dc94d7fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9b06c711e582499ab500917d85e27e3c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.663 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.706 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.707 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.708 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.708 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.732 253665 DEBUG nova.storage.rbd_utils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:24:55 np0005532048 nova_compute[253661]: 2025-11-22 09:24:55.736 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.201 253665 DEBUG nova.network.neutron [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Updating instance_info_cache with network_info: [{"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.231 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Releasing lock "refresh_cache-672288f2-2f9b-4643-9ebf-a949ad316298" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.231 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Instance network_info: |[{"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.232 253665 DEBUG oslo_concurrency.lockutils [req-b7ca6c53-2950-4a0d-bf3c-2734912b6e46 req-b4623745-766c-4659-8252-1eca9dd3918b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-672288f2-2f9b-4643-9ebf-a949ad316298" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.233 253665 DEBUG nova.network.neutron [req-b7ca6c53-2950-4a0d-bf3c-2734912b6e46 req-b4623745-766c-4659-8252-1eca9dd3918b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Refreshing network info cache for port b173a545-d888-43c0-a1fb-2969a871663c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.237 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Start _get_guest_xml network_info=[{"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.245 253665 WARNING nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.257 253665 DEBUG nova.virt.libvirt.host [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.259 253665 DEBUG nova.virt.libvirt.host [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.262 253665 DEBUG nova.virt.libvirt.host [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.263 253665 DEBUG nova.virt.libvirt.host [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.264 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.264 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.265 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.265 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.265 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.265 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.266 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.266 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.266 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.267 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.267 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.267 253665 DEBUG nova.virt.hardware [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:24:56 np0005532048 eloquent_keller[336637]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:24:56 np0005532048 eloquent_keller[336637]: --> relative data size: 1.0
Nov 22 04:24:56 np0005532048 eloquent_keller[336637]: --> All data devices are unavailable
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.273 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:24:56 np0005532048 systemd[1]: libpod-7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1.scope: Deactivated successfully.
Nov 22 04:24:56 np0005532048 podman[336620]: 2025-11-22 09:24:56.325172863 +0000 UTC m=+1.804929396 container died 7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 04:24:56 np0005532048 systemd[1]: libpod-7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1.scope: Consumed 1.185s CPU time.
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.406 253665 DEBUG nova.compute.manager [req-2adbda62-bb9c-4b72-b4b0-f7d8b36afacd req-625729df-269e-4505-aedd-8c5bee19b567 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-vif-unplugged-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.407 253665 DEBUG oslo_concurrency.lockutils [req-2adbda62-bb9c-4b72-b4b0-f7d8b36afacd req-625729df-269e-4505-aedd-8c5bee19b567 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.407 253665 DEBUG oslo_concurrency.lockutils [req-2adbda62-bb9c-4b72-b4b0-f7d8b36afacd req-625729df-269e-4505-aedd-8c5bee19b567 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.407 253665 DEBUG oslo_concurrency.lockutils [req-2adbda62-bb9c-4b72-b4b0-f7d8b36afacd req-625729df-269e-4505-aedd-8c5bee19b567 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.407 253665 DEBUG nova.compute.manager [req-2adbda62-bb9c-4b72-b4b0-f7d8b36afacd req-625729df-269e-4505-aedd-8c5bee19b567 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] No waiting events found dispatching network-vif-unplugged-15bf0e02-e093-4f45-995f-abb925d1cf71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.408 253665 DEBUG nova.compute.manager [req-2adbda62-bb9c-4b72-b4b0-f7d8b36afacd req-625729df-269e-4505-aedd-8c5bee19b567 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-vif-unplugged-15bf0e02-e093-4f45-995f-abb925d1cf71 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:24:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8a36edd85aa9e40dd7ef32445d42ef0bfdf92322d1e74f98fca88e4a34168fba-merged.mount: Deactivated successfully.
Nov 22 04:24:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:24:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4159468975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.751 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.782 253665 DEBUG nova.storage.rbd_utils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 672288f2-2f9b-4643-9ebf-a949ad316298_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.786 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:24:56 np0005532048 nova_compute[253661]: 2025-11-22 09:24:56.923 253665 DEBUG nova.network.neutron [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Successfully created port: 702cad91-d4bb-4f0c-b378-7e05e928ad09 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:24:57 np0005532048 nova_compute[253661]: 2025-11-22 09:24:57.271 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 326 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 629 KiB/s rd, 1.1 MiB/s wr, 88 op/s
Nov 22 04:24:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:58 np0005532048 podman[336620]: 2025-11-22 09:24:58.32204801 +0000 UTC m=+3.801804573 container remove 7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_keller, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:24:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:24:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3550476733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.392 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.394 253665 DEBUG nova.virt.libvirt.vif [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:24:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-287905590',display_name='tempest-ServersTestJSON-server-287905590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-287905590',id=81,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO/elwrk2u0pM3LkpugKE0r9pgrYufUX3T9HzzxAQRxB89i5bBiA7C9yWlosrYihPiHzlNqfpGLV7W1tbdzbGLdP3NreuJMAPnqDTjhMrZ8g7ZHEYCTPrFyftTjdWlo1pA==',key_name='tempest-key-1706317659',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-ic2f0n30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:24:51Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=672288f2-2f9b-4643-9ebf-a949ad316298,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.395 253665 DEBUG nova.network.os_vif_util [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.396 253665 DEBUG nova.network.os_vif_util [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f1:9e,bridge_name='br-int',has_traffic_filtering=True,id=b173a545-d888-43c0-a1fb-2969a871663c,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb173a545-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.397 253665 DEBUG nova.objects.instance [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'pci_devices' on Instance uuid 672288f2-2f9b-4643-9ebf-a949ad316298 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.410 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  <uuid>672288f2-2f9b-4643-9ebf-a949ad316298</uuid>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  <name>instance-00000051</name>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersTestJSON-server-287905590</nova:name>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:24:56</nova:creationTime>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:24:58 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:        <nova:user uuid="9517b176edf1498d8cf7afc439fc7f04">tempest-ServersTestJSON-1454009974-project-member</nova:user>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:        <nova:project uuid="b4426b820f0e4f21a32402b443ca6282">tempest-ServersTestJSON-1454009974</nova:project>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:        <nova:port uuid="b173a545-d888-43c0-a1fb-2969a871663c">
Nov 22 04:24:58 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <entry name="serial">672288f2-2f9b-4643-9ebf-a949ad316298</entry>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <entry name="uuid">672288f2-2f9b-4643-9ebf-a949ad316298</entry>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/672288f2-2f9b-4643-9ebf-a949ad316298_disk">
Nov 22 04:24:58 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:24:58 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/672288f2-2f9b-4643-9ebf-a949ad316298_disk.config">
Nov 22 04:24:58 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:24:58 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:a0:f1:9e"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <target dev="tapb173a545-d8"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298/console.log" append="off"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:24:58 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:24:58 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:24:58 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:24:58 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:24:58 np0005532048 systemd[1]: libpod-conmon-7b9bb953a7c4fabf21cf9e90b958fac3440419aeb01c8db84b21d3a540a9d7e1.scope: Deactivated successfully.
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.411 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Preparing to wait for external event network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.412 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.412 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.412 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.413 253665 DEBUG nova.virt.libvirt.vif [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:24:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-287905590',display_name='tempest-ServersTestJSON-server-287905590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-287905590',id=81,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO/elwrk2u0pM3LkpugKE0r9pgrYufUX3T9HzzxAQRxB89i5bBiA7C9yWlosrYihPiHzlNqfpGLV7W1tbdzbGLdP3NreuJMAPnqDTjhMrZ8g7ZHEYCTPrFyftTjdWlo1pA==',key_name='tempest-key-1706317659',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-ic2f0n30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:24:51Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=672288f2-2f9b-4643-9ebf-a949ad316298,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.413 253665 DEBUG nova.network.os_vif_util [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.414 253665 DEBUG nova.network.os_vif_util [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f1:9e,bridge_name='br-int',has_traffic_filtering=True,id=b173a545-d888-43c0-a1fb-2969a871663c,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb173a545-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.414 253665 DEBUG os_vif [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f1:9e,bridge_name='br-int',has_traffic_filtering=True,id=b173a545-d888-43c0-a1fb-2969a871663c,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb173a545-d8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.415 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.415 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.415 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.419 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.419 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb173a545-d8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.420 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb173a545-d8, col_values=(('external_ids', {'iface-id': 'b173a545-d888-43c0-a1fb-2969a871663c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:f1:9e', 'vm-uuid': '672288f2-2f9b-4643-9ebf-a949ad316298'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:24:58 np0005532048 NetworkManager[48920]: <info>  [1763803498.4236] manager: (tapb173a545-d8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.422 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.426 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.432 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.432 253665 INFO os_vif [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f1:9e,bridge_name='br-int',has_traffic_filtering=True,id=b173a545-d888-43c0-a1fb-2969a871663c,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb173a545-d8')#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.445 253665 DEBUG nova.network.neutron [req-b7ca6c53-2950-4a0d-bf3c-2734912b6e46 req-b4623745-766c-4659-8252-1eca9dd3918b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Updated VIF entry in instance network info cache for port b173a545-d888-43c0-a1fb-2969a871663c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.445 253665 DEBUG nova.network.neutron [req-b7ca6c53-2950-4a0d-bf3c-2734912b6e46 req-b4623745-766c-4659-8252-1eca9dd3918b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Updating instance_info_cache with network_info: [{"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.456 253665 DEBUG oslo_concurrency.lockutils [req-b7ca6c53-2950-4a0d-bf3c-2734912b6e46 req-b4623745-766c-4659-8252-1eca9dd3918b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-672288f2-2f9b-4643-9ebf-a949ad316298" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.546 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.547 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.547 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No VIF found with MAC fa:16:3e:a0:f1:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.548 253665 INFO nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Using config drive
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.570 253665 DEBUG nova.storage.rbd_utils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 672288f2-2f9b-4643-9ebf-a949ad316298_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.594 253665 DEBUG nova.compute.manager [req-7f721e92-93d0-4f68-9ea4-b14e3bfb2bb7 req-d5e75570-ff0a-4451-a938-a41b6b6b31d0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.594 253665 DEBUG oslo_concurrency.lockutils [req-7f721e92-93d0-4f68-9ea4-b14e3bfb2bb7 req-d5e75570-ff0a-4451-a938-a41b6b6b31d0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.595 253665 DEBUG oslo_concurrency.lockutils [req-7f721e92-93d0-4f68-9ea4-b14e3bfb2bb7 req-d5e75570-ff0a-4451-a938-a41b6b6b31d0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.595 253665 DEBUG oslo_concurrency.lockutils [req-7f721e92-93d0-4f68-9ea4-b14e3bfb2bb7 req-d5e75570-ff0a-4451-a938-a41b6b6b31d0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.595 253665 DEBUG nova.compute.manager [req-7f721e92-93d0-4f68-9ea4-b14e3bfb2bb7 req-d5e75570-ff0a-4451-a938-a41b6b6b31d0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] No waiting events found dispatching network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.595 253665 WARNING nova.compute.manager [req-7f721e92-93d0-4f68-9ea4-b14e3bfb2bb7 req-d5e75570-ff0a-4451-a938-a41b6b6b31d0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received unexpected event network-vif-plugged-15bf0e02-e093-4f45-995f-abb925d1cf71 for instance with vm_state rescued and task_state deleting.
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.736 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.740 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.003s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.815 253665 DEBUG nova.storage.rbd_utils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] resizing rbd image 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.931 253665 DEBUG nova.network.neutron [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Successfully updated port: 702cad91-d4bb-4f0c-b378-7e05e928ad09 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.951 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.951 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquired lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:24:58 np0005532048 nova_compute[253661]: 2025-11-22 09:24:58.951 253665 DEBUG nova.network.neutron [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:24:59 np0005532048 nova_compute[253661]: 2025-11-22 09:24:59.002 253665 DEBUG nova.compute.manager [req-b140fb8a-d9ed-4526-a3ab-ab929cdfc08c req-03e9ad5c-137a-45b9-be0b-9ba63ffd89cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received event network-changed-702cad91-d4bb-4f0c-b378-7e05e928ad09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:24:59 np0005532048 nova_compute[253661]: 2025-11-22 09:24:59.003 253665 DEBUG nova.compute.manager [req-b140fb8a-d9ed-4526-a3ab-ab929cdfc08c req-03e9ad5c-137a-45b9-be0b-9ba63ffd89cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Refreshing instance network info cache due to event network-changed-702cad91-d4bb-4f0c-b378-7e05e928ad09. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:24:59 np0005532048 nova_compute[253661]: 2025-11-22 09:24:59.004 253665 DEBUG oslo_concurrency.lockutils [req-b140fb8a-d9ed-4526-a3ab-ab929cdfc08c req-03e9ad5c-137a-45b9-be0b-9ba63ffd89cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:24:59 np0005532048 podman[337091]: 2025-11-22 09:24:59.12660287 +0000 UTC m=+0.115949694 container create 35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 04:24:59 np0005532048 podman[337091]: 2025-11-22 09:24:59.039197076 +0000 UTC m=+0.028543920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:24:59 np0005532048 nova_compute[253661]: 2025-11-22 09:24:59.136 253665 INFO nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Creating config drive at /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298/disk.config
Nov 22 04:24:59 np0005532048 nova_compute[253661]: 2025-11-22 09:24:59.141 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxe7lhv2p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:24:59 np0005532048 nova_compute[253661]: 2025-11-22 09:24:59.189 253665 DEBUG nova.network.neutron [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:24:59 np0005532048 systemd[1]: Started libpod-conmon-35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf.scope.
Nov 22 04:24:59 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:24:59 np0005532048 nova_compute[253661]: 2025-11-22 09:24:59.297 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxe7lhv2p" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:24:59 np0005532048 nova_compute[253661]: 2025-11-22 09:24:59.324 253665 DEBUG nova.storage.rbd_utils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 672288f2-2f9b-4643-9ebf-a949ad316298_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:24:59 np0005532048 podman[337091]: 2025-11-22 09:24:59.325620731 +0000 UTC m=+0.314967575 container init 35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:24:59 np0005532048 nova_compute[253661]: 2025-11-22 09:24:59.329 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298/disk.config 672288f2-2f9b-4643-9ebf-a949ad316298_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:24:59 np0005532048 podman[337091]: 2025-11-22 09:24:59.334172688 +0000 UTC m=+0.323519512 container start 35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:24:59 np0005532048 sad_dubinsky[337110]: 167 167
Nov 22 04:24:59 np0005532048 systemd[1]: libpod-35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf.scope: Deactivated successfully.
Nov 22 04:24:59 np0005532048 podman[337091]: 2025-11-22 09:24:59.394720801 +0000 UTC m=+0.384067675 container attach 35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:24:59 np0005532048 podman[337091]: 2025-11-22 09:24:59.396073484 +0000 UTC m=+0.385420318 container died 35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:24:59 np0005532048 nova_compute[253661]: 2025-11-22 09:24:59.511 253665 DEBUG nova.objects.instance [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'migration_context' on Instance uuid 493b70aa-aaa2-4c40-bfea-6eff7ffec547 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:24:59 np0005532048 nova_compute[253661]: 2025-11-22 09:24:59.526 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:24:59 np0005532048 nova_compute[253661]: 2025-11-22 09:24:59.526 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Ensure instance console log exists: /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:24:59 np0005532048 nova_compute[253661]: 2025-11-22 09:24:59.527 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:24:59 np0005532048 nova_compute[253661]: 2025-11-22 09:24:59.527 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:24:59 np0005532048 nova_compute[253661]: 2025-11-22 09:24:59.528 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:24:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 323 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 415 KiB/s rd, 2.6 MiB/s wr, 88 op/s
Nov 22 04:24:59 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c2cb284674dc479cd85af8e30711e53c5c52a6a45193ea742ff698bd27ed8cff-merged.mount: Deactivated successfully.
Nov 22 04:24:59 np0005532048 podman[337091]: 2025-11-22 09:24:59.770345402 +0000 UTC m=+0.759692216 container remove 35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:24:59 np0005532048 systemd[1]: libpod-conmon-35c6a887ae3def0bd51edd28808f456868999d2684e8ece856438a4b48e531cf.scope: Deactivated successfully.
Nov 22 04:24:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:24:59Z|00823|binding|INFO|Releasing lport d9f2322f-7849-4ad7-8b1c-0b79523c4fde from this chassis (sb_readonly=0)
Nov 22 04:24:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:24:59Z|00824|binding|INFO|Releasing lport a1484e81-5431-4cb7-9298-4572e8674d4a from this chassis (sb_readonly=0)
Nov 22 04:24:59 np0005532048 nova_compute[253661]: 2025-11-22 09:24:59.898 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:00 np0005532048 podman[337190]: 2025-11-22 09:25:00.046578151 +0000 UTC m=+0.086942013 container create 8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 04:25:00 np0005532048 podman[337190]: 2025-11-22 09:24:59.986354155 +0000 UTC m=+0.026718057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:25:00 np0005532048 systemd[1]: Started libpod-conmon-8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe.scope.
Nov 22 04:25:00 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:25:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87556c740ec1678fbf461ef660a3eb8655bcf97b3c6399e09f535443ad10b315/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:25:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87556c740ec1678fbf461ef660a3eb8655bcf97b3c6399e09f535443ad10b315/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:25:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87556c740ec1678fbf461ef660a3eb8655bcf97b3c6399e09f535443ad10b315/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:25:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87556c740ec1678fbf461ef660a3eb8655bcf97b3c6399e09f535443ad10b315/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.247 253665 DEBUG nova.network.neutron [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Updating instance_info_cache with network_info: [{"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:25:00 np0005532048 podman[337190]: 2025-11-22 09:25:00.275126406 +0000 UTC m=+0.315490278 container init 8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:25:00 np0005532048 podman[337190]: 2025-11-22 09:25:00.28316539 +0000 UTC m=+0.323529252 container start 8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.287 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Releasing lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.288 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Instance network_info: |[{"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.289 253665 DEBUG oslo_concurrency.lockutils [req-b140fb8a-d9ed-4526-a3ab-ab929cdfc08c req-03e9ad5c-137a-45b9-be0b-9ba63ffd89cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.289 253665 DEBUG nova.network.neutron [req-b140fb8a-d9ed-4526-a3ab-ab929cdfc08c req-03e9ad5c-137a-45b9-be0b-9ba63ffd89cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Refreshing network info cache for port 702cad91-d4bb-4f0c-b378-7e05e928ad09 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.292 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Start _get_guest_xml network_info=[{"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.300 253665 WARNING nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.313 253665 DEBUG nova.virt.libvirt.host [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.315 253665 DEBUG nova.virt.libvirt.host [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.320 253665 DEBUG nova.virt.libvirt.host [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.321 253665 DEBUG nova.virt.libvirt.host [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.321 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.321 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.322 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.322 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.322 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.322 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.322 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.323 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.323 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.323 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.323 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.323 253665 DEBUG nova.virt.hardware [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.326 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:00 np0005532048 podman[337190]: 2025-11-22 09:25:00.36630295 +0000 UTC m=+0.406666812 container attach 8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.375 253665 DEBUG oslo_concurrency.processutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298/disk.config 672288f2-2f9b-4643-9ebf-a949ad316298_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.376 253665 INFO nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Deleting local config drive /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298/disk.config because it was imported into RBD.#033[00m
Nov 22 04:25:00 np0005532048 kernel: tapb173a545-d8: entered promiscuous mode
Nov 22 04:25:00 np0005532048 NetworkManager[48920]: <info>  [1763803500.4441] manager: (tapb173a545-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/352)
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.443 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:00Z|00825|binding|INFO|Claiming lport b173a545-d888-43c0-a1fb-2969a871663c for this chassis.
Nov 22 04:25:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:00Z|00826|binding|INFO|b173a545-d888-43c0-a1fb-2969a871663c: Claiming fa:16:3e:a0:f1:9e 10.100.0.14
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.449 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:00Z|00827|binding|INFO|Setting lport b173a545-d888-43c0-a1fb-2969a871663c ovn-installed in OVS
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.471 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:00 np0005532048 systemd-udevd[337240]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.506 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:00 np0005532048 NetworkManager[48920]: <info>  [1763803500.5078] device (tapb173a545-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:25:00 np0005532048 NetworkManager[48920]: <info>  [1763803500.5092] device (tapb173a545-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:25:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:25:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1378503543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.814 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.844 253665 DEBUG nova.storage.rbd_utils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:00 np0005532048 nova_compute[253661]: 2025-11-22 09:25:00.849 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]: {
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:    "0": [
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:        {
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "devices": [
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "/dev/loop3"
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            ],
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "lv_name": "ceph_lv0",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "lv_size": "21470642176",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "name": "ceph_lv0",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "tags": {
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.cluster_name": "ceph",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.crush_device_class": "",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.encrypted": "0",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.osd_id": "0",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.type": "block",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.vdo": "0"
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            },
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "type": "block",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "vg_name": "ceph_vg0"
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:        }
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:    ],
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:    "1": [
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:        {
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "devices": [
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "/dev/loop4"
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            ],
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "lv_name": "ceph_lv1",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "lv_size": "21470642176",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "name": "ceph_lv1",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "tags": {
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.cluster_name": "ceph",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.crush_device_class": "",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.encrypted": "0",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.osd_id": "1",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.type": "block",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.vdo": "0"
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            },
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "type": "block",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "vg_name": "ceph_vg1"
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:        }
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:    ],
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:    "2": [
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:        {
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "devices": [
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "/dev/loop5"
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            ],
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "lv_name": "ceph_lv2",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "lv_size": "21470642176",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "name": "ceph_lv2",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "tags": {
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.cluster_name": "ceph",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.crush_device_class": "",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.encrypted": "0",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.osd_id": "2",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.type": "block",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:                "ceph.vdo": "0"
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            },
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "type": "block",
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:            "vg_name": "ceph_vg2"
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:        }
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]:    ]
Nov 22 04:25:01 np0005532048 sharp_robinson[337207]: }
Nov 22 04:25:01 np0005532048 systemd[1]: libpod-8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe.scope: Deactivated successfully.
Nov 22 04:25:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.264 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:f1:9e 10.100.0.14'], port_security=['fa:16:3e:a0:f1:9e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '672288f2-2f9b-4643-9ebf-a949ad316298', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b173a545-d888-43c0-a1fb-2969a871663c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:25:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:01Z|00828|binding|INFO|Setting lport b173a545-d888-43c0-a1fb-2969a871663c up in Southbound
Nov 22 04:25:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.267 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b173a545-d888-43c0-a1fb-2969a871663c in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 bound to our chassis#033[00m
Nov 22 04:25:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.269 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556#033[00m
Nov 22 04:25:01 np0005532048 podman[337290]: 2025-11-22 09:25:01.28790029 +0000 UTC m=+0.062005890 container died 8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 04:25:01 np0005532048 systemd-machined[215941]: New machine qemu-99-instance-00000051.
Nov 22 04:25:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.299 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ef6738-0fec-4985-b3e4-eaad92cada3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:01 np0005532048 systemd[1]: Started Virtual Machine qemu-99-instance-00000051.
Nov 22 04:25:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.338 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[16f6d48e-0366-495b-a21f-8cbb8b080399]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.342 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[78b6aa88-6041-4406-9409-8849211de463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.374 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[39b91085-239c-449c-b75a-6c40d41d5794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:25:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3644878248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:25:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.395 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[02df6f4c-f93a-4b03-99b1-c0c20a9a21c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337318, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.404 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.407 253665 DEBUG nova.virt.libvirt.vif [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1274648896',display_name='tempest-ServerActionsTestOtherA-server-1274648896',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1274648896',id=82,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b06c711e582499ab500917d85e27e3c',ramdisk_id='',reservation_id='r-n007hk6i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1527475006',owner_user_name='tempest-ServerActio
nsTestOtherA-1527475006-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:24:55Z,user_data=None,user_id='7e5709393702478dbf0bd566dc94d7fe',uuid=493b70aa-aaa2-4c40-bfea-6eff7ffec547,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.407 253665 DEBUG nova.network.os_vif_util [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converting VIF {"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.408 253665 DEBUG nova.network.os_vif_util [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:29:75,bridge_name='br-int',has_traffic_filtering=True,id=702cad91-d4bb-4f0c-b378-7e05e928ad09,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap702cad91-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.409 253665 DEBUG nova.objects.instance [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'pci_devices' on Instance uuid 493b70aa-aaa2-4c40-bfea-6eff7ffec547 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:25:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.416 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ccea8f48-ca9f-46e7-b009-87a9d66c4db3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337321, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337321, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.419 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.422 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  <uuid>493b70aa-aaa2-4c40-bfea-6eff7ffec547</uuid>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  <name>instance-00000052</name>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerActionsTestOtherA-server-1274648896</nova:name>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:25:00</nova:creationTime>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:25:01 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:        <nova:user uuid="7e5709393702478dbf0bd566dc94d7fe">tempest-ServerActionsTestOtherA-1527475006-project-member</nova:user>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:        <nova:project uuid="9b06c711e582499ab500917d85e27e3c">tempest-ServerActionsTestOtherA-1527475006</nova:project>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:        <nova:port uuid="702cad91-d4bb-4f0c-b378-7e05e928ad09">
Nov 22 04:25:01 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <entry name="serial">493b70aa-aaa2-4c40-bfea-6eff7ffec547</entry>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <entry name="uuid">493b70aa-aaa2-4c40-bfea-6eff7ffec547</entry>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk">
Nov 22 04:25:01 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:25:01 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk.config">
Nov 22 04:25:01 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:25:01 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:ab:29:75"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <target dev="tap702cad91-d4"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547/console.log" append="off"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:25:01 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:25:01 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:25:01 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:25:01 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.423 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Preparing to wait for external event network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.423 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.424 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.424 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.424 253665 DEBUG nova.virt.libvirt.vif [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1274648896',display_name='tempest-ServerActionsTestOtherA-server-1274648896',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1274648896',id=82,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9b06c711e582499ab500917d85e27e3c',ramdisk_id='',reservation_id='r-n007hk6i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1527475006',owner_user_name='tempest-S
erverActionsTestOtherA-1527475006-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:24:55Z,user_data=None,user_id='7e5709393702478dbf0bd566dc94d7fe',uuid=493b70aa-aaa2-4c40-bfea-6eff7ffec547,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.425 253665 DEBUG nova.network.os_vif_util [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converting VIF {"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.425 253665 DEBUG nova.network.os_vif_util [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:29:75,bridge_name='br-int',has_traffic_filtering=True,id=702cad91-d4bb-4f0c-b378-7e05e928ad09,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap702cad91-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.426 253665 DEBUG os_vif [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:29:75,bridge_name='br-int',has_traffic_filtering=True,id=702cad91-d4bb-4f0c-b378-7e05e928ad09,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap702cad91-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.427 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.427 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.428 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.458 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.459 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap702cad91-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.460 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap702cad91-d4, col_values=(('external_ids', {'iface-id': '702cad91-d4bb-4f0c-b378-7e05e928ad09', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:29:75', 'vm-uuid': '493b70aa-aaa2-4c40-bfea-6eff7ffec547'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.469 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:01 np0005532048 NetworkManager[48920]: <info>  [1763803501.4719] manager: (tap702cad91-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.472 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.478 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.480 253665 INFO os_vif [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:29:75,bridge_name='br-int',has_traffic_filtering=True,id=702cad91-d4bb-4f0c-b378-7e05e928ad09,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap702cad91-d4')#033[00m
Nov 22 04:25:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.479 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.479 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:25:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.479 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:01.480 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:25:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 323 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.6 MiB/s wr, 61 op/s
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.555 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.556 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.556 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] No VIF found with MAC fa:16:3e:ab:29:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.556 253665 INFO nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Using config drive#033[00m
Nov 22 04:25:01 np0005532048 systemd[1]: var-lib-containers-storage-overlay-87556c740ec1678fbf461ef660a3eb8655bcf97b3c6399e09f535443ad10b315-merged.mount: Deactivated successfully.
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.585 253665 DEBUG nova.storage.rbd_utils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:01 np0005532048 podman[337290]: 2025-11-22 09:25:01.655629221 +0000 UTC m=+0.429734831 container remove 8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:25:01 np0005532048 systemd[1]: libpod-conmon-8e51910b6b3f65da86b68e01b499c763cfeb4b7bfeeaf67ab772fc6c8cfcb8fe.scope: Deactivated successfully.
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.877 253665 INFO nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Creating config drive at /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547/disk.config#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.884 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpro0botoa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.966 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803501.9664326, 672288f2-2f9b-4643-9ebf-a949ad316298 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.967 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] VM Started (Lifecycle Event)#033[00m
Nov 22 04:25:01 np0005532048 nova_compute[253661]: 2025-11-22 09:25:01.996 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.002 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803501.970519, 672288f2-2f9b-4643-9ebf-a949ad316298 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.003 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.029 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.035 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpro0botoa" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.070 253665 DEBUG nova.storage.rbd_utils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] rbd image 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.075 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547/disk.config 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.125 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.141 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:25:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:02Z|00829|binding|INFO|Releasing lport d9f2322f-7849-4ad7-8b1c-0b79523c4fde from this chassis (sb_readonly=0)
Nov 22 04:25:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:02Z|00830|binding|INFO|Releasing lport a1484e81-5431-4cb7-9298-4572e8674d4a from this chassis (sb_readonly=0)
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.205 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:02 np0005532048 podman[337567]: 2025-11-22 09:25:02.43449953 +0000 UTC m=+0.052950341 container create a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.482 253665 DEBUG nova.network.neutron [req-b140fb8a-d9ed-4526-a3ab-ab929cdfc08c req-03e9ad5c-137a-45b9-be0b-9ba63ffd89cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Updated VIF entry in instance network info cache for port 702cad91-d4bb-4f0c-b378-7e05e928ad09. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.483 253665 DEBUG nova.network.neutron [req-b140fb8a-d9ed-4526-a3ab-ab929cdfc08c req-03e9ad5c-137a-45b9-be0b-9ba63ffd89cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Updating instance_info_cache with network_info: [{"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.496 253665 DEBUG oslo_concurrency.lockutils [req-b140fb8a-d9ed-4526-a3ab-ab929cdfc08c req-03e9ad5c-137a-45b9-be0b-9ba63ffd89cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:25:02 np0005532048 podman[337567]: 2025-11-22 09:25:02.404380273 +0000 UTC m=+0.022831104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:25:02 np0005532048 systemd[1]: Started libpod-conmon-a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144.scope.
Nov 22 04:25:02 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:25:02 np0005532048 podman[337567]: 2025-11-22 09:25:02.622354212 +0000 UTC m=+0.240805043 container init a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shockley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:25:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:25:02 np0005532048 podman[337567]: 2025-11-22 09:25:02.635945181 +0000 UTC m=+0.254395992 container start a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 04:25:02 np0005532048 heuristic_shockley[337607]: 167 167
Nov 22 04:25:02 np0005532048 systemd[1]: libpod-a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144.scope: Deactivated successfully.
Nov 22 04:25:02 np0005532048 podman[337567]: 2025-11-22 09:25:02.683168293 +0000 UTC m=+0.301619104 container attach a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shockley, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:25:02 np0005532048 podman[337567]: 2025-11-22 09:25:02.683656494 +0000 UTC m=+0.302107315 container died a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.682 253665 INFO nova.virt.libvirt.driver [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Deleting instance files /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac_del#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.684 253665 INFO nova.virt.libvirt.driver [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Deletion of /var/lib/nova/instances/a207d8c4-4fce-4fe6-9ba5-548a92e757ac_del complete#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.714 253665 DEBUG oslo_concurrency.processutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547/disk.config 493b70aa-aaa2-4c40-bfea-6eff7ffec547_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.715 253665 INFO nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Deleting local config drive /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547/disk.config because it was imported into RBD.#033[00m
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002498288138145471 of space, bias 1.0, pg target 0.7494864414436414 quantized to 32 (current 32)
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.758 253665 INFO nova.compute.manager [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Took 11.75 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.758 253665 DEBUG oslo.service.loopingcall [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.759 253665 DEBUG nova.compute.manager [-] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.759 253665 DEBUG nova.network.neutron [-] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:25:02 np0005532048 podman[337581]: 2025-11-22 09:25:02.764461158 +0000 UTC m=+0.276080865 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:25:02 np0005532048 podman[337582]: 2025-11-22 09:25:02.764798606 +0000 UTC m=+0.276518506 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 04:25:02 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f4a891f982beeabef8b767cc371267a328b61a874a5d2a9a79c71738942dcc28-merged.mount: Deactivated successfully.
Nov 22 04:25:02 np0005532048 kernel: tap702cad91-d4: entered promiscuous mode
Nov 22 04:25:02 np0005532048 NetworkManager[48920]: <info>  [1763803502.8041] manager: (tap702cad91-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/354)
Nov 22 04:25:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:02Z|00831|binding|INFO|Claiming lport 702cad91-d4bb-4f0c-b378-7e05e928ad09 for this chassis.
Nov 22 04:25:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:02Z|00832|binding|INFO|702cad91-d4bb-4f0c-b378-7e05e928ad09: Claiming fa:16:3e:ab:29:75 10.100.0.14
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.806 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:02 np0005532048 systemd-udevd[337244]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:25:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.821 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:29:75 10.100.0.14'], port_security=['fa:16:3e:ab:29:75 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '493b70aa-aaa2-4c40-bfea-6eff7ffec547', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b06c711e582499ab500917d85e27e3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb487cef-189d-444c-a09e-c2cc59f79353', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e990bb56-0110-4888-afa3-540f1481188b, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=702cad91-d4bb-4f0c-b378-7e05e928ad09) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:25:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:02Z|00833|binding|INFO|Setting lport 702cad91-d4bb-4f0c-b378-7e05e928ad09 ovn-installed in OVS
Nov 22 04:25:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:02Z|00834|binding|INFO|Setting lport 702cad91-d4bb-4f0c-b378-7e05e928ad09 up in Southbound
Nov 22 04:25:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.823 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 702cad91-d4bb-4f0c-b378-7e05e928ad09 in datapath 0936cc0d-3697-4210-9c23-8f3e8e452e86 bound to our chassis#033[00m
Nov 22 04:25:02 np0005532048 NetworkManager[48920]: <info>  [1763803502.8249] device (tap702cad91-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:25:02 np0005532048 NetworkManager[48920]: <info>  [1763803502.8263] device (tap702cad91-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.827 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.826 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0936cc0d-3697-4210-9c23-8f3e8e452e86#033[00m
Nov 22 04:25:02 np0005532048 podman[337567]: 2025-11-22 09:25:02.836468068 +0000 UTC m=+0.454918879 container remove a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:25:02 np0005532048 systemd[1]: libpod-conmon-a472c1293730b9957c5777322304ba9c3ea86938bcd391c8138e613d21772144.scope: Deactivated successfully.
Nov 22 04:25:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.849 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea27d7f-587d-4eeb-b800-3c1c1176b62b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:02 np0005532048 systemd-machined[215941]: New machine qemu-100-instance-00000052.
Nov 22 04:25:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.886 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf36a29-267f-4322-a696-d23b2824074b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:02 np0005532048 systemd[1]: Started Virtual Machine qemu-100-instance-00000052.
Nov 22 04:25:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.889 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[cf3844b1-d8ee-4ea8-a199-6358842fd956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.937 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c801235d-5bac-4833-9646-d4bce919091f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.967 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1298563a-b927-45c4-8561-541e02664b63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0936cc0d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:f0:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622131, 'reachable_time': 36436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337666, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.986 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ef235262-a6d5-4c80-a91e-0693ec6f5357]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0936cc0d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622144, 'tstamp': 622144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337668, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0936cc0d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622147, 'tstamp': 622147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337668, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.988 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0936cc0d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.991 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:02 np0005532048 nova_compute[253661]: 2025-11-22 09:25:02.993 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.993 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0936cc0d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.994 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:25:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.995 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0936cc0d-30, col_values=(('external_ids', {'iface-id': 'a1484e81-5431-4cb7-9298-4572e8674d4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:02.995 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:25:03 np0005532048 podman[337674]: 2025-11-22 09:25:03.108397912 +0000 UTC m=+0.061189279 container create 50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:25:03 np0005532048 systemd[1]: Started libpod-conmon-50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe.scope.
Nov 22 04:25:03 np0005532048 podman[337674]: 2025-11-22 09:25:03.079895514 +0000 UTC m=+0.032686911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:25:03 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:25:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcce24a06d15131770d077836a5a7804eb147276c34d575916109c82245962c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:25:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcce24a06d15131770d077836a5a7804eb147276c34d575916109c82245962c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:25:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcce24a06d15131770d077836a5a7804eb147276c34d575916109c82245962c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:25:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dcce24a06d15131770d077836a5a7804eb147276c34d575916109c82245962c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:25:03 np0005532048 podman[337674]: 2025-11-22 09:25:03.259520666 +0000 UTC m=+0.212312083 container init 50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:25:03 np0005532048 podman[337674]: 2025-11-22 09:25:03.270957073 +0000 UTC m=+0.223748450 container start 50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 04:25:03 np0005532048 podman[337674]: 2025-11-22 09:25:03.378909893 +0000 UTC m=+0.331701280 container attach 50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:25:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 333 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.664 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803503.663474, 493b70aa-aaa2-4c40-bfea-6eff7ffec547 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.665 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] VM Started (Lifecycle Event)#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.688 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.693 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803503.663768, 493b70aa-aaa2-4c40-bfea-6eff7ffec547 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.694 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.715 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.722 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.745 253665 DEBUG nova.compute.manager [req-9a75157d-56c9-4cf3-969e-20824a64d968 req-dba23d70-1e33-4cf2-96c8-9a34b0c4024c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received event network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.746 253665 DEBUG oslo_concurrency.lockutils [req-9a75157d-56c9-4cf3-969e-20824a64d968 req-dba23d70-1e33-4cf2-96c8-9a34b0c4024c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.746 253665 DEBUG oslo_concurrency.lockutils [req-9a75157d-56c9-4cf3-969e-20824a64d968 req-dba23d70-1e33-4cf2-96c8-9a34b0c4024c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.747 253665 DEBUG oslo_concurrency.lockutils [req-9a75157d-56c9-4cf3-969e-20824a64d968 req-dba23d70-1e33-4cf2-96c8-9a34b0c4024c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.747 253665 DEBUG nova.compute.manager [req-9a75157d-56c9-4cf3-969e-20824a64d968 req-dba23d70-1e33-4cf2-96c8-9a34b0c4024c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Processing event network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.748 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.749 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.752 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803503.7524176, 493b70aa-aaa2-4c40-bfea-6eff7ffec547 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.752 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.773 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.777 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.779 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.784 253665 INFO nova.virt.libvirt.driver [-] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Instance spawned successfully.#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.785 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.808 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.809 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.810 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.811 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.811 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.812 253665 DEBUG nova.virt.libvirt.driver [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.817 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.863 253665 INFO nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Took 8.36 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.863 253665 DEBUG nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.932 253665 INFO nova.compute.manager [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Took 9.32 seconds to build instance.#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.975 253665 DEBUG oslo_concurrency.lockutils [None req-93b993d0-c722-46bc-892c-c60e66ca7183 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:03 np0005532048 nova_compute[253661]: 2025-11-22 09:25:03.998 253665 DEBUG nova.network.neutron [-] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.018 253665 INFO nova.compute.manager [-] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Took 1.26 seconds to deallocate network for instance.#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.040 253665 DEBUG nova.compute.manager [req-d04945cf-587b-4e3d-9cf9-fb818fb79b68 req-8e2882b9-6e7e-4ff0-a9de-8644aeba5451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received event network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.041 253665 DEBUG oslo_concurrency.lockutils [req-d04945cf-587b-4e3d-9cf9-fb818fb79b68 req-8e2882b9-6e7e-4ff0-a9de-8644aeba5451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.041 253665 DEBUG oslo_concurrency.lockutils [req-d04945cf-587b-4e3d-9cf9-fb818fb79b68 req-8e2882b9-6e7e-4ff0-a9de-8644aeba5451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.042 253665 DEBUG oslo_concurrency.lockutils [req-d04945cf-587b-4e3d-9cf9-fb818fb79b68 req-8e2882b9-6e7e-4ff0-a9de-8644aeba5451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.042 253665 DEBUG nova.compute.manager [req-d04945cf-587b-4e3d-9cf9-fb818fb79b68 req-8e2882b9-6e7e-4ff0-a9de-8644aeba5451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Processing event network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.043 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.047 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803504.0472684, 672288f2-2f9b-4643-9ebf-a949ad316298 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.047 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.050 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.062 253665 INFO nova.virt.libvirt.driver [-] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Instance spawned successfully.#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.063 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.196 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.197 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.198 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.214 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.219 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.219 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.220 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.220 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.221 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.221 253665 DEBUG nova.virt.libvirt.driver [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.230 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.272 253665 INFO nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Took 12.72 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.272 253665 DEBUG nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.314 253665 DEBUG oslo_concurrency.processutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]: {
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:        "osd_id": 1,
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:        "type": "bluestore"
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:    },
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:        "osd_id": 0,
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:        "type": "bluestore"
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:    },
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:        "osd_id": 2,
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:        "type": "bluestore"
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]:    }
Nov 22 04:25:04 np0005532048 nervous_margulis[337691]: }
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.372 253665 INFO nova.compute.manager [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Took 13.76 seconds to build instance.#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.389 253665 DEBUG oslo_concurrency.lockutils [None req-24fd0f3e-45c5-49ae-8df2-4af792f07217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:04 np0005532048 systemd[1]: libpod-50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe.scope: Deactivated successfully.
Nov 22 04:25:04 np0005532048 systemd[1]: libpod-50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe.scope: Consumed 1.120s CPU time.
Nov 22 04:25:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:04.454 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:25:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:04.456 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.456 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:04 np0005532048 podman[337769]: 2025-11-22 09:25:04.477936623 +0000 UTC m=+0.040475439 container died 50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 04:25:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6dcce24a06d15131770d077836a5a7804eb147276c34d575916109c82245962c-merged.mount: Deactivated successfully.
Nov 22 04:25:04 np0005532048 podman[337769]: 2025-11-22 09:25:04.743232417 +0000 UTC m=+0.305771223 container remove 50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:25:04 np0005532048 systemd[1]: libpod-conmon-50454db941ff0d5af69095a3276f89fc9ea3cfc0c1683e48fe7a47b3c0e060fe.scope: Deactivated successfully.
Nov 22 04:25:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:25:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:25:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769568295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:25:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:25:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.897 253665 DEBUG oslo_concurrency.processutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.905 253665 DEBUG nova.compute.provider_tree [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.920 253665 DEBUG nova.scheduler.client.report [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:25:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:25:04 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 6307148d-2bfb-42ff-8a83-1144d8b65f79 does not exist
Nov 22 04:25:04 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 4ce7f6f5-2740-4bed-81ea-e7119e1f83cf does not exist
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.945 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:04 np0005532048 nova_compute[253661]: 2025-11-22 09:25:04.971 253665 INFO nova.scheduler.client.report [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Deleted allocations for instance a207d8c4-4fce-4fe6-9ba5-548a92e757ac#033[00m
Nov 22 04:25:05 np0005532048 nova_compute[253661]: 2025-11-22 09:25:05.041 253665 DEBUG oslo_concurrency.lockutils [None req-611bffdc-4620-447f-9834-b71cc2a72c07 3f7dbcc13af740b491f0498f4ddec69d e78196ec949a45cf803d3e585b603558 - - default default] Lock "a207d8c4-4fce-4fe6-9ba5-548a92e757ac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 14.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:25:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:25:05 np0005532048 nova_compute[253661]: 2025-11-22 09:25:05.507 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 333 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 3.6 MiB/s wr, 116 op/s
Nov 22 04:25:05 np0005532048 nova_compute[253661]: 2025-11-22 09:25:05.707 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "672288f2-2f9b-4643-9ebf-a949ad316298" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:05 np0005532048 nova_compute[253661]: 2025-11-22 09:25:05.707 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:05 np0005532048 nova_compute[253661]: 2025-11-22 09:25:05.708 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:05 np0005532048 nova_compute[253661]: 2025-11-22 09:25:05.708 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:05 np0005532048 nova_compute[253661]: 2025-11-22 09:25:05.708 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:05 np0005532048 nova_compute[253661]: 2025-11-22 09:25:05.710 253665 INFO nova.compute.manager [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Terminating instance#033[00m
Nov 22 04:25:05 np0005532048 nova_compute[253661]: 2025-11-22 09:25:05.711 253665 DEBUG nova.compute.manager [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.096 253665 DEBUG nova.compute.manager [req-9b434384-01e9-441a-a783-5b9ad43b01d5 req-e6b3753b-c7c3-49b8-ae1f-b231716ae8ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received event network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.097 253665 DEBUG oslo_concurrency.lockutils [req-9b434384-01e9-441a-a783-5b9ad43b01d5 req-e6b3753b-c7c3-49b8-ae1f-b231716ae8ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.099 253665 DEBUG oslo_concurrency.lockutils [req-9b434384-01e9-441a-a783-5b9ad43b01d5 req-e6b3753b-c7c3-49b8-ae1f-b231716ae8ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.099 253665 DEBUG oslo_concurrency.lockutils [req-9b434384-01e9-441a-a783-5b9ad43b01d5 req-e6b3753b-c7c3-49b8-ae1f-b231716ae8ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.100 253665 DEBUG nova.compute.manager [req-9b434384-01e9-441a-a783-5b9ad43b01d5 req-e6b3753b-c7c3-49b8-ae1f-b231716ae8ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] No waiting events found dispatching network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.101 253665 WARNING nova.compute.manager [req-9b434384-01e9-441a-a783-5b9ad43b01d5 req-e6b3753b-c7c3-49b8-ae1f-b231716ae8ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received unexpected event network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.101 253665 DEBUG nova.compute.manager [req-9b434384-01e9-441a-a783-5b9ad43b01d5 req-e6b3753b-c7c3-49b8-ae1f-b231716ae8ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Received event network-vif-deleted-15bf0e02-e093-4f45-995f-abb925d1cf71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:06 np0005532048 kernel: tapb173a545-d8 (unregistering): left promiscuous mode
Nov 22 04:25:06 np0005532048 NetworkManager[48920]: <info>  [1763803506.2270] device (tapb173a545-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.244 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.248 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:06 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:06Z|00835|binding|INFO|Releasing lport b173a545-d888-43c0-a1fb-2969a871663c from this chassis (sb_readonly=0)
Nov 22 04:25:06 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:06Z|00836|binding|INFO|Setting lport b173a545-d888-43c0-a1fb-2969a871663c down in Southbound
Nov 22 04:25:06 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:06Z|00837|binding|INFO|Removing iface tapb173a545-d8 ovn-installed in OVS
Nov 22 04:25:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.251 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:f1:9e 10.100.0.14'], port_security=['fa:16:3e:a0:f1:9e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '672288f2-2f9b-4643-9ebf-a949ad316298', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b173a545-d888-43c0-a1fb-2969a871663c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:25:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.254 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b173a545-d888-43c0-a1fb-2969a871663c in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 unbound from our chassis#033[00m
Nov 22 04:25:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.256 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.267 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.279 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d042aba7-3aa2-48c4-bc39-42cd76f4a19f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:06 np0005532048 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000051.scope: Deactivated successfully.
Nov 22 04:25:06 np0005532048 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000051.scope: Consumed 2.246s CPU time.
Nov 22 04:25:06 np0005532048 systemd-machined[215941]: Machine qemu-99-instance-00000051 terminated.
Nov 22 04:25:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.335 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[20d3cb9c-e869-4460-9914-4242faddc6a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.341 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ca14da-acfc-46d1-823d-683908857eb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.354 253665 DEBUG nova.compute.manager [req-147fbb94-674c-472f-a850-4464728645cd req-8f8b1a49-b182-41b9-89d6-4ee3ec8c197f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received event network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.355 253665 DEBUG oslo_concurrency.lockutils [req-147fbb94-674c-472f-a850-4464728645cd req-8f8b1a49-b182-41b9-89d6-4ee3ec8c197f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.355 253665 DEBUG oslo_concurrency.lockutils [req-147fbb94-674c-472f-a850-4464728645cd req-8f8b1a49-b182-41b9-89d6-4ee3ec8c197f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.355 253665 DEBUG oslo_concurrency.lockutils [req-147fbb94-674c-472f-a850-4464728645cd req-8f8b1a49-b182-41b9-89d6-4ee3ec8c197f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.355 253665 DEBUG nova.compute.manager [req-147fbb94-674c-472f-a850-4464728645cd req-8f8b1a49-b182-41b9-89d6-4ee3ec8c197f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] No waiting events found dispatching network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.356 253665 WARNING nova.compute.manager [req-147fbb94-674c-472f-a850-4464728645cd req-8f8b1a49-b182-41b9-89d6-4ee3ec8c197f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received unexpected event network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.362 253665 INFO nova.virt.libvirt.driver [-] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Instance destroyed successfully.#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.363 253665 DEBUG nova.objects.instance [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'resources' on Instance uuid 672288f2-2f9b-4643-9ebf-a949ad316298 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.374 253665 DEBUG nova.virt.libvirt.vif [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:24:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-287905590',display_name='tempest-ServersTestJSON-server-287905590',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-287905590',id=81,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO/elwrk2u0pM3LkpugKE0r9pgrYufUX3T9HzzxAQRxB89i5bBiA7C9yWlosrYihPiHzlNqfpGLV7W1tbdzbGLdP3NreuJMAPnqDTjhMrZ8g7ZHEYCTPrFyftTjdWlo1pA==',key_name='tempest-key-1706317659',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:25:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-ic2f0n30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:25:04Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=672288f2-2f9b-4643-9ebf-a949ad316298,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.374 253665 DEBUG nova.network.os_vif_util [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "b173a545-d888-43c0-a1fb-2969a871663c", "address": "fa:16:3e:a0:f1:9e", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb173a545-d8", "ovs_interfaceid": "b173a545-d888-43c0-a1fb-2969a871663c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.375 253665 DEBUG nova.network.os_vif_util [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f1:9e,bridge_name='br-int',has_traffic_filtering=True,id=b173a545-d888-43c0-a1fb-2969a871663c,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb173a545-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.376 253665 DEBUG os_vif [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f1:9e,bridge_name='br-int',has_traffic_filtering=True,id=b173a545-d888-43c0-a1fb-2969a871663c,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb173a545-d8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.377 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.378 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb173a545-d8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.381 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.383 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.382 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca4b1bd-1a40-4f3d-a793-13f258b9a09b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.386 253665 INFO os_vif [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f1:9e,bridge_name='br-int',has_traffic_filtering=True,id=b173a545-d888-43c0-a1fb-2969a871663c,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb173a545-d8')
Nov 22 04:25:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.409 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[40882fc8-8467-46d5-af80-fdbce2c5578c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337875, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:25:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.431 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7cbb9380-35d8-4fff-9c56-072f39752df6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337890, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337890, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:25:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.433 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:25:06 np0005532048 nova_compute[253661]: 2025-11-22 09:25:06.435 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.436 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:25:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.436 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:25:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.436 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:25:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:06.437 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:25:07 np0005532048 nova_compute[253661]: 2025-11-22 09:25:07.251 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803492.249956, a207d8c4-4fce-4fe6-9ba5-548a92e757ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:25:07 np0005532048 nova_compute[253661]: 2025-11-22 09:25:07.252 253665 INFO nova.compute.manager [-] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] VM Stopped (Lifecycle Event)
Nov 22 04:25:07 np0005532048 nova_compute[253661]: 2025-11-22 09:25:07.269 253665 DEBUG nova.compute.manager [None req-168e5866-8b99-437e-b9b2-cb00efd11b3b - - - - - -] [instance: a207d8c4-4fce-4fe6-9ba5-548a92e757ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:25:07 np0005532048 nova_compute[253661]: 2025-11-22 09:25:07.362 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:25:07 np0005532048 nova_compute[253661]: 2025-11-22 09:25:07.362 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:25:07 np0005532048 nova_compute[253661]: 2025-11-22 09:25:07.362 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:25:07 np0005532048 nova_compute[253661]: 2025-11-22 09:25:07.363 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:25:07 np0005532048 nova_compute[253661]: 2025-11-22 09:25:07.363 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:25:07 np0005532048 nova_compute[253661]: 2025-11-22 09:25:07.365 253665 INFO nova.compute.manager [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Terminating instance
Nov 22 04:25:07 np0005532048 nova_compute[253661]: 2025-11-22 09:25:07.366 253665 DEBUG nova.compute.manager [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:25:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 293 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 151 op/s
Nov 22 04:25:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:25:07 np0005532048 kernel: tap702cad91-d4 (unregistering): left promiscuous mode
Nov 22 04:25:07 np0005532048 NetworkManager[48920]: <info>  [1763803507.8444] device (tap702cad91-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:25:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:07Z|00838|binding|INFO|Releasing lport 702cad91-d4bb-4f0c-b378-7e05e928ad09 from this chassis (sb_readonly=0)
Nov 22 04:25:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:07Z|00839|binding|INFO|Setting lport 702cad91-d4bb-4f0c-b378-7e05e928ad09 down in Southbound
Nov 22 04:25:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:07Z|00840|binding|INFO|Removing iface tap702cad91-d4 ovn-installed in OVS
Nov 22 04:25:07 np0005532048 nova_compute[253661]: 2025-11-22 09:25:07.852 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:07 np0005532048 nova_compute[253661]: 2025-11-22 09:25:07.854 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:07.865 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:29:75 10.100.0.14'], port_security=['fa:16:3e:ab:29:75 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '493b70aa-aaa2-4c40-bfea-6eff7ffec547', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b06c711e582499ab500917d85e27e3c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e990bb56-0110-4888-afa3-540f1481188b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=702cad91-d4bb-4f0c-b378-7e05e928ad09) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:25:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:07.866 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 702cad91-d4bb-4f0c-b378-7e05e928ad09 in datapath 0936cc0d-3697-4210-9c23-8f3e8e452e86 unbound from our chassis
Nov 22 04:25:07 np0005532048 nova_compute[253661]: 2025-11-22 09:25:07.868 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:07.868 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0936cc0d-3697-4210-9c23-8f3e8e452e86
Nov 22 04:25:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:07.888 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9b819623-0143-4f06-9018-ab86faeab669]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:25:07 np0005532048 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000052.scope: Deactivated successfully.
Nov 22 04:25:07 np0005532048 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000052.scope: Consumed 4.295s CPU time.
Nov 22 04:25:07 np0005532048 systemd-machined[215941]: Machine qemu-100-instance-00000052 terminated.
Nov 22 04:25:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:07.927 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[cceb74b5-f4fe-46c2-9824-512690a97c47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:25:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:07.931 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f8cd800a-7284-4fbf-9344-f43de28e54e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:25:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:07.971 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b103f462-5a5e-4570-a09d-0b08498e36bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.000 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:08.005 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[17a383cc-7ab1-420e-8562-78adf5bcc3c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0936cc0d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:f0:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622131, 'reachable_time': 36436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337906, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.007 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.015 253665 INFO nova.virt.libvirt.driver [-] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Instance destroyed successfully.
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.016 253665 DEBUG nova.objects.instance [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'resources' on Instance uuid 493b70aa-aaa2-4c40-bfea-6eff7ffec547 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:25:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:08.027 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fd793a26-7020-4683-8a90-6b586f7cfdd7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0936cc0d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622144, 'tstamp': 622144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337914, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0936cc0d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622147, 'tstamp': 622147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337914, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.028 253665 DEBUG nova.virt.libvirt.vif [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1274648896',display_name='tempest-ServerActionsTestOtherA-server-1274648896',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1274648896',id=82,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:25:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9b06c711e582499ab500917d85e27e3c',ramdisk_id='',reservation_id='r-n007hk6i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1527475006',owner_user_name='tempest-ServerActionsTestOtherA-1527475006-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:25:03Z,user_data=None,user_id='7e5709393702478dbf0bd566dc94d7fe',uuid=493b70aa-aaa2-4c40-bfea-6eff7ffec547,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.029 253665 DEBUG nova.network.os_vif_util [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converting VIF {"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:25:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:08.029 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0936cc0d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.030 253665 DEBUG nova.network.os_vif_util [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:29:75,bridge_name='br-int',has_traffic_filtering=True,id=702cad91-d4bb-4f0c-b378-7e05e928ad09,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap702cad91-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.030 253665 DEBUG os_vif [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:29:75,bridge_name='br-int',has_traffic_filtering=True,id=702cad91-d4bb-4f0c-b378-7e05e928ad09,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap702cad91-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.031 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.032 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap702cad91-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.033 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.034 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.036 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:08.036 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0936cc0d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:25:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:08.037 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.036 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:08.037 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0936cc0d-30, col_values=(('external_ids', {'iface-id': 'a1484e81-5431-4cb7-9298-4572e8674d4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:25:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:08.038 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.039 253665 INFO os_vif [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:29:75,bridge_name='br-int',has_traffic_filtering=True,id=702cad91-d4bb-4f0c-b378-7e05e928ad09,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap702cad91-d4')
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.191 253665 DEBUG nova.compute.manager [req-c8cf49bb-ed78-4fb1-ac0e-03d79b342f6b req-ea0417dd-84c2-4b8c-9058-a82c4092dc38 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received event network-changed-702cad91-d4bb-4f0c-b378-7e05e928ad09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.192 253665 DEBUG nova.compute.manager [req-c8cf49bb-ed78-4fb1-ac0e-03d79b342f6b req-ea0417dd-84c2-4b8c-9058-a82c4092dc38 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Refreshing instance network info cache due to event network-changed-702cad91-d4bb-4f0c-b378-7e05e928ad09. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.192 253665 DEBUG oslo_concurrency.lockutils [req-c8cf49bb-ed78-4fb1-ac0e-03d79b342f6b req-ea0417dd-84c2-4b8c-9058-a82c4092dc38 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.192 253665 DEBUG oslo_concurrency.lockutils [req-c8cf49bb-ed78-4fb1-ac0e-03d79b342f6b req-ea0417dd-84c2-4b8c-9058-a82c4092dc38 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:25:08 np0005532048 nova_compute[253661]: 2025-11-22 09:25:08.192 253665 DEBUG nova.network.neutron [req-c8cf49bb-ed78-4fb1-ac0e-03d79b342f6b req-ea0417dd-84c2-4b8c-9058-a82c4092dc38 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Refreshing network info cache for port 702cad91-d4bb-4f0c-b378-7e05e928ad09 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:25:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:08.457 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:25:09 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:09Z|00841|binding|INFO|Releasing lport d9f2322f-7849-4ad7-8b1c-0b79523c4fde from this chassis (sb_readonly=0)
Nov 22 04:25:09 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:09Z|00842|binding|INFO|Releasing lport a1484e81-5431-4cb7-9298-4572e8674d4a from this chassis (sb_readonly=0)
Nov 22 04:25:09 np0005532048 nova_compute[253661]: 2025-11-22 09:25:09.147 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:09 np0005532048 podman[337936]: 2025-11-22 09:25:09.419262855 +0000 UTC m=+0.102492819 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 04:25:09 np0005532048 nova_compute[253661]: 2025-11-22 09:25:09.507 253665 DEBUG nova.compute.manager [req-8681290d-d2fb-4a87-945c-5f4343f63e56 req-d1db461a-f1e9-4b02-be3f-b913a997e67d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received event network-vif-unplugged-b173a545-d888-43c0-a1fb-2969a871663c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:25:09 np0005532048 nova_compute[253661]: 2025-11-22 09:25:09.507 253665 DEBUG oslo_concurrency.lockutils [req-8681290d-d2fb-4a87-945c-5f4343f63e56 req-d1db461a-f1e9-4b02-be3f-b913a997e67d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:25:09 np0005532048 nova_compute[253661]: 2025-11-22 09:25:09.508 253665 DEBUG oslo_concurrency.lockutils [req-8681290d-d2fb-4a87-945c-5f4343f63e56 req-d1db461a-f1e9-4b02-be3f-b913a997e67d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:25:09 np0005532048 nova_compute[253661]: 2025-11-22 09:25:09.508 253665 DEBUG oslo_concurrency.lockutils [req-8681290d-d2fb-4a87-945c-5f4343f63e56 req-d1db461a-f1e9-4b02-be3f-b913a997e67d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:25:09 np0005532048 nova_compute[253661]: 2025-11-22 09:25:09.508 253665 DEBUG nova.compute.manager [req-8681290d-d2fb-4a87-945c-5f4343f63e56 req-d1db461a-f1e9-4b02-be3f-b913a997e67d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] No waiting events found dispatching network-vif-unplugged-b173a545-d888-43c0-a1fb-2969a871663c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:25:09 np0005532048 nova_compute[253661]: 2025-11-22 09:25:09.508 253665 DEBUG nova.compute.manager [req-8681290d-d2fb-4a87-945c-5f4343f63e56 req-d1db461a-f1e9-4b02-be3f-b913a997e67d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received event network-vif-unplugged-b173a545-d888-43c0-a1fb-2969a871663c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:25:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 293 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 234 op/s
Nov 22 04:25:09 np0005532048 nova_compute[253661]: 2025-11-22 09:25:09.648 253665 DEBUG nova.network.neutron [req-c8cf49bb-ed78-4fb1-ac0e-03d79b342f6b req-ea0417dd-84c2-4b8c-9058-a82c4092dc38 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Updated VIF entry in instance network info cache for port 702cad91-d4bb-4f0c-b378-7e05e928ad09. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:25:09 np0005532048 nova_compute[253661]: 2025-11-22 09:25:09.649 253665 DEBUG nova.network.neutron [req-c8cf49bb-ed78-4fb1-ac0e-03d79b342f6b req-ea0417dd-84c2-4b8c-9058-a82c4092dc38 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Updating instance_info_cache with network_info: [{"id": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "address": "fa:16:3e:ab:29:75", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap702cad91-d4", "ovs_interfaceid": "702cad91-d4bb-4f0c-b378-7e05e928ad09", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:25:09 np0005532048 nova_compute[253661]: 2025-11-22 09:25:09.665 253665 DEBUG oslo_concurrency.lockutils [req-c8cf49bb-ed78-4fb1-ac0e-03d79b342f6b req-ea0417dd-84c2-4b8c-9058-a82c4092dc38 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-493b70aa-aaa2-4c40-bfea-6eff7ffec547" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:25:10 np0005532048 nova_compute[253661]: 2025-11-22 09:25:10.510 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:11 np0005532048 nova_compute[253661]: 2025-11-22 09:25:11.530 253665 INFO nova.virt.libvirt.driver [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Deleting instance files /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298_del
Nov 22 04:25:11 np0005532048 nova_compute[253661]: 2025-11-22 09:25:11.531 253665 INFO nova.virt.libvirt.driver [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Deletion of /var/lib/nova/instances/672288f2-2f9b-4643-9ebf-a949ad316298_del complete
Nov 22 04:25:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 293 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1022 KiB/s wr, 202 op/s
Nov 22 04:25:11 np0005532048 nova_compute[253661]: 2025-11-22 09:25:11.591 253665 INFO nova.compute.manager [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Took 5.88 seconds to destroy the instance on the hypervisor.
Nov 22 04:25:11 np0005532048 nova_compute[253661]: 2025-11-22 09:25:11.592 253665 DEBUG oslo.service.loopingcall [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:25:11 np0005532048 nova_compute[253661]: 2025-11-22 09:25:11.592 253665 DEBUG nova.compute.manager [-] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:25:11 np0005532048 nova_compute[253661]: 2025-11-22 09:25:11.592 253665 DEBUG nova.network.neutron [-] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:25:11 np0005532048 nova_compute[253661]: 2025-11-22 09:25:11.684 253665 DEBUG nova.compute.manager [req-cb55fd7e-e84e-4909-9c24-2248f163ab60 req-d96b403e-d1f3-467d-af46-770fc9c88721 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received event network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:25:11 np0005532048 nova_compute[253661]: 2025-11-22 09:25:11.685 253665 DEBUG oslo_concurrency.lockutils [req-cb55fd7e-e84e-4909-9c24-2248f163ab60 req-d96b403e-d1f3-467d-af46-770fc9c88721 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:25:11 np0005532048 nova_compute[253661]: 2025-11-22 09:25:11.685 253665 DEBUG oslo_concurrency.lockutils [req-cb55fd7e-e84e-4909-9c24-2248f163ab60 req-d96b403e-d1f3-467d-af46-770fc9c88721 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:25:11 np0005532048 nova_compute[253661]: 2025-11-22 09:25:11.685 253665 DEBUG oslo_concurrency.lockutils [req-cb55fd7e-e84e-4909-9c24-2248f163ab60 req-d96b403e-d1f3-467d-af46-770fc9c88721 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:25:11 np0005532048 nova_compute[253661]: 2025-11-22 09:25:11.685 253665 DEBUG nova.compute.manager [req-cb55fd7e-e84e-4909-9c24-2248f163ab60 req-d96b403e-d1f3-467d-af46-770fc9c88721 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] No waiting events found dispatching network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:25:11 np0005532048 nova_compute[253661]: 2025-11-22 09:25:11.685 253665 WARNING nova.compute.manager [req-cb55fd7e-e84e-4909-9c24-2248f163ab60 req-d96b403e-d1f3-467d-af46-770fc9c88721 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received unexpected event network-vif-plugged-b173a545-d888-43c0-a1fb-2969a871663c for instance with vm_state active and task_state deleting.
Nov 22 04:25:12 np0005532048 nova_compute[253661]: 2025-11-22 09:25:12.301 253665 DEBUG nova.network.neutron [-] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:25:12 np0005532048 nova_compute[253661]: 2025-11-22 09:25:12.324 253665 INFO nova.compute.manager [-] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Took 0.73 seconds to deallocate network for instance.
Nov 22 04:25:12 np0005532048 nova_compute[253661]: 2025-11-22 09:25:12.340 253665 INFO nova.virt.libvirt.driver [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Deleting instance files /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547_del
Nov 22 04:25:12 np0005532048 nova_compute[253661]: 2025-11-22 09:25:12.341 253665 INFO nova.virt.libvirt.driver [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Deletion of /var/lib/nova/instances/493b70aa-aaa2-4c40-bfea-6eff7ffec547_del complete
Nov 22 04:25:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:25:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3187497265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:25:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:25:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3187497265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:25:12 np0005532048 nova_compute[253661]: 2025-11-22 09:25:12.375 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:25:12 np0005532048 nova_compute[253661]: 2025-11-22 09:25:12.376 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:25:12 np0005532048 nova_compute[253661]: 2025-11-22 09:25:12.400 253665 INFO nova.compute.manager [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Took 5.03 seconds to destroy the instance on the hypervisor.
Nov 22 04:25:12 np0005532048 nova_compute[253661]: 2025-11-22 09:25:12.401 253665 DEBUG oslo.service.loopingcall [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:25:12 np0005532048 nova_compute[253661]: 2025-11-22 09:25:12.401 253665 DEBUG nova.compute.manager [-] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:25:12 np0005532048 nova_compute[253661]: 2025-11-22 09:25:12.402 253665 DEBUG nova.network.neutron [-] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:25:12 np0005532048 nova_compute[253661]: 2025-11-22 09:25:12.408 253665 DEBUG nova.compute.manager [req-b1b53664-e7ab-48ff-968f-e01a97a46a03 req-15369513-b405-4b58-81a5-5a15d2fe5a77 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Received event network-vif-deleted-b173a545-d888-43c0-a1fb-2969a871663c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:25:12 np0005532048 nova_compute[253661]: 2025-11-22 09:25:12.477 253665 DEBUG oslo_concurrency.processutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:25:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:25:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:25:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2606131714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:25:13 np0005532048 nova_compute[253661]: 2025-11-22 09:25:13.001 253665 DEBUG oslo_concurrency.processutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:13 np0005532048 nova_compute[253661]: 2025-11-22 09:25:13.049 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:13 np0005532048 nova_compute[253661]: 2025-11-22 09:25:13.055 253665 DEBUG nova.compute.provider_tree [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:25:13 np0005532048 nova_compute[253661]: 2025-11-22 09:25:13.076 253665 DEBUG nova.scheduler.client.report [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:25:13 np0005532048 nova_compute[253661]: 2025-11-22 09:25:13.100 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:13 np0005532048 nova_compute[253661]: 2025-11-22 09:25:13.124 253665 INFO nova.scheduler.client.report [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Deleted allocations for instance 672288f2-2f9b-4643-9ebf-a949ad316298#033[00m
Nov 22 04:25:13 np0005532048 nova_compute[253661]: 2025-11-22 09:25:13.195 253665 DEBUG oslo_concurrency.lockutils [None req-1f14585f-9cea-4bf2-8385-a5d6ba3d448a 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "672288f2-2f9b-4643-9ebf-a949ad316298" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 233 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1024 KiB/s wr, 231 op/s
Nov 22 04:25:14 np0005532048 nova_compute[253661]: 2025-11-22 09:25:14.518 253665 DEBUG nova.compute.manager [req-c8ec2a6d-04fc-4abb-819e-5508cd61ea8c req-2f334f12-69dc-4e2b-a83a-c479154743fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received event network-vif-unplugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:14 np0005532048 nova_compute[253661]: 2025-11-22 09:25:14.519 253665 DEBUG oslo_concurrency.lockutils [req-c8ec2a6d-04fc-4abb-819e-5508cd61ea8c req-2f334f12-69dc-4e2b-a83a-c479154743fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:14 np0005532048 nova_compute[253661]: 2025-11-22 09:25:14.519 253665 DEBUG oslo_concurrency.lockutils [req-c8ec2a6d-04fc-4abb-819e-5508cd61ea8c req-2f334f12-69dc-4e2b-a83a-c479154743fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:14 np0005532048 nova_compute[253661]: 2025-11-22 09:25:14.519 253665 DEBUG oslo_concurrency.lockutils [req-c8ec2a6d-04fc-4abb-819e-5508cd61ea8c req-2f334f12-69dc-4e2b-a83a-c479154743fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:14 np0005532048 nova_compute[253661]: 2025-11-22 09:25:14.520 253665 DEBUG nova.compute.manager [req-c8ec2a6d-04fc-4abb-819e-5508cd61ea8c req-2f334f12-69dc-4e2b-a83a-c479154743fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] No waiting events found dispatching network-vif-unplugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:25:14 np0005532048 nova_compute[253661]: 2025-11-22 09:25:14.520 253665 DEBUG nova.compute.manager [req-c8ec2a6d-04fc-4abb-819e-5508cd61ea8c req-2f334f12-69dc-4e2b-a83a-c479154743fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received event network-vif-unplugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:25:14 np0005532048 nova_compute[253661]: 2025-11-22 09:25:14.612 253665 DEBUG nova.network.neutron [-] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:25:14 np0005532048 nova_compute[253661]: 2025-11-22 09:25:14.627 253665 INFO nova.compute.manager [-] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Took 2.23 seconds to deallocate network for instance.#033[00m
Nov 22 04:25:14 np0005532048 nova_compute[253661]: 2025-11-22 09:25:14.674 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:14 np0005532048 nova_compute[253661]: 2025-11-22 09:25:14.674 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:14 np0005532048 nova_compute[253661]: 2025-11-22 09:25:14.755 253665 DEBUG oslo_concurrency.processutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:25:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/809544080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:25:15 np0005532048 nova_compute[253661]: 2025-11-22 09:25:15.273 253665 DEBUG oslo_concurrency.processutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:15 np0005532048 nova_compute[253661]: 2025-11-22 09:25:15.282 253665 DEBUG nova.compute.provider_tree [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:25:15 np0005532048 nova_compute[253661]: 2025-11-22 09:25:15.296 253665 DEBUG nova.scheduler.client.report [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:25:15 np0005532048 nova_compute[253661]: 2025-11-22 09:25:15.315 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:15 np0005532048 nova_compute[253661]: 2025-11-22 09:25:15.340 253665 INFO nova.scheduler.client.report [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Deleted allocations for instance 493b70aa-aaa2-4c40-bfea-6eff7ffec547#033[00m
Nov 22 04:25:15 np0005532048 nova_compute[253661]: 2025-11-22 09:25:15.409 253665 DEBUG oslo_concurrency.lockutils [None req-1c2f7810-f2cb-4b41-a197-c7707ddb8f31 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:15 np0005532048 nova_compute[253661]: 2025-11-22 09:25:15.513 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 200 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 29 KiB/s wr, 198 op/s
Nov 22 04:25:15 np0005532048 nova_compute[253661]: 2025-11-22 09:25:15.826 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "9096405c-eb66-4d27-abbb-e709b767afea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:15 np0005532048 nova_compute[253661]: 2025-11-22 09:25:15.827 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:15 np0005532048 nova_compute[253661]: 2025-11-22 09:25:15.827 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "9096405c-eb66-4d27-abbb-e709b767afea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:15 np0005532048 nova_compute[253661]: 2025-11-22 09:25:15.827 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:15 np0005532048 nova_compute[253661]: 2025-11-22 09:25:15.827 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:15 np0005532048 nova_compute[253661]: 2025-11-22 09:25:15.828 253665 INFO nova.compute.manager [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Terminating instance#033[00m
Nov 22 04:25:15 np0005532048 nova_compute[253661]: 2025-11-22 09:25:15.829 253665 DEBUG nova.compute.manager [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:25:16 np0005532048 kernel: tapecdb3a4e-ac (unregistering): left promiscuous mode
Nov 22 04:25:16 np0005532048 NetworkManager[48920]: <info>  [1763803516.1282] device (tapecdb3a4e-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:25:16 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:16Z|00843|binding|INFO|Releasing lport ecdb3a4e-ac28-4357-9db5-41ebf06a4adc from this chassis (sb_readonly=0)
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.171 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:16 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:16Z|00844|binding|INFO|Setting lport ecdb3a4e-ac28-4357-9db5-41ebf06a4adc down in Southbound
Nov 22 04:25:16 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:16Z|00845|binding|INFO|Removing iface tapecdb3a4e-ac ovn-installed in OVS
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.173 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:16.178 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:3e:fb 10.100.0.5'], port_security=['fa:16:3e:e0:3e:fb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9096405c-eb66-4d27-abbb-e709b767afea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9b06c711e582499ab500917d85e27e3c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6456c660-bee8-4527-8966-f035b8f73def', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e990bb56-0110-4888-afa3-540f1481188b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ecdb3a4e-ac28-4357-9db5-41ebf06a4adc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:25:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:16.180 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ecdb3a4e-ac28-4357-9db5-41ebf06a4adc in datapath 0936cc0d-3697-4210-9c23-8f3e8e452e86 unbound from our chassis#033[00m
Nov 22 04:25:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:16.181 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0936cc0d-3697-4210-9c23-8f3e8e452e86, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:25:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:16.183 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1e68923b-1b37-412b-8dd2-e89fd9876e1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:16.184 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86 namespace which is not needed anymore#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.187 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:16 np0005532048 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Nov 22 04:25:16 np0005532048 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004b.scope: Consumed 19.644s CPU time.
Nov 22 04:25:16 np0005532048 systemd-machined[215941]: Machine qemu-90-instance-0000004b terminated.
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.276 253665 INFO nova.virt.libvirt.driver [-] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Instance destroyed successfully.#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.278 253665 DEBUG nova.objects.instance [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lazy-loading 'resources' on Instance uuid 9096405c-eb66-4d27-abbb-e709b767afea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.292 253665 DEBUG nova.virt.libvirt.vif [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:22:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-903328611',display_name='tempest-ServerActionsTestOtherA-server-903328611',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-903328611',id=75,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFom9y+7W1OzHUVkvflqnu/6xnxZe0N+aQAyRSLRBCSgO6CoYZ20Adms5sFPGUitwuO09dh9qM8uob9/gGVzUyIJo9HanjWjMYRoIceLs8pZBGhLtn51xjZTJ05EGeq1rA==',key_name='tempest-keypair-2136084686',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:22:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9b06c711e582499ab500917d85e27e3c',ramdisk_id='',reservation_id='r-o7pshihd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1527475006',owner_user_name='tempest-ServerActionsTestOtherA-1527475006-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:22:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7e5709393702478dbf0bd566dc94d7fe',uuid=9096405c-eb66-4d27-abbb-e709b767afea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "address": "fa:16:3e:e0:3e:fb", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdb3a4e-ac", "ovs_interfaceid": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.293 253665 DEBUG nova.network.os_vif_util [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converting VIF {"id": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "address": "fa:16:3e:e0:3e:fb", "network": {"id": "0936cc0d-3697-4210-9c23-8f3e8e452e86", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-980255863-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b06c711e582499ab500917d85e27e3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdb3a4e-ac", "ovs_interfaceid": "ecdb3a4e-ac28-4357-9db5-41ebf06a4adc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.294 253665 DEBUG nova.network.os_vif_util [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e0:3e:fb,bridge_name='br-int',has_traffic_filtering=True,id=ecdb3a4e-ac28-4357-9db5-41ebf06a4adc,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecdb3a4e-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.295 253665 DEBUG os_vif [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:3e:fb,bridge_name='br-int',has_traffic_filtering=True,id=ecdb3a4e-ac28-4357-9db5-41ebf06a4adc,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecdb3a4e-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.297 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.297 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapecdb3a4e-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.299 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.301 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.304 253665 INFO os_vif [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:3e:fb,bridge_name='br-int',has_traffic_filtering=True,id=ecdb3a4e-ac28-4357-9db5-41ebf06a4adc,network=Network(0936cc0d-3697-4210-9c23-8f3e8e452e86),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecdb3a4e-ac')#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.619 253665 DEBUG nova.compute.manager [req-0d47d1c6-e964-447a-8821-283ef6bc1460 req-5bbb8cb2-b708-4b21-82e6-ed0a9a53b637 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received event network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.619 253665 DEBUG oslo_concurrency.lockutils [req-0d47d1c6-e964-447a-8821-283ef6bc1460 req-5bbb8cb2-b708-4b21-82e6-ed0a9a53b637 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.619 253665 DEBUG oslo_concurrency.lockutils [req-0d47d1c6-e964-447a-8821-283ef6bc1460 req-5bbb8cb2-b708-4b21-82e6-ed0a9a53b637 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.620 253665 DEBUG oslo_concurrency.lockutils [req-0d47d1c6-e964-447a-8821-283ef6bc1460 req-5bbb8cb2-b708-4b21-82e6-ed0a9a53b637 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "493b70aa-aaa2-4c40-bfea-6eff7ffec547-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.620 253665 DEBUG nova.compute.manager [req-0d47d1c6-e964-447a-8821-283ef6bc1460 req-5bbb8cb2-b708-4b21-82e6-ed0a9a53b637 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] No waiting events found dispatching network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.620 253665 WARNING nova.compute.manager [req-0d47d1c6-e964-447a-8821-283ef6bc1460 req-5bbb8cb2-b708-4b21-82e6-ed0a9a53b637 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received unexpected event network-vif-plugged-702cad91-d4bb-4f0c-b378-7e05e928ad09 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.620 253665 DEBUG nova.compute.manager [req-0d47d1c6-e964-447a-8821-283ef6bc1460 req-5bbb8cb2-b708-4b21-82e6-ed0a9a53b637 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Received event network-vif-deleted-702cad91-d4bb-4f0c-b378-7e05e928ad09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:16 np0005532048 neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86[331421]: [NOTICE]   (331425) : haproxy version is 2.8.14-c23fe91
Nov 22 04:25:16 np0005532048 neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86[331421]: [NOTICE]   (331425) : path to executable is /usr/sbin/haproxy
Nov 22 04:25:16 np0005532048 neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86[331421]: [WARNING]  (331425) : Exiting Master process...
Nov 22 04:25:16 np0005532048 neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86[331421]: [ALERT]    (331425) : Current worker (331427) exited with code 143 (Terminated)
Nov 22 04:25:16 np0005532048 neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86[331421]: [WARNING]  (331425) : All workers exited. Exiting... (0)
Nov 22 04:25:16 np0005532048 systemd[1]: libpod-4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3.scope: Deactivated successfully.
Nov 22 04:25:16 np0005532048 podman[338042]: 2025-11-22 09:25:16.64318171 +0000 UTC m=+0.346353604 container died 4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.796 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "bd717644-36b1-45c9-a56f-b2719ae77e72" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.796 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.811 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.895 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.896 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.903 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:25:16 np0005532048 nova_compute[253661]: 2025-11-22 09:25:16.904 253665 INFO nova.compute.claims [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.018 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:17 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1019a479c4f9b61416483ab6dc0c9e11f221309433f7114f4bdcf465ffa0866d-merged.mount: Deactivated successfully.
Nov 22 04:25:17 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3-userdata-shm.mount: Deactivated successfully.
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.351 253665 DEBUG nova.compute.manager [req-e84d55d0-109e-4fd9-a8d8-22d3f7e22a1b req-1e2e40c4-dedc-40cb-ba1f-0c6ef478656c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Received event network-vif-unplugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.352 253665 DEBUG oslo_concurrency.lockutils [req-e84d55d0-109e-4fd9-a8d8-22d3f7e22a1b req-1e2e40c4-dedc-40cb-ba1f-0c6ef478656c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9096405c-eb66-4d27-abbb-e709b767afea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.352 253665 DEBUG oslo_concurrency.lockutils [req-e84d55d0-109e-4fd9-a8d8-22d3f7e22a1b req-1e2e40c4-dedc-40cb-ba1f-0c6ef478656c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.353 253665 DEBUG oslo_concurrency.lockutils [req-e84d55d0-109e-4fd9-a8d8-22d3f7e22a1b req-1e2e40c4-dedc-40cb-ba1f-0c6ef478656c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.353 253665 DEBUG nova.compute.manager [req-e84d55d0-109e-4fd9-a8d8-22d3f7e22a1b req-1e2e40c4-dedc-40cb-ba1f-0c6ef478656c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] No waiting events found dispatching network-vif-unplugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.354 253665 DEBUG nova.compute.manager [req-e84d55d0-109e-4fd9-a8d8-22d3f7e22a1b req-1e2e40c4-dedc-40cb-ba1f-0c6ef478656c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Received event network-vif-unplugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:25:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:25:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2364463731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.510 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.517 253665 DEBUG nova.compute.provider_tree [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.532 253665 DEBUG nova.scheduler.client.report [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.560 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 200 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 16 KiB/s wr, 189 op/s
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.562 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:25:17 np0005532048 podman[338042]: 2025-11-22 09:25:17.591464926 +0000 UTC m=+1.294636830 container cleanup 4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:25:17 np0005532048 systemd[1]: libpod-conmon-4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3.scope: Deactivated successfully.
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.606 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.606 253665 DEBUG nova.network.neutron [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.621 253665 INFO nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:25:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.641 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:25:17 np0005532048 podman[338112]: 2025-11-22 09:25:17.701467826 +0000 UTC m=+0.084631717 container remove 4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 04:25:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.707 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e16a7a37-6e21-42ab-80c4-767ace086a83]: (4, ('Sat Nov 22 09:25:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86 (4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3)\n4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3\nSat Nov 22 09:25:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86 (4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3)\n4bdcbea8a2b6b6d4f5b9cdcd9aeb109e7b0127937fa4572fd192dad5964dc9e3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.710 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[063a8be2-cefb-4e1a-a50b-dbae8b484737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.711 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0936cc0d-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.715 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.716 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.717 253665 INFO nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Creating image(s)#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.739 253665 DEBUG nova.storage.rbd_utils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image bd717644-36b1-45c9-a56f-b2719ae77e72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:17 np0005532048 kernel: tap0936cc0d-30: left promiscuous mode
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.767 253665 DEBUG nova.storage.rbd_utils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image bd717644-36b1-45c9-a56f-b2719ae77e72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.779 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3e3e73-fcfa-46b9-be10-fdff0e35d8a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.794 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c6be25-4006-4360-bd42-67f38e76c575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.796 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ef06cc0b-f1e1-4385-a707-4b0fcf451e1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.807 253665 DEBUG nova.storage.rbd_utils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image bd717644-36b1-45c9-a56f-b2719ae77e72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.812 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.818 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0da8bd46-ee2e-4498-b293-48ab2e87b943]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622124, 'reachable_time': 31977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338178, 'error': None, 'target': 'ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.821 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0936cc0d-3697-4210-9c23-8f3e8e452e86 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:25:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:17.821 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[f9fbe51f-0fb9-4f7a-8491-b233d48aa448]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:17 np0005532048 systemd[1]: run-netns-ovnmeta\x2d0936cc0d\x2d3697\x2d4210\x2d9c23\x2d8f3e8e452e86.mount: Deactivated successfully.
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.863 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.869 253665 DEBUG nova.policy [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9517b176edf1498d8cf7afc439fc7f04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4426b820f0e4f21a32402b443ca6282', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.910 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.911 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.911 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.912 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.946 253665 DEBUG nova.storage.rbd_utils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image bd717644-36b1-45c9-a56f-b2719ae77e72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:17 np0005532048 nova_compute[253661]: 2025-11-22 09:25:17.952 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 bd717644-36b1-45c9-a56f-b2719ae77e72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:18 np0005532048 nova_compute[253661]: 2025-11-22 09:25:18.338 253665 DEBUG nova.network.neutron [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Successfully created port: ca4c64d8-4f02-4ed0-8099-f18eccb17951 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:25:18 np0005532048 nova_compute[253661]: 2025-11-22 09:25:18.361 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 bd717644-36b1-45c9-a56f-b2719ae77e72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:18 np0005532048 nova_compute[253661]: 2025-11-22 09:25:18.429 253665 DEBUG nova.storage.rbd_utils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] resizing rbd image bd717644-36b1-45c9-a56f-b2719ae77e72_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:25:18 np0005532048 nova_compute[253661]: 2025-11-22 09:25:18.463 253665 INFO nova.virt.libvirt.driver [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Deleting instance files /var/lib/nova/instances/9096405c-eb66-4d27-abbb-e709b767afea_del#033[00m
Nov 22 04:25:18 np0005532048 nova_compute[253661]: 2025-11-22 09:25:18.464 253665 INFO nova.virt.libvirt.driver [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Deletion of /var/lib/nova/instances/9096405c-eb66-4d27-abbb-e709b767afea_del complete#033[00m
Nov 22 04:25:18 np0005532048 nova_compute[253661]: 2025-11-22 09:25:18.519 253665 DEBUG nova.objects.instance [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'migration_context' on Instance uuid bd717644-36b1-45c9-a56f-b2719ae77e72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:25:18 np0005532048 nova_compute[253661]: 2025-11-22 09:25:18.521 253665 INFO nova.compute.manager [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Took 2.69 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:25:18 np0005532048 nova_compute[253661]: 2025-11-22 09:25:18.521 253665 DEBUG oslo.service.loopingcall [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:25:18 np0005532048 nova_compute[253661]: 2025-11-22 09:25:18.521 253665 DEBUG nova.compute.manager [-] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:25:18 np0005532048 nova_compute[253661]: 2025-11-22 09:25:18.521 253665 DEBUG nova.network.neutron [-] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:25:18 np0005532048 nova_compute[253661]: 2025-11-22 09:25:18.528 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:25:18 np0005532048 nova_compute[253661]: 2025-11-22 09:25:18.529 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Ensure instance console log exists: /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:25:18 np0005532048 nova_compute[253661]: 2025-11-22 09:25:18.529 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:18 np0005532048 nova_compute[253661]: 2025-11-22 09:25:18.529 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:18 np0005532048 nova_compute[253661]: 2025-11-22 09:25:18.530 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.114 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.468 253665 DEBUG nova.compute.manager [req-c9469f0c-261f-4f81-9ab6-51fdfc7717bb req-aacaa8cc-8818-4f18-be03-4411aca6180d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Received event network-vif-plugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.469 253665 DEBUG oslo_concurrency.lockutils [req-c9469f0c-261f-4f81-9ab6-51fdfc7717bb req-aacaa8cc-8818-4f18-be03-4411aca6180d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9096405c-eb66-4d27-abbb-e709b767afea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.469 253665 DEBUG oslo_concurrency.lockutils [req-c9469f0c-261f-4f81-9ab6-51fdfc7717bb req-aacaa8cc-8818-4f18-be03-4411aca6180d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.469 253665 DEBUG oslo_concurrency.lockutils [req-c9469f0c-261f-4f81-9ab6-51fdfc7717bb req-aacaa8cc-8818-4f18-be03-4411aca6180d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.470 253665 DEBUG nova.compute.manager [req-c9469f0c-261f-4f81-9ab6-51fdfc7717bb req-aacaa8cc-8818-4f18-be03-4411aca6180d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] No waiting events found dispatching network-vif-plugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.470 253665 WARNING nova.compute.manager [req-c9469f0c-261f-4f81-9ab6-51fdfc7717bb req-aacaa8cc-8818-4f18-be03-4411aca6180d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Received unexpected event network-vif-plugged-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:25:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 190 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 369 KiB/s wr, 159 op/s
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.665 253665 DEBUG nova.network.neutron [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Successfully updated port: ca4c64d8-4f02-4ed0-8099-f18eccb17951 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.683 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "refresh_cache-bd717644-36b1-45c9-a56f-b2719ae77e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.684 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquired lock "refresh_cache-bd717644-36b1-45c9-a56f-b2719ae77e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.684 253665 DEBUG nova.network.neutron [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.715 253665 DEBUG nova.network.neutron [-] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.747 253665 INFO nova.compute.manager [-] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Took 1.23 seconds to deallocate network for instance.#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.779 253665 DEBUG nova.compute.manager [req-faed5a7c-06e9-4ba4-9804-31341b1d8a3d req-279869df-7c59-41f5-ae80-d415c98e4163 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received event network-changed-ca4c64d8-4f02-4ed0-8099-f18eccb17951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.780 253665 DEBUG nova.compute.manager [req-faed5a7c-06e9-4ba4-9804-31341b1d8a3d req-279869df-7c59-41f5-ae80-d415c98e4163 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Refreshing instance network info cache due to event network-changed-ca4c64d8-4f02-4ed0-8099-f18eccb17951. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.780 253665 DEBUG oslo_concurrency.lockutils [req-faed5a7c-06e9-4ba4-9804-31341b1d8a3d req-279869df-7c59-41f5-ae80-d415c98e4163 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-bd717644-36b1-45c9-a56f-b2719ae77e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.790 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.791 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.869 253665 DEBUG nova.network.neutron [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:25:19 np0005532048 nova_compute[253661]: 2025-11-22 09:25:19.874 253665 DEBUG oslo_concurrency.processutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:25:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3487432762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:25:20 np0005532048 nova_compute[253661]: 2025-11-22 09:25:20.358 253665 DEBUG oslo_concurrency.processutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:20 np0005532048 nova_compute[253661]: 2025-11-22 09:25:20.367 253665 DEBUG nova.compute.provider_tree [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:25:20 np0005532048 nova_compute[253661]: 2025-11-22 09:25:20.381 253665 DEBUG nova.scheduler.client.report [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:25:20 np0005532048 nova_compute[253661]: 2025-11-22 09:25:20.399 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:20 np0005532048 nova_compute[253661]: 2025-11-22 09:25:20.426 253665 INFO nova.scheduler.client.report [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Deleted allocations for instance 9096405c-eb66-4d27-abbb-e709b767afea#033[00m
Nov 22 04:25:20 np0005532048 nova_compute[253661]: 2025-11-22 09:25:20.484 253665 DEBUG oslo_concurrency.lockutils [None req-f43b8f19-dcd1-4241-86dc-946b460783e8 7e5709393702478dbf0bd566dc94d7fe 9b06c711e582499ab500917d85e27e3c - - default default] Lock "9096405c-eb66-4d27-abbb-e709b767afea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:20 np0005532048 nova_compute[253661]: 2025-11-22 09:25:20.515 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.184 253665 DEBUG nova.network.neutron [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Updating instance_info_cache with network_info: [{"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.203 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Releasing lock "refresh_cache-bd717644-36b1-45c9-a56f-b2719ae77e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.204 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Instance network_info: |[{"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.205 253665 DEBUG oslo_concurrency.lockutils [req-faed5a7c-06e9-4ba4-9804-31341b1d8a3d req-279869df-7c59-41f5-ae80-d415c98e4163 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-bd717644-36b1-45c9-a56f-b2719ae77e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.206 253665 DEBUG nova.network.neutron [req-faed5a7c-06e9-4ba4-9804-31341b1d8a3d req-279869df-7c59-41f5-ae80-d415c98e4163 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Refreshing network info cache for port ca4c64d8-4f02-4ed0-8099-f18eccb17951 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.210 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Start _get_guest_xml network_info=[{"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.217 253665 WARNING nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.225 253665 DEBUG nova.virt.libvirt.host [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.226 253665 DEBUG nova.virt.libvirt.host [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.229 253665 DEBUG nova.virt.libvirt.host [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.230 253665 DEBUG nova.virt.libvirt.host [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.230 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.230 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.231 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.231 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.231 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.232 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.232 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.232 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.232 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.233 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.233 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.233 253665 DEBUG nova.virt.hardware [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.236 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.300 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.362 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803506.3607337, 672288f2-2f9b-4643-9ebf-a949ad316298 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.363 253665 INFO nova.compute.manager [-] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.381 253665 DEBUG nova.compute.manager [None req-c84b47de-d97d-4e71-8a5b-1b6926e399b0 - - - - - -] [instance: 672288f2-2f9b-4643-9ebf-a949ad316298] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.557 253665 DEBUG nova.compute.manager [req-037d40a9-371a-4163-a059-29566d7b2bc0 req-ef9fd099-0c29-4404-a3a6-c9727fbb69b3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Received event network-vif-deleted-ecdb3a4e-ac28-4357-9db5-41ebf06a4adc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 190 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 369 KiB/s wr, 57 op/s
Nov 22 04:25:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:25:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1783269925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.716 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.745 253665 DEBUG nova.storage.rbd_utils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image bd717644-36b1-45c9-a56f-b2719ae77e72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:21 np0005532048 nova_compute[253661]: 2025-11-22 09:25:21.752 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:25:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3091257609' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.240 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.242 253665 DEBUG nova.virt.libvirt.vif [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:25:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-219797360',display_name='tempest-ServersTestJSON-server-219797360',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-219797360',id=83,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-pl500xqd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},ta
gs=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:25:17Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=bd717644-36b1-45c9-a56f-b2719ae77e72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.243 253665 DEBUG nova.network.os_vif_util [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.244 253665 DEBUG nova.network.os_vif_util [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:36:77,bridge_name='br-int',has_traffic_filtering=True,id=ca4c64d8-4f02-4ed0-8099-f18eccb17951,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4c64d8-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.245 253665 DEBUG nova.objects.instance [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'pci_devices' on Instance uuid bd717644-36b1-45c9-a56f-b2719ae77e72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.259 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  <uuid>bd717644-36b1-45c9-a56f-b2719ae77e72</uuid>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  <name>instance-00000053</name>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersTestJSON-server-219797360</nova:name>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:25:21</nova:creationTime>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:25:22 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:        <nova:user uuid="9517b176edf1498d8cf7afc439fc7f04">tempest-ServersTestJSON-1454009974-project-member</nova:user>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:        <nova:project uuid="b4426b820f0e4f21a32402b443ca6282">tempest-ServersTestJSON-1454009974</nova:project>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:        <nova:port uuid="ca4c64d8-4f02-4ed0-8099-f18eccb17951">
Nov 22 04:25:22 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <entry name="serial">bd717644-36b1-45c9-a56f-b2719ae77e72</entry>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <entry name="uuid">bd717644-36b1-45c9-a56f-b2719ae77e72</entry>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/bd717644-36b1-45c9-a56f-b2719ae77e72_disk">
Nov 22 04:25:22 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:25:22 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/bd717644-36b1-45c9-a56f-b2719ae77e72_disk.config">
Nov 22 04:25:22 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:25:22 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:f3:36:77"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <target dev="tapca4c64d8-4f"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72/console.log" append="off"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:25:22 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:25:22 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:25:22 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:25:22 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.260 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Preparing to wait for external event network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.261 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.261 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.261 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.262 253665 DEBUG nova.virt.libvirt.vif [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:25:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-219797360',display_name='tempest-ServersTestJSON-server-219797360',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-219797360',id=83,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-pl500xqd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-m
ember'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:25:17Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=bd717644-36b1-45c9-a56f-b2719ae77e72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.263 253665 DEBUG nova.network.os_vif_util [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.263 253665 DEBUG nova.network.os_vif_util [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:36:77,bridge_name='br-int',has_traffic_filtering=True,id=ca4c64d8-4f02-4ed0-8099-f18eccb17951,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4c64d8-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.264 253665 DEBUG os_vif [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:36:77,bridge_name='br-int',has_traffic_filtering=True,id=ca4c64d8-4f02-4ed0-8099-f18eccb17951,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4c64d8-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.265 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.265 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.266 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.270 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.271 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca4c64d8-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.271 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapca4c64d8-4f, col_values=(('external_ids', {'iface-id': 'ca4c64d8-4f02-4ed0-8099-f18eccb17951', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:36:77', 'vm-uuid': 'bd717644-36b1-45c9-a56f-b2719ae77e72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.273 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:22 np0005532048 NetworkManager[48920]: <info>  [1763803522.2743] manager: (tapca4c64d8-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.275 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.281 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.284 253665 INFO os_vif [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:36:77,bridge_name='br-int',has_traffic_filtering=True,id=ca4c64d8-4f02-4ed0-8099-f18eccb17951,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4c64d8-4f')
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.336 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.336 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.336 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No VIF found with MAC fa:16:3e:f3:36:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.337 253665 INFO nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Using config drive
Nov 22 04:25:22 np0005532048 nova_compute[253661]: 2025-11-22 09:25:22.369 253665 DEBUG nova.storage.rbd_utils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image bd717644-36b1-45c9-a56f-b2719ae77e72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:25:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:25:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:25:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:25:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:25:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:25:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:25:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:25:23 np0005532048 nova_compute[253661]: 2025-11-22 09:25:23.024 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803508.0138268, 493b70aa-aaa2-4c40-bfea-6eff7ffec547 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:25:23 np0005532048 nova_compute[253661]: 2025-11-22 09:25:23.024 253665 INFO nova.compute.manager [-] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] VM Stopped (Lifecycle Event)
Nov 22 04:25:23 np0005532048 nova_compute[253661]: 2025-11-22 09:25:23.039 253665 DEBUG nova.compute.manager [None req-b7756356-006d-422f-ab11-27528b42c9c8 - - - - - -] [instance: 493b70aa-aaa2-4c40-bfea-6eff7ffec547] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:25:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 167 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 1.8 MiB/s wr, 116 op/s
Nov 22 04:25:23 np0005532048 nova_compute[253661]: 2025-11-22 09:25:23.604 253665 INFO nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Creating config drive at /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72/disk.config
Nov 22 04:25:23 np0005532048 nova_compute[253661]: 2025-11-22 09:25:23.611 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3okub307 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:25:23 np0005532048 nova_compute[253661]: 2025-11-22 09:25:23.756 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3okub307" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:25:23 np0005532048 nova_compute[253661]: 2025-11-22 09:25:23.784 253665 DEBUG nova.storage.rbd_utils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image bd717644-36b1-45c9-a56f-b2719ae77e72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:25:23 np0005532048 nova_compute[253661]: 2025-11-22 09:25:23.788 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72/disk.config bd717644-36b1-45c9-a56f-b2719ae77e72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:25:24 np0005532048 nova_compute[253661]: 2025-11-22 09:25:24.643 253665 DEBUG nova.network.neutron [req-faed5a7c-06e9-4ba4-9804-31341b1d8a3d req-279869df-7c59-41f5-ae80-d415c98e4163 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Updated VIF entry in instance network info cache for port ca4c64d8-4f02-4ed0-8099-f18eccb17951. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:25:24 np0005532048 nova_compute[253661]: 2025-11-22 09:25:24.644 253665 DEBUG nova.network.neutron [req-faed5a7c-06e9-4ba4-9804-31341b1d8a3d req-279869df-7c59-41f5-ae80-d415c98e4163 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Updating instance_info_cache with network_info: [{"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:25:24 np0005532048 nova_compute[253661]: 2025-11-22 09:25:24.659 253665 DEBUG oslo_concurrency.lockutils [req-faed5a7c-06e9-4ba4-9804-31341b1d8a3d req-279869df-7c59-41f5-ae80-d415c98e4163 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-bd717644-36b1-45c9-a56f-b2719ae77e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:25:24 np0005532048 nova_compute[253661]: 2025-11-22 09:25:24.844 253665 DEBUG oslo_concurrency.processutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72/disk.config bd717644-36b1-45c9-a56f-b2719ae77e72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:25:24 np0005532048 nova_compute[253661]: 2025-11-22 09:25:24.845 253665 INFO nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Deleting local config drive /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72/disk.config because it was imported into RBD.
Nov 22 04:25:24 np0005532048 kernel: tapca4c64d8-4f: entered promiscuous mode
Nov 22 04:25:24 np0005532048 NetworkManager[48920]: <info>  [1763803524.8946] manager: (tapca4c64d8-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/356)
Nov 22 04:25:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:24Z|00846|binding|INFO|Claiming lport ca4c64d8-4f02-4ed0-8099-f18eccb17951 for this chassis.
Nov 22 04:25:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:24Z|00847|binding|INFO|ca4c64d8-4f02-4ed0-8099-f18eccb17951: Claiming fa:16:3e:f3:36:77 10.100.0.7
Nov 22 04:25:24 np0005532048 nova_compute[253661]: 2025-11-22 09:25:24.896 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:24.903 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:36:77 10.100.0.7'], port_security=['fa:16:3e:f3:36:77 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bd717644-36b1-45c9-a56f-b2719ae77e72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ca4c64d8-4f02-4ed0-8099-f18eccb17951) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:25:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:24.905 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ca4c64d8-4f02-4ed0-8099-f18eccb17951 in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 bound to our chassis
Nov 22 04:25:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:24.906 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556
Nov 22 04:25:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:24Z|00848|binding|INFO|Setting lport ca4c64d8-4f02-4ed0-8099-f18eccb17951 ovn-installed in OVS
Nov 22 04:25:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:24Z|00849|binding|INFO|Setting lport ca4c64d8-4f02-4ed0-8099-f18eccb17951 up in Southbound
Nov 22 04:25:24 np0005532048 nova_compute[253661]: 2025-11-22 09:25:24.917 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:24 np0005532048 nova_compute[253661]: 2025-11-22 09:25:24.919 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:24 np0005532048 systemd-udevd[338451]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:25:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:24.926 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4d379143-2429-41fc-999c-e7d41eb98d7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:25:24 np0005532048 systemd-machined[215941]: New machine qemu-101-instance-00000053.
Nov 22 04:25:24 np0005532048 NetworkManager[48920]: <info>  [1763803524.9431] device (tapca4c64d8-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:25:24 np0005532048 NetworkManager[48920]: <info>  [1763803524.9438] device (tapca4c64d8-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:25:24 np0005532048 systemd[1]: Started Virtual Machine qemu-101-instance-00000053.
Nov 22 04:25:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:24.961 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[50cd7b45-b7a4-4598-8243-f7050777db33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:25:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:24.965 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[83005afc-8a02-4137-bb9c-a25ef47daf00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:25:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:25.000 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bb29b5e8-1cbb-470c-8d5a-38bd8b4997d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:25:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:25.024 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad477a0-8e9e-45e0-bd26-a99dc589c214]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338465, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:25.049 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[57a22df3-ed31-4864-91fd-b58657287a70]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338467, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338467, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:25.051 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:25:25 np0005532048 nova_compute[253661]: 2025-11-22 09:25:25.053 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:25 np0005532048 nova_compute[253661]: 2025-11-22 09:25:25.054 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:25.056 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:25:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:25.057 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:25:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:25.057 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:25:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:25.058 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:25:25 np0005532048 nova_compute[253661]: 2025-11-22 09:25:25.386 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:25 np0005532048 nova_compute[253661]: 2025-11-22 09:25:25.517 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 167 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 105 op/s
Nov 22 04:25:25 np0005532048 nova_compute[253661]: 2025-11-22 09:25:25.893 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803525.8927553, bd717644-36b1-45c9-a56f-b2719ae77e72 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:25:25 np0005532048 nova_compute[253661]: 2025-11-22 09:25:25.895 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] VM Started (Lifecycle Event)
Nov 22 04:25:25 np0005532048 nova_compute[253661]: 2025-11-22 09:25:25.914 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:25:25 np0005532048 nova_compute[253661]: 2025-11-22 09:25:25.918 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803525.8929062, bd717644-36b1-45c9-a56f-b2719ae77e72 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:25:25 np0005532048 nova_compute[253661]: 2025-11-22 09:25:25.919 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] VM Paused (Lifecycle Event)
Nov 22 04:25:25 np0005532048 nova_compute[253661]: 2025-11-22 09:25:25.937 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:25:25 np0005532048 nova_compute[253661]: 2025-11-22 09:25:25.942 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:25:25 np0005532048 nova_compute[253661]: 2025-11-22 09:25:25.960 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:25:26 np0005532048 nova_compute[253661]: 2025-11-22 09:25:26.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:25:27 np0005532048 nova_compute[253661]: 2025-11-22 09:25:27.275 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 167 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 111 op/s
Nov 22 04:25:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:25:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:27.968 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:25:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:27.968 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:25:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:27.969 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:25:28 np0005532048 nova_compute[253661]: 2025-11-22 09:25:28.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:25:28 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:28Z|00850|binding|INFO|Releasing lport d9f2322f-7849-4ad7-8b1c-0b79523c4fde from this chassis (sb_readonly=0)
Nov 22 04:25:28 np0005532048 nova_compute[253661]: 2025-11-22 09:25:28.563 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:28 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:28Z|00851|binding|INFO|Releasing lport d9f2322f-7849-4ad7-8b1c-0b79523c4fde from this chassis (sb_readonly=0)
Nov 22 04:25:28 np0005532048 nova_compute[253661]: 2025-11-22 09:25:28.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 167 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 1.8 MiB/s wr, 122 op/s
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.663 253665 DEBUG nova.compute.manager [req-8d264bc7-6479-4be8-bdb9-05aa46fc6086 req-c6906f39-14e4-4750-9a6d-315e979e8c16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received event network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.664 253665 DEBUG oslo_concurrency.lockutils [req-8d264bc7-6479-4be8-bdb9-05aa46fc6086 req-c6906f39-14e4-4750-9a6d-315e979e8c16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.664 253665 DEBUG oslo_concurrency.lockutils [req-8d264bc7-6479-4be8-bdb9-05aa46fc6086 req-c6906f39-14e4-4750-9a6d-315e979e8c16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.664 253665 DEBUG oslo_concurrency.lockutils [req-8d264bc7-6479-4be8-bdb9-05aa46fc6086 req-c6906f39-14e4-4750-9a6d-315e979e8c16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.664 253665 DEBUG nova.compute.manager [req-8d264bc7-6479-4be8-bdb9-05aa46fc6086 req-c6906f39-14e4-4750-9a6d-315e979e8c16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Processing event network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.665 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.668 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803529.6685505, bd717644-36b1-45c9-a56f-b2719ae77e72 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.669 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] VM Resumed (Lifecycle Event)
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.671 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.675 253665 INFO nova.virt.libvirt.driver [-] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Instance spawned successfully.
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.675 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.703 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.712 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.717 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.718 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.719 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.719 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.719 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.720 253665 DEBUG nova.virt.libvirt.driver [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.744 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.782 253665 INFO nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Took 12.07 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.783 253665 DEBUG nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.795 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.796 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.813 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.864 253665 INFO nova.compute.manager [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Took 12.99 seconds to build instance.#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.877 253665 DEBUG oslo_concurrency.lockutils [None req-c5a080b4-a14d-4f98-9a5b-795f21361eba 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.894 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.894 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.902 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:25:29 np0005532048 nova_compute[253661]: 2025-11-22 09:25:29.902 253665 INFO nova.compute.claims [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.058 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:25:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2439071742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.607 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.608 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.608 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.620 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.625 253665 DEBUG nova.compute.provider_tree [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.638 253665 DEBUG nova.scheduler.client.report [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.655 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.656 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.696 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.697 253665 DEBUG nova.network.neutron [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.711 253665 INFO nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.726 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.803 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.805 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.805 253665 INFO nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Creating image(s)#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.834 253665 DEBUG nova.storage.rbd_utils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] rbd image 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.863 253665 DEBUG nova.storage.rbd_utils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] rbd image 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.891 253665 DEBUG nova.storage.rbd_utils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] rbd image 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.898 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.957 253665 DEBUG nova.policy [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '65c2cce1aec04c50ab2c62bf0b87b756', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '40dd9bb14a354dc591ef4aa8f9ab41e4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.990 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.991 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.991 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:30 np0005532048 nova_compute[253661]: 2025-11-22 09:25:30.991 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:31 np0005532048 nova_compute[253661]: 2025-11-22 09:25:31.030 253665 DEBUG nova.storage.rbd_utils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] rbd image 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:31 np0005532048 nova_compute[253661]: 2025-11-22 09:25:31.040 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:31 np0005532048 nova_compute[253661]: 2025-11-22 09:25:31.275 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803516.2730086, 9096405c-eb66-4d27-abbb-e709b767afea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:31 np0005532048 nova_compute[253661]: 2025-11-22 09:25:31.276 253665 INFO nova.compute.manager [-] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:25:31 np0005532048 nova_compute[253661]: 2025-11-22 09:25:31.295 253665 DEBUG nova.compute.manager [None req-9dd33d35-a372-4fe5-adb8-a4c8f3ac1587 - - - - - -] [instance: 9096405c-eb66-4d27-abbb-e709b767afea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 167 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.4 MiB/s wr, 111 op/s
Nov 22 04:25:31 np0005532048 nova_compute[253661]: 2025-11-22 09:25:31.810 253665 DEBUG nova.network.neutron [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Successfully created port: 99300516-c832-4292-af8c-850f873b6dda _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:25:31 np0005532048 nova_compute[253661]: 2025-11-22 09:25:31.891 253665 DEBUG nova.compute.manager [req-daa9863f-daf2-4a0a-b3d5-9421d306507b req-29aa7b15-43f4-4c50-874a-e7c913331153 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received event network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:31 np0005532048 nova_compute[253661]: 2025-11-22 09:25:31.892 253665 DEBUG oslo_concurrency.lockutils [req-daa9863f-daf2-4a0a-b3d5-9421d306507b req-29aa7b15-43f4-4c50-874a-e7c913331153 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:31 np0005532048 nova_compute[253661]: 2025-11-22 09:25:31.892 253665 DEBUG oslo_concurrency.lockutils [req-daa9863f-daf2-4a0a-b3d5-9421d306507b req-29aa7b15-43f4-4c50-874a-e7c913331153 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:31 np0005532048 nova_compute[253661]: 2025-11-22 09:25:31.892 253665 DEBUG oslo_concurrency.lockutils [req-daa9863f-daf2-4a0a-b3d5-9421d306507b req-29aa7b15-43f4-4c50-874a-e7c913331153 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:31 np0005532048 nova_compute[253661]: 2025-11-22 09:25:31.892 253665 DEBUG nova.compute.manager [req-daa9863f-daf2-4a0a-b3d5-9421d306507b req-29aa7b15-43f4-4c50-874a-e7c913331153 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] No waiting events found dispatching network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:25:31 np0005532048 nova_compute[253661]: 2025-11-22 09:25:31.893 253665 WARNING nova.compute.manager [req-daa9863f-daf2-4a0a-b3d5-9421d306507b req-29aa7b15-43f4-4c50-874a-e7c913331153 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received unexpected event network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:25:32 np0005532048 nova_compute[253661]: 2025-11-22 09:25:32.091 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:32 np0005532048 nova_compute[253661]: 2025-11-22 09:25:32.154 253665 DEBUG nova.storage.rbd_utils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] resizing rbd image 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:25:32 np0005532048 nova_compute[253661]: 2025-11-22 09:25:32.279 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.351776) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803532351829, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1513, "num_deletes": 253, "total_data_size": 2232418, "memory_usage": 2274736, "flush_reason": "Manual Compaction"}
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803532369968, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2197897, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36284, "largest_seqno": 37796, "table_properties": {"data_size": 2190860, "index_size": 4044, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15448, "raw_average_key_size": 20, "raw_value_size": 2176469, "raw_average_value_size": 2875, "num_data_blocks": 180, "num_entries": 757, "num_filter_entries": 757, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803393, "oldest_key_time": 1763803393, "file_creation_time": 1763803532, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 18327 microseconds, and 5814 cpu microseconds.
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.370105) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2197897 bytes OK
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.370155) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.372505) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.372520) EVENT_LOG_v1 {"time_micros": 1763803532372515, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.372549) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2225678, prev total WAL file size 2225678, number of live WAL files 2.
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.374074) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2146KB)], [80(7874KB)]
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803532374112, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 10261564, "oldest_snapshot_seqno": -1}
Nov 22 04:25:32 np0005532048 nova_compute[253661]: 2025-11-22 09:25:32.417 253665 DEBUG nova.objects.instance [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lazy-loading 'migration_context' on Instance uuid 0497bf95-95d6-40fb-8a33-aa3ea54bc542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:25:32 np0005532048 nova_compute[253661]: 2025-11-22 09:25:32.432 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:25:32 np0005532048 nova_compute[253661]: 2025-11-22 09:25:32.432 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Ensure instance console log exists: /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:25:32 np0005532048 nova_compute[253661]: 2025-11-22 09:25:32.433 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:32 np0005532048 nova_compute[253661]: 2025-11-22 09:25:32.433 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:32 np0005532048 nova_compute[253661]: 2025-11-22 09:25:32.433 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6264 keys, 8544304 bytes, temperature: kUnknown
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803532449847, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8544304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8502931, "index_size": 24623, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 158352, "raw_average_key_size": 25, "raw_value_size": 8390998, "raw_average_value_size": 1339, "num_data_blocks": 994, "num_entries": 6264, "num_filter_entries": 6264, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803532, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.450266) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8544304 bytes
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.452016) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.1 rd, 112.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.7 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(8.6) write-amplify(3.9) OK, records in: 6785, records dropped: 521 output_compression: NoCompression
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.452033) EVENT_LOG_v1 {"time_micros": 1763803532452024, "job": 46, "event": "compaction_finished", "compaction_time_micros": 75963, "compaction_time_cpu_micros": 24153, "output_level": 6, "num_output_files": 1, "total_output_size": 8544304, "num_input_records": 6785, "num_output_records": 6264, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803532452863, "job": 46, "event": "table_file_deletion", "file_number": 82}
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803532454374, "job": 46, "event": "table_file_deletion", "file_number": 80}
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.373682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.454590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.454595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.454597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.454599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:25:32.454600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:25:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:25:33 np0005532048 podman[338699]: 2025-11-22 09:25:33.387936395 +0000 UTC m=+0.073036847 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:25:33 np0005532048 podman[338700]: 2025-11-22 09:25:33.396281376 +0000 UTC m=+0.079759868 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 22 04:25:33 np0005532048 nova_compute[253661]: 2025-11-22 09:25:33.476 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Updating instance_info_cache with network_info: [{"id": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "address": "fa:16:3e:2b:9d:63", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdedad4aa-19", "ovs_interfaceid": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:25:33 np0005532048 nova_compute[253661]: 2025-11-22 09:25:33.521 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:25:33 np0005532048 nova_compute[253661]: 2025-11-22 09:25:33.522 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:25:33 np0005532048 nova_compute[253661]: 2025-11-22 09:25:33.522 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:25:33 np0005532048 nova_compute[253661]: 2025-11-22 09:25:33.522 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:25:33 np0005532048 nova_compute[253661]: 2025-11-22 09:25:33.523 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:25:33 np0005532048 nova_compute[253661]: 2025-11-22 09:25:33.557 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:33 np0005532048 nova_compute[253661]: 2025-11-22 09:25:33.558 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:33 np0005532048 nova_compute[253661]: 2025-11-22 09:25:33.558 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:33 np0005532048 nova_compute[253661]: 2025-11-22 09:25:33.558 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:25:33 np0005532048 nova_compute[253661]: 2025-11-22 09:25:33.559 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 178 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 167 op/s
Nov 22 04:25:33 np0005532048 nova_compute[253661]: 2025-11-22 09:25:33.732 253665 DEBUG nova.network.neutron [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Successfully updated port: 99300516-c832-4292-af8c-850f873b6dda _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:25:33 np0005532048 nova_compute[253661]: 2025-11-22 09:25:33.757 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "refresh_cache-0497bf95-95d6-40fb-8a33-aa3ea54bc542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:25:33 np0005532048 nova_compute[253661]: 2025-11-22 09:25:33.757 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquired lock "refresh_cache-0497bf95-95d6-40fb-8a33-aa3ea54bc542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:25:33 np0005532048 nova_compute[253661]: 2025-11-22 09:25:33.758 253665 DEBUG nova.network.neutron [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:25:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.031 253665 DEBUG nova.compute.manager [req-ca5f52ce-db79-45f7-94db-7367552f95f5 req-babacebe-df33-4aa2-aa6f-fc853ea05ffd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Received event network-changed-99300516-c832-4292-af8c-850f873b6dda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1105432811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.033 253665 DEBUG nova.compute.manager [req-ca5f52ce-db79-45f7-94db-7367552f95f5 req-babacebe-df33-4aa2-aa6f-fc853ea05ffd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Refreshing instance network info cache due to event network-changed-99300516-c832-4292-af8c-850f873b6dda. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.033 253665 DEBUG oslo_concurrency.lockutils [req-ca5f52ce-db79-45f7-94db-7367552f95f5 req-babacebe-df33-4aa2-aa6f-fc853ea05ffd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-0497bf95-95d6-40fb-8a33-aa3ea54bc542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.054 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.072 253665 DEBUG nova.network.neutron [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.130 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.130 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.134 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.134 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.325 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "23a926e6-c6a7-4e40-82d1-654f68980549" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.326 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.341 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.357 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.359 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3587MB free_disk=59.92188262939453GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.359 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.360 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.412 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.434 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance e0b05f62-6966-4bf3-aee5-e4d2137a6cfc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.434 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance bd717644-36b1-45c9-a56f-b2719ae77e72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.434 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 0497bf95-95d6-40fb-8a33-aa3ea54bc542 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.450 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 23a926e6-c6a7-4e40-82d1-654f68980549 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.451 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.451 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:25:34 np0005532048 nova_compute[253661]: 2025-11-22 09:25:34.570 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:25:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1891505159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.078 253665 DEBUG nova.network.neutron [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Updating instance_info_cache with network_info: [{"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.090 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.097 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.101 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Releasing lock "refresh_cache-0497bf95-95d6-40fb-8a33-aa3ea54bc542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.102 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Instance network_info: |[{"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.103 253665 DEBUG oslo_concurrency.lockutils [req-ca5f52ce-db79-45f7-94db-7367552f95f5 req-babacebe-df33-4aa2-aa6f-fc853ea05ffd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-0497bf95-95d6-40fb-8a33-aa3ea54bc542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.103 253665 DEBUG nova.network.neutron [req-ca5f52ce-db79-45f7-94db-7367552f95f5 req-babacebe-df33-4aa2-aa6f-fc853ea05ffd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Refreshing network info cache for port 99300516-c832-4292-af8c-850f873b6dda _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.108 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Start _get_guest_xml network_info=[{"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.111 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.124 253665 WARNING nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.130 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.131 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.132 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.133 253665 DEBUG nova.virt.libvirt.host [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.133 253665 DEBUG nova.virt.libvirt.host [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.140 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.141 253665 INFO nova.compute.claims [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.145 253665 DEBUG nova.virt.libvirt.host [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.145 253665 DEBUG nova.virt.libvirt.host [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.146 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.146 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.146 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.147 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.147 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.147 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.147 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.148 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.148 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.148 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.148 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.149 253665 DEBUG nova.virt.hardware [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.153 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.312 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.522 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 189 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 134 op/s
Nov 22 04:25:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:25:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1500485100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.660 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.685 253665 DEBUG nova.storage.rbd_utils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] rbd image 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.690 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:25:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1790578603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:25:35 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.780 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.788 253665 DEBUG nova.compute.provider_tree [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.801 253665 DEBUG nova.scheduler.client.report [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.824 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.825 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.837 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.838 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.838 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.863 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.864 253665 DEBUG nova.network.neutron [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.879 253665 INFO nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.894 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.964 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:25:35 np0005532048 nova_compute[253661]: 2025-11-22 09:25:35.966 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:25:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:25:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3701533542' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.232 253665 INFO nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Creating image(s)#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.262 253665 DEBUG nova.storage.rbd_utils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 23a926e6-c6a7-4e40-82d1-654f68980549_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.293 253665 DEBUG nova.storage.rbd_utils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 23a926e6-c6a7-4e40-82d1-654f68980549_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.320 253665 DEBUG nova.storage.rbd_utils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 23a926e6-c6a7-4e40-82d1-654f68980549_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.325 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.370 253665 DEBUG nova.policy [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9517b176edf1498d8cf7afc439fc7f04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4426b820f0e4f21a32402b443ca6282', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.377 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.687s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.379 253665 DEBUG nova.virt.libvirt.vif [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:25:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-690745541',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-690745541',id=84,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='40dd9bb14a354dc591ef4aa8f9ab41e4',ramdisk_id='',reservation_id='r-a4vp5xxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1329496417',owner_user_name='tempest-ServerTagsTestJSON-1329496417-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:25:30Z,user_data=None,user_id='65c2cce1aec04c50ab2c62bf0b87b756',uuid=0497bf95-95d6-40fb-8a33-aa3ea54bc542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.380 253665 DEBUG nova.network.os_vif_util [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Converting VIF {"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.381 253665 DEBUG nova.network.os_vif_util [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:c9:52,bridge_name='br-int',has_traffic_filtering=True,id=99300516-c832-4292-af8c-850f873b6dda,network=Network(c908d88a-c35e-45c0-9b18-f3ea9ab34dfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99300516-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.382 253665 DEBUG nova.objects.instance [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0497bf95-95d6-40fb-8a33-aa3ea54bc542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.397 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  <uuid>0497bf95-95d6-40fb-8a33-aa3ea54bc542</uuid>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  <name>instance-00000054</name>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerTagsTestJSON-server-690745541</nova:name>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:25:35</nova:creationTime>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:25:36 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:        <nova:user uuid="65c2cce1aec04c50ab2c62bf0b87b756">tempest-ServerTagsTestJSON-1329496417-project-member</nova:user>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:        <nova:project uuid="40dd9bb14a354dc591ef4aa8f9ab41e4">tempest-ServerTagsTestJSON-1329496417</nova:project>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:        <nova:port uuid="99300516-c832-4292-af8c-850f873b6dda">
Nov 22 04:25:36 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <entry name="serial">0497bf95-95d6-40fb-8a33-aa3ea54bc542</entry>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <entry name="uuid">0497bf95-95d6-40fb-8a33-aa3ea54bc542</entry>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk">
Nov 22 04:25:36 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:25:36 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk.config">
Nov 22 04:25:36 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:25:36 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:38:c9:52"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <target dev="tap99300516-c8"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542/console.log" append="off"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:25:36 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:25:36 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:25:36 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:25:36 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.405 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Preparing to wait for external event network-vif-plugged-99300516-c832-4292-af8c-850f873b6dda prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.406 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.406 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.407 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.408 253665 DEBUG nova.virt.libvirt.vif [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:25:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-690745541',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-690745541',id=84,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='40dd9bb14a354dc591ef4aa8f9ab41e4',ramdisk_id='',reservation_id='r-a4vp5xxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1329496417',owner_user_name='tempest-ServerTagsTestJSON-1329496417-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:25:30Z,user_data=None,user_id='65c2cce1aec04c50ab2c62bf0b87b756',uuid=0497bf95-95d6-40fb-8a33-aa3ea54bc542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.408 253665 DEBUG nova.network.os_vif_util [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Converting VIF {"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.409 253665 DEBUG nova.network.os_vif_util [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:c9:52,bridge_name='br-int',has_traffic_filtering=True,id=99300516-c832-4292-af8c-850f873b6dda,network=Network(c908d88a-c35e-45c0-9b18-f3ea9ab34dfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99300516-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.409 253665 DEBUG os_vif [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:c9:52,bridge_name='br-int',has_traffic_filtering=True,id=99300516-c832-4292-af8c-850f873b6dda,network=Network(c908d88a-c35e-45c0-9b18-f3ea9ab34dfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99300516-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.410 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.411 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.411 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.416 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.416 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99300516-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.417 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap99300516-c8, col_values=(('external_ids', {'iface-id': '99300516-c832-4292-af8c-850f873b6dda', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:c9:52', 'vm-uuid': '0497bf95-95d6-40fb-8a33-aa3ea54bc542'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.418 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.419 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:36 np0005532048 NetworkManager[48920]: <info>  [1763803536.4197] manager: (tap99300516-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.419 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.420 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.445 253665 DEBUG nova.storage.rbd_utils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 23a926e6-c6a7-4e40-82d1-654f68980549_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.470 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 23a926e6-c6a7-4e40-82d1-654f68980549_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.525 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.526 253665 INFO os_vif [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:c9:52,bridge_name='br-int',has_traffic_filtering=True,id=99300516-c832-4292-af8c-850f873b6dda,network=Network(c908d88a-c35e-45c0-9b18-f3ea9ab34dfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99300516-c8')#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.680 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.681 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.682 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] No VIF found with MAC fa:16:3e:38:c9:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.682 253665 INFO nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Using config drive#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.726 253665 DEBUG nova.storage.rbd_utils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] rbd image 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.741 253665 DEBUG nova.network.neutron [req-ca5f52ce-db79-45f7-94db-7367552f95f5 req-babacebe-df33-4aa2-aa6f-fc853ea05ffd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Updated VIF entry in instance network info cache for port 99300516-c832-4292-af8c-850f873b6dda. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.742 253665 DEBUG nova.network.neutron [req-ca5f52ce-db79-45f7-94db-7367552f95f5 req-babacebe-df33-4aa2-aa6f-fc853ea05ffd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Updating instance_info_cache with network_info: [{"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.773 253665 DEBUG oslo_concurrency.lockutils [req-ca5f52ce-db79-45f7-94db-7367552f95f5 req-babacebe-df33-4aa2-aa6f-fc853ea05ffd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-0497bf95-95d6-40fb-8a33-aa3ea54bc542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:25:36 np0005532048 nova_compute[253661]: 2025-11-22 09:25:36.950 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 23a926e6-c6a7-4e40-82d1-654f68980549_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.026 253665 DEBUG nova.storage.rbd_utils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] resizing rbd image 23a926e6-c6a7-4e40-82d1-654f68980549_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.079 253665 INFO nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Creating config drive at /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542/disk.config#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.084 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe7nf8act execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.191 253665 DEBUG nova.objects.instance [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'migration_context' on Instance uuid 23a926e6-c6a7-4e40-82d1-654f68980549 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.223 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.223 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Ensure instance console log exists: /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.224 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.224 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.224 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.240 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe7nf8act" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.269 253665 DEBUG nova.storage.rbd_utils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] rbd image 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.274 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542/disk.config 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.327 253665 DEBUG nova.network.neutron [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Successfully created port: 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.454 253665 DEBUG oslo_concurrency.processutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542/disk.config 0497bf95-95d6-40fb-8a33-aa3ea54bc542_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.455 253665 INFO nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Deleting local config drive /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542/disk.config because it was imported into RBD.#033[00m
Nov 22 04:25:37 np0005532048 kernel: tap99300516-c8: entered promiscuous mode
Nov 22 04:25:37 np0005532048 NetworkManager[48920]: <info>  [1763803537.5173] manager: (tap99300516-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Nov 22 04:25:37 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:37Z|00852|binding|INFO|Claiming lport 99300516-c832-4292-af8c-850f873b6dda for this chassis.
Nov 22 04:25:37 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:37Z|00853|binding|INFO|99300516-c832-4292-af8c-850f873b6dda: Claiming fa:16:3e:38:c9:52 10.100.0.6
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.544 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:c9:52 10.100.0.6'], port_security=['fa:16:3e:38:c9:52 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0497bf95-95d6-40fb-8a33-aa3ea54bc542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '40dd9bb14a354dc591ef4aa8f9ab41e4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3163a5d6-dad4-424f-9409-6ea9b8d6c858', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c2254-6707-4d0b-939a-dc2b7ceb50e6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=99300516-c832-4292-af8c-850f873b6dda) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.546 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 99300516-c832-4292-af8c-850f873b6dda in datapath c908d88a-c35e-45c0-9b18-f3ea9ab34dfe bound to our chassis#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.548 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c908d88a-c35e-45c0-9b18-f3ea9ab34dfe#033[00m
Nov 22 04:25:37 np0005532048 systemd-udevd[339103]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.565 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c956bcf3-ce65-465f-8e56-2ad72fb4790e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.566 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc908d88a-c1 in ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:25:37 np0005532048 NetworkManager[48920]: <info>  [1763803537.5705] device (tap99300516-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:25:37 np0005532048 NetworkManager[48920]: <info>  [1763803537.5718] device (tap99300516-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.569 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc908d88a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.570 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e64e6b-75d7-440f-89b4-87bd6fd10ce7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.573 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[02523c78-d774-434b-8ba6-2b537483518c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 219 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 129 op/s
Nov 22 04:25:37 np0005532048 systemd-machined[215941]: New machine qemu-102-instance-00000054.
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.588 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d400fa50-5170-4cd0-ba26-c8bcbc065389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:37 np0005532048 systemd[1]: Started Virtual Machine qemu-102-instance-00000054.
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:37 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:37Z|00854|binding|INFO|Setting lport 99300516-c832-4292-af8c-850f873b6dda ovn-installed in OVS
Nov 22 04:25:37 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:37Z|00855|binding|INFO|Setting lport 99300516-c832-4292-af8c-850f873b6dda up in Southbound
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.605 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.622 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[89ac1ced-1293-45d7-aa91-a772bec40b1e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.654 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fe38768e-2a08-4267-9406-a8d9c3e82e33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:37 np0005532048 systemd-udevd[339109]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.661 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c50332a7-3f34-499f-a2ca-fe93f538e857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:37 np0005532048 NetworkManager[48920]: <info>  [1763803537.6629] manager: (tapc908d88a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/359)
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.701 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7588c73b-a006-4197-a8bc-bd7491f3f7a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.704 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[84cac2b2-5ec7-49dc-bccb-c2b495466213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:37 np0005532048 NetworkManager[48920]: <info>  [1763803537.7373] device (tapc908d88a-c0): carrier: link connected
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.746 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6d044d72-1d8b-45c4-8b03-429ba3ed4e16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.769 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d5b0ea-fce4-4e0e-beb7-21480cb975ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc908d88a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:2a:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638847, 'reachable_time': 35601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339139, 'error': None, 'target': 'ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.792 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8abfaef4-90c6-45c7-970d-caf3f14a4dcf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:2a1f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 638847, 'tstamp': 638847}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339140, 'error': None, 'target': 'ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.815 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4d01bbdc-f5ad-4df7-ae82-a49fda560413]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc908d88a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:2a:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638847, 'reachable_time': 35601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339141, 'error': None, 'target': 'ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.848 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8f1236df-62df-4631-9477-0f3382b7d314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.915 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b5eb917b-4437-4e8f-9d11-1d879feb4ffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.917 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc908d88a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.918 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.918 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc908d88a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.957 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:37 np0005532048 NetworkManager[48920]: <info>  [1763803537.9585] manager: (tapc908d88a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Nov 22 04:25:37 np0005532048 kernel: tapc908d88a-c0: entered promiscuous mode
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.963 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc908d88a-c0, col_values=(('external_ids', {'iface-id': '22da3234-fb70-4ef8-828a-a612debb32b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.964 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:37 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:37Z|00856|binding|INFO|Releasing lport 22da3234-fb70-4ef8-828a-a612debb32b7 from this chassis (sb_readonly=0)
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.984 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:37 np0005532048 nova_compute[253661]: 2025-11-22 09:25:37.988 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.989 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c908d88a-c35e-45c0-9b18-f3ea9ab34dfe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c908d88a-c35e-45c0-9b18-f3ea9ab34dfe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.990 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2e16b7bf-6607-42f8-a628-a16449ad8728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.991 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/c908d88a-c35e-45c0-9b18-f3ea9ab34dfe.pid.haproxy
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID c908d88a-c35e-45c0-9b18-f3ea9ab34dfe
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:25:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:37.995 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'env', 'PROCESS_TAG=haproxy-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c908d88a-c35e-45c0-9b18-f3ea9ab34dfe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:25:38 np0005532048 nova_compute[253661]: 2025-11-22 09:25:38.191 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803538.1905398, 0497bf95-95d6-40fb-8a33-aa3ea54bc542 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:38 np0005532048 nova_compute[253661]: 2025-11-22 09:25:38.191 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] VM Started (Lifecycle Event)#033[00m
Nov 22 04:25:38 np0005532048 nova_compute[253661]: 2025-11-22 09:25:38.215 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:38 np0005532048 nova_compute[253661]: 2025-11-22 09:25:38.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:25:38 np0005532048 nova_compute[253661]: 2025-11-22 09:25:38.223 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803538.19076, 0497bf95-95d6-40fb-8a33-aa3ea54bc542 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:38 np0005532048 nova_compute[253661]: 2025-11-22 09:25:38.223 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:25:38 np0005532048 nova_compute[253661]: 2025-11-22 09:25:38.247 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:38 np0005532048 nova_compute[253661]: 2025-11-22 09:25:38.251 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:25:38 np0005532048 nova_compute[253661]: 2025-11-22 09:25:38.255 253665 DEBUG nova.network.neutron [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Successfully updated port: 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:25:38 np0005532048 nova_compute[253661]: 2025-11-22 09:25:38.267 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:25:38 np0005532048 nova_compute[253661]: 2025-11-22 09:25:38.268 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "refresh_cache-23a926e6-c6a7-4e40-82d1-654f68980549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:25:38 np0005532048 nova_compute[253661]: 2025-11-22 09:25:38.269 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquired lock "refresh_cache-23a926e6-c6a7-4e40-82d1-654f68980549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:25:38 np0005532048 nova_compute[253661]: 2025-11-22 09:25:38.269 253665 DEBUG nova.network.neutron [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:25:38 np0005532048 podman[339215]: 2025-11-22 09:25:38.440875044 +0000 UTC m=+0.064854058 container create 5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:25:38 np0005532048 nova_compute[253661]: 2025-11-22 09:25:38.440 253665 DEBUG nova.network.neutron [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:25:38 np0005532048 systemd[1]: Started libpod-conmon-5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4.scope.
Nov 22 04:25:38 np0005532048 podman[339215]: 2025-11-22 09:25:38.399878633 +0000 UTC m=+0.023857667 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:25:38 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:25:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c30b3101e7c9381790a2a0961a3d0682b383a5a7c70453278a7e7b8e52e2486/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:25:38 np0005532048 podman[339215]: 2025-11-22 09:25:38.543068325 +0000 UTC m=+0.167047369 container init 5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:25:38 np0005532048 podman[339215]: 2025-11-22 09:25:38.550226249 +0000 UTC m=+0.174205263 container start 5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 04:25:38 np0005532048 neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe[339230]: [NOTICE]   (339234) : New worker (339236) forked
Nov 22 04:25:38 np0005532048 neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe[339230]: [NOTICE]   (339234) : Loading success.
Nov 22 04:25:38 np0005532048 nova_compute[253661]: 2025-11-22 09:25:38.694 253665 DEBUG nova.compute.manager [req-22e417d2-c6e7-4e3c-b74c-9a8db2f8ed95 req-886ba653-4fd4-4846-8cbf-494d43353ff6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received event network-changed-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:38 np0005532048 nova_compute[253661]: 2025-11-22 09:25:38.695 253665 DEBUG nova.compute.manager [req-22e417d2-c6e7-4e3c-b74c-9a8db2f8ed95 req-886ba653-4fd4-4846-8cbf-494d43353ff6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Refreshing instance network info cache due to event network-changed-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:25:38 np0005532048 nova_compute[253661]: 2025-11-22 09:25:38.695 253665 DEBUG oslo_concurrency.lockutils [req-22e417d2-c6e7-4e3c-b74c-9a8db2f8ed95 req-886ba653-4fd4-4846-8cbf-494d43353ff6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-23a926e6-c6a7-4e40-82d1-654f68980549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.407 253665 DEBUG nova.network.neutron [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Updating instance_info_cache with network_info: [{"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.422 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Releasing lock "refresh_cache-23a926e6-c6a7-4e40-82d1-654f68980549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.422 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Instance network_info: |[{"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.423 253665 DEBUG oslo_concurrency.lockutils [req-22e417d2-c6e7-4e3c-b74c-9a8db2f8ed95 req-886ba653-4fd4-4846-8cbf-494d43353ff6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-23a926e6-c6a7-4e40-82d1-654f68980549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.423 253665 DEBUG nova.network.neutron [req-22e417d2-c6e7-4e3c-b74c-9a8db2f8ed95 req-886ba653-4fd4-4846-8cbf-494d43353ff6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Refreshing network info cache for port 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.426 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Start _get_guest_xml network_info=[{"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.431 253665 WARNING nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.438 253665 DEBUG nova.virt.libvirt.host [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.438 253665 DEBUG nova.virt.libvirt.host [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.445 253665 DEBUG nova.virt.libvirt.host [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.446 253665 DEBUG nova.virt.libvirt.host [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.447 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.447 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.447 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.448 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.448 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.448 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.448 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.448 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.449 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.449 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.449 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.449 253665 DEBUG nova.virt.hardware [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.453 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 260 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 127 op/s
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.635 253665 DEBUG nova.compute.manager [req-5ca4f2a0-01e6-4dfe-987a-f1a97a2ec1a4 req-f1cea969-4cca-436e-b007-ff96e1242d36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Received event network-vif-plugged-99300516-c832-4292-af8c-850f873b6dda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.635 253665 DEBUG oslo_concurrency.lockutils [req-5ca4f2a0-01e6-4dfe-987a-f1a97a2ec1a4 req-f1cea969-4cca-436e-b007-ff96e1242d36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.636 253665 DEBUG oslo_concurrency.lockutils [req-5ca4f2a0-01e6-4dfe-987a-f1a97a2ec1a4 req-f1cea969-4cca-436e-b007-ff96e1242d36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.636 253665 DEBUG oslo_concurrency.lockutils [req-5ca4f2a0-01e6-4dfe-987a-f1a97a2ec1a4 req-f1cea969-4cca-436e-b007-ff96e1242d36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.636 253665 DEBUG nova.compute.manager [req-5ca4f2a0-01e6-4dfe-987a-f1a97a2ec1a4 req-f1cea969-4cca-436e-b007-ff96e1242d36 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Processing event network-vif-plugged-99300516-c832-4292-af8c-850f873b6dda _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.637 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.641 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803539.6415293, 0497bf95-95d6-40fb-8a33-aa3ea54bc542 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.642 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.645 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.649 253665 INFO nova.virt.libvirt.driver [-] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Instance spawned successfully.#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.650 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.664 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.673 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.677 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.678 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.678 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.678 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.679 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.679 253665 DEBUG nova.virt.libvirt.driver [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.704 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.743 253665 INFO nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Took 8.94 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.744 253665 DEBUG nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.814 253665 INFO nova.compute.manager [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Took 9.95 seconds to build instance.#033[00m
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.833 253665 DEBUG oslo_concurrency.lockutils [None req-06e318a2-59c3-4d3e-88b4-f816bd1d14b4 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:25:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2838099758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:25:39 np0005532048 nova_compute[253661]: 2025-11-22 09:25:39.986 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.025 253665 DEBUG nova.storage.rbd_utils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 23a926e6-c6a7-4e40-82d1-654f68980549_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.030 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:40 np0005532048 podman[339306]: 2025-11-22 09:25:40.415291949 +0000 UTC m=+0.105991484 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.528 253665 DEBUG nova.network.neutron [req-22e417d2-c6e7-4e3c-b74c-9a8db2f8ed95 req-886ba653-4fd4-4846-8cbf-494d43353ff6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Updated VIF entry in instance network info cache for port 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.529 253665 DEBUG nova.network.neutron [req-22e417d2-c6e7-4e3c-b74c-9a8db2f8ed95 req-886ba653-4fd4-4846-8cbf-494d43353ff6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Updating instance_info_cache with network_info: [{"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.542 253665 DEBUG oslo_concurrency.lockutils [req-22e417d2-c6e7-4e3c-b74c-9a8db2f8ed95 req-886ba653-4fd4-4846-8cbf-494d43353ff6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-23a926e6-c6a7-4e40-82d1-654f68980549" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:25:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:25:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4013202772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.571 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.573 253665 DEBUG nova.virt.libvirt.vif [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:25:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-219797360',display_name='tempest-ServersTestJSON-server-219797360',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-219797360',id=85,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-0kmnieue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},ta
gs=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:25:35Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=23a926e6-c6a7-4e40-82d1-654f68980549,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.573 253665 DEBUG nova.network.os_vif_util [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.574 253665 DEBUG nova.network.os_vif_util [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:c0:58,bridge_name='br-int',has_traffic_filtering=True,id=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b10c9c8-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.576 253665 DEBUG nova.objects.instance [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'pci_devices' on Instance uuid 23a926e6-c6a7-4e40-82d1-654f68980549 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.592 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  <uuid>23a926e6-c6a7-4e40-82d1-654f68980549</uuid>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  <name>instance-00000055</name>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersTestJSON-server-219797360</nova:name>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:25:39</nova:creationTime>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:25:40 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:        <nova:user uuid="9517b176edf1498d8cf7afc439fc7f04">tempest-ServersTestJSON-1454009974-project-member</nova:user>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:        <nova:project uuid="b4426b820f0e4f21a32402b443ca6282">tempest-ServersTestJSON-1454009974</nova:project>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:        <nova:port uuid="9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d">
Nov 22 04:25:40 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <entry name="serial">23a926e6-c6a7-4e40-82d1-654f68980549</entry>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <entry name="uuid">23a926e6-c6a7-4e40-82d1-654f68980549</entry>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/23a926e6-c6a7-4e40-82d1-654f68980549_disk">
Nov 22 04:25:40 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:25:40 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/23a926e6-c6a7-4e40-82d1-654f68980549_disk.config">
Nov 22 04:25:40 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:25:40 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:77:c0:58"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <target dev="tap9b10c9c8-f1"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549/console.log" append="off"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:25:40 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:25:40 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:25:40 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:25:40 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.593 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Preparing to wait for external event network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.593 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.594 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.594 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.595 253665 DEBUG nova.virt.libvirt.vif [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:25:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-219797360',display_name='tempest-ServersTestJSON-server-219797360',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-219797360',id=85,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-0kmnieue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:25:35Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=23a926e6-c6a7-4e40-82d1-654f68980549,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.595 253665 DEBUG nova.network.os_vif_util [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.596 253665 DEBUG nova.network.os_vif_util [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:c0:58,bridge_name='br-int',has_traffic_filtering=True,id=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b10c9c8-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.596 253665 DEBUG os_vif [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:c0:58,bridge_name='br-int',has_traffic_filtering=True,id=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b10c9c8-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.596 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.597 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.597 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.600 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b10c9c8-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.600 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9b10c9c8-f1, col_values=(('external_ids', {'iface-id': '9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:c0:58', 'vm-uuid': '23a926e6-c6a7-4e40-82d1-654f68980549'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.602 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:40 np0005532048 NetworkManager[48920]: <info>  [1763803540.6029] manager: (tap9b10c9c8-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.604 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.611 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.612 253665 INFO os_vif [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:c0:58,bridge_name='br-int',has_traffic_filtering=True,id=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b10c9c8-f1')#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.670 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.670 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.671 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No VIF found with MAC fa:16:3e:77:c0:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.671 253665 INFO nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Using config drive#033[00m
Nov 22 04:25:40 np0005532048 nova_compute[253661]: 2025-11-22 09:25:40.694 253665 DEBUG nova.storage.rbd_utils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 23a926e6-c6a7-4e40-82d1-654f68980549_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:41 np0005532048 nova_compute[253661]: 2025-11-22 09:25:41.198 253665 INFO nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Creating config drive at /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549/disk.config#033[00m
Nov 22 04:25:41 np0005532048 nova_compute[253661]: 2025-11-22 09:25:41.205 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdnjy2vde execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:41 np0005532048 nova_compute[253661]: 2025-11-22 09:25:41.371 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdnjy2vde" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:41 np0005532048 nova_compute[253661]: 2025-11-22 09:25:41.399 253665 DEBUG nova.storage.rbd_utils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 23a926e6-c6a7-4e40-82d1-654f68980549_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:41 np0005532048 nova_compute[253661]: 2025-11-22 09:25:41.403 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549/disk.config 23a926e6-c6a7-4e40-82d1-654f68980549_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 260 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 114 op/s
Nov 22 04:25:41 np0005532048 nova_compute[253661]: 2025-11-22 09:25:41.758 253665 DEBUG nova.compute.manager [req-6d0ce886-25ad-4fae-8e0d-820a14973dac req-dc652be2-568e-488c-bf6d-85be060d7773 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Received event network-vif-plugged-99300516-c832-4292-af8c-850f873b6dda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:41 np0005532048 nova_compute[253661]: 2025-11-22 09:25:41.759 253665 DEBUG oslo_concurrency.lockutils [req-6d0ce886-25ad-4fae-8e0d-820a14973dac req-dc652be2-568e-488c-bf6d-85be060d7773 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:41 np0005532048 nova_compute[253661]: 2025-11-22 09:25:41.759 253665 DEBUG oslo_concurrency.lockutils [req-6d0ce886-25ad-4fae-8e0d-820a14973dac req-dc652be2-568e-488c-bf6d-85be060d7773 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:41 np0005532048 nova_compute[253661]: 2025-11-22 09:25:41.760 253665 DEBUG oslo_concurrency.lockutils [req-6d0ce886-25ad-4fae-8e0d-820a14973dac req-dc652be2-568e-488c-bf6d-85be060d7773 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:41 np0005532048 nova_compute[253661]: 2025-11-22 09:25:41.760 253665 DEBUG nova.compute.manager [req-6d0ce886-25ad-4fae-8e0d-820a14973dac req-dc652be2-568e-488c-bf6d-85be060d7773 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] No waiting events found dispatching network-vif-plugged-99300516-c832-4292-af8c-850f873b6dda pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:25:41 np0005532048 nova_compute[253661]: 2025-11-22 09:25:41.760 253665 WARNING nova.compute.manager [req-6d0ce886-25ad-4fae-8e0d-820a14973dac req-dc652be2-568e-488c-bf6d-85be060d7773 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Received unexpected event network-vif-plugged-99300516-c832-4292-af8c-850f873b6dda for instance with vm_state active and task_state None.#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.268 253665 DEBUG oslo_concurrency.processutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549/disk.config 23a926e6-c6a7-4e40-82d1-654f68980549_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.865s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.269 253665 INFO nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Deleting local config drive /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549/disk.config because it was imported into RBD.#033[00m
Nov 22 04:25:42 np0005532048 kernel: tap9b10c9c8-f1: entered promiscuous mode
Nov 22 04:25:42 np0005532048 NetworkManager[48920]: <info>  [1763803542.3413] manager: (tap9b10c9c8-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/362)
Nov 22 04:25:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:42Z|00857|binding|INFO|Claiming lport 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d for this chassis.
Nov 22 04:25:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:42Z|00858|binding|INFO|9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d: Claiming fa:16:3e:77:c0:58 10.100.0.14
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.346 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.357 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:c0:58 10.100.0.14'], port_security=['fa:16:3e:77:c0:58 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '23a926e6-c6a7-4e40-82d1-654f68980549', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.359 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 bound to our chassis#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.361 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.363 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:42Z|00859|binding|INFO|Setting lport 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d ovn-installed in OVS
Nov 22 04:25:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:42Z|00860|binding|INFO|Setting lport 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d up in Southbound
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.366 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:42 np0005532048 systemd-udevd[339409]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.386 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3d671a46-a46b-4c01-a587-bb7662fcb94a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:42 np0005532048 systemd-machined[215941]: New machine qemu-103-instance-00000055.
Nov 22 04:25:42 np0005532048 systemd[1]: Started Virtual Machine qemu-103-instance-00000055.
Nov 22 04:25:42 np0005532048 NetworkManager[48920]: <info>  [1763803542.4020] device (tap9b10c9c8-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:25:42 np0005532048 NetworkManager[48920]: <info>  [1763803542.4039] device (tap9b10c9c8-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.427 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[db1a9ea6-c4fe-4fc0-9540-df92d79a1022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.431 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ceedf5-4924-47c8-946c-a59c312988df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.463 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[44dfa6d9-8863-4185-9429-5160585e068d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.493 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[df8bcd63-f132-423b-8d47-96b2fe3761a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339421, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.510 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.510 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.511 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.511 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.512 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.513 253665 INFO nova.compute.manager [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Terminating instance#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.515 253665 DEBUG nova.compute.manager [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.514 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3997b5-4648-4afb-859b-43e1aca37b88]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339423, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339423, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.516 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.518 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.522 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.522 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.523 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.523 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:25:42 np0005532048 kernel: tap99300516-c8 (unregistering): left promiscuous mode
Nov 22 04:25:42 np0005532048 NetworkManager[48920]: <info>  [1763803542.5638] device (tap99300516-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:25:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:42Z|00861|binding|INFO|Releasing lport 99300516-c832-4292-af8c-850f873b6dda from this chassis (sb_readonly=0)
Nov 22 04:25:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:42Z|00862|binding|INFO|Setting lport 99300516-c832-4292-af8c-850f873b6dda down in Southbound
Nov 22 04:25:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:42Z|00863|binding|INFO|Removing iface tap99300516-c8 ovn-installed in OVS
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.575 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.580 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:c9:52 10.100.0.6'], port_security=['fa:16:3e:38:c9:52 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0497bf95-95d6-40fb-8a33-aa3ea54bc542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '40dd9bb14a354dc591ef4aa8f9ab41e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3163a5d6-dad4-424f-9409-6ea9b8d6c858', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c2254-6707-4d0b-939a-dc2b7ceb50e6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=99300516-c832-4292-af8c-850f873b6dda) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.581 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 99300516-c832-4292-af8c-850f873b6dda in datapath c908d88a-c35e-45c0-9b18-f3ea9ab34dfe unbound from our chassis#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.583 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.584 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[896fb62d-75a2-496d-b55e-d773bc0ad92a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.585 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe namespace which is not needed anymore#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.591 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:42 np0005532048 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000054.scope: Deactivated successfully.
Nov 22 04:25:42 np0005532048 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000054.scope: Consumed 3.450s CPU time.
Nov 22 04:25:42 np0005532048 systemd-machined[215941]: Machine qemu-102-instance-00000054 terminated.
Nov 22 04:25:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:25:42 np0005532048 kernel: tap99300516-c8: entered promiscuous mode
Nov 22 04:25:42 np0005532048 NetworkManager[48920]: <info>  [1763803542.7368] manager: (tap99300516-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/363)
Nov 22 04:25:42 np0005532048 systemd-udevd[339413]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:25:42 np0005532048 kernel: tap99300516-c8 (unregistering): left promiscuous mode
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.790 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:42Z|00864|binding|INFO|Claiming lport 99300516-c832-4292-af8c-850f873b6dda for this chassis.
Nov 22 04:25:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:42Z|00865|binding|INFO|99300516-c832-4292-af8c-850f873b6dda: Claiming fa:16:3e:38:c9:52 10.100.0.6
Nov 22 04:25:42 np0005532048 neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe[339230]: [NOTICE]   (339234) : haproxy version is 2.8.14-c23fe91
Nov 22 04:25:42 np0005532048 neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe[339230]: [NOTICE]   (339234) : path to executable is /usr/sbin/haproxy
Nov 22 04:25:42 np0005532048 neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe[339230]: [WARNING]  (339234) : Exiting Master process...
Nov 22 04:25:42 np0005532048 neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe[339230]: [ALERT]    (339234) : Current worker (339236) exited with code 143 (Terminated)
Nov 22 04:25:42 np0005532048 neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe[339230]: [WARNING]  (339234) : All workers exited. Exiting... (0)
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.799 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:c9:52 10.100.0.6'], port_security=['fa:16:3e:38:c9:52 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0497bf95-95d6-40fb-8a33-aa3ea54bc542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '40dd9bb14a354dc591ef4aa8f9ab41e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3163a5d6-dad4-424f-9409-6ea9b8d6c858', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c2254-6707-4d0b-939a-dc2b7ceb50e6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=99300516-c832-4292-af8c-850f873b6dda) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:25:42 np0005532048 systemd[1]: libpod-5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4.scope: Deactivated successfully.
Nov 22 04:25:42 np0005532048 podman[339444]: 2025-11-22 09:25:42.813746964 +0000 UTC m=+0.097716653 container died 5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.813 253665 INFO nova.virt.libvirt.driver [-] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Instance destroyed successfully.#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.814 253665 DEBUG nova.objects.instance [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lazy-loading 'resources' on Instance uuid 0497bf95-95d6-40fb-8a33-aa3ea54bc542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:25:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:42Z|00866|binding|INFO|Setting lport 99300516-c832-4292-af8c-850f873b6dda ovn-installed in OVS
Nov 22 04:25:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:42Z|00867|binding|INFO|Setting lport 99300516-c832-4292-af8c-850f873b6dda up in Southbound
Nov 22 04:25:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:42Z|00868|binding|INFO|Releasing lport 99300516-c832-4292-af8c-850f873b6dda from this chassis (sb_readonly=1)
Nov 22 04:25:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:42Z|00869|if_status|INFO|Dropped 2 log messages in last 236 seconds (most recently, 236 seconds ago) due to excessive rate
Nov 22 04:25:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:42Z|00870|if_status|INFO|Not setting lport 99300516-c832-4292-af8c-850f873b6dda down as sb is readonly
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.820 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.821 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:42Z|00871|binding|INFO|Removing iface tap99300516-c8 ovn-installed in OVS
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:42Z|00872|binding|INFO|Releasing lport 99300516-c832-4292-af8c-850f873b6dda from this chassis (sb_readonly=1)
Nov 22 04:25:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:42Z|00873|binding|INFO|Setting lport 99300516-c832-4292-af8c-850f873b6dda down in Southbound
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.828 253665 DEBUG nova.virt.libvirt.vif [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:25:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-690745541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-690745541',id=84,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:25:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='40dd9bb14a354dc591ef4aa8f9ab41e4',ramdisk_id='',reservation_id='r-a4vp5xxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='te
mpest-ServerTagsTestJSON-1329496417',owner_user_name='tempest-ServerTagsTestJSON-1329496417-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:25:39Z,user_data=None,user_id='65c2cce1aec04c50ab2c62bf0b87b756',uuid=0497bf95-95d6-40fb-8a33-aa3ea54bc542,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.830 253665 DEBUG nova.network.os_vif_util [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Converting VIF {"id": "99300516-c832-4292-af8c-850f873b6dda", "address": "fa:16:3e:38:c9:52", "network": {"id": "c908d88a-c35e-45c0-9b18-f3ea9ab34dfe", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-480785620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40dd9bb14a354dc591ef4aa8f9ab41e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99300516-c8", "ovs_interfaceid": "99300516-c832-4292-af8c-850f873b6dda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.831 253665 DEBUG nova.network.os_vif_util [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:c9:52,bridge_name='br-int',has_traffic_filtering=True,id=99300516-c832-4292-af8c-850f873b6dda,network=Network(c908d88a-c35e-45c0-9b18-f3ea9ab34dfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99300516-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.831 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:c9:52 10.100.0.6'], port_security=['fa:16:3e:38:c9:52 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0497bf95-95d6-40fb-8a33-aa3ea54bc542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '40dd9bb14a354dc591ef4aa8f9ab41e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3163a5d6-dad4-424f-9409-6ea9b8d6c858', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c2254-6707-4d0b-939a-dc2b7ceb50e6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=99300516-c832-4292-af8c-850f873b6dda) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.831 253665 DEBUG os_vif [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:c9:52,bridge_name='br-int',has_traffic_filtering=True,id=99300516-c832-4292-af8c-850f873b6dda,network=Network(c908d88a-c35e-45c0-9b18-f3ea9ab34dfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99300516-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.834 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.835 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99300516-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.845 253665 INFO os_vif [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:c9:52,bridge_name='br-int',has_traffic_filtering=True,id=99300516-c832-4292-af8c-850f873b6dda,network=Network(c908d88a-c35e-45c0-9b18-f3ea9ab34dfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99300516-c8')#033[00m
Nov 22 04:25:42 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4-userdata-shm.mount: Deactivated successfully.
Nov 22 04:25:42 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1c30b3101e7c9381790a2a0961a3d0682b383a5a7c70453278a7e7b8e52e2486-merged.mount: Deactivated successfully.
Nov 22 04:25:42 np0005532048 podman[339444]: 2025-11-22 09:25:42.873363625 +0000 UTC m=+0.157333314 container cleanup 5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:25:42 np0005532048 systemd[1]: libpod-conmon-5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4.scope: Deactivated successfully.
Nov 22 04:25:42 np0005532048 podman[339490]: 2025-11-22 09:25:42.951006923 +0000 UTC m=+0.051522717 container remove 5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.959 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd9adc1-c32b-4bfe-b495-ab347c88f5ce]: (4, ('Sat Nov 22 09:25:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe (5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4)\n5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4\nSat Nov 22 09:25:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe (5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4)\n5541db9676e162baa1dac86d800f692a8a261195f8ba6353ee1f3ddcb505ffc4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.962 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f2eca620-9dfb-4b95-8d37-922897a2cc41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.963 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc908d88a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.965 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:42 np0005532048 kernel: tapc908d88a-c0: left promiscuous mode
Nov 22 04:25:42 np0005532048 nova_compute[253661]: 2025-11-22 09:25:42.985 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:42.990 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7b71857e-e14a-416f-99fb-05ca46cbbf16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.003 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2e0ef3-e41d-4e63-800d-e258baab7b99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.004 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5baa690b-ee2c-4206-a909-4b72b0b5e86a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.032 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad6bad5-4f68-4618-97e0-ce7b12a68afa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638838, 'reachable_time': 43983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339508, 'error': None, 'target': 'ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:43 np0005532048 systemd[1]: run-netns-ovnmeta\x2dc908d88a\x2dc35e\x2d45c0\x2d9b18\x2df3ea9ab34dfe.mount: Deactivated successfully.
Nov 22 04:25:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.050 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c908d88a-c35e-45c0-9b18-f3ea9ab34dfe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:25:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.050 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8feae550-c753-4aa5-b6d2-ea441b3c943b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.052 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 99300516-c832-4292-af8c-850f873b6dda in datapath c908d88a-c35e-45c0-9b18-f3ea9ab34dfe unbound from our chassis#033[00m
Nov 22 04:25:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.053 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:25:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.054 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6d6ee6d8-1428-4ba5-990b-3fd0a799f2e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.055 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 99300516-c832-4292-af8c-850f873b6dda in datapath c908d88a-c35e-45c0-9b18-f3ea9ab34dfe unbound from our chassis#033[00m
Nov 22 04:25:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.056 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c908d88a-c35e-45c0-9b18-f3ea9ab34dfe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:25:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:43.058 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[830ad58d-af72-46ee-a2ed-5c2e75f4bac3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:43 np0005532048 nova_compute[253661]: 2025-11-22 09:25:43.343 253665 INFO nova.virt.libvirt.driver [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Deleting instance files /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542_del#033[00m
Nov 22 04:25:43 np0005532048 nova_compute[253661]: 2025-11-22 09:25:43.344 253665 INFO nova.virt.libvirt.driver [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Deletion of /var/lib/nova/instances/0497bf95-95d6-40fb-8a33-aa3ea54bc542_del complete#033[00m
Nov 22 04:25:43 np0005532048 nova_compute[253661]: 2025-11-22 09:25:43.396 253665 INFO nova.compute.manager [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:25:43 np0005532048 nova_compute[253661]: 2025-11-22 09:25:43.398 253665 DEBUG oslo.service.loopingcall [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:25:43 np0005532048 nova_compute[253661]: 2025-11-22 09:25:43.398 253665 DEBUG nova.compute.manager [-] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:25:43 np0005532048 nova_compute[253661]: 2025-11-22 09:25:43.398 253665 DEBUG nova.network.neutron [-] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:25:43 np0005532048 nova_compute[253661]: 2025-11-22 09:25:43.470 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803543.4700127, 23a926e6-c6a7-4e40-82d1-654f68980549 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:43 np0005532048 nova_compute[253661]: 2025-11-22 09:25:43.471 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] VM Started (Lifecycle Event)#033[00m
Nov 22 04:25:43 np0005532048 nova_compute[253661]: 2025-11-22 09:25:43.487 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:43 np0005532048 nova_compute[253661]: 2025-11-22 09:25:43.492 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803543.4709501, 23a926e6-c6a7-4e40-82d1-654f68980549 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:43 np0005532048 nova_compute[253661]: 2025-11-22 09:25:43.492 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:25:43 np0005532048 nova_compute[253661]: 2025-11-22 09:25:43.511 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:43 np0005532048 nova_compute[253661]: 2025-11-22 09:25:43.515 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:25:43 np0005532048 nova_compute[253661]: 2025-11-22 09:25:43.537 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:25:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 266 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.2 MiB/s wr, 188 op/s
Nov 22 04:25:44 np0005532048 nova_compute[253661]: 2025-11-22 09:25:44.113 253665 DEBUG nova.network.neutron [-] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:25:44 np0005532048 nova_compute[253661]: 2025-11-22 09:25:44.129 253665 INFO nova.compute.manager [-] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Took 0.73 seconds to deallocate network for instance.#033[00m
Nov 22 04:25:44 np0005532048 nova_compute[253661]: 2025-11-22 09:25:44.186 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:44 np0005532048 nova_compute[253661]: 2025-11-22 09:25:44.187 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:44 np0005532048 nova_compute[253661]: 2025-11-22 09:25:44.295 253665 DEBUG oslo_concurrency.processutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:44 np0005532048 nova_compute[253661]: 2025-11-22 09:25:44.341 253665 DEBUG nova.compute.manager [req-52a5caf9-7894-4341-b4a8-1133d10fbd51 req-ae902705-3f30-43d7-808e-2b975e72eeba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Received event network-vif-deleted-99300516-c832-4292-af8c-850f873b6dda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:25:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4292005100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:25:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:44Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:36:77 10.100.0.7
Nov 22 04:25:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:44Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:36:77 10.100.0.7
Nov 22 04:25:44 np0005532048 nova_compute[253661]: 2025-11-22 09:25:44.784 253665 DEBUG oslo_concurrency.processutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:44 np0005532048 nova_compute[253661]: 2025-11-22 09:25:44.791 253665 DEBUG nova.compute.provider_tree [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:25:44 np0005532048 nova_compute[253661]: 2025-11-22 09:25:44.806 253665 DEBUG nova.scheduler.client.report [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:25:44 np0005532048 nova_compute[253661]: 2025-11-22 09:25:44.828 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:44 np0005532048 nova_compute[253661]: 2025-11-22 09:25:44.852 253665 INFO nova.scheduler.client.report [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Deleted allocations for instance 0497bf95-95d6-40fb-8a33-aa3ea54bc542#033[00m
Nov 22 04:25:44 np0005532048 nova_compute[253661]: 2025-11-22 09:25:44.905 253665 DEBUG oslo_concurrency.lockutils [None req-cb885e47-ce5b-48de-b185-55240adc8d56 65c2cce1aec04c50ab2c62bf0b87b756 40dd9bb14a354dc591ef4aa8f9ab41e4 - - default default] Lock "0497bf95-95d6-40fb-8a33-aa3ea54bc542" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.527 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 265 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.5 MiB/s wr, 179 op/s
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.632 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.633 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.649 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.720 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.721 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.729 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.729 253665 INFO nova.compute.claims [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.875 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.951 253665 DEBUG nova.compute.manager [req-df6a721b-4f1f-47d5-b5e0-48e88b48b30a req-d44b09d8-8478-4ad1-8cfd-d475fbb10f19 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received event network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.951 253665 DEBUG oslo_concurrency.lockutils [req-df6a721b-4f1f-47d5-b5e0-48e88b48b30a req-d44b09d8-8478-4ad1-8cfd-d475fbb10f19 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.952 253665 DEBUG oslo_concurrency.lockutils [req-df6a721b-4f1f-47d5-b5e0-48e88b48b30a req-d44b09d8-8478-4ad1-8cfd-d475fbb10f19 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.952 253665 DEBUG oslo_concurrency.lockutils [req-df6a721b-4f1f-47d5-b5e0-48e88b48b30a req-d44b09d8-8478-4ad1-8cfd-d475fbb10f19 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.952 253665 DEBUG nova.compute.manager [req-df6a721b-4f1f-47d5-b5e0-48e88b48b30a req-d44b09d8-8478-4ad1-8cfd-d475fbb10f19 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Processing event network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.953 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.957 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803545.9569583, 23a926e6-c6a7-4e40-82d1-654f68980549 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.957 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.960 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.963 253665 INFO nova.virt.libvirt.driver [-] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Instance spawned successfully.#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.963 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.984 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:45 np0005532048 nova_compute[253661]: 2025-11-22 09:25:45.989 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.003 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.004 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.005 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.005 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.005 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.006 253665 DEBUG nova.virt.libvirt.driver [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.010 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.053 253665 INFO nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Took 10.09 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.054 253665 DEBUG nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.107 253665 INFO nova.compute.manager [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Took 11.71 seconds to build instance.#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.121 253665 DEBUG oslo_concurrency.lockutils [None req-8922964d-ae0f-4867-9f8b-e135e80b9383 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:25:46 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3868983397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.339 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.345 253665 DEBUG nova.compute.provider_tree [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.361 253665 DEBUG nova.scheduler.client.report [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.382 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.383 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.444 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.444 253665 DEBUG nova.network.neutron [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.462 253665 INFO nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.485 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.589 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.591 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.591 253665 INFO nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Creating image(s)#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.616 253665 DEBUG nova.storage.rbd_utils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.648 253665 DEBUG nova.storage.rbd_utils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.677 253665 DEBUG nova.storage.rbd_utils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.682 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.730 253665 DEBUG nova.policy [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ce82551204d04546a5ae9c6f99cccfc8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a246689624d4630a70f69b70d048883', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.770 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.772 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.773 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.773 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.800 253665 DEBUG nova.storage.rbd_utils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:46 np0005532048 nova_compute[253661]: 2025-11-22 09:25:46.808 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 e4f9440c-7476-4022-8d08-1b3151a9db79_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:47 np0005532048 nova_compute[253661]: 2025-11-22 09:25:47.238 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 e4f9440c-7476-4022-8d08-1b3151a9db79_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:47 np0005532048 nova_compute[253661]: 2025-11-22 09:25:47.310 253665 DEBUG nova.storage.rbd_utils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] resizing rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:25:47 np0005532048 nova_compute[253661]: 2025-11-22 09:25:47.419 253665 DEBUG nova.objects.instance [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'migration_context' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:25:47 np0005532048 nova_compute[253661]: 2025-11-22 09:25:47.434 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:25:47 np0005532048 nova_compute[253661]: 2025-11-22 09:25:47.435 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Ensure instance console log exists: /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:25:47 np0005532048 nova_compute[253661]: 2025-11-22 09:25:47.436 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:47 np0005532048 nova_compute[253661]: 2025-11-22 09:25:47.436 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:47 np0005532048 nova_compute[253661]: 2025-11-22 09:25:47.437 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 247 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 196 op/s
Nov 22 04:25:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:25:47 np0005532048 nova_compute[253661]: 2025-11-22 09:25:47.841 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:47Z|00874|binding|INFO|Releasing lport d9f2322f-7849-4ad7-8b1c-0b79523c4fde from this chassis (sb_readonly=0)
Nov 22 04:25:47 np0005532048 nova_compute[253661]: 2025-11-22 09:25:47.933 253665 DEBUG nova.network.neutron [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Successfully created port: b1fc96be-009e-46a8-829c-b7a0bc42af60 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:25:48 np0005532048 nova_compute[253661]: 2025-11-22 09:25:48.003 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:48 np0005532048 nova_compute[253661]: 2025-11-22 09:25:48.490 253665 DEBUG nova.compute.manager [req-f6fb1afd-c8ae-44be-96f6-6beaa2940019 req-9012c4b9-90e5-49d9-82db-c08cd0b785dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received event network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:48 np0005532048 nova_compute[253661]: 2025-11-22 09:25:48.490 253665 DEBUG oslo_concurrency.lockutils [req-f6fb1afd-c8ae-44be-96f6-6beaa2940019 req-9012c4b9-90e5-49d9-82db-c08cd0b785dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:48 np0005532048 nova_compute[253661]: 2025-11-22 09:25:48.491 253665 DEBUG oslo_concurrency.lockutils [req-f6fb1afd-c8ae-44be-96f6-6beaa2940019 req-9012c4b9-90e5-49d9-82db-c08cd0b785dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:48 np0005532048 nova_compute[253661]: 2025-11-22 09:25:48.491 253665 DEBUG oslo_concurrency.lockutils [req-f6fb1afd-c8ae-44be-96f6-6beaa2940019 req-9012c4b9-90e5-49d9-82db-c08cd0b785dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:48 np0005532048 nova_compute[253661]: 2025-11-22 09:25:48.491 253665 DEBUG nova.compute.manager [req-f6fb1afd-c8ae-44be-96f6-6beaa2940019 req-9012c4b9-90e5-49d9-82db-c08cd0b785dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] No waiting events found dispatching network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:25:48 np0005532048 nova_compute[253661]: 2025-11-22 09:25:48.492 253665 WARNING nova.compute.manager [req-f6fb1afd-c8ae-44be-96f6-6beaa2940019 req-9012c4b9-90e5-49d9-82db-c08cd0b785dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received unexpected event network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d for instance with vm_state active and task_state None.#033[00m
Nov 22 04:25:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 282 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.8 MiB/s wr, 270 op/s
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.204 253665 DEBUG nova.network.neutron [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Successfully updated port: b1fc96be-009e-46a8-829c-b7a0bc42af60 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.284 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.285 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquired lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.285 253665 DEBUG nova.network.neutron [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.449 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "23a926e6-c6a7-4e40-82d1-654f68980549" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.450 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.450 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.451 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.451 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.452 253665 INFO nova.compute.manager [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Terminating instance#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.453 253665 DEBUG nova.compute.manager [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.529 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.539 253665 DEBUG nova.network.neutron [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.573 253665 DEBUG nova.compute.manager [req-5f834435-53e4-43d0-8078-f7ed29a5ebf1 req-5c33e707-183f-44cd-94ed-111867752526 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-changed-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.574 253665 DEBUG nova.compute.manager [req-5f834435-53e4-43d0-8078-f7ed29a5ebf1 req-5c33e707-183f-44cd-94ed-111867752526 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Refreshing instance network info cache due to event network-changed-b1fc96be-009e-46a8-829c-b7a0bc42af60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.575 253665 DEBUG oslo_concurrency.lockutils [req-5f834435-53e4-43d0-8078-f7ed29a5ebf1 req-5c33e707-183f-44cd-94ed-111867752526 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:25:50 np0005532048 kernel: tap9b10c9c8-f1 (unregistering): left promiscuous mode
Nov 22 04:25:50 np0005532048 NetworkManager[48920]: <info>  [1763803550.5870] device (tap9b10c9c8-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:25:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:50Z|00875|binding|INFO|Releasing lport 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d from this chassis (sb_readonly=0)
Nov 22 04:25:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:50Z|00876|binding|INFO|Setting lport 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d down in Southbound
Nov 22 04:25:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:50Z|00877|binding|INFO|Removing iface tap9b10c9c8-f1 ovn-installed in OVS
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.595 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.603 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:c0:58 10.100.0.14'], port_security=['fa:16:3e:77:c0:58 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '23a926e6-c6a7-4e40-82d1-654f68980549', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:25:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.605 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 unbound from our chassis#033[00m
Nov 22 04:25:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.608 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.619 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.633 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9676e337-44c0-4e01-a6b0-00ac8901fdc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:50 np0005532048 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000055.scope: Deactivated successfully.
Nov 22 04:25:50 np0005532048 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000055.scope: Consumed 5.575s CPU time.
Nov 22 04:25:50 np0005532048 systemd-machined[215941]: Machine qemu-103-instance-00000055 terminated.
Nov 22 04:25:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.676 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b416932f-6e34-4805-8e18-265010823337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.681 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8f660ea2-6275-4866-828a-0baf4745a6d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.715 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7f740d16-a7bf-4a2d-97c1-9141df188cc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.739 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b5739b6c-c840-4e48-986f-c923634f8de9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339773, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.759 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0a454fd9-59c1-469b-a29a-11d45e4152a1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339774, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339774, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.760 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.762 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.770 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.770 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.771 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:25:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.771 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:50.772 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.878 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.885 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.905 253665 INFO nova.virt.libvirt.driver [-] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Instance destroyed successfully.#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.906 253665 DEBUG nova.objects.instance [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'resources' on Instance uuid 23a926e6-c6a7-4e40-82d1-654f68980549 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.920 253665 DEBUG nova.virt.libvirt.vif [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:25:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-219797360',display_name='tempest-ServersTestJSON-server-219797360',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-219797360',id=85,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:25:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-0kmnieue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:25:46Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=23a926e6-c6a7-4e40-82d1-654f68980549,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.920 253665 DEBUG nova.network.os_vif_util [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "address": "fa:16:3e:77:c0:58", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b10c9c8-f1", "ovs_interfaceid": "9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.921 253665 DEBUG nova.network.os_vif_util [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:c0:58,bridge_name='br-int',has_traffic_filtering=True,id=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b10c9c8-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.921 253665 DEBUG os_vif [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:c0:58,bridge_name='br-int',has_traffic_filtering=True,id=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b10c9c8-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.923 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.923 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b10c9c8-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.926 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:50 np0005532048 nova_compute[253661]: 2025-11-22 09:25:50.930 253665 INFO os_vif [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:c0:58,bridge_name='br-int',has_traffic_filtering=True,id=9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b10c9c8-f1')#033[00m
Nov 22 04:25:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 282 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.5 MiB/s wr, 252 op/s
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.780 253665 INFO nova.virt.libvirt.driver [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Deleting instance files /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549_del#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.782 253665 INFO nova.virt.libvirt.driver [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Deletion of /var/lib/nova/instances/23a926e6-c6a7-4e40-82d1-654f68980549_del complete#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.828 253665 DEBUG nova.network.neutron [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.840 253665 INFO nova.compute.manager [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Took 1.39 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.842 253665 DEBUG oslo.service.loopingcall [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.843 253665 DEBUG nova.compute.manager [-] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.843 253665 DEBUG nova.network.neutron [-] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.847 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Releasing lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.848 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance network_info: |[{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.848 253665 DEBUG oslo_concurrency.lockutils [req-5f834435-53e4-43d0-8078-f7ed29a5ebf1 req-5c33e707-183f-44cd-94ed-111867752526 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.848 253665 DEBUG nova.network.neutron [req-5f834435-53e4-43d0-8078-f7ed29a5ebf1 req-5c33e707-183f-44cd-94ed-111867752526 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Refreshing network info cache for port b1fc96be-009e-46a8-829c-b7a0bc42af60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.852 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Start _get_guest_xml network_info=[{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.857 253665 WARNING nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.867 253665 DEBUG nova.virt.libvirt.host [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.867 253665 DEBUG nova.virt.libvirt.host [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.875 253665 DEBUG nova.virt.libvirt.host [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.876 253665 DEBUG nova.virt.libvirt.host [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.876 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.877 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.877 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.877 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.877 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.878 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.878 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.878 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.878 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.878 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.879 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.879 253665 DEBUG nova.virt.hardware [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:25:51 np0005532048 nova_compute[253661]: 2025-11-22 09:25:51.882 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:25:52
Nov 22 04:25:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:25:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:25:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'images']
Nov 22 04:25:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:25:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:25:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/880324457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.332 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.358 253665 DEBUG nova.storage.rbd_utils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.364 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.665 253665 DEBUG nova.network.neutron [-] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.689 253665 DEBUG nova.compute.manager [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received event network-vif-unplugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.690 253665 DEBUG oslo_concurrency.lockutils [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.691 253665 DEBUG oslo_concurrency.lockutils [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.691 253665 DEBUG oslo_concurrency.lockutils [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.691 253665 DEBUG nova.compute.manager [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] No waiting events found dispatching network-vif-unplugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.692 253665 DEBUG nova.compute.manager [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received event network-vif-unplugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.692 253665 DEBUG nova.compute.manager [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received event network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.692 253665 DEBUG oslo_concurrency.lockutils [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.693 253665 DEBUG oslo_concurrency.lockutils [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.693 253665 DEBUG oslo_concurrency.lockutils [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.693 253665 DEBUG nova.compute.manager [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] No waiting events found dispatching network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.694 253665 WARNING nova.compute.manager [req-9a9b74c5-6cd9-4e73-ba06-4e28351a018d req-498183bd-a775-46d5-96b0-2a78c4c739b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received unexpected event network-vif-plugged-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.697 253665 INFO nova.compute.manager [-] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Took 0.85 seconds to deallocate network for instance.#033[00m
Nov 22 04:25:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:25:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:25:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:25:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:25:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:25:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.743 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.744 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.849 253665 DEBUG oslo_concurrency.processutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:25:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3391007212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.915 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.918 253665 DEBUG nova.virt.libvirt.vif [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1215087159',display_name='tempest-ServerActionsTestOtherB-server-1215087159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1215087159',id=86,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJ/cRG5bfHD3LbYWZfZhBZW64Gzk9NiecmZChn56cNdUeqOvdqm8gZ047E1aOD+/1rWy6Q/20jfwuj+tARiRMK9Fr/axSxMkwZvm5uYPBSn1o0uJaQf1m6OZmN9YqP8SQ==',key_name='tempest-keypair-427391145',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-exobbdub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:25:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=e4f9440c-7476-4022-8d08-1b3151a9db79,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.918 253665 DEBUG nova.network.os_vif_util [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.920 253665 DEBUG nova.network.os_vif_util [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.921 253665 DEBUG nova.objects.instance [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'pci_devices' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.936 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  <uuid>e4f9440c-7476-4022-8d08-1b3151a9db79</uuid>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  <name>instance-00000056</name>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerActionsTestOtherB-server-1215087159</nova:name>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:25:51</nova:creationTime>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:25:52 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:        <nova:user uuid="ce82551204d04546a5ae9c6f99cccfc8">tempest-ServerActionsTestOtherB-985895222-project-member</nova:user>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:        <nova:project uuid="8a246689624d4630a70f69b70d048883">tempest-ServerActionsTestOtherB-985895222</nova:project>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:        <nova:port uuid="b1fc96be-009e-46a8-829c-b7a0bc42af60">
Nov 22 04:25:52 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <entry name="serial">e4f9440c-7476-4022-8d08-1b3151a9db79</entry>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <entry name="uuid">e4f9440c-7476-4022-8d08-1b3151a9db79</entry>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk">
Nov 22 04:25:52 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:25:52 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config">
Nov 22 04:25:52 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:25:52 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:38:67:ca"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <target dev="tapb1fc96be-00"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/console.log" append="off"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:25:52 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:25:52 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:25:52 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:25:52 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.937 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Preparing to wait for external event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.938 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.938 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.938 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.939 253665 DEBUG nova.virt.libvirt.vif [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1215087159',display_name='tempest-ServerActionsTestOtherB-server-1215087159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1215087159',id=86,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJ/cRG5bfHD3LbYWZfZhBZW64Gzk9NiecmZChn56cNdUeqOvdqm8gZ047E1aOD+/1rWy6Q/20jfwuj+tARiRMK9Fr/axSxMkwZvm5uYPBSn1o0uJaQf1m6OZmN9YqP8SQ==',key_name='tempest-keypair-427391145',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-exobbdub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:25:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=e4f9440c-7476-4022-8d08-1b3151a9db79,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.940 253665 DEBUG nova.network.os_vif_util [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.940 253665 DEBUG nova.network.os_vif_util [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.941 253665 DEBUG os_vif [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.942 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.942 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.943 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.946 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.947 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1fc96be-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.948 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb1fc96be-00, col_values=(('external_ids', {'iface-id': 'b1fc96be-009e-46a8-829c-b7a0bc42af60', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:67:ca', 'vm-uuid': 'e4f9440c-7476-4022-8d08-1b3151a9db79'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:52 np0005532048 NetworkManager[48920]: <info>  [1763803552.9517] manager: (tapb1fc96be-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.957 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:25:52 np0005532048 nova_compute[253661]: 2025-11-22 09:25:52.959 253665 INFO os_vif [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00')#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.017 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.017 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.018 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No VIF found with MAC fa:16:3e:38:67:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.018 253665 INFO nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Using config drive#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.042 253665 DEBUG nova.storage.rbd_utils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.343 253665 INFO nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Creating config drive at /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.351 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr0_660or execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:25:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3433808988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.390 253665 DEBUG oslo_concurrency.processutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.397 253665 DEBUG nova.compute.provider_tree [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.413 253665 DEBUG nova.scheduler.client.report [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.431 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.454 253665 INFO nova.scheduler.client.report [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Deleted allocations for instance 23a926e6-c6a7-4e40-82d1-654f68980549#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.501 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr0_660or" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.526 253665 DEBUG nova.storage.rbd_utils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.531 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.576 253665 DEBUG nova.network.neutron [req-5f834435-53e4-43d0-8078-f7ed29a5ebf1 req-5c33e707-183f-44cd-94ed-111867752526 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updated VIF entry in instance network info cache for port b1fc96be-009e-46a8-829c-b7a0bc42af60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.577 253665 DEBUG nova.network.neutron [req-5f834435-53e4-43d0-8078-f7ed29a5ebf1 req-5c33e707-183f-44cd-94ed-111867752526 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:25:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 271 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 282 op/s
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.582 253665 DEBUG oslo_concurrency.lockutils [None req-694a9558-82ed-49d3-a42c-8dba1999d3d8 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "23a926e6-c6a7-4e40-82d1-654f68980549" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.598 253665 DEBUG oslo_concurrency.lockutils [req-5f834435-53e4-43d0-8078-f7ed29a5ebf1 req-5c33e707-183f-44cd-94ed-111867752526 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.703 253665 DEBUG oslo_concurrency.processutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.704 253665 INFO nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Deleting local config drive /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config because it was imported into RBD.#033[00m
Nov 22 04:25:53 np0005532048 kernel: tapb1fc96be-00: entered promiscuous mode
Nov 22 04:25:53 np0005532048 NetworkManager[48920]: <info>  [1763803553.7573] manager: (tapb1fc96be-00): new Tun device (/org/freedesktop/NetworkManager/Devices/365)
Nov 22 04:25:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:53Z|00878|binding|INFO|Claiming lport b1fc96be-009e-46a8-829c-b7a0bc42af60 for this chassis.
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.758 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:53Z|00879|binding|INFO|b1fc96be-009e-46a8-829c-b7a0bc42af60: Claiming fa:16:3e:38:67:ca 10.100.0.10
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.762 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.770 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:67:ca 10.100.0.10'], port_security=['fa:16:3e:38:67:ca 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e4f9440c-7476-4022-8d08-1b3151a9db79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e37df2c8-4dc4-418d-92f1-b394537a30da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a246689624d4630a70f69b70d048883', 'neutron:revision_number': '2', 'neutron:security_group_ids': '33563511-c966-495c-93cb-386deb50a2bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7135b765-78b7-490e-8e9e-3f8a3fb53933, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b1fc96be-009e-46a8-829c-b7a0bc42af60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:25:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.772 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b1fc96be-009e-46a8-829c-b7a0bc42af60 in datapath e37df2c8-4dc4-418d-92f1-b394537a30da bound to our chassis#033[00m
Nov 22 04:25:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.773 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e37df2c8-4dc4-418d-92f1-b394537a30da#033[00m
Nov 22 04:25:53 np0005532048 systemd-udevd[339960]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:25:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.790 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d169feb3-9bcd-463d-8095-dba283416a63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.791 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape37df2c8-41 in ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:25:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.793 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape37df2c8-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:25:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.793 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b8df565f-e9d4-44e0-a0b7-7d3f5572a828]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.795 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c816bab-32b6-46a7-aa4c-101a454e2991]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:53 np0005532048 systemd-machined[215941]: New machine qemu-104-instance-00000056.
Nov 22 04:25:53 np0005532048 NetworkManager[48920]: <info>  [1763803553.8031] device (tapb1fc96be-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:25:53 np0005532048 NetworkManager[48920]: <info>  [1763803553.8047] device (tapb1fc96be-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:25:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.811 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[2e63f22e-4b4d-485c-8336-5b9120be4605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.830 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[38dde595-8d63-446c-82d8-91cc6ed8cbc2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.832 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:53 np0005532048 systemd[1]: Started Virtual Machine qemu-104-instance-00000056.
Nov 22 04:25:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:53Z|00880|binding|INFO|Setting lport b1fc96be-009e-46a8-829c-b7a0bc42af60 ovn-installed in OVS
Nov 22 04:25:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:53Z|00881|binding|INFO|Setting lport b1fc96be-009e-46a8-829c-b7a0bc42af60 up in Southbound
Nov 22 04:25:53 np0005532048 nova_compute[253661]: 2025-11-22 09:25:53.837 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.866 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d55adf9c-3d03-4161-9b41-9c6b9a74f2bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:53 np0005532048 NetworkManager[48920]: <info>  [1763803553.8725] manager: (tape37df2c8-40): new Veth device (/org/freedesktop/NetworkManager/Devices/366)
Nov 22 04:25:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.871 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8f62adfa-c57d-4016-b952-1d7b6124ba3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.910 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ec92e5e9-9d48-4ae5-85fc-647074c80fd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.917 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b95a2672-b32b-44ef-9fb2-a403363a61ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:53 np0005532048 NetworkManager[48920]: <info>  [1763803553.9469] device (tape37df2c8-40): carrier: link connected
Nov 22 04:25:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.953 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c8834d66-33c9-452c-867a-b76eafb8f5ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:53.976 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3fab7fd6-0a22-4cab-828f-c0d0dbd1e0be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape37df2c8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c4:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640468, 'reachable_time': 21674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339993, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.000 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[34ce3ce6-c88f-4ecf-9ebd-753c9a3439d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:c448'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640468, 'tstamp': 640468}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339994, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.027 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b09f52b6-28e7-40b6-b34c-da12ffdfbbbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape37df2c8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c4:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640468, 'reachable_time': 21674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339995, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.039 253665 DEBUG nova.compute.manager [req-c82416d6-cf6d-47b1-8fb5-c4b285e6d336 req-1c8a2043-1ddb-40ff-8155-027ec543121c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.039 253665 DEBUG oslo_concurrency.lockutils [req-c82416d6-cf6d-47b1-8fb5-c4b285e6d336 req-1c8a2043-1ddb-40ff-8155-027ec543121c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.040 253665 DEBUG oslo_concurrency.lockutils [req-c82416d6-cf6d-47b1-8fb5-c4b285e6d336 req-1c8a2043-1ddb-40ff-8155-027ec543121c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.040 253665 DEBUG oslo_concurrency.lockutils [req-c82416d6-cf6d-47b1-8fb5-c4b285e6d336 req-1c8a2043-1ddb-40ff-8155-027ec543121c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.040 253665 DEBUG nova.compute.manager [req-c82416d6-cf6d-47b1-8fb5-c4b285e6d336 req-1c8a2043-1ddb-40ff-8155-027ec543121c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Processing event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.085 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3f0f7996-c535-4045-a42f-9fff1ba9b6c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.166 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[519602a0-8b80-4b3b-a11a-43373b77e997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.168 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape37df2c8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.168 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.169 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape37df2c8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:54 np0005532048 kernel: tape37df2c8-40: entered promiscuous mode
Nov 22 04:25:54 np0005532048 NetworkManager[48920]: <info>  [1763803554.1713] manager: (tape37df2c8-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.170 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.174 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape37df2c8-40, col_values=(('external_ids', {'iface-id': '93c31381-1979-4cee-982c-9507d8ee6c9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:54 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:54Z|00882|binding|INFO|Releasing lport 93c31381-1979-4cee-982c-9507d8ee6c9a from this chassis (sb_readonly=0)
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.175 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.190 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.191 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e37df2c8-4dc4-418d-92f1-b394537a30da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e37df2c8-4dc4-418d-92f1-b394537a30da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.192 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[01d8164f-1965-42f5-badd-55bc471ec1bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.193 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-e37df2c8-4dc4-418d-92f1-b394537a30da
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/e37df2c8-4dc4-418d-92f1-b394537a30da.pid.haproxy
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID e37df2c8-4dc4-418d-92f1-b394537a30da
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:25:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:54.194 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'env', 'PROCESS_TAG=haproxy-e37df2c8-4dc4-418d-92f1-b394537a30da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e37df2c8-4dc4-418d-92f1-b394537a30da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.277 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803554.276151, e4f9440c-7476-4022-8d08-1b3151a9db79 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.277 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] VM Started (Lifecycle Event)#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.280 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.284 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.288 253665 INFO nova.virt.libvirt.driver [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance spawned successfully.#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.288 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.295 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.299 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.308 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.308 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.309 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.309 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.310 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.310 253665 DEBUG nova.virt.libvirt.driver [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.317 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.318 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803554.2764754, e4f9440c-7476-4022-8d08-1b3151a9db79 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.318 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.342 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.347 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803554.2832325, e4f9440c-7476-4022-8d08-1b3151a9db79 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.347 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.365 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.370 253665 INFO nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Took 7.78 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.371 253665 DEBUG nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.372 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.397 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.431 253665 INFO nova.compute.manager [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Took 8.73 seconds to build instance.#033[00m
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.449 253665 DEBUG oslo_concurrency.lockutils [None req-1800c784-6048-4285-ae2c-01ebb8298271 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:54 np0005532048 podman[340068]: 2025-11-22 09:25:54.620391472 +0000 UTC m=+0.063495716 container create ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 22 04:25:54 np0005532048 systemd[1]: Started libpod-conmon-ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f.scope.
Nov 22 04:25:54 np0005532048 podman[340068]: 2025-11-22 09:25:54.584948615 +0000 UTC m=+0.028052889 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:25:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:25:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec1bcf14e6b9291c965d546b0551f59e40ba033f0d7af433bf80952dca416338/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:25:54 np0005532048 podman[340068]: 2025-11-22 09:25:54.7278564 +0000 UTC m=+0.170960644 container init ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:25:54 np0005532048 podman[340068]: 2025-11-22 09:25:54.734792938 +0000 UTC m=+0.177897182 container start ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:25:54 np0005532048 nova_compute[253661]: 2025-11-22 09:25:54.755 253665 DEBUG nova.compute.manager [req-a4618cee-dc0a-4c56-8f75-949a07efba84 req-08757fde-9903-41bf-8a26-6040eb324c02 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Received event network-vif-deleted-9b10c9c8-f10b-4cbb-8b28-8de4f8729f6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:54 np0005532048 neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da[340083]: [NOTICE]   (340087) : New worker (340089) forked
Nov 22 04:25:54 np0005532048 neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da[340083]: [NOTICE]   (340087) : Loading success.
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.289 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "bd717644-36b1-45c9-a56f-b2719ae77e72" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.291 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.291 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.291 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.292 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.293 253665 INFO nova.compute.manager [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Terminating instance#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.294 253665 DEBUG nova.compute.manager [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:25:55 np0005532048 kernel: tapca4c64d8-4f (unregistering): left promiscuous mode
Nov 22 04:25:55 np0005532048 NetworkManager[48920]: <info>  [1763803555.3848] device (tapca4c64d8-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:25:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:55Z|00883|binding|INFO|Releasing lport ca4c64d8-4f02-4ed0-8099-f18eccb17951 from this chassis (sb_readonly=0)
Nov 22 04:25:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:55Z|00884|binding|INFO|Setting lport ca4c64d8-4f02-4ed0-8099-f18eccb17951 down in Southbound
Nov 22 04:25:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:55Z|00885|binding|INFO|Removing iface tapca4c64d8-4f ovn-installed in OVS
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.400 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.415 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:36:77 10.100.0.7'], port_security=['fa:16:3e:f3:36:77 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bd717644-36b1-45c9-a56f-b2719ae77e72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ca4c64d8-4f02-4ed0-8099-f18eccb17951) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:25:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.416 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ca4c64d8-4f02-4ed0-8099-f18eccb17951 in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 unbound from our chassis#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.420 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.418 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556#033[00m
Nov 22 04:25:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.441 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[66a98a41-0cd4-4517-844c-1febd299633a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:55 np0005532048 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000053.scope: Deactivated successfully.
Nov 22 04:25:55 np0005532048 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000053.scope: Consumed 14.435s CPU time.
Nov 22 04:25:55 np0005532048 systemd-machined[215941]: Machine qemu-101-instance-00000053 terminated.
Nov 22 04:25:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.477 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e449d1aa-9de3-4b12-bc8e-a996091eee47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.481 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4daecd59-4259-42c7-80ac-b3ed50d872e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:55 np0005532048 NetworkManager[48920]: <info>  [1763803555.5187] manager: (tapca4c64d8-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/368)
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.521 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[84b317cd-0e60-4ee4-a391-611221b19850]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.525 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.530 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.539 253665 INFO nova.virt.libvirt.driver [-] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Instance destroyed successfully.#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.540 253665 DEBUG nova.objects.instance [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'resources' on Instance uuid bd717644-36b1-45c9-a56f-b2719ae77e72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:25:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.545 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[98e4f6f2-afff-4e8e-84d2-8a0dafe11059]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340112, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.550 253665 DEBUG nova.virt.libvirt.vif [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:25:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-219797360',display_name='tempest-ServersTestJSON-server-219797360',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-219797360',id=83,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:25:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-pl500xqd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:25:29Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=bd717644-36b1-45c9-a56f-b2719ae77e72,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.551 253665 DEBUG nova.network.os_vif_util [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "address": "fa:16:3e:f3:36:77", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca4c64d8-4f", "ovs_interfaceid": "ca4c64d8-4f02-4ed0-8099-f18eccb17951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.552 253665 DEBUG nova.network.os_vif_util [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:36:77,bridge_name='br-int',has_traffic_filtering=True,id=ca4c64d8-4f02-4ed0-8099-f18eccb17951,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4c64d8-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.552 253665 DEBUG os_vif [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:36:77,bridge_name='br-int',has_traffic_filtering=True,id=ca4c64d8-4f02-4ed0-8099-f18eccb17951,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4c64d8-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.555 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.555 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca4c64d8-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.559 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.564 253665 INFO os_vif [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:36:77,bridge_name='br-int',has_traffic_filtering=True,id=ca4c64d8-4f02-4ed0-8099-f18eccb17951,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca4c64d8-4f')#033[00m
Nov 22 04:25:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.570 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[66c384b4-d416-4ff9-8898-26e363071beb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340118, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340118, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:25:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.572 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.576 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.576 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:25:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.576 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:25:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:25:55.576 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:25:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 246 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.3 MiB/s wr, 221 op/s
Nov 22 04:25:55 np0005532048 nova_compute[253661]: 2025-11-22 09:25:55.584 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:25:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:25:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:25:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:25:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:25:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:25:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:25:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:25:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:25:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.089 253665 INFO nova.virt.libvirt.driver [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Deleting instance files /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72_del#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.090 253665 INFO nova.virt.libvirt.driver [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Deletion of /var/lib/nova/instances/bd717644-36b1-45c9-a56f-b2719ae77e72_del complete#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.163 253665 INFO nova.compute.manager [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.164 253665 DEBUG oslo.service.loopingcall [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.164 253665 DEBUG nova.compute.manager [-] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.165 253665 DEBUG nova.network.neutron [-] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.316 253665 DEBUG nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.316 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.317 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.317 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.317 253665 DEBUG nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] No waiting events found dispatching network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.317 253665 WARNING nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received unexpected event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.317 253665 DEBUG nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received event network-vif-unplugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.318 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.318 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.318 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.318 253665 DEBUG nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] No waiting events found dispatching network-vif-unplugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.318 253665 DEBUG nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received event network-vif-unplugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.319 253665 DEBUG nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received event network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.319 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.319 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.319 253665 DEBUG oslo_concurrency.lockutils [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.319 253665 DEBUG nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] No waiting events found dispatching network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.320 253665 WARNING nova.compute.manager [req-56fad84c-6611-4612-b9a7-a32fa316e60d req-163fbca6-9696-4eb9-a10f-0b351f7eed01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received unexpected event network-vif-plugged-ca4c64d8-4f02-4ed0-8099-f18eccb17951 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:25:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:56Z|00886|binding|INFO|Releasing lport 93c31381-1979-4cee-982c-9507d8ee6c9a from this chassis (sb_readonly=0)
Nov 22 04:25:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:56Z|00887|binding|INFO|Releasing lport d9f2322f-7849-4ad7-8b1c-0b79523c4fde from this chassis (sb_readonly=0)
Nov 22 04:25:56 np0005532048 NetworkManager[48920]: <info>  [1763803556.3934] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:56 np0005532048 NetworkManager[48920]: <info>  [1763803556.3945] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Nov 22 04:25:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:56Z|00888|binding|INFO|Releasing lport 93c31381-1979-4cee-982c-9507d8ee6c9a from this chassis (sb_readonly=0)
Nov 22 04:25:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:25:56Z|00889|binding|INFO|Releasing lport d9f2322f-7849-4ad7-8b1c-0b79523c4fde from this chassis (sb_readonly=0)
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.443 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.449 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.852 253665 DEBUG nova.compute.manager [req-52a1f0f8-b86e-4c2a-98c0-3a0e6ecd6c8e req-1400b811-4b03-48a9-a636-389e7a65d719 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-changed-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.852 253665 DEBUG nova.compute.manager [req-52a1f0f8-b86e-4c2a-98c0-3a0e6ecd6c8e req-1400b811-4b03-48a9-a636-389e7a65d719 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Refreshing instance network info cache due to event network-changed-b1fc96be-009e-46a8-829c-b7a0bc42af60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.853 253665 DEBUG oslo_concurrency.lockutils [req-52a1f0f8-b86e-4c2a-98c0-3a0e6ecd6c8e req-1400b811-4b03-48a9-a636-389e7a65d719 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.853 253665 DEBUG oslo_concurrency.lockutils [req-52a1f0f8-b86e-4c2a-98c0-3a0e6ecd6c8e req-1400b811-4b03-48a9-a636-389e7a65d719 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:25:56 np0005532048 nova_compute[253661]: 2025-11-22 09:25:56.853 253665 DEBUG nova.network.neutron [req-52a1f0f8-b86e-4c2a-98c0-3a0e6ecd6c8e req-1400b811-4b03-48a9-a636-389e7a65d719 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Refreshing network info cache for port b1fc96be-009e-46a8-829c-b7a0bc42af60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:25:57 np0005532048 nova_compute[253661]: 2025-11-22 09:25:57.194 253665 DEBUG nova.network.neutron [-] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:25:57 np0005532048 nova_compute[253661]: 2025-11-22 09:25:57.224 253665 INFO nova.compute.manager [-] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Took 1.06 seconds to deallocate network for instance.#033[00m
Nov 22 04:25:57 np0005532048 nova_compute[253661]: 2025-11-22 09:25:57.265 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:25:57 np0005532048 nova_compute[253661]: 2025-11-22 09:25:57.266 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:25:57 np0005532048 nova_compute[253661]: 2025-11-22 09:25:57.356 253665 DEBUG oslo_concurrency.processutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:25:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 218 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.4 MiB/s wr, 218 op/s
Nov 22 04:25:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:25:57 np0005532048 nova_compute[253661]: 2025-11-22 09:25:57.810 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803542.808146, 0497bf95-95d6-40fb-8a33-aa3ea54bc542 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:25:57 np0005532048 nova_compute[253661]: 2025-11-22 09:25:57.811 253665 INFO nova.compute.manager [-] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:25:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:25:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2253257867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:25:57 np0005532048 nova_compute[253661]: 2025-11-22 09:25:57.838 253665 DEBUG nova.compute.manager [None req-3200093c-3cf0-4763-b879-8a78b3e30a7a - - - - - -] [instance: 0497bf95-95d6-40fb-8a33-aa3ea54bc542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:25:57 np0005532048 nova_compute[253661]: 2025-11-22 09:25:57.857 253665 DEBUG oslo_concurrency.processutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:25:57 np0005532048 nova_compute[253661]: 2025-11-22 09:25:57.866 253665 DEBUG nova.compute.provider_tree [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:25:57 np0005532048 nova_compute[253661]: 2025-11-22 09:25:57.882 253665 DEBUG nova.scheduler.client.report [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:25:57 np0005532048 nova_compute[253661]: 2025-11-22 09:25:57.908 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:57 np0005532048 nova_compute[253661]: 2025-11-22 09:25:57.936 253665 INFO nova.scheduler.client.report [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Deleted allocations for instance bd717644-36b1-45c9-a56f-b2719ae77e72#033[00m
Nov 22 04:25:58 np0005532048 nova_compute[253661]: 2025-11-22 09:25:58.014 253665 DEBUG oslo_concurrency.lockutils [None req-5ed04b1a-0ffb-4c93-a61a-5d60564a3ba3 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "bd717644-36b1-45c9-a56f-b2719ae77e72" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:25:58 np0005532048 nova_compute[253661]: 2025-11-22 09:25:58.078 253665 DEBUG nova.network.neutron [req-52a1f0f8-b86e-4c2a-98c0-3a0e6ecd6c8e req-1400b811-4b03-48a9-a636-389e7a65d719 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updated VIF entry in instance network info cache for port b1fc96be-009e-46a8-829c-b7a0bc42af60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:25:58 np0005532048 nova_compute[253661]: 2025-11-22 09:25:58.078 253665 DEBUG nova.network.neutron [req-52a1f0f8-b86e-4c2a-98c0-3a0e6ecd6c8e req-1400b811-4b03-48a9-a636-389e7a65d719 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:25:58 np0005532048 nova_compute[253661]: 2025-11-22 09:25:58.102 253665 DEBUG oslo_concurrency.lockutils [req-52a1f0f8-b86e-4c2a-98c0-3a0e6ecd6c8e req-1400b811-4b03-48a9-a636-389e7a65d719 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:25:58 np0005532048 nova_compute[253661]: 2025-11-22 09:25:58.430 253665 DEBUG nova.compute.manager [req-1192df44-1692-471e-ba8f-fc5496fe9571 req-96b2908d-abeb-41f8-b139-8b9b2282f236 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Received event network-vif-deleted-ca4c64d8-4f02-4ed0-8099-f18eccb17951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:25:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 167 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 232 op/s
Nov 22 04:26:00 np0005532048 nova_compute[253661]: 2025-11-22 09:26:00.533 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:00 np0005532048 nova_compute[253661]: 2025-11-22 09:26:00.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:01 np0005532048 nova_compute[253661]: 2025-11-22 09:26:01.103 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "cca5bcee-0493-45bc-976f-32bd793dbf01" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:01 np0005532048 nova_compute[253661]: 2025-11-22 09:26:01.104 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:01 np0005532048 nova_compute[253661]: 2025-11-22 09:26:01.119 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:26:01 np0005532048 nova_compute[253661]: 2025-11-22 09:26:01.195 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:01 np0005532048 nova_compute[253661]: 2025-11-22 09:26:01.195 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:01 np0005532048 nova_compute[253661]: 2025-11-22 09:26:01.203 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:26:01 np0005532048 nova_compute[253661]: 2025-11-22 09:26:01.204 253665 INFO nova.compute.claims [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:26:01 np0005532048 nova_compute[253661]: 2025-11-22 09:26:01.332 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 167 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 488 KiB/s wr, 144 op/s
Nov 22 04:26:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:26:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2236492097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:26:01 np0005532048 nova_compute[253661]: 2025-11-22 09:26:01.801 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:01 np0005532048 nova_compute[253661]: 2025-11-22 09:26:01.808 253665 DEBUG nova.compute.provider_tree [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:26:01 np0005532048 nova_compute[253661]: 2025-11-22 09:26:01.822 253665 DEBUG nova.scheduler.client.report [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:26:01 np0005532048 nova_compute[253661]: 2025-11-22 09:26:01.851 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:01 np0005532048 nova_compute[253661]: 2025-11-22 09:26:01.851 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:26:01 np0005532048 nova_compute[253661]: 2025-11-22 09:26:01.892 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:26:01 np0005532048 nova_compute[253661]: 2025-11-22 09:26:01.895 253665 DEBUG nova.network.neutron [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:26:01 np0005532048 nova_compute[253661]: 2025-11-22 09:26:01.923 253665 INFO nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:26:01 np0005532048 nova_compute[253661]: 2025-11-22 09:26:01.947 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:26:02 np0005532048 nova_compute[253661]: 2025-11-22 09:26:02.045 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:26:02 np0005532048 nova_compute[253661]: 2025-11-22 09:26:02.046 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:26:02 np0005532048 nova_compute[253661]: 2025-11-22 09:26:02.047 253665 INFO nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Creating image(s)#033[00m
Nov 22 04:26:02 np0005532048 nova_compute[253661]: 2025-11-22 09:26:02.102 253665 DEBUG nova.storage.rbd_utils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image cca5bcee-0493-45bc-976f-32bd793dbf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:26:02 np0005532048 nova_compute[253661]: 2025-11-22 09:26:02.131 253665 DEBUG nova.storage.rbd_utils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image cca5bcee-0493-45bc-976f-32bd793dbf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:26:02 np0005532048 nova_compute[253661]: 2025-11-22 09:26:02.157 253665 DEBUG nova.storage.rbd_utils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image cca5bcee-0493-45bc-976f-32bd793dbf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:26:02 np0005532048 nova_compute[253661]: 2025-11-22 09:26:02.162 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:02 np0005532048 nova_compute[253661]: 2025-11-22 09:26:02.198 253665 DEBUG nova.policy [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9517b176edf1498d8cf7afc439fc7f04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4426b820f0e4f21a32402b443ca6282', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:26:02 np0005532048 nova_compute[253661]: 2025-11-22 09:26:02.239 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:02 np0005532048 nova_compute[253661]: 2025-11-22 09:26:02.240 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:02 np0005532048 nova_compute[253661]: 2025-11-22 09:26:02.240 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:02 np0005532048 nova_compute[253661]: 2025-11-22 09:26:02.241 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:02 np0005532048 nova_compute[253661]: 2025-11-22 09:26:02.261 253665 DEBUG nova.storage.rbd_utils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image cca5bcee-0493-45bc-976f-32bd793dbf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:26:02 np0005532048 nova_compute[253661]: 2025-11-22 09:26:02.265 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 cca5bcee-0493-45bc-976f-32bd793dbf01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011072414045712052 of space, bias 1.0, pg target 0.33217242137136155 quantized to 32 (current 32)
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:26:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.755391) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803562755428, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 516, "num_deletes": 255, "total_data_size": 451685, "memory_usage": 461464, "flush_reason": "Manual Compaction"}
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803562796431, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 447306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37797, "largest_seqno": 38312, "table_properties": {"data_size": 444453, "index_size": 825, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6689, "raw_average_key_size": 18, "raw_value_size": 438753, "raw_average_value_size": 1205, "num_data_blocks": 37, "num_entries": 364, "num_filter_entries": 364, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803533, "oldest_key_time": 1763803533, "file_creation_time": 1763803562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 41104 microseconds, and 2116 cpu microseconds.
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.796491) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 447306 bytes OK
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.796519) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.814360) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.814407) EVENT_LOG_v1 {"time_micros": 1763803562814395, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.814435) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 448682, prev total WAL file size 448682, number of live WAL files 2.
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.815149) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323534' seq:72057594037927935, type:22 .. '6C6F676D0031353035' seq:0, type:0; will stop at (end)
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(436KB)], [83(8344KB)]
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803562815188, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 8991610, "oldest_snapshot_seqno": -1}
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6110 keys, 8875933 bytes, temperature: kUnknown
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803562896481, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8875933, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8834662, "index_size": 24905, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 156115, "raw_average_key_size": 25, "raw_value_size": 8724516, "raw_average_value_size": 1427, "num_data_blocks": 1002, "num_entries": 6110, "num_filter_entries": 6110, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.896776) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8875933 bytes
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.900578) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 110.5 rd, 109.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 8.1 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(39.9) write-amplify(19.8) OK, records in: 6628, records dropped: 518 output_compression: NoCompression
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.900600) EVENT_LOG_v1 {"time_micros": 1763803562900591, "job": 48, "event": "compaction_finished", "compaction_time_micros": 81395, "compaction_time_cpu_micros": 24102, "output_level": 6, "num_output_files": 1, "total_output_size": 8875933, "num_input_records": 6628, "num_output_records": 6110, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803562900780, "job": 48, "event": "table_file_deletion", "file_number": 85}
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803562902190, "job": 48, "event": "table_file_deletion", "file_number": 83}
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.815045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.902234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.902240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.902516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.902521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:26:02 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:26:02.902523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:26:03 np0005532048 nova_compute[253661]: 2025-11-22 09:26:03.076 253665 DEBUG nova.network.neutron [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Successfully created port: a812758f-4f22-4843-9cfa-447a7ab9c46a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:26:03 np0005532048 nova_compute[253661]: 2025-11-22 09:26:03.179 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 cca5bcee-0493-45bc-976f-32bd793dbf01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.914s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:03 np0005532048 nova_compute[253661]: 2025-11-22 09:26:03.245 253665 DEBUG nova.storage.rbd_utils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] resizing rbd image cca5bcee-0493-45bc-976f-32bd793dbf01_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:26:03 np0005532048 nova_compute[253661]: 2025-11-22 09:26:03.548 253665 DEBUG nova.objects.instance [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'migration_context' on Instance uuid cca5bcee-0493-45bc-976f-32bd793dbf01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:26:03 np0005532048 nova_compute[253661]: 2025-11-22 09:26:03.561 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:26:03 np0005532048 nova_compute[253661]: 2025-11-22 09:26:03.562 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Ensure instance console log exists: /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:26:03 np0005532048 nova_compute[253661]: 2025-11-22 09:26:03.563 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:03 np0005532048 nova_compute[253661]: 2025-11-22 09:26:03.563 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:03 np0005532048 nova_compute[253661]: 2025-11-22 09:26:03.563 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 167 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 488 KiB/s wr, 144 op/s
Nov 22 04:26:04 np0005532048 nova_compute[253661]: 2025-11-22 09:26:04.085 253665 DEBUG nova.network.neutron [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Successfully updated port: a812758f-4f22-4843-9cfa-447a7ab9c46a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:26:04 np0005532048 nova_compute[253661]: 2025-11-22 09:26:04.105 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "refresh_cache-cca5bcee-0493-45bc-976f-32bd793dbf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:26:04 np0005532048 nova_compute[253661]: 2025-11-22 09:26:04.106 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquired lock "refresh_cache-cca5bcee-0493-45bc-976f-32bd793dbf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:26:04 np0005532048 nova_compute[253661]: 2025-11-22 09:26:04.106 253665 DEBUG nova.network.neutron [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:26:04 np0005532048 nova_compute[253661]: 2025-11-22 09:26:04.255 253665 DEBUG nova.compute.manager [req-a8713a2b-987d-4a88-a42a-f5921475430a req-3b70ee03-0fe2-43ec-b520-4b57273a7230 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received event network-changed-a812758f-4f22-4843-9cfa-447a7ab9c46a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:26:04 np0005532048 nova_compute[253661]: 2025-11-22 09:26:04.255 253665 DEBUG nova.compute.manager [req-a8713a2b-987d-4a88-a42a-f5921475430a req-3b70ee03-0fe2-43ec-b520-4b57273a7230 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Refreshing instance network info cache due to event network-changed-a812758f-4f22-4843-9cfa-447a7ab9c46a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:26:04 np0005532048 nova_compute[253661]: 2025-11-22 09:26:04.256 253665 DEBUG oslo_concurrency.lockutils [req-a8713a2b-987d-4a88-a42a-f5921475430a req-3b70ee03-0fe2-43ec-b520-4b57273a7230 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-cca5bcee-0493-45bc-976f-32bd793dbf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:26:04 np0005532048 nova_compute[253661]: 2025-11-22 09:26:04.283 253665 DEBUG nova.network.neutron [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:26:04 np0005532048 podman[340348]: 2025-11-22 09:26:04.407990597 +0000 UTC m=+0.083721974 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:26:04 np0005532048 podman[340349]: 2025-11-22 09:26:04.42793106 +0000 UTC m=+0.103698218 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 04:26:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:04.473 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:26:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:04.474 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:26:04 np0005532048 nova_compute[253661]: 2025-11-22 09:26:04.476 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.422 253665 DEBUG nova.network.neutron [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Updating instance_info_cache with network_info: [{"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.441 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Releasing lock "refresh_cache-cca5bcee-0493-45bc-976f-32bd793dbf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.441 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Instance network_info: |[{"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.441 253665 DEBUG oslo_concurrency.lockutils [req-a8713a2b-987d-4a88-a42a-f5921475430a req-3b70ee03-0fe2-43ec-b520-4b57273a7230 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-cca5bcee-0493-45bc-976f-32bd793dbf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.442 253665 DEBUG nova.network.neutron [req-a8713a2b-987d-4a88-a42a-f5921475430a req-3b70ee03-0fe2-43ec-b520-4b57273a7230 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Refreshing network info cache for port a812758f-4f22-4843-9cfa-447a7ab9c46a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.444 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Start _get_guest_xml network_info=[{"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.451 253665 WARNING nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.460 253665 DEBUG nova.virt.libvirt.host [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.461 253665 DEBUG nova.virt.libvirt.host [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.467 253665 DEBUG nova.virt.libvirt.host [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.468 253665 DEBUG nova.virt.libvirt.host [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.468 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.468 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.469 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.470 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.471 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.471 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.471 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.472 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.472 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.472 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.473 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.474 253665 DEBUG nova.virt.hardware [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.479 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.536 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.558 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 183 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 478 KiB/s wr, 139 op/s
Nov 22 04:26:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:26:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:26:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:26:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.905 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803550.903665, 23a926e6-c6a7-4e40-82d1-654f68980549 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.905 253665 INFO nova.compute.manager [-] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:26:05 np0005532048 nova_compute[253661]: 2025-11-22 09:26:05.927 253665 DEBUG nova.compute.manager [None req-75935937-e969-4e33-9509-319867fa50d0 - - - - - -] [instance: 23a926e6-c6a7-4e40-82d1-654f68980549] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:26:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:26:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4033577931' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.021 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.044 253665 DEBUG nova.storage.rbd_utils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image cca5bcee-0493-45bc-976f-32bd793dbf01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.050 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:26:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3340608851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.660 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.662 253665 DEBUG nova.virt.libvirt.vif [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:26:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-79942486',display_name='tempest-ServersTestJSON-server-79942486',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-79942486',id=87,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-fbpdzrbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:26:01Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=cca5bcee-0493-45bc-976f-32bd793dbf01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.662 253665 DEBUG nova.network.os_vif_util [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.663 253665 DEBUG nova.network.os_vif_util [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:f5:32,bridge_name='br-int',has_traffic_filtering=True,id=a812758f-4f22-4843-9cfa-447a7ab9c46a,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa812758f-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.664 253665 DEBUG nova.objects.instance [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'pci_devices' on Instance uuid cca5bcee-0493-45bc-976f-32bd793dbf01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.676 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  <uuid>cca5bcee-0493-45bc-976f-32bd793dbf01</uuid>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  <name>instance-00000057</name>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersTestJSON-server-79942486</nova:name>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:26:05</nova:creationTime>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:26:06 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:        <nova:user uuid="9517b176edf1498d8cf7afc439fc7f04">tempest-ServersTestJSON-1454009974-project-member</nova:user>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:        <nova:project uuid="b4426b820f0e4f21a32402b443ca6282">tempest-ServersTestJSON-1454009974</nova:project>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:        <nova:port uuid="a812758f-4f22-4843-9cfa-447a7ab9c46a">
Nov 22 04:26:06 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <entry name="serial">cca5bcee-0493-45bc-976f-32bd793dbf01</entry>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <entry name="uuid">cca5bcee-0493-45bc-976f-32bd793dbf01</entry>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/cca5bcee-0493-45bc-976f-32bd793dbf01_disk">
Nov 22 04:26:06 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:26:06 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/cca5bcee-0493-45bc-976f-32bd793dbf01_disk.config">
Nov 22 04:26:06 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:26:06 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:80:f5:32"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <target dev="tapa812758f-4f"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01/console.log" append="off"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:26:06 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:26:06 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:26:06 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:26:06 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.677 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Preparing to wait for external event network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.677 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.677 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.677 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.678 253665 DEBUG nova.virt.libvirt.vif [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:26:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-79942486',display_name='tempest-ServersTestJSON-server-79942486',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-79942486',id=87,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-fbpdzrbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:26:01Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=cca5bcee-0493-45bc-976f-32bd793dbf01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.678 253665 DEBUG nova.network.os_vif_util [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.679 253665 DEBUG nova.network.os_vif_util [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:f5:32,bridge_name='br-int',has_traffic_filtering=True,id=a812758f-4f22-4843-9cfa-447a7ab9c46a,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa812758f-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.679 253665 DEBUG os_vif [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:f5:32,bridge_name='br-int',has_traffic_filtering=True,id=a812758f-4f22-4843-9cfa-447a7ab9c46a,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa812758f-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.680 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.680 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.680 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.683 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.684 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa812758f-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.684 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa812758f-4f, col_values=(('external_ids', {'iface-id': 'a812758f-4f22-4843-9cfa-447a7ab9c46a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:f5:32', 'vm-uuid': 'cca5bcee-0493-45bc-976f-32bd793dbf01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.686 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:06 np0005532048 NetworkManager[48920]: <info>  [1763803566.6869] manager: (tapa812758f-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/371)
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.688 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.693 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.694 253665 INFO os_vif [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:f5:32,bridge_name='br-int',has_traffic_filtering=True,id=a812758f-4f22-4843-9cfa-447a7ab9c46a,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa812758f-4f')#033[00m
Nov 22 04:26:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:26:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:26:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:26:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:26:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:26:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:26:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 9bcadd1d-0fdd-4a74-961d-b18759d8e7ca does not exist
Nov 22 04:26:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 6d8d5f2c-899b-4c47-9f9e-88995f008492 does not exist
Nov 22 04:26:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 8de019a6-4f37-403f-846c-e541ac31d934 does not exist
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.891 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.891 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.891 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No VIF found with MAC fa:16:3e:80:f5:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:26:06 np0005532048 nova_compute[253661]: 2025-11-22 09:26:06.892 253665 INFO nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Using config drive#033[00m
Nov 22 04:26:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:26:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:26:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:26:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:26:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:26:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:26:07 np0005532048 nova_compute[253661]: 2025-11-22 09:26:07.058 253665 DEBUG nova.storage.rbd_utils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image cca5bcee-0493-45bc-976f-32bd793dbf01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:26:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:26:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:26:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:26:07 np0005532048 nova_compute[253661]: 2025-11-22 09:26:07.504 253665 INFO nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Creating config drive at /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01/disk.config#033[00m
Nov 22 04:26:07 np0005532048 nova_compute[253661]: 2025-11-22 09:26:07.510 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9l686sph execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 206 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 133 op/s
Nov 22 04:26:07 np0005532048 nova_compute[253661]: 2025-11-22 09:26:07.662 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9l686sph" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:07 np0005532048 nova_compute[253661]: 2025-11-22 09:26:07.692 253665 DEBUG nova.storage.rbd_utils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image cca5bcee-0493-45bc-976f-32bd793dbf01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:26:07 np0005532048 nova_compute[253661]: 2025-11-22 09:26:07.695 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01/disk.config cca5bcee-0493-45bc-976f-32bd793dbf01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:26:07 np0005532048 podman[340859]: 2025-11-22 09:26:07.658625645 +0000 UTC m=+0.024341059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:26:07 np0005532048 podman[340859]: 2025-11-22 09:26:07.846014305 +0000 UTC m=+0.211729689 container create 22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swanson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 04:26:08 np0005532048 nova_compute[253661]: 2025-11-22 09:26:08.037 253665 DEBUG nova.network.neutron [req-a8713a2b-987d-4a88-a42a-f5921475430a req-3b70ee03-0fe2-43ec-b520-4b57273a7230 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Updated VIF entry in instance network info cache for port a812758f-4f22-4843-9cfa-447a7ab9c46a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:26:08 np0005532048 nova_compute[253661]: 2025-11-22 09:26:08.038 253665 DEBUG nova.network.neutron [req-a8713a2b-987d-4a88-a42a-f5921475430a req-3b70ee03-0fe2-43ec-b520-4b57273a7230 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Updating instance_info_cache with network_info: [{"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:26:08 np0005532048 nova_compute[253661]: 2025-11-22 09:26:08.052 253665 DEBUG oslo_concurrency.lockutils [req-a8713a2b-987d-4a88-a42a-f5921475430a req-3b70ee03-0fe2-43ec-b520-4b57273a7230 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-cca5bcee-0493-45bc-976f-32bd793dbf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:26:08 np0005532048 systemd[1]: Started libpod-conmon-22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847.scope.
Nov 22 04:26:08 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:26:08 np0005532048 podman[340859]: 2025-11-22 09:26:08.210648871 +0000 UTC m=+0.576364285 container init 22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swanson, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 04:26:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:26:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:26:08 np0005532048 podman[340859]: 2025-11-22 09:26:08.224425454 +0000 UTC m=+0.590140848 container start 22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:26:08 np0005532048 practical_swanson[340909]: 167 167
Nov 22 04:26:08 np0005532048 systemd[1]: libpod-22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847.scope: Deactivated successfully.
Nov 22 04:26:08 np0005532048 podman[340859]: 2025-11-22 09:26:08.449662239 +0000 UTC m=+0.815377633 container attach 22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swanson, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 04:26:08 np0005532048 podman[340859]: 2025-11-22 09:26:08.451401641 +0000 UTC m=+0.817117065 container died 22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swanson, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 04:26:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay-580afe7c6bae772f492d4c1245ccc572e00bd30300315a9f9bf15e8d7af2fa86-merged.mount: Deactivated successfully.
Nov 22 04:26:09 np0005532048 podman[340859]: 2025-11-22 09:26:09.16710623 +0000 UTC m=+1.532821654 container remove 22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swanson, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 04:26:09 np0005532048 systemd[1]: libpod-conmon-22af0c17c1378f852562ba59321bf3f64430eefa7edf3011d50a54f2bc765847.scope: Deactivated successfully.
Nov 22 04:26:09 np0005532048 podman[340935]: 2025-11-22 09:26:09.353203308 +0000 UTC m=+0.035512733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:26:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 217 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.5 MiB/s wr, 106 op/s
Nov 22 04:26:09 np0005532048 podman[340935]: 2025-11-22 09:26:09.613010872 +0000 UTC m=+0.295320197 container create 1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:26:09 np0005532048 systemd[1]: Started libpod-conmon-1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508.scope.
Nov 22 04:26:09 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:26:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70674a2a1df57034c7a669abebe3a4c7086016729d5429568121956d5986e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:26:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70674a2a1df57034c7a669abebe3a4c7086016729d5429568121956d5986e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:26:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70674a2a1df57034c7a669abebe3a4c7086016729d5429568121956d5986e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:26:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70674a2a1df57034c7a669abebe3a4c7086016729d5429568121956d5986e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:26:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70674a2a1df57034c7a669abebe3a4c7086016729d5429568121956d5986e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:26:09 np0005532048 podman[340935]: 2025-11-22 09:26:09.980967744 +0000 UTC m=+0.663277059 container init 1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:26:09 np0005532048 podman[340935]: 2025-11-22 09:26:09.990195785 +0000 UTC m=+0.672505100 container start 1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 04:26:10 np0005532048 nova_compute[253661]: 2025-11-22 09:26:10.041 253665 DEBUG oslo_concurrency.processutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01/disk.config cca5bcee-0493-45bc-976f-32bd793dbf01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:10 np0005532048 nova_compute[253661]: 2025-11-22 09:26:10.042 253665 INFO nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Deleting local config drive /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01/disk.config because it was imported into RBD.#033[00m
Nov 22 04:26:10 np0005532048 podman[340935]: 2025-11-22 09:26:10.055537699 +0000 UTC m=+0.737847014 container attach 1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 04:26:10 np0005532048 kernel: tapa812758f-4f: entered promiscuous mode
Nov 22 04:26:10 np0005532048 NetworkManager[48920]: <info>  [1763803570.1176] manager: (tapa812758f-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/372)
Nov 22 04:26:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:10Z|00890|binding|INFO|Claiming lport a812758f-4f22-4843-9cfa-447a7ab9c46a for this chassis.
Nov 22 04:26:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:10Z|00891|binding|INFO|a812758f-4f22-4843-9cfa-447a7ab9c46a: Claiming fa:16:3e:80:f5:32 10.100.0.6
Nov 22 04:26:10 np0005532048 nova_compute[253661]: 2025-11-22 09:26:10.121 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:26:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.126 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:f5:32 10.100.0.6'], port_security=['fa:16:3e:80:f5:32 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'cca5bcee-0493-45bc-976f-32bd793dbf01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a812758f-4f22-4843-9cfa-447a7ab9c46a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:26:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.127 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a812758f-4f22-4843-9cfa-447a7ab9c46a in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 bound to our chassis
Nov 22 04:26:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.129 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556
Nov 22 04:26:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:10Z|00892|binding|INFO|Setting lport a812758f-4f22-4843-9cfa-447a7ab9c46a up in Southbound
Nov 22 04:26:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:10Z|00893|binding|INFO|Setting lport a812758f-4f22-4843-9cfa-447a7ab9c46a ovn-installed in OVS
Nov 22 04:26:10 np0005532048 nova_compute[253661]: 2025-11-22 09:26:10.146 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:26:10 np0005532048 nova_compute[253661]: 2025-11-22 09:26:10.152 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:26:10 np0005532048 nova_compute[253661]: 2025-11-22 09:26:10.159 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:26:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.160 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[39c30b0a-38aa-4a78-8b66-cbb56196d838]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:26:10 np0005532048 systemd-udevd[340967]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:26:10 np0005532048 NetworkManager[48920]: <info>  [1763803570.1821] device (tapa812758f-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:26:10 np0005532048 NetworkManager[48920]: <info>  [1763803570.1829] device (tapa812758f-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:26:10 np0005532048 systemd-machined[215941]: New machine qemu-105-instance-00000057.
Nov 22 04:26:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.200 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[61b526ca-ec3f-4d75-9ffe-482be5f8f8a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:26:10 np0005532048 systemd[1]: Started Virtual Machine qemu-105-instance-00000057.
Nov 22 04:26:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.204 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5294d6-cea8-4726-a1f7-64bf26f69b3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:26:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.242 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec28f05-1541-4da9-a9d7-547a621df6e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:26:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.266 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[29b4a6ca-a31c-4c0c-aff4-0cc30a2434a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340982, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:26:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.293 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b7aa2d27-33f4-4be5-b36a-f9d04ce5b1f0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340983, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340983, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:26:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.295 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:26:10 np0005532048 nova_compute[253661]: 2025-11-22 09:26:10.297 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:26:10 np0005532048 nova_compute[253661]: 2025-11-22 09:26:10.298 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:26:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.299 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:26:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.300 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:26:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.301 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:26:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:10.301 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:26:10 np0005532048 nova_compute[253661]: 2025-11-22 09:26:10.429 253665 DEBUG nova.compute.manager [req-fa8d4c7b-b5ce-4fb1-89e2-984e7b9b74df req-668a90a8-1c01-42d8-bf6c-dae2d5b30a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received event network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:26:10 np0005532048 nova_compute[253661]: 2025-11-22 09:26:10.430 253665 DEBUG oslo_concurrency.lockutils [req-fa8d4c7b-b5ce-4fb1-89e2-984e7b9b74df req-668a90a8-1c01-42d8-bf6c-dae2d5b30a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:26:10 np0005532048 nova_compute[253661]: 2025-11-22 09:26:10.430 253665 DEBUG oslo_concurrency.lockutils [req-fa8d4c7b-b5ce-4fb1-89e2-984e7b9b74df req-668a90a8-1c01-42d8-bf6c-dae2d5b30a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:26:10 np0005532048 nova_compute[253661]: 2025-11-22 09:26:10.430 253665 DEBUG oslo_concurrency.lockutils [req-fa8d4c7b-b5ce-4fb1-89e2-984e7b9b74df req-668a90a8-1c01-42d8-bf6c-dae2d5b30a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:26:10 np0005532048 nova_compute[253661]: 2025-11-22 09:26:10.430 253665 DEBUG nova.compute.manager [req-fa8d4c7b-b5ce-4fb1-89e2-984e7b9b74df req-668a90a8-1c01-42d8-bf6c-dae2d5b30a45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Processing event network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:26:10 np0005532048 nova_compute[253661]: 2025-11-22 09:26:10.538 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:26:10 np0005532048 nova_compute[253661]: 2025-11-22 09:26:10.539 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803555.537082, bd717644-36b1-45c9-a56f-b2719ae77e72 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:26:10 np0005532048 nova_compute[253661]: 2025-11-22 09:26:10.540 253665 INFO nova.compute.manager [-] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] VM Stopped (Lifecycle Event)
Nov 22 04:26:10 np0005532048 nova_compute[253661]: 2025-11-22 09:26:10.554 253665 DEBUG nova.compute.manager [None req-f69642a2-8512-4de1-bc8a-d0d4789b3766 - - - - - -] [instance: bd717644-36b1-45c9-a56f-b2719ae77e72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.136 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.137 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803571.1354618, cca5bcee-0493-45bc-976f-32bd793dbf01 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.137 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] VM Started (Lifecycle Event)
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.140 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.144 253665 INFO nova.virt.libvirt.driver [-] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Instance spawned successfully.
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.145 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.162 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.173 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.178 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.180 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.181 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.182 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.183 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.183 253665 DEBUG nova.virt.libvirt.driver [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.204 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.205 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803571.1366146, cca5bcee-0493-45bc-976f-32bd793dbf01 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.205 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] VM Paused (Lifecycle Event)
Nov 22 04:26:11 np0005532048 jolly_dhawan[340951]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:26:11 np0005532048 jolly_dhawan[340951]: --> relative data size: 1.0
Nov 22 04:26:11 np0005532048 jolly_dhawan[340951]: --> All data devices are unavailable
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.232 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.237 253665 INFO nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Took 9.19 seconds to spawn the instance on the hypervisor.
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.238 253665 DEBUG nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.243 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803571.139664, cca5bcee-0493-45bc-976f-32bd793dbf01 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.243 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] VM Resumed (Lifecycle Event)
Nov 22 04:26:11 np0005532048 systemd[1]: libpod-1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508.scope: Deactivated successfully.
Nov 22 04:26:11 np0005532048 systemd[1]: libpod-1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508.scope: Consumed 1.082s CPU time.
Nov 22 04:26:11 np0005532048 podman[340935]: 2025-11-22 09:26:11.246448604 +0000 UTC m=+1.928757939 container died 1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.270 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.274 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.295 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.308 253665 INFO nova.compute.manager [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Took 10.13 seconds to build instance.
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.324 253665 DEBUG oslo_concurrency.lockutils [None req-ff512077-271c-4994-b983-f78e393c6a27 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:26:11 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5c70674a2a1df57034c7a669abebe3a4c7086016729d5429568121956d5986e3-merged.mount: Deactivated successfully.
Nov 22 04:26:11 np0005532048 podman[341052]: 2025-11-22 09:26:11.586143735 +0000 UTC m=+0.302903947 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 22 04:26:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 217 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 2.5 MiB/s wr, 48 op/s
Nov 22 04:26:11 np0005532048 podman[340935]: 2025-11-22 09:26:11.683387181 +0000 UTC m=+2.365696496 container remove 1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:26:11 np0005532048 nova_compute[253661]: 2025-11-22 09:26:11.688 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:26:11 np0005532048 systemd[1]: libpod-conmon-1d5d10147eb7914d7b7ba77d2c98d2dcaeb541c8bc9677bd59661d3f4b41e508.scope: Deactivated successfully.
Nov 22 04:26:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:26:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/112024807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:26:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:26:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/112024807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:26:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:12.476 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:12 np0005532048 podman[341223]: 2025-11-22 09:26:12.490951466 +0000 UTC m=+0.094174649 container create 146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_saha, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 04:26:12 np0005532048 podman[341223]: 2025-11-22 09:26:12.424225649 +0000 UTC m=+0.027448852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:26:12 np0005532048 nova_compute[253661]: 2025-11-22 09:26:12.595 253665 DEBUG nova.compute.manager [req-35d0c479-0a9a-4be5-b6d8-fe01aee09f71 req-dad2356d-d501-449b-8457-b884f627190f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received event network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:26:12 np0005532048 nova_compute[253661]: 2025-11-22 09:26:12.596 253665 DEBUG oslo_concurrency.lockutils [req-35d0c479-0a9a-4be5-b6d8-fe01aee09f71 req-dad2356d-d501-449b-8457-b884f627190f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:12 np0005532048 nova_compute[253661]: 2025-11-22 09:26:12.596 253665 DEBUG oslo_concurrency.lockutils [req-35d0c479-0a9a-4be5-b6d8-fe01aee09f71 req-dad2356d-d501-449b-8457-b884f627190f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:12 np0005532048 nova_compute[253661]: 2025-11-22 09:26:12.597 253665 DEBUG oslo_concurrency.lockutils [req-35d0c479-0a9a-4be5-b6d8-fe01aee09f71 req-dad2356d-d501-449b-8457-b884f627190f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:12 np0005532048 nova_compute[253661]: 2025-11-22 09:26:12.597 253665 DEBUG nova.compute.manager [req-35d0c479-0a9a-4be5-b6d8-fe01aee09f71 req-dad2356d-d501-449b-8457-b884f627190f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] No waiting events found dispatching network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:26:12 np0005532048 nova_compute[253661]: 2025-11-22 09:26:12.597 253665 WARNING nova.compute.manager [req-35d0c479-0a9a-4be5-b6d8-fe01aee09f71 req-dad2356d-d501-449b-8457-b884f627190f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received unexpected event network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a for instance with vm_state active and task_state None.#033[00m
Nov 22 04:26:12 np0005532048 systemd[1]: Started libpod-conmon-146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c.scope.
Nov 22 04:26:12 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:26:12 np0005532048 podman[341223]: 2025-11-22 09:26:12.740183053 +0000 UTC m=+0.343406236 container init 146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:26:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:26:12 np0005532048 podman[341223]: 2025-11-22 09:26:12.751918399 +0000 UTC m=+0.355141582 container start 146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_saha, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 04:26:12 np0005532048 jovial_saha[341239]: 167 167
Nov 22 04:26:12 np0005532048 systemd[1]: libpod-146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c.scope: Deactivated successfully.
Nov 22 04:26:12 np0005532048 conmon[341239]: conmon 146284058b47cb833a12 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c.scope/container/memory.events
Nov 22 04:26:12 np0005532048 podman[341223]: 2025-11-22 09:26:12.811889287 +0000 UTC m=+0.415112470 container attach 146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:26:12 np0005532048 podman[341223]: 2025-11-22 09:26:12.812384379 +0000 UTC m=+0.415607572 container died 146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:26:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:13Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:38:67:ca 10.100.0.10
Nov 22 04:26:13 np0005532048 systemd[1]: var-lib-containers-storage-overlay-23554277dc2383230cd78f012fa64d1cefdbfdac7b1c11249ce2e66f1d0f08d5-merged.mount: Deactivated successfully.
Nov 22 04:26:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:13Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:67:ca 10.100.0.10
Nov 22 04:26:13 np0005532048 podman[341223]: 2025-11-22 09:26:13.262271651 +0000 UTC m=+0.865494874 container remove 146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_saha, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:26:13 np0005532048 systemd[1]: libpod-conmon-146284058b47cb833a122fc020e2340adceb76c0952b4aa6271034c2a784698c.scope: Deactivated successfully.
Nov 22 04:26:13 np0005532048 podman[341263]: 2025-11-22 09:26:13.495720191 +0000 UTC m=+0.032848506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:26:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 231 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 674 KiB/s rd, 3.6 MiB/s wr, 88 op/s
Nov 22 04:26:13 np0005532048 podman[341263]: 2025-11-22 09:26:13.709715782 +0000 UTC m=+0.246844077 container create ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 04:26:13 np0005532048 systemd[1]: Started libpod-conmon-ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99.scope.
Nov 22 04:26:13 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:26:13 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b3cfd83faf27acd5aa25d7a08c3cf1ad33c53fb6a2d2d89a65b88b33d1aeb1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:26:13 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b3cfd83faf27acd5aa25d7a08c3cf1ad33c53fb6a2d2d89a65b88b33d1aeb1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:26:13 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b3cfd83faf27acd5aa25d7a08c3cf1ad33c53fb6a2d2d89a65b88b33d1aeb1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:26:13 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b3cfd83faf27acd5aa25d7a08c3cf1ad33c53fb6a2d2d89a65b88b33d1aeb1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:26:13 np0005532048 podman[341263]: 2025-11-22 09:26:13.991301533 +0000 UTC m=+0.528429858 container init ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 04:26:14 np0005532048 podman[341263]: 2025-11-22 09:26:14.003375197 +0000 UTC m=+0.540503492 container start ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 04:26:14 np0005532048 podman[341263]: 2025-11-22 09:26:14.076369912 +0000 UTC m=+0.613498247 container attach ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 04:26:14 np0005532048 festive_raman[341279]: {
Nov 22 04:26:14 np0005532048 festive_raman[341279]:    "0": [
Nov 22 04:26:14 np0005532048 festive_raman[341279]:        {
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "devices": [
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "/dev/loop3"
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            ],
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "lv_name": "ceph_lv0",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "lv_size": "21470642176",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "name": "ceph_lv0",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "tags": {
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.cluster_name": "ceph",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.crush_device_class": "",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.encrypted": "0",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.osd_id": "0",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.type": "block",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.vdo": "0"
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            },
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "type": "block",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "vg_name": "ceph_vg0"
Nov 22 04:26:14 np0005532048 festive_raman[341279]:        }
Nov 22 04:26:14 np0005532048 festive_raman[341279]:    ],
Nov 22 04:26:14 np0005532048 festive_raman[341279]:    "1": [
Nov 22 04:26:14 np0005532048 festive_raman[341279]:        {
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "devices": [
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "/dev/loop4"
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            ],
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "lv_name": "ceph_lv1",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "lv_size": "21470642176",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "name": "ceph_lv1",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "tags": {
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.cluster_name": "ceph",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.crush_device_class": "",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.encrypted": "0",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.osd_id": "1",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.type": "block",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.vdo": "0"
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            },
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "type": "block",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "vg_name": "ceph_vg1"
Nov 22 04:26:14 np0005532048 festive_raman[341279]:        }
Nov 22 04:26:14 np0005532048 festive_raman[341279]:    ],
Nov 22 04:26:14 np0005532048 festive_raman[341279]:    "2": [
Nov 22 04:26:14 np0005532048 festive_raman[341279]:        {
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "devices": [
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "/dev/loop5"
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            ],
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "lv_name": "ceph_lv2",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "lv_size": "21470642176",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "name": "ceph_lv2",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "tags": {
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.cluster_name": "ceph",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.crush_device_class": "",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.encrypted": "0",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.osd_id": "2",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.type": "block",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:                "ceph.vdo": "0"
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            },
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "type": "block",
Nov 22 04:26:14 np0005532048 festive_raman[341279]:            "vg_name": "ceph_vg2"
Nov 22 04:26:14 np0005532048 festive_raman[341279]:        }
Nov 22 04:26:14 np0005532048 festive_raman[341279]:    ]
Nov 22 04:26:14 np0005532048 festive_raman[341279]: }
Nov 22 04:26:14 np0005532048 systemd[1]: libpod-ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99.scope: Deactivated successfully.
Nov 22 04:26:14 np0005532048 podman[341263]: 2025-11-22 09:26:14.849044341 +0000 UTC m=+1.386172656 container died ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 04:26:15 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5b3cfd83faf27acd5aa25d7a08c3cf1ad33c53fb6a2d2d89a65b88b33d1aeb1c-merged.mount: Deactivated successfully.
Nov 22 04:26:15 np0005532048 nova_compute[253661]: 2025-11-22 09:26:15.343 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "cca5bcee-0493-45bc-976f-32bd793dbf01" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:15 np0005532048 nova_compute[253661]: 2025-11-22 09:26:15.344 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:15 np0005532048 nova_compute[253661]: 2025-11-22 09:26:15.344 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:15 np0005532048 nova_compute[253661]: 2025-11-22 09:26:15.344 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:15 np0005532048 nova_compute[253661]: 2025-11-22 09:26:15.345 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:15 np0005532048 nova_compute[253661]: 2025-11-22 09:26:15.346 253665 INFO nova.compute.manager [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Terminating instance#033[00m
Nov 22 04:26:15 np0005532048 nova_compute[253661]: 2025-11-22 09:26:15.347 253665 DEBUG nova.compute.manager [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:26:15 np0005532048 nova_compute[253661]: 2025-11-22 09:26:15.540 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 239 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 121 op/s
Nov 22 04:26:15 np0005532048 podman[341263]: 2025-11-22 09:26:15.880295271 +0000 UTC m=+2.417423566 container remove ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 04:26:15 np0005532048 kernel: tapa812758f-4f (unregistering): left promiscuous mode
Nov 22 04:26:15 np0005532048 systemd[1]: libpod-conmon-ae903dd9e1a099aa55393a83ecbff0486ef7ac82cc42f5999814626222d58f99.scope: Deactivated successfully.
Nov 22 04:26:15 np0005532048 NetworkManager[48920]: <info>  [1763803575.8916] device (tapa812758f-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:26:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:15Z|00894|binding|INFO|Releasing lport a812758f-4f22-4843-9cfa-447a7ab9c46a from this chassis (sb_readonly=0)
Nov 22 04:26:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:15Z|00895|binding|INFO|Setting lport a812758f-4f22-4843-9cfa-447a7ab9c46a down in Southbound
Nov 22 04:26:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:15Z|00896|binding|INFO|Removing iface tapa812758f-4f ovn-installed in OVS
Nov 22 04:26:15 np0005532048 nova_compute[253661]: 2025-11-22 09:26:15.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:15 np0005532048 nova_compute[253661]: 2025-11-22 09:26:15.904 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:15.909 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:f5:32 10.100.0.6'], port_security=['fa:16:3e:80:f5:32 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'cca5bcee-0493-45bc-976f-32bd793dbf01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a812758f-4f22-4843-9cfa-447a7ab9c46a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:26:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:15.910 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a812758f-4f22-4843-9cfa-447a7ab9c46a in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 unbound from our chassis#033[00m
Nov 22 04:26:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:15.912 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556#033[00m
Nov 22 04:26:15 np0005532048 nova_compute[253661]: 2025-11-22 09:26:15.921 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:15.933 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f07d1b-acad-4305-ab20-665dc573f32c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:15 np0005532048 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000057.scope: Deactivated successfully.
Nov 22 04:26:15 np0005532048 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000057.scope: Consumed 4.702s CPU time.
Nov 22 04:26:15 np0005532048 systemd-machined[215941]: Machine qemu-105-instance-00000057 terminated.
Nov 22 04:26:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:15.979 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae688cc-e678-4ed4-9b31-7b9496207f9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:15.984 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7da2a2a5-7de3-4e3d-a503-94e1705bd06a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:15 np0005532048 nova_compute[253661]: 2025-11-22 09:26:15.990 253665 INFO nova.virt.libvirt.driver [-] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Instance destroyed successfully.#033[00m
Nov 22 04:26:15 np0005532048 nova_compute[253661]: 2025-11-22 09:26:15.991 253665 DEBUG nova.objects.instance [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'resources' on Instance uuid cca5bcee-0493-45bc-976f-32bd793dbf01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:26:16 np0005532048 nova_compute[253661]: 2025-11-22 09:26:16.001 253665 DEBUG nova.virt.libvirt.vif [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:26:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-79942486',display_name='tempest-ServersTestJSON-server-79942486',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-79942486',id=87,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:26:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-fbpdzrbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:26:13Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=cca5bcee-0493-45bc-976f-32bd793dbf01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:26:16 np0005532048 nova_compute[253661]: 2025-11-22 09:26:16.002 253665 DEBUG nova.network.os_vif_util [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "address": "fa:16:3e:80:f5:32", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa812758f-4f", "ovs_interfaceid": "a812758f-4f22-4843-9cfa-447a7ab9c46a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:26:16 np0005532048 nova_compute[253661]: 2025-11-22 09:26:16.003 253665 DEBUG nova.network.os_vif_util [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:f5:32,bridge_name='br-int',has_traffic_filtering=True,id=a812758f-4f22-4843-9cfa-447a7ab9c46a,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa812758f-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:26:16 np0005532048 nova_compute[253661]: 2025-11-22 09:26:16.003 253665 DEBUG os_vif [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:f5:32,bridge_name='br-int',has_traffic_filtering=True,id=a812758f-4f22-4843-9cfa-447a7ab9c46a,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa812758f-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:26:16 np0005532048 nova_compute[253661]: 2025-11-22 09:26:16.004 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:16 np0005532048 nova_compute[253661]: 2025-11-22 09:26:16.005 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa812758f-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:16 np0005532048 nova_compute[253661]: 2025-11-22 09:26:16.006 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:16 np0005532048 nova_compute[253661]: 2025-11-22 09:26:16.008 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:16 np0005532048 nova_compute[253661]: 2025-11-22 09:26:16.010 253665 INFO os_vif [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:f5:32,bridge_name='br-int',has_traffic_filtering=True,id=a812758f-4f22-4843-9cfa-447a7ab9c46a,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa812758f-4f')#033[00m
Nov 22 04:26:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:16.031 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0cdfb8-36a8-45ba-912b-63d4993aa326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:16.060 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9c88541e-bfda-45ba-8065-27f3c05590b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341366, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:16.085 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e7839ed7-7378-472e-b939-6e79f20fa79a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341384, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341384, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:16.087 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:16 np0005532048 nova_compute[253661]: 2025-11-22 09:26:16.090 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:16 np0005532048 nova_compute[253661]: 2025-11-22 09:26:16.091 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:16.092 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:16.092 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:26:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:16.093 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:16.093 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:26:16 np0005532048 podman[341485]: 2025-11-22 09:26:16.650852497 +0000 UTC m=+0.027853532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:26:16 np0005532048 podman[341485]: 2025-11-22 09:26:16.824358958 +0000 UTC m=+0.201359973 container create fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 04:26:16 np0005532048 systemd[1]: Started libpod-conmon-fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17.scope.
Nov 22 04:26:16 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:26:17 np0005532048 podman[341485]: 2025-11-22 09:26:17.11074387 +0000 UTC m=+0.487744985 container init fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 04:26:17 np0005532048 podman[341485]: 2025-11-22 09:26:17.119568342 +0000 UTC m=+0.496569357 container start fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:26:17 np0005532048 flamboyant_meninsky[341502]: 167 167
Nov 22 04:26:17 np0005532048 systemd[1]: libpod-fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17.scope: Deactivated successfully.
Nov 22 04:26:17 np0005532048 conmon[341502]: conmon fbcdfad9dd51b2cf37fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17.scope/container/memory.events
Nov 22 04:26:17 np0005532048 podman[341485]: 2025-11-22 09:26:17.178267878 +0000 UTC m=+0.555268923 container attach fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 04:26:17 np0005532048 podman[341485]: 2025-11-22 09:26:17.180434633 +0000 UTC m=+0.557435648 container died fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:26:17 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0013d719f656b2eec6f2c2827a0a1ec4e07e4e839aa81984af2803ea610b4d82-merged.mount: Deactivated successfully.
Nov 22 04:26:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 241 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 134 op/s
Nov 22 04:26:17 np0005532048 podman[341485]: 2025-11-22 09:26:17.714118532 +0000 UTC m=+1.091119547 container remove fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:26:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:26:17 np0005532048 systemd[1]: libpod-conmon-fbcdfad9dd51b2cf37fc4198b5dd161e8dccd54a8e7c949002e1c97973788f17.scope: Deactivated successfully.
Nov 22 04:26:17 np0005532048 podman[341526]: 2025-11-22 09:26:17.882449424 +0000 UTC m=+0.026940168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:26:18 np0005532048 podman[341526]: 2025-11-22 09:26:18.035588445 +0000 UTC m=+0.180079169 container create 7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:26:18 np0005532048 systemd[1]: Started libpod-conmon-7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727.scope.
Nov 22 04:26:18 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:26:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd36d206a345597e5fce1242aea7c866d02bcacf9d82bc7cd45f246bb60b0ad1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:26:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd36d206a345597e5fce1242aea7c866d02bcacf9d82bc7cd45f246bb60b0ad1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:26:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd36d206a345597e5fce1242aea7c866d02bcacf9d82bc7cd45f246bb60b0ad1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:26:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd36d206a345597e5fce1242aea7c866d02bcacf9d82bc7cd45f246bb60b0ad1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:26:18 np0005532048 podman[341526]: 2025-11-22 09:26:18.231744068 +0000 UTC m=+0.376234812 container init 7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:26:18 np0005532048 podman[341526]: 2025-11-22 09:26:18.23899744 +0000 UTC m=+0.383488204 container start 7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:26:18 np0005532048 podman[341526]: 2025-11-22 09:26:18.29547014 +0000 UTC m=+0.439960894 container attach 7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]: {
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:        "osd_id": 1,
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:        "type": "bluestore"
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:    },
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:        "osd_id": 0,
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:        "type": "bluestore"
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:    },
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:        "osd_id": 2,
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:        "type": "bluestore"
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]:    }
Nov 22 04:26:19 np0005532048 optimistic_snyder[341543]: }
Nov 22 04:26:19 np0005532048 systemd[1]: libpod-7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727.scope: Deactivated successfully.
Nov 22 04:26:19 np0005532048 systemd[1]: libpod-7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727.scope: Consumed 1.080s CPU time.
Nov 22 04:26:19 np0005532048 podman[341577]: 2025-11-22 09:26:19.377561038 +0000 UTC m=+0.029339748 container died 7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 04:26:19 np0005532048 systemd[1]: var-lib-containers-storage-overlay-dd36d206a345597e5fce1242aea7c866d02bcacf9d82bc7cd45f246bb60b0ad1-merged.mount: Deactivated successfully.
Nov 22 04:26:19 np0005532048 nova_compute[253661]: 2025-11-22 09:26:19.563 253665 INFO nova.virt.libvirt.driver [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Deleting instance files /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01_del#033[00m
Nov 22 04:26:19 np0005532048 nova_compute[253661]: 2025-11-22 09:26:19.566 253665 INFO nova.virt.libvirt.driver [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Deletion of /var/lib/nova/instances/cca5bcee-0493-45bc-976f-32bd793dbf01_del complete#033[00m
Nov 22 04:26:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 224 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 154 op/s
Nov 22 04:26:19 np0005532048 podman[341577]: 2025-11-22 09:26:19.598602407 +0000 UTC m=+0.250381107 container remove 7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:26:19 np0005532048 systemd[1]: libpod-conmon-7dee33d71902b65953c22003c4e772cd102b4522c5e77bc45127445d5035d727.scope: Deactivated successfully.
Nov 22 04:26:19 np0005532048 nova_compute[253661]: 2025-11-22 09:26:19.620 253665 INFO nova.compute.manager [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Took 4.27 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:26:19 np0005532048 nova_compute[253661]: 2025-11-22 09:26:19.621 253665 DEBUG oslo.service.loopingcall [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:26:19 np0005532048 nova_compute[253661]: 2025-11-22 09:26:19.621 253665 DEBUG nova.compute.manager [-] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:26:19 np0005532048 nova_compute[253661]: 2025-11-22 09:26:19.622 253665 DEBUG nova.network.neutron [-] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:26:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:26:19 np0005532048 nova_compute[253661]: 2025-11-22 09:26:19.672 253665 DEBUG nova.compute.manager [req-b3f3a871-ea35-4216-b7cd-5e584a7802ed req-a81eb5bb-9354-4607-a6cd-1376c7e6bb10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received event network-vif-unplugged-a812758f-4f22-4843-9cfa-447a7ab9c46a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:26:19 np0005532048 nova_compute[253661]: 2025-11-22 09:26:19.672 253665 DEBUG oslo_concurrency.lockutils [req-b3f3a871-ea35-4216-b7cd-5e584a7802ed req-a81eb5bb-9354-4607-a6cd-1376c7e6bb10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:19 np0005532048 nova_compute[253661]: 2025-11-22 09:26:19.672 253665 DEBUG oslo_concurrency.lockutils [req-b3f3a871-ea35-4216-b7cd-5e584a7802ed req-a81eb5bb-9354-4607-a6cd-1376c7e6bb10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:19 np0005532048 nova_compute[253661]: 2025-11-22 09:26:19.673 253665 DEBUG oslo_concurrency.lockutils [req-b3f3a871-ea35-4216-b7cd-5e584a7802ed req-a81eb5bb-9354-4607-a6cd-1376c7e6bb10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:19 np0005532048 nova_compute[253661]: 2025-11-22 09:26:19.673 253665 DEBUG nova.compute.manager [req-b3f3a871-ea35-4216-b7cd-5e584a7802ed req-a81eb5bb-9354-4607-a6cd-1376c7e6bb10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] No waiting events found dispatching network-vif-unplugged-a812758f-4f22-4843-9cfa-447a7ab9c46a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:26:19 np0005532048 nova_compute[253661]: 2025-11-22 09:26:19.673 253665 DEBUG nova.compute.manager [req-b3f3a871-ea35-4216-b7cd-5e584a7802ed req-a81eb5bb-9354-4607-a6cd-1376c7e6bb10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received event network-vif-unplugged-a812758f-4f22-4843-9cfa-447a7ab9c46a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:26:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:26:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:26:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:26:19 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a1a8b41d-981f-4270-9ba4-4564e588ce7e does not exist
Nov 22 04:26:19 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev fa3de5f8-a3af-4976-82c7-e929705e465d does not exist
Nov 22 04:26:20 np0005532048 nova_compute[253661]: 2025-11-22 09:26:20.300 253665 DEBUG nova.network.neutron [-] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:26:20 np0005532048 nova_compute[253661]: 2025-11-22 09:26:20.318 253665 INFO nova.compute.manager [-] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Took 0.70 seconds to deallocate network for instance.#033[00m
Nov 22 04:26:20 np0005532048 nova_compute[253661]: 2025-11-22 09:26:20.382 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:20 np0005532048 nova_compute[253661]: 2025-11-22 09:26:20.382 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:20 np0005532048 nova_compute[253661]: 2025-11-22 09:26:20.421 253665 DEBUG nova.compute.manager [req-f634a83c-0441-4186-8a34-4bfddbfa1be1 req-76b8a3be-84e4-4138-a0e3-e3bcfd8cfd80 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received event network-vif-deleted-a812758f-4f22-4843-9cfa-447a7ab9c46a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:26:20 np0005532048 nova_compute[253661]: 2025-11-22 09:26:20.468 253665 DEBUG oslo_concurrency.processutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:20 np0005532048 nova_compute[253661]: 2025-11-22 09:26:20.544 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:26:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:26:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:26:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4041259500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:26:20 np0005532048 nova_compute[253661]: 2025-11-22 09:26:20.956 253665 DEBUG oslo_concurrency.processutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:20 np0005532048 nova_compute[253661]: 2025-11-22 09:26:20.965 253665 DEBUG nova.compute.provider_tree [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:26:20 np0005532048 nova_compute[253661]: 2025-11-22 09:26:20.983 253665 DEBUG nova.scheduler.client.report [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:26:21 np0005532048 nova_compute[253661]: 2025-11-22 09:26:21.007 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:21 np0005532048 nova_compute[253661]: 2025-11-22 09:26:21.010 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:21 np0005532048 nova_compute[253661]: 2025-11-22 09:26:21.040 253665 INFO nova.scheduler.client.report [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Deleted allocations for instance cca5bcee-0493-45bc-976f-32bd793dbf01#033[00m
Nov 22 04:26:21 np0005532048 nova_compute[253661]: 2025-11-22 09:26:21.105 253665 DEBUG oslo_concurrency.lockutils [None req-9691a84f-24af-4a89-acbb-eb567b9f8058 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 224 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 138 op/s
Nov 22 04:26:21 np0005532048 nova_compute[253661]: 2025-11-22 09:26:21.855 253665 DEBUG nova.compute.manager [req-f065b43c-6e70-41da-902c-2db42526ecfa req-97ecf85c-29cb-45a1-98b0-00b9d3135ec4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received event network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:26:21 np0005532048 nova_compute[253661]: 2025-11-22 09:26:21.855 253665 DEBUG oslo_concurrency.lockutils [req-f065b43c-6e70-41da-902c-2db42526ecfa req-97ecf85c-29cb-45a1-98b0-00b9d3135ec4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:21 np0005532048 nova_compute[253661]: 2025-11-22 09:26:21.856 253665 DEBUG oslo_concurrency.lockutils [req-f065b43c-6e70-41da-902c-2db42526ecfa req-97ecf85c-29cb-45a1-98b0-00b9d3135ec4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:21 np0005532048 nova_compute[253661]: 2025-11-22 09:26:21.856 253665 DEBUG oslo_concurrency.lockutils [req-f065b43c-6e70-41da-902c-2db42526ecfa req-97ecf85c-29cb-45a1-98b0-00b9d3135ec4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cca5bcee-0493-45bc-976f-32bd793dbf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:21 np0005532048 nova_compute[253661]: 2025-11-22 09:26:21.856 253665 DEBUG nova.compute.manager [req-f065b43c-6e70-41da-902c-2db42526ecfa req-97ecf85c-29cb-45a1-98b0-00b9d3135ec4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] No waiting events found dispatching network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:26:21 np0005532048 nova_compute[253661]: 2025-11-22 09:26:21.856 253665 WARNING nova.compute.manager [req-f065b43c-6e70-41da-902c-2db42526ecfa req-97ecf85c-29cb-45a1-98b0-00b9d3135ec4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Received unexpected event network-vif-plugged-a812758f-4f22-4843-9cfa-447a7ab9c46a for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:26:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:26:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:26:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:26:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:26:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:26:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:26:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:26:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:23.426 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2 2001:db8::f816:3eff:feea:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 2001:db8::f816:3eff:feea:1993'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:26:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:23.428 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated#033[00m
Nov 22 04:26:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:23.430 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:26:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:23.431 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[65b02e5d-c19c-4160-a0c9-087ae0e1dc2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 200 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 141 op/s
Nov 22 04:26:25 np0005532048 nova_compute[253661]: 2025-11-22 09:26:25.153 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:25 np0005532048 nova_compute[253661]: 2025-11-22 09:26:25.153 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:25 np0005532048 nova_compute[253661]: 2025-11-22 09:26:25.172 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:26:25 np0005532048 nova_compute[253661]: 2025-11-22 09:26:25.244 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:25 np0005532048 nova_compute[253661]: 2025-11-22 09:26:25.244 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:25 np0005532048 nova_compute[253661]: 2025-11-22 09:26:25.251 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:26:25 np0005532048 nova_compute[253661]: 2025-11-22 09:26:25.251 253665 INFO nova.compute.claims [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:26:25 np0005532048 nova_compute[253661]: 2025-11-22 09:26:25.414 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:25 np0005532048 nova_compute[253661]: 2025-11-22 09:26:25.546 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 200 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 310 KiB/s wr, 106 op/s
Nov 22 04:26:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:26:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3762345784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:26:25 np0005532048 nova_compute[253661]: 2025-11-22 09:26:25.929 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:25 np0005532048 nova_compute[253661]: 2025-11-22 09:26:25.937 253665 DEBUG nova.compute.provider_tree [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:26:25 np0005532048 nova_compute[253661]: 2025-11-22 09:26:25.951 253665 DEBUG nova.scheduler.client.report [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:26:25 np0005532048 nova_compute[253661]: 2025-11-22 09:26:25.972 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:25 np0005532048 nova_compute[253661]: 2025-11-22 09:26:25.973 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.009 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.016 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.017 253665 DEBUG nova.network.neutron [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.035 253665 INFO nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.054 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.140 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.143 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.143 253665 INFO nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Creating image(s)#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.172 253665 DEBUG nova.storage.rbd_utils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.211 253665 DEBUG nova.storage.rbd_utils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.239 253665 DEBUG nova.storage.rbd_utils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.244 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.309 253665 DEBUG nova.policy [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9517b176edf1498d8cf7afc439fc7f04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4426b820f0e4f21a32402b443ca6282', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.345 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.346 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.347 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.348 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.371 253665 DEBUG nova.storage.rbd_utils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.376 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:26.563 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 2001:db8::f816:3eff:feea:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2 2001:db8::f816:3eff:feea:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:26:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:26.564 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated#033[00m
Nov 22 04:26:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:26.566 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:26:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:26.568 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[31619e0e-5172-4f35-8364-e163ff72f5ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.744 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.813 253665 DEBUG nova.storage.rbd_utils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] resizing rbd image 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.918 253665 DEBUG nova.objects.instance [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.941 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.942 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Ensure instance console log exists: /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.942 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.942 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:26 np0005532048 nova_compute[253661]: 2025-11-22 09:26:26.943 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:27 np0005532048 nova_compute[253661]: 2025-11-22 09:26:27.154 253665 DEBUG nova.network.neutron [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Successfully created port: fa819bcd-7193-4627-920d-254828dcdfea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:26:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 217 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 986 KiB/s rd, 1.0 MiB/s wr, 75 op/s
Nov 22 04:26:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:27.739 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2 2001:db8::f816:3eff:feea:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 2001:db8::f816:3eff:feea:1993'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:26:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:27.740 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated#033[00m
Nov 22 04:26:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:27.741 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:26:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:27.742 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b31013a3-fa68-42b6-bbc4-1e33e2f87083]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:26:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:27.968 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:27.969 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:27.970 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:28 np0005532048 nova_compute[253661]: 2025-11-22 09:26:28.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:26:28 np0005532048 nova_compute[253661]: 2025-11-22 09:26:28.342 253665 DEBUG nova.network.neutron [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Successfully updated port: fa819bcd-7193-4627-920d-254828dcdfea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:26:28 np0005532048 nova_compute[253661]: 2025-11-22 09:26:28.357 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "refresh_cache-4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:26:28 np0005532048 nova_compute[253661]: 2025-11-22 09:26:28.357 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquired lock "refresh_cache-4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:26:28 np0005532048 nova_compute[253661]: 2025-11-22 09:26:28.357 253665 DEBUG nova.network.neutron [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:26:28 np0005532048 nova_compute[253661]: 2025-11-22 09:26:28.553 253665 DEBUG nova.compute.manager [req-3a628936-0779-4a66-953a-a328ed2425d9 req-a53aaf87-8918-4fc5-88f7-2585befe2590 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received event network-changed-fa819bcd-7193-4627-920d-254828dcdfea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:26:28 np0005532048 nova_compute[253661]: 2025-11-22 09:26:28.554 253665 DEBUG nova.compute.manager [req-3a628936-0779-4a66-953a-a328ed2425d9 req-a53aaf87-8918-4fc5-88f7-2585befe2590 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Refreshing instance network info cache due to event network-changed-fa819bcd-7193-4627-920d-254828dcdfea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:26:28 np0005532048 nova_compute[253661]: 2025-11-22 09:26:28.554 253665 DEBUG oslo_concurrency.lockutils [req-3a628936-0779-4a66-953a-a328ed2425d9 req-a53aaf87-8918-4fc5-88f7-2585befe2590 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:26:28 np0005532048 nova_compute[253661]: 2025-11-22 09:26:28.879 253665 DEBUG nova.network.neutron [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:26:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 246 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.455 253665 DEBUG nova.network.neutron [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Updating instance_info_cache with network_info: [{"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.478 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Releasing lock "refresh_cache-4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.478 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Instance network_info: |[{"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.479 253665 DEBUG oslo_concurrency.lockutils [req-3a628936-0779-4a66-953a-a328ed2425d9 req-a53aaf87-8918-4fc5-88f7-2585befe2590 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.479 253665 DEBUG nova.network.neutron [req-3a628936-0779-4a66-953a-a328ed2425d9 req-a53aaf87-8918-4fc5-88f7-2585befe2590 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Refreshing network info cache for port fa819bcd-7193-4627-920d-254828dcdfea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.483 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Start _get_guest_xml network_info=[{"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.488 253665 WARNING nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.495 253665 DEBUG nova.virt.libvirt.host [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.496 253665 DEBUG nova.virt.libvirt.host [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.501 253665 DEBUG nova.virt.libvirt.host [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.501 253665 DEBUG nova.virt.libvirt.host [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.501 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.502 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.502 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.503 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.503 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.503 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.503 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.504 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.504 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.504 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.504 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.505 253665 DEBUG nova.virt.hardware [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.508 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.556 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.988 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803575.9870281, cca5bcee-0493-45bc-976f-32bd793dbf01 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:26:30 np0005532048 nova_compute[253661]: 2025-11-22 09:26:30.988 253665 INFO nova.compute.manager [-] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:26:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:26:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/550599483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.006 253665 DEBUG nova.compute.manager [None req-064a0215-c0c6-442d-9477-de44834fa42f - - - - - -] [instance: cca5bcee-0493-45bc-976f-32bd793dbf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.011 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.021 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.048 253665 DEBUG nova.storage.rbd_utils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.053 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:26:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/73068458' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.513 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.515 253665 DEBUG nova.virt.libvirt.vif [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:26:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-582111700',display_name='tempest-ServersTestJSON-server-582111700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-582111700',id=88,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-n4pnb8ja',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:26:26Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=4e9344da-4e80-4749-8d61-a2fe5ffe0cf7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.515 253665 DEBUG nova.network.os_vif_util [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.516 253665 DEBUG nova.network.os_vif_util [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:45:80,bridge_name='br-int',has_traffic_filtering=True,id=fa819bcd-7193-4627-920d-254828dcdfea,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa819bcd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.517 253665 DEBUG nova.objects.instance [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.532 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  <uuid>4e9344da-4e80-4749-8d61-a2fe5ffe0cf7</uuid>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  <name>instance-00000058</name>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersTestJSON-server-582111700</nova:name>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:26:30</nova:creationTime>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:26:31 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:        <nova:user uuid="9517b176edf1498d8cf7afc439fc7f04">tempest-ServersTestJSON-1454009974-project-member</nova:user>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:        <nova:project uuid="b4426b820f0e4f21a32402b443ca6282">tempest-ServersTestJSON-1454009974</nova:project>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:        <nova:port uuid="fa819bcd-7193-4627-920d-254828dcdfea">
Nov 22 04:26:31 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <entry name="serial">4e9344da-4e80-4749-8d61-a2fe5ffe0cf7</entry>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <entry name="uuid">4e9344da-4e80-4749-8d61-a2fe5ffe0cf7</entry>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk">
Nov 22 04:26:31 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:26:31 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk.config">
Nov 22 04:26:31 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:26:31 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:31:45:80"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <target dev="tapfa819bcd-71"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7/console.log" append="off"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:26:31 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:26:31 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:26:31 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:26:31 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.533 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Preparing to wait for external event network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.533 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.534 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.534 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.535 253665 DEBUG nova.virt.libvirt.vif [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:26:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-582111700',display_name='tempest-ServersTestJSON-server-582111700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-582111700',id=88,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-n4pnb8ja',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:26:26Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=4e9344da-4e80-4749-8d61-a2fe5ffe0cf7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.535 253665 DEBUG nova.network.os_vif_util [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.535 253665 DEBUG nova.network.os_vif_util [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:45:80,bridge_name='br-int',has_traffic_filtering=True,id=fa819bcd-7193-4627-920d-254828dcdfea,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa819bcd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.536 253665 DEBUG os_vif [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:45:80,bridge_name='br-int',has_traffic_filtering=True,id=fa819bcd-7193-4627-920d-254828dcdfea,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa819bcd-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.536 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.537 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.537 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.541 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.541 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa819bcd-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.542 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfa819bcd-71, col_values=(('external_ids', {'iface-id': 'fa819bcd-7193-4627-920d-254828dcdfea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:31:45:80', 'vm-uuid': '4e9344da-4e80-4749-8d61-a2fe5ffe0cf7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:31 np0005532048 NetworkManager[48920]: <info>  [1763803591.5449] manager: (tapfa819bcd-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.546 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.553 253665 INFO os_vif [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:45:80,bridge_name='br-int',has_traffic_filtering=True,id=fa819bcd-7193-4627-920d-254828dcdfea,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa819bcd-71')#033[00m
Nov 22 04:26:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 246 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.606 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.606 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.607 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] No VIF found with MAC fa:16:3e:31:45:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.607 253665 INFO nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Using config drive#033[00m
Nov 22 04:26:31 np0005532048 nova_compute[253661]: 2025-11-22 09:26:31.629 253665 DEBUG nova.storage.rbd_utils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:26:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:31.714 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2 2001:db8::f816:3eff:feea:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:26:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:31.716 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated#033[00m
Nov 22 04:26:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:31.718 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:26:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:31.719 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e6189b62-f446-4f8f-beb6-75ae6e3ccb7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.250 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.294 253665 INFO nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Creating config drive at /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7/disk.config#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.299 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3wis6s_c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.445 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3wis6s_c" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.469 253665 DEBUG nova.storage.rbd_utils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] rbd image 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.473 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7/disk.config 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.590 253665 DEBUG nova.compute.manager [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.626 253665 INFO nova.compute.manager [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] instance snapshotting#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.627 253665 DEBUG nova.objects.instance [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'flavor' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.670 253665 DEBUG oslo_concurrency.processutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7/disk.config 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.671 253665 INFO nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Deleting local config drive /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7/disk.config because it was imported into RBD.#033[00m
Nov 22 04:26:32 np0005532048 kernel: tapfa819bcd-71: entered promiscuous mode
Nov 22 04:26:32 np0005532048 NetworkManager[48920]: <info>  [1763803592.7295] manager: (tapfa819bcd-71): new Tun device (/org/freedesktop/NetworkManager/Devices/374)
Nov 22 04:26:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:32Z|00897|binding|INFO|Claiming lport fa819bcd-7193-4627-920d-254828dcdfea for this chassis.
Nov 22 04:26:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:32Z|00898|binding|INFO|fa819bcd-7193-4627-920d-254828dcdfea: Claiming fa:16:3e:31:45:80 10.100.0.5
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.729 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.737 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:45:80 10.100.0.5'], port_security=['fa:16:3e:31:45:80 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4e9344da-4e80-4749-8d61-a2fe5ffe0cf7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=fa819bcd-7193-4627-920d-254828dcdfea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.738 162862 INFO neutron.agent.ovn.metadata.agent [-] Port fa819bcd-7193-4627-920d-254828dcdfea in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 bound to our chassis#033[00m
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.740 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556#033[00m
Nov 22 04:26:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:32Z|00899|binding|INFO|Setting lport fa819bcd-7193-4627-920d-254828dcdfea ovn-installed in OVS
Nov 22 04:26:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:32Z|00900|binding|INFO|Setting lport fa819bcd-7193-4627-920d-254828dcdfea up in Southbound
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.747 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.760 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[548c6245-9ed8-4dcb-809d-3e464da65ba4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:32 np0005532048 systemd-udevd[341990]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:26:32 np0005532048 systemd-machined[215941]: New machine qemu-106-instance-00000058.
Nov 22 04:26:32 np0005532048 systemd[1]: Started Virtual Machine qemu-106-instance-00000058.
Nov 22 04:26:32 np0005532048 NetworkManager[48920]: <info>  [1763803592.7874] device (tapfa819bcd-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:26:32 np0005532048 NetworkManager[48920]: <info>  [1763803592.7882] device (tapfa819bcd-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.792 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0736698b-45e7-4ffc-8476-3863d066678f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.796 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2b30cdec-39c5-4918-a3c1-b9c851b68409]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.828 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.829 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.829 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.829 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e0b05f62-6966-4bf3-aee5-e4d2137a6cfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.829 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8888abd1-bbd5-4628-9df8-8e917549de62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.833 253665 DEBUG nova.network.neutron [req-3a628936-0779-4a66-953a-a328ed2425d9 req-a53aaf87-8918-4fc5-88f7-2585befe2590 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Updated VIF entry in instance network info cache for port fa819bcd-7193-4627-920d-254828dcdfea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.834 253665 DEBUG nova.network.neutron [req-3a628936-0779-4a66-953a-a328ed2425d9 req-a53aaf87-8918-4fc5-88f7-2585befe2590 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Updating instance_info_cache with network_info: [{"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.849 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e6cd79df-f794-41fc-8401-8f891b34f5a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341999, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.858 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2 2001:db8::f816:3eff:feea:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.859 253665 DEBUG oslo_concurrency.lockutils [req-3a628936-0779-4a66-953a-a328ed2425d9 req-a53aaf87-8918-4fc5-88f7-2585befe2590 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.868 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af279f7f-728a-4e3c-8808-f31504783d43]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342002, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342002, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.870 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.872 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:32 np0005532048 nova_compute[253661]: 2025-11-22 09:26:32.873 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.875 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.877 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated#033[00m
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.878 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:26:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:32.879 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b65c1eaf-39cd-4c9e-90c7-4a7618c63ab2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.167 253665 INFO nova.virt.libvirt.driver [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Beginning live snapshot process#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.176 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803593.176301, 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.177 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] VM Started (Lifecycle Event)#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.210 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.221 253665 DEBUG nova.compute.manager [req-0c51443c-2aa1-48af-8ba3-af19989ddb3f req-e0ce584e-fb8d-4564-a5fb-482f3a788346 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received event network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.221 253665 DEBUG oslo_concurrency.lockutils [req-0c51443c-2aa1-48af-8ba3-af19989ddb3f req-e0ce584e-fb8d-4564-a5fb-482f3a788346 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.221 253665 DEBUG oslo_concurrency.lockutils [req-0c51443c-2aa1-48af-8ba3-af19989ddb3f req-e0ce584e-fb8d-4564-a5fb-482f3a788346 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.222 253665 DEBUG oslo_concurrency.lockutils [req-0c51443c-2aa1-48af-8ba3-af19989ddb3f req-e0ce584e-fb8d-4564-a5fb-482f3a788346 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.222 253665 DEBUG nova.compute.manager [req-0c51443c-2aa1-48af-8ba3-af19989ddb3f req-e0ce584e-fb8d-4564-a5fb-482f3a788346 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Processing event network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.223 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.224 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803593.1765378, 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.224 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.228 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.231 253665 INFO nova.virt.libvirt.driver [-] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Instance spawned successfully.#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.231 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.253 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.260 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803593.2267067, 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.260 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.263 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.264 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.264 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.264 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.265 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.265 253665 DEBUG nova.virt.libvirt.driver [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.313 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.317 253665 DEBUG nova.virt.libvirt.imagebackend [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.322 253665 INFO nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Took 7.18 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.322 253665 DEBUG nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.324 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.356 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.392 253665 INFO nova.compute.manager [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Took 8.16 seconds to build instance.#033[00m
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.405 253665 DEBUG oslo_concurrency.lockutils [None req-1fb71b5b-01c3-41ad-a57e-cbdf34b95309 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 246 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 22 04:26:33 np0005532048 nova_compute[253661]: 2025-11-22 09:26:33.850 253665 DEBUG nova.storage.rbd_utils [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(1d1c33183bfc4c66b184e71a2e1fd599) on rbd image(e4f9440c-7476-4022-8d08-1b3151a9db79_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:26:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Nov 22 04:26:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Nov 22 04:26:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Nov 22 04:26:34 np0005532048 nova_compute[253661]: 2025-11-22 09:26:34.207 253665 DEBUG nova.storage.rbd_utils [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] cloning vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk@1d1c33183bfc4c66b184e71a2e1fd599 to images/84078c1f-f45a-4974-ab60-fbf47bdc21a1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:26:34 np0005532048 nova_compute[253661]: 2025-11-22 09:26:34.315 253665 DEBUG nova.storage.rbd_utils [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] flattening images/84078c1f-f45a-4974-ab60-fbf47bdc21a1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 22 04:26:34 np0005532048 nova_compute[253661]: 2025-11-22 09:26:34.688 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Updating instance_info_cache with network_info: [{"id": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "address": "fa:16:3e:2b:9d:63", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdedad4aa-19", "ovs_interfaceid": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:26:34 np0005532048 nova_compute[253661]: 2025-11-22 09:26:34.702 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:26:34 np0005532048 nova_compute[253661]: 2025-11-22 09:26:34.702 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:26:34 np0005532048 nova_compute[253661]: 2025-11-22 09:26:34.702 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:26:34 np0005532048 nova_compute[253661]: 2025-11-22 09:26:34.703 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:26:34 np0005532048 nova_compute[253661]: 2025-11-22 09:26:34.703 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:26:34 np0005532048 nova_compute[253661]: 2025-11-22 09:26:34.703 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:26:34 np0005532048 nova_compute[253661]: 2025-11-22 09:26:34.729 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:34 np0005532048 nova_compute[253661]: 2025-11-22 09:26:34.730 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:34 np0005532048 nova_compute[253661]: 2025-11-22 09:26:34.730 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:34 np0005532048 nova_compute[253661]: 2025-11-22 09:26:34.730 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:26:34 np0005532048 nova_compute[253661]: 2025-11-22 09:26:34.730 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:26:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/65132115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.387 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:35 np0005532048 podman[342172]: 2025-11-22 09:26:35.419100538 +0000 UTC m=+0.084817614 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 22 04:26:35 np0005532048 podman[342173]: 2025-11-22 09:26:35.42355828 +0000 UTC m=+0.089802609 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.441 253665 DEBUG nova.compute.manager [req-e718df07-7b30-43d1-8f33-92e5ebc4f24f req-cee5c251-8033-4424-95e2-9608cd1f3a3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received event network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.441 253665 DEBUG oslo_concurrency.lockutils [req-e718df07-7b30-43d1-8f33-92e5ebc4f24f req-cee5c251-8033-4424-95e2-9608cd1f3a3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.442 253665 DEBUG oslo_concurrency.lockutils [req-e718df07-7b30-43d1-8f33-92e5ebc4f24f req-cee5c251-8033-4424-95e2-9608cd1f3a3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.442 253665 DEBUG oslo_concurrency.lockutils [req-e718df07-7b30-43d1-8f33-92e5ebc4f24f req-cee5c251-8033-4424-95e2-9608cd1f3a3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.442 253665 DEBUG nova.compute.manager [req-e718df07-7b30-43d1-8f33-92e5ebc4f24f req-cee5c251-8033-4424-95e2-9608cd1f3a3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] No waiting events found dispatching network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.442 253665 WARNING nova.compute.manager [req-e718df07-7b30-43d1-8f33-92e5ebc4f24f req-cee5c251-8033-4424-95e2-9608cd1f3a3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received unexpected event network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea for instance with vm_state active and task_state None.#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.467 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.468 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.471 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.471 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.475 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.475 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.551 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 246 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 185 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.714 253665 DEBUG nova.storage.rbd_utils [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] removing snapshot(1d1c33183bfc4c66b184e71a2e1fd599) on rbd image(e4f9440c-7476-4022-8d08-1b3151a9db79_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.847 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.848 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3312MB free_disk=59.87630844116211GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.848 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.848 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.925 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance e0b05f62-6966-4bf3-aee5-e4d2137a6cfc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.926 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance e4f9440c-7476-4022-8d08-1b3151a9db79 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.926 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.926 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.926 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.939 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.953 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.954 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.969 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 22 04:26:35 np0005532048 nova_compute[253661]: 2025-11-22 09:26:35.987 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 22 04:26:36 np0005532048 nova_compute[253661]: 2025-11-22 09:26:36.052 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:36.438 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2 2001:db8::f816:3eff:feea:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:26:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:36.439 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated#033[00m
Nov 22 04:26:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:36.441 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:26:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:36.442 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ae05d192-ee5c-4c3f-8345-c0958dcde07c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Nov 22 04:26:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Nov 22 04:26:36 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Nov 22 04:26:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:26:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/31841227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:26:36 np0005532048 nova_compute[253661]: 2025-11-22 09:26:36.536 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:36 np0005532048 nova_compute[253661]: 2025-11-22 09:26:36.543 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:26:36 np0005532048 nova_compute[253661]: 2025-11-22 09:26:36.546 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:36 np0005532048 nova_compute[253661]: 2025-11-22 09:26:36.558 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:26:36 np0005532048 nova_compute[253661]: 2025-11-22 09:26:36.568 253665 DEBUG nova.storage.rbd_utils [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(snap) on rbd image(84078c1f-f45a-4974-ab60-fbf47bdc21a1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:26:36 np0005532048 nova_compute[253661]: 2025-11-22 09:26:36.601 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:26:36 np0005532048 nova_compute[253661]: 2025-11-22 09:26:36.602 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:36 np0005532048 nova_compute[253661]: 2025-11-22 09:26:36.964 253665 DEBUG oslo_concurrency.lockutils [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:36 np0005532048 nova_compute[253661]: 2025-11-22 09:26:36.964 253665 DEBUG oslo_concurrency.lockutils [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:36 np0005532048 nova_compute[253661]: 2025-11-22 09:26:36.965 253665 DEBUG nova.compute.manager [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:26:36 np0005532048 nova_compute[253661]: 2025-11-22 09:26:36.968 253665 DEBUG nova.compute.manager [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 22 04:26:36 np0005532048 nova_compute[253661]: 2025-11-22 09:26:36.969 253665 DEBUG nova.objects.instance [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'flavor' on Instance uuid 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:26:36 np0005532048 nova_compute[253661]: 2025-11-22 09:26:36.992 253665 DEBUG nova.virt.libvirt.driver [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:26:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:37.405 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2 2001:db8::f816:3eff:feea:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:26:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:37.407 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated#033[00m
Nov 22 04:26:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:37.408 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:26:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:37.409 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a02012f8-8c50-4c63-87fb-81fbe59ed666]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Nov 22 04:26:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Nov 22 04:26:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 259 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 587 KiB/s wr, 94 op/s
Nov 22 04:26:37 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Nov 22 04:26:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:26:38 np0005532048 nova_compute[253661]: 2025-11-22 09:26:38.127 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:26:38 np0005532048 nova_compute[253661]: 2025-11-22 09:26:38.128 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:26:39 np0005532048 nova_compute[253661]: 2025-11-22 09:26:39.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:26:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 326 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 294 op/s
Nov 22 04:26:39 np0005532048 nova_compute[253661]: 2025-11-22 09:26:39.714 253665 INFO nova.virt.libvirt.driver [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Snapshot image upload complete#033[00m
Nov 22 04:26:39 np0005532048 nova_compute[253661]: 2025-11-22 09:26:39.714 253665 INFO nova.compute.manager [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Took 7.07 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 22 04:26:40 np0005532048 nova_compute[253661]: 2025-11-22 09:26:40.145 253665 DEBUG nova.compute.manager [None req-836a6ce9-c9c8-4e46-a96f-245d6117b01d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Nov 22 04:26:40 np0005532048 nova_compute[253661]: 2025-11-22 09:26:40.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:41 np0005532048 nova_compute[253661]: 2025-11-22 09:26:41.131 253665 DEBUG nova.compute.manager [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:26:41 np0005532048 nova_compute[253661]: 2025-11-22 09:26:41.169 253665 INFO nova.compute.manager [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] instance snapshotting#033[00m
Nov 22 04:26:41 np0005532048 nova_compute[253661]: 2025-11-22 09:26:41.170 253665 DEBUG nova.objects.instance [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'flavor' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:26:41 np0005532048 nova_compute[253661]: 2025-11-22 09:26:41.459 253665 INFO nova.virt.libvirt.driver [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Beginning live snapshot process#033[00m
Nov 22 04:26:41 np0005532048 nova_compute[253661]: 2025-11-22 09:26:41.592 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:41 np0005532048 nova_compute[253661]: 2025-11-22 09:26:41.599 253665 DEBUG nova.virt.libvirt.imagebackend [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 22 04:26:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 326 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 6.2 MiB/s wr, 215 op/s
Nov 22 04:26:41 np0005532048 nova_compute[253661]: 2025-11-22 09:26:41.782 253665 DEBUG nova.storage.rbd_utils [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(8063ee849ec444f896e547beef62fa35) on rbd image(e4f9440c-7476-4022-8d08-1b3151a9db79_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:26:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:42.338 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 2001:db8::f816:3eff:feea:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 10.100.0.2 2001:db8::f816:3eff:feea:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:26:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:42.340 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated#033[00m
Nov 22 04:26:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:42.341 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:26:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:42.342 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1d762645-b461-4f98-a718-99988e576797]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:42 np0005532048 podman[342320]: 2025-11-22 09:26:42.424417635 +0000 UTC m=+0.107146465 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 04:26:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:26:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Nov 22 04:26:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Nov 22 04:26:42 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Nov 22 04:26:43 np0005532048 nova_compute[253661]: 2025-11-22 09:26:43.157 253665 DEBUG nova.storage.rbd_utils [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] cloning vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk@8063ee849ec444f896e547beef62fa35 to images/59a900cc-5a77-42a2-a590-ba279de1eb2e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:26:43 np0005532048 nova_compute[253661]: 2025-11-22 09:26:43.372 253665 DEBUG nova.storage.rbd_utils [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] flattening images/59a900cc-5a77-42a2-a590-ba279de1eb2e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 22 04:26:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 326 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 6.6 MiB/s wr, 248 op/s
Nov 22 04:26:45 np0005532048 nova_compute[253661]: 2025-11-22 09:26:45.495 253665 DEBUG nova.storage.rbd_utils [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] removing snapshot(8063ee849ec444f896e547beef62fa35) on rbd image(e4f9440c-7476-4022-8d08-1b3151a9db79_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 22 04:26:45 np0005532048 nova_compute[253661]: 2025-11-22 09:26:45.556 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 338 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 6.1 MiB/s wr, 183 op/s
Nov 22 04:26:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Nov 22 04:26:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Nov 22 04:26:46 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Nov 22 04:26:46 np0005532048 nova_compute[253661]: 2025-11-22 09:26:46.212 253665 DEBUG nova.storage.rbd_utils [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(snap) on rbd image(59a900cc-5a77-42a2-a590-ba279de1eb2e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:26:46 np0005532048 nova_compute[253661]: 2025-11-22 09:26:46.595 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:47 np0005532048 nova_compute[253661]: 2025-11-22 09:26:47.037 253665 DEBUG nova.virt.libvirt.driver [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 22 04:26:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Nov 22 04:26:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Nov 22 04:26:47 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Nov 22 04:26:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:47Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:31:45:80 10.100.0.5
Nov 22 04:26:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:47Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:31:45:80 10.100.0.5
Nov 22 04:26:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 366 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 110 op/s
Nov 22 04:26:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:26:48 np0005532048 nova_compute[253661]: 2025-11-22 09:26:48.706 253665 INFO nova.virt.libvirt.driver [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Snapshot image upload complete#033[00m
Nov 22 04:26:48 np0005532048 nova_compute[253661]: 2025-11-22 09:26:48.706 253665 INFO nova.compute.manager [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Took 7.52 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 22 04:26:48 np0005532048 nova_compute[253661]: 2025-11-22 09:26:48.968 253665 DEBUG nova.compute.manager [None req-e0af6b18-452a-4e6a-a683-df62c7c44840 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Nov 22 04:26:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 432 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 11 MiB/s wr, 276 op/s
Nov 22 04:26:50 np0005532048 kernel: tapfa819bcd-71 (unregistering): left promiscuous mode
Nov 22 04:26:50 np0005532048 NetworkManager[48920]: <info>  [1763803610.3504] device (tapfa819bcd-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:26:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:50Z|00901|binding|INFO|Releasing lport fa819bcd-7193-4627-920d-254828dcdfea from this chassis (sb_readonly=0)
Nov 22 04:26:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:50Z|00902|binding|INFO|Setting lport fa819bcd-7193-4627-920d-254828dcdfea down in Southbound
Nov 22 04:26:50 np0005532048 nova_compute[253661]: 2025-11-22 09:26:50.366 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:50Z|00903|binding|INFO|Removing iface tapfa819bcd-71 ovn-installed in OVS
Nov 22 04:26:50 np0005532048 nova_compute[253661]: 2025-11-22 09:26:50.368 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.374 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:45:80 10.100.0.5'], port_security=['fa:16:3e:31:45:80 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4e9344da-4e80-4749-8d61-a2fe5ffe0cf7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=fa819bcd-7193-4627-920d-254828dcdfea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:26:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.376 162862 INFO neutron.agent.ovn.metadata.agent [-] Port fa819bcd-7193-4627-920d-254828dcdfea in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 unbound from our chassis#033[00m
Nov 22 04:26:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.377 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 502d021b-7c33-4c22-8cd9-32a451fdf556#033[00m
Nov 22 04:26:50 np0005532048 nova_compute[253661]: 2025-11-22 09:26:50.389 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.398 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[25ff838a-9d03-4e57-bf40-d446252b625f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:50 np0005532048 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000058.scope: Deactivated successfully.
Nov 22 04:26:50 np0005532048 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000058.scope: Consumed 13.579s CPU time.
Nov 22 04:26:50 np0005532048 systemd-machined[215941]: Machine qemu-106-instance-00000058 terminated.
Nov 22 04:26:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.435 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[665dad5f-d049-4e5f-bd5e-5db492a4923a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.438 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e5655577-dc0d-46c5-a689-2e32a3208db4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:50 np0005532048 nova_compute[253661]: 2025-11-22 09:26:50.457 253665 DEBUG nova.compute.manager [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:26:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.470 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dcc23dc3-6d8e-4db4-b3cd-5a5e43afc92f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.489 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[62ce9960-4fbd-4548-80f3-f61e6921dfff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap502d021b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:71:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630091, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342448, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:50 np0005532048 nova_compute[253661]: 2025-11-22 09:26:50.499 253665 INFO nova.compute.manager [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] instance snapshotting#033[00m
Nov 22 04:26:50 np0005532048 nova_compute[253661]: 2025-11-22 09:26:50.500 253665 DEBUG nova.objects.instance [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'flavor' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:26:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.508 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9220fbea-e787-44d8-837e-c81dbe903738]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630103, 'tstamp': 630103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342449, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap502d021b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630106, 'tstamp': 630106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342449, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.510 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:50 np0005532048 nova_compute[253661]: 2025-11-22 09:26:50.512 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.518 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap502d021b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.519 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:26:50 np0005532048 nova_compute[253661]: 2025-11-22 09:26:50.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.519 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap502d021b-70, col_values=(('external_ids', {'iface-id': 'd9f2322f-7849-4ad7-8b1c-0b79523c4fde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:50.520 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:26:50 np0005532048 nova_compute[253661]: 2025-11-22 09:26:50.558 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:50 np0005532048 nova_compute[253661]: 2025-11-22 09:26:50.749 253665 DEBUG nova.compute.manager [req-03a727d7-ba30-418c-893a-43fe9d27f5cd req-a16fa29b-36a2-4113-8aca-d8ff94cfc3db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received event network-vif-unplugged-fa819bcd-7193-4627-920d-254828dcdfea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:26:50 np0005532048 nova_compute[253661]: 2025-11-22 09:26:50.750 253665 DEBUG oslo_concurrency.lockutils [req-03a727d7-ba30-418c-893a-43fe9d27f5cd req-a16fa29b-36a2-4113-8aca-d8ff94cfc3db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:50 np0005532048 nova_compute[253661]: 2025-11-22 09:26:50.751 253665 DEBUG oslo_concurrency.lockutils [req-03a727d7-ba30-418c-893a-43fe9d27f5cd req-a16fa29b-36a2-4113-8aca-d8ff94cfc3db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:50 np0005532048 nova_compute[253661]: 2025-11-22 09:26:50.751 253665 DEBUG oslo_concurrency.lockutils [req-03a727d7-ba30-418c-893a-43fe9d27f5cd req-a16fa29b-36a2-4113-8aca-d8ff94cfc3db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:50 np0005532048 nova_compute[253661]: 2025-11-22 09:26:50.751 253665 DEBUG nova.compute.manager [req-03a727d7-ba30-418c-893a-43fe9d27f5cd req-a16fa29b-36a2-4113-8aca-d8ff94cfc3db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] No waiting events found dispatching network-vif-unplugged-fa819bcd-7193-4627-920d-254828dcdfea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:26:50 np0005532048 nova_compute[253661]: 2025-11-22 09:26:50.752 253665 WARNING nova.compute.manager [req-03a727d7-ba30-418c-893a-43fe9d27f5cd req-a16fa29b-36a2-4113-8aca-d8ff94cfc3db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received unexpected event network-vif-unplugged-fa819bcd-7193-4627-920d-254828dcdfea for instance with vm_state active and task_state powering-off.#033[00m
Nov 22 04:26:50 np0005532048 nova_compute[253661]: 2025-11-22 09:26:50.761 253665 INFO nova.virt.libvirt.driver [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Beginning live snapshot process#033[00m
Nov 22 04:26:50 np0005532048 nova_compute[253661]: 2025-11-22 09:26:50.916 253665 DEBUG nova.virt.libvirt.imagebackend [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 22 04:26:51 np0005532048 nova_compute[253661]: 2025-11-22 09:26:51.054 253665 INFO nova.virt.libvirt.driver [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Instance shutdown successfully after 14 seconds.#033[00m
Nov 22 04:26:51 np0005532048 nova_compute[253661]: 2025-11-22 09:26:51.060 253665 INFO nova.virt.libvirt.driver [-] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Instance destroyed successfully.#033[00m
Nov 22 04:26:51 np0005532048 nova_compute[253661]: 2025-11-22 09:26:51.061 253665 DEBUG nova.objects.instance [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:26:51 np0005532048 nova_compute[253661]: 2025-11-22 09:26:51.072 253665 DEBUG nova.compute.manager [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:26:51 np0005532048 nova_compute[253661]: 2025-11-22 09:26:51.129 253665 DEBUG oslo_concurrency.lockutils [None req-cb49f173-c0b3-419a-86cf-bc3391f86217 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:51 np0005532048 nova_compute[253661]: 2025-11-22 09:26:51.275 253665 DEBUG nova.storage.rbd_utils [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(77d525a265944c66affe9c6402eb1519) on rbd image(e4f9440c-7476-4022-8d08-1b3151a9db79_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:26:51 np0005532048 nova_compute[253661]: 2025-11-22 09:26:51.597 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 432 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 9.0 MiB/s wr, 232 op/s
Nov 22 04:26:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:26:52
Nov 22 04:26:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:26:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:26:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'vms', 'volumes', '.mgr', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'images']
Nov 22 04:26:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:26:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Nov 22 04:26:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Nov 22 04:26:52 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Nov 22 04:26:52 np0005532048 nova_compute[253661]: 2025-11-22 09:26:52.352 253665 DEBUG nova.storage.rbd_utils [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] cloning vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk@77d525a265944c66affe9c6402eb1519 to images/1e999d71-f227-48a5-af57-b8e4ea55b8dc clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:26:52 np0005532048 nova_compute[253661]: 2025-11-22 09:26:52.463 253665 DEBUG nova.storage.rbd_utils [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] flattening images/1e999d71-f227-48a5-af57-b8e4ea55b8dc flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 22 04:26:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:26:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:26:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:26:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:26:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:26:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:26:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:26:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Nov 22 04:26:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Nov 22 04:26:52 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Nov 22 04:26:52 np0005532048 nova_compute[253661]: 2025-11-22 09:26:52.888 253665 DEBUG nova.compute.manager [req-943062a5-f080-4a8c-af52-6d38e586f3d3 req-ad08f0b5-d715-444c-bb99-1546e6a14bc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received event network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:26:52 np0005532048 nova_compute[253661]: 2025-11-22 09:26:52.889 253665 DEBUG oslo_concurrency.lockutils [req-943062a5-f080-4a8c-af52-6d38e586f3d3 req-ad08f0b5-d715-444c-bb99-1546e6a14bc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:52 np0005532048 nova_compute[253661]: 2025-11-22 09:26:52.889 253665 DEBUG oslo_concurrency.lockutils [req-943062a5-f080-4a8c-af52-6d38e586f3d3 req-ad08f0b5-d715-444c-bb99-1546e6a14bc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:52 np0005532048 nova_compute[253661]: 2025-11-22 09:26:52.889 253665 DEBUG oslo_concurrency.lockutils [req-943062a5-f080-4a8c-af52-6d38e586f3d3 req-ad08f0b5-d715-444c-bb99-1546e6a14bc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:52 np0005532048 nova_compute[253661]: 2025-11-22 09:26:52.890 253665 DEBUG nova.compute.manager [req-943062a5-f080-4a8c-af52-6d38e586f3d3 req-ad08f0b5-d715-444c-bb99-1546e6a14bc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] No waiting events found dispatching network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:26:52 np0005532048 nova_compute[253661]: 2025-11-22 09:26:52.890 253665 WARNING nova.compute.manager [req-943062a5-f080-4a8c-af52-6d38e586f3d3 req-ad08f0b5-d715-444c-bb99-1546e6a14bc6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received unexpected event network-vif-plugged-fa819bcd-7193-4627-920d-254828dcdfea for instance with vm_state stopped and task_state None.#033[00m
Nov 22 04:26:52 np0005532048 nova_compute[253661]: 2025-11-22 09:26:52.980 253665 DEBUG nova.storage.rbd_utils [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] removing snapshot(77d525a265944c66affe9c6402eb1519) on rbd image(e4f9440c-7476-4022-8d08-1b3151a9db79_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 22 04:26:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 472 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 10 MiB/s wr, 284 op/s
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.795 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.796 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.796 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.796 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.797 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.798 253665 INFO nova.compute.manager [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Terminating instance#033[00m
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.798 253665 DEBUG nova.compute.manager [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.807 253665 INFO nova.virt.libvirt.driver [-] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Instance destroyed successfully.#033[00m
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.808 253665 DEBUG nova.objects.instance [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'resources' on Instance uuid 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:26:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.821 253665 DEBUG nova.virt.libvirt.vif [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:26:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-582111700',display_name='tempest-Íñstáñcé-785856032',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-582111700',id=88,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:26:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-n4pnb8ja',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1454009974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:26:52Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=4e9344da-4e80-4749-8d61-a2fe5ffe0cf7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.821 253665 DEBUG nova.network.os_vif_util [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "fa819bcd-7193-4627-920d-254828dcdfea", "address": "fa:16:3e:31:45:80", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa819bcd-71", "ovs_interfaceid": "fa819bcd-7193-4627-920d-254828dcdfea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.822 253665 DEBUG nova.network.os_vif_util [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:45:80,bridge_name='br-int',has_traffic_filtering=True,id=fa819bcd-7193-4627-920d-254828dcdfea,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa819bcd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.822 253665 DEBUG os_vif [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:45:80,bridge_name='br-int',has_traffic_filtering=True,id=fa819bcd-7193-4627-920d-254828dcdfea,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa819bcd-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.824 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.825 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa819bcd-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.826 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.828 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:53 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.830 253665 INFO os_vif [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:45:80,bridge_name='br-int',has_traffic_filtering=True,id=fa819bcd-7193-4627-920d-254828dcdfea,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa819bcd-71')#033[00m
Nov 22 04:26:53 np0005532048 nova_compute[253661]: 2025-11-22 09:26:53.865 253665 DEBUG nova.storage.rbd_utils [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(snap) on rbd image(1e999d71-f227-48a5-af57-b8e4ea55b8dc) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:26:54 np0005532048 nova_compute[253661]: 2025-11-22 09:26:54.423 253665 INFO nova.virt.libvirt.driver [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Deleting instance files /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_del#033[00m
Nov 22 04:26:54 np0005532048 nova_compute[253661]: 2025-11-22 09:26:54.424 253665 INFO nova.virt.libvirt.driver [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Deletion of /var/lib/nova/instances/4e9344da-4e80-4749-8d61-a2fe5ffe0cf7_del complete#033[00m
Nov 22 04:26:54 np0005532048 nova_compute[253661]: 2025-11-22 09:26:54.469 253665 INFO nova.compute.manager [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:26:54 np0005532048 nova_compute[253661]: 2025-11-22 09:26:54.470 253665 DEBUG oslo.service.loopingcall [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:26:54 np0005532048 nova_compute[253661]: 2025-11-22 09:26:54.470 253665 DEBUG nova.compute.manager [-] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:26:54 np0005532048 nova_compute[253661]: 2025-11-22 09:26:54.470 253665 DEBUG nova.network.neutron [-] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:26:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Nov 22 04:26:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Nov 22 04:26:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Nov 22 04:26:54 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 22 04:26:55 np0005532048 nova_compute[253661]: 2025-11-22 09:26:55.369 253665 DEBUG nova.network.neutron [-] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:26:55 np0005532048 nova_compute[253661]: 2025-11-22 09:26:55.388 253665 INFO nova.compute.manager [-] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Took 0.92 seconds to deallocate network for instance.#033[00m
Nov 22 04:26:55 np0005532048 nova_compute[253661]: 2025-11-22 09:26:55.442 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:55 np0005532048 nova_compute[253661]: 2025-11-22 09:26:55.443 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:55 np0005532048 nova_compute[253661]: 2025-11-22 09:26:55.543 253665 DEBUG oslo_concurrency.processutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:55 np0005532048 nova_compute[253661]: 2025-11-22 09:26:55.585 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:26:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:26:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:26:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:26:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:26:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:26:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:26:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 454 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 6.7 MiB/s wr, 241 op/s
Nov 22 04:26:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:26:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:26:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:26:55 np0005532048 nova_compute[253661]: 2025-11-22 09:26:55.752 253665 DEBUG nova.compute.manager [req-e15d6d1a-6b16-4bbe-9528-7fa77b63ee51 req-8e45f07e-2f60-429c-95ed-75b00be9c5d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Received event network-vif-deleted-fa819bcd-7193-4627-920d-254828dcdfea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:26:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:26:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2763173564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:26:56 np0005532048 nova_compute[253661]: 2025-11-22 09:26:56.037 253665 DEBUG oslo_concurrency.processutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:26:56 np0005532048 nova_compute[253661]: 2025-11-22 09:26:56.045 253665 DEBUG nova.compute.provider_tree [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:26:56 np0005532048 nova_compute[253661]: 2025-11-22 09:26:56.066 253665 DEBUG nova.scheduler.client.report [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:26:56 np0005532048 nova_compute[253661]: 2025-11-22 09:26:56.092 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:56 np0005532048 nova_compute[253661]: 2025-11-22 09:26:56.117 253665 INFO nova.scheduler.client.report [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Deleted allocations for instance 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7#033[00m
Nov 22 04:26:56 np0005532048 nova_compute[253661]: 2025-11-22 09:26:56.180 253665 DEBUG oslo_concurrency.lockutils [None req-c76555b4-98f2-4ad1-8248-b7556d13bd81 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "4e9344da-4e80-4749-8d61-a2fe5ffe0cf7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.384s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:56 np0005532048 nova_compute[253661]: 2025-11-22 09:26:56.362 253665 INFO nova.virt.libvirt.driver [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Snapshot image upload complete#033[00m
Nov 22 04:26:56 np0005532048 nova_compute[253661]: 2025-11-22 09:26:56.363 253665 INFO nova.compute.manager [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Took 5.84 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 22 04:26:56 np0005532048 nova_compute[253661]: 2025-11-22 09:26:56.610 253665 DEBUG nova.compute.manager [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Nov 22 04:26:56 np0005532048 nova_compute[253661]: 2025-11-22 09:26:56.610 253665 DEBUG nova.compute.manager [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458#033[00m
Nov 22 04:26:56 np0005532048 nova_compute[253661]: 2025-11-22 09:26:56.610 253665 DEBUG nova.compute.manager [None req-910f8efe-0b65-4e64-8ad3-46067185f38d ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Deleting image 84078c1f-f45a-4974-ab60-fbf47bdc21a1 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463#033[00m
Nov 22 04:26:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Nov 22 04:26:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Nov 22 04:26:56 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Nov 22 04:26:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 10 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 291 active+clean; 452 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 9.9 MiB/s rd, 9.7 MiB/s wr, 267 op/s
Nov 22 04:26:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:26:57 np0005532048 nova_compute[253661]: 2025-11-22 09:26:57.967 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:57 np0005532048 nova_compute[253661]: 2025-11-22 09:26:57.967 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:57 np0005532048 nova_compute[253661]: 2025-11-22 09:26:57.968 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:57 np0005532048 nova_compute[253661]: 2025-11-22 09:26:57.968 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:57 np0005532048 nova_compute[253661]: 2025-11-22 09:26:57.968 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:57 np0005532048 nova_compute[253661]: 2025-11-22 09:26:57.970 253665 INFO nova.compute.manager [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Terminating instance#033[00m
Nov 22 04:26:57 np0005532048 nova_compute[253661]: 2025-11-22 09:26:57.971 253665 DEBUG nova.compute.manager [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:26:58 np0005532048 kernel: tapdedad4aa-19 (unregistering): left promiscuous mode
Nov 22 04:26:58 np0005532048 NetworkManager[48920]: <info>  [1763803618.0378] device (tapdedad4aa-19): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.045 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:58 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:58Z|00904|binding|INFO|Releasing lport dedad4aa-19bb-4bc6-a08c-d75d3024d553 from this chassis (sb_readonly=0)
Nov 22 04:26:58 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:58Z|00905|binding|INFO|Setting lport dedad4aa-19bb-4bc6-a08c-d75d3024d553 down in Southbound
Nov 22 04:26:58 np0005532048 ovn_controller[152872]: 2025-11-22T09:26:58Z|00906|binding|INFO|Removing iface tapdedad4aa-19 ovn-installed in OVS
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.048 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.051 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:9d:63 10.100.0.12'], port_security=['fa:16:3e:2b:9d:63 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e0b05f62-6966-4bf3-aee5-e4d2137a6cfc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-502d021b-7c33-4c22-8cd9-32a451fdf556', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4426b820f0e4f21a32402b443ca6282', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f36ba3b6-2510-4c95-b197-155a69f12391', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc73e8cd-dd4e-4f31-af35-4fd5a12d2a22, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=dedad4aa-19bb-4bc6-a08c-d75d3024d553) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:26:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.052 162862 INFO neutron.agent.ovn.metadata.agent [-] Port dedad4aa-19bb-4bc6-a08c-d75d3024d553 in datapath 502d021b-7c33-4c22-8cd9-32a451fdf556 unbound from our chassis#033[00m
Nov 22 04:26:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.053 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 502d021b-7c33-4c22-8cd9-32a451fdf556, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:26:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.054 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[604655f6-30be-4d00-b75d-66730ecee51c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.055 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556 namespace which is not needed anymore#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.062 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:58 np0005532048 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Nov 22 04:26:58 np0005532048 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004f.scope: Consumed 22.199s CPU time.
Nov 22 04:26:58 np0005532048 systemd-machined[215941]: Machine qemu-95-instance-0000004f terminated.
Nov 22 04:26:58 np0005532048 neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556[334718]: [NOTICE]   (334722) : haproxy version is 2.8.14-c23fe91
Nov 22 04:26:58 np0005532048 neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556[334718]: [NOTICE]   (334722) : path to executable is /usr/sbin/haproxy
Nov 22 04:26:58 np0005532048 neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556[334718]: [WARNING]  (334722) : Exiting Master process...
Nov 22 04:26:58 np0005532048 neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556[334718]: [WARNING]  (334722) : Exiting Master process...
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.220 253665 INFO nova.virt.libvirt.driver [-] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Instance destroyed successfully.#033[00m
Nov 22 04:26:58 np0005532048 neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556[334718]: [ALERT]    (334722) : Current worker (334724) exited with code 143 (Terminated)
Nov 22 04:26:58 np0005532048 neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556[334718]: [WARNING]  (334722) : All workers exited. Exiting... (0)
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.220 253665 DEBUG nova.objects.instance [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lazy-loading 'resources' on Instance uuid e0b05f62-6966-4bf3-aee5-e4d2137a6cfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:26:58 np0005532048 systemd[1]: libpod-dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84.scope: Deactivated successfully.
Nov 22 04:26:58 np0005532048 podman[342668]: 2025-11-22 09:26:58.229662824 +0000 UTC m=+0.049871796 container died dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.231 253665 DEBUG nova.virt.libvirt.vif [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:24:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-1428624522',display_name='tempest-₡-1428624522',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1428624522',id=79,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:24:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4426b820f0e4f21a32402b443ca6282',ramdisk_id='',reservation_id='r-d8zy45mf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1454009
974',owner_user_name='tempest-ServersTestJSON-1454009974-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:24:10Z,user_data=None,user_id='9517b176edf1498d8cf7afc439fc7f04',uuid=e0b05f62-6966-4bf3-aee5-e4d2137a6cfc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "address": "fa:16:3e:2b:9d:63", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdedad4aa-19", "ovs_interfaceid": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.235 253665 DEBUG nova.network.os_vif_util [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converting VIF {"id": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "address": "fa:16:3e:2b:9d:63", "network": {"id": "502d021b-7c33-4c22-8cd9-32a451fdf556", "bridge": "br-int", "label": "tempest-ServersTestJSON-2107730556-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4426b820f0e4f21a32402b443ca6282", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdedad4aa-19", "ovs_interfaceid": "dedad4aa-19bb-4bc6-a08c-d75d3024d553", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.237 253665 DEBUG nova.network.os_vif_util [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:9d:63,bridge_name='br-int',has_traffic_filtering=True,id=dedad4aa-19bb-4bc6-a08c-d75d3024d553,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdedad4aa-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.237 253665 DEBUG os_vif [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:9d:63,bridge_name='br-int',has_traffic_filtering=True,id=dedad4aa-19bb-4bc6-a08c-d75d3024d553,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdedad4aa-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.240 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.241 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdedad4aa-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.243 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.245 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.247 253665 INFO os_vif [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:9d:63,bridge_name='br-int',has_traffic_filtering=True,id=dedad4aa-19bb-4bc6-a08c-d75d3024d553,network=Network(502d021b-7c33-4c22-8cd9-32a451fdf556),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdedad4aa-19')#033[00m
Nov 22 04:26:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84-userdata-shm.mount: Deactivated successfully.
Nov 22 04:26:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a035ba10fc4c82f058aee7a6871d6f3764a5e61791891e4008e90692352fd688-merged.mount: Deactivated successfully.
Nov 22 04:26:58 np0005532048 podman[342668]: 2025-11-22 09:26:58.286800169 +0000 UTC m=+0.107009151 container cleanup dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 04:26:58 np0005532048 systemd[1]: libpod-conmon-dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84.scope: Deactivated successfully.
Nov 22 04:26:58 np0005532048 podman[342727]: 2025-11-22 09:26:58.400673003 +0000 UTC m=+0.087152272 container remove dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.409 253665 DEBUG nova.compute.manager [req-14fb02f0-93af-4f53-917b-3098c6c584ea req-9fda2e3c-8cdb-450e-b2bc-ead2de7c6c2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Received event network-vif-unplugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.409 253665 DEBUG oslo_concurrency.lockutils [req-14fb02f0-93af-4f53-917b-3098c6c584ea req-9fda2e3c-8cdb-450e-b2bc-ead2de7c6c2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.409 253665 DEBUG oslo_concurrency.lockutils [req-14fb02f0-93af-4f53-917b-3098c6c584ea req-9fda2e3c-8cdb-450e-b2bc-ead2de7c6c2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.410 253665 DEBUG oslo_concurrency.lockutils [req-14fb02f0-93af-4f53-917b-3098c6c584ea req-9fda2e3c-8cdb-450e-b2bc-ead2de7c6c2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.410 253665 DEBUG nova.compute.manager [req-14fb02f0-93af-4f53-917b-3098c6c584ea req-9fda2e3c-8cdb-450e-b2bc-ead2de7c6c2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] No waiting events found dispatching network-vif-unplugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.410 253665 DEBUG nova.compute.manager [req-14fb02f0-93af-4f53-917b-3098c6c584ea req-9fda2e3c-8cdb-450e-b2bc-ead2de7c6c2e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Received event network-vif-unplugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:26:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.412 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5f9cf784-9c53-4c00-b401-447c2570f439]: (4, ('Sat Nov 22 09:26:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556 (dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84)\ndc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84\nSat Nov 22 09:26:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556 (dc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84)\ndc0d0f7929b15efaf30993edfd33cc825230d133bf3f8b6261f02dfa50945d84\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.414 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9e41169d-e8b1-4cf3-a810-e91b74bbb2ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.415 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap502d021b-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.417 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:58 np0005532048 kernel: tap502d021b-70: left promiscuous mode
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.434 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:26:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.437 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8635b8e5-35d0-4698-adaa-691ad52ad46e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.457 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4de3f9-1b12-4136-ad69-e5b8ffd38f4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.458 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fa5c860e-ab37-410f-961e-6790db284512]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.482 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b402518e-9faa-4505-bfbd-f563b6e243f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630083, 'reachable_time': 29194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342741, 'error': None, 'target': 'ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.486 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-502d021b-7c33-4c22-8cd9-32a451fdf556 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:26:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:26:58.487 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[9dbb313e-0f4f-42de-8420-fdafc6404e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:26:58 np0005532048 systemd[1]: run-netns-ovnmeta\x2d502d021b\x2d7c33\x2d4c22\x2d8cd9\x2d32a451fdf556.mount: Deactivated successfully.
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.863 253665 INFO nova.virt.libvirt.driver [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Deleting instance files /var/lib/nova/instances/e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_del#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.864 253665 INFO nova.virt.libvirt.driver [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Deletion of /var/lib/nova/instances/e0b05f62-6966-4bf3-aee5-e4d2137a6cfc_del complete#033[00m
Nov 22 04:26:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Nov 22 04:26:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Nov 22 04:26:58 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.922 253665 INFO nova.compute.manager [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Took 0.95 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.923 253665 DEBUG oslo.service.loopingcall [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.923 253665 DEBUG nova.compute.manager [-] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:26:58 np0005532048 nova_compute[253661]: 2025-11-22 09:26:58.924 253665 DEBUG nova.network.neutron [-] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:26:59 np0005532048 nova_compute[253661]: 2025-11-22 09:26:59.505 253665 DEBUG nova.network.neutron [-] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:26:59 np0005532048 nova_compute[253661]: 2025-11-22 09:26:59.523 253665 INFO nova.compute.manager [-] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Took 0.60 seconds to deallocate network for instance.#033[00m
Nov 22 04:26:59 np0005532048 nova_compute[253661]: 2025-11-22 09:26:59.576 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:26:59 np0005532048 nova_compute[253661]: 2025-11-22 09:26:59.577 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:26:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 14 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 287 active+clean; 335 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.4 MiB/s wr, 227 op/s
Nov 22 04:26:59 np0005532048 nova_compute[253661]: 2025-11-22 09:26:59.647 253665 DEBUG oslo_concurrency.processutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:26:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Nov 22 04:26:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Nov 22 04:26:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Nov 22 04:27:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:27:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/78926417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:27:00 np0005532048 nova_compute[253661]: 2025-11-22 09:27:00.122 253665 DEBUG oslo_concurrency.processutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:00 np0005532048 nova_compute[253661]: 2025-11-22 09:27:00.131 253665 DEBUG nova.compute.provider_tree [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:27:00 np0005532048 nova_compute[253661]: 2025-11-22 09:27:00.147 253665 DEBUG nova.scheduler.client.report [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:27:00 np0005532048 nova_compute[253661]: 2025-11-22 09:27:00.170 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:00 np0005532048 nova_compute[253661]: 2025-11-22 09:27:00.208 253665 INFO nova.scheduler.client.report [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Deleted allocations for instance e0b05f62-6966-4bf3-aee5-e4d2137a6cfc#033[00m
Nov 22 04:27:00 np0005532048 nova_compute[253661]: 2025-11-22 09:27:00.273 253665 DEBUG oslo_concurrency.lockutils [None req-6b82c63b-164c-46fc-bead-917fa80e2ad0 9517b176edf1498d8cf7afc439fc7f04 b4426b820f0e4f21a32402b443ca6282 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:00 np0005532048 nova_compute[253661]: 2025-11-22 09:27:00.561 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:00 np0005532048 nova_compute[253661]: 2025-11-22 09:27:00.569 253665 DEBUG nova.compute.manager [req-f184366d-1f7e-49e6-a139-0d110009b627 req-9297811b-83e5-494b-9eef-0de9f1a4ba25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Received event network-vif-plugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:27:00 np0005532048 nova_compute[253661]: 2025-11-22 09:27:00.569 253665 DEBUG oslo_concurrency.lockutils [req-f184366d-1f7e-49e6-a139-0d110009b627 req-9297811b-83e5-494b-9eef-0de9f1a4ba25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:00 np0005532048 nova_compute[253661]: 2025-11-22 09:27:00.570 253665 DEBUG oslo_concurrency.lockutils [req-f184366d-1f7e-49e6-a139-0d110009b627 req-9297811b-83e5-494b-9eef-0de9f1a4ba25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:00 np0005532048 nova_compute[253661]: 2025-11-22 09:27:00.570 253665 DEBUG oslo_concurrency.lockutils [req-f184366d-1f7e-49e6-a139-0d110009b627 req-9297811b-83e5-494b-9eef-0de9f1a4ba25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e0b05f62-6966-4bf3-aee5-e4d2137a6cfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:00 np0005532048 nova_compute[253661]: 2025-11-22 09:27:00.571 253665 DEBUG nova.compute.manager [req-f184366d-1f7e-49e6-a139-0d110009b627 req-9297811b-83e5-494b-9eef-0de9f1a4ba25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] No waiting events found dispatching network-vif-plugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:27:00 np0005532048 nova_compute[253661]: 2025-11-22 09:27:00.571 253665 WARNING nova.compute.manager [req-f184366d-1f7e-49e6-a139-0d110009b627 req-9297811b-83e5-494b-9eef-0de9f1a4ba25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Received unexpected event network-vif-plugged-dedad4aa-19bb-4bc6-a08c-d75d3024d553 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:27:00 np0005532048 nova_compute[253661]: 2025-11-22 09:27:00.572 253665 DEBUG nova.compute.manager [req-f184366d-1f7e-49e6-a139-0d110009b627 req-9297811b-83e5-494b-9eef-0de9f1a4ba25 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Received event network-vif-deleted-dedad4aa-19bb-4bc6-a08c-d75d3024d553 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:27:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 14 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 287 active+clean; 335 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 MiB/s wr, 136 op/s
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011676525770007162 of space, bias 1.0, pg target 0.35029577310021487 quantized to 32 (current 32)
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0022523192802956305 of space, bias 1.0, pg target 0.6756957840886891 quantized to 32 (current 32)
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:27:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:27:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:27:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Nov 22 04:27:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Nov 22 04:27:02 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Nov 22 04:27:03 np0005532048 nova_compute[253661]: 2025-11-22 09:27:03.245 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 166 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 7.5 KiB/s wr, 178 op/s
Nov 22 04:27:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:27:04Z|00907|binding|INFO|Releasing lport 93c31381-1979-4cee-982c-9507d8ee6c9a from this chassis (sb_readonly=0)
Nov 22 04:27:04 np0005532048 nova_compute[253661]: 2025-11-22 09:27:04.343 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:04.526 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:27:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:04.527 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:27:04 np0005532048 nova_compute[253661]: 2025-11-22 09:27:04.528 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:05 np0005532048 nova_compute[253661]: 2025-11-22 09:27:05.564 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:05 np0005532048 nova_compute[253661]: 2025-11-22 09:27:05.609 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803610.6073596, 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:27:05 np0005532048 nova_compute[253661]: 2025-11-22 09:27:05.609 253665 INFO nova.compute.manager [-] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:27:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 121 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 7.0 KiB/s wr, 153 op/s
Nov 22 04:27:05 np0005532048 nova_compute[253661]: 2025-11-22 09:27:05.629 253665 DEBUG nova.compute.manager [None req-b6c6b744-1942-478c-83cf-73e2614d2f1e - - - - - -] [instance: 4e9344da-4e80-4749-8d61-a2fe5ffe0cf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:06.034 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:19:93 2001:db8:0:1:f816:3eff:feea:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbd8dae8-4b4f-476a-a4ed-496bd456447a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bef057d6-1b23-426b-a951-28862f212f74) old=Port_Binding(mac=['fa:16:3e:ea:19:93 2001:db8::f816:3eff:feea:1993'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feea:1993/64', 'neutron:device_id': 'ovnmeta-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e7f2999-c25e-42f1-9734-59fa46bdc34b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29239487e67d40f399c14bc317bcacd7', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:27:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:06.037 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bef057d6-1b23-426b-a951-28862f212f74 in datapath 0e7f2999-c25e-42f1-9734-59fa46bdc34b updated#033[00m
Nov 22 04:27:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:06.039 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e7f2999-c25e-42f1-9734-59fa46bdc34b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:27:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:06.040 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4d0797-462b-4d8c-a6bb-09918e6ef209]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:06 np0005532048 podman[342765]: 2025-11-22 09:27:06.378293608 +0000 UTC m=+0.069790856 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:27:06 np0005532048 podman[342766]: 2025-11-22 09:27:06.433963197 +0000 UTC m=+0.114900400 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 22 04:27:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 121 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 4.6 KiB/s wr, 103 op/s
Nov 22 04:27:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:27:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Nov 22 04:27:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Nov 22 04:27:07 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Nov 22 04:27:08 np0005532048 nova_compute[253661]: 2025-11-22 09:27:08.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:08 np0005532048 nova_compute[253661]: 2025-11-22 09:27:08.412 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:08 np0005532048 nova_compute[253661]: 2025-11-22 09:27:08.412 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:08 np0005532048 nova_compute[253661]: 2025-11-22 09:27:08.427 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:27:08 np0005532048 nova_compute[253661]: 2025-11-22 09:27:08.530 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:08 np0005532048 nova_compute[253661]: 2025-11-22 09:27:08.530 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:08 np0005532048 nova_compute[253661]: 2025-11-22 09:27:08.537 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:27:08 np0005532048 nova_compute[253661]: 2025-11-22 09:27:08.537 253665 INFO nova.compute.claims [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:27:08 np0005532048 nova_compute[253661]: 2025-11-22 09:27:08.646 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:27:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1442797825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.152 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.161 253665 DEBUG nova.compute.provider_tree [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.183 253665 DEBUG nova.scheduler.client.report [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.212 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.213 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.261 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.262 253665 DEBUG nova.network.neutron [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.280 253665 INFO nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.301 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.386 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.389 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.390 253665 INFO nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Creating image(s)
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.428 253665 DEBUG nova.storage.rbd_utils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.458 253665 DEBUG nova.storage.rbd_utils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.482 253665 DEBUG nova.storage.rbd_utils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.487 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.534 253665 DEBUG nova.policy [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ce82551204d04546a5ae9c6f99cccfc8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a246689624d4630a70f69b70d048883', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.572 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.573 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.573 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.574 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.600 253665 DEBUG nova.storage.rbd_utils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.604 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:27:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 121 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 4.6 KiB/s wr, 103 op/s
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.699 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "be0569c8-2c59-4525-a348-590d878662d8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.699 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "be0569c8-2c59-4525-a348-590d878662d8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.713 253665 DEBUG nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.778 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.779 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.789 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.790 253665 INFO nova.compute.claims [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:27:09 np0005532048 nova_compute[253661]: 2025-11-22 09:27:09.920 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.393 253665 DEBUG nova.network.neutron [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Successfully created port: 6eb31688-c2e8-4f7b-a3df-3008c2065663 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:27:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:27:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833664491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.459 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.466 253665 DEBUG nova.compute.provider_tree [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.497 253665 DEBUG nova.scheduler.client.report [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.521 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.523 253665 DEBUG nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.566 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.569 253665 DEBUG nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.581 253665 INFO nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.601 253665 DEBUG nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.685 253665 DEBUG nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.686 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.687 253665 INFO nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Creating image(s)
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.716 253665 DEBUG nova.storage.rbd_utils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.742 253665 DEBUG nova.storage.rbd_utils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.764 253665 DEBUG nova.storage.rbd_utils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.768 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.842 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.843 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.844 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.844 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.877 253665 DEBUG nova.storage.rbd_utils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:27:10 np0005532048 nova_compute[253661]: 2025-11-22 09:27:10.881 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 be0569c8-2c59-4525-a348-590d878662d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:27:11 np0005532048 nova_compute[253661]: 2025-11-22 09:27:11.327 253665 DEBUG nova.network.neutron [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Successfully updated port: 6eb31688-c2e8-4f7b-a3df-3008c2065663 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:27:11 np0005532048 nova_compute[253661]: 2025-11-22 09:27:11.344 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:27:11 np0005532048 nova_compute[253661]: 2025-11-22 09:27:11.344 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquired lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:27:11 np0005532048 nova_compute[253661]: 2025-11-22 09:27:11.345 253665 DEBUG nova.network.neutron [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:27:11 np0005532048 nova_compute[253661]: 2025-11-22 09:27:11.465 253665 DEBUG nova.compute.manager [req-0b536a25-2334-4bf4-899c-7a33eecc3bfa req-9fc3c08e-d4b6-4560-8052-fddc18b8e8ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received event network-changed-6eb31688-c2e8-4f7b-a3df-3008c2065663 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:27:11 np0005532048 nova_compute[253661]: 2025-11-22 09:27:11.466 253665 DEBUG nova.compute.manager [req-0b536a25-2334-4bf4-899c-7a33eecc3bfa req-9fc3c08e-d4b6-4560-8052-fddc18b8e8ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Refreshing instance network info cache due to event network-changed-6eb31688-c2e8-4f7b-a3df-3008c2065663. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:27:11 np0005532048 nova_compute[253661]: 2025-11-22 09:27:11.466 253665 DEBUG oslo_concurrency.lockutils [req-0b536a25-2334-4bf4-899c-7a33eecc3bfa req-9fc3c08e-d4b6-4560-8052-fddc18b8e8ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:27:11 np0005532048 nova_compute[253661]: 2025-11-22 09:27:11.551 253665 DEBUG nova.network.neutron [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:27:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 121 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 3.5 KiB/s wr, 64 op/s
Nov 22 04:27:11 np0005532048 nova_compute[253661]: 2025-11-22 09:27:11.855 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:27:11 np0005532048 nova_compute[253661]: 2025-11-22 09:27:11.930 253665 DEBUG nova.storage.rbd_utils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] resizing rbd image 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:27:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:27:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2822086503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:27:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:27:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2822086503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.463 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 be0569c8-2c59-4525-a348-590d878662d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.540 253665 DEBUG nova.network.neutron [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Updating instance_info_cache with network_info: [{"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.587 253665 DEBUG nova.objects.instance [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'migration_context' on Instance uuid 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.615 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Releasing lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.616 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Instance network_info: |[{"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.616 253665 DEBUG oslo_concurrency.lockutils [req-0b536a25-2334-4bf4-899c-7a33eecc3bfa req-9fc3c08e-d4b6-4560-8052-fddc18b8e8ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.616 253665 DEBUG nova.network.neutron [req-0b536a25-2334-4bf4-899c-7a33eecc3bfa req-9fc3c08e-d4b6-4560-8052-fddc18b8e8ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Refreshing network info cache for port 6eb31688-c2e8-4f7b-a3df-3008c2065663 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.626 253665 DEBUG nova.storage.rbd_utils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] resizing rbd image be0569c8-2c59-4525-a348-590d878662d8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.704 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.705 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Ensure instance console log exists: /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.705 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.705 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.706 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.707 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Start _get_guest_xml network_info=[{"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.714 253665 WARNING nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.719 253665 DEBUG nova.virt.libvirt.host [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.720 253665 DEBUG nova.virt.libvirt.host [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.723 253665 DEBUG nova.virt.libvirt.host [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.723 253665 DEBUG nova.virt.libvirt.host [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.724 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.724 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.724 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.724 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.725 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.725 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.725 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.725 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.725 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.726 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.726 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.726 253665 DEBUG nova.virt.hardware [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:27:12 np0005532048 nova_compute[253661]: 2025-11-22 09:27:12.729 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.025 253665 DEBUG nova.objects.instance [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'migration_context' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.044 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.044 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Ensure instance console log exists: /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.044 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.045 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.045 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.046 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.051 253665 WARNING nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.061 253665 DEBUG nova.virt.libvirt.host [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.062 253665 DEBUG nova.virt.libvirt.host [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.065 253665 DEBUG nova.virt.libvirt.host [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.066 253665 DEBUG nova.virt.libvirt.host [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.066 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.066 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.067 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.067 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.067 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.067 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.068 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.068 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.068 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.068 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.068 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.069 253665 DEBUG nova.virt.hardware [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.071 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:27:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2838187365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.191 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.215 253665 DEBUG nova.storage.rbd_utils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.220 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.259 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.262 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803618.216964, e0b05f62-6966-4bf3-aee5-e4d2137a6cfc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.263 253665 INFO nova.compute.manager [-] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.279 253665 DEBUG nova.compute.manager [None req-23951622-87aa-415f-ac33-f27f5860db91 - - - - - -] [instance: e0b05f62-6966-4bf3-aee5-e4d2137a6cfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:13 np0005532048 podman[343243]: 2025-11-22 09:27:13.390613572 +0000 UTC m=+0.086387044 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 04:27:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:27:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/442847026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.538 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.562 253665 DEBUG nova.storage.rbd_utils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.566 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 150 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Nov 22 04:27:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:27:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1529119042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.678 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.680 253665 DEBUG nova.virt.libvirt.vif [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:27:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1520180186',display_name='tempest-ServerActionsTestOtherB-server-1520180186',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1520180186',id=89,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-09vxfowe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerAction
sTestOtherB-985895222-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:27:09Z,user_data=None,user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=2b5cb3fb-8c82-432e-a88b-1ca3fef4f208,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.681 253665 DEBUG nova.network.os_vif_util [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.682 253665 DEBUG nova.network.os_vif_util [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:cd:01,bridge_name='br-int',has_traffic_filtering=True,id=6eb31688-c2e8-4f7b-a3df-3008c2065663,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb31688-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.683 253665 DEBUG nova.objects.instance [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.695 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  <uuid>2b5cb3fb-8c82-432e-a88b-1ca3fef4f208</uuid>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  <name>instance-00000059</name>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerActionsTestOtherB-server-1520180186</nova:name>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:27:12</nova:creationTime>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:27:13 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:        <nova:user uuid="ce82551204d04546a5ae9c6f99cccfc8">tempest-ServerActionsTestOtherB-985895222-project-member</nova:user>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:        <nova:project uuid="8a246689624d4630a70f69b70d048883">tempest-ServerActionsTestOtherB-985895222</nova:project>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:        <nova:port uuid="6eb31688-c2e8-4f7b-a3df-3008c2065663">
Nov 22 04:27:13 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <entry name="serial">2b5cb3fb-8c82-432e-a88b-1ca3fef4f208</entry>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <entry name="uuid">2b5cb3fb-8c82-432e-a88b-1ca3fef4f208</entry>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk">
Nov 22 04:27:13 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:27:13 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk.config">
Nov 22 04:27:13 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:27:13 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:26:cd:01"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <target dev="tap6eb31688-c2"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208/console.log" append="off"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:27:13 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:27:13 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:27:13 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:27:13 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.696 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Preparing to wait for external event network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.697 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.697 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.698 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.699 253665 DEBUG nova.virt.libvirt.vif [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:27:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1520180186',display_name='tempest-ServerActionsTestOtherB-server-1520180186',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1520180186',id=89,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-09vxfowe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-Se
rverActionsTestOtherB-985895222-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:27:09Z,user_data=None,user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=2b5cb3fb-8c82-432e-a88b-1ca3fef4f208,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.699 253665 DEBUG nova.network.os_vif_util [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.700 253665 DEBUG nova.network.os_vif_util [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:cd:01,bridge_name='br-int',has_traffic_filtering=True,id=6eb31688-c2e8-4f7b-a3df-3008c2065663,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb31688-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.702 253665 DEBUG os_vif [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:cd:01,bridge_name='br-int',has_traffic_filtering=True,id=6eb31688-c2e8-4f7b-a3df-3008c2065663,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb31688-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.703 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.704 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.704 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.706 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.707 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6eb31688-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.707 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6eb31688-c2, col_values=(('external_ids', {'iface-id': '6eb31688-c2e8-4f7b-a3df-3008c2065663', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:cd:01', 'vm-uuid': '2b5cb3fb-8c82-432e-a88b-1ca3fef4f208'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.724 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:13 np0005532048 NetworkManager[48920]: <info>  [1763803633.7261] manager: (tap6eb31688-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.727 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.730 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.732 253665 INFO os_vif [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:cd:01,bridge_name='br-int',has_traffic_filtering=True,id=6eb31688-c2e8-4f7b-a3df-3008c2065663,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb31688-c2')#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.800 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.801 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.801 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No VIF found with MAC fa:16:3e:26:cd:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.802 253665 INFO nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Using config drive#033[00m
Nov 22 04:27:13 np0005532048 nova_compute[253661]: 2025-11-22 09:27:13.831 253665 DEBUG nova.storage.rbd_utils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:27:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/272900218' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.026 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.028 253665 DEBUG nova.objects.instance [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'pci_devices' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.040 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  <uuid>be0569c8-2c59-4525-a348-590d878662d8</uuid>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  <name>instance-0000005a</name>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerShowV254Test-server-794044049</nova:name>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:27:13</nova:creationTime>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:27:14 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:        <nova:user uuid="6f9df33c6ddf4ec9a99024bbc6085706">tempest-ServerShowV254Test-1012776663-project-member</nova:user>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:        <nova:project uuid="8b6aee60ba934808adf8732a1c4457cb">tempest-ServerShowV254Test-1012776663</nova:project>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <entry name="serial">be0569c8-2c59-4525-a348-590d878662d8</entry>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <entry name="uuid">be0569c8-2c59-4525-a348-590d878662d8</entry>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/be0569c8-2c59-4525-a348-590d878662d8_disk">
Nov 22 04:27:14 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:27:14 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/be0569c8-2c59-4525-a348-590d878662d8_disk.config">
Nov 22 04:27:14 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:27:14 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/console.log" append="off"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:27:14 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:27:14 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:27:14 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:27:14 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.118 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.119 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.119 253665 INFO nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Using config drive#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.149 253665 DEBUG nova.storage.rbd_utils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.205 253665 INFO nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Creating config drive at /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208/disk.config#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.211 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe045x__9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.252 253665 DEBUG nova.network.neutron [req-0b536a25-2334-4bf4-899c-7a33eecc3bfa req-9fc3c08e-d4b6-4560-8052-fddc18b8e8ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Updated VIF entry in instance network info cache for port 6eb31688-c2e8-4f7b-a3df-3008c2065663. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.253 253665 DEBUG nova.network.neutron [req-0b536a25-2334-4bf4-899c-7a33eecc3bfa req-9fc3c08e-d4b6-4560-8052-fddc18b8e8ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Updating instance_info_cache with network_info: [{"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.269 253665 DEBUG oslo_concurrency.lockutils [req-0b536a25-2334-4bf4-899c-7a33eecc3bfa req-9fc3c08e-d4b6-4560-8052-fddc18b8e8ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.326 253665 INFO nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Creating config drive at /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.332 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7_6i55o6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.368 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe045x__9" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.395 253665 DEBUG nova.storage.rbd_utils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.399 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208/disk.config 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.475 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7_6i55o6" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.503 253665 DEBUG nova.storage.rbd_utils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.506 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config be0569c8-2c59-4525-a348-590d878662d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:14.529 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.922 253665 DEBUG oslo_concurrency.processutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208/disk.config 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.923 253665 INFO nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Deleting local config drive /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208/disk.config because it was imported into RBD.#033[00m
Nov 22 04:27:14 np0005532048 virtqemud[254229]: End of file while reading data: Input/output error
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.957 253665 DEBUG oslo_concurrency.processutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config be0569c8-2c59-4525-a348-590d878662d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.957 253665 INFO nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Deleting local config drive /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config because it was imported into RBD.#033[00m
Nov 22 04:27:14 np0005532048 kernel: tap6eb31688-c2: entered promiscuous mode
Nov 22 04:27:14 np0005532048 NetworkManager[48920]: <info>  [1763803634.9845] manager: (tap6eb31688-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/376)
Nov 22 04:27:14 np0005532048 ovn_controller[152872]: 2025-11-22T09:27:14Z|00908|binding|INFO|Claiming lport 6eb31688-c2e8-4f7b-a3df-3008c2065663 for this chassis.
Nov 22 04:27:14 np0005532048 ovn_controller[152872]: 2025-11-22T09:27:14Z|00909|binding|INFO|6eb31688-c2e8-4f7b-a3df-3008c2065663: Claiming fa:16:3e:26:cd:01 10.100.0.13
Nov 22 04:27:14 np0005532048 nova_compute[253661]: 2025-11-22 09:27:14.985 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:14.995 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:cd:01 10.100.0.13'], port_security=['fa:16:3e:26:cd:01 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2b5cb3fb-8c82-432e-a88b-1ca3fef4f208', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e37df2c8-4dc4-418d-92f1-b394537a30da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a246689624d4630a70f69b70d048883', 'neutron:revision_number': '2', 'neutron:security_group_ids': '565d4bba-9c09-4fbf-9eb5-c7cb7133e1fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7135b765-78b7-490e-8e9e-3f8a3fb53933, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=6eb31688-c2e8-4f7b-a3df-3008c2065663) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:27:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:14.996 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 6eb31688-c2e8-4f7b-a3df-3008c2065663 in datapath e37df2c8-4dc4-418d-92f1-b394537a30da bound to our chassis#033[00m
Nov 22 04:27:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:14.998 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e37df2c8-4dc4-418d-92f1-b394537a30da#033[00m
Nov 22 04:27:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:27:15Z|00910|binding|INFO|Setting lport 6eb31688-c2e8-4f7b-a3df-3008c2065663 ovn-installed in OVS
Nov 22 04:27:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:27:15Z|00911|binding|INFO|Setting lport 6eb31688-c2e8-4f7b-a3df-3008c2065663 up in Southbound
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.003 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:15 np0005532048 systemd-udevd[343470]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:27:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.019 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf23329-6ff2-4e19-a602-ce88c8923a6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:15 np0005532048 NetworkManager[48920]: <info>  [1763803635.0360] device (tap6eb31688-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:27:15 np0005532048 NetworkManager[48920]: <info>  [1763803635.0368] device (tap6eb31688-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:27:15 np0005532048 systemd-machined[215941]: New machine qemu-107-instance-00000059.
Nov 22 04:27:15 np0005532048 systemd[1]: Started Virtual Machine qemu-107-instance-00000059.
Nov 22 04:27:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.053 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[99182b32-c715-464f-a913-4ed37fed169f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.057 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e798d461-63f0-4f65-8ef6-996773877b32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:15 np0005532048 systemd-machined[215941]: New machine qemu-108-instance-0000005a.
Nov 22 04:27:15 np0005532048 systemd[1]: Started Virtual Machine qemu-108-instance-0000005a.
Nov 22 04:27:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.088 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5c67871d-fe68-437f-86ab-9b4ba676c262]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.109 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3ea4cb-3e52-41fc-b34b-e290866912a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape37df2c8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c4:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640468, 'reachable_time': 21674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343489, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.134 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e21ccbd0-9aa9-4ce9-aa38-03aa01c46eb8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640485, 'tstamp': 640485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343494, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640489, 'tstamp': 640489}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343494, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.137 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape37df2c8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.139 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.140 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.140 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape37df2c8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:27:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.141 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:27:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.141 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape37df2c8-40, col_values=(('external_ids', {'iface-id': '93c31381-1979-4cee-982c-9507d8ee6c9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:27:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:15.142 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.253 253665 DEBUG nova.compute.manager [req-d5727956-fba8-44b0-a5f3-53408b864634 req-43554032-72d5-4d8c-8142-dd192de279ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received event network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.253 253665 DEBUG oslo_concurrency.lockutils [req-d5727956-fba8-44b0-a5f3-53408b864634 req-43554032-72d5-4d8c-8142-dd192de279ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.254 253665 DEBUG oslo_concurrency.lockutils [req-d5727956-fba8-44b0-a5f3-53408b864634 req-43554032-72d5-4d8c-8142-dd192de279ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.254 253665 DEBUG oslo_concurrency.lockutils [req-d5727956-fba8-44b0-a5f3-53408b864634 req-43554032-72d5-4d8c-8142-dd192de279ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.254 253665 DEBUG nova.compute.manager [req-d5727956-fba8-44b0-a5f3-53408b864634 req-43554032-72d5-4d8c-8142-dd192de279ed 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Processing event network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.513 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803635.5129354, be0569c8-2c59-4525-a348-590d878662d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.513 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.516 253665 DEBUG nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.517 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.520 253665 INFO nova.virt.libvirt.driver [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance spawned successfully.#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.521 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.534 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.539 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.544 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.545 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.545 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.546 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.546 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.546 253665 DEBUG nova.virt.libvirt.driver [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.566 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.567 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803635.5131536, be0569c8-2c59-4525-a348-590d878662d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.567 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] VM Started (Lifecycle Event)#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.568 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.594 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.601 253665 INFO nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Took 4.92 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.602 253665 DEBUG nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.604 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:27:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 191 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.5 MiB/s wr, 62 op/s
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.630 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.657 253665 INFO nova.compute.manager [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Took 5.90 seconds to build instance.#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.672 253665 DEBUG oslo_concurrency.lockutils [None req-53c78b68-f8b7-4efb-bdc2-5d8a15af2d1e 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "be0569c8-2c59-4525-a348-590d878662d8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.976 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803635.9764066, 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.977 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] VM Started (Lifecycle Event)#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.978 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.982 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.985 253665 INFO nova.virt.libvirt.driver [-] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Instance spawned successfully.#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.985 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:27:15 np0005532048 nova_compute[253661]: 2025-11-22 09:27:15.997 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.004 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.006 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.007 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.007 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.007 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.008 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.008 253665 DEBUG nova.virt.libvirt.driver [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.036 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.036 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803635.976573, 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.037 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.061 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.067 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803635.981308, 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.067 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.075 253665 INFO nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Took 6.69 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.076 253665 DEBUG nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.095 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.097 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.122 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.136 253665 INFO nova.compute.manager [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Took 7.66 seconds to build instance.#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.150 253665 DEBUG oslo_concurrency.lockutils [None req-fb9fa513-6a0e-467d-bb5e-fde90a6606b1 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:16 np0005532048 nova_compute[253661]: 2025-11-22 09:27:16.994 253665 INFO nova.compute.manager [None req-6a5609b2-4946-440a-b88a-75cb108cad7f ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Get console output#033[00m
Nov 22 04:27:17 np0005532048 nova_compute[253661]: 2025-11-22 09:27:17.000 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:27:17 np0005532048 nova_compute[253661]: 2025-11-22 09:27:17.091 253665 INFO nova.compute.manager [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Rebuilding instance#033[00m
Nov 22 04:27:17 np0005532048 nova_compute[253661]: 2025-11-22 09:27:17.296 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'trusted_certs' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:27:17 np0005532048 nova_compute[253661]: 2025-11-22 09:27:17.308 253665 DEBUG nova.compute.manager [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:17 np0005532048 nova_compute[253661]: 2025-11-22 09:27:17.317 253665 DEBUG nova.compute.manager [req-2f141838-f294-4e35-98f1-9e1d561ad764 req-e8aecaf4-4cc6-4103-bfd5-b8386922e3c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received event network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:27:17 np0005532048 nova_compute[253661]: 2025-11-22 09:27:17.318 253665 DEBUG oslo_concurrency.lockutils [req-2f141838-f294-4e35-98f1-9e1d561ad764 req-e8aecaf4-4cc6-4103-bfd5-b8386922e3c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:17 np0005532048 nova_compute[253661]: 2025-11-22 09:27:17.318 253665 DEBUG oslo_concurrency.lockutils [req-2f141838-f294-4e35-98f1-9e1d561ad764 req-e8aecaf4-4cc6-4103-bfd5-b8386922e3c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:17 np0005532048 nova_compute[253661]: 2025-11-22 09:27:17.318 253665 DEBUG oslo_concurrency.lockutils [req-2f141838-f294-4e35-98f1-9e1d561ad764 req-e8aecaf4-4cc6-4103-bfd5-b8386922e3c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:17 np0005532048 nova_compute[253661]: 2025-11-22 09:27:17.318 253665 DEBUG nova.compute.manager [req-2f141838-f294-4e35-98f1-9e1d561ad764 req-e8aecaf4-4cc6-4103-bfd5-b8386922e3c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] No waiting events found dispatching network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:27:17 np0005532048 nova_compute[253661]: 2025-11-22 09:27:17.319 253665 WARNING nova.compute.manager [req-2f141838-f294-4e35-98f1-9e1d561ad764 req-e8aecaf4-4cc6-4103-bfd5-b8386922e3c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received unexpected event network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 for instance with vm_state active and task_state None.
Nov 22 04:27:17 np0005532048 nova_compute[253661]: 2025-11-22 09:27:17.352 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'pci_requests' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:27:17 np0005532048 nova_compute[253661]: 2025-11-22 09:27:17.360 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'pci_devices' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:27:17 np0005532048 nova_compute[253661]: 2025-11-22 09:27:17.374 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'resources' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:27:17 np0005532048 nova_compute[253661]: 2025-11-22 09:27:17.384 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'migration_context' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:27:17 np0005532048 nova_compute[253661]: 2025-11-22 09:27:17.393 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 04:27:17 np0005532048 nova_compute[253661]: 2025-11-22 09:27:17.396 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 22 04:27:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 213 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.3 MiB/s wr, 110 op/s
Nov 22 04:27:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:27:18 np0005532048 nova_compute[253661]: 2025-11-22 09:27:18.194 253665 INFO nova.compute.manager [None req-5cf462f4-2278-4724-b861-d2eb206e6b52 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Get console output
Nov 22 04:27:18 np0005532048 nova_compute[253661]: 2025-11-22 09:27:18.200 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 04:27:18 np0005532048 nova_compute[253661]: 2025-11-22 09:27:18.724 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:27:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 214 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 198 op/s
Nov 22 04:27:20 np0005532048 nova_compute[253661]: 2025-11-22 09:27:20.571 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:27:20 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 4b944829-0f44-4e8d-8491-25b9473e156c does not exist
Nov 22 04:27:20 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 7ef56b29-a0ed-43d5-9823-d49e1893e25d does not exist
Nov 22 04:27:20 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev e0af6662-810f-4752-b7bf-6108908fe0b7 does not exist
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:27:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:27:21 np0005532048 podman[343851]: 2025-11-22 09:27:21.362968024 +0000 UTC m=+0.044948391 container create 8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcnulty, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Nov 22 04:27:21 np0005532048 systemd[1]: Started libpod-conmon-8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66.scope.
Nov 22 04:27:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:27:21 np0005532048 podman[343851]: 2025-11-22 09:27:21.339931014 +0000 UTC m=+0.021911411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:27:21 np0005532048 podman[343851]: 2025-11-22 09:27:21.44710137 +0000 UTC m=+0.129081757 container init 8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcnulty, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:27:21 np0005532048 podman[343851]: 2025-11-22 09:27:21.454088325 +0000 UTC m=+0.136068692 container start 8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcnulty, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:27:21 np0005532048 podman[343851]: 2025-11-22 09:27:21.458363623 +0000 UTC m=+0.140344020 container attach 8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcnulty, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:27:21 np0005532048 tender_mcnulty[343867]: 167 167
Nov 22 04:27:21 np0005532048 systemd[1]: libpod-8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66.scope: Deactivated successfully.
Nov 22 04:27:21 np0005532048 conmon[343867]: conmon 8975b2eb2034f5fb4dde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66.scope/container/memory.events
Nov 22 04:27:21 np0005532048 podman[343851]: 2025-11-22 09:27:21.461974093 +0000 UTC m=+0.143954470 container died 8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcnulty, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:27:21 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7b44bd8b030fcc766023d2875ff6b6b33f562da4d5457e2e415b215e4a70a730-merged.mount: Deactivated successfully.
Nov 22 04:27:21 np0005532048 podman[343851]: 2025-11-22 09:27:21.51033801 +0000 UTC m=+0.192318377 container remove 8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcnulty, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 04:27:21 np0005532048 systemd[1]: libpod-conmon-8975b2eb2034f5fb4dde59dcd1726c0c6f43fd2f0530d706561376518012dd66.scope: Deactivated successfully.
Nov 22 04:27:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 214 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 194 op/s
Nov 22 04:27:21 np0005532048 podman[343890]: 2025-11-22 09:27:21.693159647 +0000 UTC m=+0.041997127 container create e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_newton, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:27:21 np0005532048 systemd[1]: Started libpod-conmon-e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7.scope.
Nov 22 04:27:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:27:21 np0005532048 podman[343890]: 2025-11-22 09:27:21.675297578 +0000 UTC m=+0.024135078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:27:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16cb662d574ae700466feff8ae463c1adafd8e0122a14b411639785943af8c38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:27:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16cb662d574ae700466feff8ae463c1adafd8e0122a14b411639785943af8c38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:27:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16cb662d574ae700466feff8ae463c1adafd8e0122a14b411639785943af8c38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:27:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16cb662d574ae700466feff8ae463c1adafd8e0122a14b411639785943af8c38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:27:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16cb662d574ae700466feff8ae463c1adafd8e0122a14b411639785943af8c38/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:27:21 np0005532048 podman[343890]: 2025-11-22 09:27:21.809129082 +0000 UTC m=+0.157966582 container init e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 22 04:27:21 np0005532048 podman[343890]: 2025-11-22 09:27:21.817895153 +0000 UTC m=+0.166732633 container start e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_newton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:27:21 np0005532048 podman[343890]: 2025-11-22 09:27:21.823711569 +0000 UTC m=+0.172549099 container attach e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_newton, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:27:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:27:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:27:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:27:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:27:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:27:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:27:22 np0005532048 adoring_newton[343907]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:27:22 np0005532048 adoring_newton[343907]: --> relative data size: 1.0
Nov 22 04:27:22 np0005532048 adoring_newton[343907]: --> All data devices are unavailable
Nov 22 04:27:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:27:22 np0005532048 systemd[1]: libpod-e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7.scope: Deactivated successfully.
Nov 22 04:27:22 np0005532048 podman[343890]: 2025-11-22 09:27:22.940153212 +0000 UTC m=+1.288990692 container died e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_newton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:27:22 np0005532048 systemd[1]: libpod-e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7.scope: Consumed 1.055s CPU time.
Nov 22 04:27:23 np0005532048 systemd[1]: var-lib-containers-storage-overlay-16cb662d574ae700466feff8ae463c1adafd8e0122a14b411639785943af8c38-merged.mount: Deactivated successfully.
Nov 22 04:27:23 np0005532048 podman[343890]: 2025-11-22 09:27:23.078299166 +0000 UTC m=+1.427136646 container remove e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_newton, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 04:27:23 np0005532048 systemd[1]: libpod-conmon-e20ec427ca9fb582378e90a1a28cde6fe988b031e35a12ff9730648e9d7046f7.scope: Deactivated successfully.
Nov 22 04:27:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 214 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 204 op/s
Nov 22 04:27:23 np0005532048 podman[344088]: 2025-11-22 09:27:23.709051615 +0000 UTC m=+0.040136569 container create a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_austin, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 04:27:23 np0005532048 nova_compute[253661]: 2025-11-22 09:27:23.727 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:27:23 np0005532048 systemd[1]: Started libpod-conmon-a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f.scope.
Nov 22 04:27:23 np0005532048 podman[344088]: 2025-11-22 09:27:23.690615572 +0000 UTC m=+0.021700586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:27:23 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:27:23 np0005532048 podman[344088]: 2025-11-22 09:27:23.812093527 +0000 UTC m=+0.143178471 container init a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:27:23 np0005532048 podman[344088]: 2025-11-22 09:27:23.824509069 +0000 UTC m=+0.155594033 container start a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_austin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:27:23 np0005532048 podman[344088]: 2025-11-22 09:27:23.828645003 +0000 UTC m=+0.159729977 container attach a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:27:23 np0005532048 romantic_austin[344104]: 167 167
Nov 22 04:27:23 np0005532048 systemd[1]: libpod-a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f.scope: Deactivated successfully.
Nov 22 04:27:23 np0005532048 podman[344088]: 2025-11-22 09:27:23.830688674 +0000 UTC m=+0.161773668 container died a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:27:23 np0005532048 systemd[1]: var-lib-containers-storage-overlay-df77246e3de010d2ef5e2685a893b2c12a8f2308e63290bd956a83832356db2e-merged.mount: Deactivated successfully.
Nov 22 04:27:23 np0005532048 podman[344088]: 2025-11-22 09:27:23.869691755 +0000 UTC m=+0.200776699 container remove a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_austin, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:27:23 np0005532048 systemd[1]: libpod-conmon-a601ad06d46a22b2a63594b3a6e587b19b993610c73a1cd716006c8cfd627c5f.scope: Deactivated successfully.
Nov 22 04:27:24 np0005532048 podman[344128]: 2025-11-22 09:27:24.084529868 +0000 UTC m=+0.049597999 container create 527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:27:24 np0005532048 systemd[1]: Started libpod-conmon-527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1.scope.
Nov 22 04:27:24 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:27:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00577d2460d4b14f146e3a083bf37b7ba07bea93298b7afe9fa4a48510a0dd8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:27:24 np0005532048 podman[344128]: 2025-11-22 09:27:24.063983441 +0000 UTC m=+0.029051602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:27:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00577d2460d4b14f146e3a083bf37b7ba07bea93298b7afe9fa4a48510a0dd8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:27:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00577d2460d4b14f146e3a083bf37b7ba07bea93298b7afe9fa4a48510a0dd8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:27:24 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00577d2460d4b14f146e3a083bf37b7ba07bea93298b7afe9fa4a48510a0dd8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:27:24 np0005532048 podman[344128]: 2025-11-22 09:27:24.176671374 +0000 UTC m=+0.141739525 container init 527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_euclid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 04:27:24 np0005532048 podman[344128]: 2025-11-22 09:27:24.184562532 +0000 UTC m=+0.149630653 container start 527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:27:24 np0005532048 podman[344128]: 2025-11-22 09:27:24.188517982 +0000 UTC m=+0.153586123 container attach 527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_euclid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:27:24 np0005532048 nova_compute[253661]: 2025-11-22 09:27:24.530 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "753ce408-3988-4fe2-b140-9cae60fcdd6b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:24 np0005532048 nova_compute[253661]: 2025-11-22 09:27:24.530 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:24 np0005532048 nova_compute[253661]: 2025-11-22 09:27:24.546 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:27:24 np0005532048 nova_compute[253661]: 2025-11-22 09:27:24.619 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:24 np0005532048 nova_compute[253661]: 2025-11-22 09:27:24.620 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:24 np0005532048 nova_compute[253661]: 2025-11-22 09:27:24.627 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:27:24 np0005532048 nova_compute[253661]: 2025-11-22 09:27:24.627 253665 INFO nova.compute.claims [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:27:24 np0005532048 nova_compute[253661]: 2025-11-22 09:27:24.759 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]: {
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:    "0": [
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:        {
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "devices": [
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "/dev/loop3"
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            ],
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "lv_name": "ceph_lv0",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "lv_size": "21470642176",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "name": "ceph_lv0",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "tags": {
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.cluster_name": "ceph",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.crush_device_class": "",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.encrypted": "0",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.osd_id": "0",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.type": "block",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.vdo": "0"
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            },
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "type": "block",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "vg_name": "ceph_vg0"
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:        }
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:    ],
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:    "1": [
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:        {
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "devices": [
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "/dev/loop4"
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            ],
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "lv_name": "ceph_lv1",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "lv_size": "21470642176",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "name": "ceph_lv1",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "tags": {
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.cluster_name": "ceph",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.crush_device_class": "",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.encrypted": "0",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.osd_id": "1",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.type": "block",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.vdo": "0"
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            },
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "type": "block",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "vg_name": "ceph_vg1"
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:        }
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:    ],
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:    "2": [
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:        {
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "devices": [
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "/dev/loop5"
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            ],
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "lv_name": "ceph_lv2",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "lv_size": "21470642176",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "name": "ceph_lv2",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "tags": {
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.cluster_name": "ceph",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.crush_device_class": "",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.encrypted": "0",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.osd_id": "2",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.type": "block",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:                "ceph.vdo": "0"
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            },
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "type": "block",
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:            "vg_name": "ceph_vg2"
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:        }
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]:    ]
Nov 22 04:27:25 np0005532048 mystifying_euclid[344145]: }
Nov 22 04:27:25 np0005532048 systemd[1]: libpod-527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1.scope: Deactivated successfully.
Nov 22 04:27:25 np0005532048 podman[344174]: 2025-11-22 09:27:25.109253703 +0000 UTC m=+0.034179450 container died 527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:27:25 np0005532048 systemd[1]: var-lib-containers-storage-overlay-00577d2460d4b14f146e3a083bf37b7ba07bea93298b7afe9fa4a48510a0dd8a-merged.mount: Deactivated successfully.
Nov 22 04:27:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:27:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3350919452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.230 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.238 253665 DEBUG nova.compute.provider_tree [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.255 253665 DEBUG nova.scheduler.client.report [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:27:25 np0005532048 podman[344174]: 2025-11-22 09:27:25.26422766 +0000 UTC m=+0.189153347 container remove 527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 04:27:25 np0005532048 systemd[1]: libpod-conmon-527c058cec50121dbd881173da81168701dee4ebf8a2859eae99ad0b7f5987c1.scope: Deactivated successfully.
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.276 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.291 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.331 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.331 253665 DEBUG nova.network.neutron [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.355 253665 INFO nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.375 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.473 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.477 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.478 253665 INFO nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Creating image(s)#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.517 253665 DEBUG nova.storage.rbd_utils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.554 253665 DEBUG nova.storage.rbd_utils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.585 253665 DEBUG nova.storage.rbd_utils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.590 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 214 MiB data, 729 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 200 op/s
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.632 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.637 253665 DEBUG nova.policy [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ce82551204d04546a5ae9c6f99cccfc8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a246689624d4630a70f69b70d048883', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.680 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.681 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.682 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.682 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.706 253665 DEBUG nova.storage.rbd_utils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:25 np0005532048 nova_compute[253661]: 2025-11-22 09:27:25.711 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:26 np0005532048 podman[344422]: 2025-11-22 09:27:25.973768252 +0000 UTC m=+0.044106671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:27:26 np0005532048 podman[344422]: 2025-11-22 09:27:26.069348585 +0000 UTC m=+0.139687014 container create 1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 04:27:26 np0005532048 systemd[1]: Started libpod-conmon-1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558.scope.
Nov 22 04:27:26 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:27:26 np0005532048 nova_compute[253661]: 2025-11-22 09:27:26.229 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:26 np0005532048 podman[344422]: 2025-11-22 09:27:26.254573742 +0000 UTC m=+0.324912161 container init 1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 22 04:27:26 np0005532048 podman[344422]: 2025-11-22 09:27:26.263010534 +0000 UTC m=+0.333348943 container start 1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 22 04:27:26 np0005532048 systemd[1]: libpod-1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558.scope: Deactivated successfully.
Nov 22 04:27:26 np0005532048 xenodochial_ritchie[344436]: 167 167
Nov 22 04:27:26 np0005532048 conmon[344436]: conmon 1ea825c691e0a9f6148f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558.scope/container/memory.events
Nov 22 04:27:26 np0005532048 podman[344422]: 2025-11-22 09:27:26.276487413 +0000 UTC m=+0.346825812 container attach 1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:27:26 np0005532048 podman[344422]: 2025-11-22 09:27:26.277298414 +0000 UTC m=+0.347636813 container died 1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 04:27:26 np0005532048 systemd[1]: var-lib-containers-storage-overlay-01a3355fb2c43af039164722f183a887884c2cb7e3ef79ce35696b9d18b8ee0e-merged.mount: Deactivated successfully.
Nov 22 04:27:26 np0005532048 podman[344422]: 2025-11-22 09:27:26.3384128 +0000 UTC m=+0.408751199 container remove 1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 04:27:26 np0005532048 nova_compute[253661]: 2025-11-22 09:27:26.338 253665 DEBUG nova.storage.rbd_utils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] resizing rbd image 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:27:26 np0005532048 systemd[1]: libpod-conmon-1ea825c691e0a9f6148f7de9177a12a49455d830b10c1fe0a85d282bbddce558.scope: Deactivated successfully.
Nov 22 04:27:26 np0005532048 nova_compute[253661]: 2025-11-22 09:27:26.504 253665 DEBUG nova.objects.instance [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'migration_context' on Instance uuid 753ce408-3988-4fe2-b140-9cae60fcdd6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:27:26 np0005532048 nova_compute[253661]: 2025-11-22 09:27:26.518 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:27:26 np0005532048 nova_compute[253661]: 2025-11-22 09:27:26.518 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Ensure instance console log exists: /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:27:26 np0005532048 nova_compute[253661]: 2025-11-22 09:27:26.519 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:26 np0005532048 nova_compute[253661]: 2025-11-22 09:27:26.519 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:26 np0005532048 nova_compute[253661]: 2025-11-22 09:27:26.519 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:26 np0005532048 podman[344530]: 2025-11-22 09:27:26.532788068 +0000 UTC m=+0.046322836 container create 86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:27:26 np0005532048 systemd[1]: Started libpod-conmon-86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0.scope.
Nov 22 04:27:26 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:27:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8712df3f733db2f1af97da4d88d7ad9d4fb1c839a8004af2a77503484de9c58a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:27:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8712df3f733db2f1af97da4d88d7ad9d4fb1c839a8004af2a77503484de9c58a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:27:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8712df3f733db2f1af97da4d88d7ad9d4fb1c839a8004af2a77503484de9c58a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:27:26 np0005532048 podman[344530]: 2025-11-22 09:27:26.511569174 +0000 UTC m=+0.025103952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:27:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8712df3f733db2f1af97da4d88d7ad9d4fb1c839a8004af2a77503484de9c58a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:27:26 np0005532048 podman[344530]: 2025-11-22 09:27:26.628437513 +0000 UTC m=+0.141972281 container init 86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:27:26 np0005532048 podman[344530]: 2025-11-22 09:27:26.635834969 +0000 UTC m=+0.149369737 container start 86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:27:26 np0005532048 podman[344530]: 2025-11-22 09:27:26.640332572 +0000 UTC m=+0.153867320 container attach 86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:27:26 np0005532048 nova_compute[253661]: 2025-11-22 09:27:26.710 253665 DEBUG nova.network.neutron [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Successfully created port: 28cfa04a-0181-49ef-808d-38b57a093820 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:27:27 np0005532048 nova_compute[253661]: 2025-11-22 09:27:27.515 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 22 04:27:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 235 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 154 op/s
Nov 22 04:27:27 np0005532048 reverent_jang[344549]: {
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:        "osd_id": 1,
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:        "type": "bluestore"
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:    },
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:        "osd_id": 0,
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:        "type": "bluestore"
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:    },
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:        "osd_id": 2,
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:        "type": "bluestore"
Nov 22 04:27:27 np0005532048 reverent_jang[344549]:    }
Nov 22 04:27:27 np0005532048 reverent_jang[344549]: }
Nov 22 04:27:27 np0005532048 systemd[1]: libpod-86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0.scope: Deactivated successfully.
Nov 22 04:27:27 np0005532048 systemd[1]: libpod-86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0.scope: Consumed 1.022s CPU time.
Nov 22 04:27:27 np0005532048 podman[344530]: 2025-11-22 09:27:27.687863422 +0000 UTC m=+1.201398170 container died 86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:27:27 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8712df3f733db2f1af97da4d88d7ad9d4fb1c839a8004af2a77503484de9c58a-merged.mount: Deactivated successfully.
Nov 22 04:27:27 np0005532048 podman[344530]: 2025-11-22 09:27:27.75820321 +0000 UTC m=+1.271737958 container remove 86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jang, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:27:27 np0005532048 systemd[1]: libpod-conmon-86d9267866e550afad7a778167322635aed0f731f3c920d7d936005c65730da0.scope: Deactivated successfully.
Nov 22 04:27:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:27:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:27:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:27:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:27:27 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 40cecc41-241e-4f5f-a74b-221b8ffba620 does not exist
Nov 22 04:27:27 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 30dedeef-a0ad-4172-ad1c-f27b104439b7 does not exist
Nov 22 04:27:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:27:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:27.969 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:27.970 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:27.970 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:28 np0005532048 nova_compute[253661]: 2025-11-22 09:27:28.471 253665 DEBUG nova.network.neutron [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Successfully updated port: 28cfa04a-0181-49ef-808d-38b57a093820 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:27:28 np0005532048 nova_compute[253661]: 2025-11-22 09:27:28.491 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:27:28 np0005532048 nova_compute[253661]: 2025-11-22 09:27:28.492 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquired lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:27:28 np0005532048 nova_compute[253661]: 2025-11-22 09:27:28.492 253665 DEBUG nova.network.neutron [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:27:28 np0005532048 nova_compute[253661]: 2025-11-22 09:27:28.609 253665 DEBUG nova.compute.manager [req-ef17fc72-3aa6-4551-91ae-c3fed64ace66 req-b7df2f15-c461-43cd-91c4-5d9ce84a36b1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received event network-changed-28cfa04a-0181-49ef-808d-38b57a093820 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:27:28 np0005532048 nova_compute[253661]: 2025-11-22 09:27:28.610 253665 DEBUG nova.compute.manager [req-ef17fc72-3aa6-4551-91ae-c3fed64ace66 req-b7df2f15-c461-43cd-91c4-5d9ce84a36b1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Refreshing instance network info cache due to event network-changed-28cfa04a-0181-49ef-808d-38b57a093820. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:27:28 np0005532048 nova_compute[253661]: 2025-11-22 09:27:28.610 253665 DEBUG oslo_concurrency.lockutils [req-ef17fc72-3aa6-4551-91ae-c3fed64ace66 req-b7df2f15-c461-43cd-91c4-5d9ce84a36b1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:27:28 np0005532048 nova_compute[253661]: 2025-11-22 09:27:28.716 253665 DEBUG nova.network.neutron [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:27:28 np0005532048 nova_compute[253661]: 2025-11-22 09:27:28.743 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:28 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:27:28 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:27:29 np0005532048 ovn_controller[152872]: 2025-11-22T09:27:29Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:cd:01 10.100.0.13
Nov 22 04:27:29 np0005532048 ovn_controller[152872]: 2025-11-22T09:27:29Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:cd:01 10.100.0.13
Nov 22 04:27:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 289 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.0 MiB/s wr, 196 op/s
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.041 253665 DEBUG nova.network.neutron [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Updating instance_info_cache with network_info: [{"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.055 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Releasing lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.056 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Instance network_info: |[{"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.056 253665 DEBUG oslo_concurrency.lockutils [req-ef17fc72-3aa6-4551-91ae-c3fed64ace66 req-b7df2f15-c461-43cd-91c4-5d9ce84a36b1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.056 253665 DEBUG nova.network.neutron [req-ef17fc72-3aa6-4551-91ae-c3fed64ace66 req-b7df2f15-c461-43cd-91c4-5d9ce84a36b1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Refreshing network info cache for port 28cfa04a-0181-49ef-808d-38b57a093820 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.058 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Start _get_guest_xml network_info=[{"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.063 253665 WARNING nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.070 253665 DEBUG nova.virt.libvirt.host [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.070 253665 DEBUG nova.virt.libvirt.host [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.074 253665 DEBUG nova.virt.libvirt.host [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.074 253665 DEBUG nova.virt.libvirt.host [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.075 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.075 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.075 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.076 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.076 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.076 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.076 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.076 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.077 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.077 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.077 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.077 253665 DEBUG nova.virt.hardware [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.081 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:27:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:27:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1290931064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.572 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.582 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.608 253665 DEBUG nova.storage.rbd_utils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:30 np0005532048 nova_compute[253661]: 2025-11-22 09:27:30.611 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:27:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3555189802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.074 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.076 253665 DEBUG nova.virt.libvirt.vif [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-695887300',display_name='tempest-ServerActionsTestOtherB-server-695887300',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-695887300',id=91,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-h01c4fzy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTe
stOtherB-985895222-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:27:25Z,user_data=None,user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=753ce408-3988-4fe2-b140-9cae60fcdd6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.076 253665 DEBUG nova.network.os_vif_util [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.078 253665 DEBUG nova.network.os_vif_util [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:df:55,bridge_name='br-int',has_traffic_filtering=True,id=28cfa04a-0181-49ef-808d-38b57a093820,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28cfa04a-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.080 253665 DEBUG nova.objects.instance [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'pci_devices' on Instance uuid 753ce408-3988-4fe2-b140-9cae60fcdd6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.095 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  <uuid>753ce408-3988-4fe2-b140-9cae60fcdd6b</uuid>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  <name>instance-0000005b</name>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerActionsTestOtherB-server-695887300</nova:name>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:27:30</nova:creationTime>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:27:31 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:        <nova:user uuid="ce82551204d04546a5ae9c6f99cccfc8">tempest-ServerActionsTestOtherB-985895222-project-member</nova:user>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:        <nova:project uuid="8a246689624d4630a70f69b70d048883">tempest-ServerActionsTestOtherB-985895222</nova:project>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:        <nova:port uuid="28cfa04a-0181-49ef-808d-38b57a093820">
Nov 22 04:27:31 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <entry name="serial">753ce408-3988-4fe2-b140-9cae60fcdd6b</entry>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <entry name="uuid">753ce408-3988-4fe2-b140-9cae60fcdd6b</entry>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/753ce408-3988-4fe2-b140-9cae60fcdd6b_disk">
Nov 22 04:27:31 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:27:31 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/753ce408-3988-4fe2-b140-9cae60fcdd6b_disk.config">
Nov 22 04:27:31 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:27:31 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:26:df:55"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <target dev="tap28cfa04a-01"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b/console.log" append="off"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:27:31 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:27:31 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:27:31 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:27:31 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.097 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Preparing to wait for external event network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.097 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.097 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.097 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.098 253665 DEBUG nova.virt.libvirt.vif [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-695887300',display_name='tempest-ServerActionsTestOtherB-server-695887300',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-695887300',id=91,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-h01c4fzy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-Serve
rActionsTestOtherB-985895222-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:27:25Z,user_data=None,user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=753ce408-3988-4fe2-b140-9cae60fcdd6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.098 253665 DEBUG nova.network.os_vif_util [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.099 253665 DEBUG nova.network.os_vif_util [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:df:55,bridge_name='br-int',has_traffic_filtering=True,id=28cfa04a-0181-49ef-808d-38b57a093820,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28cfa04a-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.099 253665 DEBUG os_vif [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:df:55,bridge_name='br-int',has_traffic_filtering=True,id=28cfa04a-0181-49ef-808d-38b57a093820,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28cfa04a-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.100 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.101 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.101 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.105 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.106 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28cfa04a-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.106 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap28cfa04a-01, col_values=(('external_ids', {'iface-id': '28cfa04a-0181-49ef-808d-38b57a093820', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:df:55', 'vm-uuid': '753ce408-3988-4fe2-b140-9cae60fcdd6b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.108 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:31 np0005532048 NetworkManager[48920]: <info>  [1763803651.1088] manager: (tap28cfa04a-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.110 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.115 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.116 253665 INFO os_vif [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:df:55,bridge_name='br-int',has_traffic_filtering=True,id=28cfa04a-0181-49ef-808d-38b57a093820,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28cfa04a-01')#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.222 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.223 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.223 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No VIF found with MAC fa:16:3e:26:df:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.224 253665 INFO nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Using config drive#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.248 253665 DEBUG nova.storage.rbd_utils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 289 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 552 KiB/s rd, 4.0 MiB/s wr, 93 op/s
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.739 253665 INFO nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Creating config drive at /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b/disk.config#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.744 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmu5hnakw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.780 253665 DEBUG nova.network.neutron [req-ef17fc72-3aa6-4551-91ae-c3fed64ace66 req-b7df2f15-c461-43cd-91c4-5d9ce84a36b1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Updated VIF entry in instance network info cache for port 28cfa04a-0181-49ef-808d-38b57a093820. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.781 253665 DEBUG nova.network.neutron [req-ef17fc72-3aa6-4551-91ae-c3fed64ace66 req-b7df2f15-c461-43cd-91c4-5d9ce84a36b1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Updating instance_info_cache with network_info: [{"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.797 253665 DEBUG oslo_concurrency.lockutils [req-ef17fc72-3aa6-4551-91ae-c3fed64ace66 req-b7df2f15-c461-43cd-91c4-5d9ce84a36b1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.887 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmu5hnakw" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.989 253665 DEBUG nova.storage.rbd_utils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:31 np0005532048 nova_compute[253661]: 2025-11-22 09:27:31.993 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b/disk.config 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:32 np0005532048 nova_compute[253661]: 2025-11-22 09:27:32.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:27:32 np0005532048 nova_compute[253661]: 2025-11-22 09:27:32.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:27:32 np0005532048 nova_compute[253661]: 2025-11-22 09:27:32.739 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:27:32 np0005532048 nova_compute[253661]: 2025-11-22 09:27:32.740 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:27:32 np0005532048 nova_compute[253661]: 2025-11-22 09:27:32.740 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:27:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:27:33 np0005532048 nova_compute[253661]: 2025-11-22 09:27:33.442 253665 DEBUG oslo_concurrency.processutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b/disk.config 753ce408-3988-4fe2-b140-9cae60fcdd6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:33 np0005532048 nova_compute[253661]: 2025-11-22 09:27:33.443 253665 INFO nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Deleting local config drive /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b/disk.config because it was imported into RBD.#033[00m
Nov 22 04:27:33 np0005532048 kernel: tap28cfa04a-01: entered promiscuous mode
Nov 22 04:27:33 np0005532048 NetworkManager[48920]: <info>  [1763803653.5181] manager: (tap28cfa04a-01): new Tun device (/org/freedesktop/NetworkManager/Devices/378)
Nov 22 04:27:33 np0005532048 ovn_controller[152872]: 2025-11-22T09:27:33Z|00912|binding|INFO|Claiming lport 28cfa04a-0181-49ef-808d-38b57a093820 for this chassis.
Nov 22 04:27:33 np0005532048 ovn_controller[152872]: 2025-11-22T09:27:33Z|00913|binding|INFO|28cfa04a-0181-49ef-808d-38b57a093820: Claiming fa:16:3e:26:df:55 10.100.0.3
Nov 22 04:27:33 np0005532048 nova_compute[253661]: 2025-11-22 09:27:33.518 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:33 np0005532048 ovn_controller[152872]: 2025-11-22T09:27:33Z|00914|binding|INFO|Setting lport 28cfa04a-0181-49ef-808d-38b57a093820 ovn-installed in OVS
Nov 22 04:27:33 np0005532048 ovn_controller[152872]: 2025-11-22T09:27:33Z|00915|binding|INFO|Setting lport 28cfa04a-0181-49ef-808d-38b57a093820 up in Southbound
Nov 22 04:27:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.534 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:df:55 10.100.0.3'], port_security=['fa:16:3e:26:df:55 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '753ce408-3988-4fe2-b140-9cae60fcdd6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e37df2c8-4dc4-418d-92f1-b394537a30da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a246689624d4630a70f69b70d048883', 'neutron:revision_number': '2', 'neutron:security_group_ids': '565d4bba-9c09-4fbf-9eb5-c7cb7133e1fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7135b765-78b7-490e-8e9e-3f8a3fb53933, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=28cfa04a-0181-49ef-808d-38b57a093820) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:27:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.537 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 28cfa04a-0181-49ef-808d-38b57a093820 in datapath e37df2c8-4dc4-418d-92f1-b394537a30da bound to our chassis#033[00m
Nov 22 04:27:33 np0005532048 nova_compute[253661]: 2025-11-22 09:27:33.538 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.541 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e37df2c8-4dc4-418d-92f1-b394537a30da#033[00m
Nov 22 04:27:33 np0005532048 systemd-udevd[344784]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:27:33 np0005532048 systemd-machined[215941]: New machine qemu-109-instance-0000005b.
Nov 22 04:27:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.564 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6019d3ab-9771-4b3b-9a78-a66de3a1b9c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:33 np0005532048 systemd[1]: Started Virtual Machine qemu-109-instance-0000005b.
Nov 22 04:27:33 np0005532048 NetworkManager[48920]: <info>  [1763803653.5777] device (tap28cfa04a-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:27:33 np0005532048 NetworkManager[48920]: <info>  [1763803653.5785] device (tap28cfa04a-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:27:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.607 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e96b62eb-810f-4e2f-a4e1-79e997b0cb61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.612 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[567c3a86-b20d-438f-ab9d-d63322febb6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 322 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 905 KiB/s rd, 6.0 MiB/s wr, 145 op/s
Nov 22 04:27:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.648 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[40823b05-139a-46c6-b39e-bc9c2d9bd9aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.671 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af265c35-abf1-475f-b761-5923173491d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape37df2c8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c4:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640468, 'reachable_time': 21674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344796, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.693 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[751b7826-d4b5-489c-97b4-2bb72d1b7844]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640485, 'tstamp': 640485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344798, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640489, 'tstamp': 640489}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344798, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.695 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape37df2c8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:27:33 np0005532048 nova_compute[253661]: 2025-11-22 09:27:33.698 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:33 np0005532048 nova_compute[253661]: 2025-11-22 09:27:33.699 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.701 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape37df2c8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:27:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.702 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:27:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.703 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape37df2c8-40, col_values=(('external_ids', {'iface-id': '93c31381-1979-4cee-982c-9507d8ee6c9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:27:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:33.704 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:27:33 np0005532048 nova_compute[253661]: 2025-11-22 09:27:33.769 253665 DEBUG nova.compute.manager [req-676e9729-0119-4d9e-887d-9299a66ae51b req-4f408cb4-052c-4bf7-8ea2-9690ed069370 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received event network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:27:33 np0005532048 nova_compute[253661]: 2025-11-22 09:27:33.770 253665 DEBUG oslo_concurrency.lockutils [req-676e9729-0119-4d9e-887d-9299a66ae51b req-4f408cb4-052c-4bf7-8ea2-9690ed069370 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:33 np0005532048 nova_compute[253661]: 2025-11-22 09:27:33.770 253665 DEBUG oslo_concurrency.lockutils [req-676e9729-0119-4d9e-887d-9299a66ae51b req-4f408cb4-052c-4bf7-8ea2-9690ed069370 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:33 np0005532048 nova_compute[253661]: 2025-11-22 09:27:33.770 253665 DEBUG oslo_concurrency.lockutils [req-676e9729-0119-4d9e-887d-9299a66ae51b req-4f408cb4-052c-4bf7-8ea2-9690ed069370 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:33 np0005532048 nova_compute[253661]: 2025-11-22 09:27:33.770 253665 DEBUG nova.compute.manager [req-676e9729-0119-4d9e-887d-9299a66ae51b req-4f408cb4-052c-4bf7-8ea2-9690ed069370 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Processing event network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.218 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803654.2172937, 753ce408-3988-4fe2-b140-9cae60fcdd6b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.219 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] VM Started (Lifecycle Event)#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.221 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.225 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.230 253665 INFO nova.virt.libvirt.driver [-] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Instance spawned successfully.#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.231 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.249 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.257 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.265 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.266 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.267 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.268 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.268 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.269 253665 DEBUG nova.virt.libvirt.driver [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.296 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.297 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803654.217594, 753ce408-3988-4fe2-b140-9cae60fcdd6b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.297 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.320 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.324 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803654.225146, 753ce408-3988-4fe2-b140-9cae60fcdd6b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.324 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.335 253665 INFO nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Took 8.86 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.336 253665 DEBUG nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.344 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.347 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.369 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.389 253665 INFO nova.compute.manager [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Took 9.80 seconds to build instance.#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.403 253665 DEBUG oslo_concurrency.lockutils [None req-a4c843cb-c8a2-4fb4-a513-063da74e4f21 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.829 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.845 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.846 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.846 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.846 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.847 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.847 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.871 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.871 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.872 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.872 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:27:34 np0005532048 nova_compute[253661]: 2025-11-22 09:27:34.872 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:27:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/510388740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.339 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.452 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.452 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.456 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.457 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.460 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.460 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.464 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.464 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.574 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 326 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 665 KiB/s rd, 6.0 MiB/s wr, 155 op/s
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.641 253665 INFO nova.compute.manager [None req-e39c48ab-a8a5-4e58-859c-0a24c8235ff5 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Pausing#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.643 253665 DEBUG nova.objects.instance [None req-e39c48ab-a8a5-4e58-859c-0a24c8235ff5 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'flavor' on Instance uuid 753ce408-3988-4fe2-b140-9cae60fcdd6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.661 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.662 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3286MB free_disk=59.83132553100586GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.663 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.663 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.675 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803655.6754906, 753ce408-3988-4fe2-b140-9cae60fcdd6b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.676 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.677 253665 DEBUG nova.compute.manager [None req-e39c48ab-a8a5-4e58-859c-0a24c8235ff5 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.712 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.716 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.770 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.804 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance e4f9440c-7476-4022-8d08-1b3151a9db79 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.805 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.805 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance be0569c8-2c59-4525-a348-590d878662d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.805 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 753ce408-3988-4fe2-b140-9cae60fcdd6b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.805 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.806 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.852 253665 DEBUG nova.compute.manager [req-52c7d710-4b99-4bfb-88cb-728b4844de93 req-baecf9c6-e266-4ee5-a780-6ac5025295f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received event network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.853 253665 DEBUG oslo_concurrency.lockutils [req-52c7d710-4b99-4bfb-88cb-728b4844de93 req-baecf9c6-e266-4ee5-a780-6ac5025295f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.854 253665 DEBUG oslo_concurrency.lockutils [req-52c7d710-4b99-4bfb-88cb-728b4844de93 req-baecf9c6-e266-4ee5-a780-6ac5025295f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.854 253665 DEBUG oslo_concurrency.lockutils [req-52c7d710-4b99-4bfb-88cb-728b4844de93 req-baecf9c6-e266-4ee5-a780-6ac5025295f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.854 253665 DEBUG nova.compute.manager [req-52c7d710-4b99-4bfb-88cb-728b4844de93 req-baecf9c6-e266-4ee5-a780-6ac5025295f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] No waiting events found dispatching network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.854 253665 WARNING nova.compute.manager [req-52c7d710-4b99-4bfb-88cb-728b4844de93 req-baecf9c6-e266-4ee5-a780-6ac5025295f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received unexpected event network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 for instance with vm_state paused and task_state None.#033[00m
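The lockutils/pop_instance_event lines above show Nova popping a per-instance event under a "<uuid>-events" lock; since no waiter was registered for `network-vif-plugged-...`, it logs "No waiting events found" and then the "Received unexpected event" warning. A minimal sketch of that pattern using stdlib `threading` (the class and method names below are illustrative, not Nova's actual implementation):

```python
import threading

class InstanceEvents:
    """Toy model of per-instance event waiters, loosely patterned on
    nova.compute.manager.InstanceEvents (names here are illustrative)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}  # instance_uuid -> {event_name: threading.Event}

    def prepare(self, uuid, name):
        with self._lock:  # mirrors the "<uuid>-events" lock in the log
            ev = threading.Event()
            self._events.setdefault(uuid, {})[name] = ev
            return ev

    def pop(self, uuid, name):
        with self._lock:
            return self._events.get(uuid, {}).pop(name, None)

events = InstanceEvents()
# No waiter registered -> pop returns None; this is the case where Nova
# logs "No waiting events found ... Received unexpected event ..."
assert events.pop("753ce408", "network-vif-plugged-28cfa04a") is None

# With a waiter registered, popping yields the event to signal.
waiter = events.prepare("753ce408", "network-vif-plugged-28cfa04a")
popped = events.pop("753ce408", "network-vif-plugged-28cfa04a")
popped.set()  # the compute manager would signal the waiter here
```

The warning in the log is therefore benign here: Neutron sent a vif-plugged notification while nothing in Nova was blocked waiting on it.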
Nov 22 04:27:35 np0005532048 nova_compute[253661]: 2025-11-22 09:27:35.921 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:36 np0005532048 nova_compute[253661]: 2025-11-22 09:27:36.140 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:27:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2623197426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:27:36 np0005532048 nova_compute[253661]: 2025-11-22 09:27:36.446 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
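The `ceph df --format=json` subprocess above is how the libvirt RBD backend samples cluster capacity for the resource tracker. A sketch of parsing that output; the top-level `stats` keys are part of the real `ceph df` JSON schema, but the byte values below are invented for illustration:

```python
import json

# Abbreviated sample in the shape `ceph df --format=json` returns;
# the numbers are made up (60 GiB total, 59 GiB available).
sample = json.dumps({
    "stats": {
        "total_bytes": 64424509440,
        "total_avail_bytes": 63350767616,
        "total_used_raw_bytes": 843055104,
    }
})

stats = json.loads(sample)["stats"]
total_gb = stats["total_bytes"] / 1024 ** 3
avail_gb = stats["total_avail_bytes"] / 1024 ** 3
print(f"{avail_gb:.0f} GiB free of {total_gb:.0f} GiB")
```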
Nov 22 04:27:36 np0005532048 nova_compute[253661]: 2025-11-22 09:27:36.452 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:27:36 np0005532048 nova_compute[253661]: 2025-11-22 09:27:36.471 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
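The inventory dict above is what the compute node reports to Placement; effective schedulable capacity per resource class is derived as `(total - reserved) * allocation_ratio`. A quick check against the logged values:

```python
# Values copied from the logged inventory data
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}

def capacity(inv):
    # Placement's effective capacity: (total - reserved) * allocation_ratio
    return (inv['total'] - inv['reserved']) * inv['allocation_ratio']

caps = {rc: capacity(inv) for rc, inv in inventory.items()}
print(caps)  # 32 schedulable vCPUs, 7167 MiB RAM, ~52.2 GiB disk
```

So this 8-core host can overcommit to 32 vCPUs, while memory is not overcommitted and disk is slightly undercommitted (ratio 0.9).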
Nov 22 04:27:36 np0005532048 nova_compute[253661]: 2025-11-22 09:27:36.505 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:27:36 np0005532048 nova_compute[253661]: 2025-11-22 09:27:36.506 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:37 np0005532048 podman[344886]: 2025-11-22 09:27:37.40908763 +0000 UTC m=+0.065807536 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:27:37 np0005532048 podman[344887]: 2025-11-22 09:27:37.41426778 +0000 UTC m=+0.069757835 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:27:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 326 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 1006 KiB/s rd, 6.0 MiB/s wr, 173 op/s
Nov 22 04:27:37 np0005532048 nova_compute[253661]: 2025-11-22 09:27:37.887 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:27:37 np0005532048 nova_compute[253661]: 2025-11-22 09:27:37.887 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:27:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.166 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "753ce408-3988-4fe2-b140-9cae60fcdd6b" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.167 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.167 253665 INFO nova.compute.manager [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Shelving#033[00m
Nov 22 04:27:38 np0005532048 kernel: tap28cfa04a-01 (unregistering): left promiscuous mode
Nov 22 04:27:38 np0005532048 NetworkManager[48920]: <info>  [1763803658.4645] device (tap28cfa04a-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:27:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:27:38Z|00916|binding|INFO|Releasing lport 28cfa04a-0181-49ef-808d-38b57a093820 from this chassis (sb_readonly=0)
Nov 22 04:27:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:27:38Z|00917|binding|INFO|Setting lport 28cfa04a-0181-49ef-808d-38b57a093820 down in Southbound
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.472 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:27:38Z|00918|binding|INFO|Removing iface tap28cfa04a-01 ovn-installed in OVS
Nov 22 04:27:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.480 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:df:55 10.100.0.3'], port_security=['fa:16:3e:26:df:55 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '753ce408-3988-4fe2-b140-9cae60fcdd6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e37df2c8-4dc4-418d-92f1-b394537a30da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a246689624d4630a70f69b70d048883', 'neutron:revision_number': '4', 'neutron:security_group_ids': '565d4bba-9c09-4fbf-9eb5-c7cb7133e1fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7135b765-78b7-490e-8e9e-3f8a3fb53933, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=28cfa04a-0181-49ef-808d-38b57a093820) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:27:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.481 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 28cfa04a-0181-49ef-808d-38b57a093820 in datapath e37df2c8-4dc4-418d-92f1-b394537a30da unbound from our chassis#033[00m
Nov 22 04:27:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.483 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e37df2c8-4dc4-418d-92f1-b394537a30da#033[00m
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.499 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:38 np0005532048 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Nov 22 04:27:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.505 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[67b8b4f2-83dd-4ea5-aa5a-240658cd43db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:38 np0005532048 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000005b.scope: Consumed 1.973s CPU time.
Nov 22 04:27:38 np0005532048 systemd-machined[215941]: Machine qemu-109-instance-0000005b terminated.
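The scope name `machine-qemu\x2d109\x2dinstance\x2d0000005b.scope` in the systemd lines uses systemd's unit-name escaping, where characters such as `-` inside a name component are written as `\xNN` hex escapes. A minimal decoder for the subset seen here (the real `systemd-escape` rules cover more cases):

```python
import re

def unescape_unit(name: str) -> str:
    # systemd writes escaped bytes as \xNN; decode just that subset
    return re.sub(r'\\x([0-9a-fA-F]{2})',
                  lambda m: chr(int(m.group(1), 16)), name)

scope = r"machine-qemu\x2d109\x2dinstance\x2d0000005b.scope"
print(unescape_unit(scope))  # machine-qemu-109-instance-0000005b.scope
```

Decoded, the scope maps back to libvirt machine `qemu-109-instance-0000005b`, matching the `systemd-machined` termination message that follows.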
Nov 22 04:27:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.544 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6b826fe0-b952-4ec7-8ce7-181f0cc582c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.548 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[90e00937-6e46-4b07-b14f-ac5b901f2a1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.564 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
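The `_clean_shutdown` line for instance `be0569c8-...` shows the libvirt driver polling domain state (state 1 is "running") and re-sending the shutdown request when the guest has not powered off yet. A toy sketch of that poll-and-retry shape (the domain class and retry counts below are invented, not libvirt's API):

```python
RUNNING, SHUTOFF = 1, 5  # libvirt domain state codes

class ToyDomain:
    """Pretend domain that powers off after the third shutdown request."""
    def __init__(self):
        self._requests = 0
        self.state = RUNNING

    def shutdown(self):
        self._requests += 1
        if self._requests >= 3:
            self.state = SHUTOFF

def clean_shutdown(dom, retries=5):
    # mirror the driver's pattern: request shutdown, poll state,
    # re-send while the guest is still running
    for _ in range(retries):
        dom.shutdown()
        if dom.state == SHUTOFF:
            return True
    return False  # caller would fall back to a hard destroy

print(clean_shutdown(ToyDomain()))
```

The real driver also sleeps between polls and honors a configurable timeout; this only shows the control flow behind the "resending shutdown" message.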
Nov 22 04:27:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.590 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[af6f4357-678e-4495-bce6-bd5e2608287b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.606 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.611 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.625 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2d9c7b-bcaa-47bc-9e8b-647ce4cda397]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape37df2c8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c4:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640468, 'reachable_time': 21674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344941, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.626 253665 INFO nova.virt.libvirt.driver [-] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Instance destroyed successfully.#033[00m
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.627 253665 DEBUG nova.objects.instance [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'numa_topology' on Instance uuid 753ce408-3988-4fe2-b140-9cae60fcdd6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:27:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.656 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d5044a67-16fe-411b-8023-8191be447ca6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640485, 'tstamp': 640485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344948, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640489, 'tstamp': 640489}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344948, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
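The privsep replies above carry pyroute2-style netlink messages whose `attrs` field is a list of `[name, value]` pairs; the metadata agent reads the proxy interface's addresses, including the 169.254.169.254 metadata IP, out of `RTM_NEWADDR` messages like these. A small helper for that shape, run against trimmed-down copies of the logged messages:

```python
def get_attr(msg, name):
    """Return the first value for `name` in a pyroute2-style attrs list."""
    for key, value in msg.get('attrs', []):
        if key == name:
            return value
    return None

# Trimmed-down RTM_NEWADDR messages in the shape logged above
msgs = [
    {'attrs': [['IFA_ADDRESS', '10.100.0.2'],
               ['IFA_LABEL', 'tape37df2c8-41']]},
    {'attrs': [['IFA_ADDRESS', '169.254.169.254'],
               ['IFA_LABEL', 'tape37df2c8-41']]},
]
addrs = [get_attr(m, 'IFA_ADDRESS') for m in msgs]
print(addrs)  # ['10.100.0.2', '169.254.169.254']
```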
Nov 22 04:27:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.660 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape37df2c8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.669 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.670 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape37df2c8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:27:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.671 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:27:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.672 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape37df2c8-40, col_values=(('external_ids', {'iface-id': '93c31381-1979-4cee-982c-9507d8ee6c9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:27:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:38.672 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.792 253665 DEBUG nova.compute.manager [req-14009079-fa3a-4a9a-93a9-b76f4a3635c9 req-457a8f8c-5a6d-4244-bd24-c7f1aa40a4eb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received event network-vif-unplugged-28cfa04a-0181-49ef-808d-38b57a093820 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.792 253665 DEBUG oslo_concurrency.lockutils [req-14009079-fa3a-4a9a-93a9-b76f4a3635c9 req-457a8f8c-5a6d-4244-bd24-c7f1aa40a4eb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.793 253665 DEBUG oslo_concurrency.lockutils [req-14009079-fa3a-4a9a-93a9-b76f4a3635c9 req-457a8f8c-5a6d-4244-bd24-c7f1aa40a4eb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.793 253665 DEBUG oslo_concurrency.lockutils [req-14009079-fa3a-4a9a-93a9-b76f4a3635c9 req-457a8f8c-5a6d-4244-bd24-c7f1aa40a4eb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.793 253665 DEBUG nova.compute.manager [req-14009079-fa3a-4a9a-93a9-b76f4a3635c9 req-457a8f8c-5a6d-4244-bd24-c7f1aa40a4eb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] No waiting events found dispatching network-vif-unplugged-28cfa04a-0181-49ef-808d-38b57a093820 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.793 253665 WARNING nova.compute.manager [req-14009079-fa3a-4a9a-93a9-b76f4a3635c9 req-457a8f8c-5a6d-4244-bd24-c7f1aa40a4eb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received unexpected event network-vif-unplugged-28cfa04a-0181-49ef-808d-38b57a093820 for instance with vm_state paused and task_state shelving.#033[00m
Nov 22 04:27:38 np0005532048 nova_compute[253661]: 2025-11-22 09:27:38.887 253665 INFO nova.virt.libvirt.driver [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Beginning cold snapshot process#033[00m
Nov 22 04:27:39 np0005532048 nova_compute[253661]: 2025-11-22 09:27:39.026 253665 DEBUG nova.virt.libvirt.imagebackend [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 22 04:27:39 np0005532048 nova_compute[253661]: 2025-11-22 09:27:39.270 253665 DEBUG nova.storage.rbd_utils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(ef11e4df533b4e6db3d19539cbf5b443) on rbd image(753ce408-3988-4fe2-b140-9cae60fcdd6b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:27:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Nov 22 04:27:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Nov 22 04:27:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Nov 22 04:27:39 np0005532048 nova_compute[253661]: 2025-11-22 09:27:39.495 253665 DEBUG nova.storage.rbd_utils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] cloning vms/753ce408-3988-4fe2-b140-9cae60fcdd6b_disk@ef11e4df533b4e6db3d19539cbf5b443 to images/e371ade6-6f76-4ac1-a229-6a09e518f8de clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:27:39 np0005532048 nova_compute[253661]: 2025-11-22 09:27:39.621 253665 DEBUG nova.storage.rbd_utils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] flattening images/e371ade6-6f76-4ac1-a229-6a09e518f8de flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 22 04:27:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 326 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 144 op/s
Nov 22 04:27:39 np0005532048 nova_compute[253661]: 2025-11-22 09:27:39.931 253665 DEBUG nova.storage.rbd_utils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] removing snapshot(ef11e4df533b4e6db3d19539cbf5b443) on rbd image(753ce408-3988-4fe2-b140-9cae60fcdd6b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
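The rbd_utils lines above are the RBD side of the cold snapshot: create a snap on the instance disk, clone it into the images pool, flatten the clone so it no longer depends on the parent, then delete the temporary snap. A toy in-memory model of why flatten must happen before snapshot removal (this is not the real `rbd` Python binding, just the ordering constraint):

```python
CLONES = []

class Image:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.snaps = name, parent, set()

    def create_snap(self, s):
        self.snaps.add(s)

    def remove_snap(self, s):
        # real RBD refuses to delete a snapshot that still has
        # child clones; model that constraint here
        if any(c.parent == (self.name, s) for c in CLONES):
            raise RuntimeError("snap has children")
        self.snaps.discard(s)

    def flatten(self):
        # real flatten copies the parent data in, then drops the link
        self.parent = None

disk = Image("753ce408_disk")
disk.create_snap("ef11e4df")                            # create_snap
clone = Image("e371ade6", parent=(disk.name, "ef11e4df"))
CLONES.append(clone)                                    # clone
clone.flatten()                                         # flatten
disk.remove_snap("ef11e4df")                            # now safe: remove_snap
```

Skipping the flatten would leave the Glance image a child of the ephemeral snapshot, which could then never be deleted; the log shows Nova performing exactly this safe ordering.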
Nov 22 04:27:40 np0005532048 nova_compute[253661]: 2025-11-22 09:27:40.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:27:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Nov 22 04:27:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Nov 22 04:27:40 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Nov 22 04:27:40 np0005532048 nova_compute[253661]: 2025-11-22 09:27:40.456 253665 DEBUG nova.storage.rbd_utils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(snap) on rbd image(e371ade6-6f76-4ac1-a229-6a09e518f8de) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:27:40 np0005532048 nova_compute[253661]: 2025-11-22 09:27:40.576 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:40 np0005532048 nova_compute[253661]: 2025-11-22 09:27:40.922 253665 DEBUG nova.compute.manager [req-53e647ed-e2d8-4a45-b0b2-0de5dc02e99c req-82bc68c1-2a94-474d-a1fa-11b40275565b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received event network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:27:40 np0005532048 nova_compute[253661]: 2025-11-22 09:27:40.923 253665 DEBUG oslo_concurrency.lockutils [req-53e647ed-e2d8-4a45-b0b2-0de5dc02e99c req-82bc68c1-2a94-474d-a1fa-11b40275565b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:40 np0005532048 nova_compute[253661]: 2025-11-22 09:27:40.923 253665 DEBUG oslo_concurrency.lockutils [req-53e647ed-e2d8-4a45-b0b2-0de5dc02e99c req-82bc68c1-2a94-474d-a1fa-11b40275565b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:40 np0005532048 nova_compute[253661]: 2025-11-22 09:27:40.924 253665 DEBUG oslo_concurrency.lockutils [req-53e647ed-e2d8-4a45-b0b2-0de5dc02e99c req-82bc68c1-2a94-474d-a1fa-11b40275565b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:40 np0005532048 nova_compute[253661]: 2025-11-22 09:27:40.924 253665 DEBUG nova.compute.manager [req-53e647ed-e2d8-4a45-b0b2-0de5dc02e99c req-82bc68c1-2a94-474d-a1fa-11b40275565b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] No waiting events found dispatching network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:27:40 np0005532048 nova_compute[253661]: 2025-11-22 09:27:40.924 253665 WARNING nova.compute.manager [req-53e647ed-e2d8-4a45-b0b2-0de5dc02e99c req-82bc68c1-2a94-474d-a1fa-11b40275565b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received unexpected event network-vif-plugged-28cfa04a-0181-49ef-808d-38b57a093820 for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Nov 22 04:27:41 np0005532048 nova_compute[253661]: 2025-11-22 09:27:41.142 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:41 np0005532048 nova_compute[253661]: 2025-11-22 09:27:41.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:27:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Nov 22 04:27:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Nov 22 04:27:41 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Nov 22 04:27:41 np0005532048 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Nov 22 04:27:41 np0005532048 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d0000005a.scope: Consumed 13.493s CPU time.
Nov 22 04:27:41 np0005532048 systemd-machined[215941]: Machine qemu-108-instance-0000005a terminated.
Nov 22 04:27:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 326 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 74 KiB/s wr, 99 op/s
Nov 22 04:27:41 np0005532048 nova_compute[253661]: 2025-11-22 09:27:41.761 253665 INFO nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance shutdown successfully after 24 seconds.#033[00m
Nov 22 04:27:41 np0005532048 nova_compute[253661]: 2025-11-22 09:27:41.768 253665 INFO nova.virt.libvirt.driver [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance destroyed successfully.#033[00m
Nov 22 04:27:41 np0005532048 nova_compute[253661]: 2025-11-22 09:27:41.774 253665 INFO nova.virt.libvirt.driver [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance destroyed successfully.#033[00m
Nov 22 04:27:42 np0005532048 nova_compute[253661]: 2025-11-22 09:27:42.204 253665 INFO nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Deleting instance files /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8_del#033[00m
Nov 22 04:27:42 np0005532048 nova_compute[253661]: 2025-11-22 09:27:42.205 253665 INFO nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Deletion of /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8_del complete#033[00m
Nov 22 04:27:42 np0005532048 nova_compute[253661]: 2025-11-22 09:27:42.412 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:27:42 np0005532048 nova_compute[253661]: 2025-11-22 09:27:42.413 253665 INFO nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Creating image(s)#033[00m
Nov 22 04:27:42 np0005532048 nova_compute[253661]: 2025-11-22 09:27:42.456 253665 DEBUG nova.storage.rbd_utils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:42 np0005532048 nova_compute[253661]: 2025-11-22 09:27:42.618 253665 DEBUG nova.storage.rbd_utils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:42 np0005532048 nova_compute[253661]: 2025-11-22 09:27:42.646 253665 DEBUG nova.storage.rbd_utils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:42 np0005532048 nova_compute[253661]: 2025-11-22 09:27:42.653 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:42 np0005532048 nova_compute[253661]: 2025-11-22 09:27:42.767 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:42 np0005532048 nova_compute[253661]: 2025-11-22 09:27:42.768 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:42 np0005532048 nova_compute[253661]: 2025-11-22 09:27:42.770 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:42 np0005532048 nova_compute[253661]: 2025-11-22 09:27:42.770 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:42 np0005532048 nova_compute[253661]: 2025-11-22 09:27:42.806 253665 DEBUG nova.storage.rbd_utils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:42 np0005532048 nova_compute[253661]: 2025-11-22 09:27:42.811 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 be0569c8-2c59-4525-a348-590d878662d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:27:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 351 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.9 MiB/s wr, 132 op/s
Nov 22 04:27:44 np0005532048 nova_compute[253661]: 2025-11-22 09:27:44.442 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 be0569c8-2c59-4525-a348-590d878662d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:44 np0005532048 podman[345208]: 2025-11-22 09:27:44.458202248 +0000 UTC m=+0.137754685 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:27:44 np0005532048 nova_compute[253661]: 2025-11-22 09:27:44.530 253665 DEBUG nova.storage.rbd_utils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] resizing rbd image be0569c8-2c59-4525-a348-590d878662d8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:27:44 np0005532048 nova_compute[253661]: 2025-11-22 09:27:44.677 253665 INFO nova.virt.libvirt.driver [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Snapshot image upload complete#033[00m
Nov 22 04:27:44 np0005532048 nova_compute[253661]: 2025-11-22 09:27:44.678 253665 DEBUG nova.compute.manager [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:44 np0005532048 nova_compute[253661]: 2025-11-22 09:27:44.747 253665 INFO nova.compute.manager [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Shelve offloading#033[00m
Nov 22 04:27:44 np0005532048 nova_compute[253661]: 2025-11-22 09:27:44.761 253665 INFO nova.virt.libvirt.driver [-] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Instance destroyed successfully.#033[00m
Nov 22 04:27:44 np0005532048 nova_compute[253661]: 2025-11-22 09:27:44.762 253665 DEBUG nova.compute.manager [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:44 np0005532048 nova_compute[253661]: 2025-11-22 09:27:44.765 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:27:44 np0005532048 nova_compute[253661]: 2025-11-22 09:27:44.765 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquired lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:27:44 np0005532048 nova_compute[253661]: 2025-11-22 09:27:44.765 253665 DEBUG nova.network.neutron [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.096 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.097 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Ensure instance console log exists: /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.098 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.098 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.099 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.101 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.107 253665 WARNING nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.115 253665 DEBUG nova.virt.libvirt.host [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.116 253665 DEBUG nova.virt.libvirt.host [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.122 253665 DEBUG nova.virt.libvirt.host [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.122 253665 DEBUG nova.virt.libvirt.host [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.123 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.123 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.124 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.124 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.125 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.125 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.125 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.125 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.126 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.126 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.126 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.126 253665 DEBUG nova.virt.hardware [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.127 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'vcpu_model' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.141 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.578 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:27:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322883611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.618 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 347 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.6 MiB/s wr, 162 op/s
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.652 253665 DEBUG nova.storage.rbd_utils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:45 np0005532048 nova_compute[253661]: 2025-11-22 09:27:45.657 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:27:46 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209297601' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:27:46 np0005532048 nova_compute[253661]: 2025-11-22 09:27:46.144 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:46 np0005532048 nova_compute[253661]: 2025-11-22 09:27:46.160 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:46 np0005532048 nova_compute[253661]: 2025-11-22 09:27:46.165 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  <uuid>be0569c8-2c59-4525-a348-590d878662d8</uuid>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  <name>instance-0000005a</name>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerShowV254Test-server-794044049</nova:name>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:27:45</nova:creationTime>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:27:46 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:        <nova:user uuid="6f9df33c6ddf4ec9a99024bbc6085706">tempest-ServerShowV254Test-1012776663-project-member</nova:user>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:        <nova:project uuid="8b6aee60ba934808adf8732a1c4457cb">tempest-ServerShowV254Test-1012776663</nova:project>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <entry name="serial">be0569c8-2c59-4525-a348-590d878662d8</entry>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <entry name="uuid">be0569c8-2c59-4525-a348-590d878662d8</entry>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/be0569c8-2c59-4525-a348-590d878662d8_disk">
Nov 22 04:27:46 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:27:46 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/be0569c8-2c59-4525-a348-590d878662d8_disk.config">
Nov 22 04:27:46 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:27:46 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/console.log" append="off"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:27:46 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:27:46 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:27:46 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:27:46 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:27:46 np0005532048 nova_compute[253661]: 2025-11-22 09:27:46.238 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:27:46 np0005532048 nova_compute[253661]: 2025-11-22 09:27:46.238 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:27:46 np0005532048 nova_compute[253661]: 2025-11-22 09:27:46.239 253665 INFO nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Using config drive#033[00m
Nov 22 04:27:46 np0005532048 nova_compute[253661]: 2025-11-22 09:27:46.265 253665 DEBUG nova.storage.rbd_utils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:46 np0005532048 nova_compute[253661]: 2025-11-22 09:27:46.284 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'ec2_ids' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:27:46 np0005532048 nova_compute[253661]: 2025-11-22 09:27:46.471 253665 DEBUG nova.network.neutron [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Updating instance_info_cache with network_info: [{"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:27:46 np0005532048 nova_compute[253661]: 2025-11-22 09:27:46.486 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Releasing lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:27:46 np0005532048 nova_compute[253661]: 2025-11-22 09:27:46.508 253665 INFO nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Creating config drive at /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config#033[00m
Nov 22 04:27:46 np0005532048 nova_compute[253661]: 2025-11-22 09:27:46.513 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd65biy7_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:46 np0005532048 nova_compute[253661]: 2025-11-22 09:27:46.658 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd65biy7_" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:46 np0005532048 nova_compute[253661]: 2025-11-22 09:27:46.687 253665 DEBUG nova.storage.rbd_utils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] rbd image be0569c8-2c59-4525-a348-590d878662d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:27:46 np0005532048 nova_compute[253661]: 2025-11-22 09:27:46.691 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config be0569c8-2c59-4525-a348-590d878662d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:46 np0005532048 nova_compute[253661]: 2025-11-22 09:27:46.889 253665 DEBUG oslo_concurrency.processutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config be0569c8-2c59-4525-a348-590d878662d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:46 np0005532048 nova_compute[253661]: 2025-11-22 09:27:46.890 253665 INFO nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Deleting local config drive /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8/disk.config because it was imported into RBD.#033[00m
Nov 22 04:27:46 np0005532048 systemd-machined[215941]: New machine qemu-110-instance-0000005a.
Nov 22 04:27:46 np0005532048 systemd[1]: Started Virtual Machine qemu-110-instance-0000005a.
Nov 22 04:27:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 331 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.9 MiB/s wr, 134 op/s
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.907 253665 INFO nova.virt.libvirt.driver [-] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Instance destroyed successfully.#033[00m
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.909 253665 DEBUG nova.objects.instance [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'resources' on Instance uuid 753ce408-3988-4fe2-b140-9cae60fcdd6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:27:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:27:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.920 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for be0569c8-2c59-4525-a348-590d878662d8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.920 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803667.9195416, be0569c8-2c59-4525-a348-590d878662d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.920 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:27:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.927 253665 DEBUG nova.virt.libvirt.vif [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-695887300',display_name='tempest-ServerActionsTestOtherB-server-695887300',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-695887300',id=91,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:27:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-h01c4fzy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member',shelved_at='2025-11-22T09:27:44.678146',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e371ade6-6f76-4ac1-a229-6a09e518f8de'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:27:38Z,user_data=None,user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=753ce408-3988-4fe2-b140-9cae60fcdd6b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.928 253665 DEBUG nova.network.os_vif_util [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28cfa04a-01", "ovs_interfaceid": "28cfa04a-0181-49ef-808d-38b57a093820", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:27:47 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.929 253665 DEBUG nova.network.os_vif_util [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:df:55,bridge_name='br-int',has_traffic_filtering=True,id=28cfa04a-0181-49ef-808d-38b57a093820,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28cfa04a-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.930 253665 DEBUG os_vif [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:df:55,bridge_name='br-int',has_traffic_filtering=True,id=28cfa04a-0181-49ef-808d-38b57a093820,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28cfa04a-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.933 253665 DEBUG nova.compute.manager [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.933 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.934 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.934 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28cfa04a-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.936 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.938 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.941 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.942 253665 INFO os_vif [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:df:55,bridge_name='br-int',has_traffic_filtering=True,id=28cfa04a-0181-49ef-808d-38b57a093820,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28cfa04a-01')#033[00m
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.970 253665 INFO nova.virt.libvirt.driver [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance spawned successfully.#033[00m
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.971 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:27:47 np0005532048 nova_compute[253661]: 2025-11-22 09:27:47.974 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.000 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.001 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803667.921067, be0569c8-2c59-4525-a348-590d878662d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.002 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] VM Started (Lifecycle Event)#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.007 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.008 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.008 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.009 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.009 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.010 253665 DEBUG nova.virt.libvirt.driver [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.018 253665 DEBUG nova.compute.manager [req-49f1f94b-163a-4cd5-aa6c-2f0620baacd9 req-4b76ba59-033d-4b30-a074-b57f4dd002cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Received event network-changed-28cfa04a-0181-49ef-808d-38b57a093820 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.019 253665 DEBUG nova.compute.manager [req-49f1f94b-163a-4cd5-aa6c-2f0620baacd9 req-4b76ba59-033d-4b30-a074-b57f4dd002cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Refreshing instance network info cache due to event network-changed-28cfa04a-0181-49ef-808d-38b57a093820. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.019 253665 DEBUG oslo_concurrency.lockutils [req-49f1f94b-163a-4cd5-aa6c-2f0620baacd9 req-4b76ba59-033d-4b30-a074-b57f4dd002cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.019 253665 DEBUG oslo_concurrency.lockutils [req-49f1f94b-163a-4cd5-aa6c-2f0620baacd9 req-4b76ba59-033d-4b30-a074-b57f4dd002cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.020 253665 DEBUG nova.network.neutron [req-49f1f94b-163a-4cd5-aa6c-2f0620baacd9 req-4b76ba59-033d-4b30-a074-b57f4dd002cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Refreshing network info cache for port 28cfa04a-0181-49ef-808d-38b57a093820 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.021 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.026 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.052 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.075 253665 DEBUG nova.compute.manager [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.129 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.129 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.130 253665 DEBUG nova.objects.instance [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.184 253665 DEBUG oslo_concurrency.lockutils [None req-f81f0d79-0827-4fe0-a506-aae6d7127b02 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.386 253665 INFO nova.virt.libvirt.driver [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Deleting instance files /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b_del#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.387 253665 INFO nova.virt.libvirt.driver [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Deletion of /var/lib/nova/instances/753ce408-3988-4fe2-b140-9cae60fcdd6b_del complete#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.480 253665 INFO nova.scheduler.client.report [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Deleted allocations for instance 753ce408-3988-4fe2-b140-9cae60fcdd6b#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.521 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.521 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.575 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "be0569c8-2c59-4525-a348-590d878662d8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.575 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "be0569c8-2c59-4525-a348-590d878662d8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.576 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "be0569c8-2c59-4525-a348-590d878662d8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.576 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "be0569c8-2c59-4525-a348-590d878662d8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.576 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "be0569c8-2c59-4525-a348-590d878662d8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.577 253665 INFO nova.compute.manager [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Terminating instance#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.578 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "refresh_cache-be0569c8-2c59-4525-a348-590d878662d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.578 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquired lock "refresh_cache-be0569c8-2c59-4525-a348-590d878662d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.578 253665 DEBUG nova.network.neutron [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:27:48 np0005532048 nova_compute[253661]: 2025-11-22 09:27:48.624 253665 DEBUG oslo_concurrency.processutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:27:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2860459903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:27:49 np0005532048 nova_compute[253661]: 2025-11-22 09:27:49.113 253665 DEBUG oslo_concurrency.processutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:27:49 np0005532048 nova_compute[253661]: 2025-11-22 09:27:49.119 253665 DEBUG nova.compute.provider_tree [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:27:49 np0005532048 nova_compute[253661]: 2025-11-22 09:27:49.137 253665 DEBUG nova.scheduler.client.report [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:27:49 np0005532048 nova_compute[253661]: 2025-11-22 09:27:49.157 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:49 np0005532048 nova_compute[253661]: 2025-11-22 09:27:49.215 253665 DEBUG oslo_concurrency.lockutils [None req-8f064d10-e220-4abd-a64f-8e47ae85a074 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "753ce408-3988-4fe2-b140-9cae60fcdd6b" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 11.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:27:49 np0005532048 nova_compute[253661]: 2025-11-22 09:27:49.447 253665 DEBUG nova.network.neutron [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:27:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 323 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.2 MiB/s wr, 258 op/s
Nov 22 04:27:50 np0005532048 nova_compute[253661]: 2025-11-22 09:27:50.179 253665 DEBUG nova.network.neutron [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:27:50 np0005532048 nova_compute[253661]: 2025-11-22 09:27:50.195 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Releasing lock "refresh_cache-be0569c8-2c59-4525-a348-590d878662d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:27:50 np0005532048 nova_compute[253661]: 2025-11-22 09:27:50.196 253665 DEBUG nova.compute.manager [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:27:50 np0005532048 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Nov 22 04:27:50 np0005532048 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d0000005a.scope: Consumed 3.179s CPU time.
Nov 22 04:27:50 np0005532048 systemd-machined[215941]: Machine qemu-110-instance-0000005a terminated.
Nov 22 04:27:50 np0005532048 nova_compute[253661]: 2025-11-22 09:27:50.422 253665 INFO nova.virt.libvirt.driver [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance destroyed successfully.#033[00m
Nov 22 04:27:50 np0005532048 nova_compute[253661]: 2025-11-22 09:27:50.423 253665 DEBUG nova.objects.instance [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lazy-loading 'resources' on Instance uuid be0569c8-2c59-4525-a348-590d878662d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:27:50 np0005532048 nova_compute[253661]: 2025-11-22 09:27:50.580 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:50 np0005532048 nova_compute[253661]: 2025-11-22 09:27:50.619 253665 DEBUG nova.network.neutron [req-49f1f94b-163a-4cd5-aa6c-2f0620baacd9 req-4b76ba59-033d-4b30-a074-b57f4dd002cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Updated VIF entry in instance network info cache for port 28cfa04a-0181-49ef-808d-38b57a093820. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:27:50 np0005532048 nova_compute[253661]: 2025-11-22 09:27:50.620 253665 DEBUG nova.network.neutron [req-49f1f94b-163a-4cd5-aa6c-2f0620baacd9 req-4b76ba59-033d-4b30-a074-b57f4dd002cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Updating instance_info_cache with network_info: [{"id": "28cfa04a-0181-49ef-808d-38b57a093820", "address": "fa:16:3e:26:df:55", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": null, "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap28cfa04a-01", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:27:50 np0005532048 nova_compute[253661]: 2025-11-22 09:27:50.639 253665 DEBUG oslo_concurrency.lockutils [req-49f1f94b-163a-4cd5-aa6c-2f0620baacd9 req-4b76ba59-033d-4b30-a074-b57f4dd002cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-753ce408-3988-4fe2-b140-9cae60fcdd6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:27:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 323 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 212 op/s
Nov 22 04:27:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:27:52
Nov 22 04:27:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:27:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:27:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.mgr', 'backups', '.rgw.root', 'images', 'cephfs.cephfs.data', 'vms']
Nov 22 04:27:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:27:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:27:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:27:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:27:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:27:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:27:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:27:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:27:52 np0005532048 nova_compute[253661]: 2025-11-22 09:27:52.925 253665 INFO nova.virt.libvirt.driver [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Deleting instance files /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8_del#033[00m
Nov 22 04:27:52 np0005532048 nova_compute[253661]: 2025-11-22 09:27:52.926 253665 INFO nova.virt.libvirt.driver [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Deletion of /var/lib/nova/instances/be0569c8-2c59-4525-a348-590d878662d8_del complete#033[00m
Nov 22 04:27:52 np0005532048 nova_compute[253661]: 2025-11-22 09:27:52.938 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:27:52 np0005532048 nova_compute[253661]: 2025-11-22 09:27:52.996 253665 INFO nova.compute.manager [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] [instance: be0569c8-2c59-4525-a348-590d878662d8] Took 2.80 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:27:52 np0005532048 nova_compute[253661]: 2025-11-22 09:27:52.996 253665 DEBUG oslo.service.loopingcall [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:27:52 np0005532048 nova_compute[253661]: 2025-11-22 09:27:52.996 253665 DEBUG nova.compute.manager [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:27:52 np0005532048 nova_compute[253661]: 2025-11-22 09:27:52.997 253665 DEBUG nova.network.neutron [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.137 253665 DEBUG nova.network.neutron [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.154 253665 DEBUG nova.network.neutron [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.158 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.158 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.159 253665 INFO nova.compute.manager [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Shelving#033[00m
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.171 253665 INFO nova.compute.manager [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] Took 0.17 seconds to deallocate network for instance.#033[00m
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.189 253665 DEBUG nova.virt.libvirt.driver [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.219 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.219 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.293 253665 DEBUG oslo_concurrency.processutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.625 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803658.6240451, 753ce408-3988-4fe2-b140-9cae60fcdd6b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.626 253665 INFO nova.compute.manager [-] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:27:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 285 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 234 op/s
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.644 253665 DEBUG nova.compute.manager [None req-235fe83e-6187-46fc-be7b-9c7c67970931 - - - - - -] [instance: 753ce408-3988-4fe2-b140-9cae60fcdd6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:27:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:27:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4114288654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.899 253665 DEBUG oslo_concurrency.processutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.906 253665 DEBUG nova.compute.provider_tree [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.926 253665 DEBUG nova.scheduler.client.report [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.949 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:27:53 np0005532048 nova_compute[253661]: 2025-11-22 09:27:53.989 253665 INFO nova.scheduler.client.report [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Deleted allocations for instance be0569c8-2c59-4525-a348-590d878662d8
Nov 22 04:27:54 np0005532048 nova_compute[253661]: 2025-11-22 09:27:54.081 253665 DEBUG oslo_concurrency.lockutils [None req-f77b9afa-939e-4bc4-ada5-4e86ab10aae0 6f9df33c6ddf4ec9a99024bbc6085706 8b6aee60ba934808adf8732a1c4457cb - - default default] Lock "be0569c8-2c59-4525-a348-590d878662d8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:27:55 np0005532048 nova_compute[253661]: 2025-11-22 09:27:55.583 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:27:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:27:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:27:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:27:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:27:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:27:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:27:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:27:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:27:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:27:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:27:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 261 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 836 KiB/s wr, 190 op/s
Nov 22 04:27:57 np0005532048 nova_compute[253661]: 2025-11-22 09:27:57.220 253665 INFO nova.virt.libvirt.driver [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance shutdown successfully after 4 seconds.
Nov 22 04:27:57 np0005532048 kernel: tapb1fc96be-00 (unregistering): left promiscuous mode
Nov 22 04:27:57 np0005532048 NetworkManager[48920]: <info>  [1763803677.3022] device (tapb1fc96be-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:27:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:27:57Z|00919|binding|INFO|Releasing lport b1fc96be-009e-46a8-829c-b7a0bc42af60 from this chassis (sb_readonly=0)
Nov 22 04:27:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:27:57Z|00920|binding|INFO|Setting lport b1fc96be-009e-46a8-829c-b7a0bc42af60 down in Southbound
Nov 22 04:27:57 np0005532048 nova_compute[253661]: 2025-11-22 09:27:57.318 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:27:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:27:57Z|00921|binding|INFO|Removing iface tapb1fc96be-00 ovn-installed in OVS
Nov 22 04:27:57 np0005532048 nova_compute[253661]: 2025-11-22 09:27:57.322 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:27:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.332 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:67:ca 10.100.0.10'], port_security=['fa:16:3e:38:67:ca 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e4f9440c-7476-4022-8d08-1b3151a9db79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e37df2c8-4dc4-418d-92f1-b394537a30da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a246689624d4630a70f69b70d048883', 'neutron:revision_number': '4', 'neutron:security_group_ids': '33563511-c966-495c-93cb-386deb50a2bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7135b765-78b7-490e-8e9e-3f8a3fb53933, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b1fc96be-009e-46a8-829c-b7a0bc42af60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:27:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.335 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b1fc96be-009e-46a8-829c-b7a0bc42af60 in datapath e37df2c8-4dc4-418d-92f1-b394537a30da unbound from our chassis
Nov 22 04:27:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.338 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e37df2c8-4dc4-418d-92f1-b394537a30da
Nov 22 04:27:57 np0005532048 nova_compute[253661]: 2025-11-22 09:27:57.352 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:27:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.369 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3e7f48-a42f-4b94-bfce-67d30f43048f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:27:57 np0005532048 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000056.scope: Deactivated successfully.
Nov 22 04:27:57 np0005532048 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000056.scope: Consumed 19.367s CPU time.
Nov 22 04:27:57 np0005532048 systemd-machined[215941]: Machine qemu-104-instance-00000056 terminated.
Nov 22 04:27:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.417 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9be7921c-f59a-472b-a83d-f75af3cc4010]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:27:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.422 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b94c4b78-39db-410b-8507-df16c247ab7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:27:57 np0005532048 nova_compute[253661]: 2025-11-22 09:27:57.448 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:27:57 np0005532048 nova_compute[253661]: 2025-11-22 09:27:57.457 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:27:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.466 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[abf40546-4da7-46a3-8637-677e8a914f06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:27:57 np0005532048 nova_compute[253661]: 2025-11-22 09:27:57.467 253665 INFO nova.virt.libvirt.driver [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance destroyed successfully.
Nov 22 04:27:57 np0005532048 nova_compute[253661]: 2025-11-22 09:27:57.468 253665 DEBUG nova.objects.instance [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'numa_topology' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:27:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.495 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5e60965f-e60d-44ca-8880-27e68b5628a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape37df2c8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c4:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640468, 'reachable_time': 21674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345587, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.518 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2dea3212-7824-46fa-87d6-b5f8034bc3f1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640485, 'tstamp': 640485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345588, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640489, 'tstamp': 640489}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345588, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:27:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.519 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape37df2c8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:27:57 np0005532048 nova_compute[253661]: 2025-11-22 09:27:57.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:27:57 np0005532048 nova_compute[253661]: 2025-11-22 09:27:57.558 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:27:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.559 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape37df2c8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:27:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.560 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:27:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.560 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape37df2c8-40, col_values=(('external_ids', {'iface-id': '93c31381-1979-4cee-982c-9507d8ee6c9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:27:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:27:57.561 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:27:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 376 KiB/s wr, 190 op/s
Nov 22 04:27:57 np0005532048 nova_compute[253661]: 2025-11-22 09:27:57.904 253665 INFO nova.virt.libvirt.driver [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Beginning cold snapshot process
Nov 22 04:27:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:27:57 np0005532048 nova_compute[253661]: 2025-11-22 09:27:57.940 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:28:58 np0005532048 nova_compute[253661]: 2025-11-22 09:27:58.057 253665 DEBUG nova.virt.libvirt.imagebackend [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 04:27:58 np0005532048 nova_compute[253661]: 2025-11-22 09:27:58.237 253665 DEBUG nova.storage.rbd_utils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(afac88730f284b09ab567ec387418c7b) on rbd image(e4f9440c-7476-4022-8d08-1b3151a9db79_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 04:27:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Nov 22 04:27:59 np0005532048 nova_compute[253661]: 2025-11-22 09:27:59.219 253665 DEBUG nova.compute.manager [req-f0698645-4d81-4267-8695-f62fd073264b req-2febab03-4d12-4136-a8c9-1796b2ee7502 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-unplugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:27:59 np0005532048 nova_compute[253661]: 2025-11-22 09:27:59.219 253665 DEBUG oslo_concurrency.lockutils [req-f0698645-4d81-4267-8695-f62fd073264b req-2febab03-4d12-4136-a8c9-1796b2ee7502 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:27:59 np0005532048 nova_compute[253661]: 2025-11-22 09:27:59.219 253665 DEBUG oslo_concurrency.lockutils [req-f0698645-4d81-4267-8695-f62fd073264b req-2febab03-4d12-4136-a8c9-1796b2ee7502 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:27:59 np0005532048 nova_compute[253661]: 2025-11-22 09:27:59.220 253665 DEBUG oslo_concurrency.lockutils [req-f0698645-4d81-4267-8695-f62fd073264b req-2febab03-4d12-4136-a8c9-1796b2ee7502 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:27:59 np0005532048 nova_compute[253661]: 2025-11-22 09:27:59.220 253665 DEBUG nova.compute.manager [req-f0698645-4d81-4267-8695-f62fd073264b req-2febab03-4d12-4136-a8c9-1796b2ee7502 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] No waiting events found dispatching network-vif-unplugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:27:59 np0005532048 nova_compute[253661]: 2025-11-22 09:27:59.220 253665 WARNING nova.compute.manager [req-f0698645-4d81-4267-8695-f62fd073264b req-2febab03-4d12-4136-a8c9-1796b2ee7502 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received unexpected event network-vif-unplugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 for instance with vm_state active and task_state shelving_image_uploading.
Nov 22 04:27:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Nov 22 04:27:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Nov 22 04:27:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 93 op/s
Nov 22 04:27:59 np0005532048 nova_compute[253661]: 2025-11-22 09:27:59.996 253665 DEBUG nova.storage.rbd_utils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] cloning vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk@afac88730f284b09ab567ec387418c7b to images/38789998-3bd3-4ad6-a223-58e845dd36f2 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 04:28:00 np0005532048 nova_compute[253661]: 2025-11-22 09:28:00.586 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:28:01 np0005532048 nova_compute[253661]: 2025-11-22 09:28:01.191 253665 DEBUG nova.storage.rbd_utils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] flattening images/38789998-3bd3-4ad6-a223-58e845dd36f2 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 04:28:01 np0005532048 nova_compute[253661]: 2025-11-22 09:28:01.475 253665 DEBUG nova.compute.manager [req-8a5daf33-950a-4460-96ab-0c14e42e6d7b req-247d2dc8-ffb8-4365-935a-84fe997b0706 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:28:01 np0005532048 nova_compute[253661]: 2025-11-22 09:28:01.476 253665 DEBUG oslo_concurrency.lockutils [req-8a5daf33-950a-4460-96ab-0c14e42e6d7b req-247d2dc8-ffb8-4365-935a-84fe997b0706 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:28:01 np0005532048 nova_compute[253661]: 2025-11-22 09:28:01.476 253665 DEBUG oslo_concurrency.lockutils [req-8a5daf33-950a-4460-96ab-0c14e42e6d7b req-247d2dc8-ffb8-4365-935a-84fe997b0706 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:28:01 np0005532048 nova_compute[253661]: 2025-11-22 09:28:01.476 253665 DEBUG oslo_concurrency.lockutils [req-8a5daf33-950a-4460-96ab-0c14e42e6d7b req-247d2dc8-ffb8-4365-935a-84fe997b0706 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:28:01 np0005532048 nova_compute[253661]: 2025-11-22 09:28:01.476 253665 DEBUG nova.compute.manager [req-8a5daf33-950a-4460-96ab-0c14e42e6d7b req-247d2dc8-ffb8-4365-935a-84fe997b0706 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] No waiting events found dispatching network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:28:01 np0005532048 nova_compute[253661]: 2025-11-22 09:28:01.477 253665 WARNING nova.compute.manager [req-8a5daf33-950a-4460-96ab-0c14e42e6d7b req-247d2dc8-ffb8-4365-935a-84fe997b0706 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received unexpected event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 for instance with vm_state active and task_state shelving_image_uploading.
Nov 22 04:28:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 93 op/s
Nov 22 04:28:01 np0005532048 nova_compute[253661]: 2025-11-22 09:28:01.788 253665 DEBUG nova.storage.rbd_utils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] removing snapshot(afac88730f284b09ab567ec387418c7b) on rbd image(e4f9440c-7476-4022-8d08-1b3151a9db79_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 04:28:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Nov 22 04:28:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Nov 22 04:28:02 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015192456005404702 of space, bias 1.0, pg target 0.4557736801621411 quantized to 32 (current 32)
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001336931041218988 of space, bias 1.0, pg target 0.40107931236569644 quantized to 32 (current 32)
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:28:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:28:02 np0005532048 nova_compute[253661]: 2025-11-22 09:28:02.765 253665 DEBUG nova.storage.rbd_utils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] creating snapshot(snap) on rbd image(38789998-3bd3-4ad6-a223-58e845dd36f2) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:28:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:28:02 np0005532048 nova_compute[253661]: 2025-11-22 09:28:02.942 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 318 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 5.4 MiB/s wr, 59 op/s
Nov 22 04:28:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Nov 22 04:28:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Nov 22 04:28:03 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Nov 22 04:28:04 np0005532048 nova_compute[253661]: 2025-11-22 09:28:04.716 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:04.832 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:28:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:04.833 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:28:04 np0005532048 nova_compute[253661]: 2025-11-22 09:28:04.834 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:05 np0005532048 nova_compute[253661]: 2025-11-22 09:28:05.421 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803670.4197698, be0569c8-2c59-4525-a348-590d878662d8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:28:05 np0005532048 nova_compute[253661]: 2025-11-22 09:28:05.422 253665 INFO nova.compute.manager [-] [instance: be0569c8-2c59-4525-a348-590d878662d8] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:28:05 np0005532048 nova_compute[253661]: 2025-11-22 09:28:05.447 253665 DEBUG nova.compute.manager [None req-173c971e-254e-4772-aa4c-a7c7ab2c551d - - - - - -] [instance: be0569c8-2c59-4525-a348-590d878662d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:28:05 np0005532048 nova_compute[253661]: 2025-11-22 09:28:05.588 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 325 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 7.5 MiB/s wr, 131 op/s
Nov 22 04:28:05 np0005532048 nova_compute[253661]: 2025-11-22 09:28:05.793 253665 INFO nova.virt.libvirt.driver [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Snapshot image upload complete#033[00m
Nov 22 04:28:05 np0005532048 nova_compute[253661]: 2025-11-22 09:28:05.793 253665 DEBUG nova.compute.manager [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:28:05 np0005532048 nova_compute[253661]: 2025-11-22 09:28:05.857 253665 INFO nova.compute.manager [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Shelve offloading#033[00m
Nov 22 04:28:05 np0005532048 nova_compute[253661]: 2025-11-22 09:28:05.866 253665 INFO nova.virt.libvirt.driver [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance destroyed successfully.#033[00m
Nov 22 04:28:05 np0005532048 nova_compute[253661]: 2025-11-22 09:28:05.867 253665 DEBUG nova.compute.manager [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:28:05 np0005532048 nova_compute[253661]: 2025-11-22 09:28:05.870 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:28:05 np0005532048 nova_compute[253661]: 2025-11-22 09:28:05.870 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquired lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:28:05 np0005532048 nova_compute[253661]: 2025-11-22 09:28:05.870 253665 DEBUG nova.network.neutron [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:28:06 np0005532048 nova_compute[253661]: 2025-11-22 09:28:06.626 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 325 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 104 op/s
Nov 22 04:28:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:07.836 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:28:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:28:07 np0005532048 nova_compute[253661]: 2025-11-22 09:28:07.944 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:08 np0005532048 nova_compute[253661]: 2025-11-22 09:28:08.415 253665 DEBUG nova.network.neutron [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:28:08 np0005532048 podman[345732]: 2025-11-22 09:28:08.417713302 +0000 UTC m=+0.091147373 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:28:08 np0005532048 podman[345731]: 2025-11-22 09:28:08.424910473 +0000 UTC m=+0.109650678 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 22 04:28:08 np0005532048 nova_compute[253661]: 2025-11-22 09:28:08.437 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Releasing lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:28:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:28:08Z|00922|binding|INFO|Releasing lport 93c31381-1979-4cee-982c-9507d8ee6c9a from this chassis (sb_readonly=0)
Nov 22 04:28:08 np0005532048 nova_compute[253661]: 2025-11-22 09:28:08.576 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:09 np0005532048 ovn_controller[152872]: 2025-11-22T09:28:09Z|00923|binding|INFO|Releasing lport 93c31381-1979-4cee-982c-9507d8ee6c9a from this chassis (sb_readonly=0)
Nov 22 04:28:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 325 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 122 op/s
Nov 22 04:28:09 np0005532048 nova_compute[253661]: 2025-11-22 09:28:09.648 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.529 253665 INFO nova.virt.libvirt.driver [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance destroyed successfully.#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.529 253665 DEBUG nova.objects.instance [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'resources' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.540 253665 DEBUG nova.virt.libvirt.vif [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1215087159',display_name='tempest-ServerActionsTestOtherB-server-1215087159',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1215087159',id=86,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJ/cRG5bfHD3LbYWZfZhBZW64Gzk9NiecmZChn56cNdUeqOvdqm8gZ047E1aOD+/1rWy6Q/20jfwuj+tARiRMK9Fr/axSxMkwZvm5uYPBSn1o0uJaQf1m6OZmN9YqP8SQ==',key_name='tempest-keypair-427391145',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:25:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-exobbdub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member',shelved_at='2025-11-22T09:28:05.793825',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='38789998-3bd3-4ad6-a223-58e845dd36f2'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:27:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=e4f9440c-7476-4022-8d08-1b3151a9db79,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": 
"e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.540 253665 DEBUG nova.network.os_vif_util [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.541 253665 DEBUG nova.network.os_vif_util [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.541 253665 DEBUG os_vif [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.543 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.543 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1fc96be-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.545 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.547 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.551 253665 INFO os_vif [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00')#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.591 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.662 253665 DEBUG nova.compute.manager [req-c6c3feee-8025-4574-bb6b-66fc5b62ef33 req-ae25a120-f76e-4c5f-83f1-f2bb93fc33b7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-changed-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.662 253665 DEBUG nova.compute.manager [req-c6c3feee-8025-4574-bb6b-66fc5b62ef33 req-ae25a120-f76e-4c5f-83f1-f2bb93fc33b7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Refreshing instance network info cache due to event network-changed-b1fc96be-009e-46a8-829c-b7a0bc42af60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.663 253665 DEBUG oslo_concurrency.lockutils [req-c6c3feee-8025-4574-bb6b-66fc5b62ef33 req-ae25a120-f76e-4c5f-83f1-f2bb93fc33b7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.663 253665 DEBUG oslo_concurrency.lockutils [req-c6c3feee-8025-4574-bb6b-66fc5b62ef33 req-ae25a120-f76e-4c5f-83f1-f2bb93fc33b7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.663 253665 DEBUG nova.network.neutron [req-c6c3feee-8025-4574-bb6b-66fc5b62ef33 req-ae25a120-f76e-4c5f-83f1-f2bb93fc33b7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Refreshing network info cache for port b1fc96be-009e-46a8-829c-b7a0bc42af60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.987 253665 INFO nova.virt.libvirt.driver [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Deleting instance files /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79_del#033[00m
Nov 22 04:28:10 np0005532048 nova_compute[253661]: 2025-11-22 09:28:10.988 253665 INFO nova.virt.libvirt.driver [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Deletion of /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79_del complete#033[00m
Nov 22 04:28:11 np0005532048 nova_compute[253661]: 2025-11-22 09:28:11.081 253665 INFO nova.scheduler.client.report [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Deleted allocations for instance e4f9440c-7476-4022-8d08-1b3151a9db79#033[00m
Nov 22 04:28:11 np0005532048 nova_compute[253661]: 2025-11-22 09:28:11.122 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:28:11 np0005532048 nova_compute[253661]: 2025-11-22 09:28:11.123 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:28:11 np0005532048 nova_compute[253661]: 2025-11-22 09:28:11.177 253665 DEBUG oslo_concurrency.processutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:28:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:28:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3349675935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:28:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 325 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.0 MiB/s wr, 98 op/s
Nov 22 04:28:11 np0005532048 nova_compute[253661]: 2025-11-22 09:28:11.650 253665 DEBUG oslo_concurrency.processutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:28:11 np0005532048 nova_compute[253661]: 2025-11-22 09:28:11.655 253665 DEBUG nova.compute.provider_tree [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:28:11 np0005532048 nova_compute[253661]: 2025-11-22 09:28:11.671 253665 DEBUG nova.scheduler.client.report [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:28:11 np0005532048 nova_compute[253661]: 2025-11-22 09:28:11.694 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:28:11 np0005532048 nova_compute[253661]: 2025-11-22 09:28:11.737 253665 DEBUG oslo_concurrency.lockutils [None req-901807fc-0093-464e-bb74-89c063e40562 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 18.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:28:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:28:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3902416551' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:28:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:28:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3902416551' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:28:12 np0005532048 nova_compute[253661]: 2025-11-22 09:28:12.462 253665 DEBUG nova.network.neutron [req-c6c3feee-8025-4574-bb6b-66fc5b62ef33 req-ae25a120-f76e-4c5f-83f1-f2bb93fc33b7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updated VIF entry in instance network info cache for port b1fc96be-009e-46a8-829c-b7a0bc42af60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:28:12 np0005532048 nova_compute[253661]: 2025-11-22 09:28:12.463 253665 DEBUG nova.network.neutron [req-c6c3feee-8025-4574-bb6b-66fc5b62ef33 req-ae25a120-f76e-4c5f-83f1-f2bb93fc33b7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": null, "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapb1fc96be-00", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:28:12 np0005532048 nova_compute[253661]: 2025-11-22 09:28:12.464 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803677.4639907, e4f9440c-7476-4022-8d08-1b3151a9db79 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:28:12 np0005532048 nova_compute[253661]: 2025-11-22 09:28:12.465 253665 INFO nova.compute.manager [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] VM Stopped (Lifecycle Event)
Nov 22 04:28:12 np0005532048 nova_compute[253661]: 2025-11-22 09:28:12.494 253665 DEBUG nova.compute.manager [None req-4e836ea3-53af-4367-9d66-f74835c4ae39 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:28:12 np0005532048 nova_compute[253661]: 2025-11-22 09:28:12.495 253665 DEBUG oslo_concurrency.lockutils [req-c6c3feee-8025-4574-bb6b-66fc5b62ef33 req-ae25a120-f76e-4c5f-83f1-f2bb93fc33b7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:28:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:28:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Nov 22 04:28:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Nov 22 04:28:12 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Nov 22 04:28:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 264 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 715 KiB/s rd, 404 KiB/s wr, 83 op/s
Nov 22 04:28:15 np0005532048 podman[345811]: 2025-11-22 09:28:15.394552723 +0000 UTC m=+0.083313087 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 22 04:28:15 np0005532048 nova_compute[253661]: 2025-11-22 09:28:15.545 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:28:15 np0005532048 nova_compute[253661]: 2025-11-22 09:28:15.593 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:28:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.7 KiB/s wr, 48 op/s
Nov 22 04:28:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.7 KiB/s wr, 47 op/s
Nov 22 04:28:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:28:19 np0005532048 nova_compute[253661]: 2025-11-22 09:28:19.374 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:28:19 np0005532048 nova_compute[253661]: 2025-11-22 09:28:19.374 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:28:19 np0005532048 nova_compute[253661]: 2025-11-22 09:28:19.375 253665 INFO nova.compute.manager [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Unshelving
Nov 22 04:28:19 np0005532048 nova_compute[253661]: 2025-11-22 09:28:19.470 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:28:19 np0005532048 nova_compute[253661]: 2025-11-22 09:28:19.471 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:28:19 np0005532048 nova_compute[253661]: 2025-11-22 09:28:19.478 253665 DEBUG nova.objects.instance [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'pci_requests' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:28:19 np0005532048 nova_compute[253661]: 2025-11-22 09:28:19.495 253665 DEBUG nova.objects.instance [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'numa_topology' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:28:19 np0005532048 nova_compute[253661]: 2025-11-22 09:28:19.509 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:28:19 np0005532048 nova_compute[253661]: 2025-11-22 09:28:19.509 253665 INFO nova.compute.claims [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:28:19 np0005532048 nova_compute[253661]: 2025-11-22 09:28:19.637 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:28:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 KiB/s wr, 34 op/s
Nov 22 04:28:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:28:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/624732545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:28:20 np0005532048 nova_compute[253661]: 2025-11-22 09:28:20.097 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:28:20 np0005532048 nova_compute[253661]: 2025-11-22 09:28:20.104 253665 DEBUG nova.compute.provider_tree [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:28:20 np0005532048 nova_compute[253661]: 2025-11-22 09:28:20.118 253665 DEBUG nova.scheduler.client.report [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:28:20 np0005532048 nova_compute[253661]: 2025-11-22 09:28:20.135 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:28:20 np0005532048 nova_compute[253661]: 2025-11-22 09:28:20.423 253665 INFO nova.network.neutron [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating port b1fc96be-009e-46a8-829c-b7a0bc42af60 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 22 04:28:20 np0005532048 nova_compute[253661]: 2025-11-22 09:28:20.550 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:28:20 np0005532048 nova_compute[253661]: 2025-11-22 09:28:20.595 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:28:21 np0005532048 nova_compute[253661]: 2025-11-22 09:28:21.083 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:28:21 np0005532048 nova_compute[253661]: 2025-11-22 09:28:21.083 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquired lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:28:21 np0005532048 nova_compute[253661]: 2025-11-22 09:28:21.084 253665 DEBUG nova.network.neutron [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:28:21 np0005532048 nova_compute[253661]: 2025-11-22 09:28:21.314 253665 DEBUG nova.compute.manager [req-1bebd1c6-6af1-44ee-858e-aacda09a9d7e req-de2d58fc-ee51-4cfc-80b0-b8431bef576a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-changed-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:28:21 np0005532048 nova_compute[253661]: 2025-11-22 09:28:21.314 253665 DEBUG nova.compute.manager [req-1bebd1c6-6af1-44ee-858e-aacda09a9d7e req-de2d58fc-ee51-4cfc-80b0-b8431bef576a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Refreshing instance network info cache due to event network-changed-b1fc96be-009e-46a8-829c-b7a0bc42af60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:28:21 np0005532048 nova_compute[253661]: 2025-11-22 09:28:21.315 253665 DEBUG oslo_concurrency.lockutils [req-1bebd1c6-6af1-44ee-858e-aacda09a9d7e req-de2d58fc-ee51-4cfc-80b0-b8431bef576a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:28:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 KiB/s wr, 34 op/s
Nov 22 04:28:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:28:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:28:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:28:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:28:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:28:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:28:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:28:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 9.4 KiB/s wr, 17 op/s
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.045 253665 DEBUG nova.network.neutron [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.071 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Releasing lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.074 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.075 253665 INFO nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Creating image(s)
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.106 253665 DEBUG nova.storage.rbd_utils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.111 253665 DEBUG nova.objects.instance [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.115 253665 DEBUG oslo_concurrency.lockutils [req-1bebd1c6-6af1-44ee-858e-aacda09a9d7e req-de2d58fc-ee51-4cfc-80b0-b8431bef576a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.115 253665 DEBUG nova.network.neutron [req-1bebd1c6-6af1-44ee-858e-aacda09a9d7e req-de2d58fc-ee51-4cfc-80b0-b8431bef576a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Refreshing network info cache for port b1fc96be-009e-46a8-829c-b7a0bc42af60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.159 253665 DEBUG nova.storage.rbd_utils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.184 253665 DEBUG nova.storage.rbd_utils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.187 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "d4dd4b0658d65fea5dfcd7dfdb0a5b794029a769" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.188 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "d4dd4b0658d65fea5dfcd7dfdb0a5b794029a769" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.535 253665 DEBUG nova.virt.libvirt.imagebackend [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image locations are: [{'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/38789998-3bd3-4ad6-a223-58e845dd36f2/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/38789998-3bd3-4ad6-a223-58e845dd36f2/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.590 253665 DEBUG nova.virt.libvirt.imagebackend [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Selected location: {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/38789998-3bd3-4ad6-a223-58e845dd36f2/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.591 253665 DEBUG nova.storage.rbd_utils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] cloning images/38789998-3bd3-4ad6-a223-58e845dd36f2@snap to None/e4f9440c-7476-4022-8d08-1b3151a9db79_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.733 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "d4dd4b0658d65fea5dfcd7dfdb0a5b794029a769" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.846 253665 DEBUG nova.objects.instance [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'migration_context' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:28:24 np0005532048 nova_compute[253661]: 2025-11-22 09:28:24.898 253665 DEBUG nova.storage.rbd_utils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] flattening vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.434 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Image rbd:vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.435 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.435 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Ensure instance console log exists: /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.436 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.436 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.436 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.438 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Start _get_guest_xml network_info=[{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-22T09:27:53Z,direct_url=<?>,disk_format='raw',id=38789998-3bd3-4ad6-a223-58e845dd36f2,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1215087159-shelved',owner='8a246689624d4630a70f69b70d048883',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-22T09:28:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.447 253665 WARNING nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.458 253665 DEBUG nova.virt.libvirt.host [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.459 253665 DEBUG nova.virt.libvirt.host [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.464 253665 DEBUG nova.virt.libvirt.host [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.465 253665 DEBUG nova.virt.libvirt.host [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.465 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.465 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-22T09:27:53Z,direct_url=<?>,disk_format='raw',id=38789998-3bd3-4ad6-a223-58e845dd36f2,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1215087159-shelved',owner='8a246689624d4630a70f69b70d048883',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-22T09:28:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.466 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.466 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.466 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.466 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.467 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.467 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.467 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.467 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.467 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.468 253665 DEBUG nova.virt.hardware [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.468 253665 DEBUG nova.objects.instance [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.483 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.597 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.644 253665 DEBUG nova.network.neutron [req-1bebd1c6-6af1-44ee-858e-aacda09a9d7e req-de2d58fc-ee51-4cfc-80b0-b8431bef576a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updated VIF entry in instance network info cache for port b1fc96be-009e-46a8-829c-b7a0bc42af60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.644 253665 DEBUG nova.network.neutron [req-1bebd1c6-6af1-44ee-858e-aacda09a9d7e req-de2d58fc-ee51-4cfc-80b0-b8431bef576a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [{"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:28:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 246 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 8.4 KiB/s wr, 13 op/s
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.659 253665 DEBUG oslo_concurrency.lockutils [req-1bebd1c6-6af1-44ee-858e-aacda09a9d7e req-de2d58fc-ee51-4cfc-80b0-b8431bef576a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-e4f9440c-7476-4022-8d08-1b3151a9db79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:28:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:28:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2634363782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:28:25 np0005532048 nova_compute[253661]: 2025-11-22 09:28:25.994 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.018 253665 DEBUG nova.storage.rbd_utils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.023 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:28:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:28:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/448644035' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.507 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.510 253665 DEBUG nova.virt.libvirt.vif [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1215087159',display_name='tempest-ServerActionsTestOtherB-server-1215087159',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1215087159',id=86,image_ref='38789998-3bd3-4ad6-a223-58e845dd36f2',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-427391145',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:25:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-exobbdub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',
image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member',shelved_at='2025-11-22T09:28:05.793825',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='38789998-3bd3-4ad6-a223-58e845dd36f2'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:28:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=e4f9440c-7476-4022-8d08-1b3151a9db79,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.510 253665 DEBUG nova.network.os_vif_util [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.511 253665 DEBUG nova.network.os_vif_util [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.513 253665 DEBUG nova.objects.instance [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'pci_devices' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.527 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  <uuid>e4f9440c-7476-4022-8d08-1b3151a9db79</uuid>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  <name>instance-00000056</name>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerActionsTestOtherB-server-1215087159</nova:name>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:28:25</nova:creationTime>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:28:26 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:        <nova:user uuid="ce82551204d04546a5ae9c6f99cccfc8">tempest-ServerActionsTestOtherB-985895222-project-member</nova:user>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:        <nova:project uuid="8a246689624d4630a70f69b70d048883">tempest-ServerActionsTestOtherB-985895222</nova:project>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="38789998-3bd3-4ad6-a223-58e845dd36f2"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:        <nova:port uuid="b1fc96be-009e-46a8-829c-b7a0bc42af60">
Nov 22 04:28:26 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <entry name="serial">e4f9440c-7476-4022-8d08-1b3151a9db79</entry>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <entry name="uuid">e4f9440c-7476-4022-8d08-1b3151a9db79</entry>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk">
Nov 22 04:28:26 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:28:26 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config">
Nov 22 04:28:26 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:28:26 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:38:67:ca"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <target dev="tapb1fc96be-00"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/console.log" append="off"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <input type="keyboard" bus="usb"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:28:26 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:28:26 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:28:26 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:28:26 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.527 253665 DEBUG nova.compute.manager [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Preparing to wait for external event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.528 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.528 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.528 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.529 253665 DEBUG nova.virt.libvirt.vif [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1215087159',display_name='tempest-ServerActionsTestOtherB-server-1215087159',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1215087159',id=86,image_ref='38789998-3bd3-4ad6-a223-58e845dd36f2',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-427391145',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:25:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-exobbdub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member',shelved_at='2025-11-22T09:28:05.793825',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='38789998-3bd3-4ad6-a223-58e845dd36f2'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:28:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=e4f9440c-7476-4022-8d08-1b3151a9db79,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.529 253665 DEBUG nova.network.os_vif_util [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.530 253665 DEBUG nova.network.os_vif_util [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.531 253665 DEBUG os_vif [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.531 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.532 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.534 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.539 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.539 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1fc96be-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.540 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb1fc96be-00, col_values=(('external_ids', {'iface-id': 'b1fc96be-009e-46a8-829c-b7a0bc42af60', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:67:ca', 'vm-uuid': 'e4f9440c-7476-4022-8d08-1b3151a9db79'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.542 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:26 np0005532048 NetworkManager[48920]: <info>  [1763803706.5430] manager: (tapb1fc96be-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.544 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.549 253665 INFO os_vif [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00')#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.603 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.604 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.604 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] No VIF found with MAC fa:16:3e:38:67:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.604 253665 INFO nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Using config drive#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.626 253665 DEBUG nova.storage.rbd_utils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.645 253665 DEBUG nova.objects.instance [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:28:26 np0005532048 nova_compute[253661]: 2025-11-22 09:28:26.681 253665 DEBUG nova.objects.instance [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'keypairs' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:28:27 np0005532048 nova_compute[253661]: 2025-11-22 09:28:27.192 253665 INFO nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Creating config drive at /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config#033[00m
Nov 22 04:28:27 np0005532048 nova_compute[253661]: 2025-11-22 09:28:27.205 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy_p0v863 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:28:27 np0005532048 nova_compute[253661]: 2025-11-22 09:28:27.361 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy_p0v863" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:28:27 np0005532048 nova_compute[253661]: 2025-11-22 09:28:27.391 253665 DEBUG nova.storage.rbd_utils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] rbd image e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:28:27 np0005532048 nova_compute[253661]: 2025-11-22 09:28:27.395 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:28:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 284 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 43 op/s
Nov 22 04:28:27 np0005532048 nova_compute[253661]: 2025-11-22 09:28:27.718 253665 DEBUG oslo_concurrency.processutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config e4f9440c-7476-4022-8d08-1b3151a9db79_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:28:27 np0005532048 nova_compute[253661]: 2025-11-22 09:28:27.720 253665 INFO nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Deleting local config drive /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79/disk.config because it was imported into RBD.#033[00m
Nov 22 04:28:27 np0005532048 kernel: tapb1fc96be-00: entered promiscuous mode
Nov 22 04:28:27 np0005532048 NetworkManager[48920]: <info>  [1763803707.7854] manager: (tapb1fc96be-00): new Tun device (/org/freedesktop/NetworkManager/Devices/380)
Nov 22 04:28:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:28:27Z|00924|binding|INFO|Claiming lport b1fc96be-009e-46a8-829c-b7a0bc42af60 for this chassis.
Nov 22 04:28:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:28:27Z|00925|binding|INFO|b1fc96be-009e-46a8-829c-b7a0bc42af60: Claiming fa:16:3e:38:67:ca 10.100.0.10
Nov 22 04:28:27 np0005532048 nova_compute[253661]: 2025-11-22 09:28:27.785 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.793 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:67:ca 10.100.0.10'], port_security=['fa:16:3e:38:67:ca 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e4f9440c-7476-4022-8d08-1b3151a9db79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e37df2c8-4dc4-418d-92f1-b394537a30da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a246689624d4630a70f69b70d048883', 'neutron:revision_number': '7', 'neutron:security_group_ids': '33563511-c966-495c-93cb-386deb50a2bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7135b765-78b7-490e-8e9e-3f8a3fb53933, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b1fc96be-009e-46a8-829c-b7a0bc42af60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:28:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.794 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b1fc96be-009e-46a8-829c-b7a0bc42af60 in datapath e37df2c8-4dc4-418d-92f1-b394537a30da bound to our chassis#033[00m
Nov 22 04:28:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.795 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e37df2c8-4dc4-418d-92f1-b394537a30da#033[00m
Nov 22 04:28:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:28:27Z|00926|binding|INFO|Setting lport b1fc96be-009e-46a8-829c-b7a0bc42af60 ovn-installed in OVS
Nov 22 04:28:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:28:27Z|00927|binding|INFO|Setting lport b1fc96be-009e-46a8-829c-b7a0bc42af60 up in Southbound
Nov 22 04:28:27 np0005532048 nova_compute[253661]: 2025-11-22 09:28:27.809 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.819 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f805b50c-6a7e-43c5-89ed-6da1da7ba73c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:27 np0005532048 systemd-udevd[346208]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:28:27 np0005532048 systemd-machined[215941]: New machine qemu-111-instance-00000056.
Nov 22 04:28:27 np0005532048 systemd[1]: Started Virtual Machine qemu-111-instance-00000056.
Nov 22 04:28:27 np0005532048 NetworkManager[48920]: <info>  [1763803707.8486] device (tapb1fc96be-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:28:27 np0005532048 NetworkManager[48920]: <info>  [1763803707.8497] device (tapb1fc96be-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:28:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.867 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd7ec9b-f293-4fc6-b03c-0de3b5396433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.872 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6523d3c3-8e17-4468-a1f0-dab7fc87cc6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.909 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8086cd46-8e15-47d0-8dfd-87a224d2b2ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:28:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.937 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e4bd9a0e-a764-41c8-93f2-d38a6bc50fbc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape37df2c8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c4:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640468, 'reachable_time': 21674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346221, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.963 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bcfd690c-8aab-4c4b-b9da-4915a73fa3da]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640485, 'tstamp': 640485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346223, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640489, 'tstamp': 640489}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346223, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.967 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape37df2c8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:28:27 np0005532048 nova_compute[253661]: 2025-11-22 09:28:27.969 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.970 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:28:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.970 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:28:27 np0005532048 nova_compute[253661]: 2025-11-22 09:28:27.970 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.971 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:28:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.972 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape37df2c8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:28:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.972 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:28:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.972 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape37df2c8-40, col_values=(('external_ids', {'iface-id': '93c31381-1979-4cee-982c-9507d8ee6c9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:28:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:27.973 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:28:28 np0005532048 nova_compute[253661]: 2025-11-22 09:28:28.203 253665 DEBUG nova.compute.manager [req-f3c9dcda-94f1-4b6a-940a-46b272c2c533 req-1620da36-97b9-4147-8ff4-6f0d3b80e009 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:28:28 np0005532048 nova_compute[253661]: 2025-11-22 09:28:28.203 253665 DEBUG oslo_concurrency.lockutils [req-f3c9dcda-94f1-4b6a-940a-46b272c2c533 req-1620da36-97b9-4147-8ff4-6f0d3b80e009 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:28:28 np0005532048 nova_compute[253661]: 2025-11-22 09:28:28.203 253665 DEBUG oslo_concurrency.lockutils [req-f3c9dcda-94f1-4b6a-940a-46b272c2c533 req-1620da36-97b9-4147-8ff4-6f0d3b80e009 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:28:28 np0005532048 nova_compute[253661]: 2025-11-22 09:28:28.204 253665 DEBUG oslo_concurrency.lockutils [req-f3c9dcda-94f1-4b6a-940a-46b272c2c533 req-1620da36-97b9-4147-8ff4-6f0d3b80e009 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:28:28 np0005532048 nova_compute[253661]: 2025-11-22 09:28:28.204 253665 DEBUG nova.compute.manager [req-f3c9dcda-94f1-4b6a-940a-46b272c2c533 req-1620da36-97b9-4147-8ff4-6f0d3b80e009 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Processing event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:28:28 np0005532048 podman[346398]: 2025-11-22 09:28:28.80961002 +0000 UTC m=+0.094340922 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 04:28:28 np0005532048 podman[346398]: 2025-11-22 09:28:28.911463852 +0000 UTC m=+0.196194754 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:28:29 np0005532048 nova_compute[253661]: 2025-11-22 09:28:29.101 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803709.1002975, e4f9440c-7476-4022-8d08-1b3151a9db79 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:28:29 np0005532048 nova_compute[253661]: 2025-11-22 09:28:29.102 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] VM Started (Lifecycle Event)#033[00m
Nov 22 04:28:29 np0005532048 nova_compute[253661]: 2025-11-22 09:28:29.105 253665 DEBUG nova.compute.manager [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:28:29 np0005532048 nova_compute[253661]: 2025-11-22 09:28:29.110 253665 DEBUG nova.virt.libvirt.driver [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:28:29 np0005532048 nova_compute[253661]: 2025-11-22 09:28:29.115 253665 INFO nova.virt.libvirt.driver [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance spawned successfully.#033[00m
Nov 22 04:28:29 np0005532048 nova_compute[253661]: 2025-11-22 09:28:29.174 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:28:29 np0005532048 nova_compute[253661]: 2025-11-22 09:28:29.180 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:28:29 np0005532048 nova_compute[253661]: 2025-11-22 09:28:29.209 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:28:29 np0005532048 nova_compute[253661]: 2025-11-22 09:28:29.210 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803709.1008117, e4f9440c-7476-4022-8d08-1b3151a9db79 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:28:29 np0005532048 nova_compute[253661]: 2025-11-22 09:28:29.210 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:28:29 np0005532048 nova_compute[253661]: 2025-11-22 09:28:29.236 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:28:29 np0005532048 nova_compute[253661]: 2025-11-22 09:28:29.243 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803709.1090562, e4f9440c-7476-4022-8d08-1b3151a9db79 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:28:29 np0005532048 nova_compute[253661]: 2025-11-22 09:28:29.244 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:28:29 np0005532048 nova_compute[253661]: 2025-11-22 09:28:29.272 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:28:29 np0005532048 nova_compute[253661]: 2025-11-22 09:28:29.277 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:28:29 np0005532048 nova_compute[253661]: 2025-11-22 09:28:29.298 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:28:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:28:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:28:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:28:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 325 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 80 op/s
Nov 22 04:28:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:28:30 np0005532048 nova_compute[253661]: 2025-11-22 09:28:30.419 253665 DEBUG nova.compute.manager [req-c9c894a9-e7c5-456f-915f-8fa4b3d26ed1 req-90a68358-a1c8-45be-a1d7-1b2b2decaea3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:28:30 np0005532048 nova_compute[253661]: 2025-11-22 09:28:30.420 253665 DEBUG oslo_concurrency.lockutils [req-c9c894a9-e7c5-456f-915f-8fa4b3d26ed1 req-90a68358-a1c8-45be-a1d7-1b2b2decaea3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:28:30 np0005532048 nova_compute[253661]: 2025-11-22 09:28:30.421 253665 DEBUG oslo_concurrency.lockutils [req-c9c894a9-e7c5-456f-915f-8fa4b3d26ed1 req-90a68358-a1c8-45be-a1d7-1b2b2decaea3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:28:30 np0005532048 nova_compute[253661]: 2025-11-22 09:28:30.421 253665 DEBUG oslo_concurrency.lockutils [req-c9c894a9-e7c5-456f-915f-8fa4b3d26ed1 req-90a68358-a1c8-45be-a1d7-1b2b2decaea3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:28:30 np0005532048 nova_compute[253661]: 2025-11-22 09:28:30.421 253665 DEBUG nova.compute.manager [req-c9c894a9-e7c5-456f-915f-8fa4b3d26ed1 req-90a68358-a1c8-45be-a1d7-1b2b2decaea3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] No waiting events found dispatching network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:28:30 np0005532048 nova_compute[253661]: 2025-11-22 09:28:30.422 253665 WARNING nova.compute.manager [req-c9c894a9-e7c5-456f-915f-8fa4b3d26ed1 req-90a68358-a1c8-45be-a1d7-1b2b2decaea3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received unexpected event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 for instance with vm_state shelved_offloaded and task_state spawning.#033[00m
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:28:30 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 7c21884a-5fc1-47d9-bd76-d454269cb95a does not exist
Nov 22 04:28:30 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 62d8594d-9cfe-455a-948f-b4522ca4e0af does not exist
Nov 22 04:28:30 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev e313a013-0ba7-48b6-8dba-52cc174f87bf does not exist
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:28:30 np0005532048 nova_compute[253661]: 2025-11-22 09:28:30.599 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Nov 22 04:28:30 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Nov 22 04:28:31 np0005532048 nova_compute[253661]: 2025-11-22 09:28:31.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:28:31 np0005532048 podman[346866]: 2025-11-22 09:28:31.230604185 +0000 UTC m=+0.055897476 container create a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 04:28:31 np0005532048 systemd[1]: Started libpod-conmon-a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012.scope.
Nov 22 04:28:31 np0005532048 nova_compute[253661]: 2025-11-22 09:28:31.279 253665 DEBUG nova.compute.manager [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:28:31 np0005532048 podman[346866]: 2025-11-22 09:28:31.205549125 +0000 UTC m=+0.030842456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:28:31 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:28:31 np0005532048 podman[346866]: 2025-11-22 09:28:31.339775981 +0000 UTC m=+0.165069372 container init a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_snyder, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 04:28:31 np0005532048 podman[346866]: 2025-11-22 09:28:31.350717546 +0000 UTC m=+0.176010847 container start a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 04:28:31 np0005532048 podman[346866]: 2025-11-22 09:28:31.359551508 +0000 UTC m=+0.184844849 container attach a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_snyder, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:28:31 np0005532048 zen_snyder[346882]: 167 167
Nov 22 04:28:31 np0005532048 systemd[1]: libpod-a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012.scope: Deactivated successfully.
Nov 22 04:28:31 np0005532048 conmon[346882]: conmon a842ccc35616e1d47b41 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012.scope/container/memory.events
Nov 22 04:28:31 np0005532048 podman[346866]: 2025-11-22 09:28:31.364855811 +0000 UTC m=+0.190149152 container died a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_snyder, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 04:28:31 np0005532048 nova_compute[253661]: 2025-11-22 09:28:31.374 253665 DEBUG oslo_concurrency.lockutils [None req-6e75271d-78b6-462c-a5fe-d231da601748 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 12.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:28:31 np0005532048 systemd[1]: var-lib-containers-storage-overlay-71f653bbcf99d54241a8209df25db0f0980441b8e58bcad379723b682657b7a1-merged.mount: Deactivated successfully.
Nov 22 04:28:31 np0005532048 podman[346866]: 2025-11-22 09:28:31.441325114 +0000 UTC m=+0.266618415 container remove a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 04:28:31 np0005532048 systemd[1]: libpod-conmon-a842ccc35616e1d47b414c9f24c78e7c9bf909e0cdc64a790ad17032b7df8012.scope: Deactivated successfully.
Nov 22 04:28:31 np0005532048 nova_compute[253661]: 2025-11-22 09:28:31.543 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:31 np0005532048 podman[346905]: 2025-11-22 09:28:31.638605005 +0000 UTC m=+0.052875441 container create 91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:28:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 325 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 94 op/s
Nov 22 04:28:31 np0005532048 systemd[1]: Started libpod-conmon-91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb.scope.
Nov 22 04:28:31 np0005532048 podman[346905]: 2025-11-22 09:28:31.612413856 +0000 UTC m=+0.026684332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:28:31 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:28:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b0a94fee11fc6c544a7a13ad19fbfe4055265f80d281006b2947da7cf15409/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:28:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b0a94fee11fc6c544a7a13ad19fbfe4055265f80d281006b2947da7cf15409/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:28:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b0a94fee11fc6c544a7a13ad19fbfe4055265f80d281006b2947da7cf15409/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:28:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b0a94fee11fc6c544a7a13ad19fbfe4055265f80d281006b2947da7cf15409/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:28:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b0a94fee11fc6c544a7a13ad19fbfe4055265f80d281006b2947da7cf15409/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:28:31 np0005532048 podman[346905]: 2025-11-22 09:28:31.728633019 +0000 UTC m=+0.142903495 container init 91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:28:31 np0005532048 podman[346905]: 2025-11-22 09:28:31.739807809 +0000 UTC m=+0.154078255 container start 91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 04:28:31 np0005532048 podman[346905]: 2025-11-22 09:28:31.744648491 +0000 UTC m=+0.158918997 container attach 91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:28:32 np0005532048 nova_compute[253661]: 2025-11-22 09:28:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:28:32 np0005532048 nova_compute[253661]: 2025-11-22 09:28:32.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:28:32 np0005532048 nova_compute[253661]: 2025-11-22 09:28:32.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:28:32 np0005532048 nova_compute[253661]: 2025-11-22 09:28:32.540 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:28:32 np0005532048 nova_compute[253661]: 2025-11-22 09:28:32.540 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:28:32 np0005532048 nova_compute[253661]: 2025-11-22 09:28:32.541 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 04:28:32 np0005532048 nova_compute[253661]: 2025-11-22 09:28:32.541 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:28:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:28:32 np0005532048 focused_villani[346922]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:28:32 np0005532048 focused_villani[346922]: --> relative data size: 1.0
Nov 22 04:28:32 np0005532048 focused_villani[346922]: --> All data devices are unavailable
Nov 22 04:28:32 np0005532048 systemd[1]: libpod-91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb.scope: Deactivated successfully.
Nov 22 04:28:32 np0005532048 podman[346905]: 2025-11-22 09:28:32.993060762 +0000 UTC m=+1.407331198 container died 91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:28:32 np0005532048 systemd[1]: libpod-91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb.scope: Consumed 1.173s CPU time.
Nov 22 04:28:33 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b5b0a94fee11fc6c544a7a13ad19fbfe4055265f80d281006b2947da7cf15409-merged.mount: Deactivated successfully.
Nov 22 04:28:33 np0005532048 podman[346905]: 2025-11-22 09:28:33.169414937 +0000 UTC m=+1.583685383 container remove 91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 04:28:33 np0005532048 systemd[1]: libpod-conmon-91d4b462c1f19cbe8e2c120be04b2b78c9210eb6fcb4bc46ab61b33d7f57d0eb.scope: Deactivated successfully.
Nov 22 04:28:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 253 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 4.7 MiB/s wr, 194 op/s
Nov 22 04:28:33 np0005532048 podman[347105]: 2025-11-22 09:28:33.919480337 +0000 UTC m=+0.057617711 container create 0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swanson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 04:28:33 np0005532048 podman[347105]: 2025-11-22 09:28:33.887601695 +0000 UTC m=+0.025739089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:28:33 np0005532048 systemd[1]: Started libpod-conmon-0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412.scope.
Nov 22 04:28:34 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:28:34 np0005532048 podman[347105]: 2025-11-22 09:28:34.04169992 +0000 UTC m=+0.179837314 container init 0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swanson, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:28:34 np0005532048 podman[347105]: 2025-11-22 09:28:34.050295586 +0000 UTC m=+0.188432960 container start 0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swanson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:28:34 np0005532048 vigorous_swanson[347122]: 167 167
Nov 22 04:28:34 np0005532048 systemd[1]: libpod-0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412.scope: Deactivated successfully.
Nov 22 04:28:34 np0005532048 podman[347105]: 2025-11-22 09:28:34.064620146 +0000 UTC m=+0.202757660 container attach 0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swanson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 04:28:34 np0005532048 podman[347105]: 2025-11-22 09:28:34.065403416 +0000 UTC m=+0.203540790 container died 0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:28:34 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a4f315c7a28e1a9dffee950f8cd78c59f0c55c9a1aaf6dc256db6e38cb145e0e-merged.mount: Deactivated successfully.
Nov 22 04:28:34 np0005532048 podman[347105]: 2025-11-22 09:28:34.186102691 +0000 UTC m=+0.324240065 container remove 0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 04:28:34 np0005532048 systemd[1]: libpod-conmon-0dfabecb1eadf12ccb6c06e0b35dcea951a659201e8868a87ce3b823b26b5412.scope: Deactivated successfully.
Nov 22 04:28:34 np0005532048 nova_compute[253661]: 2025-11-22 09:28:34.446 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Updating instance_info_cache with network_info: [{"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:28:34 np0005532048 podman[347147]: 2025-11-22 09:28:34.356868915 +0000 UTC m=+0.030370455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:28:34 np0005532048 nova_compute[253661]: 2025-11-22 09:28:34.472 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:28:34 np0005532048 nova_compute[253661]: 2025-11-22 09:28:34.473 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 04:28:34 np0005532048 nova_compute[253661]: 2025-11-22 09:28:34.475 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:28:34 np0005532048 nova_compute[253661]: 2025-11-22 09:28:34.476 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:28:34 np0005532048 nova_compute[253661]: 2025-11-22 09:28:34.477 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:28:34 np0005532048 nova_compute[253661]: 2025-11-22 09:28:34.477 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:28:34 np0005532048 podman[347147]: 2025-11-22 09:28:34.527628949 +0000 UTC m=+0.201130499 container create a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:28:34 np0005532048 systemd[1]: Started libpod-conmon-a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813.scope.
Nov 22 04:28:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Nov 22 04:28:34 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:28:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4fd02a835616b8ecd7424f63f52bc5bf79a0117370ac151dacabd7192aa517/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:28:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4fd02a835616b8ecd7424f63f52bc5bf79a0117370ac151dacabd7192aa517/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:28:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4fd02a835616b8ecd7424f63f52bc5bf79a0117370ac151dacabd7192aa517/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:28:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4fd02a835616b8ecd7424f63f52bc5bf79a0117370ac151dacabd7192aa517/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:28:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Nov 22 04:28:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Nov 22 04:28:34 np0005532048 podman[347147]: 2025-11-22 09:28:34.941288749 +0000 UTC m=+0.614790279 container init a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 04:28:34 np0005532048 podman[347147]: 2025-11-22 09:28:34.949862956 +0000 UTC m=+0.623364466 container start a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 04:28:35 np0005532048 podman[347147]: 2025-11-22 09:28:35.099347404 +0000 UTC m=+0.772848924 container attach a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 04:28:35 np0005532048 nova_compute[253661]: 2025-11-22 09:28:35.240 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:28:35 np0005532048 nova_compute[253661]: 2025-11-22 09:28:35.601 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:28:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 246 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 2.6 MiB/s wr, 197 op/s
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]: {
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:    "0": [
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:        {
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "devices": [
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "/dev/loop3"
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            ],
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "lv_name": "ceph_lv0",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "lv_size": "21470642176",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "name": "ceph_lv0",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "tags": {
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.cluster_name": "ceph",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.crush_device_class": "",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.encrypted": "0",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.osd_id": "0",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.type": "block",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.vdo": "0"
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            },
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "type": "block",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "vg_name": "ceph_vg0"
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:        }
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:    ],
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:    "1": [
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:        {
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "devices": [
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "/dev/loop4"
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            ],
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "lv_name": "ceph_lv1",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "lv_size": "21470642176",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "name": "ceph_lv1",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "tags": {
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.cluster_name": "ceph",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.crush_device_class": "",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.encrypted": "0",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.osd_id": "1",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.type": "block",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.vdo": "0"
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            },
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "type": "block",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "vg_name": "ceph_vg1"
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:        }
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:    ],
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:    "2": [
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:        {
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "devices": [
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "/dev/loop5"
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            ],
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "lv_name": "ceph_lv2",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "lv_size": "21470642176",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "name": "ceph_lv2",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "tags": {
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.cluster_name": "ceph",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.crush_device_class": "",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.encrypted": "0",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.osd_id": "2",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.type": "block",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:                "ceph.vdo": "0"
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            },
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "type": "block",
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:            "vg_name": "ceph_vg2"
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:        }
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]:    ]
Nov 22 04:28:35 np0005532048 compassionate_goldstine[347164]: }
Nov 22 04:28:35 np0005532048 systemd[1]: libpod-a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813.scope: Deactivated successfully.
Nov 22 04:28:35 np0005532048 podman[347147]: 2025-11-22 09:28:35.874468984 +0000 UTC m=+1.547970484 container died a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 04:28:36 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7e4fd02a835616b8ecd7424f63f52bc5bf79a0117370ac151dacabd7192aa517-merged.mount: Deactivated successfully.
Nov 22 04:28:36 np0005532048 nova_compute[253661]: 2025-11-22 09:28:36.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:28:36 np0005532048 nova_compute[253661]: 2025-11-22 09:28:36.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:28:36 np0005532048 nova_compute[253661]: 2025-11-22 09:28:36.231 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:28:36 np0005532048 nova_compute[253661]: 2025-11-22 09:28:36.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:28:36 np0005532048 nova_compute[253661]: 2025-11-22 09:28:36.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:28:36 np0005532048 nova_compute[253661]: 2025-11-22 09:28:36.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:28:36 np0005532048 nova_compute[253661]: 2025-11-22 09:28:36.255 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:28:36 np0005532048 nova_compute[253661]: 2025-11-22 09:28:36.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:28:36 np0005532048 podman[347147]: 2025-11-22 09:28:36.548824921 +0000 UTC m=+2.222326431 container remove a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 04:28:36 np0005532048 nova_compute[253661]: 2025-11-22 09:28:36.582 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:36 np0005532048 systemd[1]: libpod-conmon-a65bfe3a0fabe74567ce8846942876011367ad92952d4af947658fbe48515813.scope: Deactivated successfully.
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:36.719624) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803716719674, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1937, "num_deletes": 260, "total_data_size": 2820162, "memory_usage": 2869552, "flush_reason": "Manual Compaction"}
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2859859650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803716893109, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2753371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38313, "largest_seqno": 40249, "table_properties": {"data_size": 2744485, "index_size": 5508, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18930, "raw_average_key_size": 20, "raw_value_size": 2726500, "raw_average_value_size": 2989, "num_data_blocks": 242, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803563, "oldest_key_time": 1763803563, "file_creation_time": 1763803716, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 173532 microseconds, and 6995 cpu microseconds.
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:28:36 np0005532048 nova_compute[253661]: 2025-11-22 09:28:36.912 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:36.893151) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2753371 bytes OK
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:36.893177) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:36.963943) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:36.964014) EVENT_LOG_v1 {"time_micros": 1763803716964000, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:36.964053) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2811803, prev total WAL file size 2811803, number of live WAL files 2.
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:36.965816) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2688KB)], [86(8667KB)]
Nov 22 04:28:36 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803716965856, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11629304, "oldest_snapshot_seqno": -1}
Nov 22 04:28:36 np0005532048 nova_compute[253661]: 2025-11-22 09:28:36.994 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:28:36 np0005532048 nova_compute[253661]: 2025-11-22 09:28:36.994 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:28:36 np0005532048 nova_compute[253661]: 2025-11-22 09:28:36.998 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:28:36 np0005532048 nova_compute[253661]: 2025-11-22 09:28:36.998 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6490 keys, 9955749 bytes, temperature: kUnknown
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803717159980, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9955749, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9910551, "index_size": 27872, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 164920, "raw_average_key_size": 25, "raw_value_size": 9792492, "raw_average_value_size": 1508, "num_data_blocks": 1122, "num_entries": 6490, "num_filter_entries": 6490, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803716, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:28:37 np0005532048 nova_compute[253661]: 2025-11-22 09:28:37.207 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:28:37 np0005532048 nova_compute[253661]: 2025-11-22 09:28:37.208 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3505MB free_disk=59.89714050292969GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:28:37 np0005532048 nova_compute[253661]: 2025-11-22 09:28:37.209 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:28:37 np0005532048 nova_compute[253661]: 2025-11-22 09:28:37.209 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:37.160643) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9955749 bytes
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:37.427697) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 59.9 rd, 51.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 8.5 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 7022, records dropped: 532 output_compression: NoCompression
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:37.427750) EVENT_LOG_v1 {"time_micros": 1763803717427733, "job": 50, "event": "compaction_finished", "compaction_time_micros": 194229, "compaction_time_cpu_micros": 30750, "output_level": 6, "num_output_files": 1, "total_output_size": 9955749, "num_input_records": 7022, "num_output_records": 6490, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803717429495, "job": 50, "event": "table_file_deletion", "file_number": 88}
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803717431535, "job": 50, "event": "table_file_deletion", "file_number": 86}
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:36.965735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:37.431579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:37.431584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:37.431586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:37.431588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:28:37.431590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:28:37 np0005532048 podman[347349]: 2025-11-22 09:28:37.553677067 +0000 UTC m=+0.053905656 container create 7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chatelet, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 22 04:28:37 np0005532048 systemd[1]: Started libpod-conmon-7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756.scope.
Nov 22 04:28:37 np0005532048 nova_compute[253661]: 2025-11-22 09:28:37.611 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:28:37 np0005532048 nova_compute[253661]: 2025-11-22 09:28:37.612 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance e4f9440c-7476-4022-8d08-1b3151a9db79 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:28:37 np0005532048 nova_compute[253661]: 2025-11-22 09:28:37.612 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:28:37 np0005532048 nova_compute[253661]: 2025-11-22 09:28:37.612 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:28:37 np0005532048 podman[347349]: 2025-11-22 09:28:37.525477638 +0000 UTC m=+0.025706237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:28:37 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:28:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 246 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 144 op/s
Nov 22 04:28:37 np0005532048 podman[347349]: 2025-11-22 09:28:37.682944278 +0000 UTC m=+0.183172887 container init 7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chatelet, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 04:28:37 np0005532048 podman[347349]: 2025-11-22 09:28:37.693986745 +0000 UTC m=+0.194215334 container start 7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:28:37 np0005532048 festive_chatelet[347365]: 167 167
Nov 22 04:28:37 np0005532048 systemd[1]: libpod-7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756.scope: Deactivated successfully.
Nov 22 04:28:37 np0005532048 podman[347349]: 2025-11-22 09:28:37.711072766 +0000 UTC m=+0.211301365 container attach 7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:28:37 np0005532048 podman[347349]: 2025-11-22 09:28:37.712873931 +0000 UTC m=+0.213102510 container died 7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 04:28:37 np0005532048 systemd[1]: var-lib-containers-storage-overlay-70c979b2e44a5e2c85948b3708f2698823f9990823a89253edb8c5476736129d-merged.mount: Deactivated successfully.
Nov 22 04:28:37 np0005532048 podman[347349]: 2025-11-22 09:28:37.890867086 +0000 UTC m=+0.391095675 container remove 7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chatelet, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:28:37 np0005532048 systemd[1]: libpod-conmon-7b235a32aef9bcdc7c9c9605e64e03173470e0fa317a9d4a13dc41ce403b2756.scope: Deactivated successfully.
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Nov 22 04:28:37 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Nov 22 04:28:38 np0005532048 podman[347390]: 2025-11-22 09:28:38.109826891 +0000 UTC m=+0.072450742 container create 957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 04:28:38 np0005532048 nova_compute[253661]: 2025-11-22 09:28:38.137 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:28:38 np0005532048 podman[347390]: 2025-11-22 09:28:38.065808255 +0000 UTC m=+0.028432136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:28:38 np0005532048 systemd[1]: Started libpod-conmon-957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231.scope.
Nov 22 04:28:38 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:28:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95c7e3426af58930e08720e4d3a23600f3d6ec009ed894a2fcf9045b44313999/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:28:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95c7e3426af58930e08720e4d3a23600f3d6ec009ed894a2fcf9045b44313999/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:28:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95c7e3426af58930e08720e4d3a23600f3d6ec009ed894a2fcf9045b44313999/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:28:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95c7e3426af58930e08720e4d3a23600f3d6ec009ed894a2fcf9045b44313999/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:28:38 np0005532048 podman[347390]: 2025-11-22 09:28:38.293168762 +0000 UTC m=+0.255792633 container init 957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_merkle, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 04:28:38 np0005532048 podman[347390]: 2025-11-22 09:28:38.303155663 +0000 UTC m=+0.265779514 container start 957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_merkle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 04:28:38 np0005532048 podman[347390]: 2025-11-22 09:28:38.394761797 +0000 UTC m=+0.357385678 container attach 957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 04:28:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:28:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2334154375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:28:38 np0005532048 nova_compute[253661]: 2025-11-22 09:28:38.665 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:28:38 np0005532048 nova_compute[253661]: 2025-11-22 09:28:38.675 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:28:38 np0005532048 nova_compute[253661]: 2025-11-22 09:28:38.697 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:28:38 np0005532048 nova_compute[253661]: 2025-11-22 09:28:38.725 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:28:38 np0005532048 nova_compute[253661]: 2025-11-22 09:28:38.726 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:28:39 np0005532048 podman[347457]: 2025-11-22 09:28:39.393675273 +0000 UTC m=+0.072254047 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Nov 22 04:28:39 np0005532048 charming_merkle[347408]: {
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:        "osd_id": 1,
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:        "type": "bluestore"
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:    },
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:        "osd_id": 0,
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:        "type": "bluestore"
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:    },
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:        "osd_id": 2,
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:        "type": "bluestore"
Nov 22 04:28:39 np0005532048 charming_merkle[347408]:    }
Nov 22 04:28:39 np0005532048 charming_merkle[347408]: }
Nov 22 04:28:39 np0005532048 podman[347456]: 2025-11-22 09:28:39.416728744 +0000 UTC m=+0.095140644 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:28:39 np0005532048 systemd[1]: libpod-957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231.scope: Deactivated successfully.
Nov 22 04:28:39 np0005532048 systemd[1]: libpod-957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231.scope: Consumed 1.113s CPU time.
Nov 22 04:28:39 np0005532048 podman[347390]: 2025-11-22 09:28:39.445325522 +0000 UTC m=+1.407949373 container died 957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 04:28:39 np0005532048 systemd[1]: var-lib-containers-storage-overlay-95c7e3426af58930e08720e4d3a23600f3d6ec009ed894a2fcf9045b44313999-merged.mount: Deactivated successfully.
Nov 22 04:28:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 224 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 24 KiB/s wr, 170 op/s
Nov 22 04:28:39 np0005532048 podman[347390]: 2025-11-22 09:28:39.825537363 +0000 UTC m=+1.788161214 container remove 957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_merkle, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:28:39 np0005532048 systemd[1]: libpod-conmon-957ce684124789d388c81b34d57f44aadfbd2c9765d0462368f0aad08e64f231.scope: Deactivated successfully.
Nov 22 04:28:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:28:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:28:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:28:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:28:39 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev b429e693-67ef-4be8-97d7-61f805a1f34d does not exist
Nov 22 04:28:39 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 4bc0da75-0191-41dd-aed0-3053c0233ab4 does not exist
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.107 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.108 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.108 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.108 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.108 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.110 253665 INFO nova.compute.manager [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Terminating instance#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.111 253665 DEBUG nova.compute.manager [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:28:40 np0005532048 kernel: tap6eb31688-c2 (unregistering): left promiscuous mode
Nov 22 04:28:40 np0005532048 NetworkManager[48920]: <info>  [1763803720.3294] device (tap6eb31688-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:28:40 np0005532048 ovn_controller[152872]: 2025-11-22T09:28:40Z|00928|binding|INFO|Releasing lport 6eb31688-c2e8-4f7b-a3df-3008c2065663 from this chassis (sb_readonly=0)
Nov 22 04:28:40 np0005532048 ovn_controller[152872]: 2025-11-22T09:28:40Z|00929|binding|INFO|Setting lport 6eb31688-c2e8-4f7b-a3df-3008c2065663 down in Southbound
Nov 22 04:28:40 np0005532048 ovn_controller[152872]: 2025-11-22T09:28:40Z|00930|binding|INFO|Removing iface tap6eb31688-c2 ovn-installed in OVS
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.345 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.349 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:cd:01 10.100.0.13'], port_security=['fa:16:3e:26:cd:01 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2b5cb3fb-8c82-432e-a88b-1ca3fef4f208', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e37df2c8-4dc4-418d-92f1-b394537a30da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a246689624d4630a70f69b70d048883', 'neutron:revision_number': '4', 'neutron:security_group_ids': '565d4bba-9c09-4fbf-9eb5-c7cb7133e1fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7135b765-78b7-490e-8e9e-3f8a3fb53933, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=6eb31688-c2e8-4f7b-a3df-3008c2065663) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:28:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.351 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 6eb31688-c2e8-4f7b-a3df-3008c2065663 in datapath e37df2c8-4dc4-418d-92f1-b394537a30da unbound from our chassis#033[00m
Nov 22 04:28:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.352 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e37df2c8-4dc4-418d-92f1-b394537a30da#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.372 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:40 np0005532048 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000059.scope: Deactivated successfully.
Nov 22 04:28:40 np0005532048 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000059.scope: Consumed 17.304s CPU time.
Nov 22 04:28:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.381 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc5243e-bd92-47dd-8529-ce34cc9cb7f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:40 np0005532048 systemd-machined[215941]: Machine qemu-107-instance-00000059 terminated.
Nov 22 04:28:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.429 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[770b8f38-b100-434b-acfb-9e10be6f817a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.434 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ffed57-f0d4-4bda-8ae6-89256fa5bca1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.469 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b693dd-f543-4864-9a9f-4c5c12efa751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.492 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ace00daf-a2c2-4c50-a0e2-6e5329609e94]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape37df2c8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c4:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640468, 'reachable_time': 21674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347576, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.512 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[00f66fb6-0155-446a-ac7b-e0f6e29410f2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640485, 'tstamp': 640485}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347577, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape37df2c8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640489, 'tstamp': 640489}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347577, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.515 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape37df2c8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.518 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.523 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape37df2c8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:28:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.523 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:28:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.524 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape37df2c8-40, col_values=(('external_ids', {'iface-id': '93c31381-1979-4cee-982c-9507d8ee6c9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:28:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:40.524 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.534 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.539 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.557 253665 INFO nova.virt.libvirt.driver [-] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Instance destroyed successfully.#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.558 253665 DEBUG nova.objects.instance [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'resources' on Instance uuid 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.571 253665 DEBUG nova.virt.libvirt.vif [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:27:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1520180186',display_name='tempest-ServerActionsTestOtherB-server-1520180186',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1520180186',id=89,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:27:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-09vxfowe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:27:16Z,user_data=None,user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=2b5cb3fb-8c82-432e-a88b-1ca3fef4f208,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.572 253665 DEBUG nova.network.os_vif_util [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "address": "fa:16:3e:26:cd:01", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6eb31688-c2", "ovs_interfaceid": "6eb31688-c2e8-4f7b-a3df-3008c2065663", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.573 253665 DEBUG nova.network.os_vif_util [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:cd:01,bridge_name='br-int',has_traffic_filtering=True,id=6eb31688-c2e8-4f7b-a3df-3008c2065663,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb31688-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.573 253665 DEBUG os_vif [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:cd:01,bridge_name='br-int',has_traffic_filtering=True,id=6eb31688-c2e8-4f7b-a3df-3008c2065663,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb31688-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.575 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.575 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6eb31688-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.577 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.579 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.581 253665 INFO os_vif [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:cd:01,bridge_name='br-int',has_traffic_filtering=True,id=6eb31688-c2e8-4f7b-a3df-3008c2065663,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6eb31688-c2')#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.603 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.632 253665 DEBUG nova.compute.manager [req-b9035a69-61d6-4f0c-aa84-d6bf67c59f48 req-bf549770-397f-497b-9515-2646f6507f4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received event network-vif-unplugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.633 253665 DEBUG oslo_concurrency.lockutils [req-b9035a69-61d6-4f0c-aa84-d6bf67c59f48 req-bf549770-397f-497b-9515-2646f6507f4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.633 253665 DEBUG oslo_concurrency.lockutils [req-b9035a69-61d6-4f0c-aa84-d6bf67c59f48 req-bf549770-397f-497b-9515-2646f6507f4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.633 253665 DEBUG oslo_concurrency.lockutils [req-b9035a69-61d6-4f0c-aa84-d6bf67c59f48 req-bf549770-397f-497b-9515-2646f6507f4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.633 253665 DEBUG nova.compute.manager [req-b9035a69-61d6-4f0c-aa84-d6bf67c59f48 req-bf549770-397f-497b-9515-2646f6507f4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] No waiting events found dispatching network-vif-unplugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:28:40 np0005532048 nova_compute[253661]: 2025-11-22 09:28:40.633 253665 DEBUG nova.compute.manager [req-b9035a69-61d6-4f0c-aa84-d6bf67c59f48 req-bf549770-397f-497b-9515-2646f6507f4b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received event network-vif-unplugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:28:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:28:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:28:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 224 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 252 KiB/s rd, 2.2 KiB/s wr, 45 op/s
Nov 22 04:28:42 np0005532048 nova_compute[253661]: 2025-11-22 09:28:42.443 253665 INFO nova.virt.libvirt.driver [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Deleting instance files /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_del#033[00m
Nov 22 04:28:42 np0005532048 nova_compute[253661]: 2025-11-22 09:28:42.445 253665 INFO nova.virt.libvirt.driver [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Deletion of /var/lib/nova/instances/2b5cb3fb-8c82-432e-a88b-1ca3fef4f208_del complete#033[00m
Nov 22 04:28:42 np0005532048 nova_compute[253661]: 2025-11-22 09:28:42.504 253665 INFO nova.compute.manager [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Took 2.39 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:28:42 np0005532048 nova_compute[253661]: 2025-11-22 09:28:42.505 253665 DEBUG oslo.service.loopingcall [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:28:42 np0005532048 nova_compute[253661]: 2025-11-22 09:28:42.505 253665 DEBUG nova.compute.manager [-] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:28:42 np0005532048 nova_compute[253661]: 2025-11-22 09:28:42.505 253665 DEBUG nova.network.neutron [-] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:28:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:28:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Nov 22 04:28:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Nov 22 04:28:42 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.035 253665 DEBUG nova.compute.manager [req-d66a5682-017f-45a5-be89-b54ac7854e29 req-ead4efd8-14da-468c-b09a-f751c1ffd875 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received event network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.036 253665 DEBUG oslo_concurrency.lockutils [req-d66a5682-017f-45a5-be89-b54ac7854e29 req-ead4efd8-14da-468c-b09a-f751c1ffd875 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.036 253665 DEBUG oslo_concurrency.lockutils [req-d66a5682-017f-45a5-be89-b54ac7854e29 req-ead4efd8-14da-468c-b09a-f751c1ffd875 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.037 253665 DEBUG oslo_concurrency.lockutils [req-d66a5682-017f-45a5-be89-b54ac7854e29 req-ead4efd8-14da-468c-b09a-f751c1ffd875 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.037 253665 DEBUG nova.compute.manager [req-d66a5682-017f-45a5-be89-b54ac7854e29 req-ead4efd8-14da-468c-b09a-f751c1ffd875 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] No waiting events found dispatching network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.037 253665 WARNING nova.compute.manager [req-d66a5682-017f-45a5-be89-b54ac7854e29 req-ead4efd8-14da-468c-b09a-f751c1ffd875 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received unexpected event network-vif-plugged-6eb31688-c2e8-4f7b-a3df-3008c2065663 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.301 253665 DEBUG nova.network.neutron [-] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.320 253665 INFO nova.compute.manager [-] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Took 0.81 seconds to deallocate network for instance.#033[00m
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.379 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.379 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.459 253665 DEBUG nova.compute.manager [req-978e2a2d-c541-48c9-a762-6b2f19d0a872 req-6a79f31d-401d-4a4f-a2ee-07627187d617 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Received event network-vif-deleted-6eb31688-c2e8-4f7b-a3df-3008c2065663 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.461 253665 DEBUG oslo_concurrency.processutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:28:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 151 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 18 KiB/s wr, 76 op/s
Nov 22 04:28:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:28:43 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1656220135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.945 253665 DEBUG oslo_concurrency.processutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.954 253665 DEBUG nova.compute.provider_tree [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.974 253665 DEBUG nova.scheduler.client.report [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:28:43 np0005532048 nova_compute[253661]: 2025-11-22 09:28:43.993 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:28:44 np0005532048 nova_compute[253661]: 2025-11-22 09:28:44.021 253665 INFO nova.scheduler.client.report [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Deleted allocations for instance 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208#033[00m
Nov 22 04:28:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:28:44Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:67:ca 10.100.0.10
Nov 22 04:28:44 np0005532048 nova_compute[253661]: 2025-11-22 09:28:44.090 253665 DEBUG oslo_concurrency.lockutils [None req-10023bb3-3116-4332-880e-fbe559ec20f9 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "2b5cb3fb-8c82-432e-a88b-1ca3fef4f208" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:28:45 np0005532048 nova_compute[253661]: 2025-11-22 09:28:45.185 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:28:45 np0005532048 nova_compute[253661]: 2025-11-22 09:28:45.186 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:28:45 np0005532048 nova_compute[253661]: 2025-11-22 09:28:45.186 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:28:45 np0005532048 nova_compute[253661]: 2025-11-22 09:28:45.187 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:28:45 np0005532048 nova_compute[253661]: 2025-11-22 09:28:45.187 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:28:45 np0005532048 nova_compute[253661]: 2025-11-22 09:28:45.188 253665 INFO nova.compute.manager [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Terminating instance#033[00m
Nov 22 04:28:45 np0005532048 nova_compute[253661]: 2025-11-22 09:28:45.189 253665 DEBUG nova.compute.manager [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:28:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:45.308 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:28:45 np0005532048 nova_compute[253661]: 2025-11-22 09:28:45.309 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:45.309 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:28:45 np0005532048 nova_compute[253661]: 2025-11-22 09:28:45.577 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:45 np0005532048 nova_compute[253661]: 2025-11-22 09:28:45.604 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 579 KiB/s rd, 22 KiB/s wr, 108 op/s
Nov 22 04:28:45 np0005532048 podman[347630]: 2025-11-22 09:28:45.712630592 +0000 UTC m=+0.132130663 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 04:28:45 np0005532048 kernel: tapb1fc96be-00 (unregistering): left promiscuous mode
Nov 22 04:28:45 np0005532048 NetworkManager[48920]: <info>  [1763803725.7914] device (tapb1fc96be-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:28:45 np0005532048 ovn_controller[152872]: 2025-11-22T09:28:45Z|00931|binding|INFO|Releasing lport b1fc96be-009e-46a8-829c-b7a0bc42af60 from this chassis (sb_readonly=0)
Nov 22 04:28:45 np0005532048 ovn_controller[152872]: 2025-11-22T09:28:45Z|00932|binding|INFO|Setting lport b1fc96be-009e-46a8-829c-b7a0bc42af60 down in Southbound
Nov 22 04:28:45 np0005532048 nova_compute[253661]: 2025-11-22 09:28:45.800 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:45 np0005532048 ovn_controller[152872]: 2025-11-22T09:28:45Z|00933|binding|INFO|Removing iface tapb1fc96be-00 ovn-installed in OVS
Nov 22 04:28:45 np0005532048 nova_compute[253661]: 2025-11-22 09:28:45.803 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:45.810 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:67:ca 10.100.0.10'], port_security=['fa:16:3e:38:67:ca 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e4f9440c-7476-4022-8d08-1b3151a9db79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e37df2c8-4dc4-418d-92f1-b394537a30da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a246689624d4630a70f69b70d048883', 'neutron:revision_number': '9', 'neutron:security_group_ids': '33563511-c966-495c-93cb-386deb50a2bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.233', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7135b765-78b7-490e-8e9e-3f8a3fb53933, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b1fc96be-009e-46a8-829c-b7a0bc42af60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:28:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:45.811 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b1fc96be-009e-46a8-829c-b7a0bc42af60 in datapath e37df2c8-4dc4-418d-92f1-b394537a30da unbound from our chassis#033[00m
Nov 22 04:28:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:45.812 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e37df2c8-4dc4-418d-92f1-b394537a30da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:28:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:45.813 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee8a1e1-df1f-4445-a7b0-dcbc751e79a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:45.814 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da namespace which is not needed anymore#033[00m
Nov 22 04:28:45 np0005532048 nova_compute[253661]: 2025-11-22 09:28:45.819 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:45 np0005532048 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000056.scope: Deactivated successfully.
Nov 22 04:28:45 np0005532048 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000056.scope: Consumed 15.148s CPU time.
Nov 22 04:28:45 np0005532048 systemd-machined[215941]: Machine qemu-111-instance-00000056 terminated.
Nov 22 04:28:45 np0005532048 neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da[340083]: [NOTICE]   (340087) : haproxy version is 2.8.14-c23fe91
Nov 22 04:28:45 np0005532048 neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da[340083]: [NOTICE]   (340087) : path to executable is /usr/sbin/haproxy
Nov 22 04:28:45 np0005532048 neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da[340083]: [WARNING]  (340087) : Exiting Master process...
Nov 22 04:28:45 np0005532048 neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da[340083]: [ALERT]    (340087) : Current worker (340089) exited with code 143 (Terminated)
Nov 22 04:28:45 np0005532048 neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da[340083]: [WARNING]  (340087) : All workers exited. Exiting... (0)
Nov 22 04:28:45 np0005532048 systemd[1]: libpod-ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f.scope: Deactivated successfully.
Nov 22 04:28:45 np0005532048 podman[347680]: 2025-11-22 09:28:45.980970719 +0000 UTC m=+0.067952999 container died ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.016 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.021 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.042 253665 INFO nova.virt.libvirt.driver [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Instance destroyed successfully.#033[00m
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.043 253665 DEBUG nova.objects.instance [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lazy-loading 'resources' on Instance uuid e4f9440c-7476-4022-8d08-1b3151a9db79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.049 253665 DEBUG nova.compute.manager [req-e385733e-a2e3-4386-967e-c6007c6bc721 req-ef546201-e31a-4b8e-927a-db74365fdbfa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-unplugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.049 253665 DEBUG oslo_concurrency.lockutils [req-e385733e-a2e3-4386-967e-c6007c6bc721 req-ef546201-e31a-4b8e-927a-db74365fdbfa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.050 253665 DEBUG oslo_concurrency.lockutils [req-e385733e-a2e3-4386-967e-c6007c6bc721 req-ef546201-e31a-4b8e-927a-db74365fdbfa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.050 253665 DEBUG oslo_concurrency.lockutils [req-e385733e-a2e3-4386-967e-c6007c6bc721 req-ef546201-e31a-4b8e-927a-db74365fdbfa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.050 253665 DEBUG nova.compute.manager [req-e385733e-a2e3-4386-967e-c6007c6bc721 req-ef546201-e31a-4b8e-927a-db74365fdbfa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] No waiting events found dispatching network-vif-unplugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.050 253665 DEBUG nova.compute.manager [req-e385733e-a2e3-4386-967e-c6007c6bc721 req-ef546201-e31a-4b8e-927a-db74365fdbfa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-unplugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.056 253665 DEBUG nova.virt.libvirt.vif [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:25:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1215087159',display_name='tempest-ServerActionsTestOtherB-server-1215087159',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1215087159',id=86,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJ/cRG5bfHD3LbYWZfZhBZW64Gzk9NiecmZChn56cNdUeqOvdqm8gZ047E1aOD+/1rWy6Q/20jfwuj+tARiRMK9Fr/axSxMkwZvm5uYPBSn1o0uJaQf1m6OZmN9YqP8SQ==',key_name='tempest-keypair-427391145',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:28:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a246689624d4630a70f69b70d048883',ramdisk_id='',reservation_id='r-exobbdub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-985895222',owner_user_name='tempest-ServerActionsTestOtherB-985895222-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:28:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ce82551204d04546a5ae9c6f99cccfc8',uuid=e4f9440c-7476-4022-8d08-1b3151a9db79,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.057 253665 DEBUG nova.network.os_vif_util [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converting VIF {"id": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "address": "fa:16:3e:38:67:ca", "network": {"id": "e37df2c8-4dc4-418d-92f1-b394537a30da", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-36791520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a246689624d4630a70f69b70d048883", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1fc96be-00", "ovs_interfaceid": "b1fc96be-009e-46a8-829c-b7a0bc42af60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.057 253665 DEBUG nova.network.os_vif_util [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.058 253665 DEBUG os_vif [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:28:46 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f-userdata-shm.mount: Deactivated successfully.
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.061 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.061 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1fc96be-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:28:46 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ec1bcf14e6b9291c965d546b0551f59e40ba033f0d7af433bf80952dca416338-merged.mount: Deactivated successfully.
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.063 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.065 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.068 253665 INFO os_vif [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:67:ca,bridge_name='br-int',has_traffic_filtering=True,id=b1fc96be-009e-46a8-829c-b7a0bc42af60,network=Network(e37df2c8-4dc4-418d-92f1-b394537a30da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1fc96be-00')#033[00m
Nov 22 04:28:46 np0005532048 podman[347680]: 2025-11-22 09:28:46.107502701 +0000 UTC m=+0.194484981 container cleanup ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 22 04:28:46 np0005532048 systemd[1]: libpod-conmon-ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f.scope: Deactivated successfully.
Nov 22 04:28:46 np0005532048 podman[347736]: 2025-11-22 09:28:46.222168364 +0000 UTC m=+0.092261661 container remove ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 22 04:28:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.229 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f2325dc0-2a2b-4077-9a1d-fdd0f4a5bdad]: (4, ('Sat Nov 22 09:28:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da (ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f)\ned7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f\nSat Nov 22 09:28:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da (ed7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f)\ned7d29408f1e012b9cbb36c5aa121671d4d7247eb2b6e604a7dc398df020548f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.232 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b8a5fa17-d6c7-4f43-8b10-4b27304498f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.234 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape37df2c8-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.236 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:46 np0005532048 kernel: tape37df2c8-40: left promiscuous mode
Nov 22 04:28:46 np0005532048 nova_compute[253661]: 2025-11-22 09:28:46.251 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.255 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a244c078-8a2d-494b-8627-e6a358d7eb74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.273 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[feb7f373-9ddc-46b5-970e-e0e2a98231a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.275 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab9ddb9-1805-422b-b1c6-e2a8db2a82b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.297 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[22b42623-3b55-4b31-be65-67cc86ce525c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640459, 'reachable_time': 35848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347750, 'error': None, 'target': 'ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.301 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e37df2c8-4dc4-418d-92f1-b394537a30da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:28:46 np0005532048 systemd[1]: run-netns-ovnmeta\x2de37df2c8\x2d4dc4\x2d418d\x2d92f1\x2db394537a30da.mount: Deactivated successfully.
Nov 22 04:28:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:46.301 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[830283f2-731b-4d07-b4e3-20453e623dc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:28:47 np0005532048 nova_compute[253661]: 2025-11-22 09:28:47.510 253665 INFO nova.virt.libvirt.driver [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Deleting instance files /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79_del#033[00m
Nov 22 04:28:47 np0005532048 nova_compute[253661]: 2025-11-22 09:28:47.511 253665 INFO nova.virt.libvirt.driver [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Deletion of /var/lib/nova/instances/e4f9440c-7476-4022-8d08-1b3151a9db79_del complete#033[00m
Nov 22 04:28:47 np0005532048 nova_compute[253661]: 2025-11-22 09:28:47.568 253665 INFO nova.compute.manager [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Took 2.38 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:28:47 np0005532048 nova_compute[253661]: 2025-11-22 09:28:47.568 253665 DEBUG oslo.service.loopingcall [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:28:47 np0005532048 nova_compute[253661]: 2025-11-22 09:28:47.569 253665 DEBUG nova.compute.manager [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:28:47 np0005532048 nova_compute[253661]: 2025-11-22 09:28:47.569 253665 DEBUG nova.network.neutron [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:28:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 83 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 709 KiB/s rd, 19 KiB/s wr, 127 op/s
Nov 22 04:28:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:28:48 np0005532048 nova_compute[253661]: 2025-11-22 09:28:48.323 253665 DEBUG nova.compute.manager [req-f14e0591-c9f1-4694-afeb-6cdd80e917d9 req-df01da90-db1e-45a8-b9ea-06a659f98eb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:28:48 np0005532048 nova_compute[253661]: 2025-11-22 09:28:48.323 253665 DEBUG oslo_concurrency.lockutils [req-f14e0591-c9f1-4694-afeb-6cdd80e917d9 req-df01da90-db1e-45a8-b9ea-06a659f98eb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:28:48 np0005532048 nova_compute[253661]: 2025-11-22 09:28:48.323 253665 DEBUG oslo_concurrency.lockutils [req-f14e0591-c9f1-4694-afeb-6cdd80e917d9 req-df01da90-db1e-45a8-b9ea-06a659f98eb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:28:48 np0005532048 nova_compute[253661]: 2025-11-22 09:28:48.324 253665 DEBUG oslo_concurrency.lockutils [req-f14e0591-c9f1-4694-afeb-6cdd80e917d9 req-df01da90-db1e-45a8-b9ea-06a659f98eb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:28:48 np0005532048 nova_compute[253661]: 2025-11-22 09:28:48.324 253665 DEBUG nova.compute.manager [req-f14e0591-c9f1-4694-afeb-6cdd80e917d9 req-df01da90-db1e-45a8-b9ea-06a659f98eb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] No waiting events found dispatching network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:28:48 np0005532048 nova_compute[253661]: 2025-11-22 09:28:48.324 253665 WARNING nova.compute.manager [req-f14e0591-c9f1-4694-afeb-6cdd80e917d9 req-df01da90-db1e-45a8-b9ea-06a659f98eb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received unexpected event network-vif-plugged-b1fc96be-009e-46a8-829c-b7a0bc42af60 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:28:48 np0005532048 nova_compute[253661]: 2025-11-22 09:28:48.835 253665 DEBUG nova.network.neutron [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:28:48 np0005532048 nova_compute[253661]: 2025-11-22 09:28:48.859 253665 INFO nova.compute.manager [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Took 1.29 seconds to deallocate network for instance.#033[00m
Nov 22 04:28:48 np0005532048 nova_compute[253661]: 2025-11-22 09:28:48.896 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:28:48 np0005532048 nova_compute[253661]: 2025-11-22 09:28:48.897 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:28:48 np0005532048 nova_compute[253661]: 2025-11-22 09:28:48.962 253665 DEBUG oslo_concurrency.processutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:28:49 np0005532048 nova_compute[253661]: 2025-11-22 09:28:49.028 253665 DEBUG nova.compute.manager [req-519d92e7-a480-4ea1-a98a-d96da91354b9 req-a56b6706-3326-4b14-9116-1c22eae2617e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Received event network-vif-deleted-b1fc96be-009e-46a8-829c-b7a0bc42af60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:28:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:28:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3962236674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:28:49 np0005532048 nova_compute[253661]: 2025-11-22 09:28:49.469 253665 DEBUG oslo_concurrency.processutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:28:49 np0005532048 nova_compute[253661]: 2025-11-22 09:28:49.476 253665 DEBUG nova.compute.provider_tree [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:28:49 np0005532048 nova_compute[253661]: 2025-11-22 09:28:49.490 253665 DEBUG nova.scheduler.client.report [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:28:49 np0005532048 nova_compute[253661]: 2025-11-22 09:28:49.512 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:28:49 np0005532048 nova_compute[253661]: 2025-11-22 09:28:49.548 253665 INFO nova.scheduler.client.report [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Deleted allocations for instance e4f9440c-7476-4022-8d08-1b3151a9db79#033[00m
Nov 22 04:28:49 np0005532048 nova_compute[253661]: 2025-11-22 09:28:49.613 253665 DEBUG oslo_concurrency.lockutils [None req-7fc093d8-8cdf-4d9e-9d41-5b639f9ed0c6 ce82551204d04546a5ae9c6f99cccfc8 8a246689624d4630a70f69b70d048883 - - default default] Lock "e4f9440c-7476-4022-8d08-1b3151a9db79" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:28:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 42 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 17 KiB/s wr, 110 op/s
Nov 22 04:28:50 np0005532048 nova_compute[253661]: 2025-11-22 09:28:50.607 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:51 np0005532048 nova_compute[253661]: 2025-11-22 09:28:51.063 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 42 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 17 KiB/s wr, 110 op/s
Nov 22 04:28:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:28:52
Nov 22 04:28:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:28:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:28:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', '.rgw.root']
Nov 22 04:28:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:28:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:28:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:28:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:28:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:28:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:28:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:28:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:28:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 592 KiB/s rd, 15 KiB/s wr, 103 op/s
Nov 22 04:28:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:28:54.312 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:28:54 np0005532048 nova_compute[253661]: 2025-11-22 09:28:54.804 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:54 np0005532048 nova_compute[253661]: 2025-11-22 09:28:54.891 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:55 np0005532048 nova_compute[253661]: 2025-11-22 09:28:55.556 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803720.5546837, 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:28:55 np0005532048 nova_compute[253661]: 2025-11-22 09:28:55.556 253665 INFO nova.compute.manager [-] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:28:55 np0005532048 nova_compute[253661]: 2025-11-22 09:28:55.577 253665 DEBUG nova.compute.manager [None req-cdba6d72-8ee4-4124-b401-10783f330d34 - - - - - -] [instance: 2b5cb3fb-8c82-432e-a88b-1ca3fef4f208] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:28:55 np0005532048 nova_compute[253661]: 2025-11-22 09:28:55.608 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:28:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:28:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:28:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:28:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:28:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:28:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:28:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:28:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:28:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:28:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 3.7 KiB/s wr, 69 op/s
Nov 22 04:28:56 np0005532048 nova_compute[253661]: 2025-11-22 09:28:56.066 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:28:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 197 KiB/s rd, 1.3 KiB/s wr, 46 op/s
Nov 22 04:28:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:28:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 767 B/s wr, 16 op/s
Nov 22 04:29:00 np0005532048 nova_compute[253661]: 2025-11-22 09:29:00.610 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:01 np0005532048 nova_compute[253661]: 2025-11-22 09:29:01.039 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803726.03845, e4f9440c-7476-4022-8d08-1b3151a9db79 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:29:01 np0005532048 nova_compute[253661]: 2025-11-22 09:29:01.040 253665 INFO nova.compute.manager [-] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:29:01 np0005532048 nova_compute[253661]: 2025-11-22 09:29:01.068 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:01 np0005532048 nova_compute[253661]: 2025-11-22 09:29:01.086 253665 DEBUG nova.compute.manager [None req-7b5b0b7e-7ed4-4277-aa91-5d08589d4d6c - - - - - -] [instance: e4f9440c-7476-4022-8d08-1b3151a9db79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:29:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 426 B/s wr, 9 op/s
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:29:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:29:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:29:03 np0005532048 nova_compute[253661]: 2025-11-22 09:29:03.247 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:29:03 np0005532048 nova_compute[253661]: 2025-11-22 09:29:03.248 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 22 04:29:03 np0005532048 nova_compute[253661]: 2025-11-22 09:29:03.262 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 22 04:29:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 426 B/s wr, 9 op/s
Nov 22 04:29:05 np0005532048 nova_compute[253661]: 2025-11-22 09:29:05.612 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:29:06 np0005532048 nova_compute[253661]: 2025-11-22 09:29:06.070 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:29:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:29:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:29:10 np0005532048 podman[347776]: 2025-11-22 09:29:10.382650754 +0000 UTC m=+0.060759279 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 22 04:29:10 np0005532048 podman[347777]: 2025-11-22 09:29:10.383518726 +0000 UTC m=+0.059889817 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 04:29:10 np0005532048 nova_compute[253661]: 2025-11-22 09:29:10.613 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:11 np0005532048 nova_compute[253661]: 2025-11-22 09:29:11.072 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:11 np0005532048 nova_compute[253661]: 2025-11-22 09:29:11.412 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "db811577-e691-40e3-9e31-1a0a5929133d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:11 np0005532048 nova_compute[253661]: 2025-11-22 09:29:11.413 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:11 np0005532048 nova_compute[253661]: 2025-11-22 09:29:11.428 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:29:11 np0005532048 nova_compute[253661]: 2025-11-22 09:29:11.514 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:11 np0005532048 nova_compute[253661]: 2025-11-22 09:29:11.514 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:11 np0005532048 nova_compute[253661]: 2025-11-22 09:29:11.523 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:29:11 np0005532048 nova_compute[253661]: 2025-11-22 09:29:11.524 253665 INFO nova.compute.claims [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:29:11 np0005532048 nova_compute[253661]: 2025-11-22 09:29:11.660 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:29:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 41 MiB data, 651 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:29:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:29:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3116236503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.152 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.159 253665 DEBUG nova.compute.provider_tree [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.178 253665 DEBUG nova.scheduler.client.report [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.205 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.206 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.268 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.269 253665 DEBUG nova.network.neutron [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.300 253665 INFO nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.316 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:29:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:29:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2029530262' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:29:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:29:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2029530262' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.431 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.433 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.433 253665 INFO nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Creating image(s)#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.467 253665 DEBUG nova.storage.rbd_utils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] rbd image db811577-e691-40e3-9e31-1a0a5929133d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.495 253665 DEBUG nova.storage.rbd_utils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] rbd image db811577-e691-40e3-9e31-1a0a5929133d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.517 253665 DEBUG nova.storage.rbd_utils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] rbd image db811577-e691-40e3-9e31-1a0a5929133d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.521 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.606 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.608 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.609 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.609 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.638 253665 DEBUG nova.storage.rbd_utils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] rbd image db811577-e691-40e3-9e31-1a0a5929133d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.643 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 db811577-e691-40e3-9e31-1a0a5929133d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:29:12 np0005532048 nova_compute[253661]: 2025-11-22 09:29:12.812 253665 DEBUG nova.policy [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0a91006ab4394e10a534a0887a0d170a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aaac6ab98bec41f7ac2ec49229374dc0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:29:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:29:13 np0005532048 nova_compute[253661]: 2025-11-22 09:29:13.646 253665 DEBUG nova.network.neutron [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Successfully created port: f5951cb8-b8f6-4d52-850f-d4bdb04390ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:29:13 np0005532048 nova_compute[253661]: 2025-11-22 09:29:13.661 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 db811577-e691-40e3-9e31-1a0a5929133d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:29:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 45 MiB data, 651 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 178 KiB/s wr, 2 op/s
Nov 22 04:29:13 np0005532048 nova_compute[253661]: 2025-11-22 09:29:13.729 253665 DEBUG nova.storage.rbd_utils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] resizing rbd image db811577-e691-40e3-9e31-1a0a5929133d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:29:14 np0005532048 nova_compute[253661]: 2025-11-22 09:29:14.505 253665 DEBUG nova.objects.instance [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lazy-loading 'migration_context' on Instance uuid db811577-e691-40e3-9e31-1a0a5929133d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:29:14 np0005532048 nova_compute[253661]: 2025-11-22 09:29:14.521 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:29:14 np0005532048 nova_compute[253661]: 2025-11-22 09:29:14.522 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Ensure instance console log exists: /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:29:14 np0005532048 nova_compute[253661]: 2025-11-22 09:29:14.522 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:14 np0005532048 nova_compute[253661]: 2025-11-22 09:29:14.522 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:14 np0005532048 nova_compute[253661]: 2025-11-22 09:29:14.523 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:15 np0005532048 nova_compute[253661]: 2025-11-22 09:29:15.616 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 63 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 8.6 KiB/s rd, 392 KiB/s wr, 14 op/s
Nov 22 04:29:15 np0005532048 nova_compute[253661]: 2025-11-22 09:29:15.712 253665 DEBUG nova.network.neutron [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Successfully updated port: f5951cb8-b8f6-4d52-850f-d4bdb04390ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:29:15 np0005532048 nova_compute[253661]: 2025-11-22 09:29:15.729 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "refresh_cache-db811577-e691-40e3-9e31-1a0a5929133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:29:15 np0005532048 nova_compute[253661]: 2025-11-22 09:29:15.729 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquired lock "refresh_cache-db811577-e691-40e3-9e31-1a0a5929133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:29:15 np0005532048 nova_compute[253661]: 2025-11-22 09:29:15.730 253665 DEBUG nova.network.neutron [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:29:15 np0005532048 nova_compute[253661]: 2025-11-22 09:29:15.963 253665 DEBUG nova.compute.manager [req-a294f444-584b-40e0-83d2-a100f7b8e17c req-beb9bac6-a782-47a4-977a-7c899bb925cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received event network-changed-f5951cb8-b8f6-4d52-850f-d4bdb04390ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:29:15 np0005532048 nova_compute[253661]: 2025-11-22 09:29:15.964 253665 DEBUG nova.compute.manager [req-a294f444-584b-40e0-83d2-a100f7b8e17c req-beb9bac6-a782-47a4-977a-7c899bb925cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Refreshing instance network info cache due to event network-changed-f5951cb8-b8f6-4d52-850f-d4bdb04390ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:29:15 np0005532048 nova_compute[253661]: 2025-11-22 09:29:15.964 253665 DEBUG oslo_concurrency.lockutils [req-a294f444-584b-40e0-83d2-a100f7b8e17c req-beb9bac6-a782-47a4-977a-7c899bb925cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-db811577-e691-40e3-9e31-1a0a5929133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:29:16 np0005532048 nova_compute[253661]: 2025-11-22 09:29:16.073 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:16 np0005532048 nova_compute[253661]: 2025-11-22 09:29:16.141 253665 DEBUG nova.network.neutron [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:29:16 np0005532048 podman[348002]: 2025-11-22 09:29:16.414722868 +0000 UTC m=+0.103586766 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 04:29:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 80 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Nov 22 04:29:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.004 253665 DEBUG nova.network.neutron [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Updating instance_info_cache with network_info: [{"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.043 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Releasing lock "refresh_cache-db811577-e691-40e3-9e31-1a0a5929133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.044 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Instance network_info: |[{"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.046 253665 DEBUG oslo_concurrency.lockutils [req-a294f444-584b-40e0-83d2-a100f7b8e17c req-beb9bac6-a782-47a4-977a-7c899bb925cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-db811577-e691-40e3-9e31-1a0a5929133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.046 253665 DEBUG nova.network.neutron [req-a294f444-584b-40e0-83d2-a100f7b8e17c req-beb9bac6-a782-47a4-977a-7c899bb925cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Refreshing network info cache for port f5951cb8-b8f6-4d52-850f-d4bdb04390ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.049 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Start _get_guest_xml network_info=[{"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.054 253665 WARNING nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.063 253665 DEBUG nova.virt.libvirt.host [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.063 253665 DEBUG nova.virt.libvirt.host [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.067 253665 DEBUG nova.virt.libvirt.host [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.068 253665 DEBUG nova.virt.libvirt.host [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
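The two probes above check for a `cpu` controller first under cgroups v1, then v2; on this host only the v2 probe succeeds. On a cgroup-v2 host the available controllers are listed space-separated in `/sys/fs/cgroup/cgroup.controllers`, so the v2 side of the check reduces to a membership test (a minimal sketch, not nova's exact code):

```python
def has_cgroupsv2_cpu_controller(controllers_text: str) -> bool:
    """Check for the 'cpu' controller in the contents of
    /sys/fs/cgroup/cgroup.controllers (space-separated controller names).
    split() is required: a substring test would wrongly match 'cpuset'."""
    return "cpu" in controllers_text.split()
```

Usage would be `has_cgroupsv2_cpu_controller(open("/sys/fs/cgroup/cgroup.controllers").read())` on a cgroup-v2 host.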
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.068 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.068 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.069 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.069 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.069 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.069 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.069 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.069 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.070 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.070 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.070 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.070 253665 DEBUG nova.virt.hardware [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
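The topology walk above (preferred `0:0:0`, limits `65536:65536:65536`, one vCPU, one possible topology) amounts to enumerating sockets/cores/threads triples whose product equals the vCPU count. A brute-force sketch in the spirit of `nova.virt.hardware` (the real `_get_possible_cpu_topologies` iterates and sorts differently, and applies preference ordering):

```python
def possible_cpu_topologies(vcpus: int,
                            max_sockets: int = 65536,
                            max_cores: int = 65536,
                            max_threads: int = 65536) -> list:
    """Enumerate (sockets, cores, threads) triples that exactly cover
    the vCPU count within the given limits. Simplified sketch."""
    topos = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    topos.append((s, c, t))
    return topos
```

For the 1-vCPU m1.nano flavor this yields the single `(1, 1, 1)` topology the log reports.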
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.072 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:29:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:29:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2338773131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.561 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.590 253665 DEBUG nova.storage.rbd_utils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] rbd image db811577-e691-40e3-9e31-1a0a5929133d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:29:18 np0005532048 nova_compute[253661]: 2025-11-22 09:29:18.595 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:29:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:29:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3613923299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.060 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
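The `ceph mon dump --format=json` calls above are how nova's RBD driver discovers monitor addresses. A sketch of the argument vector being executed and of pulling addresses from the JSON reply (the `mons`/`addr` field names follow the Ceph JSON schema; the sample document in the test is illustrative, not captured from this cluster):

```python
import json


def ceph_mon_dump_cmd(client_id: str, conf: str = "/etc/ceph/ceph.conf") -> list:
    """Argument vector matching the subprocess invocation logged above."""
    return ["ceph", "mon", "dump", "--format=json", "--id", client_id, "--conf", conf]


def mon_addrs(mon_dump_json: str) -> list:
    """Extract monitor addresses from `ceph mon dump --format=json` output."""
    doc = json.loads(mon_dump_json)
    return [m["addr"] for m in doc.get("mons", [])]
```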
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.062 253665 DEBUG nova.virt.libvirt.vif [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:29:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-724968197',display_name='tempest-ServerAddressesNegativeTestJSON-server-724968197',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-724968197',id=92,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aaac6ab98bec41f7ac2ec49229374dc0',ramdisk_id='',reservation_id='r-5s3go2uy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1369579025',owner_u
ser_name='tempest-ServerAddressesNegativeTestJSON-1369579025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:29:12Z,user_data=None,user_id='0a91006ab4394e10a534a0887a0d170a',uuid=db811577-e691-40e3-9e31-1a0a5929133d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.063 253665 DEBUG nova.network.os_vif_util [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Converting VIF {"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.064 253665 DEBUG nova.network.os_vif_util [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:74:7d,bridge_name='br-int',has_traffic_filtering=True,id=f5951cb8-b8f6-4d52-850f-d4bdb04390ad,network=Network(769b5006-dcff-42dc-96bc-e9baa5d2ce51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5951cb8-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.065 253665 DEBUG nova.objects.instance [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lazy-loading 'pci_devices' on Instance uuid db811577-e691-40e3-9e31-1a0a5929133d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.110 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  <uuid>db811577-e691-40e3-9e31-1a0a5929133d</uuid>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  <name>instance-0000005c</name>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerAddressesNegativeTestJSON-server-724968197</nova:name>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:29:18</nova:creationTime>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:29:19 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:        <nova:user uuid="0a91006ab4394e10a534a0887a0d170a">tempest-ServerAddressesNegativeTestJSON-1369579025-project-member</nova:user>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:        <nova:project uuid="aaac6ab98bec41f7ac2ec49229374dc0">tempest-ServerAddressesNegativeTestJSON-1369579025</nova:project>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:        <nova:port uuid="f5951cb8-b8f6-4d52-850f-d4bdb04390ad">
Nov 22 04:29:19 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <entry name="serial">db811577-e691-40e3-9e31-1a0a5929133d</entry>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <entry name="uuid">db811577-e691-40e3-9e31-1a0a5929133d</entry>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/db811577-e691-40e3-9e31-1a0a5929133d_disk">
Nov 22 04:29:19 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:29:19 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/db811577-e691-40e3-9e31-1a0a5929133d_disk.config">
Nov 22 04:29:19 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:29:19 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:33:74:7d"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <target dev="tapf5951cb8-b8"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d/console.log" append="off"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:29:19 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:29:19 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:29:19 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:29:19 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
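The domain document above is emitted by nova's `LibvirtConfigGuest`. The skeleton of such a document can be reproduced with the standard library's ElementTree (a sketch covering only the first few top-level elements; the real config object also emits metadata, sysinfo, os, features, clock, cpu, and devices):

```python
import xml.etree.ElementTree as ET


def minimal_domain_xml(uuid: str, name: str, memory_kib: int, vcpus: int) -> str:
    """Build the opening skeleton of a libvirt <domain type='kvm'> document
    like the one logged above. Memory is in KiB, matching <memory>131072</memory>
    for the 128 MiB m1.nano flavor."""
    domain = ET.Element("domain", type="kvm")
    ET.SubElement(domain, "uuid").text = uuid
    ET.SubElement(domain, "name").text = name
    ET.SubElement(domain, "memory").text = str(memory_kib)
    ET.SubElement(domain, "vcpu").text = str(vcpus)
    return ET.tostring(domain, encoding="unicode")
```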
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.113 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Preparing to wait for external event network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.113 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "db811577-e691-40e3-9e31-1a0a5929133d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.113 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.114 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.114 253665 DEBUG nova.virt.libvirt.vif [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:29:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-724968197',display_name='tempest-ServerAddressesNegativeTestJSON-server-724968197',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-724968197',id=92,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aaac6ab98bec41f7ac2ec49229374dc0',ramdisk_id='',reservation_id='r-5s3go2uy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-136957902
5',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1369579025-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:29:12Z,user_data=None,user_id='0a91006ab4394e10a534a0887a0d170a',uuid=db811577-e691-40e3-9e31-1a0a5929133d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.115 253665 DEBUG nova.network.os_vif_util [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Converting VIF {"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.116 253665 DEBUG nova.network.os_vif_util [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:74:7d,bridge_name='br-int',has_traffic_filtering=True,id=f5951cb8-b8f6-4d52-850f-d4bdb04390ad,network=Network(769b5006-dcff-42dc-96bc-e9baa5d2ce51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5951cb8-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.116 253665 DEBUG os_vif [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:74:7d,bridge_name='br-int',has_traffic_filtering=True,id=f5951cb8-b8f6-4d52-850f-d4bdb04390ad,network=Network(769b5006-dcff-42dc-96bc-e9baa5d2ce51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5951cb8-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.117 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.117 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.118 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.122 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.123 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5951cb8-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.123 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf5951cb8-b8, col_values=(('external_ids', {'iface-id': 'f5951cb8-b8f6-4d52-850f-d4bdb04390ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:74:7d', 'vm-uuid': 'db811577-e691-40e3-9e31-1a0a5929133d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.126 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:19 np0005532048 NetworkManager[48920]: <info>  [1763803759.1264] manager: (tapf5951cb8-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.128 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.133 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.134 253665 INFO os_vif [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:74:7d,bridge_name='br-int',has_traffic_filtering=True,id=f5951cb8-b8f6-4d52-850f-d4bdb04390ad,network=Network(769b5006-dcff-42dc-96bc-e9baa5d2ce51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5951cb8-b8')#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.208 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.208 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.208 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] No VIF found with MAC fa:16:3e:33:74:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.209 253665 INFO nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Using config drive#033[00m
Nov 22 04:29:19 np0005532048 nova_compute[253661]: 2025-11-22 09:29:19.238 253665 DEBUG nova.storage.rbd_utils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] rbd image db811577-e691-40e3-9e31-1a0a5929133d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:29:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.068 253665 INFO nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Creating config drive at /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d/disk.config#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.074 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpttf2w44z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.226 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpttf2w44z" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.255 253665 DEBUG nova.storage.rbd_utils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] rbd image db811577-e691-40e3-9e31-1a0a5929133d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.259 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d/disk.config db811577-e691-40e3-9e31-1a0a5929133d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.431 253665 DEBUG oslo_concurrency.processutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d/disk.config db811577-e691-40e3-9e31-1a0a5929133d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.433 253665 INFO nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Deleting local config drive /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d/disk.config because it was imported into RBD.#033[00m
Nov 22 04:29:20 np0005532048 kernel: tapf5951cb8-b8: entered promiscuous mode
Nov 22 04:29:20 np0005532048 NetworkManager[48920]: <info>  [1763803760.4968] manager: (tapf5951cb8-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/382)
Nov 22 04:29:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:29:20Z|00934|binding|INFO|Claiming lport f5951cb8-b8f6-4d52-850f-d4bdb04390ad for this chassis.
Nov 22 04:29:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:29:20Z|00935|binding|INFO|f5951cb8-b8f6-4d52-850f-d4bdb04390ad: Claiming fa:16:3e:33:74:7d 10.100.0.4
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.498 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.542 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:74:7d 10.100.0.4'], port_security=['fa:16:3e:33:74:7d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'db811577-e691-40e3-9e31-1a0a5929133d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-769b5006-dcff-42dc-96bc-e9baa5d2ce51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaac6ab98bec41f7ac2ec49229374dc0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8fdff023-52d5-4ffb-998e-d0b2022a0bdb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afa164a2-2d18-485c-9cf3-c424c8564fdf, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f5951cb8-b8f6-4d52-850f-d4bdb04390ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:29:20 np0005532048 systemd-machined[215941]: New machine qemu-112-instance-0000005c.
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.544 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f5951cb8-b8f6-4d52-850f-d4bdb04390ad in datapath 769b5006-dcff-42dc-96bc-e9baa5d2ce51 bound to our chassis#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.545 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 769b5006-dcff-42dc-96bc-e9baa5d2ce51#033[00m
Nov 22 04:29:20 np0005532048 systemd[1]: Started Virtual Machine qemu-112-instance-0000005c.
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.563 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[72f7e6b0-293e-4afb-a535-a7706cf6a3de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.565 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap769b5006-d1 in ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.568 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap769b5006-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.568 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e6255843-e986-4de3-aa99-dfe3f7cf2c6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.569 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f55d29af-5c18-4de1-881e-4984383519ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:20 np0005532048 systemd-udevd[348167]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.585 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5b81ff-76e5-4354-8b8a-5193ec265013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:20 np0005532048 NetworkManager[48920]: <info>  [1763803760.5972] device (tapf5951cb8-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:29:20 np0005532048 NetworkManager[48920]: <info>  [1763803760.5982] device (tapf5951cb8-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.608 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:29:20Z|00936|binding|INFO|Setting lport f5951cb8-b8f6-4d52-850f-d4bdb04390ad ovn-installed in OVS
Nov 22 04:29:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:29:20Z|00937|binding|INFO|Setting lport f5951cb8-b8f6-4d52-850f-d4bdb04390ad up in Southbound
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.620 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.619 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f6770d98-49d6-409e-84fb-50f561df00a8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.663 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[75fcc5ea-d96e-4ac6-a311-9b5cd3ab6fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:20 np0005532048 NetworkManager[48920]: <info>  [1763803760.6722] manager: (tap769b5006-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/383)
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.670 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[faa8687e-e2ff-4513-83e0-9af19ed8cb4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.714 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e99040c3-f272-44e1-ae75-46388d7d0132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.718 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[014fbb27-4fac-4a17-bb21-8cbe1853c55e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:20 np0005532048 NetworkManager[48920]: <info>  [1763803760.7515] device (tap769b5006-d0): carrier: link connected
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.757 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d379ac-d240-4095-bba3-b95a1c5909d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.778 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a3974dfa-dda9-48b8-9f52-8c0cbf3b0dff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap769b5006-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:86:2c:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661148, 'reachable_time': 37605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348199, 'error': None, 'target': 'ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.804 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0f89e8d1-9ca9-4b8c-9e19-efcf6617ea23]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe86:2cd1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661148, 'tstamp': 661148}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348200, 'error': None, 'target': 'ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.829 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c317284f-64d3-4fc8-bc3e-68dc9512e52d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap769b5006-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:86:2c:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661148, 'reachable_time': 37605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348201, 'error': None, 'target': 'ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.833 253665 DEBUG nova.network.neutron [req-a294f444-584b-40e0-83d2-a100f7b8e17c req-beb9bac6-a782-47a4-977a-7c899bb925cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Updated VIF entry in instance network info cache for port f5951cb8-b8f6-4d52-850f-d4bdb04390ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.834 253665 DEBUG nova.network.neutron [req-a294f444-584b-40e0-83d2-a100f7b8e17c req-beb9bac6-a782-47a4-977a-7c899bb925cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Updating instance_info_cache with network_info: [{"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.846 253665 DEBUG oslo_concurrency.lockutils [req-a294f444-584b-40e0-83d2-a100f7b8e17c req-beb9bac6-a782-47a4-977a-7c899bb925cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-db811577-e691-40e3-9e31-1a0a5929133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.872 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f42470e7-d659-4ab4-b8cb-c4edbcd76b35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.970 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[78e3de3b-f24a-40b8-a539-93a9f4a9e97a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.972 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap769b5006-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.972 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.972 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap769b5006-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:29:20 np0005532048 kernel: tap769b5006-d0: entered promiscuous mode
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.974 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.978 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.979 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap769b5006-d0, col_values=(('external_ids', {'iface-id': '3a326757-71e3-4dd7-8bdc-9d3406640135'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:29:20 np0005532048 NetworkManager[48920]: <info>  [1763803760.9801] manager: (tap769b5006-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Nov 22 04:29:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:29:20Z|00938|binding|INFO|Releasing lport 3a326757-71e3-4dd7-8bdc-9d3406640135 from this chassis (sb_readonly=0)
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.981 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.982 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.982 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/769b5006-dcff-42dc-96bc-e9baa5d2ce51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/769b5006-dcff-42dc-96bc-e9baa5d2ce51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.987 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1e5d8c-982c-420c-82eb-a4cce598a92b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.988 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-769b5006-dcff-42dc-96bc-e9baa5d2ce51
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/769b5006-dcff-42dc-96bc-e9baa5d2ce51.pid.haproxy
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 769b5006-dcff-42dc-96bc-e9baa5d2ce51
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:29:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:20.989 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51', 'env', 'PROCESS_TAG=haproxy-769b5006-dcff-42dc-96bc-e9baa5d2ce51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/769b5006-dcff-42dc-96bc-e9baa5d2ce51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.990 253665 DEBUG nova.compute.manager [req-835611ca-52fb-4a86-835e-18f2e9a65344 req-86aad518-b8b6-4e8a-813f-e95da4356b6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received event network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.990 253665 DEBUG oslo_concurrency.lockutils [req-835611ca-52fb-4a86-835e-18f2e9a65344 req-86aad518-b8b6-4e8a-813f-e95da4356b6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "db811577-e691-40e3-9e31-1a0a5929133d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.991 253665 DEBUG oslo_concurrency.lockutils [req-835611ca-52fb-4a86-835e-18f2e9a65344 req-86aad518-b8b6-4e8a-813f-e95da4356b6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.991 253665 DEBUG oslo_concurrency.lockutils [req-835611ca-52fb-4a86-835e-18f2e9a65344 req-86aad518-b8b6-4e8a-813f-e95da4356b6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.991 253665 DEBUG nova.compute.manager [req-835611ca-52fb-4a86-835e-18f2e9a65344 req-86aad518-b8b6-4e8a-813f-e95da4356b6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Processing event network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:29:20 np0005532048 nova_compute[253661]: 2025-11-22 09:29:20.996 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.056 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.058 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803761.0557299, db811577-e691-40e3-9e31-1a0a5929133d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.058 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] VM Started (Lifecycle Event)#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.064 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.068 253665 INFO nova.virt.libvirt.driver [-] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Instance spawned successfully.#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.069 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.080 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.085 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.090 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.091 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.091 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.091 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.092 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.092 253665 DEBUG nova.virt.libvirt.driver [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.131 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.132 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803761.0560472, db811577-e691-40e3-9e31-1a0a5929133d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.132 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.163 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.167 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803761.0650063, db811577-e691-40e3-9e31-1a0a5929133d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.167 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.176 253665 INFO nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Took 8.74 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.176 253665 DEBUG nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.185 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.188 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.215 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.234 253665 INFO nova.compute.manager [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Took 9.75 seconds to build instance.#033[00m
Nov 22 04:29:21 np0005532048 nova_compute[253661]: 2025-11-22 09:29:21.249 253665 DEBUG oslo_concurrency.lockutils [None req-7b885a76-f439-4db9-b337-6d1071712e2f 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:21 np0005532048 podman[348275]: 2025-11-22 09:29:21.379642669 +0000 UTC m=+0.056275786 container create 540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 04:29:21 np0005532048 systemd[1]: Started libpod-conmon-540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5.scope.
Nov 22 04:29:21 np0005532048 podman[348275]: 2025-11-22 09:29:21.34665847 +0000 UTC m=+0.023291597 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:29:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:29:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb22e7e7c58e8491e09d953a6e5f1185f7a55b65c910568c494974d374016cd0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:29:21 np0005532048 podman[348275]: 2025-11-22 09:29:21.467784785 +0000 UTC m=+0.144417922 container init 540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:29:21 np0005532048 podman[348275]: 2025-11-22 09:29:21.477253114 +0000 UTC m=+0.153886221 container start 540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 04:29:21 np0005532048 neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51[348291]: [NOTICE]   (348295) : New worker (348297) forked
Nov 22 04:29:21 np0005532048 neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51[348291]: [NOTICE]   (348295) : Loading success.
Nov 22 04:29:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:29:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:29:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:29:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:29:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:29:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:29:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:29:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:29:23 np0005532048 nova_compute[253661]: 2025-11-22 09:29:23.199 253665 DEBUG nova.compute.manager [req-7596471b-6ba5-45bc-aec1-50695d71f880 req-e81ce0be-6ec1-4e6b-8cd0-73c6d3d91430 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received event network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:29:23 np0005532048 nova_compute[253661]: 2025-11-22 09:29:23.200 253665 DEBUG oslo_concurrency.lockutils [req-7596471b-6ba5-45bc-aec1-50695d71f880 req-e81ce0be-6ec1-4e6b-8cd0-73c6d3d91430 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "db811577-e691-40e3-9e31-1a0a5929133d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:23 np0005532048 nova_compute[253661]: 2025-11-22 09:29:23.200 253665 DEBUG oslo_concurrency.lockutils [req-7596471b-6ba5-45bc-aec1-50695d71f880 req-e81ce0be-6ec1-4e6b-8cd0-73c6d3d91430 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:23 np0005532048 nova_compute[253661]: 2025-11-22 09:29:23.201 253665 DEBUG oslo_concurrency.lockutils [req-7596471b-6ba5-45bc-aec1-50695d71f880 req-e81ce0be-6ec1-4e6b-8cd0-73c6d3d91430 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:23 np0005532048 nova_compute[253661]: 2025-11-22 09:29:23.201 253665 DEBUG nova.compute.manager [req-7596471b-6ba5-45bc-aec1-50695d71f880 req-e81ce0be-6ec1-4e6b-8cd0-73c6d3d91430 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] No waiting events found dispatching network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:29:23 np0005532048 nova_compute[253661]: 2025-11-22 09:29:23.202 253665 WARNING nova.compute.manager [req-7596471b-6ba5-45bc-aec1-50695d71f880 req-e81ce0be-6ec1-4e6b-8cd0-73c6d3d91430 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received unexpected event network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad for instance with vm_state active and task_state None.#033[00m
Nov 22 04:29:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.126 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.361 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "db811577-e691-40e3-9e31-1a0a5929133d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.362 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.362 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "db811577-e691-40e3-9e31-1a0a5929133d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.362 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.363 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.364 253665 INFO nova.compute.manager [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Terminating instance#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.366 253665 DEBUG nova.compute.manager [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:29:24 np0005532048 kernel: tapf5951cb8-b8 (unregistering): left promiscuous mode
Nov 22 04:29:24 np0005532048 NetworkManager[48920]: <info>  [1763803764.4171] device (tapf5951cb8-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.430 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:29:24Z|00939|binding|INFO|Releasing lport f5951cb8-b8f6-4d52-850f-d4bdb04390ad from this chassis (sb_readonly=0)
Nov 22 04:29:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:29:24Z|00940|binding|INFO|Setting lport f5951cb8-b8f6-4d52-850f-d4bdb04390ad down in Southbound
Nov 22 04:29:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:29:24Z|00941|binding|INFO|Removing iface tapf5951cb8-b8 ovn-installed in OVS
Nov 22 04:29:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.441 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:74:7d 10.100.0.4'], port_security=['fa:16:3e:33:74:7d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'db811577-e691-40e3-9e31-1a0a5929133d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-769b5006-dcff-42dc-96bc-e9baa5d2ce51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aaac6ab98bec41f7ac2ec49229374dc0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8fdff023-52d5-4ffb-998e-d0b2022a0bdb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afa164a2-2d18-485c-9cf3-c424c8564fdf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f5951cb8-b8f6-4d52-850f-d4bdb04390ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:29:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.442 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f5951cb8-b8f6-4d52-850f-d4bdb04390ad in datapath 769b5006-dcff-42dc-96bc-e9baa5d2ce51 unbound from our chassis#033[00m
Nov 22 04:29:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.443 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 769b5006-dcff-42dc-96bc-e9baa5d2ce51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:29:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.445 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3c67b9-4bc4-4059-877f-fa9dabdad386]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.446 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51 namespace which is not needed anymore#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.449 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:24 np0005532048 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Nov 22 04:29:24 np0005532048 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005c.scope: Consumed 3.854s CPU time.
Nov 22 04:29:24 np0005532048 systemd-machined[215941]: Machine qemu-112-instance-0000005c terminated.
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.589 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.594 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.607 253665 INFO nova.virt.libvirt.driver [-] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Instance destroyed successfully.#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.607 253665 DEBUG nova.objects.instance [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lazy-loading 'resources' on Instance uuid db811577-e691-40e3-9e31-1a0a5929133d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:29:24 np0005532048 neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51[348291]: [NOTICE]   (348295) : haproxy version is 2.8.14-c23fe91
Nov 22 04:29:24 np0005532048 neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51[348291]: [NOTICE]   (348295) : path to executable is /usr/sbin/haproxy
Nov 22 04:29:24 np0005532048 neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51[348291]: [WARNING]  (348295) : Exiting Master process...
Nov 22 04:29:24 np0005532048 neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51[348291]: [WARNING]  (348295) : Exiting Master process...
Nov 22 04:29:24 np0005532048 neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51[348291]: [ALERT]    (348295) : Current worker (348297) exited with code 143 (Terminated)
Nov 22 04:29:24 np0005532048 neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51[348291]: [WARNING]  (348295) : All workers exited. Exiting... (0)
Nov 22 04:29:24 np0005532048 systemd[1]: libpod-540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5.scope: Deactivated successfully.
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.637 253665 DEBUG nova.virt.libvirt.vif [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:29:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-724968197',display_name='tempest-ServerAddressesNegativeTestJSON-server-724968197',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-724968197',id=92,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:29:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aaac6ab98bec41f7ac2ec49229374dc0',ramdisk_id='',reservation_id='r-5s3go2uy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1369579025',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1369579025-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:29:21Z,user_data=None,user_id='0a91006ab4394e10a534a0887a0d170a',uuid=db811577-e691-40e3-9e31-1a0a5929133d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.638 253665 DEBUG nova.network.os_vif_util [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Converting VIF {"id": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "address": "fa:16:3e:33:74:7d", "network": {"id": "769b5006-dcff-42dc-96bc-e9baa5d2ce51", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-529012186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aaac6ab98bec41f7ac2ec49229374dc0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf5951cb8-b8", "ovs_interfaceid": "f5951cb8-b8f6-4d52-850f-d4bdb04390ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.639 253665 DEBUG nova.network.os_vif_util [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:74:7d,bridge_name='br-int',has_traffic_filtering=True,id=f5951cb8-b8f6-4d52-850f-d4bdb04390ad,network=Network(769b5006-dcff-42dc-96bc-e9baa5d2ce51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5951cb8-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.639 253665 DEBUG os_vif [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:74:7d,bridge_name='br-int',has_traffic_filtering=True,id=f5951cb8-b8f6-4d52-850f-d4bdb04390ad,network=Network(769b5006-dcff-42dc-96bc-e9baa5d2ce51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5951cb8-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.641 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.641 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5951cb8-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:29:24 np0005532048 podman[348331]: 2025-11-22 09:29:24.642290828 +0000 UTC m=+0.091999704 container died 540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.647 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.650 253665 INFO os_vif [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:74:7d,bridge_name='br-int',has_traffic_filtering=True,id=f5951cb8-b8f6-4d52-850f-d4bdb04390ad,network=Network(769b5006-dcff-42dc-96bc-e9baa5d2ce51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf5951cb8-b8')#033[00m
Nov 22 04:29:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5-userdata-shm.mount: Deactivated successfully.
Nov 22 04:29:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay-eb22e7e7c58e8491e09d953a6e5f1185f7a55b65c910568c494974d374016cd0-merged.mount: Deactivated successfully.
Nov 22 04:29:24 np0005532048 podman[348331]: 2025-11-22 09:29:24.735803169 +0000 UTC m=+0.185512045 container cleanup 540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:29:24 np0005532048 systemd[1]: libpod-conmon-540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5.scope: Deactivated successfully.
Nov 22 04:29:24 np0005532048 podman[348390]: 2025-11-22 09:29:24.820801486 +0000 UTC m=+0.058608454 container remove 540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:29:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.827 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8037c515-5207-46b6-aa35-286f2517c9f3]: (4, ('Sat Nov 22 09:29:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51 (540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5)\n540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5\nSat Nov 22 09:29:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51 (540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5)\n540770a2a34932e86bd2032a78f98c437d916fe9dddb67a9994599cd5c83a9e5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.830 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[612d635d-56a5-4c7c-a819-a87e8a0ca19a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.832 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap769b5006-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.834 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:24 np0005532048 kernel: tap769b5006-d0: left promiscuous mode
Nov 22 04:29:24 np0005532048 nova_compute[253661]: 2025-11-22 09:29:24.849 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.855 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd1cc3b-33d0-499d-9afb-8ab9c65ade8f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.875 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[723de831-a32c-463b-8be2-480f39eb1fa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.877 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4b468a5f-d0bc-4904-8855-eda45cf0bae3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.897 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf17c8a-1252-4e72-a867-1623febff00b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661139, 'reachable_time': 17089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348405, 'error': None, 'target': 'ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.900 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-769b5006-dcff-42dc-96bc-e9baa5d2ce51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:29:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:24.900 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[f426ee1e-cdc9-40d8-97bd-adb4c1cf1902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:29:24 np0005532048 systemd[1]: run-netns-ovnmeta\x2d769b5006\x2ddcff\x2d42dc\x2d96bc\x2de9baa5d2ce51.mount: Deactivated successfully.
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.176 253665 INFO nova.virt.libvirt.driver [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Deleting instance files /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d_del#033[00m
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.177 253665 INFO nova.virt.libvirt.driver [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Deletion of /var/lib/nova/instances/db811577-e691-40e3-9e31-1a0a5929133d_del complete#033[00m
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.324 253665 DEBUG nova.compute.manager [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received event network-vif-unplugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.325 253665 DEBUG oslo_concurrency.lockutils [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "db811577-e691-40e3-9e31-1a0a5929133d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.325 253665 DEBUG oslo_concurrency.lockutils [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.325 253665 DEBUG oslo_concurrency.lockutils [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.325 253665 DEBUG nova.compute.manager [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] No waiting events found dispatching network-vif-unplugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.326 253665 DEBUG nova.compute.manager [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received event network-vif-unplugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.326 253665 DEBUG nova.compute.manager [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received event network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.326 253665 DEBUG oslo_concurrency.lockutils [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "db811577-e691-40e3-9e31-1a0a5929133d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.326 253665 DEBUG oslo_concurrency.lockutils [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.326 253665 DEBUG oslo_concurrency.lockutils [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.327 253665 DEBUG nova.compute.manager [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] No waiting events found dispatching network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.327 253665 WARNING nova.compute.manager [req-9ff094f6-155b-4ed7-8f23-e75a763915a7 req-bc182b83-0117-48f4-b851-696b13ea2207 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received unexpected event network-vif-plugged-f5951cb8-b8f6-4d52-850f-d4bdb04390ad for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.621 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.6 MiB/s wr, 76 op/s
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.808 253665 INFO nova.compute.manager [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Took 1.44 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.809 253665 DEBUG oslo.service.loopingcall [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.809 253665 DEBUG nova.compute.manager [-] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:29:25 np0005532048 nova_compute[253661]: 2025-11-22 09:29:25.810 253665 DEBUG nova.network.neutron [-] [instance: db811577-e691-40e3-9e31-1a0a5929133d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:29:27 np0005532048 nova_compute[253661]: 2025-11-22 09:29:27.202 253665 DEBUG nova.compute.manager [req-aee83331-773a-4767-b12d-3accc26e6e43 req-1a671b1e-49b2-4830-a105-538276ef7c2b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Received event network-vif-deleted-f5951cb8-b8f6-4d52-850f-d4bdb04390ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:29:27 np0005532048 nova_compute[253661]: 2025-11-22 09:29:27.202 253665 INFO nova.compute.manager [req-aee83331-773a-4767-b12d-3accc26e6e43 req-1a671b1e-49b2-4830-a105-538276ef7c2b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Neutron deleted interface f5951cb8-b8f6-4d52-850f-d4bdb04390ad; detaching it from the instance and deleting it from the info cache#033[00m
Nov 22 04:29:27 np0005532048 nova_compute[253661]: 2025-11-22 09:29:27.203 253665 DEBUG nova.network.neutron [req-aee83331-773a-4767-b12d-3accc26e6e43 req-1a671b1e-49b2-4830-a105-538276ef7c2b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:29:27 np0005532048 nova_compute[253661]: 2025-11-22 09:29:27.275 253665 DEBUG nova.network.neutron [-] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:29:27 np0005532048 nova_compute[253661]: 2025-11-22 09:29:27.290 253665 INFO nova.compute.manager [-] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Took 1.48 seconds to deallocate network for instance.#033[00m
Nov 22 04:29:27 np0005532048 nova_compute[253661]: 2025-11-22 09:29:27.298 253665 DEBUG nova.compute.manager [req-aee83331-773a-4767-b12d-3accc26e6e43 req-1a671b1e-49b2-4830-a105-538276ef7c2b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Detach interface failed, port_id=f5951cb8-b8f6-4d52-850f-d4bdb04390ad, reason: Instance db811577-e691-40e3-9e31-1a0a5929133d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 22 04:29:27 np0005532048 nova_compute[253661]: 2025-11-22 09:29:27.335 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:27 np0005532048 nova_compute[253661]: 2025-11-22 09:29:27.336 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:27 np0005532048 nova_compute[253661]: 2025-11-22 09:29:27.413 253665 DEBUG oslo_concurrency.processutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:29:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 71 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 88 op/s
Nov 22 04:29:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:29:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3123438005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:29:27 np0005532048 nova_compute[253661]: 2025-11-22 09:29:27.900 253665 DEBUG oslo_concurrency.processutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:29:27 np0005532048 nova_compute[253661]: 2025-11-22 09:29:27.907 253665 DEBUG nova.compute.provider_tree [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:29:27 np0005532048 nova_compute[253661]: 2025-11-22 09:29:27.921 253665 DEBUG nova.scheduler.client.report [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:29:27 np0005532048 nova_compute[253661]: 2025-11-22 09:29:27.948 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:29:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:27.972 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:27.972 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:27.972 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:27 np0005532048 nova_compute[253661]: 2025-11-22 09:29:27.979 253665 INFO nova.scheduler.client.report [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Deleted allocations for instance db811577-e691-40e3-9e31-1a0a5929133d#033[00m
Nov 22 04:29:28 np0005532048 nova_compute[253661]: 2025-11-22 09:29:28.038 253665 DEBUG oslo_concurrency.lockutils [None req-49bbe7ed-0bc0-4dd0-87ed-6d0d4e797324 0a91006ab4394e10a534a0887a0d170a aaac6ab98bec41f7ac2ec49229374dc0 - - default default] Lock "db811577-e691-40e3-9e31-1a0a5929133d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:28.198 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:29:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:28.198 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:29:28 np0005532048 nova_compute[253661]: 2025-11-22 09:29:28.199 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:29 np0005532048 nova_compute[253661]: 2025-11-22 09:29:29.645 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 611 KiB/s wr, 109 op/s
Nov 22 04:29:30 np0005532048 nova_compute[253661]: 2025-11-22 09:29:30.623 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:31 np0005532048 nova_compute[253661]: 2025-11-22 09:29:31.236 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:29:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Nov 22 04:29:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Nov 22 04:29:31 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Nov 22 04:29:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 120 op/s
Nov 22 04:29:32 np0005532048 nova_compute[253661]: 2025-11-22 09:29:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:29:32 np0005532048 nova_compute[253661]: 2025-11-22 09:29:32.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:29:32 np0005532048 nova_compute[253661]: 2025-11-22 09:29:32.268 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:29:32 np0005532048 nova_compute[253661]: 2025-11-22 09:29:32.825 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:29:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.0 KiB/s wr, 87 op/s
Nov 22 04:29:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:29:34.201 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:29:34 np0005532048 nova_compute[253661]: 2025-11-22 09:29:34.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:29:34 np0005532048 nova_compute[253661]: 2025-11-22 09:29:34.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:29:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Nov 22 04:29:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Nov 22 04:29:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Nov 22 04:29:34 np0005532048 nova_compute[253661]: 2025-11-22 09:29:34.648 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:35 np0005532048 nova_compute[253661]: 2025-11-22 09:29:35.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:29:35 np0005532048 nova_compute[253661]: 2025-11-22 09:29:35.625 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 3.7 KiB/s wr, 66 op/s
Nov 22 04:29:36 np0005532048 nova_compute[253661]: 2025-11-22 09:29:36.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:29:36 np0005532048 nova_compute[253661]: 2025-11-22 09:29:36.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:29:36 np0005532048 nova_compute[253661]: 2025-11-22 09:29:36.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:29:36 np0005532048 nova_compute[253661]: 2025-11-22 09:29:36.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:29:36 np0005532048 nova_compute[253661]: 2025-11-22 09:29:36.248 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:36 np0005532048 nova_compute[253661]: 2025-11-22 09:29:36.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:36 np0005532048 nova_compute[253661]: 2025-11-22 09:29:36.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:36 np0005532048 nova_compute[253661]: 2025-11-22 09:29:36.250 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:29:36 np0005532048 nova_compute[253661]: 2025-11-22 09:29:36.250 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:29:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Nov 22 04:29:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Nov 22 04:29:36 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Nov 22 04:29:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:29:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2500487013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:29:36 np0005532048 nova_compute[253661]: 2025-11-22 09:29:36.786 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:29:37 np0005532048 nova_compute[253661]: 2025-11-22 09:29:37.014 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:29:37 np0005532048 nova_compute[253661]: 2025-11-22 09:29:37.016 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3916MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:29:37 np0005532048 nova_compute[253661]: 2025-11-22 09:29:37.016 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:37 np0005532048 nova_compute[253661]: 2025-11-22 09:29:37.016 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:37 np0005532048 nova_compute[253661]: 2025-11-22 09:29:37.076 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:29:37 np0005532048 nova_compute[253661]: 2025-11-22 09:29:37.077 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:29:37 np0005532048 nova_compute[253661]: 2025-11-22 09:29:37.095 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:29:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:29:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3011873185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:29:37 np0005532048 nova_compute[253661]: 2025-11-22 09:29:37.582 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:29:37 np0005532048 nova_compute[253661]: 2025-11-22 09:29:37.590 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:29:37 np0005532048 nova_compute[253661]: 2025-11-22 09:29:37.607 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:29:37 np0005532048 nova_compute[253661]: 2025-11-22 09:29:37.648 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:29:37 np0005532048 nova_compute[253661]: 2025-11-22 09:29:37.649 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 5.5 KiB/s wr, 87 op/s
Nov 22 04:29:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:29:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Nov 22 04:29:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Nov 22 04:29:38 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Nov 22 04:29:39 np0005532048 nova_compute[253661]: 2025-11-22 09:29:39.607 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803764.6051538, db811577-e691-40e3-9e31-1a0a5929133d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:29:39 np0005532048 nova_compute[253661]: 2025-11-22 09:29:39.608 253665 INFO nova.compute.manager [-] [instance: db811577-e691-40e3-9e31-1a0a5929133d] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:29:39 np0005532048 nova_compute[253661]: 2025-11-22 09:29:39.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 3 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 295 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 5.2 KiB/s wr, 77 op/s
Nov 22 04:29:39 np0005532048 nova_compute[253661]: 2025-11-22 09:29:39.833 253665 DEBUG nova.compute.manager [None req-7cd6ef78-8ba4-43ca-9cac-89b0fb77aae3 - - - - - -] [instance: db811577-e691-40e3-9e31-1a0a5929133d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:29:40 np0005532048 nova_compute[253661]: 2025-11-22 09:29:40.629 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:29:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:29:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:29:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:29:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:29:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:29:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 03cd26b4-32bf-470c-a3b3-e30aa2f2f6c6 does not exist
Nov 22 04:29:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 86e15cd9-bb53-457c-8c9e-474a0a37cfde does not exist
Nov 22 04:29:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c17d16a4-40ed-4880-bcf5-98ab9b990068 does not exist
Nov 22 04:29:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:29:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:29:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:29:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:29:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:29:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:29:41 np0005532048 podman[348629]: 2025-11-22 09:29:41.059395152 +0000 UTC m=+0.060282607 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 04:29:41 np0005532048 podman[348630]: 2025-11-22 09:29:41.068551892 +0000 UTC m=+0.068959105 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 04:29:41 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:29:41 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:29:41 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:29:41 np0005532048 podman[348782]: 2025-11-22 09:29:41.573355525 +0000 UTC m=+0.027609055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:29:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 3 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 295 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.4 KiB/s wr, 60 op/s
Nov 22 04:29:41 np0005532048 podman[348782]: 2025-11-22 09:29:41.94783087 +0000 UTC m=+0.402084380 container create 8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:29:42 np0005532048 systemd[1]: Started libpod-conmon-8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c.scope.
Nov 22 04:29:42 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:29:42 np0005532048 podman[348782]: 2025-11-22 09:29:42.09612031 +0000 UTC m=+0.550373870 container init 8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:29:42 np0005532048 podman[348782]: 2025-11-22 09:29:42.110657636 +0000 UTC m=+0.564911146 container start 8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_poitras, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:29:42 np0005532048 zealous_poitras[348798]: 167 167
Nov 22 04:29:42 np0005532048 systemd[1]: libpod-8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c.scope: Deactivated successfully.
Nov 22 04:29:42 np0005532048 conmon[348798]: conmon 8fd321adf2da83f617cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c.scope/container/memory.events
Nov 22 04:29:42 np0005532048 podman[348782]: 2025-11-22 09:29:42.196387472 +0000 UTC m=+0.650641002 container attach 8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 04:29:42 np0005532048 podman[348782]: 2025-11-22 09:29:42.197885509 +0000 UTC m=+0.652139049 container died 8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 04:29:42 np0005532048 systemd[1]: var-lib-containers-storage-overlay-679e8721c899451c05957f707b30798075def4850104fac1ec6b02a7cf16701e-merged.mount: Deactivated successfully.
Nov 22 04:29:42 np0005532048 nova_compute[253661]: 2025-11-22 09:29:42.651 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:29:42 np0005532048 podman[348782]: 2025-11-22 09:29:42.690045744 +0000 UTC m=+1.144299264 container remove 8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:29:42 np0005532048 systemd[1]: libpod-conmon-8fd321adf2da83f617cf2f140373ba919922a1cedc89ca73ec15f2d18dcfde8c.scope: Deactivated successfully.
Nov 22 04:29:42 np0005532048 podman[348823]: 2025-11-22 09:29:42.87879943 +0000 UTC m=+0.055387983 container create 04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 04:29:42 np0005532048 systemd[1]: Started libpod-conmon-04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a.scope.
Nov 22 04:29:42 np0005532048 podman[348823]: 2025-11-22 09:29:42.851617687 +0000 UTC m=+0.028206340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:29:42 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:29:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59c585c97bfae42ffc1478883eb1dc7836ee1d1e860f2afac0fd2fdeb534837/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:29:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59c585c97bfae42ffc1478883eb1dc7836ee1d1e860f2afac0fd2fdeb534837/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:29:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59c585c97bfae42ffc1478883eb1dc7836ee1d1e860f2afac0fd2fdeb534837/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:29:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59c585c97bfae42ffc1478883eb1dc7836ee1d1e860f2afac0fd2fdeb534837/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:29:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59c585c97bfae42ffc1478883eb1dc7836ee1d1e860f2afac0fd2fdeb534837/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:29:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:29:42 np0005532048 podman[348823]: 2025-11-22 09:29:42.972947008 +0000 UTC m=+0.149535571 container init 04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:29:42 np0005532048 podman[348823]: 2025-11-22 09:29:42.979944264 +0000 UTC m=+0.156532807 container start 04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:29:42 np0005532048 podman[348823]: 2025-11-22 09:29:42.985533914 +0000 UTC m=+0.162122457 container attach 04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:29:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 4.4 KiB/s wr, 87 op/s
Nov 22 04:29:44 np0005532048 crazy_banach[348839]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:29:44 np0005532048 crazy_banach[348839]: --> relative data size: 1.0
Nov 22 04:29:44 np0005532048 crazy_banach[348839]: --> All data devices are unavailable
Nov 22 04:29:44 np0005532048 systemd[1]: libpod-04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a.scope: Deactivated successfully.
Nov 22 04:29:44 np0005532048 systemd[1]: libpod-04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a.scope: Consumed 1.083s CPU time.
Nov 22 04:29:44 np0005532048 podman[348823]: 2025-11-22 09:29:44.103018073 +0000 UTC m=+1.279606656 container died 04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 04:29:44 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f59c585c97bfae42ffc1478883eb1dc7836ee1d1e860f2afac0fd2fdeb534837-merged.mount: Deactivated successfully.
Nov 22 04:29:44 np0005532048 podman[348823]: 2025-11-22 09:29:44.195214121 +0000 UTC m=+1.371802674 container remove 04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 04:29:44 np0005532048 systemd[1]: libpod-conmon-04c9f7fa9eb69cca976f8744418dda4cd68531737a38a18b0ac88d1ecd51ec2a.scope: Deactivated successfully.
Nov 22 04:29:44 np0005532048 nova_compute[253661]: 2025-11-22 09:29:44.222 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:29:44 np0005532048 nova_compute[253661]: 2025-11-22 09:29:44.654 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:44 np0005532048 podman[349020]: 2025-11-22 09:29:44.814085002 +0000 UTC m=+0.023575033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:29:44 np0005532048 podman[349020]: 2025-11-22 09:29:44.910373383 +0000 UTC m=+0.119863394 container create e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:29:45 np0005532048 systemd[1]: Started libpod-conmon-e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205.scope.
Nov 22 04:29:45 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:29:45 np0005532048 podman[349020]: 2025-11-22 09:29:45.508132244 +0000 UTC m=+0.717622285 container init e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 04:29:45 np0005532048 podman[349020]: 2025-11-22 09:29:45.518036153 +0000 UTC m=+0.727526164 container start e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:29:45 np0005532048 relaxed_bell[349036]: 167 167
Nov 22 04:29:45 np0005532048 systemd[1]: libpod-e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205.scope: Deactivated successfully.
Nov 22 04:29:45 np0005532048 podman[349020]: 2025-11-22 09:29:45.564347807 +0000 UTC m=+0.773837828 container attach e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:29:45 np0005532048 podman[349020]: 2025-11-22 09:29:45.565184909 +0000 UTC m=+0.774674950 container died e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:29:45 np0005532048 nova_compute[253661]: 2025-11-22 09:29:45.630 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 4.2 KiB/s wr, 79 op/s
Nov 22 04:29:46 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1ce6d33bfd7537f9406b69f80e01314817ca25645ce6ed4dd35d4eff4ea1997f-merged.mount: Deactivated successfully.
Nov 22 04:29:46 np0005532048 podman[349020]: 2025-11-22 09:29:46.933017133 +0000 UTC m=+2.142507184 container remove e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 04:29:47 np0005532048 systemd[1]: libpod-conmon-e2c263ed7bc5df023d51d9cdd06b1f8a429e6644b0b406a36c6d9ebdb47be205.scope: Deactivated successfully.
Nov 22 04:29:47 np0005532048 podman[349066]: 2025-11-22 09:29:47.097426376 +0000 UTC m=+0.023079710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:29:47 np0005532048 podman[349066]: 2025-11-22 09:29:47.405967015 +0000 UTC m=+0.331620329 container create 4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_stonebraker, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:29:47 np0005532048 systemd[1]: Started libpod-conmon-4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283.scope.
Nov 22 04:29:47 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:29:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27058decd2d6554a8c0ac45ac53ec135e27d0f40618c3c3ad8cabfc86b3aff4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:29:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27058decd2d6554a8c0ac45ac53ec135e27d0f40618c3c3ad8cabfc86b3aff4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:29:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27058decd2d6554a8c0ac45ac53ec135e27d0f40618c3c3ad8cabfc86b3aff4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:29:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27058decd2d6554a8c0ac45ac53ec135e27d0f40618c3c3ad8cabfc86b3aff4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:29:47 np0005532048 podman[349055]: 2025-11-22 09:29:47.571710673 +0000 UTC m=+0.530556832 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:29:47 np0005532048 podman[349066]: 2025-11-22 09:29:47.593617004 +0000 UTC m=+0.519270338 container init 4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:29:47 np0005532048 podman[349066]: 2025-11-22 09:29:47.601742397 +0000 UTC m=+0.527395711 container start 4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 04:29:47 np0005532048 podman[349066]: 2025-11-22 09:29:47.612372245 +0000 UTC m=+0.538025589 container attach 4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_stonebraker, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:29:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.7 KiB/s wr, 42 op/s
Nov 22 04:29:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:29:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Nov 22 04:29:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Nov 22 04:29:48 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]: {
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:    "0": [
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:        {
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "devices": [
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "/dev/loop3"
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            ],
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "lv_name": "ceph_lv0",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "lv_size": "21470642176",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "name": "ceph_lv0",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "tags": {
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.cluster_name": "ceph",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.crush_device_class": "",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.encrypted": "0",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.osd_id": "0",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.type": "block",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.vdo": "0"
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            },
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "type": "block",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "vg_name": "ceph_vg0"
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:        }
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:    ],
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:    "1": [
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:        {
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "devices": [
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "/dev/loop4"
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            ],
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "lv_name": "ceph_lv1",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "lv_size": "21470642176",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "name": "ceph_lv1",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "tags": {
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.cluster_name": "ceph",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.crush_device_class": "",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.encrypted": "0",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.osd_id": "1",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.type": "block",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.vdo": "0"
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            },
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "type": "block",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "vg_name": "ceph_vg1"
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:        }
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:    ],
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:    "2": [
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:        {
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "devices": [
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "/dev/loop5"
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            ],
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "lv_name": "ceph_lv2",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "lv_size": "21470642176",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "name": "ceph_lv2",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "tags": {
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.cluster_name": "ceph",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.crush_device_class": "",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.encrypted": "0",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.osd_id": "2",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.type": "block",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:                "ceph.vdo": "0"
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            },
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "type": "block",
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:            "vg_name": "ceph_vg2"
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:        }
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]:    ]
Nov 22 04:29:48 np0005532048 nostalgic_stonebraker[349100]: }
Nov 22 04:29:48 np0005532048 systemd[1]: libpod-4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283.scope: Deactivated successfully.
Nov 22 04:29:48 np0005532048 podman[349066]: 2025-11-22 09:29:48.471088397 +0000 UTC m=+1.396741711 container died 4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 04:29:48 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d27058decd2d6554a8c0ac45ac53ec135e27d0f40618c3c3ad8cabfc86b3aff4-merged.mount: Deactivated successfully.
Nov 22 04:29:48 np0005532048 podman[349066]: 2025-11-22 09:29:48.556113315 +0000 UTC m=+1.481766629 container remove 4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:29:48 np0005532048 systemd[1]: libpod-conmon-4baa7c0e7858582f106d4c105da341cf7dfd96b4ea46cf0c9474d76e68075283.scope: Deactivated successfully.
Nov 22 04:29:49 np0005532048 podman[349264]: 2025-11-22 09:29:49.213967727 +0000 UTC m=+0.044694225 container create 6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:29:49 np0005532048 systemd[1]: Started libpod-conmon-6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463.scope.
Nov 22 04:29:49 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:29:49 np0005532048 podman[349264]: 2025-11-22 09:29:49.287400273 +0000 UTC m=+0.118126791 container init 6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 04:29:49 np0005532048 podman[349264]: 2025-11-22 09:29:49.191886871 +0000 UTC m=+0.022613389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:29:49 np0005532048 podman[349264]: 2025-11-22 09:29:49.294364508 +0000 UTC m=+0.125091016 container start 6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carson, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 04:29:49 np0005532048 crazy_carson[349280]: 167 167
Nov 22 04:29:49 np0005532048 systemd[1]: libpod-6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463.scope: Deactivated successfully.
Nov 22 04:29:49 np0005532048 podman[349264]: 2025-11-22 09:29:49.303443507 +0000 UTC m=+0.134170015 container attach 6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 04:29:49 np0005532048 podman[349264]: 2025-11-22 09:29:49.304014401 +0000 UTC m=+0.134740889 container died 6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:29:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Nov 22 04:29:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Nov 22 04:29:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Nov 22 04:29:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay-74cce09cd0ce7529b759ddf345369b6f5d06397cf770b1325e922c3619d5b5cc-merged.mount: Deactivated successfully.
Nov 22 04:29:49 np0005532048 podman[349264]: 2025-11-22 09:29:49.365415095 +0000 UTC m=+0.196141593 container remove 6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:29:49 np0005532048 systemd[1]: libpod-conmon-6742f86cd6e887046afe176844745dbdea118bbb5e8caa8b774181ef9ec70463.scope: Deactivated successfully.
Nov 22 04:29:49 np0005532048 podman[349304]: 2025-11-22 09:29:49.529724436 +0000 UTC m=+0.044263864 container create 6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:29:49 np0005532048 systemd[1]: Started libpod-conmon-6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a.scope.
Nov 22 04:29:49 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:29:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c776123cb437cd0e1f53b0310c479218e905972665ea03ba6825d5e86f66c021/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:29:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c776123cb437cd0e1f53b0310c479218e905972665ea03ba6825d5e86f66c021/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:29:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c776123cb437cd0e1f53b0310c479218e905972665ea03ba6825d5e86f66c021/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:29:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c776123cb437cd0e1f53b0310c479218e905972665ea03ba6825d5e86f66c021/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:29:49 np0005532048 podman[349304]: 2025-11-22 09:29:49.511457057 +0000 UTC m=+0.025996305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:29:49 np0005532048 podman[349304]: 2025-11-22 09:29:49.615740729 +0000 UTC m=+0.130279977 container init 6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_perlman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 04:29:49 np0005532048 podman[349304]: 2025-11-22 09:29:49.625778841 +0000 UTC m=+0.140318069 container start 6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:29:49 np0005532048 podman[349304]: 2025-11-22 09:29:49.634945112 +0000 UTC m=+0.149484340 container attach 6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_perlman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 04:29:49 np0005532048 nova_compute[253661]: 2025-11-22 09:29:49.657 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 4.2 KiB/s wr, 61 op/s
Nov 22 04:29:50 np0005532048 nova_compute[253661]: 2025-11-22 09:29:50.633 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]: {
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:        "osd_id": 1,
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:        "type": "bluestore"
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:    },
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:        "osd_id": 0,
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:        "type": "bluestore"
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:    },
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:        "osd_id": 2,
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:        "type": "bluestore"
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]:    }
Nov 22 04:29:50 np0005532048 upbeat_perlman[349321]: }
Nov 22 04:29:50 np0005532048 systemd[1]: libpod-6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a.scope: Deactivated successfully.
Nov 22 04:29:50 np0005532048 systemd[1]: libpod-6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a.scope: Consumed 1.080s CPU time.
Nov 22 04:29:50 np0005532048 podman[349304]: 2025-11-22 09:29:50.704944127 +0000 UTC m=+1.219483365 container died 6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:29:50 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c776123cb437cd0e1f53b0310c479218e905972665ea03ba6825d5e86f66c021-merged.mount: Deactivated successfully.
Nov 22 04:29:50 np0005532048 podman[349304]: 2025-11-22 09:29:50.855901463 +0000 UTC m=+1.370440721 container remove 6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 04:29:50 np0005532048 systemd[1]: libpod-conmon-6bdeb4c21d3cd83b04f424ef61811520938a71f6a6356121618cf4233f05dc8a.scope: Deactivated successfully.
Nov 22 04:29:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:29:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:29:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:29:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:29:50 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 2bf1b0ba-ca67-4ccd-8435-ce83fd2e1188 does not exist
Nov 22 04:29:50 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 8c8c1a3c-08a8-4e7d-b085-102aec6d44a6 does not exist
Nov 22 04:29:51 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:29:51 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:29:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.9 KiB/s wr, 27 op/s
Nov 22 04:29:51 np0005532048 nova_compute[253661]: 2025-11-22 09:29:51.989 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:51 np0005532048 nova_compute[253661]: 2025-11-22 09:29:51.990 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:52 np0005532048 nova_compute[253661]: 2025-11-22 09:29:52.006 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:29:52 np0005532048 nova_compute[253661]: 2025-11-22 09:29:52.167 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:52 np0005532048 nova_compute[253661]: 2025-11-22 09:29:52.168 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:52 np0005532048 nova_compute[253661]: 2025-11-22 09:29:52.179 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:29:52 np0005532048 nova_compute[253661]: 2025-11-22 09:29:52.180 253665 INFO nova.compute.claims [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:29:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:29:52
Nov 22 04:29:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:29:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:29:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.mgr']
Nov 22 04:29:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:29:52 np0005532048 nova_compute[253661]: 2025-11-22 09:29:52.277 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:29:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:29:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:29:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:29:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:29:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:29:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:29:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:29:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2728926225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:29:52 np0005532048 nova_compute[253661]: 2025-11-22 09:29:52.796 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:29:52 np0005532048 nova_compute[253661]: 2025-11-22 09:29:52.806 253665 DEBUG nova.compute.provider_tree [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:29:52 np0005532048 nova_compute[253661]: 2025-11-22 09:29:52.830 253665 DEBUG nova.scheduler.client.report [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:29:52 np0005532048 nova_compute[253661]: 2025-11-22 09:29:52.855 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:52 np0005532048 nova_compute[253661]: 2025-11-22 09:29:52.921 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "39fbf9e5-5d3a-4211-8ce0-7d3b2c7a90d2" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:52 np0005532048 nova_compute[253661]: 2025-11-22 09:29:52.921 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "39fbf9e5-5d3a-4211-8ce0-7d3b2c7a90d2" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:52 np0005532048 nova_compute[253661]: 2025-11-22 09:29:52.928 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "39fbf9e5-5d3a-4211-8ce0-7d3b2c7a90d2" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:52 np0005532048 nova_compute[253661]: 2025-11-22 09:29:52.929 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:29:52 np0005532048 nova_compute[253661]: 2025-11-22 09:29:52.968 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:29:52 np0005532048 nova_compute[253661]: 2025-11-22 09:29:52.969 253665 DEBUG nova.network.neutron [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:29:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:29:52 np0005532048 nova_compute[253661]: 2025-11-22 09:29:52.990 253665 INFO nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:29:53 np0005532048 nova_compute[253661]: 2025-11-22 09:29:53.007 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:29:53 np0005532048 nova_compute[253661]: 2025-11-22 09:29:53.104 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:29:53 np0005532048 nova_compute[253661]: 2025-11-22 09:29:53.106 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:29:53 np0005532048 nova_compute[253661]: 2025-11-22 09:29:53.107 253665 INFO nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Creating image(s)#033[00m
Nov 22 04:29:53 np0005532048 nova_compute[253661]: 2025-11-22 09:29:53.134 253665 DEBUG nova.storage.rbd_utils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] rbd image 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:29:53 np0005532048 nova_compute[253661]: 2025-11-22 09:29:53.162 253665 DEBUG nova.storage.rbd_utils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] rbd image 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:29:53 np0005532048 nova_compute[253661]: 2025-11-22 09:29:53.186 253665 DEBUG nova.storage.rbd_utils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] rbd image 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:29:53 np0005532048 nova_compute[253661]: 2025-11-22 09:29:53.191 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:29:53 np0005532048 nova_compute[253661]: 2025-11-22 09:29:53.281 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:29:53 np0005532048 nova_compute[253661]: 2025-11-22 09:29:53.282 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:53 np0005532048 nova_compute[253661]: 2025-11-22 09:29:53.283 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:53 np0005532048 nova_compute[253661]: 2025-11-22 09:29:53.283 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:53 np0005532048 nova_compute[253661]: 2025-11-22 09:29:53.308 253665 DEBUG nova.storage.rbd_utils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] rbd image 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:29:53 np0005532048 nova_compute[253661]: 2025-11-22 09:29:53.315 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:29:53 np0005532048 nova_compute[253661]: 2025-11-22 09:29:53.646 253665 DEBUG nova.policy [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a42c7b8d01c4f8e8dfbb1a0ce8d230d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '664bf3b26d414971a1d337e3eb9567e0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:29:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 3.4 KiB/s wr, 35 op/s
Nov 22 04:29:54 np0005532048 nova_compute[253661]: 2025-11-22 09:29:54.334 253665 DEBUG nova.network.neutron [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Successfully created port: 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:29:54 np0005532048 nova_compute[253661]: 2025-11-22 09:29:54.697 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:55 np0005532048 nova_compute[253661]: 2025-11-22 09:29:55.375 253665 DEBUG nova.network.neutron [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Successfully updated port: 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:29:55 np0005532048 nova_compute[253661]: 2025-11-22 09:29:55.391 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "refresh_cache-4a826b3b-aa3a-40c4-a85d-930239bc78d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:29:55 np0005532048 nova_compute[253661]: 2025-11-22 09:29:55.392 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquired lock "refresh_cache-4a826b3b-aa3a-40c4-a85d-930239bc78d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:29:55 np0005532048 nova_compute[253661]: 2025-11-22 09:29:55.392 253665 DEBUG nova.network.neutron [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:29:55 np0005532048 nova_compute[253661]: 2025-11-22 09:29:55.586 253665 DEBUG nova.compute.manager [req-fc6517c9-4b27-48a1-8caa-523daa050c9f req-87b9b8bd-3a39-4ee2-8726-c2c1b644ea07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Received event network-changed-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:29:55 np0005532048 nova_compute[253661]: 2025-11-22 09:29:55.587 253665 DEBUG nova.compute.manager [req-fc6517c9-4b27-48a1-8caa-523daa050c9f req-87b9b8bd-3a39-4ee2-8726-c2c1b644ea07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Refreshing instance network info cache due to event network-changed-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:29:55 np0005532048 nova_compute[253661]: 2025-11-22 09:29:55.587 253665 DEBUG oslo_concurrency.lockutils [req-fc6517c9-4b27-48a1-8caa-523daa050c9f req-87b9b8bd-3a39-4ee2-8726-c2c1b644ea07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4a826b3b-aa3a-40c4-a85d-930239bc78d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:29:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:29:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:29:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:29:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:29:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:29:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:29:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:29:55 np0005532048 nova_compute[253661]: 2025-11-22 09:29:55.634 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 4.1 KiB/s wr, 50 op/s
Nov 22 04:29:55 np0005532048 nova_compute[253661]: 2025-11-22 09:29:55.709 253665 DEBUG nova.network.neutron [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:29:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:29:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:29:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:29:56 np0005532048 nova_compute[253661]: 2025-11-22 09:29:56.935 253665 DEBUG nova.network.neutron [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Updating instance_info_cache with network_info: [{"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:29:56 np0005532048 nova_compute[253661]: 2025-11-22 09:29:56.959 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Releasing lock "refresh_cache-4a826b3b-aa3a-40c4-a85d-930239bc78d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:29:56 np0005532048 nova_compute[253661]: 2025-11-22 09:29:56.960 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Instance network_info: |[{"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:29:56 np0005532048 nova_compute[253661]: 2025-11-22 09:29:56.961 253665 DEBUG oslo_concurrency.lockutils [req-fc6517c9-4b27-48a1-8caa-523daa050c9f req-87b9b8bd-3a39-4ee2-8726-c2c1b644ea07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4a826b3b-aa3a-40c4-a85d-930239bc78d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:29:56 np0005532048 nova_compute[253661]: 2025-11-22 09:29:56.961 253665 DEBUG nova.network.neutron [req-fc6517c9-4b27-48a1-8caa-523daa050c9f req-87b9b8bd-3a39-4ee2-8726-c2c1b644ea07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Refreshing network info cache for port 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:29:57 np0005532048 nova_compute[253661]: 2025-11-22 09:29:57.573 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:29:57 np0005532048 nova_compute[253661]: 2025-11-22 09:29:57.649 253665 DEBUG nova.storage.rbd_utils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] resizing rbd image 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:29:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 58 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 442 KiB/s wr, 42 op/s
Nov 22 04:29:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:29:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Nov 22 04:29:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Nov 22 04:29:58 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Nov 22 04:29:58 np0005532048 nova_compute[253661]: 2025-11-22 09:29:58.689 253665 DEBUG nova.network.neutron [req-fc6517c9-4b27-48a1-8caa-523daa050c9f req-87b9b8bd-3a39-4ee2-8726-c2c1b644ea07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Updated VIF entry in instance network info cache for port 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:29:58 np0005532048 nova_compute[253661]: 2025-11-22 09:29:58.690 253665 DEBUG nova.network.neutron [req-fc6517c9-4b27-48a1-8caa-523daa050c9f req-87b9b8bd-3a39-4ee2-8726-c2c1b644ea07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Updating instance_info_cache with network_info: [{"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:29:58 np0005532048 nova_compute[253661]: 2025-11-22 09:29:58.704 253665 DEBUG oslo_concurrency.lockutils [req-fc6517c9-4b27-48a1-8caa-523daa050c9f req-87b9b8bd-3a39-4ee2-8726-c2c1b644ea07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4a826b3b-aa3a-40c4-a85d-930239bc78d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.693 253665 DEBUG nova.objects.instance [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lazy-loading 'migration_context' on Instance uuid 4a826b3b-aa3a-40c4-a85d-930239bc78d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:29:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.699 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.713 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.713 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Ensure instance console log exists: /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.714 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.714 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.714 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.717 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Start _get_guest_xml network_info=[{"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.724 253665 WARNING nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.730 253665 DEBUG nova.virt.libvirt.host [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.731 253665 DEBUG nova.virt.libvirt.host [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.735 253665 DEBUG nova.virt.libvirt.host [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.735 253665 DEBUG nova.virt.libvirt.host [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.736 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.736 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.737 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.737 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.737 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.738 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.738 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.738 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.739 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.739 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.739 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.740 253665 DEBUG nova.virt.hardware [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:29:59 np0005532048 nova_compute[253661]: 2025-11-22 09:29:59.744 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:30:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2487422855' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:30:00 np0005532048 nova_compute[253661]: 2025-11-22 09:30:00.246 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:00 np0005532048 nova_compute[253661]: 2025-11-22 09:30:00.279 253665 DEBUG nova.storage.rbd_utils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] rbd image 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:00 np0005532048 nova_compute[253661]: 2025-11-22 09:30:00.286 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:00 np0005532048 nova_compute[253661]: 2025-11-22 09:30:00.638 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:30:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/272232897' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:30:00 np0005532048 nova_compute[253661]: 2025-11-22 09:30:00.735 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:00 np0005532048 nova_compute[253661]: 2025-11-22 09:30:00.737 253665 DEBUG nova.virt.libvirt.vif [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:29:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-528470523',display_name='tempest-ServerGroupTestJSON-server-528470523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-528470523',id=93,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='664bf3b26d414971a1d337e3eb9567e0',ramdisk_id='',reservation_id='r-0qsyzo83',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1285465503',owner_user_name='tempest-ServerGroupTestJSON-1285465503
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:29:53Z,user_data=None,user_id='7a42c7b8d01c4f8e8dfbb1a0ce8d230d',uuid=4a826b3b-aa3a-40c4-a85d-930239bc78d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:30:00 np0005532048 nova_compute[253661]: 2025-11-22 09:30:00.737 253665 DEBUG nova.network.os_vif_util [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Converting VIF {"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:30:00 np0005532048 nova_compute[253661]: 2025-11-22 09:30:00.738 253665 DEBUG nova.network.os_vif_util [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:74,bridge_name='br-int',has_traffic_filtering=True,id=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1,network=Network(b2815298-27cc-4036-b985-55e1f44ee473),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a1647d5-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:30:00 np0005532048 nova_compute[253661]: 2025-11-22 09:30:00.740 253665 DEBUG nova.objects.instance [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4a826b3b-aa3a-40c4-a85d-930239bc78d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:30:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Nov 22 04:30:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Nov 22 04:30:01 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Nov 22 04:30:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 MiB/s wr, 55 op/s
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.846 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  <uuid>4a826b3b-aa3a-40c4-a85d-930239bc78d6</uuid>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  <name>instance-0000005d</name>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerGroupTestJSON-server-528470523</nova:name>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:29:59</nova:creationTime>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:30:01 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:        <nova:user uuid="7a42c7b8d01c4f8e8dfbb1a0ce8d230d">tempest-ServerGroupTestJSON-1285465503-project-member</nova:user>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:        <nova:project uuid="664bf3b26d414971a1d337e3eb9567e0">tempest-ServerGroupTestJSON-1285465503</nova:project>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:        <nova:port uuid="1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1">
Nov 22 04:30:01 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <entry name="serial">4a826b3b-aa3a-40c4-a85d-930239bc78d6</entry>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <entry name="uuid">4a826b3b-aa3a-40c4-a85d-930239bc78d6</entry>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk">
Nov 22 04:30:01 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:30:01 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk.config">
Nov 22 04:30:01 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:30:01 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:fc:8b:74"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <target dev="tap1a1647d5-7a"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6/console.log" append="off"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:30:01 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:30:01 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:30:01 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:30:01 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.847 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Preparing to wait for external event network-vif-plugged-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.848 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.848 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.848 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.849 253665 DEBUG nova.virt.libvirt.vif [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:29:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-528470523',display_name='tempest-ServerGroupTestJSON-server-528470523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-528470523',id=93,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='664bf3b26d414971a1d337e3eb9567e0',ramdisk_id='',reservation_id='r-0qsyzo83',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1285465503',owner_user_name='tempest-ServerGroupTestJSON-
1285465503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:29:53Z,user_data=None,user_id='7a42c7b8d01c4f8e8dfbb1a0ce8d230d',uuid=4a826b3b-aa3a-40c4-a85d-930239bc78d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.849 253665 DEBUG nova.network.os_vif_util [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Converting VIF {"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.850 253665 DEBUG nova.network.os_vif_util [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:74,bridge_name='br-int',has_traffic_filtering=True,id=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1,network=Network(b2815298-27cc-4036-b985-55e1f44ee473),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a1647d5-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.850 253665 DEBUG os_vif [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:74,bridge_name='br-int',has_traffic_filtering=True,id=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1,network=Network(b2815298-27cc-4036-b985-55e1f44ee473),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a1647d5-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.851 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.851 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.851 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.855 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.856 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a1647d5-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.856 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1a1647d5-7a, col_values=(('external_ids', {'iface-id': '1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:8b:74', 'vm-uuid': '4a826b3b-aa3a-40c4-a85d-930239bc78d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:01 np0005532048 NetworkManager[48920]: <info>  [1763803801.8593] manager: (tap1a1647d5-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.858 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.859 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.869 253665 INFO os_vif [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:74,bridge_name='br-int',has_traffic_filtering=True,id=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1,network=Network(b2815298-27cc-4036-b985-55e1f44ee473),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a1647d5-7a')#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.957 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.958 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.958 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] No VIF found with MAC fa:16:3e:fc:8b:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:30:01 np0005532048 nova_compute[253661]: 2025-11-22 09:30:01.958 253665 INFO nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Using config drive#033[00m
Nov 22 04:30:02 np0005532048 nova_compute[253661]: 2025-11-22 09:30:02.073 253665 DEBUG nova.storage.rbd_utils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] rbd image 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:02 np0005532048 nova_compute[253661]: 2025-11-22 09:30:02.634 253665 INFO nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Creating config drive at /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6/disk.config#033[00m
Nov 22 04:30:02 np0005532048 nova_compute[253661]: 2025-11-22 09:30:02.641 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7f9av7ac execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:30:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:30:02 np0005532048 nova_compute[253661]: 2025-11-22 09:30:02.793 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7f9av7ac" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:02 np0005532048 nova_compute[253661]: 2025-11-22 09:30:02.830 253665 DEBUG nova.storage.rbd_utils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] rbd image 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:02 np0005532048 nova_compute[253661]: 2025-11-22 09:30:02.836 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6/disk.config 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:30:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.7 MiB/s wr, 56 op/s
Nov 22 04:30:04 np0005532048 nova_compute[253661]: 2025-11-22 09:30:04.623 253665 DEBUG oslo_concurrency.processutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6/disk.config 4a826b3b-aa3a-40c4-a85d-930239bc78d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.788s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:04 np0005532048 nova_compute[253661]: 2025-11-22 09:30:04.624 253665 INFO nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Deleting local config drive /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6/disk.config because it was imported into RBD.#033[00m
Nov 22 04:30:04 np0005532048 kernel: tap1a1647d5-7a: entered promiscuous mode
Nov 22 04:30:04 np0005532048 NetworkManager[48920]: <info>  [1763803804.6890] manager: (tap1a1647d5-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/386)
Nov 22 04:30:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:04Z|00942|binding|INFO|Claiming lport 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 for this chassis.
Nov 22 04:30:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:04Z|00943|binding|INFO|1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1: Claiming fa:16:3e:fc:8b:74 10.100.0.12
Nov 22 04:30:04 np0005532048 nova_compute[253661]: 2025-11-22 09:30:04.690 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:04 np0005532048 nova_compute[253661]: 2025-11-22 09:30:04.693 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:04 np0005532048 systemd-udevd[349741]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:30:04 np0005532048 systemd-machined[215941]: New machine qemu-113-instance-0000005d.
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.726 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:8b:74 10.100.0.12'], port_security=['fa:16:3e:fc:8b:74 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4a826b3b-aa3a-40c4-a85d-930239bc78d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2815298-27cc-4036-b985-55e1f44ee473', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '664bf3b26d414971a1d337e3eb9567e0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6c4ef4c8-f5db-475c-9470-c9efc0b15564', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d861fb2-ab02-4cd4-8bac-25c3ad28ac31, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.727 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 in datapath b2815298-27cc-4036-b985-55e1f44ee473 bound to our chassis#033[00m
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.728 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b2815298-27cc-4036-b985-55e1f44ee473#033[00m
Nov 22 04:30:04 np0005532048 NetworkManager[48920]: <info>  [1763803804.7392] device (tap1a1647d5-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:30:04 np0005532048 NetworkManager[48920]: <info>  [1763803804.7401] device (tap1a1647d5-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.747 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[257b0d01-c11a-4602-a242-06b829016150]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.749 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb2815298-21 in ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.751 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb2815298-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.751 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[071e6c53-5308-4cf8-ae30-9ccbccb5be50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.752 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a84ba754-3a2a-41a7-bc4e-ee76c0acb75d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:04 np0005532048 nova_compute[253661]: 2025-11-22 09:30:04.757 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:04 np0005532048 systemd[1]: Started Virtual Machine qemu-113-instance-0000005d.
Nov 22 04:30:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:04Z|00944|binding|INFO|Setting lport 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 ovn-installed in OVS
Nov 22 04:30:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:04Z|00945|binding|INFO|Setting lport 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 up in Southbound
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.764 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[889c9469-8649-4113-9c9f-1b3b5bc92608]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:04 np0005532048 nova_compute[253661]: 2025-11-22 09:30:04.766 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.783 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f6931e8c-1345-4678-8243-865735389d6c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.822 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b9776d74-dfff-4abe-b1b4-459c4363196d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:04 np0005532048 NetworkManager[48920]: <info>  [1763803804.8298] manager: (tapb2815298-20): new Veth device (/org/freedesktop/NetworkManager/Devices/387)
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.829 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3e85e423-3a62-4070-813b-87986a272f3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.870 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[52c24ce4-8d70-46f9-a2f2-fc75ceb8be04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.873 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb7f3ef-f06f-436a-a575-50332f4c15bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:04 np0005532048 NetworkManager[48920]: <info>  [1763803804.9025] device (tapb2815298-20): carrier: link connected
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.911 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[791985cb-a554-4902-a533-33d8a5e0a754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.937 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7af82b1d-990b-43c6-8bae-ce7b804d9e85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2815298-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:b3:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665564, 'reachable_time': 34750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349774, 'error': None, 'target': 'ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.960 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ade052b-b848-4dc2-8cf1-438b12e1b641]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:b3b9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665564, 'tstamp': 665564}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349775, 'error': None, 'target': 'ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:04.981 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3aeb50ea-10f4-449d-9009-36a7a4eecf44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2815298-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:b3:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665564, 'reachable_time': 34750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 349776, 'error': None, 'target': 'ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.024 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[34543836-88e8-423e-8699-0c4197acfc33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.110 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4a238632-aff7-457a-8f70-3a96ab376d68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.112 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2815298-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.112 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.113 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2815298-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:05 np0005532048 nova_compute[253661]: 2025-11-22 09:30:05.114 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:05 np0005532048 NetworkManager[48920]: <info>  [1763803805.1154] manager: (tapb2815298-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Nov 22 04:30:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Nov 22 04:30:05 np0005532048 kernel: tapb2815298-20: entered promiscuous mode
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.117 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb2815298-20, col_values=(('external_ids', {'iface-id': '35c942a5-f18d-4a44-89ed-ba479c2b3ff9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:05Z|00946|binding|INFO|Releasing lport 35c942a5-f18d-4a44-89ed-ba479c2b3ff9 from this chassis (sb_readonly=0)
Nov 22 04:30:05 np0005532048 nova_compute[253661]: 2025-11-22 09:30:05.152 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.152 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b2815298-27cc-4036-b985-55e1f44ee473.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b2815298-27cc-4036-b985-55e1f44ee473.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.154 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ec925b10-55df-43fb-a3e0-60b517dea5ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.156 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-b2815298-27cc-4036-b985-55e1f44ee473
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/b2815298-27cc-4036-b985-55e1f44ee473.pid.haproxy
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID b2815298-27cc-4036-b985-55e1f44ee473
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.157 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473', 'env', 'PROCESS_TAG=haproxy-b2815298-27cc-4036-b985-55e1f44ee473', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b2815298-27cc-4036-b985-55e1f44ee473.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:30:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Nov 22 04:30:05 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Nov 22 04:30:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:05.576 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:30:05 np0005532048 nova_compute[253661]: 2025-11-22 09:30:05.577 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:05 np0005532048 nova_compute[253661]: 2025-11-22 09:30:05.597 253665 DEBUG nova.compute.manager [req-597b64e4-1a3d-440b-8ced-9950bc11f5f1 req-90ec5c9b-b5fb-43a1-af6b-d3791a2cdcfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Received event network-vif-plugged-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:30:05 np0005532048 nova_compute[253661]: 2025-11-22 09:30:05.600 253665 DEBUG oslo_concurrency.lockutils [req-597b64e4-1a3d-440b-8ced-9950bc11f5f1 req-90ec5c9b-b5fb-43a1-af6b-d3791a2cdcfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:05 np0005532048 nova_compute[253661]: 2025-11-22 09:30:05.601 253665 DEBUG oslo_concurrency.lockutils [req-597b64e4-1a3d-440b-8ced-9950bc11f5f1 req-90ec5c9b-b5fb-43a1-af6b-d3791a2cdcfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:05 np0005532048 nova_compute[253661]: 2025-11-22 09:30:05.601 253665 DEBUG oslo_concurrency.lockutils [req-597b64e4-1a3d-440b-8ced-9950bc11f5f1 req-90ec5c9b-b5fb-43a1-af6b-d3791a2cdcfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:05 np0005532048 nova_compute[253661]: 2025-11-22 09:30:05.601 253665 DEBUG nova.compute.manager [req-597b64e4-1a3d-440b-8ced-9950bc11f5f1 req-90ec5c9b-b5fb-43a1-af6b-d3791a2cdcfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Processing event network-vif-plugged-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:30:05 np0005532048 nova_compute[253661]: 2025-11-22 09:30:05.640 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:05 np0005532048 podman[349809]: 2025-11-22 09:30:05.589976888 +0000 UTC m=+0.029846542 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:30:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 MiB/s wr, 42 op/s
Nov 22 04:30:06 np0005532048 podman[349809]: 2025-11-22 09:30:06.02158503 +0000 UTC m=+0.461454664 container create ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 04:30:06 np0005532048 systemd[1]: Started libpod-conmon-ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39.scope.
Nov 22 04:30:06 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:30:06 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc01e6e97cf42734016b9830101bf1f1f4bc57ae2055ce6357dae147575cc35/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:30:06 np0005532048 podman[349809]: 2025-11-22 09:30:06.252680561 +0000 UTC m=+0.692550225 container init ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:30:06 np0005532048 podman[349809]: 2025-11-22 09:30:06.261902543 +0000 UTC m=+0.701772177 container start ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:30:06 np0005532048 neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473[349864]: [NOTICE]   (349871) : New worker (349873) forked
Nov 22 04:30:06 np0005532048 neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473[349864]: [NOTICE]   (349871) : Loading success.
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.311 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.312 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803806.3101366, 4a826b3b-aa3a-40c4-a85d-930239bc78d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.312 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] VM Started (Lifecycle Event)#033[00m
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.316 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.320 253665 INFO nova.virt.libvirt.driver [-] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Instance spawned successfully.#033[00m
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.320 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.336 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.344 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.348 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.349 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.349 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.350 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.350 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.351 253665 DEBUG nova.virt.libvirt.driver [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.379 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.380 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803806.3105385, 4a826b3b-aa3a-40c4-a85d-930239bc78d6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.380 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] VM Paused (Lifecycle Event)
Nov 22 04:30:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:06.382 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.413 253665 INFO nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Took 13.31 seconds to spawn the instance on the hypervisor.
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.414 253665 DEBUG nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.414 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.425 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803806.314656, 4a826b3b-aa3a-40c4-a85d-930239bc78d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.426 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] VM Resumed (Lifecycle Event)
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.449 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.452 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.494 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.507 253665 INFO nova.compute.manager [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Took 14.39 seconds to build instance.
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.522 253665 DEBUG oslo_concurrency.lockutils [None req-1b44ef8c-86d1-403d-8813-6a3407c40c1f 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:30:06 np0005532048 nova_compute[253661]: 2025-11-22 09:30:06.934 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:30:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Nov 22 04:30:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Nov 22 04:30:07 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Nov 22 04:30:07 np0005532048 nova_compute[253661]: 2025-11-22 09:30:07.672 253665 DEBUG nova.compute.manager [req-d3b91c2a-6a5b-414b-a1eb-3754c346e1fa req-df9ebc9f-fd7e-4053-b920-c466019b2bc5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Received event network-vif-plugged-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:30:07 np0005532048 nova_compute[253661]: 2025-11-22 09:30:07.672 253665 DEBUG oslo_concurrency.lockutils [req-d3b91c2a-6a5b-414b-a1eb-3754c346e1fa req-df9ebc9f-fd7e-4053-b920-c466019b2bc5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:30:07 np0005532048 nova_compute[253661]: 2025-11-22 09:30:07.673 253665 DEBUG oslo_concurrency.lockutils [req-d3b91c2a-6a5b-414b-a1eb-3754c346e1fa req-df9ebc9f-fd7e-4053-b920-c466019b2bc5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:30:07 np0005532048 nova_compute[253661]: 2025-11-22 09:30:07.673 253665 DEBUG oslo_concurrency.lockutils [req-d3b91c2a-6a5b-414b-a1eb-3754c346e1fa req-df9ebc9f-fd7e-4053-b920-c466019b2bc5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:30:07 np0005532048 nova_compute[253661]: 2025-11-22 09:30:07.673 253665 DEBUG nova.compute.manager [req-d3b91c2a-6a5b-414b-a1eb-3754c346e1fa req-df9ebc9f-fd7e-4053-b920-c466019b2bc5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] No waiting events found dispatching network-vif-plugged-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:30:07 np0005532048 nova_compute[253661]: 2025-11-22 09:30:07.673 253665 WARNING nova.compute.manager [req-d3b91c2a-6a5b-414b-a1eb-3754c346e1fa req-df9ebc9f-fd7e-4053-b920-c466019b2bc5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Received unexpected event network-vif-plugged-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 for instance with vm_state active and task_state None.
Nov 22 04:30:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 88 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 4.1 KiB/s wr, 43 op/s
Nov 22 04:30:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.171 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.172 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.173 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.173 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.174 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.176 253665 INFO nova.compute.manager [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Terminating instance
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.178 253665 DEBUG nova.compute.manager [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:30:08 np0005532048 kernel: tap1a1647d5-7a (unregistering): left promiscuous mode
Nov 22 04:30:08 np0005532048 NetworkManager[48920]: <info>  [1763803808.4034] device (tap1a1647d5-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.417 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:30:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:08Z|00947|binding|INFO|Releasing lport 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 from this chassis (sb_readonly=0)
Nov 22 04:30:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:08Z|00948|binding|INFO|Setting lport 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 down in Southbound
Nov 22 04:30:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:08Z|00949|binding|INFO|Removing iface tap1a1647d5-7a ovn-installed in OVS
Nov 22 04:30:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.425 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:8b:74 10.100.0.12'], port_security=['fa:16:3e:fc:8b:74 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4a826b3b-aa3a-40c4-a85d-930239bc78d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2815298-27cc-4036-b985-55e1f44ee473', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '664bf3b26d414971a1d337e3eb9567e0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6c4ef4c8-f5db-475c-9470-c9efc0b15564', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d861fb2-ab02-4cd4-8bac-25c3ad28ac31, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:30:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.426 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 in datapath b2815298-27cc-4036-b985-55e1f44ee473 unbound from our chassis
Nov 22 04:30:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.428 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2815298-27cc-4036-b985-55e1f44ee473, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:30:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.429 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cfaed6f9-e765-44da-baf8-66997bc1648a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:30:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.429 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473 namespace which is not needed anymore
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.460 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:30:08 np0005532048 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Nov 22 04:30:08 np0005532048 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005d.scope: Consumed 3.251s CPU time.
Nov 22 04:30:08 np0005532048 systemd-machined[215941]: Machine qemu-113-instance-0000005d terminated.
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.621 253665 INFO nova.virt.libvirt.driver [-] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Instance destroyed successfully.
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.622 253665 DEBUG nova.objects.instance [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lazy-loading 'resources' on Instance uuid 4a826b3b-aa3a-40c4-a85d-930239bc78d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.635 253665 DEBUG nova.virt.libvirt.vif [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:29:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-528470523',display_name='tempest-ServerGroupTestJSON-server-528470523',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-528470523',id=93,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:30:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='664bf3b26d414971a1d337e3eb9567e0',ramdisk_id='',reservation_id='r-0qsyzo83',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-1285465503',owner_user_name='tempest-ServerGroupTestJSON-1285465503-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:30:06Z,user_data=None,user_id='7a42c7b8d01c4f8e8dfbb1a0ce8d230d',uuid=4a826b3b-aa3a-40c4-a85d-930239bc78d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.635 253665 DEBUG nova.network.os_vif_util [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Converting VIF {"id": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "address": "fa:16:3e:fc:8b:74", "network": {"id": "b2815298-27cc-4036-b985-55e1f44ee473", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-989486728-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "664bf3b26d414971a1d337e3eb9567e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a1647d5-7a", "ovs_interfaceid": "1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:30:08 np0005532048 neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473[349864]: [NOTICE]   (349871) : haproxy version is 2.8.14-c23fe91
Nov 22 04:30:08 np0005532048 neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473[349864]: [NOTICE]   (349871) : path to executable is /usr/sbin/haproxy
Nov 22 04:30:08 np0005532048 neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473[349864]: [WARNING]  (349871) : Exiting Master process...
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.636 253665 DEBUG nova.network.os_vif_util [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:74,bridge_name='br-int',has_traffic_filtering=True,id=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1,network=Network(b2815298-27cc-4036-b985-55e1f44ee473),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a1647d5-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.637 253665 DEBUG os_vif [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:74,bridge_name='br-int',has_traffic_filtering=True,id=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1,network=Network(b2815298-27cc-4036-b985-55e1f44ee473),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a1647d5-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:30:08 np0005532048 neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473[349864]: [ALERT]    (349871) : Current worker (349873) exited with code 143 (Terminated)
Nov 22 04:30:08 np0005532048 neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473[349864]: [WARNING]  (349871) : All workers exited. Exiting... (0)
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:30:08 np0005532048 systemd[1]: libpod-ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39.scope: Deactivated successfully.
Nov 22 04:30:08 np0005532048 conmon[349864]: conmon ad931f19d49988cde499 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39.scope/container/memory.events
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.644 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a1647d5-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.646 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.647 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:30:08 np0005532048 podman[349906]: 2025-11-22 09:30:08.647746095 +0000 UTC m=+0.097807851 container died ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.651 253665 INFO os_vif [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:74,bridge_name='br-int',has_traffic_filtering=True,id=1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1,network=Network(b2815298-27cc-4036-b985-55e1f44ee473),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a1647d5-7a')
Nov 22 04:30:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39-userdata-shm.mount: Deactivated successfully.
Nov 22 04:30:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay-bbc01e6e97cf42734016b9830101bf1f1f4bc57ae2055ce6357dae147575cc35-merged.mount: Deactivated successfully.
Nov 22 04:30:08 np0005532048 podman[349906]: 2025-11-22 09:30:08.755768581 +0000 UTC m=+0.205830337 container cleanup ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:30:08 np0005532048 systemd[1]: libpod-conmon-ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39.scope: Deactivated successfully.
Nov 22 04:30:08 np0005532048 podman[349964]: 2025-11-22 09:30:08.924497383 +0000 UTC m=+0.142289748 container remove ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:30:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.932 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[18f679ad-426f-4e73-988d-ac0948402dc7]: (4, ('Sat Nov 22 09:30:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473 (ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39)\nad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39\nSat Nov 22 09:30:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473 (ad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39)\nad931f19d49988cde49936efffa4750f10a6dadaaf1475de870fc910ac395a39\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:30:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.934 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[41fe07a8-6b21-430e-9fdf-dec3d453c128]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:30:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.935 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2815298-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.936 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:30:08 np0005532048 kernel: tapb2815298-20: left promiscuous mode
Nov 22 04:30:08 np0005532048 nova_compute[253661]: 2025-11-22 09:30:08.952 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:30:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.956 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[49eb5f34-cfd1-4c20-a7ce-d0febe76d246]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:30:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.966 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0385b8-82ff-4bed-9f21-2a18c1dc7f7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:30:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.967 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e2dad181-db97-4c1c-a44a-54e94f97f2a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:30:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.990 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af0ed48a-d1fc-432e-86ff-7c2ada68f5ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665555, 'reachable_time': 27921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349979, 'error': None, 'target': 'ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:30:08 np0005532048 systemd[1]: run-netns-ovnmeta\x2db2815298\x2d27cc\x2d4036\x2db985\x2d55e1f44ee473.mount: Deactivated successfully.
Nov 22 04:30:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.994 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b2815298-27cc-4036-b985-55e1f44ee473 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:30:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:08.994 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f39530-d863-49ea-9035-357407226eec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 59 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 26 KiB/s wr, 154 op/s
Nov 22 04:30:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Nov 22 04:30:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Nov 22 04:30:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Nov 22 04:30:09 np0005532048 nova_compute[253661]: 2025-11-22 09:30:09.862 253665 INFO nova.virt.libvirt.driver [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Deleting instance files /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6_del#033[00m
Nov 22 04:30:09 np0005532048 nova_compute[253661]: 2025-11-22 09:30:09.863 253665 INFO nova.virt.libvirt.driver [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Deletion of /var/lib/nova/instances/4a826b3b-aa3a-40c4-a85d-930239bc78d6_del complete#033[00m
Nov 22 04:30:09 np0005532048 nova_compute[253661]: 2025-11-22 09:30:09.922 253665 INFO nova.compute.manager [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Took 1.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:30:09 np0005532048 nova_compute[253661]: 2025-11-22 09:30:09.923 253665 DEBUG oslo.service.loopingcall [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:30:09 np0005532048 nova_compute[253661]: 2025-11-22 09:30:09.923 253665 DEBUG nova.compute.manager [-] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:30:09 np0005532048 nova_compute[253661]: 2025-11-22 09:30:09.924 253665 DEBUG nova.network.neutron [-] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:30:10 np0005532048 nova_compute[253661]: 2025-11-22 09:30:10.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:11 np0005532048 nova_compute[253661]: 2025-11-22 09:30:11.098 253665 DEBUG nova.network.neutron [-] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:30:11 np0005532048 nova_compute[253661]: 2025-11-22 09:30:11.122 253665 INFO nova.compute.manager [-] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Took 1.20 seconds to deallocate network for instance.#033[00m
Nov 22 04:30:11 np0005532048 nova_compute[253661]: 2025-11-22 09:30:11.178 253665 DEBUG nova.compute.manager [req-33335be1-8870-438a-90ac-c2eb2de937e7 req-cd049ddd-84f4-41be-8969-041dc38d4256 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Received event network-vif-deleted-1a1647d5-7a81-4c4f-91fd-4ef73a7cd2d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:30:11 np0005532048 nova_compute[253661]: 2025-11-22 09:30:11.180 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:11 np0005532048 nova_compute[253661]: 2025-11-22 09:30:11.180 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:11 np0005532048 nova_compute[253661]: 2025-11-22 09:30:11.234 253665 DEBUG oslo_concurrency.processutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:11 np0005532048 podman[349982]: 2025-11-22 09:30:11.369256916 +0000 UTC m=+0.055126287 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:30:11 np0005532048 podman[349983]: 2025-11-22 09:30:11.405472936 +0000 UTC m=+0.089085950 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 04:30:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 59 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 30 KiB/s wr, 171 op/s
Nov 22 04:30:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:30:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1087184616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:30:11 np0005532048 nova_compute[253661]: 2025-11-22 09:30:11.740 253665 DEBUG oslo_concurrency.processutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:11 np0005532048 nova_compute[253661]: 2025-11-22 09:30:11.747 253665 DEBUG nova.compute.provider_tree [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:30:11 np0005532048 nova_compute[253661]: 2025-11-22 09:30:11.762 253665 DEBUG nova.scheduler.client.report [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:30:11 np0005532048 nova_compute[253661]: 2025-11-22 09:30:11.800 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:11 np0005532048 nova_compute[253661]: 2025-11-22 09:30:11.832 253665 INFO nova.scheduler.client.report [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Deleted allocations for instance 4a826b3b-aa3a-40c4-a85d-930239bc78d6#033[00m
Nov 22 04:30:11 np0005532048 nova_compute[253661]: 2025-11-22 09:30:11.902 253665 DEBUG oslo_concurrency.lockutils [None req-c26e4edf-ae77-43b5-aecf-a430f47f0e19 7a42c7b8d01c4f8e8dfbb1a0ce8d230d 664bf3b26d414971a1d337e3eb9567e0 - - default default] Lock "4a826b3b-aa3a-40c4-a85d-930239bc78d6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:30:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1769843351' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:30:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:30:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1769843351' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:30:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Nov 22 04:30:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Nov 22 04:30:12 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Nov 22 04:30:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:30:13 np0005532048 nova_compute[253661]: 2025-11-22 09:30:13.648 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 41 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 33 KiB/s wr, 240 op/s
Nov 22 04:30:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Nov 22 04:30:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Nov 22 04:30:13 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Nov 22 04:30:15 np0005532048 nova_compute[253661]: 2025-11-22 09:30:15.644 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 884 KiB/s rd, 7.2 KiB/s wr, 112 op/s
Nov 22 04:30:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:16.385 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Nov 22 04:30:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Nov 22 04:30:16 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Nov 22 04:30:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 892 KiB/s rd, 9.0 KiB/s wr, 126 op/s
Nov 22 04:30:17 np0005532048 nova_compute[253661]: 2025-11-22 09:30:17.710 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:30:18 np0005532048 podman[350039]: 2025-11-22 09:30:18.441527987 +0000 UTC m=+0.114724796 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:30:18 np0005532048 nova_compute[253661]: 2025-11-22 09:30:18.650 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Nov 22 04:30:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Nov 22 04:30:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Nov 22 04:30:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 8.3 KiB/s wr, 91 op/s
Nov 22 04:30:20 np0005532048 nova_compute[253661]: 2025-11-22 09:30:20.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 6.3 KiB/s wr, 69 op/s
Nov 22 04:30:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Nov 22 04:30:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Nov 22 04:30:22 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Nov 22 04:30:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:30:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:30:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:30:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:30:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:30:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:30:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:30:23 np0005532048 nova_compute[253661]: 2025-11-22 09:30:23.622 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803808.618394, 4a826b3b-aa3a-40c4-a85d-930239bc78d6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:30:23 np0005532048 nova_compute[253661]: 2025-11-22 09:30:23.625 253665 INFO nova.compute.manager [-] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:30:23 np0005532048 nova_compute[253661]: 2025-11-22 09:30:23.653 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:23 np0005532048 nova_compute[253661]: 2025-11-22 09:30:23.664 253665 DEBUG nova.compute.manager [None req-0e5905cc-126b-434c-887b-1fba10b11f0f - - - - - -] [instance: 4a826b3b-aa3a-40c4-a85d-930239bc78d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:30:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 6.3 KiB/s wr, 95 op/s
Nov 22 04:30:25 np0005532048 nova_compute[253661]: 2025-11-22 09:30:25.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 5.7 KiB/s wr, 92 op/s
Nov 22 04:30:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.2 KiB/s wr, 51 op/s
Nov 22 04:30:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Nov 22 04:30:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Nov 22 04:30:27 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Nov 22 04:30:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:27.973 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:27.973 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:27.974 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:30:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Nov 22 04:30:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Nov 22 04:30:28 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Nov 22 04:30:28 np0005532048 nova_compute[253661]: 2025-11-22 09:30:28.656 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.8 KiB/s wr, 58 op/s
Nov 22 04:30:30 np0005532048 nova_compute[253661]: 2025-11-22 09:30:30.654 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.4 KiB/s wr, 50 op/s
Nov 22 04:30:32 np0005532048 nova_compute[253661]: 2025-11-22 09:30:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:30:32 np0005532048 nova_compute[253661]: 2025-11-22 09:30:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:30:32 np0005532048 nova_compute[253661]: 2025-11-22 09:30:32.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:30:32 np0005532048 nova_compute[253661]: 2025-11-22 09:30:32.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:30:32 np0005532048 nova_compute[253661]: 2025-11-22 09:30:32.243 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:30:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Nov 22 04:30:33 np0005532048 nova_compute[253661]: 2025-11-22 09:30:33.374 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "1f746354-73cc-421a-9cde-f5b8c2b597fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:33 np0005532048 nova_compute[253661]: 2025-11-22 09:30:33.375 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:33 np0005532048 nova_compute[253661]: 2025-11-22 09:30:33.393 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:30:33 np0005532048 nova_compute[253661]: 2025-11-22 09:30:33.459 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:33 np0005532048 nova_compute[253661]: 2025-11-22 09:30:33.459 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:33 np0005532048 nova_compute[253661]: 2025-11-22 09:30:33.468 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:30:33 np0005532048 nova_compute[253661]: 2025-11-22 09:30:33.468 253665 INFO nova.compute.claims [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:30:33 np0005532048 nova_compute[253661]: 2025-11-22 09:30:33.584 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Nov 22 04:30:33 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Nov 22 04:30:33 np0005532048 nova_compute[253661]: 2025-11-22 09:30:33.660 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 4.0 KiB/s wr, 60 op/s
Nov 22 04:30:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:30:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3933540216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.093 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.100 253665 DEBUG nova.compute.provider_tree [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.121 253665 DEBUG nova.scheduler.client.report [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.158 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.159 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.244 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.245 253665 DEBUG nova.network.neutron [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.273 253665 INFO nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.292 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.367 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.369 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.369 253665 INFO nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Creating image(s)#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.397 253665 DEBUG nova.storage.rbd_utils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] rbd image 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.427 253665 DEBUG nova.storage.rbd_utils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] rbd image 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.446 253665 DEBUG nova.storage.rbd_utils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] rbd image 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.451 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.546 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.547 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.548 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.548 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.571 253665 DEBUG nova.storage.rbd_utils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] rbd image 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.575 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:34 np0005532048 nova_compute[253661]: 2025-11-22 09:30:34.939 253665 DEBUG nova.policy [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2edeecfc11f347c0856dcf9fae9296ff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a2f39f99bbb4a6ab72b64ecca259a1d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:30:35 np0005532048 nova_compute[253661]: 2025-11-22 09:30:35.066 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:35 np0005532048 nova_compute[253661]: 2025-11-22 09:30:35.119 253665 DEBUG nova.storage.rbd_utils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] resizing rbd image 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:30:35 np0005532048 nova_compute[253661]: 2025-11-22 09:30:35.210 253665 DEBUG nova.objects.instance [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lazy-loading 'migration_context' on Instance uuid 1f746354-73cc-421a-9cde-f5b8c2b597fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:30:35 np0005532048 nova_compute[253661]: 2025-11-22 09:30:35.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:30:35 np0005532048 nova_compute[253661]: 2025-11-22 09:30:35.231 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:30:35 np0005532048 nova_compute[253661]: 2025-11-22 09:30:35.232 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Ensure instance console log exists: /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:30:35 np0005532048 nova_compute[253661]: 2025-11-22 09:30:35.233 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:35 np0005532048 nova_compute[253661]: 2025-11-22 09:30:35.233 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:35 np0005532048 nova_compute[253661]: 2025-11-22 09:30:35.234 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:35 np0005532048 nova_compute[253661]: 2025-11-22 09:30:35.657 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 41 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.5 KiB/s wr, 56 op/s
Nov 22 04:30:35 np0005532048 nova_compute[253661]: 2025-11-22 09:30:35.781 253665 DEBUG nova.network.neutron [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Successfully created port: efe559aa-813e-4a03-9d8a-363ad40448c7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:30:36 np0005532048 nova_compute[253661]: 2025-11-22 09:30:36.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:30:36 np0005532048 nova_compute[253661]: 2025-11-22 09:30:36.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:30:36 np0005532048 nova_compute[253661]: 2025-11-22 09:30:36.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:30:36 np0005532048 nova_compute[253661]: 2025-11-22 09:30:36.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:30:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Nov 22 04:30:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Nov 22 04:30:36 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Nov 22 04:30:36 np0005532048 nova_compute[253661]: 2025-11-22 09:30:36.756 253665 DEBUG nova.network.neutron [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Successfully updated port: efe559aa-813e-4a03-9d8a-363ad40448c7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:30:36 np0005532048 nova_compute[253661]: 2025-11-22 09:30:36.770 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "refresh_cache-1f746354-73cc-421a-9cde-f5b8c2b597fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:30:36 np0005532048 nova_compute[253661]: 2025-11-22 09:30:36.770 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquired lock "refresh_cache-1f746354-73cc-421a-9cde-f5b8c2b597fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:30:36 np0005532048 nova_compute[253661]: 2025-11-22 09:30:36.770 253665 DEBUG nova.network.neutron [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:30:36 np0005532048 nova_compute[253661]: 2025-11-22 09:30:36.959 253665 DEBUG nova.compute.manager [req-05bafdba-b47c-4b8a-a4d0-b4426f143cde req-933318e2-bd79-464c-8c2a-84a4ad1e5f62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received event network-changed-efe559aa-813e-4a03-9d8a-363ad40448c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:30:36 np0005532048 nova_compute[253661]: 2025-11-22 09:30:36.960 253665 DEBUG nova.compute.manager [req-05bafdba-b47c-4b8a-a4d0-b4426f143cde req-933318e2-bd79-464c-8c2a-84a4ad1e5f62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Refreshing instance network info cache due to event network-changed-efe559aa-813e-4a03-9d8a-363ad40448c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:30:36 np0005532048 nova_compute[253661]: 2025-11-22 09:30:36.961 253665 DEBUG oslo_concurrency.lockutils [req-05bafdba-b47c-4b8a-a4d0-b4426f143cde req-933318e2-bd79-464c-8c2a-84a4ad1e5f62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-1f746354-73cc-421a-9cde-f5b8c2b597fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:30:37 np0005532048 nova_compute[253661]: 2025-11-22 09:30:37.106 253665 DEBUG nova.network.neutron [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:30:37 np0005532048 nova_compute[253661]: 2025-11-22 09:30:37.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:30:37 np0005532048 nova_compute[253661]: 2025-11-22 09:30:37.245 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:37 np0005532048 nova_compute[253661]: 2025-11-22 09:30:37.245 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:37 np0005532048 nova_compute[253661]: 2025-11-22 09:30:37.246 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:37 np0005532048 nova_compute[253661]: 2025-11-22 09:30:37.246 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:30:37 np0005532048 nova_compute[253661]: 2025-11-22 09:30:37.246 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:30:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/787036650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:30:37 np0005532048 nova_compute[253661]: 2025-11-22 09:30:37.713 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 59 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 577 KiB/s wr, 68 op/s
Nov 22 04:30:37 np0005532048 nova_compute[253661]: 2025-11-22 09:30:37.889 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:30:37 np0005532048 nova_compute[253661]: 2025-11-22 09:30:37.890 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3910MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:30:37 np0005532048 nova_compute[253661]: 2025-11-22 09:30:37.890 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:37 np0005532048 nova_compute[253661]: 2025-11-22 09:30:37.891 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:37 np0005532048 nova_compute[253661]: 2025-11-22 09:30:37.959 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 1f746354-73cc-421a-9cde-f5b8c2b597fe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:30:37 np0005532048 nova_compute[253661]: 2025-11-22 09:30:37.960 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:30:37 np0005532048 nova_compute[253661]: 2025-11-22 09:30:37.960 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.007 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:30:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Nov 22 04:30:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Nov 22 04:30:38 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.399 253665 DEBUG nova.network.neutron [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Updating instance_info_cache with network_info: [{"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.415 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Releasing lock "refresh_cache-1f746354-73cc-421a-9cde-f5b8c2b597fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.416 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Instance network_info: |[{"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.416 253665 DEBUG oslo_concurrency.lockutils [req-05bafdba-b47c-4b8a-a4d0-b4426f143cde req-933318e2-bd79-464c-8c2a-84a4ad1e5f62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-1f746354-73cc-421a-9cde-f5b8c2b597fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.416 253665 DEBUG nova.network.neutron [req-05bafdba-b47c-4b8a-a4d0-b4426f143cde req-933318e2-bd79-464c-8c2a-84a4ad1e5f62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Refreshing network info cache for port efe559aa-813e-4a03-9d8a-363ad40448c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.420 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Start _get_guest_xml network_info=[{"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.425 253665 WARNING nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.432 253665 DEBUG nova.virt.libvirt.host [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.433 253665 DEBUG nova.virt.libvirt.host [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.436 253665 DEBUG nova.virt.libvirt.host [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.436 253665 DEBUG nova.virt.libvirt.host [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.437 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.437 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.437 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.437 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.438 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.438 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.438 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.438 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.438 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.439 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.439 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.439 253665 DEBUG nova.virt.hardware [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.445 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:30:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695170307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.488 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.495 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.507 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.525 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.525 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.664 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:30:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/7201859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.931 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.958 253665 DEBUG nova.storage.rbd_utils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] rbd image 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:38 np0005532048 nova_compute[253661]: 2025-11-22 09:30:38.964 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:30:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315525300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.452 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.454 253665 DEBUG nova.virt.libvirt.vif [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:30:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-813427165',display_name='tempest-ServerMetadataTestJSON-server-813427165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-813427165',id=94,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a2f39f99bbb4a6ab72b64ecca259a1d',ramdisk_id='',reservation_id='r-us3y0nv9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-715260665',owner_user_name='tempest-ServerMetadataTestJ
SON-715260665-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:30:34Z,user_data=None,user_id='2edeecfc11f347c0856dcf9fae9296ff',uuid=1f746354-73cc-421a-9cde-f5b8c2b597fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.454 253665 DEBUG nova.network.os_vif_util [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Converting VIF {"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.455 253665 DEBUG nova.network.os_vif_util [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:4f:bc,bridge_name='br-int',has_traffic_filtering=True,id=efe559aa-813e-4a03-9d8a-363ad40448c7,network=Network(2246d64b-e77a-4784-9fa1-d08726a529e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefe559aa-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.456 253665 DEBUG nova.objects.instance [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f746354-73cc-421a-9cde-f5b8c2b597fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.471 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  <uuid>1f746354-73cc-421a-9cde-f5b8c2b597fe</uuid>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  <name>instance-0000005e</name>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerMetadataTestJSON-server-813427165</nova:name>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:30:38</nova:creationTime>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:30:39 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:        <nova:user uuid="2edeecfc11f347c0856dcf9fae9296ff">tempest-ServerMetadataTestJSON-715260665-project-member</nova:user>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:        <nova:project uuid="6a2f39f99bbb4a6ab72b64ecca259a1d">tempest-ServerMetadataTestJSON-715260665</nova:project>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:        <nova:port uuid="efe559aa-813e-4a03-9d8a-363ad40448c7">
Nov 22 04:30:39 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <entry name="serial">1f746354-73cc-421a-9cde-f5b8c2b597fe</entry>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <entry name="uuid">1f746354-73cc-421a-9cde-f5b8c2b597fe</entry>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/1f746354-73cc-421a-9cde-f5b8c2b597fe_disk">
Nov 22 04:30:39 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:30:39 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/1f746354-73cc-421a-9cde-f5b8c2b597fe_disk.config">
Nov 22 04:30:39 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:30:39 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:00:4f:bc"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <target dev="tapefe559aa-81"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe/console.log" append="off"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:30:39 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:30:39 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:30:39 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:30:39 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.471 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Preparing to wait for external event network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.471 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.472 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.472 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.472 253665 DEBUG nova.virt.libvirt.vif [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:30:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-813427165',display_name='tempest-ServerMetadataTestJSON-server-813427165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-813427165',id=94,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a2f39f99bbb4a6ab72b64ecca259a1d',ramdisk_id='',reservation_id='r-us3y0nv9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-715260665',owner_user_name='tempest-ServerMet
adataTestJSON-715260665-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:30:34Z,user_data=None,user_id='2edeecfc11f347c0856dcf9fae9296ff',uuid=1f746354-73cc-421a-9cde-f5b8c2b597fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.473 253665 DEBUG nova.network.os_vif_util [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Converting VIF {"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.473 253665 DEBUG nova.network.os_vif_util [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:4f:bc,bridge_name='br-int',has_traffic_filtering=True,id=efe559aa-813e-4a03-9d8a-363ad40448c7,network=Network(2246d64b-e77a-4784-9fa1-d08726a529e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefe559aa-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.473 253665 DEBUG os_vif [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:4f:bc,bridge_name='br-int',has_traffic_filtering=True,id=efe559aa-813e-4a03-9d8a-363ad40448c7,network=Network(2246d64b-e77a-4784-9fa1-d08726a529e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefe559aa-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.474 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.474 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.475 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.480 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.480 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapefe559aa-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.481 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapefe559aa-81, col_values=(('external_ids', {'iface-id': 'efe559aa-813e-4a03-9d8a-363ad40448c7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:4f:bc', 'vm-uuid': '1f746354-73cc-421a-9cde-f5b8c2b597fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.484 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:39 np0005532048 NetworkManager[48920]: <info>  [1763803839.4852] manager: (tapefe559aa-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.488 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.494 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.495 253665 INFO os_vif [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:4f:bc,bridge_name='br-int',has_traffic_filtering=True,id=efe559aa-813e-4a03-9d8a-363ad40448c7,network=Network(2246d64b-e77a-4784-9fa1-d08726a529e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefe559aa-81')#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.541 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.541 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.542 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] No VIF found with MAC fa:16:3e:00:4f:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.542 253665 INFO nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Using config drive#033[00m
Nov 22 04:30:39 np0005532048 nova_compute[253661]: 2025-11-22 09:30:39.567 253665 DEBUG nova.storage.rbd_utils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] rbd image 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 3.5 MiB/s wr, 130 op/s
Nov 22 04:30:40 np0005532048 nova_compute[253661]: 2025-11-22 09:30:40.659 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:40 np0005532048 nova_compute[253661]: 2025-11-22 09:30:40.678 253665 INFO nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Creating config drive at /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe/disk.config#033[00m
Nov 22 04:30:40 np0005532048 nova_compute[253661]: 2025-11-22 09:30:40.683 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnqunba08 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:40 np0005532048 nova_compute[253661]: 2025-11-22 09:30:40.837 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnqunba08" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:40 np0005532048 nova_compute[253661]: 2025-11-22 09:30:40.866 253665 DEBUG nova.storage.rbd_utils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] rbd image 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:40 np0005532048 nova_compute[253661]: 2025-11-22 09:30:40.870 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe/disk.config 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:41 np0005532048 nova_compute[253661]: 2025-11-22 09:30:41.340 253665 DEBUG nova.network.neutron [req-05bafdba-b47c-4b8a-a4d0-b4426f143cde req-933318e2-bd79-464c-8c2a-84a4ad1e5f62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Updated VIF entry in instance network info cache for port efe559aa-813e-4a03-9d8a-363ad40448c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:30:41 np0005532048 nova_compute[253661]: 2025-11-22 09:30:41.342 253665 DEBUG nova.network.neutron [req-05bafdba-b47c-4b8a-a4d0-b4426f143cde req-933318e2-bd79-464c-8c2a-84a4ad1e5f62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Updating instance_info_cache with network_info: [{"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:30:41 np0005532048 nova_compute[253661]: 2025-11-22 09:30:41.361 253665 DEBUG oslo_concurrency.processutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe/disk.config 1f746354-73cc-421a-9cde-f5b8c2b597fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:41 np0005532048 nova_compute[253661]: 2025-11-22 09:30:41.362 253665 INFO nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Deleting local config drive /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe/disk.config because it was imported into RBD.#033[00m
Nov 22 04:30:41 np0005532048 nova_compute[253661]: 2025-11-22 09:30:41.364 253665 DEBUG oslo_concurrency.lockutils [req-05bafdba-b47c-4b8a-a4d0-b4426f143cde req-933318e2-bd79-464c-8c2a-84a4ad1e5f62 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-1f746354-73cc-421a-9cde-f5b8c2b597fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:30:41 np0005532048 kernel: tapefe559aa-81: entered promiscuous mode
Nov 22 04:30:41 np0005532048 NetworkManager[48920]: <info>  [1763803841.4327] manager: (tapefe559aa-81): new Tun device (/org/freedesktop/NetworkManager/Devices/390)
Nov 22 04:30:41 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:41Z|00950|binding|INFO|Claiming lport efe559aa-813e-4a03-9d8a-363ad40448c7 for this chassis.
Nov 22 04:30:41 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:41Z|00951|binding|INFO|efe559aa-813e-4a03-9d8a-363ad40448c7: Claiming fa:16:3e:00:4f:bc 10.100.0.3
Nov 22 04:30:41 np0005532048 nova_compute[253661]: 2025-11-22 09:30:41.462 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:41 np0005532048 nova_compute[253661]: 2025-11-22 09:30:41.468 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.481 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:4f:bc 10.100.0.3'], port_security=['fa:16:3e:00:4f:bc 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1f746354-73cc-421a-9cde-f5b8c2b597fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2246d64b-e77a-4784-9fa1-d08726a529e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a2f39f99bbb4a6ab72b64ecca259a1d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f1969534-2e7e-4db0-b633-4898adada66f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5a0f272-182f-42b5-a484-bc1a2cbff822, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=efe559aa-813e-4a03-9d8a-363ad40448c7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.482 162862 INFO neutron.agent.ovn.metadata.agent [-] Port efe559aa-813e-4a03-9d8a-363ad40448c7 in datapath 2246d64b-e77a-4784-9fa1-d08726a529e2 bound to our chassis#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.484 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2246d64b-e77a-4784-9fa1-d08726a529e2#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.506 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f34db187-2f5e-4b85-a908-d4991d63846e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.508 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2246d64b-e1 in ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.510 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2246d64b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.511 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6d279661-26ea-4fe9-9482-27da6ed3fb52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.512 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[56eb4199-c2e5-4f13-a462-f3b154a76e41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.528 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[032066d8-f65f-4b00-9283-cc68eb0feeb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:41 np0005532048 systemd-machined[215941]: New machine qemu-114-instance-0000005e.
Nov 22 04:30:41 np0005532048 systemd-udevd[350454]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:30:41 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:41Z|00952|binding|INFO|Setting lport efe559aa-813e-4a03-9d8a-363ad40448c7 ovn-installed in OVS
Nov 22 04:30:41 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:41Z|00953|binding|INFO|Setting lport efe559aa-813e-4a03-9d8a-363ad40448c7 up in Southbound
Nov 22 04:30:41 np0005532048 systemd[1]: Started Virtual Machine qemu-114-instance-0000005e.
Nov 22 04:30:41 np0005532048 nova_compute[253661]: 2025-11-22 09:30:41.545 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.554 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6efadf97-6a9f-4171-b3c8-27f7ccb3cb23]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:41 np0005532048 NetworkManager[48920]: <info>  [1763803841.5607] device (tapefe559aa-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:30:41 np0005532048 NetworkManager[48920]: <info>  [1763803841.5689] device (tapefe559aa-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.591 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e01998f4-f645-427b-8b94-8c403df06172]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:41 np0005532048 NetworkManager[48920]: <info>  [1763803841.6003] manager: (tap2246d64b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/391)
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.598 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e161f321-ba31-469c-87a9-36709149fff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:41 np0005532048 podman[350432]: 2025-11-22 09:30:41.610754936 +0000 UTC m=+0.120152497 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 22 04:30:41 np0005532048 podman[350433]: 2025-11-22 09:30:41.619467431 +0000 UTC m=+0.118245181 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.642 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c630e21d-5314-4425-89d7-0ac4625a24e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.646 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9e6bec24-2265-40e2-9847-2ecc5272be94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:41 np0005532048 NetworkManager[48920]: <info>  [1763803841.6745] device (tap2246d64b-e0): carrier: link connected
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.678 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d75e75b1-add2-434c-9058-8c52c664f42e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.700 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06829226-1222-4ac4-8464-ba48f8069260]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2246d64b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:ab:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669241, 'reachable_time': 22089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350502, 'error': None, 'target': 'ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.721 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[85b084ac-ecf2-4ecf-b1a0-646aa6959292]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:ab05'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669241, 'tstamp': 669241}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350503, 'error': None, 'target': 'ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 2.7 MiB/s wr, 99 op/s
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.738 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[75e22437-0781-461a-9d24-3390ce696d2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2246d64b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:ab:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669241, 'reachable_time': 22089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350504, 'error': None, 'target': 'ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.786 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[581ffb34-0a9b-4b1e-8510-38b941e1eb9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.871 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[783d5afd-c5f0-428a-a885-2d6c09216be0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2246d64b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.874 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2246d64b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:41 np0005532048 NetworkManager[48920]: <info>  [1763803841.8775] manager: (tap2246d64b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Nov 22 04:30:41 np0005532048 nova_compute[253661]: 2025-11-22 09:30:41.876 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:41 np0005532048 kernel: tap2246d64b-e0: entered promiscuous mode
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.882 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2246d64b-e0, col_values=(('external_ids', {'iface-id': '14e59265-4b5a-468d-8359-0d37f8713715'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:41 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:41Z|00954|binding|INFO|Releasing lport 14e59265-4b5a-468d-8359-0d37f8713715 from this chassis (sb_readonly=0)
Nov 22 04:30:41 np0005532048 nova_compute[253661]: 2025-11-22 09:30:41.883 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:41 np0005532048 nova_compute[253661]: 2025-11-22 09:30:41.900 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.901 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2246d64b-e77a-4784-9fa1-d08726a529e2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2246d64b-e77a-4784-9fa1-d08726a529e2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.903 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[038e8ae6-3c21-486a-9a6a-47bd37951d2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.905 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-2246d64b-e77a-4784-9fa1-d08726a529e2
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/2246d64b-e77a-4784-9fa1-d08726a529e2.pid.haproxy
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 2246d64b-e77a-4784-9fa1-d08726a529e2
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:30:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:41.905 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2', 'env', 'PROCESS_TAG=haproxy-2246d64b-e77a-4784-9fa1-d08726a529e2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2246d64b-e77a-4784-9fa1-d08726a529e2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:30:42 np0005532048 nova_compute[253661]: 2025-11-22 09:30:42.170 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803842.1700737, 1f746354-73cc-421a-9cde-f5b8c2b597fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:30:42 np0005532048 nova_compute[253661]: 2025-11-22 09:30:42.172 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] VM Started (Lifecycle Event)#033[00m
Nov 22 04:30:42 np0005532048 nova_compute[253661]: 2025-11-22 09:30:42.191 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:30:42 np0005532048 nova_compute[253661]: 2025-11-22 09:30:42.198 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803842.1702478, 1f746354-73cc-421a-9cde-f5b8c2b597fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:30:42 np0005532048 nova_compute[253661]: 2025-11-22 09:30:42.199 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:30:42 np0005532048 nova_compute[253661]: 2025-11-22 09:30:42.212 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:30:42 np0005532048 nova_compute[253661]: 2025-11-22 09:30:42.216 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:30:42 np0005532048 nova_compute[253661]: 2025-11-22 09:30:42.234 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:30:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Nov 22 04:30:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Nov 22 04:30:42 np0005532048 podman[350576]: 2025-11-22 09:30:42.300082533 +0000 UTC m=+0.059492566 container create af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:30:42 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Nov 22 04:30:42 np0005532048 systemd[1]: Started libpod-conmon-af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3.scope.
Nov 22 04:30:42 np0005532048 podman[350576]: 2025-11-22 09:30:42.269198443 +0000 UTC m=+0.028608486 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:30:42 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:30:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e8f39852c3ce3291613e83f29352d363c0423547f21a29e1614f59dc448bb1f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:30:42 np0005532048 podman[350576]: 2025-11-22 09:30:42.396019034 +0000 UTC m=+0.155429147 container init af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 04:30:42 np0005532048 podman[350576]: 2025-11-22 09:30:42.403409907 +0000 UTC m=+0.162819970 container start af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:30:42 np0005532048 neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2[350591]: [NOTICE]   (350595) : New worker (350597) forked
Nov 22 04:30:42 np0005532048 neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2[350591]: [NOTICE]   (350595) : Loading success.
Nov 22 04:30:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:30:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Nov 22 04:30:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Nov 22 04:30:43 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Nov 22 04:30:43 np0005532048 nova_compute[253661]: 2025-11-22 09:30:43.526 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:30:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 2.8 MiB/s wr, 87 op/s
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.379 253665 DEBUG nova.compute.manager [req-0f8c2b12-9bb8-44d2-a8c9-f600302bfe88 req-d6970f09-a249-41e3-be7e-3d1e545ba15f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received event network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.380 253665 DEBUG oslo_concurrency.lockutils [req-0f8c2b12-9bb8-44d2-a8c9-f600302bfe88 req-d6970f09-a249-41e3-be7e-3d1e545ba15f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.380 253665 DEBUG oslo_concurrency.lockutils [req-0f8c2b12-9bb8-44d2-a8c9-f600302bfe88 req-d6970f09-a249-41e3-be7e-3d1e545ba15f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.380 253665 DEBUG oslo_concurrency.lockutils [req-0f8c2b12-9bb8-44d2-a8c9-f600302bfe88 req-d6970f09-a249-41e3-be7e-3d1e545ba15f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.380 253665 DEBUG nova.compute.manager [req-0f8c2b12-9bb8-44d2-a8c9-f600302bfe88 req-d6970f09-a249-41e3-be7e-3d1e545ba15f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Processing event network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.381 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.386 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803844.3858597, 1f746354-73cc-421a-9cde-f5b8c2b597fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.386 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.390 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.396 253665 INFO nova.virt.libvirt.driver [-] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Instance spawned successfully.#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.397 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.411 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.417 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.423 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.423 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.424 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.424 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.424 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.425 253665 DEBUG nova.virt.libvirt.driver [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.456 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.484 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.510 253665 INFO nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Took 10.14 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.510 253665 DEBUG nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.569 253665 INFO nova.compute.manager [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Took 11.14 seconds to build instance.#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.586 253665 DEBUG oslo_concurrency.lockutils [None req-c9556415-99cf-4123-8200-98de477d899e 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.909 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquiring lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.910 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.922 253665 DEBUG nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.992 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.993 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.999 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:30:44 np0005532048 nova_compute[253661]: 2025-11-22 09:30:44.999 253665 INFO nova.compute.claims [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.114 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:30:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/14115640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.649 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.656 253665 DEBUG nova.compute.provider_tree [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.661 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.669 253665 DEBUG nova.scheduler.client.report [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.690 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.691 253665 DEBUG nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:30:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 31 op/s
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.744 253665 DEBUG nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.746 253665 DEBUG nova.network.neutron [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.768 253665 INFO nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.784 253665 DEBUG nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.854 253665 DEBUG nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.855 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.855 253665 INFO nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Creating image(s)#033[00m
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.881 253665 DEBUG nova.storage.rbd_utils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] rbd image 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.905 253665 DEBUG nova.storage.rbd_utils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] rbd image 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.928 253665 DEBUG nova.storage.rbd_utils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] rbd image 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.932 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquiring lock "5f3104bce7037b3b1741dfbde06f1965ca5da121" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:45 np0005532048 nova_compute[253661]: 2025-11-22 09:30:45.933 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "5f3104bce7037b3b1741dfbde06f1965ca5da121" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.098 253665 DEBUG nova.network.neutron [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.099 253665 DEBUG nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.191 253665 DEBUG nova.virt.libvirt.imagebackend [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Image locations are: [{'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/38888df1-6493-484c-9550-c208e81fe437/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/38888df1-6493-484c-9550-c208e81fe437/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.248 253665 DEBUG nova.virt.libvirt.imagebackend [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Selected location: {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/38888df1-6493-484c-9550-c208e81fe437/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.249 253665 DEBUG nova.storage.rbd_utils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] cloning images/38888df1-6493-484c-9550-c208e81fe437@snap to None/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.405 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "5f3104bce7037b3b1741dfbde06f1965ca5da121" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.494 253665 DEBUG nova.compute.manager [req-c87c94fa-9311-4347-b833-5e5c1107f90a req-fcbc8ad9-dfd5-45a6-931d-a48e619f12bf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received event network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.495 253665 DEBUG oslo_concurrency.lockutils [req-c87c94fa-9311-4347-b833-5e5c1107f90a req-fcbc8ad9-dfd5-45a6-931d-a48e619f12bf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.495 253665 DEBUG oslo_concurrency.lockutils [req-c87c94fa-9311-4347-b833-5e5c1107f90a req-fcbc8ad9-dfd5-45a6-931d-a48e619f12bf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.495 253665 DEBUG oslo_concurrency.lockutils [req-c87c94fa-9311-4347-b833-5e5c1107f90a req-fcbc8ad9-dfd5-45a6-931d-a48e619f12bf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.495 253665 DEBUG nova.compute.manager [req-c87c94fa-9311-4347-b833-5e5c1107f90a req-fcbc8ad9-dfd5-45a6-931d-a48e619f12bf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] No waiting events found dispatching network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.496 253665 WARNING nova.compute.manager [req-c87c94fa-9311-4347-b833-5e5c1107f90a req-fcbc8ad9-dfd5-45a6-931d-a48e619f12bf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received unexpected event network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.552 253665 DEBUG nova.storage.rbd_utils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] resizing rbd image 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.640 253665 DEBUG nova.objects.instance [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lazy-loading 'migration_context' on Instance uuid 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.655 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.656 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Ensure instance console log exists: /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.656 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.656 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.656 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.658 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='ba2f9ee1ff49a773355335bb33ba3dbd',container_format='bare',created_at=2025-11-22T09:30:40Z,direct_url=<?>,disk_format='raw',id=38888df1-6493-484c-9550-c208e81fe437,min_disk=0,min_ram=0,name='tempest-image-dependency-test-2115234108',owner='7f68d63cdf1a45888e4dd7198f06fae4',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-22T09:30:42Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '38888df1-6493-484c-9550-c208e81fe437'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.661 253665 WARNING nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.665 253665 DEBUG nova.virt.libvirt.host [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.666 253665 DEBUG nova.virt.libvirt.host [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.668 253665 DEBUG nova.virt.libvirt.host [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.668 253665 DEBUG nova.virt.libvirt.host [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.668 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.669 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='ba2f9ee1ff49a773355335bb33ba3dbd',container_format='bare',created_at=2025-11-22T09:30:40Z,direct_url=<?>,disk_format='raw',id=38888df1-6493-484c-9550-c208e81fe437,min_disk=0,min_ram=0,name='tempest-image-dependency-test-2115234108',owner='7f68d63cdf1a45888e4dd7198f06fae4',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-22T09:30:42Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.669 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.669 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.670 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.670 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.670 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.670 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.670 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.671 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.671 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.671 253665 DEBUG nova.virt.hardware [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:30:46 np0005532048 nova_compute[253661]: 2025-11-22 09:30:46.674 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:30:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1982895415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:30:47 np0005532048 nova_compute[253661]: 2025-11-22 09:30:47.080 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:47 np0005532048 nova_compute[253661]: 2025-11-22 09:30:47.113 253665 DEBUG nova.storage.rbd_utils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] rbd image 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:47 np0005532048 nova_compute[253661]: 2025-11-22 09:30:47.117 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:30:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3731028041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:30:47 np0005532048 nova_compute[253661]: 2025-11-22 09:30:47.546 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:47 np0005532048 nova_compute[253661]: 2025-11-22 09:30:47.549 253665 DEBUG nova.objects.instance [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:30:47 np0005532048 nova_compute[253661]: 2025-11-22 09:30:47.563 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  <uuid>79b0dd90-3f01-40c6-a7a7-fe79eeab97d0</uuid>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  <name>instance-0000005f</name>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <nova:name>instance-depend-image</nova:name>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:30:46</nova:creationTime>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:30:47 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:        <nova:user uuid="18279c1b172740a1a20233efdadc6120">tempest-ImageDependencyTests-491625585-project-member</nova:user>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:        <nova:project uuid="7f68d63cdf1a45888e4dd7198f06fae4">tempest-ImageDependencyTests-491625585</nova:project>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="38888df1-6493-484c-9550-c208e81fe437"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <entry name="serial">79b0dd90-3f01-40c6-a7a7-fe79eeab97d0</entry>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <entry name="uuid">79b0dd90-3f01-40c6-a7a7-fe79eeab97d0</entry>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk">
Nov 22 04:30:47 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:30:47 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk.config">
Nov 22 04:30:47 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:30:47 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0/console.log" append="off"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:30:47 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:30:47 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:30:47 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:30:47 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:30:47 np0005532048 nova_compute[253661]: 2025-11-22 09:30:47.631 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:30:47 np0005532048 nova_compute[253661]: 2025-11-22 09:30:47.632 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:30:47 np0005532048 nova_compute[253661]: 2025-11-22 09:30:47.632 253665 INFO nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Using config drive#033[00m
Nov 22 04:30:47 np0005532048 nova_compute[253661]: 2025-11-22 09:30:47.659 253665 DEBUG nova.storage.rbd_utils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] rbd image 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 729 KiB/s rd, 22 KiB/s wr, 88 op/s
Nov 22 04:30:47 np0005532048 nova_compute[253661]: 2025-11-22 09:30:47.855 253665 INFO nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Creating config drive at /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0/disk.config#033[00m
Nov 22 04:30:47 np0005532048 nova_compute[253661]: 2025-11-22 09:30:47.860 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppw8nsg29 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:47 np0005532048 nova_compute[253661]: 2025-11-22 09:30:47.998 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppw8nsg29" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:48 np0005532048 nova_compute[253661]: 2025-11-22 09:30:48.026 253665 DEBUG nova.storage.rbd_utils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] rbd image 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:48 np0005532048 nova_compute[253661]: 2025-11-22 09:30:48.030 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0/disk.config 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:48 np0005532048 nova_compute[253661]: 2025-11-22 09:30:48.191 253665 DEBUG oslo_concurrency.processutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0/disk.config 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:48 np0005532048 nova_compute[253661]: 2025-11-22 09:30:48.193 253665 INFO nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Deleting local config drive /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0/disk.config because it was imported into RBD.#033[00m
Nov 22 04:30:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:30:48 np0005532048 systemd-machined[215941]: New machine qemu-115-instance-0000005f.
Nov 22 04:30:48 np0005532048 systemd[1]: Started Virtual Machine qemu-115-instance-0000005f.
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.089 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "1f746354-73cc-421a-9cde-f5b8c2b597fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.090 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.091 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.091 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.091 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.094 253665 INFO nova.compute.manager [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Terminating instance#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.095 253665 DEBUG nova.compute.manager [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:30:49 np0005532048 kernel: tapefe559aa-81 (unregistering): left promiscuous mode
Nov 22 04:30:49 np0005532048 NetworkManager[48920]: <info>  [1763803849.1448] device (tapefe559aa-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:30:49 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:49Z|00955|binding|INFO|Releasing lport efe559aa-813e-4a03-9d8a-363ad40448c7 from this chassis (sb_readonly=0)
Nov 22 04:30:49 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:49Z|00956|binding|INFO|Setting lport efe559aa-813e-4a03-9d8a-363ad40448c7 down in Southbound
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.160 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:49 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:49Z|00957|binding|INFO|Removing iface tapefe559aa-81 ovn-installed in OVS
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.162 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.175 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:4f:bc 10.100.0.3'], port_security=['fa:16:3e:00:4f:bc 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1f746354-73cc-421a-9cde-f5b8c2b597fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2246d64b-e77a-4784-9fa1-d08726a529e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a2f39f99bbb4a6ab72b64ecca259a1d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f1969534-2e7e-4db0-b633-4898adada66f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5a0f272-182f-42b5-a484-bc1a2cbff822, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=efe559aa-813e-4a03-9d8a-363ad40448c7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:30:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.176 162862 INFO neutron.agent.ovn.metadata.agent [-] Port efe559aa-813e-4a03-9d8a-363ad40448c7 in datapath 2246d64b-e77a-4784-9fa1-d08726a529e2 unbound from our chassis#033[00m
Nov 22 04:30:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.178 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2246d64b-e77a-4784-9fa1-d08726a529e2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.187 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.187 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5e82d5fb-06d3-45fb-a721-7256a88c786c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.188 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2 namespace which is not needed anymore#033[00m
Nov 22 04:30:49 np0005532048 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Nov 22 04:30:49 np0005532048 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005e.scope: Consumed 5.414s CPU time.
Nov 22 04:30:49 np0005532048 systemd-machined[215941]: Machine qemu-114-instance-0000005e terminated.
Nov 22 04:30:49 np0005532048 podman[350959]: 2025-11-22 09:30:49.279051944 +0000 UTC m=+0.103480658 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.342 253665 INFO nova.virt.libvirt.driver [-] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Instance destroyed successfully.#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.342 253665 DEBUG nova.objects.instance [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lazy-loading 'resources' on Instance uuid 1f746354-73cc-421a-9cde-f5b8c2b597fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:30:49 np0005532048 neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2[350591]: [NOTICE]   (350595) : haproxy version is 2.8.14-c23fe91
Nov 22 04:30:49 np0005532048 neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2[350591]: [NOTICE]   (350595) : path to executable is /usr/sbin/haproxy
Nov 22 04:30:49 np0005532048 neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2[350591]: [WARNING]  (350595) : Exiting Master process...
Nov 22 04:30:49 np0005532048 neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2[350591]: [ALERT]    (350595) : Current worker (350597) exited with code 143 (Terminated)
Nov 22 04:30:49 np0005532048 neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2[350591]: [WARNING]  (350595) : All workers exited. Exiting... (0)
Nov 22 04:30:49 np0005532048 systemd[1]: libpod-af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3.scope: Deactivated successfully.
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.355 253665 DEBUG nova.virt.libvirt.vif [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:30:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-813427165',display_name='tempest-ServerMetadataTestJSON-server-813427165',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-813427165',id=94,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:30:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a2f39f99bbb4a6ab72b64ecca259a1d',ramdisk_id='',reservation_id='r-us3y0nv9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-715260665',owner_user_name='tempest-ServerMetadataTestJSON-715260665-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:30:48Z,user_data=None,user_id='2edeecfc11f347c0856dcf9fae9296ff',uuid=1f746354-73cc-421a-9cde-f5b8c2b597fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.355 253665 DEBUG nova.network.os_vif_util [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Converting VIF {"id": "efe559aa-813e-4a03-9d8a-363ad40448c7", "address": "fa:16:3e:00:4f:bc", "network": {"id": "2246d64b-e77a-4784-9fa1-d08726a529e2", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1768360973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a2f39f99bbb4a6ab72b64ecca259a1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefe559aa-81", "ovs_interfaceid": "efe559aa-813e-4a03-9d8a-363ad40448c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.356 253665 DEBUG nova.network.os_vif_util [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:4f:bc,bridge_name='br-int',has_traffic_filtering=True,id=efe559aa-813e-4a03-9d8a-363ad40448c7,network=Network(2246d64b-e77a-4784-9fa1-d08726a529e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefe559aa-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.356 253665 DEBUG os_vif [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:4f:bc,bridge_name='br-int',has_traffic_filtering=True,id=efe559aa-813e-4a03-9d8a-363ad40448c7,network=Network(2246d64b-e77a-4784-9fa1-d08726a529e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefe559aa-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.358 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.358 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefe559aa-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.363 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:30:49 np0005532048 podman[351008]: 2025-11-22 09:30:49.365504572 +0000 UTC m=+0.076239767 container died af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.366 253665 INFO os_vif [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:4f:bc,bridge_name='br-int',has_traffic_filtering=True,id=efe559aa-813e-4a03-9d8a-363ad40448c7,network=Network(2246d64b-e77a-4784-9fa1-d08726a529e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefe559aa-81')#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.405 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.406 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.427 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:30:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3-userdata-shm.mount: Deactivated successfully.
Nov 22 04:30:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2e8f39852c3ce3291613e83f29352d363c0423547f21a29e1614f59dc448bb1f-merged.mount: Deactivated successfully.
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.520 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.521 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.531 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.532 253665 INFO nova.compute.claims [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:30:49 np0005532048 podman[351008]: 2025-11-22 09:30:49.640793838 +0000 UTC m=+0.351529033 container cleanup af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:30:49 np0005532048 systemd[1]: libpod-conmon-af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3.scope: Deactivated successfully.
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.678 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 183 op/s
Nov 22 04:30:49 np0005532048 podman[351096]: 2025-11-22 09:30:49.823090804 +0000 UTC m=+0.155729363 container remove af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:30:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.836 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[089fa18a-da26-401e-a577-b7a688d4f90c]: (4, ('Sat Nov 22 09:30:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2 (af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3)\naf5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3\nSat Nov 22 09:30:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2 (af5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3)\naf5e2c2e527d6444234e84e8a7c8b98ee775bf6c7875907ca19e86be94ca84c3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.839 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6fafe959-a1f0-4aa8-b706-3016c114c5f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.840 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2246d64b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.843 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:49 np0005532048 kernel: tap2246d64b-e0: left promiscuous mode
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.856 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803849.8554833, 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.856 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.860 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.862 253665 DEBUG nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.863 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:30:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.864 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd6ab8a-cafe-47b5-8167-76d59dc3273a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.874 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.878 253665 INFO nova.virt.libvirt.driver [-] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Instance spawned successfully.#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.886 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:30:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.886 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a191bfe7-4bc5-460a-b773-e004de6efe75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.888 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73372b24-c987-4ef2-9dab-bd7059364280]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.891 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.905 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.906 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803849.859133, 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.906 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] VM Started (Lifecycle Event)#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.909 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.909 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.910 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.910 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.910 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.911 253665 DEBUG nova.virt.libvirt.driver [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:30:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.913 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea86c66-477f-4abc-85c7-4b7c7cda12c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669232, 'reachable_time': 33448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351136, 'error': None, 'target': 'ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:49 np0005532048 systemd[1]: run-netns-ovnmeta\x2d2246d64b\x2de77a\x2d4784\x2d9fa1\x2dd08726a529e2.mount: Deactivated successfully.
Nov 22 04:30:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.918 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2246d64b-e77a-4784-9fa1-d08726a529e2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:30:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:49.918 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[bd97c9a6-d738-4d2f-b228-f2a66eb0d3d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.929 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.934 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.955 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.972 253665 INFO nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Took 4.12 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:30:49 np0005532048 nova_compute[253661]: 2025-11-22 09:30:49.972 253665 DEBUG nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.044 253665 INFO nova.compute.manager [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Took 5.08 seconds to build instance.#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.058 253665 DEBUG oslo_concurrency.lockutils [None req-19b6d624-740c-49b5-8ad9-3387cc6bf727 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:30:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3739822084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.180 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.186 253665 DEBUG nova.compute.provider_tree [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.197 253665 DEBUG nova.scheduler.client.report [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.206 253665 INFO nova.virt.libvirt.driver [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Deleting instance files /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe_del#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.206 253665 INFO nova.virt.libvirt.driver [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Deletion of /var/lib/nova/instances/1f746354-73cc-421a-9cde-f5b8c2b597fe_del complete#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.213 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.213 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.263 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.264 253665 DEBUG nova.network.neutron [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.276 253665 INFO nova.compute.manager [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Took 1.18 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.277 253665 DEBUG oslo.service.loopingcall [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.277 253665 DEBUG nova.compute.manager [-] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.278 253665 DEBUG nova.network.neutron [-] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.283 253665 INFO nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.305 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.648 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.649 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.650 253665 INFO nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Creating image(s)#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.674 253665 DEBUG nova.storage.rbd_utils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.701 253665 DEBUG nova.storage.rbd_utils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.730 253665 DEBUG nova.storage.rbd_utils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.736 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.777 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.819 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.820 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.820 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.821 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.843 253665 DEBUG nova.storage.rbd_utils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.848 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:50 np0005532048 nova_compute[253661]: 2025-11-22 09:30:50.907 253665 DEBUG nova.policy [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ac89f965408f4a26b39ee2ae4725ff14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0112f56c468c4f90971b92126078e951', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.125 253665 DEBUG nova.compute.manager [req-f37ddf57-b2d5-486a-b17b-fe999a626716 req-38daaf7e-aeef-4b1b-8ca8-146ce3ad7f76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received event network-vif-unplugged-efe559aa-813e-4a03-9d8a-363ad40448c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.126 253665 DEBUG oslo_concurrency.lockutils [req-f37ddf57-b2d5-486a-b17b-fe999a626716 req-38daaf7e-aeef-4b1b-8ca8-146ce3ad7f76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.126 253665 DEBUG oslo_concurrency.lockutils [req-f37ddf57-b2d5-486a-b17b-fe999a626716 req-38daaf7e-aeef-4b1b-8ca8-146ce3ad7f76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.126 253665 DEBUG oslo_concurrency.lockutils [req-f37ddf57-b2d5-486a-b17b-fe999a626716 req-38daaf7e-aeef-4b1b-8ca8-146ce3ad7f76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.126 253665 DEBUG nova.compute.manager [req-f37ddf57-b2d5-486a-b17b-fe999a626716 req-38daaf7e-aeef-4b1b-8ca8-146ce3ad7f76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] No waiting events found dispatching network-vif-unplugged-efe559aa-813e-4a03-9d8a-363ad40448c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.126 253665 DEBUG nova.compute.manager [req-f37ddf57-b2d5-486a-b17b-fe999a626716 req-38daaf7e-aeef-4b1b-8ca8-146ce3ad7f76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received event network-vif-unplugged-efe559aa-813e-4a03-9d8a-363ad40448c7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.225 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.377s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.303 253665 DEBUG nova.storage.rbd_utils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] resizing rbd image 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.463 253665 DEBUG nova.network.neutron [-] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.485 253665 INFO nova.compute.manager [-] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Took 1.21 seconds to deallocate network for instance.#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.539 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.539 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.647 253665 DEBUG nova.objects.instance [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'migration_context' on Instance uuid 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.657 253665 DEBUG oslo_concurrency.processutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.694 253665 DEBUG nova.compute.manager [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.696 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.696 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Ensure instance console log exists: /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.697 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.697 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.698 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 KiB/s wr, 142 op/s
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.733 253665 INFO nova.compute.manager [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] instance snapshotting#033[00m
Nov 22 04:30:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:30:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:30:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:30:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:30:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:30:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:30:51 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ada2bc9a-612d-493b-82a5-e4f0f902af3c does not exist
Nov 22 04:30:51 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 8add5ee4-36a2-4ee8-afdb-552f95ecf8a2 does not exist
Nov 22 04:30:51 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a81507dd-aab9-4e28-b25c-6acab796a35c does not exist
Nov 22 04:30:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:30:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:30:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:30:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:30:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:30:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.942 253665 DEBUG nova.network.neutron [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Successfully created port: a13cc417-edce-4c30-a5b0-f90095810bcc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:30:51 np0005532048 nova_compute[253661]: 2025-11-22 09:30:51.965 253665 INFO nova.virt.libvirt.driver [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Beginning live snapshot process#033[00m
Nov 22 04:30:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:30:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3781483500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:30:52 np0005532048 nova_compute[253661]: 2025-11-22 09:30:52.118 253665 DEBUG oslo_concurrency.processutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:52 np0005532048 nova_compute[253661]: 2025-11-22 09:30:52.123 253665 DEBUG nova.storage.rbd_utils [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] creating snapshot(850682c18479484b9b5434a2c2332bdc) on rbd image(79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:30:52 np0005532048 nova_compute[253661]: 2025-11-22 09:30:52.164 253665 DEBUG nova.compute.provider_tree [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:30:52 np0005532048 nova_compute[253661]: 2025-11-22 09:30:52.176 253665 DEBUG nova.scheduler.client.report [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:30:52 np0005532048 nova_compute[253661]: 2025-11-22 09:30:52.191 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:52 np0005532048 nova_compute[253661]: 2025-11-22 09:30:52.228 253665 INFO nova.scheduler.client.report [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Deleted allocations for instance 1f746354-73cc-421a-9cde-f5b8c2b597fe#033[00m
Nov 22 04:30:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:30:52
Nov 22 04:30:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:30:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:30:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'default.rgw.meta', '.mgr', 'vms', 'backups', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data']
Nov 22 04:30:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:30:52 np0005532048 nova_compute[253661]: 2025-11-22 09:30:52.282 253665 DEBUG oslo_concurrency.lockutils [None req-ff2dd98a-b721-41de-8385-b60e9a2e57bf 2edeecfc11f347c0856dcf9fae9296ff 6a2f39f99bbb4a6ab72b64ecca259a1d - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:52 np0005532048 podman[351649]: 2025-11-22 09:30:52.430347776 +0000 UTC m=+0.034254985 container create 91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:30:52 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:30:52 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:30:52 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:30:52 np0005532048 systemd[1]: Started libpod-conmon-91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2.scope.
Nov 22 04:30:52 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:30:52 np0005532048 podman[351649]: 2025-11-22 09:30:52.415593712 +0000 UTC m=+0.019500951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:30:52 np0005532048 podman[351649]: 2025-11-22 09:30:52.516758463 +0000 UTC m=+0.120665722 container init 91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 04:30:52 np0005532048 podman[351649]: 2025-11-22 09:30:52.523912269 +0000 UTC m=+0.127819488 container start 91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:30:52 np0005532048 podman[351649]: 2025-11-22 09:30:52.527926038 +0000 UTC m=+0.131833247 container attach 91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 04:30:52 np0005532048 focused_hofstadter[351666]: 167 167
Nov 22 04:30:52 np0005532048 systemd[1]: libpod-91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2.scope: Deactivated successfully.
Nov 22 04:30:52 np0005532048 podman[351649]: 2025-11-22 09:30:52.530155742 +0000 UTC m=+0.134062951 container died 91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:30:52 np0005532048 systemd[1]: var-lib-containers-storage-overlay-12c8b59d88b3260689f594bb504ca865fe7160461837affe2aaaef64aae3606d-merged.mount: Deactivated successfully.
Nov 22 04:30:52 np0005532048 podman[351649]: 2025-11-22 09:30:52.56828631 +0000 UTC m=+0.172193519 container remove 91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:30:52 np0005532048 systemd[1]: libpod-conmon-91025dbc0910da0d045a1718dd3d2a74aefaf5836490f9b6f10abb0615cf7ea2.scope: Deactivated successfully.
Nov 22 04:30:52 np0005532048 podman[351688]: 2025-11-22 09:30:52.727808997 +0000 UTC m=+0.045110901 container create 2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_benz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 04:30:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:30:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:30:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:30:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:30:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:30:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:30:52 np0005532048 systemd[1]: Started libpod-conmon-2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2.scope.
Nov 22 04:30:52 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:30:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f288077041682e6de959bbab4f3f58bf36c45cfd3e31117409fc8c122c0fca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:30:52 np0005532048 podman[351688]: 2025-11-22 09:30:52.709370323 +0000 UTC m=+0.026672247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:30:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f288077041682e6de959bbab4f3f58bf36c45cfd3e31117409fc8c122c0fca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:30:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f288077041682e6de959bbab4f3f58bf36c45cfd3e31117409fc8c122c0fca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:30:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f288077041682e6de959bbab4f3f58bf36c45cfd3e31117409fc8c122c0fca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:30:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f288077041682e6de959bbab4f3f58bf36c45cfd3e31117409fc8c122c0fca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:30:52 np0005532048 podman[351688]: 2025-11-22 09:30:52.816833578 +0000 UTC m=+0.134135502 container init 2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_benz, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 04:30:52 np0005532048 podman[351688]: 2025-11-22 09:30:52.823686567 +0000 UTC m=+0.140988471 container start 2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_benz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:30:52 np0005532048 podman[351688]: 2025-11-22 09:30:52.827721577 +0000 UTC m=+0.145023501 container attach 2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 04:30:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Nov 22 04:30:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Nov 22 04:30:52 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Nov 22 04:30:52 np0005532048 nova_compute[253661]: 2025-11-22 09:30:52.976 253665 DEBUG nova.storage.rbd_utils [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] cloning vms/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk@850682c18479484b9b5434a2c2332bdc to images/1c164693-7d32-4441-9567-92f357c61148 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.097 253665 DEBUG nova.storage.rbd_utils [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] flattening images/1c164693-7d32-4441-9567-92f357c61148 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 22 04:30:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.234 253665 DEBUG nova.compute.manager [req-be58a01e-5585-47e7-922e-6f570ae012cd req-1a575502-fd62-4bd0-b3db-ad25fa41367f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received event network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.234 253665 DEBUG oslo_concurrency.lockutils [req-be58a01e-5585-47e7-922e-6f570ae012cd req-1a575502-fd62-4bd0-b3db-ad25fa41367f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.234 253665 DEBUG oslo_concurrency.lockutils [req-be58a01e-5585-47e7-922e-6f570ae012cd req-1a575502-fd62-4bd0-b3db-ad25fa41367f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.235 253665 DEBUG oslo_concurrency.lockutils [req-be58a01e-5585-47e7-922e-6f570ae012cd req-1a575502-fd62-4bd0-b3db-ad25fa41367f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1f746354-73cc-421a-9cde-f5b8c2b597fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.235 253665 DEBUG nova.compute.manager [req-be58a01e-5585-47e7-922e-6f570ae012cd req-1a575502-fd62-4bd0-b3db-ad25fa41367f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] No waiting events found dispatching network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.235 253665 WARNING nova.compute.manager [req-be58a01e-5585-47e7-922e-6f570ae012cd req-1a575502-fd62-4bd0-b3db-ad25fa41367f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received unexpected event network-vif-plugged-efe559aa-813e-4a03-9d8a-363ad40448c7 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.235 253665 DEBUG nova.compute.manager [req-be58a01e-5585-47e7-922e-6f570ae012cd req-1a575502-fd62-4bd0-b3db-ad25fa41367f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Received event network-vif-deleted-efe559aa-813e-4a03-9d8a-363ad40448c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.248 253665 DEBUG nova.storage.rbd_utils [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] removing snapshot(850682c18479484b9b5434a2c2332bdc) on rbd image(79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.378 253665 DEBUG nova.network.neutron [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Successfully updated port: a13cc417-edce-4c30-a5b0-f90095810bcc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.396 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.396 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquired lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.396 253665 DEBUG nova.network.neutron [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.510 253665 DEBUG nova.compute.manager [req-6b92c29a-92e0-49b4-a4ba-7b60b40a48ed req-9753050c-0b55-4fa7-9baf-c3fda411df9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-changed-a13cc417-edce-4c30-a5b0-f90095810bcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.511 253665 DEBUG nova.compute.manager [req-6b92c29a-92e0-49b4-a4ba-7b60b40a48ed req-9753050c-0b55-4fa7-9baf-c3fda411df9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Refreshing instance network info cache due to event network-changed-a13cc417-edce-4c30-a5b0-f90095810bcc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.511 253665 DEBUG oslo_concurrency.lockutils [req-6b92c29a-92e0-49b4-a4ba-7b60b40a48ed req-9753050c-0b55-4fa7-9baf-c3fda411df9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:30:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 81 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 201 op/s
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.881 253665 DEBUG nova.network.neutron [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:30:53 np0005532048 youthful_benz[351704]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:30:53 np0005532048 youthful_benz[351704]: --> relative data size: 1.0
Nov 22 04:30:53 np0005532048 youthful_benz[351704]: --> All data devices are unavailable
Nov 22 04:30:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Nov 22 04:30:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Nov 22 04:30:53 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Nov 22 04:30:53 np0005532048 systemd[1]: libpod-2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2.scope: Deactivated successfully.
Nov 22 04:30:53 np0005532048 podman[351688]: 2025-11-22 09:30:53.924520091 +0000 UTC m=+1.241821995 container died 2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:30:53 np0005532048 systemd[1]: libpod-2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2.scope: Consumed 1.030s CPU time.
Nov 22 04:30:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay-28f288077041682e6de959bbab4f3f58bf36c45cfd3e31117409fc8c122c0fca-merged.mount: Deactivated successfully.
Nov 22 04:30:53 np0005532048 nova_compute[253661]: 2025-11-22 09:30:53.979 253665 DEBUG nova.storage.rbd_utils [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] creating snapshot(snap) on rbd image(1c164693-7d32-4441-9567-92f357c61148) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:30:53 np0005532048 podman[351688]: 2025-11-22 09:30:53.997541279 +0000 UTC m=+1.314843183 container remove 2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_benz, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:30:54 np0005532048 systemd[1]: libpod-conmon-2b826f05e89411bc87d74e83a2a9d30b3482b0793bda3b05c7d588f1a27ab5f2.scope: Deactivated successfully.
Nov 22 04:30:54 np0005532048 nova_compute[253661]: 2025-11-22 09:30:54.361 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:54 np0005532048 podman[351975]: 2025-11-22 09:30:54.642648817 +0000 UTC m=+0.047653934 container create 45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Nov 22 04:30:54 np0005532048 systemd[1]: Started libpod-conmon-45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8.scope.
Nov 22 04:30:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:30:54 np0005532048 podman[351975]: 2025-11-22 09:30:54.625007002 +0000 UTC m=+0.030012139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:30:54 np0005532048 podman[351975]: 2025-11-22 09:30:54.723656981 +0000 UTC m=+0.128662118 container init 45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 04:30:54 np0005532048 podman[351975]: 2025-11-22 09:30:54.732258732 +0000 UTC m=+0.137263849 container start 45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:30:54 np0005532048 podman[351975]: 2025-11-22 09:30:54.735740677 +0000 UTC m=+0.140745874 container attach 45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 22 04:30:54 np0005532048 charming_cohen[351991]: 167 167
Nov 22 04:30:54 np0005532048 systemd[1]: libpod-45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8.scope: Deactivated successfully.
Nov 22 04:30:54 np0005532048 conmon[351991]: conmon 45a76bed2164ec80ea36 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8.scope/container/memory.events
Nov 22 04:30:54 np0005532048 podman[351975]: 2025-11-22 09:30:54.739760076 +0000 UTC m=+0.144765193 container died 45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 04:30:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8127c13ecd010438710c7fb20a011c5612db61cf912fbaf7436514cacab70403-merged.mount: Deactivated successfully.
Nov 22 04:30:54 np0005532048 podman[351975]: 2025-11-22 09:30:54.779779452 +0000 UTC m=+0.184784589 container remove 45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:30:54 np0005532048 systemd[1]: libpod-conmon-45a76bed2164ec80ea36b5671ac63bb25f9c35e715b44fa6413e51933f494ca8.scope: Deactivated successfully.
Nov 22 04:30:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Nov 22 04:30:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Nov 22 04:30:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Nov 22 04:30:54 np0005532048 podman[352014]: 2025-11-22 09:30:54.961182706 +0000 UTC m=+0.044281650 container create 9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ride, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 04:30:55 np0005532048 systemd[1]: Started libpod-conmon-9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48.scope.
Nov 22 04:30:55 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:30:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbeee9815b12818bab684dcf45ff13228efd599163d66255b3cd6dfa42cac86b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:30:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbeee9815b12818bab684dcf45ff13228efd599163d66255b3cd6dfa42cac86b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:30:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbeee9815b12818bab684dcf45ff13228efd599163d66255b3cd6dfa42cac86b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:30:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbeee9815b12818bab684dcf45ff13228efd599163d66255b3cd6dfa42cac86b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:30:55 np0005532048 podman[352014]: 2025-11-22 09:30:54.944495256 +0000 UTC m=+0.027594220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:30:55 np0005532048 podman[352014]: 2025-11-22 09:30:55.04339856 +0000 UTC m=+0.126497524 container init 9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ride, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:30:55 np0005532048 podman[352014]: 2025-11-22 09:30:55.052218607 +0000 UTC m=+0.135317551 container start 9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ride, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 04:30:55 np0005532048 podman[352014]: 2025-11-22 09:30:55.05641512 +0000 UTC m=+0.139514094 container attach 9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.110 253665 DEBUG nova.network.neutron [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Updating instance_info_cache with network_info: [{"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.130 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Releasing lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.130 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Instance network_info: |[{"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.131 253665 DEBUG oslo_concurrency.lockutils [req-6b92c29a-92e0-49b4-a4ba-7b60b40a48ed req-9753050c-0b55-4fa7-9baf-c3fda411df9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.131 253665 DEBUG nova.network.neutron [req-6b92c29a-92e0-49b4-a4ba-7b60b40a48ed req-9753050c-0b55-4fa7-9baf-c3fda411df9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Refreshing network info cache for port a13cc417-edce-4c30-a5b0-f90095810bcc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.135 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Start _get_guest_xml network_info=[{"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.141 253665 WARNING nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.148 253665 DEBUG nova.virt.libvirt.host [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.150 253665 DEBUG nova.virt.libvirt.host [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.157 253665 DEBUG nova.virt.libvirt.host [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.158 253665 DEBUG nova.virt.libvirt.host [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.158 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.158 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.159 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.159 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.159 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.159 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.160 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.160 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.160 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.160 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.160 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.161 253665 DEBUG nova.virt.hardware [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.163 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:30:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1352264715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.618 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:30:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:30:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:30:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:30:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.654 253665 DEBUG nova.storage.rbd_utils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.660 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:55 np0005532048 nova_compute[253661]: 2025-11-22 09:30:55.698 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 3.6 MiB/s wr, 155 op/s
Nov 22 04:30:55 np0005532048 clever_ride[352030]: {
Nov 22 04:30:55 np0005532048 clever_ride[352030]:    "0": [
Nov 22 04:30:55 np0005532048 clever_ride[352030]:        {
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "devices": [
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "/dev/loop3"
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            ],
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "lv_name": "ceph_lv0",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "lv_size": "21470642176",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "name": "ceph_lv0",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "tags": {
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.cluster_name": "ceph",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.crush_device_class": "",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.encrypted": "0",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.osd_id": "0",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.type": "block",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.vdo": "0"
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            },
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "type": "block",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "vg_name": "ceph_vg0"
Nov 22 04:30:55 np0005532048 clever_ride[352030]:        }
Nov 22 04:30:55 np0005532048 clever_ride[352030]:    ],
Nov 22 04:30:55 np0005532048 clever_ride[352030]:    "1": [
Nov 22 04:30:55 np0005532048 clever_ride[352030]:        {
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "devices": [
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "/dev/loop4"
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            ],
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "lv_name": "ceph_lv1",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "lv_size": "21470642176",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "name": "ceph_lv1",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "tags": {
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.cluster_name": "ceph",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.crush_device_class": "",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.encrypted": "0",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.osd_id": "1",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.type": "block",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.vdo": "0"
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            },
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "type": "block",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "vg_name": "ceph_vg1"
Nov 22 04:30:55 np0005532048 clever_ride[352030]:        }
Nov 22 04:30:55 np0005532048 clever_ride[352030]:    ],
Nov 22 04:30:55 np0005532048 clever_ride[352030]:    "2": [
Nov 22 04:30:55 np0005532048 clever_ride[352030]:        {
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "devices": [
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "/dev/loop5"
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            ],
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "lv_name": "ceph_lv2",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "lv_size": "21470642176",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "name": "ceph_lv2",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "tags": {
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.cluster_name": "ceph",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.crush_device_class": "",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.encrypted": "0",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.osd_id": "2",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.type": "block",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:                "ceph.vdo": "0"
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            },
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "type": "block",
Nov 22 04:30:55 np0005532048 clever_ride[352030]:            "vg_name": "ceph_vg2"
Nov 22 04:30:55 np0005532048 clever_ride[352030]:        }
Nov 22 04:30:55 np0005532048 clever_ride[352030]:    ]
Nov 22 04:30:55 np0005532048 clever_ride[352030]: }
Nov 22 04:30:55 np0005532048 systemd[1]: libpod-9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48.scope: Deactivated successfully.
Nov 22 04:30:55 np0005532048 podman[352099]: 2025-11-22 09:30:55.960204005 +0000 UTC m=+0.031799144 container died 9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ride, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:30:55 np0005532048 systemd[1]: var-lib-containers-storage-overlay-cbeee9815b12818bab684dcf45ff13228efd599163d66255b3cd6dfa42cac86b-merged.mount: Deactivated successfully.
Nov 22 04:30:56 np0005532048 podman[352099]: 2025-11-22 09:30:56.012614785 +0000 UTC m=+0.084209864 container remove 9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ride, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 04:30:56 np0005532048 systemd[1]: libpod-conmon-9bf42bb6aa5eeef304d1db335a8ae23dd68430aff48cef2c6291cae5874a9d48.scope: Deactivated successfully.
Nov 22 04:30:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:30:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2279700360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.162 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.163 253665 DEBUG nova.virt.libvirt.vif [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:30:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-458402395',display_name='tempest-TestNetworkAdvancedServerOps-server-458402395',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-458402395',id=96,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPN6Rf2Pe6I6Kwug9Q7FGB75vk9ho8mQhQaKMB+gkIT1QntL149y3I7blWOrUF/CBmpP9hEhIUJwXQpTVsnaSm2uVyBQ0rC8pr4pNUdemX2qkiqIxYyhgu6PS131KVtofw==',key_name='tempest-TestNetworkAdvancedServerOps-1088921116',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-1ip0oei2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:30:50Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.164 253665 DEBUG nova.network.os_vif_util [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.164 253665 DEBUG nova.network.os_vif_util [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:4e:e1,bridge_name='br-int',has_traffic_filtering=True,id=a13cc417-edce-4c30-a5b0-f90095810bcc,network=Network(7bcad6c6-374a-4697-ae00-916836e6498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13cc417-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.166 253665 DEBUG nova.objects.instance [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.182 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  <uuid>84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2</uuid>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  <name>instance-00000060</name>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-458402395</nova:name>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:30:55</nova:creationTime>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:30:56 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:        <nova:user uuid="ac89f965408f4a26b39ee2ae4725ff14">tempest-TestNetworkAdvancedServerOps-1215776227-project-member</nova:user>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:        <nova:project uuid="0112f56c468c4f90971b92126078e951">tempest-TestNetworkAdvancedServerOps-1215776227</nova:project>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:        <nova:port uuid="a13cc417-edce-4c30-a5b0-f90095810bcc">
Nov 22 04:30:56 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <entry name="serial">84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2</entry>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <entry name="uuid">84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2</entry>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk">
Nov 22 04:30:56 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:30:56 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk.config">
Nov 22 04:30:56 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:30:56 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:55:4e:e1"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <target dev="tapa13cc417-ed"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2/console.log" append="off"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:30:56 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:30:56 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:30:56 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:30:56 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.183 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Preparing to wait for external event network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.183 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.183 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.184 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.184 253665 DEBUG nova.virt.libvirt.vif [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:30:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-458402395',display_name='tempest-TestNetworkAdvancedServerOps-server-458402395',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-458402395',id=96,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPN6Rf2Pe6I6Kwug9Q7FGB75vk9ho8mQhQaKMB+gkIT1QntL149y3I7blWOrUF/CBmpP9hEhIUJwXQpTVsnaSm2uVyBQ0rC8pr4pNUdemX2qkiqIxYyhgu6PS131KVtofw==',key_name='tempest-TestNetworkAdvancedServerOps-1088921116',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-1ip0oei2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:30:50Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.184 253665 DEBUG nova.network.os_vif_util [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.185 253665 DEBUG nova.network.os_vif_util [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:4e:e1,bridge_name='br-int',has_traffic_filtering=True,id=a13cc417-edce-4c30-a5b0-f90095810bcc,network=Network(7bcad6c6-374a-4697-ae00-916836e6498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13cc417-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.185 253665 DEBUG os_vif [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:4e:e1,bridge_name='br-int',has_traffic_filtering=True,id=a13cc417-edce-4c30-a5b0-f90095810bcc,network=Network(7bcad6c6-374a-4697-ae00-916836e6498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13cc417-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.186 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.186 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.186 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.190 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.190 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa13cc417-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.190 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa13cc417-ed, col_values=(('external_ids', {'iface-id': 'a13cc417-edce-4c30-a5b0-f90095810bcc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:55:4e:e1', 'vm-uuid': '84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.192 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:56 np0005532048 NetworkManager[48920]: <info>  [1763803856.1930] manager: (tapa13cc417-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.203 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.203 253665 INFO os_vif [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:4e:e1,bridge_name='br-int',has_traffic_filtering=True,id=a13cc417-edce-4c30-a5b0-f90095810bcc,network=Network(7bcad6c6-374a-4697-ae00-916836e6498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13cc417-ed')#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.256 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.256 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.256 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No VIF found with MAC fa:16:3e:55:4e:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.257 253665 INFO nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Using config drive#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.283 253665 DEBUG nova.storage.rbd_utils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.405 253665 INFO nova.virt.libvirt.driver [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Snapshot image upload complete#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.405 253665 INFO nova.compute.manager [None req-6666dd0d-9c08-4b0e-b41d-1986b46c7566 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Took 4.67 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 22 04:30:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:30:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:30:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:30:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:30:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:30:56 np0005532048 podman[352274]: 2025-11-22 09:30:56.708136094 +0000 UTC m=+0.045859650 container create 925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:30:56 np0005532048 systemd[1]: Started libpod-conmon-925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7.scope.
Nov 22 04:30:56 np0005532048 podman[352274]: 2025-11-22 09:30:56.685930497 +0000 UTC m=+0.023654063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:30:56 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:30:56 np0005532048 podman[352274]: 2025-11-22 09:30:56.802205499 +0000 UTC m=+0.139929065 container init 925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 04:30:56 np0005532048 podman[352274]: 2025-11-22 09:30:56.810191176 +0000 UTC m=+0.147914722 container start 925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 04:30:56 np0005532048 podman[352274]: 2025-11-22 09:30:56.813710403 +0000 UTC m=+0.151433979 container attach 925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 04:30:56 np0005532048 upbeat_hellman[352290]: 167 167
Nov 22 04:30:56 np0005532048 systemd[1]: libpod-925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7.scope: Deactivated successfully.
Nov 22 04:30:56 np0005532048 podman[352274]: 2025-11-22 09:30:56.817404453 +0000 UTC m=+0.155127999 container died 925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 04:30:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-77d672e37cc3640753e4e38d7a5bdd806e102d3855338c3202f7b0440bfe2b40-merged.mount: Deactivated successfully.
Nov 22 04:30:56 np0005532048 podman[352274]: 2025-11-22 09:30:56.85424854 +0000 UTC m=+0.191972086 container remove 925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hellman, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 04:30:56 np0005532048 systemd[1]: libpod-conmon-925f93d22409c66241383bef5bea36a93067ab74e4d5cf26edd48fee706369d7.scope: Deactivated successfully.
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.953 253665 DEBUG nova.network.neutron [req-6b92c29a-92e0-49b4-a4ba-7b60b40a48ed req-9753050c-0b55-4fa7-9baf-c3fda411df9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Updated VIF entry in instance network info cache for port a13cc417-edce-4c30-a5b0-f90095810bcc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.956 253665 DEBUG nova.network.neutron [req-6b92c29a-92e0-49b4-a4ba-7b60b40a48ed req-9753050c-0b55-4fa7-9baf-c3fda411df9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Updating instance_info_cache with network_info: [{"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:30:56 np0005532048 nova_compute[253661]: 2025-11-22 09:30:56.974 253665 DEBUG oslo_concurrency.lockutils [req-6b92c29a-92e0-49b4-a4ba-7b60b40a48ed req-9753050c-0b55-4fa7-9baf-c3fda411df9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:30:57 np0005532048 podman[352314]: 2025-11-22 09:30:57.026710905 +0000 UTC m=+0.040143759 container create d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 04:30:57 np0005532048 systemd[1]: Started libpod-conmon-d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288.scope.
Nov 22 04:30:57 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:30:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01d15cfaa5dec98906b71eb42dbeab5f22cf45371d2852898b846d988cdb5ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:30:57 np0005532048 podman[352314]: 2025-11-22 09:30:57.009405789 +0000 UTC m=+0.022838663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:30:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01d15cfaa5dec98906b71eb42dbeab5f22cf45371d2852898b846d988cdb5ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:30:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01d15cfaa5dec98906b71eb42dbeab5f22cf45371d2852898b846d988cdb5ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:30:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01d15cfaa5dec98906b71eb42dbeab5f22cf45371d2852898b846d988cdb5ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:30:57 np0005532048 nova_compute[253661]: 2025-11-22 09:30:57.107 253665 INFO nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Creating config drive at /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2/disk.config#033[00m
Nov 22 04:30:57 np0005532048 nova_compute[253661]: 2025-11-22 09:30:57.117 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxt4dvnon execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:57 np0005532048 podman[352314]: 2025-11-22 09:30:57.214075137 +0000 UTC m=+0.227508021 container init d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cannon, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:30:57 np0005532048 podman[352314]: 2025-11-22 09:30:57.224843521 +0000 UTC m=+0.238276385 container start d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cannon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 04:30:57 np0005532048 podman[352314]: 2025-11-22 09:30:57.228907361 +0000 UTC m=+0.242340215 container attach d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cannon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 04:30:57 np0005532048 nova_compute[253661]: 2025-11-22 09:30:57.287 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxt4dvnon" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:57 np0005532048 nova_compute[253661]: 2025-11-22 09:30:57.330 253665 DEBUG nova.storage.rbd_utils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:30:57 np0005532048 nova_compute[253661]: 2025-11-22 09:30:57.336 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2/disk.config 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:30:57 np0005532048 nova_compute[253661]: 2025-11-22 09:30:57.485 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:57 np0005532048 nova_compute[253661]: 2025-11-22 09:30:57.559 253665 DEBUG oslo_concurrency.processutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2/disk.config 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:30:57 np0005532048 nova_compute[253661]: 2025-11-22 09:30:57.561 253665 INFO nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Deleting local config drive /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2/disk.config because it was imported into RBD.#033[00m
Nov 22 04:30:57 np0005532048 kernel: tapa13cc417-ed: entered promiscuous mode
Nov 22 04:30:57 np0005532048 NetworkManager[48920]: <info>  [1763803857.6319] manager: (tapa13cc417-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Nov 22 04:30:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:30:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 9208 writes, 41K keys, 9208 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 9208 writes, 9208 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1611 writes, 7208 keys, 1611 commit groups, 1.0 writes per commit group, ingest: 9.66 MB, 0.02 MB/s#012Interval WAL: 1611 writes, 1611 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     43.3      1.07              0.17        25    0.043       0      0       0.0       0.0#012  L6      1/0    9.49 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.0     88.1     73.4      2.53              0.56        24    0.105    129K    13K       0.0       0.0#012 Sum      1/0    9.49 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.0     61.8     64.4      3.60              0.72        49    0.073    129K    13K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.4     58.2     58.5      0.88              0.15        10    0.088     33K   2571       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     88.1     73.4      2.53              0.56        24    0.105    129K    13K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     43.4      1.07              0.17        24    0.045       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.045, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.23 GB write, 0.06 MB/s write, 0.22 GB read, 0.06 MB/s read, 3.6 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 26.50 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000228 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1739,25.51 MB,8.39252%) FilterBlock(50,368.36 KB,0.118331%) IndexBlock(50,641.23 KB,0.205989%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 22 04:30:57 np0005532048 nova_compute[253661]: 2025-11-22 09:30:57.679 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:57 np0005532048 nova_compute[253661]: 2025-11-22 09:30:57.682 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:57 np0005532048 systemd-udevd[352385]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:30:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:57Z|00958|binding|INFO|Claiming lport a13cc417-edce-4c30-a5b0-f90095810bcc for this chassis.
Nov 22 04:30:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:57Z|00959|binding|INFO|a13cc417-edce-4c30-a5b0-f90095810bcc: Claiming fa:16:3e:55:4e:e1 10.100.0.10
Nov 22 04:30:57 np0005532048 NetworkManager[48920]: <info>  [1763803857.7012] device (tapa13cc417-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:30:57 np0005532048 NetworkManager[48920]: <info>  [1763803857.7032] device (tapa13cc417-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.705 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:4e:e1 10.100.0.10'], port_security=['fa:16:3e:55:4e:e1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bcad6c6-374a-4697-ae00-916836e6498e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0dea6476-4fee-41aa-8572-212b34cd06a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79997ef2-17e2-4f21-8229-7c6dd79ef3c8, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a13cc417-edce-4c30-a5b0-f90095810bcc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.707 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a13cc417-edce-4c30-a5b0-f90095810bcc in datapath 7bcad6c6-374a-4697-ae00-916836e6498e bound to our chassis#033[00m
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.708 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7bcad6c6-374a-4697-ae00-916836e6498e#033[00m
Nov 22 04:30:57 np0005532048 systemd-machined[215941]: New machine qemu-116-instance-00000060.
Nov 22 04:30:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 3.6 MiB/s wr, 232 op/s
Nov 22 04:30:57 np0005532048 systemd[1]: Started Virtual Machine qemu-116-instance-00000060.
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.737 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3778f7f0-095a-440d-9c95-f0f7bf2d5760]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.739 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7bcad6c6-31 in ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.741 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7bcad6c6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.742 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2b18e826-b2cd-4bd8-9978-647670840261]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.743 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d1cba63b-b62b-4539-b7ff-befba8650f29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.761 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b7492497-8e14-484f-be89-6cd41895f5d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:57Z|00960|binding|INFO|Setting lport a13cc417-edce-4c30-a5b0-f90095810bcc ovn-installed in OVS
Nov 22 04:30:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:57Z|00961|binding|INFO|Setting lport a13cc417-edce-4c30-a5b0-f90095810bcc up in Southbound
Nov 22 04:30:57 np0005532048 nova_compute[253661]: 2025-11-22 09:30:57.776 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.793 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bddcc82c-2669-4594-9792-25e1e82ca5d0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.824 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4ad525-d7a5-402a-8a27-c9bf9f7c1709]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.829 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4300a0-a9df-46e2-ae2a-2df469d9dc8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:57 np0005532048 NetworkManager[48920]: <info>  [1763803857.8308] manager: (tap7bcad6c6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/395)
Nov 22 04:30:57 np0005532048 systemd-udevd[352389]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.874 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d58490f1-baa6-4ca6-9243-707ebfcec7bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.878 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[43915999-921c-491c-8744-50c81441dd60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:57 np0005532048 NetworkManager[48920]: <info>  [1763803857.9038] device (tap7bcad6c6-30): carrier: link connected
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.910 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dab34b28-1687-41e4-b92a-054a61b53953]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.937 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b9ada9dc-827d-4da9-9884-7a9c25a44210]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bcad6c6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:b9:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670864, 'reachable_time': 42742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352428, 'error': None, 'target': 'ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.963 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bf92b627-d239-4874-bd13-22bc6782427f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:b9c6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 670864, 'tstamp': 670864}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352441, 'error': None, 'target': 'ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:57.988 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1370b931-0731-44b4-9d2d-1bb3f797aaae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bcad6c6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:b9:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670864, 'reachable_time': 42742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352457, 'error': None, 'target': 'ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.030 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[472bb6f7-dc51-4b97-ac7e-fd36ada6d4ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.047 253665 DEBUG nova.compute.manager [req-8a0e16cc-1c50-4f6a-9b56-745e5584103a req-8c9788d3-b808-4e37-8334-4d8654347e32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.048 253665 DEBUG oslo_concurrency.lockutils [req-8a0e16cc-1c50-4f6a-9b56-745e5584103a req-8c9788d3-b808-4e37-8334-4d8654347e32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.048 253665 DEBUG oslo_concurrency.lockutils [req-8a0e16cc-1c50-4f6a-9b56-745e5584103a req-8c9788d3-b808-4e37-8334-4d8654347e32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.048 253665 DEBUG oslo_concurrency.lockutils [req-8a0e16cc-1c50-4f6a-9b56-745e5584103a req-8c9788d3-b808-4e37-8334-4d8654347e32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.049 253665 DEBUG nova.compute.manager [req-8a0e16cc-1c50-4f6a-9b56-745e5584103a req-8c9788d3-b808-4e37-8334-4d8654347e32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Processing event network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.107 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803858.1068766, 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.107 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] VM Started (Lifecycle Event)#033[00m
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.105 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[17514f47-c61e-4bbd-8e5b-13e99d2598b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.109 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bcad6c6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.109 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.110 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bcad6c6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:58 np0005532048 NetworkManager[48920]: <info>  [1763803858.1129] manager: (tap7bcad6c6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Nov 22 04:30:58 np0005532048 kernel: tap7bcad6c6-30: entered promiscuous mode
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.114 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.116 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7bcad6c6-30, col_values=(('external_ids', {'iface-id': 'fbfcd9c4-6397-42df-bb55-2a7d60f24e6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:30:58 np0005532048 ovn_controller[152872]: 2025-11-22T09:30:58Z|00962|binding|INFO|Releasing lport fbfcd9c4-6397-42df-bb55-2a7d60f24e6f from this chassis (sb_readonly=0)
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.119 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.119 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7bcad6c6-374a-4697-ae00-916836e6498e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7bcad6c6-374a-4697-ae00-916836e6498e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.120 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c4fc311b-7795-4be7-a54b-434183e80c97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.121 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-7bcad6c6-374a-4697-ae00-916836e6498e
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/7bcad6c6-374a-4697-ae00-916836e6498e.pid.haproxy
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 7bcad6c6-374a-4697-ae00-916836e6498e
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:30:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:30:58.122 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e', 'env', 'PROCESS_TAG=haproxy-7bcad6c6-374a-4697-ae00-916836e6498e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7bcad6c6-374a-4697-ae00-916836e6498e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.123 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.132 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.133 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.136 253665 INFO nova.virt.libvirt.driver [-] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Instance spawned successfully.#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.136 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.139 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.164 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.164 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.164 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.165 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.165 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.165 253665 DEBUG nova.virt.libvirt.driver [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.169 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.169 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803858.1184814, 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.170 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] VM Paused (Lifecycle Event)
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.198 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.202 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803858.1226156, 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.203 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] VM Resumed (Lifecycle Event)
Nov 22 04:30:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.231 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.235 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.245 253665 INFO nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Took 7.60 seconds to spawn the instance on the hypervisor.
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.245 253665 DEBUG nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.256 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]: {
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:        "osd_id": 1,
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:        "type": "bluestore"
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:    },
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:        "osd_id": 0,
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:        "type": "bluestore"
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:    },
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:        "osd_id": 2,
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:        "type": "bluestore"
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]:    }
Nov 22 04:30:58 np0005532048 exciting_cannon[352330]: }
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.306 253665 INFO nova.compute.manager [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Took 8.82 seconds to build instance.
Nov 22 04:30:58 np0005532048 nova_compute[253661]: 2025-11-22 09:30:58.319 253665 DEBUG oslo_concurrency.lockutils [None req-8be1fe4f-c908-4b71-9680-1b1f70ff0fc4 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:30:58 np0005532048 systemd[1]: libpod-d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288.scope: Deactivated successfully.
Nov 22 04:30:58 np0005532048 systemd[1]: libpod-d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288.scope: Consumed 1.073s CPU time.
Nov 22 04:30:58 np0005532048 podman[352314]: 2025-11-22 09:30:58.329834578 +0000 UTC m=+1.343267442 container died d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cannon, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:30:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c01d15cfaa5dec98906b71eb42dbeab5f22cf45371d2852898b846d988cdb5ae-merged.mount: Deactivated successfully.
Nov 22 04:30:58 np0005532048 podman[352314]: 2025-11-22 09:30:58.402959498 +0000 UTC m=+1.416392352 container remove d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:30:58 np0005532048 systemd[1]: libpod-conmon-d101ff49699710272e7813f4d2cd4266819b3104454006e03d90403be4d7b288.scope: Deactivated successfully.
Nov 22 04:30:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:30:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:30:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:30:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:30:58 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a9483270-dc8b-4b13-8780-3406264d4fea does not exist
Nov 22 04:30:58 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev b43c6996-37f7-4954-a814-c2440bd2432a does not exist
Nov 22 04:30:58 np0005532048 podman[352536]: 2025-11-22 09:30:58.526620952 +0000 UTC m=+0.055478807 container create eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 22 04:30:58 np0005532048 systemd[1]: Started libpod-conmon-eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc.scope.
Nov 22 04:30:58 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:30:58 np0005532048 podman[352536]: 2025-11-22 09:30:58.497826212 +0000 UTC m=+0.026684087 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:30:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b23d174e80fe906250cd93b44fdfabff1fe6ae03ff6df4bfad6521ddf56c356/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:30:58 np0005532048 podman[352536]: 2025-11-22 09:30:58.613639573 +0000 UTC m=+0.142497448 container init eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:30:58 np0005532048 podman[352536]: 2025-11-22 09:30:58.618850882 +0000 UTC m=+0.147708727 container start eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 04:30:58 np0005532048 neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e[352595]: [NOTICE]   (352601) : New worker (352603) forked
Nov 22 04:30:58 np0005532048 neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e[352595]: [NOTICE]   (352601) : Loading success.
Nov 22 04:30:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Nov 22 04:30:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Nov 22 04:30:58 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Nov 22 04:30:59 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:30:59 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:30:59 np0005532048 nova_compute[253661]: 2025-11-22 09:30:59.718 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquiring lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:30:59 np0005532048 nova_compute[253661]: 2025-11-22 09:30:59.718 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:30:59 np0005532048 nova_compute[253661]: 2025-11-22 09:30:59.718 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquiring lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:30:59 np0005532048 nova_compute[253661]: 2025-11-22 09:30:59.718 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:30:59 np0005532048 nova_compute[253661]: 2025-11-22 09:30:59.719 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:30:59 np0005532048 nova_compute[253661]: 2025-11-22 09:30:59.719 253665 INFO nova.compute.manager [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Terminating instance
Nov 22 04:30:59 np0005532048 nova_compute[253661]: 2025-11-22 09:30:59.720 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquiring lock "refresh_cache-79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:30:59 np0005532048 nova_compute[253661]: 2025-11-22 09:30:59.720 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquired lock "refresh_cache-79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:30:59 np0005532048 nova_compute[253661]: 2025-11-22 09:30:59.720 253665 DEBUG nova.network.neutron [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:30:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.7 MiB/s wr, 231 op/s
Nov 22 04:31:00 np0005532048 nova_compute[253661]: 2025-11-22 09:31:00.411 253665 DEBUG nova.network.neutron [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:31:00 np0005532048 nova_compute[253661]: 2025-11-22 09:31:00.572 253665 DEBUG nova.compute.manager [req-7cc10f19-0fb7-4a28-90e8-c1b1e8a01158 req-e40f87af-e260-44ba-a82f-77525f94665c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:31:00 np0005532048 nova_compute[253661]: 2025-11-22 09:31:00.572 253665 DEBUG oslo_concurrency.lockutils [req-7cc10f19-0fb7-4a28-90e8-c1b1e8a01158 req-e40f87af-e260-44ba-a82f-77525f94665c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:31:00 np0005532048 nova_compute[253661]: 2025-11-22 09:31:00.573 253665 DEBUG oslo_concurrency.lockutils [req-7cc10f19-0fb7-4a28-90e8-c1b1e8a01158 req-e40f87af-e260-44ba-a82f-77525f94665c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:31:00 np0005532048 nova_compute[253661]: 2025-11-22 09:31:00.573 253665 DEBUG oslo_concurrency.lockutils [req-7cc10f19-0fb7-4a28-90e8-c1b1e8a01158 req-e40f87af-e260-44ba-a82f-77525f94665c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:31:00 np0005532048 nova_compute[253661]: 2025-11-22 09:31:00.573 253665 DEBUG nova.compute.manager [req-7cc10f19-0fb7-4a28-90e8-c1b1e8a01158 req-e40f87af-e260-44ba-a82f-77525f94665c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] No waiting events found dispatching network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:31:00 np0005532048 nova_compute[253661]: 2025-11-22 09:31:00.574 253665 WARNING nova.compute.manager [req-7cc10f19-0fb7-4a28-90e8-c1b1e8a01158 req-e40f87af-e260-44ba-a82f-77525f94665c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received unexpected event network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc for instance with vm_state active and task_state None.
Nov 22 04:31:00 np0005532048 nova_compute[253661]: 2025-11-22 09:31:00.666 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:00 np0005532048 nova_compute[253661]: 2025-11-22 09:31:00.787 253665 DEBUG nova.network.neutron [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:31:00 np0005532048 nova_compute[253661]: 2025-11-22 09:31:00.797 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Releasing lock "refresh_cache-79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:31:00 np0005532048 nova_compute[253661]: 2025-11-22 09:31:00.798 253665 DEBUG nova.compute.manager [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:31:00 np0005532048 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Nov 22 04:31:00 np0005532048 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005f.scope: Consumed 1.883s CPU time.
Nov 22 04:31:00 np0005532048 systemd-machined[215941]: Machine qemu-115-instance-0000005f terminated.
Nov 22 04:31:01 np0005532048 nova_compute[253661]: 2025-11-22 09:31:01.035 253665 INFO nova.virt.libvirt.driver [-] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Instance destroyed successfully.
Nov 22 04:31:01 np0005532048 nova_compute[253661]: 2025-11-22 09:31:01.036 253665 DEBUG nova.objects.instance [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lazy-loading 'resources' on Instance uuid 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:31:01 np0005532048 nova_compute[253661]: 2025-11-22 09:31:01.193 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Nov 22 04:31:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Nov 22 04:31:01 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Nov 22 04:31:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.3 KiB/s wr, 167 op/s
Nov 22 04:31:01 np0005532048 nova_compute[253661]: 2025-11-22 09:31:01.789 253665 INFO nova.virt.libvirt.driver [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Deleting instance files /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_del
Nov 22 04:31:01 np0005532048 nova_compute[253661]: 2025-11-22 09:31:01.790 253665 INFO nova.virt.libvirt.driver [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Deletion of /var/lib/nova/instances/79b0dd90-3f01-40c6-a7a7-fe79eeab97d0_del complete
Nov 22 04:31:01 np0005532048 nova_compute[253661]: 2025-11-22 09:31:01.854 253665 INFO nova.compute.manager [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Took 1.06 seconds to destroy the instance on the hypervisor.
Nov 22 04:31:01 np0005532048 nova_compute[253661]: 2025-11-22 09:31:01.854 253665 DEBUG oslo.service.loopingcall [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:31:01 np0005532048 nova_compute[253661]: 2025-11-22 09:31:01.855 253665 DEBUG nova.compute.manager [-] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:31:01 np0005532048 nova_compute[253661]: 2025-11-22 09:31:01.855 253665 DEBUG nova.network.neutron [-] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:31:02 np0005532048 nova_compute[253661]: 2025-11-22 09:31:02.102 253665 DEBUG nova.network.neutron [-] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:31:02 np0005532048 nova_compute[253661]: 2025-11-22 09:31:02.113 253665 DEBUG nova.network.neutron [-] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:31:02 np0005532048 nova_compute[253661]: 2025-11-22 09:31:02.127 253665 INFO nova.compute.manager [-] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Took 0.27 seconds to deallocate network for instance.
Nov 22 04:31:02 np0005532048 nova_compute[253661]: 2025-11-22 09:31:02.183 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:31:02 np0005532048 nova_compute[253661]: 2025-11-22 09:31:02.183 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:31:02 np0005532048 nova_compute[253661]: 2025-11-22 09:31:02.280 253665 DEBUG oslo_concurrency.processutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:31:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:02Z|00963|binding|INFO|Releasing lport fbfcd9c4-6397-42df-bb55-2a7d60f24e6f from this chassis (sb_readonly=0)
Nov 22 04:31:02 np0005532048 nova_compute[253661]: 2025-11-22 09:31:02.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:02 np0005532048 NetworkManager[48920]: <info>  [1763803862.3443] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Nov 22 04:31:02 np0005532048 NetworkManager[48920]: <info>  [1763803862.3468] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Nov 22 04:31:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:02Z|00964|binding|INFO|Releasing lport fbfcd9c4-6397-42df-bb55-2a7d60f24e6f from this chassis (sb_readonly=0)
Nov 22 04:31:02 np0005532048 nova_compute[253661]: 2025-11-22 09:31:02.379 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:02 np0005532048 nova_compute[253661]: 2025-11-22 09:31:02.385 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:31:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2457195015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:31:02 np0005532048 nova_compute[253661]: 2025-11-22 09:31:02.722 253665 DEBUG oslo_concurrency.processutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:31:02 np0005532048 nova_compute[253661]: 2025-11-22 09:31:02.730 253665 DEBUG nova.compute.provider_tree [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:31:02 np0005532048 nova_compute[253661]: 2025-11-22 09:31:02.745 253665 DEBUG nova.scheduler.client.report [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:31:02 np0005532048 nova_compute[253661]: 2025-11-22 09:31:02.765 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:31:02 np0005532048 nova_compute[253661]: 2025-11-22 09:31:02.790 253665 INFO nova.scheduler.client.report [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Deleted allocations for instance 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003486042602721886 of space, bias 1.0, pg target 0.10458127808165658 quantized to 32 (current 32)
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665577993748779 of space, bias 1.0, pg target 0.1999673398124634 quantized to 32 (current 32)
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:31:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:31:02 np0005532048 nova_compute[253661]: 2025-11-22 09:31:02.858 253665 DEBUG oslo_concurrency.lockutils [None req-29f5425b-77c9-41f1-a5fb-ebdd4cc984f5 18279c1b172740a1a20233efdadc6120 7f68d63cdf1a45888e4dd7198f06fae4 - - default default] Lock "79b0dd90-3f01-40c6-a7a7-fe79eeab97d0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:03 np0005532048 nova_compute[253661]: 2025-11-22 09:31:03.129 253665 DEBUG nova.compute.manager [req-eb5191e0-c5f3-4834-997f-90aeb4d74581 req-ea59ec0e-4881-44b1-b267-88fe6a2f7d91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-changed-a13cc417-edce-4c30-a5b0-f90095810bcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:31:03 np0005532048 nova_compute[253661]: 2025-11-22 09:31:03.130 253665 DEBUG nova.compute.manager [req-eb5191e0-c5f3-4834-997f-90aeb4d74581 req-ea59ec0e-4881-44b1-b267-88fe6a2f7d91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Refreshing instance network info cache due to event network-changed-a13cc417-edce-4c30-a5b0-f90095810bcc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:31:03 np0005532048 nova_compute[253661]: 2025-11-22 09:31:03.131 253665 DEBUG oslo_concurrency.lockutils [req-eb5191e0-c5f3-4834-997f-90aeb4d74581 req-ea59ec0e-4881-44b1-b267-88fe6a2f7d91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:31:03 np0005532048 nova_compute[253661]: 2025-11-22 09:31:03.131 253665 DEBUG oslo_concurrency.lockutils [req-eb5191e0-c5f3-4834-997f-90aeb4d74581 req-ea59ec0e-4881-44b1-b267-88fe6a2f7d91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:31:03 np0005532048 nova_compute[253661]: 2025-11-22 09:31:03.132 253665 DEBUG nova.network.neutron [req-eb5191e0-c5f3-4834-997f-90aeb4d74581 req-ea59ec0e-4881-44b1-b267-88fe6a2f7d91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Refreshing network info cache for port a13cc417-edce-4c30-a5b0-f90095810bcc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:31:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:31:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Nov 22 04:31:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Nov 22 04:31:03 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Nov 22 04:31:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 33 KiB/s wr, 300 op/s
Nov 22 04:31:04 np0005532048 nova_compute[253661]: 2025-11-22 09:31:04.339 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803849.3387444, 1f746354-73cc-421a-9cde-f5b8c2b597fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:31:04 np0005532048 nova_compute[253661]: 2025-11-22 09:31:04.340 253665 INFO nova.compute.manager [-] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:31:04 np0005532048 nova_compute[253661]: 2025-11-22 09:31:04.360 253665 DEBUG nova.compute.manager [None req-9ae03645-ac1d-40cc-bcd5-83bb35193e82 - - - - - -] [instance: 1f746354-73cc-421a-9cde-f5b8c2b597fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:31:04 np0005532048 nova_compute[253661]: 2025-11-22 09:31:04.908 253665 DEBUG nova.network.neutron [req-eb5191e0-c5f3-4834-997f-90aeb4d74581 req-ea59ec0e-4881-44b1-b267-88fe6a2f7d91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Updated VIF entry in instance network info cache for port a13cc417-edce-4c30-a5b0-f90095810bcc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:31:04 np0005532048 nova_compute[253661]: 2025-11-22 09:31:04.908 253665 DEBUG nova.network.neutron [req-eb5191e0-c5f3-4834-997f-90aeb4d74581 req-ea59ec0e-4881-44b1-b267-88fe6a2f7d91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Updating instance_info_cache with network_info: [{"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:31:04 np0005532048 nova_compute[253661]: 2025-11-22 09:31:04.924 253665 DEBUG oslo_concurrency.lockutils [req-eb5191e0-c5f3-4834-997f-90aeb4d74581 req-ea59ec0e-4881-44b1-b267-88fe6a2f7d91 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:31:05 np0005532048 nova_compute[253661]: 2025-11-22 09:31:05.668 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 247 op/s
Nov 22 04:31:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:05.744 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:31:05 np0005532048 nova_compute[253661]: 2025-11-22 09:31:05.745 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:05.746 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:31:06 np0005532048 nova_compute[253661]: 2025-11-22 09:31:06.195 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:06 np0005532048 nova_compute[253661]: 2025-11-22 09:31:06.319 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 152 op/s
Nov 22 04:31:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:31:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Nov 22 04:31:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Nov 22 04:31:08 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Nov 22 04:31:08 np0005532048 nova_compute[253661]: 2025-11-22 09:31:08.984 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 152 op/s
Nov 22 04:31:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:09.749 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:31:10 np0005532048 nova_compute[253661]: 2025-11-22 09:31:10.671 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:11 np0005532048 nova_compute[253661]: 2025-11-22 09:31:11.198 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 88 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 267 KiB/s rd, 120 B/s wr, 12 op/s
Nov 22 04:31:12 np0005532048 nova_compute[253661]: 2025-11-22 09:31:12.241 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:12 np0005532048 nova_compute[253661]: 2025-11-22 09:31:12.242 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:12 np0005532048 nova_compute[253661]: 2025-11-22 09:31:12.261 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:31:12 np0005532048 nova_compute[253661]: 2025-11-22 09:31:12.342 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:12 np0005532048 nova_compute[253661]: 2025-11-22 09:31:12.342 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:12 np0005532048 nova_compute[253661]: 2025-11-22 09:31:12.353 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:31:12 np0005532048 nova_compute[253661]: 2025-11-22 09:31:12.354 253665 INFO nova.compute.claims [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:31:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:31:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1306854197' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:31:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:31:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1306854197' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:31:12 np0005532048 podman[352658]: 2025-11-22 09:31:12.412285488 +0000 UTC m=+0.090753295 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:31:12 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:12Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:55:4e:e1 10.100.0.10
Nov 22 04:31:12 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:12Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:55:4e:e1 10.100.0.10
Nov 22 04:31:12 np0005532048 podman[352659]: 2025-11-22 09:31:12.424190551 +0000 UTC m=+0.088342845 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:31:12 np0005532048 nova_compute[253661]: 2025-11-22 09:31:12.488 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:31:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:31:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/726041466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.021 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.030 253665 DEBUG nova.compute.provider_tree [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.055 253665 DEBUG nova.scheduler.client.report [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.080 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.081 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.126 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.127 253665 DEBUG nova.network.neutron [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.146 253665 INFO nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.160 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.230 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.232 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.233 253665 INFO nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Creating image(s)#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.270 253665 DEBUG nova.storage.rbd_utils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] rbd image 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.306 253665 DEBUG nova.storage.rbd_utils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] rbd image 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.334 253665 DEBUG nova.storage.rbd_utils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] rbd image 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.340 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.375 253665 DEBUG nova.policy [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ab3f37df3674a13a02926c1e3d79bbf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a426656c0559412895fd288e6aaaf579', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.411 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.412 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.412 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.413 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.440 253665 DEBUG nova.storage.rbd_utils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] rbd image 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.444 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:31:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:31:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 104 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 1.0 MiB/s wr, 43 op/s
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.867 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.940 253665 DEBUG nova.storage.rbd_utils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] resizing rbd image 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:31:13 np0005532048 nova_compute[253661]: 2025-11-22 09:31:13.972 253665 DEBUG nova.network.neutron [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Successfully created port: 4509527d-ccca-4d6f-96b5-cce2f7e28b54 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:31:14 np0005532048 nova_compute[253661]: 2025-11-22 09:31:14.043 253665 DEBUG nova.objects.instance [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lazy-loading 'migration_context' on Instance uuid 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:31:14 np0005532048 nova_compute[253661]: 2025-11-22 09:31:14.057 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:31:14 np0005532048 nova_compute[253661]: 2025-11-22 09:31:14.058 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Ensure instance console log exists: /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:31:14 np0005532048 nova_compute[253661]: 2025-11-22 09:31:14.058 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:14 np0005532048 nova_compute[253661]: 2025-11-22 09:31:14.058 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:14 np0005532048 nova_compute[253661]: 2025-11-22 09:31:14.059 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:15 np0005532048 nova_compute[253661]: 2025-11-22 09:31:15.507 253665 DEBUG nova.network.neutron [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Successfully updated port: 4509527d-ccca-4d6f-96b5-cce2f7e28b54 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:31:15 np0005532048 nova_compute[253661]: 2025-11-22 09:31:15.524 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "refresh_cache-0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:31:15 np0005532048 nova_compute[253661]: 2025-11-22 09:31:15.524 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquired lock "refresh_cache-0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:31:15 np0005532048 nova_compute[253661]: 2025-11-22 09:31:15.525 253665 DEBUG nova.network.neutron [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:31:15 np0005532048 nova_compute[253661]: 2025-11-22 09:31:15.584 253665 DEBUG nova.compute.manager [req-caf7487c-8956-4561-ae4e-3141f49a3e4a req-f7e4c4db-135c-4693-8a63-e378cf8bb578 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received event network-changed-4509527d-ccca-4d6f-96b5-cce2f7e28b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:31:15 np0005532048 nova_compute[253661]: 2025-11-22 09:31:15.584 253665 DEBUG nova.compute.manager [req-caf7487c-8956-4561-ae4e-3141f49a3e4a req-f7e4c4db-135c-4693-8a63-e378cf8bb578 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Refreshing instance network info cache due to event network-changed-4509527d-ccca-4d6f-96b5-cce2f7e28b54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:31:15 np0005532048 nova_compute[253661]: 2025-11-22 09:31:15.585 253665 DEBUG oslo_concurrency.lockutils [req-caf7487c-8956-4561-ae4e-3141f49a3e4a req-f7e4c4db-135c-4693-8a63-e378cf8bb578 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:31:15 np0005532048 nova_compute[253661]: 2025-11-22 09:31:15.657 253665 DEBUG nova.network.neutron [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:31:15 np0005532048 nova_compute[253661]: 2025-11-22 09:31:15.674 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 142 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 391 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.033 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803861.030466, 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.034 253665 INFO nova.compute.manager [-] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.057 253665 DEBUG nova.compute.manager [None req-8c29c1fd-74be-4a79-8c8f-6dc6ccbaa785 - - - - - -] [instance: 79b0dd90-3f01-40c6-a7a7-fe79eeab97d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.201 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.620 253665 DEBUG nova.network.neutron [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Updating instance_info_cache with network_info: [{"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.762 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Releasing lock "refresh_cache-0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.762 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Instance network_info: |[{"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.764 253665 DEBUG oslo_concurrency.lockutils [req-caf7487c-8956-4561-ae4e-3141f49a3e4a req-f7e4c4db-135c-4693-8a63-e378cf8bb578 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.765 253665 DEBUG nova.network.neutron [req-caf7487c-8956-4561-ae4e-3141f49a3e4a req-f7e4c4db-135c-4693-8a63-e378cf8bb578 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Refreshing network info cache for port 4509527d-ccca-4d6f-96b5-cce2f7e28b54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.769 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Start _get_guest_xml network_info=[{"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.776 253665 WARNING nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.785 253665 DEBUG nova.virt.libvirt.host [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.787 253665 DEBUG nova.virt.libvirt.host [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.793 253665 DEBUG nova.virt.libvirt.host [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.793 253665 DEBUG nova.virt.libvirt.host [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.794 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.794 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.795 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.796 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.796 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.796 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.796 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.797 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.797 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.797 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.797 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.797 253665 DEBUG nova.virt.hardware [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:31:16 np0005532048 nova_compute[253661]: 2025-11-22 09:31:16.801 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:31:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:31:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1495932053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.299 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.326 253665 DEBUG nova.storage.rbd_utils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] rbd image 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.332 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:31:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 150 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 4.1 MiB/s wr, 80 op/s
Nov 22 04:31:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:31:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1913590761' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.804 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.808 253665 DEBUG nova.virt.libvirt.vif [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:31:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-105000375',display_name='tempest-ServerPasswordTestJSON-server-105000375',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-105000375',id=97,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a426656c0559412895fd288e6aaaf579',ramdisk_id='',reservation_id='r-rvae7u61',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1791552166',owner_user_name='tempest-ServerPasswordTest
JSON-1791552166-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:31:13Z,user_data=None,user_id='2ab3f37df3674a13a02926c1e3d79bbf',uuid=0a3de2bf-6305-4b6e-a9c1-1932598e5bb9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.809 253665 DEBUG nova.network.os_vif_util [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Converting VIF {"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.811 253665 DEBUG nova.network.os_vif_util [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:86:b8,bridge_name='br-int',has_traffic_filtering=True,id=4509527d-ccca-4d6f-96b5-cce2f7e28b54,network=Network(6fefd6ab-3d4a-489e-983f-f6640c22be71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4509527d-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.814 253665 DEBUG nova.objects.instance [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.832 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  <uuid>0a3de2bf-6305-4b6e-a9c1-1932598e5bb9</uuid>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  <name>instance-00000061</name>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerPasswordTestJSON-server-105000375</nova:name>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:31:16</nova:creationTime>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:31:17 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:        <nova:user uuid="2ab3f37df3674a13a02926c1e3d79bbf">tempest-ServerPasswordTestJSON-1791552166-project-member</nova:user>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:        <nova:project uuid="a426656c0559412895fd288e6aaaf579">tempest-ServerPasswordTestJSON-1791552166</nova:project>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:        <nova:port uuid="4509527d-ccca-4d6f-96b5-cce2f7e28b54">
Nov 22 04:31:17 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <entry name="serial">0a3de2bf-6305-4b6e-a9c1-1932598e5bb9</entry>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <entry name="uuid">0a3de2bf-6305-4b6e-a9c1-1932598e5bb9</entry>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk">
Nov 22 04:31:17 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:31:17 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk.config">
Nov 22 04:31:17 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:31:17 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:a0:86:b8"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <target dev="tap4509527d-cc"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9/console.log" append="off"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:31:17 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:31:17 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:31:17 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:31:17 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
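The domain XML dumped by _get_guest_xml above can be inspected programmatically. A minimal sketch using Python's stdlib ElementTree against a trimmed copy of that XML; the `nova:` namespace URI is taken from the log, the `summarize` helper is my own:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the domain XML logged by _get_guest_xml above.
DOMAIN_XML = """<domain type="kvm">
  <uuid>0a3de2bf-6305-4b6e-a9c1-1932598e5bb9</uuid>
  <name>instance-00000061</name>
  <memory>131072</memory>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
      <nova:name>tempest-ServerPasswordTestJSON-server-105000375</nova:name>
      <nova:flavor name="m1.nano">
        <nova:memory>128</nova:memory>
      </nova:flavor>
    </nova:instance>
  </metadata>
</domain>"""

NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

def summarize(xml_text):
    """Pull a few identifying fields out of a libvirt domain XML dump."""
    root = ET.fromstring(xml_text)
    return {
        "uuid": root.findtext("uuid"),
        "libvirt_name": root.findtext("name"),
        "display_name": root.findtext("metadata/nova:instance/nova:name",
                                      namespaces=NS),
        "flavor": root.find("metadata/nova:instance/nova:flavor", NS).get("name"),
        "memory_kib": int(root.findtext("memory")),  # libvirt stores KiB
    }
```

This is handy when correlating a libvirt domain dump with the Nova instance record: the `<uuid>` matches the Nova instance UUID while `<name>` is the libvirt-side `instance-00000061` identifier.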
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.833 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Preparing to wait for external event network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.833 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.833 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.834 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.834 253665 DEBUG nova.virt.libvirt.vif [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:31:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-105000375',display_name='tempest-ServerPasswordTestJSON-server-105000375',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-105000375',id=97,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a426656c0559412895fd288e6aaaf579',ramdisk_id='',reservation_id='r-rvae7u61',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1791552166',owner_user_name='tempest-ServerPasswordTestJSON-1791552166-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:31:13Z,user_data=None,user_id='2ab3f37df3674a13a02926c1e3d79bbf',uuid=0a3de2bf-6305-4b6e-a9c1-1932598e5bb9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.835 253665 DEBUG nova.network.os_vif_util [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Converting VIF {"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.835 253665 DEBUG nova.network.os_vif_util [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:86:b8,bridge_name='br-int',has_traffic_filtering=True,id=4509527d-ccca-4d6f-96b5-cce2f7e28b54,network=Network(6fefd6ab-3d4a-489e-983f-f6640c22be71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4509527d-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.836 253665 DEBUG os_vif [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:86:b8,bridge_name='br-int',has_traffic_filtering=True,id=4509527d-ccca-4d6f-96b5-cce2f7e28b54,network=Network(6fefd6ab-3d4a-489e-983f-f6640c22be71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4509527d-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.837 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.838 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.838 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.843 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4509527d-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.843 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4509527d-cc, col_values=(('external_ids', {'iface-id': '4509527d-ccca-4d6f-96b5-cce2f7e28b54', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:86:b8', 'vm-uuid': '0a3de2bf-6305-4b6e-a9c1-1932598e5bb9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
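The AddPortCommand/DbSetCommand pair above corresponds roughly to a single `ovs-vsctl` invocation. A sketch, with hypothetical helper names, of the `external_ids` mapping written on the Interface row and the equivalent command line:

```python
def ovs_external_ids(iface_id, mac, vm_uuid):
    """external_ids written on the Interface row (per the DbSetCommand
    in the transaction above); helper name is hypothetical."""
    return {
        "iface-id": iface_id,        # Neutron port UUID, matched by OVN
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": vm_uuid,
    }

def ovs_vsctl_argv(bridge, port, external_ids):
    """Rough ovs-vsctl equivalent of AddPortCommand(may_exist=True)
    followed by DbSetCommand on the Interface record."""
    argv = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
            "--", "set", "Interface", port]
    argv += ["external_ids:%s=%s" % (k, v) for k, v in external_ids.items()]
    return argv
```

The `iface-id` value is what ovn-controller matches against the OVN logical switch port to bind `tap4509527d-cc` on this chassis.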
Nov 22 04:31:17 np0005532048 NetworkManager[48920]: <info>  [1763803877.8760] manager: (tap4509527d-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.874 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.878 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.887 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.889 253665 INFO os_vif [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:86:b8,bridge_name='br-int',has_traffic_filtering=True,id=4509527d-ccca-4d6f-96b5-cce2f7e28b54,network=Network(6fefd6ab-3d4a-489e-983f-f6640c22be71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4509527d-cc')#033[00m
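The `tap4509527d-cc` device name plugged above is the Neutron port UUID truncated to fit the kernel's interface-name limit (15 characters including the trailing NUL). A minimal sketch of that naming convention; the constant name is mine, not Nova's:

```python
# "tap" prefix plus the first 11 characters of the port UUID fits the
# kernel's 15-byte IFNAMSIZ limit (14 visible chars + NUL).
NIC_NAME_LEN = 14

def tap_devname(port_id):
    """Derive the tap device name used for a Neutron port UUID."""
    return ("tap" + port_id)[:NIC_NAME_LEN]
```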
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.944 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.945 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.945 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] No VIF found with MAC fa:16:3e:a0:86:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.946 253665 INFO nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Using config drive#033[00m
Nov 22 04:31:17 np0005532048 nova_compute[253661]: 2025-11-22 09:31:17.970 253665 DEBUG nova.storage.rbd_utils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] rbd image 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:31:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:31:18 np0005532048 nova_compute[253661]: 2025-11-22 09:31:18.864 253665 INFO nova.compute.manager [None req-527e9c15-ed44-49eb-8eed-403288cc1b74 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Get console output#033[00m
Nov 22 04:31:18 np0005532048 nova_compute[253661]: 2025-11-22 09:31:18.872 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.038 253665 INFO nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Creating config drive at /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9/disk.config#033[00m
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.046 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjr3d7dx0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.140 253665 INFO nova.compute.manager [None req-aadb395c-90f1-4739-ba60-c5231636710b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Pausing#033[00m
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.142 253665 DEBUG nova.objects.instance [None req-aadb395c-90f1-4739-ba60-c5231636710b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'flavor' on Instance uuid 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.178 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803879.1781137, 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.179 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.184 253665 DEBUG nova.compute.manager [None req-aadb395c-90f1-4739-ba60-c5231636710b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.206 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjr3d7dx0" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
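The mkisofs invocation logged above can be reconstructed as an argv list; the flag set is copied verbatim from the log line, the helper name is hypothetical:

```python
def mkisofs_argv(output_path, staging_dir, publisher):
    """Build the mkisofs command line used for the config drive, as
    logged above. The -V config-2 volume label is what cloud-init
    searches for when locating the config drive."""
    return [
        "/usr/bin/mkisofs",
        "-o", output_path,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", publisher,
        "-quiet", "-J", "-r",
        "-V", "config-2",
        staging_dir,
    ]
```

The resulting ISO is written locally under /var/lib/nova/instances, then (as the subsequent lines show) imported into the Ceph `vms` pool with `rbd import` and the local copy deleted.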
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.241 253665 DEBUG nova.storage.rbd_utils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] rbd image 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.247 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9/disk.config 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.312 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.322 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.451 253665 DEBUG oslo_concurrency.processutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9/disk.config 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.453 253665 INFO nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Deleting local config drive /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9/disk.config because it was imported into RBD.#033[00m
Nov 22 04:31:19 np0005532048 kernel: tap4509527d-cc: entered promiscuous mode
Nov 22 04:31:19 np0005532048 NetworkManager[48920]: <info>  [1763803879.5203] manager: (tap4509527d-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/400)
Nov 22 04:31:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:19Z|00965|binding|INFO|Claiming lport 4509527d-ccca-4d6f-96b5-cce2f7e28b54 for this chassis.
Nov 22 04:31:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:19Z|00966|binding|INFO|4509527d-ccca-4d6f-96b5-cce2f7e28b54: Claiming fa:16:3e:a0:86:b8 10.100.0.7
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.534 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:86:b8 10.100.0.7'], port_security=['fa:16:3e:a0:86:b8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '0a3de2bf-6305-4b6e-a9c1-1932598e5bb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fefd6ab-3d4a-489e-983f-f6640c22be71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a426656c0559412895fd288e6aaaf579', 'neutron:revision_number': '2', 'neutron:security_group_ids': '99ba02ce-3868-4329-a8bd-a035b539a697', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac22de64-af49-4fe8-9780-139ebea6ab18, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4509527d-ccca-4d6f-96b5-cce2f7e28b54) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.535 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4509527d-ccca-4d6f-96b5-cce2f7e28b54 in datapath 6fefd6ab-3d4a-489e-983f-f6640c22be71 bound to our chassis
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.537 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6fefd6ab-3d4a-489e-983f-f6640c22be71
Nov 22 04:31:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:19Z|00967|binding|INFO|Setting lport 4509527d-ccca-4d6f-96b5-cce2f7e28b54 ovn-installed in OVS
Nov 22 04:31:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:19Z|00968|binding|INFO|Setting lport 4509527d-ccca-4d6f-96b5-cce2f7e28b54 up in Southbound
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.540 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.542 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.556 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[72eed5f7-fdda-4ba1-971b-29f48cddff4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.557 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6fefd6ab-31 in ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.559 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6fefd6ab-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.559 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5737082d-8510-4f4d-9152-2a3803a833c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.561 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[14497148-052f-4841-a864-11e7a3b552f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:19 np0005532048 systemd-udevd[353033]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.575 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[efcb763b-8e62-4e83-ae84-e6d558e35cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:19 np0005532048 systemd-machined[215941]: New machine qemu-117-instance-00000061.
Nov 22 04:31:19 np0005532048 NetworkManager[48920]: <info>  [1763803879.5960] device (tap4509527d-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:31:19 np0005532048 NetworkManager[48920]: <info>  [1763803879.5970] device (tap4509527d-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.595 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af268434-03ef-4d7c-a8aa-48655b74e6af]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:19 np0005532048 systemd[1]: Started Virtual Machine qemu-117-instance-00000061.
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.639 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c772d9-eb5c-426d-864c-494f131045b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.649 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b56244e6-73d4-48d6-adfb-ef770866c439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:19 np0005532048 NetworkManager[48920]: <info>  [1763803879.6510] manager: (tap6fefd6ab-30): new Veth device (/org/freedesktop/NetworkManager/Devices/401)
Nov 22 04:31:19 np0005532048 podman[353021]: 2025-11-22 09:31:19.693717474 +0000 UTC m=+0.136130621 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.697 253665 DEBUG nova.network.neutron [req-caf7487c-8956-4561-ae4e-3141f49a3e4a req-f7e4c4db-135c-4693-8a63-e378cf8bb578 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Updated VIF entry in instance network info cache for port 4509527d-ccca-4d6f-96b5-cce2f7e28b54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.698 253665 DEBUG nova.network.neutron [req-caf7487c-8956-4561-ae4e-3141f49a3e4a req-f7e4c4db-135c-4693-8a63-e378cf8bb578 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Updating instance_info_cache with network_info: [{"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.708 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a56fc6-19cc-4c4d-818f-24ccbe5228c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.713 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9b88fe78-2540-4a8f-9581-e0ebf54aea70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.720 253665 DEBUG oslo_concurrency.lockutils [req-caf7487c-8956-4561-ae4e-3141f49a3e4a req-f7e4c4db-135c-4693-8a63-e378cf8bb578 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:31:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 167 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 4.3 MiB/s wr, 98 op/s
Nov 22 04:31:19 np0005532048 NetworkManager[48920]: <info>  [1763803879.7456] device (tap6fefd6ab-30): carrier: link connected
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.752 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[367f14d0-cf11-41fb-833f-a6db6197e757]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.776 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b4938be7-a6df-4a67-b758-b27ca93124f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fefd6ab-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:04:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673048, 'reachable_time': 27824, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353085, 'error': None, 'target': 'ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.798 253665 DEBUG nova.compute.manager [req-51e53d06-de4e-41ff-a0cd-789e55d1c751 req-6cc47432-90ea-4bd4-a4ed-25ccf105fb20 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received event network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.799 253665 DEBUG oslo_concurrency.lockutils [req-51e53d06-de4e-41ff-a0cd-789e55d1c751 req-6cc47432-90ea-4bd4-a4ed-25ccf105fb20 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.799 253665 DEBUG oslo_concurrency.lockutils [req-51e53d06-de4e-41ff-a0cd-789e55d1c751 req-6cc47432-90ea-4bd4-a4ed-25ccf105fb20 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.799 253665 DEBUG oslo_concurrency.lockutils [req-51e53d06-de4e-41ff-a0cd-789e55d1c751 req-6cc47432-90ea-4bd4-a4ed-25ccf105fb20 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.799 253665 DEBUG nova.compute.manager [req-51e53d06-de4e-41ff-a0cd-789e55d1c751 req-6cc47432-90ea-4bd4-a4ed-25ccf105fb20 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Processing event network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.800 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f40c8423-089b-4671-a532-351e92d67e42]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:43f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673048, 'tstamp': 673048}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353086, 'error': None, 'target': 'ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.821 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fd3c26ed-e3fd-4392-8933-ad95baa2ef8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6fefd6ab-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:04:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673048, 'reachable_time': 27824, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353087, 'error': None, 'target': 'ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.868 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[91ce825a-674e-4ba9-902e-82037fb95298]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.940 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc91f9b1-7303-4307-8a36-643f5d56301c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.943 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fefd6ab-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.943 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.944 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fefd6ab-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.946 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:19 np0005532048 NetworkManager[48920]: <info>  [1763803879.9475] manager: (tap6fefd6ab-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Nov 22 04:31:19 np0005532048 kernel: tap6fefd6ab-30: entered promiscuous mode
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.951 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.953 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6fefd6ab-30, col_values=(('external_ids', {'iface-id': 'e1825837-b7b8-471c-82f1-66c1fd37dbe3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.954 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:19Z|00969|binding|INFO|Releasing lport e1825837-b7b8-471c-82f1-66c1fd37dbe3 from this chassis (sb_readonly=0)
Nov 22 04:31:19 np0005532048 nova_compute[253661]: 2025-11-22 09:31:19.982 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.985 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6fefd6ab-3d4a-489e-983f-f6640c22be71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6fefd6ab-3d4a-489e-983f-f6640c22be71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.986 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[343d47cb-b466-4dee-aa69-5caefe12858b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.987 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-6fefd6ab-3d4a-489e-983f-f6640c22be71
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/6fefd6ab-3d4a-489e-983f-f6640c22be71.pid.haproxy
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 6fefd6ab-3d4a-489e-983f-f6640c22be71
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:31:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:19.988 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71', 'env', 'PROCESS_TAG=haproxy-6fefd6ab-3d4a-489e-983f-f6640c22be71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6fefd6ab-3d4a-489e-983f-f6640c22be71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.298 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.299 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803880.298701, 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.299 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] VM Started (Lifecycle Event)
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.303 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.307 253665 INFO nova.virt.libvirt.driver [-] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Instance spawned successfully.
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.307 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.321 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.327 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.332 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.332 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.333 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.333 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.333 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.334 253665 DEBUG nova.virt.libvirt.driver [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.355 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.355 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803880.2987957, 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.356 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] VM Paused (Lifecycle Event)
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.378 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.383 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803880.3027217, 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.384 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] VM Resumed (Lifecycle Event)
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.393 253665 INFO nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Took 7.16 seconds to spawn the instance on the hypervisor.
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.393 253665 DEBUG nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:31:20 np0005532048 podman[353161]: 2025-11-22 09:31:20.396687216 +0000 UTC m=+0.062113411 container create 3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.414 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.420 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:31:20 np0005532048 systemd[1]: Started libpod-conmon-3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4.scope.
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.452 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:31:20 np0005532048 podman[353161]: 2025-11-22 09:31:20.361228233 +0000 UTC m=+0.026654458 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.464 253665 INFO nova.compute.manager [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Took 8.16 seconds to build instance.
Nov 22 04:31:20 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.478 253665 DEBUG oslo_concurrency.lockutils [None req-a9968f94-c6b8-4f41-a3d4-a49957b92a37 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:31:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f7e4e3891923fc209f6674be94c109149ed464a00eaa181a40f266a7eb43fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:31:20 np0005532048 podman[353161]: 2025-11-22 09:31:20.50005074 +0000 UTC m=+0.165477025 container init 3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:31:20 np0005532048 podman[353161]: 2025-11-22 09:31:20.508131658 +0000 UTC m=+0.173557883 container start 3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:31:20 np0005532048 neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71[353176]: [NOTICE]   (353180) : New worker (353182) forked
Nov 22 04:31:20 np0005532048 neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71[353176]: [NOTICE]   (353180) : Loading success.
Nov 22 04:31:20 np0005532048 nova_compute[253661]: 2025-11-22 09:31:20.677 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 167 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.010 253665 DEBUG nova.compute.manager [req-6dee0763-3111-4466-b06b-9619ac5ff7a3 req-e2d3ebba-a5cd-4807-bcc4-4abe32244310 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received event network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.012 253665 DEBUG oslo_concurrency.lockutils [req-6dee0763-3111-4466-b06b-9619ac5ff7a3 req-e2d3ebba-a5cd-4807-bcc4-4abe32244310 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.012 253665 DEBUG oslo_concurrency.lockutils [req-6dee0763-3111-4466-b06b-9619ac5ff7a3 req-e2d3ebba-a5cd-4807-bcc4-4abe32244310 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.012 253665 DEBUG oslo_concurrency.lockutils [req-6dee0763-3111-4466-b06b-9619ac5ff7a3 req-e2d3ebba-a5cd-4807-bcc4-4abe32244310 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.012 253665 DEBUG nova.compute.manager [req-6dee0763-3111-4466-b06b-9619ac5ff7a3 req-e2d3ebba-a5cd-4807-bcc4-4abe32244310 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] No waiting events found dispatching network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.013 253665 WARNING nova.compute.manager [req-6dee0763-3111-4466-b06b-9619ac5ff7a3 req-e2d3ebba-a5cd-4807-bcc4-4abe32244310 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received unexpected event network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 for instance with vm_state active and task_state None.
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.494 253665 INFO nova.compute.manager [None req-9fd2039b-aeef-420d-95c3-64b8b8de0832 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Get console output
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.502 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.731 253665 INFO nova.compute.manager [None req-8e713fd0-50d4-44f0-bd0d-d8cb7429af56 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Unpausing
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.732 253665 DEBUG nova.objects.instance [None req-8e713fd0-50d4-44f0-bd0d-d8cb7429af56 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'flavor' on Instance uuid 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.733 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.733 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.734 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.734 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.734 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:31:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:31:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.736 253665 INFO nova.compute.manager [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Terminating instance
Nov 22 04:31:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:31:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:31:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:31:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.739 253665 DEBUG nova.compute.manager [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.764 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803882.7639353, 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.764 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] VM Resumed (Lifecycle Event)
Nov 22 04:31:22 np0005532048 virtqemud[254229]: argument unsupported: QEMU guest agent is not configured
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.770 253665 DEBUG nova.virt.libvirt.guest [None req-8e713fd0-50d4-44f0-bd0d-d8cb7429af56 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.770 253665 DEBUG nova.compute.manager [None req-8e713fd0-50d4-44f0-bd0d-d8cb7429af56 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.781 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.784 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:31:22 np0005532048 kernel: tap4509527d-cc (unregistering): left promiscuous mode
Nov 22 04:31:22 np0005532048 NetworkManager[48920]: <info>  [1763803882.7909] device (tap4509527d-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.805 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:22Z|00970|binding|INFO|Releasing lport 4509527d-ccca-4d6f-96b5-cce2f7e28b54 from this chassis (sb_readonly=0)
Nov 22 04:31:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:22Z|00971|binding|INFO|Setting lport 4509527d-ccca-4d6f-96b5-cce2f7e28b54 down in Southbound
Nov 22 04:31:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:22Z|00972|binding|INFO|Removing iface tap4509527d-cc ovn-installed in OVS
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.808 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] During sync_power_state the instance has a pending task (unpausing). Skip.
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.808 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:22.814 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:86:b8 10.100.0.7'], port_security=['fa:16:3e:a0:86:b8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '0a3de2bf-6305-4b6e-a9c1-1932598e5bb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6fefd6ab-3d4a-489e-983f-f6640c22be71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a426656c0559412895fd288e6aaaf579', 'neutron:revision_number': '4', 'neutron:security_group_ids': '99ba02ce-3868-4329-a8bd-a035b539a697', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac22de64-af49-4fe8-9780-139ebea6ab18, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4509527d-ccca-4d6f-96b5-cce2f7e28b54) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:31:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:22.815 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4509527d-ccca-4d6f-96b5-cce2f7e28b54 in datapath 6fefd6ab-3d4a-489e-983f-f6640c22be71 unbound from our chassis
Nov 22 04:31:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:22.817 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6fefd6ab-3d4a-489e-983f-f6640c22be71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:31:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:22.818 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8463fd98-43e9-49fd-8f16-e605f0474694]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:22.819 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71 namespace which is not needed anymore
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.821 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:22 np0005532048 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d00000061.scope: Deactivated successfully.
Nov 22 04:31:22 np0005532048 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d00000061.scope: Consumed 3.257s CPU time.
Nov 22 04:31:22 np0005532048 systemd-machined[215941]: Machine qemu-117-instance-00000061 terminated.
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.875 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:22 np0005532048 neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71[353176]: [NOTICE]   (353180) : haproxy version is 2.8.14-c23fe91
Nov 22 04:31:22 np0005532048 neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71[353176]: [NOTICE]   (353180) : path to executable is /usr/sbin/haproxy
Nov 22 04:31:22 np0005532048 neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71[353176]: [WARNING]  (353180) : Exiting Master process...
Nov 22 04:31:22 np0005532048 neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71[353176]: [ALERT]    (353180) : Current worker (353182) exited with code 143 (Terminated)
Nov 22 04:31:22 np0005532048 neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71[353176]: [WARNING]  (353180) : All workers exited. Exiting... (0)
Nov 22 04:31:22 np0005532048 systemd[1]: libpod-3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4.scope: Deactivated successfully.
Nov 22 04:31:22 np0005532048 podman[353214]: 2025-11-22 09:31:22.970657578 +0000 UTC m=+0.052203616 container died 3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.985 253665 INFO nova.virt.libvirt.driver [-] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Instance destroyed successfully.#033[00m
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.987 253665 DEBUG nova.objects.instance [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lazy-loading 'resources' on Instance uuid 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.997 253665 DEBUG nova.virt.libvirt.vif [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:31:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-105000375',display_name='tempest-ServerPasswordTestJSON-server-105000375',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-105000375',id=97,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:31:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a426656c0559412895fd288e6aaaf579',ramdisk_id='',reservation_id='r-rvae7u61',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-1791552166',owner_user_name='tempest-ServerPasswordTestJSON-1791552166-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:31:22Z,user_data=None,user_id='2ab3f37df3674a13a02926c1e3d79bbf',uuid=0a3de2bf-6305-4b6e-a9c1-1932598e5bb9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:31:22 np0005532048 nova_compute[253661]: 2025-11-22 09:31:22.998 253665 DEBUG nova.network.os_vif_util [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Converting VIF {"id": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "address": "fa:16:3e:a0:86:b8", "network": {"id": "6fefd6ab-3d4a-489e-983f-f6640c22be71", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-778283545-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a426656c0559412895fd288e6aaaf579", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4509527d-cc", "ovs_interfaceid": "4509527d-ccca-4d6f-96b5-cce2f7e28b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:31:23 np0005532048 nova_compute[253661]: 2025-11-22 09:31:23.000 253665 DEBUG nova.network.os_vif_util [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:86:b8,bridge_name='br-int',has_traffic_filtering=True,id=4509527d-ccca-4d6f-96b5-cce2f7e28b54,network=Network(6fefd6ab-3d4a-489e-983f-f6640c22be71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4509527d-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:31:23 np0005532048 nova_compute[253661]: 2025-11-22 09:31:23.001 253665 DEBUG os_vif [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:86:b8,bridge_name='br-int',has_traffic_filtering=True,id=4509527d-ccca-4d6f-96b5-cce2f7e28b54,network=Network(6fefd6ab-3d4a-489e-983f-f6640c22be71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4509527d-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:31:23 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4-userdata-shm.mount: Deactivated successfully.
Nov 22 04:31:23 np0005532048 systemd[1]: var-lib-containers-storage-overlay-12f7e4e3891923fc209f6674be94c109149ed464a00eaa181a40f266a7eb43fa-merged.mount: Deactivated successfully.
Nov 22 04:31:23 np0005532048 nova_compute[253661]: 2025-11-22 09:31:23.008 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:23 np0005532048 nova_compute[253661]: 2025-11-22 09:31:23.010 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4509527d-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:31:23 np0005532048 nova_compute[253661]: 2025-11-22 09:31:23.015 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:31:23 np0005532048 nova_compute[253661]: 2025-11-22 09:31:23.019 253665 INFO os_vif [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:86:b8,bridge_name='br-int',has_traffic_filtering=True,id=4509527d-ccca-4d6f-96b5-cce2f7e28b54,network=Network(6fefd6ab-3d4a-489e-983f-f6640c22be71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4509527d-cc')#033[00m
Nov 22 04:31:23 np0005532048 podman[353214]: 2025-11-22 09:31:23.028172593 +0000 UTC m=+0.109718631 container cleanup 3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 04:31:23 np0005532048 systemd[1]: libpod-conmon-3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4.scope: Deactivated successfully.
Nov 22 04:31:23 np0005532048 podman[353269]: 2025-11-22 09:31:23.100932744 +0000 UTC m=+0.051803756 container remove 3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 04:31:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.110 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf914db-e836-43f9-b2ae-d513bf09ca1e]: (4, ('Sat Nov 22 09:31:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71 (3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4)\n3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4\nSat Nov 22 09:31:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71 (3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4)\n3113d3cc874812d6daef06c896ded5234102ba71792a24122333e349ab76cdd4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:31:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.113 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7f4459dd-67fa-4447-bc9d-11442973e404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:31:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.114 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fefd6ab-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:31:23 np0005532048 kernel: tap6fefd6ab-30: left promiscuous mode
Nov 22 04:31:23 np0005532048 nova_compute[253661]: 2025-11-22 09:31:23.118 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:23 np0005532048 nova_compute[253661]: 2025-11-22 09:31:23.132 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.135 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fff1ddaa-6238-4c5b-9cf4-08f1d518cd7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:31:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.149 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa522aa-0f9e-475c-b513-8a79478da663]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:31:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.151 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a75bbd83-a294-4146-9e3d-684cbb444e10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:31:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.170 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8c05b2ad-a5e2-4cb2-9ed8-beecfbd8e282]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673037, 'reachable_time': 39814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353287, 'error': None, 'target': 'ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:31:23 np0005532048 systemd[1]: run-netns-ovnmeta\x2d6fefd6ab\x2d3d4a\x2d489e\x2d983f\x2df6640c22be71.mount: Deactivated successfully.
Nov 22 04:31:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.174 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6fefd6ab-3d4a-489e-983f-f6640c22be71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:31:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:23.175 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd20e7e-29f7-48c0-b23a-d420c670fd11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:31:23 np0005532048 nova_compute[253661]: 2025-11-22 09:31:23.481 253665 INFO nova.virt.libvirt.driver [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Deleting instance files /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_del#033[00m
Nov 22 04:31:23 np0005532048 nova_compute[253661]: 2025-11-22 09:31:23.482 253665 INFO nova.virt.libvirt.driver [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Deletion of /var/lib/nova/instances/0a3de2bf-6305-4b6e-a9c1-1932598e5bb9_del complete#033[00m
Nov 22 04:31:23 np0005532048 nova_compute[253661]: 2025-11-22 09:31:23.586 253665 INFO nova.compute.manager [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:31:23 np0005532048 nova_compute[253661]: 2025-11-22 09:31:23.586 253665 DEBUG oslo.service.loopingcall [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:31:23 np0005532048 nova_compute[253661]: 2025-11-22 09:31:23.587 253665 DEBUG nova.compute.manager [-] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:31:23 np0005532048 nova_compute[253661]: 2025-11-22 09:31:23.587 253665 DEBUG nova.network.neutron [-] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:31:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:31:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 167 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 124 op/s
Nov 22 04:31:25 np0005532048 nova_compute[253661]: 2025-11-22 09:31:25.074 253665 DEBUG nova.compute.manager [req-4ed781e7-abee-4db0-8cb7-9ab125ccee36 req-b1792056-4011-433c-b7ca-093de52741fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received event network-vif-unplugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:31:25 np0005532048 nova_compute[253661]: 2025-11-22 09:31:25.075 253665 DEBUG oslo_concurrency.lockutils [req-4ed781e7-abee-4db0-8cb7-9ab125ccee36 req-b1792056-4011-433c-b7ca-093de52741fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:25 np0005532048 nova_compute[253661]: 2025-11-22 09:31:25.075 253665 DEBUG oslo_concurrency.lockutils [req-4ed781e7-abee-4db0-8cb7-9ab125ccee36 req-b1792056-4011-433c-b7ca-093de52741fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:25 np0005532048 nova_compute[253661]: 2025-11-22 09:31:25.076 253665 DEBUG oslo_concurrency.lockutils [req-4ed781e7-abee-4db0-8cb7-9ab125ccee36 req-b1792056-4011-433c-b7ca-093de52741fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:25 np0005532048 nova_compute[253661]: 2025-11-22 09:31:25.076 253665 DEBUG nova.compute.manager [req-4ed781e7-abee-4db0-8cb7-9ab125ccee36 req-b1792056-4011-433c-b7ca-093de52741fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] No waiting events found dispatching network-vif-unplugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:31:25 np0005532048 nova_compute[253661]: 2025-11-22 09:31:25.076 253665 DEBUG nova.compute.manager [req-4ed781e7-abee-4db0-8cb7-9ab125ccee36 req-b1792056-4011-433c-b7ca-093de52741fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received event network-vif-unplugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:31:25 np0005532048 nova_compute[253661]: 2025-11-22 09:31:25.283 253665 DEBUG nova.network.neutron [-] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:31:25 np0005532048 nova_compute[253661]: 2025-11-22 09:31:25.300 253665 INFO nova.compute.manager [-] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Took 1.71 seconds to deallocate network for instance.#033[00m
Nov 22 04:31:25 np0005532048 nova_compute[253661]: 2025-11-22 09:31:25.347 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:25 np0005532048 nova_compute[253661]: 2025-11-22 09:31:25.348 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:25 np0005532048 nova_compute[253661]: 2025-11-22 09:31:25.469 253665 DEBUG oslo_concurrency.processutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:31:25 np0005532048 nova_compute[253661]: 2025-11-22 09:31:25.589 253665 INFO nova.compute.manager [None req-b22f5b8f-6e06-4c5b-9d3d-02fa0a0d0d9f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Get console output#033[00m
Nov 22 04:31:25 np0005532048 nova_compute[253661]: 2025-11-22 09:31:25.600 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:31:25 np0005532048 nova_compute[253661]: 2025-11-22 09:31:25.696 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 145 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.1 MiB/s wr, 149 op/s
Nov 22 04:31:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:31:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4184846593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:31:25 np0005532048 nova_compute[253661]: 2025-11-22 09:31:25.966 253665 DEBUG oslo_concurrency.processutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:31:25 np0005532048 nova_compute[253661]: 2025-11-22 09:31:25.975 253665 DEBUG nova.compute.provider_tree [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:31:25 np0005532048 nova_compute[253661]: 2025-11-22 09:31:25.988 253665 DEBUG nova.scheduler.client.report [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:31:26 np0005532048 nova_compute[253661]: 2025-11-22 09:31:26.009 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:26 np0005532048 nova_compute[253661]: 2025-11-22 09:31:26.042 253665 INFO nova.scheduler.client.report [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Deleted allocations for instance 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9#033[00m
Nov 22 04:31:26 np0005532048 nova_compute[253661]: 2025-11-22 09:31:26.135 253665 DEBUG oslo_concurrency.lockutils [None req-9b341eed-5879-45ec-9fd9-1a4cf8a05b36 2ab3f37df3674a13a02926c1e3d79bbf a426656c0559412895fd288e6aaaf579 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:26 np0005532048 nova_compute[253661]: 2025-11-22 09:31:26.784 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:26 np0005532048 nova_compute[253661]: 2025-11-22 09:31:26.785 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:26 np0005532048 nova_compute[253661]: 2025-11-22 09:31:26.785 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:26 np0005532048 nova_compute[253661]: 2025-11-22 09:31:26.786 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:26 np0005532048 nova_compute[253661]: 2025-11-22 09:31:26.786 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:26 np0005532048 nova_compute[253661]: 2025-11-22 09:31:26.787 253665 INFO nova.compute.manager [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Terminating instance#033[00m
Nov 22 04:31:26 np0005532048 nova_compute[253661]: 2025-11-22 09:31:26.788 253665 DEBUG nova.compute.manager [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:31:27 np0005532048 kernel: tapa13cc417-ed (unregistering): left promiscuous mode
Nov 22 04:31:27 np0005532048 NetworkManager[48920]: <info>  [1763803887.0519] device (tapa13cc417-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:31:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:27Z|00973|binding|INFO|Releasing lport a13cc417-edce-4c30-a5b0-f90095810bcc from this chassis (sb_readonly=0)
Nov 22 04:31:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:27Z|00974|binding|INFO|Setting lport a13cc417-edce-4c30-a5b0-f90095810bcc down in Southbound
Nov 22 04:31:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:27Z|00975|binding|INFO|Removing iface tapa13cc417-ed ovn-installed in OVS
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.062 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.075 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:4e:e1 10.100.0.10'], port_security=['fa:16:3e:55:4e:e1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bcad6c6-374a-4697-ae00-916836e6498e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0dea6476-4fee-41aa-8572-212b34cd06a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79997ef2-17e2-4f21-8229-7c6dd79ef3c8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a13cc417-edce-4c30-a5b0-f90095810bcc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:31:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.078 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a13cc417-edce-4c30-a5b0-f90095810bcc in datapath 7bcad6c6-374a-4697-ae00-916836e6498e unbound from our chassis#033[00m
Nov 22 04:31:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.081 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7bcad6c6-374a-4697-ae00-916836e6498e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.082 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.082 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cebd317e-224f-42c4-ae5f-e3698c19b021]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:31:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.083 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e namespace which is not needed anymore#033[00m
Nov 22 04:31:27 np0005532048 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d00000060.scope: Deactivated successfully.
Nov 22 04:31:27 np0005532048 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d00000060.scope: Consumed 14.003s CPU time.
Nov 22 04:31:27 np0005532048 systemd-machined[215941]: Machine qemu-116-instance-00000060 terminated.
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.174 253665 DEBUG nova.compute.manager [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received event network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.174 253665 DEBUG oslo_concurrency.lockutils [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.175 253665 DEBUG oslo_concurrency.lockutils [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.175 253665 DEBUG oslo_concurrency.lockutils [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "0a3de2bf-6305-4b6e-a9c1-1932598e5bb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.175 253665 DEBUG nova.compute.manager [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] No waiting events found dispatching network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.176 253665 WARNING nova.compute.manager [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received unexpected event network-vif-plugged-4509527d-ccca-4d6f-96b5-cce2f7e28b54 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.176 253665 DEBUG nova.compute.manager [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Received event network-vif-deleted-4509527d-ccca-4d6f-96b5-cce2f7e28b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.176 253665 DEBUG nova.compute.manager [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-changed-a13cc417-edce-4c30-a5b0-f90095810bcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.177 253665 DEBUG nova.compute.manager [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Refreshing instance network info cache due to event network-changed-a13cc417-edce-4c30-a5b0-f90095810bcc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.177 253665 DEBUG oslo_concurrency.lockutils [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.177 253665 DEBUG oslo_concurrency.lockutils [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.178 253665 DEBUG nova.network.neutron [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Refreshing network info cache for port a13cc417-edce-4c30-a5b0-f90095810bcc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.229 253665 INFO nova.virt.libvirt.driver [-] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Instance destroyed successfully.#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.231 253665 DEBUG nova.objects.instance [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'resources' on Instance uuid 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:31:27 np0005532048 neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e[352595]: [NOTICE]   (352601) : haproxy version is 2.8.14-c23fe91
Nov 22 04:31:27 np0005532048 neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e[352595]: [NOTICE]   (352601) : path to executable is /usr/sbin/haproxy
Nov 22 04:31:27 np0005532048 neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e[352595]: [WARNING]  (352601) : Exiting Master process...
Nov 22 04:31:27 np0005532048 neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e[352595]: [ALERT]    (352601) : Current worker (352603) exited with code 143 (Terminated)
Nov 22 04:31:27 np0005532048 neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e[352595]: [WARNING]  (352601) : All workers exited. Exiting... (0)
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.247 253665 DEBUG nova.virt.libvirt.vif [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:30:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-458402395',display_name='tempest-TestNetworkAdvancedServerOps-server-458402395',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-458402395',id=96,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPN6Rf2Pe6I6Kwug9Q7FGB75vk9ho8mQhQaKMB+gkIT1QntL149y3I7blWOrUF/CBmpP9hEhIUJwXQpTVsnaSm2uVyBQ0rC8pr4pNUdemX2qkiqIxYyhgu6PS131KVtofw==',key_name='tempest-TestNetworkAdvancedServerOps-1088921116',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-1ip0oei2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:31:22Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.247 253665 DEBUG nova.network.os_vif_util [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.248 253665 DEBUG nova.network.os_vif_util [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:55:4e:e1,bridge_name='br-int',has_traffic_filtering=True,id=a13cc417-edce-4c30-a5b0-f90095810bcc,network=Network(7bcad6c6-374a-4697-ae00-916836e6498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13cc417-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.248 253665 DEBUG os_vif [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:4e:e1,bridge_name='br-int',has_traffic_filtering=True,id=a13cc417-edce-4c30-a5b0-f90095810bcc,network=Network(7bcad6c6-374a-4697-ae00-916836e6498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13cc417-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:31:27 np0005532048 systemd[1]: libpod-eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc.scope: Deactivated successfully.
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.249 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.250 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa13cc417-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.252 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.255 253665 INFO os_vif [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:4e:e1,bridge_name='br-int',has_traffic_filtering=True,id=a13cc417-edce-4c30-a5b0-f90095810bcc,network=Network(7bcad6c6-374a-4697-ae00-916836e6498e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13cc417-ed')#033[00m
Nov 22 04:31:27 np0005532048 podman[353336]: 2025-11-22 09:31:27.256724699 +0000 UTC m=+0.059643729 container died eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:31:27 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc-userdata-shm.mount: Deactivated successfully.
Nov 22 04:31:27 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2b23d174e80fe906250cd93b44fdfabff1fe6ae03ff6df4bfad6521ddf56c356-merged.mount: Deactivated successfully.
Nov 22 04:31:27 np0005532048 podman[353336]: 2025-11-22 09:31:27.295258238 +0000 UTC m=+0.098177288 container cleanup eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:31:27 np0005532048 systemd[1]: libpod-conmon-eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc.scope: Deactivated successfully.
Nov 22 04:31:27 np0005532048 podman[353393]: 2025-11-22 09:31:27.391125777 +0000 UTC m=+0.062191861 container remove eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:31:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.398 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[84632274-557c-4200-aa5c-734f8ec82c75]: (4, ('Sat Nov 22 09:31:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e (eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc)\neab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc\nSat Nov 22 09:31:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e (eab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc)\neab6b59cfd4b6b30f06244f57cc2b3fa5e746dbc3ae42129a0f8670838aa64dc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:31:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.400 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fc42f522-9aec-460d-9536-b724c6d132fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:31:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.401 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bcad6c6-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:27 np0005532048 kernel: tap7bcad6c6-30: left promiscuous mode
Nov 22 04:31:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.422 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e44ac1ff-65ad-4efd-90f4-4a0512958dff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.423 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.434 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[88447bb4-bdad-486d-85b2-eddb5eee2adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:31:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.435 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[33b15d44-a9fa-4355-8235-0ad850189dc0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:31:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.460 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[919b9e4c-3583-42f4-b525-157a43f2e0ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670855, 'reachable_time': 15238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353408, 'error': None, 'target': 'ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:31:27 np0005532048 systemd[1]: run-netns-ovnmeta\x2d7bcad6c6\x2d374a\x2d4697\x2dae00\x2d916836e6498e.mount: Deactivated successfully.
Nov 22 04:31:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.466 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7bcad6c6-374a-4697-ae00-916836e6498e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:31:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.466 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[39c294a6-0e07-4b60-8b09-c71a6eeb70e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.715 253665 INFO nova.virt.libvirt.driver [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Deleting instance files /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_del#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.717 253665 INFO nova.virt.libvirt.driver [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Deletion of /var/lib/nova/instances/84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2_del complete#033[00m
Nov 22 04:31:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 137 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 905 KiB/s wr, 114 op/s
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.770 253665 INFO nova.compute.manager [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.771 253665 DEBUG oslo.service.loopingcall [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.772 253665 DEBUG nova.compute.manager [-] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:31:27 np0005532048 nova_compute[253661]: 2025-11-22 09:31:27.772 253665 DEBUG nova.network.neutron [-] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:31:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.974 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.975 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:27.975 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.114 253665 DEBUG nova.network.neutron [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Updated VIF entry in instance network info cache for port a13cc417-edce-4c30-a5b0-f90095810bcc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.115 253665 DEBUG nova.network.neutron [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Updating instance_info_cache with network_info: [{"id": "a13cc417-edce-4c30-a5b0-f90095810bcc", "address": "fa:16:3e:55:4e:e1", "network": {"id": "7bcad6c6-374a-4697-ae00-916836e6498e", "bridge": "br-int", "label": "tempest-network-smoke--90187414", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13cc417-ed", "ovs_interfaceid": "a13cc417-edce-4c30-a5b0-f90095810bcc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.139 253665 DEBUG oslo_concurrency.lockutils [req-11630167-9b26-4158-885e-750e814b971e req-7c3d6893-c34f-4788-bf62-8d3d33c9394b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.266 253665 DEBUG nova.compute.manager [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-vif-unplugged-a13cc417-edce-4c30-a5b0-f90095810bcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.266 253665 DEBUG oslo_concurrency.lockutils [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.267 253665 DEBUG oslo_concurrency.lockutils [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.267 253665 DEBUG oslo_concurrency.lockutils [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.267 253665 DEBUG nova.compute.manager [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] No waiting events found dispatching network-vif-unplugged-a13cc417-edce-4c30-a5b0-f90095810bcc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.268 253665 DEBUG nova.compute.manager [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-vif-unplugged-a13cc417-edce-4c30-a5b0-f90095810bcc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.268 253665 DEBUG nova.compute.manager [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.268 253665 DEBUG oslo_concurrency.lockutils [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.269 253665 DEBUG oslo_concurrency.lockutils [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.269 253665 DEBUG oslo_concurrency.lockutils [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.270 253665 DEBUG nova.compute.manager [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] No waiting events found dispatching network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.270 253665 WARNING nova.compute.manager [req-b048b38d-8230-424a-921a-b820f85deafd req-449a4abe-b374-44dd-9e03-45a3ed010f5c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received unexpected event network-vif-plugged-a13cc417-edce-4c30-a5b0-f90095810bcc for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.287 253665 DEBUG nova.network.neutron [-] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.308 253665 INFO nova.compute.manager [-] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Took 1.54 seconds to deallocate network for instance.#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.358 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.358 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.371 253665 DEBUG nova.compute.manager [req-8045d8b0-3b00-4b8d-ab63-10ce9eeb320f req-19aa8cc1-dc35-48b0-ada0-dc6482e9c332 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Received event network-vif-deleted-a13cc417-edce-4c30-a5b0-f90095810bcc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.415 253665 DEBUG oslo_concurrency.processutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:31:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 67 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 540 KiB/s wr, 149 op/s
Nov 22 04:31:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:31:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1567001461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.861 253665 DEBUG oslo_concurrency.processutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.870 253665 DEBUG nova.compute.provider_tree [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.890 253665 DEBUG nova.scheduler.client.report [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.938 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:29 np0005532048 nova_compute[253661]: 2025-11-22 09:31:29.988 253665 INFO nova.scheduler.client.report [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Deleted allocations for instance 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2#033[00m
Nov 22 04:31:30 np0005532048 nova_compute[253661]: 2025-11-22 09:31:30.071 253665 DEBUG oslo_concurrency.lockutils [None req-ac1fd153-aab1-4cf8-bd65-761fe303857c ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:30 np0005532048 nova_compute[253661]: 2025-11-22 09:31:30.498 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:30 np0005532048 nova_compute[253661]: 2025-11-22 09:31:30.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:30 np0005532048 nova_compute[253661]: 2025-11-22 09:31:30.698 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 67 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 125 op/s
Nov 22 04:31:32 np0005532048 nova_compute[253661]: 2025-11-22 09:31:32.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:31:32 np0005532048 nova_compute[253661]: 2025-11-22 09:31:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:31:32 np0005532048 nova_compute[253661]: 2025-11-22 09:31:32.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:31:32 np0005532048 nova_compute[253661]: 2025-11-22 09:31:32.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:31:32 np0005532048 nova_compute[253661]: 2025-11-22 09:31:32.241 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:31:32 np0005532048 nova_compute[253661]: 2025-11-22 09:31:32.251 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.627477) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803893627531, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 2161, "num_deletes": 263, "total_data_size": 3183771, "memory_usage": 3238168, "flush_reason": "Manual Compaction"}
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803893644242, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 2043766, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40250, "largest_seqno": 42410, "table_properties": {"data_size": 2035989, "index_size": 4403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19520, "raw_average_key_size": 21, "raw_value_size": 2019082, "raw_average_value_size": 2226, "num_data_blocks": 196, "num_entries": 907, "num_filter_entries": 907, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803717, "oldest_key_time": 1763803717, "file_creation_time": 1763803893, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 16852 microseconds, and 5534 cpu microseconds.
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.644293) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 2043766 bytes OK
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.644353) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.646654) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.646668) EVENT_LOG_v1 {"time_micros": 1763803893646664, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.646688) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 3174597, prev total WAL file size 3174597, number of live WAL files 2.
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.647643) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353034' seq:72057594037927935, type:22 .. '6D6772737461740031373535' seq:0, type:0; will stop at (end)
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(1995KB)], [89(9722KB)]
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803893647676, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11999515, "oldest_snapshot_seqno": -1}
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6947 keys, 9771432 bytes, temperature: kUnknown
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803893732702, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 9771432, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9724582, "index_size": 28370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17413, "raw_key_size": 174827, "raw_average_key_size": 25, "raw_value_size": 9599915, "raw_average_value_size": 1381, "num_data_blocks": 1148, "num_entries": 6947, "num_filter_entries": 6947, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803893, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.732951) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9771432 bytes
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.734593) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.0 rd, 114.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.5 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(10.7) write-amplify(4.8) OK, records in: 7397, records dropped: 450 output_compression: NoCompression
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.734641) EVENT_LOG_v1 {"time_micros": 1763803893734624, "job": 52, "event": "compaction_finished", "compaction_time_micros": 85107, "compaction_time_cpu_micros": 23212, "output_level": 6, "num_output_files": 1, "total_output_size": 9771432, "num_input_records": 7397, "num_output_records": 6947, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803893735400, "job": 52, "event": "table_file_deletion", "file_number": 91}
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803893737832, "job": 52, "event": "table_file_deletion", "file_number": 89}
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.647547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.738088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.738099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.738102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.738105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:31:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:33.738107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:31:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 127 op/s
Nov 22 04:31:34 np0005532048 nova_compute[253661]: 2025-11-22 09:31:34.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:31:35 np0005532048 nova_compute[253661]: 2025-11-22 09:31:35.699 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.7 KiB/s wr, 94 op/s
Nov 22 04:31:36 np0005532048 nova_compute[253661]: 2025-11-22 09:31:36.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:31:36 np0005532048 nova_compute[253661]: 2025-11-22 09:31:36.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:31:36 np0005532048 nova_compute[253661]: 2025-11-22 09:31:36.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:31:36 np0005532048 nova_compute[253661]: 2025-11-22 09:31:36.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:31:37 np0005532048 nova_compute[253661]: 2025-11-22 09:31:37.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:31:37 np0005532048 nova_compute[253661]: 2025-11-22 09:31:37.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:37 np0005532048 nova_compute[253661]: 2025-11-22 09:31:37.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:37 np0005532048 nova_compute[253661]: 2025-11-22 09:31:37.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:37 np0005532048 nova_compute[253661]: 2025-11-22 09:31:37.250 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:31:37 np0005532048 nova_compute[253661]: 2025-11-22 09:31:37.251 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:31:37 np0005532048 nova_compute[253661]: 2025-11-22 09:31:37.297 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:31:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3055461790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:31:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.0 KiB/s wr, 41 op/s
Nov 22 04:31:37 np0005532048 nova_compute[253661]: 2025-11-22 09:31:37.756 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:31:37 np0005532048 nova_compute[253661]: 2025-11-22 09:31:37.950 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:31:37 np0005532048 nova_compute[253661]: 2025-11-22 09:31:37.951 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3887MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:31:37 np0005532048 nova_compute[253661]: 2025-11-22 09:31:37.952 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:37 np0005532048 nova_compute[253661]: 2025-11-22 09:31:37.952 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:37 np0005532048 nova_compute[253661]: 2025-11-22 09:31:37.983 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803882.9819217, 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:31:37 np0005532048 nova_compute[253661]: 2025-11-22 09:31:37.984 253665 INFO nova.compute.manager [-] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:31:38 np0005532048 nova_compute[253661]: 2025-11-22 09:31:38.011 253665 DEBUG nova.compute.manager [None req-80f961e2-0ede-48e9-b998-97b1fbd8138c - - - - - -] [instance: 0a3de2bf-6305-4b6e-a9c1-1932598e5bb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:31:38 np0005532048 nova_compute[253661]: 2025-11-22 09:31:38.038 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:31:38 np0005532048 nova_compute[253661]: 2025-11-22 09:31:38.039 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:31:38 np0005532048 nova_compute[253661]: 2025-11-22 09:31:38.056 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 22 04:31:38 np0005532048 nova_compute[253661]: 2025-11-22 09:31:38.079 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 22 04:31:38 np0005532048 nova_compute[253661]: 2025-11-22 09:31:38.079 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 04:31:38 np0005532048 nova_compute[253661]: 2025-11-22 09:31:38.093 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 22 04:31:38 np0005532048 nova_compute[253661]: 2025-11-22 09:31:38.121 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 22 04:31:38 np0005532048 nova_compute[253661]: 2025-11-22 09:31:38.135 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:31:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:31:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2394604051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:31:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:31:38 np0005532048 nova_compute[253661]: 2025-11-22 09:31:38.637 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:31:38 np0005532048 nova_compute[253661]: 2025-11-22 09:31:38.643 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:31:38 np0005532048 nova_compute[253661]: 2025-11-22 09:31:38.661 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:31:38 np0005532048 nova_compute[253661]: 2025-11-22 09:31:38.692 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:31:38 np0005532048 nova_compute[253661]: 2025-11-22 09:31:38.692 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.664446) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803899664506, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 306, "num_deletes": 251, "total_data_size": 99376, "memory_usage": 106488, "flush_reason": "Manual Compaction"}
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803899667023, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 98525, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42411, "largest_seqno": 42716, "table_properties": {"data_size": 96573, "index_size": 180, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5114, "raw_average_key_size": 18, "raw_value_size": 92692, "raw_average_value_size": 333, "num_data_blocks": 8, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803894, "oldest_key_time": 1763803894, "file_creation_time": 1763803899, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 2611 microseconds, and 1037 cpu microseconds.
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.667059) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 98525 bytes OK
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.667081) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.668309) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.668343) EVENT_LOG_v1 {"time_micros": 1763803899668337, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.668369) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 97175, prev total WAL file size 97175, number of live WAL files 2.
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.668788) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(96KB)], [92(9542KB)]
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803899668884, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 9869957, "oldest_snapshot_seqno": -1}
Nov 22 04:31:39 np0005532048 nova_compute[253661]: 2025-11-22 09:31:39.694 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6716 keys, 8217195 bytes, temperature: kUnknown
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803899741703, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 8217195, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8173305, "index_size": 25986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 170722, "raw_average_key_size": 25, "raw_value_size": 8054147, "raw_average_value_size": 1199, "num_data_blocks": 1038, "num_entries": 6716, "num_filter_entries": 6716, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803899, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.742021) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 8217195 bytes
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.743833) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.4 rd, 112.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.3 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(183.6) write-amplify(83.4) OK, records in: 7225, records dropped: 509 output_compression: NoCompression
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.743863) EVENT_LOG_v1 {"time_micros": 1763803899743851, "job": 54, "event": "compaction_finished", "compaction_time_micros": 72908, "compaction_time_cpu_micros": 38373, "output_level": 6, "num_output_files": 1, "total_output_size": 8217195, "num_input_records": 7225, "num_output_records": 6716, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803899744085, "job": 54, "event": "table_file_deletion", "file_number": 94}
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803899745678, "job": 54, "event": "table_file_deletion", "file_number": 92}
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.668698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.745716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.745721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.745723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.745724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:31:39 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:31:39.745726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:31:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.5 KiB/s wr, 39 op/s
Nov 22 04:31:40 np0005532048 nova_compute[253661]: 2025-11-22 09:31:40.701 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 2 op/s
Nov 22 04:31:42 np0005532048 nova_compute[253661]: 2025-11-22 09:31:42.225 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803887.2243953, 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:31:42 np0005532048 nova_compute[253661]: 2025-11-22 09:31:42.225 253665 INFO nova.compute.manager [-] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:31:42 np0005532048 nova_compute[253661]: 2025-11-22 09:31:42.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:31:42 np0005532048 nova_compute[253661]: 2025-11-22 09:31:42.240 253665 DEBUG nova.compute.manager [None req-c07db297-99cc-4a60-8594-8b8350052adf - - - - - -] [instance: 84ebb00a-8b3d-47b6-8e37-0b40a8ee4ef2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:31:42 np0005532048 nova_compute[253661]: 2025-11-22 09:31:42.301 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:43 np0005532048 podman[353480]: 2025-11-22 09:31:43.367174122 +0000 UTC m=+0.058194644 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:31:43 np0005532048 podman[353481]: 2025-11-22 09:31:43.379796492 +0000 UTC m=+0.064811956 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 22 04:31:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:31:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 2 op/s
Nov 22 04:31:45 np0005532048 nova_compute[253661]: 2025-11-22 09:31:45.702 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:31:47 np0005532048 nova_compute[253661]: 2025-11-22 09:31:47.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:31:47 np0005532048 nova_compute[253661]: 2025-11-22 09:31:47.304 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:31:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:31:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:31:50 np0005532048 nova_compute[253661]: 2025-11-22 09:31:50.346 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:31:50 np0005532048 nova_compute[253661]: 2025-11-22 09:31:50.347 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:31:50 np0005532048 nova_compute[253661]: 2025-11-22 09:31:50.361 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:31:50 np0005532048 nova_compute[253661]: 2025-11-22 09:31:50.449 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:31:50 np0005532048 nova_compute[253661]: 2025-11-22 09:31:50.450 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:31:50 np0005532048 podman[353519]: 2025-11-22 09:31:50.450356897 +0000 UTC m=+0.137735402 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:31:50 np0005532048 nova_compute[253661]: 2025-11-22 09:31:50.459 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:31:50 np0005532048 nova_compute[253661]: 2025-11-22 09:31:50.459 253665 INFO nova.compute.claims [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:31:50 np0005532048 nova_compute[253661]: 2025-11-22 09:31:50.567 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:31:50 np0005532048 nova_compute[253661]: 2025-11-22 09:31:50.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:31:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/998360242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.038 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.044 253665 DEBUG nova.compute.provider_tree [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.059 253665 DEBUG nova.scheduler.client.report [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.088 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.089 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.132 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.133 253665 DEBUG nova.network.neutron [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.158 253665 INFO nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.185 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.291 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.293 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.293 253665 INFO nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Creating image(s)
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.319 253665 DEBUG nova.storage.rbd_utils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.345 253665 DEBUG nova.storage.rbd_utils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.372 253665 DEBUG nova.storage.rbd_utils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.376 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.417 253665 DEBUG nova.policy [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '04e47309bea74c04b0750912db283ae1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '93c8020137e04db486facc42cfe30f23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.468 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.469 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.469 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.470 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.494 253665 DEBUG nova.storage.rbd_utils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.498 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 7da16450-9ec5-472a-99df-81f56ee341fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:31:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 41 MiB data, 653 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.883 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 7da16450-9ec5-472a-99df-81f56ee341fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:31:51 np0005532048 nova_compute[253661]: 2025-11-22 09:31:51.962 253665 DEBUG nova.storage.rbd_utils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] resizing rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.085 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.085 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.091 253665 DEBUG nova.objects.instance [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'migration_context' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.115 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.116 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Ensure instance console log exists: /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.116 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.117 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.117 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.118 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.213 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.214 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.225 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.226 253665 INFO nova.compute.claims [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:31:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:31:52
Nov 22 04:31:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:31:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:31:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'default.rgw.log', 'images', '.mgr', '.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta']
Nov 22 04:31:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.307 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.372 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:31:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:31:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:31:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:31:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:31:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:31:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.799 253665 DEBUG nova.network.neutron [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Successfully created port: 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:31:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:31:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/457592920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.858 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.864 253665 DEBUG nova.compute.provider_tree [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.885 253665 DEBUG nova.scheduler.client.report [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.935 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.937 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.988 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:31:52 np0005532048 nova_compute[253661]: 2025-11-22 09:31:52.988 253665 DEBUG nova.network.neutron [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.003 253665 INFO nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.017 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.105 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.106 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.106 253665 INFO nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Creating image(s)
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.134 253665 DEBUG nova.storage.rbd_utils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image d60d8746-9288-4829-8073-bed8cf04d748_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.159 253665 DEBUG nova.storage.rbd_utils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image d60d8746-9288-4829-8073-bed8cf04d748_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.185 253665 DEBUG nova.storage.rbd_utils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image d60d8746-9288-4829-8073-bed8cf04d748_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.189 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.229 253665 DEBUG nova.policy [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ac89f965408f4a26b39ee2ae4725ff14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0112f56c468c4f90971b92126078e951', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.277 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.279 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.280 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.281 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.320 253665 DEBUG nova.storage.rbd_utils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image d60d8746-9288-4829-8073-bed8cf04d748_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.329 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d60d8746-9288-4829-8073-bed8cf04d748_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:31:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:31:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 61 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 618 KiB/s wr, 22 op/s
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.831 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d60d8746-9288-4829-8073-bed8cf04d748_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:31:53 np0005532048 nova_compute[253661]: 2025-11-22 09:31:53.905 253665 DEBUG nova.storage.rbd_utils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] resizing rbd image d60d8746-9288-4829-8073-bed8cf04d748_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:31:54 np0005532048 nova_compute[253661]: 2025-11-22 09:31:54.036 253665 DEBUG nova.objects.instance [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'migration_context' on Instance uuid d60d8746-9288-4829-8073-bed8cf04d748 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:31:54 np0005532048 nova_compute[253661]: 2025-11-22 09:31:54.057 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:31:54 np0005532048 nova_compute[253661]: 2025-11-22 09:31:54.057 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Ensure instance console log exists: /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:31:54 np0005532048 nova_compute[253661]: 2025-11-22 09:31:54.058 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:31:54 np0005532048 nova_compute[253661]: 2025-11-22 09:31:54.058 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:31:54 np0005532048 nova_compute[253661]: 2025-11-22 09:31:54.059 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:31:54 np0005532048 nova_compute[253661]: 2025-11-22 09:31:54.060 253665 DEBUG nova.network.neutron [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Successfully updated port: 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:31:54 np0005532048 nova_compute[253661]: 2025-11-22 09:31:54.071 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:31:54 np0005532048 nova_compute[253661]: 2025-11-22 09:31:54.072 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquired lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:31:54 np0005532048 nova_compute[253661]: 2025-11-22 09:31:54.072 253665 DEBUG nova.network.neutron [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:31:54 np0005532048 nova_compute[253661]: 2025-11-22 09:31:54.227 253665 DEBUG nova.network.neutron [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:31:54 np0005532048 nova_compute[253661]: 2025-11-22 09:31:54.299 253665 DEBUG nova.compute.manager [req-5950f52b-c15e-42ae-a47f-805fe47d39ee req-d86a2a36-2702-4a3d-a456-710aa19f4e4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-changed-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:31:54 np0005532048 nova_compute[253661]: 2025-11-22 09:31:54.299 253665 DEBUG nova.compute.manager [req-5950f52b-c15e-42ae-a47f-805fe47d39ee req-d86a2a36-2702-4a3d-a456-710aa19f4e4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Refreshing instance network info cache due to event network-changed-5b1477f9-c3cf-4bac-95a5-109e7ae8d852. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:31:54 np0005532048 nova_compute[253661]: 2025-11-22 09:31:54.300 253665 DEBUG oslo_concurrency.lockutils [req-5950f52b-c15e-42ae-a47f-805fe47d39ee req-d86a2a36-2702-4a3d-a456-710aa19f4e4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:31:54 np0005532048 nova_compute[253661]: 2025-11-22 09:31:54.929 253665 DEBUG nova.network.neutron [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Successfully created port: f0934c58-4d53-43e5-8132-eb2195819f1f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.366 253665 DEBUG nova.network.neutron [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Updating instance_info_cache with network_info: [{"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.412 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Releasing lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.412 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance network_info: |[{"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.412 253665 DEBUG oslo_concurrency.lockutils [req-5950f52b-c15e-42ae-a47f-805fe47d39ee req-d86a2a36-2702-4a3d-a456-710aa19f4e4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.413 253665 DEBUG nova.network.neutron [req-5950f52b-c15e-42ae-a47f-805fe47d39ee req-d86a2a36-2702-4a3d-a456-710aa19f4e4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Refreshing network info cache for port 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.415 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Start _get_guest_xml network_info=[{"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.420 253665 WARNING nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.425 253665 DEBUG nova.virt.libvirt.host [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.425 253665 DEBUG nova.virt.libvirt.host [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.429 253665 DEBUG nova.virt.libvirt.host [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.429 253665 DEBUG nova.virt.libvirt.host [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.430 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.430 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.430 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.431 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.431 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.431 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.431 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.431 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.432 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.432 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.432 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.432 253665 DEBUG nova.virt.hardware [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.435 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:31:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:31:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:31:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:31:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:31:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.706 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 103 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.3 MiB/s wr, 39 op/s
Nov 22 04:31:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:31:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1054565765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.930 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.954 253665 DEBUG nova.storage.rbd_utils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:31:55 np0005532048 nova_compute[253661]: 2025-11-22 09:31:55.959 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.314 253665 DEBUG nova.network.neutron [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Successfully updated port: f0934c58-4d53-43e5-8132-eb2195819f1f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.340 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.340 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquired lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.340 253665 DEBUG nova.network.neutron [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:31:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:31:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1460515769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.425 253665 DEBUG nova.compute.manager [req-3a16ec20-17aa-4b69-885c-e3280cb04e24 req-c31dbcb1-f680-43a1-a164-fb895ab3ac1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-changed-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.426 253665 DEBUG nova.compute.manager [req-3a16ec20-17aa-4b69-885c-e3280cb04e24 req-c31dbcb1-f680-43a1-a164-fb895ab3ac1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Refreshing instance network info cache due to event network-changed-f0934c58-4d53-43e5-8132-eb2195819f1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.427 253665 DEBUG oslo_concurrency.lockutils [req-3a16ec20-17aa-4b69-885c-e3280cb04e24 req-c31dbcb1-f680-43a1-a164-fb895ab3ac1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.447 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.450 253665 DEBUG nova.virt.libvirt.vif [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:31:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1786356758',display_name='tempest-ServerRescueTestJSON-server-1786356758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1786356758',id=98,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93c8020137e04db486facc42cfe30f23',ramdisk_id='',reservation_id='r-nx025m1d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-264324954',owner_user_name='tempest-ServerRescueTestJSON-264324954-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:31:51Z,user_data=None,user_id='04e47309bea74c04b0750912db283ae1',uuid=7da16450-9ec5-472a-99df-81f56ee341fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.451 253665 DEBUG nova.network.os_vif_util [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converting VIF {"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.452 253665 DEBUG nova.network.os_vif_util [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:b5:90,bridge_name='br-int',has_traffic_filtering=True,id=5b1477f9-c3cf-4bac-95a5-109e7ae8d852,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b1477f9-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.454 253665 DEBUG nova.objects.instance [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.467 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  <uuid>7da16450-9ec5-472a-99df-81f56ee341fc</uuid>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  <name>instance-00000062</name>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerRescueTestJSON-server-1786356758</nova:name>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:31:55</nova:creationTime>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:31:56 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:        <nova:user uuid="04e47309bea74c04b0750912db283ae1">tempest-ServerRescueTestJSON-264324954-project-member</nova:user>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:        <nova:project uuid="93c8020137e04db486facc42cfe30f23">tempest-ServerRescueTestJSON-264324954</nova:project>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:        <nova:port uuid="5b1477f9-c3cf-4bac-95a5-109e7ae8d852">
Nov 22 04:31:56 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <entry name="serial">7da16450-9ec5-472a-99df-81f56ee341fc</entry>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <entry name="uuid">7da16450-9ec5-472a-99df-81f56ee341fc</entry>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/7da16450-9ec5-472a-99df-81f56ee341fc_disk">
Nov 22 04:31:56 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:31:56 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/7da16450-9ec5-472a-99df-81f56ee341fc_disk.config">
Nov 22 04:31:56 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:31:56 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:60:b5:90"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <target dev="tap5b1477f9-c3"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/console.log" append="off"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:31:56 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:31:56 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:31:56 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:31:56 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.469 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Preparing to wait for external event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.470 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.470 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.470 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.471 253665 DEBUG nova.virt.libvirt.vif [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:31:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1786356758',display_name='tempest-ServerRescueTestJSON-server-1786356758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1786356758',id=98,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93c8020137e04db486facc42cfe30f23',ramdisk_id='',reservation_id='r-nx025m1d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-264324954',owner_user_name='tempest-ServerRescueTestJSON-264324954-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:31:51Z,user_data=None,user_id='04e47309bea74c04b0750912db283ae1',uuid=7da16450-9ec5-472a-99df-81f56ee341fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.472 253665 DEBUG nova.network.os_vif_util [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converting VIF {"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.473 253665 DEBUG nova.network.os_vif_util [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:b5:90,bridge_name='br-int',has_traffic_filtering=True,id=5b1477f9-c3cf-4bac-95a5-109e7ae8d852,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b1477f9-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.473 253665 DEBUG os_vif [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:b5:90,bridge_name='br-int',has_traffic_filtering=True,id=5b1477f9-c3cf-4bac-95a5-109e7ae8d852,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b1477f9-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.474 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.474 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.475 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.478 253665 DEBUG nova.network.neutron [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.482 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.483 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b1477f9-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.483 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5b1477f9-c3, col_values=(('external_ids', {'iface-id': '5b1477f9-c3cf-4bac-95a5-109e7ae8d852', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:b5:90', 'vm-uuid': '7da16450-9ec5-472a-99df-81f56ee341fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.514 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:56 np0005532048 NetworkManager[48920]: <info>  [1763803916.5149] manager: (tap5b1477f9-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.517 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.524 253665 INFO os_vif [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:b5:90,bridge_name='br-int',has_traffic_filtering=True,id=5b1477f9-c3cf-4bac-95a5-109e7ae8d852,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b1477f9-c3')
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.572 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.572 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.573 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No VIF found with MAC fa:16:3e:60:b5:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.573 253665 INFO nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Using config drive
Nov 22 04:31:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:31:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:31:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:31:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:31:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:31:56 np0005532048 nova_compute[253661]: 2025-11-22 09:31:56.602 253665 DEBUG nova.storage.rbd_utils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.232 253665 INFO nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Creating config drive at /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.242 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgbafze39 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.423 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgbafze39" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.470 253665 DEBUG nova.storage.rbd_utils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.476 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.540 253665 DEBUG nova.network.neutron [req-5950f52b-c15e-42ae-a47f-805fe47d39ee req-d86a2a36-2702-4a3d-a456-710aa19f4e4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Updated VIF entry in instance network info cache for port 5b1477f9-c3cf-4bac-95a5-109e7ae8d852. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.542 253665 DEBUG nova.network.neutron [req-5950f52b-c15e-42ae-a47f-805fe47d39ee req-d86a2a36-2702-4a3d-a456-710aa19f4e4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Updating instance_info_cache with network_info: [{"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.560 253665 DEBUG oslo_concurrency.lockutils [req-5950f52b-c15e-42ae-a47f-805fe47d39ee req-d86a2a36-2702-4a3d-a456-710aa19f4e4d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.686 253665 DEBUG oslo_concurrency.processutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.687 253665 INFO nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Deleting local config drive /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config because it was imported into RBD.
Nov 22 04:31:57 np0005532048 kernel: tap5b1477f9-c3: entered promiscuous mode
Nov 22 04:31:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:57Z|00976|binding|INFO|Claiming lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for this chassis.
Nov 22 04:31:57 np0005532048 NetworkManager[48920]: <info>  [1763803917.7570] manager: (tap5b1477f9-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Nov 22 04:31:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:57Z|00977|binding|INFO|5b1477f9-c3cf-4bac-95a5-109e7ae8d852: Claiming fa:16:3e:60:b5:90 10.100.0.7
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.757 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.762 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 120 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.1 MiB/s wr, 52 op/s
Nov 22 04:31:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:57.776 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b5:90 10.100.0.7'], port_security=['fa:16:3e:60:b5:90 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7da16450-9ec5-472a-99df-81f56ee341fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '2', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=5b1477f9-c3cf-4bac-95a5-109e7ae8d852) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:31:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:57.777 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 in datapath 18e5030a-5673-404f-927e-25a76f3164ea bound to our chassis
Nov 22 04:31:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:57.781 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 04:31:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:57.784 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ed82dc-1dbd-4e9b-ab3f-933998a402c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:31:57 np0005532048 systemd-udevd[354055]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:31:57 np0005532048 NetworkManager[48920]: <info>  [1763803917.8011] device (tap5b1477f9-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:31:57 np0005532048 NetworkManager[48920]: <info>  [1763803917.8024] device (tap5b1477f9-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:31:57 np0005532048 systemd-machined[215941]: New machine qemu-118-instance-00000062.
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.846 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:57 np0005532048 systemd[1]: Started Virtual Machine qemu-118-instance-00000062.
Nov 22 04:31:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:57Z|00978|binding|INFO|Setting lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 ovn-installed in OVS
Nov 22 04:31:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:57Z|00979|binding|INFO|Setting lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 up in Southbound
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.852 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.874 253665 DEBUG nova.network.neutron [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updating instance_info_cache with network_info: [{"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.893 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Releasing lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.893 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Instance network_info: |[{"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.894 253665 DEBUG oslo_concurrency.lockutils [req-3a16ec20-17aa-4b69-885c-e3280cb04e24 req-c31dbcb1-f680-43a1-a164-fb895ab3ac1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.894 253665 DEBUG nova.network.neutron [req-3a16ec20-17aa-4b69-885c-e3280cb04e24 req-c31dbcb1-f680-43a1-a164-fb895ab3ac1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Refreshing network info cache for port f0934c58-4d53-43e5-8132-eb2195819f1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.897 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Start _get_guest_xml network_info=[{"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.913 253665 WARNING nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.920 253665 DEBUG nova.virt.libvirt.host [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.921 253665 DEBUG nova.virt.libvirt.host [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.926 253665 DEBUG nova.virt.libvirt.host [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.927 253665 DEBUG nova.virt.libvirt.host [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.927 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.927 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.928 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.928 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.928 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.929 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.929 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.929 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.929 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.929 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.930 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.930 253665 DEBUG nova.virt.hardware [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:31:57 np0005532048 nova_compute[253661]: 2025-11-22 09:31:57.933 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.265 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803918.2641804, 7da16450-9ec5-472a-99df-81f56ee341fc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.267 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] VM Started (Lifecycle Event)
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.289 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.293 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803918.2660067, 7da16450-9ec5-472a-99df-81f56ee341fc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.293 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] VM Paused (Lifecycle Event)
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.312 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.316 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.338 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:31:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:31:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/230568166' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.399 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.421 253665 DEBUG nova.storage.rbd_utils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image d60d8746-9288-4829-8073-bed8cf04d748_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.425 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:31:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:31:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:31:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1647526940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.946 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.948 253665 DEBUG nova.virt.libvirt.vif [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:31:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-714198839',display_name='tempest-TestNetworkAdvancedServerOps-server-714198839',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-714198839',id=99,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGR2b18SIpx8gS1E3y/TzQyi9x+qeFqs0jOon8sbMm/5ZIjx+NrI5fGq/DEFizh5YAuLO2aSf/znN/DytSjdMVp7+cSM7ae+/kERmK84ftJ2WIfziOJizQIKJYVt0Z/aeQ==',key_name='tempest-TestNetworkAdvancedServerOps-1970810248',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-a3b9deas',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:31:53Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=d60d8746-9288-4829-8073-bed8cf04d748,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.948 253665 DEBUG nova.network.os_vif_util [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.949 253665 DEBUG nova.network.os_vif_util [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:28:3e,bridge_name='br-int',has_traffic_filtering=True,id=f0934c58-4d53-43e5-8132-eb2195819f1f,network=Network(0263cd25-ddb2-49f9-ab5b-2f514c861684),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0934c58-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.950 253665 DEBUG nova.objects.instance [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid d60d8746-9288-4829-8073-bed8cf04d748 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.965 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  <uuid>d60d8746-9288-4829-8073-bed8cf04d748</uuid>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  <name>instance-00000063</name>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-714198839</nova:name>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:31:57</nova:creationTime>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:31:58 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:        <nova:user uuid="ac89f965408f4a26b39ee2ae4725ff14">tempest-TestNetworkAdvancedServerOps-1215776227-project-member</nova:user>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:        <nova:project uuid="0112f56c468c4f90971b92126078e951">tempest-TestNetworkAdvancedServerOps-1215776227</nova:project>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:        <nova:port uuid="f0934c58-4d53-43e5-8132-eb2195819f1f">
Nov 22 04:31:58 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <entry name="serial">d60d8746-9288-4829-8073-bed8cf04d748</entry>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <entry name="uuid">d60d8746-9288-4829-8073-bed8cf04d748</entry>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d60d8746-9288-4829-8073-bed8cf04d748_disk">
Nov 22 04:31:58 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:31:58 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d60d8746-9288-4829-8073-bed8cf04d748_disk.config">
Nov 22 04:31:58 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:31:58 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:00:28:3e"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <target dev="tapf0934c58-4d"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748/console.log" append="off"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:31:58 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:31:58 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:31:58 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:31:58 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.966 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Preparing to wait for external event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.968 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.968 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.969 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.969 253665 DEBUG nova.virt.libvirt.vif [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:31:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-714198839',display_name='tempest-TestNetworkAdvancedServerOps-server-714198839',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-714198839',id=99,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGR2b18SIpx8gS1E3y/TzQyi9x+qeFqs0jOon8sbMm/5ZIjx+NrI5fGq/DEFizh5YAuLO2aSf/znN/DytSjdMVp7+cSM7ae+/kERmK84ftJ2WIfziOJizQIKJYVt0Z/aeQ==',key_name='tempest-TestNetworkAdvancedServerOps-1970810248',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-a3b9deas',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:31:53Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=d60d8746-9288-4829-8073-bed8cf04d748,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.969 253665 DEBUG nova.network.os_vif_util [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.970 253665 DEBUG nova.network.os_vif_util [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:28:3e,bridge_name='br-int',has_traffic_filtering=True,id=f0934c58-4d53-43e5-8132-eb2195819f1f,network=Network(0263cd25-ddb2-49f9-ab5b-2f514c861684),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0934c58-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.970 253665 DEBUG os_vif [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:28:3e,bridge_name='br-int',has_traffic_filtering=True,id=f0934c58-4d53-43e5-8132-eb2195819f1f,network=Network(0263cd25-ddb2-49f9-ab5b-2f514c861684),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0934c58-4d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.971 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.971 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.972 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.974 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.974 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0934c58-4d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.975 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf0934c58-4d, col_values=(('external_ids', {'iface-id': 'f0934c58-4d53-43e5-8132-eb2195819f1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:28:3e', 'vm-uuid': 'd60d8746-9288-4829-8073-bed8cf04d748'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.976 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:58 np0005532048 NetworkManager[48920]: <info>  [1763803918.9780] manager: (tapf0934c58-4d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.978 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.984 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:58 np0005532048 nova_compute[253661]: 2025-11-22 09:31:58.985 253665 INFO os_vif [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:28:3e,bridge_name='br-int',has_traffic_filtering=True,id=f0934c58-4d53-43e5-8132-eb2195819f1f,network=Network(0263cd25-ddb2-49f9-ab5b-2f514c861684),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0934c58-4d')#033[00m
Nov 22 04:31:59 np0005532048 nova_compute[253661]: 2025-11-22 09:31:59.046 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:31:59 np0005532048 nova_compute[253661]: 2025-11-22 09:31:59.047 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:31:59 np0005532048 nova_compute[253661]: 2025-11-22 09:31:59.047 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No VIF found with MAC fa:16:3e:00:28:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:31:59 np0005532048 nova_compute[253661]: 2025-11-22 09:31:59.047 253665 INFO nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Using config drive#033[00m
Nov 22 04:31:59 np0005532048 nova_compute[253661]: 2025-11-22 09:31:59.073 253665 DEBUG nova.storage.rbd_utils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image d60d8746-9288-4829-8073-bed8cf04d748_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:31:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:31:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:31:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:31:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:31:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:31:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:31:59 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a4ede35d-ef15-4c9e-9bc5-adecdf0b0756 does not exist
Nov 22 04:31:59 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ac13eaa8-734f-49af-803f-74595ad681ff does not exist
Nov 22 04:31:59 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev fac4903c-c737-49cd-a2ab-fc6b7ad30e94 does not exist
Nov 22 04:31:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:31:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:31:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:31:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:31:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:31:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:31:59 np0005532048 nova_compute[253661]: 2025-11-22 09:31:59.584 253665 INFO nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Creating config drive at /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748/disk.config#033[00m
Nov 22 04:31:59 np0005532048 nova_compute[253661]: 2025-11-22 09:31:59.589 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplw864cib execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:31:59 np0005532048 nova_compute[253661]: 2025-11-22 09:31:59.692 253665 DEBUG nova.network.neutron [req-3a16ec20-17aa-4b69-885c-e3280cb04e24 req-c31dbcb1-f680-43a1-a164-fb895ab3ac1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updated VIF entry in instance network info cache for port f0934c58-4d53-43e5-8132-eb2195819f1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:31:59 np0005532048 nova_compute[253661]: 2025-11-22 09:31:59.693 253665 DEBUG nova.network.neutron [req-3a16ec20-17aa-4b69-885c-e3280cb04e24 req-c31dbcb1-f680-43a1-a164-fb895ab3ac1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updating instance_info_cache with network_info: [{"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:31:59 np0005532048 nova_compute[253661]: 2025-11-22 09:31:59.707 253665 DEBUG oslo_concurrency.lockutils [req-3a16ec20-17aa-4b69-885c-e3280cb04e24 req-c31dbcb1-f680-43a1-a164-fb895ab3ac1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:31:59 np0005532048 nova_compute[253661]: 2025-11-22 09:31:59.739 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplw864cib" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:31:59 np0005532048 nova_compute[253661]: 2025-11-22 09:31:59.764 253665 DEBUG nova.storage.rbd_utils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image d60d8746-9288-4829-8073-bed8cf04d748_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:31:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 134 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Nov 22 04:31:59 np0005532048 nova_compute[253661]: 2025-11-22 09:31:59.768 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748/disk.config d60d8746-9288-4829-8073-bed8cf04d748_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:31:59 np0005532048 nova_compute[253661]: 2025-11-22 09:31:59.938 253665 DEBUG oslo_concurrency.processutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748/disk.config d60d8746-9288-4829-8073-bed8cf04d748_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:31:59 np0005532048 nova_compute[253661]: 2025-11-22 09:31:59.939 253665 INFO nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Deleting local config drive /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748/disk.config because it was imported into RBD.#033[00m
Nov 22 04:31:59 np0005532048 kernel: tapf0934c58-4d: entered promiscuous mode
Nov 22 04:31:59 np0005532048 NetworkManager[48920]: <info>  [1763803919.9815] manager: (tapf0934c58-4d): new Tun device (/org/freedesktop/NetworkManager/Devices/406)
Nov 22 04:31:59 np0005532048 nova_compute[253661]: 2025-11-22 09:31:59.983 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:59 np0005532048 nova_compute[253661]: 2025-11-22 09:31:59.986 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:31:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:59Z|00980|binding|INFO|Claiming lport f0934c58-4d53-43e5-8132-eb2195819f1f for this chassis.
Nov 22 04:31:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:31:59Z|00981|binding|INFO|f0934c58-4d53-43e5-8132-eb2195819f1f: Claiming fa:16:3e:00:28:3e 10.100.0.5
Nov 22 04:31:59 np0005532048 NetworkManager[48920]: <info>  [1763803919.9968] device (tapf0934c58-4d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:31:59 np0005532048 NetworkManager[48920]: <info>  [1763803919.9977] device (tapf0934c58-4d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:31:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:59.996 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:28:3e 10.100.0.5'], port_security=['fa:16:3e:00:28:3e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd60d8746-9288-4829-8073-bed8cf04d748', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aa17f410-f219-4ce2-8b8c-5124640f3749', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12465327-1cbd-4adc-ab38-5ef26037180c, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f0934c58-4d53-43e5-8132-eb2195819f1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:31:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:59.997 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f0934c58-4d53-43e5-8132-eb2195819f1f in datapath 0263cd25-ddb2-49f9-ab5b-2f514c861684 bound to our chassis#033[00m
Nov 22 04:31:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:31:59.999 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0263cd25-ddb2-49f9-ab5b-2f514c861684#033[00m
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.015 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[990298f3-bade-44a0-8c4b-c831f70f6940]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.016 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0263cd25-d1 in ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.018 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0263cd25-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.019 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[198828a5-7a2e-47f3-aadf-0f79c35c6fda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.019 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[41c95b56-9820-4fc4-ba94-621eba424650]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:00 np0005532048 systemd-machined[215941]: New machine qemu-119-instance-00000063.
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.032 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[3b272f29-1965-48e3-a87a-df6815198641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:00 np0005532048 nova_compute[253661]: 2025-11-22 09:32:00.055 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:00 np0005532048 systemd[1]: Started Virtual Machine qemu-119-instance-00000063.
Nov 22 04:32:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:00Z|00982|binding|INFO|Setting lport f0934c58-4d53-43e5-8132-eb2195819f1f ovn-installed in OVS
Nov 22 04:32:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:00Z|00983|binding|INFO|Setting lport f0934c58-4d53-43e5-8132-eb2195819f1f up in Southbound
Nov 22 04:32:00 np0005532048 nova_compute[253661]: 2025-11-22 09:32:00.062 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.063 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9c597a-8610-4165-9fe9-b7e2544e2c11]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:32:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:32:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:32:00 np0005532048 podman[354515]: 2025-11-22 09:32:00.097335895 +0000 UTC m=+0.054340578 container create 11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.102 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3c3463c8-8d68-41e0-8da3-d87fd102399f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.110 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6c5500-06cb-4cb5-bf69-b5c6f2cc5574]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:00 np0005532048 NetworkManager[48920]: <info>  [1763803920.1118] manager: (tap0263cd25-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/407)
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.143 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe8aa45-6a56-4885-b60c-dd10576339b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.146 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ded0d488-8dae-4dc7-8026-bad56225532c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:00 np0005532048 systemd[1]: Started libpod-conmon-11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4.scope.
Nov 22 04:32:00 np0005532048 podman[354515]: 2025-11-22 09:32:00.067901271 +0000 UTC m=+0.024905984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:32:00 np0005532048 NetworkManager[48920]: <info>  [1763803920.1691] device (tap0263cd25-d0): carrier: link connected
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.176 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[23a21238-7e90-4fc8-a212-64682ba11791]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:00 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.200 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d561cb61-8820-463d-b1b9-89543e89c015]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0263cd25-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:31:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677090, 'reachable_time': 37809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354562, 'error': None, 'target': 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:00 np0005532048 podman[354515]: 2025-11-22 09:32:00.208572623 +0000 UTC m=+0.165577336 container init 11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:32:00 np0005532048 podman[354515]: 2025-11-22 09:32:00.21777764 +0000 UTC m=+0.174782323 container start 11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 04:32:00 np0005532048 kind_proskuriakova[354559]: 167 167
Nov 22 04:32:00 np0005532048 systemd[1]: libpod-11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4.scope: Deactivated successfully.
Nov 22 04:32:00 np0005532048 conmon[354559]: conmon 11305d5191f21d20f8b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4.scope/container/memory.events
Nov 22 04:32:00 np0005532048 podman[354515]: 2025-11-22 09:32:00.228778029 +0000 UTC m=+0.185782722 container attach 11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:32:00 np0005532048 podman[354515]: 2025-11-22 09:32:00.229938168 +0000 UTC m=+0.186942861 container died 11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.231 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[020d1288-d5e3-4fe0-a35a-cda55a773584]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:31f7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677090, 'tstamp': 677090}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354563, 'error': None, 'target': 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:00 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8aad9b08f2ab47fb6b08caee67c1654ff8997cd3b795703eb8bfa4d507f6352a-merged.mount: Deactivated successfully.
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.258 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa2b236-f7e8-4820-967f-c77f4e71b1df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0263cd25-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:31:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677090, 'reachable_time': 37809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354573, 'error': None, 'target': 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:00 np0005532048 podman[354515]: 2025-11-22 09:32:00.27506186 +0000 UTC m=+0.232066543 container remove 11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:32:00 np0005532048 systemd[1]: libpod-conmon-11305d5191f21d20f8b8e7cddcb40b69a78d6659afcc552d756aa061d83c8ec4.scope: Deactivated successfully.
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.307 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9ee9d303-e27c-4e3b-8205-ad50823dc563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.374 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d164ec38-d7ca-4a02-8d91-a73b0b1d0b17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.376 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0263cd25-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.377 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.377 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0263cd25-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:32:00 np0005532048 nova_compute[253661]: 2025-11-22 09:32:00.379 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:00 np0005532048 NetworkManager[48920]: <info>  [1763803920.3804] manager: (tap0263cd25-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Nov 22 04:32:00 np0005532048 kernel: tap0263cd25-d0: entered promiscuous mode
Nov 22 04:32:00 np0005532048 nova_compute[253661]: 2025-11-22 09:32:00.382 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.383 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0263cd25-d0, col_values=(('external_ids', {'iface-id': '771f6da7-e306-4e95-84a5-f08be3c60513'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:32:00 np0005532048 nova_compute[253661]: 2025-11-22 09:32:00.384 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:00Z|00984|binding|INFO|Releasing lport 771f6da7-e306-4e95-84a5-f08be3c60513 from this chassis (sb_readonly=0)
Nov 22 04:32:00 np0005532048 nova_compute[253661]: 2025-11-22 09:32:00.406 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.407 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0263cd25-ddb2-49f9-ab5b-2f514c861684.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0263cd25-ddb2-49f9-ab5b-2f514c861684.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.409 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe89999d-50b9-4057-8b6d-9ad004e793fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.410 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-0263cd25-ddb2-49f9-ab5b-2f514c861684
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/0263cd25-ddb2-49f9-ab5b-2f514c861684.pid.haproxy
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 0263cd25-ddb2-49f9-ab5b-2f514c861684
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:32:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:00.411 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'env', 'PROCESS_TAG=haproxy-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0263cd25-ddb2-49f9-ab5b-2f514c861684.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:32:00 np0005532048 podman[354591]: 2025-11-22 09:32:00.474355544 +0000 UTC m=+0.041552424 container create 598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:32:00 np0005532048 systemd[1]: Started libpod-conmon-598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c.scope.
Nov 22 04:32:00 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:32:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ed96c3288e42459deb59283c40b934c39368218ec247f98ed914a1652fb801/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:32:00 np0005532048 podman[354591]: 2025-11-22 09:32:00.455538551 +0000 UTC m=+0.022735451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:32:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ed96c3288e42459deb59283c40b934c39368218ec247f98ed914a1652fb801/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:32:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ed96c3288e42459deb59283c40b934c39368218ec247f98ed914a1652fb801/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:32:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ed96c3288e42459deb59283c40b934c39368218ec247f98ed914a1652fb801/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:32:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ed96c3288e42459deb59283c40b934c39368218ec247f98ed914a1652fb801/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:32:00 np0005532048 podman[354591]: 2025-11-22 09:32:00.568653045 +0000 UTC m=+0.135849945 container init 598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:32:00 np0005532048 podman[354591]: 2025-11-22 09:32:00.575549745 +0000 UTC m=+0.142746625 container start 598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_elbakyan, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 04:32:00 np0005532048 podman[354591]: 2025-11-22 09:32:00.579038351 +0000 UTC m=+0.146235231 container attach 598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 04:32:00 np0005532048 nova_compute[253661]: 2025-11-22 09:32:00.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:00 np0005532048 podman[354670]: 2025-11-22 09:32:00.832714365 +0000 UTC m=+0.063374781 container create 18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:32:00 np0005532048 systemd[1]: Started libpod-conmon-18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da.scope.
Nov 22 04:32:00 np0005532048 nova_compute[253661]: 2025-11-22 09:32:00.878 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803920.8782458, d60d8746-9288-4829-8073-bed8cf04d748 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:32:00 np0005532048 nova_compute[253661]: 2025-11-22 09:32:00.879 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] VM Started (Lifecycle Event)#033[00m
Nov 22 04:32:00 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:32:00 np0005532048 nova_compute[253661]: 2025-11-22 09:32:00.899 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:32:00 np0005532048 podman[354670]: 2025-11-22 09:32:00.805453473 +0000 UTC m=+0.036113909 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:32:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05970ec99c950ccf608d1273db9007048dadc52d0e0d40152159bb3df22e752b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:32:00 np0005532048 nova_compute[253661]: 2025-11-22 09:32:00.909 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803920.8784685, d60d8746-9288-4829-8073-bed8cf04d748 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:32:00 np0005532048 nova_compute[253661]: 2025-11-22 09:32:00.909 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:32:00 np0005532048 podman[354670]: 2025-11-22 09:32:00.922652618 +0000 UTC m=+0.153313064 container init 18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 22 04:32:00 np0005532048 nova_compute[253661]: 2025-11-22 09:32:00.925 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:32:00 np0005532048 nova_compute[253661]: 2025-11-22 09:32:00.929 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:32:00 np0005532048 podman[354670]: 2025-11-22 09:32:00.93085353 +0000 UTC m=+0.161513946 container start 18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 04:32:00 np0005532048 nova_compute[253661]: 2025-11-22 09:32:00.947 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:32:00 np0005532048 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[354697]: [NOTICE]   (354701) : New worker (354703) forked
Nov 22 04:32:00 np0005532048 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[354697]: [NOTICE]   (354701) : Loading success.
Nov 22 04:32:01 np0005532048 magical_elbakyan[354610]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:32:01 np0005532048 magical_elbakyan[354610]: --> relative data size: 1.0
Nov 22 04:32:01 np0005532048 magical_elbakyan[354610]: --> All data devices are unavailable
Nov 22 04:32:01 np0005532048 systemd[1]: libpod-598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c.scope: Deactivated successfully.
Nov 22 04:32:01 np0005532048 systemd[1]: libpod-598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c.scope: Consumed 1.054s CPU time.
Nov 22 04:32:01 np0005532048 podman[354736]: 2025-11-22 09:32:01.764335484 +0000 UTC m=+0.025927359 container died 598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_elbakyan, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:32:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 134 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Nov 22 04:32:01 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c9ed96c3288e42459deb59283c40b934c39368218ec247f98ed914a1652fb801-merged.mount: Deactivated successfully.
Nov 22 04:32:01 np0005532048 podman[354736]: 2025-11-22 09:32:01.821628124 +0000 UTC m=+0.083219989 container remove 598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_elbakyan, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:32:01 np0005532048 systemd[1]: libpod-conmon-598442673f5dc6bef695f621bf970bb333b86e1e2d2478be7bcf97777c71029c.scope: Deactivated successfully.
Nov 22 04:32:02 np0005532048 podman[354890]: 2025-11-22 09:32:02.521701915 +0000 UTC m=+0.048471654 container create 4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:32:02 np0005532048 systemd[1]: Started libpod-conmon-4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878.scope.
Nov 22 04:32:02 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:32:02 np0005532048 podman[354890]: 2025-11-22 09:32:02.502359319 +0000 UTC m=+0.029129118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:32:02 np0005532048 podman[354890]: 2025-11-22 09:32:02.604072912 +0000 UTC m=+0.130842681 container init 4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:32:02 np0005532048 podman[354890]: 2025-11-22 09:32:02.613599557 +0000 UTC m=+0.140369286 container start 4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 04:32:02 np0005532048 podman[354890]: 2025-11-22 09:32:02.61697663 +0000 UTC m=+0.143746379 container attach 4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 04:32:02 np0005532048 practical_morse[354906]: 167 167
Nov 22 04:32:02 np0005532048 systemd[1]: libpod-4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878.scope: Deactivated successfully.
Nov 22 04:32:02 np0005532048 podman[354890]: 2025-11-22 09:32:02.621419849 +0000 UTC m=+0.148189588 container died 4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:32:02 np0005532048 systemd[1]: var-lib-containers-storage-overlay-423749be79d1ee976dae80034cd5e08a3c7202bf9b8b0ebe326ba631a170203a-merged.mount: Deactivated successfully.
Nov 22 04:32:02 np0005532048 podman[354890]: 2025-11-22 09:32:02.710507432 +0000 UTC m=+0.237277171 container remove 4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_morse, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:32:02 np0005532048 systemd[1]: libpod-conmon-4e28b40dee1e723677b929c56e587729be758be4821ac883f83475f208358878.scope: Deactivated successfully.
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006920576732109136 of space, bias 1.0, pg target 0.20761730196327408 quantized to 32 (current 32)
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:32:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:32:02 np0005532048 podman[354930]: 2025-11-22 09:32:02.883582682 +0000 UTC m=+0.043022770 container create cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_turing, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 04:32:02 np0005532048 systemd[1]: Started libpod-conmon-cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea.scope.
Nov 22 04:32:02 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:32:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/816c9799219437323dd210a8f856f461aae5ad4c968c221a5a08a6aed1d6d9d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:32:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/816c9799219437323dd210a8f856f461aae5ad4c968c221a5a08a6aed1d6d9d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:32:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/816c9799219437323dd210a8f856f461aae5ad4c968c221a5a08a6aed1d6d9d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:32:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/816c9799219437323dd210a8f856f461aae5ad4c968c221a5a08a6aed1d6d9d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:32:02 np0005532048 podman[354930]: 2025-11-22 09:32:02.865751383 +0000 UTC m=+0.025191491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:32:02 np0005532048 podman[354930]: 2025-11-22 09:32:02.967539229 +0000 UTC m=+0.126979337 container init cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 04:32:02 np0005532048 podman[354930]: 2025-11-22 09:32:02.974419237 +0000 UTC m=+0.133859325 container start cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_turing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 04:32:02 np0005532048 podman[354930]: 2025-11-22 09:32:02.977679438 +0000 UTC m=+0.137119556 container attach cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.127 253665 DEBUG nova.compute.manager [req-bcc2471d-3660-46b2-84b3-67204e3312a3 req-9f419b3f-6ec2-4ccf-a9a8-a0072345643c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.128 253665 DEBUG oslo_concurrency.lockutils [req-bcc2471d-3660-46b2-84b3-67204e3312a3 req-9f419b3f-6ec2-4ccf-a9a8-a0072345643c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.128 253665 DEBUG oslo_concurrency.lockutils [req-bcc2471d-3660-46b2-84b3-67204e3312a3 req-9f419b3f-6ec2-4ccf-a9a8-a0072345643c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.129 253665 DEBUG oslo_concurrency.lockutils [req-bcc2471d-3660-46b2-84b3-67204e3312a3 req-9f419b3f-6ec2-4ccf-a9a8-a0072345643c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.129 253665 DEBUG nova.compute.manager [req-bcc2471d-3660-46b2-84b3-67204e3312a3 req-9f419b3f-6ec2-4ccf-a9a8-a0072345643c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Processing event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.130 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.135 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803923.1347756, 7da16450-9ec5-472a-99df-81f56ee341fc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.135 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.137 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.140 253665 INFO nova.virt.libvirt.driver [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance spawned successfully.#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.140 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.163 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.168 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.168 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.168 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.169 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.169 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.170 253665 DEBUG nova.virt.libvirt.driver [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.172 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.213 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.235 253665 INFO nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Took 11.94 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.236 253665 DEBUG nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.303 253665 INFO nova.compute.manager [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Took 12.89 seconds to build instance.#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.320 253665 DEBUG oslo_concurrency.lockutils [None req-617d534a-32f5-4790-b5b3-a191ae2512ba 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.515 253665 DEBUG nova.compute.manager [req-e7e92908-5ef3-45dd-a31d-a207ac929c0f req-881cef7b-1ff7-498c-a831-53cc07b2a256 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.515 253665 DEBUG oslo_concurrency.lockutils [req-e7e92908-5ef3-45dd-a31d-a207ac929c0f req-881cef7b-1ff7-498c-a831-53cc07b2a256 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.515 253665 DEBUG oslo_concurrency.lockutils [req-e7e92908-5ef3-45dd-a31d-a207ac929c0f req-881cef7b-1ff7-498c-a831-53cc07b2a256 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.516 253665 DEBUG oslo_concurrency.lockutils [req-e7e92908-5ef3-45dd-a31d-a207ac929c0f req-881cef7b-1ff7-498c-a831-53cc07b2a256 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.516 253665 DEBUG nova.compute.manager [req-e7e92908-5ef3-45dd-a31d-a207ac929c0f req-881cef7b-1ff7-498c-a831-53cc07b2a256 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Processing event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.517 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.522 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803923.52259, d60d8746-9288-4829-8073-bed8cf04d748 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.523 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.525 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.530 253665 INFO nova.virt.libvirt.driver [-] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Instance spawned successfully.#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.530 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.560 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.565 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.577 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.578 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.579 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.580 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.580 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.581 253665 DEBUG nova.virt.libvirt.driver [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.588 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:32:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.643 253665 INFO nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Took 10.54 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.644 253665 DEBUG nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.718 253665 INFO nova.compute.manager [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Took 11.53 seconds to build instance.#033[00m
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.734 253665 DEBUG oslo_concurrency.lockutils [None req-f31af24e-c5bf-439f-a70d-736e272d60cb ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]: {
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:    "0": [
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:        {
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "devices": [
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "/dev/loop3"
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            ],
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "lv_name": "ceph_lv0",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "lv_size": "21470642176",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "name": "ceph_lv0",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "tags": {
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.cluster_name": "ceph",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.crush_device_class": "",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.encrypted": "0",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.osd_id": "0",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.type": "block",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.vdo": "0"
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            },
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "type": "block",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "vg_name": "ceph_vg0"
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:        }
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:    ],
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:    "1": [
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:        {
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "devices": [
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "/dev/loop4"
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            ],
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "lv_name": "ceph_lv1",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "lv_size": "21470642176",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "name": "ceph_lv1",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "tags": {
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.cluster_name": "ceph",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.crush_device_class": "",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.encrypted": "0",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.osd_id": "1",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.type": "block",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.vdo": "0"
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            },
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "type": "block",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "vg_name": "ceph_vg1"
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:        }
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:    ],
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:    "2": [
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:        {
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "devices": [
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "/dev/loop5"
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            ],
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "lv_name": "ceph_lv2",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "lv_size": "21470642176",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "name": "ceph_lv2",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "tags": {
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.cluster_name": "ceph",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.crush_device_class": "",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.encrypted": "0",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.osd_id": "2",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.type": "block",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:                "ceph.vdo": "0"
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            },
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "type": "block",
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:            "vg_name": "ceph_vg2"
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:        }
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]:    ]
Nov 22 04:32:03 np0005532048 inspiring_turing[354946]: }
Nov 22 04:32:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 3.6 MiB/s wr, 70 op/s
Nov 22 04:32:03 np0005532048 systemd[1]: libpod-cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea.scope: Deactivated successfully.
Nov 22 04:32:03 np0005532048 podman[354955]: 2025-11-22 09:32:03.825437903 +0000 UTC m=+0.029615420 container died cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_turing, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:32:03 np0005532048 systemd[1]: var-lib-containers-storage-overlay-816c9799219437323dd210a8f856f461aae5ad4c968c221a5a08a6aed1d6d9d5-merged.mount: Deactivated successfully.
Nov 22 04:32:03 np0005532048 podman[354955]: 2025-11-22 09:32:03.903547226 +0000 UTC m=+0.107724733 container remove cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_turing, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:32:03 np0005532048 systemd[1]: libpod-conmon-cf547f641ef9691d85a9c51d32c18ccf7307a6b6f90dbaa06569d2b373526cea.scope: Deactivated successfully.
Nov 22 04:32:03 np0005532048 nova_compute[253661]: 2025-11-22 09:32:03.981 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:04 np0005532048 nova_compute[253661]: 2025-11-22 09:32:04.493 253665 INFO nova.compute.manager [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Rescuing#033[00m
Nov 22 04:32:04 np0005532048 nova_compute[253661]: 2025-11-22 09:32:04.495 253665 DEBUG oslo_concurrency.lockutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:32:04 np0005532048 nova_compute[253661]: 2025-11-22 09:32:04.495 253665 DEBUG oslo_concurrency.lockutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquired lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:32:04 np0005532048 nova_compute[253661]: 2025-11-22 09:32:04.496 253665 DEBUG nova.network.neutron [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:32:04 np0005532048 podman[355110]: 2025-11-22 09:32:04.641676883 +0000 UTC m=+0.049467498 container create 319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:32:04 np0005532048 systemd[1]: Started libpod-conmon-319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526.scope.
Nov 22 04:32:04 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:32:04 np0005532048 podman[355110]: 2025-11-22 09:32:04.619454236 +0000 UTC m=+0.027244871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:32:04 np0005532048 podman[355110]: 2025-11-22 09:32:04.729430573 +0000 UTC m=+0.137221198 container init 319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:32:04 np0005532048 podman[355110]: 2025-11-22 09:32:04.739268075 +0000 UTC m=+0.147058690 container start 319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 04:32:04 np0005532048 podman[355110]: 2025-11-22 09:32:04.743147971 +0000 UTC m=+0.150938606 container attach 319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:32:04 np0005532048 vibrant_dijkstra[355126]: 167 167
Nov 22 04:32:04 np0005532048 systemd[1]: libpod-319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526.scope: Deactivated successfully.
Nov 22 04:32:04 np0005532048 podman[355110]: 2025-11-22 09:32:04.748174924 +0000 UTC m=+0.155965539 container died 319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:32:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ccf20c3cc29d87eaebe20cc4577f3e1d7db9bab8f10fd78308a657d52d682db6-merged.mount: Deactivated successfully.
Nov 22 04:32:04 np0005532048 podman[355110]: 2025-11-22 09:32:04.803005394 +0000 UTC m=+0.210796009 container remove 319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dijkstra, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:32:04 np0005532048 systemd[1]: libpod-conmon-319ab81e62326d317a6f501c47229d8284280dc5407211a2c6991f66e2ee4526.scope: Deactivated successfully.
Nov 22 04:32:04 np0005532048 podman[355149]: 2025-11-22 09:32:04.999621483 +0000 UTC m=+0.061324280 container create ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noether, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 04:32:05 np0005532048 systemd[1]: Started libpod-conmon-ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4.scope.
Nov 22 04:32:05 np0005532048 podman[355149]: 2025-11-22 09:32:04.971191804 +0000 UTC m=+0.032894631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:32:05 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:32:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b128bbdcec6386d1b0943c2340f2d98abe7098f1b46b6ea85594f6d37760f28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:32:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b128bbdcec6386d1b0943c2340f2d98abe7098f1b46b6ea85594f6d37760f28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:32:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b128bbdcec6386d1b0943c2340f2d98abe7098f1b46b6ea85594f6d37760f28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:32:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b128bbdcec6386d1b0943c2340f2d98abe7098f1b46b6ea85594f6d37760f28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:32:05 np0005532048 podman[355149]: 2025-11-22 09:32:05.108171625 +0000 UTC m=+0.169874442 container init ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 04:32:05 np0005532048 podman[355149]: 2025-11-22 09:32:05.119245477 +0000 UTC m=+0.180948274 container start ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noether, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 22 04:32:05 np0005532048 podman[355149]: 2025-11-22 09:32:05.125105862 +0000 UTC m=+0.186808659 container attach ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:32:05 np0005532048 nova_compute[253661]: 2025-11-22 09:32:05.247 253665 DEBUG nova.compute.manager [req-67575d17-222b-4313-83ba-22577609ae53 req-2dcc83a2-04c4-48fc-848d-5108c4e3c064 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:05 np0005532048 nova_compute[253661]: 2025-11-22 09:32:05.248 253665 DEBUG oslo_concurrency.lockutils [req-67575d17-222b-4313-83ba-22577609ae53 req-2dcc83a2-04c4-48fc-848d-5108c4e3c064 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:05 np0005532048 nova_compute[253661]: 2025-11-22 09:32:05.248 253665 DEBUG oslo_concurrency.lockutils [req-67575d17-222b-4313-83ba-22577609ae53 req-2dcc83a2-04c4-48fc-848d-5108c4e3c064 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:05 np0005532048 nova_compute[253661]: 2025-11-22 09:32:05.249 253665 DEBUG oslo_concurrency.lockutils [req-67575d17-222b-4313-83ba-22577609ae53 req-2dcc83a2-04c4-48fc-848d-5108c4e3c064 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:05 np0005532048 nova_compute[253661]: 2025-11-22 09:32:05.249 253665 DEBUG nova.compute.manager [req-67575d17-222b-4313-83ba-22577609ae53 req-2dcc83a2-04c4-48fc-848d-5108c4e3c064 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] No waiting events found dispatching network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:32:05 np0005532048 nova_compute[253661]: 2025-11-22 09:32:05.249 253665 WARNING nova.compute.manager [req-67575d17-222b-4313-83ba-22577609ae53 req-2dcc83a2-04c4-48fc-848d-5108c4e3c064 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received unexpected event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for instance with vm_state active and task_state rescuing.#033[00m
Nov 22 04:32:05 np0005532048 nova_compute[253661]: 2025-11-22 09:32:05.595 253665 DEBUG nova.compute.manager [req-d59f8077-2991-4191-b02f-b827ae1609d9 req-7fefd907-e952-496a-af59-9e953f2078b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:05 np0005532048 nova_compute[253661]: 2025-11-22 09:32:05.595 253665 DEBUG oslo_concurrency.lockutils [req-d59f8077-2991-4191-b02f-b827ae1609d9 req-7fefd907-e952-496a-af59-9e953f2078b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:05 np0005532048 nova_compute[253661]: 2025-11-22 09:32:05.595 253665 DEBUG oslo_concurrency.lockutils [req-d59f8077-2991-4191-b02f-b827ae1609d9 req-7fefd907-e952-496a-af59-9e953f2078b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:05 np0005532048 nova_compute[253661]: 2025-11-22 09:32:05.596 253665 DEBUG oslo_concurrency.lockutils [req-d59f8077-2991-4191-b02f-b827ae1609d9 req-7fefd907-e952-496a-af59-9e953f2078b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:05 np0005532048 nova_compute[253661]: 2025-11-22 09:32:05.596 253665 DEBUG nova.compute.manager [req-d59f8077-2991-4191-b02f-b827ae1609d9 req-7fefd907-e952-496a-af59-9e953f2078b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] No waiting events found dispatching network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:32:05 np0005532048 nova_compute[253661]: 2025-11-22 09:32:05.596 253665 WARNING nova.compute.manager [req-d59f8077-2991-4191-b02f-b827ae1609d9 req-7fefd907-e952-496a-af59-9e953f2078b8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received unexpected event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:32:05 np0005532048 nova_compute[253661]: 2025-11-22 09:32:05.711 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 554 KiB/s rd, 3.0 MiB/s wr, 69 op/s
Nov 22 04:32:05 np0005532048 nova_compute[253661]: 2025-11-22 09:32:05.922 253665 DEBUG nova.network.neutron [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Updating instance_info_cache with network_info: [{"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:32:05 np0005532048 nova_compute[253661]: 2025-11-22 09:32:05.945 253665 DEBUG oslo_concurrency.lockutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Releasing lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:32:06 np0005532048 epic_noether[355166]: {
Nov 22 04:32:06 np0005532048 epic_noether[355166]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:32:06 np0005532048 epic_noether[355166]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:32:06 np0005532048 epic_noether[355166]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:32:06 np0005532048 epic_noether[355166]:        "osd_id": 1,
Nov 22 04:32:06 np0005532048 epic_noether[355166]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:32:06 np0005532048 epic_noether[355166]:        "type": "bluestore"
Nov 22 04:32:06 np0005532048 epic_noether[355166]:    },
Nov 22 04:32:06 np0005532048 epic_noether[355166]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:32:06 np0005532048 epic_noether[355166]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:32:06 np0005532048 epic_noether[355166]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:32:06 np0005532048 epic_noether[355166]:        "osd_id": 0,
Nov 22 04:32:06 np0005532048 epic_noether[355166]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:32:06 np0005532048 epic_noether[355166]:        "type": "bluestore"
Nov 22 04:32:06 np0005532048 epic_noether[355166]:    },
Nov 22 04:32:06 np0005532048 epic_noether[355166]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:32:06 np0005532048 epic_noether[355166]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:32:06 np0005532048 epic_noether[355166]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:32:06 np0005532048 epic_noether[355166]:        "osd_id": 2,
Nov 22 04:32:06 np0005532048 epic_noether[355166]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:32:06 np0005532048 epic_noether[355166]:        "type": "bluestore"
Nov 22 04:32:06 np0005532048 epic_noether[355166]:    }
Nov 22 04:32:06 np0005532048 epic_noether[355166]: }
Nov 22 04:32:06 np0005532048 systemd[1]: libpod-ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4.scope: Deactivated successfully.
Nov 22 04:32:06 np0005532048 podman[355149]: 2025-11-22 09:32:06.195828925 +0000 UTC m=+1.257531752 container died ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noether, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:32:06 np0005532048 systemd[1]: libpod-ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4.scope: Consumed 1.081s CPU time.
Nov 22 04:32:06 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7b128bbdcec6386d1b0943c2340f2d98abe7098f1b46b6ea85594f6d37760f28-merged.mount: Deactivated successfully.
Nov 22 04:32:06 np0005532048 nova_compute[253661]: 2025-11-22 09:32:06.230 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:32:06 np0005532048 podman[355149]: 2025-11-22 09:32:06.263578513 +0000 UTC m=+1.325281310 container remove ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noether, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:32:06 np0005532048 systemd[1]: libpod-conmon-ac347a85420efa41969c39dc4a49864495faa5508fd1810a2fd05e73de7909a4.scope: Deactivated successfully.
Nov 22 04:32:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:32:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:32:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:32:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:32:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 454bec53-ca39-42e4-8b52-03115dc78397 does not exist
Nov 22 04:32:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev dbcc4ee9-206b-4706-ab99-a1394f283fdb does not exist
Nov 22 04:32:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:07Z|00985|binding|INFO|Releasing lport 771f6da7-e306-4e95-84a5-f08be3c60513 from this chassis (sb_readonly=0)
Nov 22 04:32:07 np0005532048 NetworkManager[48920]: <info>  [1763803927.0895] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Nov 22 04:32:07 np0005532048 nova_compute[253661]: 2025-11-22 09:32:07.088 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:07 np0005532048 NetworkManager[48920]: <info>  [1763803927.0911] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Nov 22 04:32:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:07Z|00986|binding|INFO|Releasing lport 771f6da7-e306-4e95-84a5-f08be3c60513 from this chassis (sb_readonly=0)
Nov 22 04:32:07 np0005532048 nova_compute[253661]: 2025-11-22 09:32:07.128 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:07 np0005532048 nova_compute[253661]: 2025-11-22 09:32:07.138 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:32:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:32:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 93 op/s
Nov 22 04:32:07 np0005532048 nova_compute[253661]: 2025-11-22 09:32:07.838 253665 DEBUG nova.compute.manager [req-eccab05c-8c05-4c81-ad5e-58ffd5313fe5 req-1ac113c8-fecd-4911-9b51-2271933053b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-changed-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:07 np0005532048 nova_compute[253661]: 2025-11-22 09:32:07.839 253665 DEBUG nova.compute.manager [req-eccab05c-8c05-4c81-ad5e-58ffd5313fe5 req-1ac113c8-fecd-4911-9b51-2271933053b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Refreshing instance network info cache due to event network-changed-f0934c58-4d53-43e5-8132-eb2195819f1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:32:07 np0005532048 nova_compute[253661]: 2025-11-22 09:32:07.839 253665 DEBUG oslo_concurrency.lockutils [req-eccab05c-8c05-4c81-ad5e-58ffd5313fe5 req-1ac113c8-fecd-4911-9b51-2271933053b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:32:07 np0005532048 nova_compute[253661]: 2025-11-22 09:32:07.839 253665 DEBUG oslo_concurrency.lockutils [req-eccab05c-8c05-4c81-ad5e-58ffd5313fe5 req-1ac113c8-fecd-4911-9b51-2271933053b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:32:07 np0005532048 nova_compute[253661]: 2025-11-22 09:32:07.839 253665 DEBUG nova.network.neutron [req-eccab05c-8c05-4c81-ad5e-58ffd5313fe5 req-1ac113c8-fecd-4911-9b51-2271933053b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Refreshing network info cache for port f0934c58-4d53-43e5-8132-eb2195819f1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:32:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:32:08 np0005532048 nova_compute[253661]: 2025-11-22 09:32:08.985 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 440 KiB/s wr, 149 op/s
Nov 22 04:32:09 np0005532048 nova_compute[253661]: 2025-11-22 09:32:09.848 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:09.848 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:32:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:09.850 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:32:10 np0005532048 nova_compute[253661]: 2025-11-22 09:32:10.677 253665 DEBUG nova.network.neutron [req-eccab05c-8c05-4c81-ad5e-58ffd5313fe5 req-1ac113c8-fecd-4911-9b51-2271933053b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updated VIF entry in instance network info cache for port f0934c58-4d53-43e5-8132-eb2195819f1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:32:10 np0005532048 nova_compute[253661]: 2025-11-22 09:32:10.677 253665 DEBUG nova.network.neutron [req-eccab05c-8c05-4c81-ad5e-58ffd5313fe5 req-1ac113c8-fecd-4911-9b51-2271933053b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updating instance_info_cache with network_info: [{"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:32:10 np0005532048 nova_compute[253661]: 2025-11-22 09:32:10.699 253665 DEBUG oslo_concurrency.lockutils [req-eccab05c-8c05-4c81-ad5e-58ffd5313fe5 req-1ac113c8-fecd-4911-9b51-2271933053b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:32:10 np0005532048 nova_compute[253661]: 2025-11-22 09:32:10.712 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 24 KiB/s wr, 141 op/s
Nov 22 04:32:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:11.852 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:32:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:32:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2701429915' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:32:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:32:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2701429915' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:32:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:32:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 24 KiB/s wr, 141 op/s
Nov 22 04:32:13 np0005532048 nova_compute[253661]: 2025-11-22 09:32:13.990 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:14 np0005532048 podman[355261]: 2025-11-22 09:32:14.410660814 +0000 UTC m=+0.081853205 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 22 04:32:14 np0005532048 podman[355260]: 2025-11-22 09:32:14.423303135 +0000 UTC m=+0.094537367 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:32:15 np0005532048 nova_compute[253661]: 2025-11-22 09:32:15.764 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 170 B/s wr, 130 op/s
Nov 22 04:32:16 np0005532048 nova_compute[253661]: 2025-11-22 09:32:16.285 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 22 04:32:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 151 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.2 MiB/s wr, 167 op/s
Nov 22 04:32:18 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:18Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:28:3e 10.100.0.5
Nov 22 04:32:18 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:18Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:28:3e 10.100.0.5
Nov 22 04:32:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:32:19 np0005532048 nova_compute[253661]: 2025-11-22 09:32:19.041 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:19 np0005532048 kernel: tap5b1477f9-c3 (unregistering): left promiscuous mode
Nov 22 04:32:19 np0005532048 NetworkManager[48920]: <info>  [1763803939.3888] device (tap5b1477f9-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:32:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:19Z|00987|binding|INFO|Releasing lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 from this chassis (sb_readonly=0)
Nov 22 04:32:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:19Z|00988|binding|INFO|Setting lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 down in Southbound
Nov 22 04:32:19 np0005532048 nova_compute[253661]: 2025-11-22 09:32:19.395 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:19Z|00989|binding|INFO|Removing iface tap5b1477f9-c3 ovn-installed in OVS
Nov 22 04:32:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.404 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b5:90 10.100.0.7'], port_security=['fa:16:3e:60:b5:90 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7da16450-9ec5-472a-99df-81f56ee341fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=5b1477f9-c3cf-4bac-95a5-109e7ae8d852) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:32:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.405 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 in datapath 18e5030a-5673-404f-927e-25a76f3164ea unbound from our chassis#033[00m
Nov 22 04:32:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.406 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:32:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.408 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7e811bee-1c88-4734-8349-510ed5a76c7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:19 np0005532048 nova_compute[253661]: 2025-11-22 09:32:19.411 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:19 np0005532048 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000062.scope: Deactivated successfully.
Nov 22 04:32:19 np0005532048 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000062.scope: Consumed 13.789s CPU time.
Nov 22 04:32:19 np0005532048 systemd-machined[215941]: Machine qemu-118-instance-00000062 terminated.
Nov 22 04:32:19 np0005532048 kernel: tap5b1477f9-c3: entered promiscuous mode
Nov 22 04:32:19 np0005532048 kernel: tap5b1477f9-c3 (unregistering): left promiscuous mode
Nov 22 04:32:19 np0005532048 NetworkManager[48920]: <info>  [1763803939.6452] manager: (tap5b1477f9-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/411)
Nov 22 04:32:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:19Z|00990|binding|INFO|Claiming lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for this chassis.
Nov 22 04:32:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:19Z|00991|binding|INFO|5b1477f9-c3cf-4bac-95a5-109e7ae8d852: Claiming fa:16:3e:60:b5:90 10.100.0.7
Nov 22 04:32:19 np0005532048 nova_compute[253661]: 2025-11-22 09:32:19.648 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.659 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b5:90 10.100.0.7'], port_security=['fa:16:3e:60:b5:90 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7da16450-9ec5-472a-99df-81f56ee341fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=5b1477f9-c3cf-4bac-95a5-109e7ae8d852) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:32:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.661 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 in datapath 18e5030a-5673-404f-927e-25a76f3164ea bound to our chassis#033[00m
Nov 22 04:32:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.662 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:32:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.664 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[25a230f6-62c9-4fff-9c63-66e178a6e2de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:19Z|00992|binding|INFO|Releasing lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 from this chassis (sb_readonly=0)
Nov 22 04:32:19 np0005532048 nova_compute[253661]: 2025-11-22 09:32:19.669 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.675 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b5:90 10.100.0.7'], port_security=['fa:16:3e:60:b5:90 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7da16450-9ec5-472a-99df-81f56ee341fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=5b1477f9-c3cf-4bac-95a5-109e7ae8d852) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:32:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.677 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 in datapath 18e5030a-5673-404f-927e-25a76f3164ea unbound from our chassis#033[00m
Nov 22 04:32:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.678 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:32:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:19.679 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca487be-b666-49fc-87c8-a998910063c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:19 np0005532048 nova_compute[253661]: 2025-11-22 09:32:19.684 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 196 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.2 MiB/s wr, 188 op/s
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.095 253665 DEBUG nova.compute.manager [req-6ca2fb02-fede-4138-9d6e-ce792613acf4 req-80f69e7b-59c0-4750-a880-20685f61817d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-unplugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.095 253665 DEBUG oslo_concurrency.lockutils [req-6ca2fb02-fede-4138-9d6e-ce792613acf4 req-80f69e7b-59c0-4750-a880-20685f61817d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.096 253665 DEBUG oslo_concurrency.lockutils [req-6ca2fb02-fede-4138-9d6e-ce792613acf4 req-80f69e7b-59c0-4750-a880-20685f61817d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.096 253665 DEBUG oslo_concurrency.lockutils [req-6ca2fb02-fede-4138-9d6e-ce792613acf4 req-80f69e7b-59c0-4750-a880-20685f61817d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.096 253665 DEBUG nova.compute.manager [req-6ca2fb02-fede-4138-9d6e-ce792613acf4 req-80f69e7b-59c0-4750-a880-20685f61817d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] No waiting events found dispatching network-vif-unplugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.097 253665 WARNING nova.compute.manager [req-6ca2fb02-fede-4138-9d6e-ce792613acf4 req-80f69e7b-59c0-4750-a880-20685f61817d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received unexpected event network-vif-unplugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for instance with vm_state active and task_state rescuing.#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.317 253665 INFO nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance shutdown successfully after 14 seconds.#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.325 253665 INFO nova.virt.libvirt.driver [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance destroyed successfully.#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.326 253665 DEBUG nova.objects.instance [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.351 253665 INFO nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Attempting rescue#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.353 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.360 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.360 253665 INFO nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Creating image(s)#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.428 253665 DEBUG nova.storage.rbd_utils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.434 253665 DEBUG nova.objects.instance [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.480 253665 DEBUG nova.storage.rbd_utils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.519 253665 DEBUG nova.storage.rbd_utils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.526 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.639 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.640 253665 DEBUG oslo_concurrency.lockutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.641 253665 DEBUG oslo_concurrency.lockutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.641 253665 DEBUG oslo_concurrency.lockutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.666 253665 DEBUG nova.storage.rbd_utils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.670 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 7da16450-9ec5-472a-99df-81f56ee341fc_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:20 np0005532048 nova_compute[253661]: 2025-11-22 09:32:20.767 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.437 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 7da16450-9ec5-472a-99df-81f56ee341fc_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.767s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.439 253665 DEBUG nova.objects.instance [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'migration_context' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.456 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.458 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Start _get_guest_xml network_info=[{"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1536653437-network", "vif_mac": "fa:16:3e:60:b5:90"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.458 253665 DEBUG nova.objects.instance [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'resources' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:21 np0005532048 podman[355409]: 2025-11-22 09:32:21.480638145 +0000 UTC m=+0.145745438 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.487 253665 WARNING nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.499 253665 DEBUG nova.virt.libvirt.host [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.500 253665 DEBUG nova.virt.libvirt.host [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.504 253665 DEBUG nova.virt.libvirt.host [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.505 253665 DEBUG nova.virt.libvirt.host [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.505 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.505 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.506 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.506 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.506 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.506 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.507 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.507 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.507 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.507 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.507 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.508 253665 DEBUG nova.virt.hardware [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.508 253665 DEBUG nova.objects.instance [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:21 np0005532048 nova_compute[253661]: 2025-11-22 09:32:21.524 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 196 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 670 KiB/s rd, 4.2 MiB/s wr, 119 op/s
Nov 22 04:32:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:32:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1498760101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:32:22 np0005532048 nova_compute[253661]: 2025-11-22 09:32:22.052 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:22 np0005532048 nova_compute[253661]: 2025-11-22 09:32:22.054 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:22 np0005532048 nova_compute[253661]: 2025-11-22 09:32:22.178 253665 DEBUG nova.compute.manager [req-78273875-8f05-41b5-95c0-db87c2d3f549 req-2facd968-cb37-4e7a-b0a7-d39117da81e1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:22 np0005532048 nova_compute[253661]: 2025-11-22 09:32:22.178 253665 DEBUG oslo_concurrency.lockutils [req-78273875-8f05-41b5-95c0-db87c2d3f549 req-2facd968-cb37-4e7a-b0a7-d39117da81e1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:22 np0005532048 nova_compute[253661]: 2025-11-22 09:32:22.179 253665 DEBUG oslo_concurrency.lockutils [req-78273875-8f05-41b5-95c0-db87c2d3f549 req-2facd968-cb37-4e7a-b0a7-d39117da81e1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:22 np0005532048 nova_compute[253661]: 2025-11-22 09:32:22.179 253665 DEBUG oslo_concurrency.lockutils [req-78273875-8f05-41b5-95c0-db87c2d3f549 req-2facd968-cb37-4e7a-b0a7-d39117da81e1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:22 np0005532048 nova_compute[253661]: 2025-11-22 09:32:22.179 253665 DEBUG nova.compute.manager [req-78273875-8f05-41b5-95c0-db87c2d3f549 req-2facd968-cb37-4e7a-b0a7-d39117da81e1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] No waiting events found dispatching network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:32:22 np0005532048 nova_compute[253661]: 2025-11-22 09:32:22.179 253665 WARNING nova.compute.manager [req-78273875-8f05-41b5-95c0-db87c2d3f549 req-2facd968-cb37-4e7a-b0a7-d39117da81e1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received unexpected event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for instance with vm_state active and task_state rescuing.#033[00m
Nov 22 04:32:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:32:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/819269320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:32:22 np0005532048 nova_compute[253661]: 2025-11-22 09:32:22.546 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:22 np0005532048 nova_compute[253661]: 2025-11-22 09:32:22.548 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:32:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:32:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:32:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:32:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:32:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/518424414' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.075 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.077 253665 DEBUG nova.virt.libvirt.vif [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:31:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1786356758',display_name='tempest-ServerRescueTestJSON-server-1786356758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1786356758',id=98,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:32:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='93c8020137e04db486facc42cfe30f23',ramdisk_id='',reservation_id='r-nx025m1d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-264324954',owner_user_name='tempest-ServerRescueTestJSON-264324954-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:32:03Z,user_data=None,user_id='04e47309bea74c04b0750912db283ae1',uuid=7da16450-9ec5-472a-99df-81f56ee341fc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1536653437-network", "vif_mac": "fa:16:3e:60:b5:90"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.078 253665 DEBUG nova.network.os_vif_util [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converting VIF {"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1536653437-network", "vif_mac": "fa:16:3e:60:b5:90"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.079 253665 DEBUG nova.network.os_vif_util [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:b5:90,bridge_name='br-int',has_traffic_filtering=True,id=5b1477f9-c3cf-4bac-95a5-109e7ae8d852,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b1477f9-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.081 253665 DEBUG nova.objects.instance [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.098 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  <uuid>7da16450-9ec5-472a-99df-81f56ee341fc</uuid>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  <name>instance-00000062</name>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerRescueTestJSON-server-1786356758</nova:name>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:32:21</nova:creationTime>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:        <nova:user uuid="04e47309bea74c04b0750912db283ae1">tempest-ServerRescueTestJSON-264324954-project-member</nova:user>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:        <nova:project uuid="93c8020137e04db486facc42cfe30f23">tempest-ServerRescueTestJSON-264324954</nova:project>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:        <nova:port uuid="5b1477f9-c3cf-4bac-95a5-109e7ae8d852">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <entry name="serial">7da16450-9ec5-472a-99df-81f56ee341fc</entry>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <entry name="uuid">7da16450-9ec5-472a-99df-81f56ee341fc</entry>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/7da16450-9ec5-472a-99df-81f56ee341fc_disk.rescue">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/7da16450-9ec5-472a-99df-81f56ee341fc_disk">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <target dev="vdb" bus="virtio"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/7da16450-9ec5-472a-99df-81f56ee341fc_disk.config.rescue">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:60:b5:90"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <target dev="tap5b1477f9-c3"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/console.log" append="off"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:32:23 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:32:23 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:32:23 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:32:23 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.107 253665 INFO nova.virt.libvirt.driver [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance destroyed successfully.#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.164 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.165 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.165 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.165 253665 DEBUG nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No VIF found with MAC fa:16:3e:60:b5:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.166 253665 INFO nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Using config drive#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.191 253665 DEBUG nova.storage.rbd_utils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.211 253665 DEBUG nova.objects.instance [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.238 253665 DEBUG nova.objects.instance [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'keypairs' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.428 253665 INFO nova.compute.manager [None req-26950718-f93c-4fba-8e83-7bbf30983c59 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Get console output#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.442 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.645003) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803943645057, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 628, "num_deletes": 255, "total_data_size": 695058, "memory_usage": 707936, "flush_reason": "Manual Compaction"}
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803943652673, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 688873, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42717, "largest_seqno": 43344, "table_properties": {"data_size": 685433, "index_size": 1284, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7748, "raw_average_key_size": 18, "raw_value_size": 678583, "raw_average_value_size": 1655, "num_data_blocks": 57, "num_entries": 410, "num_filter_entries": 410, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803900, "oldest_key_time": 1763803900, "file_creation_time": 1763803943, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 7772 microseconds, and 4548 cpu microseconds.
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.652768) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 688873 bytes OK
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.652795) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.654991) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.655012) EVENT_LOG_v1 {"time_micros": 1763803943655005, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.655034) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 691631, prev total WAL file size 691631, number of live WAL files 2.
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.655739) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353034' seq:72057594037927935, type:22 .. '6C6F676D0031373535' seq:0, type:0; will stop at (end)
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(672KB)], [95(8024KB)]
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803943655787, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 8906068, "oldest_snapshot_seqno": -1}
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6604 keys, 8772895 bytes, temperature: kUnknown
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803943729592, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 8772895, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8728635, "index_size": 26676, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16517, "raw_key_size": 169360, "raw_average_key_size": 25, "raw_value_size": 8610263, "raw_average_value_size": 1303, "num_data_blocks": 1065, "num_entries": 6604, "num_filter_entries": 6604, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803943, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.729889) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 8772895 bytes
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.731359) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.5 rd, 118.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 7.8 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(25.7) write-amplify(12.7) OK, records in: 7126, records dropped: 522 output_compression: NoCompression
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.731386) EVENT_LOG_v1 {"time_micros": 1763803943731374, "job": 56, "event": "compaction_finished", "compaction_time_micros": 73893, "compaction_time_cpu_micros": 42070, "output_level": 6, "num_output_files": 1, "total_output_size": 8772895, "num_input_records": 7126, "num_output_records": 6604, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803943731822, "job": 56, "event": "table_file_deletion", "file_number": 97}
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803943734921, "job": 56, "event": "table_file_deletion", "file_number": 95}
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.655587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.734966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.734971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.734974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.734977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:32:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:32:23.734979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.754 253665 DEBUG oslo_concurrency.lockutils [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.755 253665 DEBUG oslo_concurrency.lockutils [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.756 253665 INFO nova.compute.manager [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Rebooting instance#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.774 253665 DEBUG oslo_concurrency.lockutils [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.774 253665 DEBUG oslo_concurrency.lockutils [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquired lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:32:23 np0005532048 nova_compute[253661]: 2025-11-22 09:32:23.775 253665 DEBUG nova.network.neutron [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:32:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 213 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 704 KiB/s rd, 4.5 MiB/s wr, 132 op/s
Nov 22 04:32:24 np0005532048 nova_compute[253661]: 2025-11-22 09:32:24.046 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:24 np0005532048 nova_compute[253661]: 2025-11-22 09:32:24.110 253665 INFO nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Creating config drive at /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config.rescue#033[00m
Nov 22 04:32:24 np0005532048 nova_compute[253661]: 2025-11-22 09:32:24.119 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0yu9rdqh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:24 np0005532048 nova_compute[253661]: 2025-11-22 09:32:24.286 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0yu9rdqh" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:24 np0005532048 nova_compute[253661]: 2025-11-22 09:32:24.335 253665 DEBUG nova.storage.rbd_utils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:24 np0005532048 nova_compute[253661]: 2025-11-22 09:32:24.341 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config.rescue 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:24 np0005532048 nova_compute[253661]: 2025-11-22 09:32:24.542 253665 DEBUG oslo_concurrency.processutils [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config.rescue 7da16450-9ec5-472a-99df-81f56ee341fc_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:24 np0005532048 nova_compute[253661]: 2025-11-22 09:32:24.545 253665 INFO nova.virt.libvirt.driver [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Deleting local config drive /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc/disk.config.rescue because it was imported into RBD.#033[00m
Nov 22 04:32:24 np0005532048 kernel: tap5b1477f9-c3: entered promiscuous mode
Nov 22 04:32:24 np0005532048 NetworkManager[48920]: <info>  [1763803944.6327] manager: (tap5b1477f9-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/412)
Nov 22 04:32:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:24Z|00993|binding|INFO|Claiming lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for this chassis.
Nov 22 04:32:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:24Z|00994|binding|INFO|5b1477f9-c3cf-4bac-95a5-109e7ae8d852: Claiming fa:16:3e:60:b5:90 10.100.0.7
Nov 22 04:32:24 np0005532048 nova_compute[253661]: 2025-11-22 09:32:24.634 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:24.640 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b5:90 10.100.0.7'], port_security=['fa:16:3e:60:b5:90 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7da16450-9ec5-472a-99df-81f56ee341fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '5', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=5b1477f9-c3cf-4bac-95a5-109e7ae8d852) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:32:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:24.642 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 in datapath 18e5030a-5673-404f-927e-25a76f3164ea bound to our chassis#033[00m
Nov 22 04:32:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:24.643 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:32:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:24.644 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94a79e4b-3237-40c5-9ae3-68f9f596fd48]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:24Z|00995|binding|INFO|Setting lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 up in Southbound
Nov 22 04:32:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:24Z|00996|binding|INFO|Setting lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 ovn-installed in OVS
Nov 22 04:32:24 np0005532048 nova_compute[253661]: 2025-11-22 09:32:24.657 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:24 np0005532048 nova_compute[253661]: 2025-11-22 09:32:24.665 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:24 np0005532048 systemd-udevd[355574]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:32:24 np0005532048 systemd-machined[215941]: New machine qemu-120-instance-00000062.
Nov 22 04:32:24 np0005532048 NetworkManager[48920]: <info>  [1763803944.7045] device (tap5b1477f9-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:32:24 np0005532048 NetworkManager[48920]: <info>  [1763803944.7060] device (tap5b1477f9-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:32:24 np0005532048 systemd[1]: Started Virtual Machine qemu-120-instance-00000062.
Nov 22 04:32:25 np0005532048 nova_compute[253661]: 2025-11-22 09:32:25.769 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 246 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 712 KiB/s rd, 6.1 MiB/s wr, 146 op/s
Nov 22 04:32:26 np0005532048 nova_compute[253661]: 2025-11-22 09:32:26.227 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 7da16450-9ec5-472a-99df-81f56ee341fc due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:32:26 np0005532048 nova_compute[253661]: 2025-11-22 09:32:26.228 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803946.2261624, 7da16450-9ec5-472a-99df-81f56ee341fc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:32:26 np0005532048 nova_compute[253661]: 2025-11-22 09:32:26.229 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:32:26 np0005532048 nova_compute[253661]: 2025-11-22 09:32:26.238 253665 DEBUG nova.compute.manager [None req-77e8c320-cf6e-4dbe-ace1-8971144e21ef 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:32:26 np0005532048 nova_compute[253661]: 2025-11-22 09:32:26.251 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:32:26 np0005532048 nova_compute[253661]: 2025-11-22 09:32:26.256 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:32:26 np0005532048 nova_compute[253661]: 2025-11-22 09:32:26.278 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 22 04:32:26 np0005532048 nova_compute[253661]: 2025-11-22 09:32:26.279 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803946.2278614, 7da16450-9ec5-472a-99df-81f56ee341fc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:32:26 np0005532048 nova_compute[253661]: 2025-11-22 09:32:26.279 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] VM Started (Lifecycle Event)#033[00m
Nov 22 04:32:26 np0005532048 nova_compute[253661]: 2025-11-22 09:32:26.303 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:32:26 np0005532048 nova_compute[253661]: 2025-11-22 09:32:26.310 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:32:26 np0005532048 nova_compute[253661]: 2025-11-22 09:32:26.940 253665 DEBUG nova.network.neutron [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updating instance_info_cache with network_info: [{"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:32:26 np0005532048 nova_compute[253661]: 2025-11-22 09:32:26.963 253665 DEBUG oslo_concurrency.lockutils [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Releasing lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:32:26 np0005532048 nova_compute[253661]: 2025-11-22 09:32:26.965 253665 DEBUG nova.compute.manager [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:32:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 246 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 717 KiB/s rd, 6.1 MiB/s wr, 153 op/s
Nov 22 04:32:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:27.975 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:27.976 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:27.977 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:32:29 np0005532048 nova_compute[253661]: 2025-11-22 09:32:29.050 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:29 np0005532048 kernel: tapf0934c58-4d (unregistering): left promiscuous mode
Nov 22 04:32:29 np0005532048 NetworkManager[48920]: <info>  [1763803949.3864] device (tapf0934c58-4d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:32:29 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:29Z|00997|binding|INFO|Releasing lport f0934c58-4d53-43e5-8132-eb2195819f1f from this chassis (sb_readonly=0)
Nov 22 04:32:29 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:29Z|00998|binding|INFO|Setting lport f0934c58-4d53-43e5-8132-eb2195819f1f down in Southbound
Nov 22 04:32:29 np0005532048 nova_compute[253661]: 2025-11-22 09:32:29.394 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:29 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:29Z|00999|binding|INFO|Removing iface tapf0934c58-4d ovn-installed in OVS
Nov 22 04:32:29 np0005532048 nova_compute[253661]: 2025-11-22 09:32:29.398 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.406 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:28:3e 10.100.0.5'], port_security=['fa:16:3e:00:28:3e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd60d8746-9288-4829-8073-bed8cf04d748', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aa17f410-f219-4ce2-8b8c-5124640f3749', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12465327-1cbd-4adc-ab38-5ef26037180c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f0934c58-4d53-43e5-8132-eb2195819f1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:32:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.407 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f0934c58-4d53-43e5-8132-eb2195819f1f in datapath 0263cd25-ddb2-49f9-ab5b-2f514c861684 unbound from our chassis#033[00m
Nov 22 04:32:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.409 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0263cd25-ddb2-49f9-ab5b-2f514c861684, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:32:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.410 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c146bdaf-128a-4294-83d0-5f62c72401dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.410 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 namespace which is not needed anymore#033[00m
Nov 22 04:32:29 np0005532048 nova_compute[253661]: 2025-11-22 09:32:29.427 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:29 np0005532048 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000063.scope: Deactivated successfully.
Nov 22 04:32:29 np0005532048 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000063.scope: Consumed 14.928s CPU time.
Nov 22 04:32:29 np0005532048 systemd-machined[215941]: Machine qemu-119-instance-00000063 terminated.
Nov 22 04:32:29 np0005532048 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[354697]: [NOTICE]   (354701) : haproxy version is 2.8.14-c23fe91
Nov 22 04:32:29 np0005532048 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[354697]: [NOTICE]   (354701) : path to executable is /usr/sbin/haproxy
Nov 22 04:32:29 np0005532048 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[354697]: [WARNING]  (354701) : Exiting Master process...
Nov 22 04:32:29 np0005532048 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[354697]: [ALERT]    (354701) : Current worker (354703) exited with code 143 (Terminated)
Nov 22 04:32:29 np0005532048 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[354697]: [WARNING]  (354701) : All workers exited. Exiting... (0)
Nov 22 04:32:29 np0005532048 systemd[1]: libpod-18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da.scope: Deactivated successfully.
Nov 22 04:32:29 np0005532048 podman[355669]: 2025-11-22 09:32:29.584086393 +0000 UTC m=+0.055678431 container died 18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:32:29 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da-userdata-shm.mount: Deactivated successfully.
Nov 22 04:32:29 np0005532048 systemd[1]: var-lib-containers-storage-overlay-05970ec99c950ccf608d1273db9007048dadc52d0e0d40152159bb3df22e752b-merged.mount: Deactivated successfully.
Nov 22 04:32:29 np0005532048 nova_compute[253661]: 2025-11-22 09:32:29.630 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:29 np0005532048 podman[355669]: 2025-11-22 09:32:29.644635664 +0000 UTC m=+0.116227702 container cleanup 18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 04:32:29 np0005532048 systemd[1]: libpod-conmon-18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da.scope: Deactivated successfully.
Nov 22 04:32:29 np0005532048 nova_compute[253661]: 2025-11-22 09:32:29.683 253665 DEBUG nova.compute.manager [req-e4323778-826d-43fa-98b0-89aac5420c13 req-428fd172-8ab2-499f-b1c4-f82ef2c1fad8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-unplugged-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:29 np0005532048 nova_compute[253661]: 2025-11-22 09:32:29.684 253665 DEBUG oslo_concurrency.lockutils [req-e4323778-826d-43fa-98b0-89aac5420c13 req-428fd172-8ab2-499f-b1c4-f82ef2c1fad8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:29 np0005532048 nova_compute[253661]: 2025-11-22 09:32:29.684 253665 DEBUG oslo_concurrency.lockutils [req-e4323778-826d-43fa-98b0-89aac5420c13 req-428fd172-8ab2-499f-b1c4-f82ef2c1fad8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:29 np0005532048 nova_compute[253661]: 2025-11-22 09:32:29.684 253665 DEBUG oslo_concurrency.lockutils [req-e4323778-826d-43fa-98b0-89aac5420c13 req-428fd172-8ab2-499f-b1c4-f82ef2c1fad8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:29 np0005532048 nova_compute[253661]: 2025-11-22 09:32:29.684 253665 DEBUG nova.compute.manager [req-e4323778-826d-43fa-98b0-89aac5420c13 req-428fd172-8ab2-499f-b1c4-f82ef2c1fad8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] No waiting events found dispatching network-vif-unplugged-f0934c58-4d53-43e5-8132-eb2195819f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:32:29 np0005532048 nova_compute[253661]: 2025-11-22 09:32:29.685 253665 WARNING nova.compute.manager [req-e4323778-826d-43fa-98b0-89aac5420c13 req-428fd172-8ab2-499f-b1c4-f82ef2c1fad8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received unexpected event network-vif-unplugged-f0934c58-4d53-43e5-8132-eb2195819f1f for instance with vm_state active and task_state reboot_started.#033[00m
Nov 22 04:32:29 np0005532048 podman[355706]: 2025-11-22 09:32:29.730139728 +0000 UTC m=+0.054700407 container remove 18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 22 04:32:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.738 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6a5126-7aa2-4ab5-b853-4641d63af7fb]: (4, ('Sat Nov 22 09:32:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 (18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da)\n18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da\nSat Nov 22 09:32:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 (18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da)\n18f1ec2d3cf7be3f387a563541faac0a68c11a7c0f6446d7cfa7f083e90922da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.740 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[822923fb-efc0-4e2d-9384-06fece24f475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.741 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0263cd25-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:32:29 np0005532048 nova_compute[253661]: 2025-11-22 09:32:29.744 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:29 np0005532048 kernel: tap0263cd25-d0: left promiscuous mode
Nov 22 04:32:29 np0005532048 nova_compute[253661]: 2025-11-22 09:32:29.763 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.767 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[31a156fc-396a-4b62-9699-aa9340bdd9ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 246 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.8 MiB/s wr, 166 op/s
Nov 22 04:32:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.791 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0e6019-8fdd-43e3-a3c5-e7a1c1c93cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.793 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[79f63a22-2d15-47b5-826d-46e4563b6a13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.812 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a0a21a-2005-4119-97d7-56014d3007c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677083, 'reachable_time': 32880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355725, 'error': None, 'target': 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.819 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:32:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:29.819 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[427004bc-a955-4086-8685-8d329e899696]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:29 np0005532048 systemd[1]: run-netns-ovnmeta\x2d0263cd25\x2dddb2\x2d49f9\x2dab5b\x2d2f514c861684.mount: Deactivated successfully.
Nov 22 04:32:29 np0005532048 nova_compute[253661]: 2025-11-22 09:32:29.954 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:29 np0005532048 nova_compute[253661]: 2025-11-22 09:32:29.955 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:29 np0005532048 nova_compute[253661]: 2025-11-22 09:32:29.986 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.094 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.095 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.104 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.104 253665 INFO nova.compute.claims [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.110 253665 INFO nova.virt.libvirt.driver [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Instance shutdown successfully.#033[00m
Nov 22 04:32:30 np0005532048 kernel: tapf0934c58-4d: entered promiscuous mode
Nov 22 04:32:30 np0005532048 systemd-udevd[355650]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:32:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:30Z|01000|binding|INFO|Claiming lport f0934c58-4d53-43e5-8132-eb2195819f1f for this chassis.
Nov 22 04:32:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:30Z|01001|binding|INFO|f0934c58-4d53-43e5-8132-eb2195819f1f: Claiming fa:16:3e:00:28:3e 10.100.0.5
Nov 22 04:32:30 np0005532048 NetworkManager[48920]: <info>  [1763803950.1888] manager: (tapf0934c58-4d): new Tun device (/org/freedesktop/NetworkManager/Devices/413)
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.186 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.197 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:28:3e 10.100.0.5'], port_security=['fa:16:3e:00:28:3e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd60d8746-9288-4829-8073-bed8cf04d748', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'aa17f410-f219-4ce2-8b8c-5124640f3749', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12465327-1cbd-4adc-ab38-5ef26037180c, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f0934c58-4d53-43e5-8132-eb2195819f1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.199 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f0934c58-4d53-43e5-8132-eb2195819f1f in datapath 0263cd25-ddb2-49f9-ab5b-2f514c861684 bound to our chassis#033[00m
Nov 22 04:32:30 np0005532048 NetworkManager[48920]: <info>  [1763803950.1997] device (tapf0934c58-4d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.200 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0263cd25-ddb2-49f9-ab5b-2f514c861684#033[00m
Nov 22 04:32:30 np0005532048 NetworkManager[48920]: <info>  [1763803950.2017] device (tapf0934c58-4d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:32:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:30Z|01002|binding|INFO|Setting lport f0934c58-4d53-43e5-8132-eb2195819f1f ovn-installed in OVS
Nov 22 04:32:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:30Z|01003|binding|INFO|Setting lport f0934c58-4d53-43e5-8132-eb2195819f1f up in Southbound
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.205 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.214 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.222 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[faf58e14-0af1-4283-89ad-26f484e694c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.224 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0263cd25-d1 in ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.228 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0263cd25-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.228 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b0dd341b-236e-41bb-95c1-9e92dd600c0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.229 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b56b88-a795-4fe3-b532-71538f27b76a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.243 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[ab70ba73-06d5-4828-abf4-a3b16dac6337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.242 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:30 np0005532048 systemd-machined[215941]: New machine qemu-121-instance-00000063.
Nov 22 04:32:30 np0005532048 systemd[1]: Started Virtual Machine qemu-121-instance-00000063.
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.272 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae95088-8edf-40c7-b023-088d2c147848]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.320 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ba108be4-a60f-49c9-93b4-fdc16c25bad4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:30 np0005532048 NetworkManager[48920]: <info>  [1763803950.3275] manager: (tap0263cd25-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/414)
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.329 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9ee66dfc-e9ab-4837-9331-882614683c2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.371 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8f30e21b-7b25-4413-84b1-147f126f7ddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.375 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7e2fa3-379d-4b2a-beba-14d88c372301]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:30 np0005532048 NetworkManager[48920]: <info>  [1763803950.4131] device (tap0263cd25-d0): carrier: link connected
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.420 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8e0f96-4326-4c82-a95c-5b8b013afb25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.448 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0eb291-667b-4338-ba90-ba32eec8f33f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0263cd25-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:31:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680115, 'reachable_time': 16988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355789, 'error': None, 'target': 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.483 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[34447df0-b7fa-40ff-b69f-5e0605711745]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:31f7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 680115, 'tstamp': 680115}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355790, 'error': None, 'target': 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.503 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c233f63c-c036-4385-9821-e39b0e8fdfc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0263cd25-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:31:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680115, 'reachable_time': 16988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355791, 'error': None, 'target': 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.556 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[078e0e2c-0dfb-4fa1-b248-7d25aa5f0122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.627 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[88af3d8c-acf5-47fd-81b3-ebe38273ae6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.629 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0263cd25-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.630 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.630 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0263cd25-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:32:30 np0005532048 kernel: tap0263cd25-d0: entered promiscuous mode
Nov 22 04:32:30 np0005532048 NetworkManager[48920]: <info>  [1763803950.6329] manager: (tap0263cd25-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.632 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.637 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0263cd25-d0, col_values=(('external_ids', {'iface-id': '771f6da7-e306-4e95-84a5-f08be3c60513'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.638 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:30Z|01004|binding|INFO|Releasing lport 771f6da7-e306-4e95-84a5-f08be3c60513 from this chassis (sb_readonly=0)
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.640 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0263cd25-ddb2-49f9-ab5b-2f514c861684.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0263cd25-ddb2-49f9-ab5b-2f514c861684.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.641 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6da271-4572-46cd-8c05-5e8c4f663e15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.642 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-0263cd25-ddb2-49f9-ab5b-2f514c861684
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/0263cd25-ddb2-49f9-ab5b-2f514c861684.pid.haproxy
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 0263cd25-ddb2-49f9-ab5b-2f514c861684
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:32:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:30.644 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'env', 'PROCESS_TAG=haproxy-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0263cd25-ddb2-49f9-ab5b-2f514c861684.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.653 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:32:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1326492962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.761 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.770 253665 DEBUG nova.compute.provider_tree [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.801 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.822 253665 DEBUG nova.scheduler.client.report [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.852 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.853 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.932 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.933 253665 DEBUG nova.network.neutron [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.962 253665 INFO nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:32:30 np0005532048 nova_compute[253661]: 2025-11-22 09:32:30.982 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:32:31 np0005532048 podman[355843]: 2025-11-22 09:32:31.069671116 +0000 UTC m=+0.069561643 container create ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.091 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.093 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.093 253665 INFO nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Creating image(s)#033[00m
Nov 22 04:32:31 np0005532048 systemd[1]: Started libpod-conmon-ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18.scope.
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.130 253665 DEBUG nova.storage.rbd_utils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:31 np0005532048 podman[355843]: 2025-11-22 09:32:31.041028201 +0000 UTC m=+0.040918748 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:32:31 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:32:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da5c3a54b658459714df9c8a3e587b9a9c6c388db9cb59b269295b55bea989a9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.169 253665 DEBUG nova.storage.rbd_utils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:31 np0005532048 podman[355843]: 2025-11-22 09:32:31.171507913 +0000 UTC m=+0.171398500 container init ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 22 04:32:31 np0005532048 podman[355843]: 2025-11-22 09:32:31.179233093 +0000 UTC m=+0.179123630 container start ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:32:31 np0005532048 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[355897]: [NOTICE]   (355929) : New worker (355940) forked
Nov 22 04:32:31 np0005532048 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[355897]: [NOTICE]   (355929) : Loading success.
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.206 253665 DEBUG nova.storage.rbd_utils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.211 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.258 253665 DEBUG nova.policy [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '04e47309bea74c04b0750912db283ae1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '93c8020137e04db486facc42cfe30f23', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.263 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for d60d8746-9288-4829-8073-bed8cf04d748 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.264 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803951.1365564, d60d8746-9288-4829-8073-bed8cf04d748 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.264 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.275 253665 INFO nova.virt.libvirt.driver [-] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Instance running successfully.#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.276 253665 INFO nova.virt.libvirt.driver [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Instance soft rebooted successfully.#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.277 253665 DEBUG nova.compute.manager [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.297 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.298 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.299 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.299 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.322 253665 DEBUG nova.storage.rbd_utils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.331 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.383 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.386 253665 DEBUG oslo_concurrency.lockutils [None req-3f4d9fb7-ccaa-40e0-b44d-325de71df87d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.391 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.407 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803951.1385937, d60d8746-9288-4829-8073-bed8cf04d748 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.408 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] VM Started (Lifecycle Event)#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.421 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.428 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.661 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.729 253665 DEBUG nova.compute.manager [req-3fb9c755-cfc8-407e-9099-32a5fc678d99 req-7cc1de26-d16c-4139-b979-43dacde87238 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.730 253665 DEBUG oslo_concurrency.lockutils [req-3fb9c755-cfc8-407e-9099-32a5fc678d99 req-7cc1de26-d16c-4139-b979-43dacde87238 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.731 253665 DEBUG oslo_concurrency.lockutils [req-3fb9c755-cfc8-407e-9099-32a5fc678d99 req-7cc1de26-d16c-4139-b979-43dacde87238 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.731 253665 DEBUG oslo_concurrency.lockutils [req-3fb9c755-cfc8-407e-9099-32a5fc678d99 req-7cc1de26-d16c-4139-b979-43dacde87238 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.732 253665 DEBUG nova.compute.manager [req-3fb9c755-cfc8-407e-9099-32a5fc678d99 req-7cc1de26-d16c-4139-b979-43dacde87238 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] No waiting events found dispatching network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.732 253665 WARNING nova.compute.manager [req-3fb9c755-cfc8-407e-9099-32a5fc678d99 req-7cc1de26-d16c-4139-b979-43dacde87238 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received unexpected event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for instance with vm_state rescued and task_state None.#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.744 253665 DEBUG nova.storage.rbd_utils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] resizing rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:32:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 246 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.810 253665 DEBUG nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.811 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.812 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.812 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.813 253665 DEBUG nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] No waiting events found dispatching network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.814 253665 WARNING nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received unexpected event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.814 253665 DEBUG nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.815 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.816 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.816 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.817 253665 DEBUG nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] No waiting events found dispatching network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.819 253665 WARNING nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received unexpected event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.820 253665 DEBUG nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.820 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.821 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.821 253665 DEBUG oslo_concurrency.lockutils [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.821 253665 DEBUG nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] No waiting events found dispatching network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.821 253665 WARNING nova.compute.manager [req-07c531a9-f705-4ad1-8098-5402696a5f1f req-8e6966da-4d28-4e75-bfaa-f1367709eb0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received unexpected event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.862 253665 DEBUG nova.objects.instance [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'migration_context' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.878 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.879 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Ensure instance console log exists: /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.880 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.880 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:31 np0005532048 nova_compute[253661]: 2025-11-22 09:32:31.881 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:32 np0005532048 nova_compute[253661]: 2025-11-22 09:32:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:32:32 np0005532048 nova_compute[253661]: 2025-11-22 09:32:32.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:32:32 np0005532048 nova_compute[253661]: 2025-11-22 09:32:32.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:32:32 np0005532048 nova_compute[253661]: 2025-11-22 09:32:32.257 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 22 04:32:32 np0005532048 nova_compute[253661]: 2025-11-22 09:32:32.432 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:32:32 np0005532048 nova_compute[253661]: 2025-11-22 09:32:32.433 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:32:32 np0005532048 nova_compute[253661]: 2025-11-22 09:32:32.434 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:32:32 np0005532048 nova_compute[253661]: 2025-11-22 09:32:32.434 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:32 np0005532048 nova_compute[253661]: 2025-11-22 09:32:32.963 253665 DEBUG nova.network.neutron [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Successfully created port: 4b489529-5b94-46ce-9810-23bef9215c04 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:32:33 np0005532048 nova_compute[253661]: 2025-11-22 09:32:33.613 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Updating instance_info_cache with network_info: [{"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:32:33 np0005532048 nova_compute[253661]: 2025-11-22 09:32:33.630 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-7da16450-9ec5-472a-99df-81f56ee341fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:32:33 np0005532048 nova_compute[253661]: 2025-11-22 09:32:33.631 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:32:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:32:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 277 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 MiB/s wr, 143 op/s
Nov 22 04:32:33 np0005532048 nova_compute[253661]: 2025-11-22 09:32:33.819 253665 DEBUG nova.compute.manager [req-57d1de5a-1a64-4c3e-8c4a-c74130231227 req-c4a513a8-5d6f-418c-ba61-f926ab1ffd2c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:33 np0005532048 nova_compute[253661]: 2025-11-22 09:32:33.820 253665 DEBUG oslo_concurrency.lockutils [req-57d1de5a-1a64-4c3e-8c4a-c74130231227 req-c4a513a8-5d6f-418c-ba61-f926ab1ffd2c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:33 np0005532048 nova_compute[253661]: 2025-11-22 09:32:33.821 253665 DEBUG oslo_concurrency.lockutils [req-57d1de5a-1a64-4c3e-8c4a-c74130231227 req-c4a513a8-5d6f-418c-ba61-f926ab1ffd2c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:33 np0005532048 nova_compute[253661]: 2025-11-22 09:32:33.821 253665 DEBUG oslo_concurrency.lockutils [req-57d1de5a-1a64-4c3e-8c4a-c74130231227 req-c4a513a8-5d6f-418c-ba61-f926ab1ffd2c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:33 np0005532048 nova_compute[253661]: 2025-11-22 09:32:33.821 253665 DEBUG nova.compute.manager [req-57d1de5a-1a64-4c3e-8c4a-c74130231227 req-c4a513a8-5d6f-418c-ba61-f926ab1ffd2c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] No waiting events found dispatching network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:32:33 np0005532048 nova_compute[253661]: 2025-11-22 09:32:33.821 253665 WARNING nova.compute.manager [req-57d1de5a-1a64-4c3e-8c4a-c74130231227 req-c4a513a8-5d6f-418c-ba61-f926ab1ffd2c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received unexpected event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for instance with vm_state rescued and task_state None.#033[00m
Nov 22 04:32:34 np0005532048 nova_compute[253661]: 2025-11-22 09:32:34.054 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:34 np0005532048 nova_compute[253661]: 2025-11-22 09:32:34.148 253665 DEBUG nova.network.neutron [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Successfully updated port: 4b489529-5b94-46ce-9810-23bef9215c04 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:32:34 np0005532048 nova_compute[253661]: 2025-11-22 09:32:34.177 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:32:34 np0005532048 nova_compute[253661]: 2025-11-22 09:32:34.178 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquired lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:32:34 np0005532048 nova_compute[253661]: 2025-11-22 09:32:34.179 253665 DEBUG nova.network.neutron [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:32:34 np0005532048 nova_compute[253661]: 2025-11-22 09:32:34.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:32:34 np0005532048 nova_compute[253661]: 2025-11-22 09:32:34.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:32:34 np0005532048 nova_compute[253661]: 2025-11-22 09:32:34.356 253665 DEBUG nova.network.neutron [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:32:35 np0005532048 nova_compute[253661]: 2025-11-22 09:32:35.561 253665 DEBUG nova.network.neutron [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Updating instance_info_cache with network_info: [{"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:32:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 293 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.3 MiB/s wr, 165 op/s
Nov 22 04:32:35 np0005532048 nova_compute[253661]: 2025-11-22 09:32:35.804 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:35 np0005532048 nova_compute[253661]: 2025-11-22 09:32:35.922 253665 DEBUG nova.compute.manager [req-59d76512-4e6f-4ea1-8095-85194f0983dc req-8de864e8-c5ca-4a75-bee6-a73d163dba76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-changed-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:35 np0005532048 nova_compute[253661]: 2025-11-22 09:32:35.923 253665 DEBUG nova.compute.manager [req-59d76512-4e6f-4ea1-8095-85194f0983dc req-8de864e8-c5ca-4a75-bee6-a73d163dba76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Refreshing instance network info cache due to event network-changed-4b489529-5b94-46ce-9810-23bef9215c04. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:32:35 np0005532048 nova_compute[253661]: 2025-11-22 09:32:35.923 253665 DEBUG oslo_concurrency.lockutils [req-59d76512-4e6f-4ea1-8095-85194f0983dc req-8de864e8-c5ca-4a75-bee6-a73d163dba76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.643 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Releasing lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.645 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance network_info: |[{"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.646 253665 DEBUG oslo_concurrency.lockutils [req-59d76512-4e6f-4ea1-8095-85194f0983dc req-8de864e8-c5ca-4a75-bee6-a73d163dba76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.646 253665 DEBUG nova.network.neutron [req-59d76512-4e6f-4ea1-8095-85194f0983dc req-8de864e8-c5ca-4a75-bee6-a73d163dba76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Refreshing network info cache for port 4b489529-5b94-46ce-9810-23bef9215c04 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.650 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Start _get_guest_xml network_info=[{"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.656 253665 WARNING nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.666 253665 DEBUG nova.virt.libvirt.host [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.668 253665 DEBUG nova.virt.libvirt.host [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.678 253665 DEBUG nova.virt.libvirt.host [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.679 253665 DEBUG nova.virt.libvirt.host [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.679 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.680 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.680 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.681 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.681 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.681 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.682 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.682 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.682 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.683 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.683 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.684 253665 DEBUG nova.virt.hardware [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:32:36 np0005532048 nova_compute[253661]: 2025-11-22 09:32:36.687 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:32:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4281243671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.173 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.200 253665 DEBUG nova.storage.rbd_utils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.206 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.253 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.284 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.285 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.285 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.286 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.286 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:32:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2735989894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.713 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.716 253665 DEBUG nova.virt.libvirt.vif [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1406665007',display_name='tempest-ServerRescueTestJSON-server-1406665007',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1406665007',id=100,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93c8020137e04db486facc42cfe30f23',ramdisk_id='',reservation_id='r-lhedv0vk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-264324954',owner_user_name='tempest-ServerRescueTestJSON-26
4324954-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:32:31Z,user_data=None,user_id='04e47309bea74c04b0750912db283ae1',uuid=80bb6ea3-dbff-48a3-b804-e3d356031a23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.717 253665 DEBUG nova.network.os_vif_util [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converting VIF {"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.718 253665 DEBUG nova.network.os_vif_util [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:fd:71,bridge_name='br-int',has_traffic_filtering=True,id=4b489529-5b94-46ce-9810-23bef9215c04,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b489529-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.719 253665 DEBUG nova.objects.instance [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.736 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  <uuid>80bb6ea3-dbff-48a3-b804-e3d356031a23</uuid>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  <name>instance-00000064</name>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerRescueTestJSON-server-1406665007</nova:name>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:32:36</nova:creationTime>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:32:37 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:        <nova:user uuid="04e47309bea74c04b0750912db283ae1">tempest-ServerRescueTestJSON-264324954-project-member</nova:user>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:        <nova:project uuid="93c8020137e04db486facc42cfe30f23">tempest-ServerRescueTestJSON-264324954</nova:project>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:        <nova:port uuid="4b489529-5b94-46ce-9810-23bef9215c04">
Nov 22 04:32:37 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <entry name="serial">80bb6ea3-dbff-48a3-b804-e3d356031a23</entry>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <entry name="uuid">80bb6ea3-dbff-48a3-b804-e3d356031a23</entry>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/80bb6ea3-dbff-48a3-b804-e3d356031a23_disk">
Nov 22 04:32:37 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:32:37 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config">
Nov 22 04:32:37 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:32:37 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:01:fd:71"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <target dev="tap4b489529-5b"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/console.log" append="off"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:32:37 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:32:37 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:32:37 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:32:37 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.738 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Preparing to wait for external event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.738 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.738 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.739 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.740 253665 DEBUG nova.virt.libvirt.vif [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1406665007',display_name='tempest-ServerRescueTestJSON-server-1406665007',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1406665007',id=100,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='93c8020137e04db486facc42cfe30f23',ramdisk_id='',reservation_id='r-lhedv0vk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-264324954',owner_user_name='tempest-ServerRescueT
estJSON-264324954-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:32:31Z,user_data=None,user_id='04e47309bea74c04b0750912db283ae1',uuid=80bb6ea3-dbff-48a3-b804-e3d356031a23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.740 253665 DEBUG nova.network.os_vif_util [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converting VIF {"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.741 253665 DEBUG nova.network.os_vif_util [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:fd:71,bridge_name='br-int',has_traffic_filtering=True,id=4b489529-5b94-46ce-9810-23bef9215c04,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b489529-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.741 253665 DEBUG os_vif [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:fd:71,bridge_name='br-int',has_traffic_filtering=True,id=4b489529-5b94-46ce-9810-23bef9215c04,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b489529-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.742 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.743 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.744 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.758 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.759 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b489529-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.760 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4b489529-5b, col_values=(('external_ids', {'iface-id': '4b489529-5b94-46ce-9810-23bef9215c04', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:fd:71', 'vm-uuid': '80bb6ea3-dbff-48a3-b804-e3d356031a23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.762 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:37 np0005532048 NetworkManager[48920]: <info>  [1763803957.7632] manager: (tap4b489529-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Nov 22 04:32:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:32:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2672544068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.766 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.770 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.772 253665 INFO os_vif [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:fd:71,bridge_name='br-int',has_traffic_filtering=True,id=4b489529-5b94-46ce-9810-23bef9215c04,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b489529-5b')#033[00m
Nov 22 04:32:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 293 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.792 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.848 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.849 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.849 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No VIF found with MAC fa:16:3e:01:fd:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.850 253665 INFO nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Using config drive#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.876 253665 DEBUG nova.storage.rbd_utils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.917 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.918 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.922 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.922 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.922 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.926 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:32:37 np0005532048 nova_compute[253661]: 2025-11-22 09:32:37.927 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.134 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.135 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3552MB free_disk=59.855552673339844GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.135 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.135 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.213 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 7da16450-9ec5-472a-99df-81f56ee341fc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.213 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d60d8746-9288-4829-8073-bed8cf04d748 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.213 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 80bb6ea3-dbff-48a3-b804-e3d356031a23 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.213 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.213 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.296 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.358 253665 INFO nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Creating config drive at /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.364 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpft73g2cu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.428 253665 DEBUG nova.network.neutron [req-59d76512-4e6f-4ea1-8095-85194f0983dc req-8de864e8-c5ca-4a75-bee6-a73d163dba76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Updated VIF entry in instance network info cache for port 4b489529-5b94-46ce-9810-23bef9215c04. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.429 253665 DEBUG nova.network.neutron [req-59d76512-4e6f-4ea1-8095-85194f0983dc req-8de864e8-c5ca-4a75-bee6-a73d163dba76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Updating instance_info_cache with network_info: [{"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.450 253665 DEBUG oslo_concurrency.lockutils [req-59d76512-4e6f-4ea1-8095-85194f0983dc req-8de864e8-c5ca-4a75-bee6-a73d163dba76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.526 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpft73g2cu" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.561 253665 DEBUG nova.storage.rbd_utils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.566 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.749 253665 DEBUG oslo_concurrency.processutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.750 253665 INFO nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Deleting local config drive /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config because it was imported into RBD.#033[00m
Nov 22 04:32:38 np0005532048 kernel: tap4b489529-5b: entered promiscuous mode
Nov 22 04:32:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:32:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/193320973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:32:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:38Z|01005|binding|INFO|Claiming lport 4b489529-5b94-46ce-9810-23bef9215c04 for this chassis.
Nov 22 04:32:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:38Z|01006|binding|INFO|4b489529-5b94-46ce-9810-23bef9215c04: Claiming fa:16:3e:01:fd:71 10.100.0.2
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.812 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:38 np0005532048 NetworkManager[48920]: <info>  [1763803958.8148] manager: (tap4b489529-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/417)
Nov 22 04:32:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:38.822 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:fd:71 10.100.0.2'], port_security=['fa:16:3e:01:fd:71 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '80bb6ea3-dbff-48a3-b804-e3d356031a23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '2', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4b489529-5b94-46ce-9810-23bef9215c04) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:32:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:38.825 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4b489529-5b94-46ce-9810-23bef9215c04 in datapath 18e5030a-5673-404f-927e-25a76f3164ea bound to our chassis#033[00m
Nov 22 04:32:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:38.827 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:32:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:38.828 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[83a5bbeb-f605-4d88-b4f5-8bb78a478094]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:38Z|01007|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 ovn-installed in OVS
Nov 22 04:32:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:38Z|01008|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 up in Southbound
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.832 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.845 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.853 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:32:38 np0005532048 systemd-machined[215941]: New machine qemu-122-instance-00000064.
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.867 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:32:38 np0005532048 systemd[1]: Started Virtual Machine qemu-122-instance-00000064.
Nov 22 04:32:38 np0005532048 systemd-udevd[356244]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.893 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:32:38 np0005532048 nova_compute[253661]: 2025-11-22 09:32:38.893 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:32:38 np0005532048 NetworkManager[48920]: <info>  [1763803958.8961] device (tap4b489529-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:32:38 np0005532048 NetworkManager[48920]: <info>  [1763803958.8990] device (tap4b489529-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:32:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 293 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 193 op/s
Nov 22 04:32:39 np0005532048 nova_compute[253661]: 2025-11-22 09:32:39.791 253665 DEBUG nova.compute.manager [req-d716e55d-0e56-4ada-8205-7e7b6bb6eb69 req-e4b5d07d-c239-4539-ab7f-01312da1ff73 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:32:39 np0005532048 nova_compute[253661]: 2025-11-22 09:32:39.792 253665 DEBUG oslo_concurrency.lockutils [req-d716e55d-0e56-4ada-8205-7e7b6bb6eb69 req-e4b5d07d-c239-4539-ab7f-01312da1ff73 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:32:39 np0005532048 nova_compute[253661]: 2025-11-22 09:32:39.793 253665 DEBUG oslo_concurrency.lockutils [req-d716e55d-0e56-4ada-8205-7e7b6bb6eb69 req-e4b5d07d-c239-4539-ab7f-01312da1ff73 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:32:39 np0005532048 nova_compute[253661]: 2025-11-22 09:32:39.793 253665 DEBUG oslo_concurrency.lockutils [req-d716e55d-0e56-4ada-8205-7e7b6bb6eb69 req-e4b5d07d-c239-4539-ab7f-01312da1ff73 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:32:39 np0005532048 nova_compute[253661]: 2025-11-22 09:32:39.794 253665 DEBUG nova.compute.manager [req-d716e55d-0e56-4ada-8205-7e7b6bb6eb69 req-e4b5d07d-c239-4539-ab7f-01312da1ff73 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Processing event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:32:39 np0005532048 nova_compute[253661]: 2025-11-22 09:32:39.868 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:32:39 np0005532048 nova_compute[253661]: 2025-11-22 09:32:39.869 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:32:39 np0005532048 nova_compute[253661]: 2025-11-22 09:32:39.869 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:32:39 np0005532048 nova_compute[253661]: 2025-11-22 09:32:39.869 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:32:39 np0005532048 nova_compute[253661]: 2025-11-22 09:32:39.975 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803959.9749005, 80bb6ea3-dbff-48a3-b804-e3d356031a23 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:32:39 np0005532048 nova_compute[253661]: 2025-11-22 09:32:39.975 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] VM Started (Lifecycle Event)
Nov 22 04:32:39 np0005532048 nova_compute[253661]: 2025-11-22 09:32:39.978 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:32:39 np0005532048 nova_compute[253661]: 2025-11-22 09:32:39.982 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:32:39 np0005532048 nova_compute[253661]: 2025-11-22 09:32:39.985 253665 INFO nova.virt.libvirt.driver [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance spawned successfully.
Nov 22 04:32:39 np0005532048 nova_compute[253661]: 2025-11-22 09:32:39.985 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.015 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.020 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.021 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.021 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.021 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.022 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.022 253665 DEBUG nova.virt.libvirt.driver [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.027 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.049 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.050 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803959.9784346, 80bb6ea3-dbff-48a3-b804-e3d356031a23 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.050 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] VM Paused (Lifecycle Event)
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.072 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.077 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803959.9814057, 80bb6ea3-dbff-48a3-b804-e3d356031a23 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.077 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] VM Resumed (Lifecycle Event)
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.092 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.096 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.110 253665 INFO nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Took 9.02 seconds to spawn the instance on the hypervisor.
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.111 253665 DEBUG nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.122 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.182 253665 INFO nova.compute.manager [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Took 10.13 seconds to build instance.
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.201 253665 DEBUG oslo_concurrency.lockutils [None req-d0f743a9-f7c7-49f7-a401-f7ae6241798e 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:32:40 np0005532048 nova_compute[253661]: 2025-11-22 09:32:40.807 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:32:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 293 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Nov 22 04:32:41 np0005532048 nova_compute[253661]: 2025-11-22 09:32:41.905 253665 DEBUG nova.compute.manager [req-f889b529-363e-4d49-9641-df6a35843e2d req-3e90ff4d-2459-432c-b61a-4e69a897f779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:32:41 np0005532048 nova_compute[253661]: 2025-11-22 09:32:41.906 253665 DEBUG oslo_concurrency.lockutils [req-f889b529-363e-4d49-9641-df6a35843e2d req-3e90ff4d-2459-432c-b61a-4e69a897f779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:32:41 np0005532048 nova_compute[253661]: 2025-11-22 09:32:41.907 253665 DEBUG oslo_concurrency.lockutils [req-f889b529-363e-4d49-9641-df6a35843e2d req-3e90ff4d-2459-432c-b61a-4e69a897f779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:32:41 np0005532048 nova_compute[253661]: 2025-11-22 09:32:41.907 253665 DEBUG oslo_concurrency.lockutils [req-f889b529-363e-4d49-9641-df6a35843e2d req-3e90ff4d-2459-432c-b61a-4e69a897f779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:32:41 np0005532048 nova_compute[253661]: 2025-11-22 09:32:41.907 253665 DEBUG nova.compute.manager [req-f889b529-363e-4d49-9641-df6a35843e2d req-3e90ff4d-2459-432c-b61a-4e69a897f779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:32:41 np0005532048 nova_compute[253661]: 2025-11-22 09:32:41.908 253665 WARNING nova.compute.manager [req-f889b529-363e-4d49-9641-df6a35843e2d req-3e90ff4d-2459-432c-b61a-4e69a897f779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state active and task_state None.
Nov 22 04:32:42 np0005532048 nova_compute[253661]: 2025-11-22 09:32:42.763 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:32:43 np0005532048 nova_compute[253661]: 2025-11-22 09:32:43.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:32:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:32:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 293 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 209 op/s
Nov 22 04:32:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:43Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:28:3e 10.100.0.5
Nov 22 04:32:45 np0005532048 podman[356295]: 2025-11-22 09:32:45.397579785 +0000 UTC m=+0.083311072 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:32:45 np0005532048 podman[356296]: 2025-11-22 09:32:45.403751016 +0000 UTC m=+0.089514894 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:32:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 293 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 377 KiB/s wr, 205 op/s
Nov 22 04:32:45 np0005532048 nova_compute[253661]: 2025-11-22 09:32:45.810 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:32:47 np0005532048 nova_compute[253661]: 2025-11-22 09:32:47.766 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:32:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 295 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 45 KiB/s wr, 188 op/s
Nov 22 04:32:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:32:49 np0005532048 nova_compute[253661]: 2025-11-22 09:32:49.744 253665 INFO nova.compute.manager [None req-46e00d8f-4e4a-4c47-9acb-04720c537d25 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Get console output
Nov 22 04:32:49 np0005532048 nova_compute[253661]: 2025-11-22 09:32:49.749 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 04:32:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 295 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 45 KiB/s wr, 176 op/s
Nov 22 04:32:50 np0005532048 nova_compute[253661]: 2025-11-22 09:32:50.812 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:32:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 295 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 31 KiB/s wr, 152 op/s
Nov 22 04:32:51 np0005532048 nova_compute[253661]: 2025-11-22 09:32:51.931 253665 INFO nova.compute.manager [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Rescuing
Nov 22 04:32:51 np0005532048 nova_compute[253661]: 2025-11-22 09:32:51.932 253665 DEBUG oslo_concurrency.lockutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:32:51 np0005532048 nova_compute[253661]: 2025-11-22 09:32:51.933 253665 DEBUG oslo_concurrency.lockutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquired lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:32:51 np0005532048 nova_compute[253661]: 2025-11-22 09:32:51.934 253665 DEBUG nova.network.neutron [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.220 253665 DEBUG nova.compute.manager [req-549e2894-38c0-4b19-943a-6cbe9f0faad1 req-d9606803-22e2-4b79-adde-8df65ea78fea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-changed-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.221 253665 DEBUG nova.compute.manager [req-549e2894-38c0-4b19-943a-6cbe9f0faad1 req-d9606803-22e2-4b79-adde-8df65ea78fea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Refreshing instance network info cache due to event network-changed-f0934c58-4d53-43e5-8132-eb2195819f1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.222 253665 DEBUG oslo_concurrency.lockutils [req-549e2894-38c0-4b19-943a-6cbe9f0faad1 req-d9606803-22e2-4b79-adde-8df65ea78fea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.222 253665 DEBUG oslo_concurrency.lockutils [req-549e2894-38c0-4b19-943a-6cbe9f0faad1 req-d9606803-22e2-4b79-adde-8df65ea78fea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.222 253665 DEBUG nova.network.neutron [req-549e2894-38c0-4b19-943a-6cbe9f0faad1 req-d9606803-22e2-4b79-adde-8df65ea78fea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Refreshing network info cache for port f0934c58-4d53-43e5-8132-eb2195819f1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:32:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:32:52
Nov 22 04:32:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:32:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:32:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', '.mgr', 'vms', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.log', 'images', 'default.rgw.control', 'cephfs.cephfs.meta']
Nov 22 04:32:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:32:52 np0005532048 podman[356332]: 2025-11-22 09:32:52.418393576 +0000 UTC m=+0.095962434 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.421 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.422 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.422 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.422 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.423 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.424 253665 INFO nova.compute.manager [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Terminating instance#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.425 253665 DEBUG nova.compute.manager [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:32:52 np0005532048 kernel: tapf0934c58-4d (unregistering): left promiscuous mode
Nov 22 04:32:52 np0005532048 NetworkManager[48920]: <info>  [1763803972.4850] device (tapf0934c58-4d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:32:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:52Z|01009|binding|INFO|Releasing lport f0934c58-4d53-43e5-8132-eb2195819f1f from this chassis (sb_readonly=0)
Nov 22 04:32:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:52Z|01010|binding|INFO|Setting lport f0934c58-4d53-43e5-8132-eb2195819f1f down in Southbound
Nov 22 04:32:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:52Z|01011|binding|INFO|Removing iface tapf0934c58-4d ovn-installed in OVS
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.494 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.497 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.511 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.516 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:28:3e 10.100.0.5'], port_security=['fa:16:3e:00:28:3e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'd60d8746-9288-4829-8073-bed8cf04d748', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'aa17f410-f219-4ce2-8b8c-5124640f3749', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12465327-1cbd-4adc-ab38-5ef26037180c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f0934c58-4d53-43e5-8132-eb2195819f1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:32:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.518 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f0934c58-4d53-43e5-8132-eb2195819f1f in datapath 0263cd25-ddb2-49f9-ab5b-2f514c861684 unbound from our chassis#033[00m
Nov 22 04:32:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.519 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0263cd25-ddb2-49f9-ab5b-2f514c861684, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:32:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.521 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f2fe8284-99c2-4451-8a0d-3f756e59d58a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.522 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 namespace which is not needed anymore#033[00m
Nov 22 04:32:52 np0005532048 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000063.scope: Deactivated successfully.
Nov 22 04:32:52 np0005532048 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000063.scope: Consumed 13.783s CPU time.
Nov 22 04:32:52 np0005532048 systemd-machined[215941]: Machine qemu-121-instance-00000063 terminated.
Nov 22 04:32:52 np0005532048 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[355897]: [NOTICE]   (355929) : haproxy version is 2.8.14-c23fe91
Nov 22 04:32:52 np0005532048 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[355897]: [NOTICE]   (355929) : path to executable is /usr/sbin/haproxy
Nov 22 04:32:52 np0005532048 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[355897]: [WARNING]  (355929) : Exiting Master process...
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.676 253665 INFO nova.virt.libvirt.driver [-] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Instance destroyed successfully.#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.677 253665 DEBUG nova.objects.instance [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'resources' on Instance uuid d60d8746-9288-4829-8073-bed8cf04d748 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:52 np0005532048 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[355897]: [ALERT]    (355929) : Current worker (355940) exited with code 143 (Terminated)
Nov 22 04:32:52 np0005532048 neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684[355897]: [WARNING]  (355929) : All workers exited. Exiting... (0)
Nov 22 04:32:52 np0005532048 systemd[1]: libpod-ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18.scope: Deactivated successfully.
Nov 22 04:32:52 np0005532048 podman[356381]: 2025-11-22 09:32:52.689100568 +0000 UTC m=+0.072069024 container died ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.692 253665 DEBUG nova.virt.libvirt.vif [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:31:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-714198839',display_name='tempest-TestNetworkAdvancedServerOps-server-714198839',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-714198839',id=99,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGR2b18SIpx8gS1E3y/TzQyi9x+qeFqs0jOon8sbMm/5ZIjx+NrI5fGq/DEFizh5YAuLO2aSf/znN/DytSjdMVp7+cSM7ae+/kERmK84ftJ2WIfziOJizQIKJYVt0Z/aeQ==',key_name='tempest-TestNetworkAdvancedServerOps-1970810248',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:32:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-a3b9deas',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:32:31Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=d60d8746-9288-4829-8073-bed8cf04d748,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.693 253665 DEBUG nova.network.os_vif_util [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.694 253665 DEBUG nova.network.os_vif_util [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:28:3e,bridge_name='br-int',has_traffic_filtering=True,id=f0934c58-4d53-43e5-8132-eb2195819f1f,network=Network(0263cd25-ddb2-49f9-ab5b-2f514c861684),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0934c58-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.695 253665 DEBUG os_vif [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:28:3e,bridge_name='br-int',has_traffic_filtering=True,id=f0934c58-4d53-43e5-8132-eb2195819f1f,network=Network(0263cd25-ddb2-49f9-ab5b-2f514c861684),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0934c58-4d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.698 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.699 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0934c58-4d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.700 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.702 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.705 253665 INFO os_vif [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:28:3e,bridge_name='br-int',has_traffic_filtering=True,id=f0934c58-4d53-43e5-8132-eb2195819f1f,network=Network(0263cd25-ddb2-49f9-ab5b-2f514c861684),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0934c58-4d')#033[00m
Nov 22 04:32:52 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18-userdata-shm.mount: Deactivated successfully.
Nov 22 04:32:52 np0005532048 systemd[1]: var-lib-containers-storage-overlay-da5c3a54b658459714df9c8a3e587b9a9c6c388db9cb59b269295b55bea989a9-merged.mount: Deactivated successfully.
Nov 22 04:32:52 np0005532048 podman[356381]: 2025-11-22 09:32:52.732852856 +0000 UTC m=+0.115821312 container cleanup ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:32:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:32:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:32:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:32:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:32:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:32:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:32:52 np0005532048 systemd[1]: libpod-conmon-ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18.scope: Deactivated successfully.
Nov 22 04:32:52 np0005532048 podman[356437]: 2025-11-22 09:32:52.813721335 +0000 UTC m=+0.056596703 container remove ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:32:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.820 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[33228e81-09fb-4844-bf0c-01ba05e43487]: (4, ('Sat Nov 22 09:32:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 (ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18)\nef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18\nSat Nov 22 09:32:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 (ef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18)\nef9c07d4548f27fc94fab65864dc02181708dccbdbe508f98da63ad211d4ca18\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.822 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c17f87a3-0b34-4358-a87d-f83307565349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.823 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0263cd25-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.826 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:52 np0005532048 kernel: tap0263cd25-d0: left promiscuous mode
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.843 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.846 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[69ff97a3-65d7-4d43-8ebb-c06ef18b8614]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.866 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dad93bd8-0bf8-457f-8776-ed114396cf59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.868 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f50ca2-26ba-4de0-a7f2-a686770a7580]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.893 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6058d3-629b-4c55-8a5d-9ba9f2186052]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680105, 'reachable_time': 43971, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356455, 'error': None, 'target': 'ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:52 np0005532048 systemd[1]: run-netns-ovnmeta\x2d0263cd25\x2dddb2\x2d49f9\x2dab5b\x2d2f514c861684.mount: Deactivated successfully.
Nov 22 04:32:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.900 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0263cd25-ddb2-49f9-ab5b-2f514c861684 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:32:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:52.901 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a4629c23-0443-47f1-ad21-bbc51ced5ba3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.913 253665 DEBUG nova.compute.manager [req-c9e3c2b8-40f7-46e3-af65-b05105841d30 req-48927592-e2eb-4419-ac5f-0dffadd2445e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-unplugged-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.914 253665 DEBUG oslo_concurrency.lockutils [req-c9e3c2b8-40f7-46e3-af65-b05105841d30 req-48927592-e2eb-4419-ac5f-0dffadd2445e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.914 253665 DEBUG oslo_concurrency.lockutils [req-c9e3c2b8-40f7-46e3-af65-b05105841d30 req-48927592-e2eb-4419-ac5f-0dffadd2445e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.914 253665 DEBUG oslo_concurrency.lockutils [req-c9e3c2b8-40f7-46e3-af65-b05105841d30 req-48927592-e2eb-4419-ac5f-0dffadd2445e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.914 253665 DEBUG nova.compute.manager [req-c9e3c2b8-40f7-46e3-af65-b05105841d30 req-48927592-e2eb-4419-ac5f-0dffadd2445e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] No waiting events found dispatching network-vif-unplugged-f0934c58-4d53-43e5-8132-eb2195819f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:32:52 np0005532048 nova_compute[253661]: 2025-11-22 09:32:52.915 253665 DEBUG nova.compute.manager [req-c9e3c2b8-40f7-46e3-af65-b05105841d30 req-48927592-e2eb-4419-ac5f-0dffadd2445e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-unplugged-f0934c58-4d53-43e5-8132-eb2195819f1f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:32:53 np0005532048 nova_compute[253661]: 2025-11-22 09:32:53.224 253665 INFO nova.virt.libvirt.driver [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Deleting instance files /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748_del#033[00m
Nov 22 04:32:53 np0005532048 nova_compute[253661]: 2025-11-22 09:32:53.225 253665 INFO nova.virt.libvirt.driver [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Deletion of /var/lib/nova/instances/d60d8746-9288-4829-8073-bed8cf04d748_del complete#033[00m
Nov 22 04:32:53 np0005532048 nova_compute[253661]: 2025-11-22 09:32:53.301 253665 INFO nova.compute.manager [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:32:53 np0005532048 nova_compute[253661]: 2025-11-22 09:32:53.302 253665 DEBUG oslo.service.loopingcall [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:32:53 np0005532048 nova_compute[253661]: 2025-11-22 09:32:53.303 253665 DEBUG nova.compute.manager [-] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:32:53 np0005532048 nova_compute[253661]: 2025-11-22 09:32:53.303 253665 DEBUG nova.network.neutron [-] [instance: d60d8746-9288-4829-8073-bed8cf04d748] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:32:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:32:53 np0005532048 nova_compute[253661]: 2025-11-22 09:32:53.718 253665 DEBUG nova.network.neutron [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Updating instance_info_cache with network_info: [{"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:32:53 np0005532048 nova_compute[253661]: 2025-11-22 09:32:53.791 253665 DEBUG oslo_concurrency.lockutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Releasing lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:32:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 272 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 725 KiB/s wr, 182 op/s
Nov 22 04:32:54 np0005532048 nova_compute[253661]: 2025-11-22 09:32:54.023 253665 DEBUG nova.network.neutron [req-549e2894-38c0-4b19-943a-6cbe9f0faad1 req-d9606803-22e2-4b79-adde-8df65ea78fea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updated VIF entry in instance network info cache for port f0934c58-4d53-43e5-8132-eb2195819f1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:32:54 np0005532048 nova_compute[253661]: 2025-11-22 09:32:54.023 253665 DEBUG nova.network.neutron [req-549e2894-38c0-4b19-943a-6cbe9f0faad1 req-d9606803-22e2-4b79-adde-8df65ea78fea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updating instance_info_cache with network_info: [{"id": "f0934c58-4d53-43e5-8132-eb2195819f1f", "address": "fa:16:3e:00:28:3e", "network": {"id": "0263cd25-ddb2-49f9-ab5b-2f514c861684", "bridge": "br-int", "label": "tempest-network-smoke--769826390", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0934c58-4d", "ovs_interfaceid": "f0934c58-4d53-43e5-8132-eb2195819f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:32:54 np0005532048 nova_compute[253661]: 2025-11-22 09:32:54.073 253665 DEBUG oslo_concurrency.lockutils [req-549e2894-38c0-4b19-943a-6cbe9f0faad1 req-d9606803-22e2-4b79-adde-8df65ea78fea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d60d8746-9288-4829-8073-bed8cf04d748" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:32:54 np0005532048 nova_compute[253661]: 2025-11-22 09:32:54.087 253665 DEBUG nova.network.neutron [-] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:32:54 np0005532048 nova_compute[253661]: 2025-11-22 09:32:54.114 253665 INFO nova.compute.manager [-] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Took 0.81 seconds to deallocate network for instance.#033[00m
Nov 22 04:32:54 np0005532048 nova_compute[253661]: 2025-11-22 09:32:54.260 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:54 np0005532048 nova_compute[253661]: 2025-11-22 09:32:54.261 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:54 np0005532048 nova_compute[253661]: 2025-11-22 09:32:54.266 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:32:54 np0005532048 nova_compute[253661]: 2025-11-22 09:32:54.358 253665 DEBUG oslo_concurrency.processutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:55 np0005532048 nova_compute[253661]: 2025-11-22 09:32:55.033 253665 DEBUG nova.compute.manager [req-8097e64b-3124-4d5a-886e-56020cac2e7a req-2418e9cf-3fc6-4a5e-af6a-764a52873b7f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:55 np0005532048 nova_compute[253661]: 2025-11-22 09:32:55.034 253665 DEBUG oslo_concurrency.lockutils [req-8097e64b-3124-4d5a-886e-56020cac2e7a req-2418e9cf-3fc6-4a5e-af6a-764a52873b7f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d60d8746-9288-4829-8073-bed8cf04d748-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:55 np0005532048 nova_compute[253661]: 2025-11-22 09:32:55.035 253665 DEBUG oslo_concurrency.lockutils [req-8097e64b-3124-4d5a-886e-56020cac2e7a req-2418e9cf-3fc6-4a5e-af6a-764a52873b7f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:55 np0005532048 nova_compute[253661]: 2025-11-22 09:32:55.036 253665 DEBUG oslo_concurrency.lockutils [req-8097e64b-3124-4d5a-886e-56020cac2e7a req-2418e9cf-3fc6-4a5e-af6a-764a52873b7f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:55 np0005532048 nova_compute[253661]: 2025-11-22 09:32:55.036 253665 DEBUG nova.compute.manager [req-8097e64b-3124-4d5a-886e-56020cac2e7a req-2418e9cf-3fc6-4a5e-af6a-764a52873b7f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] No waiting events found dispatching network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:32:55 np0005532048 nova_compute[253661]: 2025-11-22 09:32:55.036 253665 WARNING nova.compute.manager [req-8097e64b-3124-4d5a-886e-56020cac2e7a req-2418e9cf-3fc6-4a5e-af6a-764a52873b7f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received unexpected event network-vif-plugged-f0934c58-4d53-43e5-8132-eb2195819f1f for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:32:55 np0005532048 nova_compute[253661]: 2025-11-22 09:32:55.037 253665 DEBUG nova.compute.manager [req-8097e64b-3124-4d5a-886e-56020cac2e7a req-2418e9cf-3fc6-4a5e-af6a-764a52873b7f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Received event network-vif-deleted-f0934c58-4d53-43e5-8132-eb2195819f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:32:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121893696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:32:55 np0005532048 nova_compute[253661]: 2025-11-22 09:32:55.272 253665 DEBUG oslo_concurrency.processutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.914s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:55 np0005532048 nova_compute[253661]: 2025-11-22 09:32:55.282 253665 DEBUG nova.compute.provider_tree [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:32:55 np0005532048 nova_compute[253661]: 2025-11-22 09:32:55.299 253665 DEBUG nova.scheduler.client.report [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:32:55 np0005532048 nova_compute[253661]: 2025-11-22 09:32:55.434 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:55 np0005532048 nova_compute[253661]: 2025-11-22 09:32:55.477 253665 INFO nova.scheduler.client.report [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Deleted allocations for instance d60d8746-9288-4829-8073-bed8cf04d748#033[00m
Nov 22 04:32:55 np0005532048 nova_compute[253661]: 2025-11-22 09:32:55.544 253665 DEBUG oslo_concurrency.lockutils [None req-2bcb69c1-bafd-450a-9fb4-f626a10df4d1 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "d60d8746-9288-4829-8073-bed8cf04d748" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:32:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:32:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:32:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:32:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:32:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 266 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 125 op/s
Nov 22 04:32:55 np0005532048 nova_compute[253661]: 2025-11-22 09:32:55.814 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:56 np0005532048 kernel: tap4b489529-5b (unregistering): left promiscuous mode
Nov 22 04:32:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:32:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:32:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:32:56 np0005532048 NetworkManager[48920]: <info>  [1763803976.6052] device (tap4b489529-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:32:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:32:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:32:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:56Z|01012|binding|INFO|Releasing lport 4b489529-5b94-46ce-9810-23bef9215c04 from this chassis (sb_readonly=0)
Nov 22 04:32:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:56Z|01013|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 down in Southbound
Nov 22 04:32:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:32:56Z|01014|binding|INFO|Removing iface tap4b489529-5b ovn-installed in OVS
Nov 22 04:32:56 np0005532048 nova_compute[253661]: 2025-11-22 09:32:56.616 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:56 np0005532048 nova_compute[253661]: 2025-11-22 09:32:56.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:56 np0005532048 nova_compute[253661]: 2025-11-22 09:32:56.630 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:56.643 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:fd:71 10.100.0.2'], port_security=['fa:16:3e:01:fd:71 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '80bb6ea3-dbff-48a3-b804-e3d356031a23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '4', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4b489529-5b94-46ce-9810-23bef9215c04) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:32:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:56.644 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4b489529-5b94-46ce-9810-23bef9215c04 in datapath 18e5030a-5673-404f-927e-25a76f3164ea unbound from our chassis#033[00m
Nov 22 04:32:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:56.645 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:32:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:32:56.647 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[20b50e05-c14e-4a53-890e-e00c5e6db9b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:32:56 np0005532048 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000064.scope: Deactivated successfully.
Nov 22 04:32:56 np0005532048 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000064.scope: Consumed 14.723s CPU time.
Nov 22 04:32:56 np0005532048 systemd-machined[215941]: Machine qemu-122-instance-00000064 terminated.
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.116 253665 DEBUG nova.compute.manager [req-59d576b8-39b3-4e80-bb13-80eaed525e5e req-57182068-ba03-42a5-bce2-035eeb114efb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.116 253665 DEBUG oslo_concurrency.lockutils [req-59d576b8-39b3-4e80-bb13-80eaed525e5e req-57182068-ba03-42a5-bce2-035eeb114efb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.117 253665 DEBUG oslo_concurrency.lockutils [req-59d576b8-39b3-4e80-bb13-80eaed525e5e req-57182068-ba03-42a5-bce2-035eeb114efb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.117 253665 DEBUG oslo_concurrency.lockutils [req-59d576b8-39b3-4e80-bb13-80eaed525e5e req-57182068-ba03-42a5-bce2-035eeb114efb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.117 253665 DEBUG nova.compute.manager [req-59d576b8-39b3-4e80-bb13-80eaed525e5e req-57182068-ba03-42a5-bce2-035eeb114efb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.117 253665 WARNING nova.compute.manager [req-59d576b8-39b3-4e80-bb13-80eaed525e5e req-57182068-ba03-42a5-bce2-035eeb114efb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state active and task_state rescuing.#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.298 253665 INFO nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance shutdown successfully after 3 seconds.#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.306 253665 INFO nova.virt.libvirt.driver [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance destroyed successfully.#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.306 253665 DEBUG nova.objects.instance [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'numa_topology' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.331 253665 INFO nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Attempting rescue#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.332 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.339 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.340 253665 INFO nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Creating image(s)#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.369 253665 DEBUG nova.storage.rbd_utils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.374 253665 DEBUG nova.objects.instance [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.446 253665 DEBUG nova.storage.rbd_utils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.478 253665 DEBUG nova.storage.rbd_utils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.482 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.556 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.557 253665 DEBUG oslo_concurrency.lockutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.558 253665 DEBUG oslo_concurrency.lockutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.558 253665 DEBUG oslo_concurrency.lockutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.588 253665 DEBUG nova.storage.rbd_utils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.593 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.702 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 245 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 622 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Nov 22 04:32:57 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.950 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.951 253665 DEBUG nova.objects.instance [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'migration_context' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.966 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.967 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Start _get_guest_xml network_info=[{"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1536653437-network", "vif_mac": "fa:16:3e:01:fd:71"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.967 253665 DEBUG nova.objects.instance [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'resources' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:57 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.994 253665 WARNING nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:57.999 253665 DEBUG nova.virt.libvirt.host [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.000 253665 DEBUG nova.virt.libvirt.host [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.003 253665 DEBUG nova.virt.libvirt.host [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.004 253665 DEBUG nova.virt.libvirt.host [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.004 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.004 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.005 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.005 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.005 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.005 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.006 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.006 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.006 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.007 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.007 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.007 253665 DEBUG nova.virt.hardware [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.008 253665 DEBUG nova.objects.instance [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.031 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.118 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.214 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:32:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:32:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554647853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.582 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:58 np0005532048 nova_compute[253661]: 2025-11-22 09:32:58.583 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:32:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:32:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2523787406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.077 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.083 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.262 253665 DEBUG nova.compute.manager [req-f8a844fa-3c94-4763-852e-73868695059d req-dfd43abf-90db-4e50-953f-ccba5332bd7d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.263 253665 DEBUG oslo_concurrency.lockutils [req-f8a844fa-3c94-4763-852e-73868695059d req-dfd43abf-90db-4e50-953f-ccba5332bd7d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.263 253665 DEBUG oslo_concurrency.lockutils [req-f8a844fa-3c94-4763-852e-73868695059d req-dfd43abf-90db-4e50-953f-ccba5332bd7d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.264 253665 DEBUG oslo_concurrency.lockutils [req-f8a844fa-3c94-4763-852e-73868695059d req-dfd43abf-90db-4e50-953f-ccba5332bd7d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.264 253665 DEBUG nova.compute.manager [req-f8a844fa-3c94-4763-852e-73868695059d req-dfd43abf-90db-4e50-953f-ccba5332bd7d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.264 253665 WARNING nova.compute.manager [req-f8a844fa-3c94-4763-852e-73868695059d req-dfd43abf-90db-4e50-953f-ccba5332bd7d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state active and task_state rescuing.#033[00m
Nov 22 04:32:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:32:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2908467108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.610 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.611 253665 DEBUG nova.virt.libvirt.vif [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1406665007',display_name='tempest-ServerRescueTestJSON-server-1406665007',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1406665007',id=100,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:32:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='93c8020137e04db486facc42cfe30f23',ramdisk_id='',reservation_id='r-lhedv0vk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-264324954',owner_user_name='tempest-ServerRescueTestJSON-264324954-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:32:40Z,user_data=None,user_id='04e47309bea74c04b0750912db283ae1',uuid=80bb6ea3-dbff-48a3-b804-e3d356031a23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1536653437-network", "vif_mac": "fa:16:3e:01:fd:71"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.612 253665 DEBUG nova.network.os_vif_util [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converting VIF {"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1536653437-network", "vif_mac": "fa:16:3e:01:fd:71"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.613 253665 DEBUG nova.network.os_vif_util [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:fd:71,bridge_name='br-int',has_traffic_filtering=True,id=4b489529-5b94-46ce-9810-23bef9215c04,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b489529-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.615 253665 DEBUG nova.objects.instance [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.639 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  <uuid>80bb6ea3-dbff-48a3-b804-e3d356031a23</uuid>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  <name>instance-00000064</name>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerRescueTestJSON-server-1406665007</nova:name>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:32:57</nova:creationTime>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:        <nova:user uuid="04e47309bea74c04b0750912db283ae1">tempest-ServerRescueTestJSON-264324954-project-member</nova:user>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:        <nova:project uuid="93c8020137e04db486facc42cfe30f23">tempest-ServerRescueTestJSON-264324954</nova:project>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:        <nova:port uuid="4b489529-5b94-46ce-9810-23bef9215c04">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <entry name="serial">80bb6ea3-dbff-48a3-b804-e3d356031a23</entry>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <entry name="uuid">80bb6ea3-dbff-48a3-b804-e3d356031a23</entry>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.rescue">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/80bb6ea3-dbff-48a3-b804-e3d356031a23_disk">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <target dev="vdb" bus="virtio"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config.rescue">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:01:fd:71"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <target dev="tap4b489529-5b"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/console.log" append="off"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:32:59 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:32:59 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:32:59 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:32:59 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.649 253665 INFO nova.virt.libvirt.driver [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance destroyed successfully.#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.717 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.718 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.718 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.718 253665 DEBUG nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] No VIF found with MAC fa:16:3e:01:fd:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.719 253665 INFO nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Using config drive#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.744 253665 DEBUG nova.storage.rbd_utils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.781 253665 DEBUG nova.objects.instance [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:32:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 279 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 453 KiB/s rd, 3.2 MiB/s wr, 123 op/s
Nov 22 04:32:59 np0005532048 nova_compute[253661]: 2025-11-22 09:32:59.805 253665 DEBUG nova.objects.instance [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'keypairs' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:00 np0005532048 nova_compute[253661]: 2025-11-22 09:33:00.815 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.229 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.229 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.230 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.230 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.230 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.231 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.251 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.264 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.265 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Image id 878156d4-57f6-4a8b-8f4c-cbde182bb832 yields fingerprint 82db50257fd208421e31241f1b0ae2cc5ee8c9c4 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.265 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] image 878156d4-57f6-4a8b-8f4c-cbde182bb832 at (/var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4): checking#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.265 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] image 878156d4-57f6-4a8b-8f4c-cbde182bb832 at (/var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.268 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.269 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] 7da16450-9ec5-472a-99df-81f56ee341fc is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.269 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] 80bb6ea3-dbff-48a3-b804-e3d356031a23 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.269 253665 WARNING nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.270 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Active base files: /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.270 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Removable base files: /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.270 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.271 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.271 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.271 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Nov 22 04:33:01 np0005532048 nova_compute[253661]: 2025-11-22 09:33:01.271 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Nov 22 04:33:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 279 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 3.2 MiB/s wr, 110 op/s
Nov 22 04:33:02 np0005532048 nova_compute[253661]: 2025-11-22 09:33:02.262 253665 INFO nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Creating config drive at /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config.rescue#033[00m
Nov 22 04:33:02 np0005532048 nova_compute[253661]: 2025-11-22 09:33:02.269 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5gbsf03m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:02 np0005532048 nova_compute[253661]: 2025-11-22 09:33:02.414 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5gbsf03m" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:02 np0005532048 nova_compute[253661]: 2025-11-22 09:33:02.444 253665 DEBUG nova.storage.rbd_utils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] rbd image 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:02 np0005532048 nova_compute[253661]: 2025-11-22 09:33:02.449 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config.rescue 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:02 np0005532048 nova_compute[253661]: 2025-11-22 09:33:02.651 253665 DEBUG oslo_concurrency.processutils [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config.rescue 80bb6ea3-dbff-48a3-b804-e3d356031a23_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:02 np0005532048 nova_compute[253661]: 2025-11-22 09:33:02.652 253665 INFO nova.virt.libvirt.driver [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Deleting local config drive /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23/disk.config.rescue because it was imported into RBD.#033[00m
Nov 22 04:33:02 np0005532048 kernel: tap4b489529-5b: entered promiscuous mode
Nov 22 04:33:02 np0005532048 nova_compute[253661]: 2025-11-22 09:33:02.706 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:02 np0005532048 NetworkManager[48920]: <info>  [1763803982.7087] manager: (tap4b489529-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/418)
Nov 22 04:33:02 np0005532048 nova_compute[253661]: 2025-11-22 09:33:02.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:02Z|01015|binding|INFO|Claiming lport 4b489529-5b94-46ce-9810-23bef9215c04 for this chassis.
Nov 22 04:33:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:02Z|01016|binding|INFO|4b489529-5b94-46ce-9810-23bef9215c04: Claiming fa:16:3e:01:fd:71 10.100.0.2
Nov 22 04:33:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:02.722 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:fd:71 10.100.0.2'], port_security=['fa:16:3e:01:fd:71 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '80bb6ea3-dbff-48a3-b804-e3d356031a23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '5', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4b489529-5b94-46ce-9810-23bef9215c04) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:33:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:02.723 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4b489529-5b94-46ce-9810-23bef9215c04 in datapath 18e5030a-5673-404f-927e-25a76f3164ea bound to our chassis#033[00m
Nov 22 04:33:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:02.724 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:33:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:02.725 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73cac9ee-7301-441b-8801-1c4d89acf506]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:02Z|01017|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 ovn-installed in OVS
Nov 22 04:33:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:02Z|01018|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 up in Southbound
Nov 22 04:33:02 np0005532048 systemd-udevd[356731]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:33:02 np0005532048 nova_compute[253661]: 2025-11-22 09:33:02.734 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:02 np0005532048 NetworkManager[48920]: <info>  [1763803982.7467] device (tap4b489529-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:33:02 np0005532048 NetworkManager[48920]: <info>  [1763803982.7475] device (tap4b489529-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:33:02 np0005532048 systemd-machined[215941]: New machine qemu-123-instance-00000064.
Nov 22 04:33:02 np0005532048 systemd[1]: Started Virtual Machine qemu-123-instance-00000064.
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020612292032949142 of space, bias 1.0, pg target 0.6183687609884743 quantized to 32 (current 32)
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:33:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:33:02 np0005532048 nova_compute[253661]: 2025-11-22 09:33:02.990 253665 DEBUG nova.compute.manager [req-529d6eaf-a912-409e-b9ea-3835007e876e req-7c8f0a46-55fd-473f-93a0-bc993aadb15e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:33:02 np0005532048 nova_compute[253661]: 2025-11-22 09:33:02.992 253665 DEBUG oslo_concurrency.lockutils [req-529d6eaf-a912-409e-b9ea-3835007e876e req-7c8f0a46-55fd-473f-93a0-bc993aadb15e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:02 np0005532048 nova_compute[253661]: 2025-11-22 09:33:02.992 253665 DEBUG oslo_concurrency.lockutils [req-529d6eaf-a912-409e-b9ea-3835007e876e req-7c8f0a46-55fd-473f-93a0-bc993aadb15e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:02 np0005532048 nova_compute[253661]: 2025-11-22 09:33:02.992 253665 DEBUG oslo_concurrency.lockutils [req-529d6eaf-a912-409e-b9ea-3835007e876e req-7c8f0a46-55fd-473f-93a0-bc993aadb15e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:02 np0005532048 nova_compute[253661]: 2025-11-22 09:33:02.993 253665 DEBUG nova.compute.manager [req-529d6eaf-a912-409e-b9ea-3835007e876e req-7c8f0a46-55fd-473f-93a0-bc993aadb15e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:33:02 np0005532048 nova_compute[253661]: 2025-11-22 09:33:02.993 253665 WARNING nova.compute.manager [req-529d6eaf-a912-409e-b9ea-3835007e876e req-7c8f0a46-55fd-473f-93a0-bc993aadb15e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state active and task_state rescuing.#033[00m
Nov 22 04:33:03 np0005532048 nova_compute[253661]: 2025-11-22 09:33:03.498 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 80bb6ea3-dbff-48a3-b804-e3d356031a23 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:33:03 np0005532048 nova_compute[253661]: 2025-11-22 09:33:03.499 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803983.4981024, 80bb6ea3-dbff-48a3-b804-e3d356031a23 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:33:03 np0005532048 nova_compute[253661]: 2025-11-22 09:33:03.499 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:33:03 np0005532048 nova_compute[253661]: 2025-11-22 09:33:03.505 253665 DEBUG nova.compute.manager [None req-53e3e920-d03f-425d-b8c8-5768f567dd20 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:03 np0005532048 nova_compute[253661]: 2025-11-22 09:33:03.532 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:03 np0005532048 nova_compute[253661]: 2025-11-22 09:33:03.536 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:33:03 np0005532048 nova_compute[253661]: 2025-11-22 09:33:03.573 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 22 04:33:03 np0005532048 nova_compute[253661]: 2025-11-22 09:33:03.576 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803983.5018227, 80bb6ea3-dbff-48a3-b804-e3d356031a23 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:33:03 np0005532048 nova_compute[253661]: 2025-11-22 09:33:03.576 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] VM Started (Lifecycle Event)#033[00m
Nov 22 04:33:03 np0005532048 nova_compute[253661]: 2025-11-22 09:33:03.602 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:03 np0005532048 nova_compute[253661]: 2025-11-22 09:33:03.606 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:33:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:33:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 294 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 3.9 MiB/s wr, 114 op/s
Nov 22 04:33:05 np0005532048 nova_compute[253661]: 2025-11-22 09:33:05.062 253665 DEBUG nova.compute.manager [req-6f359926-6254-4b85-952a-1229500866f2 req-44e17618-8c08-420d-aa49-5fe03da0042f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:33:05 np0005532048 nova_compute[253661]: 2025-11-22 09:33:05.063 253665 DEBUG oslo_concurrency.lockutils [req-6f359926-6254-4b85-952a-1229500866f2 req-44e17618-8c08-420d-aa49-5fe03da0042f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:05 np0005532048 nova_compute[253661]: 2025-11-22 09:33:05.063 253665 DEBUG oslo_concurrency.lockutils [req-6f359926-6254-4b85-952a-1229500866f2 req-44e17618-8c08-420d-aa49-5fe03da0042f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:05 np0005532048 nova_compute[253661]: 2025-11-22 09:33:05.064 253665 DEBUG oslo_concurrency.lockutils [req-6f359926-6254-4b85-952a-1229500866f2 req-44e17618-8c08-420d-aa49-5fe03da0042f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:05 np0005532048 nova_compute[253661]: 2025-11-22 09:33:05.064 253665 DEBUG nova.compute.manager [req-6f359926-6254-4b85-952a-1229500866f2 req-44e17618-8c08-420d-aa49-5fe03da0042f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:33:05 np0005532048 nova_compute[253661]: 2025-11-22 09:33:05.064 253665 WARNING nova.compute.manager [req-6f359926-6254-4b85-952a-1229500866f2 req-44e17618-8c08-420d-aa49-5fe03da0042f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state rescued and task_state None.#033[00m
Nov 22 04:33:05 np0005532048 nova_compute[253661]: 2025-11-22 09:33:05.208 253665 INFO nova.compute.manager [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Unrescuing#033[00m
Nov 22 04:33:05 np0005532048 nova_compute[253661]: 2025-11-22 09:33:05.208 253665 DEBUG oslo_concurrency.lockutils [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:33:05 np0005532048 nova_compute[253661]: 2025-11-22 09:33:05.209 253665 DEBUG oslo_concurrency.lockutils [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquired lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:33:05 np0005532048 nova_compute[253661]: 2025-11-22 09:33:05.209 253665 DEBUG nova.network.neutron [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:33:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 295 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 3.3 MiB/s wr, 93 op/s
Nov 22 04:33:05 np0005532048 nova_compute[253661]: 2025-11-22 09:33:05.818 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:06 np0005532048 nova_compute[253661]: 2025-11-22 09:33:06.901 253665 DEBUG nova.network.neutron [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Updating instance_info_cache with network_info: [{"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:33:06 np0005532048 nova_compute[253661]: 2025-11-22 09:33:06.929 253665 DEBUG oslo_concurrency.lockutils [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Releasing lock "refresh_cache-80bb6ea3-dbff-48a3-b804-e3d356031a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:33:06 np0005532048 nova_compute[253661]: 2025-11-22 09:33:06.930 253665 DEBUG nova.objects.instance [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'flavor' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:07 np0005532048 kernel: tap4b489529-5b (unregistering): left promiscuous mode
Nov 22 04:33:07 np0005532048 NetworkManager[48920]: <info>  [1763803987.0281] device (tap4b489529-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:33:07 np0005532048 nova_compute[253661]: 2025-11-22 09:33:07.040 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:07Z|01019|binding|INFO|Releasing lport 4b489529-5b94-46ce-9810-23bef9215c04 from this chassis (sb_readonly=0)
Nov 22 04:33:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:07Z|01020|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 down in Southbound
Nov 22 04:33:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:07Z|01021|binding|INFO|Removing iface tap4b489529-5b ovn-installed in OVS
Nov 22 04:33:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:07.049 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:fd:71 10.100.0.2'], port_security=['fa:16:3e:01:fd:71 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '80bb6ea3-dbff-48a3-b804-e3d356031a23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '6', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4b489529-5b94-46ce-9810-23bef9215c04) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:33:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:07.055 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4b489529-5b94-46ce-9810-23bef9215c04 in datapath 18e5030a-5673-404f-927e-25a76f3164ea unbound from our chassis#033[00m
Nov 22 04:33:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:07.060 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:33:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:07.061 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ef42efde-dd66-492a-b0b6-421ba8a8b208]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:07 np0005532048 nova_compute[253661]: 2025-11-22 09:33:07.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:07 np0005532048 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Deactivated successfully.
Nov 22 04:33:07 np0005532048 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Consumed 4.311s CPU time.
Nov 22 04:33:07 np0005532048 systemd-machined[215941]: Machine qemu-123-instance-00000064 terminated.
Nov 22 04:33:07 np0005532048 nova_compute[253661]: 2025-11-22 09:33:07.213 253665 INFO nova.virt.libvirt.driver [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance destroyed successfully.#033[00m
Nov 22 04:33:07 np0005532048 nova_compute[253661]: 2025-11-22 09:33:07.213 253665 DEBUG nova.objects.instance [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'numa_topology' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:07 np0005532048 kernel: tap4b489529-5b: entered promiscuous mode
Nov 22 04:33:07 np0005532048 systemd-udevd[356920]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:33:07 np0005532048 NetworkManager[48920]: <info>  [1763803987.3543] manager: (tap4b489529-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/419)
Nov 22 04:33:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:07Z|01022|binding|INFO|Claiming lport 4b489529-5b94-46ce-9810-23bef9215c04 for this chassis.
Nov 22 04:33:07 np0005532048 nova_compute[253661]: 2025-11-22 09:33:07.355 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:07Z|01023|binding|INFO|4b489529-5b94-46ce-9810-23bef9215c04: Claiming fa:16:3e:01:fd:71 10.100.0.2
Nov 22 04:33:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:07.361 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:fd:71 10.100.0.2'], port_security=['fa:16:3e:01:fd:71 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '80bb6ea3-dbff-48a3-b804-e3d356031a23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '6', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4b489529-5b94-46ce-9810-23bef9215c04) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:33:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:07.363 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4b489529-5b94-46ce-9810-23bef9215c04 in datapath 18e5030a-5673-404f-927e-25a76f3164ea bound to our chassis#033[00m
Nov 22 04:33:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:07.364 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:33:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:07.365 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4532733f-b276-4ce4-b7c0-cf540ea81745]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:07 np0005532048 NetworkManager[48920]: <info>  [1763803987.3700] device (tap4b489529-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:33:07 np0005532048 NetworkManager[48920]: <info>  [1763803987.3710] device (tap4b489529-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:33:07 np0005532048 nova_compute[253661]: 2025-11-22 09:33:07.374 253665 DEBUG nova.compute.manager [req-6fbc89ad-46a8-4c68-b558-e6159fbabfff req-42cf4141-7140-4941-8b95-c531ea274474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:33:07 np0005532048 nova_compute[253661]: 2025-11-22 09:33:07.374 253665 DEBUG oslo_concurrency.lockutils [req-6fbc89ad-46a8-4c68-b558-e6159fbabfff req-42cf4141-7140-4941-8b95-c531ea274474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:07 np0005532048 nova_compute[253661]: 2025-11-22 09:33:07.375 253665 DEBUG oslo_concurrency.lockutils [req-6fbc89ad-46a8-4c68-b558-e6159fbabfff req-42cf4141-7140-4941-8b95-c531ea274474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:07 np0005532048 nova_compute[253661]: 2025-11-22 09:33:07.375 253665 DEBUG oslo_concurrency.lockutils [req-6fbc89ad-46a8-4c68-b558-e6159fbabfff req-42cf4141-7140-4941-8b95-c531ea274474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:07 np0005532048 nova_compute[253661]: 2025-11-22 09:33:07.375 253665 DEBUG nova.compute.manager [req-6fbc89ad-46a8-4c68-b558-e6159fbabfff req-42cf4141-7140-4941-8b95-c531ea274474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:33:07 np0005532048 nova_compute[253661]: 2025-11-22 09:33:07.375 253665 WARNING nova.compute.manager [req-6fbc89ad-46a8-4c68-b558-e6159fbabfff req-42cf4141-7140-4941-8b95-c531ea274474 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 22 04:33:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:07Z|01024|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 ovn-installed in OVS
Nov 22 04:33:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:07Z|01025|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 up in Southbound
Nov 22 04:33:07 np0005532048 nova_compute[253661]: 2025-11-22 09:33:07.379 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:07 np0005532048 systemd-machined[215941]: New machine qemu-124-instance-00000064.
Nov 22 04:33:07 np0005532048 systemd[1]: Started Virtual Machine qemu-124-instance-00000064.
Nov 22 04:33:07 np0005532048 nova_compute[253661]: 2025-11-22 09:33:07.675 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803972.6739175, d60d8746-9288-4829-8073-bed8cf04d748 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:33:07 np0005532048 nova_compute[253661]: 2025-11-22 09:33:07.675 253665 INFO nova.compute.manager [-] [instance: d60d8746-9288-4829-8073-bed8cf04d748] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:33:07 np0005532048 nova_compute[253661]: 2025-11-22 09:33:07.698 253665 DEBUG nova.compute.manager [None req-5ace31e9-a4b8-4484-8d74-63b1873015f3 - - - - - -] [instance: d60d8746-9288-4829-8073-bed8cf04d748] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:07 np0005532048 nova_compute[253661]: 2025-11-22 09:33:07.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 295 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 995 KiB/s rd, 2.3 MiB/s wr, 90 op/s
Nov 22 04:33:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:33:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:33:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:33:08 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 403ea977-11d0-4979-8d25-a61f8e3bedf5 does not exist
Nov 22 04:33:08 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev b3a59441-e1b8-4ef1-8a6f-84630c20efcf does not exist
Nov 22 04:33:08 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 1e60d011-1d12-4a08-b595-40ce54b23239 does not exist
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:33:08 np0005532048 nova_compute[253661]: 2025-11-22 09:33:08.063 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 80bb6ea3-dbff-48a3-b804-e3d356031a23 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:33:08 np0005532048 nova_compute[253661]: 2025-11-22 09:33:08.064 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803988.0632834, 80bb6ea3-dbff-48a3-b804-e3d356031a23 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:33:08 np0005532048 nova_compute[253661]: 2025-11-22 09:33:08.064 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:33:08 np0005532048 nova_compute[253661]: 2025-11-22 09:33:08.095 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:08 np0005532048 nova_compute[253661]: 2025-11-22 09:33:08.105 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:33:08 np0005532048 nova_compute[253661]: 2025-11-22 09:33:08.122 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 22 04:33:08 np0005532048 nova_compute[253661]: 2025-11-22 09:33:08.123 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763803988.0669854, 80bb6ea3-dbff-48a3-b804-e3d356031a23 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:33:08 np0005532048 nova_compute[253661]: 2025-11-22 09:33:08.123 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] VM Started (Lifecycle Event)#033[00m
Nov 22 04:33:08 np0005532048 nova_compute[253661]: 2025-11-22 09:33:08.144 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:08 np0005532048 nova_compute[253661]: 2025-11-22 09:33:08.149 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:33:08 np0005532048 nova_compute[253661]: 2025-11-22 09:33:08.166 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 22 04:33:08 np0005532048 nova_compute[253661]: 2025-11-22 09:33:08.541 253665 DEBUG nova.compute.manager [None req-9bb7ad9a-8dfc-4b59-81f3-ac58c5c0a330 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.657202) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803988657252, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 626, "num_deletes": 252, "total_data_size": 707025, "memory_usage": 720136, "flush_reason": "Manual Compaction"}
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803988664985, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 701325, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43345, "largest_seqno": 43970, "table_properties": {"data_size": 698000, "index_size": 1233, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 6563, "raw_average_key_size": 16, "raw_value_size": 691443, "raw_average_value_size": 1732, "num_data_blocks": 55, "num_entries": 399, "num_filter_entries": 399, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803944, "oldest_key_time": 1763803944, "file_creation_time": 1763803988, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 7837 microseconds, and 3579 cpu microseconds.
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.665035) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 701325 bytes OK
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.665059) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.667264) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.667279) EVENT_LOG_v1 {"time_micros": 1763803988667274, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.667299) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 703663, prev total WAL file size 703663, number of live WAL files 2.
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.667951) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323533' seq:0, type:0; will stop at (end)
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(684KB)], [98(8567KB)]
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803988668003, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 9474220, "oldest_snapshot_seqno": -1}
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6488 keys, 8749073 bytes, temperature: kUnknown
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803988741751, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 8749073, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8705163, "index_size": 26584, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 168719, "raw_average_key_size": 26, "raw_value_size": 8588225, "raw_average_value_size": 1323, "num_data_blocks": 1042, "num_entries": 6488, "num_filter_entries": 6488, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763803988, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.742062) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 8749073 bytes
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.744565) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.3 rd, 118.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.4 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(26.0) write-amplify(12.5) OK, records in: 7003, records dropped: 515 output_compression: NoCompression
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.744592) EVENT_LOG_v1 {"time_micros": 1763803988744577, "job": 58, "event": "compaction_finished", "compaction_time_micros": 73865, "compaction_time_cpu_micros": 30136, "output_level": 6, "num_output_files": 1, "total_output_size": 8749073, "num_input_records": 7003, "num_output_records": 6488, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803988744995, "job": 58, "event": "table_file_deletion", "file_number": 100}
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763803988746675, "job": 58, "event": "table_file_deletion", "file_number": 98}
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.667876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.746832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.746839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.746841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.746843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:33:08.746845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:33:08 np0005532048 podman[357293]: 2025-11-22 09:33:08.759780083 +0000 UTC m=+0.043887640 container create 9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 04:33:08 np0005532048 systemd[1]: Started libpod-conmon-9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5.scope.
Nov 22 04:33:08 np0005532048 podman[357293]: 2025-11-22 09:33:08.737983658 +0000 UTC m=+0.022091235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:33:08 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:33:08 np0005532048 podman[357293]: 2025-11-22 09:33:08.876528828 +0000 UTC m=+0.160636405 container init 9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leakey, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:33:08 np0005532048 podman[357293]: 2025-11-22 09:33:08.88601544 +0000 UTC m=+0.170122987 container start 9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leakey, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:33:08 np0005532048 podman[357293]: 2025-11-22 09:33:08.889726342 +0000 UTC m=+0.173833889 container attach 9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:33:08 np0005532048 systemd[1]: libpod-9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5.scope: Deactivated successfully.
Nov 22 04:33:08 np0005532048 eager_leakey[357309]: 167 167
Nov 22 04:33:08 np0005532048 conmon[357309]: conmon 9fb1dd3785202700188a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5.scope/container/memory.events
Nov 22 04:33:08 np0005532048 podman[357293]: 2025-11-22 09:33:08.896042568 +0000 UTC m=+0.180150115 container died 9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leakey, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 04:33:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay-36464d677d3d139b6d49c9dd766396375ddb61d390f8ca9dc556ec0fbea28f36-merged.mount: Deactivated successfully.
Nov 22 04:33:08 np0005532048 podman[357293]: 2025-11-22 09:33:08.940665315 +0000 UTC m=+0.224772862 container remove 9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_leakey, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:33:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:33:08 np0005532048 systemd[1]: libpod-conmon-9fb1dd3785202700188a08b21d620e60853f6ccaed0f42b42bb04e3b031efcf5.scope: Deactivated successfully.
Nov 22 04:33:09 np0005532048 podman[357334]: 2025-11-22 09:33:09.140890223 +0000 UTC m=+0.040951678 container create a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:33:09 np0005532048 systemd[1]: Started libpod-conmon-a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358.scope.
Nov 22 04:33:09 np0005532048 podman[357334]: 2025-11-22 09:33:09.123085516 +0000 UTC m=+0.023147001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:33:09 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:33:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211fdb64900345ce9f319e4b8651b4fef6577b22ed4bd917fba4b293c0bf202b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:33:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211fdb64900345ce9f319e4b8651b4fef6577b22ed4bd917fba4b293c0bf202b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:33:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211fdb64900345ce9f319e4b8651b4fef6577b22ed4bd917fba4b293c0bf202b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:33:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211fdb64900345ce9f319e4b8651b4fef6577b22ed4bd917fba4b293c0bf202b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:33:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211fdb64900345ce9f319e4b8651b4fef6577b22ed4bd917fba4b293c0bf202b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:33:09 np0005532048 podman[357334]: 2025-11-22 09:33:09.255498544 +0000 UTC m=+0.155560019 container init a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:33:09 np0005532048 podman[357334]: 2025-11-22 09:33:09.264488556 +0000 UTC m=+0.164550011 container start a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 04:33:09 np0005532048 podman[357334]: 2025-11-22 09:33:09.269479518 +0000 UTC m=+0.169540993 container attach a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.470 253665 DEBUG nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.471 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.471 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.471 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.472 253665 DEBUG nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.472 253665 WARNING nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.472 253665 DEBUG nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.472 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.472 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.473 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.473 253665 DEBUG nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.473 253665 WARNING nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.474 253665 DEBUG nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.474 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.474 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.475 253665 DEBUG oslo_concurrency.lockutils [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.475 253665 DEBUG nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:33:09 np0005532048 nova_compute[253661]: 2025-11-22 09:33:09.475 253665 WARNING nova.compute.manager [req-8583b9cf-cd88-4103-9b09-2bda28266dce req-c3b504a6-bdc0-45df-b69b-f9c3fbd00c50 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:33:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 275 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 142 op/s
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.315 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.317 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.318 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.318 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.319 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.320 253665 INFO nova.compute.manager [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Terminating instance#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.321 253665 DEBUG nova.compute.manager [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:33:10 np0005532048 kernel: tap4b489529-5b (unregistering): left promiscuous mode
Nov 22 04:33:10 np0005532048 NetworkManager[48920]: <info>  [1763803990.3667] device (tap4b489529-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:33:10 np0005532048 nifty_moore[357351]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:33:10 np0005532048 nifty_moore[357351]: --> relative data size: 1.0
Nov 22 04:33:10 np0005532048 nifty_moore[357351]: --> All data devices are unavailable
Nov 22 04:33:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:10Z|01026|binding|INFO|Releasing lport 4b489529-5b94-46ce-9810-23bef9215c04 from this chassis (sb_readonly=0)
Nov 22 04:33:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:10Z|01027|binding|INFO|Setting lport 4b489529-5b94-46ce-9810-23bef9215c04 down in Southbound
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.378 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:10Z|01028|binding|INFO|Removing iface tap4b489529-5b ovn-installed in OVS
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.382 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:10.393 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:fd:71 10.100.0.2'], port_security=['fa:16:3e:01:fd:71 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '80bb6ea3-dbff-48a3-b804-e3d356031a23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '8', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4b489529-5b94-46ce-9810-23bef9215c04) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:33:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:10.394 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4b489529-5b94-46ce-9810-23bef9215c04 in datapath 18e5030a-5673-404f-927e-25a76f3164ea unbound from our chassis#033[00m
Nov 22 04:33:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:10.395 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:33:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:10.396 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[414474ee-541f-4c44-b7dc-869276b9d5ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.396 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:10 np0005532048 systemd[1]: libpod-a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358.scope: Deactivated successfully.
Nov 22 04:33:10 np0005532048 systemd[1]: libpod-a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358.scope: Consumed 1.073s CPU time.
Nov 22 04:33:10 np0005532048 podman[357334]: 2025-11-22 09:33:10.409819186 +0000 UTC m=+1.309880651 container died a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:33:10 np0005532048 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000064.scope: Deactivated successfully.
Nov 22 04:33:10 np0005532048 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000064.scope: Consumed 2.719s CPU time.
Nov 22 04:33:10 np0005532048 systemd-machined[215941]: Machine qemu-124-instance-00000064 terminated.
Nov 22 04:33:10 np0005532048 systemd[1]: var-lib-containers-storage-overlay-211fdb64900345ce9f319e4b8651b4fef6577b22ed4bd917fba4b293c0bf202b-merged.mount: Deactivated successfully.
Nov 22 04:33:10 np0005532048 podman[357334]: 2025-11-22 09:33:10.469221238 +0000 UTC m=+1.369282693 container remove a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moore, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:33:10 np0005532048 systemd[1]: libpod-conmon-a88718cf65065aa6dc97d4ceda26a1a039c725cc9401a9bb80240dbefa4c0358.scope: Deactivated successfully.
Nov 22 04:33:10 np0005532048 kernel: tap4b489529-5b: entered promiscuous mode
Nov 22 04:33:10 np0005532048 kernel: tap4b489529-5b (unregistering): left promiscuous mode
Nov 22 04:33:10 np0005532048 NetworkManager[48920]: <info>  [1763803990.5510] manager: (tap4b489529-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/420)
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.559 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.570 253665 INFO nova.virt.libvirt.driver [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Instance destroyed successfully.#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.570 253665 DEBUG nova.objects.instance [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'resources' on Instance uuid 80bb6ea3-dbff-48a3-b804-e3d356031a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.585 253665 DEBUG nova.virt.libvirt.vif [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1406665007',display_name='tempest-ServerRescueTestJSON-server-1406665007',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1406665007',id=100,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:33:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='93c8020137e04db486facc42cfe30f23',ramdisk_id='',reservation_id='r-lhedv0vk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-264324954',owner_user_name='tempest-ServerRescueTestJSON-264324954-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:33:08Z,user_data=None,user_id='04e47309bea74c04b0750912db283ae1',uuid=80bb6ea3-dbff-48a3-b804-e3d356031a23,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.586 253665 DEBUG nova.network.os_vif_util [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converting VIF {"id": "4b489529-5b94-46ce-9810-23bef9215c04", "address": "fa:16:3e:01:fd:71", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b489529-5b", "ovs_interfaceid": "4b489529-5b94-46ce-9810-23bef9215c04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.587 253665 DEBUG nova.network.os_vif_util [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:fd:71,bridge_name='br-int',has_traffic_filtering=True,id=4b489529-5b94-46ce-9810-23bef9215c04,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b489529-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.587 253665 DEBUG os_vif [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:fd:71,bridge_name='br-int',has_traffic_filtering=True,id=4b489529-5b94-46ce-9810-23bef9215c04,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b489529-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.590 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.590 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b489529-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.592 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.595 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.598 253665 INFO os_vif [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:fd:71,bridge_name='br-int',has_traffic_filtering=True,id=4b489529-5b94-46ce-9810-23bef9215c04,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b489529-5b')#033[00m
Nov 22 04:33:10 np0005532048 nova_compute[253661]: 2025-11-22 09:33:10.820 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:11 np0005532048 podman[357571]: 2025-11-22 09:33:11.181292524 +0000 UTC m=+0.052394511 container create 73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:33:11 np0005532048 systemd[1]: Started libpod-conmon-73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c.scope.
Nov 22 04:33:11 np0005532048 podman[357571]: 2025-11-22 09:33:11.155849148 +0000 UTC m=+0.026951185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:33:11 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:33:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:11.402 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:11.406 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:33:11 np0005532048 podman[357571]: 2025-11-22 09:33:11.507160974 +0000 UTC m=+0.378263001 container init 73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 04:33:11 np0005532048 podman[357571]: 2025-11-22 09:33:11.518266247 +0000 UTC m=+0.389368274 container start 73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:33:11 np0005532048 epic_sinoussi[357588]: 167 167
Nov 22 04:33:11 np0005532048 systemd[1]: libpod-73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c.scope: Deactivated successfully.
Nov 22 04:33:11 np0005532048 conmon[357588]: conmon 73b1b4f676ef64d3a26d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c.scope/container/memory.events
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.580 253665 DEBUG nova.compute.manager [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.581 253665 DEBUG oslo_concurrency.lockutils [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.581 253665 DEBUG oslo_concurrency.lockutils [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.581 253665 DEBUG oslo_concurrency.lockutils [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.582 253665 DEBUG nova.compute.manager [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.582 253665 DEBUG nova.compute.manager [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-unplugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.582 253665 DEBUG nova.compute.manager [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.583 253665 DEBUG oslo_concurrency.lockutils [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.583 253665 DEBUG oslo_concurrency.lockutils [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.583 253665 DEBUG oslo_concurrency.lockutils [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.583 253665 DEBUG nova.compute.manager [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] No waiting events found dispatching network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.584 253665 WARNING nova.compute.manager [req-37f608e9-5468-4263-ad7d-83c45f476a9d req-c5784fc9-1156-4a5b-86b6-0347e6dba955 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received unexpected event network-vif-plugged-4b489529-5b94-46ce-9810-23bef9215c04 for instance with vm_state active and task_state deleting.
Nov 22 04:33:11 np0005532048 podman[357571]: 2025-11-22 09:33:11.624080752 +0000 UTC m=+0.495182829 container attach 73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:33:11 np0005532048 podman[357571]: 2025-11-22 09:33:11.62481034 +0000 UTC m=+0.495912367 container died 73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 22 04:33:11 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8287e690c9884eaf8ceb14d39b030eb25bdf1764bed8d3bdb89b8ee210890fca-merged.mount: Deactivated successfully.
Nov 22 04:33:11 np0005532048 podman[357571]: 2025-11-22 09:33:11.667172803 +0000 UTC m=+0.538274800 container remove 73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 04:33:11 np0005532048 systemd[1]: libpod-conmon-73b1b4f676ef64d3a26ddb1a8926f4d8929792557eddf11f5e6b6dc90921258c.scope: Deactivated successfully.
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.713 253665 INFO nova.virt.libvirt.driver [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Deleting instance files /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23_del
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.715 253665 INFO nova.virt.libvirt.driver [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Deletion of /var/lib/nova/instances/80bb6ea3-dbff-48a3-b804-e3d356031a23_del complete
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.776 253665 INFO nova.compute.manager [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Took 1.45 seconds to destroy the instance on the hypervisor.
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.777 253665 DEBUG oslo.service.loopingcall [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.778 253665 DEBUG nova.compute.manager [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:33:11 np0005532048 nova_compute[253661]: 2025-11-22 09:33:11.778 253665 DEBUG nova.network.neutron [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:33:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 275 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 801 KiB/s wr, 102 op/s
Nov 22 04:33:11 np0005532048 podman[357612]: 2025-11-22 09:33:11.870694372 +0000 UTC m=+0.051917269 container create cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:33:11 np0005532048 systemd[1]: Started libpod-conmon-cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893.scope.
Nov 22 04:33:11 np0005532048 podman[357612]: 2025-11-22 09:33:11.853923969 +0000 UTC m=+0.035146886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:33:11 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:33:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf79ff4e4aaa9d2c7c5cfae9a6c12163cbe902729884ab47b6ad5dea79d17dc3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:33:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf79ff4e4aaa9d2c7c5cfae9a6c12163cbe902729884ab47b6ad5dea79d17dc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:33:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf79ff4e4aaa9d2c7c5cfae9a6c12163cbe902729884ab47b6ad5dea79d17dc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:33:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf79ff4e4aaa9d2c7c5cfae9a6c12163cbe902729884ab47b6ad5dea79d17dc3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:33:11 np0005532048 podman[357612]: 2025-11-22 09:33:11.985306503 +0000 UTC m=+0.166529420 container init cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 04:33:12 np0005532048 podman[357612]: 2025-11-22 09:33:12.004078304 +0000 UTC m=+0.185301201 container start cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:33:12 np0005532048 podman[357612]: 2025-11-22 09:33:12.008137104 +0000 UTC m=+0.189360021 container attach cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:33:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:33:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/282528922' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:33:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:33:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/282528922' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:33:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:12.409 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]: {
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:    "0": [
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:        {
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "devices": [
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "/dev/loop3"
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            ],
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "lv_name": "ceph_lv0",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "lv_size": "21470642176",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "name": "ceph_lv0",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "tags": {
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.cluster_name": "ceph",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.crush_device_class": "",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.encrypted": "0",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.osd_id": "0",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.type": "block",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.vdo": "0"
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            },
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "type": "block",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "vg_name": "ceph_vg0"
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:        }
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:    ],
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:    "1": [
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:        {
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "devices": [
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "/dev/loop4"
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            ],
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "lv_name": "ceph_lv1",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "lv_size": "21470642176",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "name": "ceph_lv1",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "tags": {
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.cluster_name": "ceph",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.crush_device_class": "",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.encrypted": "0",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.osd_id": "1",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.type": "block",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.vdo": "0"
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            },
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "type": "block",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "vg_name": "ceph_vg1"
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:        }
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:    ],
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:    "2": [
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:        {
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "devices": [
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "/dev/loop5"
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            ],
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "lv_name": "ceph_lv2",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "lv_size": "21470642176",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "name": "ceph_lv2",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "tags": {
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.cluster_name": "ceph",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.crush_device_class": "",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.encrypted": "0",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.osd_id": "2",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.type": "block",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:                "ceph.vdo": "0"
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            },
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "type": "block",
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:            "vg_name": "ceph_vg2"
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:        }
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]:    ]
Nov 22 04:33:12 np0005532048 stupefied_keldysh[357629]: }
Nov 22 04:33:12 np0005532048 systemd[1]: libpod-cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893.scope: Deactivated successfully.
Nov 22 04:33:12 np0005532048 podman[357612]: 2025-11-22 09:33:12.842981622 +0000 UTC m=+1.024204519 container died cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:33:12 np0005532048 systemd[1]: var-lib-containers-storage-overlay-bf79ff4e4aaa9d2c7c5cfae9a6c12163cbe902729884ab47b6ad5dea79d17dc3-merged.mount: Deactivated successfully.
Nov 22 04:33:12 np0005532048 podman[357612]: 2025-11-22 09:33:12.913000656 +0000 UTC m=+1.094223553 container remove cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 04:33:12 np0005532048 systemd[1]: libpod-conmon-cce770a8c1144db2dd5990a89d7bc5bfaad6df312925f2bb73698e6fe5cdc893.scope: Deactivated successfully.
Nov 22 04:33:13 np0005532048 nova_compute[253661]: 2025-11-22 09:33:13.566 253665 DEBUG nova.network.neutron [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:33:13 np0005532048 nova_compute[253661]: 2025-11-22 09:33:13.586 253665 INFO nova.compute.manager [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Took 1.81 seconds to deallocate network for instance.
Nov 22 04:33:13 np0005532048 nova_compute[253661]: 2025-11-22 09:33:13.628 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:33:13 np0005532048 nova_compute[253661]: 2025-11-22 09:33:13.629 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:33:13 np0005532048 podman[357792]: 2025-11-22 09:33:13.642567503 +0000 UTC m=+0.062076770 container create 1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_buck, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:33:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:33:13 np0005532048 nova_compute[253661]: 2025-11-22 09:33:13.692 253665 DEBUG nova.compute.manager [req-3ce4fe7b-df81-4ba6-b837-45ad9d1f894b req-24eb43d5-ec0b-4c06-b301-ce44a4bcb6c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Received event network-vif-deleted-4b489529-5b94-46ce-9810-23bef9215c04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:33:13 np0005532048 systemd[1]: Started libpod-conmon-1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e.scope.
Nov 22 04:33:13 np0005532048 nova_compute[253661]: 2025-11-22 09:33:13.708 253665 DEBUG oslo_concurrency.processutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:33:13 np0005532048 podman[357792]: 2025-11-22 09:33:13.621870713 +0000 UTC m=+0.041380020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:33:13 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:33:13 np0005532048 podman[357792]: 2025-11-22 09:33:13.76314436 +0000 UTC m=+0.182653657 container init 1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_buck, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 04:33:13 np0005532048 podman[357792]: 2025-11-22 09:33:13.777725769 +0000 UTC m=+0.197235036 container start 1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_buck, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 04:33:13 np0005532048 podman[357792]: 2025-11-22 09:33:13.781900202 +0000 UTC m=+0.201409489 container attach 1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_buck, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:33:13 np0005532048 exciting_buck[357809]: 167 167
Nov 22 04:33:13 np0005532048 systemd[1]: libpod-1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e.scope: Deactivated successfully.
Nov 22 04:33:13 np0005532048 podman[357792]: 2025-11-22 09:33:13.789872798 +0000 UTC m=+0.209382115 container died 1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 04:33:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 204 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 803 KiB/s wr, 181 op/s
Nov 22 04:33:13 np0005532048 systemd[1]: var-lib-containers-storage-overlay-927d722d656c188d70f46f8c61fc4d984254813916174383f307e07ba8cad23c-merged.mount: Deactivated successfully.
Nov 22 04:33:13 np0005532048 podman[357792]: 2025-11-22 09:33:13.833544533 +0000 UTC m=+0.253053800 container remove 1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:33:13 np0005532048 systemd[1]: libpod-conmon-1af7b9dff2908cf074ccbf634abc3170e6dfe6c16ecf35d77ef0eff2f3254e1e.scope: Deactivated successfully.
Nov 22 04:33:14 np0005532048 podman[357853]: 2025-11-22 09:33:14.029935766 +0000 UTC m=+0.049499499 container create 4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:33:14 np0005532048 systemd[1]: Started libpod-conmon-4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81.scope.
Nov 22 04:33:14 np0005532048 podman[357853]: 2025-11-22 09:33:14.008206452 +0000 UTC m=+0.027770215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:33:14 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:33:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5896d396e773f87974927bf8b8d097e130f891c55f324865510225e1134e87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:33:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5896d396e773f87974927bf8b8d097e130f891c55f324865510225e1134e87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:33:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5896d396e773f87974927bf8b8d097e130f891c55f324865510225e1134e87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:33:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5896d396e773f87974927bf8b8d097e130f891c55f324865510225e1134e87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:33:14 np0005532048 podman[357853]: 2025-11-22 09:33:14.145739847 +0000 UTC m=+0.165303580 container init 4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 04:33:14 np0005532048 podman[357853]: 2025-11-22 09:33:14.153682963 +0000 UTC m=+0.173246696 container start 4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 04:33:14 np0005532048 podman[357853]: 2025-11-22 09:33:14.157913407 +0000 UTC m=+0.177477160 container attach 4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 04:33:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:33:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2225321112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:33:14 np0005532048 nova_compute[253661]: 2025-11-22 09:33:14.187 253665 DEBUG oslo_concurrency.processutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:14 np0005532048 nova_compute[253661]: 2025-11-22 09:33:14.195 253665 DEBUG nova.compute.provider_tree [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:33:14 np0005532048 nova_compute[253661]: 2025-11-22 09:33:14.212 253665 DEBUG nova.scheduler.client.report [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:33:14 np0005532048 nova_compute[253661]: 2025-11-22 09:33:14.230 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:14 np0005532048 nova_compute[253661]: 2025-11-22 09:33:14.257 253665 INFO nova.scheduler.client.report [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Deleted allocations for instance 80bb6ea3-dbff-48a3-b804-e3d356031a23#033[00m
Nov 22 04:33:14 np0005532048 nova_compute[253661]: 2025-11-22 09:33:14.329 253665 DEBUG oslo_concurrency.lockutils [None req-0710c27c-ec9e-4ae2-9d3a-e7a0f568d319 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "80bb6ea3-dbff-48a3-b804-e3d356031a23" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]: {
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:        "osd_id": 1,
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:        "type": "bluestore"
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:    },
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:        "osd_id": 0,
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:        "type": "bluestore"
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:    },
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:        "osd_id": 2,
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:        "type": "bluestore"
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]:    }
Nov 22 04:33:15 np0005532048 condescending_sanderson[357870]: }
Nov 22 04:33:15 np0005532048 systemd[1]: libpod-4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81.scope: Deactivated successfully.
Nov 22 04:33:15 np0005532048 systemd[1]: libpod-4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81.scope: Consumed 1.128s CPU time.
Nov 22 04:33:15 np0005532048 podman[357853]: 2025-11-22 09:33:15.276787315 +0000 UTC m=+1.296351068 container died 4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 22 04:33:15 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7f5896d396e773f87974927bf8b8d097e130f891c55f324865510225e1134e87-merged.mount: Deactivated successfully.
Nov 22 04:33:15 np0005532048 podman[357853]: 2025-11-22 09:33:15.385212213 +0000 UTC m=+1.404775956 container remove 4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sanderson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:33:15 np0005532048 systemd[1]: libpod-conmon-4e5bbf22157cd086b0cf9d06aaa274bf3ea07244b37be0d31701b4876ef9fe81.scope: Deactivated successfully.
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.424 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.426 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.427 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.428 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.428 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.429 253665 INFO nova.compute.manager [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Terminating instance#033[00m
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.430 253665 DEBUG nova.compute.manager [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:33:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:33:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:33:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:33:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:33:15 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 4aeea8fa-eadd-48c7-a40d-9719dcbb0b23 does not exist
Nov 22 04:33:15 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev adcd9e4d-e58f-4d04-a693-682e3c112674 does not exist
Nov 22 04:33:15 np0005532048 kernel: tap5b1477f9-c3 (unregistering): left promiscuous mode
Nov 22 04:33:15 np0005532048 NetworkManager[48920]: <info>  [1763803995.5044] device (tap5b1477f9-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:33:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:15Z|01029|binding|INFO|Releasing lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 from this chassis (sb_readonly=0)
Nov 22 04:33:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:15Z|01030|binding|INFO|Setting lport 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 down in Southbound
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.513 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:15Z|01031|binding|INFO|Removing iface tap5b1477f9-c3 ovn-installed in OVS
Nov 22 04:33:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:15.523 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:b5:90 10.100.0.7'], port_security=['fa:16:3e:60:b5:90 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7da16450-9ec5-472a-99df-81f56ee341fc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18e5030a-5673-404f-927e-25a76f3164ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '93c8020137e04db486facc42cfe30f23', 'neutron:revision_number': '6', 'neutron:security_group_ids': '280afeed-2917-47fe-ab86-d6dcd8942915', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=289660ad-5d8a-4077-9b21-28105943634e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=5b1477f9-c3cf-4bac-95a5-109e7ae8d852) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:33:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:15.525 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 5b1477f9-c3cf-4bac-95a5-109e7ae8d852 in datapath 18e5030a-5673-404f-927e-25a76f3164ea unbound from our chassis#033[00m
Nov 22 04:33:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:15.526 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18e5030a-5673-404f-927e-25a76f3164ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:33:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:15.527 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa5595e-3d81-4f87-a45e-1c049c12321e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.531 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:15 np0005532048 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000062.scope: Deactivated successfully.
Nov 22 04:33:15 np0005532048 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000062.scope: Consumed 16.482s CPU time.
Nov 22 04:33:15 np0005532048 systemd-machined[215941]: Machine qemu-120-instance-00000062 terminated.
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.598 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:15 np0005532048 podman[357943]: 2025-11-22 09:33:15.602992604 +0000 UTC m=+0.067065522 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:33:15 np0005532048 podman[357947]: 2025-11-22 09:33:15.610495388 +0000 UTC m=+0.075712604 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.660 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.669 253665 INFO nova.virt.libvirt.driver [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Instance destroyed successfully.#033[00m
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.670 253665 DEBUG nova.objects.instance [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lazy-loading 'resources' on Instance uuid 7da16450-9ec5-472a-99df-81f56ee341fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.684 253665 DEBUG nova.virt.libvirt.vif [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:31:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1786356758',display_name='tempest-ServerRescueTestJSON-server-1786356758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1786356758',id=98,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:32:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='93c8020137e04db486facc42cfe30f23',ramdisk_id='',reservation_id='r-nx025m1d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-264324954',owner_user_name='tempest-ServerRescueTestJSON-264324954-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:32:26Z,user_data=None,user_id='04e47309bea74c04b0750912db283ae1',uuid=7da16450-9ec5-472a-99df-81f56ee341fc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.685 253665 DEBUG nova.network.os_vif_util [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converting VIF {"id": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "address": "fa:16:3e:60:b5:90", "network": {"id": "18e5030a-5673-404f-927e-25a76f3164ea", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1536653437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "93c8020137e04db486facc42cfe30f23", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b1477f9-c3", "ovs_interfaceid": "5b1477f9-c3cf-4bac-95a5-109e7ae8d852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.685 253665 DEBUG nova.network.os_vif_util [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:b5:90,bridge_name='br-int',has_traffic_filtering=True,id=5b1477f9-c3cf-4bac-95a5-109e7ae8d852,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b1477f9-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.686 253665 DEBUG os_vif [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:b5:90,bridge_name='br-int',has_traffic_filtering=True,id=5b1477f9-c3cf-4bac-95a5-109e7ae8d852,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b1477f9-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.689 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.690 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b1477f9-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.692 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.695 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.698 253665 INFO os_vif [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:b5:90,bridge_name='br-int',has_traffic_filtering=True,id=5b1477f9-c3cf-4bac-95a5-109e7ae8d852,network=Network(18e5030a-5673-404f-927e-25a76f3164ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b1477f9-c3')
Nov 22 04:33:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 169 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 17 KiB/s wr, 195 op/s
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.821 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.868 253665 DEBUG nova.compute.manager [req-06f3bf1d-151a-477b-b2b8-8207efd08093 req-d94f8546-bf24-4707-af68-2f34716d30cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-unplugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.868 253665 DEBUG oslo_concurrency.lockutils [req-06f3bf1d-151a-477b-b2b8-8207efd08093 req-d94f8546-bf24-4707-af68-2f34716d30cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.869 253665 DEBUG oslo_concurrency.lockutils [req-06f3bf1d-151a-477b-b2b8-8207efd08093 req-d94f8546-bf24-4707-af68-2f34716d30cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.869 253665 DEBUG oslo_concurrency.lockutils [req-06f3bf1d-151a-477b-b2b8-8207efd08093 req-d94f8546-bf24-4707-af68-2f34716d30cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.869 253665 DEBUG nova.compute.manager [req-06f3bf1d-151a-477b-b2b8-8207efd08093 req-d94f8546-bf24-4707-af68-2f34716d30cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] No waiting events found dispatching network-vif-unplugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:33:15 np0005532048 nova_compute[253661]: 2025-11-22 09:33:15.869 253665 DEBUG nova.compute.manager [req-06f3bf1d-151a-477b-b2b8-8207efd08093 req-d94f8546-bf24-4707-af68-2f34716d30cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-unplugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:33:16 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:33:16 np0005532048 nova_compute[253661]: 2025-11-22 09:33:16.528 253665 INFO nova.virt.libvirt.driver [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Deleting instance files /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc_del
Nov 22 04:33:16 np0005532048 nova_compute[253661]: 2025-11-22 09:33:16.529 253665 INFO nova.virt.libvirt.driver [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Deletion of /var/lib/nova/instances/7da16450-9ec5-472a-99df-81f56ee341fc_del complete
Nov 22 04:33:16 np0005532048 nova_compute[253661]: 2025-11-22 09:33:16.575 253665 INFO nova.compute.manager [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Took 1.14 seconds to destroy the instance on the hypervisor.
Nov 22 04:33:16 np0005532048 nova_compute[253661]: 2025-11-22 09:33:16.576 253665 DEBUG oslo.service.loopingcall [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:33:16 np0005532048 nova_compute[253661]: 2025-11-22 09:33:16.577 253665 DEBUG nova.compute.manager [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:33:16 np0005532048 nova_compute[253661]: 2025-11-22 09:33:16.577 253665 DEBUG nova.network.neutron [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:33:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 143 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.0 KiB/s wr, 205 op/s
Nov 22 04:33:17 np0005532048 nova_compute[253661]: 2025-11-22 09:33:17.924 253665 DEBUG nova.network.neutron [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:33:17 np0005532048 nova_compute[253661]: 2025-11-22 09:33:17.945 253665 INFO nova.compute.manager [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Took 1.37 seconds to deallocate network for instance.
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.000 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.000 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.007 253665 DEBUG nova.compute.manager [req-75626bf7-287e-4c68-8ad6-c796dda51d96 req-8eb1664e-56ac-4f77-ab86-77ed79ac8f07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.008 253665 DEBUG oslo_concurrency.lockutils [req-75626bf7-287e-4c68-8ad6-c796dda51d96 req-8eb1664e-56ac-4f77-ab86-77ed79ac8f07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.008 253665 DEBUG oslo_concurrency.lockutils [req-75626bf7-287e-4c68-8ad6-c796dda51d96 req-8eb1664e-56ac-4f77-ab86-77ed79ac8f07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.008 253665 DEBUG oslo_concurrency.lockutils [req-75626bf7-287e-4c68-8ad6-c796dda51d96 req-8eb1664e-56ac-4f77-ab86-77ed79ac8f07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.009 253665 DEBUG nova.compute.manager [req-75626bf7-287e-4c68-8ad6-c796dda51d96 req-8eb1664e-56ac-4f77-ab86-77ed79ac8f07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] No waiting events found dispatching network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.009 253665 WARNING nova.compute.manager [req-75626bf7-287e-4c68-8ad6-c796dda51d96 req-8eb1664e-56ac-4f77-ab86-77ed79ac8f07 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received unexpected event network-vif-plugged-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 for instance with vm_state deleted and task_state None.
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.052 253665 DEBUG oslo_concurrency.processutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.106 253665 DEBUG nova.compute.manager [req-0c0db93c-d381-4b1a-b010-52661809f9be req-e770c740-4e49-46e2-9c10-19119dc5efde 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Received event network-vif-deleted-5b1477f9-c3cf-4bac-95a5-109e7ae8d852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.419 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.420 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.433 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.491 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:33:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:33:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2601486343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.517 253665 DEBUG oslo_concurrency.processutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.522 253665 DEBUG nova.compute.provider_tree [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.535 253665 DEBUG nova.scheduler.client.report [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.555 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.558 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.568 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.568 253665 INFO nova.compute.claims [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.577 253665 INFO nova.scheduler.client.report [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Deleted allocations for instance 7da16450-9ec5-472a-99df-81f56ee341fc
Nov 22 04:33:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.661 253665 DEBUG oslo_concurrency.lockutils [None req-c5d07787-5d5c-40a7-b9d4-0c109d5709d2 04e47309bea74c04b0750912db283ae1 93c8020137e04db486facc42cfe30f23 - - default default] Lock "7da16450-9ec5-472a-99df-81f56ee341fc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:33:18 np0005532048 nova_compute[253661]: 2025-11-22 09:33:18.696 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:33:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:33:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/554635775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.149 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.155 253665 DEBUG nova.compute.provider_tree [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.169 253665 DEBUG nova.scheduler.client.report [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.193 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.194 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.242 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.243 253665 DEBUG nova.network.neutron [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.262 253665 INFO nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.278 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.367 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.368 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.369 253665 INFO nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Creating image(s)
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.393 253665 DEBUG nova.storage.rbd_utils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.417 253665 DEBUG nova.storage.rbd_utils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.442 253665 DEBUG nova.storage.rbd_utils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.448 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.534 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.536 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.537 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.538 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.568 253665 DEBUG nova.storage.rbd_utils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.576 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 eb81b22a-c733-4b44-8546-e4bd1c24d808_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:33:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 41 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.7 KiB/s wr, 214 op/s
Nov 22 04:33:19 np0005532048 nova_compute[253661]: 2025-11-22 09:33:19.963 253665 DEBUG nova.policy [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ac89f965408f4a26b39ee2ae4725ff14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0112f56c468c4f90971b92126078e951', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:33:20 np0005532048 nova_compute[253661]: 2025-11-22 09:33:20.093 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 eb81b22a-c733-4b44-8546-e4bd1c24d808_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:20 np0005532048 nova_compute[253661]: 2025-11-22 09:33:20.153 253665 DEBUG nova.storage.rbd_utils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] resizing rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:33:20 np0005532048 nova_compute[253661]: 2025-11-22 09:33:20.693 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:20 np0005532048 nova_compute[253661]: 2025-11-22 09:33:20.823 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:21 np0005532048 nova_compute[253661]: 2025-11-22 09:33:21.269 253665 DEBUG nova.network.neutron [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Successfully created port: 9cb5df7f-b707-42d9-b17d-75811fd05cbb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:33:21 np0005532048 nova_compute[253661]: 2025-11-22 09:33:21.481 253665 DEBUG nova.objects.instance [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'migration_context' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:21 np0005532048 nova_compute[253661]: 2025-11-22 09:33:21.496 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:33:21 np0005532048 nova_compute[253661]: 2025-11-22 09:33:21.497 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Ensure instance console log exists: /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:33:21 np0005532048 nova_compute[253661]: 2025-11-22 09:33:21.497 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:21 np0005532048 nova_compute[253661]: 2025-11-22 09:33:21.497 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:21 np0005532048 nova_compute[253661]: 2025-11-22 09:33:21.498 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 41 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.3 KiB/s wr, 152 op/s
Nov 22 04:33:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:33:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:33:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:33:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:33:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:33:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:33:22 np0005532048 nova_compute[253661]: 2025-11-22 09:33:22.909 253665 DEBUG nova.network.neutron [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Successfully updated port: 9cb5df7f-b707-42d9-b17d-75811fd05cbb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:33:22 np0005532048 nova_compute[253661]: 2025-11-22 09:33:22.924 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:33:22 np0005532048 nova_compute[253661]: 2025-11-22 09:33:22.925 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquired lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:33:22 np0005532048 nova_compute[253661]: 2025-11-22 09:33:22.925 253665 DEBUG nova.network.neutron [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:33:23 np0005532048 nova_compute[253661]: 2025-11-22 09:33:23.164 253665 DEBUG nova.compute.manager [req-81ca6f56-eb26-4680-83e4-97d7b2472b3e req-b5df23af-ed08-4792-a7f0-91cf12428375 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-changed-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:33:23 np0005532048 nova_compute[253661]: 2025-11-22 09:33:23.165 253665 DEBUG nova.compute.manager [req-81ca6f56-eb26-4680-83e4-97d7b2472b3e req-b5df23af-ed08-4792-a7f0-91cf12428375 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Refreshing instance network info cache due to event network-changed-9cb5df7f-b707-42d9-b17d-75811fd05cbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:33:23 np0005532048 nova_compute[253661]: 2025-11-22 09:33:23.165 253665 DEBUG oslo_concurrency.lockutils [req-81ca6f56-eb26-4680-83e4-97d7b2472b3e req-b5df23af-ed08-4792-a7f0-91cf12428375 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:33:23 np0005532048 nova_compute[253661]: 2025-11-22 09:33:23.298 253665 DEBUG nova.network.neutron [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:33:23 np0005532048 podman[358254]: 2025-11-22 09:33:23.482081208 +0000 UTC m=+0.160601864 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:33:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:33:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 71 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.1 MiB/s wr, 165 op/s
Nov 22 04:33:23 np0005532048 nova_compute[253661]: 2025-11-22 09:33:23.966 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:24 np0005532048 nova_compute[253661]: 2025-11-22 09:33:24.963 253665 DEBUG nova.network.neutron [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Updating instance_info_cache with network_info: [{"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:33:24 np0005532048 nova_compute[253661]: 2025-11-22 09:33:24.988 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Releasing lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:33:24 np0005532048 nova_compute[253661]: 2025-11-22 09:33:24.988 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance network_info: |[{"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:33:24 np0005532048 nova_compute[253661]: 2025-11-22 09:33:24.989 253665 DEBUG oslo_concurrency.lockutils [req-81ca6f56-eb26-4680-83e4-97d7b2472b3e req-b5df23af-ed08-4792-a7f0-91cf12428375 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:33:24 np0005532048 nova_compute[253661]: 2025-11-22 09:33:24.989 253665 DEBUG nova.network.neutron [req-81ca6f56-eb26-4680-83e4-97d7b2472b3e req-b5df23af-ed08-4792-a7f0-91cf12428375 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Refreshing network info cache for port 9cb5df7f-b707-42d9-b17d-75811fd05cbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:33:24 np0005532048 nova_compute[253661]: 2025-11-22 09:33:24.994 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Start _get_guest_xml network_info=[{"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.000 253665 WARNING nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.010 253665 DEBUG nova.virt.libvirt.host [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.011 253665 DEBUG nova.virt.libvirt.host [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.015 253665 DEBUG nova.virt.libvirt.host [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.016 253665 DEBUG nova.virt.libvirt.host [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.016 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.016 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.017 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.017 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.017 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.017 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.017 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.018 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.018 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.018 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.018 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.019 253665 DEBUG nova.virt.hardware [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.021 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:33:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3929486319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.515 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.551 253665 DEBUG nova.storage.rbd_utils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.558 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.615 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803990.5675216, 80bb6ea3-dbff-48a3-b804-e3d356031a23 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.615 253665 INFO nova.compute.manager [-] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.649 253665 DEBUG nova.compute.manager [None req-52c67dcf-91cc-4ec1-b41a-4af07f2b4f2e - - - - - -] [instance: 80bb6ea3-dbff-48a3-b804-e3d356031a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.698 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 88 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 245 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Nov 22 04:33:25 np0005532048 nova_compute[253661]: 2025-11-22 09:33:25.826 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:33:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/513764550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.089 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.091 253665 DEBUG nova.virt.libvirt.vif [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:33:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1059413669',display_name='tempest-TestNetworkAdvancedServerOps-server-1059413669',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1059413669',id=101,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMWaHwZx+zbUAKWiLs2U5zkhr9N8SVrOtHRFfBlHQQ/ubsNn5ZhG0XVdGoDeqI3mK5yhooQBHUgTYQsbJgQUwvgPE5uhIJtGcOwev9t0XqeF59xbZ+1hxRSCdVq/1AmgA==',key_name='tempest-TestNetworkAdvancedServerOps-790856761',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-opid60ry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:33:19Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=eb81b22a-c733-4b44-8546-e4bd1c24d808,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.092 253665 DEBUG nova.network.os_vif_util [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.093 253665 DEBUG nova.network.os_vif_util [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.095 253665 DEBUG nova.objects.instance [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.117 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  <uuid>eb81b22a-c733-4b44-8546-e4bd1c24d808</uuid>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  <name>instance-00000065</name>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1059413669</nova:name>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:33:25</nova:creationTime>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:33:26 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:        <nova:user uuid="ac89f965408f4a26b39ee2ae4725ff14">tempest-TestNetworkAdvancedServerOps-1215776227-project-member</nova:user>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:        <nova:project uuid="0112f56c468c4f90971b92126078e951">tempest-TestNetworkAdvancedServerOps-1215776227</nova:project>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:        <nova:port uuid="9cb5df7f-b707-42d9-b17d-75811fd05cbb">
Nov 22 04:33:26 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <entry name="serial">eb81b22a-c733-4b44-8546-e4bd1c24d808</entry>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <entry name="uuid">eb81b22a-c733-4b44-8546-e4bd1c24d808</entry>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:33:26 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/eb81b22a-c733-4b44-8546-e4bd1c24d808_disk">
Nov 22 04:33:26 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:33:26 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config">
Nov 22 04:33:26 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:33:26 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:ee:62:78"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <target dev="tap9cb5df7f-b7"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/console.log" append="off"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:33:26 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:33:26 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:33:26 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:33:26 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.120 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Preparing to wait for external event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.121 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.121 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.122 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.123 253665 DEBUG nova.virt.libvirt.vif [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:33:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1059413669',display_name='tempest-TestNetworkAdvancedServerOps-server-1059413669',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1059413669',id=101,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMWaHwZx+zbUAKWiLs2U5zkhr9N8SVrOtHRFfBlHQQ/ubsNn5ZhG0XVdGoDeqI3mK5yhooQBHUgTYQsbJgQUwvgPE5uhIJtGcOwev9t0XqeF59xbZ+1hxRSCdVq/1AmgA==',key_name='tempest-TestNetworkAdvancedServerOps-790856761',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-opid60ry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:33:19Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=eb81b22a-c733-4b44-8546-e4bd1c24d808,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.124 253665 DEBUG nova.network.os_vif_util [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.127 253665 DEBUG nova.network.os_vif_util [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.127 253665 DEBUG os_vif [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.128 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.128 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.129 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.133 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.133 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9cb5df7f-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.134 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9cb5df7f-b7, col_values=(('external_ids', {'iface-id': '9cb5df7f-b707-42d9-b17d-75811fd05cbb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:62:78', 'vm-uuid': 'eb81b22a-c733-4b44-8546-e4bd1c24d808'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:26 np0005532048 NetworkManager[48920]: <info>  [1763804006.1371] manager: (tap9cb5df7f-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.138 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.142 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.143 253665 INFO os_vif [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7')#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.200 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.200 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.201 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No VIF found with MAC fa:16:3e:ee:62:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.201 253665 INFO nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Using config drive#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.228 253665 DEBUG nova.storage.rbd_utils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.852 253665 INFO nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Creating config drive at /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.856 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgq4yhv1w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:26 np0005532048 nova_compute[253661]: 2025-11-22 09:33:26.998 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgq4yhv1w" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:27 np0005532048 nova_compute[253661]: 2025-11-22 09:33:27.023 253665 DEBUG nova.storage.rbd_utils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:27 np0005532048 nova_compute[253661]: 2025-11-22 09:33:27.027 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:27 np0005532048 nova_compute[253661]: 2025-11-22 09:33:27.196 253665 DEBUG nova.network.neutron [req-81ca6f56-eb26-4680-83e4-97d7b2472b3e req-b5df23af-ed08-4792-a7f0-91cf12428375 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Updated VIF entry in instance network info cache for port 9cb5df7f-b707-42d9-b17d-75811fd05cbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:33:27 np0005532048 nova_compute[253661]: 2025-11-22 09:33:27.197 253665 DEBUG nova.network.neutron [req-81ca6f56-eb26-4680-83e4-97d7b2472b3e req-b5df23af-ed08-4792-a7f0-91cf12428375 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Updating instance_info_cache with network_info: [{"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:33:27 np0005532048 nova_compute[253661]: 2025-11-22 09:33:27.211 253665 DEBUG oslo_concurrency.lockutils [req-81ca6f56-eb26-4680-83e4-97d7b2472b3e req-b5df23af-ed08-4792-a7f0-91cf12428375 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:33:27 np0005532048 nova_compute[253661]: 2025-11-22 09:33:27.245 253665 DEBUG oslo_concurrency.processutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:27 np0005532048 nova_compute[253661]: 2025-11-22 09:33:27.246 253665 INFO nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Deleting local config drive /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config because it was imported into RBD.#033[00m
Nov 22 04:33:27 np0005532048 kernel: tap9cb5df7f-b7: entered promiscuous mode
Nov 22 04:33:27 np0005532048 NetworkManager[48920]: <info>  [1763804007.3053] manager: (tap9cb5df7f-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/422)
Nov 22 04:33:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:27Z|01032|binding|INFO|Claiming lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb for this chassis.
Nov 22 04:33:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:27Z|01033|binding|INFO|9cb5df7f-b707-42d9-b17d-75811fd05cbb: Claiming fa:16:3e:ee:62:78 10.100.0.13
Nov 22 04:33:27 np0005532048 nova_compute[253661]: 2025-11-22 09:33:27.306 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:27 np0005532048 nova_compute[253661]: 2025-11-22 09:33:27.311 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.325 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:62:78 10.100.0.13'], port_security=['fa:16:3e:ee:62:78 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'eb81b22a-c733-4b44-8546-e4bd1c24d808', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9a9e980-b9b8-4093-8614-a39717adaa19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ed202cc-8346-4c69-b67f-f490be608094, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=9cb5df7f-b707-42d9-b17d-75811fd05cbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.327 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 9cb5df7f-b707-42d9-b17d-75811fd05cbb in datapath 3acaad61-a3f6-4bd6-83f4-0ab1438bb136 bound to our chassis#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.329 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3acaad61-a3f6-4bd6-83f4-0ab1438bb136#033[00m
Nov 22 04:33:27 np0005532048 systemd-udevd[358415]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.347 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[416ce9ff-5375-4521-835c-be8a1408c040]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.350 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3acaad61-a1 in ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:33:27 np0005532048 NetworkManager[48920]: <info>  [1763804007.3548] device (tap9cb5df7f-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.353 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3acaad61-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.353 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[53e07c3b-93c1-47f1-84ce-1fdbd0b27c14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.355 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b79a41d3-adee-4046-ad15-a1e3fc30e65c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:27 np0005532048 NetworkManager[48920]: <info>  [1763804007.3567] device (tap9cb5df7f-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:33:27 np0005532048 systemd-machined[215941]: New machine qemu-125-instance-00000065.
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.370 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[64ec8d87-6d61-4eb5-b42a-c71e7ee36489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:27 np0005532048 nova_compute[253661]: 2025-11-22 09:33:27.387 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:27 np0005532048 systemd[1]: Started Virtual Machine qemu-125-instance-00000065.
Nov 22 04:33:27 np0005532048 nova_compute[253661]: 2025-11-22 09:33:27.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:27Z|01034|binding|INFO|Setting lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb ovn-installed in OVS
Nov 22 04:33:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:27Z|01035|binding|INFO|Setting lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb up in Southbound
Nov 22 04:33:27 np0005532048 nova_compute[253661]: 2025-11-22 09:33:27.397 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.400 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[42859102-177e-4fbf-b630-9f733817d876]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.441 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[682e17ed-aad7-4fd8-b72c-891c3d72e7cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.447 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe97ecd0-1284-4fd7-b8aa-d413aeb42735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:27 np0005532048 NetworkManager[48920]: <info>  [1763804007.4489] manager: (tap3acaad61-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.498 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[30b831e2-3dbe-4769-bc62-9385a9eeb885]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.503 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0d869fe7-b82f-4216-b1f4-cefacc7f9c5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:27 np0005532048 NetworkManager[48920]: <info>  [1763804007.5295] device (tap3acaad61-a0): carrier: link connected
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.536 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f442eb47-701a-4bf7-bb54-f15e29d9e593]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.560 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[610eccf2-bbaa-4221-8bc1-30eb365df10a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3acaad61-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a4:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685826, 'reachable_time': 30323, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358450, 'error': None, 'target': 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.584 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2dea1c47-b2c9-4428-bfd6-ace7398a0464]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:a4ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685826, 'tstamp': 685826}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358451, 'error': None, 'target': 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.608 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[146b0b56-fc35-4ad8-803f-d682c98dee5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3acaad61-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a4:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685826, 'reachable_time': 30323, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 358452, 'error': None, 'target': 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.652 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af524059-092c-4ed1-81a7-455dba1010be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.737 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bf1be4c2-7bd5-4b7b-a29b-f7ca05c22351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.739 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3acaad61-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.739 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.740 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3acaad61-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:27 np0005532048 kernel: tap3acaad61-a0: entered promiscuous mode
Nov 22 04:33:27 np0005532048 nova_compute[253661]: 2025-11-22 09:33:27.782 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:27 np0005532048 NetworkManager[48920]: <info>  [1763804007.7845] manager: (tap3acaad61-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Nov 22 04:33:27 np0005532048 nova_compute[253661]: 2025-11-22 09:33:27.784 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.785 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3acaad61-a0, col_values=(('external_ids', {'iface-id': '505b5f2b-f067-432d-8ac4-da2043ed18cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:27Z|01036|binding|INFO|Releasing lport 505b5f2b-f067-432d-8ac4-da2043ed18cf from this chassis (sb_readonly=0)
Nov 22 04:33:27 np0005532048 nova_compute[253661]: 2025-11-22 09:33:27.787 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 88 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 22 04:33:27 np0005532048 nova_compute[253661]: 2025-11-22 09:33:27.819 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.820 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3acaad61-a3f6-4bd6-83f4-0ab1438bb136.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3acaad61-a3f6-4bd6-83f4-0ab1438bb136.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.821 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9716e6d1-a40b-4600-8c80-e22785e0e685]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.821 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-3acaad61-a3f6-4bd6-83f4-0ab1438bb136
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/3acaad61-a3f6-4bd6-83f4-0ab1438bb136.pid.haproxy
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 3acaad61-a3f6-4bd6-83f4-0ab1438bb136
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.822 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'env', 'PROCESS_TAG=haproxy-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3acaad61-a3f6-4bd6-83f4-0ab1438bb136.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.976 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.979 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:27.980 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:28 np0005532048 nova_compute[253661]: 2025-11-22 09:33:28.002 253665 DEBUG nova.compute.manager [req-bfa8156a-dbca-4a62-b1c6-74c4f90c6a6f req-2a93de08-f4d6-4452-a21d-0577e816c81b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:33:28 np0005532048 nova_compute[253661]: 2025-11-22 09:33:28.002 253665 DEBUG oslo_concurrency.lockutils [req-bfa8156a-dbca-4a62-b1c6-74c4f90c6a6f req-2a93de08-f4d6-4452-a21d-0577e816c81b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:28 np0005532048 nova_compute[253661]: 2025-11-22 09:33:28.003 253665 DEBUG oslo_concurrency.lockutils [req-bfa8156a-dbca-4a62-b1c6-74c4f90c6a6f req-2a93de08-f4d6-4452-a21d-0577e816c81b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:28 np0005532048 nova_compute[253661]: 2025-11-22 09:33:28.003 253665 DEBUG oslo_concurrency.lockutils [req-bfa8156a-dbca-4a62-b1c6-74c4f90c6a6f req-2a93de08-f4d6-4452-a21d-0577e816c81b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:28 np0005532048 nova_compute[253661]: 2025-11-22 09:33:28.003 253665 DEBUG nova.compute.manager [req-bfa8156a-dbca-4a62-b1c6-74c4f90c6a6f req-2a93de08-f4d6-4452-a21d-0577e816c81b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Processing event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:33:28 np0005532048 podman[358484]: 2025-11-22 09:33:28.220661797 +0000 UTC m=+0.064432237 container create 09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 04:33:28 np0005532048 systemd[1]: Started libpod-conmon-09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467.scope.
Nov 22 04:33:28 np0005532048 podman[358484]: 2025-11-22 09:33:28.183233126 +0000 UTC m=+0.027003596 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:33:28 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:33:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323283cbee7146e9c3c1575a344ce40e1bcaad9765d07290209292215bb1d53b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:33:28 np0005532048 podman[358484]: 2025-11-22 09:33:28.338493458 +0000 UTC m=+0.182263968 container init 09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:33:28 np0005532048 podman[358484]: 2025-11-22 09:33:28.344660859 +0000 UTC m=+0.188431339 container start 09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:33:28 np0005532048 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[358500]: [NOTICE]   (358504) : New worker (358506) forked
Nov 22 04:33:28 np0005532048 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[358500]: [NOTICE]   (358504) : Loading success.
Nov 22 04:33:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.309 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.310 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804009.3092601, eb81b22a-c733-4b44-8546-e4bd1c24d808 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.310 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] VM Started (Lifecycle Event)#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.316 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.320 253665 INFO nova.virt.libvirt.driver [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance spawned successfully.#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.321 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.340 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.343 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.350 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.351 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.351 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.351 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.352 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.352 253665 DEBUG nova.virt.libvirt.driver [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.383 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.383 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804009.3094077, eb81b22a-c733-4b44-8546-e4bd1c24d808 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.384 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.409 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.413 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804009.3145497, eb81b22a-c733-4b44-8546-e4bd1c24d808 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.414 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.430 253665 INFO nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Took 10.06 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.430 253665 DEBUG nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.432 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.438 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.469 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.488 253665 INFO nova.compute.manager [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Took 11.02 seconds to build instance.#033[00m
Nov 22 04:33:29 np0005532048 nova_compute[253661]: 2025-11-22 09:33:29.501 253665 DEBUG oslo_concurrency.lockutils [None req-082eb71f-15ee-4e57-82b7-e73f1a907461 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 88 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 22 04:33:30 np0005532048 nova_compute[253661]: 2025-11-22 09:33:30.084 253665 DEBUG nova.compute.manager [req-25860e25-fb4b-4e20-b702-1b60a6e19813 req-75dfd576-fec9-47ec-ae02-4d20509d28db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:33:30 np0005532048 nova_compute[253661]: 2025-11-22 09:33:30.085 253665 DEBUG oslo_concurrency.lockutils [req-25860e25-fb4b-4e20-b702-1b60a6e19813 req-75dfd576-fec9-47ec-ae02-4d20509d28db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:30 np0005532048 nova_compute[253661]: 2025-11-22 09:33:30.085 253665 DEBUG oslo_concurrency.lockutils [req-25860e25-fb4b-4e20-b702-1b60a6e19813 req-75dfd576-fec9-47ec-ae02-4d20509d28db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:30 np0005532048 nova_compute[253661]: 2025-11-22 09:33:30.085 253665 DEBUG oslo_concurrency.lockutils [req-25860e25-fb4b-4e20-b702-1b60a6e19813 req-75dfd576-fec9-47ec-ae02-4d20509d28db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:30 np0005532048 nova_compute[253661]: 2025-11-22 09:33:30.086 253665 DEBUG nova.compute.manager [req-25860e25-fb4b-4e20-b702-1b60a6e19813 req-75dfd576-fec9-47ec-ae02-4d20509d28db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] No waiting events found dispatching network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:33:30 np0005532048 nova_compute[253661]: 2025-11-22 09:33:30.086 253665 WARNING nova.compute.manager [req-25860e25-fb4b-4e20-b702-1b60a6e19813 req-75dfd576-fec9-47ec-ae02-4d20509d28db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received unexpected event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb for instance with vm_state active and task_state None.#033[00m
Nov 22 04:33:30 np0005532048 nova_compute[253661]: 2025-11-22 09:33:30.667 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763803995.6670394, 7da16450-9ec5-472a-99df-81f56ee341fc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:33:30 np0005532048 nova_compute[253661]: 2025-11-22 09:33:30.668 253665 INFO nova.compute.manager [-] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:33:30 np0005532048 nova_compute[253661]: 2025-11-22 09:33:30.690 253665 DEBUG nova.compute.manager [None req-c7db1342-f86d-4b28-8773-9c068fc7fb5d - - - - - -] [instance: 7da16450-9ec5-472a-99df-81f56ee341fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:30 np0005532048 nova_compute[253661]: 2025-11-22 09:33:30.829 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:31 np0005532048 nova_compute[253661]: 2025-11-22 09:33:31.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:31 np0005532048 nova_compute[253661]: 2025-11-22 09:33:31.453 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:31 np0005532048 nova_compute[253661]: 2025-11-22 09:33:31.453 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:31 np0005532048 nova_compute[253661]: 2025-11-22 09:33:31.473 253665 DEBUG nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:33:31 np0005532048 nova_compute[253661]: 2025-11-22 09:33:31.559 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:31 np0005532048 nova_compute[253661]: 2025-11-22 09:33:31.560 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:31 np0005532048 nova_compute[253661]: 2025-11-22 09:33:31.573 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:33:31 np0005532048 nova_compute[253661]: 2025-11-22 09:33:31.573 253665 INFO nova.compute.claims [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:33:31 np0005532048 nova_compute[253661]: 2025-11-22 09:33:31.716 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 88 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 22 04:33:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:33:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2891860011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.206 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.214 253665 DEBUG nova.compute.provider_tree [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.233 253665 DEBUG nova.scheduler.client.report [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.261 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.262 253665 DEBUG nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.307 253665 DEBUG nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.324 253665 INFO nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.346 253665 DEBUG nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.451 253665 DEBUG nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.453 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.453 253665 INFO nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Creating image(s)#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.477 253665 DEBUG nova.storage.rbd_utils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.506 253665 DEBUG nova.storage.rbd_utils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.535 253665 DEBUG nova.storage.rbd_utils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.540 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.637 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.639 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.640 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.640 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.665 253665 DEBUG nova.storage.rbd_utils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.670 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.857 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "0922fe2c-d67c-47da-a1ac-5b217442c632" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.858 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "0922fe2c-d67c-47da-a1ac-5b217442c632" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.881 253665 DEBUG nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.984 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.985 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.992 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:33:32 np0005532048 nova_compute[253661]: 2025-11-22 09:33:32.992 253665 INFO nova.compute.claims [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.100 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:33 np0005532048 NetworkManager[48920]: <info>  [1763804013.1628] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Nov 22 04:33:33 np0005532048 NetworkManager[48920]: <info>  [1763804013.1641] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Nov 22 04:33:33 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:33Z|01037|binding|INFO|Releasing lport 505b5f2b-f067-432d-8ac4-da2043ed18cf from this chassis (sb_readonly=0)
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.190 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.234 253665 DEBUG nova.storage.rbd_utils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] resizing rbd image 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.271 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.271 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.292 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.296 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.417 253665 DEBUG nova.objects.instance [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'migration_context' on Instance uuid 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.437 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.438 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Ensure instance console log exists: /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.438 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.439 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.439 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.441 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.447 253665 WARNING nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.455 253665 DEBUG nova.virt.libvirt.host [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.456 253665 DEBUG nova.virt.libvirt.host [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.460 253665 DEBUG nova.virt.libvirt.host [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.461 253665 DEBUG nova.virt.libvirt.host [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.462 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.462 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.463 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.463 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.463 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.464 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.464 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.464 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.464 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.465 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.465 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.465 253665 DEBUG nova.virt.hardware [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.469 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.757 253665 DEBUG nova.compute.manager [req-be0653bd-c18b-4910-8ccc-4ead64be53f6 req-3058ee9d-8d15-49f4-a120-14a562458914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-changed-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.758 253665 DEBUG nova.compute.manager [req-be0653bd-c18b-4910-8ccc-4ead64be53f6 req-3058ee9d-8d15-49f4-a120-14a562458914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Refreshing instance network info cache due to event network-changed-9cb5df7f-b707-42d9-b17d-75811fd05cbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.758 253665 DEBUG oslo_concurrency.lockutils [req-be0653bd-c18b-4910-8ccc-4ead64be53f6 req-3058ee9d-8d15-49f4-a120-14a562458914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.759 253665 DEBUG oslo_concurrency.lockutils [req-be0653bd-c18b-4910-8ccc-4ead64be53f6 req-3058ee9d-8d15-49f4-a120-14a562458914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.759 253665 DEBUG nova.network.neutron [req-be0653bd-c18b-4910-8ccc-4ead64be53f6 req-3058ee9d-8d15-49f4-a120-14a562458914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Refreshing network info cache for port 9cb5df7f-b707-42d9-b17d-75811fd05cbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:33:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:33:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2651741607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.796 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.803 253665 DEBUG nova.compute.provider_tree [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:33:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 102 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 91 op/s
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.824 253665 DEBUG nova.scheduler.client.report [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.844 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.845 253665 DEBUG nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.897 253665 DEBUG nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.916 253665 INFO nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.933 253665 DEBUG nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:33:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:33:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2584434442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:33:33 np0005532048 nova_compute[253661]: 2025-11-22 09:33:33.994 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.024 253665 DEBUG nova.storage.rbd_utils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.031 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.084 253665 DEBUG nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.087 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.087 253665 INFO nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Creating image(s)
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.121 253665 DEBUG nova.storage.rbd_utils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.155 253665 DEBUG nova.storage.rbd_utils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.186 253665 DEBUG nova.storage.rbd_utils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.191 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.243 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.275 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.276 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.277 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.277 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.300 253665 DEBUG nova.storage.rbd_utils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.305 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 0922fe2c-d67c-47da-a1ac-5b217442c632_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:33:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:33:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3089764262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.539 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.541 253665 DEBUG nova.objects.instance [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'pci_devices' on Instance uuid 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.557 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  <uuid>9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae</uuid>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  <name>instance-00000066</name>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerShowV247Test-server-1320746866</nova:name>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:33:33</nova:creationTime>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:33:34 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:        <nova:user uuid="872ddfa50ca3429ca2eb86919c4c82cf">tempest-ServerShowV247Test-1598997937-project-member</nova:user>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:        <nova:project uuid="93a61bafffff48389d1004154f28d04c">tempest-ServerShowV247Test-1598997937</nova:project>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <entry name="serial">9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae</entry>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <entry name="uuid">9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae</entry>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk">
Nov 22 04:33:34 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:33:34 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk.config">
Nov 22 04:33:34 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:33:34 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae/console.log" append="off"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:33:34 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:33:34 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:33:34 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:33:34 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.628 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.629 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.629 253665 INFO nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Using config drive
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.651 253665 DEBUG nova.storage.rbd_utils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.906 253665 INFO nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Creating config drive at /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae/disk.config
Nov 22 04:33:34 np0005532048 nova_compute[253661]: 2025-11-22 09:33:34.911 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbfv4w_t1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:33:35 np0005532048 nova_compute[253661]: 2025-11-22 09:33:35.065 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbfv4w_t1" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:33:35 np0005532048 nova_compute[253661]: 2025-11-22 09:33:35.089 253665 DEBUG nova.storage.rbd_utils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:33:35 np0005532048 nova_compute[253661]: 2025-11-22 09:33:35.092 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae/disk.config 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:33:35 np0005532048 nova_compute[253661]: 2025-11-22 09:33:35.382 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 0922fe2c-d67c-47da-a1ac-5b217442c632_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:33:35 np0005532048 nova_compute[253661]: 2025-11-22 09:33:35.453 253665 DEBUG nova.storage.rbd_utils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] resizing rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:33:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 116 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 102 op/s
Nov 22 04:33:35 np0005532048 nova_compute[253661]: 2025-11-22 09:33:35.830 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:33:35 np0005532048 nova_compute[253661]: 2025-11-22 09:33:35.842 253665 DEBUG nova.network.neutron [req-be0653bd-c18b-4910-8ccc-4ead64be53f6 req-3058ee9d-8d15-49f4-a120-14a562458914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Updated VIF entry in instance network info cache for port 9cb5df7f-b707-42d9-b17d-75811fd05cbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:33:35 np0005532048 nova_compute[253661]: 2025-11-22 09:33:35.843 253665 DEBUG nova.network.neutron [req-be0653bd-c18b-4910-8ccc-4ead64be53f6 req-3058ee9d-8d15-49f4-a120-14a562458914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Updating instance_info_cache with network_info: [{"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:33:35 np0005532048 nova_compute[253661]: 2025-11-22 09:33:35.863 253665 DEBUG oslo_concurrency.lockutils [req-be0653bd-c18b-4910-8ccc-4ead64be53f6 req-3058ee9d-8d15-49f4-a120-14a562458914 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.138 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.466 253665 DEBUG oslo_concurrency.processutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae/disk.config 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.467 253665 INFO nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Deleting local config drive /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae/disk.config because it was imported into RBD.
Nov 22 04:33:36 np0005532048 systemd-machined[215941]: New machine qemu-126-instance-00000066.
Nov 22 04:33:36 np0005532048 systemd[1]: Started Virtual Machine qemu-126-instance-00000066.
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.564 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.850 253665 DEBUG nova.objects.instance [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'migration_context' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.863 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.864 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Ensure instance console log exists: /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.864 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.865 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.865 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.867 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.872 253665 WARNING nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.880 253665 DEBUG nova.virt.libvirt.host [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.882 253665 DEBUG nova.virt.libvirt.host [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.885 253665 DEBUG nova.virt.libvirt.host [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.886 253665 DEBUG nova.virt.libvirt.host [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.886 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.887 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.888 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.888 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.889 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.889 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.890 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.890 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.890 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.891 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.891 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.891 253665 DEBUG nova.virt.hardware [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:33:36 np0005532048 nova_compute[253661]: 2025-11-22 09:33:36.895 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.197 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804017.197109, 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.200 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.221 253665 DEBUG nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.222 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.227 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.230 253665 INFO nova.virt.libvirt.driver [-] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Instance spawned successfully.#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.231 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.234 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.258 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.258 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804017.2193768, 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.259 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] VM Started (Lifecycle Event)#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.265 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.266 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.267 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.267 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.267 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.268 253665 DEBUG nova.virt.libvirt.driver [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.273 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.276 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.297 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.318 253665 INFO nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Took 4.87 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.319 253665 DEBUG nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.362 253665 INFO nova.compute.manager [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Took 5.83 seconds to build instance.#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.385 253665 DEBUG oslo_concurrency.lockutils [None req-5b6c88bf-93f7-4ec7-b83b-c406ae5685ed 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:33:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1439546622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.427 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.452 253665 DEBUG nova.storage.rbd_utils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:37 np0005532048 nova_compute[253661]: 2025-11-22 09:33:37.460 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 148 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 93 op/s
Nov 22 04:33:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:33:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4121484801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:33:38 np0005532048 nova_compute[253661]: 2025-11-22 09:33:38.049 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:38 np0005532048 nova_compute[253661]: 2025-11-22 09:33:38.052 253665 DEBUG nova.objects.instance [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'pci_devices' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:38 np0005532048 nova_compute[253661]: 2025-11-22 09:33:38.066 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  <uuid>0922fe2c-d67c-47da-a1ac-5b217442c632</uuid>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  <name>instance-00000067</name>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerShowV247Test-server-2120834641</nova:name>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:33:36</nova:creationTime>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:33:38 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:        <nova:user uuid="872ddfa50ca3429ca2eb86919c4c82cf">tempest-ServerShowV247Test-1598997937-project-member</nova:user>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:        <nova:project uuid="93a61bafffff48389d1004154f28d04c">tempest-ServerShowV247Test-1598997937</nova:project>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <entry name="serial">0922fe2c-d67c-47da-a1ac-5b217442c632</entry>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <entry name="uuid">0922fe2c-d67c-47da-a1ac-5b217442c632</entry>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/0922fe2c-d67c-47da-a1ac-5b217442c632_disk">
Nov 22 04:33:38 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:33:38 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config">
Nov 22 04:33:38 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:33:38 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/console.log" append="off"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:33:38 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:33:38 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:33:38 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:33:38 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:33:38 np0005532048 nova_compute[253661]: 2025-11-22 09:33:38.177 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:33:38 np0005532048 nova_compute[253661]: 2025-11-22 09:33:38.177 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:33:38 np0005532048 nova_compute[253661]: 2025-11-22 09:33:38.178 253665 INFO nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Using config drive#033[00m
Nov 22 04:33:38 np0005532048 nova_compute[253661]: 2025-11-22 09:33:38.203 253665 DEBUG nova.storage.rbd_utils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:38 np0005532048 nova_compute[253661]: 2025-11-22 09:33:38.250 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:33:38 np0005532048 nova_compute[253661]: 2025-11-22 09:33:38.357 253665 INFO nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Creating config drive at /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config#033[00m
Nov 22 04:33:38 np0005532048 nova_compute[253661]: 2025-11-22 09:33:38.362 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_176jbi9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:38 np0005532048 nova_compute[253661]: 2025-11-22 09:33:38.517 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_176jbi9" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:38 np0005532048 nova_compute[253661]: 2025-11-22 09:33:38.547 253665 DEBUG nova.storage.rbd_utils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:38 np0005532048 nova_compute[253661]: 2025-11-22 09:33:38.553 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:33:38 np0005532048 nova_compute[253661]: 2025-11-22 09:33:38.757 253665 DEBUG oslo_concurrency.processutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:38 np0005532048 nova_compute[253661]: 2025-11-22 09:33:38.758 253665 INFO nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Deleting local config drive /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config because it was imported into RBD.#033[00m
Nov 22 04:33:38 np0005532048 systemd-machined[215941]: New machine qemu-127-instance-00000067.
Nov 22 04:33:38 np0005532048 systemd[1]: Started Virtual Machine qemu-127-instance-00000067.
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.232 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.248 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804019.24759, 0922fe2c-d67c-47da-a1ac-5b217442c632 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.249 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.252 253665 DEBUG nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.252 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.258 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.259 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.314 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.318 253665 INFO nova.virt.libvirt.driver [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance spawned successfully.#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.319 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.322 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.343 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.344 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.344 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.345 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.345 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.346 253665 DEBUG nova.virt.libvirt.driver [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.349 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.350 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804019.2496293, 0922fe2c-d67c-47da-a1ac-5b217442c632 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.350 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] VM Started (Lifecycle Event)#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.386 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.389 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.409 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.474 253665 INFO nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Took 5.39 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.475 253665 DEBUG nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.528 253665 INFO nova.compute.manager [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Took 6.57 seconds to build instance.#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.547 253665 DEBUG oslo_concurrency.lockutils [None req-e2a9d4ef-2d63-4c8b-8e9d-ba55c325574c 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "0922fe2c-d67c-47da-a1ac-5b217442c632" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:33:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3156660686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.761 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 181 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 159 op/s
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.877 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.877 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.881 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.881 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.885 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:33:39 np0005532048 nova_compute[253661]: 2025-11-22 09:33:39.885 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:33:40 np0005532048 nova_compute[253661]: 2025-11-22 09:33:40.087 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:33:40 np0005532048 nova_compute[253661]: 2025-11-22 09:33:40.088 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3474MB free_disk=59.9403076171875GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:33:40 np0005532048 nova_compute[253661]: 2025-11-22 09:33:40.088 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:40 np0005532048 nova_compute[253661]: 2025-11-22 09:33:40.089 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:40 np0005532048 nova_compute[253661]: 2025-11-22 09:33:40.319 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance eb81b22a-c733-4b44-8546-e4bd1c24d808 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:33:40 np0005532048 nova_compute[253661]: 2025-11-22 09:33:40.320 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:33:40 np0005532048 nova_compute[253661]: 2025-11-22 09:33:40.320 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 0922fe2c-d67c-47da-a1ac-5b217442c632 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:33:40 np0005532048 nova_compute[253661]: 2025-11-22 09:33:40.320 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:33:40 np0005532048 nova_compute[253661]: 2025-11-22 09:33:40.320 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:33:40 np0005532048 nova_compute[253661]: 2025-11-22 09:33:40.507 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:40 np0005532048 nova_compute[253661]: 2025-11-22 09:33:40.833 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:33:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4026977404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:33:40 np0005532048 nova_compute[253661]: 2025-11-22 09:33:40.974 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:40 np0005532048 nova_compute[253661]: 2025-11-22 09:33:40.979 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:33:40 np0005532048 nova_compute[253661]: 2025-11-22 09:33:40.994 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:33:41 np0005532048 nova_compute[253661]: 2025-11-22 09:33:41.013 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:33:41 np0005532048 nova_compute[253661]: 2025-11-22 09:33:41.013 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:41 np0005532048 nova_compute[253661]: 2025-11-22 09:33:41.059 253665 INFO nova.compute.manager [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Rebuilding instance#033[00m
Nov 22 04:33:41 np0005532048 nova_compute[253661]: 2025-11-22 09:33:41.140 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:41 np0005532048 nova_compute[253661]: 2025-11-22 09:33:41.328 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:41 np0005532048 nova_compute[253661]: 2025-11-22 09:33:41.344 253665 DEBUG nova.compute.manager [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:41 np0005532048 nova_compute[253661]: 2025-11-22 09:33:41.392 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'pci_requests' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:41 np0005532048 nova_compute[253661]: 2025-11-22 09:33:41.403 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'pci_devices' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:41 np0005532048 nova_compute[253661]: 2025-11-22 09:33:41.414 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'resources' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:41 np0005532048 nova_compute[253661]: 2025-11-22 09:33:41.422 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'migration_context' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:41 np0005532048 nova_compute[253661]: 2025-11-22 09:33:41.431 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 22 04:33:41 np0005532048 nova_compute[253661]: 2025-11-22 09:33:41.435 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:33:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 181 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 151 op/s
Nov 22 04:33:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:42Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ee:62:78 10.100.0.13
Nov 22 04:33:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:42Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ee:62:78 10.100.0.13
Nov 22 04:33:42 np0005532048 nova_compute[253661]: 2025-11-22 09:33:42.259 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:42 np0005532048 nova_compute[253661]: 2025-11-22 09:33:42.259 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:42 np0005532048 nova_compute[253661]: 2025-11-22 09:33:42.276 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:33:42 np0005532048 nova_compute[253661]: 2025-11-22 09:33:42.346 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:42 np0005532048 nova_compute[253661]: 2025-11-22 09:33:42.346 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:42 np0005532048 nova_compute[253661]: 2025-11-22 09:33:42.359 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:33:42 np0005532048 nova_compute[253661]: 2025-11-22 09:33:42.360 253665 INFO nova.compute.claims [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:33:42 np0005532048 nova_compute[253661]: 2025-11-22 09:33:42.529 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.014 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:33:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:33:43 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2437313440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.086 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.092 253665 DEBUG nova.compute.provider_tree [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.108 253665 DEBUG nova.scheduler.client.report [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.142 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.144 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.196 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.197 253665 DEBUG nova.network.neutron [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.220 253665 INFO nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.244 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.344 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.346 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.346 253665 INFO nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Creating image(s)#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.375 253665 DEBUG nova.storage.rbd_utils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.429 253665 DEBUG nova.storage.rbd_utils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.464 253665 DEBUG nova.storage.rbd_utils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.470 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.572 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.573 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.574 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.574 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.603 253665 DEBUG nova.storage.rbd_utils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:43 np0005532048 nova_compute[253661]: 2025-11-22 09:33:43.609 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:33:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 197 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 4.8 MiB/s wr, 276 op/s
Nov 22 04:33:44 np0005532048 nova_compute[253661]: 2025-11-22 09:33:44.049 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:44 np0005532048 nova_compute[253661]: 2025-11-22 09:33:44.120 253665 DEBUG nova.storage.rbd_utils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:33:44 np0005532048 nova_compute[253661]: 2025-11-22 09:33:44.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:33:44 np0005532048 nova_compute[253661]: 2025-11-22 09:33:44.279 253665 DEBUG nova.objects.instance [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid c5540f5a-8dfa-4b11-8452-c6fe99db1d64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:44 np0005532048 nova_compute[253661]: 2025-11-22 09:33:44.291 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:33:44 np0005532048 nova_compute[253661]: 2025-11-22 09:33:44.292 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Ensure instance console log exists: /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:33:44 np0005532048 nova_compute[253661]: 2025-11-22 09:33:44.292 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:44 np0005532048 nova_compute[253661]: 2025-11-22 09:33:44.293 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:44 np0005532048 nova_compute[253661]: 2025-11-22 09:33:44.293 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:45 np0005532048 nova_compute[253661]: 2025-11-22 09:33:45.011 253665 DEBUG nova.policy [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:33:45 np0005532048 nova_compute[253661]: 2025-11-22 09:33:45.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:33:45 np0005532048 nova_compute[253661]: 2025-11-22 09:33:45.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 22 04:33:45 np0005532048 nova_compute[253661]: 2025-11-22 09:33:45.715 253665 DEBUG nova.network.neutron [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Successfully created port: 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:33:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 234 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.9 MiB/s wr, 286 op/s
Nov 22 04:33:45 np0005532048 nova_compute[253661]: 2025-11-22 09:33:45.836 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:46 np0005532048 nova_compute[253661]: 2025-11-22 09:33:46.144 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:46 np0005532048 podman[359523]: 2025-11-22 09:33:46.390446066 +0000 UTC m=+0.071739818 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 22 04:33:46 np0005532048 podman[359524]: 2025-11-22 09:33:46.393612043 +0000 UTC m=+0.070040695 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:33:47 np0005532048 nova_compute[253661]: 2025-11-22 09:33:47.290 253665 INFO nova.compute.manager [None req-5546d05f-fa71-4eb8-9f99-43d699b8610d ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Get console output#033[00m
Nov 22 04:33:47 np0005532048 nova_compute[253661]: 2025-11-22 09:33:47.297 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:33:47 np0005532048 nova_compute[253661]: 2025-11-22 09:33:47.447 253665 DEBUG nova.network.neutron [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Successfully updated port: 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:33:47 np0005532048 nova_compute[253661]: 2025-11-22 09:33:47.465 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:33:47 np0005532048 nova_compute[253661]: 2025-11-22 09:33:47.465 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:33:47 np0005532048 nova_compute[253661]: 2025-11-22 09:33:47.466 253665 DEBUG nova.network.neutron [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:33:47 np0005532048 nova_compute[253661]: 2025-11-22 09:33:47.710 253665 DEBUG nova.network.neutron [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:33:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 245 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.0 MiB/s wr, 270 op/s
Nov 22 04:33:47 np0005532048 nova_compute[253661]: 2025-11-22 09:33:47.871 253665 DEBUG nova.compute.manager [req-4fcbdc90-875e-4b39-9556-53a05e112ce7 req-447efefe-66c3-4bf1-98e3-ad563ddc3e16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-changed-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:33:47 np0005532048 nova_compute[253661]: 2025-11-22 09:33:47.871 253665 DEBUG nova.compute.manager [req-4fcbdc90-875e-4b39-9556-53a05e112ce7 req-447efefe-66c3-4bf1-98e3-ad563ddc3e16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Refreshing instance network info cache due to event network-changed-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:33:47 np0005532048 nova_compute[253661]: 2025-11-22 09:33:47.872 253665 DEBUG oslo_concurrency.lockutils [req-4fcbdc90-875e-4b39-9556-53a05e112ce7 req-447efefe-66c3-4bf1-98e3-ad563ddc3e16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:33:48 np0005532048 nova_compute[253661]: 2025-11-22 09:33:48.231 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:33:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:33:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 31K writes, 125K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s#012Cumulative WAL: 31K writes, 10K syncs, 2.99 writes per sync, written: 0.12 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7339 writes, 27K keys, 7339 commit groups, 1.0 writes per commit group, ingest: 27.17 MB, 0.05 MB/s#012Interval WAL: 7339 writes, 2840 syncs, 2.58 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:33:48 np0005532048 nova_compute[253661]: 2025-11-22 09:33:48.949 253665 INFO nova.compute.manager [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Rebuilding instance#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.096 253665 DEBUG nova.network.neutron [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updating instance_info_cache with network_info: [{"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.112 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.113 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Instance network_info: |[{"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.114 253665 DEBUG oslo_concurrency.lockutils [req-4fcbdc90-875e-4b39-9556-53a05e112ce7 req-447efefe-66c3-4bf1-98e3-ad563ddc3e16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.115 253665 DEBUG nova.network.neutron [req-4fcbdc90-875e-4b39-9556-53a05e112ce7 req-447efefe-66c3-4bf1-98e3-ad563ddc3e16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Refreshing network info cache for port 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.119 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Start _get_guest_xml network_info=[{"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.125 253665 WARNING nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.139 253665 DEBUG nova.virt.libvirt.host [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.141 253665 DEBUG nova.virt.libvirt.host [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.147 253665 DEBUG nova.virt.libvirt.host [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.148 253665 DEBUG nova.virt.libvirt.host [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.149 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.150 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.150 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.150 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.151 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.152 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.152 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.153 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.154 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.155 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.155 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.156 253665 DEBUG nova.virt.hardware [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.162 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.277 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'trusted_certs' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.295 253665 DEBUG nova.compute.manager [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.363 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_requests' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.396 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.409 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'resources' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.424 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'migration_context' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.435 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.439 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:33:49 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 22 04:33:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:33:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2513839142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.705 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.733 253665 DEBUG nova.storage.rbd_utils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:49 np0005532048 nova_compute[253661]: 2025-11-22 09:33:49.739 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 260 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.2 MiB/s wr, 274 op/s
Nov 22 04:33:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:33:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2073774821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.302 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.304 253665 DEBUG nova.virt.libvirt.vif [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1891830994',display_name='tempest-TestNetworkBasicOps-server-1891830994',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1891830994',id=104,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH9/oNERa4AbqxHUPuutKC57v2O48q74KuKUDGgcFa55ErxBYOBd37EKQrgbQiEDb5SwoFM9AeHUddF0XE/aljzNPw78dYMARly2RFfRYPgRPvDRHLrrtwK6XNq8kEtqIg==',key_name='tempest-TestNetworkBasicOps-1008221113',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-8xsk0rz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:33:43Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c5540f5a-8dfa-4b11-8452-c6fe99db1d64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.305 253665 DEBUG nova.network.os_vif_util [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.306 253665 DEBUG nova.network.os_vif_util [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:2e:8f,bridge_name='br-int',has_traffic_filtering=True,id=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a,network=Network(5ea999ce-3074-41ab-b630-d39c003b894a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d3de607-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.309 253665 DEBUG nova.objects.instance [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5540f5a-8dfa-4b11-8452-c6fe99db1d64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.324 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  <uuid>c5540f5a-8dfa-4b11-8452-c6fe99db1d64</uuid>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  <name>instance-00000068</name>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestNetworkBasicOps-server-1891830994</nova:name>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:33:49</nova:creationTime>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:33:50 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:        <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:        <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:        <nova:port uuid="4d3de607-ad62-4c7d-ae3b-7cecb934aa9a">
Nov 22 04:33:50 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <entry name="serial">c5540f5a-8dfa-4b11-8452-c6fe99db1d64</entry>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <entry name="uuid">c5540f5a-8dfa-4b11-8452-c6fe99db1d64</entry>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk">
Nov 22 04:33:50 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:33:50 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk.config">
Nov 22 04:33:50 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:33:50 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:cf:2e:8f"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <target dev="tap4d3de607-ad"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64/console.log" append="off"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:33:50 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:33:50 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:33:50 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:33:50 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.325 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Preparing to wait for external event network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.325 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.326 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.326 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.327 253665 DEBUG nova.virt.libvirt.vif [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1891830994',display_name='tempest-TestNetworkBasicOps-server-1891830994',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1891830994',id=104,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH9/oNERa4AbqxHUPuutKC57v2O48q74KuKUDGgcFa55ErxBYOBd37EKQrgbQiEDb5SwoFM9AeHUddF0XE/aljzNPw78dYMARly2RFfRYPgRPvDRHLrrtwK6XNq8kEtqIg==',key_name='tempest-TestNetworkBasicOps-1008221113',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-8xsk0rz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:33:43Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c5540f5a-8dfa-4b11-8452-c6fe99db1d64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.327 253665 DEBUG nova.network.os_vif_util [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.328 253665 DEBUG nova.network.os_vif_util [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:2e:8f,bridge_name='br-int',has_traffic_filtering=True,id=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a,network=Network(5ea999ce-3074-41ab-b630-d39c003b894a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d3de607-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.328 253665 DEBUG os_vif [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:2e:8f,bridge_name='br-int',has_traffic_filtering=True,id=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a,network=Network(5ea999ce-3074-41ab-b630-d39c003b894a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d3de607-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.329 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.329 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.330 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.333 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.333 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d3de607-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.334 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4d3de607-ad, col_values=(('external_ids', {'iface-id': '4d3de607-ad62-4c7d-ae3b-7cecb934aa9a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:2e:8f', 'vm-uuid': 'c5540f5a-8dfa-4b11-8452-c6fe99db1d64'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:50 np0005532048 NetworkManager[48920]: <info>  [1763804030.3380] manager: (tap4d3de607-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.336 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.347 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.348 253665 INFO os_vif [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:2e:8f,bridge_name='br-int',has_traffic_filtering=True,id=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a,network=Network(5ea999ce-3074-41ab-b630-d39c003b894a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d3de607-ad')#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.452 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.453 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.453 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:cf:2e:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.453 253665 INFO nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Using config drive#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.478 253665 DEBUG nova.storage.rbd_utils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.838 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.954 253665 INFO nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Creating config drive at /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64/disk.config#033[00m
Nov 22 04:33:50 np0005532048 nova_compute[253661]: 2025-11-22 09:33:50.961 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpok3265ah execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:51 np0005532048 nova_compute[253661]: 2025-11-22 09:33:51.115 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpok3265ah" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:51 np0005532048 nova_compute[253661]: 2025-11-22 09:33:51.160 253665 DEBUG nova.storage.rbd_utils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:51 np0005532048 nova_compute[253661]: 2025-11-22 09:33:51.165 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64/disk.config c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:51 np0005532048 nova_compute[253661]: 2025-11-22 09:33:51.569 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 22 04:33:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 260 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 208 op/s
Nov 22 04:33:51 np0005532048 nova_compute[253661]: 2025-11-22 09:33:51.989 253665 DEBUG nova.network.neutron [req-4fcbdc90-875e-4b39-9556-53a05e112ce7 req-447efefe-66c3-4bf1-98e3-ad563ddc3e16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updated VIF entry in instance network info cache for port 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:33:51 np0005532048 nova_compute[253661]: 2025-11-22 09:33:51.990 253665 DEBUG nova.network.neutron [req-4fcbdc90-875e-4b39-9556-53a05e112ce7 req-447efefe-66c3-4bf1-98e3-ad563ddc3e16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updating instance_info_cache with network_info: [{"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:33:52 np0005532048 nova_compute[253661]: 2025-11-22 09:33:52.022 253665 DEBUG oslo_concurrency.lockutils [req-4fcbdc90-875e-4b39-9556-53a05e112ce7 req-447efefe-66c3-4bf1-98e3-ad563ddc3e16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:33:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:33:52
Nov 22 04:33:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:33:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:33:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', '.mgr', 'volumes']
Nov 22 04:33:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:33:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:33:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:33:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:33:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:33:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:33:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:33:53 np0005532048 kernel: tap9cb5df7f-b7 (unregistering): left promiscuous mode
Nov 22 04:33:53 np0005532048 NetworkManager[48920]: <info>  [1763804033.1956] device (tap9cb5df7f-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.203 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:53Z|01038|binding|INFO|Releasing lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb from this chassis (sb_readonly=0)
Nov 22 04:33:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:53Z|01039|binding|INFO|Setting lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb down in Southbound
Nov 22 04:33:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:53Z|01040|binding|INFO|Removing iface tap9cb5df7f-b7 ovn-installed in OVS
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.211 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:62:78 10.100.0.13'], port_security=['fa:16:3e:ee:62:78 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'eb81b22a-c733-4b44-8546-e4bd1c24d808', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9a9e980-b9b8-4093-8614-a39717adaa19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ed202cc-8346-4c69-b67f-f490be608094, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=9cb5df7f-b707-42d9-b17d-75811fd05cbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.212 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 9cb5df7f-b707-42d9-b17d-75811fd05cbb in datapath 3acaad61-a3f6-4bd6-83f4-0ab1438bb136 unbound from our chassis#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.213 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3acaad61-a3f6-4bd6-83f4-0ab1438bb136, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.215 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[35dcf32d-862c-40fd-8a3a-86628ddc31e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.216 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 namespace which is not needed anymore#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.231 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:53 np0005532048 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000065.scope: Deactivated successfully.
Nov 22 04:33:53 np0005532048 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000065.scope: Consumed 15.577s CPU time.
Nov 22 04:33:53 np0005532048 systemd-machined[215941]: Machine qemu-125-instance-00000065 terminated.
Nov 22 04:33:53 np0005532048 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[358500]: [NOTICE]   (358504) : haproxy version is 2.8.14-c23fe91
Nov 22 04:33:53 np0005532048 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[358500]: [NOTICE]   (358504) : path to executable is /usr/sbin/haproxy
Nov 22 04:33:53 np0005532048 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[358500]: [WARNING]  (358504) : Exiting Master process...
Nov 22 04:33:53 np0005532048 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[358500]: [ALERT]    (358504) : Current worker (358506) exited with code 143 (Terminated)
Nov 22 04:33:53 np0005532048 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[358500]: [WARNING]  (358504) : All workers exited. Exiting... (0)
Nov 22 04:33:53 np0005532048 systemd[1]: libpod-09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467.scope: Deactivated successfully.
Nov 22 04:33:53 np0005532048 podman[359710]: 2025-11-22 09:33:53.40484716 +0000 UTC m=+0.074201527 container died 09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.483 253665 DEBUG oslo_concurrency.processutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64/disk.config c5540f5a-8dfa-4b11-8452-c6fe99db1d64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.484 253665 INFO nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Deleting local config drive /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64/disk.config because it was imported into RBD.#033[00m
Nov 22 04:33:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467-userdata-shm.mount: Deactivated successfully.
Nov 22 04:33:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay-323283cbee7146e9c3c1575a344ce40e1bcaad9765d07290209292215bb1d53b-merged.mount: Deactivated successfully.
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.504 253665 INFO nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance shutdown successfully after 4 seconds.#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.525 253665 INFO nova.virt.libvirt.driver [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance destroyed successfully.#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.537 253665 INFO nova.virt.libvirt.driver [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance destroyed successfully.#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.538 253665 DEBUG nova.virt.libvirt.vif [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:33:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1059413669',display_name='tempest-TestNetworkAdvancedServerOps-server-1059413669',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1059413669',id=101,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMWaHwZx+zbUAKWiLs2U5zkhr9N8SVrOtHRFfBlHQQ/ubsNn5ZhG0XVdGoDeqI3mK5yhooQBHUgTYQsbJgQUwvgPE5uhIJtGcOwev9t0XqeF59xbZ+1hxRSCdVq/1AmgA==',key_name='tempest-TestNetworkAdvancedServerOps-790856761',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:33:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-opid60ry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:33:48Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=eb81b22a-c733-4b44-8546-e4bd1c24d808,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.539 253665 DEBUG nova.network.os_vif_util [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.540 253665 DEBUG nova.network.os_vif_util [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.540 253665 DEBUG os_vif [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.543 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.544 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cb5df7f-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.547 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.549 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.555 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:53 np0005532048 podman[359710]: 2025-11-22 09:33:53.558347498 +0000 UTC m=+0.227701855 container cleanup 09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.558 253665 INFO os_vif [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7')#033[00m
Nov 22 04:33:53 np0005532048 kernel: tap4d3de607-ad: entered promiscuous mode
Nov 22 04:33:53 np0005532048 NetworkManager[48920]: <info>  [1763804033.5673] manager: (tap4d3de607-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/428)
Nov 22 04:33:53 np0005532048 systemd-udevd[359689]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:33:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:53Z|01041|binding|INFO|Claiming lport 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a for this chassis.
Nov 22 04:33:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:53Z|01042|binding|INFO|4d3de607-ad62-4c7d-ae3b-7cecb934aa9a: Claiming fa:16:3e:cf:2e:8f 10.100.0.14
Nov 22 04:33:53 np0005532048 systemd[1]: libpod-conmon-09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467.scope: Deactivated successfully.
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.577 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:2e:8f 10.100.0.14'], port_security=['fa:16:3e:cf:2e:8f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c5540f5a-8dfa-4b11-8452-c6fe99db1d64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ea999ce-3074-41ab-b630-d39c003b894a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1557c77-7174-4c01-8889-0c9609535e78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f542600-846f-418d-bf6a-c20db70e9dc6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:33:53 np0005532048 NetworkManager[48920]: <info>  [1763804033.5830] device (tap4d3de607-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:33:53 np0005532048 NetworkManager[48920]: <info>  [1763804033.5842] device (tap4d3de607-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:33:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:53Z|01043|binding|INFO|Setting lport 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a ovn-installed in OVS
Nov 22 04:33:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:53Z|01044|binding|INFO|Setting lport 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a up in Southbound
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.611 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:53 np0005532048 systemd-machined[215941]: New machine qemu-128-instance-00000068.
Nov 22 04:33:53 np0005532048 systemd[1]: Started Virtual Machine qemu-128-instance-00000068.
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.654 253665 DEBUG nova.compute.manager [req-ed312ffd-703c-45a4-8fe1-516cf7f2a6fb req-6f710913-b396-4c0e-91b6-ceef1392dd5d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-vif-unplugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.655 253665 DEBUG oslo_concurrency.lockutils [req-ed312ffd-703c-45a4-8fe1-516cf7f2a6fb req-6f710913-b396-4c0e-91b6-ceef1392dd5d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.655 253665 DEBUG oslo_concurrency.lockutils [req-ed312ffd-703c-45a4-8fe1-516cf7f2a6fb req-6f710913-b396-4c0e-91b6-ceef1392dd5d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.655 253665 DEBUG oslo_concurrency.lockutils [req-ed312ffd-703c-45a4-8fe1-516cf7f2a6fb req-6f710913-b396-4c0e-91b6-ceef1392dd5d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.655 253665 DEBUG nova.compute.manager [req-ed312ffd-703c-45a4-8fe1-516cf7f2a6fb req-6f710913-b396-4c0e-91b6-ceef1392dd5d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] No waiting events found dispatching network-vif-unplugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.656 253665 WARNING nova.compute.manager [req-ed312ffd-703c-45a4-8fe1-516cf7f2a6fb req-6f710913-b396-4c0e-91b6-ceef1392dd5d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received unexpected event network-vif-unplugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb for instance with vm_state active and task_state rebuilding.#033[00m
Nov 22 04:33:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:33:53 np0005532048 podman[359751]: 2025-11-22 09:33:53.675513582 +0000 UTC m=+0.144537899 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:33:53 np0005532048 podman[359772]: 2025-11-22 09:33:53.687332403 +0000 UTC m=+0.093697118 container remove 09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.695 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[03c7e64b-e04f-4b0c-bf5f-c0a6479edc52]: (4, ('Sat Nov 22 09:33:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 (09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467)\n09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467\nSat Nov 22 09:33:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 (09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467)\n09a04554c3add88d1249216ad58121d18ed852361effca26c3c26a00916d0467\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.696 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2c44fdba-2329-4282-bf97-de185aa0c20d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.697 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3acaad61-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.699 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:53 np0005532048 kernel: tap3acaad61-a0: left promiscuous mode
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.715 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.717 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[32783e75-4f84-49d9-8e26-a4939a1261e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.730 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[59df2a55-28cb-4a51-ad03-8888d620370e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.731 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[794a8f93-16bf-4a1b-ad88-5d90dd37c170]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.751 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[158db009-fac2-45be-ac22-80296133b75f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685816, 'reachable_time': 33099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359828, 'error': None, 'target': 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.755 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.756 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[0cfaf21a-263e-4af9-b866-cdaa5d72e93f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 systemd[1]: run-netns-ovnmeta\x2d3acaad61\x2da3f6\x2d4bd6\x2d83f4\x2d0ab1438bb136.mount: Deactivated successfully.
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.757 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a in datapath 5ea999ce-3074-41ab-b630-d39c003b894a unbound from our chassis#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.758 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5ea999ce-3074-41ab-b630-d39c003b894a#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.770 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fbdca928-9a2d-45a0-acfe-7a2f0240e5aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.771 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5ea999ce-31 in ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.775 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5ea999ce-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.775 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[168fdb4e-ad2e-491f-ba65-c7e76369059e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.777 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c54e7acc-7ce8-4e3d-af53-e32717ad5ce7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.792 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[12ab45dd-28d4-4b31-9c43-669a948e417a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.819 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4786a2-46ad-4d21-80e2-14a9d9980fe3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 280 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.9 MiB/s wr, 261 op/s
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.861 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8ebb447e-24bc-4503-abdb-478ad6575b04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 NetworkManager[48920]: <info>  [1763804033.8680] manager: (tap5ea999ce-30): new Veth device (/org/freedesktop/NetworkManager/Devices/429)
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.869 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a4daa590-3516-444e-9e6a-fb858019a388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.893 253665 DEBUG nova.compute.manager [req-5e5a8d1d-c8f2-4060-b2f0-96d09d311150 req-78b659c7-5c0c-4dfd-9b92-8626dd43ccd7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.893 253665 DEBUG oslo_concurrency.lockutils [req-5e5a8d1d-c8f2-4060-b2f0-96d09d311150 req-78b659c7-5c0c-4dfd-9b92-8626dd43ccd7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.894 253665 DEBUG oslo_concurrency.lockutils [req-5e5a8d1d-c8f2-4060-b2f0-96d09d311150 req-78b659c7-5c0c-4dfd-9b92-8626dd43ccd7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.894 253665 DEBUG oslo_concurrency.lockutils [req-5e5a8d1d-c8f2-4060-b2f0-96d09d311150 req-78b659c7-5c0c-4dfd-9b92-8626dd43ccd7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:53 np0005532048 nova_compute[253661]: 2025-11-22 09:33:53.894 253665 DEBUG nova.compute.manager [req-5e5a8d1d-c8f2-4060-b2f0-96d09d311150 req-78b659c7-5c0c-4dfd-9b92-8626dd43ccd7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Processing event network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.913 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bf110b83-72ae-4c23-8e00-56d4a5c3756b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.917 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[713fc1aa-270c-4420-bc49-e3fc796370a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 NetworkManager[48920]: <info>  [1763804033.9506] device (tap5ea999ce-30): carrier: link connected
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.957 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c9dc50-3ce9-4320-b58c-4cf4160f1d09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:53.984 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eadc6e43-29bb-4b3b-9005-99b6227a3375]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ea999ce-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:63:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688468, 'reachable_time': 43905, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359852, 'error': None, 'target': 'ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.006 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[42bd18af-e923-492f-a7b2-d582d959bf86]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:6347'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688468, 'tstamp': 688468}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359868, 'error': None, 'target': 'ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.040 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[93246642-cee0-4af5-904f-3240153d0bce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ea999ce-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:63:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688468, 'reachable_time': 43905, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 359872, 'error': None, 'target': 'ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.092 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0506cab8-597d-4eb3-907d-529db4af9f11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.165 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e96a0077-e4df-4a8e-9ca3-6132b94d874d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.166 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ea999ce-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.167 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.167 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ea999ce-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:54 np0005532048 NetworkManager[48920]: <info>  [1763804034.1697] manager: (tap5ea999ce-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.169 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:54 np0005532048 kernel: tap5ea999ce-30: entered promiscuous mode
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.173 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.179 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5ea999ce-30, col_values=(('external_ids', {'iface-id': 'a1771b67-4cb9-46af-b99c-bccbb7cc939f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.181 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:54 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:54Z|01045|binding|INFO|Releasing lport a1771b67-4cb9-46af-b99c-bccbb7cc939f from this chassis (sb_readonly=0)
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.185 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5ea999ce-3074-41ab-b630-d39c003b894a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5ea999ce-3074-41ab-b630-d39c003b894a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.186 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[84c17024-1d28-48d2-b3e7-9443af348750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.187 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-5ea999ce-3074-41ab-b630-d39c003b894a
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/5ea999ce-3074-41ab-b630-d39c003b894a.pid.haproxy
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 5ea999ce-3074-41ab-b630-d39c003b894a
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:33:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:54.188 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a', 'env', 'PROCESS_TAG=haproxy-5ea999ce-3074-41ab-b630-d39c003b894a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5ea999ce-3074-41ab-b630-d39c003b894a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.197 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.246 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.247 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804034.247238, c5540f5a-8dfa-4b11-8452-c6fe99db1d64 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.247 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] VM Started (Lifecycle Event)#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.263 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.273 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.276 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.279 253665 INFO nova.virt.libvirt.driver [-] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Instance spawned successfully.#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.279 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.297 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.297 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804034.2496908, c5540f5a-8dfa-4b11-8452-c6fe99db1d64 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.297 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.306 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.306 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.307 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.307 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.307 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.308 253665 DEBUG nova.virt.libvirt.driver [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.313 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.317 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804034.2523408, c5540f5a-8dfa-4b11-8452-c6fe99db1d64 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.317 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.339 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.344 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.351 253665 INFO nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Deleting instance files /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808_del#033[00m
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.352 253665 INFO nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Deletion of /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808_del complete
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.378 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.394 253665 INFO nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Took 11.05 seconds to spawn the instance on the hypervisor.
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.394 253665 DEBUG nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.487 253665 INFO nova.compute.manager [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Took 12.17 seconds to build instance.
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.528 253665 DEBUG oslo_concurrency.lockutils [None req-40181cd8-8233-4828-9c6c-79a74ebc4ce8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.536 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.536 253665 INFO nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Creating image(s)
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.565 253665 DEBUG nova.storage.rbd_utils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.594 253665 DEBUG nova.storage.rbd_utils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.618 253665 DEBUG nova.storage.rbd_utils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.622 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:33:54 np0005532048 podman[359929]: 2025-11-22 09:33:54.574164411 +0000 UTC m=+0.025945000 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.708 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.709 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.710 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.710 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.733 253665 DEBUG nova.storage.rbd_utils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:33:54 np0005532048 nova_compute[253661]: 2025-11-22 09:33:54.736 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 eb81b22a-c733-4b44-8546-e4bd1c24d808_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:33:55 np0005532048 podman[359929]: 2025-11-22 09:33:55.030679797 +0000 UTC m=+0.482460366 container create c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 04:33:55 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:33:55 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3601.2 total, 600.0 interval
Cumulative writes: 32K writes, 128K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
Cumulative WAL: 32K writes, 11K syncs, 2.94 writes per sync, written: 0.12 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 7053 writes, 27K keys, 7053 commit groups, 1.0 writes per commit group, ingest: 28.68 MB, 0.05 MB/s
Interval WAL: 7053 writes, 2686 syncs, 2.63 writes per sync, written: 0.03 GB, 0.05 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:33:55 np0005532048 systemd[1]: Started libpod-conmon-c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b.scope.
Nov 22 04:33:55 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:33:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f254f240f0f6ebbbb79d2bb4edf4897aae117a89bf2f466a7a1a47ed4367c5b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:33:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:33:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:33:55 np0005532048 podman[359929]: 2025-11-22 09:33:55.655641889 +0000 UTC m=+1.107422478 container init c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:33:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:33:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:33:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:33:55 np0005532048 podman[359929]: 2025-11-22 09:33:55.664186139 +0000 UTC m=+1.115966708 container start c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:33:55 np0005532048 neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a[360036]: [NOTICE]   (360040) : New worker (360045) forked
Nov 22 04:33:55 np0005532048 neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a[360036]: [NOTICE]   (360040) : Loading success.
Nov 22 04:33:55 np0005532048 nova_compute[253661]: 2025-11-22 09:33:55.781 253665 DEBUG nova.compute.manager [req-3584d5ee-0ded-414d-9bdd-35fb345f8f97 req-0900c9dc-a5ae-4daa-aea9-cee4e486b2da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:33:55 np0005532048 nova_compute[253661]: 2025-11-22 09:33:55.782 253665 DEBUG oslo_concurrency.lockutils [req-3584d5ee-0ded-414d-9bdd-35fb345f8f97 req-0900c9dc-a5ae-4daa-aea9-cee4e486b2da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:33:55 np0005532048 nova_compute[253661]: 2025-11-22 09:33:55.782 253665 DEBUG oslo_concurrency.lockutils [req-3584d5ee-0ded-414d-9bdd-35fb345f8f97 req-0900c9dc-a5ae-4daa-aea9-cee4e486b2da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:33:55 np0005532048 nova_compute[253661]: 2025-11-22 09:33:55.783 253665 DEBUG oslo_concurrency.lockutils [req-3584d5ee-0ded-414d-9bdd-35fb345f8f97 req-0900c9dc-a5ae-4daa-aea9-cee4e486b2da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:33:55 np0005532048 nova_compute[253661]: 2025-11-22 09:33:55.783 253665 DEBUG nova.compute.manager [req-3584d5ee-0ded-414d-9bdd-35fb345f8f97 req-0900c9dc-a5ae-4daa-aea9-cee4e486b2da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] No waiting events found dispatching network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:33:55 np0005532048 nova_compute[253661]: 2025-11-22 09:33:55.783 253665 WARNING nova.compute.manager [req-3584d5ee-0ded-414d-9bdd-35fb345f8f97 req-0900c9dc-a5ae-4daa-aea9-cee4e486b2da 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received unexpected event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb for instance with vm_state active and task_state rebuild_spawning.
Nov 22 04:33:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 269 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.1 MiB/s wr, 175 op/s
Nov 22 04:33:55 np0005532048 nova_compute[253661]: 2025-11-22 09:33:55.841 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:33:56 np0005532048 nova_compute[253661]: 2025-11-22 09:33:56.020 253665 DEBUG nova.compute.manager [req-8c684629-3835-4f89-977f-784c8f0c9da2 req-87f0ea0a-cf7b-4c08-9a4f-87ec05e89edb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:33:56 np0005532048 nova_compute[253661]: 2025-11-22 09:33:56.021 253665 DEBUG oslo_concurrency.lockutils [req-8c684629-3835-4f89-977f-784c8f0c9da2 req-87f0ea0a-cf7b-4c08-9a4f-87ec05e89edb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:33:56 np0005532048 nova_compute[253661]: 2025-11-22 09:33:56.021 253665 DEBUG oslo_concurrency.lockutils [req-8c684629-3835-4f89-977f-784c8f0c9da2 req-87f0ea0a-cf7b-4c08-9a4f-87ec05e89edb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:33:56 np0005532048 nova_compute[253661]: 2025-11-22 09:33:56.021 253665 DEBUG oslo_concurrency.lockutils [req-8c684629-3835-4f89-977f-784c8f0c9da2 req-87f0ea0a-cf7b-4c08-9a4f-87ec05e89edb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:33:56 np0005532048 nova_compute[253661]: 2025-11-22 09:33:56.021 253665 DEBUG nova.compute.manager [req-8c684629-3835-4f89-977f-784c8f0c9da2 req-87f0ea0a-cf7b-4c08-9a4f-87ec05e89edb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] No waiting events found dispatching network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:33:56 np0005532048 nova_compute[253661]: 2025-11-22 09:33:56.022 253665 WARNING nova.compute.manager [req-8c684629-3835-4f89-977f-784c8f0c9da2 req-87f0ea0a-cf7b-4c08-9a4f-87ec05e89edb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received unexpected event network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a for instance with vm_state active and task_state None.
Nov 22 04:33:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:33:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:33:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:33:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:33:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:33:56 np0005532048 nova_compute[253661]: 2025-11-22 09:33:56.682 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 eb81b22a-c733-4b44-8546-e4bd1c24d808_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.947s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:33:56 np0005532048 nova_compute[253661]: 2025-11-22 09:33:56.740 253665 DEBUG nova.storage.rbd_utils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] resizing rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.684 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.684 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Ensure instance console log exists: /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.685 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.685 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.685 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.687 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Start _get_guest_xml network_info=[{"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.691 253665 WARNING nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.697 253665 DEBUG nova.virt.libvirt.host [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.697 253665 DEBUG nova.virt.libvirt.host [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.701 253665 DEBUG nova.virt.libvirt.host [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.701 253665 DEBUG nova.virt.libvirt.host [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.702 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.702 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.703 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.703 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.703 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.703 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.704 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.704 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.704 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.705 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.705 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.705 253665 DEBUG nova.virt.hardware [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.706 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'vcpu_model' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:33:57 np0005532048 nova_compute[253661]: 2025-11-22 09:33:57.725 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:33:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 262 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 974 KiB/s rd, 5.6 MiB/s wr, 146 op/s
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.113 253665 DEBUG nova.compute.manager [req-98992304-2e6a-4f35-901c-fb508133613f req-1c5bbe54-1e7b-4329-9043-eb500c3b022a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-changed-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.114 253665 DEBUG nova.compute.manager [req-98992304-2e6a-4f35-901c-fb508133613f req-1c5bbe54-1e7b-4329-9043-eb500c3b022a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Refreshing instance network info cache due to event network-changed-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.114 253665 DEBUG oslo_concurrency.lockutils [req-98992304-2e6a-4f35-901c-fb508133613f req-1c5bbe54-1e7b-4329-9043-eb500c3b022a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.115 253665 DEBUG oslo_concurrency.lockutils [req-98992304-2e6a-4f35-901c-fb508133613f req-1c5bbe54-1e7b-4329-9043-eb500c3b022a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.115 253665 DEBUG nova.network.neutron [req-98992304-2e6a-4f35-901c-fb508133613f req-1c5bbe54-1e7b-4329-9043-eb500c3b022a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Refreshing network info cache for port 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:33:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:33:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2626209637' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.226 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.249 253665 DEBUG nova.storage.rbd_utils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.255 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.550 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:33:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:33:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3073921862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.739 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.741 253665 DEBUG nova.virt.libvirt.vif [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:33:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1059413669',display_name='tempest-TestNetworkAdvancedServerOps-server-1059413669',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1059413669',id=101,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMWaHwZx+zbUAKWiLs2U5zkhr9N8SVrOtHRFfBlHQQ/ubsNn5ZhG0XVdGoDeqI3mK5yhooQBHUgTYQsbJgQUwvgPE5uhIJtGcOwev9t0XqeF59xbZ+1hxRSCdVq/1AmgA==',key_name='tempest-TestNetworkAdvancedServerOps-790856761',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:33:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-opid60ry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:33:54Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=eb81b22a-c733-4b44-8546-e4bd1c24d808,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.742 253665 DEBUG nova.network.os_vif_util [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.743 253665 DEBUG nova.network.os_vif_util [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.746 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  <uuid>eb81b22a-c733-4b44-8546-e4bd1c24d808</uuid>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  <name>instance-00000065</name>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1059413669</nova:name>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:33:57</nova:creationTime>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:33:58 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:        <nova:user uuid="ac89f965408f4a26b39ee2ae4725ff14">tempest-TestNetworkAdvancedServerOps-1215776227-project-member</nova:user>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:        <nova:project uuid="0112f56c468c4f90971b92126078e951">tempest-TestNetworkAdvancedServerOps-1215776227</nova:project>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:        <nova:port uuid="9cb5df7f-b707-42d9-b17d-75811fd05cbb">
Nov 22 04:33:58 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <entry name="serial">eb81b22a-c733-4b44-8546-e4bd1c24d808</entry>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <entry name="uuid">eb81b22a-c733-4b44-8546-e4bd1c24d808</entry>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/eb81b22a-c733-4b44-8546-e4bd1c24d808_disk">
Nov 22 04:33:58 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:33:58 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config">
Nov 22 04:33:58 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:33:58 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:ee:62:78"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <target dev="tap9cb5df7f-b7"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/console.log" append="off"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:33:58 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:33:58 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:33:58 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:33:58 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.752 253665 DEBUG nova.virt.libvirt.vif [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:33:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1059413669',display_name='tempest-TestNetworkAdvancedServerOps-server-1059413669',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1059413669',id=101,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMWaHwZx+zbUAKWiLs2U5zkhr9N8SVrOtHRFfBlHQQ/ubsNn5ZhG0XVdGoDeqI3mK5yhooQBHUgTYQsbJgQUwvgPE5uhIJtGcOwev9t0XqeF59xbZ+1hxRSCdVq/1AmgA==',key_name='tempest-TestNetworkAdvancedServerOps-790856761',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:33:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-opid60ry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:33:54Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=eb81b22a-c733-4b44-8546-e4bd1c24d808,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.753 253665 DEBUG nova.network.os_vif_util [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.753 253665 DEBUG nova.network.os_vif_util [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.754 253665 DEBUG os_vif [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.755 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.755 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.756 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.758 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.759 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9cb5df7f-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.759 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9cb5df7f-b7, col_values=(('external_ids', {'iface-id': '9cb5df7f-b707-42d9-b17d-75811fd05cbb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:62:78', 'vm-uuid': 'eb81b22a-c733-4b44-8546-e4bd1c24d808'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.761 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:58 np0005532048 NetworkManager[48920]: <info>  [1763804038.7620] manager: (tap9cb5df7f-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.766 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.768 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.769 253665 INFO os_vif [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7')#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.837 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.838 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.838 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No VIF found with MAC fa:16:3e:ee:62:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.839 253665 INFO nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Using config drive#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.870 253665 DEBUG nova.storage.rbd_utils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.890 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'ec2_ids' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:58 np0005532048 nova_compute[253661]: 2025-11-22 09:33:58.921 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'keypairs' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:33:59 np0005532048 nova_compute[253661]: 2025-11-22 09:33:59.291 253665 INFO nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Creating config drive at /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config#033[00m
Nov 22 04:33:59 np0005532048 nova_compute[253661]: 2025-11-22 09:33:59.301 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu5l267p2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:59 np0005532048 nova_compute[253661]: 2025-11-22 09:33:59.466 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu5l267p2" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:59 np0005532048 nova_compute[253661]: 2025-11-22 09:33:59.501 253665 DEBUG nova.storage.rbd_utils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:33:59 np0005532048 nova_compute[253661]: 2025-11-22 09:33:59.506 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:33:59 np0005532048 nova_compute[253661]: 2025-11-22 09:33:59.672 253665 DEBUG oslo_concurrency.processutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config eb81b22a-c733-4b44-8546-e4bd1c24d808_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:33:59 np0005532048 nova_compute[253661]: 2025-11-22 09:33:59.673 253665 INFO nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Deleting local config drive /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808/disk.config because it was imported into RBD.#033[00m
Nov 22 04:33:59 np0005532048 kernel: tap9cb5df7f-b7: entered promiscuous mode
Nov 22 04:33:59 np0005532048 NetworkManager[48920]: <info>  [1763804039.7354] manager: (tap9cb5df7f-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/432)
Nov 22 04:33:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:59Z|01046|binding|INFO|Claiming lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb for this chassis.
Nov 22 04:33:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:59Z|01047|binding|INFO|9cb5df7f-b707-42d9-b17d-75811fd05cbb: Claiming fa:16:3e:ee:62:78 10.100.0.13
Nov 22 04:33:59 np0005532048 nova_compute[253661]: 2025-11-22 09:33:59.739 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.744 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:62:78 10.100.0.13'], port_security=['fa:16:3e:ee:62:78 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'eb81b22a-c733-4b44-8546-e4bd1c24d808', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a9a9e980-b9b8-4093-8614-a39717adaa19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ed202cc-8346-4c69-b67f-f490be608094, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=9cb5df7f-b707-42d9-b17d-75811fd05cbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:33:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.746 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 9cb5df7f-b707-42d9-b17d-75811fd05cbb in datapath 3acaad61-a3f6-4bd6-83f4-0ab1438bb136 bound to our chassis#033[00m
Nov 22 04:33:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.748 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3acaad61-a3f6-4bd6-83f4-0ab1438bb136#033[00m
Nov 22 04:33:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:59Z|01048|binding|INFO|Setting lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb ovn-installed in OVS
Nov 22 04:33:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:33:59Z|01049|binding|INFO|Setting lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb up in Southbound
Nov 22 04:33:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.771 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb1a863-f612-4b27-980a-dd5fcf84ea2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.772 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3acaad61-a1 in ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:33:59 np0005532048 systemd-udevd[360262]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:33:59 np0005532048 nova_compute[253661]: 2025-11-22 09:33:59.774 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:33:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.776 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3acaad61-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:33:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.776 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[10f3ded6-b99f-4c37-b4a9-2784a4b9b663]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.780 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c588ebff-409a-49e8-8c72-d1efd205f21a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:59 np0005532048 systemd-machined[215941]: New machine qemu-129-instance-00000065.
Nov 22 04:33:59 np0005532048 systemd[1]: Started Virtual Machine qemu-129-instance-00000065.
Nov 22 04:33:59 np0005532048 NetworkManager[48920]: <info>  [1763804039.7962] device (tap9cb5df7f-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:33:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.796 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[9b77dbbb-d694-47fc-8f26-97f1300821f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:59 np0005532048 NetworkManager[48920]: <info>  [1763804039.7974] device (tap9cb5df7f-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:33:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.816 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[529d9c6b-3ecb-471b-99ea-b6cc475522c3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 288 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.3 MiB/s wr, 259 op/s
Nov 22 04:33:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.854 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf1540e-db4a-4c67-b16a-9a6d2642764b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:59 np0005532048 NetworkManager[48920]: <info>  [1763804039.8632] manager: (tap3acaad61-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/433)
Nov 22 04:33:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.867 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8c34ff98-7f2d-46fe-ac14-2beb7e1f8ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.902 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6ecd4dc7-52f4-4a19-b916-b9670fac1930]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.905 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[267fc380-3aee-409e-9501-da7989c4479d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:59 np0005532048 NetworkManager[48920]: <info>  [1763804039.9411] device (tap3acaad61-a0): carrier: link connected
Nov 22 04:33:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.952 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[61d32570-fbd0-4a0e-b5fc-2004b76912e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:33:59 np0005532048 nova_compute[253661]: 2025-11-22 09:33:59.969 253665 DEBUG nova.compute.manager [req-5f81e9be-ce41-4336-9808-4fddd5ac9658 req-d062d2ff-b00b-4165-a97b-27ec5e5828de 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:33:59 np0005532048 nova_compute[253661]: 2025-11-22 09:33:59.969 253665 DEBUG oslo_concurrency.lockutils [req-5f81e9be-ce41-4336-9808-4fddd5ac9658 req-d062d2ff-b00b-4165-a97b-27ec5e5828de 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:33:59 np0005532048 nova_compute[253661]: 2025-11-22 09:33:59.970 253665 DEBUG oslo_concurrency.lockutils [req-5f81e9be-ce41-4336-9808-4fddd5ac9658 req-d062d2ff-b00b-4165-a97b-27ec5e5828de 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:33:59 np0005532048 nova_compute[253661]: 2025-11-22 09:33:59.970 253665 DEBUG oslo_concurrency.lockutils [req-5f81e9be-ce41-4336-9808-4fddd5ac9658 req-d062d2ff-b00b-4165-a97b-27ec5e5828de 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:33:59 np0005532048 nova_compute[253661]: 2025-11-22 09:33:59.970 253665 DEBUG nova.compute.manager [req-5f81e9be-ce41-4336-9808-4fddd5ac9658 req-d062d2ff-b00b-4165-a97b-27ec5e5828de 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] No waiting events found dispatching network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:33:59 np0005532048 nova_compute[253661]: 2025-11-22 09:33:59.970 253665 WARNING nova.compute.manager [req-5f81e9be-ce41-4336-9808-4fddd5ac9658 req-d062d2ff-b00b-4165-a97b-27ec5e5828de 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received unexpected event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 22 04:33:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:33:59.978 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[31d04337-32c7-416f-826b-a558ea3d6bbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3acaad61-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a4:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 305], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689067, 'reachable_time': 28290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360295, 'error': None, 'target': 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.000 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f85a1196-f8e3-4fe7-abeb-d0799371143a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:a4ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689067, 'tstamp': 689067}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360296, 'error': None, 'target': 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.005 253665 DEBUG nova.network.neutron [req-98992304-2e6a-4f35-901c-fb508133613f req-1c5bbe54-1e7b-4329-9043-eb500c3b022a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updated VIF entry in instance network info cache for port 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.005 253665 DEBUG nova.network.neutron [req-98992304-2e6a-4f35-901c-fb508133613f req-1c5bbe54-1e7b-4329-9043-eb500c3b022a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updating instance_info_cache with network_info: [{"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.016 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[16bbacbd-63dc-4ebb-884c-f9bdb2a2b669]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3acaad61-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a4:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 305], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689067, 'reachable_time': 28290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360297, 'error': None, 'target': 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.026 253665 DEBUG oslo_concurrency.lockutils [req-98992304-2e6a-4f35-901c-fb508133613f req-1c5bbe54-1e7b-4329-9043-eb500c3b022a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.046 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8ad344-c77a-42fb-b59c-0e6fa1962827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.133 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e8482a-4bcc-452d-8fcd-d314c5b2585f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.136 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3acaad61-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.136 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.137 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3acaad61-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:00 np0005532048 NetworkManager[48920]: <info>  [1763804040.1396] manager: (tap3acaad61-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Nov 22 04:34:00 np0005532048 kernel: tap3acaad61-a0: entered promiscuous mode
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.139 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.142 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.144 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3acaad61-a0, col_values=(('external_ids', {'iface-id': '505b5f2b-f067-432d-8ac4-da2043ed18cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:00Z|01050|binding|INFO|Releasing lport 505b5f2b-f067-432d-8ac4-da2043ed18cf from this chassis (sb_readonly=0)
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.147 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.148 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3acaad61-a3f6-4bd6-83f4-0ab1438bb136.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3acaad61-a3f6-4bd6-83f4-0ab1438bb136.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.150 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[22bc76ce-1dd2-4395-b9f4-3268065aff4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.150 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-3acaad61-a3f6-4bd6-83f4-0ab1438bb136
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/3acaad61-a3f6-4bd6-83f4-0ab1438bb136.pid.haproxy
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 3acaad61-a3f6-4bd6-83f4-0ab1438bb136
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:34:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:00.151 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'env', 'PROCESS_TAG=haproxy-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3acaad61-a3f6-4bd6-83f4-0ab1438bb136.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.161 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.338 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for eb81b22a-c733-4b44-8546-e4bd1c24d808 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.341 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804040.337858, eb81b22a-c733-4b44-8546-e4bd1c24d808 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.342 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.347 253665 DEBUG nova.compute.manager [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.348 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.353 253665 INFO nova.virt.libvirt.driver [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance spawned successfully.#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.354 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.385 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.386 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.386 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.387 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.388 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.388 253665 DEBUG nova.virt.libvirt.driver [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:34:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111]
Nov 22 04:34:00 np0005532048 ceph-osd[90703]: ** DB Stats **
Nov 22 04:34:00 np0005532048 ceph-osd[90703]: Uptime(secs): 3600.1 total, 600.0 interval
Nov 22 04:34:00 np0005532048 ceph-osd[90703]: Cumulative writes: 25K writes, 104K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
Nov 22 04:34:00 np0005532048 ceph-osd[90703]: Cumulative WAL: 25K writes, 8408 syncs, 3.05 writes per sync, written: 0.10 GB, 0.03 MB/s
Nov 22 04:34:00 np0005532048 ceph-osd[90703]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:34:00 np0005532048 ceph-osd[90703]: Interval writes: 6270 writes, 23K keys, 6270 commit groups, 1.0 writes per commit group, ingest: 24.71 MB, 0.04 MB/s
Nov 22 04:34:00 np0005532048 ceph-osd[90703]: Interval WAL: 6271 writes, 2417 syncs, 2.59 writes per sync, written: 0.02 GB, 0.04 MB/s
Nov 22 04:34:00 np0005532048 ceph-osd[90703]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.515 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.519 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.541 253665 DEBUG nova.compute.manager [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.542 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.543 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804040.3396366, eb81b22a-c733-4b44-8546-e4bd1c24d808 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.543 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] VM Started (Lifecycle Event)#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.558 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.566 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.589 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 22 04:34:00 np0005532048 podman[360371]: 2025-11-22 09:34:00.599374657 +0000 UTC m=+0.072285811 container create 4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.609 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.610 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.610 253665 DEBUG nova.objects.instance [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 22 04:34:00 np0005532048 systemd[1]: Started libpod-conmon-4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d.scope.
Nov 22 04:34:00 np0005532048 podman[360371]: 2025-11-22 09:34:00.562852458 +0000 UTC m=+0.035763642 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.667 253665 DEBUG oslo_concurrency.lockutils [None req-9b0c9a2f-4363-48d2-89e2-cd4d9a31ebe9 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:00 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:34:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6f38e7aba289fc0c9ae62241c8d2f65dd0c33419fd634f67d646ac8fe686c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:34:00 np0005532048 podman[360371]: 2025-11-22 09:34:00.704930705 +0000 UTC m=+0.177841869 container init 4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:34:00 np0005532048 podman[360371]: 2025-11-22 09:34:00.712160442 +0000 UTC m=+0.185071596 container start 4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 04:34:00 np0005532048 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[360386]: [NOTICE]   (360390) : New worker (360392) forked
Nov 22 04:34:00 np0005532048 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[360386]: [NOTICE]   (360390) : Loading success.
Nov 22 04:34:00 np0005532048 nova_compute[253661]: 2025-11-22 09:34:00.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 293 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.1 MiB/s wr, 270 op/s
Nov 22 04:34:02 np0005532048 nova_compute[253661]: 2025-11-22 09:34:02.069 253665 DEBUG nova.compute.manager [req-bd005188-55dd-4d71-ad97-ad43fbb0fc46 req-2876b306-4492-4fbd-bf09-32fb2673fd70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:34:02 np0005532048 nova_compute[253661]: 2025-11-22 09:34:02.070 253665 DEBUG oslo_concurrency.lockutils [req-bd005188-55dd-4d71-ad97-ad43fbb0fc46 req-2876b306-4492-4fbd-bf09-32fb2673fd70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:02 np0005532048 nova_compute[253661]: 2025-11-22 09:34:02.070 253665 DEBUG oslo_concurrency.lockutils [req-bd005188-55dd-4d71-ad97-ad43fbb0fc46 req-2876b306-4492-4fbd-bf09-32fb2673fd70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:02 np0005532048 nova_compute[253661]: 2025-11-22 09:34:02.071 253665 DEBUG oslo_concurrency.lockutils [req-bd005188-55dd-4d71-ad97-ad43fbb0fc46 req-2876b306-4492-4fbd-bf09-32fb2673fd70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:02 np0005532048 nova_compute[253661]: 2025-11-22 09:34:02.071 253665 DEBUG nova.compute.manager [req-bd005188-55dd-4d71-ad97-ad43fbb0fc46 req-2876b306-4492-4fbd-bf09-32fb2673fd70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] No waiting events found dispatching network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:34:02 np0005532048 nova_compute[253661]: 2025-11-22 09:34:02.071 253665 WARNING nova.compute.manager [req-bd005188-55dd-4d71-ad97-ad43fbb0fc46 req-2876b306-4492-4fbd-bf09-32fb2673fd70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received unexpected event network-vif-plugged-9cb5df7f-b707-42d9-b17d-75811fd05cbb for instance with vm_state active and task_state None.#033[00m
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [devicehealth INFO root] Check health
Nov 22 04:34:02 np0005532048 nova_compute[253661]: 2025-11-22 09:34:02.807 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022084416918994594 of space, bias 1.0, pg target 0.6625325075698378 quantized to 32 (current 32)
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:34:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:34:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:34:03 np0005532048 nova_compute[253661]: 2025-11-22 09:34:03.763 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 293 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 6.1 MiB/s wr, 310 op/s
Nov 22 04:34:05 np0005532048 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000067.scope: Deactivated successfully.
Nov 22 04:34:05 np0005532048 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000067.scope: Consumed 14.227s CPU time.
Nov 22 04:34:05 np0005532048 systemd-machined[215941]: Machine qemu-127-instance-00000067 terminated.
Nov 22 04:34:05 np0005532048 nova_compute[253661]: 2025-11-22 09:34:05.826 253665 INFO nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance shutdown successfully after 24 seconds.#033[00m
Nov 22 04:34:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 293 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.1 MiB/s wr, 257 op/s
Nov 22 04:34:05 np0005532048 nova_compute[253661]: 2025-11-22 09:34:05.835 253665 INFO nova.virt.libvirt.driver [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance destroyed successfully.#033[00m
Nov 22 04:34:05 np0005532048 nova_compute[253661]: 2025-11-22 09:34:05.843 253665 INFO nova.virt.libvirt.driver [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance destroyed successfully.#033[00m
Nov 22 04:34:05 np0005532048 nova_compute[253661]: 2025-11-22 09:34:05.872 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:06 np0005532048 nova_compute[253661]: 2025-11-22 09:34:06.523 253665 INFO nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Deleting instance files /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632_del#033[00m
Nov 22 04:34:06 np0005532048 nova_compute[253661]: 2025-11-22 09:34:06.524 253665 INFO nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Deletion of /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632_del complete#033[00m
Nov 22 04:34:06 np0005532048 nova_compute[253661]: 2025-11-22 09:34:06.708 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:34:06 np0005532048 nova_compute[253661]: 2025-11-22 09:34:06.709 253665 INFO nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Creating image(s)#033[00m
Nov 22 04:34:06 np0005532048 nova_compute[253661]: 2025-11-22 09:34:06.743 253665 DEBUG nova.storage.rbd_utils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:06 np0005532048 nova_compute[253661]: 2025-11-22 09:34:06.775 253665 DEBUG nova.storage.rbd_utils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:06 np0005532048 nova_compute[253661]: 2025-11-22 09:34:06.807 253665 DEBUG nova.storage.rbd_utils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:06 np0005532048 nova_compute[253661]: 2025-11-22 09:34:06.812 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:06 np0005532048 nova_compute[253661]: 2025-11-22 09:34:06.890 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:06 np0005532048 nova_compute[253661]: 2025-11-22 09:34:06.892 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:06 np0005532048 nova_compute[253661]: 2025-11-22 09:34:06.893 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:06 np0005532048 nova_compute[253661]: 2025-11-22 09:34:06.893 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:06 np0005532048 nova_compute[253661]: 2025-11-22 09:34:06.920 253665 DEBUG nova.storage.rbd_utils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:06 np0005532048 nova_compute[253661]: 2025-11-22 09:34:06.924 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 0922fe2c-d67c-47da-a1ac-5b217442c632_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.379 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 0922fe2c-d67c-47da-a1ac-5b217442c632_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.456 253665 DEBUG nova.storage.rbd_utils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] resizing rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.593 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.595 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Ensure instance console log exists: /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.595 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.596 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.596 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.598 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.604 253665 WARNING nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.613 253665 DEBUG nova.virt.libvirt.host [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.614 253665 DEBUG nova.virt.libvirt.host [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.619 253665 DEBUG nova.virt.libvirt.host [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.620 253665 DEBUG nova.virt.libvirt.host [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.620 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.620 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.621 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.621 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.622 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.622 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.622 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.623 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.623 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.623 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.624 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.624 253665 DEBUG nova.virt.hardware [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.625 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:34:07 np0005532048 nova_compute[253661]: 2025-11-22 09:34:07.639 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 257 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.6 MiB/s wr, 272 op/s
Nov 22 04:34:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:34:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2737509771' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:34:08 np0005532048 nova_compute[253661]: 2025-11-22 09:34:08.129 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:08 np0005532048 nova_compute[253661]: 2025-11-22 09:34:08.165 253665 DEBUG nova.storage.rbd_utils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:08 np0005532048 nova_compute[253661]: 2025-11-22 09:34:08.171 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:34:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1545881550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:34:08 np0005532048 nova_compute[253661]: 2025-11-22 09:34:08.623 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:08 np0005532048 nova_compute[253661]: 2025-11-22 09:34:08.628 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  <uuid>0922fe2c-d67c-47da-a1ac-5b217442c632</uuid>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  <name>instance-00000067</name>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerShowV247Test-server-2120834641</nova:name>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:34:07</nova:creationTime>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:34:08 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:        <nova:user uuid="872ddfa50ca3429ca2eb86919c4c82cf">tempest-ServerShowV247Test-1598997937-project-member</nova:user>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:        <nova:project uuid="93a61bafffff48389d1004154f28d04c">tempest-ServerShowV247Test-1598997937</nova:project>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <entry name="serial">0922fe2c-d67c-47da-a1ac-5b217442c632</entry>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <entry name="uuid">0922fe2c-d67c-47da-a1ac-5b217442c632</entry>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/0922fe2c-d67c-47da-a1ac-5b217442c632_disk">
Nov 22 04:34:08 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:34:08 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config">
Nov 22 04:34:08 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:34:08 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/console.log" append="off"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:34:08 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:34:08 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:34:08 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:34:08 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:34:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:34:08 np0005532048 nova_compute[253661]: 2025-11-22 09:34:08.693 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:34:08 np0005532048 nova_compute[253661]: 2025-11-22 09:34:08.695 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:34:08 np0005532048 nova_compute[253661]: 2025-11-22 09:34:08.696 253665 INFO nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Using config drive#033[00m
Nov 22 04:34:08 np0005532048 nova_compute[253661]: 2025-11-22 09:34:08.724 253665 DEBUG nova.storage.rbd_utils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:08 np0005532048 nova_compute[253661]: 2025-11-22 09:34:08.746 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:34:08 np0005532048 nova_compute[253661]: 2025-11-22 09:34:08.766 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:08 np0005532048 nova_compute[253661]: 2025-11-22 09:34:08.790 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'keypairs' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:34:09 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:09Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:2e:8f 10.100.0.14
Nov 22 04:34:09 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:09Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:2e:8f 10.100.0.14
Nov 22 04:34:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 251 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.3 MiB/s wr, 267 op/s
Nov 22 04:34:09 np0005532048 nova_compute[253661]: 2025-11-22 09:34:09.903 253665 INFO nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Creating config drive at /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config
Nov 22 04:34:09 np0005532048 nova_compute[253661]: 2025-11-22 09:34:09.909 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxsms8fpo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:34:10 np0005532048 nova_compute[253661]: 2025-11-22 09:34:10.052 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxsms8fpo" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:34:10 np0005532048 nova_compute[253661]: 2025-11-22 09:34:10.085 253665 DEBUG nova.storage.rbd_utils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] rbd image 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:34:10 np0005532048 nova_compute[253661]: 2025-11-22 09:34:10.093 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:34:10 np0005532048 nova_compute[253661]: 2025-11-22 09:34:10.353 253665 DEBUG oslo_concurrency.processutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config 0922fe2c-d67c-47da-a1ac-5b217442c632_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:34:10 np0005532048 nova_compute[253661]: 2025-11-22 09:34:10.355 253665 INFO nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Deleting local config drive /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632/disk.config because it was imported into RBD.
Nov 22 04:34:10 np0005532048 systemd-machined[215941]: New machine qemu-130-instance-00000067.
Nov 22 04:34:10 np0005532048 systemd[1]: Started Virtual Machine qemu-130-instance-00000067.
Nov 22 04:34:10 np0005532048 nova_compute[253661]: 2025-11-22 09:34:10.846 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.251 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.345 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 0922fe2c-d67c-47da-a1ac-5b217442c632 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.345 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804051.344523, 0922fe2c-d67c-47da-a1ac-5b217442c632 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.346 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] VM Resumed (Lifecycle Event)
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.354 253665 DEBUG nova.compute.manager [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.355 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.361 253665 INFO nova.virt.libvirt.driver [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance spawned successfully.
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.361 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.379 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.383 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.400 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.400 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.401 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.401 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.402 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.402 253665 DEBUG nova.virt.libvirt.driver [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.414 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.414 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804051.3532696, 0922fe2c-d67c-47da-a1ac-5b217442c632 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.414 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] VM Started (Lifecycle Event)
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.462 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.467 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.495 253665 DEBUG nova.compute.manager [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.497 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.559 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.560 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.560 253665 DEBUG nova.objects.instance [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 22 04:34:11 np0005532048 nova_compute[253661]: 2025-11-22 09:34:11.619 253665 DEBUG oslo_concurrency.lockutils [None req-bdc40d33-f13e-4d24-8914-fe8d5a2cffe1 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:34:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 271 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 164 op/s
Nov 22 04:34:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:34:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2253664503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:34:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:34:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2253664503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:34:13 np0005532048 nova_compute[253661]: 2025-11-22 09:34:13.544 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "0922fe2c-d67c-47da-a1ac-5b217442c632" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:34:13 np0005532048 nova_compute[253661]: 2025-11-22 09:34:13.546 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "0922fe2c-d67c-47da-a1ac-5b217442c632" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:34:13 np0005532048 nova_compute[253661]: 2025-11-22 09:34:13.547 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "0922fe2c-d67c-47da-a1ac-5b217442c632-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:34:13 np0005532048 nova_compute[253661]: 2025-11-22 09:34:13.547 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "0922fe2c-d67c-47da-a1ac-5b217442c632-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:34:13 np0005532048 nova_compute[253661]: 2025-11-22 09:34:13.547 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "0922fe2c-d67c-47da-a1ac-5b217442c632-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:34:13 np0005532048 nova_compute[253661]: 2025-11-22 09:34:13.549 253665 INFO nova.compute.manager [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Terminating instance
Nov 22 04:34:13 np0005532048 nova_compute[253661]: 2025-11-22 09:34:13.550 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "refresh_cache-0922fe2c-d67c-47da-a1ac-5b217442c632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:34:13 np0005532048 nova_compute[253661]: 2025-11-22 09:34:13.550 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquired lock "refresh_cache-0922fe2c-d67c-47da-a1ac-5b217442c632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:34:13 np0005532048 nova_compute[253661]: 2025-11-22 09:34:13.550 253665 DEBUG nova.network.neutron [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:34:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:34:13 np0005532048 nova_compute[253661]: 2025-11-22 09:34:13.770 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:34:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 293 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 247 op/s
Nov 22 04:34:13 np0005532048 nova_compute[253661]: 2025-11-22 09:34:13.885 253665 DEBUG nova.network.neutron [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:34:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:14.136 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:34:14 np0005532048 nova_compute[253661]: 2025-11-22 09:34:14.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:34:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:14.137 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:34:14 np0005532048 nova_compute[253661]: 2025-11-22 09:34:14.243 253665 DEBUG nova.network.neutron [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:34:14 np0005532048 nova_compute[253661]: 2025-11-22 09:34:14.261 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Releasing lock "refresh_cache-0922fe2c-d67c-47da-a1ac-5b217442c632" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:34:14 np0005532048 nova_compute[253661]: 2025-11-22 09:34:14.264 253665 DEBUG nova.compute.manager [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:34:14 np0005532048 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000067.scope: Deactivated successfully.
Nov 22 04:34:14 np0005532048 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000067.scope: Consumed 3.932s CPU time.
Nov 22 04:34:14 np0005532048 systemd-machined[215941]: Machine qemu-130-instance-00000067 terminated.
Nov 22 04:34:14 np0005532048 nova_compute[253661]: 2025-11-22 09:34:14.501 253665 INFO nova.virt.libvirt.driver [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance destroyed successfully.
Nov 22 04:34:14 np0005532048 nova_compute[253661]: 2025-11-22 09:34:14.501 253665 DEBUG nova.objects.instance [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'resources' on Instance uuid 0922fe2c-d67c-47da-a1ac-5b217442c632 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:34:15 np0005532048 nova_compute[253661]: 2025-11-22 09:34:15.125 253665 INFO nova.compute.manager [None req-5482febe-56e6-43c5-a57d-dce6e3126815 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Get console output
Nov 22 04:34:15 np0005532048 nova_compute[253661]: 2025-11-22 09:34:15.131 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 04:34:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:15.140 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:34:15 np0005532048 nova_compute[253661]: 2025-11-22 09:34:15.285 253665 INFO nova.virt.libvirt.driver [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Deleting instance files /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632_del
Nov 22 04:34:15 np0005532048 nova_compute[253661]: 2025-11-22 09:34:15.286 253665 INFO nova.virt.libvirt.driver [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Deletion of /var/lib/nova/instances/0922fe2c-d67c-47da-a1ac-5b217442c632_del complete
Nov 22 04:34:15 np0005532048 nova_compute[253661]: 2025-11-22 09:34:15.369 253665 INFO nova.compute.manager [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Took 1.10 seconds to destroy the instance on the hypervisor.
Nov 22 04:34:15 np0005532048 nova_compute[253661]: 2025-11-22 09:34:15.370 253665 DEBUG oslo.service.loopingcall [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:34:15 np0005532048 nova_compute[253661]: 2025-11-22 09:34:15.370 253665 DEBUG nova.compute.manager [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:34:15 np0005532048 nova_compute[253661]: 2025-11-22 09:34:15.371 253665 DEBUG nova.network.neutron [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:34:15 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 22 04:34:15 np0005532048 nova_compute[253661]: 2025-11-22 09:34:15.528 253665 DEBUG nova.network.neutron [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:34:15 np0005532048 nova_compute[253661]: 2025-11-22 09:34:15.543 253665 DEBUG nova.network.neutron [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:34:15 np0005532048 nova_compute[253661]: 2025-11-22 09:34:15.560 253665 INFO nova.compute.manager [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Took 0.19 seconds to deallocate network for instance.
Nov 22 04:34:15 np0005532048 nova_compute[253661]: 2025-11-22 09:34:15.623 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:34:15 np0005532048 nova_compute[253661]: 2025-11-22 09:34:15.624 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:34:15 np0005532048 nova_compute[253661]: 2025-11-22 09:34:15.763 253665 DEBUG oslo_concurrency.processutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:34:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 293 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 206 op/s
Nov 22 04:34:15 np0005532048 nova_compute[253661]: 2025-11-22 09:34:15.850 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:34:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:15Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ee:62:78 10.100.0.13
Nov 22 04:34:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:15Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ee:62:78 10.100.0.13
Nov 22 04:34:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:34:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/954433103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:34:16 np0005532048 nova_compute[253661]: 2025-11-22 09:34:16.242 253665 DEBUG oslo_concurrency.processutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:34:16 np0005532048 nova_compute[253661]: 2025-11-22 09:34:16.250 253665 DEBUG nova.compute.provider_tree [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:34:16 np0005532048 nova_compute[253661]: 2025-11-22 09:34:16.278 253665 DEBUG nova.scheduler.client.report [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:34:16 np0005532048 nova_compute[253661]: 2025-11-22 09:34:16.312 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:34:16 np0005532048 nova_compute[253661]: 2025-11-22 09:34:16.347 253665 INFO nova.scheduler.client.report [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Deleted allocations for instance 0922fe2c-d67c-47da-a1ac-5b217442c632
Nov 22 04:34:16 np0005532048 nova_compute[253661]: 2025-11-22 09:34:16.404 253665 DEBUG oslo_concurrency.lockutils [None req-56892fcf-cb5a-421f-9496-59e1e2b1cb68 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "0922fe2c-d67c-47da-a1ac-5b217442c632" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:34:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:34:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:34:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:34:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:34:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:34:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:34:16 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a24fb9e6-6655-4690-a299-bbfacb8263e3 does not exist
Nov 22 04:34:16 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 41127b87-1f75-472b-8837-8c3bdcb61fbe does not exist
Nov 22 04:34:16 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ea1852af-6b4c-4c6e-bf3e-22e8c859e1fc does not exist
Nov 22 04:34:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:34:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:34:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:34:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:34:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:34:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:34:16 np0005532048 podman[360966]: 2025-11-22 09:34:16.724339825 +0000 UTC m=+0.069100781 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, 
org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:34:16 np0005532048 podman[360967]: 2025-11-22 09:34:16.731478662 +0000 UTC m=+0.076340920 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 04:34:17 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:34:17 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:34:17 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:34:17 np0005532048 podman[361118]: 2025-11-22 09:34:17.233388075 +0000 UTC m=+0.047572412 container create 92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 04:34:17 np0005532048 systemd[1]: Started libpod-conmon-92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b.scope.
Nov 22 04:34:17 np0005532048 podman[361118]: 2025-11-22 09:34:17.209497797 +0000 UTC m=+0.023682164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:34:17 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:34:17 np0005532048 podman[361118]: 2025-11-22 09:34:17.330377602 +0000 UTC m=+0.144561939 container init 92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:34:17 np0005532048 podman[361118]: 2025-11-22 09:34:17.339474836 +0000 UTC m=+0.153659173 container start 92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 04:34:17 np0005532048 podman[361118]: 2025-11-22 09:34:17.345482884 +0000 UTC m=+0.159667221 container attach 92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 04:34:17 np0005532048 peaceful_roentgen[361134]: 167 167
Nov 22 04:34:17 np0005532048 systemd[1]: libpod-92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b.scope: Deactivated successfully.
Nov 22 04:34:17 np0005532048 conmon[361134]: conmon 92f67ee57f6d78a430ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b.scope/container/memory.events
Nov 22 04:34:17 np0005532048 podman[361118]: 2025-11-22 09:34:17.350271452 +0000 UTC m=+0.164455809 container died 92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 04:34:17 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5597ba80905c1d64a139a484186ae68892dbbd312fa68174d21077d7bcba01a1-merged.mount: Deactivated successfully.
Nov 22 04:34:17 np0005532048 podman[361118]: 2025-11-22 09:34:17.417941737 +0000 UTC m=+0.232126074 container remove 92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_roentgen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 04:34:17 np0005532048 systemd[1]: libpod-conmon-92f67ee57f6d78a430acfd8b63e659d246f484f4f2f69c681c76ab78462cae8b.scope: Deactivated successfully.
Nov 22 04:34:17 np0005532048 podman[361157]: 2025-11-22 09:34:17.651083975 +0000 UTC m=+0.062356245 container create 543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:34:17 np0005532048 systemd[1]: Started libpod-conmon-543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1.scope.
Nov 22 04:34:17 np0005532048 podman[361157]: 2025-11-22 09:34:17.626914571 +0000 UTC m=+0.038186901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:34:17 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:34:17 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36855b6b8c4af0f011b5a690dbe46e0512d6a54df30e09468ff2aa5e9ecf41a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:34:17 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36855b6b8c4af0f011b5a690dbe46e0512d6a54df30e09468ff2aa5e9ecf41a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:34:17 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36855b6b8c4af0f011b5a690dbe46e0512d6a54df30e09468ff2aa5e9ecf41a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:34:17 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36855b6b8c4af0f011b5a690dbe46e0512d6a54df30e09468ff2aa5e9ecf41a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:34:17 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36855b6b8c4af0f011b5a690dbe46e0512d6a54df30e09468ff2aa5e9ecf41a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:34:17 np0005532048 nova_compute[253661]: 2025-11-22 09:34:17.757 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:17 np0005532048 nova_compute[253661]: 2025-11-22 09:34:17.760 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:17 np0005532048 nova_compute[253661]: 2025-11-22 09:34:17.760 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:17 np0005532048 nova_compute[253661]: 2025-11-22 09:34:17.760 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:17 np0005532048 nova_compute[253661]: 2025-11-22 09:34:17.761 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:17 np0005532048 nova_compute[253661]: 2025-11-22 09:34:17.762 253665 INFO nova.compute.manager [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Terminating instance#033[00m
Nov 22 04:34:17 np0005532048 nova_compute[253661]: 2025-11-22 09:34:17.763 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "refresh_cache-9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:34:17 np0005532048 nova_compute[253661]: 2025-11-22 09:34:17.763 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquired lock "refresh_cache-9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:34:17 np0005532048 nova_compute[253661]: 2025-11-22 09:34:17.763 253665 DEBUG nova.network.neutron [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:34:17 np0005532048 podman[361157]: 2025-11-22 09:34:17.76503028 +0000 UTC m=+0.176302570 container init 543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 04:34:17 np0005532048 podman[361157]: 2025-11-22 09:34:17.774944164 +0000 UTC m=+0.186216434 container start 543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:34:17 np0005532048 podman[361157]: 2025-11-22 09:34:17.780403509 +0000 UTC m=+0.191675779 container attach 543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:34:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 287 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.8 MiB/s wr, 281 op/s
Nov 22 04:34:17 np0005532048 nova_compute[253661]: 2025-11-22 09:34:17.944 253665 DEBUG nova.network.neutron [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:34:18 np0005532048 nova_compute[253661]: 2025-11-22 09:34:18.221 253665 DEBUG nova.network.neutron [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:34:18 np0005532048 nova_compute[253661]: 2025-11-22 09:34:18.243 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Releasing lock "refresh_cache-9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:34:18 np0005532048 nova_compute[253661]: 2025-11-22 09:34:18.243 253665 DEBUG nova.compute.manager [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:34:18 np0005532048 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000066.scope: Deactivated successfully.
Nov 22 04:34:18 np0005532048 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000066.scope: Consumed 15.206s CPU time.
Nov 22 04:34:18 np0005532048 systemd-machined[215941]: Machine qemu-126-instance-00000066 terminated.
Nov 22 04:34:18 np0005532048 nova_compute[253661]: 2025-11-22 09:34:18.480 253665 INFO nova.virt.libvirt.driver [-] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Instance destroyed successfully.#033[00m
Nov 22 04:34:18 np0005532048 nova_compute[253661]: 2025-11-22 09:34:18.481 253665 DEBUG nova.objects.instance [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lazy-loading 'resources' on Instance uuid 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:34:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:34:18 np0005532048 nova_compute[253661]: 2025-11-22 09:34:18.776 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:18 np0005532048 bold_franklin[361173]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:34:18 np0005532048 bold_franklin[361173]: --> relative data size: 1.0
Nov 22 04:34:18 np0005532048 bold_franklin[361173]: --> All data devices are unavailable
Nov 22 04:34:18 np0005532048 systemd[1]: libpod-543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1.scope: Deactivated successfully.
Nov 22 04:34:18 np0005532048 systemd[1]: libpod-543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1.scope: Consumed 1.035s CPU time.
Nov 22 04:34:18 np0005532048 podman[361157]: 2025-11-22 09:34:18.888803189 +0000 UTC m=+1.300075469 container died 543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 04:34:18 np0005532048 systemd[1]: var-lib-containers-storage-overlay-36855b6b8c4af0f011b5a690dbe46e0512d6a54df30e09468ff2aa5e9ecf41a3-merged.mount: Deactivated successfully.
Nov 22 04:34:19 np0005532048 podman[361157]: 2025-11-22 09:34:19.002163969 +0000 UTC m=+1.413436239 container remove 543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_franklin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:34:19 np0005532048 systemd[1]: libpod-conmon-543e7a3a6339d3e28472aaf2f37f8492391c03dc227f23bd4a9da03df0062be1.scope: Deactivated successfully.
Nov 22 04:34:19 np0005532048 nova_compute[253661]: 2025-11-22 09:34:19.146 253665 INFO nova.virt.libvirt.driver [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Deleting instance files /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_del#033[00m
Nov 22 04:34:19 np0005532048 nova_compute[253661]: 2025-11-22 09:34:19.148 253665 INFO nova.virt.libvirt.driver [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Deletion of /var/lib/nova/instances/9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae_del complete#033[00m
Nov 22 04:34:19 np0005532048 nova_compute[253661]: 2025-11-22 09:34:19.259 253665 INFO nova.compute.manager [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Took 1.02 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:34:19 np0005532048 nova_compute[253661]: 2025-11-22 09:34:19.260 253665 DEBUG oslo.service.loopingcall [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:34:19 np0005532048 nova_compute[253661]: 2025-11-22 09:34:19.261 253665 DEBUG nova.compute.manager [-] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:34:19 np0005532048 nova_compute[253661]: 2025-11-22 09:34:19.261 253665 DEBUG nova.network.neutron [-] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:34:19 np0005532048 nova_compute[253661]: 2025-11-22 09:34:19.400 253665 DEBUG nova.network.neutron [-] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:34:19 np0005532048 nova_compute[253661]: 2025-11-22 09:34:19.410 253665 DEBUG nova.network.neutron [-] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:34:19 np0005532048 nova_compute[253661]: 2025-11-22 09:34:19.431 253665 INFO nova.compute.manager [-] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Took 0.17 seconds to deallocate network for instance.#033[00m
Nov 22 04:34:19 np0005532048 nova_compute[253661]: 2025-11-22 09:34:19.476 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:19 np0005532048 nova_compute[253661]: 2025-11-22 09:34:19.477 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:19 np0005532048 nova_compute[253661]: 2025-11-22 09:34:19.585 253665 DEBUG oslo_concurrency.processutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:19 np0005532048 podman[361378]: 2025-11-22 09:34:19.687102798 +0000 UTC m=+0.040547339 container create 3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bouman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 04:34:19 np0005532048 systemd[1]: Started libpod-conmon-3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe.scope.
Nov 22 04:34:19 np0005532048 podman[361378]: 2025-11-22 09:34:19.670637782 +0000 UTC m=+0.024082343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:34:19 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:34:19 np0005532048 podman[361378]: 2025-11-22 09:34:19.811286194 +0000 UTC m=+0.164730735 container init 3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:34:19 np0005532048 podman[361378]: 2025-11-22 09:34:19.819858104 +0000 UTC m=+0.173302645 container start 3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bouman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 04:34:19 np0005532048 loving_bouman[361395]: 167 167
Nov 22 04:34:19 np0005532048 systemd[1]: libpod-3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe.scope: Deactivated successfully.
Nov 22 04:34:19 np0005532048 podman[361378]: 2025-11-22 09:34:19.828750654 +0000 UTC m=+0.182195225 container attach 3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bouman, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 04:34:19 np0005532048 podman[361378]: 2025-11-22 09:34:19.830528327 +0000 UTC m=+0.183972888 container died 3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bouman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:34:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 272 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.2 MiB/s wr, 245 op/s
Nov 22 04:34:19 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e768aa26aaf80eaf67224ae680aa63f3377b9a65525f3f326634da9699f3799e-merged.mount: Deactivated successfully.
Nov 22 04:34:19 np0005532048 podman[361378]: 2025-11-22 09:34:19.932295823 +0000 UTC m=+0.285740374 container remove 3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bouman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:34:19 np0005532048 systemd[1]: libpod-conmon-3390ae076303b60d53163697b88867f5c401baf6f792013a3468c91601c7c9fe.scope: Deactivated successfully.
Nov 22 04:34:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:34:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3766812116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:34:20 np0005532048 nova_compute[253661]: 2025-11-22 09:34:20.100 253665 DEBUG oslo_concurrency.processutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:20 np0005532048 nova_compute[253661]: 2025-11-22 09:34:20.108 253665 DEBUG nova.compute.provider_tree [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:34:20 np0005532048 nova_compute[253661]: 2025-11-22 09:34:20.124 253665 DEBUG nova.scheduler.client.report [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:34:20 np0005532048 nova_compute[253661]: 2025-11-22 09:34:20.143 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:20 np0005532048 podman[361437]: 2025-11-22 09:34:20.146289089 +0000 UTC m=+0.070089346 container create e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 04:34:20 np0005532048 nova_compute[253661]: 2025-11-22 09:34:20.165 253665 INFO nova.scheduler.client.report [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Deleted allocations for instance 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae#033[00m
Nov 22 04:34:20 np0005532048 systemd[1]: Started libpod-conmon-e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d.scope.
Nov 22 04:34:20 np0005532048 podman[361437]: 2025-11-22 09:34:20.105683929 +0000 UTC m=+0.029484216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:34:20 np0005532048 nova_compute[253661]: 2025-11-22 09:34:20.214 253665 DEBUG oslo_concurrency.lockutils [None req-5745f2d0-386f-49bf-ad17-4a4eae1b1578 872ddfa50ca3429ca2eb86919c4c82cf 93a61bafffff48389d1004154f28d04c - - default default] Lock "9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:20 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:34:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bcecf19abef9bf2aa64922b3830a79700e0c3662e5873abab336fae11e09754/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:34:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bcecf19abef9bf2aa64922b3830a79700e0c3662e5873abab336fae11e09754/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:34:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bcecf19abef9bf2aa64922b3830a79700e0c3662e5873abab336fae11e09754/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:34:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bcecf19abef9bf2aa64922b3830a79700e0c3662e5873abab336fae11e09754/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:34:20 np0005532048 podman[361437]: 2025-11-22 09:34:20.256216695 +0000 UTC m=+0.180017002 container init e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 04:34:20 np0005532048 podman[361437]: 2025-11-22 09:34:20.263640017 +0000 UTC m=+0.187440274 container start e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:34:20 np0005532048 podman[361437]: 2025-11-22 09:34:20.269489611 +0000 UTC m=+0.193289928 container attach e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 04:34:20 np0005532048 nova_compute[253661]: 2025-11-22 09:34:20.851 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]: {
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:    "0": [
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:        {
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "devices": [
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "/dev/loop3"
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            ],
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "lv_name": "ceph_lv0",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "lv_size": "21470642176",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "name": "ceph_lv0",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "tags": {
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.cluster_name": "ceph",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.crush_device_class": "",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.encrypted": "0",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.osd_id": "0",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.type": "block",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.vdo": "0"
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            },
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "type": "block",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "vg_name": "ceph_vg0"
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:        }
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:    ],
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:    "1": [
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:        {
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "devices": [
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "/dev/loop4"
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            ],
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "lv_name": "ceph_lv1",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "lv_size": "21470642176",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "name": "ceph_lv1",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "tags": {
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.cluster_name": "ceph",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.crush_device_class": "",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.encrypted": "0",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.osd_id": "1",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.type": "block",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.vdo": "0"
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            },
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "type": "block",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "vg_name": "ceph_vg1"
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:        }
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:    ],
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:    "2": [
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:        {
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "devices": [
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "/dev/loop5"
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            ],
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "lv_name": "ceph_lv2",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "lv_size": "21470642176",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "name": "ceph_lv2",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "tags": {
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.cluster_name": "ceph",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.crush_device_class": "",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.encrypted": "0",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.osd_id": "2",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.type": "block",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:                "ceph.vdo": "0"
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            },
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "type": "block",
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:            "vg_name": "ceph_vg2"
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:        }
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]:    ]
Nov 22 04:34:21 np0005532048 pensive_mahavira[361456]: }
Nov 22 04:34:21 np0005532048 systemd[1]: libpod-e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d.scope: Deactivated successfully.
Nov 22 04:34:21 np0005532048 podman[361437]: 2025-11-22 09:34:21.130609826 +0000 UTC m=+1.054410103 container died e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 22 04:34:21 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1bcecf19abef9bf2aa64922b3830a79700e0c3662e5873abab336fae11e09754-merged.mount: Deactivated successfully.
Nov 22 04:34:21 np0005532048 podman[361437]: 2025-11-22 09:34:21.191556256 +0000 UTC m=+1.115356513 container remove e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:34:21 np0005532048 systemd[1]: libpod-conmon-e21c399e8431ba461456c5049dc555521f5388ef9600f9f1550a72792cea0b8d.scope: Deactivated successfully.
Nov 22 04:34:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 256 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.0 MiB/s wr, 224 op/s
Nov 22 04:34:21 np0005532048 podman[361619]: 2025-11-22 09:34:21.929205902 +0000 UTC m=+0.045344258 container create 0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 04:34:21 np0005532048 systemd[1]: Started libpod-conmon-0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18.scope.
Nov 22 04:34:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:34:22 np0005532048 podman[361619]: 2025-11-22 09:34:21.909742413 +0000 UTC m=+0.025880769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:34:22 np0005532048 podman[361619]: 2025-11-22 09:34:22.017633878 +0000 UTC m=+0.133772254 container init 0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 04:34:22 np0005532048 podman[361619]: 2025-11-22 09:34:22.027372538 +0000 UTC m=+0.143510894 container start 0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:34:22 np0005532048 podman[361619]: 2025-11-22 09:34:22.031493289 +0000 UTC m=+0.147631665 container attach 0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:34:22 np0005532048 systemd[1]: libpod-0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18.scope: Deactivated successfully.
Nov 22 04:34:22 np0005532048 condescending_agnesi[361635]: 167 167
Nov 22 04:34:22 np0005532048 conmon[361635]: conmon 0b964d6b6ac7f2b7d0da <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18.scope/container/memory.events
Nov 22 04:34:22 np0005532048 podman[361619]: 2025-11-22 09:34:22.035732524 +0000 UTC m=+0.151870880 container died 0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:34:22 np0005532048 systemd[1]: var-lib-containers-storage-overlay-bb931832c02d65b9b735ff1a11721f9349b7baecb28a5fbf839739790267243b-merged.mount: Deactivated successfully.
Nov 22 04:34:22 np0005532048 podman[361619]: 2025-11-22 09:34:22.076771813 +0000 UTC m=+0.192910169 container remove 0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_agnesi, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:34:22 np0005532048 systemd[1]: libpod-conmon-0b964d6b6ac7f2b7d0da392dc68010b8eb696035c8f7b9f5a31918b2a54e7e18.scope: Deactivated successfully.
Nov 22 04:34:22 np0005532048 nova_compute[253661]: 2025-11-22 09:34:22.117 253665 INFO nova.compute.manager [None req-7daa3790-656a-4a94-bd91-9abe7a6aea00 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Get console output#033[00m
Nov 22 04:34:22 np0005532048 nova_compute[253661]: 2025-11-22 09:34:22.125 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:34:22 np0005532048 podman[361659]: 2025-11-22 09:34:22.271867816 +0000 UTC m=+0.047341466 container create e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:34:22 np0005532048 systemd[1]: Started libpod-conmon-e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6.scope.
Nov 22 04:34:22 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:34:22 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68209bd4d82f7d476eced1099e43bc891bae82cd2b26aa7acb41033dd4ddf3c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:34:22 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68209bd4d82f7d476eced1099e43bc891bae82cd2b26aa7acb41033dd4ddf3c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:34:22 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68209bd4d82f7d476eced1099e43bc891bae82cd2b26aa7acb41033dd4ddf3c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:34:22 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68209bd4d82f7d476eced1099e43bc891bae82cd2b26aa7acb41033dd4ddf3c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:34:22 np0005532048 podman[361659]: 2025-11-22 09:34:22.253682768 +0000 UTC m=+0.029156438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:34:22 np0005532048 podman[361659]: 2025-11-22 09:34:22.362474235 +0000 UTC m=+0.137947905 container init e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 04:34:22 np0005532048 podman[361659]: 2025-11-22 09:34:22.370607276 +0000 UTC m=+0.146080926 container start e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shamir, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:34:22 np0005532048 podman[361659]: 2025-11-22 09:34:22.374279846 +0000 UTC m=+0.149753506 container attach e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:34:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:34:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:34:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:34:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:34:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:34:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.376 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.378 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.378 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.379 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.379 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.381 253665 INFO nova.compute.manager [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Terminating instance#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.384 253665 DEBUG nova.compute.manager [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]: {
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:        "osd_id": 1,
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:        "type": "bluestore"
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:    },
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:        "osd_id": 0,
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:        "type": "bluestore"
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:    },
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:        "osd_id": 2,
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:        "type": "bluestore"
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]:    }
Nov 22 04:34:23 np0005532048 friendly_shamir[361675]: }
Nov 22 04:34:23 np0005532048 systemd[1]: libpod-e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6.scope: Deactivated successfully.
Nov 22 04:34:23 np0005532048 systemd[1]: libpod-e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6.scope: Consumed 1.073s CPU time.
Nov 22 04:34:23 np0005532048 podman[361659]: 2025-11-22 09:34:23.439888754 +0000 UTC m=+1.215362454 container died e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 04:34:23 np0005532048 kernel: tap9cb5df7f-b7 (unregistering): left promiscuous mode
Nov 22 04:34:23 np0005532048 NetworkManager[48920]: <info>  [1763804063.4514] device (tap9cb5df7f-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.457 253665 DEBUG nova.compute.manager [req-4aee1080-220d-4365-9dea-fc3b6147f51b req-7450ef05-91d7-48c2-8ea3-cd1518de2d79 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-changed-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.457 253665 DEBUG nova.compute.manager [req-4aee1080-220d-4365-9dea-fc3b6147f51b req-7450ef05-91d7-48c2-8ea3-cd1518de2d79 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Refreshing instance network info cache due to event network-changed-9cb5df7f-b707-42d9-b17d-75811fd05cbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.458 253665 DEBUG oslo_concurrency.lockutils [req-4aee1080-220d-4365-9dea-fc3b6147f51b req-7450ef05-91d7-48c2-8ea3-cd1518de2d79 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.458 253665 DEBUG oslo_concurrency.lockutils [req-4aee1080-220d-4365-9dea-fc3b6147f51b req-7450ef05-91d7-48c2-8ea3-cd1518de2d79 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.458 253665 DEBUG nova.network.neutron [req-4aee1080-220d-4365-9dea-fc3b6147f51b req-7450ef05-91d7-48c2-8ea3-cd1518de2d79 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Refreshing network info cache for port 9cb5df7f-b707-42d9-b17d-75811fd05cbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:34:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:23Z|01051|binding|INFO|Releasing lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb from this chassis (sb_readonly=0)
Nov 22 04:34:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:23Z|01052|binding|INFO|Setting lport 9cb5df7f-b707-42d9-b17d-75811fd05cbb down in Southbound
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.467 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:23Z|01053|binding|INFO|Removing iface tap9cb5df7f-b7 ovn-installed in OVS
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.470 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:23 np0005532048 systemd[1]: var-lib-containers-storage-overlay-68209bd4d82f7d476eced1099e43bc891bae82cd2b26aa7acb41033dd4ddf3c9-merged.mount: Deactivated successfully.
Nov 22 04:34:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.489 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:62:78 10.100.0.13'], port_security=['fa:16:3e:ee:62:78 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'eb81b22a-c733-4b44-8546-e4bd1c24d808', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a9a9e980-b9b8-4093-8614-a39717adaa19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ed202cc-8346-4c69-b67f-f490be608094, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=9cb5df7f-b707-42d9-b17d-75811fd05cbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:34:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.491 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 9cb5df7f-b707-42d9-b17d-75811fd05cbb in datapath 3acaad61-a3f6-4bd6-83f4-0ab1438bb136 unbound from our chassis#033[00m
Nov 22 04:34:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.493 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3acaad61-a3f6-4bd6-83f4-0ab1438bb136, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.498 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.495 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e1daa2c4-f605-4242-b91f-d04759a1ffc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.501 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 namespace which is not needed anymore#033[00m
Nov 22 04:34:23 np0005532048 podman[361659]: 2025-11-22 09:34:23.5218292 +0000 UTC m=+1.297302840 container remove e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shamir, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:34:23 np0005532048 systemd[1]: libpod-conmon-e2c7556fc8ac63df7d68d5dc06d135c0c94315e9817d13fe9c6ae297e2a7b2a6.scope: Deactivated successfully.
Nov 22 04:34:23 np0005532048 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000065.scope: Deactivated successfully.
Nov 22 04:34:23 np0005532048 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000065.scope: Consumed 16.902s CPU time.
Nov 22 04:34:23 np0005532048 systemd-machined[215941]: Machine qemu-129-instance-00000065 terminated.
Nov 22 04:34:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:34:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:34:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:34:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:34:23 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev b89ec349-f9b0-4713-b2dc-4864b2dcf627 does not exist
Nov 22 04:34:23 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 704feecb-8b05-447b-a4e4-180c22b1fe8e does not exist
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.631 253665 INFO nova.virt.libvirt.driver [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Instance destroyed successfully.#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.633 253665 DEBUG nova.objects.instance [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'resources' on Instance uuid eb81b22a-c733-4b44-8546-e4bd1c24d808 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.644 253665 DEBUG nova.virt.libvirt.vif [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:33:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1059413669',display_name='tempest-TestNetworkAdvancedServerOps-server-1059413669',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1059413669',id=101,image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMWaHwZx+zbUAKWiLs2U5zkhr9N8SVrOtHRFfBlHQQ/ubsNn5ZhG0XVdGoDeqI3mK5yhooQBHUgTYQsbJgQUwvgPE5uhIJtGcOwev9t0XqeF59xbZ+1hxRSCdVq/1AmgA==',key_name='tempest-TestNetworkAdvancedServerOps-790856761',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:34:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-opid60ry',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='baf70c6a-4f18-40eb-9d40-874af269a47f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:34:00Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=eb81b22a-c733-4b44-8546-e4bd1c24d808,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.645 253665 DEBUG nova.network.os_vif_util [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.646 253665 DEBUG nova.network.os_vif_util [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.647 253665 DEBUG os_vif [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.649 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.651 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cb5df7f-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.653 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.658 253665 INFO os_vif [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:78,bridge_name='br-int',has_traffic_filtering=True,id=9cb5df7f-b707-42d9-b17d-75811fd05cbb,network=Network(3acaad61-a3f6-4bd6-83f4-0ab1438bb136),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9cb5df7f-b7')#033[00m
Nov 22 04:34:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:34:23 np0005532048 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[360386]: [NOTICE]   (360390) : haproxy version is 2.8.14-c23fe91
Nov 22 04:34:23 np0005532048 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[360386]: [NOTICE]   (360390) : path to executable is /usr/sbin/haproxy
Nov 22 04:34:23 np0005532048 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[360386]: [WARNING]  (360390) : Exiting Master process...
Nov 22 04:34:23 np0005532048 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[360386]: [ALERT]    (360390) : Current worker (360392) exited with code 143 (Terminated)
Nov 22 04:34:23 np0005532048 neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136[360386]: [WARNING]  (360390) : All workers exited. Exiting... (0)
Nov 22 04:34:23 np0005532048 systemd[1]: libpod-4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d.scope: Deactivated successfully.
Nov 22 04:34:23 np0005532048 podman[361756]: 2025-11-22 09:34:23.688356219 +0000 UTC m=+0.054404950 container died 4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 04:34:23 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d-userdata-shm.mount: Deactivated successfully.
Nov 22 04:34:23 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6a6f38e7aba289fc0c9ae62241c8d2f65dd0c33419fd634f67d646ac8fe686c6-merged.mount: Deactivated successfully.
Nov 22 04:34:23 np0005532048 podman[361756]: 2025-11-22 09:34:23.741938969 +0000 UTC m=+0.107987700 container cleanup 4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 04:34:23 np0005532048 systemd[1]: libpod-conmon-4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d.scope: Deactivated successfully.
Nov 22 04:34:23 np0005532048 podman[361812]: 2025-11-22 09:34:23.81273135 +0000 UTC m=+0.102789290 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 22 04:34:23 np0005532048 podman[361858]: 2025-11-22 09:34:23.825408822 +0000 UTC m=+0.051657152 container remove 4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:34:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.833 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7c8469-2db1-4598-98fb-58760020f290]: (4, ('Sat Nov 22 09:34:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 (4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d)\n4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d\nSat Nov 22 09:34:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 (4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d)\n4789fe209d96c7855557248f03ff6046a562831855c1d2ba386115259fa0713d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.836 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[19d8af29-bcd2-4dad-bbd4-4e141d908020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.838 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3acaad61-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 200 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.4 MiB/s wr, 227 op/s
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.841 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:23 np0005532048 kernel: tap3acaad61-a0: left promiscuous mode
Nov 22 04:34:23 np0005532048 nova_compute[253661]: 2025-11-22 09:34:23.862 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.868 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc962e8-587f-481f-8165-d4752cd1aa13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.884 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[378778af-28f5-4ba6-a242-e7eea0d8099d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.885 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[83665db7-a093-4fea-b3c3-b7c842351bdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.904 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[415692e8-4698-4b3a-890e-738bf1d6fc44]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689058, 'reachable_time': 23410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361881, 'error': None, 'target': 'ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:23 np0005532048 systemd[1]: run-netns-ovnmeta\x2d3acaad61\x2da3f6\x2d4bd6\x2d83f4\x2d0ab1438bb136.mount: Deactivated successfully.
Nov 22 04:34:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.907 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3acaad61-a3f6-4bd6-83f4-0ab1438bb136 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:34:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:23.908 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8afc3213-4f1c-40e5-a0df-a0b24fb3186c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:24 np0005532048 nova_compute[253661]: 2025-11-22 09:34:24.106 253665 INFO nova.virt.libvirt.driver [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Deleting instance files /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808_del#033[00m
Nov 22 04:34:24 np0005532048 nova_compute[253661]: 2025-11-22 09:34:24.107 253665 INFO nova.virt.libvirt.driver [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Deletion of /var/lib/nova/instances/eb81b22a-c733-4b44-8546-e4bd1c24d808_del complete#033[00m
Nov 22 04:34:24 np0005532048 nova_compute[253661]: 2025-11-22 09:34:24.199 253665 INFO nova.compute.manager [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:34:24 np0005532048 nova_compute[253661]: 2025-11-22 09:34:24.200 253665 DEBUG oslo.service.loopingcall [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:34:24 np0005532048 nova_compute[253661]: 2025-11-22 09:34:24.200 253665 DEBUG nova.compute.manager [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:34:24 np0005532048 nova_compute[253661]: 2025-11-22 09:34:24.200 253665 DEBUG nova.network.neutron [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:34:24 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:34:24 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:34:25 np0005532048 nova_compute[253661]: 2025-11-22 09:34:25.003 253665 DEBUG nova.network.neutron [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:34:25 np0005532048 nova_compute[253661]: 2025-11-22 09:34:25.021 253665 INFO nova.compute.manager [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Took 0.82 seconds to deallocate network for instance.#033[00m
Nov 22 04:34:25 np0005532048 nova_compute[253661]: 2025-11-22 09:34:25.072 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:25 np0005532048 nova_compute[253661]: 2025-11-22 09:34:25.073 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:25 np0005532048 nova_compute[253661]: 2025-11-22 09:34:25.092 253665 DEBUG nova.compute.manager [req-19b6783e-a520-4744-a574-bfcff83580ab req-4d327967-fded-4d60-a8b8-144e61d9a882 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Received event network-vif-deleted-9cb5df7f-b707-42d9-b17d-75811fd05cbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:34:25 np0005532048 nova_compute[253661]: 2025-11-22 09:34:25.134 253665 DEBUG oslo_concurrency.processutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:34:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/26099833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:34:25 np0005532048 nova_compute[253661]: 2025-11-22 09:34:25.612 253665 DEBUG oslo_concurrency.processutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:25 np0005532048 nova_compute[253661]: 2025-11-22 09:34:25.621 253665 DEBUG nova.compute.provider_tree [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:34:25 np0005532048 nova_compute[253661]: 2025-11-22 09:34:25.639 253665 DEBUG nova.scheduler.client.report [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:34:25 np0005532048 nova_compute[253661]: 2025-11-22 09:34:25.668 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 200 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 597 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 22 04:34:25 np0005532048 nova_compute[253661]: 2025-11-22 09:34:25.854 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:25 np0005532048 nova_compute[253661]: 2025-11-22 09:34:25.860 253665 INFO nova.scheduler.client.report [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Deleted allocations for instance eb81b22a-c733-4b44-8546-e4bd1c24d808#033[00m
Nov 22 04:34:25 np0005532048 nova_compute[253661]: 2025-11-22 09:34:25.922 253665 DEBUG oslo_concurrency.lockutils [None req-d3a472f0-da1d-4e5d-9911-1462846c06e0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "eb81b22a-c733-4b44-8546-e4bd1c24d808" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:25 np0005532048 nova_compute[253661]: 2025-11-22 09:34:25.970 253665 DEBUG nova.network.neutron [req-4aee1080-220d-4365-9dea-fc3b6147f51b req-7450ef05-91d7-48c2-8ea3-cd1518de2d79 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Updated VIF entry in instance network info cache for port 9cb5df7f-b707-42d9-b17d-75811fd05cbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:34:25 np0005532048 nova_compute[253661]: 2025-11-22 09:34:25.971 253665 DEBUG nova.network.neutron [req-4aee1080-220d-4365-9dea-fc3b6147f51b req-7450ef05-91d7-48c2-8ea3-cd1518de2d79 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Updating instance_info_cache with network_info: [{"id": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "address": "fa:16:3e:ee:62:78", "network": {"id": "3acaad61-a3f6-4bd6-83f4-0ab1438bb136", "bridge": "br-int", "label": "tempest-network-smoke--735282053", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9cb5df7f-b7", "ovs_interfaceid": "9cb5df7f-b707-42d9-b17d-75811fd05cbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:34:26 np0005532048 nova_compute[253661]: 2025-11-22 09:34:26.000 253665 DEBUG oslo_concurrency.lockutils [req-4aee1080-220d-4365-9dea-fc3b6147f51b req-7450ef05-91d7-48c2-8ea3-cd1518de2d79 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-eb81b22a-c733-4b44-8546-e4bd1c24d808" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:34:26 np0005532048 nova_compute[253661]: 2025-11-22 09:34:26.175 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:34:26 np0005532048 nova_compute[253661]: 2025-11-22 09:34:26.175 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:34:26 np0005532048 nova_compute[253661]: 2025-11-22 09:34:26.200 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:34:26 np0005532048 nova_compute[253661]: 2025-11-22 09:34:26.280 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:34:26 np0005532048 nova_compute[253661]: 2025-11-22 09:34:26.281 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:34:26 np0005532048 nova_compute[253661]: 2025-11-22 09:34:26.288 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:34:26 np0005532048 nova_compute[253661]: 2025-11-22 09:34:26.289 253665 INFO nova.compute.claims [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:34:26 np0005532048 nova_compute[253661]: 2025-11-22 09:34:26.414 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:34:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:34:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3716024109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:34:26 np0005532048 nova_compute[253661]: 2025-11-22 09:34:26.886 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:34:26 np0005532048 nova_compute[253661]: 2025-11-22 09:34:26.894 253665 DEBUG nova.compute.provider_tree [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:34:26 np0005532048 nova_compute[253661]: 2025-11-22 09:34:26.908 253665 DEBUG nova.scheduler.client.report [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:34:26 np0005532048 nova_compute[253661]: 2025-11-22 09:34:26.929 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:34:26 np0005532048 nova_compute[253661]: 2025-11-22 09:34:26.930 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.007 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.007 253665 DEBUG nova.network.neutron [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.033 253665 INFO nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.049 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.141 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.142 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.143 253665 INFO nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Creating image(s)
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.165 253665 DEBUG nova.storage.rbd_utils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.192 253665 DEBUG nova.storage.rbd_utils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.217 253665 DEBUG nova.storage.rbd_utils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.221 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.287 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.288 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.289 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.289 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.314 253665 DEBUG nova.storage.rbd_utils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.317 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.611 253665 DEBUG nova.policy [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.836 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:34:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 155 MiB data, 811 MiB used, 59 GiB / 60 GiB avail; 618 KiB/s rd, 3.4 MiB/s wr, 156 op/s
Nov 22 04:34:27 np0005532048 nova_compute[253661]: 2025-11-22 09:34:27.918 253665 DEBUG nova.storage.rbd_utils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:34:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:27.977 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:34:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:27.977 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:34:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:27.978 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:34:28 np0005532048 nova_compute[253661]: 2025-11-22 09:34:28.040 253665 DEBUG nova.objects.instance [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid da98da35-5fb2-47cd-9d6b-a3bb2254bec9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:34:28 np0005532048 nova_compute[253661]: 2025-11-22 09:34:28.054 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:34:28 np0005532048 nova_compute[253661]: 2025-11-22 09:34:28.054 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Ensure instance console log exists: /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:34:28 np0005532048 nova_compute[253661]: 2025-11-22 09:34:28.055 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:34:28 np0005532048 nova_compute[253661]: 2025-11-22 09:34:28.055 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:34:28 np0005532048 nova_compute[253661]: 2025-11-22 09:34:28.055 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:34:28 np0005532048 nova_compute[253661]: 2025-11-22 09:34:28.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:34:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:34:29 np0005532048 nova_compute[253661]: 2025-11-22 09:34:29.372 253665 DEBUG nova.network.neutron [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Successfully created port: 46a77e89-60ff-4609-9a5a-6e542d8343e1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:34:29 np0005532048 nova_compute[253661]: 2025-11-22 09:34:29.499 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804054.4982965, 0922fe2c-d67c-47da-a1ac-5b217442c632 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:34:29 np0005532048 nova_compute[253661]: 2025-11-22 09:34:29.500 253665 INFO nova.compute.manager [-] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] VM Stopped (Lifecycle Event)
Nov 22 04:34:29 np0005532048 nova_compute[253661]: 2025-11-22 09:34:29.534 253665 DEBUG nova.compute.manager [None req-36c34fc6-6377-43c1-908d-8333d9921925 - - - - - -] [instance: 0922fe2c-d67c-47da-a1ac-5b217442c632] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:34:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 151 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 125 KiB/s rd, 1.7 MiB/s wr, 95 op/s
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.142 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.143 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.159 253665 DEBUG nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.244 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.244 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.252 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.252 253665 INFO nova.compute.claims [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.419 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:34:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:30Z|01054|binding|INFO|Releasing lport a1771b67-4cb9-46af-b99c-bccbb7cc939f from this chassis (sb_readonly=0)
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.655 253665 DEBUG nova.network.neutron [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Successfully updated port: 46a77e89-60ff-4609-9a5a-6e542d8343e1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.673 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-da98da35-5fb2-47cd-9d6b-a3bb2254bec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.674 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-da98da35-5fb2-47cd-9d6b-a3bb2254bec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.674 253665 DEBUG nova.network.neutron [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.722 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.857 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:34:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:34:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/915126923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.944 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.951 253665 DEBUG nova.compute.provider_tree [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.959 253665 DEBUG nova.compute.manager [req-798da1f5-2f53-4cbf-9d09-50677a9b3cb0 req-9acb0702-be14-4a11-a5c9-93d8631a5125 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received event network-changed-46a77e89-60ff-4609-9a5a-6e542d8343e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.960 253665 DEBUG nova.compute.manager [req-798da1f5-2f53-4cbf-9d09-50677a9b3cb0 req-9acb0702-be14-4a11-a5c9-93d8631a5125 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Refreshing instance network info cache due to event network-changed-46a77e89-60ff-4609-9a5a-6e542d8343e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.960 253665 DEBUG oslo_concurrency.lockutils [req-798da1f5-2f53-4cbf-9d09-50677a9b3cb0 req-9acb0702-be14-4a11-a5c9-93d8631a5125 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-da98da35-5fb2-47cd-9d6b-a3bb2254bec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.963 253665 DEBUG nova.scheduler.client.report [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.983 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:34:30 np0005532048 nova_compute[253661]: 2025-11-22 09:34:30.984 253665 DEBUG nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.024 253665 DEBUG nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.044 253665 INFO nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.062 253665 DEBUG nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.088 253665 DEBUG nova.network.neutron [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.130 253665 DEBUG nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.132 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.132 253665 INFO nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Creating image(s)
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.161 253665 DEBUG nova.storage.rbd_utils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.190 253665 DEBUG nova.storage.rbd_utils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.211 253665 DEBUG nova.storage.rbd_utils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.214 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.283 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.284 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.284 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.284 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.308 253665 DEBUG nova.storage.rbd_utils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.313 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.664 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.726 253665 DEBUG nova.storage.rbd_utils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] resizing rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.831 253665 DEBUG nova.objects.instance [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'migration_context' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:34:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 163 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.850 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.851 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Ensure instance console log exists: /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.852 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.852 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.852 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.854 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.859 253665 WARNING nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.865 253665 DEBUG nova.virt.libvirt.host [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.866 253665 DEBUG nova.virt.libvirt.host [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.869 253665 DEBUG nova.virt.libvirt.host [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.869 253665 DEBUG nova.virt.libvirt.host [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.870 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.870 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.871 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.871 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.871 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.871 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.871 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.872 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.872 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.872 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.872 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.873 253665 DEBUG nova.virt.hardware [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:34:31 np0005532048 nova_compute[253661]: 2025-11-22 09:34:31.876 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.226 253665 DEBUG nova.network.neutron [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Updating instance_info_cache with network_info: [{"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.256 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-da98da35-5fb2-47cd-9d6b-a3bb2254bec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.256 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Instance network_info: |[{"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.257 253665 DEBUG oslo_concurrency.lockutils [req-798da1f5-2f53-4cbf-9d09-50677a9b3cb0 req-9acb0702-be14-4a11-a5c9-93d8631a5125 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-da98da35-5fb2-47cd-9d6b-a3bb2254bec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.257 253665 DEBUG nova.network.neutron [req-798da1f5-2f53-4cbf-9d09-50677a9b3cb0 req-9acb0702-be14-4a11-a5c9-93d8631a5125 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Refreshing network info cache for port 46a77e89-60ff-4609-9a5a-6e542d8343e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.259 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Start _get_guest_xml network_info=[{"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.264 253665 WARNING nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.272 253665 DEBUG nova.virt.libvirt.host [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.273 253665 DEBUG nova.virt.libvirt.host [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.278 253665 DEBUG nova.virt.libvirt.host [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.278 253665 DEBUG nova.virt.libvirt.host [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.279 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.279 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.279 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.280 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.280 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.280 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.280 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.281 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.281 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.281 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.281 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.282 253665 DEBUG nova.virt.hardware [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.285 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:34:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162470709' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.377 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.399 253665 DEBUG nova.storage.rbd_utils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.404 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:34:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2954683896' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.748 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.771 253665 DEBUG nova.storage.rbd_utils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.776 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:34:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2459410174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.885 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.887 253665 DEBUG nova.objects.instance [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'pci_devices' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.916 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  <uuid>361d3f1d-84a4-4159-a69a-8a0254446ab6</uuid>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  <name>instance-0000006a</name>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerShowV257Test-server-1169039346</nova:name>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:34:31</nova:creationTime>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:34:32 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:        <nova:user uuid="c9e3213f01af435aab231356352dba1b">tempest-ServerShowV257Test-555892026-project-member</nova:user>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:        <nova:project uuid="d7e64b9e1f5f4ed7a0a6326357a91223">tempest-ServerShowV257Test-555892026</nova:project>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <entry name="serial">361d3f1d-84a4-4159-a69a-8a0254446ab6</entry>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <entry name="uuid">361d3f1d-84a4-4159-a69a-8a0254446ab6</entry>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/361d3f1d-84a4-4159-a69a-8a0254446ab6_disk">
Nov 22 04:34:32 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:34:32 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config">
Nov 22 04:34:32 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:34:32 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/console.log" append="off"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:34:32 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:34:32 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:34:32 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:34:32 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.965 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.965 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.966 253665 INFO nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Using config drive#033[00m
Nov 22 04:34:32 np0005532048 nova_compute[253661]: 2025-11-22 09:34:32.987 253665 DEBUG nova.storage.rbd_utils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:34:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3231786100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.221 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.223 253665 DEBUG nova.virt.libvirt.vif [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:34:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-295501579',display_name='tempest-TestNetworkBasicOps-server-295501579',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-295501579',id=105,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNCNSJMfyeSfEucYEe4QlFd/0YaeFQZS6dilhF9jqVM15NRor8ABSVHqTZCPRl6JVm69HZTDz0B8aTd74/zbrdmaxxEXgRl0/0G8KTm0chRbWM6114wV+6thTAZigMHMqw==',key_name='tempest-TestNetworkBasicOps-1414379365',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-eerdrny6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:34:27Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=da98da35-5fb2-47cd-9d6b-a3bb2254bec9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.224 253665 DEBUG nova.network.os_vif_util [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.225 253665 DEBUG nova.network.os_vif_util [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:2e:22,bridge_name='br-int',has_traffic_filtering=True,id=46a77e89-60ff-4609-9a5a-6e542d8343e1,network=Network(15f88390-1071-41f9-b1a4-108f4f3845d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46a77e89-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.227 253665 DEBUG nova.objects.instance [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid da98da35-5fb2-47cd-9d6b-a3bb2254bec9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.244 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  <uuid>da98da35-5fb2-47cd-9d6b-a3bb2254bec9</uuid>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  <name>instance-00000069</name>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestNetworkBasicOps-server-295501579</nova:name>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:34:32</nova:creationTime>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:34:33 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:        <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:        <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:        <nova:port uuid="46a77e89-60ff-4609-9a5a-6e542d8343e1">
Nov 22 04:34:33 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.22" ipVersion="4"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <entry name="serial">da98da35-5fb2-47cd-9d6b-a3bb2254bec9</entry>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <entry name="uuid">da98da35-5fb2-47cd-9d6b-a3bb2254bec9</entry>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk">
Nov 22 04:34:33 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:34:33 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk.config">
Nov 22 04:34:33 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:34:33 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:75:2e:22"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <target dev="tap46a77e89-60"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9/console.log" append="off"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:34:33 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:34:33 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:34:33 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:34:33 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.246 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Preparing to wait for external event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.247 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.247 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.247 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.248 253665 DEBUG nova.virt.libvirt.vif [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:34:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-295501579',display_name='tempest-TestNetworkBasicOps-server-295501579',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-295501579',id=105,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNCNSJMfyeSfEucYEe4QlFd/0YaeFQZS6dilhF9jqVM15NRor8ABSVHqTZCPRl6JVm69HZTDz0B8aTd74/zbrdmaxxEXgRl0/0G8KTm0chRbWM6114wV+6thTAZigMHMqw==',key_name='tempest-TestNetworkBasicOps-1414379365',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-eerdrny6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:34:27Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=da98da35-5fb2-47cd-9d6b-a3bb2254bec9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.248 253665 DEBUG nova.network.os_vif_util [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.249 253665 DEBUG nova.network.os_vif_util [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:2e:22,bridge_name='br-int',has_traffic_filtering=True,id=46a77e89-60ff-4609-9a5a-6e542d8343e1,network=Network(15f88390-1071-41f9-b1a4-108f4f3845d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46a77e89-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.249 253665 DEBUG os_vif [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:2e:22,bridge_name='br-int',has_traffic_filtering=True,id=46a77e89-60ff-4609-9a5a-6e542d8343e1,network=Network(15f88390-1071-41f9-b1a4-108f4f3845d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46a77e89-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.250 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.251 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.253 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.254 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap46a77e89-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.254 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap46a77e89-60, col_values=(('external_ids', {'iface-id': '46a77e89-60ff-4609-9a5a-6e542d8343e1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:2e:22', 'vm-uuid': 'da98da35-5fb2-47cd-9d6b-a3bb2254bec9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:33 np0005532048 NetworkManager[48920]: <info>  [1763804073.3090] manager: (tap46a77e89-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.308 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.312 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.317 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.320 253665 INFO os_vif [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:2e:22,bridge_name='br-int',has_traffic_filtering=True,id=46a77e89-60ff-4609-9a5a-6e542d8343e1,network=Network(15f88390-1071-41f9-b1a4-108f4f3845d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46a77e89-60')#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.371 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.372 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.372 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:75:2e:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.373 253665 INFO nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Using config drive#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.399 253665 DEBUG nova.storage.rbd_utils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.479 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804058.4782982, 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.479 253665 INFO nova.compute.manager [-] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.507 253665 DEBUG nova.compute.manager [None req-6c3d8916-57ed-4b74-8fab-7fce2c32f581 - - - - - -] [instance: 9fc4c2e5-2185-4caf-b4e6-eb817a69e0ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:34:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 186 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 2.4 MiB/s wr, 94 op/s
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.991 253665 INFO nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Creating config drive at /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config#033[00m
Nov 22 04:34:33 np0005532048 nova_compute[253661]: 2025-11-22 09:34:33.996 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4rszdhjt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.156 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4rszdhjt" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.182 253665 DEBUG nova.storage.rbd_utils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.187 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.255 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.255 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.256 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.274 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.275 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.288 253665 INFO nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Creating config drive at /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9/disk.config#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.294 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn67jqeu_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.448 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn67jqeu_" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.478 253665 DEBUG nova.storage.rbd_utils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.482 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9/disk.config da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.525 253665 DEBUG oslo_concurrency.processutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.526 253665 INFO nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Deleting local config drive /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config because it was imported into RBD.#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.541 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.541 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.542 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.542 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c5540f5a-8dfa-4b11-8452-c6fe99db1d64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.575 253665 DEBUG nova.network.neutron [req-798da1f5-2f53-4cbf-9d09-50677a9b3cb0 req-9acb0702-be14-4a11-a5c9-93d8631a5125 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Updated VIF entry in instance network info cache for port 46a77e89-60ff-4609-9a5a-6e542d8343e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.576 253665 DEBUG nova.network.neutron [req-798da1f5-2f53-4cbf-9d09-50677a9b3cb0 req-9acb0702-be14-4a11-a5c9-93d8631a5125 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Updating instance_info_cache with network_info: [{"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.591 253665 DEBUG oslo_concurrency.lockutils [req-798da1f5-2f53-4cbf-9d09-50677a9b3cb0 req-9acb0702-be14-4a11-a5c9-93d8631a5125 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-da98da35-5fb2-47cd-9d6b-a3bb2254bec9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:34:34 np0005532048 systemd-machined[215941]: New machine qemu-131-instance-0000006a.
Nov 22 04:34:34 np0005532048 systemd[1]: Started Virtual Machine qemu-131-instance-0000006a.
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.785 253665 DEBUG oslo_concurrency.processutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9/disk.config da98da35-5fb2-47cd-9d6b-a3bb2254bec9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.786 253665 INFO nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Deleting local config drive /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9/disk.config because it was imported into RBD.#033[00m
Nov 22 04:34:34 np0005532048 kernel: tap46a77e89-60: entered promiscuous mode
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.829 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:34 np0005532048 NetworkManager[48920]: <info>  [1763804074.8304] manager: (tap46a77e89-60): new Tun device (/org/freedesktop/NetworkManager/Devices/436)
Nov 22 04:34:34 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:34Z|01055|binding|INFO|Claiming lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 for this chassis.
Nov 22 04:34:34 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:34Z|01056|binding|INFO|46a77e89-60ff-4609-9a5a-6e542d8343e1: Claiming fa:16:3e:75:2e:22 10.100.0.22
Nov 22 04:34:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.844 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:2e:22 10.100.0.22'], port_security=['fa:16:3e:75:2e:22 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'da98da35-5fb2-47cd-9d6b-a3bb2254bec9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15f88390-1071-41f9-b1a4-108f4f3845d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e50820c9-c083-42b3-a5c1-62f5befbff0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fcb3a475-2422-4e03-9155-1b7e58a05aab, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=46a77e89-60ff-4609-9a5a-6e542d8343e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:34:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.845 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 46a77e89-60ff-4609-9a5a-6e542d8343e1 in datapath 15f88390-1071-41f9-b1a4-108f4f3845d0 bound to our chassis#033[00m
Nov 22 04:34:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.847 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 15f88390-1071-41f9-b1a4-108f4f3845d0#033[00m
Nov 22 04:34:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.860 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0dcc9f-3972-41fa-a1ef-61254f707610]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.861 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap15f88390-11 in ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:34:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.864 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap15f88390-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:34:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.864 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[484d4a8e-5674-4c12-a8ef-ccd41a23b6d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.864 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eafacb55-0a1c-4890-94a7-c7d35f55dfc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:34 np0005532048 systemd-machined[215941]: New machine qemu-132-instance-00000069.
Nov 22 04:34:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.876 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[4861efa8-e800-4fa2-8ae5-56ff0161eaa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:34 np0005532048 systemd[1]: Started Virtual Machine qemu-132-instance-00000069.
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.879 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:34 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:34Z|01057|binding|INFO|Setting lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 ovn-installed in OVS
Nov 22 04:34:34 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:34Z|01058|binding|INFO|Setting lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 up in Southbound
Nov 22 04:34:34 np0005532048 nova_compute[253661]: 2025-11-22 09:34:34.887 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.895 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[86b8f433-6985-4fe2-92a8-ef8b5922ed6f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:34 np0005532048 systemd-udevd[362556]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:34:34 np0005532048 NetworkManager[48920]: <info>  [1763804074.9232] device (tap46a77e89-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:34:34 np0005532048 NetworkManager[48920]: <info>  [1763804074.9240] device (tap46a77e89-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:34:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.927 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6b0d5dc4-ec71-4682-958e-26609f22c76c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:34 np0005532048 systemd-udevd[362564]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:34:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.937 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[42303a38-f873-4f5f-bb83-a0952f62dbe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:34 np0005532048 NetworkManager[48920]: <info>  [1763804074.9394] manager: (tap15f88390-10): new Veth device (/org/freedesktop/NetworkManager/Devices/437)
Nov 22 04:34:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.977 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8045a02f-885a-49b4-87a6-9feddfbf7dcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:34.981 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7cebe14f-6226-4f8d-ad5e-b416baaa0f73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:35 np0005532048 NetworkManager[48920]: <info>  [1763804075.0057] device (tap15f88390-10): carrier: link connected
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.013 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[159b1de2-c0a8-4031-8b8b-f60761147be3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.032 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c142bdf4-55af-48e1-9b48-85669fc86500]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15f88390-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:5d:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 308], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692574, 'reachable_time': 23853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362585, 'error': None, 'target': 'ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.051 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5650bfbe-3eff-483a-ab86-7a3b0b9e6265]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee7:5df2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 692574, 'tstamp': 692574}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362586, 'error': None, 'target': 'ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.069 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[29740764-b8b0-4893-8d2a-296af6ce7325]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap15f88390-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:5d:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 308], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692574, 'reachable_time': 23853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362587, 'error': None, 'target': 'ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.105 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[30c2ed43-13a3-4d86-8a7c-767147be75af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.160 253665 DEBUG nova.compute.manager [req-207e6d8a-c82b-49f4-8147-37ffe2e1926d req-48515084-6964-43d2-b832-75f7abee354b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.161 253665 DEBUG oslo_concurrency.lockutils [req-207e6d8a-c82b-49f4-8147-37ffe2e1926d req-48515084-6964-43d2-b832-75f7abee354b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.161 253665 DEBUG oslo_concurrency.lockutils [req-207e6d8a-c82b-49f4-8147-37ffe2e1926d req-48515084-6964-43d2-b832-75f7abee354b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.161 253665 DEBUG oslo_concurrency.lockutils [req-207e6d8a-c82b-49f4-8147-37ffe2e1926d req-48515084-6964-43d2-b832-75f7abee354b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.161 253665 DEBUG nova.compute.manager [req-207e6d8a-c82b-49f4-8147-37ffe2e1926d req-48515084-6964-43d2-b832-75f7abee354b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Processing event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.186 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[10909999-ab70-468b-9c37-6bab030928da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.187 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15f88390-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.188 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.189 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15f88390-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:35 np0005532048 NetworkManager[48920]: <info>  [1763804075.1912] manager: (tap15f88390-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/438)
Nov 22 04:34:35 np0005532048 kernel: tap15f88390-10: entered promiscuous mode
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.194 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap15f88390-10, col_values=(('external_ids', {'iface-id': '06dfbc87-2377-412e-8b1f-e2e6f4be9f29'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:35Z|01059|binding|INFO|Releasing lport 06dfbc87-2377-412e-8b1f-e2e6f4be9f29 from this chassis (sb_readonly=0)
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.214 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/15f88390-1071-41f9-b1a4-108f4f3845d0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/15f88390-1071-41f9-b1a4-108f4f3845d0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.215 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6d358950-5369-4321-b223-9e8abd2b20eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.216 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.216 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-15f88390-1071-41f9-b1a4-108f4f3845d0
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/15f88390-1071-41f9-b1a4-108f4f3845d0.pid.haproxy
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 15f88390-1071-41f9-b1a4-108f4f3845d0
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:34:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:35.217 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0', 'env', 'PROCESS_TAG=haproxy-15f88390-1071-41f9-b1a4-108f4f3845d0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/15f88390-1071-41f9-b1a4-108f4f3845d0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.282 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804075.2818408, da98da35-5fb2-47cd-9d6b-a3bb2254bec9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.283 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] VM Started (Lifecycle Event)#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.288 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.296 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.303 253665 INFO nova.virt.libvirt.driver [-] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Instance spawned successfully.#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.304 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.309 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.318 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.333 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.333 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.334 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.334 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.335 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.335 253665 DEBUG nova.virt.libvirt.driver [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.340 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.340 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804075.2859046, da98da35-5fb2-47cd-9d6b-a3bb2254bec9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.341 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.382 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.387 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804075.2908475, da98da35-5fb2-47cd-9d6b-a3bb2254bec9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.387 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.406 253665 INFO nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Took 8.27 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.407 253665 DEBUG nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.415 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.417 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.456 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.456 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804075.4547694, 361d3f1d-84a4-4159-a69a-8a0254446ab6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.456 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.458 253665 DEBUG nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.458 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.461 253665 INFO nova.virt.libvirt.driver [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance spawned successfully.#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.461 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.477 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.480 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.490 253665 INFO nova.compute.manager [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Took 9.24 seconds to build instance.#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.494 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.494 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.494 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.495 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.495 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.495 253665 DEBUG nova.virt.libvirt.driver [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.499 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.499 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804075.455411, 361d3f1d-84a4-4159-a69a-8a0254446ab6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.499 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] VM Started (Lifecycle Event)#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.519 253665 DEBUG oslo_concurrency.lockutils [None req-d4569236-a1fe-4a33-b41a-b97cc1dc1035 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.526 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.532 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.557 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.566 253665 INFO nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Took 4.44 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.566 253665 DEBUG nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.625 253665 INFO nova.compute.manager [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Took 5.41 seconds to build instance.#033[00m
Nov 22 04:34:35 np0005532048 podman[362699]: 2025-11-22 09:34:35.632679383 +0000 UTC m=+0.060880199 container create 3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.641 253665 DEBUG oslo_concurrency.lockutils [None req-fa4ec08f-c497-48d1-98ba-38aa47bed64f c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:35 np0005532048 systemd[1]: Started libpod-conmon-3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf.scope.
Nov 22 04:34:35 np0005532048 podman[362699]: 2025-11-22 09:34:35.600612003 +0000 UTC m=+0.028812849 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:34:35 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:34:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dbd92e328427ae01a91a6e788883f396c5e4580a3c0d724680d342b6d537fb2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:34:35 np0005532048 podman[362699]: 2025-11-22 09:34:35.734083629 +0000 UTC m=+0.162284475 container init 3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 22 04:34:35 np0005532048 podman[362699]: 2025-11-22 09:34:35.742386573 +0000 UTC m=+0.170587389 container start 3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 04:34:35 np0005532048 neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0[362715]: [NOTICE]   (362719) : New worker (362721) forked
Nov 22 04:34:35 np0005532048 neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0[362715]: [NOTICE]   (362719) : Loading success.
Nov 22 04:34:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 186 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.4 MiB/s wr, 71 op/s
Nov 22 04:34:35 np0005532048 nova_compute[253661]: 2025-11-22 09:34:35.861 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:36 np0005532048 nova_compute[253661]: 2025-11-22 09:34:36.391 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updating instance_info_cache with network_info: [{"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:34:36 np0005532048 nova_compute[253661]: 2025-11-22 09:34:36.404 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:34:36 np0005532048 nova_compute[253661]: 2025-11-22 09:34:36.404 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:34:36 np0005532048 nova_compute[253661]: 2025-11-22 09:34:36.404 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:34:36 np0005532048 nova_compute[253661]: 2025-11-22 09:34:36.405 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:34:36 np0005532048 nova_compute[253661]: 2025-11-22 09:34:36.418 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid c5540f5a-8dfa-4b11-8452-c6fe99db1d64 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 22 04:34:36 np0005532048 nova_compute[253661]: 2025-11-22 09:34:36.419 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid da98da35-5fb2-47cd-9d6b-a3bb2254bec9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 22 04:34:36 np0005532048 nova_compute[253661]: 2025-11-22 09:34:36.419 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 22 04:34:36 np0005532048 nova_compute[253661]: 2025-11-22 09:34:36.420 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:36 np0005532048 nova_compute[253661]: 2025-11-22 09:34:36.420 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:36 np0005532048 nova_compute[253661]: 2025-11-22 09:34:36.420 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:36 np0005532048 nova_compute[253661]: 2025-11-22 09:34:36.421 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:36 np0005532048 nova_compute[253661]: 2025-11-22 09:34:36.421 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:36 np0005532048 nova_compute[253661]: 2025-11-22 09:34:36.421 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:36 np0005532048 nova_compute[253661]: 2025-11-22 09:34:36.444 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:36 np0005532048 nova_compute[253661]: 2025-11-22 09:34:36.444 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:36 np0005532048 nova_compute[253661]: 2025-11-22 09:34:36.446 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:37 np0005532048 nova_compute[253661]: 2025-11-22 09:34:37.386 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:34:37 np0005532048 nova_compute[253661]: 2025-11-22 09:34:37.843 253665 DEBUG nova.compute.manager [req-af3e50ac-4862-4462-8968-217569185b84 req-9a976c41-1d14-4acb-9999-2b16dd563b8b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:34:37 np0005532048 nova_compute[253661]: 2025-11-22 09:34:37.844 253665 DEBUG oslo_concurrency.lockutils [req-af3e50ac-4862-4462-8968-217569185b84 req-9a976c41-1d14-4acb-9999-2b16dd563b8b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:37 np0005532048 nova_compute[253661]: 2025-11-22 09:34:37.844 253665 DEBUG oslo_concurrency.lockutils [req-af3e50ac-4862-4462-8968-217569185b84 req-9a976c41-1d14-4acb-9999-2b16dd563b8b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:37 np0005532048 nova_compute[253661]: 2025-11-22 09:34:37.844 253665 DEBUG oslo_concurrency.lockutils [req-af3e50ac-4862-4462-8968-217569185b84 req-9a976c41-1d14-4acb-9999-2b16dd563b8b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:37 np0005532048 nova_compute[253661]: 2025-11-22 09:34:37.844 253665 DEBUG nova.compute.manager [req-af3e50ac-4862-4462-8968-217569185b84 req-9a976c41-1d14-4acb-9999-2b16dd563b8b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] No waiting events found dispatching network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:34:37 np0005532048 nova_compute[253661]: 2025-11-22 09:34:37.845 253665 WARNING nova.compute.manager [req-af3e50ac-4862-4462-8968-217569185b84 req-9a976c41-1d14-4acb-9999-2b16dd563b8b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received unexpected event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:34:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 213 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.5 MiB/s wr, 173 op/s
Nov 22 04:34:38 np0005532048 nova_compute[253661]: 2025-11-22 09:34:38.316 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:38 np0005532048 nova_compute[253661]: 2025-11-22 09:34:38.325 253665 INFO nova.compute.manager [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Rebuilding instance#033[00m
Nov 22 04:34:38 np0005532048 nova_compute[253661]: 2025-11-22 09:34:38.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:38 np0005532048 nova_compute[253661]: 2025-11-22 09:34:38.628 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804063.6279562, eb81b22a-c733-4b44-8546-e4bd1c24d808 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:34:38 np0005532048 nova_compute[253661]: 2025-11-22 09:34:38.629 253665 INFO nova.compute.manager [-] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:34:38 np0005532048 nova_compute[253661]: 2025-11-22 09:34:38.648 253665 DEBUG nova.compute.manager [None req-37acd4bd-cfe2-4001-8fc4-0e9e410c89d3 - - - - - -] [instance: eb81b22a-c733-4b44-8546-e4bd1c24d808] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:34:38 np0005532048 nova_compute[253661]: 2025-11-22 09:34:38.937 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:34:38 np0005532048 nova_compute[253661]: 2025-11-22 09:34:38.955 253665 DEBUG nova.compute.manager [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:39 np0005532048 nova_compute[253661]: 2025-11-22 09:34:39.017 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'pci_requests' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:34:39 np0005532048 nova_compute[253661]: 2025-11-22 09:34:39.025 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'pci_devices' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:34:39 np0005532048 nova_compute[253661]: 2025-11-22 09:34:39.036 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'resources' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:34:39 np0005532048 nova_compute[253661]: 2025-11-22 09:34:39.043 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'migration_context' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:34:39 np0005532048 nova_compute[253661]: 2025-11-22 09:34:39.053 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 22 04:34:39 np0005532048 nova_compute[253661]: 2025-11-22 09:34:39.056 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:34:39 np0005532048 nova_compute[253661]: 2025-11-22 09:34:39.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:34:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.4 MiB/s wr, 172 op/s
Nov 22 04:34:40 np0005532048 nova_compute[253661]: 2025-11-22 09:34:40.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:34:40 np0005532048 nova_compute[253661]: 2025-11-22 09:34:40.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:34:40 np0005532048 nova_compute[253661]: 2025-11-22 09:34:40.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:40 np0005532048 nova_compute[253661]: 2025-11-22 09:34:40.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:40 np0005532048 nova_compute[253661]: 2025-11-22 09:34:40.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:40 np0005532048 nova_compute[253661]: 2025-11-22 09:34:40.249 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:34:40 np0005532048 nova_compute[253661]: 2025-11-22 09:34:40.250 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:34:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/911633361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:34:40 np0005532048 nova_compute[253661]: 2025-11-22 09:34:40.702 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:40 np0005532048 nova_compute[253661]: 2025-11-22 09:34:40.795 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:34:40 np0005532048 nova_compute[253661]: 2025-11-22 09:34:40.796 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:34:40 np0005532048 nova_compute[253661]: 2025-11-22 09:34:40.801 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:34:40 np0005532048 nova_compute[253661]: 2025-11-22 09:34:40.802 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:34:40 np0005532048 nova_compute[253661]: 2025-11-22 09:34:40.806 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:34:40 np0005532048 nova_compute[253661]: 2025-11-22 09:34:40.806 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:34:40 np0005532048 nova_compute[253661]: 2025-11-22 09:34:40.863 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:41 np0005532048 nova_compute[253661]: 2025-11-22 09:34:41.008 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:34:41 np0005532048 nova_compute[253661]: 2025-11-22 09:34:41.009 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3282MB free_disk=59.900909423828125GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:34:41 np0005532048 nova_compute[253661]: 2025-11-22 09:34:41.010 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:41 np0005532048 nova_compute[253661]: 2025-11-22 09:34:41.010 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:41 np0005532048 nova_compute[253661]: 2025-11-22 09:34:41.076 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance c5540f5a-8dfa-4b11-8452-c6fe99db1d64 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:34:41 np0005532048 nova_compute[253661]: 2025-11-22 09:34:41.077 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance da98da35-5fb2-47cd-9d6b-a3bb2254bec9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:34:41 np0005532048 nova_compute[253661]: 2025-11-22 09:34:41.077 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 361d3f1d-84a4-4159-a69a-8a0254446ab6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:34:41 np0005532048 nova_compute[253661]: 2025-11-22 09:34:41.077 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:34:41 np0005532048 nova_compute[253661]: 2025-11-22 09:34:41.078 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:34:41 np0005532048 nova_compute[253661]: 2025-11-22 09:34:41.180 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:34:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3897940132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:34:41 np0005532048 nova_compute[253661]: 2025-11-22 09:34:41.643 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:41 np0005532048 nova_compute[253661]: 2025-11-22 09:34:41.652 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:34:41 np0005532048 nova_compute[253661]: 2025-11-22 09:34:41.669 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:34:41 np0005532048 nova_compute[253661]: 2025-11-22 09:34:41.703 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:34:41 np0005532048 nova_compute[253661]: 2025-11-22 09:34:41.704 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 186 op/s
Nov 22 04:34:42 np0005532048 nova_compute[253661]: 2025-11-22 09:34:42.705 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:34:42 np0005532048 nova_compute[253661]: 2025-11-22 09:34:42.705 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:34:43 np0005532048 nova_compute[253661]: 2025-11-22 09:34:43.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:34:43 np0005532048 nova_compute[253661]: 2025-11-22 09:34:43.319 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:43 np0005532048 nova_compute[253661]: 2025-11-22 09:34:43.455 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:34:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 182 op/s
Nov 22 04:34:45 np0005532048 nova_compute[253661]: 2025-11-22 09:34:45.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:34:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 158 op/s
Nov 22 04:34:45 np0005532048 nova_compute[253661]: 2025-11-22 09:34:45.864 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:46 np0005532048 nova_compute[253661]: 2025-11-22 09:34:46.356 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:46 np0005532048 nova_compute[253661]: 2025-11-22 09:34:46.356 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:46 np0005532048 nova_compute[253661]: 2025-11-22 09:34:46.383 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:34:46 np0005532048 nova_compute[253661]: 2025-11-22 09:34:46.464 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:46 np0005532048 nova_compute[253661]: 2025-11-22 09:34:46.465 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:46 np0005532048 nova_compute[253661]: 2025-11-22 09:34:46.470 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:34:46 np0005532048 nova_compute[253661]: 2025-11-22 09:34:46.470 253665 INFO nova.compute.claims [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:34:46 np0005532048 nova_compute[253661]: 2025-11-22 09:34:46.608 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:34:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2390561136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.147 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.156 253665 DEBUG nova.compute.provider_tree [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.180 253665 DEBUG nova.scheduler.client.report [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.206 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.207 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.250 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.250 253665 DEBUG nova.network.neutron [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.265 253665 INFO nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.285 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:34:47 np0005532048 podman[362799]: 2025-11-22 09:34:47.378872844 +0000 UTC m=+0.069494789 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 04:34:47 np0005532048 podman[362800]: 2025-11-22 09:34:47.387286733 +0000 UTC m=+0.075508049 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.411 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.412 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.413 253665 INFO nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Creating image(s)#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.435 253665 DEBUG nova.storage.rbd_utils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 3f8530ae-f429-4807-81ca-84d8f964a38c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.459 253665 DEBUG nova.storage.rbd_utils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 3f8530ae-f429-4807-81ca-84d8f964a38c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.488 253665 DEBUG nova.storage.rbd_utils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 3f8530ae-f429-4807-81ca-84d8f964a38c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.495 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.547 253665 DEBUG nova.policy [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ac89f965408f4a26b39ee2ae4725ff14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0112f56c468c4f90971b92126078e951', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.591 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.592 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.593 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.593 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.627 253665 DEBUG nova.storage.rbd_utils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 3f8530ae-f429-4807-81ca-84d8f964a38c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:47 np0005532048 nova_compute[253661]: 2025-11-22 09:34:47.632 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3f8530ae-f429-4807-81ca-84d8f964a38c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 222 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.9 MiB/s wr, 164 op/s
Nov 22 04:34:48 np0005532048 nova_compute[253661]: 2025-11-22 09:34:48.321 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:48 np0005532048 nova_compute[253661]: 2025-11-22 09:34:48.505 253665 DEBUG nova.network.neutron [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Successfully created port: 8da41f38-3812-4494-9cab-c4854772a569 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:34:48 np0005532048 nova_compute[253661]: 2025-11-22 09:34:48.532 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3f8530ae-f429-4807-81ca-84d8f964a38c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.900s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:48 np0005532048 nova_compute[253661]: 2025-11-22 09:34:48.617 253665 DEBUG nova.storage.rbd_utils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] resizing rbd image 3f8530ae-f429-4807-81ca-84d8f964a38c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:34:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:34:48 np0005532048 nova_compute[253661]: 2025-11-22 09:34:48.750 253665 DEBUG nova.objects.instance [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'migration_context' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:34:48 np0005532048 nova_compute[253661]: 2025-11-22 09:34:48.761 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:34:48 np0005532048 nova_compute[253661]: 2025-11-22 09:34:48.762 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Ensure instance console log exists: /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:34:48 np0005532048 nova_compute[253661]: 2025-11-22 09:34:48.762 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:48 np0005532048 nova_compute[253661]: 2025-11-22 09:34:48.762 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:48 np0005532048 nova_compute[253661]: 2025-11-22 09:34:48.763 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:49 np0005532048 nova_compute[253661]: 2025-11-22 09:34:49.105 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 22 04:34:49 np0005532048 nova_compute[253661]: 2025-11-22 09:34:49.443 253665 DEBUG nova.network.neutron [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Successfully updated port: 8da41f38-3812-4494-9cab-c4854772a569 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:34:49 np0005532048 nova_compute[253661]: 2025-11-22 09:34:49.469 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:34:49 np0005532048 nova_compute[253661]: 2025-11-22 09:34:49.469 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquired lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:34:49 np0005532048 nova_compute[253661]: 2025-11-22 09:34:49.470 253665 DEBUG nova.network.neutron [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:34:49 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:49Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:75:2e:22 10.100.0.22
Nov 22 04:34:49 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:49Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:75:2e:22 10.100.0.22
Nov 22 04:34:49 np0005532048 nova_compute[253661]: 2025-11-22 09:34:49.534 253665 DEBUG nova.compute.manager [req-1aaabda3-f625-40c0-bc5f-70ef80d3b1a9 req-12e246c8-ec8d-4cf1-bab7-27a278799c1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-changed-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:34:49 np0005532048 nova_compute[253661]: 2025-11-22 09:34:49.534 253665 DEBUG nova.compute.manager [req-1aaabda3-f625-40c0-bc5f-70ef80d3b1a9 req-12e246c8-ec8d-4cf1-bab7-27a278799c1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Refreshing instance network info cache due to event network-changed-8da41f38-3812-4494-9cab-c4854772a569. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:34:49 np0005532048 nova_compute[253661]: 2025-11-22 09:34:49.534 253665 DEBUG oslo_concurrency.lockutils [req-1aaabda3-f625-40c0-bc5f-70ef80d3b1a9 req-12e246c8-ec8d-4cf1-bab7-27a278799c1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:34:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 234 MiB data, 866 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 82 op/s
Nov 22 04:34:50 np0005532048 nova_compute[253661]: 2025-11-22 09:34:50.017 253665 DEBUG nova.network.neutron [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:34:50 np0005532048 nova_compute[253661]: 2025-11-22 09:34:50.458 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:50 np0005532048 nova_compute[253661]: 2025-11-22 09:34:50.866 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:50.999 253665 DEBUG nova.network.neutron [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updating instance_info_cache with network_info: [{"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.024 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Releasing lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.025 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance network_info: |[{"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.027 253665 DEBUG oslo_concurrency.lockutils [req-1aaabda3-f625-40c0-bc5f-70ef80d3b1a9 req-12e246c8-ec8d-4cf1-bab7-27a278799c1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.027 253665 DEBUG nova.network.neutron [req-1aaabda3-f625-40c0-bc5f-70ef80d3b1a9 req-12e246c8-ec8d-4cf1-bab7-27a278799c1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Refreshing network info cache for port 8da41f38-3812-4494-9cab-c4854772a569 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.035 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Start _get_guest_xml network_info=[{"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.043 253665 WARNING nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.052 253665 DEBUG nova.virt.libvirt.host [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.053 253665 DEBUG nova.virt.libvirt.host [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.063 253665 DEBUG nova.virt.libvirt.host [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.064 253665 DEBUG nova.virt.libvirt.host [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.065 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.065 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.066 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.066 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.067 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.067 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.068 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.068 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.069 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.069 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.070 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.070 253665 DEBUG nova.virt.hardware [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.075 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:34:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784280840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.555 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.589 253665 DEBUG nova.storage.rbd_utils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 3f8530ae-f429-4807-81ca-84d8f964a38c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:51 np0005532048 nova_compute[253661]: 2025-11-22 09:34:51.597 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 270 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.7 MiB/s wr, 117 op/s
Nov 22 04:34:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:34:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1283272683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.048 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.052 253665 DEBUG nova.virt.libvirt.vif [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1778115453',display_name='tempest-TestNetworkAdvancedServerOps-server-1778115453',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1778115453',id=107,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+r3c5G7EAzAvDolEqHNwqbmQvWxBEdieJcgY8c742Oy3jPYQetvou66qf/+0L4oLTbdYIoGxiGleOdIQIziTFL9k2EXWuKOZj/cVROyz5ALJrQCnYT9x1mSwpv+ywspw==',key_name='tempest-TestNetworkAdvancedServerOps-641041807',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-jtawb2ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:34:47Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=3f8530ae-f429-4807-81ca-84d8f964a38c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.052 253665 DEBUG nova.network.os_vif_util [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.054 253665 DEBUG nova.network.os_vif_util [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.056 253665 DEBUG nova.objects.instance [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.069 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  <uuid>3f8530ae-f429-4807-81ca-84d8f964a38c</uuid>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  <name>instance-0000006b</name>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1778115453</nova:name>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:34:51</nova:creationTime>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:34:52 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:        <nova:user uuid="ac89f965408f4a26b39ee2ae4725ff14">tempest-TestNetworkAdvancedServerOps-1215776227-project-member</nova:user>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:        <nova:project uuid="0112f56c468c4f90971b92126078e951">tempest-TestNetworkAdvancedServerOps-1215776227</nova:project>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:        <nova:port uuid="8da41f38-3812-4494-9cab-c4854772a569">
Nov 22 04:34:52 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <entry name="serial">3f8530ae-f429-4807-81ca-84d8f964a38c</entry>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <entry name="uuid">3f8530ae-f429-4807-81ca-84d8f964a38c</entry>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/3f8530ae-f429-4807-81ca-84d8f964a38c_disk">
Nov 22 04:34:52 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:34:52 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/3f8530ae-f429-4807-81ca-84d8f964a38c_disk.config">
Nov 22 04:34:52 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:34:52 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:02:ea:ba"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <target dev="tap8da41f38-38"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/console.log" append="off"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:34:52 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:34:52 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:34:52 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:34:52 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.071 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Preparing to wait for external event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.072 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.072 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.073 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.074 253665 DEBUG nova.virt.libvirt.vif [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1778115453',display_name='tempest-TestNetworkAdvancedServerOps-server-1778115453',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1778115453',id=107,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+r3c5G7EAzAvDolEqHNwqbmQvWxBEdieJcgY8c742Oy3jPYQetvou66qf/+0L4oLTbdYIoGxiGleOdIQIziTFL9k2EXWuKOZj/cVROyz5ALJrQCnYT9x1mSwpv+ywspw==',key_name='tempest-TestNetworkAdvancedServerOps-641041807',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-jtawb2ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:34:47Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=3f8530ae-f429-4807-81ca-84d8f964a38c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.074 253665 DEBUG nova.network.os_vif_util [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.075 253665 DEBUG nova.network.os_vif_util [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.075 253665 DEBUG os_vif [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.076 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.077 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.077 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.083 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.083 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8da41f38-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.084 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8da41f38-38, col_values=(('external_ids', {'iface-id': '8da41f38-3812-4494-9cab-c4854772a569', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:ea:ba', 'vm-uuid': '3f8530ae-f429-4807-81ca-84d8f964a38c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.085 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:52 np0005532048 NetworkManager[48920]: <info>  [1763804092.0869] manager: (tap8da41f38-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/439)
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.087 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.092 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.093 253665 INFO os_vif [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38')#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.140 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.141 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.142 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No VIF found with MAC fa:16:3e:02:ea:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.143 253665 INFO nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Using config drive#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.175 253665 DEBUG nova.storage.rbd_utils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 3f8530ae-f429-4807-81ca-84d8f964a38c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:34:52
Nov 22 04:34:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:34:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:34:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', '.mgr', '.rgw.root']
Nov 22 04:34:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:34:52 np0005532048 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Nov 22 04:34:52 np0005532048 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d0000006a.scope: Consumed 13.726s CPU time.
Nov 22 04:34:52 np0005532048 systemd-machined[215941]: Machine qemu-131-instance-0000006a terminated.
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.590 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.601 253665 INFO nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Creating config drive at /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/disk.config#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.606 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx95ajfle execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:34:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:34:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:34:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:34:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:34:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.754 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx95ajfle" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.788 253665 DEBUG nova.storage.rbd_utils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 3f8530ae-f429-4807-81ca-84d8f964a38c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.792 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/disk.config 3f8530ae-f429-4807-81ca-84d8f964a38c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.992 253665 DEBUG oslo_concurrency.processutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/disk.config 3f8530ae-f429-4807-81ca-84d8f964a38c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:52 np0005532048 nova_compute[253661]: 2025-11-22 09:34:52.993 253665 INFO nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Deleting local config drive /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/disk.config because it was imported into RBD.#033[00m
Nov 22 04:34:53 np0005532048 nova_compute[253661]: 2025-11-22 09:34:53.037 253665 DEBUG nova.network.neutron [req-1aaabda3-f625-40c0-bc5f-70ef80d3b1a9 req-12e246c8-ec8d-4cf1-bab7-27a278799c1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updated VIF entry in instance network info cache for port 8da41f38-3812-4494-9cab-c4854772a569. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:34:53 np0005532048 nova_compute[253661]: 2025-11-22 09:34:53.038 253665 DEBUG nova.network.neutron [req-1aaabda3-f625-40c0-bc5f-70ef80d3b1a9 req-12e246c8-ec8d-4cf1-bab7-27a278799c1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updating instance_info_cache with network_info: [{"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:34:53 np0005532048 kernel: tap8da41f38-38: entered promiscuous mode
Nov 22 04:34:53 np0005532048 NetworkManager[48920]: <info>  [1763804093.0531] manager: (tap8da41f38-38): new Tun device (/org/freedesktop/NetworkManager/Devices/440)
Nov 22 04:34:53 np0005532048 systemd-udevd[363083]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:34:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:53Z|01060|binding|INFO|Claiming lport 8da41f38-3812-4494-9cab-c4854772a569 for this chassis.
Nov 22 04:34:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:53Z|01061|binding|INFO|8da41f38-3812-4494-9cab-c4854772a569: Claiming fa:16:3e:02:ea:ba 10.100.0.4
Nov 22 04:34:53 np0005532048 nova_compute[253661]: 2025-11-22 09:34:53.052 253665 DEBUG oslo_concurrency.lockutils [req-1aaabda3-f625-40c0-bc5f-70ef80d3b1a9 req-12e246c8-ec8d-4cf1-bab7-27a278799c1e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:34:53 np0005532048 nova_compute[253661]: 2025-11-22 09:34:53.054 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.061 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:ea:ba 10.100.0.4'], port_security=['fa:16:3e:02:ea:ba 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f8530ae-f429-4807-81ca-84d8f964a38c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20570e02-4f3c-425d-9564-924b275d70dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e0291e4d-91dd-4ee6-9074-0372622e253d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f04ee3-5979-45f2-bf12-c1c6b0bf9924, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8da41f38-3812-4494-9cab-c4854772a569) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.062 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8da41f38-3812-4494-9cab-c4854772a569 in datapath 20570e02-4f3c-425d-9564-924b275d70dc bound to our chassis#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.065 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20570e02-4f3c-425d-9564-924b275d70dc#033[00m
Nov 22 04:34:53 np0005532048 NetworkManager[48920]: <info>  [1763804093.0660] device (tap8da41f38-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:34:53 np0005532048 NetworkManager[48920]: <info>  [1763804093.0679] device (tap8da41f38-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:34:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:53Z|01062|binding|INFO|Setting lport 8da41f38-3812-4494-9cab-c4854772a569 ovn-installed in OVS
Nov 22 04:34:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:53Z|01063|binding|INFO|Setting lport 8da41f38-3812-4494-9cab-c4854772a569 up in Southbound
Nov 22 04:34:53 np0005532048 nova_compute[253661]: 2025-11-22 09:34:53.082 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.083 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5e811a07-45b4-4a1d-b3f7-3c2a4fd5e635]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.084 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap20570e02-41 in ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:34:53 np0005532048 nova_compute[253661]: 2025-11-22 09:34:53.086 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.086 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap20570e02-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.086 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2190e54e-a592-4251-9f51-5f8aefebdd21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.088 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[26d22db1-7ef7-471a-9f6b-e779b2b23c9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:53 np0005532048 systemd-machined[215941]: New machine qemu-133-instance-0000006b.
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.103 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a49799ee-1fa2-4aac-83eb-a28c03aa1647]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:53 np0005532048 systemd[1]: Started Virtual Machine qemu-133-instance-0000006b.
Nov 22 04:34:53 np0005532048 nova_compute[253661]: 2025-11-22 09:34:53.131 253665 INFO nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance shutdown successfully after 14 seconds.#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.132 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73abbd6c-e594-47ae-bc93-e02c1efd81c2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:53 np0005532048 nova_compute[253661]: 2025-11-22 09:34:53.143 253665 INFO nova.virt.libvirt.driver [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance destroyed successfully.#033[00m
Nov 22 04:34:53 np0005532048 nova_compute[253661]: 2025-11-22 09:34:53.155 253665 INFO nova.virt.libvirt.driver [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance destroyed successfully.#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.166 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ee6da58c-57b2-4000-a276-81b6dc6fd149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:53 np0005532048 NetworkManager[48920]: <info>  [1763804093.1760] manager: (tap20570e02-40): new Veth device (/org/freedesktop/NetworkManager/Devices/441)
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.174 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc75f5f-3ac1-45ad-891f-eb42f423824f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:53 np0005532048 systemd-udevd[363137]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.220 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ac825cba-750f-4c19-b1b4-eccef9e92524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.223 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b69a68fa-a1c9-4578-a964-728edc722414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:53 np0005532048 NetworkManager[48920]: <info>  [1763804093.2564] device (tap20570e02-40): carrier: link connected
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.264 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[db42c948-662d-4d34-ad5c-7fbf86877443]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.290 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[12ebc52b-b8ed-4509-a1d8-75d1ffb7390b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20570e02-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:a4:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694399, 'reachable_time': 29289, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363191, 'error': None, 'target': 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:53 np0005532048 nova_compute[253661]: 2025-11-22 09:34:53.305 253665 DEBUG nova.compute.manager [req-1c273d52-6354-49a6-a48d-40bb672ca2b6 req-a35455a6-26a0-4f67-8ad8-f128f5ec1581 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:34:53 np0005532048 nova_compute[253661]: 2025-11-22 09:34:53.306 253665 DEBUG oslo_concurrency.lockutils [req-1c273d52-6354-49a6-a48d-40bb672ca2b6 req-a35455a6-26a0-4f67-8ad8-f128f5ec1581 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:53 np0005532048 nova_compute[253661]: 2025-11-22 09:34:53.306 253665 DEBUG oslo_concurrency.lockutils [req-1c273d52-6354-49a6-a48d-40bb672ca2b6 req-a35455a6-26a0-4f67-8ad8-f128f5ec1581 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:53 np0005532048 nova_compute[253661]: 2025-11-22 09:34:53.306 253665 DEBUG oslo_concurrency.lockutils [req-1c273d52-6354-49a6-a48d-40bb672ca2b6 req-a35455a6-26a0-4f67-8ad8-f128f5ec1581 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:53 np0005532048 nova_compute[253661]: 2025-11-22 09:34:53.307 253665 DEBUG nova.compute.manager [req-1c273d52-6354-49a6-a48d-40bb672ca2b6 req-a35455a6-26a0-4f67-8ad8-f128f5ec1581 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Processing event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.309 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a9126b73-7682-4987-9005-1fb3f1fd0d92]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe56:a4f4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694399, 'tstamp': 694399}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363192, 'error': None, 'target': 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.337 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c34727-9b79-4c6c-9bd6-d5b6c08076c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20570e02-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:a4:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 310], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694399, 'reachable_time': 29289, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 363193, 'error': None, 'target': 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.377 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6c5a06-283e-4d8e-bdf9-1ff33fc180f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.456 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b782b496-9a4d-4a16-a24f-5441159aac8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.457 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20570e02-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.458 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.458 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20570e02-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:53 np0005532048 NetworkManager[48920]: <info>  [1763804093.4612] manager: (tap20570e02-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/442)
Nov 22 04:34:53 np0005532048 kernel: tap20570e02-40: entered promiscuous mode
Nov 22 04:34:53 np0005532048 nova_compute[253661]: 2025-11-22 09:34:53.460 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.467 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20570e02-40, col_values=(('external_ids', {'iface-id': '4aaa4802-1d2c-466f-9a8f-02dc0ee6bbe9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:53 np0005532048 nova_compute[253661]: 2025-11-22 09:34:53.466 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:53Z|01064|binding|INFO|Releasing lport 4aaa4802-1d2c-466f-9a8f-02dc0ee6bbe9 from this chassis (sb_readonly=0)
Nov 22 04:34:53 np0005532048 nova_compute[253661]: 2025-11-22 09:34:53.490 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.493 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/20570e02-4f3c-425d-9564-924b275d70dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/20570e02-4f3c-425d-9564-924b275d70dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.494 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[82c082d3-ce25-4adb-a999-fa0738cdb3f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.495 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-20570e02-4f3c-425d-9564-924b275d70dc
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/20570e02-4f3c-425d-9564-924b275d70dc.pid.haproxy
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 20570e02-4f3c-425d-9564-924b275d70dc
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:34:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:53.496 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'env', 'PROCESS_TAG=haproxy-20570e02-4f3c-425d-9564-924b275d70dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/20570e02-4f3c-425d-9564-924b275d70dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:34:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:34:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 325 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 6.1 MiB/s wr, 159 op/s
Nov 22 04:34:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:54.227 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:a9:a9 10.100.0.2 2001:db8::f816:3eff:fe75:a9a9'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe75:a9a9/64', 'neutron:device_id': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b741a31-36e5-42a1-8d34-26158fe9deb6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ff0f834b-9623-4226-98e1-741634e7eb05) old=Port_Binding(mac=['fa:16:3e:75:a9:a9 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:34:54 np0005532048 nova_compute[253661]: 2025-11-22 09:34:54.408 253665 INFO nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Deleting instance files /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6_del#033[00m
Nov 22 04:34:54 np0005532048 nova_compute[253661]: 2025-11-22 09:34:54.410 253665 INFO nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Deletion of /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6_del complete#033[00m
Nov 22 04:34:54 np0005532048 podman[363205]: 2025-11-22 09:34:54.428098399 +0000 UTC m=+0.110207592 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 22 04:34:54 np0005532048 podman[363244]: 2025-11-22 09:34:54.463348325 +0000 UTC m=+0.052332592 container create 83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:34:54 np0005532048 systemd[1]: Started libpod-conmon-83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2.scope.
Nov 22 04:34:54 np0005532048 podman[363244]: 2025-11-22 09:34:54.43658619 +0000 UTC m=+0.025570467 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:34:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:34:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/054c9b7e428385e12a38fc4d69601f1307ee63ce36fecad82d262095c96a25dd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:34:54 np0005532048 nova_compute[253661]: 2025-11-22 09:34:54.537 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:34:54 np0005532048 nova_compute[253661]: 2025-11-22 09:34:54.538 253665 INFO nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Creating image(s)#033[00m
Nov 22 04:34:54 np0005532048 podman[363244]: 2025-11-22 09:34:54.548612366 +0000 UTC m=+0.137596643 container init 83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:34:54 np0005532048 podman[363244]: 2025-11-22 09:34:54.555501237 +0000 UTC m=+0.144485494 container start 83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 04:34:54 np0005532048 nova_compute[253661]: 2025-11-22 09:34:54.561 253665 DEBUG nova.storage.rbd_utils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:54 np0005532048 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[363266]: [NOTICE]   (363285) : New worker (363305) forked
Nov 22 04:34:54 np0005532048 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[363266]: [NOTICE]   (363285) : Loading success.
Nov 22 04:34:54 np0005532048 nova_compute[253661]: 2025-11-22 09:34:54.585 253665 DEBUG nova.storage.rbd_utils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:54 np0005532048 nova_compute[253661]: 2025-11-22 09:34:54.606 253665 DEBUG nova.storage.rbd_utils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:54 np0005532048 nova_compute[253661]: 2025-11-22 09:34:54.609 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:54.630 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ff0f834b-9623-4226-98e1-741634e7eb05 in datapath d3e4e01e-5e3e-4572-b404-ee47aaec1186 updated#033[00m
Nov 22 04:34:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:54.631 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d3e4e01e-5e3e-4572-b404-ee47aaec1186, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:34:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:54.632 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b925d9a7-1459-4eca-a27f-18fbe77dca03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:54 np0005532048 nova_compute[253661]: 2025-11-22 09:34:54.681 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:54 np0005532048 nova_compute[253661]: 2025-11-22 09:34:54.682 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:54 np0005532048 nova_compute[253661]: 2025-11-22 09:34:54.682 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:54 np0005532048 nova_compute[253661]: 2025-11-22 09:34:54.683 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "1e779d522b09f035efed829c4b66cb9ab2a7bed4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:54 np0005532048 nova_compute[253661]: 2025-11-22 09:34:54.712 253665 DEBUG nova.storage.rbd_utils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:34:54 np0005532048 nova_compute[253661]: 2025-11-22 09:34:54.716 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.026 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804095.0260923, 3f8530ae-f429-4807-81ca-84d8f964a38c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.027 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] VM Started (Lifecycle Event)#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.031 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.034 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.037 253665 INFO nova.virt.libvirt.driver [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance spawned successfully.#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.037 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.053 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.057 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.062 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.063 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.063 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.064 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.064 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.065 253665 DEBUG nova.virt.libvirt.driver [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.094 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.095 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804095.030547, 3f8530ae-f429-4807-81ca-84d8f964a38c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.095 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.098 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.383s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.130 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.134 253665 INFO nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Took 7.72 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.134 253665 DEBUG nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.171 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804095.0335686, 3f8530ae-f429-4807-81ca-84d8f964a38c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.171 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.176 253665 DEBUG nova.storage.rbd_utils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] resizing rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.206 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.213 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.271 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.278 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.279 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Ensure instance console log exists: /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.282 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.283 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.283 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.285 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.294 253665 INFO nova.compute.manager [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Took 8.86 seconds to build instance.
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.300 253665 WARNING nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.306 253665 DEBUG nova.virt.libvirt.host [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.307 253665 DEBUG nova.virt.libvirt.host [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.311 253665 DEBUG nova.virt.libvirt.host [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.311 253665 DEBUG nova.virt.libvirt.host [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.312 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.312 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:53Z,direct_url=<?>,disk_format='qcow2',id=baf70c6a-4f18-40eb-9d40-874af269a47f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.312 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.313 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.313 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.313 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.313 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.313 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.313 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.314 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.314 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.314 253665 DEBUG nova.virt.hardware [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.314 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.316 253665 DEBUG oslo_concurrency.lockutils [None req-7b52ba0b-becc-4499-a954-2d8307e5c1c8 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.333 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.406 253665 DEBUG nova.compute.manager [req-fa63de85-461d-4730-81b5-43cf4285d3c3 req-c5c1c2b5-57ef-4818-920b-7f9b6bc014bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.407 253665 DEBUG oslo_concurrency.lockutils [req-fa63de85-461d-4730-81b5-43cf4285d3c3 req-c5c1c2b5-57ef-4818-920b-7f9b6bc014bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.407 253665 DEBUG oslo_concurrency.lockutils [req-fa63de85-461d-4730-81b5-43cf4285d3c3 req-c5c1c2b5-57ef-4818-920b-7f9b6bc014bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.407 253665 DEBUG oslo_concurrency.lockutils [req-fa63de85-461d-4730-81b5-43cf4285d3c3 req-c5c1c2b5-57ef-4818-920b-7f9b6bc014bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.407 253665 DEBUG nova.compute.manager [req-fa63de85-461d-4730-81b5-43cf4285d3c3 req-c5c1c2b5-57ef-4818-920b-7f9b6bc014bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] No waiting events found dispatching network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.408 253665 WARNING nova.compute.manager [req-fa63de85-461d-4730-81b5-43cf4285d3c3 req-c5c1c2b5-57ef-4818-920b-7f9b6bc014bb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received unexpected event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 for instance with vm_state active and task_state None.
Nov 22 04:34:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:34:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:34:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:34:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:34:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:34:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 325 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 6.1 MiB/s wr, 159 op/s
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.869 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:34:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:34:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3226086813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.898 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.926 253665 DEBUG nova.storage.rbd_utils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:34:55 np0005532048 nova_compute[253661]: 2025-11-22 09:34:55.932 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:34:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:34:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3491713492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:34:56 np0005532048 nova_compute[253661]: 2025-11-22 09:34:56.492 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:34:56 np0005532048 nova_compute[253661]: 2025-11-22 09:34:56.496 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  <uuid>361d3f1d-84a4-4159-a69a-8a0254446ab6</uuid>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  <name>instance-0000006a</name>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServerShowV257Test-server-1169039346</nova:name>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:34:55</nova:creationTime>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:34:56 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:        <nova:user uuid="c9e3213f01af435aab231356352dba1b">tempest-ServerShowV257Test-555892026-project-member</nova:user>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:        <nova:project uuid="d7e64b9e1f5f4ed7a0a6326357a91223">tempest-ServerShowV257Test-555892026</nova:project>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="baf70c6a-4f18-40eb-9d40-874af269a47f"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <entry name="serial">361d3f1d-84a4-4159-a69a-8a0254446ab6</entry>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <entry name="uuid">361d3f1d-84a4-4159-a69a-8a0254446ab6</entry>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/361d3f1d-84a4-4159-a69a-8a0254446ab6_disk">
Nov 22 04:34:56 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:34:56 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config">
Nov 22 04:34:56 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:34:56 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/console.log" append="off"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:34:56 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:34:56 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:34:56 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:34:56 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:34:56 np0005532048 nova_compute[253661]: 2025-11-22 09:34:56.558 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:34:56 np0005532048 nova_compute[253661]: 2025-11-22 09:34:56.559 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:34:56 np0005532048 nova_compute[253661]: 2025-11-22 09:34:56.560 253665 INFO nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Using config drive
Nov 22 04:34:56 np0005532048 nova_compute[253661]: 2025-11-22 09:34:56.583 253665 DEBUG nova.storage.rbd_utils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:34:56 np0005532048 nova_compute[253661]: 2025-11-22 09:34:56.601 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:34:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:34:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:34:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:34:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:34:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:34:56 np0005532048 nova_compute[253661]: 2025-11-22 09:34:56.630 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'keypairs' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:34:56 np0005532048 nova_compute[253661]: 2025-11-22 09:34:56.977 253665 INFO nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Creating config drive at /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config
Nov 22 04:34:56 np0005532048 nova_compute[253661]: 2025-11-22 09:34:56.983 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcdd1xkba execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.087 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.163 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcdd1xkba" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.190 253665 DEBUG nova.storage.rbd_utils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] rbd image 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.194 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.307 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.308 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.309 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.309 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.309 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.311 253665 INFO nova.compute.manager [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Terminating instance#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.311 253665 DEBUG nova.compute.manager [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:34:57 np0005532048 kernel: tap46a77e89-60 (unregistering): left promiscuous mode
Nov 22 04:34:57 np0005532048 NetworkManager[48920]: <info>  [1763804097.4114] device (tap46a77e89-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:34:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:57Z|01065|binding|INFO|Releasing lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 from this chassis (sb_readonly=0)
Nov 22 04:34:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:57Z|01066|binding|INFO|Setting lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 down in Southbound
Nov 22 04:34:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:57Z|01067|binding|INFO|Removing iface tap46a77e89-60 ovn-installed in OVS
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.442 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.447 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:2e:22 10.100.0.22'], port_security=['fa:16:3e:75:2e:22 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'da98da35-5fb2-47cd-9d6b-a3bb2254bec9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15f88390-1071-41f9-b1a4-108f4f3845d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e50820c9-c083-42b3-a5c1-62f5befbff0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fcb3a475-2422-4e03-9155-1b7e58a05aab, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=46a77e89-60ff-4609-9a5a-6e542d8343e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:34:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.449 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 46a77e89-60ff-4609-9a5a-6e542d8343e1 in datapath 15f88390-1071-41f9-b1a4-108f4f3845d0 unbound from our chassis#033[00m
Nov 22 04:34:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.450 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15f88390-1071-41f9-b1a4-108f4f3845d0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:34:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.451 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3c9e4b-ab79-4220-938f-1758bb756b2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.452 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0 namespace which is not needed anymore#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.465 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.478 253665 DEBUG oslo_concurrency.processutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config 361d3f1d-84a4-4159-a69a-8a0254446ab6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.479 253665 INFO nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Deleting local config drive /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6/disk.config because it was imported into RBD.#033[00m
Nov 22 04:34:57 np0005532048 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000069.scope: Deactivated successfully.
Nov 22 04:34:57 np0005532048 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000069.scope: Consumed 13.971s CPU time.
Nov 22 04:34:57 np0005532048 systemd-machined[215941]: Machine qemu-132-instance-00000069 terminated.
Nov 22 04:34:57 np0005532048 kernel: tap46a77e89-60: entered promiscuous mode
Nov 22 04:34:57 np0005532048 NetworkManager[48920]: <info>  [1763804097.5444] manager: (tap46a77e89-60): new Tun device (/org/freedesktop/NetworkManager/Devices/443)
Nov 22 04:34:57 np0005532048 systemd-udevd[363613]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:34:57 np0005532048 kernel: tap46a77e89-60 (unregistering): left promiscuous mode
Nov 22 04:34:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:57Z|01068|binding|INFO|Claiming lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 for this chassis.
Nov 22 04:34:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:57Z|01069|binding|INFO|46a77e89-60ff-4609-9a5a-6e542d8343e1: Claiming fa:16:3e:75:2e:22 10.100.0.22
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.576 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:57 np0005532048 virtnodedevd[254391]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 22 04:34:57 np0005532048 virtnodedevd[254391]: hostname: compute-0
Nov 22 04:34:57 np0005532048 virtnodedevd[254391]: ethtool ioctl error on tap46a77e89-60: No such device
Nov 22 04:34:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.582 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:2e:22 10.100.0.22'], port_security=['fa:16:3e:75:2e:22 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'da98da35-5fb2-47cd-9d6b-a3bb2254bec9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15f88390-1071-41f9-b1a4-108f4f3845d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e50820c9-c083-42b3-a5c1-62f5befbff0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fcb3a475-2422-4e03-9155-1b7e58a05aab, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=46a77e89-60ff-4609-9a5a-6e542d8343e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:34:57 np0005532048 virtnodedevd[254391]: ethtool ioctl error on tap46a77e89-60: No such device
Nov 22 04:34:57 np0005532048 virtnodedevd[254391]: ethtool ioctl error on tap46a77e89-60: No such device
Nov 22 04:34:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:57Z|01070|binding|INFO|Setting lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 ovn-installed in OVS
Nov 22 04:34:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:57Z|01071|binding|INFO|Setting lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 up in Southbound
Nov 22 04:34:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:57Z|01072|binding|INFO|Releasing lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 from this chassis (sb_readonly=1)
Nov 22 04:34:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:57Z|01073|if_status|INFO|Dropped 3 log messages in last 555 seconds (most recently, 555 seconds ago) due to excessive rate
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.597 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:57Z|01074|if_status|INFO|Not setting lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 down as sb is readonly
Nov 22 04:34:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:57Z|01075|binding|INFO|Removing iface tap46a77e89-60 ovn-installed in OVS
Nov 22 04:34:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:57Z|01076|binding|INFO|Releasing lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 from this chassis (sb_readonly=0)
Nov 22 04:34:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:34:57Z|01077|binding|INFO|Setting lport 46a77e89-60ff-4609-9a5a-6e542d8343e1 down in Southbound
Nov 22 04:34:57 np0005532048 virtnodedevd[254391]: ethtool ioctl error on tap46a77e89-60: No such device
Nov 22 04:34:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.605 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:2e:22 10.100.0.22'], port_security=['fa:16:3e:75:2e:22 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'da98da35-5fb2-47cd-9d6b-a3bb2254bec9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15f88390-1071-41f9-b1a4-108f4f3845d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e50820c9-c083-42b3-a5c1-62f5befbff0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fcb3a475-2422-4e03-9155-1b7e58a05aab, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=46a77e89-60ff-4609-9a5a-6e542d8343e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:34:57 np0005532048 virtnodedevd[254391]: ethtool ioctl error on tap46a77e89-60: No such device
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.611 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:57 np0005532048 virtnodedevd[254391]: ethtool ioctl error on tap46a77e89-60: No such device
Nov 22 04:34:57 np0005532048 virtnodedevd[254391]: ethtool ioctl error on tap46a77e89-60: No such device
Nov 22 04:34:57 np0005532048 virtnodedevd[254391]: ethtool ioctl error on tap46a77e89-60: No such device
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.626 253665 INFO nova.virt.libvirt.driver [-] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Instance destroyed successfully.#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.627 253665 DEBUG nova.objects.instance [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid da98da35-5fb2-47cd-9d6b-a3bb2254bec9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.638 253665 DEBUG nova.virt.libvirt.vif [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:34:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-295501579',display_name='tempest-TestNetworkBasicOps-server-295501579',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-295501579',id=105,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNCNSJMfyeSfEucYEe4QlFd/0YaeFQZS6dilhF9jqVM15NRor8ABSVHqTZCPRl6JVm69HZTDz0B8aTd74/zbrdmaxxEXgRl0/0G8KTm0chRbWM6114wV+6thTAZigMHMqw==',key_name='tempest-TestNetworkBasicOps-1414379365',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:34:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-eerdrny6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:34:35Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=da98da35-5fb2-47cd-9d6b-a3bb2254bec9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.638 253665 DEBUG nova.network.os_vif_util [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "address": "fa:16:3e:75:2e:22", "network": {"id": "15f88390-1071-41f9-b1a4-108f4f3845d0", "bridge": "br-int", "label": "tempest-network-smoke--419105064", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46a77e89-60", "ovs_interfaceid": "46a77e89-60ff-4609-9a5a-6e542d8343e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.639 253665 DEBUG nova.network.os_vif_util [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:2e:22,bridge_name='br-int',has_traffic_filtering=True,id=46a77e89-60ff-4609-9a5a-6e542d8343e1,network=Network(15f88390-1071-41f9-b1a4-108f4f3845d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46a77e89-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.639 253665 DEBUG os_vif [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:2e:22,bridge_name='br-int',has_traffic_filtering=True,id=46a77e89-60ff-4609-9a5a-6e542d8343e1,network=Network(15f88390-1071-41f9-b1a4-108f4f3845d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46a77e89-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.642 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap46a77e89-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.644 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.646 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.648 253665 INFO os_vif [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:2e:22,bridge_name='br-int',has_traffic_filtering=True,id=46a77e89-60ff-4609-9a5a-6e542d8343e1,network=Network(15f88390-1071-41f9-b1a4-108f4f3845d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46a77e89-60')#033[00m
Nov 22 04:34:57 np0005532048 systemd-machined[215941]: New machine qemu-134-instance-0000006a.
Nov 22 04:34:57 np0005532048 systemd[1]: Started Virtual Machine qemu-134-instance-0000006a.
Nov 22 04:34:57 np0005532048 neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0[362715]: [NOTICE]   (362719) : haproxy version is 2.8.14-c23fe91
Nov 22 04:34:57 np0005532048 neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0[362715]: [NOTICE]   (362719) : path to executable is /usr/sbin/haproxy
Nov 22 04:34:57 np0005532048 neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0[362715]: [ALERT]    (362719) : Current worker (362721) exited with code 143 (Terminated)
Nov 22 04:34:57 np0005532048 neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0[362715]: [WARNING]  (362719) : All workers exited. Exiting... (0)
Nov 22 04:34:57 np0005532048 systemd[1]: libpod-3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf.scope: Deactivated successfully.
Nov 22 04:34:57 np0005532048 podman[363645]: 2025-11-22 09:34:57.691727091 +0000 UTC m=+0.086359249 container died 3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:34:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf-userdata-shm.mount: Deactivated successfully.
Nov 22 04:34:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0dbd92e328427ae01a91a6e788883f396c5e4580a3c0d724680d342b6d537fb2-merged.mount: Deactivated successfully.
Nov 22 04:34:57 np0005532048 podman[363645]: 2025-11-22 09:34:57.831056066 +0000 UTC m=+0.225688234 container cleanup 3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:34:57 np0005532048 systemd[1]: libpod-conmon-3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf.scope: Deactivated successfully.
Nov 22 04:34:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 295 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 7.0 MiB/s wr, 216 op/s
Nov 22 04:34:57 np0005532048 podman[363715]: 2025-11-22 09:34:57.948533337 +0000 UTC m=+0.086988284 container remove 3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 04:34:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.955 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ee7ae19c-a83d-4c73-afca-27f5df6c8673]: (4, ('Sat Nov 22 09:34:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0 (3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf)\n3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf\nSat Nov 22 09:34:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0 (3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf)\n3cbddb472c410184f5cbb791c36786fdb4b69ee9f8b82dfec669fe36f4656cdf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.958 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[199f0955-3734-4bc8-96bd-9826553fbea5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.959 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15f88390-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.962 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:57 np0005532048 kernel: tap15f88390-10: left promiscuous mode
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.965 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.972 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[28c2a5a8-67ad-4efd-aff3-e10f55258b5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.989 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f424b037-db19-450a-bb4e-bd4237492dc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:57 np0005532048 nova_compute[253661]: 2025-11-22 09:34:57.989 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:34:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:57.991 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fa03396d-a7fe-47e9-9534-30ea6f1a204c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.013 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[33739d93-70e6-4422-9e8a-cc2b7bbae122]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692565, 'reachable_time': 42158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363764, 'error': None, 'target': 'ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:58 np0005532048 systemd[1]: run-netns-ovnmeta\x2d15f88390\x2d1071\x2d41f9\x2db1a4\x2d108f4f3845d0.mount: Deactivated successfully.
Nov 22 04:34:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.017 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-15f88390-1071-41f9-b1a4-108f4f3845d0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:34:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.018 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8f482a38-f03d-4b95-805d-1e48bd9b562f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.022 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 46a77e89-60ff-4609-9a5a-6e542d8343e1 in datapath 15f88390-1071-41f9-b1a4-108f4f3845d0 unbound from our chassis#033[00m
Nov 22 04:34:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.023 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15f88390-1071-41f9-b1a4-108f4f3845d0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:34:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.024 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b2cd4617-0058-4707-a7b3-9db6d2a41e79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.025 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 46a77e89-60ff-4609-9a5a-6e542d8343e1 in datapath 15f88390-1071-41f9-b1a4-108f4f3845d0 unbound from our chassis#033[00m
Nov 22 04:34:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.026 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15f88390-1071-41f9-b1a4-108f4f3845d0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:34:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:34:58.027 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0e4a6f8c-d76a-4b21-87aa-e12f54a313ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.133 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 361d3f1d-84a4-4159-a69a-8a0254446ab6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.133 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804098.1327739, 361d3f1d-84a4-4159-a69a-8a0254446ab6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.133 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.141 253665 DEBUG nova.compute.manager [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.141 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.145 253665 INFO nova.virt.libvirt.driver [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance spawned successfully.#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.145 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.165 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.171 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.175 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.176 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.176 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.176 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.177 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.177 253665 DEBUG nova.virt.libvirt.driver [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.207 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.208 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804098.1390717, 361d3f1d-84a4-4159-a69a-8a0254446ab6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.208 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] VM Started (Lifecycle Event)#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.236 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.241 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.245 253665 DEBUG nova.compute.manager [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.276 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.306 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.306 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.306 253665 DEBUG nova.objects.instance [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.366 253665 DEBUG oslo_concurrency.lockutils [None req-596e2abb-923e-4e13-b75e-86cf69fd5bd9 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.702 253665 DEBUG nova.compute.manager [req-cae43404-f417-4661-a530-bd5592c54808 req-cadf1129-9263-45f8-a2c1-9dd13964f344 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-changed-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.703 253665 DEBUG nova.compute.manager [req-cae43404-f417-4661-a530-bd5592c54808 req-cadf1129-9263-45f8-a2c1-9dd13964f344 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Refreshing instance network info cache due to event network-changed-8da41f38-3812-4494-9cab-c4854772a569. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.703 253665 DEBUG oslo_concurrency.lockutils [req-cae43404-f417-4661-a530-bd5592c54808 req-cadf1129-9263-45f8-a2c1-9dd13964f344 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.704 253665 DEBUG oslo_concurrency.lockutils [req-cae43404-f417-4661-a530-bd5592c54808 req-cadf1129-9263-45f8-a2c1-9dd13964f344 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.704 253665 DEBUG nova.network.neutron [req-cae43404-f417-4661-a530-bd5592c54808 req-cadf1129-9263-45f8-a2c1-9dd13964f344 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Refreshing network info cache for port 8da41f38-3812-4494-9cab-c4854772a569 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.774 253665 INFO nova.virt.libvirt.driver [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Deleting instance files /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9_del#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.774 253665 INFO nova.virt.libvirt.driver [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Deletion of /var/lib/nova/instances/da98da35-5fb2-47cd-9d6b-a3bb2254bec9_del complete#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.842 253665 INFO nova.compute.manager [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Took 1.53 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.843 253665 DEBUG oslo.service.loopingcall [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.843 253665 DEBUG nova.compute.manager [-] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.844 253665 DEBUG nova.network.neutron [-] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.933 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2a866674-0c27-4cfc-89f2-dfe8e9768900" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.933 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:58 np0005532048 nova_compute[253661]: 2025-11-22 09:34:58.950 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.044 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.045 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.054 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.055 253665 INFO nova.compute.claims [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.258 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.552 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.553 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.553 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "361d3f1d-84a4-4159-a69a-8a0254446ab6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.554 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "361d3f1d-84a4-4159-a69a-8a0254446ab6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.554 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "361d3f1d-84a4-4159-a69a-8a0254446ab6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.555 253665 INFO nova.compute.manager [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Terminating instance#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.557 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "refresh_cache-361d3f1d-84a4-4159-a69a-8a0254446ab6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.557 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquired lock "refresh_cache-361d3f1d-84a4-4159-a69a-8a0254446ab6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.557 253665 DEBUG nova.network.neutron [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.630 253665 DEBUG nova.network.neutron [-] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.651 253665 INFO nova.compute.manager [-] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Took 0.81 seconds to deallocate network for instance.#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.693 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.752 253665 DEBUG nova.network.neutron [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:34:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:34:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3326564271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.779 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.786 253665 DEBUG nova.compute.provider_tree [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.802 253665 DEBUG nova.scheduler.client.report [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.827 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.828 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.830 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:34:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 269 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.1 MiB/s wr, 294 op/s
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.879 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.879 253665 DEBUG nova.network.neutron [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.892 253665 INFO nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.906 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:34:59 np0005532048 nova_compute[253661]: 2025-11-22 09:34:59.937 253665 DEBUG oslo_concurrency.processutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.261 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.262 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.263 253665 INFO nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Creating image(s)#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.292 253665 DEBUG nova.storage.rbd_utils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.325 253665 DEBUG nova.storage.rbd_utils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.351 253665 DEBUG nova.storage.rbd_utils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.356 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:35:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1988217623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.415 253665 DEBUG nova.network.neutron [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.418 253665 DEBUG nova.network.neutron [req-cae43404-f417-4661-a530-bd5592c54808 req-cadf1129-9263-45f8-a2c1-9dd13964f344 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updated VIF entry in instance network info cache for port 8da41f38-3812-4494-9cab-c4854772a569. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.419 253665 DEBUG nova.network.neutron [req-cae43404-f417-4661-a530-bd5592c54808 req-cadf1129-9263-45f8-a2c1-9dd13964f344 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updating instance_info_cache with network_info: [{"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.421 253665 DEBUG nova.policy [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.427 253665 DEBUG oslo_concurrency.processutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.433 253665 DEBUG nova.compute.provider_tree [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.459 253665 DEBUG nova.scheduler.client.report [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.463 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Releasing lock "refresh_cache-361d3f1d-84a4-4159-a69a-8a0254446ab6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.463 253665 DEBUG nova.compute.manager [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.465 253665 DEBUG oslo_concurrency.lockutils [req-cae43404-f417-4661-a530-bd5592c54808 req-cadf1129-9263-45f8-a2c1-9dd13964f344 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.466 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.466 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.467 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.468 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.489 253665 DEBUG nova.storage.rbd_utils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.493 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.555 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.580 253665 INFO nova.scheduler.client.report [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance da98da35-5fb2-47cd-9d6b-a3bb2254bec9#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.646 253665 DEBUG oslo_concurrency.lockutils [None req-115b5ef7-b37c-4977-801e-6a8688e8aa44 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:00 np0005532048 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Nov 22 04:35:00 np0005532048 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006a.scope: Consumed 2.821s CPU time.
Nov 22 04:35:00 np0005532048 systemd-machined[215941]: Machine qemu-134-instance-0000006a terminated.
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.777 253665 DEBUG nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received event network-vif-unplugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.779 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.779 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.779 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.780 253665 DEBUG nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] No waiting events found dispatching network-vif-unplugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.780 253665 WARNING nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received unexpected event network-vif-unplugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.780 253665 DEBUG nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.780 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.781 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.781 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.781 253665 DEBUG nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] No waiting events found dispatching network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.781 253665 WARNING nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received unexpected event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.782 253665 DEBUG nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.782 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.782 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.782 253665 DEBUG oslo_concurrency.lockutils [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "da98da35-5fb2-47cd-9d6b-a3bb2254bec9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.782 253665 DEBUG nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] No waiting events found dispatching network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.785 253665 WARNING nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received unexpected event network-vif-plugged-46a77e89-60ff-4609-9a5a-6e542d8343e1 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.786 253665 DEBUG nova.compute.manager [req-56457182-ca52-4aa4-baf5-4698e7ef4e20 req-9f39c060-c2aa-440c-97bf-e3a29e0df95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Received event network-vif-deleted-46a77e89-60ff-4609-9a5a-6e542d8343e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.799 253665 INFO nova.virt.libvirt.driver [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance destroyed successfully.#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.800 253665 DEBUG nova.objects.instance [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lazy-loading 'resources' on Instance uuid 361d3f1d-84a4-4159-a69a-8a0254446ab6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:00 np0005532048 nova_compute[253661]: 2025-11-22 09:35:00.876 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:01 np0005532048 nova_compute[253661]: 2025-11-22 09:35:01.043 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:01 np0005532048 nova_compute[253661]: 2025-11-22 09:35:01.125 253665 DEBUG nova.storage.rbd_utils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:35:01 np0005532048 nova_compute[253661]: 2025-11-22 09:35:01.271 253665 DEBUG nova.objects.instance [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 2a866674-0c27-4cfc-89f2-dfe8e9768900 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:01 np0005532048 nova_compute[253661]: 2025-11-22 09:35:01.285 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:35:01 np0005532048 nova_compute[253661]: 2025-11-22 09:35:01.286 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Ensure instance console log exists: /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:35:01 np0005532048 nova_compute[253661]: 2025-11-22 09:35:01.286 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:01 np0005532048 nova_compute[253661]: 2025-11-22 09:35:01.287 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:01 np0005532048 nova_compute[253661]: 2025-11-22 09:35:01.287 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 239 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.2 MiB/s wr, 307 op/s
Nov 22 04:35:01 np0005532048 nova_compute[253661]: 2025-11-22 09:35:01.947 253665 INFO nova.virt.libvirt.driver [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Deleting instance files /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6_del#033[00m
Nov 22 04:35:01 np0005532048 nova_compute[253661]: 2025-11-22 09:35:01.948 253665 INFO nova.virt.libvirt.driver [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Deletion of /var/lib/nova/instances/361d3f1d-84a4-4159-a69a-8a0254446ab6_del complete#033[00m
Nov 22 04:35:02 np0005532048 nova_compute[253661]: 2025-11-22 09:35:02.007 253665 INFO nova.compute.manager [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Took 1.54 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:35:02 np0005532048 nova_compute[253661]: 2025-11-22 09:35:02.008 253665 DEBUG oslo.service.loopingcall [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:35:02 np0005532048 nova_compute[253661]: 2025-11-22 09:35:02.009 253665 DEBUG nova.compute.manager [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:35:02 np0005532048 nova_compute[253661]: 2025-11-22 09:35:02.009 253665 DEBUG nova.network.neutron [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:35:02 np0005532048 nova_compute[253661]: 2025-11-22 09:35:02.287 253665 DEBUG nova.network.neutron [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:35:02 np0005532048 nova_compute[253661]: 2025-11-22 09:35:02.301 253665 DEBUG nova.network.neutron [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:02 np0005532048 nova_compute[253661]: 2025-11-22 09:35:02.316 253665 INFO nova.compute.manager [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Took 0.31 seconds to deallocate network for instance.#033[00m
Nov 22 04:35:02 np0005532048 nova_compute[253661]: 2025-11-22 09:35:02.375 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:02 np0005532048 nova_compute[253661]: 2025-11-22 09:35:02.375 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:02 np0005532048 nova_compute[253661]: 2025-11-22 09:35:02.460 253665 DEBUG nova.network.neutron [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Successfully created port: 0334ba91-f8b0-462b-a47b-b421e8796a21 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:35:02 np0005532048 nova_compute[253661]: 2025-11-22 09:35:02.508 253665 DEBUG oslo_concurrency.processutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:02 np0005532048 nova_compute[253661]: 2025-11-22 09:35:02.645 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018322390644331574 of space, bias 1.0, pg target 0.5496717193299472 quantized to 32 (current 32)
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:35:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:35:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:35:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2097270545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:35:02 np0005532048 nova_compute[253661]: 2025-11-22 09:35:02.985 253665 DEBUG oslo_concurrency.processutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:02 np0005532048 nova_compute[253661]: 2025-11-22 09:35:02.993 253665 DEBUG nova.compute.provider_tree [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:35:03 np0005532048 nova_compute[253661]: 2025-11-22 09:35:03.016 253665 DEBUG nova.scheduler.client.report [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:35:03 np0005532048 nova_compute[253661]: 2025-11-22 09:35:03.051 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:03 np0005532048 nova_compute[253661]: 2025-11-22 09:35:03.094 253665 INFO nova.scheduler.client.report [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Deleted allocations for instance 361d3f1d-84a4-4159-a69a-8a0254446ab6#033[00m
Nov 22 04:35:03 np0005532048 nova_compute[253661]: 2025-11-22 09:35:03.191 253665 DEBUG oslo_concurrency.lockutils [None req-b53e3cfe-3e00-4b86-9e53-bcb5b8a6dc98 c9e3213f01af435aab231356352dba1b d7e64b9e1f5f4ed7a0a6326357a91223 - - default default] Lock "361d3f1d-84a4-4159-a69a-8a0254446ab6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:35:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 213 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.9 MiB/s wr, 348 op/s
Nov 22 04:35:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:04Z|01078|binding|INFO|Releasing lport 4aaa4802-1d2c-466f-9a8f-02dc0ee6bbe9 from this chassis (sb_readonly=0)
Nov 22 04:35:04 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:04Z|01079|binding|INFO|Releasing lport a1771b67-4cb9-46af-b99c-bccbb7cc939f from this chassis (sb_readonly=0)
Nov 22 04:35:04 np0005532048 nova_compute[253661]: 2025-11-22 09:35:04.562 253665 DEBUG nova.network.neutron [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Successfully updated port: 0334ba91-f8b0-462b-a47b-b421e8796a21 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:35:04 np0005532048 nova_compute[253661]: 2025-11-22 09:35:04.579 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:04 np0005532048 nova_compute[253661]: 2025-11-22 09:35:04.582 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:35:04 np0005532048 nova_compute[253661]: 2025-11-22 09:35:04.583 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:35:04 np0005532048 nova_compute[253661]: 2025-11-22 09:35:04.583 253665 DEBUG nova.network.neutron [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:35:04 np0005532048 nova_compute[253661]: 2025-11-22 09:35:04.640 253665 DEBUG nova.compute.manager [req-99bee4c7-33f2-4bb2-9874-f7105b17539a req-d3226d3b-f2b0-4678-aa5f-e522c5fba3f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-changed-0334ba91-f8b0-462b-a47b-b421e8796a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:04 np0005532048 nova_compute[253661]: 2025-11-22 09:35:04.641 253665 DEBUG nova.compute.manager [req-99bee4c7-33f2-4bb2-9874-f7105b17539a req-d3226d3b-f2b0-4678-aa5f-e522c5fba3f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Refreshing instance network info cache due to event network-changed-0334ba91-f8b0-462b-a47b-b421e8796a21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:35:04 np0005532048 nova_compute[253661]: 2025-11-22 09:35:04.641 253665 DEBUG oslo_concurrency.lockutils [req-99bee4c7-33f2-4bb2-9874-f7105b17539a req-d3226d3b-f2b0-4678-aa5f-e522c5fba3f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:35:04 np0005532048 nova_compute[253661]: 2025-11-22 09:35:04.734 253665 DEBUG nova.network.neutron [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.391 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.391 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.392 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.392 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.392 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.393 253665 INFO nova.compute.manager [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Terminating instance#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.395 253665 DEBUG nova.compute.manager [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:35:05 np0005532048 kernel: tap4d3de607-ad (unregistering): left promiscuous mode
Nov 22 04:35:05 np0005532048 NetworkManager[48920]: <info>  [1763804105.4622] device (tap4d3de607-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.474 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:05Z|01080|binding|INFO|Releasing lport 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a from this chassis (sb_readonly=0)
Nov 22 04:35:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:05Z|01081|binding|INFO|Setting lport 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a down in Southbound
Nov 22 04:35:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:05Z|01082|binding|INFO|Removing iface tap4d3de607-ad ovn-installed in OVS
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.478 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:05.485 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:2e:8f 10.100.0.14'], port_security=['fa:16:3e:cf:2e:8f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c5540f5a-8dfa-4b11-8452-c6fe99db1d64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ea999ce-3074-41ab-b630-d39c003b894a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1557c77-7174-4c01-8889-0c9609535e78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f542600-846f-418d-bf6a-c20db70e9dc6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:35:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:05.491 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a in datapath 5ea999ce-3074-41ab-b630-d39c003b894a unbound from our chassis#033[00m
Nov 22 04:35:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:05.493 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5ea999ce-3074-41ab-b630-d39c003b894a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:35:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:05.494 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[68f40c0e-4a24-495b-9cc1-0511c994a047]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:05.496 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a namespace which is not needed anymore#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.511 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:05 np0005532048 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000068.scope: Deactivated successfully.
Nov 22 04:35:05 np0005532048 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000068.scope: Consumed 17.630s CPU time.
Nov 22 04:35:05 np0005532048 systemd-machined[215941]: Machine qemu-128-instance-00000068 terminated.
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.619 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.624 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.633 253665 INFO nova.virt.libvirt.driver [-] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Instance destroyed successfully.#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.634 253665 DEBUG nova.objects.instance [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid c5540f5a-8dfa-4b11-8452-c6fe99db1d64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.645 253665 DEBUG nova.virt.libvirt.vif [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:33:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1891830994',display_name='tempest-TestNetworkBasicOps-server-1891830994',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1891830994',id=104,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH9/oNERa4AbqxHUPuutKC57v2O48q74KuKUDGgcFa55ErxBYOBd37EKQrgbQiEDb5SwoFM9AeHUddF0XE/aljzNPw78dYMARly2RFfRYPgRPvDRHLrrtwK6XNq8kEtqIg==',key_name='tempest-TestNetworkBasicOps-1008221113',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:33:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-8xsk0rz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:33:54Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c5540f5a-8dfa-4b11-8452-c6fe99db1d64,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.647 253665 DEBUG nova.network.os_vif_util [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.648 253665 DEBUG nova.network.os_vif_util [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:2e:8f,bridge_name='br-int',has_traffic_filtering=True,id=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a,network=Network(5ea999ce-3074-41ab-b630-d39c003b894a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d3de607-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.649 253665 DEBUG os_vif [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:2e:8f,bridge_name='br-int',has_traffic_filtering=True,id=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a,network=Network(5ea999ce-3074-41ab-b630-d39c003b894a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d3de607-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.652 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.653 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d3de607-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.659 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.665 253665 INFO os_vif [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:2e:8f,bridge_name='br-int',has_traffic_filtering=True,id=4d3de607-ad62-4c7d-ae3b-7cecb934aa9a,network=Network(5ea999ce-3074-41ab-b630-d39c003b894a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d3de607-ad')#033[00m
Nov 22 04:35:05 np0005532048 neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a[360036]: [NOTICE]   (360040) : haproxy version is 2.8.14-c23fe91
Nov 22 04:35:05 np0005532048 neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a[360036]: [NOTICE]   (360040) : path to executable is /usr/sbin/haproxy
Nov 22 04:35:05 np0005532048 neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a[360036]: [ALERT]    (360040) : Current worker (360045) exited with code 143 (Terminated)
Nov 22 04:35:05 np0005532048 neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a[360036]: [WARNING]  (360040) : All workers exited. Exiting... (0)
Nov 22 04:35:05 np0005532048 systemd[1]: libpod-c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b.scope: Deactivated successfully.
Nov 22 04:35:05 np0005532048 conmon[360036]: conmon c99013bc2fb20721eb98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b.scope/container/memory.events
Nov 22 04:35:05 np0005532048 podman[364050]: 2025-11-22 09:35:05.783127653 +0000 UTC m=+0.168209254 container died c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:35:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 213 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 279 op/s
Nov 22 04:35:05 np0005532048 nova_compute[253661]: 2025-11-22 09:35:05.875 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:05 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b-userdata-shm.mount: Deactivated successfully.
Nov 22 04:35:05 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7f254f240f0f6ebbbb79d2bb4edf4897aae117a89bf2f466a7a1a47ed4367c5b-merged.mount: Deactivated successfully.
Nov 22 04:35:06 np0005532048 podman[364050]: 2025-11-22 09:35:06.001552495 +0000 UTC m=+0.386634086 container cleanup c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 04:35:06 np0005532048 systemd[1]: libpod-conmon-c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b.scope: Deactivated successfully.
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.513 253665 DEBUG nova.network.neutron [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Updating instance_info_cache with network_info: [{"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.531 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.532 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Instance network_info: |[{"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.534 253665 DEBUG oslo_concurrency.lockutils [req-99bee4c7-33f2-4bb2-9874-f7105b17539a req-d3226d3b-f2b0-4678-aa5f-e522c5fba3f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.534 253665 DEBUG nova.network.neutron [req-99bee4c7-33f2-4bb2-9874-f7105b17539a req-d3226d3b-f2b0-4678-aa5f-e522c5fba3f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Refreshing network info cache for port 0334ba91-f8b0-462b-a47b-b421e8796a21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.538 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Start _get_guest_xml network_info=[{"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.542 253665 WARNING nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.551 253665 DEBUG nova.virt.libvirt.host [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.552 253665 DEBUG nova.virt.libvirt.host [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.555 253665 DEBUG nova.virt.libvirt.host [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.556 253665 DEBUG nova.virt.libvirt.host [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.556 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.556 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.557 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.558 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.558 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.558 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.558 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.559 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.559 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.559 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.560 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.560 253665 DEBUG nova.virt.hardware [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.563 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.734 253665 DEBUG nova.compute.manager [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-changed-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.736 253665 DEBUG nova.compute.manager [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Refreshing instance network info cache due to event network-changed-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.736 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.737 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.737 253665 DEBUG nova.network.neutron [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Refreshing network info cache for port 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:35:06 np0005532048 podman[364104]: 2025-11-22 09:35:06.886478282 +0000 UTC m=+0.858519861 container remove c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:35:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.895 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[269367ee-c180-46bc-b923-0a5b2eb52d3a]: (4, ('Sat Nov 22 09:35:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a (c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b)\nc99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b\nSat Nov 22 09:35:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a (c99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b)\nc99013bc2fb20721eb984f22226d814d241ee53ff9fa71ef93a06db74635184b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.897 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fa38ac67-9d3c-4582-9fd1-a30e3743cd5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.899 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ea999ce-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.902 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:06 np0005532048 kernel: tap5ea999ce-30: left promiscuous mode
Nov 22 04:35:06 np0005532048 nova_compute[253661]: 2025-11-22 09:35:06.939 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.944 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6daf320d-29f0-49d1-868f-6a4e10d05745]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.963 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ffe19ce-fc0f-4e1f-8b5a-40a75b0d9c23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.966 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5bdb69db-cda3-46a4-8883-17c330298c27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.991 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba3d4c0-d03b-4545-877f-b90ba97f2c3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688459, 'reachable_time': 25486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364139, 'error': None, 'target': 'ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:06 np0005532048 systemd[1]: run-netns-ovnmeta\x2d5ea999ce\x2d3074\x2d41ab\x2db630\x2dd39c003b894a.mount: Deactivated successfully.
Nov 22 04:35:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.997 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5ea999ce-3074-41ab-b630-d39c003b894a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:35:06 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:06.997 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[8bfaaeb9-d8fa-4ced-b71f-f24f8f93a98e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:35:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2616839578' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.022 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.048 253665 DEBUG nova.storage.rbd_utils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.053 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:35:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1845275590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.541 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.543 253665 DEBUG nova.virt.libvirt.vif [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:34:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-534004704',display_name='tempest-TestGettingAddress-server-534004704',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-534004704',id=108,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYbMWe4z302rooKb1Fl9KsWEsQq9eJv7uwrie/+E2IEF73PZ7Q/MP1db2I4qPqzgaz7gDwBLtve+rM5AYXA2YyYtxocXJ5KxIrfavkYohl0lPkuqWw4VEg4gSQE4G/PeA==',key_name='tempest-TestGettingAddress-1586923381',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-svtsxafy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:34:59Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2a866674-0c27-4cfc-89f2-dfe8e9768900,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.544 253665 DEBUG nova.network.os_vif_util [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.545 253665 DEBUG nova.network.os_vif_util [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:33:76,bridge_name='br-int',has_traffic_filtering=True,id=0334ba91-f8b0-462b-a47b-b421e8796a21,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0334ba91-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.546 253665 DEBUG nova.objects.instance [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2a866674-0c27-4cfc-89f2-dfe8e9768900 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.562 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  <uuid>2a866674-0c27-4cfc-89f2-dfe8e9768900</uuid>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  <name>instance-0000006c</name>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestGettingAddress-server-534004704</nova:name>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:35:06</nova:creationTime>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:35:07 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:        <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:        <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:        <nova:port uuid="0334ba91-f8b0-462b-a47b-b421e8796a21">
Nov 22 04:35:07 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feb6:3376" ipVersion="6"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <entry name="serial">2a866674-0c27-4cfc-89f2-dfe8e9768900</entry>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <entry name="uuid">2a866674-0c27-4cfc-89f2-dfe8e9768900</entry>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/2a866674-0c27-4cfc-89f2-dfe8e9768900_disk">
Nov 22 04:35:07 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:35:07 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/2a866674-0c27-4cfc-89f2-dfe8e9768900_disk.config">
Nov 22 04:35:07 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:35:07 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:b6:33:76"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <target dev="tap0334ba91-f8"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900/console.log" append="off"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:35:07 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:35:07 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:35:07 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:35:07 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.564 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Preparing to wait for external event network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.564 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.564 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.565 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.565 253665 DEBUG nova.virt.libvirt.vif [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:34:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-534004704',display_name='tempest-TestGettingAddress-server-534004704',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-534004704',id=108,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYbMWe4z302rooKb1Fl9KsWEsQq9eJv7uwrie/+E2IEF73PZ7Q/MP1db2I4qPqzgaz7gDwBLtve+rM5AYXA2YyYtxocXJ5KxIrfavkYohl0lPkuqWw4VEg4gSQE4G/PeA==',key_name='tempest-TestGettingAddress-1586923381',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-svtsxafy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:34:59Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2a866674-0c27-4cfc-89f2-dfe8e9768900,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.565 253665 DEBUG nova.network.os_vif_util [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.566 253665 DEBUG nova.network.os_vif_util [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:33:76,bridge_name='br-int',has_traffic_filtering=True,id=0334ba91-f8b0-462b-a47b-b421e8796a21,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0334ba91-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.567 253665 DEBUG os_vif [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:33:76,bridge_name='br-int',has_traffic_filtering=True,id=0334ba91-f8b0-462b-a47b-b421e8796a21,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0334ba91-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.567 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.568 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.568 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.571 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.571 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0334ba91-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.572 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0334ba91-f8, col_values=(('external_ids', {'iface-id': '0334ba91-f8b0-462b-a47b-b421e8796a21', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b6:33:76', 'vm-uuid': '2a866674-0c27-4cfc-89f2-dfe8e9768900'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.574 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:07 np0005532048 NetworkManager[48920]: <info>  [1763804107.5754] manager: (tap0334ba91-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.580 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.584 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.585 253665 INFO os_vif [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:33:76,bridge_name='br-int',has_traffic_filtering=True,id=0334ba91-f8b0-462b-a47b-b421e8796a21,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0334ba91-f8')#033[00m
Nov 22 04:35:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 213 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 281 op/s
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.916 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.917 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.917 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:b6:33:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.918 253665 INFO nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Using config drive#033[00m
Nov 22 04:35:07 np0005532048 nova_compute[253661]: 2025-11-22 09:35:07.983 253665 DEBUG nova.storage.rbd_utils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.197 253665 DEBUG nova.network.neutron [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updated VIF entry in instance network info cache for port 4d3de607-ad62-4c7d-ae3b-7cecb934aa9a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.197 253665 DEBUG nova.network.neutron [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updating instance_info_cache with network_info: [{"id": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "address": "fa:16:3e:cf:2e:8f", "network": {"id": "5ea999ce-3074-41ab-b630-d39c003b894a", "bridge": "br-int", "label": "tempest-network-smoke--460392593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d3de607-ad", "ovs_interfaceid": "4d3de607-ad62-4c7d-ae3b-7cecb934aa9a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.213 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c5540f5a-8dfa-4b11-8452-c6fe99db1d64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.213 253665 DEBUG nova.compute.manager [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-vif-unplugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.214 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.214 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.214 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.215 253665 DEBUG nova.compute.manager [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] No waiting events found dispatching network-vif-unplugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.215 253665 DEBUG nova.compute.manager [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-vif-unplugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.215 253665 DEBUG nova.compute.manager [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.215 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.216 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.216 253665 DEBUG oslo_concurrency.lockutils [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.216 253665 DEBUG nova.compute.manager [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] No waiting events found dispatching network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.216 253665 WARNING nova.compute.manager [req-c804617c-f3bf-46ab-8c24-2662b5dc8571 req-0b9ebcdf-4688-41cf-a558-136404326472 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received unexpected event network-vif-plugged-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.395 253665 DEBUG nova.network.neutron [req-99bee4c7-33f2-4bb2-9874-f7105b17539a req-d3226d3b-f2b0-4678-aa5f-e522c5fba3f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Updated VIF entry in instance network info cache for port 0334ba91-f8b0-462b-a47b-b421e8796a21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.396 253665 DEBUG nova.network.neutron [req-99bee4c7-33f2-4bb2-9874-f7105b17539a req-d3226d3b-f2b0-4678-aa5f-e522c5fba3f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Updating instance_info_cache with network_info: [{"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.405 253665 INFO nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Creating config drive at /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900/disk.config#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.415 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_wjotg0z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.476 253665 DEBUG oslo_concurrency.lockutils [req-99bee4c7-33f2-4bb2-9874-f7105b17539a req-d3226d3b-f2b0-4678-aa5f-e522c5fba3f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.587 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_wjotg0z" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.624 253665 DEBUG nova.storage.rbd_utils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:08 np0005532048 nova_compute[253661]: 2025-11-22 09:35:08.628 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900/disk.config 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:35:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 214 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.7 MiB/s wr, 239 op/s
Nov 22 04:35:10 np0005532048 nova_compute[253661]: 2025-11-22 09:35:10.877 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:11 np0005532048 nova_compute[253661]: 2025-11-22 09:35:11.359 253665 DEBUG oslo_concurrency.processutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900/disk.config 2a866674-0c27-4cfc-89f2-dfe8e9768900_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.731s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:11 np0005532048 nova_compute[253661]: 2025-11-22 09:35:11.360 253665 INFO nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Deleting local config drive /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900/disk.config because it was imported into RBD.#033[00m
Nov 22 04:35:11 np0005532048 kernel: tap0334ba91-f8: entered promiscuous mode
Nov 22 04:35:11 np0005532048 NetworkManager[48920]: <info>  [1763804111.4296] manager: (tap0334ba91-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/445)
Nov 22 04:35:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:11Z|01083|binding|INFO|Claiming lport 0334ba91-f8b0-462b-a47b-b421e8796a21 for this chassis.
Nov 22 04:35:11 np0005532048 nova_compute[253661]: 2025-11-22 09:35:11.431 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:11Z|01084|binding|INFO|0334ba91-f8b0-462b-a47b-b421e8796a21: Claiming fa:16:3e:b6:33:76 10.100.0.5 2001:db8::f816:3eff:feb6:3376
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.441 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:33:76 10.100.0.5 2001:db8::f816:3eff:feb6:3376'], port_security=['fa:16:3e:b6:33:76 10.100.0.5 2001:db8::f816:3eff:feb6:3376'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28 2001:db8::f816:3eff:feb6:3376/64', 'neutron:device_id': '2a866674-0c27-4cfc-89f2-dfe8e9768900', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7d5326a8-c171-4fdf-9f85-e6536ded5f96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b741a31-36e5-42a1-8d34-26158fe9deb6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0334ba91-f8b0-462b-a47b-b421e8796a21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.443 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0334ba91-f8b0-462b-a47b-b421e8796a21 in datapath d3e4e01e-5e3e-4572-b404-ee47aaec1186 bound to our chassis#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.444 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3e4e01e-5e3e-4572-b404-ee47aaec1186#033[00m
Nov 22 04:35:11 np0005532048 systemd-udevd[364255]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.511 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[84322510-427e-43fc-bed1-505f93cf6146]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.512 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd3e4e01e-51 in ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.513 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd3e4e01e-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.513 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7ceae220-44d7-416a-8608-b38db41a1713]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.514 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7af97014-de38-426a-a3e5-581c7e3e7f0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:11Z|01085|binding|INFO|Setting lport 0334ba91-f8b0-462b-a47b-b421e8796a21 ovn-installed in OVS
Nov 22 04:35:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:11Z|01086|binding|INFO|Setting lport 0334ba91-f8b0-462b-a47b-b421e8796a21 up in Southbound
Nov 22 04:35:11 np0005532048 nova_compute[253661]: 2025-11-22 09:35:11.520 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:11 np0005532048 NetworkManager[48920]: <info>  [1763804111.5310] device (tap0334ba91-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:35:11 np0005532048 NetworkManager[48920]: <info>  [1763804111.5322] device (tap0334ba91-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.532 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[644a19f5-a28a-4489-81dd-7a70045f4de9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:11 np0005532048 systemd-machined[215941]: New machine qemu-135-instance-0000006c.
Nov 22 04:35:11 np0005532048 systemd[1]: Started Virtual Machine qemu-135-instance-0000006c.
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.561 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b68f0396-dfda-4350-a38c-53789ea11f93]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.594 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[64632633-f2ca-4c4b-bf6a-6b8867e96a9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:11 np0005532048 systemd-udevd[364260]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:35:11 np0005532048 NetworkManager[48920]: <info>  [1763804111.6018] manager: (tapd3e4e01e-50): new Veth device (/org/freedesktop/NetworkManager/Devices/446)
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.600 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9f4078-92e2-47f6-8c41-df8259c2d40b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.637 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[477d1edd-1461-4a45-8875-7dd61fe07367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.642 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fe28184a-e362-4648-9f90-cc1f92b6ee46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:11 np0005532048 NetworkManager[48920]: <info>  [1763804111.6709] device (tapd3e4e01e-50): carrier: link connected
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.680 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2199ce95-7a4b-43e5-8408-8435304f39bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.705 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[966aff60-6962-48ac-bffa-124951f4c476]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3e4e01e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:a9:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696240, 'reachable_time': 35247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364289, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.728 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4cdd3154-e080-496b-a122-573a97753d56]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe75:a9a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696240, 'tstamp': 696240}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364290, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.756 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3b9fc2-6dc2-43d3-8d9d-c22c63bb7a6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3e4e01e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:a9:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696240, 'reachable_time': 35247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364291, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.801 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[941bbee7-0f4b-4792-8056-2085b4a936e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:11 np0005532048 nova_compute[253661]: 2025-11-22 09:35:11.848 253665 DEBUG nova.compute.manager [req-4f10eae4-71c8-48be-95ba-33b62b9ae5ca req-04d72372-efcf-4556-841b-4f0eacc82c93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:11 np0005532048 nova_compute[253661]: 2025-11-22 09:35:11.848 253665 DEBUG oslo_concurrency.lockutils [req-4f10eae4-71c8-48be-95ba-33b62b9ae5ca req-04d72372-efcf-4556-841b-4f0eacc82c93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:11 np0005532048 nova_compute[253661]: 2025-11-22 09:35:11.849 253665 DEBUG oslo_concurrency.lockutils [req-4f10eae4-71c8-48be-95ba-33b62b9ae5ca req-04d72372-efcf-4556-841b-4f0eacc82c93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:11 np0005532048 nova_compute[253661]: 2025-11-22 09:35:11.849 253665 DEBUG oslo_concurrency.lockutils [req-4f10eae4-71c8-48be-95ba-33b62b9ae5ca req-04d72372-efcf-4556-841b-4f0eacc82c93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:11 np0005532048 nova_compute[253661]: 2025-11-22 09:35:11.849 253665 DEBUG nova.compute.manager [req-4f10eae4-71c8-48be-95ba-33b62b9ae5ca req-04d72372-efcf-4556-841b-4f0eacc82c93 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Processing event network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:35:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 206 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 165 op/s
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.889 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c93d2d88-3520-4c4c-b783-46ccd350a144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.890 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3e4e01e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.891 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.891 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3e4e01e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:11 np0005532048 nova_compute[253661]: 2025-11-22 09:35:11.893 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:11 np0005532048 NetworkManager[48920]: <info>  [1763804111.8946] manager: (tapd3e4e01e-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/447)
Nov 22 04:35:11 np0005532048 kernel: tapd3e4e01e-50: entered promiscuous mode
Nov 22 04:35:11 np0005532048 nova_compute[253661]: 2025-11-22 09:35:11.897 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.898 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3e4e01e-50, col_values=(('external_ids', {'iface-id': 'ff0f834b-9623-4226-98e1-741634e7eb05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:11 np0005532048 nova_compute[253661]: 2025-11-22 09:35:11.899 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:11 np0005532048 nova_compute[253661]: 2025-11-22 09:35:11.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.902 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d3e4e01e-5e3e-4572-b404-ee47aaec1186.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d3e4e01e-5e3e-4572-b404-ee47aaec1186.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.903 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c62c2db5-3a06-405c-b2e5-fcf64a825264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.904 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-d3e4e01e-5e3e-4572-b404-ee47aaec1186
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/d3e4e01e-5e3e-4572-b404-ee47aaec1186.pid.haproxy
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID d3e4e01e-5e3e-4572-b404-ee47aaec1186
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:35:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:11.906 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'env', 'PROCESS_TAG=haproxy-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d3e4e01e-5e3e-4572-b404-ee47aaec1186.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:35:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:11Z|01087|binding|INFO|Releasing lport ff0f834b-9623-4226-98e1-741634e7eb05 from this chassis (sb_readonly=0)
Nov 22 04:35:11 np0005532048 nova_compute[253661]: 2025-11-22 09:35:11.981 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3110288167' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3110288167' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.427 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.429 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804112.4269822, 2a866674-0c27-4cfc-89f2-dfe8e9768900 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.429 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] VM Started (Lifecycle Event)#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.434 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.440 253665 INFO nova.virt.libvirt.driver [-] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Instance spawned successfully.#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.440 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.455 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.462 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.468 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.469 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.469 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.469 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.470 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.470 253665 DEBUG nova.virt.libvirt.driver [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:12 np0005532048 podman[364364]: 2025-11-22 09:35:12.378177813 +0000 UTC m=+0.057068620 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.481 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.481 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804112.4285688, 2a866674-0c27-4cfc-89f2-dfe8e9768900 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.481 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.522 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.526 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804112.4323664, 2a866674-0c27-4cfc-89f2-dfe8e9768900 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.526 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.545 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.549 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.553 253665 INFO nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Took 12.29 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.554 253665 DEBUG nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.574 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.576 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.620 253665 INFO nova.compute.manager [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Took 13.62 seconds to build instance.#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.623 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804097.6225607, da98da35-5fb2-47cd-9d6b-a3bb2254bec9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.623 253665 INFO nova.compute.manager [-] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.641 253665 DEBUG nova.compute.manager [None req-fd156203-268a-470d-b710-e96f51238074 - - - - - -] [instance: da98da35-5fb2-47cd-9d6b-a3bb2254bec9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.642 253665 DEBUG oslo_concurrency.lockutils [None req-2c8cbbe0-eea2-42fd-8218-76bbbdc94ef7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:12 np0005532048 podman[364364]: 2025-11-22 09:35:12.748551844 +0000 UTC m=+0.427442661 container create a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.772575) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804112772654, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1330, "num_deletes": 251, "total_data_size": 1917534, "memory_usage": 1944960, "flush_reason": "Manual Compaction"}
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804112795130, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 1886781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43971, "largest_seqno": 45300, "table_properties": {"data_size": 1880601, "index_size": 3383, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13805, "raw_average_key_size": 20, "raw_value_size": 1867953, "raw_average_value_size": 2726, "num_data_blocks": 151, "num_entries": 685, "num_filter_entries": 685, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763803988, "oldest_key_time": 1763803988, "file_creation_time": 1763804112, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 22624 microseconds, and 10468 cpu microseconds.
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.795197) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 1886781 bytes OK
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.795230) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.798865) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.798894) EVENT_LOG_v1 {"time_micros": 1763804112798887, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.798917) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1911501, prev total WAL file size 1911501, number of live WAL files 2.
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.799933) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(1842KB)], [101(8544KB)]
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804112800029, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10635854, "oldest_snapshot_seqno": -1}
Nov 22 04:35:12 np0005532048 systemd[1]: Started libpod-conmon-a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a.scope.
Nov 22 04:35:12 np0005532048 nova_compute[253661]: 2025-11-22 09:35:12.860 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:12 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:35:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c895ab309e207727e077cfd1951c34149c6808ab71f4a6a14342fc5743fd6a36/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6659 keys, 8978774 bytes, temperature: kUnknown
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804112890177, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8978774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8933739, "index_size": 27259, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 173097, "raw_average_key_size": 25, "raw_value_size": 8813855, "raw_average_value_size": 1323, "num_data_blocks": 1065, "num_entries": 6659, "num_filter_entries": 6659, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763804112, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.890566) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8978774 bytes
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.894535) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.8 rd, 99.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 8.3 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(10.4) write-amplify(4.8) OK, records in: 7173, records dropped: 514 output_compression: NoCompression
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.894585) EVENT_LOG_v1 {"time_micros": 1763804112894564, "job": 60, "event": "compaction_finished", "compaction_time_micros": 90288, "compaction_time_cpu_micros": 35399, "output_level": 6, "num_output_files": 1, "total_output_size": 8978774, "num_input_records": 7173, "num_output_records": 6659, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804112895080, "job": 60, "event": "table_file_deletion", "file_number": 103}
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804112896331, "job": 60, "event": "table_file_deletion", "file_number": 101}
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.799690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.896426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.896434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.896436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.896437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:35:12 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:35:12.896439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:35:12 np0005532048 podman[364364]: 2025-11-22 09:35:12.901211551 +0000 UTC m=+0.580102348 container init a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:35:12 np0005532048 podman[364364]: 2025-11-22 09:35:12.907721452 +0000 UTC m=+0.586612259 container start a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:35:12 np0005532048 neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186[364381]: [NOTICE]   (364385) : New worker (364387) forked
Nov 22 04:35:12 np0005532048 neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186[364381]: [NOTICE]   (364385) : Loading success.
Nov 22 04:35:13 np0005532048 nova_compute[253661]: 2025-11-22 09:35:13.130 253665 INFO nova.virt.libvirt.driver [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Deleting instance files /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64_del#033[00m
Nov 22 04:35:13 np0005532048 nova_compute[253661]: 2025-11-22 09:35:13.132 253665 INFO nova.virt.libvirt.driver [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Deletion of /var/lib/nova/instances/c5540f5a-8dfa-4b11-8452-c6fe99db1d64_del complete#033[00m
Nov 22 04:35:13 np0005532048 nova_compute[253661]: 2025-11-22 09:35:13.192 253665 INFO nova.compute.manager [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Took 7.80 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:35:13 np0005532048 nova_compute[253661]: 2025-11-22 09:35:13.193 253665 DEBUG oslo.service.loopingcall [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:35:13 np0005532048 nova_compute[253661]: 2025-11-22 09:35:13.193 253665 DEBUG nova.compute.manager [-] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:35:13 np0005532048 nova_compute[253661]: 2025-11-22 09:35:13.193 253665 DEBUG nova.network.neutron [-] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:35:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:35:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 160 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 179 op/s
Nov 22 04:35:13 np0005532048 nova_compute[253661]: 2025-11-22 09:35:13.960 253665 DEBUG nova.compute.manager [req-cbeefcc5-77d6-4699-9c21-8b28b26198e4 req-373f844a-80f2-4b3e-8f3f-8d122c047bff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:13 np0005532048 nova_compute[253661]: 2025-11-22 09:35:13.962 253665 DEBUG oslo_concurrency.lockutils [req-cbeefcc5-77d6-4699-9c21-8b28b26198e4 req-373f844a-80f2-4b3e-8f3f-8d122c047bff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:13 np0005532048 nova_compute[253661]: 2025-11-22 09:35:13.962 253665 DEBUG oslo_concurrency.lockutils [req-cbeefcc5-77d6-4699-9c21-8b28b26198e4 req-373f844a-80f2-4b3e-8f3f-8d122c047bff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:13 np0005532048 nova_compute[253661]: 2025-11-22 09:35:13.962 253665 DEBUG oslo_concurrency.lockutils [req-cbeefcc5-77d6-4699-9c21-8b28b26198e4 req-373f844a-80f2-4b3e-8f3f-8d122c047bff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:13 np0005532048 nova_compute[253661]: 2025-11-22 09:35:13.962 253665 DEBUG nova.compute.manager [req-cbeefcc5-77d6-4699-9c21-8b28b26198e4 req-373f844a-80f2-4b3e-8f3f-8d122c047bff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] No waiting events found dispatching network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:35:13 np0005532048 nova_compute[253661]: 2025-11-22 09:35:13.963 253665 WARNING nova.compute.manager [req-cbeefcc5-77d6-4699-9c21-8b28b26198e4 req-373f844a-80f2-4b3e-8f3f-8d122c047bff 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received unexpected event network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:35:13 np0005532048 nova_compute[253661]: 2025-11-22 09:35:13.989 253665 DEBUG nova.network.neutron [-] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:14 np0005532048 nova_compute[253661]: 2025-11-22 09:35:14.009 253665 INFO nova.compute.manager [-] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Took 0.82 seconds to deallocate network for instance.#033[00m
Nov 22 04:35:14 np0005532048 nova_compute[253661]: 2025-11-22 09:35:14.062 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:14 np0005532048 nova_compute[253661]: 2025-11-22 09:35:14.063 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:14 np0005532048 nova_compute[253661]: 2025-11-22 09:35:14.082 253665 DEBUG nova.compute.manager [req-ae10ed67-13bf-4b5c-9d65-0033deccaf76 req-37da3148-2ba8-4706-b99e-59bd4ba5289b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Received event network-vif-deleted-4d3de607-ad62-4c7d-ae3b-7cecb934aa9a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:14 np0005532048 nova_compute[253661]: 2025-11-22 09:35:14.156 253665 DEBUG oslo_concurrency.processutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:14 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:14Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:ea:ba 10.100.0.4
Nov 22 04:35:14 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:14Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:ea:ba 10.100.0.4
Nov 22 04:35:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:14.484 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:35:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:14.486 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:35:14 np0005532048 nova_compute[253661]: 2025-11-22 09:35:14.486 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:35:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4117078659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:35:14 np0005532048 nova_compute[253661]: 2025-11-22 09:35:14.647 253665 DEBUG oslo_concurrency.processutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:14 np0005532048 nova_compute[253661]: 2025-11-22 09:35:14.654 253665 DEBUG nova.compute.provider_tree [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:35:14 np0005532048 nova_compute[253661]: 2025-11-22 09:35:14.672 253665 DEBUG nova.scheduler.client.report [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:35:14 np0005532048 nova_compute[253661]: 2025-11-22 09:35:14.707 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:14 np0005532048 nova_compute[253661]: 2025-11-22 09:35:14.752 253665 INFO nova.scheduler.client.report [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance c5540f5a-8dfa-4b11-8452-c6fe99db1d64#033[00m
Nov 22 04:35:14 np0005532048 nova_compute[253661]: 2025-11-22 09:35:14.845 253665 DEBUG oslo_concurrency.lockutils [None req-f47d1743-d57b-4f84-92d2-1c0516ea34df 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c5540f5a-8dfa-4b11-8452-c6fe99db1d64" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.454s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:15 np0005532048 nova_compute[253661]: 2025-11-22 09:35:15.788 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804100.7845926, 361d3f1d-84a4-4159-a69a-8a0254446ab6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:15 np0005532048 nova_compute[253661]: 2025-11-22 09:35:15.788 253665 INFO nova.compute.manager [-] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:35:15 np0005532048 nova_compute[253661]: 2025-11-22 09:35:15.820 253665 DEBUG nova.compute.manager [None req-93f6abdd-e98b-4036-8c8d-410ebf43a029 - - - - - -] [instance: 361d3f1d-84a4-4159-a69a-8a0254446ab6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 160 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Nov 22 04:35:15 np0005532048 nova_compute[253661]: 2025-11-22 09:35:15.881 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:16 np0005532048 nova_compute[253661]: 2025-11-22 09:35:16.210 253665 DEBUG nova.compute.manager [req-6fbf7bed-60db-41e9-9c21-8f3b46d2dfad req-ddb1e742-d72e-4477-bd44-dee4b45563c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-changed-0334ba91-f8b0-462b-a47b-b421e8796a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:16 np0005532048 nova_compute[253661]: 2025-11-22 09:35:16.210 253665 DEBUG nova.compute.manager [req-6fbf7bed-60db-41e9-9c21-8f3b46d2dfad req-ddb1e742-d72e-4477-bd44-dee4b45563c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Refreshing instance network info cache due to event network-changed-0334ba91-f8b0-462b-a47b-b421e8796a21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:35:16 np0005532048 nova_compute[253661]: 2025-11-22 09:35:16.210 253665 DEBUG oslo_concurrency.lockutils [req-6fbf7bed-60db-41e9-9c21-8f3b46d2dfad req-ddb1e742-d72e-4477-bd44-dee4b45563c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:35:16 np0005532048 nova_compute[253661]: 2025-11-22 09:35:16.211 253665 DEBUG oslo_concurrency.lockutils [req-6fbf7bed-60db-41e9-9c21-8f3b46d2dfad req-ddb1e742-d72e-4477-bd44-dee4b45563c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:35:16 np0005532048 nova_compute[253661]: 2025-11-22 09:35:16.211 253665 DEBUG nova.network.neutron [req-6fbf7bed-60db-41e9-9c21-8f3b46d2dfad req-ddb1e742-d72e-4477-bd44-dee4b45563c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Refreshing network info cache for port 0334ba91-f8b0-462b-a47b-b421e8796a21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:35:16 np0005532048 nova_compute[253661]: 2025-11-22 09:35:16.514 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:17 np0005532048 nova_compute[253661]: 2025-11-22 09:35:17.577 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 164 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Nov 22 04:35:17 np0005532048 nova_compute[253661]: 2025-11-22 09:35:17.893 253665 DEBUG nova.network.neutron [req-6fbf7bed-60db-41e9-9c21-8f3b46d2dfad req-ddb1e742-d72e-4477-bd44-dee4b45563c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Updated VIF entry in instance network info cache for port 0334ba91-f8b0-462b-a47b-b421e8796a21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:35:17 np0005532048 nova_compute[253661]: 2025-11-22 09:35:17.894 253665 DEBUG nova.network.neutron [req-6fbf7bed-60db-41e9-9c21-8f3b46d2dfad req-ddb1e742-d72e-4477-bd44-dee4b45563c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Updating instance_info_cache with network_info: [{"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:17 np0005532048 nova_compute[253661]: 2025-11-22 09:35:17.913 253665 DEBUG oslo_concurrency.lockutils [req-6fbf7bed-60db-41e9-9c21-8f3b46d2dfad req-ddb1e742-d72e-4477-bd44-dee4b45563c4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:35:18 np0005532048 podman[364419]: 2025-11-22 09:35:18.381207069 +0000 UTC m=+0.062470145 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:35:18 np0005532048 podman[364418]: 2025-11-22 09:35:18.414474216 +0000 UTC m=+0.102738126 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:35:18 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:18Z|01088|binding|INFO|Releasing lport 4aaa4802-1d2c-466f-9a8f-02dc0ee6bbe9 from this chassis (sb_readonly=0)
Nov 22 04:35:18 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:18Z|01089|binding|INFO|Releasing lport ff0f834b-9623-4226-98e1-741634e7eb05 from this chassis (sb_readonly=0)
Nov 22 04:35:18 np0005532048 nova_compute[253661]: 2025-11-22 09:35:18.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:35:19 np0005532048 nova_compute[253661]: 2025-11-22 09:35:19.452 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:19 np0005532048 nova_compute[253661]: 2025-11-22 09:35:19.453 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:19 np0005532048 nova_compute[253661]: 2025-11-22 09:35:19.472 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:35:19 np0005532048 nova_compute[253661]: 2025-11-22 09:35:19.550 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:19 np0005532048 nova_compute[253661]: 2025-11-22 09:35:19.550 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:19 np0005532048 nova_compute[253661]: 2025-11-22 09:35:19.558 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:35:19 np0005532048 nova_compute[253661]: 2025-11-22 09:35:19.559 253665 INFO nova.compute.claims [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:35:19 np0005532048 nova_compute[253661]: 2025-11-22 09:35:19.739 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 167 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Nov 22 04:35:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:35:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2584020299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.228 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.235 253665 DEBUG nova.compute.provider_tree [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.251 253665 DEBUG nova.scheduler.client.report [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.272 253665 INFO nova.compute.manager [None req-d390af23-f0e9-472c-839e-a786035cdb81 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Get console output#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.281 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.282 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.281 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.344 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.345 253665 DEBUG nova.network.neutron [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.368 253665 INFO nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.394 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.487 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:35:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:20.488 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.489 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.489 253665 INFO nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Creating image(s)#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.509 253665 DEBUG nova.storage.rbd_utils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.534 253665 DEBUG nova.storage.rbd_utils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.561 253665 DEBUG nova.storage.rbd_utils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.567 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.623 253665 DEBUG nova.policy [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31c7a4aa8fa340d2881ddc3ed426b6db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a31947dfacfc450ba028c42968f103b2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.631 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804105.6253514, c5540f5a-8dfa-4b11-8452-c6fe99db1d64 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.632 253665 INFO nova.compute.manager [-] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.650 253665 DEBUG oslo_concurrency.lockutils [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.651 253665 DEBUG oslo_concurrency.lockutils [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.651 253665 DEBUG nova.compute.manager [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.652 253665 DEBUG nova.compute.manager [None req-d4086b6c-08a3-475c-8c1c-ff5e85914d49 - - - - - -] [instance: c5540f5a-8dfa-4b11-8452-c6fe99db1d64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.657 253665 DEBUG nova.compute.manager [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.658 253665 DEBUG nova.objects.instance [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'flavor' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.663 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.664 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.664 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.665 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.689 253665 DEBUG nova.storage.rbd_utils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.693 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.746 253665 DEBUG nova.virt.libvirt.driver [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:35:20 np0005532048 nova_compute[253661]: 2025-11-22 09:35:20.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:21 np0005532048 nova_compute[253661]: 2025-11-22 09:35:21.240 253665 DEBUG nova.network.neutron [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Successfully created port: c027d879-91b3-497d-9f51-8476006ea65c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:35:21 np0005532048 nova_compute[253661]: 2025-11-22 09:35:21.543 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 178 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 156 op/s
Nov 22 04:35:22 np0005532048 nova_compute[253661]: 2025-11-22 09:35:22.629 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:35:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:35:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:35:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:35:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:35:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:35:22 np0005532048 nova_compute[253661]: 2025-11-22 09:35:22.825 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:22 np0005532048 nova_compute[253661]: 2025-11-22 09:35:22.894 253665 DEBUG nova.storage.rbd_utils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] resizing rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.049 253665 DEBUG nova.network.neutron [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Successfully updated port: c027d879-91b3-497d-9f51-8476006ea65c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.057 253665 DEBUG nova.compute.manager [req-b02e174f-2fa9-4b64-9790-8f249718651d req-9a6b3ae1-9469-48f6-8ad3-5bbac2a1b016 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-changed-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.058 253665 DEBUG nova.compute.manager [req-b02e174f-2fa9-4b64-9790-8f249718651d req-9a6b3ae1-9469-48f6-8ad3-5bbac2a1b016 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Refreshing instance network info cache due to event network-changed-c027d879-91b3-497d-9f51-8476006ea65c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.058 253665 DEBUG oslo_concurrency.lockutils [req-b02e174f-2fa9-4b64-9790-8f249718651d req-9a6b3ae1-9469-48f6-8ad3-5bbac2a1b016 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.059 253665 DEBUG oslo_concurrency.lockutils [req-b02e174f-2fa9-4b64-9790-8f249718651d req-9a6b3ae1-9469-48f6-8ad3-5bbac2a1b016 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.059 253665 DEBUG nova.network.neutron [req-b02e174f-2fa9-4b64-9790-8f249718651d req-9a6b3ae1-9469-48f6-8ad3-5bbac2a1b016 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Refreshing network info cache for port c027d879-91b3-497d-9f51-8476006ea65c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.076 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.218 253665 DEBUG nova.network.neutron [req-b02e174f-2fa9-4b64-9790-8f249718651d req-9a6b3ae1-9469-48f6-8ad3-5bbac2a1b016 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.591 253665 DEBUG nova.objects.instance [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'migration_context' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.611 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.613 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Ensure instance console log exists: /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.613 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.614 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.614 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.644 253665 DEBUG nova.network.neutron [req-b02e174f-2fa9-4b64-9790-8f249718651d req-9a6b3ae1-9469-48f6-8ad3-5bbac2a1b016 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.669 253665 DEBUG oslo_concurrency.lockutils [req-b02e174f-2fa9-4b64-9790-8f249718651d req-9a6b3ae1-9469-48f6-8ad3-5bbac2a1b016 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.671 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.671 253665 DEBUG nova.network.neutron [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:35:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:35:23 np0005532048 nova_compute[253661]: 2025-11-22 09:35:23.811 253665 DEBUG nova.network.neutron [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:35:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 202 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 183 op/s
Nov 22 04:35:24 np0005532048 kernel: tap8da41f38-38 (unregistering): left promiscuous mode
Nov 22 04:35:24 np0005532048 NetworkManager[48920]: <info>  [1763804124.0739] device (tap8da41f38-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:35:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:24Z|01090|binding|INFO|Releasing lport 8da41f38-3812-4494-9cab-c4854772a569 from this chassis (sb_readonly=0)
Nov 22 04:35:24 np0005532048 nova_compute[253661]: 2025-11-22 09:35:24.081 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:24Z|01091|binding|INFO|Setting lport 8da41f38-3812-4494-9cab-c4854772a569 down in Southbound
Nov 22 04:35:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:24Z|01092|binding|INFO|Removing iface tap8da41f38-38 ovn-installed in OVS
Nov 22 04:35:24 np0005532048 nova_compute[253661]: 2025-11-22 09:35:24.089 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.090 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:ea:ba 10.100.0.4'], port_security=['fa:16:3e:02:ea:ba 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f8530ae-f429-4807-81ca-84d8f964a38c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20570e02-4f3c-425d-9564-924b275d70dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e0291e4d-91dd-4ee6-9074-0372622e253d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f04ee3-5979-45f2-bf12-c1c6b0bf9924, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8da41f38-3812-4494-9cab-c4854772a569) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:35:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.092 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8da41f38-3812-4494-9cab-c4854772a569 in datapath 20570e02-4f3c-425d-9564-924b275d70dc unbound from our chassis#033[00m
Nov 22 04:35:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.094 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 20570e02-4f3c-425d-9564-924b275d70dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:35:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.095 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c3c0d617-a059-428f-a671-dfa8b62b79d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.096 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc namespace which is not needed anymore#033[00m
Nov 22 04:35:24 np0005532048 nova_compute[253661]: 2025-11-22 09:35:24.114 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:24 np0005532048 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Nov 22 04:35:24 np0005532048 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006b.scope: Consumed 15.508s CPU time.
Nov 22 04:35:24 np0005532048 systemd-machined[215941]: Machine qemu-133-instance-0000006b terminated.
Nov 22 04:35:24 np0005532048 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[363266]: [NOTICE]   (363285) : haproxy version is 2.8.14-c23fe91
Nov 22 04:35:24 np0005532048 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[363266]: [NOTICE]   (363285) : path to executable is /usr/sbin/haproxy
Nov 22 04:35:24 np0005532048 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[363266]: [WARNING]  (363285) : Exiting Master process...
Nov 22 04:35:24 np0005532048 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[363266]: [WARNING]  (363285) : Exiting Master process...
Nov 22 04:35:24 np0005532048 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[363266]: [ALERT]    (363285) : Current worker (363305) exited with code 143 (Terminated)
Nov 22 04:35:24 np0005532048 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[363266]: [WARNING]  (363285) : All workers exited. Exiting... (0)
Nov 22 04:35:24 np0005532048 systemd[1]: libpod-83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2.scope: Deactivated successfully.
Nov 22 04:35:24 np0005532048 podman[364765]: 2025-11-22 09:35:24.28307829 +0000 UTC m=+0.072033692 container died 83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:35:24 np0005532048 NetworkManager[48920]: <info>  [1763804124.3103] manager: (tap8da41f38-38): new Tun device (/org/freedesktop/NetworkManager/Devices/448)
Nov 22 04:35:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2-userdata-shm.mount: Deactivated successfully.
Nov 22 04:35:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay-054c9b7e428385e12a38fc4d69601f1307ee63ce36fecad82d262095c96a25dd-merged.mount: Deactivated successfully.
Nov 22 04:35:24 np0005532048 podman[364765]: 2025-11-22 09:35:24.349354038 +0000 UTC m=+0.138309430 container cleanup 83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:35:24 np0005532048 systemd[1]: libpod-conmon-83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2.scope: Deactivated successfully.
Nov 22 04:35:24 np0005532048 podman[364813]: 2025-11-22 09:35:24.434453225 +0000 UTC m=+0.054650430 container remove 83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 04:35:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.441 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[15181837-cadd-476b-a531-b44be9dde25d]: (4, ('Sat Nov 22 09:35:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc (83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2)\n83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2\nSat Nov 22 09:35:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc (83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2)\n83a8576e7f61a983ba3188e3553427d48ec8efb32c08a7fa81995bafe9cbb7f2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.444 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[23f51168-d4c1-48db-975a-81a590ee88e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.446 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20570e02-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:24 np0005532048 kernel: tap20570e02-40: left promiscuous mode
Nov 22 04:35:24 np0005532048 nova_compute[253661]: 2025-11-22 09:35:24.448 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:24 np0005532048 nova_compute[253661]: 2025-11-22 09:35:24.469 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.473 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ef5cfe94-dde6-42dc-9b41-87c8c173b281]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.491 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d1db4219-f8b1-42aa-be46-40b75c04a40b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.494 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bcccaf52-807d-4752-8a76-35afbb7fdd62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.516 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[85850246-2561-4878-8cba-98df1f1fe106]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694389, 'reachable_time': 23472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364842, 'error': None, 'target': 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:24 np0005532048 systemd[1]: run-netns-ovnmeta\x2d20570e02\x2d4f3c\x2d425d\x2d9564\x2d924b275d70dc.mount: Deactivated successfully.
Nov 22 04:35:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.520 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:35:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:24.521 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c4d93b-8d5b-41e6-b79b-8f377776ea9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:24 np0005532048 podman[364829]: 2025-11-22 09:35:24.598108295 +0000 UTC m=+0.107227708 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 04:35:24 np0005532048 nova_compute[253661]: 2025-11-22 09:35:24.773 253665 INFO nova.virt.libvirt.driver [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance shutdown successfully after 4 seconds.#033[00m
Nov 22 04:35:24 np0005532048 nova_compute[253661]: 2025-11-22 09:35:24.781 253665 INFO nova.virt.libvirt.driver [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance destroyed successfully.#033[00m
Nov 22 04:35:24 np0005532048 nova_compute[253661]: 2025-11-22 09:35:24.781 253665 DEBUG nova.objects.instance [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:24 np0005532048 nova_compute[253661]: 2025-11-22 09:35:24.797 253665 DEBUG nova.compute.manager [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:24 np0005532048 nova_compute[253661]: 2025-11-22 09:35:24.865 253665 DEBUG oslo_concurrency.lockutils [None req-0c92b1a3-b7fd-4bc3-ab7c-1a17c12fce15 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 4.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:25 np0005532048 podman[365013]: 2025-11-22 09:35:25.324378556 +0000 UTC m=+0.047458731 container create 5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 04:35:25 np0005532048 systemd[1]: Started libpod-conmon-5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842.scope.
Nov 22 04:35:25 np0005532048 podman[365013]: 2025-11-22 09:35:25.305140918 +0000 UTC m=+0.028221123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:35:25 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:35:25 np0005532048 podman[365013]: 2025-11-22 09:35:25.424079006 +0000 UTC m=+0.147159201 container init 5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 04:35:25 np0005532048 podman[365013]: 2025-11-22 09:35:25.433711525 +0000 UTC m=+0.156791700 container start 5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 04:35:25 np0005532048 podman[365013]: 2025-11-22 09:35:25.438548945 +0000 UTC m=+0.161629120 container attach 5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 04:35:25 np0005532048 eloquent_margulis[365030]: 167 167
Nov 22 04:35:25 np0005532048 systemd[1]: libpod-5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842.scope: Deactivated successfully.
Nov 22 04:35:25 np0005532048 conmon[365030]: conmon 5c960927c72689ed587c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842.scope/container/memory.events
Nov 22 04:35:25 np0005532048 podman[365035]: 2025-11-22 09:35:25.4857918 +0000 UTC m=+0.023797082 container died 5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:35:25 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ae89b6c82d29cc8455cba0297577334154f152d0797137de100c71f6ecf2dec5-merged.mount: Deactivated successfully.
Nov 22 04:35:25 np0005532048 podman[365035]: 2025-11-22 09:35:25.527857856 +0000 UTC m=+0.065863138 container remove 5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:35:25 np0005532048 systemd[1]: libpod-conmon-5c960927c72689ed587ccbd1f5fc8c5964cf4fd2b3f89fc984c061a8db255842.scope: Deactivated successfully.
Nov 22 04:35:25 np0005532048 podman[365054]: 2025-11-22 09:35:25.714223281 +0000 UTC m=+0.042719974 container create c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lalande, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:35:25 np0005532048 systemd[1]: Started libpod-conmon-c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf.scope.
Nov 22 04:35:25 np0005532048 podman[365054]: 2025-11-22 09:35:25.696730536 +0000 UTC m=+0.025227249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:35:25 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:35:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29cfde0b99dd39c682965e73841a804696fa71f412620e25e5bf8fda9417af87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29cfde0b99dd39c682965e73841a804696fa71f412620e25e5bf8fda9417af87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29cfde0b99dd39c682965e73841a804696fa71f412620e25e5bf8fda9417af87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29cfde0b99dd39c682965e73841a804696fa71f412620e25e5bf8fda9417af87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:25 np0005532048 podman[365054]: 2025-11-22 09:35:25.811864769 +0000 UTC m=+0.140361492 container init c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lalande, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:35:25 np0005532048 podman[365054]: 2025-11-22 09:35:25.818790982 +0000 UTC m=+0.147287675 container start c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 04:35:25 np0005532048 podman[365054]: 2025-11-22 09:35:25.82195324 +0000 UTC m=+0.150449933 container attach c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.860 253665 DEBUG nova.network.neutron [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 202 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 136 op/s
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.885 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.897 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.897 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance network_info: |[{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.901 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Start _get_guest_xml network_info=[{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.907 253665 WARNING nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.916 253665 DEBUG nova.virt.libvirt.host [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.918 253665 DEBUG nova.virt.libvirt.host [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.922 253665 DEBUG nova.virt.libvirt.host [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.922 253665 DEBUG nova.virt.libvirt.host [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.923 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.923 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.924 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.924 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.924 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.924 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.925 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.925 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.925 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.926 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.926 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.926 253665 DEBUG nova.virt.hardware [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:35:25 np0005532048 nova_compute[253661]: 2025-11-22 09:35:25.929 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:26 np0005532048 nova_compute[253661]: 2025-11-22 09:35:26.016 253665 DEBUG nova.compute.manager [req-e2c9fcb3-e5f8-4fed-b0ca-a94e530fb076 req-58b5cfe8-7fa9-4196-bacc-1af98a92191b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-unplugged-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:26 np0005532048 nova_compute[253661]: 2025-11-22 09:35:26.021 253665 DEBUG oslo_concurrency.lockutils [req-e2c9fcb3-e5f8-4fed-b0ca-a94e530fb076 req-58b5cfe8-7fa9-4196-bacc-1af98a92191b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:26 np0005532048 nova_compute[253661]: 2025-11-22 09:35:26.022 253665 DEBUG oslo_concurrency.lockutils [req-e2c9fcb3-e5f8-4fed-b0ca-a94e530fb076 req-58b5cfe8-7fa9-4196-bacc-1af98a92191b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:26 np0005532048 nova_compute[253661]: 2025-11-22 09:35:26.022 253665 DEBUG oslo_concurrency.lockutils [req-e2c9fcb3-e5f8-4fed-b0ca-a94e530fb076 req-58b5cfe8-7fa9-4196-bacc-1af98a92191b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:26 np0005532048 nova_compute[253661]: 2025-11-22 09:35:26.022 253665 DEBUG nova.compute.manager [req-e2c9fcb3-e5f8-4fed-b0ca-a94e530fb076 req-58b5cfe8-7fa9-4196-bacc-1af98a92191b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] No waiting events found dispatching network-vif-unplugged-8da41f38-3812-4494-9cab-c4854772a569 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:35:26 np0005532048 nova_compute[253661]: 2025-11-22 09:35:26.023 253665 WARNING nova.compute.manager [req-e2c9fcb3-e5f8-4fed-b0ca-a94e530fb076 req-58b5cfe8-7fa9-4196-bacc-1af98a92191b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received unexpected event network-vif-unplugged-8da41f38-3812-4494-9cab-c4854772a569 for instance with vm_state stopped and task_state None.#033[00m
Nov 22 04:35:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:35:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1804604923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:35:26 np0005532048 nova_compute[253661]: 2025-11-22 09:35:26.430 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:26 np0005532048 nova_compute[253661]: 2025-11-22 09:35:26.479 253665 DEBUG nova.storage.rbd_utils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:26 np0005532048 nova_compute[253661]: 2025-11-22 09:35:26.486 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:26 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:26Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b6:33:76 10.100.0.5
Nov 22 04:35:26 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:26Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b6:33:76 10.100.0.5
Nov 22 04:35:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:35:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/208672219' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:35:26 np0005532048 nova_compute[253661]: 2025-11-22 09:35:26.990 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:26 np0005532048 nova_compute[253661]: 2025-11-22 09:35:26.994 253665 DEBUG nova.virt.libvirt.vif [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-627235813',display_name='tempest-ServersNegativeTestJSON-server-627235813',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-627235813',id=109,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-6hjukgnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegati
veTestJSON-1692723590-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:35:20Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=cf5e117a-f203-4c8f-b795-01fb355ca5e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:35:26 np0005532048 nova_compute[253661]: 2025-11-22 09:35:26.994 253665 DEBUG nova.network.os_vif_util [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:35:26 np0005532048 nova_compute[253661]: 2025-11-22 09:35:26.996 253665 DEBUG nova.network.os_vif_util [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:35:26 np0005532048 nova_compute[253661]: 2025-11-22 09:35:26.997 253665 DEBUG nova.objects.instance [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.021 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  <uuid>cf5e117a-f203-4c8f-b795-01fb355ca5e8</uuid>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  <name>instance-0000006d</name>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersNegativeTestJSON-server-627235813</nova:name>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:35:25</nova:creationTime>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:35:27 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:        <nova:user uuid="31c7a4aa8fa340d2881ddc3ed426b6db">tempest-ServersNegativeTestJSON-1692723590-project-member</nova:user>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:        <nova:project uuid="a31947dfacfc450ba028c42968f103b2">tempest-ServersNegativeTestJSON-1692723590</nova:project>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:        <nova:port uuid="c027d879-91b3-497d-9f51-8476006ea65c">
Nov 22 04:35:27 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <entry name="serial">cf5e117a-f203-4c8f-b795-01fb355ca5e8</entry>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <entry name="uuid">cf5e117a-f203-4c8f-b795-01fb355ca5e8</entry>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk">
Nov 22 04:35:27 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:35:27 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config">
Nov 22 04:35:27 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:35:27 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:d9:42:5a"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <target dev="tapc027d879-91"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/console.log" append="off"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:35:27 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:35:27 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:35:27 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:35:27 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.023 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Preparing to wait for external event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.023 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.024 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.024 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.025 253665 DEBUG nova.virt.libvirt.vif [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-627235813',display_name='tempest-ServersNegativeTestJSON-server-627235813',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-627235813',id=109,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-6hjukgnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:35:20Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=cf5e117a-f203-4c8f-b795-01fb355ca5e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.025 253665 DEBUG nova.network.os_vif_util [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.026 253665 DEBUG nova.network.os_vif_util [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.026 253665 DEBUG os_vif [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.027 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.027 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.028 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.032 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.033 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc027d879-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.034 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc027d879-91, col_values=(('external_ids', {'iface-id': 'c027d879-91b3-497d-9f51-8476006ea65c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:42:5a', 'vm-uuid': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.083 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:27 np0005532048 NetworkManager[48920]: <info>  [1763804127.0846] manager: (tapc027d879-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/449)
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.088 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.096 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.098 253665 INFO os_vif [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91')#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.173 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.174 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.174 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No VIF found with MAC fa:16:3e:d9:42:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.177 253665 INFO nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Using config drive#033[00m
Nov 22 04:35:27 np0005532048 nova_compute[253661]: 2025-11-22 09:35:27.200 253665 DEBUG nova.storage.rbd_utils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:27 np0005532048 zen_lalande[365070]: [
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:    {
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:        "available": false,
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:        "ceph_device": false,
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:        "lsm_data": {},
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:        "lvs": [],
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:        "path": "/dev/sr0",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:        "rejected_reasons": [
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "Insufficient space (<5GB)",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "Has a FileSystem"
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:        ],
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:        "sys_api": {
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "actuators": null,
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "device_nodes": "sr0",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "devname": "sr0",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "human_readable_size": "482.00 KB",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "id_bus": "ata",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "model": "QEMU DVD-ROM",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "nr_requests": "2",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "parent": "/dev/sr0",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "partitions": {},
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "path": "/dev/sr0",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "removable": "1",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "rev": "2.5+",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "ro": "0",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "rotational": "1",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "sas_address": "",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "sas_device_handle": "",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "scheduler_mode": "mq-deadline",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "sectors": 0,
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "sectorsize": "2048",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "size": 493568.0,
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "support_discard": "2048",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "type": "disk",
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:            "vendor": "QEMU"
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:        }
Nov 22 04:35:27 np0005532048 zen_lalande[365070]:    }
Nov 22 04:35:27 np0005532048 zen_lalande[365070]: ]
Nov 22 04:35:27 np0005532048 systemd[1]: libpod-c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf.scope: Deactivated successfully.
Nov 22 04:35:27 np0005532048 systemd[1]: libpod-c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf.scope: Consumed 1.619s CPU time.
Nov 22 04:35:27 np0005532048 podman[365054]: 2025-11-22 09:35:27.452982592 +0000 UTC m=+1.781479285 container died c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lalande, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 04:35:27 np0005532048 systemd[1]: var-lib-containers-storage-overlay-29cfde0b99dd39c682965e73841a804696fa71f412620e25e5bf8fda9417af87-merged.mount: Deactivated successfully.
Nov 22 04:35:27 np0005532048 podman[365054]: 2025-11-22 09:35:27.514961283 +0000 UTC m=+1.843457976 container remove c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lalande, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:35:27 np0005532048 systemd[1]: libpod-conmon-c6de113f69a3bb9789b22c3969116bfe613bcaf1c284f7cd186bedfc616c61bf.scope: Deactivated successfully.
Nov 22 04:35:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:35:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:35:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:35:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:35:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:35:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:35:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:35:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:35:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:35:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:35:27 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev cb3680f0-f703-4038-9259-c54cca930ac7 does not exist
Nov 22 04:35:27 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 6dbc029d-ac53-41bf-b8b3-4f810945474c does not exist
Nov 22 04:35:27 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a8fbdc7b-e04f-4596-82b7-cb320a8ddd01 does not exist
Nov 22 04:35:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:35:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:35:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:35:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:35:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:35:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:35:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 231 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 201 op/s
Nov 22 04:35:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:27.978 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:27.979 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:27.980 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:28 np0005532048 nova_compute[253661]: 2025-11-22 09:35:28.080 253665 DEBUG nova.compute.manager [req-eb48ee6d-0e95-4fd8-98ce-ba22a18beff8 req-269f6484-c31e-4743-989c-10dcb9d37d4a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:28 np0005532048 nova_compute[253661]: 2025-11-22 09:35:28.081 253665 DEBUG oslo_concurrency.lockutils [req-eb48ee6d-0e95-4fd8-98ce-ba22a18beff8 req-269f6484-c31e-4743-989c-10dcb9d37d4a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:28 np0005532048 nova_compute[253661]: 2025-11-22 09:35:28.081 253665 DEBUG oslo_concurrency.lockutils [req-eb48ee6d-0e95-4fd8-98ce-ba22a18beff8 req-269f6484-c31e-4743-989c-10dcb9d37d4a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:28 np0005532048 nova_compute[253661]: 2025-11-22 09:35:28.082 253665 DEBUG oslo_concurrency.lockutils [req-eb48ee6d-0e95-4fd8-98ce-ba22a18beff8 req-269f6484-c31e-4743-989c-10dcb9d37d4a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:28 np0005532048 nova_compute[253661]: 2025-11-22 09:35:28.082 253665 DEBUG nova.compute.manager [req-eb48ee6d-0e95-4fd8-98ce-ba22a18beff8 req-269f6484-c31e-4743-989c-10dcb9d37d4a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] No waiting events found dispatching network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:35:28 np0005532048 nova_compute[253661]: 2025-11-22 09:35:28.082 253665 WARNING nova.compute.manager [req-eb48ee6d-0e95-4fd8-98ce-ba22a18beff8 req-269f6484-c31e-4743-989c-10dcb9d37d4a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received unexpected event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 for instance with vm_state stopped and task_state None.#033[00m
Nov 22 04:35:28 np0005532048 nova_compute[253661]: 2025-11-22 09:35:28.173 253665 INFO nova.compute.manager [None req-95b77ef9-31d5-442a-b11b-39366af88e40 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Get console output#033[00m
Nov 22 04:35:28 np0005532048 podman[367213]: 2025-11-22 09:35:28.24992782 +0000 UTC m=+0.081152649 container create 82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:35:28 np0005532048 podman[367213]: 2025-11-22 09:35:28.189970189 +0000 UTC m=+0.021195048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:35:28 np0005532048 systemd[1]: Started libpod-conmon-82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736.scope.
Nov 22 04:35:28 np0005532048 nova_compute[253661]: 2025-11-22 09:35:28.330 253665 DEBUG nova.objects.instance [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'flavor' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:28 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:35:28 np0005532048 nova_compute[253661]: 2025-11-22 09:35:28.352 253665 DEBUG oslo_concurrency.lockutils [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:35:28 np0005532048 nova_compute[253661]: 2025-11-22 09:35:28.353 253665 DEBUG oslo_concurrency.lockutils [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquired lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:35:28 np0005532048 nova_compute[253661]: 2025-11-22 09:35:28.353 253665 DEBUG nova.network.neutron [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:35:28 np0005532048 nova_compute[253661]: 2025-11-22 09:35:28.353 253665 DEBUG nova.objects.instance [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'info_cache' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:28 np0005532048 podman[367213]: 2025-11-22 09:35:28.376934769 +0000 UTC m=+0.208159618 container init 82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 22 04:35:28 np0005532048 podman[367213]: 2025-11-22 09:35:28.386667811 +0000 UTC m=+0.217892640 container start 82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:35:28 np0005532048 podman[367213]: 2025-11-22 09:35:28.392217539 +0000 UTC m=+0.223442358 container attach 82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:35:28 np0005532048 nostalgic_bassi[367230]: 167 167
Nov 22 04:35:28 np0005532048 systemd[1]: libpod-82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736.scope: Deactivated successfully.
Nov 22 04:35:28 np0005532048 podman[367213]: 2025-11-22 09:35:28.393892571 +0000 UTC m=+0.225117410 container died 82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 04:35:28 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2a4a9e48dcfee7d0acd03821a050ba22c1d6f8bfeec43cfc58bceef71be364d1-merged.mount: Deactivated successfully.
Nov 22 04:35:28 np0005532048 podman[367213]: 2025-11-22 09:35:28.527188506 +0000 UTC m=+0.358413335 container remove 82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:35:28 np0005532048 systemd[1]: libpod-conmon-82f335834dc7f98ba9b90ed789b215c2e69a528b1101f01a152bbe35dd89e736.scope: Deactivated successfully.
Nov 22 04:35:28 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:35:28 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:35:28 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:35:28 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:35:28 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:35:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:35:28 np0005532048 podman[367253]: 2025-11-22 09:35:28.728152514 +0000 UTC m=+0.049970434 container create 85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:35:28 np0005532048 nova_compute[253661]: 2025-11-22 09:35:28.771 253665 INFO nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Creating config drive at /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config#033[00m
Nov 22 04:35:28 np0005532048 systemd[1]: Started libpod-conmon-85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b.scope.
Nov 22 04:35:28 np0005532048 nova_compute[253661]: 2025-11-22 09:35:28.777 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsz_bb24t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:28 np0005532048 podman[367253]: 2025-11-22 09:35:28.703599433 +0000 UTC m=+0.025417373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:35:28 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:35:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6dc6f28fe9493f993c7ebccdcdec7af6abf7bbd2790ff4036c3bb1cf7d78a21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6dc6f28fe9493f993c7ebccdcdec7af6abf7bbd2790ff4036c3bb1cf7d78a21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6dc6f28fe9493f993c7ebccdcdec7af6abf7bbd2790ff4036c3bb1cf7d78a21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6dc6f28fe9493f993c7ebccdcdec7af6abf7bbd2790ff4036c3bb1cf7d78a21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6dc6f28fe9493f993c7ebccdcdec7af6abf7bbd2790ff4036c3bb1cf7d78a21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:28 np0005532048 podman[367253]: 2025-11-22 09:35:28.896422498 +0000 UTC m=+0.218240438 container init 85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 04:35:28 np0005532048 podman[367253]: 2025-11-22 09:35:28.904968491 +0000 UTC m=+0.226786411 container start 85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:35:28 np0005532048 podman[367253]: 2025-11-22 09:35:28.912497447 +0000 UTC m=+0.234315387 container attach 85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 04:35:28 np0005532048 nova_compute[253661]: 2025-11-22 09:35:28.927 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsz_bb24t" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:28 np0005532048 nova_compute[253661]: 2025-11-22 09:35:28.957 253665 DEBUG nova.storage.rbd_utils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:28 np0005532048 nova_compute[253661]: 2025-11-22 09:35:28.963 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:29 np0005532048 nova_compute[253661]: 2025-11-22 09:35:29.025 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:29 np0005532048 nova_compute[253661]: 2025-11-22 09:35:29.408 253665 DEBUG oslo_concurrency.processutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:29 np0005532048 nova_compute[253661]: 2025-11-22 09:35:29.408 253665 INFO nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Deleting local config drive /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config because it was imported into RBD.#033[00m
Nov 22 04:35:29 np0005532048 kernel: tapc027d879-91: entered promiscuous mode
Nov 22 04:35:29 np0005532048 NetworkManager[48920]: <info>  [1763804129.4867] manager: (tapc027d879-91): new Tun device (/org/freedesktop/NetworkManager/Devices/450)
Nov 22 04:35:29 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:29Z|01093|binding|INFO|Claiming lport c027d879-91b3-497d-9f51-8476006ea65c for this chassis.
Nov 22 04:35:29 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:29Z|01094|binding|INFO|c027d879-91b3-497d-9f51-8476006ea65c: Claiming fa:16:3e:d9:42:5a 10.100.0.3
Nov 22 04:35:29 np0005532048 nova_compute[253661]: 2025-11-22 09:35:29.490 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.499 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:42:5a 10.100.0.3'], port_security=['fa:16:3e:d9:42:5a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a990966c-0851-457f-bdd5-27cf73032674', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31947dfacfc450ba028c42968f103b2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '89642540-7944-41ba-8ed6-91045af1b213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bafabe2a-ec0e-41bf-bad4-b88fdf9f208a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c027d879-91b3-497d-9f51-8476006ea65c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.501 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c027d879-91b3-497d-9f51-8476006ea65c in datapath a990966c-0851-457f-bdd5-27cf73032674 bound to our chassis#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.502 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a990966c-0851-457f-bdd5-27cf73032674#033[00m
Nov 22 04:35:29 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:29Z|01095|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c ovn-installed in OVS
Nov 22 04:35:29 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:29Z|01096|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c up in Southbound
Nov 22 04:35:29 np0005532048 nova_compute[253661]: 2025-11-22 09:35:29.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:29 np0005532048 systemd-udevd[367328]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.518 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5df16e79-4143-490d-b6df-2c9315328ea1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.519 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa990966c-01 in ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.522 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa990966c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.522 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5ee243-207a-4e1a-beb5-1a4e3ca6c35d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.526 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0f96ae01-b6ae-47ab-9b5a-4ed4139b8e33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:29 np0005532048 nova_compute[253661]: 2025-11-22 09:35:29.525 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:29 np0005532048 NetworkManager[48920]: <info>  [1763804129.5387] device (tapc027d879-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:35:29 np0005532048 NetworkManager[48920]: <info>  [1763804129.5402] device (tapc027d879-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.541 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[1070ae38-cc20-4218-9490-ecb282e0606a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:29 np0005532048 systemd-machined[215941]: New machine qemu-136-instance-0000006d.
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.559 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5accc57f-3b98-4907-b03a-33afd2dc291c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:29 np0005532048 systemd[1]: Started Virtual Machine qemu-136-instance-0000006d.
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.599 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bf48f2ec-2bc2-4ae1-aa46-acacecd44b58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:29 np0005532048 NetworkManager[48920]: <info>  [1763804129.6108] manager: (tapa990966c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/451)
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.605 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7a214bc1-b11f-4101-9c22-349faceda2a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.655 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1382c4-d480-4884-bf9b-fb49d7d3fa82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.659 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[91e8e822-8740-4c04-b092-42b93978eb84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:29 np0005532048 NetworkManager[48920]: <info>  [1763804129.6897] device (tapa990966c-00): carrier: link connected
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.701 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5007c79e-2c67-4494-ae8f-f36668fa80b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.724 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a9108d56-548c-4928-82ce-8e15eb59a141]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa990966c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6f:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698042, 'reachable_time': 32119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367371, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.746 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[78524a29-d966-4942-9466-831dac9eee6c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:6fb9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698042, 'tstamp': 698042}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367374, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.773 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d587404a-ef94-4827-8a9c-e7d0dd35f774]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa990966c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6f:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698042, 'reachable_time': 32119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 367377, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.815 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af730cb1-a10d-4940-9939-dfdf19a80e86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 244 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.0 MiB/s wr, 207 op/s
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.899 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[168f2a7b-d3e1-4db7-9095-54d9fede97ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.901 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa990966c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.901 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.902 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa990966c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:29 np0005532048 nova_compute[253661]: 2025-11-22 09:35:29.905 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:29 np0005532048 NetworkManager[48920]: <info>  [1763804129.9068] manager: (tapa990966c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Nov 22 04:35:29 np0005532048 kernel: tapa990966c-00: entered promiscuous mode
Nov 22 04:35:29 np0005532048 nova_compute[253661]: 2025-11-22 09:35:29.914 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.918 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa990966c-00, col_values=(('external_ids', {'iface-id': '97798f16-a2eb-434e-aad3-3ece954bb8e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:29 np0005532048 nova_compute[253661]: 2025-11-22 09:35:29.919 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:29 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:29Z|01097|binding|INFO|Releasing lport 97798f16-a2eb-434e-aad3-3ece954bb8e7 from this chassis (sb_readonly=0)
Nov 22 04:35:29 np0005532048 nova_compute[253661]: 2025-11-22 09:35:29.937 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:29 np0005532048 nova_compute[253661]: 2025-11-22 09:35:29.942 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.943 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.945 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[68647839-8740-4e8c-8bf8-9f61ea190f2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.945 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-a990966c-0851-457f-bdd5-27cf73032674
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID a990966c-0851-457f-bdd5-27cf73032674
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:35:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:29.946 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'env', 'PROCESS_TAG=haproxy-a990966c-0851-457f-bdd5-27cf73032674', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a990966c-0851-457f-bdd5-27cf73032674.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:35:30 np0005532048 hardcore_austin[367270]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:35:30 np0005532048 hardcore_austin[367270]: --> relative data size: 1.0
Nov 22 04:35:30 np0005532048 hardcore_austin[367270]: --> All data devices are unavailable
Nov 22 04:35:30 np0005532048 systemd[1]: libpod-85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b.scope: Deactivated successfully.
Nov 22 04:35:30 np0005532048 systemd[1]: libpod-85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b.scope: Consumed 1.118s CPU time.
Nov 22 04:35:30 np0005532048 podman[367253]: 2025-11-22 09:35:30.12095042 +0000 UTC m=+1.442768380 container died 85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.123 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804130.1214938, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.125 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Started (Lifecycle Event)#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.141 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.147 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804130.1217077, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.147 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.165 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.173 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.194 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:35:30 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c6dc6f28fe9493f993c7ebccdcdec7af6abf7bbd2790ff4036c3bb1cf7d78a21-merged.mount: Deactivated successfully.
Nov 22 04:35:30 np0005532048 podman[367253]: 2025-11-22 09:35:30.315063028 +0000 UTC m=+1.636880948 container remove 85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:35:30 np0005532048 systemd[1]: libpod-conmon-85cca2202ebef5bb8f9646d219175a46281fdf103d1a3754a7a7fbd6f8ac824b.scope: Deactivated successfully.
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.354 253665 DEBUG nova.compute.manager [req-b471275b-0669-4367-9176-b1c60bcf6299 req-4887c4a2-9cf8-4709-bad6-6c985178835b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.356 253665 DEBUG oslo_concurrency.lockutils [req-b471275b-0669-4367-9176-b1c60bcf6299 req-4887c4a2-9cf8-4709-bad6-6c985178835b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.357 253665 DEBUG oslo_concurrency.lockutils [req-b471275b-0669-4367-9176-b1c60bcf6299 req-4887c4a2-9cf8-4709-bad6-6c985178835b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.357 253665 DEBUG oslo_concurrency.lockutils [req-b471275b-0669-4367-9176-b1c60bcf6299 req-4887c4a2-9cf8-4709-bad6-6c985178835b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.357 253665 DEBUG nova.compute.manager [req-b471275b-0669-4367-9176-b1c60bcf6299 req-4887c4a2-9cf8-4709-bad6-6c985178835b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Processing event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.358 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.362 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804130.3620164, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.362 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.365 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.369 253665 INFO nova.virt.libvirt.driver [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance spawned successfully.#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.369 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.381 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.387 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.392 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.392 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.393 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.393 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.394 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.394 253665 DEBUG nova.virt.libvirt.driver [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.414 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:35:30 np0005532048 podman[367482]: 2025-11-22 09:35:30.36059641 +0000 UTC m=+0.029995667 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.456 253665 INFO nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Took 9.97 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.457 253665 DEBUG nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.526 253665 INFO nova.compute.manager [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Took 11.00 seconds to build instance.#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.544 253665 DEBUG oslo_concurrency.lockutils [None req-c05f55d5-809a-4bdb-ad1a-f7c8a4c947ee 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:30 np0005532048 podman[367482]: 2025-11-22 09:35:30.589138644 +0000 UTC m=+0.258537891 container create cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 04:35:30 np0005532048 systemd[1]: Started libpod-conmon-cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a.scope.
Nov 22 04:35:30 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:35:30 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2118a5a184395a8c9e4712c7d009d993e2f304960246506fc15d26ee155b6cdb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:30 np0005532048 podman[367482]: 2025-11-22 09:35:30.721185147 +0000 UTC m=+0.390584414 container init cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.727 253665 DEBUG nova.network.neutron [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updating instance_info_cache with network_info: [{"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:30 np0005532048 podman[367482]: 2025-11-22 09:35:30.727789332 +0000 UTC m=+0.397188569 container start cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.751 253665 DEBUG oslo_concurrency.lockutils [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Releasing lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:35:30 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[367597]: [NOTICE]   (367601) : New worker (367603) forked
Nov 22 04:35:30 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[367597]: [NOTICE]   (367601) : Loading success.
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.780 253665 INFO nova.virt.libvirt.driver [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance destroyed successfully.#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.781 253665 DEBUG nova.objects.instance [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.795 253665 DEBUG nova.objects.instance [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'resources' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.812 253665 DEBUG nova.virt.libvirt.vif [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1778115453',display_name='tempest-TestNetworkAdvancedServerOps-server-1778115453',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1778115453',id=107,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+r3c5G7EAzAvDolEqHNwqbmQvWxBEdieJcgY8c742Oy3jPYQetvou66qf/+0L4oLTbdYIoGxiGleOdIQIziTFL9k2EXWuKOZj/cVROyz5ALJrQCnYT9x1mSwpv+ywspw==',key_name='tempest-TestNetworkAdvancedServerOps-641041807',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:34:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-jtawb2ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:24Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=3f8530ae-f429-4807-81ca-84d8f964a38c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.813 253665 DEBUG nova.network.os_vif_util [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.814 253665 DEBUG nova.network.os_vif_util [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.815 253665 DEBUG os_vif [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.817 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.818 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8da41f38-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.819 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.826 253665 INFO os_vif [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38')#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.835 253665 DEBUG nova.virt.libvirt.driver [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Start _get_guest_xml network_info=[{"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.849 253665 WARNING nova.virt.libvirt.driver [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.856 253665 DEBUG nova.virt.libvirt.host [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.858 253665 DEBUG nova.virt.libvirt.host [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.865 253665 DEBUG nova.virt.libvirt.host [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.866 253665 DEBUG nova.virt.libvirt.host [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.867 253665 DEBUG nova.virt.libvirt.driver [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.867 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.868 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.868 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.869 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.869 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.870 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.870 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.871 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.874 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.874 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.875 253665 DEBUG nova.virt.hardware [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.875 253665 DEBUG nova.objects.instance [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.887 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:30 np0005532048 nova_compute[253661]: 2025-11-22 09:35:30.895 253665 DEBUG oslo_concurrency.processutils [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:31 np0005532048 podman[367652]: 2025-11-22 09:35:31.032682474 +0000 UTC m=+0.050168099 container create 52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 04:35:31 np0005532048 podman[367652]: 2025-11-22 09:35:31.009983919 +0000 UTC m=+0.027469564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:35:31 np0005532048 systemd[1]: Started libpod-conmon-52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb.scope.
Nov 22 04:35:31 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:35:31 np0005532048 podman[367652]: 2025-11-22 09:35:31.287638584 +0000 UTC m=+0.305124219 container init 52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:35:31 np0005532048 podman[367652]: 2025-11-22 09:35:31.296668729 +0000 UTC m=+0.314154344 container start 52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 04:35:31 np0005532048 systemd[1]: libpod-52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb.scope: Deactivated successfully.
Nov 22 04:35:31 np0005532048 relaxed_leavitt[367687]: 167 167
Nov 22 04:35:31 np0005532048 podman[367652]: 2025-11-22 09:35:31.304478013 +0000 UTC m=+0.321963628 container attach 52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:35:31 np0005532048 conmon[367687]: conmon 52022af7067a038b36c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb.scope/container/memory.events
Nov 22 04:35:31 np0005532048 podman[367652]: 2025-11-22 09:35:31.305064437 +0000 UTC m=+0.322550072 container died 52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:35:31 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f1911e3537ea44e56930853f37934d7cbec4faab451acd250ff6ad5c9106bd98-merged.mount: Deactivated successfully.
Nov 22 04:35:31 np0005532048 podman[367652]: 2025-11-22 09:35:31.413484733 +0000 UTC m=+0.430970338 container remove 52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:35:31 np0005532048 systemd[1]: libpod-conmon-52022af7067a038b36c93d83eb54be981dba7da12f1c0bd7d037cf8ec7ff5eeb.scope: Deactivated successfully.
Nov 22 04:35:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:35:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2248668810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:35:31 np0005532048 nova_compute[253661]: 2025-11-22 09:35:31.481 253665 DEBUG oslo_concurrency.processutils [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:31 np0005532048 nova_compute[253661]: 2025-11-22 09:35:31.532 253665 DEBUG oslo_concurrency.processutils [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:31 np0005532048 podman[367732]: 2025-11-22 09:35:31.680840922 +0000 UTC m=+0.117647527 container create fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 04:35:31 np0005532048 podman[367732]: 2025-11-22 09:35:31.589382158 +0000 UTC m=+0.026188783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:35:31 np0005532048 systemd[1]: Started libpod-conmon-fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657.scope.
Nov 22 04:35:31 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:35:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252bfdb22b74f194657adf982fa5a9180c6f934af152ec79df1d6af1348de388/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252bfdb22b74f194657adf982fa5a9180c6f934af152ec79df1d6af1348de388/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252bfdb22b74f194657adf982fa5a9180c6f934af152ec79df1d6af1348de388/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252bfdb22b74f194657adf982fa5a9180c6f934af152ec79df1d6af1348de388/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:31 np0005532048 podman[367732]: 2025-11-22 09:35:31.867173167 +0000 UTC m=+0.303979802 container init fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:35:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 246 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 812 KiB/s rd, 3.9 MiB/s wr, 182 op/s
Nov 22 04:35:31 np0005532048 podman[367732]: 2025-11-22 09:35:31.877173025 +0000 UTC m=+0.313979630 container start fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 04:35:31 np0005532048 podman[367732]: 2025-11-22 09:35:31.902426373 +0000 UTC m=+0.339232978 container attach fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:35:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:35:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/673530446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.048 253665 DEBUG oslo_concurrency.processutils [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.053 253665 DEBUG nova.virt.libvirt.vif [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1778115453',display_name='tempest-TestNetworkAdvancedServerOps-server-1778115453',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1778115453',id=107,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+r3c5G7EAzAvDolEqHNwqbmQvWxBEdieJcgY8c742Oy3jPYQetvou66qf/+0L4oLTbdYIoGxiGleOdIQIziTFL9k2EXWuKOZj/cVROyz5ALJrQCnYT9x1mSwpv+ywspw==',key_name='tempest-TestNetworkAdvancedServerOps-641041807',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:34:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-jtawb2ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:24Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=3f8530ae-f429-4807-81ca-84d8f964a38c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.054 253665 DEBUG nova.network.os_vif_util [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.056 253665 DEBUG nova.network.os_vif_util [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.058 253665 DEBUG nova.objects.instance [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.072 253665 DEBUG nova.virt.libvirt.driver [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  <uuid>3f8530ae-f429-4807-81ca-84d8f964a38c</uuid>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  <name>instance-0000006b</name>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1778115453</nova:name>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:35:30</nova:creationTime>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:35:32 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:        <nova:user uuid="ac89f965408f4a26b39ee2ae4725ff14">tempest-TestNetworkAdvancedServerOps-1215776227-project-member</nova:user>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:        <nova:project uuid="0112f56c468c4f90971b92126078e951">tempest-TestNetworkAdvancedServerOps-1215776227</nova:project>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:        <nova:port uuid="8da41f38-3812-4494-9cab-c4854772a569">
Nov 22 04:35:32 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <entry name="serial">3f8530ae-f429-4807-81ca-84d8f964a38c</entry>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <entry name="uuid">3f8530ae-f429-4807-81ca-84d8f964a38c</entry>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/3f8530ae-f429-4807-81ca-84d8f964a38c_disk">
Nov 22 04:35:32 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:35:32 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/3f8530ae-f429-4807-81ca-84d8f964a38c_disk.config">
Nov 22 04:35:32 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:35:32 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:02:ea:ba"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <target dev="tap8da41f38-38"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c/console.log" append="off"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <input type="keyboard" bus="usb"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:35:32 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:35:32 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:35:32 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:35:32 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.075 253665 DEBUG nova.virt.libvirt.driver [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.075 253665 DEBUG nova.virt.libvirt.driver [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.076 253665 DEBUG nova.virt.libvirt.vif [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1778115453',display_name='tempest-TestNetworkAdvancedServerOps-server-1778115453',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1778115453',id=107,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+r3c5G7EAzAvDolEqHNwqbmQvWxBEdieJcgY8c742Oy3jPYQetvou66qf/+0L4oLTbdYIoGxiGleOdIQIziTFL9k2EXWuKOZj/cVROyz5ALJrQCnYT9x1mSwpv+ywspw==',key_name='tempest-TestNetworkAdvancedServerOps-641041807',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:34:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-jtawb2ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:24Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=3f8530ae-f429-4807-81ca-84d8f964a38c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.077 253665 DEBUG nova.network.os_vif_util [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.077 253665 DEBUG nova.network.os_vif_util [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.078 253665 DEBUG os_vif [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.079 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.079 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.080 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.083 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.084 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8da41f38-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.084 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8da41f38-38, col_values=(('external_ids', {'iface-id': '8da41f38-3812-4494-9cab-c4854772a569', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:ea:ba', 'vm-uuid': '3f8530ae-f429-4807-81ca-84d8f964a38c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.086 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:32 np0005532048 NetworkManager[48920]: <info>  [1763804132.0883] manager: (tap8da41f38-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/453)
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.090 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.096 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.097 253665 INFO os_vif [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38')
Nov 22 04:35:32 np0005532048 NetworkManager[48920]: <info>  [1763804132.2311] manager: (tap8da41f38-38): new Tun device (/org/freedesktop/NetworkManager/Devices/454)
Nov 22 04:35:32 np0005532048 kernel: tap8da41f38-38: entered promiscuous mode
Nov 22 04:35:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:32Z|01098|binding|INFO|Claiming lport 8da41f38-3812-4494-9cab-c4854772a569 for this chassis.
Nov 22 04:35:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:32Z|01099|binding|INFO|8da41f38-3812-4494-9cab-c4854772a569: Claiming fa:16:3e:02:ea:ba 10.100.0.4
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.236 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:35:32 np0005532048 systemd-udevd[367356]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.243 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:ea:ba 10.100.0.4'], port_security=['fa:16:3e:02:ea:ba 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f8530ae-f429-4807-81ca-84d8f964a38c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20570e02-4f3c-425d-9564-924b275d70dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e0291e4d-91dd-4ee6-9074-0372622e253d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f04ee3-5979-45f2-bf12-c1c6b0bf9924, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8da41f38-3812-4494-9cab-c4854772a569) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.245 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8da41f38-3812-4494-9cab-c4854772a569 in datapath 20570e02-4f3c-425d-9564-924b275d70dc bound to our chassis
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.247 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20570e02-4f3c-425d-9564-924b275d70dc
Nov 22 04:35:32 np0005532048 NetworkManager[48920]: <info>  [1763804132.2542] device (tap8da41f38-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:35:32 np0005532048 NetworkManager[48920]: <info>  [1763804132.2555] device (tap8da41f38-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:35:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:32Z|01100|binding|INFO|Setting lport 8da41f38-3812-4494-9cab-c4854772a569 ovn-installed in OVS
Nov 22 04:35:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:32Z|01101|binding|INFO|Setting lport 8da41f38-3812-4494-9cab-c4854772a569 up in Southbound
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.265 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.268 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.269 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[78c213f0-e2dc-4db7-99c5-53f13a187e32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.270 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap20570e02-41 in ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.276 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap20570e02-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.277 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[032030b2-cce8-4e10-b29b-6dfe92309163]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.278 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a1faa629-6408-4250-ae97-89154e196d94]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:35:32 np0005532048 systemd-machined[215941]: New machine qemu-137-instance-0000006b.
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.297 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[df4b1e41-b1ae-45cf-91d4-8f433854748f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:35:32 np0005532048 systemd[1]: Started Virtual Machine qemu-137-instance-0000006b.
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.328 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[48c80abe-e93f-4536-a58b-0d887b0420f8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.365 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[42272870-0baa-44cc-acb3-39441f3e1257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:35:32 np0005532048 NetworkManager[48920]: <info>  [1763804132.3729] manager: (tap20570e02-40): new Veth device (/org/freedesktop/NetworkManager/Devices/455)
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.374 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f90880-f89c-4526-a9b8-0e5dca33a9c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.413 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0f26985d-7ab8-439c-9338-8b0f4796ab30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.418 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a85c2103-71b0-4cd9-87f0-5ef104f5d3f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:35:32 np0005532048 NetworkManager[48920]: <info>  [1763804132.4467] device (tap20570e02-40): carrier: link connected
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.456 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[cd10f4b1-31c6-435e-a738-66cd04d36ef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.513 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2005de86-e277-4240-8bdd-0f7ff9d86415]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20570e02-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:a4:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698318, 'reachable_time': 40672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367809, 'error': None, 'target': 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.535 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[47df396e-80b7-40c0-9513-23a31d6a2868]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe56:a4f4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698318, 'tstamp': 698318}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367810, 'error': None, 'target': 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.577 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3478c5e7-eedc-4860-93d1-de3ccd3a923b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20570e02-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:a4:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698318, 'reachable_time': 40672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 367811, 'error': None, 'target': 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.620 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc862b2b-edac-49ca-bfdc-5406cee9acd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.704 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[277ac18a-0224-4f83-a653-72934d36fa3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.706 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20570e02-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.706 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.707 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20570e02-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:35:32 np0005532048 kernel: tap20570e02-40: entered promiscuous mode
Nov 22 04:35:32 np0005532048 NetworkManager[48920]: <info>  [1763804132.7106] manager: (tap20570e02-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/456)
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.712 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20570e02-40, col_values=(('external_ids', {'iface-id': '4aaa4802-1d2c-466f-9a8f-02dc0ee6bbe9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:35:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:32Z|01102|binding|INFO|Releasing lport 4aaa4802-1d2c-466f-9a8f-02dc0ee6bbe9 from this chassis (sb_readonly=0)
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.717 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/20570e02-4f3c-425d-9564-924b275d70dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/20570e02-4f3c-425d-9564-924b275d70dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.718 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:35:32 np0005532048 sweet_easley[367770]: {
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:    "0": [
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:        {
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "devices": [
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "/dev/loop3"
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            ],
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "lv_name": "ceph_lv0",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "lv_size": "21470642176",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "name": "ceph_lv0",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "tags": {
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.cluster_name": "ceph",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.crush_device_class": "",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.encrypted": "0",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.osd_id": "0",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.type": "block",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.vdo": "0"
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            },
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "type": "block",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "vg_name": "ceph_vg0"
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:        }
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:    ],
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:    "1": [
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:        {
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "devices": [
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "/dev/loop4"
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            ],
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "lv_name": "ceph_lv1",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "lv_size": "21470642176",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "name": "ceph_lv1",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "tags": {
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.cluster_name": "ceph",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.crush_device_class": "",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.encrypted": "0",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.osd_id": "1",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.type": "block",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.vdo": "0"
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            },
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "type": "block",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "vg_name": "ceph_vg1"
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:        }
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:    ],
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:    "2": [
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:        {
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "devices": [
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "/dev/loop5"
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            ],
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "lv_name": "ceph_lv2",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "lv_size": "21470642176",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "name": "ceph_lv2",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "tags": {
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.cluster_name": "ceph",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.crush_device_class": "",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.encrypted": "0",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.osd_id": "2",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.type": "block",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:                "ceph.vdo": "0"
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            },
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "type": "block",
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:            "vg_name": "ceph_vg2"
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:        }
Nov 22 04:35:32 np0005532048 sweet_easley[367770]:    ]
Nov 22 04:35:32 np0005532048 sweet_easley[367770]: }
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.727 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3a75ef93-9ce1-419c-aa03-226eea98bc36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.733 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-20570e02-4f3c-425d-9564-924b275d70dc
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/20570e02-4f3c-425d-9564-924b275d70dc.pid.haproxy
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 20570e02-4f3c-425d-9564-924b275d70dc
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:35:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:32.734 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'env', 'PROCESS_TAG=haproxy-20570e02-4f3c-425d-9564-924b275d70dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/20570e02-4f3c-425d-9564-924b275d70dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.736 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:32 np0005532048 systemd[1]: libpod-fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657.scope: Deactivated successfully.
Nov 22 04:35:32 np0005532048 conmon[367770]: conmon fcadfc99b71401a58ecd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657.scope/container/memory.events
Nov 22 04:35:32 np0005532048 podman[367832]: 2025-11-22 09:35:32.845158257 +0000 UTC m=+0.041744099 container died fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.879 253665 DEBUG nova.compute.manager [req-15c7fdc6-32fe-4c9f-b297-178492fd7016 req-4fdf5b31-957f-4565-a14e-01dad9756304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.879 253665 DEBUG oslo_concurrency.lockutils [req-15c7fdc6-32fe-4c9f-b297-178492fd7016 req-4fdf5b31-957f-4565-a14e-01dad9756304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.880 253665 DEBUG oslo_concurrency.lockutils [req-15c7fdc6-32fe-4c9f-b297-178492fd7016 req-4fdf5b31-957f-4565-a14e-01dad9756304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.880 253665 DEBUG oslo_concurrency.lockutils [req-15c7fdc6-32fe-4c9f-b297-178492fd7016 req-4fdf5b31-957f-4565-a14e-01dad9756304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.880 253665 DEBUG nova.compute.manager [req-15c7fdc6-32fe-4c9f-b297-178492fd7016 req-4fdf5b31-957f-4565-a14e-01dad9756304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:35:32 np0005532048 nova_compute[253661]: 2025-11-22 09:35:32.880 253665 WARNING nova.compute.manager [req-15c7fdc6-32fe-4c9f-b297-178492fd7016 req-4fdf5b31-957f-4565-a14e-01dad9756304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state active and task_state None.#033[00m
Nov 22 04:35:33 np0005532048 systemd[1]: var-lib-containers-storage-overlay-252bfdb22b74f194657adf982fa5a9180c6f934af152ec79df1d6af1348de388-merged.mount: Deactivated successfully.
Nov 22 04:35:33 np0005532048 podman[367832]: 2025-11-22 09:35:33.134450022 +0000 UTC m=+0.331035844 container remove fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:35:33 np0005532048 systemd[1]: libpod-conmon-fcadfc99b71401a58ecd07058353ac2c3a5f526fad6482e0546a219f28fd0657.scope: Deactivated successfully.
Nov 22 04:35:33 np0005532048 nova_compute[253661]: 2025-11-22 09:35:33.211 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 3f8530ae-f429-4807-81ca-84d8f964a38c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:35:33 np0005532048 nova_compute[253661]: 2025-11-22 09:35:33.213 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804133.2111423, 3f8530ae-f429-4807-81ca-84d8f964a38c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:33 np0005532048 nova_compute[253661]: 2025-11-22 09:35:33.213 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:35:33 np0005532048 nova_compute[253661]: 2025-11-22 09:35:33.216 253665 DEBUG nova.compute.manager [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:35:33 np0005532048 nova_compute[253661]: 2025-11-22 09:35:33.222 253665 INFO nova.virt.libvirt.driver [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance rebooted successfully.#033[00m
Nov 22 04:35:33 np0005532048 nova_compute[253661]: 2025-11-22 09:35:33.223 253665 DEBUG nova.compute.manager [None req-62a614ac-9383-48d7-ac91-69e524d0f095 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:33 np0005532048 nova_compute[253661]: 2025-11-22 09:35:33.230 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:33 np0005532048 nova_compute[253661]: 2025-11-22 09:35:33.236 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:35:33 np0005532048 podman[367903]: 2025-11-22 09:35:33.253943963 +0000 UTC m=+0.098331646 container create 8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:35:33 np0005532048 nova_compute[253661]: 2025-11-22 09:35:33.260 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 22 04:35:33 np0005532048 nova_compute[253661]: 2025-11-22 09:35:33.261 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804133.2123375, 3f8530ae-f429-4807-81ca-84d8f964a38c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:33 np0005532048 nova_compute[253661]: 2025-11-22 09:35:33.261 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] VM Started (Lifecycle Event)#033[00m
Nov 22 04:35:33 np0005532048 podman[367903]: 2025-11-22 09:35:33.197868449 +0000 UTC m=+0.042256162 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:35:33 np0005532048 nova_compute[253661]: 2025-11-22 09:35:33.297 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:33 np0005532048 nova_compute[253661]: 2025-11-22 09:35:33.302 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:35:33 np0005532048 systemd[1]: Started libpod-conmon-8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d.scope.
Nov 22 04:35:33 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:35:33 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dc8b44b411584b888a60508b08d5f7368fa8670e8b86e9a637d46fbcd929032/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:33 np0005532048 podman[367903]: 2025-11-22 09:35:33.362733879 +0000 UTC m=+0.207121592 container init 8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:35:33 np0005532048 podman[367903]: 2025-11-22 09:35:33.370183704 +0000 UTC m=+0.214571387 container start 8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:35:33 np0005532048 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[367943]: [NOTICE]   (367971) : New worker (367975) forked
Nov 22 04:35:33 np0005532048 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[367943]: [NOTICE]   (367971) : Loading success.
Nov 22 04:35:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:35:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 246 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 241 op/s
Nov 22 04:35:33 np0005532048 podman[368072]: 2025-11-22 09:35:33.906377239 +0000 UTC m=+0.048311303 container create e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:35:33 np0005532048 systemd[1]: Started libpod-conmon-e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a.scope.
Nov 22 04:35:33 np0005532048 podman[368072]: 2025-11-22 09:35:33.883744406 +0000 UTC m=+0.025678490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:35:33 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:35:34 np0005532048 podman[368072]: 2025-11-22 09:35:34.006647203 +0000 UTC m=+0.148581287 container init e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_diffie, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:35:34 np0005532048 podman[368072]: 2025-11-22 09:35:34.016842716 +0000 UTC m=+0.158776780 container start e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 04:35:34 np0005532048 podman[368072]: 2025-11-22 09:35:34.02103649 +0000 UTC m=+0.162970574 container attach e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:35:34 np0005532048 systemd[1]: libpod-e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a.scope: Deactivated successfully.
Nov 22 04:35:34 np0005532048 affectionate_diffie[368088]: 167 167
Nov 22 04:35:34 np0005532048 conmon[368088]: conmon e5dcb6caf73c0e8dc033 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a.scope/container/memory.events
Nov 22 04:35:34 np0005532048 podman[368093]: 2025-11-22 09:35:34.071387802 +0000 UTC m=+0.029849514 container died e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:35:34 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7c83346662e8366c9474f0dd84214aba2ff6037e8963afcc1fbc8869e35a84a7-merged.mount: Deactivated successfully.
Nov 22 04:35:34 np0005532048 podman[368093]: 2025-11-22 09:35:34.116533235 +0000 UTC m=+0.074994927 container remove e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:35:34 np0005532048 systemd[1]: libpod-conmon-e5dcb6caf73c0e8dc033f99d57214b8263547bcd512649bed178329d5575e87a.scope: Deactivated successfully.
Nov 22 04:35:34 np0005532048 podman[368114]: 2025-11-22 09:35:34.333866359 +0000 UTC m=+0.056134927 container create 571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_black, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 04:35:34 np0005532048 systemd[1]: Started libpod-conmon-571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446.scope.
Nov 22 04:35:34 np0005532048 podman[368114]: 2025-11-22 09:35:34.313576955 +0000 UTC m=+0.035845543 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:35:34 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:35:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ea9b7508b76b1c9cb8cb0a89afe9d029ab3f5faeaad7a4d9ef33905d7d590c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ea9b7508b76b1c9cb8cb0a89afe9d029ab3f5faeaad7a4d9ef33905d7d590c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ea9b7508b76b1c9cb8cb0a89afe9d029ab3f5faeaad7a4d9ef33905d7d590c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:34 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ea9b7508b76b1c9cb8cb0a89afe9d029ab3f5faeaad7a4d9ef33905d7d590c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:34 np0005532048 podman[368114]: 2025-11-22 09:35:34.43161432 +0000 UTC m=+0.153882908 container init 571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_black, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:35:34 np0005532048 podman[368114]: 2025-11-22 09:35:34.438907071 +0000 UTC m=+0.161175629 container start 571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_black, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 04:35:34 np0005532048 podman[368114]: 2025-11-22 09:35:34.442696976 +0000 UTC m=+0.164965654 container attach 571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.313 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7b3234ab-db15-43a8-8093-469f6e62db91" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.313 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.331 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.382 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.383 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.383 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.383 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.405 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.406 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.421 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.422 253665 INFO nova.compute.claims [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:35:35 np0005532048 great_black[368131]: {
Nov 22 04:35:35 np0005532048 great_black[368131]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:35:35 np0005532048 great_black[368131]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:35:35 np0005532048 great_black[368131]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:35:35 np0005532048 great_black[368131]:        "osd_id": 1,
Nov 22 04:35:35 np0005532048 great_black[368131]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:35:35 np0005532048 great_black[368131]:        "type": "bluestore"
Nov 22 04:35:35 np0005532048 great_black[368131]:    },
Nov 22 04:35:35 np0005532048 great_black[368131]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:35:35 np0005532048 great_black[368131]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:35:35 np0005532048 great_black[368131]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:35:35 np0005532048 great_black[368131]:        "osd_id": 0,
Nov 22 04:35:35 np0005532048 great_black[368131]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:35:35 np0005532048 great_black[368131]:        "type": "bluestore"
Nov 22 04:35:35 np0005532048 great_black[368131]:    },
Nov 22 04:35:35 np0005532048 great_black[368131]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:35:35 np0005532048 great_black[368131]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:35:35 np0005532048 great_black[368131]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:35:35 np0005532048 great_black[368131]:        "osd_id": 2,
Nov 22 04:35:35 np0005532048 great_black[368131]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:35:35 np0005532048 great_black[368131]:        "type": "bluestore"
Nov 22 04:35:35 np0005532048 great_black[368131]:    }
Nov 22 04:35:35 np0005532048 great_black[368131]: }
Nov 22 04:35:35 np0005532048 systemd[1]: libpod-571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446.scope: Deactivated successfully.
Nov 22 04:35:35 np0005532048 systemd[1]: libpod-571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446.scope: Consumed 1.040s CPU time.
Nov 22 04:35:35 np0005532048 podman[368114]: 2025-11-22 09:35:35.480956357 +0000 UTC m=+1.203224915 container died 571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.590 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.643 253665 DEBUG nova.compute.manager [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.643 253665 DEBUG oslo_concurrency.lockutils [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.644 253665 DEBUG oslo_concurrency.lockutils [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.644 253665 DEBUG oslo_concurrency.lockutils [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.644 253665 DEBUG nova.compute.manager [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] No waiting events found dispatching network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.644 253665 WARNING nova.compute.manager [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received unexpected event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.645 253665 DEBUG nova.compute.manager [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.645 253665 DEBUG oslo_concurrency.lockutils [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.645 253665 DEBUG oslo_concurrency.lockutils [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.645 253665 DEBUG oslo_concurrency.lockutils [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.645 253665 DEBUG nova.compute.manager [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] No waiting events found dispatching network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.646 253665 WARNING nova.compute.manager [req-aace16d3-578b-461b-afa8-751e7f8c2c73 req-b5b26c6f-10e7-426b-9b08-37b55e28363a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received unexpected event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:35:35 np0005532048 systemd[1]: var-lib-containers-storage-overlay-96ea9b7508b76b1c9cb8cb0a89afe9d029ab3f5faeaad7a4d9ef33905d7d590c-merged.mount: Deactivated successfully.
Nov 22 04:35:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 246 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 205 op/s
Nov 22 04:35:35 np0005532048 podman[368114]: 2025-11-22 09:35:35.876122764 +0000 UTC m=+1.598391322 container remove 571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_black, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:35:35 np0005532048 systemd[1]: libpod-conmon-571c7fcf7520577f33689590762b481ea12f2ce14174c86bf6de2ce90ee4c446.scope: Deactivated successfully.
Nov 22 04:35:35 np0005532048 nova_compute[253661]: 2025-11-22 09:35:35.902 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:35:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:35:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.041 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.042 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:35:36 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 0fb32c88-9313-4476-a403-407572b81c10 does not exist
Nov 22 04:35:36 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 962c79d1-09ec-4450-a7c5-4018752dc4c7 does not exist
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.060 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:35:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:35:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69129438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.104 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.112 253665 DEBUG nova.compute.provider_tree [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.132 253665 DEBUG nova.scheduler.client.report [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.155 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.156 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.160 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.160 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.168 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.168 253665 INFO nova.compute.claims [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.235 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.236 253665 DEBUG nova.network.neutron [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.254 253665 INFO nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.281 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.377 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.431 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.433 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.434 253665 INFO nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Creating image(s)#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.461 253665 DEBUG nova.storage.rbd_utils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7b3234ab-db15-43a8-8093-469f6e62db91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.496 253665 DEBUG nova.storage.rbd_utils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7b3234ab-db15-43a8-8093-469f6e62db91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.524 253665 DEBUG nova.storage.rbd_utils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7b3234ab-db15-43a8-8093-469f6e62db91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.530 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.637 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.640 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.641 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.641 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.674 253665 DEBUG nova.storage.rbd_utils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7b3234ab-db15-43a8-8093-469f6e62db91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.678 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 7b3234ab-db15-43a8-8093-469f6e62db91_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:35:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3296429346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.893 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.904 253665 DEBUG nova.compute.provider_tree [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.923 253665 DEBUG nova.scheduler.client.report [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.963 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:36 np0005532048 nova_compute[253661]: 2025-11-22 09:35:36.964 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.027 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.027 253665 DEBUG nova.network.neutron [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.052 253665 INFO nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:35:37 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:35:37 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.068 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.088 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.099 253665 DEBUG nova.policy [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.195 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.199 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.199 253665 INFO nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Creating image(s)#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.223 253665 DEBUG nova.storage.rbd_utils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.251 253665 DEBUG nova.storage.rbd_utils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.292 253665 DEBUG nova.storage.rbd_utils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.297 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.346 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updating instance_info_cache with network_info: [{"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.350 253665 DEBUG nova.policy [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31c7a4aa8fa340d2881ddc3ed426b6db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a31947dfacfc450ba028c42968f103b2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.372 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.373 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.374 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.382 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.383 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.383 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.384 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.405 253665 DEBUG nova.storage.rbd_utils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:37 np0005532048 nova_compute[253661]: 2025-11-22 09:35:37.410 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 250 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 223 op/s
Nov 22 04:35:38 np0005532048 nova_compute[253661]: 2025-11-22 09:35:38.263 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 7b3234ab-db15-43a8-8093-469f6e62db91_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:38 np0005532048 nova_compute[253661]: 2025-11-22 09:35:38.328 253665 DEBUG nova.storage.rbd_utils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 7b3234ab-db15-43a8-8093-469f6e62db91_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:35:38 np0005532048 nova_compute[253661]: 2025-11-22 09:35:38.439 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:38 np0005532048 nova_compute[253661]: 2025-11-22 09:35:38.440 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:38 np0005532048 nova_compute[253661]: 2025-11-22 09:35:38.456 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:35:38 np0005532048 nova_compute[253661]: 2025-11-22 09:35:38.524 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:38 np0005532048 nova_compute[253661]: 2025-11-22 09:35:38.526 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:38 np0005532048 nova_compute[253661]: 2025-11-22 09:35:38.534 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:35:38 np0005532048 nova_compute[253661]: 2025-11-22 09:35:38.535 253665 INFO nova.compute.claims [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:35:38 np0005532048 nova_compute[253661]: 2025-11-22 09:35:38.641 253665 DEBUG nova.network.neutron [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Successfully created port: a0713d25-85db-4bb0-9be1-0cb5253aa017 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:35:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:35:38 np0005532048 nova_compute[253661]: 2025-11-22 09:35:38.737 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:38 np0005532048 nova_compute[253661]: 2025-11-22 09:35:38.784 253665 DEBUG nova.network.neutron [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Successfully created port: 735988ac-a658-458d-975f-872cfa132420 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:35:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:35:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1590950912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.246 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.252 253665 DEBUG nova.compute.provider_tree [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.278 253665 DEBUG nova.scheduler.client.report [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.299 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.300 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.337 253665 DEBUG nova.network.neutron [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Successfully updated port: a0713d25-85db-4bb0-9be1-0cb5253aa017 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.408 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "refresh_cache-4bcc50c8-3188-45f6-aa14-994c5ab8b966" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.409 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquired lock "refresh_cache-4bcc50c8-3188-45f6-aa14-994c5ab8b966" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.409 253665 DEBUG nova.network.neutron [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.411 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.414 253665 DEBUG nova.network.neutron [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.422 253665 DEBUG nova.compute.manager [req-f3c6bcbd-a3e9-4a3a-8b77-76ee960a0153 req-94ed301b-3382-49fe-9f93-770262ab5272 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received event network-changed-a0713d25-85db-4bb0-9be1-0cb5253aa017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.422 253665 DEBUG nova.compute.manager [req-f3c6bcbd-a3e9-4a3a-8b77-76ee960a0153 req-94ed301b-3382-49fe-9f93-770262ab5272 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Refreshing instance network info cache due to event network-changed-a0713d25-85db-4bb0-9be1-0cb5253aa017. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.423 253665 DEBUG oslo_concurrency.lockutils [req-f3c6bcbd-a3e9-4a3a-8b77-76ee960a0153 req-94ed301b-3382-49fe-9f93-770262ab5272 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-4bcc50c8-3188-45f6-aa14-994c5ab8b966" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.430 253665 DEBUG nova.objects.instance [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 7b3234ab-db15-43a8-8093-469f6e62db91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.441 253665 INFO nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.446 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.447 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Ensure instance console log exists: /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.447 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.448 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.450 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.459 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.494 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.534 253665 DEBUG nova.storage.rbd_utils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] resizing rbd image 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.600 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.601 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.602 253665 INFO nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Creating image(s)#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.624 253665 DEBUG nova.storage.rbd_utils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.654 253665 DEBUG nova.storage.rbd_utils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.691 253665 DEBUG nova.storage.rbd_utils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.696 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.781 253665 DEBUG nova.objects.instance [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'migration_context' on Instance uuid 4bcc50c8-3188-45f6-aa14-994c5ab8b966 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.794 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.794 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Ensure instance console log exists: /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.795 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.795 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.795 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.799 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.799 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.800 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.800 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.823 253665 DEBUG nova.storage.rbd_utils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:39 np0005532048 nova_compute[253661]: 2025-11-22 09:35:39.828 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 269 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.7 MiB/s wr, 218 op/s
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.064 253665 DEBUG nova.network.neutron [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.098 253665 DEBUG nova.policy [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.243 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.326 253665 DEBUG nova.storage.rbd_utils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.359 253665 DEBUG nova.network.neutron [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Successfully updated port: 735988ac-a658-458d-975f-872cfa132420 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.378 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.378 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.378 253665 DEBUG nova.network.neutron [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.439 253665 DEBUG nova.objects.instance [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.450 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.450 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Ensure instance console log exists: /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.451 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.451 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.451 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.470 253665 DEBUG nova.compute.manager [req-93c6755f-18e2-4b60-a0fd-02c479734a10 req-4645e4f9-fbd5-4cf8-9606-87dc61832165 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-changed-735988ac-a658-458d-975f-872cfa132420 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.470 253665 DEBUG nova.compute.manager [req-93c6755f-18e2-4b60-a0fd-02c479734a10 req-4645e4f9-fbd5-4cf8-9606-87dc61832165 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Refreshing instance network info cache due to event network-changed-735988ac-a658-458d-975f-872cfa132420. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.471 253665 DEBUG oslo_concurrency.lockutils [req-93c6755f-18e2-4b60-a0fd-02c479734a10 req-4645e4f9-fbd5-4cf8-9606-87dc61832165 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.780 253665 DEBUG nova.network.neutron [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:35:40 np0005532048 nova_compute[253661]: 2025-11-22 09:35:40.892 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.132 253665 DEBUG nova.network.neutron [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Successfully created port: 21b54230-3ad3-4b65-b752-5a1b0472844e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.381 253665 DEBUG nova.network.neutron [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Updating instance_info_cache with network_info: [{"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.408 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Releasing lock "refresh_cache-4bcc50c8-3188-45f6-aa14-994c5ab8b966" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.409 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Instance network_info: |[{"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.410 253665 DEBUG oslo_concurrency.lockutils [req-f3c6bcbd-a3e9-4a3a-8b77-76ee960a0153 req-94ed301b-3382-49fe-9f93-770262ab5272 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-4bcc50c8-3188-45f6-aa14-994c5ab8b966" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.410 253665 DEBUG nova.network.neutron [req-f3c6bcbd-a3e9-4a3a-8b77-76ee960a0153 req-94ed301b-3382-49fe-9f93-770262ab5272 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Refreshing network info cache for port a0713d25-85db-4bb0-9be1-0cb5253aa017 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.413 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Start _get_guest_xml network_info=[{"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.424 253665 WARNING nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.429 253665 DEBUG nova.virt.libvirt.host [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.430 253665 DEBUG nova.virt.libvirt.host [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.438 253665 DEBUG nova.virt.libvirt.host [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.439 253665 DEBUG nova.virt.libvirt.host [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.439 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.440 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.440 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.440 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.440 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.441 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.441 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.441 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.441 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.442 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.442 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.442 253665 DEBUG nova.virt.hardware [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.446 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 314 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 186 op/s
Nov 22 04:35:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:35:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/218277084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.943 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.975 253665 DEBUG nova.storage.rbd_utils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:41 np0005532048 nova_compute[253661]: 2025-11-22 09:35:41.980 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.092 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.232 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.255 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.256 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:35:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2527630677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.457 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.459 253665 DEBUG nova.virt.libvirt.vif [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:35:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1440739346',display_name='tempest-ServersNegativeTestJSON-server-1440739346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1440739346',id=111,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-pm1o12oq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:35:37Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=4bcc50c8-3188-45f6-aa14-994c5ab8b966,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.459 253665 DEBUG nova.network.os_vif_util [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.460 253665 DEBUG nova.network.os_vif_util [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:6e:95,bridge_name='br-int',has_traffic_filtering=True,id=a0713d25-85db-4bb0-9be1-0cb5253aa017,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0713d25-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.461 253665 DEBUG nova.objects.instance [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4bcc50c8-3188-45f6-aa14-994c5ab8b966 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.474 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  <uuid>4bcc50c8-3188-45f6-aa14-994c5ab8b966</uuid>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  <name>instance-0000006f</name>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersNegativeTestJSON-server-1440739346</nova:name>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:35:41</nova:creationTime>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:35:42 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:        <nova:user uuid="31c7a4aa8fa340d2881ddc3ed426b6db">tempest-ServersNegativeTestJSON-1692723590-project-member</nova:user>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:        <nova:project uuid="a31947dfacfc450ba028c42968f103b2">tempest-ServersNegativeTestJSON-1692723590</nova:project>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:        <nova:port uuid="a0713d25-85db-4bb0-9be1-0cb5253aa017">
Nov 22 04:35:42 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <entry name="serial">4bcc50c8-3188-45f6-aa14-994c5ab8b966</entry>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <entry name="uuid">4bcc50c8-3188-45f6-aa14-994c5ab8b966</entry>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk">
Nov 22 04:35:42 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:35:42 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk.config">
Nov 22 04:35:42 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:35:42 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:75:6e:95"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <target dev="tapa0713d25-85"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966/console.log" append="off"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:35:42 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:35:42 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:35:42 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:35:42 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.475 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Preparing to wait for external event network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.475 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.475 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.475 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.476 253665 DEBUG nova.virt.libvirt.vif [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:35:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1440739346',display_name='tempest-ServersNegativeTestJSON-server-1440739346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1440739346',id=111,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-pm1o12oq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:35:37Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=4bcc50c8-3188-45f6-aa14-994c5ab8b966,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.476 253665 DEBUG nova.network.os_vif_util [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.476 253665 DEBUG nova.network.os_vif_util [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:6e:95,bridge_name='br-int',has_traffic_filtering=True,id=a0713d25-85db-4bb0-9be1-0cb5253aa017,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0713d25-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.477 253665 DEBUG os_vif [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:6e:95,bridge_name='br-int',has_traffic_filtering=True,id=a0713d25-85db-4bb0-9be1-0cb5253aa017,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0713d25-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.477 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.478 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.478 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.481 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.482 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0713d25-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.482 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0713d25-85, col_values=(('external_ids', {'iface-id': 'a0713d25-85db-4bb0-9be1-0cb5253aa017', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:6e:95', 'vm-uuid': '4bcc50c8-3188-45f6-aa14-994c5ab8b966'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.484 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:42 np0005532048 NetworkManager[48920]: <info>  [1763804142.4851] manager: (tapa0713d25-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/457)
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.486 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.493 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.494 253665 INFO os_vif [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:6e:95,bridge_name='br-int',has_traffic_filtering=True,id=a0713d25-85db-4bb0-9be1-0cb5253aa017,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0713d25-85')#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.559 253665 DEBUG nova.network.neutron [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Updating instance_info_cache with network_info: [{"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.572 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.573 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.573 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No VIF found with MAC fa:16:3e:75:6e:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.574 253665 INFO nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Using config drive#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.616 253665 DEBUG nova.storage.rbd_utils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.625 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.626 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Instance network_info: |[{"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.626 253665 DEBUG oslo_concurrency.lockutils [req-93c6755f-18e2-4b60-a0fd-02c479734a10 req-4645e4f9-fbd5-4cf8-9606-87dc61832165 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.626 253665 DEBUG nova.network.neutron [req-93c6755f-18e2-4b60-a0fd-02c479734a10 req-4645e4f9-fbd5-4cf8-9606-87dc61832165 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Refreshing network info cache for port 735988ac-a658-458d-975f-872cfa132420 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.632 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Start _get_guest_xml network_info=[{"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.653 253665 WARNING nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.660 253665 DEBUG nova.virt.libvirt.host [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.663 253665 DEBUG nova.virt.libvirt.host [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.667 253665 DEBUG nova.virt.libvirt.host [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.667 253665 DEBUG nova.virt.libvirt.host [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.670 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.670 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.671 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.671 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.671 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.672 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.672 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.672 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.672 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.673 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.673 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.673 253665 DEBUG nova.virt.hardware [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.677 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:35:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3097962855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.794 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.872 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.873 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.887 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.888 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.893 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.893 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.897 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:35:42 np0005532048 nova_compute[253661]: 2025-11-22 09:35:42.897 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.023 253665 DEBUG nova.network.neutron [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Successfully updated port: 21b54230-3ad3-4b65-b752-5a1b0472844e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.043 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.043 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.043 253665 DEBUG nova.network.neutron [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.119 253665 DEBUG nova.compute.manager [req-7bd616e4-d04e-4e6d-b0ae-3ad111febf14 req-2c4355ff-834b-4d8e-9a2d-43e41083be6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-changed-21b54230-3ad3-4b65-b752-5a1b0472844e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.120 253665 DEBUG nova.compute.manager [req-7bd616e4-d04e-4e6d-b0ae-3ad111febf14 req-2c4355ff-834b-4d8e-9a2d-43e41083be6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Refreshing instance network info cache due to event network-changed-21b54230-3ad3-4b65-b752-5a1b0472844e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.120 253665 DEBUG oslo_concurrency.lockutils [req-7bd616e4-d04e-4e6d-b0ae-3ad111febf14 req-2c4355ff-834b-4d8e-9a2d-43e41083be6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.166 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.167 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3224MB free_disk=59.84364700317383GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.167 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.167 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.215 253665 DEBUG nova.network.neutron [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:35:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:35:43 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2006528993' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.246 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.277 253665 DEBUG nova.storage.rbd_utils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7b3234ab-db15-43a8-8093-469f6e62db91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.284 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.334 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 3f8530ae-f429-4807-81ca-84d8f964a38c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.334 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 2a866674-0c27-4cfc-89f2-dfe8e9768900 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.334 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance cf5e117a-f203-4c8f-b795-01fb355ca5e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.335 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 7b3234ab-db15-43a8-8093-469f6e62db91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.335 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 4bcc50c8-3188-45f6-aa14-994c5ab8b966 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.335 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.335 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.335 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.348 253665 INFO nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Creating config drive at /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966/disk.config#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.353 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyii9gv5m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.508 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyii9gv5m" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.538 253665 DEBUG nova.storage.rbd_utils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.542 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966/disk.config 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.596 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.756 253665 DEBUG oslo_concurrency.processutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966/disk.config 4bcc50c8-3188-45f6-aa14-994c5ab8b966_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.757 253665 INFO nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Deleting local config drive /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966/disk.config because it was imported into RBD.#033[00m
Nov 22 04:35:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:35:43 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/326226583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.812 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.813 253665 DEBUG nova.virt.libvirt.vif [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:35:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-340448396',display_name='tempest-TestGettingAddress-server-340448396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-340448396',id=110,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYbMWe4z302rooKb1Fl9KsWEsQq9eJv7uwrie/+E2IEF73PZ7Q/MP1db2I4qPqzgaz7gDwBLtve+rM5AYXA2YyYtxocXJ5KxIrfavkYohl0lPkuqWw4VEg4gSQE4G/PeA==',key_name='tempest-TestGettingAddress-1586923381',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-quxnyf0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:35:36Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7b3234ab-db15-43a8-8093-469f6e62db91,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.814 253665 DEBUG nova.network.os_vif_util [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.815 253665 DEBUG nova.network.os_vif_util [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:53:06,bridge_name='br-int',has_traffic_filtering=True,id=735988ac-a658-458d-975f-872cfa132420,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap735988ac-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.816 253665 DEBUG nova.objects.instance [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7b3234ab-db15-43a8-8093-469f6e62db91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:43 np0005532048 kernel: tapa0713d25-85: entered promiscuous mode
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.826 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:43 np0005532048 NetworkManager[48920]: <info>  [1763804143.8281] manager: (tapa0713d25-85): new Tun device (/org/freedesktop/NetworkManager/Devices/458)
Nov 22 04:35:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:43Z|01103|binding|INFO|Claiming lport a0713d25-85db-4bb0-9be1-0cb5253aa017 for this chassis.
Nov 22 04:35:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:43Z|01104|binding|INFO|a0713d25-85db-4bb0-9be1-0cb5253aa017: Claiming fa:16:3e:75:6e:95 10.100.0.9
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.832 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  <uuid>7b3234ab-db15-43a8-8093-469f6e62db91</uuid>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  <name>instance-0000006e</name>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestGettingAddress-server-340448396</nova:name>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:35:42</nova:creationTime>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:35:43 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:        <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:        <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:        <nova:port uuid="735988ac-a658-458d-975f-872cfa132420">
Nov 22 04:35:43 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe0e:5306" ipVersion="6"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <entry name="serial">7b3234ab-db15-43a8-8093-469f6e62db91</entry>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <entry name="uuid">7b3234ab-db15-43a8-8093-469f6e62db91</entry>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/7b3234ab-db15-43a8-8093-469f6e62db91_disk">
Nov 22 04:35:43 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:35:43 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/7b3234ab-db15-43a8-8093-469f6e62db91_disk.config">
Nov 22 04:35:43 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:35:43 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:0e:53:06"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <target dev="tap735988ac-a6"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91/console.log" append="off"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:35:43 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:35:43 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:35:43 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:35:43 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.832 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Preparing to wait for external event network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.835 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.835 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.835 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.836 253665 DEBUG nova.virt.libvirt.vif [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:35:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-340448396',display_name='tempest-TestGettingAddress-server-340448396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-340448396',id=110,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYbMWe4z302rooKb1Fl9KsWEsQq9eJv7uwrie/+E2IEF73PZ7Q/MP1db2I4qPqzgaz7gDwBLtve+rM5AYXA2YyYtxocXJ5KxIrfavkYohl0lPkuqWw4VEg4gSQE4G/PeA==',key_name='tempest-TestGettingAddress-1586923381',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-quxnyf0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:35:36Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7b3234ab-db15-43a8-8093-469f6e62db91,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.836 253665 DEBUG nova.network.os_vif_util [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:35:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:43.838 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:6e:95 10.100.0.9'], port_security=['fa:16:3e:75:6e:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4bcc50c8-3188-45f6-aa14-994c5ab8b966', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a990966c-0851-457f-bdd5-27cf73032674', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31947dfacfc450ba028c42968f103b2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '89642540-7944-41ba-8ed6-91045af1b213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bafabe2a-ec0e-41bf-bad4-b88fdf9f208a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a0713d25-85db-4bb0-9be1-0cb5253aa017) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.838 253665 DEBUG nova.network.os_vif_util [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:53:06,bridge_name='br-int',has_traffic_filtering=True,id=735988ac-a658-458d-975f-872cfa132420,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap735988ac-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.839 253665 DEBUG os_vif [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:53:06,bridge_name='br-int',has_traffic_filtering=True,id=735988ac-a658-458d-975f-872cfa132420,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap735988ac-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:35:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:43.839 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a0713d25-85db-4bb0-9be1-0cb5253aa017 in datapath a990966c-0851-457f-bdd5-27cf73032674 bound to our chassis#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.839 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.840 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.840 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:35:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:43.841 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a990966c-0851-457f-bdd5-27cf73032674#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.844 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.845 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap735988ac-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.845 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap735988ac-a6, col_values=(('external_ids', {'iface-id': '735988ac-a658-458d-975f-872cfa132420', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:53:06', 'vm-uuid': '7b3234ab-db15-43a8-8093-469f6e62db91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.847 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.849 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:35:43 np0005532048 NetworkManager[48920]: <info>  [1763804143.8511] manager: (tap735988ac-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/459)
Nov 22 04:35:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:43Z|01105|binding|INFO|Setting lport a0713d25-85db-4bb0-9be1-0cb5253aa017 ovn-installed in OVS
Nov 22 04:35:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:43Z|01106|binding|INFO|Setting lport a0713d25-85db-4bb0-9be1-0cb5253aa017 up in Southbound
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.859 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:43 np0005532048 systemd-udevd[369033]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.866 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.868 253665 INFO os_vif [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:53:06,bridge_name='br-int',has_traffic_filtering=True,id=735988ac-a658-458d-975f-872cfa132420,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap735988ac-a6')#033[00m
Nov 22 04:35:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 394 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.1 MiB/s wr, 238 op/s
Nov 22 04:35:43 np0005532048 systemd-machined[215941]: New machine qemu-138-instance-0000006f.
Nov 22 04:35:43 np0005532048 NetworkManager[48920]: <info>  [1763804143.8806] device (tapa0713d25-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:35:43 np0005532048 NetworkManager[48920]: <info>  [1763804143.8813] device (tapa0713d25-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:35:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:43.884 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[90d71db5-0af1-4f6b-9143-e46d7ce78e61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:43 np0005532048 systemd[1]: Started Virtual Machine qemu-138-instance-0000006f.
Nov 22 04:35:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:43.926 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa3e637-5f3a-42e9-bde6-6f12f075ccde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:43.932 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ffcab0-ef3b-423c-ac2f-7f6f30d53b38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.949 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.949 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.950 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:0e:53:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.950 253665 INFO nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Using config drive#033[00m
Nov 22 04:35:43 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:43.982 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[35b7fd92-f2e9-4df3-be16-308965acf8f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:43 np0005532048 nova_compute[253661]: 2025-11-22 09:35:43.982 253665 DEBUG nova.storage.rbd_utils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7b3234ab-db15-43a8-8093-469f6e62db91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.014 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a63ddbc5-9358-467d-964d-1a3da2f1d82e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa990966c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6f:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698042, 'reachable_time': 32119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369067, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.036 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cc02c109-f1a2-463c-8160-336e51affda3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa990966c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698058, 'tstamp': 698058}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369068, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa990966c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698062, 'tstamp': 698062}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369068, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.039 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa990966c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.041 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.048 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa990966c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.049 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:35:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.049 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa990966c-00, col_values=(('external_ids', {'iface-id': '97798f16-a2eb-434e-aad3-3ece954bb8e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.049 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.051 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.102 253665 DEBUG nova.network.neutron [req-f3c6bcbd-a3e9-4a3a-8b77-76ee960a0153 req-94ed301b-3382-49fe-9f93-770262ab5272 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Updated VIF entry in instance network info cache for port a0713d25-85db-4bb0-9be1-0cb5253aa017. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.102 253665 DEBUG nova.network.neutron [req-f3c6bcbd-a3e9-4a3a-8b77-76ee960a0153 req-94ed301b-3382-49fe-9f93-770262ab5272 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Updating instance_info_cache with network_info: [{"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.126 253665 DEBUG oslo_concurrency.lockutils [req-f3c6bcbd-a3e9-4a3a-8b77-76ee960a0153 req-94ed301b-3382-49fe-9f93-770262ab5272 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-4bcc50c8-3188-45f6-aa14-994c5ab8b966" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:35:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:35:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2672916499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.173 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.181 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.197 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:35:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:44Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:42:5a 10.100.0.3
Nov 22 04:35:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:44Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:42:5a 10.100.0.3
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.268 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.268 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.421 253665 INFO nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Creating config drive at /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91/disk.config#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.426 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdv3xz0qg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.587 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdv3xz0qg" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.623 253665 DEBUG nova.storage.rbd_utils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7b3234ab-db15-43a8-8093-469f6e62db91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.634 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91/disk.config 7b3234ab-db15-43a8-8093-469f6e62db91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.682 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804144.6624904, 4bcc50c8-3188-45f6-aa14-994c5ab8b966 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.683 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] VM Started (Lifecycle Event)#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.702 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.709 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804144.662754, 4bcc50c8-3188-45f6-aa14-994c5ab8b966 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.709 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.725 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.729 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.746 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.831 253665 DEBUG oslo_concurrency.processutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91/disk.config 7b3234ab-db15-43a8-8093-469f6e62db91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.831 253665 INFO nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Deleting local config drive /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91/disk.config because it was imported into RBD.#033[00m
Nov 22 04:35:44 np0005532048 kernel: tap735988ac-a6: entered promiscuous mode
Nov 22 04:35:44 np0005532048 systemd-udevd[369038]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:35:44 np0005532048 NetworkManager[48920]: <info>  [1763804144.8988] manager: (tap735988ac-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/460)
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:44Z|01107|binding|INFO|Claiming lport 735988ac-a658-458d-975f-872cfa132420 for this chassis.
Nov 22 04:35:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:44Z|01108|binding|INFO|735988ac-a658-458d-975f-872cfa132420: Claiming fa:16:3e:0e:53:06 10.100.0.13 2001:db8::f816:3eff:fe0e:5306
Nov 22 04:35:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.914 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:53:06 10.100.0.13 2001:db8::f816:3eff:fe0e:5306'], port_security=['fa:16:3e:0e:53:06 10.100.0.13 2001:db8::f816:3eff:fe0e:5306'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:fe0e:5306/64', 'neutron:device_id': '7b3234ab-db15-43a8-8093-469f6e62db91', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7d5326a8-c171-4fdf-9f85-e6536ded5f96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b741a31-36e5-42a1-8d34-26158fe9deb6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=735988ac-a658-458d-975f-872cfa132420) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:35:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.916 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 735988ac-a658-458d-975f-872cfa132420 in datapath d3e4e01e-5e3e-4572-b404-ee47aaec1186 bound to our chassis#033[00m
Nov 22 04:35:44 np0005532048 NetworkManager[48920]: <info>  [1763804144.9195] device (tap735988ac-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:35:44 np0005532048 NetworkManager[48920]: <info>  [1763804144.9206] device (tap735988ac-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:35:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.920 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3e4e01e-5e3e-4572-b404-ee47aaec1186#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:44Z|01109|binding|INFO|Setting lport 735988ac-a658-458d-975f-872cfa132420 ovn-installed in OVS
Nov 22 04:35:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:44Z|01110|binding|INFO|Setting lport 735988ac-a658-458d-975f-872cfa132420 up in Southbound
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.933 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.941 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[48a7068f-291a-4c01-9031-594db39e4164]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:44 np0005532048 systemd-machined[215941]: New machine qemu-139-instance-0000006e.
Nov 22 04:35:44 np0005532048 systemd[1]: Started Virtual Machine qemu-139-instance-0000006e.
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.964 253665 DEBUG nova.network.neutron [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.984 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e801eb-0191-49b1-9ccd-fd38d9001943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.987 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.988 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Instance network_info: |[{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.988 253665 DEBUG oslo_concurrency.lockutils [req-7bd616e4-d04e-4e6d-b0ae-3ad111febf14 req-2c4355ff-834b-4d8e-9a2d-43e41083be6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.989 253665 DEBUG nova.network.neutron [req-7bd616e4-d04e-4e6d-b0ae-3ad111febf14 req-2c4355ff-834b-4d8e-9a2d-43e41083be6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Refreshing network info cache for port 21b54230-3ad3-4b65-b752-5a1b0472844e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:35:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:44.989 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7382bfb6-6ccd-4a91-9684-f8d68a7c997c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:44 np0005532048 nova_compute[253661]: 2025-11-22 09:35:44.992 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Start _get_guest_xml network_info=[{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.016 253665 WARNING nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.022 253665 DEBUG nova.virt.libvirt.host [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.023 253665 DEBUG nova.virt.libvirt.host [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.027 253665 DEBUG nova.virt.libvirt.host [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.027 253665 DEBUG nova.virt.libvirt.host [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.028 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.028 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.028 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.028 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.029 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.029 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.029 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.029 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.029 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.029 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.029 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.030 253665 DEBUG nova.virt.hardware [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.032 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:45.033 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a8a370-cabc-4555-a662-6c8dfd3b6800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:45.058 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2de7c358-78c2-475f-9572-39bb8e02762c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3e4e01e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:a9:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696240, 'reachable_time': 35247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369181, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:45.088 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2d5a5e-65e1-4f02-91b4-c64f35e7d7cd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd3e4e01e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696257, 'tstamp': 696257}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369183, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd3e4e01e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696261, 'tstamp': 696261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369183, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:45.090 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3e4e01e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.092 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:45.093 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3e4e01e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:45.093 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:35:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:45.093 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3e4e01e-50, col_values=(('external_ids', {'iface-id': 'ff0f834b-9623-4226-98e1-741634e7eb05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:45.094 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.381 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804145.3802052, 7b3234ab-db15-43a8-8093-469f6e62db91 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.381 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] VM Started (Lifecycle Event)#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.407 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.412 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804145.3804626, 7b3234ab-db15-43a8-8093-469f6e62db91 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.412 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.429 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.435 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.482 253665 DEBUG nova.compute.manager [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received event network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.482 253665 DEBUG oslo_concurrency.lockutils [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.482 253665 DEBUG oslo_concurrency.lockutils [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.483 253665 DEBUG oslo_concurrency.lockutils [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.483 253665 DEBUG nova.compute.manager [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Processing event network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.483 253665 DEBUG nova.compute.manager [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received event network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.483 253665 DEBUG oslo_concurrency.lockutils [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.484 253665 DEBUG oslo_concurrency.lockutils [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.484 253665 DEBUG oslo_concurrency.lockutils [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.484 253665 DEBUG nova.compute.manager [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] No waiting events found dispatching network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.484 253665 WARNING nova.compute.manager [req-83bed230-cb39-40c4-95e4-3d82a814996a req-a5c0399b-66f3-4331-a8e3-06b6547013f1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received unexpected event network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.485 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.490 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.495 253665 INFO nova.virt.libvirt.driver [-] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Instance spawned successfully.#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.495 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:35:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:35:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099570431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.533 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.563 253665 DEBUG nova.storage.rbd_utils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.569 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.628 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.629 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804145.4902775, 4bcc50c8-3188-45f6-aa14-994c5ab8b966 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.630 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.636 253665 DEBUG nova.network.neutron [req-93c6755f-18e2-4b60-a0fd-02c479734a10 req-4645e4f9-fbd5-4cf8-9606-87dc61832165 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Updated VIF entry in instance network info cache for port 735988ac-a658-458d-975f-872cfa132420. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.637 253665 DEBUG nova.network.neutron [req-93c6755f-18e2-4b60-a0fd-02c479734a10 req-4645e4f9-fbd5-4cf8-9606-87dc61832165 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Updating instance_info_cache with network_info: [{"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.649 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.649 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.650 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.651 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.651 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.652 253665 DEBUG nova.virt.libvirt.driver [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.661 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.662 253665 DEBUG oslo_concurrency.lockutils [req-93c6755f-18e2-4b60-a0fd-02c479734a10 req-4645e4f9-fbd5-4cf8-9606-87dc61832165 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.666 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.687 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.711 253665 INFO nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Took 8.51 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.711 253665 DEBUG nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.774 253665 INFO nova.compute.manager [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Took 9.65 seconds to build instance.#033[00m
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.789 253665 DEBUG oslo_concurrency.lockutils [None req-8f89763c-60f5-43e7-927f-fcfc8ca916ac 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 394 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.1 MiB/s wr, 173 op/s
Nov 22 04:35:45 np0005532048 nova_compute[253661]: 2025-11-22 09:35:45.893 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:35:46 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3412478433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.068 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.069 253665 DEBUG nova.virt.libvirt.vif [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:35:39Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.070 253665 DEBUG nova.network.os_vif_util [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.071 253665 DEBUG nova.network.os_vif_util [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:ee:9e,bridge_name='br-int',has_traffic_filtering=True,id=21b54230-3ad3-4b65-b752-5a1b0472844e,network=Network(5c1e456e-4030-4169-b20f-3aec7a20c24e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b54230-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.072 253665 DEBUG nova.objects.instance [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.086 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  <uuid>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</uuid>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  <name>instance-00000070</name>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestNetworkBasicOps-server-1782158666</nova:name>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:35:45</nova:creationTime>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:35:46 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:        <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:        <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:        <nova:port uuid="21b54230-3ad3-4b65-b752-5a1b0472844e">
Nov 22 04:35:46 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <entry name="serial">c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <entry name="uuid">c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk">
Nov 22 04:35:46 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:35:46 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config">
Nov 22 04:35:46 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:35:46 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:ad:ee:9e"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <target dev="tap21b54230-3a"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log" append="off"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:35:46 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:35:46 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:35:46 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:35:46 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.087 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Preparing to wait for external event network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.087 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.087 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.087 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.088 253665 DEBUG nova.virt.libvirt.vif [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:35:39Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.088 253665 DEBUG nova.network.os_vif_util [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.089 253665 DEBUG nova.network.os_vif_util [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:ee:9e,bridge_name='br-int',has_traffic_filtering=True,id=21b54230-3ad3-4b65-b752-5a1b0472844e,network=Network(5c1e456e-4030-4169-b20f-3aec7a20c24e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b54230-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.089 253665 DEBUG os_vif [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:ee:9e,bridge_name='br-int',has_traffic_filtering=True,id=21b54230-3ad3-4b65-b752-5a1b0472844e,network=Network(5c1e456e-4030-4169-b20f-3aec7a20c24e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b54230-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.090 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.090 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.091 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.093 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.093 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21b54230-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.094 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21b54230-3a, col_values=(('external_ids', {'iface-id': '21b54230-3ad3-4b65-b752-5a1b0472844e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:ee:9e', 'vm-uuid': 'c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.095 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:46 np0005532048 NetworkManager[48920]: <info>  [1763804146.0965] manager: (tap21b54230-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.097 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.103 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.103 253665 INFO os_vif [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:ee:9e,bridge_name='br-int',has_traffic_filtering=True,id=21b54230-3ad3-4b65-b752-5a1b0472844e,network=Network(5c1e456e-4030-4169-b20f-3aec7a20c24e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b54230-3a')#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.171 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.172 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.172 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:ad:ee:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.173 253665 INFO nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Using config drive#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.197 253665 DEBUG nova.storage.rbd_utils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.269 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.269 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.486 253665 INFO nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Creating config drive at /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/disk.config#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.490 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdl9cwtdx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.539 253665 DEBUG nova.network.neutron [req-7bd616e4-d04e-4e6d-b0ae-3ad111febf14 req-2c4355ff-834b-4d8e-9a2d-43e41083be6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updated VIF entry in instance network info cache for port 21b54230-3ad3-4b65-b752-5a1b0472844e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.541 253665 DEBUG nova.network.neutron [req-7bd616e4-d04e-4e6d-b0ae-3ad111febf14 req-2c4355ff-834b-4d8e-9a2d-43e41083be6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.554 253665 DEBUG oslo_concurrency.lockutils [req-7bd616e4-d04e-4e6d-b0ae-3ad111febf14 req-2c4355ff-834b-4d8e-9a2d-43e41083be6b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.644 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdl9cwtdx" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:46Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:ea:ba 10.100.0.4
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.682 253665 DEBUG nova.storage.rbd_utils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.687 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/disk.config c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.878 253665 DEBUG oslo_concurrency.processutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/disk.config c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.879 253665 INFO nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Deleting local config drive /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/disk.config because it was imported into RBD.#033[00m
Nov 22 04:35:46 np0005532048 kernel: tap21b54230-3a: entered promiscuous mode
Nov 22 04:35:46 np0005532048 NetworkManager[48920]: <info>  [1763804146.9465] manager: (tap21b54230-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/462)
Nov 22 04:35:46 np0005532048 nova_compute[253661]: 2025-11-22 09:35:46.990 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:46Z|01111|binding|INFO|Claiming lport 21b54230-3ad3-4b65-b752-5a1b0472844e for this chassis.
Nov 22 04:35:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:46Z|01112|binding|INFO|21b54230-3ad3-4b65-b752-5a1b0472844e: Claiming fa:16:3e:ad:ee:9e 10.100.0.5
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.008 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:ee:9e 10.100.0.5'], port_security=['fa:16:3e:ad:ee:9e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c1e456e-4030-4169-b20f-3aec7a20c24e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '624d1a5b-7d33-4814-8a02-c8e1e513249a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a19e22c3-d4f6-4134-81df-8e8895569f77, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=21b54230-3ad3-4b65-b752-5a1b0472844e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.010 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 21b54230-3ad3-4b65-b752-5a1b0472844e in datapath 5c1e456e-4030-4169-b20f-3aec7a20c24e bound to our chassis#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.012 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c1e456e-4030-4169-b20f-3aec7a20c24e#033[00m
Nov 22 04:35:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:47Z|01113|binding|INFO|Setting lport 21b54230-3ad3-4b65-b752-5a1b0472844e up in Southbound
Nov 22 04:35:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:47Z|01114|binding|INFO|Setting lport 21b54230-3ad3-4b65-b752-5a1b0472844e ovn-installed in OVS
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.024 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.030 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:47 np0005532048 systemd-udevd[369360]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.037 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[278be5e0-0997-429f-9621-631a9131b610]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.048 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c1e456e-41 in ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:35:47 np0005532048 systemd-machined[215941]: New machine qemu-140-instance-00000070.
Nov 22 04:35:47 np0005532048 NetworkManager[48920]: <info>  [1763804147.0511] device (tap21b54230-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:35:47 np0005532048 NetworkManager[48920]: <info>  [1763804147.0528] device (tap21b54230-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.051 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c1e456e-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.051 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5858e3c1-fe70-4fcd-ba15-670ce9bf0204]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.053 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d1bc8b-24f3-4c98-8dcd-2a2e0354c9fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:47 np0005532048 systemd[1]: Started Virtual Machine qemu-140-instance-00000070.
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.070 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[73f4cee3-ac33-4db5-90a3-ca6151fa274d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.101 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[35e741d0-107e-439c-9ce8-bc6654c2d9ec]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.141 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d4ce21-6494-452c-b049-610418abd22b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:47 np0005532048 NetworkManager[48920]: <info>  [1763804147.1557] manager: (tap5c1e456e-40): new Veth device (/org/freedesktop/NetworkManager/Devices/463)
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.160 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[840604b6-9b13-4198-8751-8107de0f777e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.197 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1c007a-e6cd-432e-aa99-bcfdbf85b3f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.202 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dc5ca214-60e4-496d-8c00-aa2ab20c32bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:47 np0005532048 NetworkManager[48920]: <info>  [1763804147.2317] device (tap5c1e456e-40): carrier: link connected
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.240 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ae2e81-504c-44aa-9d29-37589f40222c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.262 253665 DEBUG nova.compute.manager [req-f991ef9f-3340-4103-8b7a-f6b8660070d1 req-3cd7a07f-f718-4720-bb91-a0b3ad9deac9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.263 253665 DEBUG oslo_concurrency.lockutils [req-f991ef9f-3340-4103-8b7a-f6b8660070d1 req-3cd7a07f-f718-4720-bb91-a0b3ad9deac9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.263 253665 DEBUG oslo_concurrency.lockutils [req-f991ef9f-3340-4103-8b7a-f6b8660070d1 req-3cd7a07f-f718-4720-bb91-a0b3ad9deac9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.263 253665 DEBUG oslo_concurrency.lockutils [req-f991ef9f-3340-4103-8b7a-f6b8660070d1 req-3cd7a07f-f718-4720-bb91-a0b3ad9deac9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.263 253665 DEBUG nova.compute.manager [req-f991ef9f-3340-4103-8b7a-f6b8660070d1 req-3cd7a07f-f718-4720-bb91-a0b3ad9deac9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Processing event network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.268 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7fb8bc-0216-472c-af77-0b4ea1ccfe93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c1e456e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:c2:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699796, 'reachable_time': 20850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369393, 'error': None, 'target': 'ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.289 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[81cde3f2-28db-4aed-8daa-cf6bcea0880b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:c2f5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699796, 'tstamp': 699796}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369394, 'error': None, 'target': 'ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.313 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[427ab25f-b205-4fd4-a850-16614fb49651]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c1e456e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:c2:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699796, 'reachable_time': 20850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 369395, 'error': None, 'target': 'ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.356 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[764fbcf6-b20d-4b86-8354-50828edd6737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.454 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[373cc13f-5c09-4296-a83e-1c857c4e297f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.458 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c1e456e-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.459 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.459 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c1e456e-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:47 np0005532048 NetworkManager[48920]: <info>  [1763804147.4624] manager: (tap5c1e456e-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/464)
Nov 22 04:35:47 np0005532048 kernel: tap5c1e456e-40: entered promiscuous mode
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.467 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c1e456e-40, col_values=(('external_ids', {'iface-id': '3ff32fba-8fe7-4d58-94eb-b5f91ea2b9e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:47Z|01115|binding|INFO|Releasing lport 3ff32fba-8fe7-4d58-94eb-b5f91ea2b9e2 from this chassis (sb_readonly=0)
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.469 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.497 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.499 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c1e456e-4030-4169-b20f-3aec7a20c24e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c1e456e-4030-4169-b20f-3aec7a20c24e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.501 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[234031e9-a937-49b8-9c99-53efc63cdb90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.502 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-5c1e456e-4030-4169-b20f-3aec7a20c24e
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/5c1e456e-4030-4169-b20f-3aec7a20c24e.pid.haproxy
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 5c1e456e-4030-4169-b20f-3aec7a20c24e
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.504 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e', 'env', 'PROCESS_TAG=haproxy-5c1e456e-4030-4169-b20f-3aec7a20c24e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c1e456e-4030-4169-b20f-3aec7a20c24e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.567 253665 DEBUG nova.compute.manager [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.567 253665 DEBUG oslo_concurrency.lockutils [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.567 253665 DEBUG oslo_concurrency.lockutils [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.568 253665 DEBUG oslo_concurrency.lockutils [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.570 253665 DEBUG nova.compute.manager [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Processing event network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.571 253665 DEBUG nova.compute.manager [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.571 253665 DEBUG oslo_concurrency.lockutils [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.571 253665 DEBUG oslo_concurrency.lockutils [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.572 253665 DEBUG oslo_concurrency.lockutils [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.572 253665 DEBUG nova.compute.manager [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] No waiting events found dispatching network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.572 253665 WARNING nova.compute.manager [req-50c1b006-d469-484f-8784-e47b2fe2b74d req-f393331c-44e5-4ab1-a180-534f4744beee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received unexpected event network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.573 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.578 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804147.5779262, 7b3234ab-db15-43a8-8093-469f6e62db91 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.578 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.581 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.585 253665 INFO nova.virt.libvirt.driver [-] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Instance spawned successfully.#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.586 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.606 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.610 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.618 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.620 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.627 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.628 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.629 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.629 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.629 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.630 253665 DEBUG nova.virt.libvirt.driver [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.637 253665 INFO nova.virt.libvirt.driver [-] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Instance spawned successfully.#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.637 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.652 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.653 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804147.6116383, c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.653 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] VM Started (Lifecycle Event)#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.662 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.662 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.663 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.663 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.663 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.664 253665 DEBUG nova.virt.libvirt.driver [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.695 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.699 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.729 253665 INFO nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Took 11.30 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.730 253665 DEBUG nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.732 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.732 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804147.6121569, c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.732 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.746 253665 INFO nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Took 8.15 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.747 253665 DEBUG nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.775 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.783 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804147.6168694, c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.784 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.834 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.835 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.836 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.837 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.837 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.838 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.840 253665 INFO nova.compute.manager [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Terminating instance#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.843 253665 DEBUG nova.compute.manager [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.862 253665 INFO nova.compute.manager [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Took 12.49 seconds to build instance.#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.866 253665 INFO nova.compute.manager [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Took 9.37 seconds to build instance.#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.872 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:35:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 412 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.4 MiB/s wr, 275 op/s
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.898 253665 DEBUG oslo_concurrency.lockutils [None req-dae988a6-4884-40b5-b1d6-07a6683d5ec4 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.901 253665 DEBUG oslo_concurrency.lockutils [None req-89e73447-6dfd-4dbf-80a9-81833f8a2be4 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:47 np0005532048 kernel: tapa0713d25-85 (unregistering): left promiscuous mode
Nov 22 04:35:47 np0005532048 NetworkManager[48920]: <info>  [1763804147.9112] device (tapa0713d25-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.924 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:47Z|01116|binding|INFO|Releasing lport a0713d25-85db-4bb0-9be1-0cb5253aa017 from this chassis (sb_readonly=0)
Nov 22 04:35:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:47Z|01117|binding|INFO|Setting lport a0713d25-85db-4bb0-9be1-0cb5253aa017 down in Southbound
Nov 22 04:35:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:47Z|01118|binding|INFO|Removing iface tapa0713d25-85 ovn-installed in OVS
Nov 22 04:35:47 np0005532048 podman[369468]: 2025-11-22 09:35:47.9341241 +0000 UTC m=+0.071974812 container create 43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.934 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:47 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:47.937 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:6e:95 10.100.0.9'], port_security=['fa:16:3e:75:6e:95 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4bcc50c8-3188-45f6-aa14-994c5ab8b966', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a990966c-0851-457f-bdd5-27cf73032674', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31947dfacfc450ba028c42968f103b2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '89642540-7944-41ba-8ed6-91045af1b213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bafabe2a-ec0e-41bf-bad4-b88fdf9f208a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a0713d25-85db-4bb0-9be1-0cb5253aa017) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:35:47 np0005532048 nova_compute[253661]: 2025-11-22 09:35:47.954 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:47 np0005532048 systemd[1]: Started libpod-conmon-43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230.scope.
Nov 22 04:35:47 np0005532048 podman[369468]: 2025-11-22 09:35:47.897648043 +0000 UTC m=+0.035498785 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:35:47 np0005532048 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Nov 22 04:35:47 np0005532048 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006f.scope: Consumed 3.099s CPU time.
Nov 22 04:35:47 np0005532048 systemd-machined[215941]: Machine qemu-138-instance-0000006f terminated.
Nov 22 04:35:48 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:35:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8f47abe13203d442e803e0c61cc2d5415b1d609f3c4c552ce2b7214c39c4e00/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:35:48 np0005532048 podman[369468]: 2025-11-22 09:35:48.047029537 +0000 UTC m=+0.184880259 container init 43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:35:48 np0005532048 podman[369468]: 2025-11-22 09:35:48.056018751 +0000 UTC m=+0.193869463 container start 43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 22 04:35:48 np0005532048 neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e[369486]: [NOTICE]   (369490) : New worker (369493) forked
Nov 22 04:35:48 np0005532048 neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e[369486]: [NOTICE]   (369490) : Loading success.
Nov 22 04:35:48 np0005532048 nova_compute[253661]: 2025-11-22 09:35:48.149 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.150 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a0713d25-85db-4bb0-9be1-0cb5253aa017 in datapath a990966c-0851-457f-bdd5-27cf73032674 unbound from our chassis#033[00m
Nov 22 04:35:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.152 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a990966c-0851-457f-bdd5-27cf73032674#033[00m
Nov 22 04:35:48 np0005532048 nova_compute[253661]: 2025-11-22 09:35:48.166 253665 INFO nova.virt.libvirt.driver [-] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Instance destroyed successfully.#033[00m
Nov 22 04:35:48 np0005532048 nova_compute[253661]: 2025-11-22 09:35:48.168 253665 DEBUG nova.objects.instance [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'resources' on Instance uuid 4bcc50c8-3188-45f6-aa14-994c5ab8b966 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.173 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[82513854-c253-43a7-b7dd-5b3118895529]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:48 np0005532048 nova_compute[253661]: 2025-11-22 09:35:48.178 253665 DEBUG nova.virt.libvirt.vif [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1440739346',display_name='tempest-ServersNegativeTestJSON-server-1440739346',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1440739346',id=111,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-pm1o12oq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:45Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=4bcc50c8-3188-45f6-aa14-994c5ab8b966,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:35:48 np0005532048 nova_compute[253661]: 2025-11-22 09:35:48.180 253665 DEBUG nova.network.os_vif_util [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "address": "fa:16:3e:75:6e:95", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0713d25-85", "ovs_interfaceid": "a0713d25-85db-4bb0-9be1-0cb5253aa017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:35:48 np0005532048 nova_compute[253661]: 2025-11-22 09:35:48.181 253665 DEBUG nova.network.os_vif_util [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:6e:95,bridge_name='br-int',has_traffic_filtering=True,id=a0713d25-85db-4bb0-9be1-0cb5253aa017,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0713d25-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:35:48 np0005532048 nova_compute[253661]: 2025-11-22 09:35:48.181 253665 DEBUG os_vif [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:6e:95,bridge_name='br-int',has_traffic_filtering=True,id=a0713d25-85db-4bb0-9be1-0cb5253aa017,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0713d25-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:35:48 np0005532048 nova_compute[253661]: 2025-11-22 09:35:48.184 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:48 np0005532048 nova_compute[253661]: 2025-11-22 09:35:48.185 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0713d25-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:48 np0005532048 nova_compute[253661]: 2025-11-22 09:35:48.188 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:48 np0005532048 nova_compute[253661]: 2025-11-22 09:35:48.193 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:35:48 np0005532048 nova_compute[253661]: 2025-11-22 09:35:48.199 253665 INFO os_vif [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:6e:95,bridge_name='br-int',has_traffic_filtering=True,id=a0713d25-85db-4bb0-9be1-0cb5253aa017,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0713d25-85')#033[00m
Nov 22 04:35:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.218 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[260539db-4db0-440e-b019-cdfd833dbdc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.223 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[583b3de0-8174-4dab-bc18-0264c524b9ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.278 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a05ed243-2962-4dc7-bd15-f388757adbe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.311 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[83c14402-b2bd-4c12-92a7-7ab3fc18c821]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa990966c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6f:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698042, 'reachable_time': 32119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369534, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.331 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d7fd0b03-8e31-4a9c-9596-2b1aca177468]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa990966c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698058, 'tstamp': 698058}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369535, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa990966c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 698062, 'tstamp': 698062}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369535, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.334 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa990966c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:48 np0005532048 nova_compute[253661]: 2025-11-22 09:35:48.336 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:48 np0005532048 nova_compute[253661]: 2025-11-22 09:35:48.337 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.338 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa990966c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.338 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:35:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.338 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa990966c-00, col_values=(('external_ids', {'iface-id': '97798f16-a2eb-434e-aad3-3ece954bb8e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:48.339 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:35:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.010 253665 INFO nova.virt.libvirt.driver [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Deleting instance files /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966_del#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.012 253665 INFO nova.virt.libvirt.driver [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Deletion of /var/lib/nova/instances/4bcc50c8-3188-45f6-aa14-994c5ab8b966_del complete#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.076 253665 INFO nova.compute.manager [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Took 1.23 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.077 253665 DEBUG oslo.service.loopingcall [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.078 253665 DEBUG nova.compute.manager [-] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.078 253665 DEBUG nova.network.neutron [-] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.352 253665 DEBUG nova.compute.manager [req-675587c2-e47a-49c3-b1af-2c7aaef6375e req-edacabd8-bc22-4cd5-bd19-00445d319367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.353 253665 DEBUG oslo_concurrency.lockutils [req-675587c2-e47a-49c3-b1af-2c7aaef6375e req-edacabd8-bc22-4cd5-bd19-00445d319367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.354 253665 DEBUG oslo_concurrency.lockutils [req-675587c2-e47a-49c3-b1af-2c7aaef6375e req-edacabd8-bc22-4cd5-bd19-00445d319367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.354 253665 DEBUG oslo_concurrency.lockutils [req-675587c2-e47a-49c3-b1af-2c7aaef6375e req-edacabd8-bc22-4cd5-bd19-00445d319367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.354 253665 DEBUG nova.compute.manager [req-675587c2-e47a-49c3-b1af-2c7aaef6375e req-edacabd8-bc22-4cd5-bd19-00445d319367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] No waiting events found dispatching network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.354 253665 WARNING nova.compute.manager [req-675587c2-e47a-49c3-b1af-2c7aaef6375e req-edacabd8-bc22-4cd5-bd19-00445d319367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received unexpected event network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e for instance with vm_state active and task_state None.#033[00m
Nov 22 04:35:49 np0005532048 podman[369537]: 2025-11-22 09:35:49.378664913 +0000 UTC m=+0.068438052 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:35:49 np0005532048 podman[369538]: 2025-11-22 09:35:49.391280997 +0000 UTC m=+0.080906402 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.688 253665 DEBUG nova.compute.manager [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received event network-vif-unplugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.689 253665 DEBUG oslo_concurrency.lockutils [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.690 253665 DEBUG oslo_concurrency.lockutils [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.690 253665 DEBUG oslo_concurrency.lockutils [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.691 253665 DEBUG nova.compute.manager [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] No waiting events found dispatching network-vif-unplugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.692 253665 DEBUG nova.compute.manager [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received event network-vif-unplugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.693 253665 DEBUG nova.compute.manager [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received event network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.693 253665 DEBUG oslo_concurrency.lockutils [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.694 253665 DEBUG oslo_concurrency.lockutils [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.695 253665 DEBUG oslo_concurrency.lockutils [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.696 253665 DEBUG nova.compute.manager [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] No waiting events found dispatching network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.697 253665 WARNING nova.compute.manager [req-75e6f118-72c5-4e17-b3c3-78126faab95f req-b035700c-8493-4f66-8ba8-8230b881e8f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received unexpected event network-vif-plugged-a0713d25-85db-4bb0-9be1-0cb5253aa017 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.877 253665 DEBUG nova.network.neutron [-] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:35:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 419 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 7.2 MiB/s wr, 328 op/s
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.906 253665 INFO nova.compute.manager [-] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Took 0.83 seconds to deallocate network for instance.#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.970 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:49 np0005532048 nova_compute[253661]: 2025-11-22 09:35:49.971 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:50 np0005532048 nova_compute[253661]: 2025-11-22 09:35:50.117 253665 DEBUG oslo_concurrency.processutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:35:50 np0005532048 nova_compute[253661]: 2025-11-22 09:35:50.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:35:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:35:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3141246286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:35:50 np0005532048 nova_compute[253661]: 2025-11-22 09:35:50.655 253665 DEBUG oslo_concurrency.processutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:35:50 np0005532048 nova_compute[253661]: 2025-11-22 09:35:50.661 253665 DEBUG nova.compute.provider_tree [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:35:50 np0005532048 nova_compute[253661]: 2025-11-22 09:35:50.677 253665 DEBUG nova.scheduler.client.report [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:35:50 np0005532048 nova_compute[253661]: 2025-11-22 09:35:50.697 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:50 np0005532048 nova_compute[253661]: 2025-11-22 09:35:50.743 253665 INFO nova.scheduler.client.report [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Deleted allocations for instance 4bcc50c8-3188-45f6-aa14-994c5ab8b966#033[00m
Nov 22 04:35:50 np0005532048 nova_compute[253661]: 2025-11-22 09:35:50.850 253665 DEBUG oslo_concurrency.lockutils [None req-fbab1328-2f95-467a-8e97-749560cbcecb 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "4bcc50c8-3188-45f6-aa14-994c5ab8b966" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:50 np0005532048 nova_compute[253661]: 2025-11-22 09:35:50.897 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:51 np0005532048 nova_compute[253661]: 2025-11-22 09:35:51.456 253665 DEBUG nova.compute.manager [req-fef464be-2061-4462-a9a0-740ce04cf06d req-170f3aa5-f6be-44a0-a640-7dc030eeee37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-changed-735988ac-a658-458d-975f-872cfa132420 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:51 np0005532048 nova_compute[253661]: 2025-11-22 09:35:51.457 253665 DEBUG nova.compute.manager [req-fef464be-2061-4462-a9a0-740ce04cf06d req-170f3aa5-f6be-44a0-a640-7dc030eeee37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Refreshing instance network info cache due to event network-changed-735988ac-a658-458d-975f-872cfa132420. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:35:51 np0005532048 nova_compute[253661]: 2025-11-22 09:35:51.458 253665 DEBUG oslo_concurrency.lockutils [req-fef464be-2061-4462-a9a0-740ce04cf06d req-170f3aa5-f6be-44a0-a640-7dc030eeee37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:35:51 np0005532048 nova_compute[253661]: 2025-11-22 09:35:51.458 253665 DEBUG oslo_concurrency.lockutils [req-fef464be-2061-4462-a9a0-740ce04cf06d req-170f3aa5-f6be-44a0-a640-7dc030eeee37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:35:51 np0005532048 nova_compute[253661]: 2025-11-22 09:35:51.459 253665 DEBUG nova.network.neutron [req-fef464be-2061-4462-a9a0-740ce04cf06d req-170f3aa5-f6be-44a0-a640-7dc030eeee37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Refreshing network info cache for port 735988ac-a658-458d-975f-872cfa132420 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:35:51 np0005532048 nova_compute[253661]: 2025-11-22 09:35:51.787 253665 DEBUG nova.compute.manager [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Received event network-vif-deleted-a0713d25-85db-4bb0-9be1-0cb5253aa017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:51 np0005532048 nova_compute[253661]: 2025-11-22 09:35:51.788 253665 DEBUG nova.compute.manager [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-changed-21b54230-3ad3-4b65-b752-5a1b0472844e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:35:51 np0005532048 nova_compute[253661]: 2025-11-22 09:35:51.788 253665 DEBUG nova.compute.manager [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Refreshing instance network info cache due to event network-changed-21b54230-3ad3-4b65-b752-5a1b0472844e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:35:51 np0005532048 nova_compute[253661]: 2025-11-22 09:35:51.789 253665 DEBUG oslo_concurrency.lockutils [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:35:51 np0005532048 nova_compute[253661]: 2025-11-22 09:35:51.789 253665 DEBUG oslo_concurrency.lockutils [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:35:51 np0005532048 nova_compute[253661]: 2025-11-22 09:35:51.789 253665 DEBUG nova.network.neutron [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Refreshing network info cache for port 21b54230-3ad3-4b65-b752-5a1b0472844e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:35:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 406 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.7 MiB/s wr, 318 op/s
Nov 22 04:35:52 np0005532048 nova_compute[253661]: 2025-11-22 09:35:52.094 253665 INFO nova.compute.manager [None req-1e6ff597-7c15-4aa2-9c00-57f066f5af60 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Get console output#033[00m
Nov 22 04:35:52 np0005532048 nova_compute[253661]: 2025-11-22 09:35:52.101 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:35:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:35:52
Nov 22 04:35:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:35:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:35:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'volumes', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images']
Nov 22 04:35:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:35:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:35:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:35:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:35:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:35:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:35:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:35:52 np0005532048 nova_compute[253661]: 2025-11-22 09:35:52.993 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:52 np0005532048 nova_compute[253661]: 2025-11-22 09:35:52.996 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:52 np0005532048 nova_compute[253661]: 2025-11-22 09:35:52.997 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:35:52 np0005532048 nova_compute[253661]: 2025-11-22 09:35:52.998 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:35:52 np0005532048 nova_compute[253661]: 2025-11-22 09:35:52.999 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.001 253665 INFO nova.compute.manager [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Terminating instance#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.004 253665 DEBUG nova.compute.manager [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:35:53 np0005532048 kernel: tap8da41f38-38 (unregistering): left promiscuous mode
Nov 22 04:35:53 np0005532048 NetworkManager[48920]: <info>  [1763804153.0626] device (tap8da41f38-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.070 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:53Z|01119|binding|INFO|Releasing lport 8da41f38-3812-4494-9cab-c4854772a569 from this chassis (sb_readonly=0)
Nov 22 04:35:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:53Z|01120|binding|INFO|Setting lport 8da41f38-3812-4494-9cab-c4854772a569 down in Southbound
Nov 22 04:35:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:35:53Z|01121|binding|INFO|Removing iface tap8da41f38-38 ovn-installed in OVS
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.075 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.079 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:ea:ba 10.100.0.4'], port_security=['fa:16:3e:02:ea:ba 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f8530ae-f429-4807-81ca-84d8f964a38c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20570e02-4f3c-425d-9564-924b275d70dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e0291e4d-91dd-4ee6-9074-0372622e253d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f04ee3-5979-45f2-bf12-c1c6b0bf9924, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8da41f38-3812-4494-9cab-c4854772a569) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:35:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.080 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8da41f38-3812-4494-9cab-c4854772a569 in datapath 20570e02-4f3c-425d-9564-924b275d70dc unbound from our chassis#033[00m
Nov 22 04:35:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.082 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 20570e02-4f3c-425d-9564-924b275d70dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:35:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.083 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ff2a7eb7-6b65-4e55-8a72-2bfa6bb908a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.084 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc namespace which is not needed anymore#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.103 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:53 np0005532048 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Nov 22 04:35:53 np0005532048 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006b.scope: Consumed 14.480s CPU time.
Nov 22 04:35:53 np0005532048 systemd-machined[215941]: Machine qemu-137-instance-0000006b terminated.
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.200 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.520 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.532 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.541 253665 INFO nova.virt.libvirt.driver [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Instance destroyed successfully.#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.541 253665 DEBUG nova.objects.instance [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'resources' on Instance uuid 3f8530ae-f429-4807-81ca-84d8f964a38c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.558 253665 DEBUG nova.virt.libvirt.vif [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:34:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1778115453',display_name='tempest-TestNetworkAdvancedServerOps-server-1778115453',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1778115453',id=107,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+r3c5G7EAzAvDolEqHNwqbmQvWxBEdieJcgY8c742Oy3jPYQetvou66qf/+0L4oLTbdYIoGxiGleOdIQIziTFL9k2EXWuKOZj/cVROyz5ALJrQCnYT9x1mSwpv+ywspw==',key_name='tempest-TestNetworkAdvancedServerOps-641041807',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:34:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-jtawb2ql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:33Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=3f8530ae-f429-4807-81ca-84d8f964a38c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.558 253665 DEBUG nova.network.os_vif_util [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "8da41f38-3812-4494-9cab-c4854772a569", "address": "fa:16:3e:02:ea:ba", "network": {"id": "20570e02-4f3c-425d-9564-924b275d70dc", "bridge": "br-int", "label": "tempest-network-smoke--304052739", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8da41f38-38", "ovs_interfaceid": "8da41f38-3812-4494-9cab-c4854772a569", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.559 253665 DEBUG nova.network.os_vif_util [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.560 253665 DEBUG os_vif [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.562 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.563 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8da41f38-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.565 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.568 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.570 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.573 253665 INFO os_vif [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:ea:ba,bridge_name='br-int',has_traffic_filtering=True,id=8da41f38-3812-4494-9cab-c4854772a569,network=Network(20570e02-4f3c-425d-9564-924b275d70dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8da41f38-38')#033[00m
Nov 22 04:35:53 np0005532048 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[367943]: [NOTICE]   (367971) : haproxy version is 2.8.14-c23fe91
Nov 22 04:35:53 np0005532048 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[367943]: [NOTICE]   (367971) : path to executable is /usr/sbin/haproxy
Nov 22 04:35:53 np0005532048 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[367943]: [WARNING]  (367971) : Exiting Master process...
Nov 22 04:35:53 np0005532048 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[367943]: [WARNING]  (367971) : Exiting Master process...
Nov 22 04:35:53 np0005532048 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[367943]: [ALERT]    (367971) : Current worker (367975) exited with code 143 (Terminated)
Nov 22 04:35:53 np0005532048 neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc[367943]: [WARNING]  (367971) : All workers exited. Exiting... (0)
Nov 22 04:35:53 np0005532048 systemd[1]: libpod-8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d.scope: Deactivated successfully.
Nov 22 04:35:53 np0005532048 podman[369621]: 2025-11-22 09:35:53.587513581 +0000 UTC m=+0.054508335 container died 8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:35:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d-userdata-shm.mount: Deactivated successfully.
Nov 22 04:35:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay-3dc8b44b411584b888a60508b08d5f7368fa8670e8b86e9a637d46fbcd929032-merged.mount: Deactivated successfully.
Nov 22 04:35:53 np0005532048 podman[369621]: 2025-11-22 09:35:53.641528905 +0000 UTC m=+0.108523649 container cleanup 8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 04:35:53 np0005532048 systemd[1]: libpod-conmon-8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d.scope: Deactivated successfully.
Nov 22 04:35:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:35:53 np0005532048 podman[369676]: 2025-11-22 09:35:53.73382341 +0000 UTC m=+0.066553056 container remove 8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:35:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.744 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0fa7a1-0814-45ee-8780-29aa2747cc0d]: (4, ('Sat Nov 22 09:35:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc (8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d)\n8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d\nSat Nov 22 09:35:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc (8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d)\n8462e956c131d554766393f48709109e6a20737b2b626d265d7a7dff3b0d8f2d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.746 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4d524829-2f26-41fb-83d4-ce8ffb46c6d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.747 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20570e02-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.748 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:53 np0005532048 kernel: tap20570e02-40: left promiscuous mode
Nov 22 04:35:53 np0005532048 nova_compute[253661]: 2025-11-22 09:35:53.776 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:35:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.782 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[50427d23-7caf-41a0-ba8b-b2e115a84ed9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.797 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[03153a1e-47e9-4bf6-9f4d-33297d53f29c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.798 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b1124b8a-9c84-421e-b8d1-c4d92a9f6af1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.817 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eb16df4d-e3f0-4878-8b46-bc06305b9a52]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698309, 'reachable_time': 29226, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369689, 'error': None, 'target': 'ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:53 np0005532048 systemd[1]: run-netns-ovnmeta\x2d20570e02\x2d4f3c\x2d425d\x2d9564\x2d924b275d70dc.mount: Deactivated successfully.
Nov 22 04:35:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.822 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-20570e02-4f3c-425d-9564-924b275d70dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:35:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:35:53.823 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e8492c93-3e6f-4cae-aa37-886220933560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:35:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 374 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 5.2 MiB/s wr, 397 op/s
Nov 22 04:35:54 np0005532048 nova_compute[253661]: 2025-11-22 09:35:54.038 253665 INFO nova.virt.libvirt.driver [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Deleting instance files /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c_del#033[00m
Nov 22 04:35:54 np0005532048 nova_compute[253661]: 2025-11-22 09:35:54.039 253665 INFO nova.virt.libvirt.driver [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Deletion of /var/lib/nova/instances/3f8530ae-f429-4807-81ca-84d8f964a38c_del complete#033[00m
Nov 22 04:35:54 np0005532048 nova_compute[253661]: 2025-11-22 09:35:54.116 253665 INFO nova.compute.manager [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Took 1.11 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:35:54 np0005532048 nova_compute[253661]: 2025-11-22 09:35:54.117 253665 DEBUG oslo.service.loopingcall [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:35:54 np0005532048 nova_compute[253661]: 2025-11-22 09:35:54.118 253665 DEBUG nova.compute.manager [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:35:54 np0005532048 nova_compute[253661]: 2025-11-22 09:35:54.118 253665 DEBUG nova.network.neutron [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.028 253665 DEBUG nova.network.neutron [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updated VIF entry in instance network info cache for port 21b54230-3ad3-4b65-b752-5a1b0472844e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.029 253665 DEBUG nova.network.neutron [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.052 253665 DEBUG oslo_concurrency.lockutils [req-9564f653-5cf9-445d-bd06-bf793e9818d5 req-1e61329d-0094-491f-8a9c-5384d21de6ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.402 253665 DEBUG nova.compute.manager [req-1587f4f2-8dd9-47f0-bec8-314f0650cb18 req-ae39c1bf-850f-4238-b298-fcacfea27aef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-changed-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.402 253665 DEBUG nova.compute.manager [req-1587f4f2-8dd9-47f0-bec8-314f0650cb18 req-ae39c1bf-850f-4238-b298-fcacfea27aef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Refreshing instance network info cache due to event network-changed-8da41f38-3812-4494-9cab-c4854772a569. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.403 253665 DEBUG oslo_concurrency.lockutils [req-1587f4f2-8dd9-47f0-bec8-314f0650cb18 req-ae39c1bf-850f-4238-b298-fcacfea27aef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.403 253665 DEBUG oslo_concurrency.lockutils [req-1587f4f2-8dd9-47f0-bec8-314f0650cb18 req-ae39c1bf-850f-4238-b298-fcacfea27aef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.403 253665 DEBUG nova.network.neutron [req-1587f4f2-8dd9-47f0-bec8-314f0650cb18 req-ae39c1bf-850f-4238-b298-fcacfea27aef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Refreshing network info cache for port 8da41f38-3812-4494-9cab-c4854772a569 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.406 253665 DEBUG nova.compute.manager [req-003ca775-757b-4acf-9be2-716972a4592c req-aee732bd-b31d-4f6c-9edc-2a4bf3793087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-unplugged-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.406 253665 DEBUG oslo_concurrency.lockutils [req-003ca775-757b-4acf-9be2-716972a4592c req-aee732bd-b31d-4f6c-9edc-2a4bf3793087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.406 253665 DEBUG oslo_concurrency.lockutils [req-003ca775-757b-4acf-9be2-716972a4592c req-aee732bd-b31d-4f6c-9edc-2a4bf3793087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.407 253665 DEBUG oslo_concurrency.lockutils [req-003ca775-757b-4acf-9be2-716972a4592c req-aee732bd-b31d-4f6c-9edc-2a4bf3793087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.407 253665 DEBUG nova.compute.manager [req-003ca775-757b-4acf-9be2-716972a4592c req-aee732bd-b31d-4f6c-9edc-2a4bf3793087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] No waiting events found dispatching network-vif-unplugged-8da41f38-3812-4494-9cab-c4854772a569 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.407 253665 DEBUG nova.compute.manager [req-003ca775-757b-4acf-9be2-716972a4592c req-aee732bd-b31d-4f6c-9edc-2a4bf3793087 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-unplugged-8da41f38-3812-4494-9cab-c4854772a569 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:35:55 np0005532048 podman[369691]: 2025-11-22 09:35:55.415650295 +0000 UTC m=+0.100196262 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.541 253665 DEBUG nova.network.neutron [req-fef464be-2061-4462-a9a0-740ce04cf06d req-170f3aa5-f6be-44a0-a640-7dc030eeee37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Updated VIF entry in instance network info cache for port 735988ac-a658-458d-975f-872cfa132420. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.542 253665 DEBUG nova.network.neutron [req-fef464be-2061-4462-a9a0-740ce04cf06d req-170f3aa5-f6be-44a0-a640-7dc030eeee37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Updating instance_info_cache with network_info: [{"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.554 253665 INFO nova.network.neutron [req-1587f4f2-8dd9-47f0-bec8-314f0650cb18 req-ae39c1bf-850f-4238-b298-fcacfea27aef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Port 8da41f38-3812-4494-9cab-c4854772a569 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.555 253665 DEBUG nova.network.neutron [req-1587f4f2-8dd9-47f0-bec8-314f0650cb18 req-ae39c1bf-850f-4238-b298-fcacfea27aef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.591 253665 DEBUG oslo_concurrency.lockutils [req-fef464be-2061-4462-a9a0-740ce04cf06d req-170f3aa5-f6be-44a0-a640-7dc030eeee37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.622 253665 DEBUG oslo_concurrency.lockutils [req-1587f4f2-8dd9-47f0-bec8-314f0650cb18 req-ae39c1bf-850f-4238-b298-fcacfea27aef 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3f8530ae-f429-4807-81ca-84d8f964a38c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.633 253665 DEBUG nova.network.neutron [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.646 253665 INFO nova.compute.manager [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Took 1.53 seconds to deallocate network for instance.
Nov 22 04:35:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:35:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:35:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:35:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:35:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.697 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.698 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.811 253665 DEBUG oslo_concurrency.processutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:35:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 374 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 1.4 MiB/s wr, 329 op/s
Nov 22 04:35:55 np0005532048 nova_compute[253661]: 2025-11-22 09:35:55.939 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:35:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:35:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1981033853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:35:56 np0005532048 nova_compute[253661]: 2025-11-22 09:35:56.304 253665 DEBUG oslo_concurrency.processutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:35:56 np0005532048 nova_compute[253661]: 2025-11-22 09:35:56.312 253665 DEBUG nova.compute.provider_tree [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:35:56 np0005532048 nova_compute[253661]: 2025-11-22 09:35:56.351 253665 DEBUG nova.scheduler.client.report [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:35:56 np0005532048 nova_compute[253661]: 2025-11-22 09:35:56.380 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:35:56 np0005532048 nova_compute[253661]: 2025-11-22 09:35:56.423 253665 INFO nova.scheduler.client.report [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Deleted allocations for instance 3f8530ae-f429-4807-81ca-84d8f964a38c
Nov 22 04:35:56 np0005532048 nova_compute[253661]: 2025-11-22 09:35:56.504 253665 DEBUG oslo_concurrency.lockutils [None req-c0769ed3-82cb-4c85-9d19-1fff9cd24abc ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:35:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:35:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:35:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:35:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:35:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:35:57 np0005532048 nova_compute[253661]: 2025-11-22 09:35:57.543 253665 DEBUG nova.compute.manager [req-5e087717-59ba-4238-8c4d-0755278fa785 req-896f4b5a-ab85-41e5-bca6-9872c96c52c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:35:57 np0005532048 nova_compute[253661]: 2025-11-22 09:35:57.545 253665 DEBUG oslo_concurrency.lockutils [req-5e087717-59ba-4238-8c4d-0755278fa785 req-896f4b5a-ab85-41e5-bca6-9872c96c52c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:35:57 np0005532048 nova_compute[253661]: 2025-11-22 09:35:57.545 253665 DEBUG oslo_concurrency.lockutils [req-5e087717-59ba-4238-8c4d-0755278fa785 req-896f4b5a-ab85-41e5-bca6-9872c96c52c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:35:57 np0005532048 nova_compute[253661]: 2025-11-22 09:35:57.546 253665 DEBUG oslo_concurrency.lockutils [req-5e087717-59ba-4238-8c4d-0755278fa785 req-896f4b5a-ab85-41e5-bca6-9872c96c52c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3f8530ae-f429-4807-81ca-84d8f964a38c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:35:57 np0005532048 nova_compute[253661]: 2025-11-22 09:35:57.546 253665 DEBUG nova.compute.manager [req-5e087717-59ba-4238-8c4d-0755278fa785 req-896f4b5a-ab85-41e5-bca6-9872c96c52c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] No waiting events found dispatching network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:35:57 np0005532048 nova_compute[253661]: 2025-11-22 09:35:57.546 253665 WARNING nova.compute.manager [req-5e087717-59ba-4238-8c4d-0755278fa785 req-896f4b5a-ab85-41e5-bca6-9872c96c52c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received unexpected event network-vif-plugged-8da41f38-3812-4494-9cab-c4854772a569 for instance with vm_state deleted and task_state None.
Nov 22 04:35:57 np0005532048 nova_compute[253661]: 2025-11-22 09:35:57.548 253665 DEBUG nova.compute.manager [req-4ade8b58-ed2d-46b6-9ac4-e482963d8c40 req-cd0433eb-2199-4efc-bf94-b2054b1f50fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Received event network-vif-deleted-8da41f38-3812-4494-9cab-c4854772a569 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:35:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 335 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 1.4 MiB/s wr, 333 op/s
Nov 22 04:35:58 np0005532048 nova_compute[253661]: 2025-11-22 09:35:58.567 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:35:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:35:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 293 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 95 KiB/s wr, 255 op/s
Nov 22 04:36:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:00Z|01122|binding|INFO|Releasing lport 3ff32fba-8fe7-4d58-94eb-b5f91ea2b9e2 from this chassis (sb_readonly=0)
Nov 22 04:36:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:00Z|01123|binding|INFO|Releasing lport 97798f16-a2eb-434e-aad3-3ece954bb8e7 from this chassis (sb_readonly=0)
Nov 22 04:36:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:00Z|01124|binding|INFO|Releasing lport ff0f834b-9623-4226-98e1-741634e7eb05 from this chassis (sb_readonly=0)
Nov 22 04:36:00 np0005532048 nova_compute[253661]: 2025-11-22 09:36:00.512 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:36:00 np0005532048 nova_compute[253661]: 2025-11-22 09:36:00.941 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:36:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 299 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 948 KiB/s wr, 202 op/s
Nov 22 04:36:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:02Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ad:ee:9e 10.100.0.5
Nov 22 04:36:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:02Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:ee:9e 10.100.0.5
Nov 22 04:36:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:02Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0e:53:06 10.100.0.13
Nov 22 04:36:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:02Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:53:06 10.100.0.13
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0023904382976818388 of space, bias 1.0, pg target 0.7171314893045516 quantized to 32 (current 32)
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:36:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:36:03 np0005532048 nova_compute[253661]: 2025-11-22 09:36:03.162 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804148.161207, 4bcc50c8-3188-45f6-aa14-994c5ab8b966 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:36:03 np0005532048 nova_compute[253661]: 2025-11-22 09:36:03.163 253665 INFO nova.compute.manager [-] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] VM Stopped (Lifecycle Event)
Nov 22 04:36:03 np0005532048 nova_compute[253661]: 2025-11-22 09:36:03.183 253665 DEBUG nova.compute.manager [None req-fd152ee9-fec9-4aaa-98d4-e12aee3feb3e - - - - - -] [instance: 4bcc50c8-3188-45f6-aa14-994c5ab8b966] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:36:03 np0005532048 nova_compute[253661]: 2025-11-22 09:36:03.570 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:36:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:36:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 345 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.0 MiB/s wr, 230 op/s
Nov 22 04:36:04 np0005532048 nova_compute[253661]: 2025-11-22 09:36:04.803 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:36:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 345 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 465 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Nov 22 04:36:05 np0005532048 nova_compute[253661]: 2025-11-22 09:36:05.945 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:36:07 np0005532048 nova_compute[253661]: 2025-11-22 09:36:07.343 253665 INFO nova.compute.manager [None req-17593a9d-9395-471a-bfb6-36afded676aa 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Get console output
Nov 22 04:36:07 np0005532048 nova_compute[253661]: 2025-11-22 09:36:07.354 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 22 04:36:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 359 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 598 KiB/s rd, 4.3 MiB/s wr, 149 op/s
Nov 22 04:36:08 np0005532048 nova_compute[253661]: 2025-11-22 09:36:08.538 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804153.5359373, 3f8530ae-f429-4807-81ca-84d8f964a38c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:36:08 np0005532048 nova_compute[253661]: 2025-11-22 09:36:08.539 253665 INFO nova.compute.manager [-] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] VM Stopped (Lifecycle Event)
Nov 22 04:36:08 np0005532048 nova_compute[253661]: 2025-11-22 09:36:08.572 253665 DEBUG nova.compute.manager [None req-cb77386b-ce49-4bc6-8e3d-414260be3ed5 - - - - - -] [instance: 3f8530ae-f429-4807-81ca-84d8f964a38c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:36:08 np0005532048 nova_compute[253661]: 2025-11-22 09:36:08.573 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:36:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:36:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 359 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 594 KiB/s rd, 4.3 MiB/s wr, 144 op/s
Nov 22 04:36:10 np0005532048 nova_compute[253661]: 2025-11-22 09:36:10.933 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:10 np0005532048 nova_compute[253661]: 2025-11-22 09:36:10.947 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 359 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 577 KiB/s rd, 4.3 MiB/s wr, 121 op/s
Nov 22 04:36:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:36:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1444247548' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:36:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:36:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1444247548' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:36:13 np0005532048 nova_compute[253661]: 2025-11-22 09:36:13.148 253665 DEBUG oslo_concurrency.lockutils [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "interface-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:13 np0005532048 nova_compute[253661]: 2025-11-22 09:36:13.149 253665 DEBUG oslo_concurrency.lockutils [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "interface-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:13 np0005532048 nova_compute[253661]: 2025-11-22 09:36:13.150 253665 DEBUG nova.objects.instance [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'flavor' on Instance uuid c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:13 np0005532048 nova_compute[253661]: 2025-11-22 09:36:13.576 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:36:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 359 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 510 KiB/s rd, 3.4 MiB/s wr, 105 op/s
Nov 22 04:36:14 np0005532048 nova_compute[253661]: 2025-11-22 09:36:14.476 253665 DEBUG nova.objects.instance [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_requests' on Instance uuid c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:14 np0005532048 nova_compute[253661]: 2025-11-22 09:36:14.650 253665 DEBUG nova.network.neutron [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:36:14 np0005532048 nova_compute[253661]: 2025-11-22 09:36:14.815 253665 DEBUG nova.policy [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:36:14 np0005532048 nova_compute[253661]: 2025-11-22 09:36:14.922 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:14 np0005532048 nova_compute[253661]: 2025-11-22 09:36:14.923 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:14 np0005532048 nova_compute[253661]: 2025-11-22 09:36:14.943 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:36:15 np0005532048 nova_compute[253661]: 2025-11-22 09:36:15.043 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:15 np0005532048 nova_compute[253661]: 2025-11-22 09:36:15.044 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:15 np0005532048 nova_compute[253661]: 2025-11-22 09:36:15.055 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:36:15 np0005532048 nova_compute[253661]: 2025-11-22 09:36:15.056 253665 INFO nova.compute.claims [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:36:15 np0005532048 nova_compute[253661]: 2025-11-22 09:36:15.422 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:15 np0005532048 nova_compute[253661]: 2025-11-22 09:36:15.560 253665 DEBUG nova.network.neutron [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Successfully created port: ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:36:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 359 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 133 KiB/s rd, 362 KiB/s wr, 25 op/s
Nov 22 04:36:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:36:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1232426750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:36:15 np0005532048 nova_compute[253661]: 2025-11-22 09:36:15.949 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:15 np0005532048 nova_compute[253661]: 2025-11-22 09:36:15.970 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:15 np0005532048 nova_compute[253661]: 2025-11-22 09:36:15.981 253665 DEBUG nova.compute.provider_tree [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.001 253665 DEBUG nova.scheduler.client.report [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.029 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.031 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.093 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.094 253665 DEBUG nova.network.neutron [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.109 253665 INFO nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.126 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.225 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.227 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.228 253665 INFO nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Creating image(s)#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.263 253665 DEBUG nova.storage.rbd_utils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.298 253665 DEBUG nova.storage.rbd_utils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.328 253665 DEBUG nova.storage.rbd_utils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.334 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.382 253665 DEBUG nova.policy [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ac89f965408f4a26b39ee2ae4725ff14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0112f56c468c4f90971b92126078e951', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.386 253665 DEBUG nova.network.neutron [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Successfully updated port: ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.398 253665 DEBUG oslo_concurrency.lockutils [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.399 253665 DEBUG oslo_concurrency.lockutils [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.399 253665 DEBUG nova.network.neutron [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.420 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.421 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.422 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.422 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.446 253665 DEBUG nova.storage.rbd_utils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:16 np0005532048 nova_compute[253661]: 2025-11-22 09:36:16.451 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:17 np0005532048 nova_compute[253661]: 2025-11-22 09:36:17.246 253665 DEBUG nova.compute.manager [req-f1e67923-c2c2-42e0-b89d-6eddba4f03e5 req-e506ff12-ae02-489d-ae08-3cfc4d8e7c29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-changed-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:17 np0005532048 nova_compute[253661]: 2025-11-22 09:36:17.247 253665 DEBUG nova.compute.manager [req-f1e67923-c2c2-42e0-b89d-6eddba4f03e5 req-e506ff12-ae02-489d-ae08-3cfc4d8e7c29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Refreshing instance network info cache due to event network-changed-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:36:17 np0005532048 nova_compute[253661]: 2025-11-22 09:36:17.247 253665 DEBUG oslo_concurrency.lockutils [req-f1e67923-c2c2-42e0-b89d-6eddba4f03e5 req-e506ff12-ae02-489d-ae08-3cfc4d8e7c29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:36:17 np0005532048 nova_compute[253661]: 2025-11-22 09:36:17.751 253665 DEBUG nova.network.neutron [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Successfully created port: f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:36:17 np0005532048 nova_compute[253661]: 2025-11-22 09:36:17.879 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 385 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 133 KiB/s rd, 1.5 MiB/s wr, 28 op/s
Nov 22 04:36:17 np0005532048 nova_compute[253661]: 2025-11-22 09:36:17.945 253665 DEBUG nova.storage.rbd_utils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] resizing rbd image 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:36:18 np0005532048 nova_compute[253661]: 2025-11-22 09:36:18.271 253665 DEBUG nova.objects.instance [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'migration_context' on Instance uuid 750659ed-67e0-44d4-a5b3-b8d0029ffa7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:18 np0005532048 nova_compute[253661]: 2025-11-22 09:36:18.290 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:36:18 np0005532048 nova_compute[253661]: 2025-11-22 09:36:18.291 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Ensure instance console log exists: /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:36:18 np0005532048 nova_compute[253661]: 2025-11-22 09:36:18.292 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:18 np0005532048 nova_compute[253661]: 2025-11-22 09:36:18.292 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:18 np0005532048 nova_compute[253661]: 2025-11-22 09:36:18.292 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:18 np0005532048 nova_compute[253661]: 2025-11-22 09:36:18.579 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:36:18 np0005532048 nova_compute[253661]: 2025-11-22 09:36:18.969 253665 DEBUG nova.network.neutron [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Successfully updated port: f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:36:18 np0005532048 nova_compute[253661]: 2025-11-22 09:36:18.986 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:36:18 np0005532048 nova_compute[253661]: 2025-11-22 09:36:18.987 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquired lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:36:18 np0005532048 nova_compute[253661]: 2025-11-22 09:36:18.988 253665 DEBUG nova.network.neutron [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.042 253665 DEBUG nova.network.neutron [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.073 253665 DEBUG oslo_concurrency.lockutils [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.075 253665 DEBUG oslo_concurrency.lockutils [req-f1e67923-c2c2-42e0-b89d-6eddba4f03e5 req-e506ff12-ae02-489d-ae08-3cfc4d8e7c29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.075 253665 DEBUG nova.network.neutron [req-f1e67923-c2c2-42e0-b89d-6eddba4f03e5 req-e506ff12-ae02-489d-ae08-3cfc4d8e7c29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Refreshing network info cache for port ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.080 253665 DEBUG nova.virt.libvirt.vif [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:47Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.081 253665 DEBUG nova.network.os_vif_util [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.082 253665 DEBUG nova.network.os_vif_util [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.082 253665 DEBUG os_vif [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.083 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.084 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.085 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.094 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.094 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef638c6f-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.095 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapef638c6f-1e, col_values=(('external_ids', {'iface-id': 'ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:7d:ea', 'vm-uuid': 'c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.097 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:19 np0005532048 NetworkManager[48920]: <info>  [1763804179.0988] manager: (tapef638c6f-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/465)
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.100 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.106 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.107 253665 INFO os_vif [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e')#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.108 253665 DEBUG nova.virt.libvirt.vif [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:47Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.108 253665 DEBUG nova.network.os_vif_util [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.109 253665 DEBUG nova.network.os_vif_util [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.113 253665 DEBUG nova.virt.libvirt.guest [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] attach device xml: <interface type="ethernet">
Nov 22 04:36:19 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:ec:7d:ea"/>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:  <target dev="tapef638c6f-1e"/>
Nov 22 04:36:19 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:36:19 np0005532048 nova_compute[253661]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 22 04:36:19 np0005532048 NetworkManager[48920]: <info>  [1763804179.1270] manager: (tapef638c6f-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/466)
Nov 22 04:36:19 np0005532048 kernel: tapef638c6f-1e: entered promiscuous mode
Nov 22 04:36:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:19Z|01125|binding|INFO|Claiming lport ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 for this chassis.
Nov 22 04:36:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:19Z|01126|binding|INFO|ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3: Claiming fa:16:3e:ec:7d:ea 10.100.0.18
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.130 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.159 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:7d:ea 10.100.0.18'], port_security=['fa:16:3e:ec:7d:ea 10.100.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': 'c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f987bf48-fed4-4a9a-a268-76d80e7b77fd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1568c3cc-a804-4f98-8194-b53f79976399', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1ca4427-d611-4bfe-814f-5bb5cca8ded7, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:36:19 np0005532048 systemd-udevd[369934]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.160 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 in datapath f987bf48-fed4-4a9a-a268-76d80e7b77fd bound to our chassis#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.163 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f987bf48-fed4-4a9a-a268-76d80e7b77fd#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.171 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:19 np0005532048 NetworkManager[48920]: <info>  [1763804179.1770] device (tapef638c6f-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:36:19 np0005532048 NetworkManager[48920]: <info>  [1763804179.1788] device (tapef638c6f-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:36:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:19Z|01127|binding|INFO|Setting lport ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 ovn-installed in OVS
Nov 22 04:36:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:19Z|01128|binding|INFO|Setting lport ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 up in Southbound
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.180 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.186 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f713388f-12f3-4fc5-9ee2-c7a41161ebb6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.187 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf987bf48-f1 in ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.190 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf987bf48-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.190 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5af76da1-d0b6-4056-b378-acb2d56b830f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.192 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[110aefc9-f8f9-4e51-808b-6f54bfc3f29a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.215 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[50aae8da-17f3-44b1-acf9-ac459122e542]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.225 253665 DEBUG nova.virt.libvirt.driver [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.225 253665 DEBUG nova.virt.libvirt.driver [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.225 253665 DEBUG nova.virt.libvirt.driver [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:ad:ee:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.225 253665 DEBUG nova.virt.libvirt.driver [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:ec:7d:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.230 253665 DEBUG nova.network.neutron [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.247 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[27fad550-6939-4731-b60b-1eb4983f09ad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.249 253665 DEBUG nova.virt.libvirt.guest [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:36:19 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:  <nova:name>tempest-TestNetworkBasicOps-server-1782158666</nova:name>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:36:19</nova:creationTime>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:36:19 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:    <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:    <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:    <nova:port uuid="21b54230-3ad3-4b65-b752-5a1b0472844e">
Nov 22 04:36:19 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:    <nova:port uuid="ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3">
Nov 22 04:36:19 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:36:19 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:36:19 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:36:19 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.273 253665 DEBUG oslo_concurrency.lockutils [None req-cc8d8ead-c964-460c-8812-f796d8e7db0e 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "interface-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.285 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f51c01fe-aab1-4713-a048-a133ba204b24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:19 np0005532048 NetworkManager[48920]: <info>  [1763804179.2929] manager: (tapf987bf48-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/467)
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.292 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2c554a74-c749-40b3-9a43-873cda1d2d06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.333 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0df48496-2110-4552-a4a8-ee9d34d74064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.337 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0fcea7fc-53df-4f1a-8e83-e84cb3b95424]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:19 np0005532048 NetworkManager[48920]: <info>  [1763804179.3755] device (tapf987bf48-f0): carrier: link connected
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.382 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[12fd5796-cb7e-4895-83c0-b2e37de565a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.404 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8f5dcc-c363-4236-be82-f78c5a3f0cca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf987bf48-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:ec:89'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 327], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703011, 'reachable_time': 31820, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369961, 'error': None, 'target': 'ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.423 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3a440c05-db82-4f56-bd4b-d13f383a4292]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:ec89'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703011, 'tstamp': 703011}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369962, 'error': None, 'target': 'ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.442 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a8347b66-1d46-49f2-b60f-fe1f9869a68e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf987bf48-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:ec:89'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 327], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703011, 'reachable_time': 31820, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 369963, 'error': None, 'target': 'ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.470 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f0a675-af3c-42df-aa23-ddff57a44213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.541 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3497d8a0-a4cc-477b-a4d6-329caf7abbb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.543 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf987bf48-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.543 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.543 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf987bf48-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.545 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:19 np0005532048 kernel: tapf987bf48-f0: entered promiscuous mode
Nov 22 04:36:19 np0005532048 NetworkManager[48920]: <info>  [1763804179.5473] manager: (tapf987bf48-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.549 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf987bf48-f0, col_values=(('external_ids', {'iface-id': '832bb68b-158a-4026-bb7e-ec09f983eb31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.550 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:19 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:19Z|01129|binding|INFO|Releasing lport 832bb68b-158a-4026-bb7e-ec09f983eb31 from this chassis (sb_readonly=0)
Nov 22 04:36:19 np0005532048 nova_compute[253661]: 2025-11-22 09:36:19.565 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.566 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f987bf48-fed4-4a9a-a268-76d80e7b77fd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f987bf48-fed4-4a9a-a268-76d80e7b77fd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.567 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[59fd2ef6-0b05-4ccc-9acb-73cd714b6f9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.568 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-f987bf48-fed4-4a9a-a268-76d80e7b77fd
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/f987bf48-fed4-4a9a-a268-76d80e7b77fd.pid.haproxy
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID f987bf48-fed4-4a9a-a268-76d80e7b77fd
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:36:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:19.569 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd', 'env', 'PROCESS_TAG=haproxy-f987bf48-fed4-4a9a-a268-76d80e7b77fd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f987bf48-fed4-4a9a-a268-76d80e7b77fd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:36:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 393 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.5 MiB/s wr, 27 op/s
Nov 22 04:36:19 np0005532048 podman[369996]: 2025-11-22 09:36:19.958546611 +0000 UTC m=+0.058584768 container create d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:36:20 np0005532048 systemd[1]: Started libpod-conmon-d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff.scope.
Nov 22 04:36:20 np0005532048 podman[369996]: 2025-11-22 09:36:19.929254772 +0000 UTC m=+0.029292959 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:36:20 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:36:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db4f634e39e3768979e4e15c5856180a4869bf3f069714fa2e2fa318a4a1ad83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:36:20 np0005532048 podman[369996]: 2025-11-22 09:36:20.049069082 +0000 UTC m=+0.149107269 container init d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:36:20 np0005532048 podman[369996]: 2025-11-22 09:36:20.054492127 +0000 UTC m=+0.154530284 container start d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 04:36:20 np0005532048 podman[370010]: 2025-11-22 09:36:20.067241264 +0000 UTC m=+0.064745902 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 22 04:36:20 np0005532048 podman[370013]: 2025-11-22 09:36:20.073088639 +0000 UTC m=+0.066780502 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Nov 22 04:36:20 np0005532048 neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd[370017]: [NOTICE]   (370052) : New worker (370055) forked
Nov 22 04:36:20 np0005532048 neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd[370017]: [NOTICE]   (370052) : Loading success.
Nov 22 04:36:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:20.217 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:36:20 np0005532048 nova_compute[253661]: 2025-11-22 09:36:20.218 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:20 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:20.222 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:36:20 np0005532048 nova_compute[253661]: 2025-11-22 09:36:20.351 253665 DEBUG nova.compute.manager [req-2769d5a3-41fe-4760-9f0e-75792ed4a2a4 req-4013ce65-7c67-4d83-a978-92a642d5a25c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-changed-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:20 np0005532048 nova_compute[253661]: 2025-11-22 09:36:20.351 253665 DEBUG nova.compute.manager [req-2769d5a3-41fe-4760-9f0e-75792ed4a2a4 req-4013ce65-7c67-4d83-a978-92a642d5a25c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Refreshing instance network info cache due to event network-changed-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:36:20 np0005532048 nova_compute[253661]: 2025-11-22 09:36:20.352 253665 DEBUG oslo_concurrency.lockutils [req-2769d5a3-41fe-4760-9f0e-75792ed4a2a4 req-4013ce65-7c67-4d83-a978-92a642d5a25c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:36:20 np0005532048 nova_compute[253661]: 2025-11-22 09:36:20.613 253665 DEBUG nova.compute.manager [req-fbffebc5-e07a-4e3e-bfcc-a88247c66213 req-e5e0ca01-054f-413f-bcf2-08e9d0fd3bfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:20 np0005532048 nova_compute[253661]: 2025-11-22 09:36:20.613 253665 DEBUG oslo_concurrency.lockutils [req-fbffebc5-e07a-4e3e-bfcc-a88247c66213 req-e5e0ca01-054f-413f-bcf2-08e9d0fd3bfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:20 np0005532048 nova_compute[253661]: 2025-11-22 09:36:20.613 253665 DEBUG oslo_concurrency.lockutils [req-fbffebc5-e07a-4e3e-bfcc-a88247c66213 req-e5e0ca01-054f-413f-bcf2-08e9d0fd3bfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:20 np0005532048 nova_compute[253661]: 2025-11-22 09:36:20.614 253665 DEBUG oslo_concurrency.lockutils [req-fbffebc5-e07a-4e3e-bfcc-a88247c66213 req-e5e0ca01-054f-413f-bcf2-08e9d0fd3bfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:20 np0005532048 nova_compute[253661]: 2025-11-22 09:36:20.614 253665 DEBUG nova.compute.manager [req-fbffebc5-e07a-4e3e-bfcc-a88247c66213 req-e5e0ca01-054f-413f-bcf2-08e9d0fd3bfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] No waiting events found dispatching network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:20 np0005532048 nova_compute[253661]: 2025-11-22 09:36:20.614 253665 WARNING nova.compute.manager [req-fbffebc5-e07a-4e3e-bfcc-a88247c66213 req-e5e0ca01-054f-413f-bcf2-08e9d0fd3bfc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received unexpected event network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:36:20 np0005532048 nova_compute[253661]: 2025-11-22 09:36:20.977 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.145 253665 DEBUG nova.network.neutron [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Updating instance_info_cache with network_info: [{"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:21.226 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.298 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Releasing lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.299 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Instance network_info: |[{"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.299 253665 DEBUG oslo_concurrency.lockutils [req-2769d5a3-41fe-4760-9f0e-75792ed4a2a4 req-4013ce65-7c67-4d83-a978-92a642d5a25c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.300 253665 DEBUG nova.network.neutron [req-2769d5a3-41fe-4760-9f0e-75792ed4a2a4 req-4013ce65-7c67-4d83-a978-92a642d5a25c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Refreshing network info cache for port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.304 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Start _get_guest_xml network_info=[{"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.310 253665 WARNING nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.322 253665 DEBUG nova.virt.libvirt.host [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.323 253665 DEBUG nova.virt.libvirt.host [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.337 253665 DEBUG nova.virt.libvirt.host [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.337 253665 DEBUG nova.virt.libvirt.host [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.338 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.338 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.338 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.338 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.339 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.339 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.339 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.339 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.339 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.339 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.340 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.340 253665 DEBUG nova.virt.hardware [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.343 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.448 253665 DEBUG nova.network.neutron [req-f1e67923-c2c2-42e0-b89d-6eddba4f03e5 req-e506ff12-ae02-489d-ae08-3cfc4d8e7c29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updated VIF entry in instance network info cache for port ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.448 253665 DEBUG nova.network.neutron [req-f1e67923-c2c2-42e0-b89d-6eddba4f03e5 req-e506ff12-ae02-489d-ae08-3cfc4d8e7c29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.462 253665 DEBUG oslo_concurrency.lockutils [req-f1e67923-c2c2-42e0-b89d-6eddba4f03e5 req-e506ff12-ae02-489d-ae08-3cfc4d8e7c29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.511 253665 DEBUG oslo_concurrency.lockutils [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "interface-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.511 253665 DEBUG oslo_concurrency.lockutils [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "interface-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.549 253665 DEBUG nova.objects.instance [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'flavor' on Instance uuid c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:21 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:21Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:7d:ea 10.100.0.18
Nov 22 04:36:21 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:21Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:7d:ea 10.100.0.18
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.592 253665 DEBUG nova.virt.libvirt.vif [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:47Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.593 253665 DEBUG nova.network.os_vif_util [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.594 253665 DEBUG nova.network.os_vif_util [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.599 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.603 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.606 253665 DEBUG nova.virt.libvirt.driver [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Attempting to detach device tapef638c6f-1e from instance c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.606 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] detach device xml: <interface type="ethernet">
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:ec:7d:ea"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <target dev="tapef638c6f-1e"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:36:21 np0005532048 nova_compute[253661]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.614 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.618 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface>not found in domain: <domain type='kvm' id='140'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <name>instance-00000070</name>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <uuid>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</uuid>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:name>tempest-TestNetworkBasicOps-server-1782158666</nova:name>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:36:19</nova:creationTime>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:port uuid="21b54230-3ad3-4b65-b752-5a1b0472844e">
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:port uuid="ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3">
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:36:21 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <resource>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <partition>/machine</partition>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </resource>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <entry name='serial'>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <entry name='uuid'>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <cpu mode='custom' match='exact' check='full'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <vendor>AMD</vendor>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='x2apic'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc-deadline'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='hypervisor'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc_adjust'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='spec-ctrl'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='stibp'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='ssbd'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='cmp_legacy'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='overflow-recov'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='succor'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='ibrs'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='amd-ssbd'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='virt-ssbd'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='lbrv'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='tsc-scale'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='vmcb-clean'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='flushbyasid'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pause-filter'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pfthreshold'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='xsaves'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svm'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='topoext'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='npt'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='nrip-save'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk' index='2'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='virtio-disk0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config' index='1'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='sata0-0-0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pcie.0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.1'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.2'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.3'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.4'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.5'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.6'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.7'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.8'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.9'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.10'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.11'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.12'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.13'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.14'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.15'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.16'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.17'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.18'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.19'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.20'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.21'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='22' port='0x25'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.22'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='23' port='0x26'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.23'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='24' port='0x27'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.24'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='25' port='0x28'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.25'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-pci-bridge'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.26'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='usb'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='sata' index='0'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='ide'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:ad:ee:9e'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target dev='tap21b54230-3a'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='net0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:ec:7d:ea'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target dev='tapef638c6f-1e'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='net1'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <serial type='pty'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <source path='/dev/pts/5'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log' append='off'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target type='isa-serial' port='0'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:        <model name='isa-serial'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      </target>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <console type='pty' tty='/dev/pts/5'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <source path='/dev/pts/5'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log' append='off'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target type='serial' port='0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </console>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <input type='tablet' bus='usb'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='input0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='usb' bus='0' port='1'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <input type='mouse' bus='ps2'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='input1'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <input type='keyboard' bus='ps2'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='input2'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <graphics type='vnc' port='5905' autoport='yes' listen='::0'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <listen type='address' address='::0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <audio id='1' type='none'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model type='virtio' heads='1' primary='yes'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='video0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <watchdog model='itco' action='reset'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='watchdog0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </watchdog>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <memballoon model='virtio'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <stats period='10'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='balloon0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <rng model='virtio'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <backend model='random'>/dev/urandom</backend>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='rng0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <label>system_u:system_r:svirt_t:s0:c575,c897</label>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c575,c897</imagelabel>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <label>+107:+107</label>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <imagelabel>+107:+107</imagelabel>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:36:21 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:36:21 np0005532048 nova_compute[253661]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.618 253665 INFO nova.virt.libvirt.driver [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully detached device tapef638c6f-1e from instance c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 from the persistent domain config.
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.618 253665 DEBUG nova.virt.libvirt.driver [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] (1/8): Attempting to detach device tapef638c6f-1e with device alias net1 from instance c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.619 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] detach device xml: <interface type="ethernet">
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:ec:7d:ea"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <target dev="tapef638c6f-1e"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:36:21 np0005532048 nova_compute[253661]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 04:36:21 np0005532048 kernel: tapef638c6f-1e (unregistering): left promiscuous mode
Nov 22 04:36:21 np0005532048 NetworkManager[48920]: <info>  [1763804181.7286] device (tapef638c6f-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:36:21 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:21Z|01130|binding|INFO|Releasing lport ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 from this chassis (sb_readonly=0)
Nov 22 04:36:21 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:21Z|01131|binding|INFO|Setting lport ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 down in Southbound
Nov 22 04:36:21 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:21Z|01132|binding|INFO|Removing iface tapef638c6f-1e ovn-installed in OVS
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.737 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.750 253665 DEBUG nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Received event <DeviceRemovedEvent: 1763804181.7432754, c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.751 253665 DEBUG nova.virt.libvirt.driver [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Start waiting for the detach event from libvirt for device tapef638c6f-1e with device alias net1 for instance c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.752 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.766 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface> not found in domain: <domain type='kvm' id='140'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <name>instance-00000070</name>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <uuid>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</uuid>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:name>tempest-TestNetworkBasicOps-server-1782158666</nova:name>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:36:19</nova:creationTime>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:port uuid="21b54230-3ad3-4b65-b752-5a1b0472844e">
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:port uuid="ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3">
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:36:21 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <resource>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <partition>/machine</partition>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </resource>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <entry name='serial'>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <entry name='uuid'>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <cpu mode='custom' match='exact' check='full'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <vendor>AMD</vendor>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='x2apic'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc-deadline'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='hypervisor'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc_adjust'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='spec-ctrl'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='stibp'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='ssbd'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='cmp_legacy'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='overflow-recov'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='succor'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='ibrs'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='amd-ssbd'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='virt-ssbd'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='lbrv'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='tsc-scale'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='vmcb-clean'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='flushbyasid'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pause-filter'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pfthreshold'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='xsaves'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svm'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='require' name='topoext'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='npt'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <feature policy='disable' name='nrip-save'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk' index='2'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='virtio-disk0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config' index='1'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='sata0-0-0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pcie.0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.1'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.2'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.3'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.4'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.5'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.6'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.7'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.8'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.9'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.10'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.11'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.12'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.13'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.14'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.15'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.16'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.17'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.18'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.19'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.20'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.21'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='22' port='0x25'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.22'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='23' port='0x26'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.23'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='24' port='0x27'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.24'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target chassis='25' port='0x28'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.25'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model name='pcie-pci-bridge'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='pci.26'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='usb'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <controller type='sata' index='0'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='ide'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:ad:ee:9e'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target dev='tap21b54230-3a'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='net0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <serial type='pty'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <source path='/dev/pts/5'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log' append='off'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target type='isa-serial' port='0'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:        <model name='isa-serial'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      </target>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <console type='pty' tty='/dev/pts/5'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <source path='/dev/pts/5'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log' append='off'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <target type='serial' port='0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </console>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <input type='tablet' bus='usb'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='input0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='usb' bus='0' port='1'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <input type='mouse' bus='ps2'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='input1'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <input type='keyboard' bus='ps2'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='input2'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <graphics type='vnc' port='5905' autoport='yes' listen='::0'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <listen type='address' address='::0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <audio id='1' type='none'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <model type='virtio' heads='1' primary='yes'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='video0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <watchdog model='itco' action='reset'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='watchdog0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </watchdog>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <memballoon model='virtio'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <stats period='10'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='balloon0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <rng model='virtio'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <backend model='random'>/dev/urandom</backend>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <alias name='rng0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <label>system_u:system_r:svirt_t:s0:c575,c897</label>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c575,c897</imagelabel>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <label>+107:+107</label>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <imagelabel>+107:+107</imagelabel>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:36:21 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:36:21 np0005532048 nova_compute[253661]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.766 253665 INFO nova.virt.libvirt.driver [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully detached device tapef638c6f-1e from instance c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 from the live domain config.#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.767 253665 DEBUG nova.virt.libvirt.vif [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:47Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.768 253665 DEBUG nova.network.os_vif_util [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.769 253665 DEBUG nova.network.os_vif_util [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.769 253665 DEBUG os_vif [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.772 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.772 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef638c6f-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.774 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.777 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:36:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:36:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2520301668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.789 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.794 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.804 253665 INFO os_vif [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e')#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.805 253665 DEBUG nova.virt.libvirt.guest [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:name>tempest-TestNetworkBasicOps-server-1782158666</nova:name>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:36:21</nova:creationTime>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    <nova:port uuid="21b54230-3ad3-4b65-b752-5a1b0472844e">
Nov 22 04:36:21 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:36:21 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:36:21 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:36:21 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.811 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.838 253665 DEBUG nova.storage.rbd_utils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:21 np0005532048 nova_compute[253661]: 2025-11-22 09:36:21.843 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 405 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:36:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:21.895 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:7d:ea 10.100.0.18'], port_security=['fa:16:3e:ec:7d:ea 10.100.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': 'c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f987bf48-fed4-4a9a-a268-76d80e7b77fd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1568c3cc-a804-4f98-8194-b53f79976399', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1ca4427-d611-4bfe-814f-5bb5cca8ded7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:36:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:21.897 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 in datapath f987bf48-fed4-4a9a-a268-76d80e7b77fd unbound from our chassis#033[00m
Nov 22 04:36:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:21.900 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f987bf48-fed4-4a9a-a268-76d80e7b77fd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:36:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:21.901 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc1565c-0f0b-4dcb-9b02-88441b9b5041]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:21.902 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd namespace which is not needed anymore#033[00m
Nov 22 04:36:22 np0005532048 neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd[370017]: [NOTICE]   (370052) : haproxy version is 2.8.14-c23fe91
Nov 22 04:36:22 np0005532048 neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd[370017]: [NOTICE]   (370052) : path to executable is /usr/sbin/haproxy
Nov 22 04:36:22 np0005532048 neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd[370017]: [WARNING]  (370052) : Exiting Master process...
Nov 22 04:36:22 np0005532048 neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd[370017]: [ALERT]    (370052) : Current worker (370055) exited with code 143 (Terminated)
Nov 22 04:36:22 np0005532048 neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd[370017]: [WARNING]  (370052) : All workers exited. Exiting... (0)
Nov 22 04:36:22 np0005532048 systemd[1]: libpod-d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff.scope: Deactivated successfully.
Nov 22 04:36:22 np0005532048 podman[370147]: 2025-11-22 09:36:22.087174587 +0000 UTC m=+0.051940303 container died d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:36:22 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff-userdata-shm.mount: Deactivated successfully.
Nov 22 04:36:22 np0005532048 systemd[1]: var-lib-containers-storage-overlay-db4f634e39e3768979e4e15c5856180a4869bf3f069714fa2e2fa318a4a1ad83-merged.mount: Deactivated successfully.
Nov 22 04:36:22 np0005532048 podman[370147]: 2025-11-22 09:36:22.138090304 +0000 UTC m=+0.102856000 container cleanup d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 04:36:22 np0005532048 systemd[1]: libpod-conmon-d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff.scope: Deactivated successfully.
Nov 22 04:36:22 np0005532048 podman[370177]: 2025-11-22 09:36:22.216793221 +0000 UTC m=+0.053878112 container remove d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 04:36:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.233 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[120bb615-ee49-42e1-9999-6b9e483f7ca8]: (4, ('Sat Nov 22 09:36:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd (d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff)\nd61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff\nSat Nov 22 09:36:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd (d61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff)\nd61633dbc94ceb26a9caa2254f9e1aef5569baa298e4797ca224db08745e02ff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.235 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[55cc542b-21af-4827-afb9-65c58f3bbb69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.236 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf987bf48-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.238 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:22 np0005532048 kernel: tapf987bf48-f0: left promiscuous mode
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.259 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.265 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[37fe4ba1-bb57-4e2f-80e9-5acdb0fc0dfe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.280 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7d01a186-2c69-4c9c-a546-66d0b92a30f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.284 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7b731c45-f6bd-4293-a1e0-dcf3a343ee66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.309 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[990fc356-5e26-4a7b-b4be-8dadaa8f1c7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703001, 'reachable_time': 20673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370190, 'error': None, 'target': 'ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.313 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f987bf48-fed4-4a9a-a268-76d80e7b77fd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:36:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:22.313 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[756711e8-53ce-49fc-8be1-8d04b44f9b8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:22 np0005532048 systemd[1]: run-netns-ovnmeta\x2df987bf48\x2dfed4\x2d4a9a\x2da268\x2d76d80e7b77fd.mount: Deactivated successfully.
Nov 22 04:36:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:36:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2011403817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.360 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.362 253665 DEBUG nova.virt.libvirt.vif [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:36:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-635689639',display_name='tempest-TestNetworkAdvancedServerOps-server-635689639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-635689639',id=113,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGat/0/6ionKBrSzyBS7EbGqwOoirIfAackkh+AYjCZXxoZzDjZWyHoUi84+Rs5w5CQ8NN8aOtxfB73LToni6HeOyO4Tgvy+GHztLu+Mg7hY5eYsKNagHEATOhR/nV+7Ew==',key_name='tempest-TestNetworkAdvancedServerOps-353719525',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-920qa6ny',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:36:16Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=750659ed-67e0-44d4-a5b3-b8d0029ffa7e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.362 253665 DEBUG nova.network.os_vif_util [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.364 253665 DEBUG nova.network.os_vif_util [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.365 253665 DEBUG nova.objects.instance [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid 750659ed-67e0-44d4-a5b3-b8d0029ffa7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.385 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  <uuid>750659ed-67e0-44d4-a5b3-b8d0029ffa7e</uuid>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  <name>instance-00000071</name>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-635689639</nova:name>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:36:21</nova:creationTime>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:36:22 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:        <nova:user uuid="ac89f965408f4a26b39ee2ae4725ff14">tempest-TestNetworkAdvancedServerOps-1215776227-project-member</nova:user>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:        <nova:project uuid="0112f56c468c4f90971b92126078e951">tempest-TestNetworkAdvancedServerOps-1215776227</nova:project>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:        <nova:port uuid="f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b">
Nov 22 04:36:22 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <entry name="serial">750659ed-67e0-44d4-a5b3-b8d0029ffa7e</entry>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <entry name="uuid">750659ed-67e0-44d4-a5b3-b8d0029ffa7e</entry>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk">
Nov 22 04:36:22 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:36:22 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk.config">
Nov 22 04:36:22 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:36:22 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:ac:c2:c9"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <target dev="tapf4a3cf1b-5c"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e/console.log" append="off"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:36:22 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:36:22 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:36:22 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:36:22 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.386 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Preparing to wait for external event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.386 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.387 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.387 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.389 253665 DEBUG nova.virt.libvirt.vif [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:36:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-635689639',display_name='tempest-TestNetworkAdvancedServerOps-server-635689639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-635689639',id=113,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGat/0/6ionKBrSzyBS7EbGqwOoirIfAackkh+AYjCZXxoZzDjZWyHoUi84+Rs5w5CQ8NN8aOtxfB73LToni6HeOyO4Tgvy+GHztLu+Mg7hY5eYsKNagHEATOhR/nV+7Ew==',key_name='tempest-TestNetworkAdvancedServerOps-353719525',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-920qa6ny',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:36:16Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=750659ed-67e0-44d4-a5b3-b8d0029ffa7e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.389 253665 DEBUG nova.network.os_vif_util [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.392 253665 DEBUG nova.network.os_vif_util [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.392 253665 DEBUG os_vif [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.393 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.393 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.393 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.399 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.400 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4a3cf1b-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.400 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf4a3cf1b-5c, col_values=(('external_ids', {'iface-id': 'f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:c2:c9', 'vm-uuid': '750659ed-67e0-44d4-a5b3-b8d0029ffa7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.401 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:22 np0005532048 NetworkManager[48920]: <info>  [1763804182.4028] manager: (tapf4a3cf1b-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/469)
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.404 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.409 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.412 253665 INFO os_vif [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c')#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.462 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.463 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.464 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] No VIF found with MAC fa:16:3e:ac:c2:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.465 253665 INFO nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Using config drive#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.488 253665 DEBUG nova.storage.rbd_utils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:36:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:36:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:36:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:36:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:36:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.764 253665 DEBUG oslo_concurrency.lockutils [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.765 253665 DEBUG oslo_concurrency.lockutils [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:36:22 np0005532048 nova_compute[253661]: 2025-11-22 09:36:22.765 253665 DEBUG nova.network.neutron [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.214 253665 DEBUG nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.215 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.215 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.216 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.216 253665 DEBUG nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] No waiting events found dispatching network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.217 253665 WARNING nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received unexpected event network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.217 253665 DEBUG nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-unplugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.217 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.217 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.218 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.218 253665 DEBUG nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] No waiting events found dispatching network-vif-unplugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.218 253665 WARNING nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received unexpected event network-vif-unplugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.219 253665 DEBUG nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.219 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.219 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.219 253665 DEBUG oslo_concurrency.lockutils [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.220 253665 DEBUG nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] No waiting events found dispatching network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.220 253665 WARNING nova.compute.manager [req-7eb8e142-a7bb-4c4d-b4de-02a4d0397f4c req-5b32bc81-e8d6-4172-8010-82ab41e382f4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received unexpected event network-vif-plugged-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.222 253665 DEBUG nova.compute.manager [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-deleted-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.223 253665 INFO nova.compute.manager [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Neutron deleted interface ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3; detaching it from the instance and deleting it from the info cache#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.223 253665 DEBUG nova.network.neutron [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.249 253665 DEBUG nova.objects.instance [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'system_metadata' on Instance uuid c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.272 253665 DEBUG nova.objects.instance [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'flavor' on Instance uuid c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.289 253665 DEBUG nova.virt.libvirt.vif [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:47Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.290 253665 DEBUG nova.network.os_vif_util [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converting VIF {"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.291 253665 DEBUG nova.network.os_vif_util [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.294 253665 DEBUG nova.virt.libvirt.guest [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.298 253665 DEBUG nova.virt.libvirt.guest [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface>not found in domain: <domain type='kvm' id='140'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <name>instance-00000070</name>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <uuid>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</uuid>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:name>tempest-TestNetworkBasicOps-server-1782158666</nova:name>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:36:21</nova:creationTime>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:port uuid="21b54230-3ad3-4b65-b752-5a1b0472844e">
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:36:23 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <resource>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <partition>/machine</partition>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </resource>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <entry name='serial'>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <entry name='uuid'>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <cpu mode='custom' match='exact' check='full'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <vendor>AMD</vendor>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='x2apic'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc-deadline'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='hypervisor'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc_adjust'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='spec-ctrl'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='stibp'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='ssbd'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='cmp_legacy'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='overflow-recov'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='succor'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='ibrs'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='amd-ssbd'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='virt-ssbd'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='lbrv'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='tsc-scale'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='vmcb-clean'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='flushbyasid'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pause-filter'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pfthreshold'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='xsaves'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svm'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='topoext'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='npt'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='nrip-save'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk' index='2'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='virtio-disk0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config' index='1'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='sata0-0-0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pcie.0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.1'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.2'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.3'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.4'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.5'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.6'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.7'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.8'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.9'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.10'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.11'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.12'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.13'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.14'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.15'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.16'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.17'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.18'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.19'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.20'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.21'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='22' port='0x25'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.22'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='23' port='0x26'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.23'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='24' port='0x27'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.24'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='25' port='0x28'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.25'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-pci-bridge'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.26'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='usb'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='sata' index='0'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='ide'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:ad:ee:9e'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target dev='tap21b54230-3a'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='net0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <serial type='pty'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <source path='/dev/pts/5'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log' append='off'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target type='isa-serial' port='0'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:        <model name='isa-serial'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      </target>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <console type='pty' tty='/dev/pts/5'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <source path='/dev/pts/5'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log' append='off'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target type='serial' port='0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </console>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <input type='tablet' bus='usb'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='input0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='usb' bus='0' port='1'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <input type='mouse' bus='ps2'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='input1'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <input type='keyboard' bus='ps2'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='input2'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <graphics type='vnc' port='5905' autoport='yes' listen='::0'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <listen type='address' address='::0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <audio id='1' type='none'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model type='virtio' heads='1' primary='yes'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='video0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <watchdog model='itco' action='reset'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='watchdog0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </watchdog>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <memballoon model='virtio'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <stats period='10'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='balloon0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <rng model='virtio'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <backend model='random'>/dev/urandom</backend>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='rng0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <label>system_u:system_r:svirt_t:s0:c575,c897</label>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c575,c897</imagelabel>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <label>+107:+107</label>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <imagelabel>+107:+107</imagelabel>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:36:23 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:36:23 np0005532048 nova_compute[253661]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.299 253665 DEBUG nova.virt.libvirt.guest [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.304 253665 DEBUG nova.virt.libvirt.guest [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ec:7d:ea"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapef638c6f-1e"/></interface> not found in domain: <domain type='kvm' id='140'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <name>instance-00000070</name>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <uuid>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</uuid>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:name>tempest-TestNetworkBasicOps-server-1782158666</nova:name>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:36:21</nova:creationTime>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:port uuid="21b54230-3ad3-4b65-b752-5a1b0472844e">
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:36:23 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <resource>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <partition>/machine</partition>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </resource>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <entry name='serial'>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <entry name='uuid'>c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5</entry>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <cpu mode='custom' match='exact' check='full'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <vendor>AMD</vendor>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='x2apic'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc-deadline'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='hypervisor'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc_adjust'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='spec-ctrl'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='stibp'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='ssbd'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='cmp_legacy'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='overflow-recov'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='succor'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='ibrs'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='amd-ssbd'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='virt-ssbd'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='lbrv'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='tsc-scale'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='vmcb-clean'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='flushbyasid'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pause-filter'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pfthreshold'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='xsaves'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svm'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='require' name='topoext'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='npt'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <feature policy='disable' name='nrip-save'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk' index='2'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='virtio-disk0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_disk.config' index='1'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='sata0-0-0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pcie.0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.1'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.2'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.3'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.4'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.5'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.6'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.7'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.8'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.9'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.10'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.11'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.12'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.13'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.14'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.15'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.16'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.17'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.18'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.19'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.20'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.21'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='22' port='0x25'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.22'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='23' port='0x26'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.23'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='24' port='0x27'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.24'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target chassis='25' port='0x28'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.25'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model name='pcie-pci-bridge'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='pci.26'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='usb'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <controller type='sata' index='0'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='ide'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:ad:ee:9e'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target dev='tap21b54230-3a'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='net0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <serial type='pty'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <source path='/dev/pts/5'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log' append='off'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target type='isa-serial' port='0'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:        <model name='isa-serial'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      </target>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <console type='pty' tty='/dev/pts/5'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <source path='/dev/pts/5'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5/console.log' append='off'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <target type='serial' port='0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </console>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <input type='tablet' bus='usb'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='input0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='usb' bus='0' port='1'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <input type='mouse' bus='ps2'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='input1'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <input type='keyboard' bus='ps2'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='input2'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <graphics type='vnc' port='5905' autoport='yes' listen='::0'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <listen type='address' address='::0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <audio id='1' type='none'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <model type='virtio' heads='1' primary='yes'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='video0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <watchdog model='itco' action='reset'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='watchdog0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </watchdog>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <memballoon model='virtio'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <stats period='10'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='balloon0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <rng model='virtio'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <backend model='random'>/dev/urandom</backend>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <alias name='rng0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <label>system_u:system_r:svirt_t:s0:c575,c897</label>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c575,c897</imagelabel>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <label>+107:+107</label>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <imagelabel>+107:+107</imagelabel>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:36:23 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:36:23 np0005532048 nova_compute[253661]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.304 253665 WARNING nova.virt.libvirt.driver [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Detaching interface fa:16:3e:ec:7d:ea failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapef638c6f-1e' not found.#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.305 253665 DEBUG nova.virt.libvirt.vif [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:47Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.306 253665 DEBUG nova.network.os_vif_util [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converting VIF {"id": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "address": "fa:16:3e:ec:7d:ea", "network": {"id": "f987bf48-fed4-4a9a-a268-76d80e7b77fd", "bridge": "br-int", "label": "tempest-network-smoke--1394323532", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef638c6f-1e", "ovs_interfaceid": "ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.307 253665 DEBUG nova.network.os_vif_util [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.307 253665 DEBUG os_vif [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.309 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.310 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef638c6f-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.310 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.313 253665 INFO os_vif [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:7d:ea,bridge_name='br-int',has_traffic_filtering=True,id=ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3,network=Network(f987bf48-fed4-4a9a-a268-76d80e7b77fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef638c6f-1e')#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.314 253665 DEBUG nova.virt.libvirt.guest [req-5f632fe8-01ad-4df5-8fca-d2bd44a1d4bb req-cb080645-3d95-4bf2-95b7-b3b0baf458ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:name>tempest-TestNetworkBasicOps-server-1782158666</nova:name>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:36:23</nova:creationTime>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    <nova:port uuid="21b54230-3ad3-4b65-b752-5a1b0472844e">
Nov 22 04:36:23 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:36:23 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:36:23 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:36:23 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.344 253665 INFO nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Creating config drive at /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e/disk.config#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.349 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp22ufgn1t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.509 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp22ufgn1t" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.556 253665 DEBUG nova.storage.rbd_utils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] rbd image 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.562 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e/disk.config 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.670 253665 DEBUG nova.network.neutron [req-2769d5a3-41fe-4760-9f0e-75792ed4a2a4 req-4013ce65-7c67-4d83-a978-92a642d5a25c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Updated VIF entry in instance network info cache for port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.672 253665 DEBUG nova.network.neutron [req-2769d5a3-41fe-4760-9f0e-75792ed4a2a4 req-4013ce65-7c67-4d83-a978-92a642d5a25c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Updating instance_info_cache with network_info: [{"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.696 253665 DEBUG oslo_concurrency.lockutils [req-2769d5a3-41fe-4760-9f0e-75792ed4a2a4 req-4013ce65-7c67-4d83-a978-92a642d5a25c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:36:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.735 253665 DEBUG oslo_concurrency.processutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e/disk.config 750659ed-67e0-44d4-a5b3-b8d0029ffa7e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.736 253665 INFO nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Deleting local config drive /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e/disk.config because it was imported into RBD.#033[00m
Nov 22 04:36:23 np0005532048 kernel: tapf4a3cf1b-5c: entered promiscuous mode
Nov 22 04:36:23 np0005532048 NetworkManager[48920]: <info>  [1763804183.8036] manager: (tapf4a3cf1b-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/470)
Nov 22 04:36:23 np0005532048 systemd-udevd[370192]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:36:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:23Z|01133|binding|INFO|Claiming lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for this chassis.
Nov 22 04:36:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:23Z|01134|binding|INFO|f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b: Claiming fa:16:3e:ac:c2:c9 10.100.0.11
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.807 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.817 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:c2:c9 10.100.0.11'], port_security=['fa:16:3e:ac:c2:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '750659ed-67e0-44d4-a5b3-b8d0029ffa7e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37020e16-bbf7-4d46-a463-62f41acbbdab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bdf6e5f8-acae-4ca0-a205-a73594668944', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f34ee933-6c38-4761-bdaf-c769de521957, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:36:23 np0005532048 NetworkManager[48920]: <info>  [1763804183.8192] device (tapf4a3cf1b-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:36:23 np0005532048 NetworkManager[48920]: <info>  [1763804183.8200] device (tapf4a3cf1b-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:36:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.819 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b in datapath 37020e16-bbf7-4d46-a463-62f41acbbdab bound to our chassis#033[00m
Nov 22 04:36:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.823 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 37020e16-bbf7-4d46-a463-62f41acbbdab#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.827 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.832 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:23Z|01135|binding|INFO|Setting lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b ovn-installed in OVS
Nov 22 04:36:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:23Z|01136|binding|INFO|Setting lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b up in Southbound
Nov 22 04:36:23 np0005532048 nova_compute[253661]: 2025-11-22 09:36:23.833 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.839 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[089c81ec-a927-41f7-b591-ac697b075e3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.840 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap37020e16-b1 in ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:36:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.842 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap37020e16-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:36:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.843 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94963a80-b472-4a01-9d78-e81045b38aeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.843 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3841dd58-67a1-482d-8543-66c3bc5cebb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:23 np0005532048 systemd-machined[215941]: New machine qemu-141-instance-00000071.
Nov 22 04:36:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.858 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d15d52-f1b1-40db-90cc-3baaf8845e00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:23 np0005532048 systemd[1]: Started Virtual Machine qemu-141-instance-00000071.
Nov 22 04:36:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.877 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6d52f325-a442-44af-8b45-edc137270675]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 405 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 22 04:36:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.919 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[85538713-6723-4855-baae-43549b45bcf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:23 np0005532048 NetworkManager[48920]: <info>  [1763804183.9281] manager: (tap37020e16-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/471)
Nov 22 04:36:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.927 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[59ba4ed9-bf47-44cf-a79f-067b8d1c11a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.969 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e4ad8a-3007-445f-8042-74b115e7994f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:23.972 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1ee0c1e3-ea4d-45ac-96ea-6bf9d7dfea4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:23Z|01137|binding|INFO|Releasing lport 3ff32fba-8fe7-4d58-94eb-b5f91ea2b9e2 from this chassis (sb_readonly=0)
Nov 22 04:36:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:23Z|01138|binding|INFO|Releasing lport 97798f16-a2eb-434e-aad3-3ece954bb8e7 from this chassis (sb_readonly=0)
Nov 22 04:36:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:23Z|01139|binding|INFO|Releasing lport ff0f834b-9623-4226-98e1-741634e7eb05 from this chassis (sb_readonly=0)
Nov 22 04:36:24 np0005532048 NetworkManager[48920]: <info>  [1763804184.0033] device (tap37020e16-b0): carrier: link connected
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.008 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[92cfb2a9-ec6e-4582-8c77-6a04a9e796cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.022 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.031 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[465a8f64-b57a-4169-a2cb-0be084edb204]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37020e16-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:87:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 329], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703474, 'reachable_time': 42367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370301, 'error': None, 'target': 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.050 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d532ece5-16e8-42ee-97fa-dbb88f892507]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:87fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703474, 'tstamp': 703474}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370302, 'error': None, 'target': 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.074 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[32880b3d-d220-4677-ae7b-48f3f483561c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37020e16-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:87:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 329], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703474, 'reachable_time': 42367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370303, 'error': None, 'target': 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.109 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7b3234ab-db15-43a8-8093-469f6e62db91" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.111 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.111 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.112 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.112 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.115 253665 INFO nova.compute.manager [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Terminating instance#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.117 253665 DEBUG nova.compute.manager [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.116 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7e5551a2-c0f0-4694-9954-e595c7d95335]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.191 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[62c7b17d-2e3d-44b8-b073-98c6d76690b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.193 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37020e16-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.193 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.194 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37020e16-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.195 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:24 np0005532048 kernel: tap37020e16-b0: entered promiscuous mode
Nov 22 04:36:24 np0005532048 NetworkManager[48920]: <info>  [1763804184.1973] manager: (tap37020e16-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.201 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.201 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap37020e16-b0, col_values=(('external_ids', {'iface-id': 'fc048c06-919a-46ba-ac90-0356d56c12a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:24Z|01140|binding|INFO|Releasing lport fc048c06-919a-46ba-ac90-0356d56c12a5 from this chassis (sb_readonly=0)
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.202 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.226 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.229 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.229 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/37020e16-bbf7-4d46-a463-62f41acbbdab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/37020e16-bbf7-4d46-a463-62f41acbbdab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.233 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2e3804-61f4-41be-9aa1-03e418d81e5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.234 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-37020e16-bbf7-4d46-a463-62f41acbbdab
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/37020e16-bbf7-4d46-a463-62f41acbbdab.pid.haproxy
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 37020e16-bbf7-4d46-a463-62f41acbbdab
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.235 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'env', 'PROCESS_TAG=haproxy-37020e16-bbf7-4d46-a463-62f41acbbdab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/37020e16-bbf7-4d46-a463-62f41acbbdab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:36:24 np0005532048 kernel: tap735988ac-a6 (unregistering): left promiscuous mode
Nov 22 04:36:24 np0005532048 NetworkManager[48920]: <info>  [1763804184.4211] device (tap735988ac-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.438 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:24Z|01141|binding|INFO|Releasing lport 735988ac-a658-458d-975f-872cfa132420 from this chassis (sb_readonly=0)
Nov 22 04:36:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:24Z|01142|binding|INFO|Setting lport 735988ac-a658-458d-975f-872cfa132420 down in Southbound
Nov 22 04:36:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:24Z|01143|binding|INFO|Removing iface tap735988ac-a6 ovn-installed in OVS
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.440 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.446 253665 INFO nova.network.neutron [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Port ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.446 253665 DEBUG nova.network.neutron [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [{"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:24.447 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:53:06 10.100.0.13 2001:db8::f816:3eff:fe0e:5306'], port_security=['fa:16:3e:0e:53:06 10.100.0.13 2001:db8::f816:3eff:fe0e:5306'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:fe0e:5306/64', 'neutron:device_id': '7b3234ab-db15-43a8-8093-469f6e62db91', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7d5326a8-c171-4fdf-9f85-e6536ded5f96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b741a31-36e5-42a1-8d34-26158fe9deb6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=735988ac-a658-458d-975f-872cfa132420) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.508 253665 DEBUG oslo_concurrency.lockutils [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.514 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:24 np0005532048 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Nov 22 04:36:24 np0005532048 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006e.scope: Consumed 15.514s CPU time.
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.532 253665 DEBUG oslo_concurrency.lockutils [None req-7da8cc45-185c-444c-a6a0-4c399140106d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "interface-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-ef638c6f-1ecf-41b0-8e1d-8cf4b1932ea3" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:24 np0005532048 systemd-machined[215941]: Machine qemu-139-instance-0000006e terminated.
Nov 22 04:36:24 np0005532048 podman[370338]: 2025-11-22 09:36:24.628738463 +0000 UTC m=+0.026676716 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.765 253665 INFO nova.virt.libvirt.driver [-] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Instance destroyed successfully.#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.765 253665 DEBUG nova.objects.instance [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 7b3234ab-db15-43a8-8093-469f6e62db91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.780 253665 DEBUG nova.virt.libvirt.vif [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-340448396',display_name='tempest-TestGettingAddress-server-340448396',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-340448396',id=110,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYbMWe4z302rooKb1Fl9KsWEsQq9eJv7uwrie/+E2IEF73PZ7Q/MP1db2I4qPqzgaz7gDwBLtve+rM5AYXA2YyYtxocXJ5KxIrfavkYohl0lPkuqWw4VEg4gSQE4G/PeA==',key_name='tempest-TestGettingAddress-1586923381',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-quxnyf0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:47Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7b3234ab-db15-43a8-8093-469f6e62db91,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.781 253665 DEBUG nova.network.os_vif_util [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.782 253665 DEBUG nova.network.os_vif_util [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:53:06,bridge_name='br-int',has_traffic_filtering=True,id=735988ac-a658-458d-975f-872cfa132420,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap735988ac-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.783 253665 DEBUG os_vif [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:53:06,bridge_name='br-int',has_traffic_filtering=True,id=735988ac-a658-458d-975f-872cfa132420,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap735988ac-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.785 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.785 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap735988ac-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.787 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.790 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.793 253665 INFO os_vif [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:53:06,bridge_name='br-int',has_traffic_filtering=True,id=735988ac-a658-458d-975f-872cfa132420,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap735988ac-a6')#033[00m
Nov 22 04:36:24 np0005532048 podman[370338]: 2025-11-22 09:36:24.834855298 +0000 UTC m=+0.232793531 container create 743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.859 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.860 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.860 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.860 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.860 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.861 253665 INFO nova.compute.manager [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Terminating instance#033[00m
Nov 22 04:36:24 np0005532048 nova_compute[253661]: 2025-11-22 09:36:24.862 253665 DEBUG nova.compute.manager [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:36:24 np0005532048 systemd[1]: Started libpod-conmon-743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7.scope.
Nov 22 04:36:25 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:36:25 np0005532048 kernel: tap21b54230-3a (unregistering): left promiscuous mode
Nov 22 04:36:25 np0005532048 NetworkManager[48920]: <info>  [1763804185.0340] device (tap21b54230-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.042 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:25Z|01144|binding|INFO|Releasing lport 21b54230-3ad3-4b65-b752-5a1b0472844e from this chassis (sb_readonly=0)
Nov 22 04:36:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:25Z|01145|binding|INFO|Setting lport 21b54230-3ad3-4b65-b752-5a1b0472844e down in Southbound
Nov 22 04:36:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:25Z|01146|binding|INFO|Removing iface tap21b54230-3a ovn-installed in OVS
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.046 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59bd18a679f16bc779df4f96decf1dfaf90586c9dd2d247ab24ea9be9b34ce71/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.051 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:ee:9e 10.100.0.5'], port_security=['fa:16:3e:ad:ee:9e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c1e456e-4030-4169-b20f-3aec7a20c24e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '624d1a5b-7d33-4814-8a02-c8e1e513249a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a19e22c3-d4f6-4134-81df-8e8895569f77, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=21b54230-3ad3-4b65-b752-5a1b0472844e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:36:25 np0005532048 podman[370338]: 2025-11-22 09:36:25.072009026 +0000 UTC m=+0.469947279 container init 743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.074 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:25 np0005532048 podman[370338]: 2025-11-22 09:36:25.083655545 +0000 UTC m=+0.481593778 container start 743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:36:25 np0005532048 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000070.scope: Deactivated successfully.
Nov 22 04:36:25 np0005532048 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000070.scope: Consumed 16.626s CPU time.
Nov 22 04:36:25 np0005532048 systemd-machined[215941]: Machine qemu-140-instance-00000070 terminated.
Nov 22 04:36:25 np0005532048 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[370381]: [NOTICE]   (370388) : New worker (370390) forked
Nov 22 04:36:25 np0005532048 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[370381]: [NOTICE]   (370388) : Loading success.
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.174 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 735988ac-a658-458d-975f-872cfa132420 in datapath d3e4e01e-5e3e-4572-b404-ee47aaec1186 unbound from our chassis#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.175 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3e4e01e-5e3e-4572-b404-ee47aaec1186#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.193 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[95ac292d-0295-4fcb-8728-04bf0251f715]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.234 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[33d3c718-b805-4ab7-8821-225ddf0c2541]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.238 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9fed38-f957-4d5b-a2ae-f82a8532af56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:25 np0005532048 NetworkManager[48920]: <info>  [1763804185.2839] manager: (tap21b54230-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/473)
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.287 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.284 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6c4653-823e-49d5-96b1-c2c22d9ef979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.296 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.304 253665 INFO nova.virt.libvirt.driver [-] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Instance destroyed successfully.#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.305 253665 DEBUG nova.objects.instance [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.319 253665 DEBUG nova.virt.libvirt.vif [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1782158666',display_name='tempest-TestNetworkBasicOps-server-1782158666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1782158666',id=112,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDPiBcw6ZBDV6tkcFqZlu1lhHm0MiwPZL0IeJktdDdG+KviOMJlJVqa2KjSkpAnVj7qVvepWVZ/5HnvGItDWSBQoVV3xo5zh9nkcAbH2rvl/77Z48jaH2d5pf2BXb0+xQQ==',key_name='tempest-TestNetworkBasicOps-399477283',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1prejprq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:47Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.319 253665 DEBUG nova.network.os_vif_util [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "21b54230-3ad3-4b65-b752-5a1b0472844e", "address": "fa:16:3e:ad:ee:9e", "network": {"id": "5c1e456e-4030-4169-b20f-3aec7a20c24e", "bridge": "br-int", "label": "tempest-network-smoke--1510868784", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b54230-3a", "ovs_interfaceid": "21b54230-3ad3-4b65-b752-5a1b0472844e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.320 253665 DEBUG nova.network.os_vif_util [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ad:ee:9e,bridge_name='br-int',has_traffic_filtering=True,id=21b54230-3ad3-4b65-b752-5a1b0472844e,network=Network(5c1e456e-4030-4169-b20f-3aec7a20c24e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b54230-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.320 253665 DEBUG os_vif [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:ee:9e,bridge_name='br-int',has_traffic_filtering=True,id=21b54230-3ad3-4b65-b752-5a1b0472844e,network=Network(5c1e456e-4030-4169-b20f-3aec7a20c24e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b54230-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.322 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.322 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21b54230-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.324 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d58076-cf73-4f06-af35-1ea2a7259688]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3e4e01e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:a9:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696240, 'reachable_time': 35247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370408, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.324 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.330 253665 INFO os_vif [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:ee:9e,bridge_name='br-int',has_traffic_filtering=True,id=21b54230-3ad3-4b65-b752-5a1b0472844e,network=Network(5c1e456e-4030-4169-b20f-3aec7a20c24e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b54230-3a')#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.349 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[84b2225c-e29c-4679-a6a3-d681c22672fe]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd3e4e01e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696257, 'tstamp': 696257}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370420, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd3e4e01e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696261, 'tstamp': 696261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370420, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.352 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3e4e01e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.354 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.356 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3e4e01e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.356 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.356 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3e4e01e-50, col_values=(('external_ids', {'iface-id': 'ff0f834b-9623-4226-98e1-741634e7eb05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.357 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.358 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 21b54230-3ad3-4b65-b752-5a1b0472844e in datapath 5c1e456e-4030-4169-b20f-3aec7a20c24e unbound from our chassis#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.355 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.359 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c1e456e-4030-4169-b20f-3aec7a20c24e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.360 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[416c0c90-2270-43b7-97a3-ade7929a76d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.361 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e namespace which is not needed anymore#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.399 253665 DEBUG nova.compute.manager [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.400 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.400 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.400 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.400 253665 DEBUG nova.compute.manager [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Processing event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.400 253665 DEBUG nova.compute.manager [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.401 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.401 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.401 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.401 253665 DEBUG nova.compute.manager [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] No waiting events found dispatching network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.401 253665 WARNING nova.compute.manager [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received unexpected event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.401 253665 DEBUG nova.compute.manager [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-vif-unplugged-735988ac-a658-458d-975f-872cfa132420 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.402 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.402 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.402 253665 DEBUG oslo_concurrency.lockutils [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.402 253665 DEBUG nova.compute.manager [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] No waiting events found dispatching network-vif-unplugged-735988ac-a658-458d-975f-872cfa132420 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.402 253665 DEBUG nova.compute.manager [req-c3537e99-183e-48f5-9867-d546e52fe101 req-62144437-a58a-430c-96bf-7bbf4450fd6e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-vif-unplugged-735988ac-a658-458d-975f-872cfa132420 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:36:25 np0005532048 neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e[369486]: [NOTICE]   (369490) : haproxy version is 2.8.14-c23fe91
Nov 22 04:36:25 np0005532048 neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e[369486]: [NOTICE]   (369490) : path to executable is /usr/sbin/haproxy
Nov 22 04:36:25 np0005532048 neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e[369486]: [WARNING]  (369490) : Exiting Master process...
Nov 22 04:36:25 np0005532048 neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e[369486]: [ALERT]    (369490) : Current worker (369493) exited with code 143 (Terminated)
Nov 22 04:36:25 np0005532048 neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e[369486]: [WARNING]  (369490) : All workers exited. Exiting... (0)
Nov 22 04:36:25 np0005532048 systemd[1]: libpod-43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230.scope: Deactivated successfully.
Nov 22 04:36:25 np0005532048 podman[370454]: 2025-11-22 09:36:25.542583628 +0000 UTC m=+0.077035797 container died 43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.576 253665 INFO nova.virt.libvirt.driver [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Deleting instance files /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91_del#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.578 253665 INFO nova.virt.libvirt.driver [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Deletion of /var/lib/nova/instances/7b3234ab-db15-43a8-8093-469f6e62db91_del complete#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.622 253665 DEBUG nova.compute.manager [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-changed-735988ac-a658-458d-975f-872cfa132420 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.622 253665 DEBUG nova.compute.manager [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Refreshing instance network info cache due to event network-changed-735988ac-a658-458d-975f-872cfa132420. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.622 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.623 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.623 253665 DEBUG nova.network.neutron [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Refreshing network info cache for port 735988ac-a658-458d-975f-872cfa132420 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:36:25 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230-userdata-shm.mount: Deactivated successfully.
Nov 22 04:36:25 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e8f47abe13203d442e803e0c61cc2d5415b1d609f3c4c552ce2b7214c39c4e00-merged.mount: Deactivated successfully.
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.633 253665 INFO nova.compute.manager [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Took 1.52 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.634 253665 DEBUG oslo.service.loopingcall [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.635 253665 DEBUG nova.compute.manager [-] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.635 253665 DEBUG nova.network.neutron [-] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:36:25 np0005532048 podman[370454]: 2025-11-22 09:36:25.638795451 +0000 UTC m=+0.173247610 container cleanup 43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 04:36:25 np0005532048 systemd[1]: libpod-conmon-43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230.scope: Deactivated successfully.
Nov 22 04:36:25 np0005532048 podman[370470]: 2025-11-22 09:36:25.688490376 +0000 UTC m=+0.117820211 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 22 04:36:25 np0005532048 podman[370541]: 2025-11-22 09:36:25.738031269 +0000 UTC m=+0.071819617 container remove 43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.746 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9a4f76e3-7066-4f97-8d3c-41c88e83d115]: (4, ('Sat Nov 22 09:36:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e (43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230)\n43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230\nSat Nov 22 09:36:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e (43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230)\n43b82b6716b9b665da42bfafd00e3b4d77070d15baf5a275696cc6b5f37a1230\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.748 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b0a77a-681d-4701-826e-60348527f223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.749 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c1e456e-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.751 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:25 np0005532048 kernel: tap5c1e456e-40: left promiscuous mode
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.768 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.771 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[74c70c6e-ed5a-4d3b-b7ee-00ef2aec74c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.772 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804185.7712848, 750659ed-67e0-44d4-a5b3-b8d0029ffa7e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.772 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] VM Started (Lifecycle Event)#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.775 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.780 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.782 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[830cdee7-f179-4819-9cf1-5dc4d9562de8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.784 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba1fc88-e05c-4aed-a3a0-b3a0931404c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.787 253665 INFO nova.virt.libvirt.driver [-] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Instance spawned successfully.#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.788 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.793 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.797 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.803 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aa121216-aefb-4baf-96d9-c7ed70efd660]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699787, 'reachable_time': 17261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370569, 'error': None, 'target': 'ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.806 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c1e456e-4030-4169-b20f-3aec7a20c24e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:36:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:25.806 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[6691828f-9623-4697-b2d5-339b92af21d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:25 np0005532048 systemd[1]: run-netns-ovnmeta\x2d5c1e456e\x2d4030\x2d4169\x2db20f\x2d3aec7a20c24e.mount: Deactivated successfully.
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.806 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.807 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.807 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.808 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.808 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.808 253665 DEBUG nova.virt.libvirt.driver [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.813 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.814 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804185.7715914, 750659ed-67e0-44d4-a5b3-b8d0029ffa7e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.814 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.838 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.842 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804185.7813919, 750659ed-67e0-44d4-a5b3-b8d0029ffa7e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.843 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.858 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.861 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.875 253665 INFO nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Took 9.65 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.876 253665 DEBUG nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.884 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:36:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 405 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.946 253665 INFO nova.virt.libvirt.driver [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Deleting instance files /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_del#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.947 253665 INFO nova.virt.libvirt.driver [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Deletion of /var/lib/nova/instances/c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5_del complete#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.955 253665 INFO nova.compute.manager [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Took 10.94 seconds to build instance.#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.980 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:25 np0005532048 nova_compute[253661]: 2025-11-22 09:36:25.986 253665 DEBUG oslo_concurrency.lockutils [None req-77117639-7ac0-4abf-85fb-73153f75494b ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:26 np0005532048 nova_compute[253661]: 2025-11-22 09:36:26.012 253665 INFO nova.compute.manager [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Took 1.15 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:36:26 np0005532048 nova_compute[253661]: 2025-11-22 09:36:26.012 253665 DEBUG oslo.service.loopingcall [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:36:26 np0005532048 nova_compute[253661]: 2025-11-22 09:36:26.013 253665 DEBUG nova.compute.manager [-] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:36:26 np0005532048 nova_compute[253661]: 2025-11-22 09:36:26.013 253665 DEBUG nova.network.neutron [-] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:36:26 np0005532048 nova_compute[253661]: 2025-11-22 09:36:26.726 253665 DEBUG nova.network.neutron [-] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:26 np0005532048 nova_compute[253661]: 2025-11-22 09:36:26.751 253665 INFO nova.compute.manager [-] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Took 1.12 seconds to deallocate network for instance.#033[00m
Nov 22 04:36:26 np0005532048 nova_compute[253661]: 2025-11-22 09:36:26.800 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:26 np0005532048 nova_compute[253661]: 2025-11-22 09:36:26.801 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:26 np0005532048 nova_compute[253661]: 2025-11-22 09:36:26.917 253665 DEBUG oslo_concurrency.processutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.089 253665 DEBUG nova.network.neutron [-] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.105 253665 INFO nova.compute.manager [-] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Took 1.09 seconds to deallocate network for instance.#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.157 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.230 253665 INFO nova.compute.manager [None req-0cd4ff5d-cc69-4d2c-a4b4-237b8e2e3871 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Pausing#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.231 253665 DEBUG nova.objects.instance [None req-0cd4ff5d-cc69-4d2c-a4b4-237b8e2e3871 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'flavor' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.261 253665 DEBUG nova.compute.manager [None req-0cd4ff5d-cc69-4d2c-a4b4-237b8e2e3871 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.262 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804187.2611864, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.262 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.285 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.292 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.337 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 22 04:36:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:36:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/330044043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.416 253665 DEBUG oslo_concurrency.processutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.423 253665 DEBUG nova.compute.provider_tree [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.438 253665 DEBUG nova.scheduler.client.report [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.463 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.467 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.537 253665 INFO nova.scheduler.client.report [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 7b3234ab-db15-43a8-8093-469f6e62db91#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.642 253665 DEBUG oslo_concurrency.processutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.692 253665 DEBUG nova.compute.manager [req-cc80cd8f-79ad-4732-82fe-5320911af124 req-fc31c5bd-a895-4d55-9bad-26c348335fd5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.693 253665 DEBUG oslo_concurrency.lockutils [req-cc80cd8f-79ad-4732-82fe-5320911af124 req-fc31c5bd-a895-4d55-9bad-26c348335fd5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.693 253665 DEBUG oslo_concurrency.lockutils [req-cc80cd8f-79ad-4732-82fe-5320911af124 req-fc31c5bd-a895-4d55-9bad-26c348335fd5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.694 253665 DEBUG oslo_concurrency.lockutils [req-cc80cd8f-79ad-4732-82fe-5320911af124 req-fc31c5bd-a895-4d55-9bad-26c348335fd5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.694 253665 DEBUG nova.compute.manager [req-cc80cd8f-79ad-4732-82fe-5320911af124 req-fc31c5bd-a895-4d55-9bad-26c348335fd5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] No waiting events found dispatching network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.694 253665 WARNING nova.compute.manager [req-cc80cd8f-79ad-4732-82fe-5320911af124 req-fc31c5bd-a895-4d55-9bad-26c348335fd5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received unexpected event network-vif-plugged-735988ac-a658-458d-975f-872cfa132420 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.698 253665 DEBUG oslo_concurrency.lockutils [None req-0494d3bf-aa1b-4951-a927-5e6cc5d3e3ef 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7b3234ab-db15-43a8-8093-469f6e62db91" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.709 253665 DEBUG nova.network.neutron [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Updated VIF entry in instance network info cache for port 735988ac-a658-458d-975f-872cfa132420. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.710 253665 DEBUG nova.network.neutron [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Updating instance_info_cache with network_info: [{"id": "735988ac-a658-458d-975f-872cfa132420", "address": "fa:16:3e:0e:53:06", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0e:5306", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap735988ac-a6", "ovs_interfaceid": "735988ac-a658-458d-975f-872cfa132420", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.732 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-7b3234ab-db15-43a8-8093-469f6e62db91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.732 253665 DEBUG nova.compute.manager [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-changed-21b54230-3ad3-4b65-b752-5a1b0472844e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.732 253665 DEBUG nova.compute.manager [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Refreshing instance network info cache due to event network-changed-21b54230-3ad3-4b65-b752-5a1b0472844e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.733 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.733 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.733 253665 DEBUG nova.network.neutron [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Refreshing network info cache for port 21b54230-3ad3-4b65-b752-5a1b0472844e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.850 253665 DEBUG nova.compute.manager [req-67467b2a-588f-4307-a9d7-cdbfc24849a9 req-96587aa1-2136-41c6-9587-382cb9bd7b55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.851 253665 DEBUG oslo_concurrency.lockutils [req-67467b2a-588f-4307-a9d7-cdbfc24849a9 req-96587aa1-2136-41c6-9587-382cb9bd7b55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.851 253665 DEBUG oslo_concurrency.lockutils [req-67467b2a-588f-4307-a9d7-cdbfc24849a9 req-96587aa1-2136-41c6-9587-382cb9bd7b55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.851 253665 DEBUG oslo_concurrency.lockutils [req-67467b2a-588f-4307-a9d7-cdbfc24849a9 req-96587aa1-2136-41c6-9587-382cb9bd7b55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.852 253665 DEBUG nova.compute.manager [req-67467b2a-588f-4307-a9d7-cdbfc24849a9 req-96587aa1-2136-41c6-9587-382cb9bd7b55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] No waiting events found dispatching network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.852 253665 WARNING nova.compute.manager [req-67467b2a-588f-4307-a9d7-cdbfc24849a9 req-96587aa1-2136-41c6-9587-382cb9bd7b55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received unexpected event network-vif-plugged-21b54230-3ad3-4b65-b752-5a1b0472844e for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.854 253665 DEBUG nova.compute.manager [req-67467b2a-588f-4307-a9d7-cdbfc24849a9 req-96587aa1-2136-41c6-9587-382cb9bd7b55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Received event network-vif-deleted-735988ac-a658-458d-975f-872cfa132420 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:27 np0005532048 nova_compute[253661]: 2025-11-22 09:36:27.855 253665 DEBUG nova.compute.manager [req-67467b2a-588f-4307-a9d7-cdbfc24849a9 req-96587aa1-2136-41c6-9587-382cb9bd7b55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-deleted-21b54230-3ad3-4b65-b752-5a1b0472844e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 301 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Nov 22 04:36:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:27.978 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:27.979 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:27.980 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.048 253665 DEBUG nova.network.neutron [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:36:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:36:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3327648505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.121 253665 DEBUG oslo_concurrency.processutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.127 253665 DEBUG nova.compute.provider_tree [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.160 253665 DEBUG nova.scheduler.client.report [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.183 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.217 253665 INFO nova.scheduler.client.report [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.279 253665 DEBUG oslo_concurrency.lockutils [None req-b37d3121-f91d-4098-b6d2-07c30bba5a8d 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.394 253665 DEBUG nova.network.neutron [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.413 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.413 253665 DEBUG nova.compute.manager [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-unplugged-21b54230-3ad3-4b65-b752-5a1b0472844e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.414 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.414 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.414 253665 DEBUG oslo_concurrency.lockutils [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.414 253665 DEBUG nova.compute.manager [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] No waiting events found dispatching network-vif-unplugged-21b54230-3ad3-4b65-b752-5a1b0472844e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.414 253665 DEBUG nova.compute.manager [req-1e0f8403-ec8e-4682-9be2-622cf25b9867 req-117b54a2-c845-487c-ba2c-454e58a21b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Received event network-vif-unplugged-21b54230-3ad3-4b65-b752-5a1b0472844e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.665 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2a866674-0c27-4cfc-89f2-dfe8e9768900" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.666 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.666 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.666 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.666 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.668 253665 INFO nova.compute.manager [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Terminating instance#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.669 253665 DEBUG nova.compute.manager [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:36:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:36:28 np0005532048 kernel: tap0334ba91-f8 (unregistering): left promiscuous mode
Nov 22 04:36:28 np0005532048 NetworkManager[48920]: <info>  [1763804188.7429] device (tap0334ba91-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:36:28 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:28Z|01147|binding|INFO|Releasing lport 0334ba91-f8b0-462b-a47b-b421e8796a21 from this chassis (sb_readonly=0)
Nov 22 04:36:28 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:28Z|01148|binding|INFO|Setting lport 0334ba91-f8b0-462b-a47b-b421e8796a21 down in Southbound
Nov 22 04:36:28 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:28Z|01149|binding|INFO|Removing iface tap0334ba91-f8 ovn-installed in OVS
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.754 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.760 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:28.779 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:33:76 10.100.0.5 2001:db8::f816:3eff:feb6:3376'], port_security=['fa:16:3e:b6:33:76 10.100.0.5 2001:db8::f816:3eff:feb6:3376'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28 2001:db8::f816:3eff:feb6:3376/64', 'neutron:device_id': '2a866674-0c27-4cfc-89f2-dfe8e9768900', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7d5326a8-c171-4fdf-9f85-e6536ded5f96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b741a31-36e5-42a1-8d34-26158fe9deb6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=0334ba91-f8b0-462b-a47b-b421e8796a21) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:36:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:28.780 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 0334ba91-f8b0-462b-a47b-b421e8796a21 in datapath d3e4e01e-5e3e-4572-b404-ee47aaec1186 unbound from our chassis#033[00m
Nov 22 04:36:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:28.782 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d3e4e01e-5e3e-4572-b404-ee47aaec1186, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:36:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:28.783 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[60d127bc-0527-4f11-a940-3770aab72ff7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:28.784 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186 namespace which is not needed anymore#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.789 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:28 np0005532048 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Nov 22 04:36:28 np0005532048 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006c.scope: Consumed 17.217s CPU time.
Nov 22 04:36:28 np0005532048 systemd-machined[215941]: Machine qemu-135-instance-0000006c terminated.
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.919 253665 INFO nova.virt.libvirt.driver [-] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Instance destroyed successfully.#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.919 253665 DEBUG nova.objects.instance [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 2a866674-0c27-4cfc-89f2-dfe8e9768900 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.936 253665 DEBUG nova.virt.libvirt.vif [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:34:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-534004704',display_name='tempest-TestGettingAddress-server-534004704',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-534004704',id=108,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLYbMWe4z302rooKb1Fl9KsWEsQq9eJv7uwrie/+E2IEF73PZ7Q/MP1db2I4qPqzgaz7gDwBLtve+rM5AYXA2YyYtxocXJ5KxIrfavkYohl0lPkuqWw4VEg4gSQE4G/PeA==',key_name='tempest-TestGettingAddress-1586923381',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-svtsxafy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:35:12Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2a866674-0c27-4cfc-89f2-dfe8e9768900,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.937 253665 DEBUG nova.network.os_vif_util [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.938 253665 DEBUG nova.network.os_vif_util [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b6:33:76,bridge_name='br-int',has_traffic_filtering=True,id=0334ba91-f8b0-462b-a47b-b421e8796a21,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0334ba91-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.939 253665 DEBUG os_vif [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b6:33:76,bridge_name='br-int',has_traffic_filtering=True,id=0334ba91-f8b0-462b-a47b-b421e8796a21,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0334ba91-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.941 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.942 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0334ba91-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:28 np0005532048 neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186[364381]: [NOTICE]   (364385) : haproxy version is 2.8.14-c23fe91
Nov 22 04:36:28 np0005532048 neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186[364381]: [NOTICE]   (364385) : path to executable is /usr/sbin/haproxy
Nov 22 04:36:28 np0005532048 neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186[364381]: [WARNING]  (364385) : Exiting Master process...
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.943 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.946 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:28 np0005532048 neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186[364381]: [ALERT]    (364385) : Current worker (364387) exited with code 143 (Terminated)
Nov 22 04:36:28 np0005532048 neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186[364381]: [WARNING]  (364385) : All workers exited. Exiting... (0)
Nov 22 04:36:28 np0005532048 nova_compute[253661]: 2025-11-22 09:36:28.948 253665 INFO os_vif [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b6:33:76,bridge_name='br-int',has_traffic_filtering=True,id=0334ba91-f8b0-462b-a47b-b421e8796a21,network=Network(d3e4e01e-5e3e-4572-b404-ee47aaec1186),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0334ba91-f8')#033[00m
Nov 22 04:36:28 np0005532048 systemd[1]: libpod-a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a.scope: Deactivated successfully.
Nov 22 04:36:28 np0005532048 podman[370635]: 2025-11-22 09:36:28.953482362 +0000 UTC m=+0.047737918 container died a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:36:28 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a-userdata-shm.mount: Deactivated successfully.
Nov 22 04:36:28 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c895ab309e207727e077cfd1951c34149c6808ab71f4a6a14342fc5743fd6a36-merged.mount: Deactivated successfully.
Nov 22 04:36:29 np0005532048 podman[370635]: 2025-11-22 09:36:29.023383561 +0000 UTC m=+0.117639117 container cleanup a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 04:36:29 np0005532048 systemd[1]: libpod-conmon-a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a.scope: Deactivated successfully.
Nov 22 04:36:29 np0005532048 podman[370689]: 2025-11-22 09:36:29.100037237 +0000 UTC m=+0.051520342 container remove a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:36:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.110 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[88167098-c059-428b-b7ff-b1f332a62c40]: (4, ('Sat Nov 22 09:36:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186 (a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a)\na5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a\nSat Nov 22 09:36:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186 (a5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a)\na5a04d585e92fac0c8de22bb109da5aee1f593a28df0e735b4e4143bc048ec0a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.113 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6f907416-609d-4010-9862-226630b65b65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.114 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3e4e01e-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.116 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:29 np0005532048 kernel: tapd3e4e01e-50: left promiscuous mode
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.133 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.134 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[18bfc178-afcf-4af0-9cab-b14e12cee02b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.159 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf24145-a157-4783-bdf1-1088533c3cb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.162 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[20ad9f7e-ad90-44c1-aac9-8c330a1f0773]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.187 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd2d2c1-8eb7-4934-b5b5-7532095499e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696232, 'reachable_time': 43043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370705, 'error': None, 'target': 'ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.190 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d3e4e01e-5e3e-4572-b404-ee47aaec1186 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:36:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:29.190 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e8f876-d86a-480c-bebe-d5844d338e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:29 np0005532048 systemd[1]: run-netns-ovnmeta\x2dd3e4e01e\x2d5e3e\x2d4572\x2db404\x2dee47aaec1186.mount: Deactivated successfully.
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.278 253665 INFO nova.compute.manager [None req-da93e537-3a07-47c7-a2b6-83eea84e9f3f 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Unpausing#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.279 253665 DEBUG nova.objects.instance [None req-da93e537-3a07-47c7-a2b6-83eea84e9f3f 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'flavor' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.304 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804189.3041005, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.304 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:36:29 np0005532048 virtqemud[254229]: argument unsupported: QEMU guest agent is not configured
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.309 253665 DEBUG nova.virt.libvirt.guest [None req-da93e537-3a07-47c7-a2b6-83eea84e9f3f 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.310 253665 DEBUG nova.compute.manager [None req-da93e537-3a07-47c7-a2b6-83eea84e9f3f 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.317 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.319 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.338 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.435 253665 INFO nova.virt.libvirt.driver [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Deleting instance files /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900_del#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.436 253665 INFO nova.virt.libvirt.driver [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Deletion of /var/lib/nova/instances/2a866674-0c27-4cfc-89f2-dfe8e9768900_del complete#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.480 253665 INFO nova.compute.manager [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.483 253665 DEBUG oslo.service.loopingcall [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.484 253665 DEBUG nova.compute.manager [-] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.484 253665 DEBUG nova.network.neutron [-] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:36:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 246 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 644 KiB/s wr, 149 op/s
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.925 253665 DEBUG nova.compute.manager [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-changed-0334ba91-f8b0-462b-a47b-b421e8796a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.925 253665 DEBUG nova.compute.manager [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Refreshing instance network info cache due to event network-changed-0334ba91-f8b0-462b-a47b-b421e8796a21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.926 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.926 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:36:29 np0005532048 nova_compute[253661]: 2025-11-22 09:36:29.926 253665 DEBUG nova.network.neutron [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Refreshing network info cache for port 0334ba91-f8b0-462b-a47b-b421e8796a21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:36:30 np0005532048 nova_compute[253661]: 2025-11-22 09:36:30.232 253665 DEBUG nova.compute.manager [req-3fefbb1f-add0-452f-93da-7277bad6b2c6 req-5c4aebb4-2230-4b06-9f22-b12a1c84a61a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-changed-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:30 np0005532048 nova_compute[253661]: 2025-11-22 09:36:30.232 253665 DEBUG nova.compute.manager [req-3fefbb1f-add0-452f-93da-7277bad6b2c6 req-5c4aebb4-2230-4b06-9f22-b12a1c84a61a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Refreshing instance network info cache due to event network-changed-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:36:30 np0005532048 nova_compute[253661]: 2025-11-22 09:36:30.232 253665 DEBUG oslo_concurrency.lockutils [req-3fefbb1f-add0-452f-93da-7277bad6b2c6 req-5c4aebb4-2230-4b06-9f22-b12a1c84a61a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:36:30 np0005532048 nova_compute[253661]: 2025-11-22 09:36:30.235 253665 DEBUG oslo_concurrency.lockutils [req-3fefbb1f-add0-452f-93da-7277bad6b2c6 req-5c4aebb4-2230-4b06-9f22-b12a1c84a61a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:36:30 np0005532048 nova_compute[253661]: 2025-11-22 09:36:30.235 253665 DEBUG nova.network.neutron [req-3fefbb1f-add0-452f-93da-7277bad6b2c6 req-5c4aebb4-2230-4b06-9f22-b12a1c84a61a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Refreshing network info cache for port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:36:30 np0005532048 nova_compute[253661]: 2025-11-22 09:36:30.526 253665 DEBUG nova.network.neutron [-] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:30 np0005532048 nova_compute[253661]: 2025-11-22 09:36:30.546 253665 INFO nova.compute.manager [-] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Took 1.06 seconds to deallocate network for instance.#033[00m
Nov 22 04:36:30 np0005532048 nova_compute[253661]: 2025-11-22 09:36:30.605 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:30 np0005532048 nova_compute[253661]: 2025-11-22 09:36:30.605 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:30 np0005532048 nova_compute[253661]: 2025-11-22 09:36:30.695 253665 DEBUG oslo_concurrency.processutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:30 np0005532048 nova_compute[253661]: 2025-11-22 09:36:30.996 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:36:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1239708323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.152 253665 DEBUG oslo_concurrency.processutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.157 253665 DEBUG nova.compute.provider_tree [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.174 253665 DEBUG nova.scheduler.client.report [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.204 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.237 253665 INFO nova.scheduler.client.report [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 2a866674-0c27-4cfc-89f2-dfe8e9768900#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.299 253665 DEBUG oslo_concurrency.lockutils [None req-68bae8b4-fdc1-4e7c-98ee-0357f97b69e1 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.576 253665 DEBUG nova.network.neutron [req-3fefbb1f-add0-452f-93da-7277bad6b2c6 req-5c4aebb4-2230-4b06-9f22-b12a1c84a61a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Updated VIF entry in instance network info cache for port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.577 253665 DEBUG nova.network.neutron [req-3fefbb1f-add0-452f-93da-7277bad6b2c6 req-5c4aebb4-2230-4b06-9f22-b12a1c84a61a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Updating instance_info_cache with network_info: [{"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.605 253665 DEBUG oslo_concurrency.lockutils [req-3fefbb1f-add0-452f-93da-7277bad6b2c6 req-5c4aebb4-2230-4b06-9f22-b12a1c84a61a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.704 253665 DEBUG nova.network.neutron [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Updated VIF entry in instance network info cache for port 0334ba91-f8b0-462b-a47b-b421e8796a21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.704 253665 DEBUG nova.network.neutron [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Updating instance_info_cache with network_info: [{"id": "0334ba91-f8b0-462b-a47b-b421e8796a21", "address": "fa:16:3e:b6:33:76", "network": {"id": "d3e4e01e-5e3e-4572-b404-ee47aaec1186", "bridge": "br-int", "label": "tempest-network-smoke--1983448677", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb6:3376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0334ba91-f8", "ovs_interfaceid": "0334ba91-f8b0-462b-a47b-b421e8796a21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.721 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2a866674-0c27-4cfc-89f2-dfe8e9768900" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.722 253665 DEBUG nova.compute.manager [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-vif-unplugged-0334ba91-f8b0-462b-a47b-b421e8796a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.722 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.723 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.723 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.723 253665 DEBUG nova.compute.manager [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] No waiting events found dispatching network-vif-unplugged-0334ba91-f8b0-462b-a47b-b421e8796a21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.724 253665 DEBUG nova.compute.manager [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-vif-unplugged-0334ba91-f8b0-462b-a47b-b421e8796a21 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.724 253665 DEBUG nova.compute.manager [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.724 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.725 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.725 253665 DEBUG oslo_concurrency.lockutils [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2a866674-0c27-4cfc-89f2-dfe8e9768900-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.725 253665 DEBUG nova.compute.manager [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] No waiting events found dispatching network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.725 253665 WARNING nova.compute.manager [req-41c96907-d4c7-423d-a33b-482248627ec4 req-7ebc965f-8f68-443e-a3fc-f0e8d323c1c1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received unexpected event network-vif-plugged-0334ba91-f8b0-462b-a47b-b421e8796a21 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:36:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 223 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 295 KiB/s wr, 135 op/s
Nov 22 04:36:31 np0005532048 nova_compute[253661]: 2025-11-22 09:36:31.991 253665 DEBUG nova.compute.manager [req-0a474a9a-ed33-49ae-aae1-7e4cfa512080 req-8faf0496-139f-4eb3-83ac-de76125491fb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Received event network-vif-deleted-0334ba91-f8b0-462b-a47b-b421e8796a21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:32Z|01150|binding|INFO|Releasing lport fc048c06-919a-46ba-ac90-0356d56c12a5 from this chassis (sb_readonly=0)
Nov 22 04:36:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:32Z|01151|binding|INFO|Releasing lport 97798f16-a2eb-434e-aad3-3ece954bb8e7 from this chassis (sb_readonly=0)
Nov 22 04:36:32 np0005532048 nova_compute[253661]: 2025-11-22 09:36:32.872 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:36:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 167 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 161 op/s
Nov 22 04:36:33 np0005532048 nova_compute[253661]: 2025-11-22 09:36:33.945 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:35Z|01152|binding|INFO|Releasing lport fc048c06-919a-46ba-ac90-0356d56c12a5 from this chassis (sb_readonly=0)
Nov 22 04:36:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:35Z|01153|binding|INFO|Releasing lport 97798f16-a2eb-434e-aad3-3ece954bb8e7 from this chassis (sb_readonly=0)
Nov 22 04:36:35 np0005532048 nova_compute[253661]: 2025-11-22 09:36:35.223 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:35 np0005532048 nova_compute[253661]: 2025-11-22 09:36:35.263 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 167 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 158 op/s
Nov 22 04:36:35 np0005532048 nova_compute[253661]: 2025-11-22 09:36:35.998 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:36 np0005532048 nova_compute[253661]: 2025-11-22 09:36:36.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:36:36 np0005532048 nova_compute[253661]: 2025-11-22 09:36:36.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:36:36 np0005532048 nova_compute[253661]: 2025-11-22 09:36:36.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:36:36 np0005532048 nova_compute[253661]: 2025-11-22 09:36:36.434 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:36:36 np0005532048 nova_compute[253661]: 2025-11-22 09:36:36.434 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:36:36 np0005532048 nova_compute[253661]: 2025-11-22 09:36:36.434 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:36:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:36:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:36:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:36:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:36:37 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:36:37 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:36:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:36:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:36:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:36:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:36:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:36:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:36:37 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 66bfe3a4-b9a9-4a7a-9fe9-ce120f573c3e does not exist
Nov 22 04:36:37 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 9928045e-b008-4389-a919-e7b811aff2c5 does not exist
Nov 22 04:36:37 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ab2bd940-0c14-46ae-856a-a8fbbbe966fb does not exist
Nov 22 04:36:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:36:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:36:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:36:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:36:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:36:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:36:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 167 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 159 op/s
Nov 22 04:36:38 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:36:38 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:36:38 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:36:38 np0005532048 nova_compute[253661]: 2025-11-22 09:36:38.188 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:38 np0005532048 nova_compute[253661]: 2025-11-22 09:36:38.232 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:36:38 np0005532048 nova_compute[253661]: 2025-11-22 09:36:38.232 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:36:38 np0005532048 nova_compute[253661]: 2025-11-22 09:36:38.233 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:36:38 np0005532048 podman[371119]: 2025-11-22 09:36:38.237654957 +0000 UTC m=+0.045718077 container create 4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_curie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:36:38 np0005532048 systemd[1]: Started libpod-conmon-4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153.scope.
Nov 22 04:36:38 np0005532048 podman[371119]: 2025-11-22 09:36:38.216787578 +0000 UTC m=+0.024850738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:36:38 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:36:38 np0005532048 podman[371119]: 2025-11-22 09:36:38.337914491 +0000 UTC m=+0.145977651 container init 4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_curie, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 04:36:38 np0005532048 podman[371119]: 2025-11-22 09:36:38.346587677 +0000 UTC m=+0.154650807 container start 4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 04:36:38 np0005532048 podman[371119]: 2025-11-22 09:36:38.351013276 +0000 UTC m=+0.159076406 container attach 4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:36:38 np0005532048 gracious_curie[371135]: 167 167
Nov 22 04:36:38 np0005532048 systemd[1]: libpod-4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153.scope: Deactivated successfully.
Nov 22 04:36:38 np0005532048 podman[371119]: 2025-11-22 09:36:38.354146234 +0000 UTC m=+0.162209374 container died 4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:36:38 np0005532048 systemd[1]: var-lib-containers-storage-overlay-808da85b293a5e94177fdd56ae4917782efd24694d7667d4e33b7f6a2e601737-merged.mount: Deactivated successfully.
Nov 22 04:36:38 np0005532048 podman[371119]: 2025-11-22 09:36:38.402140358 +0000 UTC m=+0.210203498 container remove 4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:36:38 np0005532048 systemd[1]: libpod-conmon-4752020b32a7520b7dd0deeaef06632d2d9df9b2b068a07093556ab4d2649153.scope: Deactivated successfully.
Nov 22 04:36:38 np0005532048 podman[371161]: 2025-11-22 09:36:38.593888837 +0000 UTC m=+0.045298778 container create edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 04:36:38 np0005532048 systemd[1]: Started libpod-conmon-edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad.scope.
Nov 22 04:36:38 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:36:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a774fc1ff79f622d3a9db91c623eb0e9853743afdf51edf6d905fdc88e4b4c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:36:38 np0005532048 podman[371161]: 2025-11-22 09:36:38.574534515 +0000 UTC m=+0.025944466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:36:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a774fc1ff79f622d3a9db91c623eb0e9853743afdf51edf6d905fdc88e4b4c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:36:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a774fc1ff79f622d3a9db91c623eb0e9853743afdf51edf6d905fdc88e4b4c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:36:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a774fc1ff79f622d3a9db91c623eb0e9853743afdf51edf6d905fdc88e4b4c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:36:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a774fc1ff79f622d3a9db91c623eb0e9853743afdf51edf6d905fdc88e4b4c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:36:38 np0005532048 podman[371161]: 2025-11-22 09:36:38.695996056 +0000 UTC m=+0.147406027 container init edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kilby, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:36:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:36:38 np0005532048 podman[371161]: 2025-11-22 09:36:38.704839356 +0000 UTC m=+0.156249297 container start edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kilby, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:36:38 np0005532048 podman[371161]: 2025-11-22 09:36:38.708548328 +0000 UTC m=+0.159958299 container attach edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:36:38 np0005532048 nova_compute[253661]: 2025-11-22 09:36:38.949 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:39 np0005532048 nova_compute[253661]: 2025-11-22 09:36:39.762 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804184.7607296, 7b3234ab-db15-43a8-8093-469f6e62db91 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:36:39 np0005532048 nova_compute[253661]: 2025-11-22 09:36:39.763 253665 INFO nova.compute.manager [-] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:36:39 np0005532048 nova_compute[253661]: 2025-11-22 09:36:39.799 253665 DEBUG nova.compute.manager [None req-41440bae-1652-46f1-a970-fd02a911f8a4 - - - - - -] [instance: 7b3234ab-db15-43a8-8093-469f6e62db91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:39 np0005532048 nova_compute[253661]: 2025-11-22 09:36:39.815 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:39 np0005532048 loving_kilby[371177]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:36:39 np0005532048 loving_kilby[371177]: --> relative data size: 1.0
Nov 22 04:36:39 np0005532048 loving_kilby[371177]: --> All data devices are unavailable
Nov 22 04:36:39 np0005532048 systemd[1]: libpod-edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad.scope: Deactivated successfully.
Nov 22 04:36:39 np0005532048 podman[371161]: 2025-11-22 09:36:39.861091149 +0000 UTC m=+1.312501090 container died edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kilby, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 22 04:36:39 np0005532048 systemd[1]: libpod-edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad.scope: Consumed 1.063s CPU time.
Nov 22 04:36:39 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7a774fc1ff79f622d3a9db91c623eb0e9853743afdf51edf6d905fdc88e4b4c0-merged.mount: Deactivated successfully.
Nov 22 04:36:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 175 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 655 KiB/s rd, 674 KiB/s wr, 76 op/s
Nov 22 04:36:39 np0005532048 podman[371161]: 2025-11-22 09:36:39.922084176 +0000 UTC m=+1.373494117 container remove edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kilby, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:36:39 np0005532048 systemd[1]: libpod-conmon-edb6c309799f67c321fe2c5680b009d25dceb117f15108e4f564b65cc580b2ad.scope: Deactivated successfully.
Nov 22 04:36:40 np0005532048 nova_compute[253661]: 2025-11-22 09:36:40.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:36:40 np0005532048 nova_compute[253661]: 2025-11-22 09:36:40.302 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804185.3014693, c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:36:40 np0005532048 nova_compute[253661]: 2025-11-22 09:36:40.303 253665 INFO nova.compute.manager [-] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:36:40 np0005532048 nova_compute[253661]: 2025-11-22 09:36:40.325 253665 DEBUG nova.compute.manager [None req-02e52f06-cf57-46ec-9819-1f1c61e44eb2 - - - - - -] [instance: c8a8c709-00a1-4de2-ba0c-0c3f8acaa7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:40 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:40Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:c2:c9 10.100.0.11
Nov 22 04:36:40 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:40Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:c2:c9 10.100.0.11
Nov 22 04:36:40 np0005532048 podman[371359]: 2025-11-22 09:36:40.606009473 +0000 UTC m=+0.043362229 container create 0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brown, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:36:40 np0005532048 systemd[1]: Started libpod-conmon-0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f.scope.
Nov 22 04:36:40 np0005532048 podman[371359]: 2025-11-22 09:36:40.587355469 +0000 UTC m=+0.024708235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:36:40 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:36:40 np0005532048 podman[371359]: 2025-11-22 09:36:40.707254761 +0000 UTC m=+0.144607537 container init 0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brown, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 04:36:40 np0005532048 podman[371359]: 2025-11-22 09:36:40.7164841 +0000 UTC m=+0.153836856 container start 0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:36:40 np0005532048 podman[371359]: 2025-11-22 09:36:40.719906895 +0000 UTC m=+0.157259671 container attach 0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brown, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:36:40 np0005532048 zealous_brown[371375]: 167 167
Nov 22 04:36:40 np0005532048 systemd[1]: libpod-0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f.scope: Deactivated successfully.
Nov 22 04:36:40 np0005532048 conmon[371375]: conmon 0d157f3f0b828379924b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f.scope/container/memory.events
Nov 22 04:36:40 np0005532048 podman[371359]: 2025-11-22 09:36:40.726395027 +0000 UTC m=+0.163747783 container died 0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brown, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 04:36:40 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ef17f0b0cf6e33b0acd7da39c79722f97ba545b778635927179b270c71ce2561-merged.mount: Deactivated successfully.
Nov 22 04:36:40 np0005532048 podman[371359]: 2025-11-22 09:36:40.764905444 +0000 UTC m=+0.202258200 container remove 0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 04:36:40 np0005532048 systemd[1]: libpod-conmon-0d157f3f0b828379924b2cddc4960ba301ddc67db0c96b502d1c01f93eea643f.scope: Deactivated successfully.
Nov 22 04:36:40 np0005532048 podman[371399]: 2025-11-22 09:36:40.956244442 +0000 UTC m=+0.042737204 container create 8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 04:36:40 np0005532048 systemd[1]: Started libpod-conmon-8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76.scope.
Nov 22 04:36:41 np0005532048 nova_compute[253661]: 2025-11-22 09:36:41.001 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:41 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:36:41 np0005532048 podman[371399]: 2025-11-22 09:36:40.936735617 +0000 UTC m=+0.023228399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:36:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b28cfe46d3473ee25524c0807d1fa799a1c60f8376859d94251e910de104a96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:36:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b28cfe46d3473ee25524c0807d1fa799a1c60f8376859d94251e910de104a96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:36:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b28cfe46d3473ee25524c0807d1fa799a1c60f8376859d94251e910de104a96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:36:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b28cfe46d3473ee25524c0807d1fa799a1c60f8376859d94251e910de104a96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:36:41 np0005532048 podman[371399]: 2025-11-22 09:36:41.050527547 +0000 UTC m=+0.137020329 container init 8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 04:36:41 np0005532048 podman[371399]: 2025-11-22 09:36:41.058405343 +0000 UTC m=+0.144898105 container start 8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 04:36:41 np0005532048 podman[371399]: 2025-11-22 09:36:41.062658199 +0000 UTC m=+0.149150981 container attach 8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 04:36:41 np0005532048 nova_compute[253661]: 2025-11-22 09:36:41.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:36:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 185 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 408 KiB/s rd, 1.3 MiB/s wr, 77 op/s
Nov 22 04:36:41 np0005532048 charming_turing[371416]: {
Nov 22 04:36:41 np0005532048 charming_turing[371416]:    "0": [
Nov 22 04:36:41 np0005532048 charming_turing[371416]:        {
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "devices": [
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "/dev/loop3"
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            ],
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "lv_name": "ceph_lv0",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "lv_size": "21470642176",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "name": "ceph_lv0",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "tags": {
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.cluster_name": "ceph",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.crush_device_class": "",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.encrypted": "0",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.osd_id": "0",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.type": "block",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.vdo": "0"
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            },
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "type": "block",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "vg_name": "ceph_vg0"
Nov 22 04:36:41 np0005532048 charming_turing[371416]:        }
Nov 22 04:36:41 np0005532048 charming_turing[371416]:    ],
Nov 22 04:36:41 np0005532048 charming_turing[371416]:    "1": [
Nov 22 04:36:41 np0005532048 charming_turing[371416]:        {
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "devices": [
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "/dev/loop4"
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            ],
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "lv_name": "ceph_lv1",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "lv_size": "21470642176",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "name": "ceph_lv1",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "tags": {
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.cluster_name": "ceph",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.crush_device_class": "",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.encrypted": "0",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.osd_id": "1",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.type": "block",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.vdo": "0"
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            },
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "type": "block",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "vg_name": "ceph_vg1"
Nov 22 04:36:41 np0005532048 charming_turing[371416]:        }
Nov 22 04:36:41 np0005532048 charming_turing[371416]:    ],
Nov 22 04:36:41 np0005532048 charming_turing[371416]:    "2": [
Nov 22 04:36:41 np0005532048 charming_turing[371416]:        {
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "devices": [
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "/dev/loop5"
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            ],
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "lv_name": "ceph_lv2",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "lv_size": "21470642176",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "name": "ceph_lv2",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "tags": {
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.cluster_name": "ceph",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.crush_device_class": "",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.encrypted": "0",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.osd_id": "2",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.type": "block",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:                "ceph.vdo": "0"
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            },
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "type": "block",
Nov 22 04:36:41 np0005532048 charming_turing[371416]:            "vg_name": "ceph_vg2"
Nov 22 04:36:41 np0005532048 charming_turing[371416]:        }
Nov 22 04:36:41 np0005532048 charming_turing[371416]:    ]
Nov 22 04:36:41 np0005532048 charming_turing[371416]: }
Nov 22 04:36:41 np0005532048 systemd[1]: libpod-8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76.scope: Deactivated successfully.
Nov 22 04:36:42 np0005532048 podman[371425]: 2025-11-22 09:36:42.015470403 +0000 UTC m=+0.025905525 container died 8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 22 04:36:42 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9b28cfe46d3473ee25524c0807d1fa799a1c60f8376859d94251e910de104a96-merged.mount: Deactivated successfully.
Nov 22 04:36:42 np0005532048 podman[371425]: 2025-11-22 09:36:42.073504747 +0000 UTC m=+0.083939829 container remove 8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_turing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 04:36:42 np0005532048 systemd[1]: libpod-conmon-8b433696f84234a4f58313e77c41926f925b92b74652173c9b101e861c455b76.scope: Deactivated successfully.
Nov 22 04:36:42 np0005532048 nova_compute[253661]: 2025-11-22 09:36:42.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:36:42 np0005532048 nova_compute[253661]: 2025-11-22 09:36:42.248 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:42 np0005532048 nova_compute[253661]: 2025-11-22 09:36:42.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:42 np0005532048 nova_compute[253661]: 2025-11-22 09:36:42.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:42 np0005532048 nova_compute[253661]: 2025-11-22 09:36:42.249 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:36:42 np0005532048 nova_compute[253661]: 2025-11-22 09:36:42.249 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:36:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033510652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:36:42 np0005532048 nova_compute[253661]: 2025-11-22 09:36:42.713 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:42 np0005532048 podman[371603]: 2025-11-22 09:36:42.789654406 +0000 UTC m=+0.047100142 container create 9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:36:42 np0005532048 nova_compute[253661]: 2025-11-22 09:36:42.816 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:36:42 np0005532048 nova_compute[253661]: 2025-11-22 09:36:42.816 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:36:42 np0005532048 nova_compute[253661]: 2025-11-22 09:36:42.820 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:36:42 np0005532048 nova_compute[253661]: 2025-11-22 09:36:42.821 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:36:42 np0005532048 systemd[1]: Started libpod-conmon-9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915.scope.
Nov 22 04:36:42 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:36:42 np0005532048 podman[371603]: 2025-11-22 09:36:42.768927632 +0000 UTC m=+0.026373408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:36:42 np0005532048 podman[371603]: 2025-11-22 09:36:42.876616849 +0000 UTC m=+0.134062605 container init 9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 04:36:42 np0005532048 podman[371603]: 2025-11-22 09:36:42.889371067 +0000 UTC m=+0.146816803 container start 9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:36:42 np0005532048 podman[371603]: 2025-11-22 09:36:42.892623487 +0000 UTC m=+0.150069223 container attach 9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 04:36:42 np0005532048 festive_nobel[371620]: 167 167
Nov 22 04:36:42 np0005532048 systemd[1]: libpod-9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915.scope: Deactivated successfully.
Nov 22 04:36:42 np0005532048 conmon[371620]: conmon 9ba2adaf5186aa070669 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915.scope/container/memory.events
Nov 22 04:36:42 np0005532048 podman[371603]: 2025-11-22 09:36:42.900038291 +0000 UTC m=+0.157484087 container died 9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:36:42 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0309fd3b478628b11ce8d9cc175665e1e442dec44026ff654d54c7d636e79246-merged.mount: Deactivated successfully.
Nov 22 04:36:42 np0005532048 podman[371603]: 2025-11-22 09:36:42.994769168 +0000 UTC m=+0.252214904 container remove 9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 22 04:36:43 np0005532048 systemd[1]: libpod-conmon-9ba2adaf5186aa070669b25e5391245c46c84f12b7a2611aece3212d5b060915.scope: Deactivated successfully.
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.068 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.069 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3321MB free_disk=59.905818939208984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.069 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.069 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.145 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance cf5e117a-f203-4c8f-b795-01fb355ca5e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.146 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 750659ed-67e0-44d4-a5b3-b8d0029ffa7e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.146 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.146 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.168 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 22 04:36:43 np0005532048 podman[371644]: 2025-11-22 09:36:43.174289122 +0000 UTC m=+0.039054333 container create 7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.187 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.188 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.205 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 22 04:36:43 np0005532048 systemd[1]: Started libpod-conmon-7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d.scope.
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.237 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 22 04:36:43 np0005532048 podman[371644]: 2025-11-22 09:36:43.158303415 +0000 UTC m=+0.023068646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:36:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:36:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f1fdfdb3aac850710484649a280b5b5b925f193ab40c7349e57abeece9e3c72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:36:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f1fdfdb3aac850710484649a280b5b5b925f193ab40c7349e57abeece9e3c72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:36:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f1fdfdb3aac850710484649a280b5b5b925f193ab40c7349e57abeece9e3c72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:36:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f1fdfdb3aac850710484649a280b5b5b925f193ab40c7349e57abeece9e3c72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:36:43 np0005532048 podman[371644]: 2025-11-22 09:36:43.286170004 +0000 UTC m=+0.150935225 container init 7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 04:36:43 np0005532048 podman[371644]: 2025-11-22 09:36:43.293481476 +0000 UTC m=+0.158246687 container start 7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 04:36:43 np0005532048 podman[371644]: 2025-11-22 09:36:43.298998823 +0000 UTC m=+0.163764054 container attach 7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.299 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:36:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:36:43 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1819653557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.745 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.753 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.768 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.897 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.898 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 200 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.916 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804188.915716, 2a866674-0c27-4cfc-89f2-dfe8e9768900 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.917 253665 INFO nova.compute.manager [-] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.936 253665 DEBUG nova.compute.manager [None req-5356d49e-5875-4915-bd03-aab039725fdd - - - - - -] [instance: 2a866674-0c27-4cfc-89f2-dfe8e9768900] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:43 np0005532048 nova_compute[253661]: 2025-11-22 09:36:43.953 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]: {
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:        "osd_id": 1,
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:        "type": "bluestore"
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:    },
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:        "osd_id": 0,
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:        "type": "bluestore"
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:    },
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:        "osd_id": 2,
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:        "type": "bluestore"
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]:    }
Nov 22 04:36:44 np0005532048 amazing_kowalevski[371661]: }
Nov 22 04:36:44 np0005532048 systemd[1]: libpod-7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d.scope: Deactivated successfully.
Nov 22 04:36:44 np0005532048 systemd[1]: libpod-7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d.scope: Consumed 1.097s CPU time.
Nov 22 04:36:44 np0005532048 conmon[371661]: conmon 7cb6456b74a9a62246bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d.scope/container/memory.events
Nov 22 04:36:44 np0005532048 podman[371644]: 2025-11-22 09:36:44.389094593 +0000 UTC m=+1.253859824 container died 7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 04:36:44 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1f1fdfdb3aac850710484649a280b5b5b925f193ab40c7349e57abeece9e3c72-merged.mount: Deactivated successfully.
Nov 22 04:36:44 np0005532048 podman[371644]: 2025-11-22 09:36:44.453399102 +0000 UTC m=+1.318164313 container remove 7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:36:44 np0005532048 systemd[1]: libpod-conmon-7cb6456b74a9a62246bd89b59ebf307117d2eeb23bd5094224211f27da05056d.scope: Deactivated successfully.
Nov 22 04:36:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:36:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:36:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:36:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:36:44 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev e46ed7d9-5dd7-4711-a634-5c3789efc3ca does not exist
Nov 22 04:36:44 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c64d6edc-a093-41dd-992c-232999fc859e does not exist
Nov 22 04:36:44 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:36:44 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:36:44 np0005532048 nova_compute[253661]: 2025-11-22 09:36:44.726 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:45 np0005532048 nova_compute[253661]: 2025-11-22 09:36:45.899 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:36:45 np0005532048 nova_compute[253661]: 2025-11-22 09:36:45.899 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:36:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 200 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 22 04:36:46 np0005532048 nova_compute[253661]: 2025-11-22 09:36:46.003 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:46 np0005532048 nova_compute[253661]: 2025-11-22 09:36:46.019 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:46 np0005532048 nova_compute[253661]: 2025-11-22 09:36:46.020 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:46 np0005532048 nova_compute[253661]: 2025-11-22 09:36:46.020 253665 INFO nova.compute.manager [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Shelving#033[00m
Nov 22 04:36:46 np0005532048 nova_compute[253661]: 2025-11-22 09:36:46.043 253665 DEBUG nova.virt.libvirt.driver [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:36:46 np0005532048 nova_compute[253661]: 2025-11-22 09:36:46.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:36:46 np0005532048 nova_compute[253661]: 2025-11-22 09:36:46.959 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:47 np0005532048 nova_compute[253661]: 2025-11-22 09:36:47.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:36:47 np0005532048 nova_compute[253661]: 2025-11-22 09:36:47.337 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:47 np0005532048 nova_compute[253661]: 2025-11-22 09:36:47.338 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:47 np0005532048 nova_compute[253661]: 2025-11-22 09:36:47.352 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:36:47 np0005532048 nova_compute[253661]: 2025-11-22 09:36:47.413 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:47 np0005532048 nova_compute[253661]: 2025-11-22 09:36:47.414 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:47 np0005532048 nova_compute[253661]: 2025-11-22 09:36:47.422 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:36:47 np0005532048 nova_compute[253661]: 2025-11-22 09:36:47.422 253665 INFO nova.compute.claims [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:36:47 np0005532048 nova_compute[253661]: 2025-11-22 09:36:47.585 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:47 np0005532048 nova_compute[253661]: 2025-11-22 09:36:47.632 253665 INFO nova.compute.manager [None req-56dc8adc-669b-4794-aa49-5617ff07e522 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Get console output#033[00m
Nov 22 04:36:47 np0005532048 nova_compute[253661]: 2025-11-22 09:36:47.640 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:36:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 200 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 04:36:47 np0005532048 nova_compute[253661]: 2025-11-22 09:36:47.992 253665 DEBUG nova.objects.instance [None req-30a0a7c3-42b8-4be8-91c9-fd345d2368f0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid 750659ed-67e0-44d4-a5b3-b8d0029ffa7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.027 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804208.0180218, 750659ed-67e0-44d4-a5b3-b8d0029ffa7e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.027 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.049 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.064 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.085 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 22 04:36:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:36:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/842025287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.115 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.124 253665 DEBUG nova.compute.provider_tree [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.137 253665 DEBUG nova.scheduler.client.report [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.175 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.176 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.268 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.269 253665 DEBUG nova.network.neutron [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.295 253665 INFO nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.323 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.440 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.442 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.443 253665 INFO nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Creating image(s)#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.477 253665 DEBUG nova.storage.rbd_utils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.504 253665 DEBUG nova.storage.rbd_utils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.531 253665 DEBUG nova.storage.rbd_utils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.535 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.629 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.631 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.632 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.632 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.656 253665 DEBUG nova.storage.rbd_utils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.660 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.769 253665 DEBUG nova.policy [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:36:48 np0005532048 kernel: tapc027d879-91 (unregistering): left promiscuous mode
Nov 22 04:36:48 np0005532048 NetworkManager[48920]: <info>  [1763804208.8051] device (tapc027d879-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:36:48 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:48Z|01154|binding|INFO|Releasing lport c027d879-91b3-497d-9f51-8476006ea65c from this chassis (sb_readonly=0)
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:48 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:48Z|01155|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c down in Southbound
Nov 22 04:36:48 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:48Z|01156|binding|INFO|Removing iface tapc027d879-91 ovn-installed in OVS
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.845 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:48.849 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:42:5a 10.100.0.3'], port_security=['fa:16:3e:d9:42:5a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a990966c-0851-457f-bdd5-27cf73032674', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31947dfacfc450ba028c42968f103b2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '89642540-7944-41ba-8ed6-91045af1b213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bafabe2a-ec0e-41bf-bad4-b88fdf9f208a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c027d879-91b3-497d-9f51-8476006ea65c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:36:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:48.850 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c027d879-91b3-497d-9f51-8476006ea65c in datapath a990966c-0851-457f-bdd5-27cf73032674 unbound from our chassis#033[00m
Nov 22 04:36:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:48.852 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a990966c-0851-457f-bdd5-27cf73032674, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:36:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:48.853 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0d47f326-0cad-4304-9626-5d75981a7dc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:48.854 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 namespace which is not needed anymore#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.860 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:48 np0005532048 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Nov 22 04:36:48 np0005532048 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006d.scope: Consumed 16.862s CPU time.
Nov 22 04:36:48 np0005532048 systemd-machined[215941]: Machine qemu-136-instance-0000006d terminated.
Nov 22 04:36:48 np0005532048 kernel: tapf4a3cf1b-5c (unregistering): left promiscuous mode
Nov 22 04:36:48 np0005532048 NetworkManager[48920]: <info>  [1763804208.9418] device (tapf4a3cf1b-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.946 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.955 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.958 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:48 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:48Z|01157|binding|INFO|Releasing lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b from this chassis (sb_readonly=0)
Nov 22 04:36:48 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:48Z|01158|binding|INFO|Setting lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b down in Southbound
Nov 22 04:36:48 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:48Z|01159|binding|INFO|Removing iface tapf4a3cf1b-5c ovn-installed in OVS
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.962 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:48.967 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:c2:c9 10.100.0.11'], port_security=['fa:16:3e:ac:c2:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '750659ed-67e0-44d4-a5b3-b8d0029ffa7e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37020e16-bbf7-4d46-a463-62f41acbbdab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bdf6e5f8-acae-4ca0-a205-a73594668944', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f34ee933-6c38-4761-bdaf-c769de521957, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:36:48 np0005532048 nova_compute[253661]: 2025-11-22 09:36:48.984 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:48 np0005532048 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000071.scope: Deactivated successfully.
Nov 22 04:36:48 np0005532048 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000071.scope: Consumed 15.415s CPU time.
Nov 22 04:36:48 np0005532048 systemd-machined[215941]: Machine qemu-141-instance-00000071 terminated.
Nov 22 04:36:49 np0005532048 NetworkManager[48920]: <info>  [1763804209.0691] manager: (tapc027d879-91): new Tun device (/org/freedesktop/NetworkManager/Devices/474)
Nov 22 04:36:49 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[367597]: [NOTICE]   (367601) : haproxy version is 2.8.14-c23fe91
Nov 22 04:36:49 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[367597]: [NOTICE]   (367601) : path to executable is /usr/sbin/haproxy
Nov 22 04:36:49 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[367597]: [WARNING]  (367601) : Exiting Master process...
Nov 22 04:36:49 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[367597]: [ALERT]    (367601) : Current worker (367603) exited with code 143 (Terminated)
Nov 22 04:36:49 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[367597]: [WARNING]  (367601) : All workers exited. Exiting... (0)
Nov 22 04:36:49 np0005532048 systemd[1]: libpod-cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a.scope: Deactivated successfully.
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.091 253665 INFO nova.virt.libvirt.driver [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance shutdown successfully after 3 seconds.#033[00m
Nov 22 04:36:49 np0005532048 podman[371921]: 2025-11-22 09:36:49.096930109 +0000 UTC m=+0.125862461 container died cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.100 253665 DEBUG nova.compute.manager [req-7e361b81-b22e-4717-98e3-9ecc8093bf84 req-be242198-1b10-47a7-a3d4-e2aabac95dd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.101 253665 DEBUG oslo_concurrency.lockutils [req-7e361b81-b22e-4717-98e3-9ecc8093bf84 req-be242198-1b10-47a7-a3d4-e2aabac95dd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.101 253665 DEBUG oslo_concurrency.lockutils [req-7e361b81-b22e-4717-98e3-9ecc8093bf84 req-be242198-1b10-47a7-a3d4-e2aabac95dd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.101 253665 DEBUG oslo_concurrency.lockutils [req-7e361b81-b22e-4717-98e3-9ecc8093bf84 req-be242198-1b10-47a7-a3d4-e2aabac95dd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.101 253665 DEBUG nova.compute.manager [req-7e361b81-b22e-4717-98e3-9ecc8093bf84 req-be242198-1b10-47a7-a3d4-e2aabac95dd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.102 253665 WARNING nova.compute.manager [req-7e361b81-b22e-4717-98e3-9ecc8093bf84 req-be242198-1b10-47a7-a3d4-e2aabac95dd6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state active and task_state shelving.#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.117 253665 INFO nova.virt.libvirt.driver [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance destroyed successfully.#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.118 253665 DEBUG nova.objects.instance [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'numa_topology' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:49 np0005532048 NetworkManager[48920]: <info>  [1763804209.1434] manager: (tapf4a3cf1b-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/475)
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.149 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.158 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.161 253665 DEBUG nova.compute.manager [None req-30a0a7c3-42b8-4be8-91c9-fd345d2368f0 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a-userdata-shm.mount: Deactivated successfully.
Nov 22 04:36:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2118a5a184395a8c9e4712c7d009d993e2f304960246506fc15d26ee155b6cdb-merged.mount: Deactivated successfully.
Nov 22 04:36:49 np0005532048 podman[371921]: 2025-11-22 09:36:49.241483974 +0000 UTC m=+0.270416326 container cleanup cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:36:49 np0005532048 systemd[1]: libpod-conmon-cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a.scope: Deactivated successfully.
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.253 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:49 np0005532048 podman[371979]: 2025-11-22 09:36:49.335724468 +0000 UTC m=+0.061738046 container remove cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.341 253665 DEBUG nova.storage.rbd_utils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.342 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[323b42f7-9fae-4314-a5f8-617c1dd3ea58]: (4, ('Sat Nov 22 09:36:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 (cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a)\ncad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a\nSat Nov 22 09:36:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 (cad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a)\ncad801cc06e01b0e77995b608ac2e2838df3283060711bdea5136b68b814914a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.343 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f13e4726-6301-41e0-af6d-4190340aeb48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.344 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa990966c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:49 np0005532048 kernel: tapa990966c-00: left promiscuous mode
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.370 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e7370711-bfd4-4af1-bf61-49ee5ecaf073]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.371 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.386 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f780253b-d0ec-4ed4-b3a2-8f07c82b7903]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.389 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c72746cb-3d4f-41d3-b703-7b6a3b991e3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.400 253665 INFO nova.virt.libvirt.driver [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Beginning cold snapshot process#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.407 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[113f773d-a7b7-4858-8414-da8198954c0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 698032, 'reachable_time': 15836, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372054, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:49 np0005532048 systemd[1]: run-netns-ovnmeta\x2da990966c\x2d0851\x2d457f\x2dbdd5\x2d27cf73032674.mount: Deactivated successfully.
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.410 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.410 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[c6fc8142-8eaa-40d6-aff1-3fa45548359a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.412 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b in datapath 37020e16-bbf7-4d46-a463-62f41acbbdab unbound from our chassis#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.413 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 37020e16-bbf7-4d46-a463-62f41acbbdab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.415 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[07ed794e-be31-4a19-9c48-908792e92686]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.415 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab namespace which is not needed anymore#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.471 253665 DEBUG nova.objects.instance [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.484 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.484 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Ensure instance console log exists: /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.485 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.485 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.485 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:49 np0005532048 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[370381]: [NOTICE]   (370388) : haproxy version is 2.8.14-c23fe91
Nov 22 04:36:49 np0005532048 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[370381]: [NOTICE]   (370388) : path to executable is /usr/sbin/haproxy
Nov 22 04:36:49 np0005532048 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[370381]: [WARNING]  (370388) : Exiting Master process...
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.559 253665 DEBUG nova.virt.libvirt.imagebackend [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 22 04:36:49 np0005532048 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[370381]: [ALERT]    (370388) : Current worker (370390) exited with code 143 (Terminated)
Nov 22 04:36:49 np0005532048 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[370381]: [WARNING]  (370388) : All workers exited. Exiting... (0)
Nov 22 04:36:49 np0005532048 systemd[1]: libpod-743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7.scope: Deactivated successfully.
Nov 22 04:36:49 np0005532048 podman[372090]: 2025-11-22 09:36:49.570293601 +0000 UTC m=+0.053369678 container died 743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:36:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7-userdata-shm.mount: Deactivated successfully.
Nov 22 04:36:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay-59bd18a679f16bc779df4f96decf1dfaf90586c9dd2d247ab24ea9be9b34ce71-merged.mount: Deactivated successfully.
Nov 22 04:36:49 np0005532048 podman[372090]: 2025-11-22 09:36:49.656587097 +0000 UTC m=+0.139663164 container cleanup 743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:36:49 np0005532048 systemd[1]: libpod-conmon-743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7.scope: Deactivated successfully.
Nov 22 04:36:49 np0005532048 podman[372154]: 2025-11-22 09:36:49.719680086 +0000 UTC m=+0.040249762 container remove 743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.725 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c1bed86c-d8fa-4794-b7a3-3c6479e95c39]: (4, ('Sat Nov 22 09:36:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab (743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7)\n743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7\nSat Nov 22 09:36:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab (743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7)\n743de5211d5ce6ef12e323c8d39ffa9d637a858e9e98e04909b3874e53a62ea7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.727 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73132327-e4db-4912-b8b2-c4af99a47f19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.728 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37020e16-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.730 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:49 np0005532048 kernel: tap37020e16-b0: left promiscuous mode
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.753 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.756 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ec9256-e58d-40e8-922b-33131a937ce4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.773 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f67af7fe-c1d7-4d30-9238-9ef31369f776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.775 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e16c0b6f-93bb-496e-885d-a4c45709ecdc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.795 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f065cf78-127d-4828-ab07-a6fadb646a1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703465, 'reachable_time': 19808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372175, 'error': None, 'target': 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.797 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:36:49 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:49.798 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[6f44f8ee-41ff-4be8-8459-d3c64d4d2d9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:49 np0005532048 nova_compute[253661]: 2025-11-22 09:36:49.821 253665 DEBUG nova.storage.rbd_utils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] creating snapshot(35d14d76d37749a08ddb60dfc3439544) on rbd image(cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:36:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 200 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 04:36:50 np0005532048 systemd[1]: run-netns-ovnmeta\x2d37020e16\x2dbbf7\x2d4d46\x2da463\x2d62f41acbbdab.mount: Deactivated successfully.
Nov 22 04:36:50 np0005532048 podman[372194]: 2025-11-22 09:36:50.326185529 +0000 UTC m=+0.063137571 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:36:50 np0005532048 podman[372195]: 2025-11-22 09:36:50.330648759 +0000 UTC m=+0.071638772 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 04:36:50 np0005532048 nova_compute[253661]: 2025-11-22 09:36:50.708 253665 DEBUG nova.network.neutron [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Successfully created port: 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:36:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Nov 22 04:36:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Nov 22 04:36:50 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.006 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.194 253665 DEBUG nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.195 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.195 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.196 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.196 253665 DEBUG nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.196 253665 WARNING nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.197 253665 DEBUG nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-unplugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.197 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.198 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.198 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.198 253665 DEBUG nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] No waiting events found dispatching network-vif-unplugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.199 253665 WARNING nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received unexpected event network-vif-unplugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for instance with vm_state suspended and task_state None.#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.199 253665 DEBUG nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.199 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.199 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.200 253665 DEBUG oslo_concurrency.lockutils [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.200 253665 DEBUG nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] No waiting events found dispatching network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.200 253665 WARNING nova.compute.manager [req-f235f5dc-ad92-4abc-9d10-ddc17055483e req-fc65c1d1-877b-4819-89ee-9aa1d93f3240 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received unexpected event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for instance with vm_state suspended and task_state None.#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.242 253665 DEBUG nova.storage.rbd_utils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] cloning vms/cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk@35d14d76d37749a08ddb60dfc3439544 to images/fd10acf7-7116-43c7-8e62-b2aed4e8d629 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.386 253665 DEBUG nova.storage.rbd_utils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] flattening images/fd10acf7-7116-43c7-8e62-b2aed4e8d629 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.681 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.853 253665 DEBUG nova.storage.rbd_utils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] removing snapshot(35d14d76d37749a08ddb60dfc3439544) on rbd image(cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.871 253665 DEBUG nova.network.neutron [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Successfully updated port: 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.887 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.888 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.888 253665 DEBUG nova.network.neutron [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:36:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 219 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 30 op/s
Nov 22 04:36:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Nov 22 04:36:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Nov 22 04:36:51 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Nov 22 04:36:51 np0005532048 nova_compute[253661]: 2025-11-22 09:36:51.981 253665 DEBUG nova.storage.rbd_utils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] creating snapshot(snap) on rbd image(fd10acf7-7116-43c7-8e62-b2aed4e8d629) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.054 253665 DEBUG nova.network.neutron [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.062 253665 INFO nova.compute.manager [None req-18f7055b-300e-4c42-8a39-6367c1536af7 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Get console output#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.233 253665 INFO nova.compute.manager [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Resuming#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.235 253665 DEBUG nova.objects.instance [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'flavor' on Instance uuid 750659ed-67e0-44d4-a5b3-b8d0029ffa7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:36:52
Nov 22 04:36:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:36:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:36:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'backups', 'volumes', 'cephfs.cephfs.meta', '.mgr']
Nov 22 04:36:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.266 253665 DEBUG oslo_concurrency.lockutils [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.267 253665 DEBUG oslo_concurrency.lockutils [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquired lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.267 253665 DEBUG nova.network.neutron [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:36:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:36:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:36:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:36:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:36:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:36:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.924 253665 DEBUG nova.network.neutron [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Updating instance_info_cache with network_info: [{"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Nov 22 04:36:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Nov 22 04:36:52 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.954 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.955 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Instance network_info: |[{"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.959 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Start _get_guest_xml network_info=[{"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.965 253665 WARNING nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.972 253665 DEBUG nova.virt.libvirt.host [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.973 253665 DEBUG nova.virt.libvirt.host [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.978 253665 DEBUG nova.virt.libvirt.host [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.978 253665 DEBUG nova.virt.libvirt.host [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.979 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.979 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.979 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.980 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.980 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.980 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.980 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.980 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.980 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.981 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.981 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.981 253665 DEBUG nova.virt.hardware [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:36:52 np0005532048 nova_compute[253661]: 2025-11-22 09:36:52.983 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:36:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1479172189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.492 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.520 253665 DEBUG nova.storage.rbd_utils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.524 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.810 253665 DEBUG nova.network.neutron [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Updating instance_info_cache with network_info: [{"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.828 253665 DEBUG oslo_concurrency.lockutils [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Releasing lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.834 253665 DEBUG nova.virt.libvirt.vif [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:36:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-635689639',display_name='tempest-TestNetworkAdvancedServerOps-server-635689639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-635689639',id=113,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGat/0/6ionKBrSzyBS7EbGqwOoirIfAackkh+AYjCZXxoZzDjZWyHoUi84+Rs5w5CQ8NN8aOtxfB73LToni6HeOyO4Tgvy+GHztLu+Mg7hY5eYsKNagHEATOhR/nV+7Ew==',key_name='tempest-TestNetworkAdvancedServerOps-353719525',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:36:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-920qa6ny',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:36:49Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=750659ed-67e0-44d4-a5b3-b8d0029ffa7e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.835 253665 DEBUG nova.network.os_vif_util [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.835 253665 DEBUG nova.network.os_vif_util [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.836 253665 DEBUG os_vif [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.836 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.837 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.837 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.840 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.841 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4a3cf1b-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.841 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf4a3cf1b-5c, col_values=(('external_ids', {'iface-id': 'f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:c2:c9', 'vm-uuid': '750659ed-67e0-44d4-a5b3-b8d0029ffa7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.841 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.842 253665 INFO os_vif [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c')#033[00m
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.862 253665 DEBUG nova.objects.instance [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'numa_topology' on Instance uuid 750659ed-67e0-44d4-a5b3-b8d0029ffa7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 292 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 8.0 MiB/s wr, 195 op/s
Nov 22 04:36:53 np0005532048 kernel: tapf4a3cf1b-5c: entered promiscuous mode
Nov 22 04:36:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:53Z|01160|binding|INFO|Claiming lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for this chassis.
Nov 22 04:36:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:53Z|01161|binding|INFO|f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b: Claiming fa:16:3e:ac:c2:c9 10.100.0.11
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.939 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:53 np0005532048 NetworkManager[48920]: <info>  [1763804213.9408] manager: (tapf4a3cf1b-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/476)
Nov 22 04:36:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.947 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:c2:c9 10.100.0.11'], port_security=['fa:16:3e:ac:c2:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '750659ed-67e0-44d4-a5b3-b8d0029ffa7e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37020e16-bbf7-4d46-a463-62f41acbbdab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'bdf6e5f8-acae-4ca0-a205-a73594668944', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f34ee933-6c38-4761-bdaf-c769de521957, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:36:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.948 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b in datapath 37020e16-bbf7-4d46-a463-62f41acbbdab bound to our chassis#033[00m
Nov 22 04:36:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.950 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 37020e16-bbf7-4d46-a463-62f41acbbdab#033[00m
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.958 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:53Z|01162|binding|INFO|Setting lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b ovn-installed in OVS
Nov 22 04:36:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:53Z|01163|binding|INFO|Setting lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b up in Southbound
Nov 22 04:36:53 np0005532048 nova_compute[253661]: 2025-11-22 09:36:53.965 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.966 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a5712c83-ee75-474b-b1d0-75dc93cd0893]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.968 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap37020e16-b1 in ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:36:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.971 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap37020e16-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:36:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.971 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf66f9a-7dba-4f35-a1bb-89f35f9e598e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.972 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dd4b984b-39a8-469e-acc8-e259b322349d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:53 np0005532048 systemd-machined[215941]: New machine qemu-142-instance-00000071.
Nov 22 04:36:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:53.986 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[ff37fc9f-20b9-4ec2-b825-36c51ca7f959]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:36:53 np0005532048 systemd[1]: Started Virtual Machine qemu-142-instance-00000071.
Nov 22 04:36:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/841748321' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.006 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b6710001-1a42-4e07-ad02-fb837135808a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:54 np0005532048 systemd-udevd[372406]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.023 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.025 253665 DEBUG nova.virt.libvirt.vif [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:36:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-628839971',display_name='tempest-TestNetworkBasicOps-server-628839971',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-628839971',id=114,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIZ5gGdNvaqAtX8j4rLIehpVsycYZstZu428EjSgRsIaTKO3qobX2DWEa45t7eW4vzvXR6ESLf4/AnMv9en3fY5WkAniEGuSXx7koBFV1HR0ktIagOKt25I/jbmVsb/jUA==',key_name='tempest-TestNetworkBasicOps-971917795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-n0lt6esc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:36:48Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.025 253665 DEBUG nova.network.os_vif_util [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.026 253665 DEBUG nova.network.os_vif_util [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:83:85,bridge_name='br-int',has_traffic_filtering=True,id=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22b006cb-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:36:54 np0005532048 NetworkManager[48920]: <info>  [1763804214.0281] device (tapf4a3cf1b-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:36:54 np0005532048 NetworkManager[48920]: <info>  [1763804214.0289] device (tapf4a3cf1b-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.029 253665 DEBUG nova.objects.instance [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.036 253665 DEBUG nova.compute.manager [req-5e5de446-0df8-4792-894f-8ce392252a9c req-7a299304-ba02-4b19-94a0-99bb023d918c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-changed-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.036 253665 DEBUG nova.compute.manager [req-5e5de446-0df8-4792-894f-8ce392252a9c req-7a299304-ba02-4b19-94a0-99bb023d918c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Refreshing instance network info cache due to event network-changed-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.037 253665 DEBUG oslo_concurrency.lockutils [req-5e5de446-0df8-4792-894f-8ce392252a9c req-7a299304-ba02-4b19-94a0-99bb023d918c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.037 253665 DEBUG oslo_concurrency.lockutils [req-5e5de446-0df8-4792-894f-8ce392252a9c req-7a299304-ba02-4b19-94a0-99bb023d918c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.037 253665 DEBUG nova.network.neutron [req-5e5de446-0df8-4792-894f-8ce392252a9c req-7a299304-ba02-4b19-94a0-99bb023d918c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Refreshing network info cache for port 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.042 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6746c887-aafd-427e-abb5-d44c00f351e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:54 np0005532048 systemd-udevd[372410]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:36:54 np0005532048 NetworkManager[48920]: <info>  [1763804214.0488] manager: (tap37020e16-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/477)
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.049 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[044b8fab-0684-4e82-b4fa-4e3778ab1fcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.058 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  <uuid>1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9</uuid>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  <name>instance-00000072</name>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestNetworkBasicOps-server-628839971</nova:name>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:36:52</nova:creationTime>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:36:54 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:        <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:        <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:        <nova:port uuid="22b006cb-c06d-4ebb-9f02-ccbbdfc34f26">
Nov 22 04:36:54 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <entry name="serial">1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9</entry>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <entry name="uuid">1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9</entry>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk">
Nov 22 04:36:54 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:36:54 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk.config">
Nov 22 04:36:54 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:36:54 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:c9:83:85"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <target dev="tap22b006cb-c0"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9/console.log" append="off"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:36:54 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:36:54 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:36:54 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:36:54 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.060 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Preparing to wait for external event network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.061 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.061 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.061 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.062 253665 DEBUG nova.virt.libvirt.vif [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:36:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-628839971',display_name='tempest-TestNetworkBasicOps-server-628839971',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-628839971',id=114,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIZ5gGdNvaqAtX8j4rLIehpVsycYZstZu428EjSgRsIaTKO3qobX2DWEa45t7eW4vzvXR6ESLf4/AnMv9en3fY5WkAniEGuSXx7koBFV1HR0ktIagOKt25I/jbmVsb/jUA==',key_name='tempest-TestNetworkBasicOps-971917795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-n0lt6esc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:36:48Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.062 253665 DEBUG nova.network.os_vif_util [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.063 253665 DEBUG nova.network.os_vif_util [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:83:85,bridge_name='br-int',has_traffic_filtering=True,id=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22b006cb-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.063 253665 DEBUG os_vif [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:83:85,bridge_name='br-int',has_traffic_filtering=True,id=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22b006cb-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.067 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.067 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.068 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.072 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.073 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22b006cb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.073 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap22b006cb-c0, col_values=(('external_ids', {'iface-id': '22b006cb-c06d-4ebb-9f02-ccbbdfc34f26', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:83:85', 'vm-uuid': '1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:54 np0005532048 NetworkManager[48920]: <info>  [1763804214.0776] manager: (tap22b006cb-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/478)
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.077 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.079 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.084 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.085 253665 INFO os_vif [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:83:85,bridge_name='br-int',has_traffic_filtering=True,id=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22b006cb-c0')#033[00m
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.085 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2f28cbed-a599-4c7d-84aa-e931b1aa6ffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.089 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ac2c0935-9118-481f-985f-dba8a7277788]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:54 np0005532048 NetworkManager[48920]: <info>  [1763804214.1221] device (tap37020e16-b0): carrier: link connected
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.130 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ddb6a6-727e-45b9-a701-fc66d5374109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.135 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.136 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.136 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:c9:83:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.137 253665 INFO nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Using config drive#033[00m
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.159 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[547c0d9a-4110-4410-a8cc-d2b4bb8ab5fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37020e16-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:87:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706486, 'reachable_time': 43632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372443, 'error': None, 'target': 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.166 253665 DEBUG nova.storage.rbd_utils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.179 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[803bcdaa-3eb6-491e-9d1c-777e99deee7d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:87fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706486, 'tstamp': 706486}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372462, 'error': None, 'target': 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.208 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be2921d1-32ee-49c4-a175-c4b2d8b644a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37020e16-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:87:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706486, 'reachable_time': 43632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 372463, 'error': None, 'target': 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.246 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[995043fc-8e6a-4df1-9e8d-cb2579cc2c3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.328 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6b563278-c386-498b-87b0-7052dbb67f02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.330 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37020e16-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.330 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.331 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37020e16-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.333 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:54 np0005532048 NetworkManager[48920]: <info>  [1763804214.3344] manager: (tap37020e16-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/479)
Nov 22 04:36:54 np0005532048 kernel: tap37020e16-b0: entered promiscuous mode
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.341 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.343 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap37020e16-b0, col_values=(('external_ids', {'iface-id': 'fc048c06-919a-46ba-ac90-0356d56c12a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.344 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:54 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:54Z|01164|binding|INFO|Releasing lport fc048c06-919a-46ba-ac90-0356d56c12a5 from this chassis (sb_readonly=0)
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.363 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.370 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.371 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/37020e16-bbf7-4d46-a463-62f41acbbdab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/37020e16-bbf7-4d46-a463-62f41acbbdab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.372 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fade5d8d-9796-43e9-b1dc-7994adb50b61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.373 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-37020e16-bbf7-4d46-a463-62f41acbbdab
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/37020e16-bbf7-4d46-a463-62f41acbbdab.pid.haproxy
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 37020e16-bbf7-4d46-a463-62f41acbbdab
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:36:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:54.374 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'env', 'PROCESS_TAG=haproxy-37020e16-bbf7-4d46-a463-62f41acbbdab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/37020e16-bbf7-4d46-a463-62f41acbbdab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.479 253665 INFO nova.virt.libvirt.driver [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Snapshot image upload complete#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.480 253665 DEBUG nova.compute.manager [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.521 253665 INFO nova.compute.manager [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Shelve offloading#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.532 253665 INFO nova.virt.libvirt.driver [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance destroyed successfully.#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.532 253665 DEBUG nova.compute.manager [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.534 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.535 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.535 253665 DEBUG nova.network.neutron [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.627 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 750659ed-67e0-44d4-a5b3-b8d0029ffa7e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.628 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804214.6266155, 750659ed-67e0-44d4-a5b3-b8d0029ffa7e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.628 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] VM Started (Lifecycle Event)#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.647 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.665 253665 DEBUG nova.compute.manager [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.666 253665 DEBUG nova.objects.instance [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'pci_devices' on Instance uuid 750659ed-67e0-44d4-a5b3-b8d0029ffa7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.670 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.680 253665 INFO nova.virt.libvirt.driver [-] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Instance running successfully.#033[00m
Nov 22 04:36:54 np0005532048 virtqemud[254229]: argument unsupported: QEMU guest agent is not configured
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.683 253665 DEBUG nova.virt.libvirt.guest [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.684 253665 DEBUG nova.compute.manager [None req-3d0d13af-d8c9-41b8-9341-ad6adf2e508f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.687 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.687 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804214.6316602, 750659ed-67e0-44d4-a5b3-b8d0029ffa7e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.687 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.706 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.709 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:36:54 np0005532048 nova_compute[253661]: 2025-11-22 09:36:54.737 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 22 04:36:54 np0005532048 podman[372549]: 2025-11-22 09:36:54.827702855 +0000 UTC m=+0.066697690 container create 7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:36:54 np0005532048 systemd[1]: Started libpod-conmon-7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b.scope.
Nov 22 04:36:54 np0005532048 podman[372549]: 2025-11-22 09:36:54.799294899 +0000 UTC m=+0.038289754 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:36:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:36:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bba967c037ed38e7577ecc1bb77c57e590063a454084161df85b4b7d476e334b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:36:54 np0005532048 podman[372549]: 2025-11-22 09:36:54.925802005 +0000 UTC m=+0.164796870 container init 7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:36:54 np0005532048 podman[372549]: 2025-11-22 09:36:54.931901496 +0000 UTC m=+0.170896331 container start 7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:36:54 np0005532048 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[372564]: [NOTICE]   (372568) : New worker (372570) forked
Nov 22 04:36:54 np0005532048 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[372564]: [NOTICE]   (372568) : Loading success.
Nov 22 04:36:55 np0005532048 nova_compute[253661]: 2025-11-22 09:36:55.673 253665 INFO nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Creating config drive at /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9/disk.config#033[00m
Nov 22 04:36:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:36:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:36:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:36:55 np0005532048 nova_compute[253661]: 2025-11-22 09:36:55.679 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps46b75wm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:36:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:36:55 np0005532048 nova_compute[253661]: 2025-11-22 09:36:55.837 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps46b75wm" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:55 np0005532048 nova_compute[253661]: 2025-11-22 09:36:55.866 253665 DEBUG nova.storage.rbd_utils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:55 np0005532048 nova_compute[253661]: 2025-11-22 09:36:55.870 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9/disk.config 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 292 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 8.0 MiB/s wr, 193 op/s
Nov 22 04:36:55 np0005532048 nova_compute[253661]: 2025-11-22 09:36:55.940 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:55 np0005532048 nova_compute[253661]: 2025-11-22 09:36:55.940 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:55 np0005532048 nova_compute[253661]: 2025-11-22 09:36:55.952 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.049 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.070 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.070 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.078 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.078 253665 INFO nova.compute.claims [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.084 253665 DEBUG oslo_concurrency.processutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9/disk.config 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.085 253665 INFO nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Deleting local config drive /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9/disk.config because it was imported into RBD.#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.103 253665 DEBUG nova.network.neutron [req-5e5de446-0df8-4792-894f-8ce392252a9c req-7a299304-ba02-4b19-94a0-99bb023d918c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Updated VIF entry in instance network info cache for port 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.103 253665 DEBUG nova.network.neutron [req-5e5de446-0df8-4792-894f-8ce392252a9c req-7a299304-ba02-4b19-94a0-99bb023d918c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Updating instance_info_cache with network_info: [{"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.127 253665 DEBUG oslo_concurrency.lockutils [req-5e5de446-0df8-4792-894f-8ce392252a9c req-7a299304-ba02-4b19-94a0-99bb023d918c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:36:56 np0005532048 kernel: tap22b006cb-c0: entered promiscuous mode
Nov 22 04:36:56 np0005532048 NetworkManager[48920]: <info>  [1763804216.1478] manager: (tap22b006cb-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/480)
Nov 22 04:36:56 np0005532048 systemd-udevd[372429]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:36:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:56Z|01165|binding|INFO|Claiming lport 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 for this chassis.
Nov 22 04:36:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:56Z|01166|binding|INFO|22b006cb-c06d-4ebb-9f02-ccbbdfc34f26: Claiming fa:16:3e:c9:83:85 10.100.0.8
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.155 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.165 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:83:85 10.100.0.8'], port_security=['fa:16:3e:c9:83:85 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-669fa85d-7478-40e5-958b-7300ef3552b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '714d001a-9857-4892-9e43-4add0015169f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f6a9cc6-46e5-4035-8aed-8dfaed3a2f4d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.167 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 in datapath 669fa85d-7478-40e5-958b-7300ef3552b5 bound to our chassis#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.169 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 669fa85d-7478-40e5-958b-7300ef3552b5#033[00m
Nov 22 04:36:56 np0005532048 NetworkManager[48920]: <info>  [1763804216.1729] device (tap22b006cb-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:36:56 np0005532048 NetworkManager[48920]: <info>  [1763804216.1737] device (tap22b006cb-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:36:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:56Z|01167|binding|INFO|Setting lport 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 ovn-installed in OVS
Nov 22 04:36:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:56Z|01168|binding|INFO|Setting lport 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 up in Southbound
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.180 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.186 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea876067-6195-4a8d-8a85-1339488746c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.186 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.187 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap669fa85d-71 in ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.190 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap669fa85d-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.190 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[69350cbd-a8f9-4525-8aa7-0aa894488bfc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.191 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6a00b899-2a57-454f-a8db-9ffc67dd66c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:56 np0005532048 systemd-machined[215941]: New machine qemu-143-instance-00000072.
Nov 22 04:36:56 np0005532048 systemd[1]: Started Virtual Machine qemu-143-instance-00000072.
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.203 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[9e75ba39-2988-4f62-b353-8fcc47711678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.221 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5faa5112-ed65-498d-908a-76a44a5719d7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.240 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.260 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dd787000-c16e-4fc3-9507-0b43a50c865c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:56 np0005532048 NetworkManager[48920]: <info>  [1763804216.2685] manager: (tap669fa85d-70): new Veth device (/org/freedesktop/NetworkManager/Devices/481)
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.267 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ff38814e-ec28-4407-a4e6-e18dcae5f47b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:56 np0005532048 podman[372620]: 2025-11-22 09:36:56.284944285 +0000 UTC m=+0.159996010 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.300 253665 DEBUG nova.compute.manager [req-20203ac5-d8d0-4102-85c2-d677bd41976e req-a14a8b22-f27d-4437-9f37-56976d14fdca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.300 253665 DEBUG oslo_concurrency.lockutils [req-20203ac5-d8d0-4102-85c2-d677bd41976e req-a14a8b22-f27d-4437-9f37-56976d14fdca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.301 253665 DEBUG oslo_concurrency.lockutils [req-20203ac5-d8d0-4102-85c2-d677bd41976e req-a14a8b22-f27d-4437-9f37-56976d14fdca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.301 253665 DEBUG oslo_concurrency.lockutils [req-20203ac5-d8d0-4102-85c2-d677bd41976e req-a14a8b22-f27d-4437-9f37-56976d14fdca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.302 253665 DEBUG nova.compute.manager [req-20203ac5-d8d0-4102-85c2-d677bd41976e req-a14a8b22-f27d-4437-9f37-56976d14fdca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] No waiting events found dispatching network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.302 253665 WARNING nova.compute.manager [req-20203ac5-d8d0-4102-85c2-d677bd41976e req-a14a8b22-f27d-4437-9f37-56976d14fdca 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received unexpected event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for instance with vm_state active and task_state None.#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.304 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1f4eaf-c62a-4114-b28f-87a33c852704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.308 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[66513390-df3c-46cb-9765-a234af838515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:56 np0005532048 NetworkManager[48920]: <info>  [1763804216.3408] device (tap669fa85d-70): carrier: link connected
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.351 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[470d50f3-9c19-4009-90ab-dc665fec7f4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.370 253665 DEBUG nova.compute.manager [req-b2817310-ab9d-4760-81e0-6f032915c76c req-e11c1cd2-8e4c-4053-b9d4-055ef8ce1680 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.370 253665 DEBUG oslo_concurrency.lockutils [req-b2817310-ab9d-4760-81e0-6f032915c76c req-e11c1cd2-8e4c-4053-b9d4-055ef8ce1680 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.371 253665 DEBUG oslo_concurrency.lockutils [req-b2817310-ab9d-4760-81e0-6f032915c76c req-e11c1cd2-8e4c-4053-b9d4-055ef8ce1680 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.372 253665 DEBUG oslo_concurrency.lockutils [req-b2817310-ab9d-4760-81e0-6f032915c76c req-e11c1cd2-8e4c-4053-b9d4-055ef8ce1680 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.372 253665 DEBUG nova.compute.manager [req-b2817310-ab9d-4760-81e0-6f032915c76c req-e11c1cd2-8e4c-4053-b9d4-055ef8ce1680 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Processing event network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.373 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6238c5a1-09b8-4f48-92ac-4c7f1a54cbfe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap669fa85d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:cb:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706707, 'reachable_time': 18777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372680, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.395 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[447daf92-dff5-4784-9098-83b271f63087]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedf:cbce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706707, 'tstamp': 706707}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372691, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.419 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8963aec1-9759-4629-89b0-0233dd43abf8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap669fa85d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:cb:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706707, 'reachable_time': 18777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 372701, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.459 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7523d0-51e8-4e1e-966b-e1fea58a3a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.542 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4581a881-1efb-4e1c-9eb4-4fca3fe01a6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.544 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap669fa85d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.544 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.545 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap669fa85d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:56 np0005532048 NetworkManager[48920]: <info>  [1763804216.5481] manager: (tap669fa85d-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/482)
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.547 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:56 np0005532048 kernel: tap669fa85d-70: entered promiscuous mode
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.556 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap669fa85d-70, col_values=(('external_ids', {'iface-id': 'b0af7c96-3c08-40c2-b3ca-1e251090d01d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:56Z|01169|binding|INFO|Releasing lport b0af7c96-3c08-40c2-b3ca-1e251090d01d from this chassis (sb_readonly=0)
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.558 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.581 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.592 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/669fa85d-7478-40e5-958b-7300ef3552b5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/669fa85d-7478-40e5-958b-7300ef3552b5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.591 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.593 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bd27346a-aeba-485c-9ce0-c1d3ac52e950]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.594 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-669fa85d-7478-40e5-958b-7300ef3552b5
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/669fa85d-7478-40e5-958b-7300ef3552b5.pid.haproxy
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 669fa85d-7478-40e5-958b-7300ef3552b5
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:36:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:56.594 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'env', 'PROCESS_TAG=haproxy-669fa85d-7478-40e5-958b-7300ef3552b5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/669fa85d-7478-40e5-958b-7300ef3552b5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:36:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:36:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:36:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:36:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:36:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:36:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:36:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1462887379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.761 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.770 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804216.7696567, 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.770 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] VM Started (Lifecycle Event)#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.773 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.778 253665 DEBUG nova.compute.provider_tree [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.782 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.786 253665 INFO nova.virt.libvirt.driver [-] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Instance spawned successfully.#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.786 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.804 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.806 253665 DEBUG nova.scheduler.client.report [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.816 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.820 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.821 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.821 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.821 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.822 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.822 253665 DEBUG nova.virt.libvirt.driver [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.851 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.852 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.855 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.856 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804216.770867, 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.856 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.886 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.891 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804216.7811797, 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.891 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.909 253665 INFO nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Took 8.47 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.910 253665 DEBUG nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.911 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.915 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.916 253665 DEBUG nova.network.neutron [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.924 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.967 253665 INFO nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.970 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:36:56 np0005532048 nova_compute[253661]: 2025-11-22 09:36:56.993 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.026 253665 INFO nova.compute.manager [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Took 9.63 seconds to build instance.#033[00m
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.049 253665 DEBUG oslo_concurrency.lockutils [None req-d148aeeb-3298-4a6f-8eba-8eab531781c8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:57 np0005532048 podman[372781]: 2025-11-22 09:36:56.979273442 +0000 UTC m=+0.030090920 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.082 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.084 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.084 253665 INFO nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Creating image(s)#033[00m
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.108 253665 DEBUG nova.storage.rbd_utils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2837c740-6ce1-47d5-ad27-107211f74db7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.331 253665 DEBUG nova.storage.rbd_utils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2837c740-6ce1-47d5-ad27-107211f74db7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.363 253665 DEBUG nova.storage.rbd_utils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2837c740-6ce1-47d5-ad27-107211f74db7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.368 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.418 253665 DEBUG nova.policy [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.422 253665 DEBUG nova.network.neutron [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.439 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.465 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.466 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.467 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.468 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.497 253665 DEBUG nova.storage.rbd_utils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2837c740-6ce1-47d5-ad27-107211f74db7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:36:57 np0005532048 nova_compute[253661]: 2025-11-22 09:36:57.503 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2837c740-6ce1-47d5-ad27-107211f74db7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:36:57 np0005532048 podman[372781]: 2025-11-22 09:36:57.539540825 +0000 UTC m=+0.590358223 container create 16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:36:57 np0005532048 systemd[1]: Started libpod-conmon-16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239.scope.
Nov 22 04:36:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 325 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 9.6 MiB/s wr, 198 op/s
Nov 22 04:36:57 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:36:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb6d1ba9fa3eb3ce0aa01e1a76d7c9a2d999425c99d5548d6568348e14d15459/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:36:57 np0005532048 podman[372781]: 2025-11-22 09:36:57.97444656 +0000 UTC m=+1.025263978 container init 16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:36:57 np0005532048 podman[372781]: 2025-11-22 09:36:57.981924827 +0000 UTC m=+1.032742225 container start 16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 22 04:36:58 np0005532048 neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5[372884]: [NOTICE]   (372888) : New worker (372893) forked
Nov 22 04:36:58 np0005532048 neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5[372884]: [NOTICE]   (372888) : Loading success.
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.153 253665 INFO nova.compute.manager [None req-7a9e7e64-a80c-419c-9955-05b15582d79f ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Get console output#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.160 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.308 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 2837c740-6ce1-47d5-ad27-107211f74db7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.805s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.386 253665 DEBUG nova.storage.rbd_utils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 2837c740-6ce1-47d5-ad27-107211f74db7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.425 253665 DEBUG nova.compute.manager [req-312be558-9532-421b-9c50-f354f5556497 req-00311cc4-93b6-4ae3-be1b-d8be327a20ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.426 253665 DEBUG oslo_concurrency.lockutils [req-312be558-9532-421b-9c50-f354f5556497 req-00311cc4-93b6-4ae3-be1b-d8be327a20ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.426 253665 DEBUG oslo_concurrency.lockutils [req-312be558-9532-421b-9c50-f354f5556497 req-00311cc4-93b6-4ae3-be1b-d8be327a20ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.426 253665 DEBUG oslo_concurrency.lockutils [req-312be558-9532-421b-9c50-f354f5556497 req-00311cc4-93b6-4ae3-be1b-d8be327a20ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.426 253665 DEBUG nova.compute.manager [req-312be558-9532-421b-9c50-f354f5556497 req-00311cc4-93b6-4ae3-be1b-d8be327a20ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] No waiting events found dispatching network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.427 253665 WARNING nova.compute.manager [req-312be558-9532-421b-9c50-f354f5556497 req-00311cc4-93b6-4ae3-be1b-d8be327a20ae 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received unexpected event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for instance with vm_state active and task_state None.#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.518 253665 DEBUG nova.compute.manager [req-0023a5e5-59f6-4f41-8ea3-8e4d8911e9cf req-9abc0252-af97-4f51-a6ac-4a0c9e22bb3e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.518 253665 DEBUG oslo_concurrency.lockutils [req-0023a5e5-59f6-4f41-8ea3-8e4d8911e9cf req-9abc0252-af97-4f51-a6ac-4a0c9e22bb3e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.519 253665 DEBUG oslo_concurrency.lockutils [req-0023a5e5-59f6-4f41-8ea3-8e4d8911e9cf req-9abc0252-af97-4f51-a6ac-4a0c9e22bb3e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.519 253665 DEBUG oslo_concurrency.lockutils [req-0023a5e5-59f6-4f41-8ea3-8e4d8911e9cf req-9abc0252-af97-4f51-a6ac-4a0c9e22bb3e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.519 253665 DEBUG nova.compute.manager [req-0023a5e5-59f6-4f41-8ea3-8e4d8911e9cf req-9abc0252-af97-4f51-a6ac-4a0c9e22bb3e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] No waiting events found dispatching network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.520 253665 WARNING nova.compute.manager [req-0023a5e5-59f6-4f41-8ea3-8e4d8911e9cf req-9abc0252-af97-4f51-a6ac-4a0c9e22bb3e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received unexpected event network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.525 253665 DEBUG nova.objects.instance [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 2837c740-6ce1-47d5-ad27-107211f74db7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.538 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.539 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Ensure instance console log exists: /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.539 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.540 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.540 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:36:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Nov 22 04:36:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Nov 22 04:36:58 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.780 253665 DEBUG nova.network.neutron [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Successfully created port: 18df29f5-368d-4b94-ac69-8541de164d02 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.793 253665 INFO nova.virt.libvirt.driver [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance destroyed successfully.#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.793 253665 DEBUG nova.objects.instance [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'resources' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.813 253665 DEBUG nova.virt.libvirt.vif [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-627235813',display_name='tempest-ServersNegativeTestJSON-server-627235813',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-627235813',id=109,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-6hjukgnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member',shelved_at='2025-11-22T09:36:54.480378',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fd10acf7-7116-43c7-8e62-b2aed4e8d629'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:36:49Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=cf5e117a-f203-4c8f-b795-01fb355ca5e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.814 253665 DEBUG nova.network.os_vif_util [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.815 253665 DEBUG nova.network.os_vif_util [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.815 253665 DEBUG os_vif [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.817 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.817 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc027d879-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.819 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.821 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:58 np0005532048 nova_compute[253661]: 2025-11-22 09:36:58.824 253665 INFO os_vif [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91')#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.065 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.066 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.066 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.066 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.066 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.068 253665 INFO nova.compute.manager [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Terminating instance#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.069 253665 DEBUG nova.compute.manager [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:36:59 np0005532048 kernel: tapf4a3cf1b-5c (unregistering): left promiscuous mode
Nov 22 04:36:59 np0005532048 NetworkManager[48920]: <info>  [1763804219.1664] device (tapf4a3cf1b-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:36:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:59Z|01170|binding|INFO|Releasing lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b from this chassis (sb_readonly=0)
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.181 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:59Z|01171|binding|INFO|Setting lport f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b down in Southbound
Nov 22 04:36:59 np0005532048 ovn_controller[152872]: 2025-11-22T09:36:59Z|01172|binding|INFO|Removing iface tapf4a3cf1b-5c ovn-installed in OVS
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.189 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.194 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:c2:c9 10.100.0.11'], port_security=['fa:16:3e:ac:c2:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '750659ed-67e0-44d4-a5b3-b8d0029ffa7e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37020e16-bbf7-4d46-a463-62f41acbbdab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0112f56c468c4f90971b92126078e951', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'bdf6e5f8-acae-4ca0-a205-a73594668944', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f34ee933-6c38-4761-bdaf-c769de521957, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:36:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.196 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b in datapath 37020e16-bbf7-4d46-a463-62f41acbbdab unbound from our chassis#033[00m
Nov 22 04:36:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.198 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 37020e16-bbf7-4d46-a463-62f41acbbdab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:36:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.199 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2a0cc854-0616-462b-a17b-4d523acf30ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.200 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab namespace which is not needed anymore#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.211 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:59 np0005532048 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000071.scope: Deactivated successfully.
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.238 253665 DEBUG nova.network.neutron [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Successfully created port: a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:36:59 np0005532048 systemd-machined[215941]: Machine qemu-142-instance-00000071 terminated.
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.311 253665 INFO nova.virt.libvirt.driver [-] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Instance destroyed successfully.#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.312 253665 DEBUG nova.objects.instance [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lazy-loading 'resources' on Instance uuid 750659ed-67e0-44d4-a5b3-b8d0029ffa7e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.323 253665 DEBUG nova.virt.libvirt.vif [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:36:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-635689639',display_name='tempest-TestNetworkAdvancedServerOps-server-635689639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-635689639',id=113,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGat/0/6ionKBrSzyBS7EbGqwOoirIfAackkh+AYjCZXxoZzDjZWyHoUi84+Rs5w5CQ8NN8aOtxfB73LToni6HeOyO4Tgvy+GHztLu+Mg7hY5eYsKNagHEATOhR/nV+7Ew==',key_name='tempest-TestNetworkAdvancedServerOps-353719525',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:36:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0112f56c468c4f90971b92126078e951',ramdisk_id='',reservation_id='r-920qa6ny',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1215776227',owner_user_name='tempest-TestNetworkAdvancedServerOps-1215776227-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:36:54Z,user_data=None,user_id='ac89f965408f4a26b39ee2ae4725ff14',uuid=750659ed-67e0-44d4-a5b3-b8d0029ffa7e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.324 253665 DEBUG nova.network.os_vif_util [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converting VIF {"id": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "address": "fa:16:3e:ac:c2:c9", "network": {"id": "37020e16-bbf7-4d46-a463-62f41acbbdab", "bridge": "br-int", "label": "tempest-network-smoke--290144168", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0112f56c468c4f90971b92126078e951", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4a3cf1b-5c", "ovs_interfaceid": "f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.324 253665 DEBUG nova.network.os_vif_util [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.325 253665 DEBUG os_vif [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.327 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4a3cf1b-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.329 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.332 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.340 253665 INFO os_vif [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:c2:c9,bridge_name='br-int',has_traffic_filtering=True,id=f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b,network=Network(37020e16-bbf7-4d46-a463-62f41acbbdab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4a3cf1b-5c')#033[00m
Nov 22 04:36:59 np0005532048 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[372564]: [NOTICE]   (372568) : haproxy version is 2.8.14-c23fe91
Nov 22 04:36:59 np0005532048 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[372564]: [NOTICE]   (372568) : path to executable is /usr/sbin/haproxy
Nov 22 04:36:59 np0005532048 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[372564]: [WARNING]  (372568) : Exiting Master process...
Nov 22 04:36:59 np0005532048 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[372564]: [ALERT]    (372568) : Current worker (372570) exited with code 143 (Terminated)
Nov 22 04:36:59 np0005532048 neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab[372564]: [WARNING]  (372568) : All workers exited. Exiting... (0)
Nov 22 04:36:59 np0005532048 systemd[1]: libpod-7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b.scope: Deactivated successfully.
Nov 22 04:36:59 np0005532048 podman[373024]: 2025-11-22 09:36:59.403836077 +0000 UTC m=+0.075692733 container died 7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:36:59 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b-userdata-shm.mount: Deactivated successfully.
Nov 22 04:36:59 np0005532048 systemd[1]: var-lib-containers-storage-overlay-bba967c037ed38e7577ecc1bb77c57e590063a454084161df85b4b7d476e334b-merged.mount: Deactivated successfully.
Nov 22 04:36:59 np0005532048 podman[373024]: 2025-11-22 09:36:59.515561026 +0000 UTC m=+0.187417672 container cleanup 7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:36:59 np0005532048 systemd[1]: libpod-conmon-7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b.scope: Deactivated successfully.
Nov 22 04:36:59 np0005532048 podman[373073]: 2025-11-22 09:36:59.613602384 +0000 UTC m=+0.068605067 container remove 7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 04:36:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.627 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f9516e90-c2db-44e6-9df5-f94785990259]: (4, ('Sat Nov 22 09:36:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab (7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b)\n7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b\nSat Nov 22 09:36:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab (7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b)\n7fb9245339758a92532db09416b98a88074f0deed2364e7fc0a3f16b9c11b07b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.630 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[75e91f5d-b119-44ac-b642-a8e172294467]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.631 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37020e16-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.665 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:59 np0005532048 kernel: tap37020e16-b0: left promiscuous mode
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.671 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.673 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2288b465-d72a-4446-9455-e9a1229ce91d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:59 np0005532048 nova_compute[253661]: 2025-11-22 09:36:59.688 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:36:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.689 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[32b251a9-32cb-42eb-b7d6-1a56fb90f55d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.690 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[82615098-8fa4-4fec-955b-46b171dbc2e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.705 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[08c5df62-70db-4fe9-9a88-df1ff1c20730]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706477, 'reachable_time': 37432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373088, 'error': None, 'target': 'ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.708 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-37020e16-bbf7-4d46-a463-62f41acbbdab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:36:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:36:59.708 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[87765525-730a-4db1-bb4a-ce1e1b444b09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:36:59 np0005532048 systemd[1]: run-netns-ovnmeta\x2d37020e16\x2dbbf7\x2d4d46\x2da463\x2d62f41acbbdab.mount: Deactivated successfully.
Nov 22 04:36:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 336 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 8.4 MiB/s wr, 270 op/s
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.061 253665 DEBUG nova.network.neutron [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Successfully updated port: 18df29f5-368d-4b94-ac69-8541de164d02 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.162 253665 INFO nova.virt.libvirt.driver [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Deleting instance files /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8_del#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.164 253665 INFO nova.virt.libvirt.driver [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Deletion of /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8_del complete#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.348 253665 INFO nova.scheduler.client.report [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Deleted allocations for instance cf5e117a-f203-4c8f-b795-01fb355ca5e8#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.383 253665 INFO nova.virt.libvirt.driver [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Deleting instance files /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e_del#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.384 253665 INFO nova.virt.libvirt.driver [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Deletion of /var/lib/nova/instances/750659ed-67e0-44d4-a5b3-b8d0029ffa7e_del complete#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.391 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.391 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.448 253665 INFO nova.compute.manager [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Took 1.38 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.449 253665 DEBUG oslo.service.loopingcall [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.450 253665 DEBUG nova.compute.manager [-] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.450 253665 DEBUG nova.network.neutron [-] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.506 253665 DEBUG oslo_concurrency.processutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.563 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-changed-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.564 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Refreshing instance network info cache due to event network-changed-c027d879-91b3-497d-9f51-8476006ea65c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.564 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.565 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.565 253665 DEBUG nova.network.neutron [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Refreshing network info cache for port c027d879-91b3-497d-9f51-8476006ea65c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.606 253665 DEBUG nova.compute.manager [req-61a4e6fd-b121-49c7-813f-25edfb788a0e req-677a5b0f-8471-4c63-b491-15efe59b8962 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-changed-18df29f5-368d-4b94-ac69-8541de164d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.607 253665 DEBUG nova.compute.manager [req-61a4e6fd-b121-49c7-813f-25edfb788a0e req-677a5b0f-8471-4c63-b491-15efe59b8962 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Refreshing instance network info cache due to event network-changed-18df29f5-368d-4b94-ac69-8541de164d02. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.607 253665 DEBUG oslo_concurrency.lockutils [req-61a4e6fd-b121-49c7-813f-25edfb788a0e req-677a5b0f-8471-4c63-b491-15efe59b8962 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.608 253665 DEBUG oslo_concurrency.lockutils [req-61a4e6fd-b121-49c7-813f-25edfb788a0e req-677a5b0f-8471-4c63-b491-15efe59b8962 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.608 253665 DEBUG nova.network.neutron [req-61a4e6fd-b121-49c7-813f-25edfb788a0e req-677a5b0f-8471-4c63-b491-15efe59b8962 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Refreshing network info cache for port 18df29f5-368d-4b94-ac69-8541de164d02 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.787 253665 DEBUG nova.network.neutron [req-61a4e6fd-b121-49c7-813f-25edfb788a0e req-677a5b0f-8471-4c63-b491-15efe59b8962 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:37:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:37:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2694703152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:37:00 np0005532048 nova_compute[253661]: 2025-11-22 09:37:00.994 253665 DEBUG oslo_concurrency.processutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:01 np0005532048 nova_compute[253661]: 2025-11-22 09:37:01.001 253665 DEBUG nova.compute.provider_tree [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:37:01 np0005532048 nova_compute[253661]: 2025-11-22 09:37:01.017 253665 DEBUG nova.scheduler.client.report [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:37:01 np0005532048 nova_compute[253661]: 2025-11-22 09:37:01.045 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:01 np0005532048 nova_compute[253661]: 2025-11-22 09:37:01.050 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:01 np0005532048 nova_compute[253661]: 2025-11-22 09:37:01.106 253665 DEBUG oslo_concurrency.lockutils [None req-dce35dd2-ead4-400b-83a3-ca48a75a560a 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 15.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:01 np0005532048 nova_compute[253661]: 2025-11-22 09:37:01.310 253665 DEBUG nova.network.neutron [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Successfully updated port: a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:37:01 np0005532048 nova_compute[253661]: 2025-11-22 09:37:01.329 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:01 np0005532048 nova_compute[253661]: 2025-11-22 09:37:01.332 253665 DEBUG nova.network.neutron [req-61a4e6fd-b121-49c7-813f-25edfb788a0e req-677a5b0f-8471-4c63-b491-15efe59b8962 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:01 np0005532048 nova_compute[253661]: 2025-11-22 09:37:01.342 253665 DEBUG oslo_concurrency.lockutils [req-61a4e6fd-b121-49c7-813f-25edfb788a0e req-677a5b0f-8471-4c63-b491-15efe59b8962 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:01 np0005532048 nova_compute[253661]: 2025-11-22 09:37:01.342 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:01 np0005532048 nova_compute[253661]: 2025-11-22 09:37:01.342 253665 DEBUG nova.network.neutron [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:37:01 np0005532048 nova_compute[253661]: 2025-11-22 09:37:01.565 253665 DEBUG nova.network.neutron [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:37:01 np0005532048 nova_compute[253661]: 2025-11-22 09:37:01.800 253665 DEBUG nova.network.neutron [-] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:01 np0005532048 nova_compute[253661]: 2025-11-22 09:37:01.839 253665 INFO nova.compute.manager [-] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Took 1.39 seconds to deallocate network for instance.#033[00m
Nov 22 04:37:01 np0005532048 nova_compute[253661]: 2025-11-22 09:37:01.898 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:01 np0005532048 nova_compute[253661]: 2025-11-22 09:37:01.898 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 316 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 6.7 MiB/s wr, 286 op/s
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.049 253665 DEBUG oslo_concurrency.processutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.294 253665 DEBUG nova.network.neutron [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updated VIF entry in instance network info cache for port c027d879-91b3-497d-9f51-8476006ea65c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.294 253665 DEBUG nova.network.neutron [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": null, "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapc027d879-91", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.315 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.316 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-changed-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.316 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Refreshing instance network info cache due to event network-changed-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.316 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.316 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.316 253665 DEBUG nova.network.neutron [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Refreshing network info cache for port f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.475 253665 DEBUG nova.network.neutron [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:37:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:37:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769277003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.522 253665 DEBUG oslo_concurrency.processutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.529 253665 DEBUG nova.compute.provider_tree [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.547 253665 DEBUG nova.scheduler.client.report [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.580 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.634 253665 DEBUG nova.compute.manager [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-changed-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.637 253665 DEBUG nova.compute.manager [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Refreshing instance network info cache due to event network-changed-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.638 253665 DEBUG oslo_concurrency.lockutils [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.638 253665 DEBUG oslo_concurrency.lockutils [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.638 253665 DEBUG nova.network.neutron [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Refreshing network info cache for port 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.642 253665 INFO nova.scheduler.client.report [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Deleted allocations for instance 750659ed-67e0-44d4-a5b3-b8d0029ffa7e#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.724 253665 DEBUG oslo_concurrency.lockutils [None req-b58b79e6-3c77-4260-a08a-1f403f663d58 ac89f965408f4a26b39ee2ae4725ff14 0112f56c468c4f90971b92126078e951 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.752 253665 DEBUG nova.compute.manager [req-6cb60ac4-9d76-4385-9898-b3ea148aa638 req-c73d0f23-c4b4-4a4b-ae40-ee9620241017 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-changed-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.753 253665 DEBUG nova.compute.manager [req-6cb60ac4-9d76-4385-9898-b3ea148aa638 req-c73d0f23-c4b4-4a4b-ae40-ee9620241017 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Refreshing instance network info cache due to event network-changed-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.753 253665 DEBUG oslo_concurrency.lockutils [req-6cb60ac4-9d76-4385-9898-b3ea148aa638 req-c73d0f23-c4b4-4a4b-ae40-ee9620241017 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.842 253665 DEBUG nova.network.neutron [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017212732792863188 of space, bias 1.0, pg target 0.5163819837858956 quantized to 32 (current 32)
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014231600410236375 of space, bias 1.0, pg target 0.4269480123070913 quantized to 32 (current 32)
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.860 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-750659ed-67e0-44d4-a5b3-b8d0029ffa7e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.861 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-unplugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.861 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.862 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.862 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.862 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] No waiting events found dispatching network-vif-unplugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.863 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-unplugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.863 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.863 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.863 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.864 253665 DEBUG oslo_concurrency.lockutils [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "750659ed-67e0-44d4-a5b3-b8d0029ffa7e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.864 253665 DEBUG nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] No waiting events found dispatching network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:37:02 np0005532048 nova_compute[253661]: 2025-11-22 09:37:02.864 253665 WARNING nova.compute.manager [req-b8fcaf72-8bd8-4ee1-9b0e-c7a0efe6b5d8 req-97fdc67c-fa00-451d-9899-aeb1a8f98779 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received unexpected event network-vif-plugged-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:37:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:37:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 213 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.2 MiB/s wr, 214 op/s
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.066 253665 DEBUG nova.network.neutron [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updating instance_info_cache with network_info: [{"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.086 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804209.0854855, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.087 253665 INFO nova.compute.manager [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.089 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.089 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Instance network_info: |[{"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.090 253665 DEBUG oslo_concurrency.lockutils [req-6cb60ac4-9d76-4385-9898-b3ea148aa638 req-c73d0f23-c4b4-4a4b-ae40-ee9620241017 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.090 253665 DEBUG nova.network.neutron [req-6cb60ac4-9d76-4385-9898-b3ea148aa638 req-c73d0f23-c4b4-4a4b-ae40-ee9620241017 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Refreshing network info cache for port a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.093 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Start _get_guest_xml network_info=[{"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.098 253665 WARNING nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.103 253665 DEBUG nova.virt.libvirt.host [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.103 253665 DEBUG nova.virt.libvirt.host [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.110 253665 DEBUG nova.compute.manager [None req-3c260157-268b-4753-ba7b-9e82e8ee55ba - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.116 253665 DEBUG nova.virt.libvirt.host [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.116 253665 DEBUG nova.virt.libvirt.host [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.117 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.117 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.117 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.118 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.118 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.118 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.118 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.118 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.118 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.119 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.119 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.119 253665 DEBUG nova.virt.hardware [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.123 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.331 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.486 253665 DEBUG nova.network.neutron [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Updated VIF entry in instance network info cache for port 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.488 253665 DEBUG nova.network.neutron [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Updating instance_info_cache with network_info: [{"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.514 253665 DEBUG oslo_concurrency.lockutils [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.515 253665 DEBUG nova.compute.manager [req-e21355fe-c86d-4966-87ff-a570ca87100e req-2d5fc233-e77e-4834-bae2-49e95d121482 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Received event network-vif-deleted-f4a3cf1b-5ca5-4e5f-8d2b-8b9b319c587b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:37:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/255243496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.600 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.622 253665 DEBUG nova.storage.rbd_utils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2837c740-6ce1-47d5-ad27-107211f74db7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:04 np0005532048 nova_compute[253661]: 2025-11-22 09:37:04.626 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:37:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1602858101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.086 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.087 253665 DEBUG nova.virt.libvirt.vif [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:36:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1413808402',display_name='tempest-TestGettingAddress-server-1413808402',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1413808402',id=115,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rou3pok7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:36:57Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2837c740-6ce1-47d5-ad27-107211f74db7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.088 253665 DEBUG nova.network.os_vif_util [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.088 253665 DEBUG nova.network.os_vif_util [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=18df29f5-368d-4b94-ac69-8541de164d02,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df29f5-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.089 253665 DEBUG nova.virt.libvirt.vif [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:36:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1413808402',display_name='tempest-TestGettingAddress-server-1413808402',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1413808402',id=115,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rou3pok7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:36:57Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2837c740-6ce1-47d5-ad27-107211f74db7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.089 253665 DEBUG nova.network.os_vif_util [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.090 253665 DEBUG nova.network.os_vif_util [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:fd:83,bridge_name='br-int',has_traffic_filtering=True,id=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c9a54f-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.091 253665 DEBUG nova.objects.instance [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2837c740-6ce1-47d5-ad27-107211f74db7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.101 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.101 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.101 253665 INFO nova.compute.manager [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Unshelving#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.106 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  <uuid>2837c740-6ce1-47d5-ad27-107211f74db7</uuid>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  <name>instance-00000073</name>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestGettingAddress-server-1413808402</nova:name>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:37:04</nova:creationTime>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:        <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:        <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:        <nova:port uuid="18df29f5-368d-4b94-ac69-8541de164d02">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:        <nova:port uuid="a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe9d:fd83" ipVersion="6"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <entry name="serial">2837c740-6ce1-47d5-ad27-107211f74db7</entry>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <entry name="uuid">2837c740-6ce1-47d5-ad27-107211f74db7</entry>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/2837c740-6ce1-47d5-ad27-107211f74db7_disk">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/2837c740-6ce1-47d5-ad27-107211f74db7_disk.config">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:90:34:a1"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <target dev="tap18df29f5-36"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:9d:fd:83"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <target dev="tapa8c9a54f-9f"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7/console.log" append="off"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:37:05 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:37:05 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:37:05 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:37:05 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.107 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Preparing to wait for external event network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.107 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.107 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.108 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.108 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Preparing to wait for external event network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.108 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.108 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.108 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.109 253665 DEBUG nova.virt.libvirt.vif [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:36:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1413808402',display_name='tempest-TestGettingAddress-server-1413808402',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1413808402',id=115,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rou3pok7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:36:57Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2837c740-6ce1-47d5-ad27-107211f74db7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.109 253665 DEBUG nova.network.os_vif_util [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.110 253665 DEBUG nova.network.os_vif_util [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=18df29f5-368d-4b94-ac69-8541de164d02,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df29f5-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.111 253665 DEBUG os_vif [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=18df29f5-368d-4b94-ac69-8541de164d02,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df29f5-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.112 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.112 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.113 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.122 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.122 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18df29f5-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.123 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18df29f5-36, col_values=(('external_ids', {'iface-id': '18df29f5-368d-4b94-ac69-8541de164d02', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:34:a1', 'vm-uuid': '2837c740-6ce1-47d5-ad27-107211f74db7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.124 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:05 np0005532048 NetworkManager[48920]: <info>  [1763804225.1257] manager: (tap18df29f5-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/483)
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.126 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.132 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.133 253665 INFO os_vif [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=18df29f5-368d-4b94-ac69-8541de164d02,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df29f5-36')#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.134 253665 DEBUG nova.virt.libvirt.vif [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:36:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1413808402',display_name='tempest-TestGettingAddress-server-1413808402',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1413808402',id=115,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rou3pok7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:36:57Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2837c740-6ce1-47d5-ad27-107211f74db7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.134 253665 DEBUG nova.network.os_vif_util [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.135 253665 DEBUG nova.network.os_vif_util [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:fd:83,bridge_name='br-int',has_traffic_filtering=True,id=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c9a54f-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.135 253665 DEBUG os_vif [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:fd:83,bridge_name='br-int',has_traffic_filtering=True,id=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c9a54f-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.136 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.136 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.139 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.140 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8c9a54f-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.140 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa8c9a54f-9f, col_values=(('external_ids', {'iface-id': 'a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:fd:83', 'vm-uuid': '2837c740-6ce1-47d5-ad27-107211f74db7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.141 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:05 np0005532048 NetworkManager[48920]: <info>  [1763804225.1425] manager: (tapa8c9a54f-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/484)
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.146 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.150 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.152 253665 INFO os_vif [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:fd:83,bridge_name='br-int',has_traffic_filtering=True,id=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c9a54f-9f')#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.183 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.183 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.188 253665 DEBUG nova.objects.instance [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'pci_requests' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.200 253665 DEBUG nova.objects.instance [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'numa_topology' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.206 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.206 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.206 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:90:34:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.207 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:9d:fd:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.207 253665 INFO nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Using config drive#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.233 253665 DEBUG nova.storage.rbd_utils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2837c740-6ce1-47d5-ad27-107211f74db7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.255 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.255 253665 INFO nova.compute.claims [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.378 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:37:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/32284843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.903 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.910 253665 DEBUG nova.compute.provider_tree [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:37:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 213 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.2 MiB/s wr, 214 op/s
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.924 253665 DEBUG nova.scheduler.client.report [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:37:05 np0005532048 nova_compute[253661]: 2025-11-22 09:37:05.942 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:06 np0005532048 nova_compute[253661]: 2025-11-22 09:37:06.051 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:06 np0005532048 nova_compute[253661]: 2025-11-22 09:37:06.171 253665 INFO nova.network.neutron [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating port c027d879-91b3-497d-9f51-8476006ea65c with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 22 04:37:06 np0005532048 nova_compute[253661]: 2025-11-22 09:37:06.194 253665 INFO nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Creating config drive at /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7/disk.config#033[00m
Nov 22 04:37:06 np0005532048 nova_compute[253661]: 2025-11-22 09:37:06.201 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppa6ahe7y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:06 np0005532048 nova_compute[253661]: 2025-11-22 09:37:06.247 253665 DEBUG nova.network.neutron [req-6cb60ac4-9d76-4385-9898-b3ea148aa638 req-c73d0f23-c4b4-4a4b-ae40-ee9620241017 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updated VIF entry in instance network info cache for port a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:37:06 np0005532048 nova_compute[253661]: 2025-11-22 09:37:06.249 253665 DEBUG nova.network.neutron [req-6cb60ac4-9d76-4385-9898-b3ea148aa638 req-c73d0f23-c4b4-4a4b-ae40-ee9620241017 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updating instance_info_cache with network_info: [{"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:06 np0005532048 nova_compute[253661]: 2025-11-22 09:37:06.271 253665 DEBUG oslo_concurrency.lockutils [req-6cb60ac4-9d76-4385-9898-b3ea148aa638 req-c73d0f23-c4b4-4a4b-ae40-ee9620241017 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:06 np0005532048 nova_compute[253661]: 2025-11-22 09:37:06.352 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppa6ahe7y" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:06 np0005532048 nova_compute[253661]: 2025-11-22 09:37:06.385 253665 DEBUG nova.storage.rbd_utils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 2837c740-6ce1-47d5-ad27-107211f74db7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:06 np0005532048 nova_compute[253661]: 2025-11-22 09:37:06.391 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7/disk.config 2837c740-6ce1-47d5-ad27-107211f74db7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.142 253665 DEBUG oslo_concurrency.processutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7/disk.config 2837c740-6ce1-47d5-ad27-107211f74db7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.752s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.144 253665 INFO nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Deleting local config drive /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7/disk.config because it was imported into RBD.#033[00m
Nov 22 04:37:07 np0005532048 kernel: tap18df29f5-36: entered promiscuous mode
Nov 22 04:37:07 np0005532048 NetworkManager[48920]: <info>  [1763804227.2062] manager: (tap18df29f5-36): new Tun device (/org/freedesktop/NetworkManager/Devices/485)
Nov 22 04:37:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:07Z|01173|binding|INFO|Claiming lport 18df29f5-368d-4b94-ac69-8541de164d02 for this chassis.
Nov 22 04:37:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:07Z|01174|binding|INFO|18df29f5-368d-4b94-ac69-8541de164d02: Claiming fa:16:3e:90:34:a1 10.100.0.7
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.212 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.212 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.213 253665 DEBUG nova.network.neutron [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.216 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:07 np0005532048 NetworkManager[48920]: <info>  [1763804227.2219] manager: (tapa8c9a54f-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/486)
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.223 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:34:a1 10.100.0.7'], port_security=['fa:16:3e:90:34:a1 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2837c740-6ce1-47d5-ad27-107211f74db7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a935b0bb-9a00-49bb-8266-f3d0879d526c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=416fdb0b-60ab-41a3-b089-f86f3fe1761e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=18df29f5-368d-4b94-ac69-8541de164d02) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.224 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 18df29f5-368d-4b94-ac69-8541de164d02 in datapath a1a3f352-95a9-4122-aecd-94a4bbf79683 bound to our chassis#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.226 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1a3f352-95a9-4122-aecd-94a4bbf79683#033[00m
Nov 22 04:37:07 np0005532048 kernel: tapa8c9a54f-9f: entered promiscuous mode
Nov 22 04:37:07 np0005532048 systemd-udevd[373298]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:37:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:07Z|01175|binding|INFO|Setting lport 18df29f5-368d-4b94-ac69-8541de164d02 ovn-installed in OVS
Nov 22 04:37:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:07Z|01176|binding|INFO|Setting lport 18df29f5-368d-4b94-ac69-8541de164d02 up in Southbound
Nov 22 04:37:07 np0005532048 systemd-udevd[373297]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.245 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.246 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6b17d856-e687-464a-bf51-73ee7c012534]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.248 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa1a3f352-91 in ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:07Z|01177|if_status|INFO|Dropped 4 log messages in last 1292 seconds (most recently, 1292 seconds ago) due to excessive rate
Nov 22 04:37:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:07Z|01178|if_status|INFO|Not updating pb chassis for a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de now as sb is readonly
Nov 22 04:37:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:07Z|01179|binding|INFO|Claiming lport a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de for this chassis.
Nov 22 04:37:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:07Z|01180|binding|INFO|a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de: Claiming fa:16:3e:9d:fd:83 2001:db8::f816:3eff:fe9d:fd83
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.251 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa1a3f352-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.251 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e609bd03-51f9-43fa-ac38-89cbeb395a5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.254 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[abcc3ebd-9aa6-432d-ac1f-0cabacffac2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:07 np0005532048 NetworkManager[48920]: <info>  [1763804227.2593] device (tap18df29f5-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:37:07 np0005532048 NetworkManager[48920]: <info>  [1763804227.2602] device (tap18df29f5-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:37:07 np0005532048 NetworkManager[48920]: <info>  [1763804227.2623] device (tapa8c9a54f-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:37:07 np0005532048 NetworkManager[48920]: <info>  [1763804227.2629] device (tapa8c9a54f-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.263 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:fd:83 2001:db8::f816:3eff:fe9d:fd83'], port_security=['fa:16:3e:9d:fd:83 2001:db8::f816:3eff:fe9d:fd83'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9d:fd83/64', 'neutron:device_id': '2837c740-6ce1-47d5-ad27-107211f74db7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a935b0bb-9a00-49bb-8266-f3d0879d526c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f56771e6-e0a6-4947-ad39-6cb384a012bf, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.271 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:07Z|01181|binding|INFO|Setting lport a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de ovn-installed in OVS
Nov 22 04:37:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:07Z|01182|binding|INFO|Setting lport a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de up in Southbound
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.275 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[f35898d5-a116-41b8-a255-4abb94eff7ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:07 np0005532048 systemd-machined[215941]: New machine qemu-144-instance-00000073.
Nov 22 04:37:07 np0005532048 systemd[1]: Started Virtual Machine qemu-144-instance-00000073.
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.297 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3c38523c-d527-436e-bf1c-53f72bc08db2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.319 253665 DEBUG nova.compute.manager [req-d08e47eb-1a9a-401f-aa89-b169aa04b02b req-57b5171b-185c-4e28-9848-784e484a0900 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-changed-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.319 253665 DEBUG nova.compute.manager [req-d08e47eb-1a9a-401f-aa89-b169aa04b02b req-57b5171b-185c-4e28-9848-784e484a0900 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Refreshing instance network info cache due to event network-changed-c027d879-91b3-497d-9f51-8476006ea65c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.319 253665 DEBUG oslo_concurrency.lockutils [req-d08e47eb-1a9a-401f-aa89-b169aa04b02b req-57b5171b-185c-4e28-9848-784e484a0900 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.336 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[43bf4d9f-e99a-4818-89b8-09bb87127601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.343 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[74fde671-d136-4309-a7aa-cf6dbcdf1383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:07 np0005532048 NetworkManager[48920]: <info>  [1763804227.3447] manager: (tapa1a3f352-90): new Veth device (/org/freedesktop/NetworkManager/Devices/487)
Nov 22 04:37:07 np0005532048 systemd-udevd[373303]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.388 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c393b1b0-a03d-4615-861c-1d705ac0b9cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.393 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8e975170-9b63-4b2c-9bed-5ed18e426dc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:07 np0005532048 NetworkManager[48920]: <info>  [1763804227.4207] device (tapa1a3f352-90): carrier: link connected
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.429 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf4f569-e2b2-4b83-aa98-f3fa62e17273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.455 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c5b5d13-59a9-40d2-9c98-ce720ff01313]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a3f352-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:dc:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707815, 'reachable_time': 26897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373333, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.477 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c43f2c7c-203c-47b2-9fd8-ab60b1eac7cf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea3:dc76'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707815, 'tstamp': 707815}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373334, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:07Z|01183|binding|INFO|Releasing lport b0af7c96-3c08-40c2-b3ca-1e251090d01d from this chassis (sb_readonly=0)
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.514 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6a4805-91a6-4844-83fd-ca89804b8fde]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a3f352-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:dc:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707815, 'reachable_time': 26897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373335, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.587 253665 DEBUG nova.compute.manager [req-eb8e4cce-fa19-499f-a1ce-a31cffd5c47f req-b11fb0f2-69ed-47c8-871c-c0fee694e039 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.588 253665 DEBUG oslo_concurrency.lockutils [req-eb8e4cce-fa19-499f-a1ce-a31cffd5c47f req-b11fb0f2-69ed-47c8-871c-c0fee694e039 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.588 253665 DEBUG oslo_concurrency.lockutils [req-eb8e4cce-fa19-499f-a1ce-a31cffd5c47f req-b11fb0f2-69ed-47c8-871c-c0fee694e039 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.588 253665 DEBUG oslo_concurrency.lockutils [req-eb8e4cce-fa19-499f-a1ce-a31cffd5c47f req-b11fb0f2-69ed-47c8-871c-c0fee694e039 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.589 253665 DEBUG nova.compute.manager [req-eb8e4cce-fa19-499f-a1ce-a31cffd5c47f req-b11fb0f2-69ed-47c8-871c-c0fee694e039 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Processing event network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.601 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.607 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[377ef26b-4a5f-434e-bb4b-0f823009d119]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.690 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[97488eea-1b21-40a1-ac4b-76b65bcb6b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.692 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a3f352-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.693 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.693 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1a3f352-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:07 np0005532048 kernel: tapa1a3f352-90: entered promiscuous mode
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.695 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:07 np0005532048 NetworkManager[48920]: <info>  [1763804227.6961] manager: (tapa1a3f352-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/488)
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.697 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1a3f352-90, col_values=(('external_ids', {'iface-id': '6e07e124-b404-4946-958f-042e8d633a40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:07Z|01184|binding|INFO|Releasing lport 6e07e124-b404-4946-958f-042e8d633a40 from this chassis (sb_readonly=0)
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.715 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a1a3f352-95a9-4122-aecd-94a4bbf79683.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a1a3f352-95a9-4122-aecd-94a4bbf79683.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.716 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b6bcfe-33f9-4491-93c1-b68222541092]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.717 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-a1a3f352-95a9-4122-aecd-94a4bbf79683
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/a1a3f352-95a9-4122-aecd-94a4bbf79683.pid.haproxy
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID a1a3f352-95a9-4122-aecd-94a4bbf79683
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:37:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:07.718 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'env', 'PROCESS_TAG=haproxy-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a1a3f352-95a9-4122-aecd-94a4bbf79683.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.837 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804227.8365903, 2837c740-6ce1-47d5-ad27-107211f74db7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.838 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] VM Started (Lifecycle Event)#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.852 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.856 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804227.8367171, 2837c740-6ce1-47d5-ad27-107211f74db7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.856 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.870 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.874 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:37:07 np0005532048 nova_compute[253661]: 2025-11-22 09:37:07.888 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:37:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 213 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Nov 22 04:37:08 np0005532048 podman[373410]: 2025-11-22 09:37:08.125030332 +0000 UTC m=+0.075301404 container create aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:37:08 np0005532048 podman[373410]: 2025-11-22 09:37:08.076378922 +0000 UTC m=+0.026650014 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:37:08 np0005532048 systemd[1]: Started libpod-conmon-aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd.scope.
Nov 22 04:37:08 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:37:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871bb839a73d656755f89d39a1db17cd9b58ff25f5a2a0710db15a0e02acd3f2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:08 np0005532048 podman[373410]: 2025-11-22 09:37:08.223148632 +0000 UTC m=+0.173419724 container init aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:37:08 np0005532048 podman[373410]: 2025-11-22 09:37:08.229616993 +0000 UTC m=+0.179888095 container start aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 04:37:08 np0005532048 neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683[373425]: [NOTICE]   (373429) : New worker (373431) forked
Nov 22 04:37:08 np0005532048 neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683[373425]: [NOTICE]   (373429) : Loading success.
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.305 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de in datapath c883e14c-ad7e-49eb-b0c3-2571140d1e57 unbound from our chassis#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.307 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c883e14c-ad7e-49eb-b0c3-2571140d1e57#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.322 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[38ec0988-b759-4afb-8da5-89c7233060ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.326 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc883e14c-a1 in ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.329 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc883e14c-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.329 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c7cfbe8b-d48e-40d4-aec9-952a461dae9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.331 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[014944af-72d5-4995-89bf-fc9b10acb219]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.345 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[bd315248-7924-4956-9e8e-ae0705f3b1e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.362 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[20ce1cd1-d4ee-4640-a03b-bc06263baec9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.398 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d465991d-04b6-41b5-8573-35fa407c822a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:08 np0005532048 systemd-udevd[373321]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:37:08 np0005532048 NetworkManager[48920]: <info>  [1763804228.4065] manager: (tapc883e14c-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/489)
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.405 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[25992086-cd0e-42a6-8652-8e12c606ff8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.447 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[369714c2-f60f-4c43-aaf2-6bcd3c0c25d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.450 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fb7f12-6930-4750-8bcb-96b4aaed6894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:08 np0005532048 NetworkManager[48920]: <info>  [1763804228.4809] device (tapc883e14c-a0): carrier: link connected
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.488 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[57ea02f3-9165-44e5-945d-533a409e7b99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.516 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea5c33a1-8d97-4b91-a8d0-66b0aabc137a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc883e14c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d1:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707921, 'reachable_time': 43324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373450, 'error': None, 'target': 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.542 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5fab949b-c28b-490c-98b2-54b7d60d03e7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:d1f3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707921, 'tstamp': 707921}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373451, 'error': None, 'target': 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.567 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[151be39c-670e-4fbf-812c-dd740dc4cc49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc883e14c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d1:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707921, 'reachable_time': 43324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373452, 'error': None, 'target': 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.605 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb807c2-02d6-4634-8688-2c60458800aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.649 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c83f6145-6201-43a5-bb6f-ec7dd478b3a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.651 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc883e14c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.651 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.651 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc883e14c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:08 np0005532048 kernel: tapc883e14c-a0: entered promiscuous mode
Nov 22 04:37:08 np0005532048 nova_compute[253661]: 2025-11-22 09:37:08.706 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:08 np0005532048 NetworkManager[48920]: <info>  [1763804228.7076] manager: (tapc883e14c-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/490)
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.712 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc883e14c-a0, col_values=(('external_ids', {'iface-id': '8cb4fbf8-c8a1-48f8-bf71-339312c7db31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:08 np0005532048 nova_compute[253661]: 2025-11-22 09:37:08.714 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:08Z|01185|binding|INFO|Releasing lport 8cb4fbf8-c8a1-48f8-bf71-339312c7db31 from this chassis (sb_readonly=0)
Nov 22 04:37:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:37:08 np0005532048 nova_compute[253661]: 2025-11-22 09:37:08.734 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.736 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c883e14c-ad7e-49eb-b0c3-2571140d1e57.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c883e14c-ad7e-49eb-b0c3-2571140d1e57.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.737 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f87c52cf-f83a-4d3c-9933-f51162d69c1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.739 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-c883e14c-ad7e-49eb-b0c3-2571140d1e57
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/c883e14c-ad7e-49eb-b0c3-2571140d1e57.pid.haproxy
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID c883e14c-ad7e-49eb-b0c3-2571140d1e57
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:37:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:08.739 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'env', 'PROCESS_TAG=haproxy-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c883e14c-ad7e-49eb-b0c3-2571140d1e57.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:37:08 np0005532048 nova_compute[253661]: 2025-11-22 09:37:08.919 253665 DEBUG nova.network.neutron [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:08 np0005532048 nova_compute[253661]: 2025-11-22 09:37:08.941 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:08 np0005532048 nova_compute[253661]: 2025-11-22 09:37:08.943 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:37:08 np0005532048 nova_compute[253661]: 2025-11-22 09:37:08.943 253665 INFO nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Creating image(s)#033[00m
Nov 22 04:37:08 np0005532048 nova_compute[253661]: 2025-11-22 09:37:08.968 253665 DEBUG nova.storage.rbd_utils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:08 np0005532048 nova_compute[253661]: 2025-11-22 09:37:08.973 253665 DEBUG nova.objects.instance [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:08 np0005532048 nova_compute[253661]: 2025-11-22 09:37:08.975 253665 DEBUG oslo_concurrency.lockutils [req-d08e47eb-1a9a-401f-aa89-b169aa04b02b req-57b5171b-185c-4e28-9848-784e484a0900 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:08 np0005532048 nova_compute[253661]: 2025-11-22 09:37:08.975 253665 DEBUG nova.network.neutron [req-d08e47eb-1a9a-401f-aa89-b169aa04b02b req-57b5171b-185c-4e28-9848-784e484a0900 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Refreshing network info cache for port c027d879-91b3-497d-9f51-8476006ea65c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.020 253665 DEBUG nova.storage.rbd_utils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.046 253665 DEBUG nova.storage.rbd_utils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.051 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "ae26e73adffe36046883b3f1778c799a83ef0b41" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.052 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "ae26e73adffe36046883b3f1778c799a83ef0b41" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:09 np0005532048 podman[373537]: 2025-11-22 09:37:09.162928823 +0000 UTC m=+0.068619268 container create e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 04:37:09 np0005532048 systemd[1]: Started libpod-conmon-e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868.scope.
Nov 22 04:37:09 np0005532048 podman[373537]: 2025-11-22 09:37:09.120105928 +0000 UTC m=+0.025796393 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:37:09 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:37:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8abda532727a5eec38dc91090bcebb9f55c2e54221da7f33b18490782350886/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:09 np0005532048 podman[373537]: 2025-11-22 09:37:09.277770259 +0000 UTC m=+0.183460744 container init e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 22 04:37:09 np0005532048 podman[373537]: 2025-11-22 09:37:09.286002843 +0000 UTC m=+0.191693288 container start e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:37:09 np0005532048 neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57[373552]: [NOTICE]   (373557) : New worker (373559) forked
Nov 22 04:37:09 np0005532048 neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57[373552]: [NOTICE]   (373557) : Loading success.
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.440 253665 DEBUG nova.compute.manager [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.440 253665 DEBUG oslo_concurrency.lockutils [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.440 253665 DEBUG oslo_concurrency.lockutils [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.441 253665 DEBUG oslo_concurrency.lockutils [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.441 253665 DEBUG nova.compute.manager [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Processing event network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.441 253665 DEBUG nova.compute.manager [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.442 253665 DEBUG oslo_concurrency.lockutils [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.443 253665 DEBUG oslo_concurrency.lockutils [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.443 253665 DEBUG oslo_concurrency.lockutils [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.444 253665 DEBUG nova.compute.manager [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] No waiting events found dispatching network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.444 253665 WARNING nova.compute.manager [req-8d8d19e7-f9dd-436a-a8d9-d1aee986b8c1 req-5d5817f2-507c-48f9-a9c1-92c69095372a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received unexpected event network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.446 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.449 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804229.4491794, 2837c740-6ce1-47d5-ad27-107211f74db7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.449 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.451 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.455 253665 INFO nova.virt.libvirt.driver [-] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Instance spawned successfully.#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.455 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.482 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.486 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.487 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.487 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.487 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.488 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.488 253665 DEBUG nova.virt.libvirt.driver [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.492 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.501 253665 DEBUG nova.virt.libvirt.imagebackend [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Image locations are: [{'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/fd10acf7-7116-43c7-8e62-b2aed4e8d629/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/fd10acf7-7116-43c7-8e62-b2aed4e8d629/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.558 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.576 253665 INFO nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Took 12.49 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.576 253665 DEBUG nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.584 253665 DEBUG nova.virt.libvirt.imagebackend [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Selected location: {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/fd10acf7-7116-43c7-8e62-b2aed4e8d629/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.585 253665 DEBUG nova.storage.rbd_utils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] cloning images/fd10acf7-7116-43c7-8e62-b2aed4e8d629@snap to None/cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.662 253665 DEBUG nova.compute.manager [req-77d80059-65f8-4724-9f33-b98bf88362c9 req-c9f43c57-c4b0-47a9-83ca-bfb38a7973f2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.662 253665 DEBUG oslo_concurrency.lockutils [req-77d80059-65f8-4724-9f33-b98bf88362c9 req-c9f43c57-c4b0-47a9-83ca-bfb38a7973f2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.663 253665 DEBUG oslo_concurrency.lockutils [req-77d80059-65f8-4724-9f33-b98bf88362c9 req-c9f43c57-c4b0-47a9-83ca-bfb38a7973f2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.663 253665 DEBUG oslo_concurrency.lockutils [req-77d80059-65f8-4724-9f33-b98bf88362c9 req-c9f43c57-c4b0-47a9-83ca-bfb38a7973f2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.663 253665 DEBUG nova.compute.manager [req-77d80059-65f8-4724-9f33-b98bf88362c9 req-c9f43c57-c4b0-47a9-83ca-bfb38a7973f2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] No waiting events found dispatching network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.663 253665 WARNING nova.compute.manager [req-77d80059-65f8-4724-9f33-b98bf88362c9 req-c9f43c57-c4b0-47a9-83ca-bfb38a7973f2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received unexpected event network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.707 253665 INFO nova.compute.manager [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Took 13.71 seconds to build instance.#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.726 253665 DEBUG oslo_concurrency.lockutils [None req-50bd7140-e499-431e-b4be-07e332b5ab68 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.753 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "ae26e73adffe36046883b3f1778c799a83ef0b41" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 213 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 840 KiB/s rd, 1.4 MiB/s wr, 109 op/s
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.930 253665 DEBUG nova.objects.instance [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'migration_context' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:09 np0005532048 nova_compute[253661]: 2025-11-22 09:37:09.994 253665 DEBUG nova.storage.rbd_utils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] flattening vms/cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.142 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.388 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Image rbd:vms/cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.389 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.389 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Ensure instance console log exists: /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.390 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.390 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.390 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.392 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Start _get_guest_xml network_info=[{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-22T09:36:45Z,direct_url=<?>,disk_format='raw',id=fd10acf7-7116-43c7-8e62-b2aed4e8d629,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-627235813-shelved',owner='a31947dfacfc450ba028c42968f103b2',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-22T09:36:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.399 253665 WARNING nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.405 253665 DEBUG nova.virt.libvirt.host [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.406 253665 DEBUG nova.virt.libvirt.host [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.410 253665 DEBUG nova.virt.libvirt.host [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.412 253665 DEBUG nova.virt.libvirt.host [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.413 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.413 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-22T09:36:45Z,direct_url=<?>,disk_format='raw',id=fd10acf7-7116-43c7-8e62-b2aed4e8d629,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-627235813-shelved',owner='a31947dfacfc450ba028c42968f103b2',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-22T09:36:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.413 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.414 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.414 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.414 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.414 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.415 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.415 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.416 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.416 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.416 253665 DEBUG nova.virt.hardware [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.416 253665 DEBUG nova.objects.instance [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'vcpu_model' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.435 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.819 253665 DEBUG nova.network.neutron [req-d08e47eb-1a9a-401f-aa89-b169aa04b02b req-57b5171b-185c-4e28-9848-784e484a0900 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updated VIF entry in instance network info cache for port c027d879-91b3-497d-9f51-8476006ea65c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.821 253665 DEBUG nova.network.neutron [req-d08e47eb-1a9a-401f-aa89-b169aa04b02b req-57b5171b-185c-4e28-9848-784e484a0900 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.834 253665 DEBUG oslo_concurrency.lockutils [req-d08e47eb-1a9a-401f-aa89-b169aa04b02b req-57b5171b-185c-4e28-9848-784e484a0900 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:37:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/249608731' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:37:10 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.938 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.969 253665 DEBUG nova.storage.rbd_utils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:10 np0005532048 nova_compute[253661]: 2025-11-22 09:37:10.978 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.095 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:37:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/62197803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.482 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.484 253665 DEBUG nova.virt.libvirt.vif [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-627235813',display_name='tempest-ServersNegativeTestJSON-server-627235813',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-627235813',id=109,image_ref='fd10acf7-7116-43c7-8e62-b2aed4e8d629',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-6hjukgnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='vir
tio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member',shelved_at='2025-11-22T09:36:54.480378',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fd10acf7-7116-43c7-8e62-b2aed4e8d629'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:05Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=cf5e117a-f203-4c8f-b795-01fb355ca5e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.484 253665 DEBUG nova.network.os_vif_util [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.485 253665 DEBUG nova.network.os_vif_util [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.486 253665 DEBUG nova.objects.instance [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.499 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  <uuid>cf5e117a-f203-4c8f-b795-01fb355ca5e8</uuid>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  <name>instance-0000006d</name>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersNegativeTestJSON-server-627235813</nova:name>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:37:10</nova:creationTime>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:37:11 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:        <nova:user uuid="31c7a4aa8fa340d2881ddc3ed426b6db">tempest-ServersNegativeTestJSON-1692723590-project-member</nova:user>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:        <nova:project uuid="a31947dfacfc450ba028c42968f103b2">tempest-ServersNegativeTestJSON-1692723590</nova:project>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="fd10acf7-7116-43c7-8e62-b2aed4e8d629"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:        <nova:port uuid="c027d879-91b3-497d-9f51-8476006ea65c">
Nov 22 04:37:11 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <entry name="serial">cf5e117a-f203-4c8f-b795-01fb355ca5e8</entry>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <entry name="uuid">cf5e117a-f203-4c8f-b795-01fb355ca5e8</entry>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk">
Nov 22 04:37:11 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:37:11 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config">
Nov 22 04:37:11 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:37:11 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:d9:42:5a"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <target dev="tapc027d879-91"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/console.log" append="off"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <input type="keyboard" bus="usb"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:37:11 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:37:11 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:37:11 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:37:11 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.500 253665 DEBUG nova.compute.manager [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Preparing to wait for external event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.500 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.501 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.501 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.501 253665 DEBUG nova.virt.libvirt.vif [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-627235813',display_name='tempest-ServersNegativeTestJSON-server-627235813',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-627235813',id=109,image_ref='fd10acf7-7116-43c7-8e62-b2aed4e8d629',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:35:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-6hjukgnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_
model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member',shelved_at='2025-11-22T09:36:54.480378',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fd10acf7-7116-43c7-8e62-b2aed4e8d629'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:05Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=cf5e117a-f203-4c8f-b795-01fb355ca5e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.502 253665 DEBUG nova.network.os_vif_util [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.502 253665 DEBUG nova.network.os_vif_util [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.503 253665 DEBUG os_vif [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.504 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.504 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.505 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.507 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.508 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc027d879-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.508 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc027d879-91, col_values=(('external_ids', {'iface-id': 'c027d879-91b3-497d-9f51-8476006ea65c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:42:5a', 'vm-uuid': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.510 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:11 np0005532048 NetworkManager[48920]: <info>  [1763804231.5111] manager: (tapc027d879-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/491)
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.514 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.520 253665 INFO os_vif [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91')#033[00m
Nov 22 04:37:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:11Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:83:85 10.100.0.8
Nov 22 04:37:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:11Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:83:85 10.100.0.8
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.588 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.588 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.588 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] No VIF found with MAC fa:16:3e:d9:42:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.589 253665 INFO nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Using config drive#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.615 253665 DEBUG nova.storage.rbd_utils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.641 253665 DEBUG nova.objects.instance [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'ec2_ids' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:11 np0005532048 nova_compute[253661]: 2025-11-22 09:37:11.699 253665 DEBUG nova.objects.instance [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'keypairs' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 230 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.6 MiB/s wr, 166 op/s
Nov 22 04:37:12 np0005532048 nova_compute[253661]: 2025-11-22 09:37:12.296 253665 INFO nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Creating config drive at /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config#033[00m
Nov 22 04:37:12 np0005532048 nova_compute[253661]: 2025-11-22 09:37:12.304 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0djoiahi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:37:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1323851533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:37:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:37:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1323851533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:37:12 np0005532048 nova_compute[253661]: 2025-11-22 09:37:12.471 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0djoiahi" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:12 np0005532048 nova_compute[253661]: 2025-11-22 09:37:12.510 253665 DEBUG nova.storage.rbd_utils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] rbd image cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:12 np0005532048 nova_compute[253661]: 2025-11-22 09:37:12.514 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:12 np0005532048 nova_compute[253661]: 2025-11-22 09:37:12.627 253665 DEBUG nova.compute.manager [req-4e4a5350-b86d-42d5-ad29-5b3085c47c6f req-ecd935bb-77c4-4005-bf17-8afede0c6034 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-changed-18df29f5-368d-4b94-ac69-8541de164d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:12 np0005532048 nova_compute[253661]: 2025-11-22 09:37:12.629 253665 DEBUG nova.compute.manager [req-4e4a5350-b86d-42d5-ad29-5b3085c47c6f req-ecd935bb-77c4-4005-bf17-8afede0c6034 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Refreshing instance network info cache due to event network-changed-18df29f5-368d-4b94-ac69-8541de164d02. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:37:12 np0005532048 nova_compute[253661]: 2025-11-22 09:37:12.630 253665 DEBUG oslo_concurrency.lockutils [req-4e4a5350-b86d-42d5-ad29-5b3085c47c6f req-ecd935bb-77c4-4005-bf17-8afede0c6034 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:12 np0005532048 nova_compute[253661]: 2025-11-22 09:37:12.630 253665 DEBUG oslo_concurrency.lockutils [req-4e4a5350-b86d-42d5-ad29-5b3085c47c6f req-ecd935bb-77c4-4005-bf17-8afede0c6034 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:12 np0005532048 nova_compute[253661]: 2025-11-22 09:37:12.630 253665 DEBUG nova.network.neutron [req-4e4a5350-b86d-42d5-ad29-5b3085c47c6f req-ecd935bb-77c4-4005-bf17-8afede0c6034 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Refreshing network info cache for port 18df29f5-368d-4b94-ac69-8541de164d02 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:37:12 np0005532048 nova_compute[253661]: 2025-11-22 09:37:12.698 253665 DEBUG oslo_concurrency.processutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config cf5e117a-f203-4c8f-b795-01fb355ca5e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:12 np0005532048 nova_compute[253661]: 2025-11-22 09:37:12.699 253665 INFO nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Deleting local config drive /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8/disk.config because it was imported into RBD.#033[00m
Nov 22 04:37:12 np0005532048 NetworkManager[48920]: <info>  [1763804232.7550] manager: (tapc027d879-91): new Tun device (/org/freedesktop/NetworkManager/Devices/492)
Nov 22 04:37:12 np0005532048 kernel: tapc027d879-91: entered promiscuous mode
Nov 22 04:37:12 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:12Z|01186|binding|INFO|Claiming lport c027d879-91b3-497d-9f51-8476006ea65c for this chassis.
Nov 22 04:37:12 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:12Z|01187|binding|INFO|c027d879-91b3-497d-9f51-8476006ea65c: Claiming fa:16:3e:d9:42:5a 10.100.0.3
Nov 22 04:37:12 np0005532048 nova_compute[253661]: 2025-11-22 09:37:12.759 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.764 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:42:5a 10.100.0.3'], port_security=['fa:16:3e:d9:42:5a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a990966c-0851-457f-bdd5-27cf73032674', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31947dfacfc450ba028c42968f103b2', 'neutron:revision_number': '7', 'neutron:security_group_ids': '89642540-7944-41ba-8ed6-91045af1b213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bafabe2a-ec0e-41bf-bad4-b88fdf9f208a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c027d879-91b3-497d-9f51-8476006ea65c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:37:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.766 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c027d879-91b3-497d-9f51-8476006ea65c in datapath a990966c-0851-457f-bdd5-27cf73032674 bound to our chassis#033[00m
Nov 22 04:37:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.767 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a990966c-0851-457f-bdd5-27cf73032674#033[00m
Nov 22 04:37:12 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:12Z|01188|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c ovn-installed in OVS
Nov 22 04:37:12 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:12Z|01189|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c up in Southbound
Nov 22 04:37:12 np0005532048 nova_compute[253661]: 2025-11-22 09:37:12.781 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.785 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1dfecbed-0a29-43bf-85a0-f61386ec631d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.786 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa990966c-01 in ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:37:12 np0005532048 nova_compute[253661]: 2025-11-22 09:37:12.785 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.788 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa990966c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:37:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.788 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2212f283-8008-478b-9473-f2ee1a0cb942]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.789 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fe093ea9-6950-4530-a44c-6b4c8820876f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:12 np0005532048 systemd-machined[215941]: New machine qemu-145-instance-0000006d.
Nov 22 04:37:12 np0005532048 systemd[1]: Started Virtual Machine qemu-145-instance-0000006d.
Nov 22 04:37:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.801 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[67cf49f5-230f-44d6-98df-680738155f50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:12 np0005532048 systemd-udevd[373865]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:37:12 np0005532048 NetworkManager[48920]: <info>  [1763804232.8211] device (tapc027d879-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:37:12 np0005532048 NetworkManager[48920]: <info>  [1763804232.8221] device (tapc027d879-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:37:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.826 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fce5ba44-13c1-4e5e-a9d3-0e4fc72106aa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.860 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1d864bdf-1cfe-4740-a03b-e6da5355fd47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:12 np0005532048 NetworkManager[48920]: <info>  [1763804232.8713] manager: (tapa990966c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/493)
Nov 22 04:37:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.872 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[638a7553-9b6e-413f-ba24-8065ada9fb62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:12 np0005532048 systemd-udevd[373868]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:37:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.914 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bde5e18c-f17e-49e1-9bc9-4da1d5f07e8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.922 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[84120098-1289-452a-817d-73450e471d37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:12 np0005532048 NetworkManager[48920]: <info>  [1763804232.9485] device (tapa990966c-00): carrier: link connected
Nov 22 04:37:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.959 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a0311d-f0a9-4e94-b93c-fafdf42857e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:12.983 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8945e155-649c-4017-b2d8-727fde2885d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa990966c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6f:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 345], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708368, 'reachable_time': 34879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373896, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.001 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e79a85a6-3bdc-4e10-9828-ff017c05c7c1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:6fb9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 708368, 'tstamp': 708368}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373897, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.022 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dbdac8aa-a687-4e86-bffa-8a62c8e602f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa990966c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6f:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 345], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708368, 'reachable_time': 34879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373898, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.062 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d9fbe12f-91b1-4c16-9483-fcbc7626c274]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.149 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5f67e159-6e29-4023-8a72-e0daffffb899]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.151 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa990966c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.151 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.152 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa990966c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:13 np0005532048 nova_compute[253661]: 2025-11-22 09:37:13.155 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:13 np0005532048 kernel: tapa990966c-00: entered promiscuous mode
Nov 22 04:37:13 np0005532048 NetworkManager[48920]: <info>  [1763804233.1569] manager: (tapa990966c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/494)
Nov 22 04:37:13 np0005532048 nova_compute[253661]: 2025-11-22 09:37:13.157 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.165 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa990966c-00, col_values=(('external_ids', {'iface-id': '97798f16-a2eb-434e-aad3-3ece954bb8e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:13Z|01190|binding|INFO|Releasing lport 97798f16-a2eb-434e-aad3-3ece954bb8e7 from this chassis (sb_readonly=0)
Nov 22 04:37:13 np0005532048 nova_compute[253661]: 2025-11-22 09:37:13.166 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:13 np0005532048 nova_compute[253661]: 2025-11-22 09:37:13.167 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.169 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.170 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dabf6043-e8eb-4932-97aa-3c5494a74dda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.172 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-a990966c-0851-457f-bdd5-27cf73032674
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID a990966c-0851-457f-bdd5-27cf73032674
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:37:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:13.173 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'env', 'PROCESS_TAG=haproxy-a990966c-0851-457f-bdd5-27cf73032674', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a990966c-0851-457f-bdd5-27cf73032674.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:37:13 np0005532048 nova_compute[253661]: 2025-11-22 09:37:13.186 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:13 np0005532048 nova_compute[253661]: 2025-11-22 09:37:13.527 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804233.5259013, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:13 np0005532048 nova_compute[253661]: 2025-11-22 09:37:13.527 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Started (Lifecycle Event)#033[00m
Nov 22 04:37:13 np0005532048 nova_compute[253661]: 2025-11-22 09:37:13.548 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:13 np0005532048 nova_compute[253661]: 2025-11-22 09:37:13.552 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804233.526357, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:13 np0005532048 nova_compute[253661]: 2025-11-22 09:37:13.553 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:37:13 np0005532048 nova_compute[253661]: 2025-11-22 09:37:13.567 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:13 np0005532048 nova_compute[253661]: 2025-11-22 09:37:13.572 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:37:13 np0005532048 nova_compute[253661]: 2025-11-22 09:37:13.591 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:37:13 np0005532048 podman[373972]: 2025-11-22 09:37:13.614523069 +0000 UTC m=+0.057236445 container create dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:37:13 np0005532048 systemd[1]: Started libpod-conmon-dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e.scope.
Nov 22 04:37:13 np0005532048 podman[373972]: 2025-11-22 09:37:13.585783353 +0000 UTC m=+0.028496759 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:37:13 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:37:13 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52479717ef3cd88457aeffef0253fbcadccac9e1b5c6c7b801a1ce1dcf90c28c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:13 np0005532048 podman[373972]: 2025-11-22 09:37:13.727823576 +0000 UTC m=+0.170536962 container init dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 22 04:37:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:37:13 np0005532048 podman[373972]: 2025-11-22 09:37:13.735796024 +0000 UTC m=+0.178509400 container start dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 04:37:13 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[373987]: [NOTICE]   (373992) : New worker (373994) forked
Nov 22 04:37:13 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[373987]: [NOTICE]   (373992) : Loading success.
Nov 22 04:37:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 325 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.4 MiB/s wr, 261 op/s
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.088 253665 DEBUG nova.network.neutron [req-4e4a5350-b86d-42d5-ad29-5b3085c47c6f req-ecd935bb-77c4-4005-bf17-8afede0c6034 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updated VIF entry in instance network info cache for port 18df29f5-368d-4b94-ac69-8541de164d02. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.089 253665 DEBUG nova.network.neutron [req-4e4a5350-b86d-42d5-ad29-5b3085c47c6f req-ecd935bb-77c4-4005-bf17-8afede0c6034 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updating instance_info_cache with network_info: [{"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.115 253665 DEBUG oslo_concurrency.lockutils [req-4e4a5350-b86d-42d5-ad29-5b3085c47c6f req-ecd935bb-77c4-4005-bf17-8afede0c6034 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.310 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804219.3091772, 750659ed-67e0-44d4-a5b3-b8d0029ffa7e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.311 253665 INFO nova.compute.manager [-] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.329 253665 DEBUG nova.compute.manager [None req-ea3ec371-849e-43a4-98b9-972eba9a3aa7 - - - - - -] [instance: 750659ed-67e0-44d4-a5b3-b8d0029ffa7e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.851 253665 DEBUG nova.compute.manager [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.851 253665 DEBUG oslo_concurrency.lockutils [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.852 253665 DEBUG oslo_concurrency.lockutils [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.852 253665 DEBUG oslo_concurrency.lockutils [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.852 253665 DEBUG nova.compute.manager [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Processing event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.852 253665 DEBUG nova.compute.manager [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.852 253665 DEBUG oslo_concurrency.lockutils [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.853 253665 DEBUG oslo_concurrency.lockutils [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.853 253665 DEBUG oslo_concurrency.lockutils [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.853 253665 DEBUG nova.compute.manager [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.853 253665 WARNING nova.compute.manager [req-e434f709-8690-4000-b53e-b3e007333225 req-ce3a2e69-4b32-4b7d-8c2b-380d301089d5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state shelved_offloaded and task_state spawning.#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.854 253665 DEBUG nova.compute.manager [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.857 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804234.8574965, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.858 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.860 253665 DEBUG nova.virt.libvirt.driver [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.863 253665 INFO nova.virt.libvirt.driver [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance spawned successfully.#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.874 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.879 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:37:14 np0005532048 nova_compute[253661]: 2025-11-22 09:37:14.896 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:37:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 325 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.0 MiB/s wr, 211 op/s
Nov 22 04:37:16 np0005532048 nova_compute[253661]: 2025-11-22 09:37:16.097 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Nov 22 04:37:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Nov 22 04:37:16 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Nov 22 04:37:16 np0005532048 nova_compute[253661]: 2025-11-22 09:37:16.511 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:16 np0005532048 nova_compute[253661]: 2025-11-22 09:37:16.605 253665 DEBUG nova.compute.manager [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:16 np0005532048 nova_compute[253661]: 2025-11-22 09:37:16.662 253665 DEBUG oslo_concurrency.lockutils [None req-556b6273-8d62-4489-801c-24da31a30ad2 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 11.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 264 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 7.2 MiB/s wr, 315 op/s
Nov 22 04:37:18 np0005532048 nova_compute[253661]: 2025-11-22 09:37:18.721 253665 INFO nova.compute.manager [None req-3abe7d23-ec69-4bd0-97f3-4aee927e6e7b 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Get console output#033[00m
Nov 22 04:37:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:37:18 np0005532048 nova_compute[253661]: 2025-11-22 09:37:18.728 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:37:18 np0005532048 nova_compute[253661]: 2025-11-22 09:37:18.923 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 246 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 7.3 MiB/s wr, 359 op/s
Nov 22 04:37:20 np0005532048 nova_compute[253661]: 2025-11-22 09:37:20.635 253665 DEBUG nova.compute.manager [req-b627e746-96dd-4734-b95e-990c7655246f req-c021c709-9339-401c-8e0b-5fa818b9218d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-changed-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:20 np0005532048 nova_compute[253661]: 2025-11-22 09:37:20.637 253665 DEBUG nova.compute.manager [req-b627e746-96dd-4734-b95e-990c7655246f req-c021c709-9339-401c-8e0b-5fa818b9218d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Refreshing instance network info cache due to event network-changed-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:37:20 np0005532048 nova_compute[253661]: 2025-11-22 09:37:20.638 253665 DEBUG oslo_concurrency.lockutils [req-b627e746-96dd-4734-b95e-990c7655246f req-c021c709-9339-401c-8e0b-5fa818b9218d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:20 np0005532048 nova_compute[253661]: 2025-11-22 09:37:20.639 253665 DEBUG oslo_concurrency.lockutils [req-b627e746-96dd-4734-b95e-990c7655246f req-c021c709-9339-401c-8e0b-5fa818b9218d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:20 np0005532048 nova_compute[253661]: 2025-11-22 09:37:20.640 253665 DEBUG nova.network.neutron [req-b627e746-96dd-4734-b95e-990c7655246f req-c021c709-9339-401c-8e0b-5fa818b9218d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Refreshing network info cache for port 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:37:21 np0005532048 nova_compute[253661]: 2025-11-22 09:37:21.099 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:21 np0005532048 podman[374003]: 2025-11-22 09:37:21.380437585 +0000 UTC m=+0.068818262 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 04:37:21 np0005532048 podman[374004]: 2025-11-22 09:37:21.383132592 +0000 UTC m=+0.070015672 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 04:37:21 np0005532048 nova_compute[253661]: 2025-11-22 09:37:21.515 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 246 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 5.7 MiB/s wr, 283 op/s
Nov 22 04:37:22 np0005532048 nova_compute[253661]: 2025-11-22 09:37:22.021 253665 DEBUG nova.objects.instance [None req-12b23e72-c768-4231-a75f-0cbfa3688f84 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:22 np0005532048 nova_compute[253661]: 2025-11-22 09:37:22.052 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804242.0520916, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:22 np0005532048 nova_compute[253661]: 2025-11-22 09:37:22.053 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:37:22 np0005532048 nova_compute[253661]: 2025-11-22 09:37:22.080 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:22 np0005532048 nova_compute[253661]: 2025-11-22 09:37:22.084 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:37:22 np0005532048 nova_compute[253661]: 2025-11-22 09:37:22.105 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 22 04:37:22 np0005532048 nova_compute[253661]: 2025-11-22 09:37:22.119 253665 DEBUG nova.network.neutron [req-b627e746-96dd-4734-b95e-990c7655246f req-c021c709-9339-401c-8e0b-5fa818b9218d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Updated VIF entry in instance network info cache for port 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:37:22 np0005532048 nova_compute[253661]: 2025-11-22 09:37:22.120 253665 DEBUG nova.network.neutron [req-b627e746-96dd-4734-b95e-990c7655246f req-c021c709-9339-401c-8e0b-5fa818b9218d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Updating instance_info_cache with network_info: [{"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:22 np0005532048 nova_compute[253661]: 2025-11-22 09:37:22.140 253665 DEBUG oslo_concurrency.lockutils [req-b627e746-96dd-4734-b95e-990c7655246f req-c021c709-9339-401c-8e0b-5fa818b9218d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.255 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:37:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.256 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:37:22 np0005532048 nova_compute[253661]: 2025-11-22 09:37:22.258 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:22 np0005532048 kernel: tapc027d879-91 (unregistering): left promiscuous mode
Nov 22 04:37:22 np0005532048 NetworkManager[48920]: <info>  [1763804242.5960] device (tapc027d879-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:37:22 np0005532048 nova_compute[253661]: 2025-11-22 09:37:22.604 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:22Z|01191|binding|INFO|Releasing lport c027d879-91b3-497d-9f51-8476006ea65c from this chassis (sb_readonly=0)
Nov 22 04:37:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:22Z|01192|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c down in Southbound
Nov 22 04:37:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:22Z|01193|binding|INFO|Removing iface tapc027d879-91 ovn-installed in OVS
Nov 22 04:37:22 np0005532048 nova_compute[253661]: 2025-11-22 09:37:22.608 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.612 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:42:5a 10.100.0.3'], port_security=['fa:16:3e:d9:42:5a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a990966c-0851-457f-bdd5-27cf73032674', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31947dfacfc450ba028c42968f103b2', 'neutron:revision_number': '9', 'neutron:security_group_ids': '89642540-7944-41ba-8ed6-91045af1b213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bafabe2a-ec0e-41bf-bad4-b88fdf9f208a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c027d879-91b3-497d-9f51-8476006ea65c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:37:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.615 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c027d879-91b3-497d-9f51-8476006ea65c in datapath a990966c-0851-457f-bdd5-27cf73032674 unbound from our chassis#033[00m
Nov 22 04:37:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.617 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a990966c-0851-457f-bdd5-27cf73032674, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:37:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.619 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[74921fff-9070-4e68-9676-c3a218a8b86a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.620 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 namespace which is not needed anymore#033[00m
Nov 22 04:37:22 np0005532048 nova_compute[253661]: 2025-11-22 09:37:22.626 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:22 np0005532048 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Nov 22 04:37:22 np0005532048 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d0000006d.scope: Consumed 8.306s CPU time.
Nov 22 04:37:22 np0005532048 systemd-machined[215941]: Machine qemu-145-instance-0000006d terminated.
Nov 22 04:37:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:37:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:37:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:37:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:37:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:37:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:37:22 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[373987]: [NOTICE]   (373992) : haproxy version is 2.8.14-c23fe91
Nov 22 04:37:22 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[373987]: [NOTICE]   (373992) : path to executable is /usr/sbin/haproxy
Nov 22 04:37:22 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[373987]: [WARNING]  (373992) : Exiting Master process...
Nov 22 04:37:22 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[373987]: [ALERT]    (373992) : Current worker (373994) exited with code 143 (Terminated)
Nov 22 04:37:22 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[373987]: [WARNING]  (373992) : All workers exited. Exiting... (0)
Nov 22 04:37:22 np0005532048 systemd[1]: libpod-dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e.scope: Deactivated successfully.
Nov 22 04:37:22 np0005532048 podman[374067]: 2025-11-22 09:37:22.786146292 +0000 UTC m=+0.056511726 container died dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:37:22 np0005532048 nova_compute[253661]: 2025-11-22 09:37:22.790 253665 DEBUG nova.compute.manager [None req-12b23e72-c768-4231-a75f-0cbfa3688f84 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:22 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e-userdata-shm.mount: Deactivated successfully.
Nov 22 04:37:22 np0005532048 systemd[1]: var-lib-containers-storage-overlay-52479717ef3cd88457aeffef0253fbcadccac9e1b5c6c7b801a1ce1dcf90c28c-merged.mount: Deactivated successfully.
Nov 22 04:37:22 np0005532048 podman[374067]: 2025-11-22 09:37:22.843710743 +0000 UTC m=+0.114076167 container cleanup dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:37:22 np0005532048 systemd[1]: libpod-conmon-dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e.scope: Deactivated successfully.
Nov 22 04:37:22 np0005532048 podman[374107]: 2025-11-22 09:37:22.9163797 +0000 UTC m=+0.046010454 container remove dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:37:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.922 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5017013d-f6c0-470d-89b3-91b158ad28e9]: (4, ('Sat Nov 22 09:37:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 (dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e)\ndff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e\nSat Nov 22 09:37:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 (dff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e)\ndff2e638db8c7eb730bbe30a3b9da665287469ab837aef5debb51a8b12af0c1e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.924 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3c5cfe-4f62-48bd-ab60-483138c3e20e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.926 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa990966c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:22 np0005532048 kernel: tapa990966c-00: left promiscuous mode
Nov 22 04:37:22 np0005532048 nova_compute[253661]: 2025-11-22 09:37:22.928 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:22 np0005532048 nova_compute[253661]: 2025-11-22 09:37:22.946 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.950 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd44b11-aba0-473f-9812-86418ab575c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.968 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[93a4568c-0e46-4380-87b2-59d56f210cce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:22.978 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[191614e5-4844-4d4e-a6b9-19992ce650b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:23.000 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[20746f5c-5c62-4406-a3df-06971e4f0e38]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708359, 'reachable_time': 43102, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374127, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:23 np0005532048 systemd[1]: run-netns-ovnmeta\x2da990966c\x2d0851\x2d457f\x2dbdd5\x2d27cf73032674.mount: Deactivated successfully.
Nov 22 04:37:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:23.005 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:37:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:23.005 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[843553cf-2170-482c-a65f-a42432f119e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:23 np0005532048 nova_compute[253661]: 2025-11-22 09:37:23.268 253665 DEBUG nova.compute.manager [req-21784e28-9c1a-4aff-8838-ded556078261 req-7f6ed884-d696-4096-938b-fe8e22ea7c66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:23 np0005532048 nova_compute[253661]: 2025-11-22 09:37:23.269 253665 DEBUG oslo_concurrency.lockutils [req-21784e28-9c1a-4aff-8838-ded556078261 req-7f6ed884-d696-4096-938b-fe8e22ea7c66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:23 np0005532048 nova_compute[253661]: 2025-11-22 09:37:23.269 253665 DEBUG oslo_concurrency.lockutils [req-21784e28-9c1a-4aff-8838-ded556078261 req-7f6ed884-d696-4096-938b-fe8e22ea7c66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:23 np0005532048 nova_compute[253661]: 2025-11-22 09:37:23.269 253665 DEBUG oslo_concurrency.lockutils [req-21784e28-9c1a-4aff-8838-ded556078261 req-7f6ed884-d696-4096-938b-fe8e22ea7c66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:23 np0005532048 nova_compute[253661]: 2025-11-22 09:37:23.270 253665 DEBUG nova.compute.manager [req-21784e28-9c1a-4aff-8838-ded556078261 req-7f6ed884-d696-4096-938b-fe8e22ea7c66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:37:23 np0005532048 nova_compute[253661]: 2025-11-22 09:37:23.270 253665 WARNING nova.compute.manager [req-21784e28-9c1a-4aff-8838-ded556078261 req-7f6ed884-d696-4096-938b-fe8e22ea7c66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state suspended and task_state None.#033[00m
Nov 22 04:37:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:23Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:90:34:a1 10.100.0.7
Nov 22 04:37:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:23Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:90:34:a1 10.100.0.7
Nov 22 04:37:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:37:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Nov 22 04:37:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Nov 22 04:37:23 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Nov 22 04:37:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 264 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.5 MiB/s wr, 184 op/s
Nov 22 04:37:24 np0005532048 nova_compute[253661]: 2025-11-22 09:37:24.199 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:24 np0005532048 nova_compute[253661]: 2025-11-22 09:37:24.537 253665 INFO nova.compute.manager [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Resuming#033[00m
Nov 22 04:37:24 np0005532048 nova_compute[253661]: 2025-11-22 09:37:24.540 253665 DEBUG nova.objects.instance [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'flavor' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:24 np0005532048 nova_compute[253661]: 2025-11-22 09:37:24.577 253665 DEBUG oslo_concurrency.lockutils [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:24 np0005532048 nova_compute[253661]: 2025-11-22 09:37:24.578 253665 DEBUG oslo_concurrency.lockutils [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:24 np0005532048 nova_compute[253661]: 2025-11-22 09:37:24.578 253665 DEBUG nova.network.neutron [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:37:25 np0005532048 nova_compute[253661]: 2025-11-22 09:37:25.395 253665 DEBUG nova.compute.manager [req-b28acf1c-16d9-4dbd-b1f6-9048debf8b90 req-7da8ed5f-7619-4139-9a85-a3bd0597d68a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:25 np0005532048 nova_compute[253661]: 2025-11-22 09:37:25.396 253665 DEBUG oslo_concurrency.lockutils [req-b28acf1c-16d9-4dbd-b1f6-9048debf8b90 req-7da8ed5f-7619-4139-9a85-a3bd0597d68a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:25 np0005532048 nova_compute[253661]: 2025-11-22 09:37:25.397 253665 DEBUG oslo_concurrency.lockutils [req-b28acf1c-16d9-4dbd-b1f6-9048debf8b90 req-7da8ed5f-7619-4139-9a85-a3bd0597d68a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:25 np0005532048 nova_compute[253661]: 2025-11-22 09:37:25.397 253665 DEBUG oslo_concurrency.lockutils [req-b28acf1c-16d9-4dbd-b1f6-9048debf8b90 req-7da8ed5f-7619-4139-9a85-a3bd0597d68a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:25 np0005532048 nova_compute[253661]: 2025-11-22 09:37:25.397 253665 DEBUG nova.compute.manager [req-b28acf1c-16d9-4dbd-b1f6-9048debf8b90 req-7da8ed5f-7619-4139-9a85-a3bd0597d68a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:37:25 np0005532048 nova_compute[253661]: 2025-11-22 09:37:25.398 253665 WARNING nova.compute.manager [req-b28acf1c-16d9-4dbd-b1f6-9048debf8b90 req-7da8ed5f-7619-4139-9a85-a3bd0597d68a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state suspended and task_state resuming.#033[00m
Nov 22 04:37:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 264 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 151 op/s
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.099 253665 DEBUG nova.network.neutron [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.102 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.111 253665 DEBUG oslo_concurrency.lockutils [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.116 253665 DEBUG nova.virt.libvirt.vif [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-627235813',display_name='tempest-ServersNegativeTestJSON-server-627235813',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-627235813',id=109,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:37:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-6hjukgnp',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_m
odel='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:37:22Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=cf5e117a-f203-4c8f-b795-01fb355ca5e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.116 253665 DEBUG nova.network.os_vif_util [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.117 253665 DEBUG nova.network.os_vif_util [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.118 253665 DEBUG os_vif [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.119 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.119 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.119 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.121 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.122 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc027d879-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.122 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc027d879-91, col_values=(('external_ids', {'iface-id': 'c027d879-91b3-497d-9f51-8476006ea65c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:42:5a', 'vm-uuid': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.122 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.123 253665 INFO os_vif [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91')#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.149 253665 DEBUG nova.objects.instance [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'numa_topology' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:26 np0005532048 kernel: tapc027d879-91: entered promiscuous mode
Nov 22 04:37:26 np0005532048 NetworkManager[48920]: <info>  [1763804246.2393] manager: (tapc027d879-91): new Tun device (/org/freedesktop/NetworkManager/Devices/495)
Nov 22 04:37:26 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:26Z|01194|binding|INFO|Claiming lport c027d879-91b3-497d-9f51-8476006ea65c for this chassis.
Nov 22 04:37:26 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:26Z|01195|binding|INFO|c027d879-91b3-497d-9f51-8476006ea65c: Claiming fa:16:3e:d9:42:5a 10.100.0.3
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.239 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.249 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:42:5a 10.100.0.3'], port_security=['fa:16:3e:d9:42:5a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a990966c-0851-457f-bdd5-27cf73032674', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31947dfacfc450ba028c42968f103b2', 'neutron:revision_number': '10', 'neutron:security_group_ids': '89642540-7944-41ba-8ed6-91045af1b213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bafabe2a-ec0e-41bf-bad4-b88fdf9f208a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c027d879-91b3-497d-9f51-8476006ea65c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.250 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c027d879-91b3-497d-9f51-8476006ea65c in datapath a990966c-0851-457f-bdd5-27cf73032674 bound to our chassis#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.252 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a990966c-0851-457f-bdd5-27cf73032674#033[00m
Nov 22 04:37:26 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:26Z|01196|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c ovn-installed in OVS
Nov 22 04:37:26 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:26Z|01197|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c up in Southbound
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.259 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.264 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.268 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4829b879-3011-4f84-83c3-742848381449]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.269 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa990966c-01 in ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.271 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa990966c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.271 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6aad753f-4a28-4d1d-9a5d-7e3bba278835]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.272 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b29e005a-0000-4b9c-bdf5-043368979937]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.290 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[f3090ae3-28f6-4c8e-9e9f-47c403445df6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:26 np0005532048 systemd-udevd[374144]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:37:26 np0005532048 systemd-machined[215941]: New machine qemu-146-instance-0000006d.
Nov 22 04:37:26 np0005532048 NetworkManager[48920]: <info>  [1763804246.3095] device (tapc027d879-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:37:26 np0005532048 NetworkManager[48920]: <info>  [1763804246.3102] device (tapc027d879-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:37:26 np0005532048 systemd[1]: Started Virtual Machine qemu-146-instance-0000006d.
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.316 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c7219807-8426-409c-a03a-30e8bccd02e3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.358 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[250d3a81-dba1-4ec1-9b28-a00165bf79bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:26 np0005532048 NetworkManager[48920]: <info>  [1763804246.3658] manager: (tapa990966c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/496)
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.364 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[10bd31c3-d871-4f4e-9e55-22fcbbe35515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:26 np0005532048 systemd-udevd[374147]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.402 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[14b3837a-0e12-48bf-b5de-e6ec45990386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.405 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[80e87f61-0b13-4246-9143-f185246eb731]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:26 np0005532048 NetworkManager[48920]: <info>  [1763804246.4310] device (tapa990966c-00): carrier: link connected
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.436 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd9a486-3286-406e-a086-1ff4513e7da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:26 np0005532048 podman[374148]: 2025-11-22 09:37:26.438512942 +0000 UTC m=+0.103496765 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.460 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b829c7-7ec1-4503-be22-e7a11c3cb49d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa990966c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6f:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 348], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709716, 'reachable_time': 16973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374200, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.488 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1f9c7e-2beb-41fc-a187-98485fa72b6a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:6fb9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 709716, 'tstamp': 709716}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374201, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.509 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd378c5-98e8-4db8-ac89-487938ee26d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa990966c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:6f:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 348], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709716, 'reachable_time': 16973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374209, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.545 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0d612c-f544-4c6c-a4d3-5976cb382164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.620 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[84c31a5f-0881-46a3-ae59-3f96eecfd6d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.622 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa990966c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.623 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.623 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa990966c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.625 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:26 np0005532048 kernel: tapa990966c-00: entered promiscuous mode
Nov 22 04:37:26 np0005532048 NetworkManager[48920]: <info>  [1763804246.6266] manager: (tapa990966c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/497)
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.628 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.630 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa990966c-00, col_values=(('external_ids', {'iface-id': '97798f16-a2eb-434e-aad3-3ece954bb8e7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.631 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:26 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:26Z|01198|binding|INFO|Releasing lport 97798f16-a2eb-434e-aad3-3ece954bb8e7 from this chassis (sb_readonly=0)
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.646 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.648 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.649 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.651 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[38d9d1ef-bc1e-447c-8638-545dc2acd651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.651 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-a990966c-0851-457f-bdd5-27cf73032674
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/a990966c-0851-457f-bdd5-27cf73032674.pid.haproxy
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID a990966c-0851-457f-bdd5-27cf73032674
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:37:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:26.653 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'env', 'PROCESS_TAG=haproxy-a990966c-0851-457f-bdd5-27cf73032674', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a990966c-0851-457f-bdd5-27cf73032674.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.805 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for cf5e117a-f203-4c8f-b795-01fb355ca5e8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.805 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804246.8042178, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.806 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Started (Lifecycle Event)#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.820 253665 DEBUG nova.compute.manager [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.821 253665 DEBUG nova.objects.instance [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.841 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.848 253665 INFO nova.virt.libvirt.driver [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance running successfully.#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.850 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:37:26 np0005532048 virtqemud[254229]: argument unsupported: QEMU guest agent is not configured
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.854 253665 DEBUG nova.virt.libvirt.guest [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.854 253665 DEBUG nova.compute.manager [None req-c8ce2234-5ce3-40e9-9ac3-d61134efdbe5 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.866 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.867 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804246.8191342, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.867 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.883 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:26 np0005532048 nova_compute[253661]: 2025-11-22 09:37:26.893 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:37:27 np0005532048 podman[374276]: 2025-11-22 09:37:27.074729943 +0000 UTC m=+0.049830970 container create 8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 04:37:27 np0005532048 systemd[1]: Started libpod-conmon-8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9.scope.
Nov 22 04:37:27 np0005532048 podman[374276]: 2025-11-22 09:37:27.050093361 +0000 UTC m=+0.025194408 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:37:27 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:37:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03312f80a55e34f62a33dfaed471bc61dca49a6488e5599296b3a6b417800e8a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:27 np0005532048 podman[374276]: 2025-11-22 09:37:27.17750147 +0000 UTC m=+0.152602517 container init 8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 22 04:37:27 np0005532048 podman[374276]: 2025-11-22 09:37:27.184526974 +0000 UTC m=+0.159628001 container start 8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 04:37:27 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[374292]: [NOTICE]   (374296) : New worker (374298) forked
Nov 22 04:37:27 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[374292]: [NOTICE]   (374296) : Loading success.
Nov 22 04:37:27 np0005532048 nova_compute[253661]: 2025-11-22 09:37:27.872 253665 DEBUG nova.compute.manager [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:27 np0005532048 nova_compute[253661]: 2025-11-22 09:37:27.874 253665 DEBUG oslo_concurrency.lockutils [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:27 np0005532048 nova_compute[253661]: 2025-11-22 09:37:27.874 253665 DEBUG oslo_concurrency.lockutils [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:27 np0005532048 nova_compute[253661]: 2025-11-22 09:37:27.874 253665 DEBUG oslo_concurrency.lockutils [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:27 np0005532048 nova_compute[253661]: 2025-11-22 09:37:27.875 253665 DEBUG nova.compute.manager [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:37:27 np0005532048 nova_compute[253661]: 2025-11-22 09:37:27.875 253665 WARNING nova.compute.manager [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state active and task_state None.#033[00m
Nov 22 04:37:27 np0005532048 nova_compute[253661]: 2025-11-22 09:37:27.875 253665 DEBUG nova.compute.manager [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:27 np0005532048 nova_compute[253661]: 2025-11-22 09:37:27.876 253665 DEBUG oslo_concurrency.lockutils [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:27 np0005532048 nova_compute[253661]: 2025-11-22 09:37:27.876 253665 DEBUG oslo_concurrency.lockutils [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:27 np0005532048 nova_compute[253661]: 2025-11-22 09:37:27.876 253665 DEBUG oslo_concurrency.lockutils [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:27 np0005532048 nova_compute[253661]: 2025-11-22 09:37:27.876 253665 DEBUG nova.compute.manager [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:37:27 np0005532048 nova_compute[253661]: 2025-11-22 09:37:27.877 253665 WARNING nova.compute.manager [req-e8a05d70-2cab-423d-86a3-bf3b9f629e94 req-4210f169-ea0c-4fe1-bafc-324ec3d1d92d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state active and task_state None.#033[00m
Nov 22 04:37:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 278 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.5 MiB/s wr, 120 op/s
Nov 22 04:37:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:27.980 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:27.981 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:27.982 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:28 np0005532048 nova_compute[253661]: 2025-11-22 09:37:28.541 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "d2f5b215-3a41-451c-8ad8-68b17c96a678" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:28 np0005532048 nova_compute[253661]: 2025-11-22 09:37:28.544 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:28 np0005532048 nova_compute[253661]: 2025-11-22 09:37:28.558 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "117927df-3c9e-4609-b5ba-dc3937b9339d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:28 np0005532048 nova_compute[253661]: 2025-11-22 09:37:28.559 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:28 np0005532048 nova_compute[253661]: 2025-11-22 09:37:28.579 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:37:28 np0005532048 nova_compute[253661]: 2025-11-22 09:37:28.582 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:37:28 np0005532048 nova_compute[253661]: 2025-11-22 09:37:28.662 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:28 np0005532048 nova_compute[253661]: 2025-11-22 09:37:28.663 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:28 np0005532048 nova_compute[253661]: 2025-11-22 09:37:28.666 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:28 np0005532048 nova_compute[253661]: 2025-11-22 09:37:28.672 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:37:28 np0005532048 nova_compute[253661]: 2025-11-22 09:37:28.673 253665 INFO nova.compute.claims [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:37:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:37:28 np0005532048 nova_compute[253661]: 2025-11-22 09:37:28.839 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:37:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/535357608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.314 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.323 253665 DEBUG nova.compute.provider_tree [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.341 253665 DEBUG nova.scheduler.client.report [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.365 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.367 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.369 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.383 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.385 253665 INFO nova.compute.claims [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.462 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.463 253665 DEBUG nova.network.neutron [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.497 253665 INFO nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.521 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.614 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.616 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.617 253665 INFO nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Creating image(s)#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.641 253665 DEBUG nova.storage.rbd_utils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image d2f5b215-3a41-451c-8ad8-68b17c96a678_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.671 253665 DEBUG nova.storage.rbd_utils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image d2f5b215-3a41-451c-8ad8-68b17c96a678_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.696 253665 DEBUG nova.storage.rbd_utils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image d2f5b215-3a41-451c-8ad8-68b17c96a678_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.701 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.754 253665 DEBUG nova.policy [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4993d04ad8774a15825d4bea194cd1ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '46d50d652376434585e9da83e40f96bb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.775 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.819 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.821 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.822 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.822 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.848 253665 DEBUG nova.storage.rbd_utils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image d2f5b215-3a41-451c-8ad8-68b17c96a678_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:29 np0005532048 nova_compute[253661]: 2025-11-22 09:37:29.853 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d2f5b215-3a41-451c-8ad8-68b17c96a678_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 279 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.6 MiB/s wr, 79 op/s
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.243 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d2f5b215-3a41-451c-8ad8-68b17c96a678_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:37:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1712136166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.348 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.394 253665 DEBUG nova.storage.rbd_utils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] resizing rbd image d2f5b215-3a41-451c-8ad8-68b17c96a678_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.431 253665 DEBUG nova.compute.provider_tree [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.450 253665 DEBUG nova.scheduler.client.report [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.473 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.476 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.521 253665 DEBUG nova.objects.instance [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'migration_context' on Instance uuid d2f5b215-3a41-451c-8ad8-68b17c96a678 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.526 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.527 253665 DEBUG nova.network.neutron [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.537 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.538 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Ensure instance console log exists: /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.538 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.538 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.539 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.541 253665 INFO nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.570 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.650 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.655 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.655 253665 INFO nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Creating image(s)#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.689 253665 DEBUG nova.storage.rbd_utils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 117927df-3c9e-4609-b5ba-dc3937b9339d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.721 253665 DEBUG nova.storage.rbd_utils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 117927df-3c9e-4609-b5ba-dc3937b9339d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.753 253665 DEBUG nova.storage.rbd_utils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 117927df-3c9e-4609-b5ba-dc3937b9339d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.760 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.820 253665 DEBUG nova.policy [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.824 253665 DEBUG nova.network.neutron [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Successfully created port: 1e21d7ad-a6e7-4649-91f2-612de75fe16f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.870 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.873 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.874 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.874 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.903 253665 DEBUG nova.storage.rbd_utils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 117927df-3c9e-4609-b5ba-dc3937b9339d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:30 np0005532048 nova_compute[253661]: 2025-11-22 09:37:30.909 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 117927df-3c9e-4609-b5ba-dc3937b9339d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.106 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.267 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 117927df-3c9e-4609-b5ba-dc3937b9339d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.354 253665 DEBUG nova.storage.rbd_utils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image 117927df-3c9e-4609-b5ba-dc3937b9339d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.506 253665 DEBUG nova.objects.instance [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid 117927df-3c9e-4609-b5ba-dc3937b9339d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.517 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.526 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.527 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Ensure instance console log exists: /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.528 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.529 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.529 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.866 253665 DEBUG nova.network.neutron [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Successfully updated port: 1e21d7ad-a6e7-4649-91f2-612de75fe16f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.883 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.883 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquired lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.883 253665 DEBUG nova.network.neutron [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:37:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 326 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 4.7 MiB/s wr, 115 op/s
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.946 253665 DEBUG nova.network.neutron [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Successfully created port: 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.989 253665 DEBUG nova.compute.manager [req-0c0bf134-964b-4437-bdf9-887a85c21edf req-a8f96c3e-5804-412d-9c1e-fb85121c780e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Received event network-changed-1e21d7ad-a6e7-4649-91f2-612de75fe16f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.990 253665 DEBUG nova.compute.manager [req-0c0bf134-964b-4437-bdf9-887a85c21edf req-a8f96c3e-5804-412d-9c1e-fb85121c780e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Refreshing instance network info cache due to event network-changed-1e21d7ad-a6e7-4649-91f2-612de75fe16f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:37:31 np0005532048 nova_compute[253661]: 2025-11-22 09:37:31.990 253665 DEBUG oslo_concurrency.lockutils [req-0c0bf134-964b-4437-bdf9-887a85c21edf req-a8f96c3e-5804-412d-9c1e-fb85121c780e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:32 np0005532048 nova_compute[253661]: 2025-11-22 09:37:32.045 253665 DEBUG nova.network.neutron [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:37:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:32.258 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:37:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 372 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 5.5 MiB/s wr, 123 op/s
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.158 253665 DEBUG nova.network.neutron [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Updating instance_info_cache with network_info: [{"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.168 253665 DEBUG nova.network.neutron [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Successfully updated port: 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.198 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.199 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.199 253665 DEBUG nova.network.neutron [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.210 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Releasing lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.210 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Instance network_info: |[{"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.211 253665 DEBUG oslo_concurrency.lockutils [req-0c0bf134-964b-4437-bdf9-887a85c21edf req-a8f96c3e-5804-412d-9c1e-fb85121c780e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.212 253665 DEBUG nova.network.neutron [req-0c0bf134-964b-4437-bdf9-887a85c21edf req-a8f96c3e-5804-412d-9c1e-fb85121c780e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Refreshing network info cache for port 1e21d7ad-a6e7-4649-91f2-612de75fe16f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.215 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Start _get_guest_xml network_info=[{"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.228 253665 WARNING nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.237 253665 DEBUG nova.virt.libvirt.host [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.239 253665 DEBUG nova.virt.libvirt.host [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.243 253665 DEBUG nova.virt.libvirt.host [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.243 253665 DEBUG nova.virt.libvirt.host [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.244 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.244 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.245 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.245 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.246 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.246 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.246 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.247 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.247 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.247 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.248 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.248 253665 DEBUG nova.virt.hardware [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.251 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:34 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:34Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:42:5a 10.100.0.3
Nov 22 04:37:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:37:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4266849916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.732 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.760 253665 DEBUG nova.storage.rbd_utils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image d2f5b215-3a41-451c-8ad8-68b17c96a678_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:34 np0005532048 nova_compute[253661]: 2025-11-22 09:37:34.765 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.002 253665 DEBUG nova.network.neutron [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.129 253665 DEBUG nova.compute.manager [req-f14b9b7e-e869-45f6-bc90-9d50f264d71b req-23169cd2-5672-4078-bc25-58391e0296dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received event network-changed-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.129 253665 DEBUG nova.compute.manager [req-f14b9b7e-e869-45f6-bc90-9d50f264d71b req-23169cd2-5672-4078-bc25-58391e0296dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Refreshing instance network info cache due to event network-changed-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.130 253665 DEBUG oslo_concurrency.lockutils [req-f14b9b7e-e869-45f6-bc90-9d50f264d71b req-23169cd2-5672-4078-bc25-58391e0296dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:37:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/720972578' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.276 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.278 253665 DEBUG nova.virt.libvirt.vif [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:37:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1117485835',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1117485835',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=117,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhEnav/8bmHhlravIj7ZzbNKEW+UMvBgA2sDDDC11ma4Sh8uEn9mVvYdSzBFRFowvU98Jl7d9jrFKpsv67Pj9Xp0jWGCVRbBnzzKhVjFFyGFkc+DH0al99fQPTR1eXa1A==',key_name='tempest-TestSecurityGroupsBasicOps-1955317373',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-nt0g0idi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:29Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=d2f5b215-3a41-451c-8ad8-68b17c96a678,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.278 253665 DEBUG nova.network.os_vif_util [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.279 253665 DEBUG nova.network.os_vif_util [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:0f:ac,bridge_name='br-int',has_traffic_filtering=True,id=1e21d7ad-a6e7-4649-91f2-612de75fe16f,network=Network(37126bdf-684b-42ae-b38f-88d563755df6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e21d7ad-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.280 253665 DEBUG nova.objects.instance [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'pci_devices' on Instance uuid d2f5b215-3a41-451c-8ad8-68b17c96a678 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.294 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  <uuid>d2f5b215-3a41-451c-8ad8-68b17c96a678</uuid>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  <name>instance-00000075</name>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1117485835</nova:name>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:37:34</nova:creationTime>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:37:35 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:        <nova:user uuid="4993d04ad8774a15825d4bea194cd1ca">tempest-TestSecurityGroupsBasicOps-488258979-project-member</nova:user>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:        <nova:project uuid="46d50d652376434585e9da83e40f96bb">tempest-TestSecurityGroupsBasicOps-488258979</nova:project>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:        <nova:port uuid="1e21d7ad-a6e7-4649-91f2-612de75fe16f">
Nov 22 04:37:35 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <entry name="serial">d2f5b215-3a41-451c-8ad8-68b17c96a678</entry>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <entry name="uuid">d2f5b215-3a41-451c-8ad8-68b17c96a678</entry>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d2f5b215-3a41-451c-8ad8-68b17c96a678_disk">
Nov 22 04:37:35 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:37:35 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d2f5b215-3a41-451c-8ad8-68b17c96a678_disk.config">
Nov 22 04:37:35 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:37:35 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:1a:0f:ac"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <target dev="tap1e21d7ad-a6"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678/console.log" append="off"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:37:35 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:37:35 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:37:35 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:37:35 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.294 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Preparing to wait for external event network-vif-plugged-1e21d7ad-a6e7-4649-91f2-612de75fe16f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.295 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.295 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.295 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.296 253665 DEBUG nova.virt.libvirt.vif [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:37:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1117485835',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1117485835',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=117,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhEnav/8bmHhlravIj7ZzbNKEW+UMvBgA2sDDDC11ma4Sh8uEn9mVvYdSzBFRFowvU98Jl7d9jrFKpsv67Pj9Xp0jWGCVRbBnzzKhVjFFyGFkc+DH0al99fQPTR1eXa1A==',key_name='tempest-TestSecurityGroupsBasicOps-1955317373',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-nt0g0idi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:29Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=d2f5b215-3a41-451c-8ad8-68b17c96a678,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.296 253665 DEBUG nova.network.os_vif_util [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.297 253665 DEBUG nova.network.os_vif_util [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:0f:ac,bridge_name='br-int',has_traffic_filtering=True,id=1e21d7ad-a6e7-4649-91f2-612de75fe16f,network=Network(37126bdf-684b-42ae-b38f-88d563755df6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e21d7ad-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.297 253665 DEBUG os_vif [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:0f:ac,bridge_name='br-int',has_traffic_filtering=True,id=1e21d7ad-a6e7-4649-91f2-612de75fe16f,network=Network(37126bdf-684b-42ae-b38f-88d563755df6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e21d7ad-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.298 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.298 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.299 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.302 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.303 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e21d7ad-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.303 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1e21d7ad-a6, col_values=(('external_ids', {'iface-id': '1e21d7ad-a6e7-4649-91f2-612de75fe16f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:0f:ac', 'vm-uuid': 'd2f5b215-3a41-451c-8ad8-68b17c96a678'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.305 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:35 np0005532048 NetworkManager[48920]: <info>  [1763804255.3069] manager: (tap1e21d7ad-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.310 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.316 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.317 253665 INFO os_vif [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:0f:ac,bridge_name='br-int',has_traffic_filtering=True,id=1e21d7ad-a6e7-4649-91f2-612de75fe16f,network=Network(37126bdf-684b-42ae-b38f-88d563755df6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e21d7ad-a6')#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.373 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.374 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.375 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No VIF found with MAC fa:16:3e:1a:0f:ac, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.375 253665 INFO nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Using config drive#033[00m
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.413 253665 DEBUG nova.storage.rbd_utils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image d2f5b215-3a41-451c-8ad8-68b17c96a678_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 372 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 316 KiB/s rd, 4.7 MiB/s wr, 104 op/s
Nov 22 04:37:35 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.994 253665 INFO nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Creating config drive at /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678/disk.config#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:35.999 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpixjxwwlr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.108 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.149 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpixjxwwlr" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.179 253665 DEBUG nova.storage.rbd_utils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image d2f5b215-3a41-451c-8ad8-68b17c96a678_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.184 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678/disk.config d2f5b215-3a41-451c-8ad8-68b17c96a678_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.231 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.232 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.232 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.269 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.270 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.278 253665 DEBUG nova.network.neutron [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Updating instance_info_cache with network_info: [{"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.302 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.302 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Instance network_info: |[{"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.303 253665 DEBUG oslo_concurrency.lockutils [req-f14b9b7e-e869-45f6-bc90-9d50f264d71b req-23169cd2-5672-4078-bc25-58391e0296dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.303 253665 DEBUG nova.network.neutron [req-f14b9b7e-e869-45f6-bc90-9d50f264d71b req-23169cd2-5672-4078-bc25-58391e0296dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Refreshing network info cache for port 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.306 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Start _get_guest_xml network_info=[{"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.313 253665 WARNING nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.318 253665 DEBUG nova.virt.libvirt.host [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.320 253665 DEBUG nova.virt.libvirt.host [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.327 253665 DEBUG nova.virt.libvirt.host [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.327 253665 DEBUG nova.virt.libvirt.host [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.327 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.328 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.328 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.328 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.329 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.329 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.329 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.329 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.329 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.330 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.330 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.330 253665 DEBUG nova.virt.hardware [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.334 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.384 253665 DEBUG oslo_concurrency.processutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678/disk.config d2f5b215-3a41-451c-8ad8-68b17c96a678_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.385 253665 INFO nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Deleting local config drive /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678/disk.config because it was imported into RBD.#033[00m
Nov 22 04:37:36 np0005532048 kernel: tap1e21d7ad-a6: entered promiscuous mode
Nov 22 04:37:36 np0005532048 NetworkManager[48920]: <info>  [1763804256.4691] manager: (tap1e21d7ad-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/499)
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.468 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:36 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:36Z|01199|binding|INFO|Claiming lport 1e21d7ad-a6e7-4649-91f2-612de75fe16f for this chassis.
Nov 22 04:37:36 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:36Z|01200|binding|INFO|1e21d7ad-a6e7-4649-91f2-612de75fe16f: Claiming fa:16:3e:1a:0f:ac 10.100.0.14
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.480 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:0f:ac 10.100.0.14'], port_security=['fa:16:3e:1a:0f:ac 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd2f5b215-3a41-451c-8ad8-68b17c96a678', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37126bdf-684b-42ae-b38f-88d563755df6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '443f7e2d-f0e9-45ab-9cf5-08268d38e115 d6d16faa-9388-499f-aa74-b3fccde9fbc6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7f8c6c4-9648-452d-b35b-4ce3aef6c8f6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1e21d7ad-a6e7-4649-91f2-612de75fe16f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.482 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1e21d7ad-a6e7-4649-91f2-612de75fe16f in datapath 37126bdf-684b-42ae-b38f-88d563755df6 bound to our chassis#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.483 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 37126bdf-684b-42ae-b38f-88d563755df6#033[00m
Nov 22 04:37:36 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:36Z|01201|binding|INFO|Setting lport 1e21d7ad-a6e7-4649-91f2-612de75fe16f ovn-installed in OVS
Nov 22 04:37:36 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:36Z|01202|binding|INFO|Setting lport 1e21d7ad-a6e7-4649-91f2-612de75fe16f up in Southbound
Nov 22 04:37:36 np0005532048 systemd-udevd[374837]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.499 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:36 np0005532048 systemd-machined[215941]: New machine qemu-147-instance-00000075.
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.507 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d74cb8-5f2c-4867-aacb-00584483978f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.509 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap37126bdf-61 in ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.511 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap37126bdf-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.512 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[efec8851-97cf-4927-9fc7-dff04253d7a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.513 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[307da282-8766-4528-a953-d4b93df32727]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:36 np0005532048 NetworkManager[48920]: <info>  [1763804256.5168] device (tap1e21d7ad-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:37:36 np0005532048 systemd[1]: Started Virtual Machine qemu-147-instance-00000075.
Nov 22 04:37:36 np0005532048 NetworkManager[48920]: <info>  [1763804256.5194] device (tap1e21d7ad-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.525 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b211274a-14fb-4274-8b62-2360209ba811]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.556 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2f2f15-6b7c-4698-9fe0-c53d663da38e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.590 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1228e953-a5d6-4d68-a108-0a2f0edaf047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.597 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[46bbaa8c-ce9f-428f-b258-e1ff45e64bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:36 np0005532048 NetworkManager[48920]: <info>  [1763804256.6009] manager: (tap37126bdf-60): new Veth device (/org/freedesktop/NetworkManager/Devices/500)
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.642 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa242b3-09c5-486c-a01c-64f0595fcb9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.644 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8c9aa41a-42f2-4cc9-b844-8afe7dac746a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:36 np0005532048 NetworkManager[48920]: <info>  [1763804256.6766] device (tap37126bdf-60): carrier: link connected
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.682 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bad2164e-c73f-4a0c-ba1f-1d7b1f394c63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.702 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4e0aaba7-be7b-484c-b707-5d8a783bc971]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37126bdf-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:2e:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 350], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710741, 'reachable_time': 19064, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374872, 'error': None, 'target': 'ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.725 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd685f1-a29e-4b50-a774-928983f5316d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:2e59'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 710741, 'tstamp': 710741}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374873, 'error': None, 'target': 'ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.746 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c46c4225-3755-41e4-b1a4-e3557b3737e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap37126bdf-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:2e:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 350], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710741, 'reachable_time': 19064, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374874, 'error': None, 'target': 'ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.809 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e9484e85-aa5d-4c06-b64e-8ea39d5a334c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:37:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712071147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.841 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.877 253665 DEBUG nova.storage.rbd_utils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 117927df-3c9e-4609-b5ba-dc3937b9339d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.884 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.890 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[258bff3a-f9a8-4f13-9bd1-2cb90f2c0099]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.892 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37126bdf-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.892 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.893 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37126bdf-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:36 np0005532048 NetworkManager[48920]: <info>  [1763804256.8969] manager: (tap37126bdf-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/501)
Nov 22 04:37:36 np0005532048 kernel: tap37126bdf-60: entered promiscuous mode
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.899 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap37126bdf-60, col_values=(('external_ids', {'iface-id': 'eea1332c-6e32-4e52-a7c7-645bf860d501'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:36 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:36Z|01203|binding|INFO|Releasing lport eea1332c-6e32-4e52-a7c7-645bf860d501 from this chassis (sb_readonly=0)
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.917 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/37126bdf-684b-42ae-b38f-88d563755df6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/37126bdf-684b-42ae-b38f-88d563755df6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.919 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa41080-0efb-4eca-9cd8-1ed00ef4e0c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.920 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-37126bdf-684b-42ae-b38f-88d563755df6
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/37126bdf-684b-42ae-b38f-88d563755df6.pid.haproxy
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 37126bdf-684b-42ae-b38f-88d563755df6
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:37:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:36.920 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6', 'env', 'PROCESS_TAG=haproxy-37126bdf-684b-42ae-b38f-88d563755df6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/37126bdf-684b-42ae-b38f-88d563755df6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:37:36 np0005532048 nova_compute[253661]: 2025-11-22 09:37:36.947 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.044 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804257.0441287, d2f5b215-3a41-451c-8ad8-68b17c96a678 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.045 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] VM Started (Lifecycle Event)#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.069 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.074 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804257.0443964, d2f5b215-3a41-451c-8ad8-68b17c96a678 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.074 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.095 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.099 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.100 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.100 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.100 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.102 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.132 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.136 253665 DEBUG nova.network.neutron [req-0c0bf134-964b-4437-bdf9-887a85c21edf req-a8f96c3e-5804-412d-9c1e-fb85121c780e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Updated VIF entry in instance network info cache for port 1e21d7ad-a6e7-4649-91f2-612de75fe16f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.137 253665 DEBUG nova.network.neutron [req-0c0bf134-964b-4437-bdf9-887a85c21edf req-a8f96c3e-5804-412d-9c1e-fb85121c780e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Updating instance_info_cache with network_info: [{"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.151 253665 DEBUG oslo_concurrency.lockutils [req-0c0bf134-964b-4437-bdf9-887a85c21edf req-a8f96c3e-5804-412d-9c1e-fb85121c780e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:37 np0005532048 podman[374988]: 2025-11-22 09:37:37.375792127 +0000 UTC m=+0.063631283 container create 2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:37:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:37:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3167941660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.407 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.408 253665 DEBUG nova.virt.libvirt.vif [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:37:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-49006445',display_name='tempest-TestNetworkBasicOps-server-49006445',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-49006445',id=116,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLeXCAlba5Pky/MldtbxyajF3IcXgGA10hH2p6l/rDbu00wotgjV47YpIug01aEvxhEMHebjDZWxS13INHaUqa3arLwLiyV5qzWo5I/KVMb52E8fXSgSsdjLUTCsH4PUgQ==',key_name='tempest-TestNetworkBasicOps-1517678537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-bn5d4cub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:30Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=117927df-3c9e-4609-b5ba-dc3937b9339d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.409 253665 DEBUG nova.network.os_vif_util [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.418 253665 DEBUG nova.network.os_vif_util [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:8d:06,bridge_name='br-int',has_traffic_filtering=True,id=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01185f9f-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.421 253665 DEBUG nova.objects.instance [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid 117927df-3c9e-4609-b5ba-dc3937b9339d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:37 np0005532048 podman[374988]: 2025-11-22 09:37:37.343595586 +0000 UTC m=+0.031434772 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.437 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  <uuid>117927df-3c9e-4609-b5ba-dc3937b9339d</uuid>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  <name>instance-00000074</name>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestNetworkBasicOps-server-49006445</nova:name>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:37:36</nova:creationTime>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:37:37 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:        <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:        <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:37:37 np0005532048 systemd[1]: Started libpod-conmon-2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1.scope.
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:        <nova:port uuid="01185f9f-cfa0-4eec-8adf-6b2c1516b5b1">
Nov 22 04:37:37 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <entry name="serial">117927df-3c9e-4609-b5ba-dc3937b9339d</entry>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <entry name="uuid">117927df-3c9e-4609-b5ba-dc3937b9339d</entry>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/117927df-3c9e-4609-b5ba-dc3937b9339d_disk">
Nov 22 04:37:37 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:37:37 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/117927df-3c9e-4609-b5ba-dc3937b9339d_disk.config">
Nov 22 04:37:37 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:37:37 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:9e:8d:06"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <target dev="tap01185f9f-cf"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d/console.log" append="off"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:37:37 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:37:37 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:37:37 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:37:37 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.439 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Preparing to wait for external event network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.440 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.441 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.441 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.442 253665 DEBUG nova.virt.libvirt.vif [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:37:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-49006445',display_name='tempest-TestNetworkBasicOps-server-49006445',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-49006445',id=116,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLeXCAlba5Pky/MldtbxyajF3IcXgGA10hH2p6l/rDbu00wotgjV47YpIug01aEvxhEMHebjDZWxS13INHaUqa3arLwLiyV5qzWo5I/KVMb52E8fXSgSsdjLUTCsH4PUgQ==',key_name='tempest-TestNetworkBasicOps-1517678537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-bn5d4cub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:30Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=117927df-3c9e-4609-b5ba-dc3937b9339d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.443 253665 DEBUG nova.network.os_vif_util [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.444 253665 DEBUG nova.network.os_vif_util [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:8d:06,bridge_name='br-int',has_traffic_filtering=True,id=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01185f9f-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.444 253665 DEBUG os_vif [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:8d:06,bridge_name='br-int',has_traffic_filtering=True,id=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01185f9f-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.445 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.446 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.447 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.451 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.451 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01185f9f-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.452 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap01185f9f-cf, col_values=(('external_ids', {'iface-id': '01185f9f-cfa0-4eec-8adf-6b2c1516b5b1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:8d:06', 'vm-uuid': '117927df-3c9e-4609-b5ba-dc3937b9339d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.454 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:37 np0005532048 NetworkManager[48920]: <info>  [1763804257.4557] manager: (tap01185f9f-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/502)
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.457 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.462 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.463 253665 INFO os_vif [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:8d:06,bridge_name='br-int',has_traffic_filtering=True,id=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01185f9f-cf')#033[00m
Nov 22 04:37:37 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:37:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8e5aadf5ecb0314ed34aef4194ac1d915ee5e6be36aa131d63c38cd863668f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:37 np0005532048 podman[374988]: 2025-11-22 09:37:37.487439564 +0000 UTC m=+0.175278750 container init 2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:37:37 np0005532048 podman[374988]: 2025-11-22 09:37:37.493099474 +0000 UTC m=+0.180938630 container start 2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:37:37 np0005532048 neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6[375005]: [NOTICE]   (375011) : New worker (375013) forked
Nov 22 04:37:37 np0005532048 neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6[375005]: [NOTICE]   (375011) : Loading success.
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.526 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.527 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.527 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:9e:8d:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.528 253665 INFO nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Using config drive#033[00m
Nov 22 04:37:37 np0005532048 nova_compute[253661]: 2025-11-22 09:37:37.561 253665 DEBUG nova.storage.rbd_utils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 117927df-3c9e-4609-b5ba-dc3937b9339d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 372 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 4.7 MiB/s wr, 132 op/s
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.688 253665 INFO nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Creating config drive at /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d/disk.config#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.695 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprtuydwlv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.772 253665 DEBUG nova.compute.manager [req-8278f752-7744-4c08-b6ea-13754c31b124 req-c53bbf0a-3f25-4593-b3ea-f17ab431e553 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Received event network-vif-plugged-1e21d7ad-a6e7-4649-91f2-612de75fe16f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.773 253665 DEBUG oslo_concurrency.lockutils [req-8278f752-7744-4c08-b6ea-13754c31b124 req-c53bbf0a-3f25-4593-b3ea-f17ab431e553 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.773 253665 DEBUG oslo_concurrency.lockutils [req-8278f752-7744-4c08-b6ea-13754c31b124 req-c53bbf0a-3f25-4593-b3ea-f17ab431e553 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.773 253665 DEBUG oslo_concurrency.lockutils [req-8278f752-7744-4c08-b6ea-13754c31b124 req-c53bbf0a-3f25-4593-b3ea-f17ab431e553 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.773 253665 DEBUG nova.compute.manager [req-8278f752-7744-4c08-b6ea-13754c31b124 req-c53bbf0a-3f25-4593-b3ea-f17ab431e553 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Processing event network-vif-plugged-1e21d7ad-a6e7-4649-91f2-612de75fe16f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.774 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.779 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804258.779363, d2f5b215-3a41-451c-8ad8-68b17c96a678 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.779 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.782 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.784 253665 INFO nova.virt.libvirt.driver [-] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Instance spawned successfully.#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.784 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.805 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.815 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.815 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.816 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.816 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.817 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.817 253665 DEBUG nova.virt.libvirt.driver [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.819 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.852 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprtuydwlv" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.889 253665 DEBUG nova.storage.rbd_utils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 117927df-3c9e-4609-b5ba-dc3937b9339d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.894 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d/disk.config 117927df-3c9e-4609-b5ba-dc3937b9339d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.947 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.956 253665 INFO nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Took 9.34 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:37:38 np0005532048 nova_compute[253661]: 2025-11-22 09:37:38.957 253665 DEBUG nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.032 253665 INFO nova.compute.manager [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Took 10.40 seconds to build instance.#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.045 253665 DEBUG oslo_concurrency.lockutils [None req-ec125446-111a-4223-8125-c68a74389b6f 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.074 253665 DEBUG oslo_concurrency.processutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d/disk.config 117927df-3c9e-4609-b5ba-dc3937b9339d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.075 253665 INFO nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Deleting local config drive /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d/disk.config because it was imported into RBD.#033[00m
Nov 22 04:37:39 np0005532048 kernel: tap01185f9f-cf: entered promiscuous mode
Nov 22 04:37:39 np0005532048 NetworkManager[48920]: <info>  [1763804259.1395] manager: (tap01185f9f-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/503)
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.141 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:39Z|01204|binding|INFO|Claiming lport 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 for this chassis.
Nov 22 04:37:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:39Z|01205|binding|INFO|01185f9f-cfa0-4eec-8adf-6b2c1516b5b1: Claiming fa:16:3e:9e:8d:06 10.100.0.3
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.153 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:8d:06 10.100.0.3'], port_security=['fa:16:3e:9e:8d:06 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '117927df-3c9e-4609-b5ba-dc3937b9339d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-669fa85d-7478-40e5-958b-7300ef3552b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7538651d-e44e-4a35-8243-e31c6426f6e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f6a9cc6-46e5-4035-8aed-8dfaed3a2f4d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.155 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 in datapath 669fa85d-7478-40e5-958b-7300ef3552b5 bound to our chassis#033[00m
Nov 22 04:37:39 np0005532048 NetworkManager[48920]: <info>  [1763804259.1603] device (tap01185f9f-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.157 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 669fa85d-7478-40e5-958b-7300ef3552b5#033[00m
Nov 22 04:37:39 np0005532048 NetworkManager[48920]: <info>  [1763804259.1619] device (tap01185f9f-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:37:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:39Z|01206|binding|INFO|Setting lport 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 ovn-installed in OVS
Nov 22 04:37:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:39Z|01207|binding|INFO|Setting lport 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 up in Southbound
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.165 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.180 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[41ff4617-e76b-4b5f-b0d5-6e6a9b04cb79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:39 np0005532048 systemd-machined[215941]: New machine qemu-148-instance-00000074.
Nov 22 04:37:39 np0005532048 systemd[1]: Started Virtual Machine qemu-148-instance-00000074.
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.198 253665 DEBUG nova.network.neutron [req-f14b9b7e-e869-45f6-bc90-9d50f264d71b req-23169cd2-5672-4078-bc25-58391e0296dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Updated VIF entry in instance network info cache for port 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.198 253665 DEBUG nova.network.neutron [req-f14b9b7e-e869-45f6-bc90-9d50f264d71b req-23169cd2-5672-4078-bc25-58391e0296dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Updating instance_info_cache with network_info: [{"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.211 253665 DEBUG oslo_concurrency.lockutils [req-f14b9b7e-e869-45f6-bc90-9d50f264d71b req-23169cd2-5672-4078-bc25-58391e0296dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.223 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[127d97e2-ce25-4188-8eaa-71236aee6c8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.227 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3d60a055-08e3-4c27-9edc-cd7e434a176c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.268 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[816cfbea-038c-4cdf-ab35-76d7c86ec81e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.292 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f71f77-46fc-4b92-9f55-c00495c0afdb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap669fa85d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:cb:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706707, 'reachable_time': 18777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375105, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.315 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b668a629-0f8f-4cae-acaf-802fbd5fdd8f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap669fa85d-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706722, 'tstamp': 706722}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375106, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap669fa85d-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706727, 'tstamp': 706727}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375106, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.317 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap669fa85d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.319 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.321 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap669fa85d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.321 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.321 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap669fa85d-70, col_values=(('external_ids', {'iface-id': 'b0af7c96-3c08-40c2-b3ca-1e251090d01d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.322 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.447 253665 DEBUG nova.compute.manager [req-15c53c4b-77a0-4ece-888e-cb9c6fb0bdce req-4020790e-8553-4c14-96ec-f6fa303ee1cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received event network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.449 253665 DEBUG oslo_concurrency.lockutils [req-15c53c4b-77a0-4ece-888e-cb9c6fb0bdce req-4020790e-8553-4c14-96ec-f6fa303ee1cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.449 253665 DEBUG oslo_concurrency.lockutils [req-15c53c4b-77a0-4ece-888e-cb9c6fb0bdce req-4020790e-8553-4c14-96ec-f6fa303ee1cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.449 253665 DEBUG oslo_concurrency.lockutils [req-15c53c4b-77a0-4ece-888e-cb9c6fb0bdce req-4020790e-8553-4c14-96ec-f6fa303ee1cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.450 253665 DEBUG nova.compute.manager [req-15c53c4b-77a0-4ece-888e-cb9c6fb0bdce req-4020790e-8553-4c14-96ec-f6fa303ee1cf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Processing event network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.585 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.586 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.586 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.587 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.587 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.589 253665 INFO nova.compute.manager [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Terminating instance#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.591 253665 DEBUG nova.compute.manager [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.615 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804259.615085, 117927df-3c9e-4609-b5ba-dc3937b9339d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.616 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] VM Started (Lifecycle Event)#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.618 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.623 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.627 253665 INFO nova.virt.libvirt.driver [-] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Instance spawned successfully.#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.628 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:37:39 np0005532048 kernel: tapc027d879-91 (unregistering): left promiscuous mode
Nov 22 04:37:39 np0005532048 NetworkManager[48920]: <info>  [1763804259.6425] device (tapc027d879-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.645 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.652 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.658 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.659 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.659 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.660 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.660 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.661 253665 DEBUG nova.virt.libvirt.driver [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:37:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:39Z|01208|binding|INFO|Releasing lport c027d879-91b3-497d-9f51-8476006ea65c from this chassis (sb_readonly=0)
Nov 22 04:37:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:39Z|01209|binding|INFO|Setting lport c027d879-91b3-497d-9f51-8476006ea65c down in Southbound
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.667 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:39Z|01210|binding|INFO|Removing iface tapc027d879-91 ovn-installed in OVS
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.677 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [{"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.679 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:42:5a 10.100.0.3'], port_security=['fa:16:3e:d9:42:5a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf5e117a-f203-4c8f-b795-01fb355ca5e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a990966c-0851-457f-bdd5-27cf73032674', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a31947dfacfc450ba028c42968f103b2', 'neutron:revision_number': '11', 'neutron:security_group_ids': '89642540-7944-41ba-8ed6-91045af1b213', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bafabe2a-ec0e-41bf-bad4-b88fdf9f208a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=c027d879-91b3-497d-9f51-8476006ea65c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.680 162862 INFO neutron.agent.ovn.metadata.agent [-] Port c027d879-91b3-497d-9f51-8476006ea65c in datapath a990966c-0851-457f-bdd5-27cf73032674 unbound from our chassis#033[00m
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.683 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a990966c-0851-457f-bdd5-27cf73032674, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.684 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc60fd0-7cb1-4b32-8d4d-48036afd43ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:39.684 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 namespace which is not needed anymore#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.695 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:39 np0005532048 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Nov 22 04:37:39 np0005532048 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d0000006d.scope: Consumed 7.815s CPU time.
Nov 22 04:37:39 np0005532048 systemd-machined[215941]: Machine qemu-146-instance-0000006d terminated.
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.711 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.714 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804259.6151934, 117927df-3c9e-4609-b5ba-dc3937b9339d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.714 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.746 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-cf5e117a-f203-4c8f-b795-01fb355ca5e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.747 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.748 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.749 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.752 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804259.6228628, 117927df-3c9e-4609-b5ba-dc3937b9339d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.753 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.763 253665 INFO nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Took 9.11 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.763 253665 DEBUG nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.773 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.776 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.800 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.831 253665 INFO nova.compute.manager [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Took 11.19 seconds to build instance.#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.839 253665 INFO nova.virt.libvirt.driver [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Instance destroyed successfully.#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.840 253665 DEBUG nova.objects.instance [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lazy-loading 'resources' on Instance uuid cf5e117a-f203-4c8f-b795-01fb355ca5e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.849 253665 DEBUG oslo_concurrency.lockutils [None req-62ef152b-ac32-49bf-adfc-a4b5d63577eb 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.856 253665 DEBUG nova.virt.libvirt.vif [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-627235813',display_name='tempest-ServersNegativeTestJSON-server-627235813',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-627235813',id=109,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:37:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a31947dfacfc450ba028c42968f103b2',ramdisk_id='',reservation_id='r-6hjukgnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1692723590',owner_user_name='tempest-ServersNegativeTestJSON-1692723590-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:37:26Z,user_data=None,user_id='31c7a4aa8fa340d2881ddc3ed426b6db',uuid=cf5e117a-f203-4c8f-b795-01fb355ca5e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:37:39 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[374292]: [NOTICE]   (374296) : haproxy version is 2.8.14-c23fe91
Nov 22 04:37:39 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[374292]: [NOTICE]   (374296) : path to executable is /usr/sbin/haproxy
Nov 22 04:37:39 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[374292]: [ALERT]    (374296) : Current worker (374298) exited with code 143 (Terminated)
Nov 22 04:37:39 np0005532048 neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674[374292]: [WARNING]  (374296) : All workers exited. Exiting... (0)
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.857 253665 DEBUG nova.network.os_vif_util [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converting VIF {"id": "c027d879-91b3-497d-9f51-8476006ea65c", "address": "fa:16:3e:d9:42:5a", "network": {"id": "a990966c-0851-457f-bdd5-27cf73032674", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1415361237-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a31947dfacfc450ba028c42968f103b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc027d879-91", "ovs_interfaceid": "c027d879-91b3-497d-9f51-8476006ea65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.858 253665 DEBUG nova.network.os_vif_util [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.859 253665 DEBUG os_vif [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:37:39 np0005532048 systemd[1]: libpod-8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9.scope: Deactivated successfully.
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.863 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.864 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc027d879-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.868 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:39 np0005532048 podman[375171]: 2025-11-22 09:37:39.869626706 +0000 UTC m=+0.071249843 container died 8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.871 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:37:39 np0005532048 nova_compute[253661]: 2025-11-22 09:37:39.873 253665 INFO os_vif [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:42:5a,bridge_name='br-int',has_traffic_filtering=True,id=c027d879-91b3-497d-9f51-8476006ea65c,network=Network(a990966c-0851-457f-bdd5-27cf73032674),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc027d879-91')#033[00m
Nov 22 04:37:39 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9-userdata-shm.mount: Deactivated successfully.
Nov 22 04:37:39 np0005532048 systemd[1]: var-lib-containers-storage-overlay-03312f80a55e34f62a33dfaed471bc61dca49a6488e5599296b3a6b417800e8a-merged.mount: Deactivated successfully.
Nov 22 04:37:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 372 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 587 KiB/s rd, 3.6 MiB/s wr, 116 op/s
Nov 22 04:37:39 np0005532048 podman[375171]: 2025-11-22 09:37:39.949823311 +0000 UTC m=+0.151446448 container cleanup 8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:37:39 np0005532048 systemd[1]: libpod-conmon-8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9.scope: Deactivated successfully.
Nov 22 04:37:40 np0005532048 podman[375228]: 2025-11-22 09:37:40.049449158 +0000 UTC m=+0.068670209 container remove 8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:37:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.057 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[569b899d-65af-43e7-b815-eb381862b62a]: (4, ('Sat Nov 22 09:37:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 (8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9)\n8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9\nSat Nov 22 09:37:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 (8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9)\n8b06e4f5c85d239cec9e3e2204638296957528cc3ce91d98ace53155f704b6c9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.059 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae111dc-3f14-4e87-80a0-da638b566fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.060 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa990966c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:37:40 np0005532048 nova_compute[253661]: 2025-11-22 09:37:40.062 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:40 np0005532048 kernel: tapa990966c-00: left promiscuous mode
Nov 22 04:37:40 np0005532048 nova_compute[253661]: 2025-11-22 09:37:40.082 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.089 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4f9a04-b778-47a1-a9eb-b9b0c31809ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.106 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d868301a-eee2-4dd9-9674-d401eee1e29c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.109 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c9d7cac-5662-4e3a-b204-56eb74f3fa24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.133 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[106bd87a-2625-47a6-8d0e-d93dc6bc4aaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709708, 'reachable_time': 28379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375241, 'error': None, 'target': 'ovnmeta-a990966c-0851-457f-bdd5-27cf73032674', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.136 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a990966c-0851-457f-bdd5-27cf73032674 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:37:40 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:37:40.136 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[c26cc2af-8dd4-4702-b8a9-3ca1a1a672de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:37:40 np0005532048 systemd[1]: run-netns-ovnmeta\x2da990966c\x2d0851\x2d457f\x2dbdd5\x2d27cf73032674.mount: Deactivated successfully.
Nov 22 04:37:40 np0005532048 nova_compute[253661]: 2025-11-22 09:37:40.330 253665 INFO nova.virt.libvirt.driver [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Deleting instance files /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8_del#033[00m
Nov 22 04:37:40 np0005532048 nova_compute[253661]: 2025-11-22 09:37:40.331 253665 INFO nova.virt.libvirt.driver [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Deletion of /var/lib/nova/instances/cf5e117a-f203-4c8f-b795-01fb355ca5e8_del complete#033[00m
Nov 22 04:37:40 np0005532048 nova_compute[253661]: 2025-11-22 09:37:40.408 253665 INFO nova.compute.manager [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:37:40 np0005532048 nova_compute[253661]: 2025-11-22 09:37:40.408 253665 DEBUG oslo.service.loopingcall [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:37:40 np0005532048 nova_compute[253661]: 2025-11-22 09:37:40.409 253665 DEBUG nova.compute.manager [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:37:40 np0005532048 nova_compute[253661]: 2025-11-22 09:37:40.409 253665 DEBUG nova.network.neutron [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:37:40 np0005532048 nova_compute[253661]: 2025-11-22 09:37:40.738 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.029 253665 DEBUG nova.compute.manager [req-929e65b7-eb66-481e-9a6c-6bce1a7c57b9 req-14602842-8d81-4d62-82b2-f7a3e1b6c426 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Received event network-vif-plugged-1e21d7ad-a6e7-4649-91f2-612de75fe16f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.030 253665 DEBUG oslo_concurrency.lockutils [req-929e65b7-eb66-481e-9a6c-6bce1a7c57b9 req-14602842-8d81-4d62-82b2-f7a3e1b6c426 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.032 253665 DEBUG oslo_concurrency.lockutils [req-929e65b7-eb66-481e-9a6c-6bce1a7c57b9 req-14602842-8d81-4d62-82b2-f7a3e1b6c426 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.033 253665 DEBUG oslo_concurrency.lockutils [req-929e65b7-eb66-481e-9a6c-6bce1a7c57b9 req-14602842-8d81-4d62-82b2-f7a3e1b6c426 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.040 253665 DEBUG nova.compute.manager [req-929e65b7-eb66-481e-9a6c-6bce1a7c57b9 req-14602842-8d81-4d62-82b2-f7a3e1b6c426 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] No waiting events found dispatching network-vif-plugged-1e21d7ad-a6e7-4649-91f2-612de75fe16f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.042 253665 WARNING nova.compute.manager [req-929e65b7-eb66-481e-9a6c-6bce1a7c57b9 req-14602842-8d81-4d62-82b2-f7a3e1b6c426 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Received unexpected event network-vif-plugged-1e21d7ad-a6e7-4649-91f2-612de75fe16f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.111 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.592 253665 DEBUG nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received event network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.592 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.592 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.593 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.593 253665 DEBUG nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] No waiting events found dispatching network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.593 253665 WARNING nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received unexpected event network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.593 253665 DEBUG nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.594 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.594 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.594 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.595 253665 DEBUG nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.595 253665 DEBUG nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-unplugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.595 253665 DEBUG nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.595 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.596 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.596 253665 DEBUG oslo_concurrency.lockutils [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.596 253665 DEBUG nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] No waiting events found dispatching network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:37:41 np0005532048 nova_compute[253661]: 2025-11-22 09:37:41.596 253665 WARNING nova.compute.manager [req-edbb42a4-83b9-41b5-8769-10aeeee2ac9c req-d844e171-6c0e-4bc5-bf56-923e0d4c8067 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received unexpected event network-vif-plugged-c027d879-91b3-497d-9f51-8476006ea65c for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:37:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 359 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 194 op/s
Nov 22 04:37:42 np0005532048 nova_compute[253661]: 2025-11-22 09:37:42.133 253665 DEBUG nova.network.neutron [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:42 np0005532048 nova_compute[253661]: 2025-11-22 09:37:42.153 253665 INFO nova.compute.manager [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Took 1.74 seconds to deallocate network for instance.#033[00m
Nov 22 04:37:42 np0005532048 nova_compute[253661]: 2025-11-22 09:37:42.193 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:42 np0005532048 nova_compute[253661]: 2025-11-22 09:37:42.194 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:42 np0005532048 nova_compute[253661]: 2025-11-22 09:37:42.340 253665 DEBUG oslo_concurrency.processutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:37:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/511800722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:37:42 np0005532048 nova_compute[253661]: 2025-11-22 09:37:42.885 253665 DEBUG oslo_concurrency.processutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:42 np0005532048 nova_compute[253661]: 2025-11-22 09:37:42.894 253665 DEBUG nova.compute.provider_tree [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:37:42 np0005532048 nova_compute[253661]: 2025-11-22 09:37:42.920 253665 DEBUG nova.scheduler.client.report [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:37:42 np0005532048 nova_compute[253661]: 2025-11-22 09:37:42.961 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:43 np0005532048 nova_compute[253661]: 2025-11-22 09:37:43.037 253665 INFO nova.scheduler.client.report [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Deleted allocations for instance cf5e117a-f203-4c8f-b795-01fb355ca5e8#033[00m
Nov 22 04:37:43 np0005532048 nova_compute[253661]: 2025-11-22 09:37:43.148 253665 DEBUG oslo_concurrency.lockutils [None req-1093ac30-d330-411c-883f-6af166cc3746 31c7a4aa8fa340d2881ddc3ed426b6db a31947dfacfc450ba028c42968f103b2 - - default default] Lock "cf5e117a-f203-4c8f-b795-01fb355ca5e8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:43 np0005532048 nova_compute[253661]: 2025-11-22 09:37:43.667 253665 DEBUG nova.compute.manager [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Received event network-vif-deleted-c027d879-91b3-497d-9f51-8476006ea65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:43 np0005532048 nova_compute[253661]: 2025-11-22 09:37:43.668 253665 DEBUG nova.compute.manager [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received event network-changed-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:43 np0005532048 nova_compute[253661]: 2025-11-22 09:37:43.668 253665 DEBUG nova.compute.manager [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Refreshing instance network info cache due to event network-changed-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:37:43 np0005532048 nova_compute[253661]: 2025-11-22 09:37:43.668 253665 DEBUG oslo_concurrency.lockutils [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:43 np0005532048 nova_compute[253661]: 2025-11-22 09:37:43.668 253665 DEBUG oslo_concurrency.lockutils [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:43 np0005532048 nova_compute[253661]: 2025-11-22 09:37:43.668 253665 DEBUG nova.network.neutron [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Refreshing network info cache for port 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:37:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:37:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 293 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.9 MiB/s wr, 246 op/s
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.247 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.247 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.247 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.248 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.248 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:37:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2458512515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.747 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.868 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.876 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.876 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.882 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.882 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.887 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.887 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.896 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:37:44 np0005532048 nova_compute[253661]: 2025-11-22 09:37:44.896 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.295 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.296 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3041MB free_disk=59.8553581237793GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.296 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.297 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.405 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.405 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 2837c740-6ce1-47d5-ad27-107211f74db7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.406 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 117927df-3c9e-4609-b5ba-dc3937b9339d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.406 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d2f5b215-3a41-451c-8ad8-68b17c96a678 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.406 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.406 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.490 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:37:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 04:37:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 04:37:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:37:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:37:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:37:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:37:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:37:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:37:45 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c4d9b9fa-be58-4147-8a04-d29753127dc5 does not exist
Nov 22 04:37:45 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d1ab02a8-3aa5-48bf-b97e-7af5eefc54fe does not exist
Nov 22 04:37:45 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev eb617310-f500-45e2-8b53-07e52ac5bb30 does not exist
Nov 22 04:37:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:37:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:37:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:37:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:37:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:37:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.833 253665 DEBUG nova.network.neutron [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Updated VIF entry in instance network info cache for port 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.834 253665 DEBUG nova.network.neutron [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Updating instance_info_cache with network_info: [{"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.858 253665 DEBUG oslo_concurrency.lockutils [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-117927df-3c9e-4609-b5ba-dc3937b9339d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.860 253665 DEBUG nova.compute.manager [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Received event network-changed-1e21d7ad-a6e7-4649-91f2-612de75fe16f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.860 253665 DEBUG nova.compute.manager [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Refreshing instance network info cache due to event network-changed-1e21d7ad-a6e7-4649-91f2-612de75fe16f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.860 253665 DEBUG oslo_concurrency.lockutils [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.861 253665 DEBUG oslo_concurrency.lockutils [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:37:45 np0005532048 nova_compute[253661]: 2025-11-22 09:37:45.861 253665 DEBUG nova.network.neutron [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Refreshing network info cache for port 1e21d7ad-a6e7-4649-91f2-612de75fe16f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:37:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 293 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 50 KiB/s wr, 210 op/s
Nov 22 04:37:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:37:46 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1301452609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:37:46 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 04:37:46 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:37:46 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:37:46 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:37:46 np0005532048 nova_compute[253661]: 2025-11-22 09:37:46.055 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:37:46 np0005532048 nova_compute[253661]: 2025-11-22 09:37:46.067 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:37:46 np0005532048 nova_compute[253661]: 2025-11-22 09:37:46.081 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:37:46 np0005532048 nova_compute[253661]: 2025-11-22 09:37:46.107 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:37:46 np0005532048 nova_compute[253661]: 2025-11-22 09:37:46.108 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:37:46 np0005532048 nova_compute[253661]: 2025-11-22 09:37:46.113 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:37:46 np0005532048 podman[375581]: 2025-11-22 09:37:46.365547791 +0000 UTC m=+0.062125396 container create 4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 04:37:46 np0005532048 systemd[1]: Started libpod-conmon-4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277.scope.
Nov 22 04:37:46 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:37:46 np0005532048 podman[375581]: 2025-11-22 09:37:46.334783856 +0000 UTC m=+0.031361481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:37:46 np0005532048 podman[375581]: 2025-11-22 09:37:46.45033032 +0000 UTC m=+0.146907935 container init 4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 04:37:46 np0005532048 podman[375581]: 2025-11-22 09:37:46.458853451 +0000 UTC m=+0.155431056 container start 4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:37:46 np0005532048 podman[375581]: 2025-11-22 09:37:46.462602014 +0000 UTC m=+0.159179649 container attach 4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hypatia, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 04:37:46 np0005532048 systemd[1]: libpod-4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277.scope: Deactivated successfully.
Nov 22 04:37:46 np0005532048 vigilant_hypatia[375597]: 167 167
Nov 22 04:37:46 np0005532048 podman[375581]: 2025-11-22 09:37:46.465680061 +0000 UTC m=+0.162257666 container died 4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 04:37:46 np0005532048 conmon[375597]: conmon 4da64997c495b3d8e775 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277.scope/container/memory.events
Nov 22 04:37:46 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9fc88188b04c3cff94c4df5837104927987845c11fe06d5ebaa9d522214a5643-merged.mount: Deactivated successfully.
Nov 22 04:37:46 np0005532048 podman[375581]: 2025-11-22 09:37:46.507506141 +0000 UTC m=+0.204083736 container remove 4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hypatia, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:37:46 np0005532048 systemd[1]: libpod-conmon-4da64997c495b3d8e775dfa21884fd2663796ea86959b955dc67667c5c6a8277.scope: Deactivated successfully.
Nov 22 04:37:46 np0005532048 podman[375621]: 2025-11-22 09:37:46.706617673 +0000 UTC m=+0.046498728 container create c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 04:37:46 np0005532048 systemd[1]: Started libpod-conmon-c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc.scope.
Nov 22 04:37:46 np0005532048 podman[375621]: 2025-11-22 09:37:46.689479086 +0000 UTC m=+0.029360161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:37:46 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:37:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cea37e0d81b36064e854f238274a6f83e66beeacac96b34dbc4dfbe9f31c85d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cea37e0d81b36064e854f238274a6f83e66beeacac96b34dbc4dfbe9f31c85d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cea37e0d81b36064e854f238274a6f83e66beeacac96b34dbc4dfbe9f31c85d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cea37e0d81b36064e854f238274a6f83e66beeacac96b34dbc4dfbe9f31c85d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cea37e0d81b36064e854f238274a6f83e66beeacac96b34dbc4dfbe9f31c85d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:46 np0005532048 podman[375621]: 2025-11-22 09:37:46.80783072 +0000 UTC m=+0.147711795 container init c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 04:37:46 np0005532048 podman[375621]: 2025-11-22 09:37:46.81705704 +0000 UTC m=+0.156938095 container start c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 04:37:46 np0005532048 podman[375621]: 2025-11-22 09:37:46.820253419 +0000 UTC m=+0.160134504 container attach c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:37:47 np0005532048 nova_compute[253661]: 2025-11-22 09:37:47.434 253665 DEBUG nova.network.neutron [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Updated VIF entry in instance network info cache for port 1e21d7ad-a6e7-4649-91f2-612de75fe16f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:37:47 np0005532048 nova_compute[253661]: 2025-11-22 09:37:47.436 253665 DEBUG nova.network.neutron [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Updating instance_info_cache with network_info: [{"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:37:47 np0005532048 nova_compute[253661]: 2025-11-22 09:37:47.457 253665 DEBUG oslo_concurrency.lockutils [req-5534b5f6-2c00-4856-85b2-4e952ba370ad req-941d4fb9-a57a-4d76-bd46-5006168af87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:37:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:47Z|01211|binding|INFO|Releasing lport b0af7c96-3c08-40c2-b3ca-1e251090d01d from this chassis (sb_readonly=0)
Nov 22 04:37:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:47Z|01212|binding|INFO|Releasing lport 8cb4fbf8-c8a1-48f8-bf71-339312c7db31 from this chassis (sb_readonly=0)
Nov 22 04:37:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:47Z|01213|binding|INFO|Releasing lport eea1332c-6e32-4e52-a7c7-645bf860d501 from this chassis (sb_readonly=0)
Nov 22 04:37:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:47Z|01214|binding|INFO|Releasing lport 6e07e124-b404-4946-958f-042e8d633a40 from this chassis (sb_readonly=0)
Nov 22 04:37:47 np0005532048 xenodochial_haslett[375637]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:37:47 np0005532048 xenodochial_haslett[375637]: --> relative data size: 1.0
Nov 22 04:37:47 np0005532048 xenodochial_haslett[375637]: --> All data devices are unavailable
Nov 22 04:37:47 np0005532048 nova_compute[253661]: 2025-11-22 09:37:47.919 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:37:47 np0005532048 systemd[1]: libpod-c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc.scope: Deactivated successfully.
Nov 22 04:37:47 np0005532048 systemd[1]: libpod-c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc.scope: Consumed 1.022s CPU time.
Nov 22 04:37:47 np0005532048 conmon[375637]: conmon c43713471ddf2b09f006 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc.scope/container/memory.events
Nov 22 04:37:47 np0005532048 podman[375621]: 2025-11-22 09:37:47.93213829 +0000 UTC m=+1.272019345 container died c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:37:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 293 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 50 KiB/s wr, 210 op/s
Nov 22 04:37:47 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7cea37e0d81b36064e854f238274a6f83e66beeacac96b34dbc4dfbe9f31c85d-merged.mount: Deactivated successfully.
Nov 22 04:37:48 np0005532048 podman[375621]: 2025-11-22 09:37:48.005563455 +0000 UTC m=+1.345444510 container remove c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:37:48 np0005532048 systemd[1]: libpod-conmon-c43713471ddf2b09f0064c86bc9d9b967794941f22fa8f6707aaa1275e7865cc.scope: Deactivated successfully.
Nov 22 04:37:48 np0005532048 nova_compute[253661]: 2025-11-22 09:37:48.423 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:37:48 np0005532048 nova_compute[253661]: 2025-11-22 09:37:48.423 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:37:48 np0005532048 nova_compute[253661]: 2025-11-22 09:37:48.439 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:37:48 np0005532048 nova_compute[253661]: 2025-11-22 09:37:48.657 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:37:48 np0005532048 nova_compute[253661]: 2025-11-22 09:37:48.658 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:37:48 np0005532048 nova_compute[253661]: 2025-11-22 09:37:48.666 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:37:48 np0005532048 nova_compute[253661]: 2025-11-22 09:37:48.666 253665 INFO nova.compute.claims [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:37:48 np0005532048 podman[375819]: 2025-11-22 09:37:48.722640749 +0000 UTC m=+0.057181543 container create 007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_northcutt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 22 04:37:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:37:48 np0005532048 systemd[1]: Started libpod-conmon-007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9.scope.
Nov 22 04:37:48 np0005532048 podman[375819]: 2025-11-22 09:37:48.699307458 +0000 UTC m=+0.033848283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:37:48 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:37:48 np0005532048 podman[375819]: 2025-11-22 09:37:48.815023617 +0000 UTC m=+0.149564431 container init 007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:37:48 np0005532048 nova_compute[253661]: 2025-11-22 09:37:48.820 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:48 np0005532048 podman[375819]: 2025-11-22 09:37:48.823005294 +0000 UTC m=+0.157546088 container start 007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:37:48 np0005532048 podman[375819]: 2025-11-22 09:37:48.826765998 +0000 UTC m=+0.161306792 container attach 007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 04:37:48 np0005532048 frosty_northcutt[375835]: 167 167
Nov 22 04:37:48 np0005532048 systemd[1]: libpod-007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9.scope: Deactivated successfully.
Nov 22 04:37:48 np0005532048 conmon[375835]: conmon 007cb7ed197d62cd4201 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9.scope/container/memory.events
Nov 22 04:37:48 np0005532048 podman[375840]: 2025-11-22 09:37:48.876948496 +0000 UTC m=+0.028811718 container died 007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_northcutt, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 04:37:48 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1356bd5089cd3ea77b5384a4efdb75bd702682f3cbc38e0a04e9155934fcab18-merged.mount: Deactivated successfully.
Nov 22 04:37:48 np0005532048 podman[375840]: 2025-11-22 09:37:48.917855353 +0000 UTC m=+0.069718585 container remove 007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 04:37:48 np0005532048 systemd[1]: libpod-conmon-007cb7ed197d62cd420195ff40ae7c0dcf56c0b02d8d6f87ee8960a799bb0ef9.scope: Deactivated successfully.
Nov 22 04:37:49 np0005532048 podman[375881]: 2025-11-22 09:37:49.170829795 +0000 UTC m=+0.050367664 container create 9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:37:49 np0005532048 systemd[1]: Started libpod-conmon-9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3.scope.
Nov 22 04:37:49 np0005532048 podman[375881]: 2025-11-22 09:37:49.151923494 +0000 UTC m=+0.031461383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:37:49 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:37:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37811eef8fb867b82d6d5e90c139b5d2140f7425633bb4160d54f61e4ed81f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37811eef8fb867b82d6d5e90c139b5d2140f7425633bb4160d54f61e4ed81f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37811eef8fb867b82d6d5e90c139b5d2140f7425633bb4160d54f61e4ed81f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37811eef8fb867b82d6d5e90c139b5d2140f7425633bb4160d54f61e4ed81f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:49 np0005532048 podman[375881]: 2025-11-22 09:37:49.281824545 +0000 UTC m=+0.161362444 container init 9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 04:37:49 np0005532048 podman[375881]: 2025-11-22 09:37:49.293056824 +0000 UTC m=+0.172594693 container start 9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:37:49 np0005532048 podman[375881]: 2025-11-22 09:37:49.297058934 +0000 UTC m=+0.176596823 container attach 9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 04:37:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:37:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2969552940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.369 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.379 253665 DEBUG nova.compute.provider_tree [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.403 253665 DEBUG nova.scheduler.client.report [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.427 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.429 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.485 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.486 253665 DEBUG nova.network.neutron [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.513 253665 INFO nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.537 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.652 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.653 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.654 253665 INFO nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Creating image(s)#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.690 253665 DEBUG nova.storage.rbd_utils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.730 253665 DEBUG nova.storage.rbd_utils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.764 253665 DEBUG nova.storage.rbd_utils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.770 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.831 253665 DEBUG nova.policy [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.871 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.877 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.878 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.879 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.879 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.908 253665 DEBUG nova.storage.rbd_utils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:49 np0005532048 nova_compute[253661]: 2025-11-22 09:37:49.913 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 293 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 39 KiB/s wr, 182 op/s
Nov 22 04:37:50 np0005532048 nova_compute[253661]: 2025-11-22 09:37:50.108 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:37:50 np0005532048 nova_compute[253661]: 2025-11-22 09:37:50.112 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]: {
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:    "0": [
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:        {
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "devices": [
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "/dev/loop3"
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            ],
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "lv_name": "ceph_lv0",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "lv_size": "21470642176",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "name": "ceph_lv0",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "tags": {
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.cluster_name": "ceph",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.crush_device_class": "",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.encrypted": "0",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.osd_id": "0",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.type": "block",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.vdo": "0"
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            },
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "type": "block",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "vg_name": "ceph_vg0"
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:        }
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:    ],
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:    "1": [
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:        {
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "devices": [
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "/dev/loop4"
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            ],
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "lv_name": "ceph_lv1",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "lv_size": "21470642176",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "name": "ceph_lv1",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "tags": {
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.cluster_name": "ceph",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.crush_device_class": "",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.encrypted": "0",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.osd_id": "1",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.type": "block",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.vdo": "0"
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            },
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "type": "block",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "vg_name": "ceph_vg1"
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:        }
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:    ],
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:    "2": [
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:        {
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "devices": [
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "/dev/loop5"
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            ],
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "lv_name": "ceph_lv2",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "lv_size": "21470642176",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "name": "ceph_lv2",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "tags": {
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.cluster_name": "ceph",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.crush_device_class": "",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.encrypted": "0",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.osd_id": "2",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.type": "block",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:                "ceph.vdo": "0"
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            },
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "type": "block",
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:            "vg_name": "ceph_vg2"
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:        }
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]:    ]
Nov 22 04:37:50 np0005532048 elegant_fermi[375898]: }
Nov 22 04:37:50 np0005532048 nova_compute[253661]: 2025-11-22 09:37:50.223 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:37:50 np0005532048 systemd[1]: libpod-9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3.scope: Deactivated successfully.
Nov 22 04:37:50 np0005532048 podman[375881]: 2025-11-22 09:37:50.255541569 +0000 UTC m=+1.135079438 container died 9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 04:37:50 np0005532048 nova_compute[253661]: 2025-11-22 09:37:50.258 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:50 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a37811eef8fb867b82d6d5e90c139b5d2140f7425633bb4160d54f61e4ed81f5-merged.mount: Deactivated successfully.
Nov 22 04:37:50 np0005532048 podman[375881]: 2025-11-22 09:37:50.309960333 +0000 UTC m=+1.189498202 container remove 9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:37:50 np0005532048 systemd[1]: libpod-conmon-9f5cae54c3cde35570d670771a39272a4aab51b4b96850d80528f9529aa3afb3.scope: Deactivated successfully.
Nov 22 04:37:50 np0005532048 nova_compute[253661]: 2025-11-22 09:37:50.398 253665 DEBUG nova.storage.rbd_utils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:37:50 np0005532048 nova_compute[253661]: 2025-11-22 09:37:50.535 253665 DEBUG nova.objects.instance [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid f16662c4-9b4f-4060-ac76-ebfb960dbb89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:37:50 np0005532048 nova_compute[253661]: 2025-11-22 09:37:50.549 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:37:50 np0005532048 nova_compute[253661]: 2025-11-22 09:37:50.549 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Ensure instance console log exists: /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:37:50 np0005532048 nova_compute[253661]: 2025-11-22 09:37:50.550 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:37:50 np0005532048 nova_compute[253661]: 2025-11-22 09:37:50.550 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:37:50 np0005532048 nova_compute[253661]: 2025-11-22 09:37:50.550 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:37:51 np0005532048 podman[376225]: 2025-11-22 09:37:51.006743931 +0000 UTC m=+0.048761434 container create dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Nov 22 04:37:51 np0005532048 systemd[1]: Started libpod-conmon-dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2.scope.
Nov 22 04:37:51 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:37:51 np0005532048 podman[376225]: 2025-11-22 09:37:50.987192085 +0000 UTC m=+0.029209618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:37:51 np0005532048 podman[376225]: 2025-11-22 09:37:51.097384356 +0000 UTC m=+0.139401889 container init dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:37:51 np0005532048 podman[376225]: 2025-11-22 09:37:51.105491386 +0000 UTC m=+0.147508889 container start dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:37:51 np0005532048 podman[376225]: 2025-11-22 09:37:51.109412934 +0000 UTC m=+0.151430437 container attach dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:37:51 np0005532048 sad_hofstadter[376241]: 167 167
Nov 22 04:37:51 np0005532048 systemd[1]: libpod-dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2.scope: Deactivated successfully.
Nov 22 04:37:51 np0005532048 nova_compute[253661]: 2025-11-22 09:37:51.115 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:51 np0005532048 podman[376246]: 2025-11-22 09:37:51.168571075 +0000 UTC m=+0.036469438 container died dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:37:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8a857dcafeee4ecc351a3bb3674ccadbb22c500c93dfca77112900696c357953-merged.mount: Deactivated successfully.
Nov 22 04:37:51 np0005532048 podman[376246]: 2025-11-22 09:37:51.214356444 +0000 UTC m=+0.082254817 container remove dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 04:37:51 np0005532048 systemd[1]: libpod-conmon-dc49a2f07d9d9cc4bff7065b0303aeefde4661891d6a86e17084d97b0a8832c2.scope: Deactivated successfully.
Nov 22 04:37:51 np0005532048 podman[376268]: 2025-11-22 09:37:51.431357971 +0000 UTC m=+0.050588879 container create 6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 04:37:51 np0005532048 systemd[1]: Started libpod-conmon-6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d.scope.
Nov 22 04:37:51 np0005532048 podman[376268]: 2025-11-22 09:37:51.411348913 +0000 UTC m=+0.030579821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:37:51 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:37:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6d9411a30ec8cf104e54ae8b889fae18f767e79bead786a54184bf017fdf23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6d9411a30ec8cf104e54ae8b889fae18f767e79bead786a54184bf017fdf23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6d9411a30ec8cf104e54ae8b889fae18f767e79bead786a54184bf017fdf23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:51 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6d9411a30ec8cf104e54ae8b889fae18f767e79bead786a54184bf017fdf23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:37:51 np0005532048 podman[376268]: 2025-11-22 09:37:51.55075824 +0000 UTC m=+0.169989158 container init 6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brattain, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 04:37:51 np0005532048 podman[376282]: 2025-11-22 09:37:51.553316123 +0000 UTC m=+0.079835146 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:37:51 np0005532048 podman[376268]: 2025-11-22 09:37:51.559907308 +0000 UTC m=+0.179138216 container start 6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brattain, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:37:51 np0005532048 podman[376268]: 2025-11-22 09:37:51.564752538 +0000 UTC m=+0.183983446 container attach 6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 04:37:51 np0005532048 podman[376286]: 2025-11-22 09:37:51.566006869 +0000 UTC m=+0.091534477 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 04:37:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 304 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 588 KiB/s wr, 194 op/s
Nov 22 04:37:52 np0005532048 nova_compute[253661]: 2025-11-22 09:37:52.113 253665 DEBUG nova.network.neutron [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Successfully created port: ff0231eb-335b-4acd-98c8-d655d887e97a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:37:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:37:52
Nov 22 04:37:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:37:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:37:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.mgr', 'volumes', '.rgw.root', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log']
Nov 22 04:37:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:37:52 np0005532048 sad_brattain[376305]: {
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:        "osd_id": 1,
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:        "type": "bluestore"
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:    },
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:        "osd_id": 0,
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:        "type": "bluestore"
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:    },
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:        "osd_id": 2,
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:        "type": "bluestore"
Nov 22 04:37:52 np0005532048 sad_brattain[376305]:    }
Nov 22 04:37:52 np0005532048 sad_brattain[376305]: }
Nov 22 04:37:52 np0005532048 systemd[1]: libpod-6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d.scope: Deactivated successfully.
Nov 22 04:37:52 np0005532048 systemd[1]: libpod-6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d.scope: Consumed 1.128s CPU time.
Nov 22 04:37:52 np0005532048 podman[376268]: 2025-11-22 09:37:52.702667036 +0000 UTC m=+1.321897954 container died 6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 04:37:52 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1b6d9411a30ec8cf104e54ae8b889fae18f767e79bead786a54184bf017fdf23-merged.mount: Deactivated successfully.
Nov 22 04:37:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:37:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:37:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:37:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:37:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:37:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:37:52 np0005532048 podman[376268]: 2025-11-22 09:37:52.776149254 +0000 UTC m=+1.395380172 container remove 6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brattain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:37:52 np0005532048 systemd[1]: libpod-conmon-6c540de3361e912ee77b7f994f9faab255447c54263b90321d16c940674f9c9d.scope: Deactivated successfully.
Nov 22 04:37:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:37:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:37:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:37:52 np0005532048 nova_compute[253661]: 2025-11-22 09:37:52.839 253665 DEBUG nova.network.neutron [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Successfully created port: d7659b3e-3579-403f-b319-ceb538d9c201 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:37:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:37:52 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c9ddef30-33c4-4779-89e1-782b4be8a0ea does not exist
Nov 22 04:37:52 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 7e2f04e4-4f73-4ea5-8bd5-baf84e6c4151 does not exist
Nov 22 04:37:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:37:53 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:37:53 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:37:53 np0005532048 nova_compute[253661]: 2025-11-22 09:37:53.887 253665 DEBUG nova.network.neutron [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Successfully updated port: ff0231eb-335b-4acd-98c8-d655d887e97a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:37:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 352 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 134 op/s
Nov 22 04:37:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:53Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:0f:ac 10.100.0.14
Nov 22 04:37:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:53Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:0f:ac 10.100.0.14
Nov 22 04:37:54 np0005532048 nova_compute[253661]: 2025-11-22 09:37:54.055 253665 DEBUG nova.compute.manager [req-adc88a43-83c5-44a7-b314-2751d202eb10 req-b5a25b59-ee05-498b-ae12-840f6e82075e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-changed-ff0231eb-335b-4acd-98c8-d655d887e97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:54 np0005532048 nova_compute[253661]: 2025-11-22 09:37:54.055 253665 DEBUG nova.compute.manager [req-adc88a43-83c5-44a7-b314-2751d202eb10 req-b5a25b59-ee05-498b-ae12-840f6e82075e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Refreshing instance network info cache due to event network-changed-ff0231eb-335b-4acd-98c8-d655d887e97a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:37:54 np0005532048 nova_compute[253661]: 2025-11-22 09:37:54.055 253665 DEBUG oslo_concurrency.lockutils [req-adc88a43-83c5-44a7-b314-2751d202eb10 req-b5a25b59-ee05-498b-ae12-840f6e82075e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:54 np0005532048 nova_compute[253661]: 2025-11-22 09:37:54.055 253665 DEBUG oslo_concurrency.lockutils [req-adc88a43-83c5-44a7-b314-2751d202eb10 req-b5a25b59-ee05-498b-ae12-840f6e82075e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:54 np0005532048 nova_compute[253661]: 2025-11-22 09:37:54.055 253665 DEBUG nova.network.neutron [req-adc88a43-83c5-44a7-b314-2751d202eb10 req-b5a25b59-ee05-498b-ae12-840f6e82075e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Refreshing network info cache for port ff0231eb-335b-4acd-98c8-d655d887e97a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:37:54 np0005532048 nova_compute[253661]: 2025-11-22 09:37:54.269 253665 DEBUG nova.network.neutron [req-adc88a43-83c5-44a7-b314-2751d202eb10 req-b5a25b59-ee05-498b-ae12-840f6e82075e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:37:54 np0005532048 nova_compute[253661]: 2025-11-22 09:37:54.661 253665 DEBUG nova.network.neutron [req-adc88a43-83c5-44a7-b314-2751d202eb10 req-b5a25b59-ee05-498b-ae12-840f6e82075e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:54 np0005532048 nova_compute[253661]: 2025-11-22 09:37:54.673 253665 DEBUG oslo_concurrency.lockutils [req-adc88a43-83c5-44a7-b314-2751d202eb10 req-b5a25b59-ee05-498b-ae12-840f6e82075e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:54 np0005532048 nova_compute[253661]: 2025-11-22 09:37:54.830 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804259.8278844, cf5e117a-f203-4c8f-b795-01fb355ca5e8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:37:54 np0005532048 nova_compute[253661]: 2025-11-22 09:37:54.831 253665 INFO nova.compute.manager [-] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:37:54 np0005532048 nova_compute[253661]: 2025-11-22 09:37:54.846 253665 DEBUG nova.compute.manager [None req-1ad5c784-459d-4c7b-bfef-f9cd6b945745 - - - - - -] [instance: cf5e117a-f203-4c8f-b795-01fb355ca5e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:37:54 np0005532048 nova_compute[253661]: 2025-11-22 09:37:54.876 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:55 np0005532048 nova_compute[253661]: 2025-11-22 09:37:55.206 253665 DEBUG nova.network.neutron [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Successfully updated port: d7659b3e-3579-403f-b319-ceb538d9c201 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:37:55 np0005532048 nova_compute[253661]: 2025-11-22 09:37:55.223 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:55 np0005532048 nova_compute[253661]: 2025-11-22 09:37:55.223 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:55 np0005532048 nova_compute[253661]: 2025-11-22 09:37:55.223 253665 DEBUG nova.network.neutron [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:37:55 np0005532048 nova_compute[253661]: 2025-11-22 09:37:55.546 253665 DEBUG nova.network.neutron [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:37:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:37:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:37:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:37:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:37:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:37:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 352 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 2.8 MiB/s wr, 53 op/s
Nov 22 04:37:55 np0005532048 nova_compute[253661]: 2025-11-22 09:37:55.952 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:56 np0005532048 nova_compute[253661]: 2025-11-22 09:37:56.118 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:56 np0005532048 nova_compute[253661]: 2025-11-22 09:37:56.185 253665 DEBUG nova.compute.manager [req-93ebc5af-d85d-4a22-be3a-7a7ba8a16ac2 req-afaa59fa-821b-4204-a3b8-6fe2763fbdc3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-changed-d7659b3e-3579-403f-b319-ceb538d9c201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:37:56 np0005532048 nova_compute[253661]: 2025-11-22 09:37:56.185 253665 DEBUG nova.compute.manager [req-93ebc5af-d85d-4a22-be3a-7a7ba8a16ac2 req-afaa59fa-821b-4204-a3b8-6fe2763fbdc3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Refreshing instance network info cache due to event network-changed-d7659b3e-3579-403f-b319-ceb538d9c201. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:37:56 np0005532048 nova_compute[253661]: 2025-11-22 09:37:56.185 253665 DEBUG oslo_concurrency.lockutils [req-93ebc5af-d85d-4a22-be3a-7a7ba8a16ac2 req-afaa59fa-821b-4204-a3b8-6fe2763fbdc3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:37:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:56Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9e:8d:06 10.100.0.3
Nov 22 04:37:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:37:56Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:8d:06 10.100.0.3
Nov 22 04:37:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:37:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:37:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:37:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:37:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:37:57 np0005532048 podman[376417]: 2025-11-22 09:37:57.44938585 +0000 UTC m=+0.124547278 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 04:37:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 393 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 5.4 MiB/s wr, 128 op/s
Nov 22 04:37:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.310 253665 DEBUG nova.network.neutron [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updating instance_info_cache with network_info: [{"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.329 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.329 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Instance network_info: |[{"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.329 253665 DEBUG oslo_concurrency.lockutils [req-93ebc5af-d85d-4a22-be3a-7a7ba8a16ac2 req-afaa59fa-821b-4204-a3b8-6fe2763fbdc3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.330 253665 DEBUG nova.network.neutron [req-93ebc5af-d85d-4a22-be3a-7a7ba8a16ac2 req-afaa59fa-821b-4204-a3b8-6fe2763fbdc3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Refreshing network info cache for port d7659b3e-3579-403f-b319-ceb538d9c201 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.334 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Start _get_guest_xml network_info=[{"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.341 253665 WARNING nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.349 253665 DEBUG nova.virt.libvirt.host [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.350 253665 DEBUG nova.virt.libvirt.host [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.354 253665 DEBUG nova.virt.libvirt.host [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.355 253665 DEBUG nova.virt.libvirt.host [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.355 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.355 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.356 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.356 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.356 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.357 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.357 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.357 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.357 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.358 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.358 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.358 253665 DEBUG nova.virt.hardware [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.362 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:37:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/743467742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.842 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.873 253665 DEBUG nova.storage.rbd_utils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.881 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:37:59 np0005532048 nova_compute[253661]: 2025-11-22 09:37:59.933 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:37:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 405 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 6.0 MiB/s wr, 153 op/s
Nov 22 04:38:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:38:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2023276289' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.344 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.347 253665 DEBUG nova.virt.libvirt.vif [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:37:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1142729147',display_name='tempest-TestGettingAddress-server-1142729147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1142729147',id=118,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-2jla2sib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:49Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=f16662c4-9b4f-4060-ac76-ebfb960dbb89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.348 253665 DEBUG nova.network.os_vif_util [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.349 253665 DEBUG nova.network.os_vif_util [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:38:77,bridge_name='br-int',has_traffic_filtering=True,id=ff0231eb-335b-4acd-98c8-d655d887e97a,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff0231eb-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.350 253665 DEBUG nova.virt.libvirt.vif [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:37:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1142729147',display_name='tempest-TestGettingAddress-server-1142729147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1142729147',id=118,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-2jla2sib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:49Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=f16662c4-9b4f-4060-ac76-ebfb960dbb89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.350 253665 DEBUG nova.network.os_vif_util [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.351 253665 DEBUG nova.network.os_vif_util [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:21:1f,bridge_name='br-int',has_traffic_filtering=True,id=d7659b3e-3579-403f-b319-ceb538d9c201,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7659b3e-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.353 253665 DEBUG nova.objects.instance [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid f16662c4-9b4f-4060-ac76-ebfb960dbb89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.374 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  <uuid>f16662c4-9b4f-4060-ac76-ebfb960dbb89</uuid>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  <name>instance-00000076</name>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestGettingAddress-server-1142729147</nova:name>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:37:59</nova:creationTime>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:        <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:        <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:        <nova:port uuid="ff0231eb-335b-4acd-98c8-d655d887e97a">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:        <nova:port uuid="d7659b3e-3579-403f-b319-ceb538d9c201">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe88:211f" ipVersion="6"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <entry name="serial">f16662c4-9b4f-4060-ac76-ebfb960dbb89</entry>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <entry name="uuid">f16662c4-9b4f-4060-ac76-ebfb960dbb89</entry>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk.config">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:7e:38:77"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <target dev="tapff0231eb-33"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:88:21:1f"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <target dev="tapd7659b3e-35"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89/console.log" append="off"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:38:00 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:38:00 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:38:00 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:38:00 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.376 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Preparing to wait for external event network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.376 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.376 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.377 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.377 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Preparing to wait for external event network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.377 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.377 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.377 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.378 253665 DEBUG nova.virt.libvirt.vif [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:37:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1142729147',display_name='tempest-TestGettingAddress-server-1142729147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1142729147',id=118,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-2jla2sib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:49Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=f16662c4-9b4f-4060-ac76-ebfb960dbb89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.378 253665 DEBUG nova.network.os_vif_util [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.379 253665 DEBUG nova.network.os_vif_util [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:38:77,bridge_name='br-int',has_traffic_filtering=True,id=ff0231eb-335b-4acd-98c8-d655d887e97a,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff0231eb-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.379 253665 DEBUG os_vif [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:38:77,bridge_name='br-int',has_traffic_filtering=True,id=ff0231eb-335b-4acd-98c8-d655d887e97a,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff0231eb-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.380 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.380 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.381 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.385 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.386 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff0231eb-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.386 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapff0231eb-33, col_values=(('external_ids', {'iface-id': 'ff0231eb-335b-4acd-98c8-d655d887e97a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:38:77', 'vm-uuid': 'f16662c4-9b4f-4060-ac76-ebfb960dbb89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:00 np0005532048 NetworkManager[48920]: <info>  [1763804280.3906] manager: (tapff0231eb-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/504)
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.398 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.400 253665 INFO os_vif [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:38:77,bridge_name='br-int',has_traffic_filtering=True,id=ff0231eb-335b-4acd-98c8-d655d887e97a,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff0231eb-33')#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.401 253665 DEBUG nova.virt.libvirt.vif [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:37:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1142729147',display_name='tempest-TestGettingAddress-server-1142729147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1142729147',id=118,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-2jla2sib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:37:49Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=f16662c4-9b4f-4060-ac76-ebfb960dbb89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.402 253665 DEBUG nova.network.os_vif_util [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.403 253665 DEBUG nova.network.os_vif_util [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:21:1f,bridge_name='br-int',has_traffic_filtering=True,id=d7659b3e-3579-403f-b319-ceb538d9c201,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7659b3e-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.403 253665 DEBUG os_vif [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:21:1f,bridge_name='br-int',has_traffic_filtering=True,id=d7659b3e-3579-403f-b319-ceb538d9c201,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7659b3e-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.404 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.404 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.404 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.406 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.407 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7659b3e-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.407 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7659b3e-35, col_values=(('external_ids', {'iface-id': 'd7659b3e-3579-403f-b319-ceb538d9c201', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:21:1f', 'vm-uuid': 'f16662c4-9b4f-4060-ac76-ebfb960dbb89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.408 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:00 np0005532048 NetworkManager[48920]: <info>  [1763804280.4100] manager: (tapd7659b3e-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/505)
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.412 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.420 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.421 253665 INFO os_vif [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:21:1f,bridge_name='br-int',has_traffic_filtering=True,id=d7659b3e-3579-403f-b319-ceb538d9c201,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7659b3e-35')#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.478 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.478 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.478 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:7e:38:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.479 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:88:21:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.479 253665 INFO nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Using config drive#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.505 253665 DEBUG nova.storage.rbd_utils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.927 253665 INFO nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Creating config drive at /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89/disk.config#033[00m
Nov 22 04:38:00 np0005532048 nova_compute[253661]: 2025-11-22 09:38:00.932 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu_pc7zp0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.082 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu_pc7zp0" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.112 253665 DEBUG nova.storage.rbd_utils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.119 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89/disk.config f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.164 253665 DEBUG nova.network.neutron [req-93ebc5af-d85d-4a22-be3a-7a7ba8a16ac2 req-afaa59fa-821b-4204-a3b8-6fe2763fbdc3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updated VIF entry in instance network info cache for port d7659b3e-3579-403f-b319-ceb538d9c201. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.165 253665 DEBUG nova.network.neutron [req-93ebc5af-d85d-4a22-be3a-7a7ba8a16ac2 req-afaa59fa-821b-4204-a3b8-6fe2763fbdc3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updating instance_info_cache with network_info: [{"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.167 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.182 253665 DEBUG oslo_concurrency.lockutils [req-93ebc5af-d85d-4a22-be3a-7a7ba8a16ac2 req-afaa59fa-821b-4204-a3b8-6fe2763fbdc3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.302 253665 DEBUG oslo_concurrency.processutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89/disk.config f16662c4-9b4f-4060-ac76-ebfb960dbb89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.303 253665 INFO nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Deleting local config drive /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89/disk.config because it was imported into RBD.#033[00m
Nov 22 04:38:01 np0005532048 NetworkManager[48920]: <info>  [1763804281.3719] manager: (tapff0231eb-33): new Tun device (/org/freedesktop/NetworkManager/Devices/506)
Nov 22 04:38:01 np0005532048 kernel: tapff0231eb-33: entered promiscuous mode
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.380 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:01Z|01215|binding|INFO|Claiming lport ff0231eb-335b-4acd-98c8-d655d887e97a for this chassis.
Nov 22 04:38:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:01Z|01216|binding|INFO|ff0231eb-335b-4acd-98c8-d655d887e97a: Claiming fa:16:3e:7e:38:77 10.100.0.14
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.395 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:38:77 10.100.0.14'], port_security=['fa:16:3e:7e:38:77 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f16662c4-9b4f-4060-ac76-ebfb960dbb89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a935b0bb-9a00-49bb-8266-f3d0879d526c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=416fdb0b-60ab-41a3-b089-f86f3fe1761e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ff0231eb-335b-4acd-98c8-d655d887e97a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:38:01 np0005532048 NetworkManager[48920]: <info>  [1763804281.3969] manager: (tapd7659b3e-35): new Tun device (/org/freedesktop/NetworkManager/Devices/507)
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.396 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ff0231eb-335b-4acd-98c8-d655d887e97a in datapath a1a3f352-95a9-4122-aecd-94a4bbf79683 bound to our chassis#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.399 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1a3f352-95a9-4122-aecd-94a4bbf79683#033[00m
Nov 22 04:38:01 np0005532048 kernel: tapd7659b3e-35: entered promiscuous mode
Nov 22 04:38:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:01Z|01217|binding|INFO|Setting lport ff0231eb-335b-4acd-98c8-d655d887e97a ovn-installed in OVS
Nov 22 04:38:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:01Z|01218|binding|INFO|Setting lport ff0231eb-335b-4acd-98c8-d655d887e97a up in Southbound
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.408 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.410 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:01Z|01219|if_status|INFO|Not updating pb chassis for d7659b3e-3579-403f-b319-ceb538d9c201 now as sb is readonly
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.411 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:01Z|01220|binding|INFO|Claiming lport d7659b3e-3579-403f-b319-ceb538d9c201 for this chassis.
Nov 22 04:38:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:01Z|01221|binding|INFO|d7659b3e-3579-403f-b319-ceb538d9c201: Claiming fa:16:3e:88:21:1f 2001:db8::f816:3eff:fe88:211f
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.423 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:21:1f 2001:db8::f816:3eff:fe88:211f'], port_security=['fa:16:3e:88:21:1f 2001:db8::f816:3eff:fe88:211f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe88:211f/64', 'neutron:device_id': 'f16662c4-9b4f-4060-ac76-ebfb960dbb89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a935b0bb-9a00-49bb-8266-f3d0879d526c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f56771e6-e0a6-4947-ad39-6cb384a012bf, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=d7659b3e-3579-403f-b319-ceb538d9c201) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.425 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eaaa050f-eaeb-4529-8198-c27fbf646ffa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:01Z|01222|binding|INFO|Setting lport d7659b3e-3579-403f-b319-ceb538d9c201 ovn-installed in OVS
Nov 22 04:38:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:01Z|01223|binding|INFO|Setting lport d7659b3e-3579-403f-b319-ceb538d9c201 up in Southbound
Nov 22 04:38:01 np0005532048 systemd-udevd[376586]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:38:01 np0005532048 systemd-udevd[376587]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.433 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:01 np0005532048 systemd-machined[215941]: New machine qemu-149-instance-00000076.
Nov 22 04:38:01 np0005532048 NetworkManager[48920]: <info>  [1763804281.4500] device (tapff0231eb-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:38:01 np0005532048 NetworkManager[48920]: <info>  [1763804281.4510] device (tapff0231eb-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:38:01 np0005532048 systemd[1]: Started Virtual Machine qemu-149-instance-00000076.
Nov 22 04:38:01 np0005532048 NetworkManager[48920]: <info>  [1763804281.4601] device (tapd7659b3e-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:38:01 np0005532048 NetworkManager[48920]: <info>  [1763804281.4610] device (tapd7659b3e-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.470 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e2bda59c-f544-47e6-aba1-52d5684eb188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.475 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7e4755b4-97c6-46ef-bcab-f52c265cf154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.509 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[222fb592-e90c-4496-ad06-3362b10ed14b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.531 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2328e2a0-722a-4fce-be7c-d248853ade59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a3f352-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:dc:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707815, 'reachable_time': 26897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376601, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.553 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[21c45c79-600a-468d-bf67-4bf5d2256df5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa1a3f352-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707837, 'tstamp': 707837}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376603, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa1a3f352-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707841, 'tstamp': 707841}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376603, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.555 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a3f352-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.558 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.558 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1a3f352-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.558 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.559 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1a3f352-90, col_values=(('external_ids', {'iface-id': '6e07e124-b404-4946-958f-042e8d633a40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.559 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.561 162862 INFO neutron.agent.ovn.metadata.agent [-] Port d7659b3e-3579-403f-b319-ceb538d9c201 in datapath c883e14c-ad7e-49eb-b0c3-2571140d1e57 unbound from our chassis#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.562 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c883e14c-ad7e-49eb-b0c3-2571140d1e57#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.581 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe508c1-0a66-431e-9da1-c292d5fbed45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.618 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[829e37ac-429a-4ec2-abba-7f60bb321448]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.622 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6367a06a-710a-4ddb-9c3f-ae2babf47d09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.664 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[426ee172-b5b4-462f-8026-0bd651e2a880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.683 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[26e1103f-661c-4360-9cd5-5a60bd673341]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc883e14c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d1:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707921, 'reachable_time': 43324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376609, 'error': None, 'target': 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.704 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[16e3a0ce-c439-4d4a-b61e-652f327b1419]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc883e14c-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707938, 'tstamp': 707938}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376610, 'error': None, 'target': 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.707 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc883e14c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.708 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.710 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.710 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc883e14c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.710 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.711 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc883e14c-a0, col_values=(('external_ids', {'iface-id': '8cb4fbf8-c8a1-48f8-bf71-339312c7db31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:01.711 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:38:01 np0005532048 nova_compute[253661]: 2025-11-22 09:38:01.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 405 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 6.0 MiB/s wr, 154 op/s
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.062 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804282.0614202, f16662c4-9b4f-4060-ac76-ebfb960dbb89 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.062 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] VM Started (Lifecycle Event)#033[00m
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.089 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.093 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804282.0628104, f16662c4-9b4f-4060-ac76-ebfb960dbb89 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.093 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.110 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.114 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.130 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.246 253665 INFO nova.compute.manager [None req-1aaa91e8-a61b-4c66-9729-2d567802b9d8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Get console output#033[00m
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.254 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.608 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "117927df-3c9e-4609-b5ba-dc3937b9339d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.609 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.609 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.610 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.610 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.611 253665 INFO nova.compute.manager [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Terminating instance#033[00m
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.612 253665 DEBUG nova.compute.manager [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:38:02 np0005532048 kernel: tap01185f9f-cf (unregistering): left promiscuous mode
Nov 22 04:38:02 np0005532048 NetworkManager[48920]: <info>  [1763804282.6674] device (tap01185f9f-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:38:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:02Z|01224|binding|INFO|Releasing lport 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 from this chassis (sb_readonly=0)
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.681 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:02Z|01225|binding|INFO|Setting lport 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 down in Southbound
Nov 22 04:38:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:02Z|01226|binding|INFO|Removing iface tap01185f9f-cf ovn-installed in OVS
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.695 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:38:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.702 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:8d:06 10.100.0.3'], port_security=['fa:16:3e:9e:8d:06 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '117927df-3c9e-4609-b5ba-dc3937b9339d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-669fa85d-7478-40e5-958b-7300ef3552b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7538651d-e44e-4a35-8243-e31c6426f6e9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f6a9cc6-46e5-4035-8aed-8dfaed3a2f4d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:38:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.703 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 in datapath 669fa85d-7478-40e5-958b-7300ef3552b5 unbound from our chassis
Nov 22 04:38:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.705 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 669fa85d-7478-40e5-958b-7300ef3552b5
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.707 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:38:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.727 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5e388c49-cc30-4a33-a44a-394a913c3c07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:38:02 np0005532048 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000074.scope: Deactivated successfully.
Nov 22 04:38:02 np0005532048 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000074.scope: Consumed 16.656s CPU time.
Nov 22 04:38:02 np0005532048 systemd-machined[215941]: Machine qemu-148-instance-00000074 terminated.
Nov 22 04:38:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.768 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec4eb5d-691e-476c-b336-226e6650858f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:38:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.771 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[75b4a783-e5f1-40de-b26f-2860a8dedc17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:38:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.803 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[72977d67-9bbf-41e4-a1ae-5d970040bc7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:38:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.824 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[824ee2c2-fdd5-4e67-b431-6a35bc1829e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap669fa85d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:cb:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706707, 'reachable_time': 18777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376663, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:38:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.845 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[23256544-bc7e-421c-88a2-b69df496c91a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap669fa85d-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706722, 'tstamp': 706722}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376666, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap669fa85d-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706727, 'tstamp': 706727}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376666, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:38:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.847 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap669fa85d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.858 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.864 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:38:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.864 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap669fa85d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:38:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.865 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.865 253665 INFO nova.virt.libvirt.driver [-] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Instance destroyed successfully.
Nov 22 04:38:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.865 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap669fa85d-70, col_values=(('external_ids', {'iface-id': 'b0af7c96-3c08-40c2-b3ca-1e251090d01d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:38:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:02.866 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.866 253665 DEBUG nova.objects.instance [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid 117927df-3c9e-4609-b5ba-dc3937b9339d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003381054344110178 of space, bias 1.0, pg target 1.0143163032330533 quantized to 32 (current 32)
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:38:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.879 253665 DEBUG nova.virt.libvirt.vif [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:37:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-49006445',display_name='tempest-TestNetworkBasicOps-server-49006445',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-49006445',id=116,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLeXCAlba5Pky/MldtbxyajF3IcXgGA10hH2p6l/rDbu00wotgjV47YpIug01aEvxhEMHebjDZWxS13INHaUqa3arLwLiyV5qzWo5I/KVMb52E8fXSgSsdjLUTCsH4PUgQ==',key_name='tempest-TestNetworkBasicOps-1517678537',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:37:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-bn5d4cub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:37:39Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=117927df-3c9e-4609-b5ba-dc3937b9339d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.880 253665 DEBUG nova.network.os_vif_util [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "address": "fa:16:3e:9e:8d:06", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01185f9f-cf", "ovs_interfaceid": "01185f9f-cfa0-4eec-8adf-6b2c1516b5b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.881 253665 DEBUG nova.network.os_vif_util [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:8d:06,bridge_name='br-int',has_traffic_filtering=True,id=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01185f9f-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.881 253665 DEBUG os_vif [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:8d:06,bridge_name='br-int',has_traffic_filtering=True,id=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01185f9f-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.883 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01185f9f-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.885 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.888 253665 INFO os_vif [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:8d:06,bridge_name='br-int',has_traffic_filtering=True,id=01185f9f-cfa0-4eec-8adf-6b2c1516b5b1,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01185f9f-cf')
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.937 253665 DEBUG nova.compute.manager [req-03b83b4f-8609-429a-b190-474955a905c4 req-1f6d4cc0-5bb8-4a1b-9934-fd7b97b924f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received event network-vif-unplugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.938 253665 DEBUG oslo_concurrency.lockutils [req-03b83b4f-8609-429a-b190-474955a905c4 req-1f6d4cc0-5bb8-4a1b-9934-fd7b97b924f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.938 253665 DEBUG oslo_concurrency.lockutils [req-03b83b4f-8609-429a-b190-474955a905c4 req-1f6d4cc0-5bb8-4a1b-9934-fd7b97b924f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.938 253665 DEBUG oslo_concurrency.lockutils [req-03b83b4f-8609-429a-b190-474955a905c4 req-1f6d4cc0-5bb8-4a1b-9934-fd7b97b924f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.938 253665 DEBUG nova.compute.manager [req-03b83b4f-8609-429a-b190-474955a905c4 req-1f6d4cc0-5bb8-4a1b-9934-fd7b97b924f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] No waiting events found dispatching network-vif-unplugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:38:02 np0005532048 nova_compute[253661]: 2025-11-22 09:38:02.939 253665 DEBUG nova.compute.manager [req-03b83b4f-8609-429a-b190-474955a905c4 req-1f6d4cc0-5bb8-4a1b-9934-fd7b97b924f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received event network-vif-unplugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:38:03 np0005532048 nova_compute[253661]: 2025-11-22 09:38:03.328 253665 INFO nova.virt.libvirt.driver [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Deleting instance files /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d_del
Nov 22 04:38:03 np0005532048 nova_compute[253661]: 2025-11-22 09:38:03.329 253665 INFO nova.virt.libvirt.driver [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Deletion of /var/lib/nova/instances/117927df-3c9e-4609-b5ba-dc3937b9339d_del complete
Nov 22 04:38:03 np0005532048 nova_compute[253661]: 2025-11-22 09:38:03.385 253665 INFO nova.compute.manager [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Took 0.77 seconds to destroy the instance on the hypervisor.
Nov 22 04:38:03 np0005532048 nova_compute[253661]: 2025-11-22 09:38:03.385 253665 DEBUG oslo.service.loopingcall [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:38:03 np0005532048 nova_compute[253661]: 2025-11-22 09:38:03.386 253665 DEBUG nova.compute.manager [-] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:38:03 np0005532048 nova_compute[253661]: 2025-11-22 09:38:03.386 253665 DEBUG nova.network.neutron [-] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:38:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:38:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 382 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 656 KiB/s rd, 5.5 MiB/s wr, 139 op/s
Nov 22 04:38:05 np0005532048 nova_compute[253661]: 2025-11-22 09:38:05.067 253665 DEBUG nova.compute.manager [req-523c55f0-2db4-4a24-88af-55741b45f663 req-d4b82934-add5-40ad-be41-bf99877e0c8a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received event network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:38:05 np0005532048 nova_compute[253661]: 2025-11-22 09:38:05.067 253665 DEBUG oslo_concurrency.lockutils [req-523c55f0-2db4-4a24-88af-55741b45f663 req-d4b82934-add5-40ad-be41-bf99877e0c8a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:38:05 np0005532048 nova_compute[253661]: 2025-11-22 09:38:05.068 253665 DEBUG oslo_concurrency.lockutils [req-523c55f0-2db4-4a24-88af-55741b45f663 req-d4b82934-add5-40ad-be41-bf99877e0c8a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:38:05 np0005532048 nova_compute[253661]: 2025-11-22 09:38:05.068 253665 DEBUG oslo_concurrency.lockutils [req-523c55f0-2db4-4a24-88af-55741b45f663 req-d4b82934-add5-40ad-be41-bf99877e0c8a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:38:05 np0005532048 nova_compute[253661]: 2025-11-22 09:38:05.068 253665 DEBUG nova.compute.manager [req-523c55f0-2db4-4a24-88af-55741b45f663 req-d4b82934-add5-40ad-be41-bf99877e0c8a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] No waiting events found dispatching network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:38:05 np0005532048 nova_compute[253661]: 2025-11-22 09:38:05.069 253665 WARNING nova.compute.manager [req-523c55f0-2db4-4a24-88af-55741b45f663 req-d4b82934-add5-40ad-be41-bf99877e0c8a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received unexpected event network-vif-plugged-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 for instance with vm_state active and task_state deleting.
Nov 22 04:38:05 np0005532048 nova_compute[253661]: 2025-11-22 09:38:05.199 253665 DEBUG nova.compute.manager [req-17ba6211-9d7c-44e5-b588-7c807e6b7766 req-fdf5aaf8-d4bd-4e93-a99b-461afcf2e90d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:38:05 np0005532048 nova_compute[253661]: 2025-11-22 09:38:05.199 253665 DEBUG oslo_concurrency.lockutils [req-17ba6211-9d7c-44e5-b588-7c807e6b7766 req-fdf5aaf8-d4bd-4e93-a99b-461afcf2e90d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:05 np0005532048 nova_compute[253661]: 2025-11-22 09:38:05.200 253665 DEBUG oslo_concurrency.lockutils [req-17ba6211-9d7c-44e5-b588-7c807e6b7766 req-fdf5aaf8-d4bd-4e93-a99b-461afcf2e90d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:05 np0005532048 nova_compute[253661]: 2025-11-22 09:38:05.200 253665 DEBUG oslo_concurrency.lockutils [req-17ba6211-9d7c-44e5-b588-7c807e6b7766 req-fdf5aaf8-d4bd-4e93-a99b-461afcf2e90d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:05 np0005532048 nova_compute[253661]: 2025-11-22 09:38:05.200 253665 DEBUG nova.compute.manager [req-17ba6211-9d7c-44e5-b588-7c807e6b7766 req-fdf5aaf8-d4bd-4e93-a99b-461afcf2e90d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Processing event network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:38:05 np0005532048 nova_compute[253661]: 2025-11-22 09:38:05.450 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:05 np0005532048 nova_compute[253661]: 2025-11-22 09:38:05.881 253665 DEBUG nova.network.neutron [-] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:05 np0005532048 nova_compute[253661]: 2025-11-22 09:38:05.898 253665 INFO nova.compute.manager [-] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Took 2.51 seconds to deallocate network for instance.#033[00m
Nov 22 04:38:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 382 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 552 KiB/s rd, 3.3 MiB/s wr, 111 op/s
Nov 22 04:38:05 np0005532048 nova_compute[253661]: 2025-11-22 09:38:05.946 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:05 np0005532048 nova_compute[253661]: 2025-11-22 09:38:05.946 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:06 np0005532048 nova_compute[253661]: 2025-11-22 09:38:06.093 253665 DEBUG oslo_concurrency.processutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:06 np0005532048 nova_compute[253661]: 2025-11-22 09:38:06.138 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:38:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4285726818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:38:06 np0005532048 nova_compute[253661]: 2025-11-22 09:38:06.564 253665 DEBUG oslo_concurrency.processutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:06 np0005532048 nova_compute[253661]: 2025-11-22 09:38:06.571 253665 DEBUG nova.compute.provider_tree [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:38:06 np0005532048 nova_compute[253661]: 2025-11-22 09:38:06.588 253665 DEBUG nova.scheduler.client.report [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:38:06 np0005532048 nova_compute[253661]: 2025-11-22 09:38:06.610 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:06 np0005532048 nova_compute[253661]: 2025-11-22 09:38:06.636 253665 INFO nova.scheduler.client.report [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance 117927df-3c9e-4609-b5ba-dc3937b9339d#033[00m
Nov 22 04:38:06 np0005532048 nova_compute[253661]: 2025-11-22 09:38:06.692 253665 DEBUG oslo_concurrency.lockutils [None req-3591ed8e-87ca-43b7-b3d4-15996fe0bd92 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "117927df-3c9e-4609-b5ba-dc3937b9339d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.298 253665 DEBUG nova.compute.manager [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.298 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.299 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.299 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.299 253665 DEBUG nova.compute.manager [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] No event matching network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a in dict_keys([('network-vif-plugged', 'd7659b3e-3579-403f-b319-ceb538d9c201')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.300 253665 WARNING nova.compute.manager [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received unexpected event network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.300 253665 DEBUG nova.compute.manager [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.301 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.301 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.302 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.302 253665 DEBUG nova.compute.manager [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Processing event network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.303 253665 DEBUG nova.compute.manager [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.303 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.303 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.304 253665 DEBUG oslo_concurrency.lockutils [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.304 253665 DEBUG nova.compute.manager [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] No waiting events found dispatching network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.304 253665 WARNING nova.compute.manager [req-ce14bfb3-fc4d-4b77-a41f-14c4bfddd199 req-eece0d4d-264c-4315-9820-479729073826 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received unexpected event network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.306 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Instance event wait completed in 5 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.311 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804287.3109167, f16662c4-9b4f-4060-ac76-ebfb960dbb89 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.312 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.316 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.322 253665 INFO nova.virt.libvirt.driver [-] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Instance spawned successfully.#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.322 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.337 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.345 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.354 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.355 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.356 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.357 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.358 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.359 253665 DEBUG nova.virt.libvirt.driver [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.366 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.417 253665 INFO nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Took 17.76 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.418 253665 DEBUG nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.425 253665 DEBUG nova.compute.manager [req-9ac417ec-ac01-47a1-a15a-fbb7f341e98e req-16d12dad-d431-492b-8f80-6cf933c71d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Received event network-vif-deleted-01185f9f-cfa0-4eec-8adf-6b2c1516b5b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.501 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.508 253665 INFO nova.compute.manager [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Took 18.87 seconds to build instance.#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.525 253665 DEBUG oslo_concurrency.lockutils [None req-dda51f92-ea61-4d4f-867c-531eb16f3957 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:07 np0005532048 nova_compute[253661]: 2025-11-22 09:38:07.885 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 326 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 573 KiB/s rd, 3.3 MiB/s wr, 139 op/s
Nov 22 04:38:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:38:08 np0005532048 nova_compute[253661]: 2025-11-22 09:38:08.883 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:08 np0005532048 nova_compute[253661]: 2025-11-22 09:38:08.884 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:08 np0005532048 nova_compute[253661]: 2025-11-22 09:38:08.884 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:08 np0005532048 nova_compute[253661]: 2025-11-22 09:38:08.884 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:08 np0005532048 nova_compute[253661]: 2025-11-22 09:38:08.884 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:08 np0005532048 nova_compute[253661]: 2025-11-22 09:38:08.886 253665 INFO nova.compute.manager [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Terminating instance#033[00m
Nov 22 04:38:08 np0005532048 nova_compute[253661]: 2025-11-22 09:38:08.887 253665 DEBUG nova.compute.manager [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:38:08 np0005532048 kernel: tap22b006cb-c0 (unregistering): left promiscuous mode
Nov 22 04:38:08 np0005532048 NetworkManager[48920]: <info>  [1763804288.9403] device (tap22b006cb-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:38:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:08Z|01227|binding|INFO|Releasing lport 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 from this chassis (sb_readonly=0)
Nov 22 04:38:08 np0005532048 nova_compute[253661]: 2025-11-22 09:38:08.948 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:08Z|01228|binding|INFO|Setting lport 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 down in Southbound
Nov 22 04:38:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:08Z|01229|binding|INFO|Removing iface tap22b006cb-c0 ovn-installed in OVS
Nov 22 04:38:08 np0005532048 nova_compute[253661]: 2025-11-22 09:38:08.953 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:08.959 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:83:85 10.100.0.8'], port_security=['fa:16:3e:c9:83:85 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-669fa85d-7478-40e5-958b-7300ef3552b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '714d001a-9857-4892-9e43-4add0015169f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f6a9cc6-46e5-4035-8aed-8dfaed3a2f4d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:38:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:08.960 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 in datapath 669fa85d-7478-40e5-958b-7300ef3552b5 unbound from our chassis#033[00m
Nov 22 04:38:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:08.963 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 669fa85d-7478-40e5-958b-7300ef3552b5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:38:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:08.965 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1e4752-2d95-4c4f-af54-68fba04dd830]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:08.965 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5 namespace which is not needed anymore#033[00m
Nov 22 04:38:08 np0005532048 nova_compute[253661]: 2025-11-22 09:38:08.968 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:09 np0005532048 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000072.scope: Deactivated successfully.
Nov 22 04:38:09 np0005532048 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000072.scope: Consumed 16.588s CPU time.
Nov 22 04:38:09 np0005532048 systemd-machined[215941]: Machine qemu-143-instance-00000072 terminated.
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.129 253665 INFO nova.virt.libvirt.driver [-] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Instance destroyed successfully.#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.130 253665 DEBUG nova.objects.instance [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:38:09 np0005532048 neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5[372884]: [NOTICE]   (372888) : haproxy version is 2.8.14-c23fe91
Nov 22 04:38:09 np0005532048 neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5[372884]: [NOTICE]   (372888) : path to executable is /usr/sbin/haproxy
Nov 22 04:38:09 np0005532048 neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5[372884]: [WARNING]  (372888) : Exiting Master process...
Nov 22 04:38:09 np0005532048 neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5[372884]: [WARNING]  (372888) : Exiting Master process...
Nov 22 04:38:09 np0005532048 neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5[372884]: [ALERT]    (372888) : Current worker (372893) exited with code 143 (Terminated)
Nov 22 04:38:09 np0005532048 neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5[372884]: [WARNING]  (372888) : All workers exited. Exiting... (0)
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.151 253665 DEBUG nova.virt.libvirt.vif [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:36:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-628839971',display_name='tempest-TestNetworkBasicOps-server-628839971',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-628839971',id=114,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIZ5gGdNvaqAtX8j4rLIehpVsycYZstZu428EjSgRsIaTKO3qobX2DWEa45t7eW4vzvXR6ESLf4/AnMv9en3fY5WkAniEGuSXx7koBFV1HR0ktIagOKt25I/jbmVsb/jUA==',key_name='tempest-TestNetworkBasicOps-971917795',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:36:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-n0lt6esc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:36:56Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.152 253665 DEBUG nova.network.os_vif_util [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "address": "fa:16:3e:c9:83:85", "network": {"id": "669fa85d-7478-40e5-958b-7300ef3552b5", "bridge": "br-int", "label": "tempest-network-smoke--1379397349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22b006cb-c0", "ovs_interfaceid": "22b006cb-c06d-4ebb-9f02-ccbbdfc34f26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:38:09 np0005532048 systemd[1]: libpod-16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239.scope: Deactivated successfully.
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.152 253665 DEBUG nova.network.os_vif_util [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c9:83:85,bridge_name='br-int',has_traffic_filtering=True,id=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22b006cb-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.153 253665 DEBUG os_vif [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:83:85,bridge_name='br-int',has_traffic_filtering=True,id=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22b006cb-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.154 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.155 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22b006cb-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.159 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.160 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:38:09 np0005532048 podman[376742]: 2025-11-22 09:38:09.161120764 +0000 UTC m=+0.061148002 container died 16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.162 253665 INFO os_vif [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:83:85,bridge_name='br-int',has_traffic_filtering=True,id=22b006cb-c06d-4ebb-9f02-ccbbdfc34f26,network=Network(669fa85d-7478-40e5-958b-7300ef3552b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22b006cb-c0')#033[00m
Nov 22 04:38:09 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239-userdata-shm.mount: Deactivated successfully.
Nov 22 04:38:09 np0005532048 systemd[1]: var-lib-containers-storage-overlay-eb6d1ba9fa3eb3ce0aa01e1a76d7c9a2d999425c99d5548d6568348e14d15459-merged.mount: Deactivated successfully.
Nov 22 04:38:09 np0005532048 podman[376742]: 2025-11-22 09:38:09.214299926 +0000 UTC m=+0.114327144 container cleanup 16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 04:38:09 np0005532048 systemd[1]: libpod-conmon-16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239.scope: Deactivated successfully.
Nov 22 04:38:09 np0005532048 podman[376800]: 2025-11-22 09:38:09.29445792 +0000 UTC m=+0.054790334 container remove 16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:38:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.307 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e58471f1-4878-43ce-8081-852fee8e6e69]: (4, ('Sat Nov 22 09:38:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5 (16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239)\n16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239\nSat Nov 22 09:38:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5 (16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239)\n16417638e7f6397518a640509a00c0b9b3434807772363712ebb0db1654a5239\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.309 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c2ea86-ab9b-4373-b292-ffd41febecee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.311 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap669fa85d-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:09 np0005532048 kernel: tap669fa85d-70: left promiscuous mode
Nov 22 04:38:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.322 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5aebc9cf-998f-4753-ab08-24987518de34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.314 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.344 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2c3e9c6a-a87b-4998-8d60-a3958e5e1e8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.346 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d315649a-6c67-41b0-aaa8-4a3cadf3d291]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.365 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c96f8028-798d-4d9f-a82f-2dba2262f3ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706699, 'reachable_time': 17662, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376815, 'error': None, 'target': 'ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:09 np0005532048 systemd[1]: run-netns-ovnmeta\x2d669fa85d\x2d7478\x2d40e5\x2d958b\x2d7300ef3552b5.mount: Deactivated successfully.
Nov 22 04:38:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.373 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-669fa85d-7478-40e5-958b-7300ef3552b5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:38:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:09.373 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe7392f-1e07-4afa-ba9d-82c5ebc9b6f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.400 253665 DEBUG nova.compute.manager [req-c7625ee8-dd9d-4906-8de1-684b7a1d35c7 req-44afb9e7-5c37-4ad4-8565-64cbd9603515 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-vif-unplugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.401 253665 DEBUG oslo_concurrency.lockutils [req-c7625ee8-dd9d-4906-8de1-684b7a1d35c7 req-44afb9e7-5c37-4ad4-8565-64cbd9603515 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.401 253665 DEBUG oslo_concurrency.lockutils [req-c7625ee8-dd9d-4906-8de1-684b7a1d35c7 req-44afb9e7-5c37-4ad4-8565-64cbd9603515 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.401 253665 DEBUG oslo_concurrency.lockutils [req-c7625ee8-dd9d-4906-8de1-684b7a1d35c7 req-44afb9e7-5c37-4ad4-8565-64cbd9603515 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.401 253665 DEBUG nova.compute.manager [req-c7625ee8-dd9d-4906-8de1-684b7a1d35c7 req-44afb9e7-5c37-4ad4-8565-64cbd9603515 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] No waiting events found dispatching network-vif-unplugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.402 253665 DEBUG nova.compute.manager [req-c7625ee8-dd9d-4906-8de1-684b7a1d35c7 req-44afb9e7-5c37-4ad4-8565-64cbd9603515 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-vif-unplugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.826 253665 INFO nova.virt.libvirt.driver [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Deleting instance files /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_del#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.827 253665 INFO nova.virt.libvirt.driver [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Deletion of /var/lib/nova/instances/1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9_del complete#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.868 253665 INFO nova.compute.manager [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.869 253665 DEBUG oslo.service.loopingcall [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.869 253665 DEBUG nova.compute.manager [-] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:38:09 np0005532048 nova_compute[253661]: 2025-11-22 09:38:09.869 253665 DEBUG nova.network.neutron [-] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:38:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 326 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 759 KiB/s rd, 654 KiB/s wr, 84 op/s
Nov 22 04:38:10 np0005532048 nova_compute[253661]: 2025-11-22 09:38:10.601 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:11 np0005532048 nova_compute[253661]: 2025-11-22 09:38:11.125 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:11 np0005532048 nova_compute[253661]: 2025-11-22 09:38:11.403 253665 DEBUG nova.network.neutron [-] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:11 np0005532048 nova_compute[253661]: 2025-11-22 09:38:11.437 253665 INFO nova.compute.manager [-] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Took 1.57 seconds to deallocate network for instance.#033[00m
Nov 22 04:38:11 np0005532048 nova_compute[253661]: 2025-11-22 09:38:11.493 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:11 np0005532048 nova_compute[253661]: 2025-11-22 09:38:11.494 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:11 np0005532048 nova_compute[253661]: 2025-11-22 09:38:11.550 253665 DEBUG nova.compute.manager [req-d99a69d3-1e4e-45f8-b5e6-7435341cdc2a req-d5586a9d-4aa0-4bf3-a8c7-858206577624 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-vif-deleted-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:11 np0005532048 nova_compute[253661]: 2025-11-22 09:38:11.561 253665 DEBUG nova.compute.manager [req-fb47da4c-fba6-4748-a1a4-aced8a369ae8 req-34e83ca0-b1f9-451d-8560-16148c064a76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received event network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:11 np0005532048 nova_compute[253661]: 2025-11-22 09:38:11.562 253665 DEBUG oslo_concurrency.lockutils [req-fb47da4c-fba6-4748-a1a4-aced8a369ae8 req-34e83ca0-b1f9-451d-8560-16148c064a76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:11 np0005532048 nova_compute[253661]: 2025-11-22 09:38:11.562 253665 DEBUG oslo_concurrency.lockutils [req-fb47da4c-fba6-4748-a1a4-aced8a369ae8 req-34e83ca0-b1f9-451d-8560-16148c064a76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:11 np0005532048 nova_compute[253661]: 2025-11-22 09:38:11.563 253665 DEBUG oslo_concurrency.lockutils [req-fb47da4c-fba6-4748-a1a4-aced8a369ae8 req-34e83ca0-b1f9-451d-8560-16148c064a76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:11 np0005532048 nova_compute[253661]: 2025-11-22 09:38:11.563 253665 DEBUG nova.compute.manager [req-fb47da4c-fba6-4748-a1a4-aced8a369ae8 req-34e83ca0-b1f9-451d-8560-16148c064a76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] No waiting events found dispatching network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:11 np0005532048 nova_compute[253661]: 2025-11-22 09:38:11.563 253665 WARNING nova.compute.manager [req-fb47da4c-fba6-4748-a1a4-aced8a369ae8 req-34e83ca0-b1f9-451d-8560-16148c064a76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Received unexpected event network-vif-plugged-22b006cb-c06d-4ebb-9f02-ccbbdfc34f26 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:38:11 np0005532048 nova_compute[253661]: 2025-11-22 09:38:11.623 253665 DEBUG oslo_concurrency.processutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 303 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 41 KiB/s wr, 85 op/s
Nov 22 04:38:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:38:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1976523645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:38:12 np0005532048 nova_compute[253661]: 2025-11-22 09:38:12.125 253665 DEBUG oslo_concurrency.processutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:12 np0005532048 nova_compute[253661]: 2025-11-22 09:38:12.132 253665 DEBUG nova.compute.provider_tree [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:38:12 np0005532048 nova_compute[253661]: 2025-11-22 09:38:12.144 253665 DEBUG nova.scheduler.client.report [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:38:12 np0005532048 nova_compute[253661]: 2025-11-22 09:38:12.165 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:12 np0005532048 nova_compute[253661]: 2025-11-22 09:38:12.191 253665 INFO nova.scheduler.client.report [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9#033[00m
Nov 22 04:38:12 np0005532048 nova_compute[253661]: 2025-11-22 09:38:12.250 253665 DEBUG oslo_concurrency.lockutils [None req-055cd25a-0cde-42d2-bb5e-86031fcec714 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.366s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:12 np0005532048 nova_compute[253661]: 2025-11-22 09:38:12.369 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:38:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2062388428' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:38:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:38:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2062388428' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:38:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:38:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 246 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 38 KiB/s wr, 131 op/s
Nov 22 04:38:14 np0005532048 nova_compute[253661]: 2025-11-22 09:38:14.159 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:15 np0005532048 nova_compute[253661]: 2025-11-22 09:38:15.532 253665 DEBUG nova.compute.manager [req-e19798e7-fd7c-4be6-8622-420bab36ea91 req-8da8aad3-6e77-4335-b479-4ee2559e1a87 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-changed-ff0231eb-335b-4acd-98c8-d655d887e97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:15 np0005532048 nova_compute[253661]: 2025-11-22 09:38:15.532 253665 DEBUG nova.compute.manager [req-e19798e7-fd7c-4be6-8622-420bab36ea91 req-8da8aad3-6e77-4335-b479-4ee2559e1a87 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Refreshing instance network info cache due to event network-changed-ff0231eb-335b-4acd-98c8-d655d887e97a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:38:15 np0005532048 nova_compute[253661]: 2025-11-22 09:38:15.533 253665 DEBUG oslo_concurrency.lockutils [req-e19798e7-fd7c-4be6-8622-420bab36ea91 req-8da8aad3-6e77-4335-b479-4ee2559e1a87 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:38:15 np0005532048 nova_compute[253661]: 2025-11-22 09:38:15.533 253665 DEBUG oslo_concurrency.lockutils [req-e19798e7-fd7c-4be6-8622-420bab36ea91 req-8da8aad3-6e77-4335-b479-4ee2559e1a87 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:38:15 np0005532048 nova_compute[253661]: 2025-11-22 09:38:15.533 253665 DEBUG nova.network.neutron [req-e19798e7-fd7c-4be6-8622-420bab36ea91 req-8da8aad3-6e77-4335-b479-4ee2559e1a87 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Refreshing network info cache for port ff0231eb-335b-4acd-98c8-d655d887e97a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:38:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:15Z|01230|binding|INFO|Releasing lport 8cb4fbf8-c8a1-48f8-bf71-339312c7db31 from this chassis (sb_readonly=0)
Nov 22 04:38:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:15Z|01231|binding|INFO|Releasing lport eea1332c-6e32-4e52-a7c7-645bf860d501 from this chassis (sb_readonly=0)
Nov 22 04:38:15 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:15Z|01232|binding|INFO|Releasing lport 6e07e124-b404-4946-958f-042e8d633a40 from this chassis (sb_readonly=0)
Nov 22 04:38:15 np0005532048 nova_compute[253661]: 2025-11-22 09:38:15.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 246 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 120 op/s
Nov 22 04:38:16 np0005532048 nova_compute[253661]: 2025-11-22 09:38:16.128 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:16 np0005532048 nova_compute[253661]: 2025-11-22 09:38:16.231 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:16 np0005532048 nova_compute[253661]: 2025-11-22 09:38:16.232 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:16 np0005532048 nova_compute[253661]: 2025-11-22 09:38:16.245 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:38:16 np0005532048 nova_compute[253661]: 2025-11-22 09:38:16.254 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "6e3727ef-288f-4e26-8d29-f85423546391" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:16 np0005532048 nova_compute[253661]: 2025-11-22 09:38:16.254 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:16 np0005532048 nova_compute[253661]: 2025-11-22 09:38:16.285 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:38:16 np0005532048 nova_compute[253661]: 2025-11-22 09:38:16.359 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:16 np0005532048 nova_compute[253661]: 2025-11-22 09:38:16.360 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:16 np0005532048 nova_compute[253661]: 2025-11-22 09:38:16.366 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:38:16 np0005532048 nova_compute[253661]: 2025-11-22 09:38:16.366 253665 INFO nova.compute.claims [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:38:16 np0005532048 nova_compute[253661]: 2025-11-22 09:38:16.390 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:16 np0005532048 nova_compute[253661]: 2025-11-22 09:38:16.555 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:16 np0005532048 nova_compute[253661]: 2025-11-22 09:38:16.686 253665 DEBUG nova.network.neutron [req-e19798e7-fd7c-4be6-8622-420bab36ea91 req-8da8aad3-6e77-4335-b479-4ee2559e1a87 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updated VIF entry in instance network info cache for port ff0231eb-335b-4acd-98c8-d655d887e97a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:38:16 np0005532048 nova_compute[253661]: 2025-11-22 09:38:16.687 253665 DEBUG nova.network.neutron [req-e19798e7-fd7c-4be6-8622-420bab36ea91 req-8da8aad3-6e77-4335-b479-4ee2559e1a87 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updating instance_info_cache with network_info: [{"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:16 np0005532048 nova_compute[253661]: 2025-11-22 09:38:16.706 253665 DEBUG oslo_concurrency.lockutils [req-e19798e7-fd7c-4be6-8622-420bab36ea91 req-8da8aad3-6e77-4335-b479-4ee2559e1a87 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:38:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:38:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4064126094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.121 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.129 253665 DEBUG nova.compute.provider_tree [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.146 253665 DEBUG nova.scheduler.client.report [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.176 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.177 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.181 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.191 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.191 253665 INFO nova.compute.claims [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.235 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.235 253665 DEBUG nova.network.neutron [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.255 253665 INFO nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.273 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.363 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.366 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.367 253665 INFO nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Creating image(s)#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.389 253665 DEBUG nova.storage.rbd_utils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] rbd image d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.418 253665 DEBUG nova.storage.rbd_utils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] rbd image d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.443 253665 DEBUG nova.storage.rbd_utils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] rbd image d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.448 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.513 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.558 253665 DEBUG nova.policy [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '24fbabe00a26461eaa9027f7105ae97c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9ddb669b6144eee90dc043099e8df8c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.570 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.572 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.574 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.574 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.601 253665 DEBUG nova.storage.rbd_utils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] rbd image d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.607 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.863 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804282.8613734, 117927df-3c9e-4609-b5ba-dc3937b9339d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.865 253665 INFO nova.compute.manager [-] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.893 253665 DEBUG nova.compute.manager [None req-0e22a9ad-59e2-41aa-a804-f2307df1b760 - - - - - -] [instance: 117927df-3c9e-4609-b5ba-dc3937b9339d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 254 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 97 KiB/s wr, 124 op/s
Nov 22 04:38:17 np0005532048 nova_compute[253661]: 2025-11-22 09:38:17.968 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:38:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3840286575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.007 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.015 253665 DEBUG nova.compute.provider_tree [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.034 253665 DEBUG nova.scheduler.client.report [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.064 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.065 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.071 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.144 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.145 253665 DEBUG nova.network.neutron [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.155 253665 DEBUG nova.storage.rbd_utils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] resizing rbd image d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.198 253665 INFO nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.227 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.269 253665 DEBUG nova.network.neutron [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Successfully created port: 761d949a-b334-4144-be7a-5f02c905c715 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.280 253665 DEBUG nova.objects.instance [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lazy-loading 'migration_context' on Instance uuid d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.299 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.300 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Ensure instance console log exists: /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.300 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.301 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.301 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.344 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.346 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.347 253665 INFO nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Creating image(s)#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.381 253665 DEBUG nova.storage.rbd_utils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] rbd image 6e3727ef-288f-4e26-8d29-f85423546391_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.415 253665 DEBUG nova.storage.rbd_utils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] rbd image 6e3727ef-288f-4e26-8d29-f85423546391_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.447 253665 DEBUG nova.storage.rbd_utils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] rbd image 6e3727ef-288f-4e26-8d29-f85423546391_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.452 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.506 253665 DEBUG nova.policy [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '58f15faf9ac94307a17022836fe74e23', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84cc8edaaa54443997ac9f33f8fab7ce', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.545 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.547 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.548 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.548 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.577 253665 DEBUG nova.storage.rbd_utils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] rbd image 6e3727ef-288f-4e26-8d29-f85423546391_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.583 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6e3727ef-288f-4e26-8d29-f85423546391_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:38:18 np0005532048 nova_compute[253661]: 2025-11-22 09:38:18.951 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6e3727ef-288f-4e26-8d29-f85423546391_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:18 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 04:38:19 np0005532048 nova_compute[253661]: 2025-11-22 09:38:19.040 253665 DEBUG nova.storage.rbd_utils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] resizing rbd image 6e3727ef-288f-4e26-8d29-f85423546391_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:38:19 np0005532048 nova_compute[253661]: 2025-11-22 09:38:19.159 253665 DEBUG nova.objects.instance [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lazy-loading 'migration_context' on Instance uuid 6e3727ef-288f-4e26-8d29-f85423546391 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:38:19 np0005532048 nova_compute[253661]: 2025-11-22 09:38:19.163 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:19 np0005532048 nova_compute[253661]: 2025-11-22 09:38:19.182 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:38:19 np0005532048 nova_compute[253661]: 2025-11-22 09:38:19.183 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Ensure instance console log exists: /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:38:19 np0005532048 nova_compute[253661]: 2025-11-22 09:38:19.183 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:19 np0005532048 nova_compute[253661]: 2025-11-22 09:38:19.184 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:19 np0005532048 nova_compute[253661]: 2025-11-22 09:38:19.184 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:19 np0005532048 nova_compute[253661]: 2025-11-22 09:38:19.195 253665 DEBUG nova.network.neutron [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Successfully created port: a28c191e-c725-404b-a4cb-e5b89c914f67 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:38:19 np0005532048 nova_compute[253661]: 2025-11-22 09:38:19.313 253665 DEBUG nova.network.neutron [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Successfully updated port: 761d949a-b334-4144-be7a-5f02c905c715 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:38:19 np0005532048 nova_compute[253661]: 2025-11-22 09:38:19.328 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:38:19 np0005532048 nova_compute[253661]: 2025-11-22 09:38:19.329 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquired lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:38:19 np0005532048 nova_compute[253661]: 2025-11-22 09:38:19.329 253665 DEBUG nova.network.neutron [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:38:19 np0005532048 nova_compute[253661]: 2025-11-22 09:38:19.435 253665 DEBUG nova.compute.manager [req-c30a3eea-0562-4ffd-bc58-c25ddd18774a req-468134d9-c62b-4731-9a5c-191dee840013 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-changed-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:19 np0005532048 nova_compute[253661]: 2025-11-22 09:38:19.438 253665 DEBUG nova.compute.manager [req-c30a3eea-0562-4ffd-bc58-c25ddd18774a req-468134d9-c62b-4731-9a5c-191dee840013 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Refreshing instance network info cache due to event network-changed-761d949a-b334-4144-be7a-5f02c905c715. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:38:19 np0005532048 nova_compute[253661]: 2025-11-22 09:38:19.439 253665 DEBUG oslo_concurrency.lockutils [req-c30a3eea-0562-4ffd-bc58-c25ddd18774a req-468134d9-c62b-4731-9a5c-191dee840013 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:38:19 np0005532048 nova_compute[253661]: 2025-11-22 09:38:19.561 253665 DEBUG nova.network.neutron [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:38:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 285 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 110 op/s
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.194 253665 DEBUG nova.network.neutron [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Successfully updated port: a28c191e-c725-404b-a4cb-e5b89c914f67 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.207 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "refresh_cache-6e3727ef-288f-4e26-8d29-f85423546391" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.207 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquired lock "refresh_cache-6e3727ef-288f-4e26-8d29-f85423546391" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.207 253665 DEBUG nova.network.neutron [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.312 253665 DEBUG nova.compute.manager [req-6cafe751-6b5f-4609-a675-93da33038cba req-90aa240a-3c26-419b-9f56-c600dfe48265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received event network-changed-a28c191e-c725-404b-a4cb-e5b89c914f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.313 253665 DEBUG nova.compute.manager [req-6cafe751-6b5f-4609-a675-93da33038cba req-90aa240a-3c26-419b-9f56-c600dfe48265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Refreshing instance network info cache due to event network-changed-a28c191e-c725-404b-a4cb-e5b89c914f67. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.313 253665 DEBUG oslo_concurrency.lockutils [req-6cafe751-6b5f-4609-a675-93da33038cba req-90aa240a-3c26-419b-9f56-c600dfe48265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-6e3727ef-288f-4e26-8d29-f85423546391" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.391 253665 DEBUG nova.network.neutron [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.407 253665 DEBUG nova.network.neutron [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Updating instance_info_cache with network_info: [{"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.424 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Releasing lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.425 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Instance network_info: |[{"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.425 253665 DEBUG oslo_concurrency.lockutils [req-c30a3eea-0562-4ffd-bc58-c25ddd18774a req-468134d9-c62b-4731-9a5c-191dee840013 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.425 253665 DEBUG nova.network.neutron [req-c30a3eea-0562-4ffd-bc58-c25ddd18774a req-468134d9-c62b-4731-9a5c-191dee840013 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Refreshing network info cache for port 761d949a-b334-4144-be7a-5f02c905c715 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.428 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Start _get_guest_xml network_info=[{"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.433 253665 WARNING nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.440 253665 DEBUG nova.virt.libvirt.host [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.441 253665 DEBUG nova.virt.libvirt.host [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.444 253665 DEBUG nova.virt.libvirt.host [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.445 253665 DEBUG nova.virt.libvirt.host [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.445 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.445 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.446 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.446 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.446 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.446 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.446 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.447 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.447 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.447 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.447 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.447 253665 DEBUG nova.virt.hardware [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.451 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:38:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1630693833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:38:20 np0005532048 nova_compute[253661]: 2025-11-22 09:38:20.987 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.021 253665 DEBUG nova.storage.rbd_utils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] rbd image d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.026 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.130 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:21 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:21Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:38:77 10.100.0.14
Nov 22 04:38:21 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:21Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:38:77 10.100.0.14
Nov 22 04:38:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:38:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3885037816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.579 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.582 253665 DEBUG nova.virt.libvirt.vif [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:38:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2070464237-access_point-937459641',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2070464237-access_point-937459641',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2070464237-ac',id=119,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNtSLVn2f2AjktFMVEQRNrPDPgiu6XGcAVHoUX9ErUANDAfx8scLKesh39J38uCHme4Kr1WaGaUgPEF++ZKW4JdZA91CWGfVEKx+uaYRX1tqW4xZuiIvDOiFoDeabW/cjQ==',key_name='tempest-TestSecurityGroupsBasicOps-580779993',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9ddb669b6144eee90dc043099e8df8c',ramdisk_id='',reservation_id='r-b9fzu52l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2070464237',owner_user_name='tempest-TestSecurityGroupsBasicOps-2070464237-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:38:17Z,user_data=None,user_id='24fbabe00a26461eaa9027f7105ae97c',uuid=d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.583 253665 DEBUG nova.network.os_vif_util [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Converting VIF {"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.585 253665 DEBUG nova.network.os_vif_util [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:a8:14,bridge_name='br-int',has_traffic_filtering=True,id=761d949a-b334-4144-be7a-5f02c905c715,network=Network(a8c9b48b-687a-480f-aff5-bd1fee4c2bbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap761d949a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.587 253665 DEBUG nova.objects.instance [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lazy-loading 'pci_devices' on Instance uuid d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.602 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  <uuid>d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3</uuid>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  <name>instance-00000077</name>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-2070464237-access_point-937459641</nova:name>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:38:20</nova:creationTime>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:38:21 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:        <nova:user uuid="24fbabe00a26461eaa9027f7105ae97c">tempest-TestSecurityGroupsBasicOps-2070464237-project-member</nova:user>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:        <nova:project uuid="a9ddb669b6144eee90dc043099e8df8c">tempest-TestSecurityGroupsBasicOps-2070464237</nova:project>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:        <nova:port uuid="761d949a-b334-4144-be7a-5f02c905c715">
Nov 22 04:38:21 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <entry name="serial">d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3</entry>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <entry name="uuid">d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3</entry>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk">
Nov 22 04:38:21 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:38:21 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk.config">
Nov 22 04:38:21 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:38:21 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:85:a8:14"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <target dev="tap761d949a-b3"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3/console.log" append="off"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:38:21 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:38:21 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:38:21 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:38:21 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.603 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Preparing to wait for external event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.603 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.604 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.604 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.605 253665 DEBUG nova.virt.libvirt.vif [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:38:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2070464237-access_point-937459641',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2070464237-access_point-937459641',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2070464237-ac',id=119,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNtSLVn2f2AjktFMVEQRNrPDPgiu6XGcAVHoUX9ErUANDAfx8scLKesh39J38uCHme4Kr1WaGaUgPEF++ZKW4JdZA91CWGfVEKx+uaYRX1tqW4xZuiIvDOiFoDeabW/cjQ==',key_name='tempest-TestSecurityGroupsBasicOps-580779993',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9ddb669b6144eee90dc043099e8df8c',ramdisk_id='',reservation_id='r-b9fzu52l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2070464237',owner_user_name='tempest-TestSecurityGroupsBasicOps-2070464237-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:38:17Z,user_data=None,user_id='24fbabe00a26461eaa9027f7105ae97c',uuid=d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.605 253665 DEBUG nova.network.os_vif_util [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Converting VIF {"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.606 253665 DEBUG nova.network.os_vif_util [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:a8:14,bridge_name='br-int',has_traffic_filtering=True,id=761d949a-b334-4144-be7a-5f02c905c715,network=Network(a8c9b48b-687a-480f-aff5-bd1fee4c2bbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap761d949a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.606 253665 DEBUG os_vif [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:a8:14,bridge_name='br-int',has_traffic_filtering=True,id=761d949a-b334-4144-be7a-5f02c905c715,network=Network(a8c9b48b-687a-480f-aff5-bd1fee4c2bbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap761d949a-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.607 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.607 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.608 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.611 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.611 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap761d949a-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.613 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap761d949a-b3, col_values=(('external_ids', {'iface-id': '761d949a-b334-4144-be7a-5f02c905c715', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:a8:14', 'vm-uuid': 'd1cc6b07-57c8-46b4-abbb-e0a366b6c2c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.615 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:21 np0005532048 NetworkManager[48920]: <info>  [1763804301.6169] manager: (tap761d949a-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/508)
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.619 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.624 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.625 253665 INFO os_vif [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:a8:14,bridge_name='br-int',has_traffic_filtering=True,id=761d949a-b334-4144-be7a-5f02c905c715,network=Network(a8c9b48b-687a-480f-aff5-bd1fee4c2bbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap761d949a-b3')#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.690 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.690 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.690 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] No VIF found with MAC fa:16:3e:85:a8:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.691 253665 INFO nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Using config drive#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.718 253665 DEBUG nova.storage.rbd_utils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] rbd image d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.728 253665 DEBUG nova.network.neutron [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Updating instance_info_cache with network_info: [{"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:21 np0005532048 podman[377282]: 2025-11-22 09:38:21.731134624 +0000 UTC m=+0.068854865 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 04:38:21 np0005532048 podman[377283]: 2025-11-22 09:38:21.741746217 +0000 UTC m=+0.075988460 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.764 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Releasing lock "refresh_cache-6e3727ef-288f-4e26-8d29-f85423546391" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.764 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Instance network_info: |[{"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.766 253665 DEBUG oslo_concurrency.lockutils [req-6cafe751-6b5f-4609-a675-93da33038cba req-90aa240a-3c26-419b-9f56-c600dfe48265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-6e3727ef-288f-4e26-8d29-f85423546391" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.766 253665 DEBUG nova.network.neutron [req-6cafe751-6b5f-4609-a675-93da33038cba req-90aa240a-3c26-419b-9f56-c600dfe48265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Refreshing network info cache for port a28c191e-c725-404b-a4cb-e5b89c914f67 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.771 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Start _get_guest_xml network_info=[{"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.776 253665 WARNING nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.781 253665 DEBUG nova.virt.libvirt.host [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.782 253665 DEBUG nova.virt.libvirt.host [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.793 253665 DEBUG nova.virt.libvirt.host [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.793 253665 DEBUG nova.virt.libvirt.host [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.794 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.794 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.795 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.795 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.796 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.796 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.797 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.797 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.797 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.798 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.798 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.798 253665 DEBUG nova.virt.hardware [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.801 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:21 np0005532048 nova_compute[253661]: 2025-11-22 09:38:21.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 316 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 127 op/s
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.161 253665 INFO nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Creating config drive at /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3/disk.config#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.169 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplq5_2mgb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.288 253665 DEBUG nova.network.neutron [req-c30a3eea-0562-4ffd-bc58-c25ddd18774a req-468134d9-c62b-4731-9a5c-191dee840013 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Updated VIF entry in instance network info cache for port 761d949a-b334-4144-be7a-5f02c905c715. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.288 253665 DEBUG nova.network.neutron [req-c30a3eea-0562-4ffd-bc58-c25ddd18774a req-468134d9-c62b-4731-9a5c-191dee840013 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Updating instance_info_cache with network_info: [{"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.305 253665 DEBUG oslo_concurrency.lockutils [req-c30a3eea-0562-4ffd-bc58-c25ddd18774a req-468134d9-c62b-4731-9a5c-191dee840013 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:38:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:38:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3387314890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.330 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplq5_2mgb" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.356 253665 DEBUG nova.storage.rbd_utils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] rbd image d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.360 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3/disk.config d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.403 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.427 253665 DEBUG nova.storage.rbd_utils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] rbd image 6e3727ef-288f-4e26-8d29-f85423546391_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.432 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.540 253665 DEBUG oslo_concurrency.processutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3/disk.config d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.542 253665 INFO nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Deleting local config drive /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3/disk.config because it was imported into RBD.#033[00m
Nov 22 04:38:22 np0005532048 NetworkManager[48920]: <info>  [1763804302.6208] manager: (tap761d949a-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/509)
Nov 22 04:38:22 np0005532048 kernel: tap761d949a-b3: entered promiscuous mode
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.629 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:22Z|01233|binding|INFO|Claiming lport 761d949a-b334-4144-be7a-5f02c905c715 for this chassis.
Nov 22 04:38:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:22Z|01234|binding|INFO|761d949a-b334-4144-be7a-5f02c905c715: Claiming fa:16:3e:85:a8:14 10.100.0.8
Nov 22 04:38:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.640 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:a8:14 10.100.0.8'], port_security=['fa:16:3e:85:a8:14 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd1cc6b07-57c8-46b4-abbb-e0a366b6c2c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9ddb669b6144eee90dc043099e8df8c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6befe33f-63d2-41aa-b574-8eb9b323c484 8fecaa1a-36f4-4ef4-bac2-46e5b8b5f461', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1cb9034-c4c3-45e7-9e31-5c5d3f434f14, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=761d949a-b334-4144-be7a-5f02c905c715) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:38:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.641 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 761d949a-b334-4144-be7a-5f02c905c715 in datapath a8c9b48b-687a-480f-aff5-bd1fee4c2bbd bound to our chassis#033[00m
Nov 22 04:38:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.644 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a8c9b48b-687a-480f-aff5-bd1fee4c2bbd#033[00m
Nov 22 04:38:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:22Z|01235|binding|INFO|Setting lport 761d949a-b334-4144-be7a-5f02c905c715 ovn-installed in OVS
Nov 22 04:38:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:22Z|01236|binding|INFO|Setting lport 761d949a-b334-4144-be7a-5f02c905c715 up in Southbound
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.658 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.663 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[25e98c39-ca9c-4b37-9560-ecc16dfd3d73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.664 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa8c9b48b-61 in ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:38:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.666 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa8c9b48b-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:38:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.666 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[034fd7b2-2454-4589-8e4f-bf639f679104]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.667 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b523a676-e557-4527-bf2a-f460d96bfab7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:22 np0005532048 systemd-machined[215941]: New machine qemu-150-instance-00000077.
Nov 22 04:38:22 np0005532048 systemd[1]: Started Virtual Machine qemu-150-instance-00000077.
Nov 22 04:38:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.691 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[63af098c-258a-4a5f-8f3f-1f7681a1b2b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:22 np0005532048 systemd-udevd[377452]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:38:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.709 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6e5769ab-a229-4534-91f0-21fa824586b6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:22 np0005532048 NetworkManager[48920]: <info>  [1763804302.7195] device (tap761d949a-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:38:22 np0005532048 NetworkManager[48920]: <info>  [1763804302.7212] device (tap761d949a-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:38:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:38:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:38:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:38:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.747 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6f9634-f6c7-4c29-8ae2-563ddb135305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:38:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:38:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:38:22 np0005532048 NetworkManager[48920]: <info>  [1763804302.7582] manager: (tapa8c9b48b-60): new Veth device (/org/freedesktop/NetworkManager/Devices/510)
Nov 22 04:38:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.756 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc73ac8-afee-425f-8540-2281e17272f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.818 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1be4b618-be2c-41ad-bfa4-8eeef714d6dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:22.824 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e083b641-b8e5-40d5-87f5-61c5ea47a3e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:38:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2594992208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.944 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.946 253665 DEBUG nova.virt.libvirt.vif [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:38:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-771105155',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-771105155',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-771105155',id=120,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84cc8edaaa54443997ac9f33f8fab7ce',ramdisk_id='',reservation_id='r-uce6hdys',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-495917723',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-495917723-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:38:18Z,user_data=None,user_id='58f15faf9ac94307a17022836fe74e23',uuid=6e3727ef-288f-4e26-8d29-f85423546391,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.946 253665 DEBUG nova.network.os_vif_util [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Converting VIF {"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.947 253665 DEBUG nova.network.os_vif_util [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:6e:9b,bridge_name='br-int',has_traffic_filtering=True,id=a28c191e-c725-404b-a4cb-e5b89c914f67,network=Network(d994c6fb-564e-4523-afe4-89804b993385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa28c191e-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.948 253665 DEBUG nova.objects.instance [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lazy-loading 'pci_devices' on Instance uuid 6e3727ef-288f-4e26-8d29-f85423546391 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.966 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  <uuid>6e3727ef-288f-4e26-8d29-f85423546391</uuid>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  <name>instance-00000078</name>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-771105155</nova:name>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:38:21</nova:creationTime>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:38:22 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:        <nova:user uuid="58f15faf9ac94307a17022836fe74e23">tempest-ServersNegativeTestMultiTenantJSON-495917723-project-member</nova:user>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:        <nova:project uuid="84cc8edaaa54443997ac9f33f8fab7ce">tempest-ServersNegativeTestMultiTenantJSON-495917723</nova:project>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:        <nova:port uuid="a28c191e-c725-404b-a4cb-e5b89c914f67">
Nov 22 04:38:22 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <entry name="serial">6e3727ef-288f-4e26-8d29-f85423546391</entry>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <entry name="uuid">6e3727ef-288f-4e26-8d29-f85423546391</entry>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/6e3727ef-288f-4e26-8d29-f85423546391_disk">
Nov 22 04:38:22 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:38:22 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/6e3727ef-288f-4e26-8d29-f85423546391_disk.config">
Nov 22 04:38:22 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:38:22 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:13:6e:9b"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <target dev="tapa28c191e-c7"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391/console.log" append="off"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:38:22 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:38:22 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:38:22 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:38:22 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.969 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Preparing to wait for external event network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.969 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "6e3727ef-288f-4e26-8d29-f85423546391-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.969 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.970 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.970 253665 DEBUG nova.virt.libvirt.vif [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:38:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-771105155',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-771105155',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-771105155',id=120,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84cc8edaaa54443997ac9f33f8fab7ce',ramdisk_id='',reservation_id='r-uce6hdys',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJ
SON-495917723',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-495917723-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:38:18Z,user_data=None,user_id='58f15faf9ac94307a17022836fe74e23',uuid=6e3727ef-288f-4e26-8d29-f85423546391,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.970 253665 DEBUG nova.network.os_vif_util [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Converting VIF {"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.972 253665 DEBUG nova.network.os_vif_util [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:6e:9b,bridge_name='br-int',has_traffic_filtering=True,id=a28c191e-c725-404b-a4cb-e5b89c914f67,network=Network(d994c6fb-564e-4523-afe4-89804b993385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa28c191e-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.973 253665 DEBUG os_vif [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:6e:9b,bridge_name='br-int',has_traffic_filtering=True,id=a28c191e-c725-404b-a4cb-e5b89c914f67,network=Network(d994c6fb-564e-4523-afe4-89804b993385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa28c191e-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.973 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.974 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.974 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.977 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.978 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa28c191e-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.978 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa28c191e-c7, col_values=(('external_ids', {'iface-id': 'a28c191e-c725-404b-a4cb-e5b89c914f67', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:6e:9b', 'vm-uuid': '6e3727ef-288f-4e26-8d29-f85423546391'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.979 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:22 np0005532048 NetworkManager[48920]: <info>  [1763804302.9810] manager: (tapa28c191e-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.983 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.990 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:22 np0005532048 nova_compute[253661]: 2025-11-22 09:38:22.991 253665 INFO os_vif [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:6e:9b,bridge_name='br-int',has_traffic_filtering=True,id=a28c191e-c725-404b-a4cb-e5b89c914f67,network=Network(d994c6fb-564e-4523-afe4-89804b993385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa28c191e-c7')#033[00m
Nov 22 04:38:23 np0005532048 NetworkManager[48920]: <info>  [1763804303.0061] device (tapa8c9b48b-60): carrier: link connected
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.017 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[23d30165-cef7-4adc-bddd-f89ed7c1c7e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.041 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[725da78e-f9f0-44bd-af4e-eafa3d43f291]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8c9b48b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:59:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 358], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715374, 'reachable_time': 40748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377530, 'error': None, 'target': 'ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.066 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bc610dc1-f737-444f-99ac-df9151f530e9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:5919'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 715374, 'tstamp': 715374}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377531, 'error': None, 'target': 'ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.095 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0b25c1db-5fd3-4860-8f46-dfb46bd74fbc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8c9b48b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:59:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 358], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715374, 'reachable_time': 40748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377533, 'error': None, 'target': 'ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.124 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804303.12357, d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.125 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] VM Started (Lifecycle Event)#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.136 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.137 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.137 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] No VIF found with MAC fa:16:3e:13:6e:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.137 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[21733754-b429-49b2-b2a7-c2bca06bc53c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.139 253665 INFO nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Using config drive#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.168 253665 DEBUG nova.storage.rbd_utils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] rbd image 6e3727ef-288f-4e26-8d29-f85423546391_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.177 253665 DEBUG nova.compute.manager [req-529c7bcb-3bd3-45d1-b4e6-a882976b2fe4 req-ac4d322a-a5d6-4f6d-8265-be62a468785b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.178 253665 DEBUG oslo_concurrency.lockutils [req-529c7bcb-3bd3-45d1-b4e6-a882976b2fe4 req-ac4d322a-a5d6-4f6d-8265-be62a468785b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.178 253665 DEBUG oslo_concurrency.lockutils [req-529c7bcb-3bd3-45d1-b4e6-a882976b2fe4 req-ac4d322a-a5d6-4f6d-8265-be62a468785b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.178 253665 DEBUG oslo_concurrency.lockutils [req-529c7bcb-3bd3-45d1-b4e6-a882976b2fe4 req-ac4d322a-a5d6-4f6d-8265-be62a468785b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.178 253665 DEBUG nova.compute.manager [req-529c7bcb-3bd3-45d1-b4e6-a882976b2fe4 req-ac4d322a-a5d6-4f6d-8265-be62a468785b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Processing event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.179 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.180 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.190 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.191 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.196 253665 INFO nova.virt.libvirt.driver [-] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Instance spawned successfully.#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.196 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.207 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.207 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804303.1239512, d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.208 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.215 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.215 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.216 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.216 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.216 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.216 253665 DEBUG nova.virt.libvirt.driver [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.216 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[13e57839-fcb5-4ceb-9e82-95696bf1dd58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.218 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8c9b48b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.218 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.219 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8c9b48b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.224 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:23 np0005532048 NetworkManager[48920]: <info>  [1763804303.2252] manager: (tapa8c9b48b-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Nov 22 04:38:23 np0005532048 kernel: tapa8c9b48b-60: entered promiscuous mode
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.230 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.231 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa8c9b48b-60, col_values=(('external_ids', {'iface-id': '9e57ed14-a93d-454a-9d37-00035fb43663'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:23Z|01237|binding|INFO|Releasing lport 9e57ed14-a93d-454a-9d37-00035fb43663 from this chassis (sb_readonly=0)
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.239 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.243 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804303.1831286, d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.243 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.256 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.257 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.259 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a8c9b48b-687a-480f-aff5-bd1fee4c2bbd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a8c9b48b-687a-480f-aff5-bd1fee4c2bbd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.260 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc6ad51-9750-4b99-a327-c57a96d43ba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.261 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/a8c9b48b-687a-480f-aff5-bd1fee4c2bbd.pid.haproxy
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID a8c9b48b-687a-480f-aff5-bd1fee4c2bbd
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:38:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:23.262 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'env', 'PROCESS_TAG=haproxy-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a8c9b48b-687a-480f-aff5-bd1fee4c2bbd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.265 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.267 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.276 253665 INFO nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Took 5.91 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.276 253665 DEBUG nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.284 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.333 253665 INFO nova.compute.manager [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Took 6.99 seconds to build instance.#033[00m
Nov 22 04:38:23 np0005532048 nova_compute[253661]: 2025-11-22 09:38:23.351 253665 DEBUG oslo_concurrency.lockutils [None req-e1051523-a75f-40c7-ad11-3f518cb0a97d 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:23 np0005532048 podman[377586]: 2025-11-22 09:38:23.685604579 +0000 UTC m=+0.057767338 container create 2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:38:23 np0005532048 systemd[1]: Started libpod-conmon-2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e.scope.
Nov 22 04:38:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:38:23 np0005532048 podman[377586]: 2025-11-22 09:38:23.656565987 +0000 UTC m=+0.028728786 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:38:23 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:38:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58bc45d9f28847928b45ee55af7005c19133177176f15c3c95d23db67f15d5f1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:38:23 np0005532048 podman[377586]: 2025-11-22 09:38:23.772693884 +0000 UTC m=+0.144856673 container init 2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:38:23 np0005532048 podman[377586]: 2025-11-22 09:38:23.777888593 +0000 UTC m=+0.150051362 container start 2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:38:23 np0005532048 neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd[377602]: [NOTICE]   (377606) : New worker (377608) forked
Nov 22 04:38:23 np0005532048 neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd[377602]: [NOTICE]   (377606) : Loading success.
Nov 22 04:38:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 371 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 5.7 MiB/s wr, 158 op/s
Nov 22 04:38:24 np0005532048 nova_compute[253661]: 2025-11-22 09:38:24.127 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804289.1257915, 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:38:24 np0005532048 nova_compute[253661]: 2025-11-22 09:38:24.127 253665 INFO nova.compute.manager [-] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:38:24 np0005532048 nova_compute[253661]: 2025-11-22 09:38:24.153 253665 DEBUG nova.compute.manager [None req-f09ba0db-3166-4b14-8b41-64ccd812cb40 - - - - - -] [instance: 1d0562b0-e22d-4fb3-9f81-ab4559a7a1c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.335 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:38:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.336 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:38:24 np0005532048 nova_compute[253661]: 2025-11-22 09:38:24.337 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:24 np0005532048 nova_compute[253661]: 2025-11-22 09:38:24.361 253665 INFO nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Creating config drive at /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391/disk.config#033[00m
Nov 22 04:38:24 np0005532048 nova_compute[253661]: 2025-11-22 09:38:24.366 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp515hid6o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:24 np0005532048 nova_compute[253661]: 2025-11-22 09:38:24.444 253665 DEBUG nova.network.neutron [req-6cafe751-6b5f-4609-a675-93da33038cba req-90aa240a-3c26-419b-9f56-c600dfe48265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Updated VIF entry in instance network info cache for port a28c191e-c725-404b-a4cb-e5b89c914f67. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:38:24 np0005532048 nova_compute[253661]: 2025-11-22 09:38:24.446 253665 DEBUG nova.network.neutron [req-6cafe751-6b5f-4609-a675-93da33038cba req-90aa240a-3c26-419b-9f56-c600dfe48265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Updating instance_info_cache with network_info: [{"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:24 np0005532048 nova_compute[253661]: 2025-11-22 09:38:24.460 253665 DEBUG oslo_concurrency.lockutils [req-6cafe751-6b5f-4609-a675-93da33038cba req-90aa240a-3c26-419b-9f56-c600dfe48265 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-6e3727ef-288f-4e26-8d29-f85423546391" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:38:24 np0005532048 nova_compute[253661]: 2025-11-22 09:38:24.520 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp515hid6o" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:24 np0005532048 nova_compute[253661]: 2025-11-22 09:38:24.549 253665 DEBUG nova.storage.rbd_utils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] rbd image 6e3727ef-288f-4e26-8d29-f85423546391_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:24 np0005532048 nova_compute[253661]: 2025-11-22 09:38:24.555 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391/disk.config 6e3727ef-288f-4e26-8d29-f85423546391_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:24 np0005532048 nova_compute[253661]: 2025-11-22 09:38:24.784 253665 DEBUG oslo_concurrency.processutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391/disk.config 6e3727ef-288f-4e26-8d29-f85423546391_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:24 np0005532048 nova_compute[253661]: 2025-11-22 09:38:24.786 253665 INFO nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Deleting local config drive /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391/disk.config because it was imported into RBD.#033[00m
Nov 22 04:38:24 np0005532048 NetworkManager[48920]: <info>  [1763804304.8579] manager: (tapa28c191e-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/513)
Nov 22 04:38:24 np0005532048 kernel: tapa28c191e-c7: entered promiscuous mode
Nov 22 04:38:24 np0005532048 systemd-udevd[377479]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:38:24 np0005532048 nova_compute[253661]: 2025-11-22 09:38:24.865 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:24Z|01238|binding|INFO|Claiming lport a28c191e-c725-404b-a4cb-e5b89c914f67 for this chassis.
Nov 22 04:38:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:24Z|01239|binding|INFO|a28c191e-c725-404b-a4cb-e5b89c914f67: Claiming fa:16:3e:13:6e:9b 10.100.0.3
Nov 22 04:38:24 np0005532048 nova_compute[253661]: 2025-11-22 09:38:24.873 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.879 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:6e:9b 10.100.0.3'], port_security=['fa:16:3e:13:6e:9b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6e3727ef-288f-4e26-8d29-f85423546391', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d994c6fb-564e-4523-afe4-89804b993385', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84cc8edaaa54443997ac9f33f8fab7ce', 'neutron:revision_number': '2', 'neutron:security_group_ids': '621b389d-2096-4ee1-8e3b-c5cb3466897b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56f26a9d-1a5c-40a1-8f03-488332bb450e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a28c191e-c725-404b-a4cb-e5b89c914f67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:38:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.881 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a28c191e-c725-404b-a4cb-e5b89c914f67 in datapath d994c6fb-564e-4523-afe4-89804b993385 bound to our chassis#033[00m
Nov 22 04:38:24 np0005532048 NetworkManager[48920]: <info>  [1763804304.8855] device (tapa28c191e-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:38:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.883 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d994c6fb-564e-4523-afe4-89804b993385#033[00m
Nov 22 04:38:24 np0005532048 NetworkManager[48920]: <info>  [1763804304.8869] device (tapa28c191e-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:38:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:24Z|01240|binding|INFO|Setting lport a28c191e-c725-404b-a4cb-e5b89c914f67 ovn-installed in OVS
Nov 22 04:38:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:24Z|01241|binding|INFO|Setting lport a28c191e-c725-404b-a4cb-e5b89c914f67 up in Southbound
Nov 22 04:38:24 np0005532048 nova_compute[253661]: 2025-11-22 09:38:24.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.905 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[57bf1648-98fd-4774-af4c-16e1936652ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.907 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd994c6fb-51 in ovnmeta-d994c6fb-564e-4523-afe4-89804b993385 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:38:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.909 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd994c6fb-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:38:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.910 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0821ac85-ae3a-4631-b5ce-aa8c549459a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.912 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9822c923-b91a-4520-8377-c119c456ad30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:24 np0005532048 systemd-machined[215941]: New machine qemu-151-instance-00000078.
Nov 22 04:38:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.930 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd8dcb1-679e-428d-a003-50fbc8462f62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:24 np0005532048 systemd[1]: Started Virtual Machine qemu-151-instance-00000078.
Nov 22 04:38:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.962 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5a69378c-cbde-4061-8eab-13c15f64e2ea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:24.996 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[66620c46-1a4a-4ae5-912f-f688b90d2f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:25 np0005532048 NetworkManager[48920]: <info>  [1763804305.0033] manager: (tapd994c6fb-50): new Veth device (/org/freedesktop/NetworkManager/Devices/514)
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.007 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cb221a9b-6e0b-4aee-871f-e4778c657a6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.053 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd31307-2f3d-4645-9013-02f45ac49c99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.060 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0faefdbd-054d-4ec2-8fae-a260243df9f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:25 np0005532048 NetworkManager[48920]: <info>  [1763804305.1004] device (tapd994c6fb-50): carrier: link connected
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.106 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d9df4e5b-1222-4f7e-b003-402db68b04fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.123 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec3924e-76e5-43b9-8db3-456b9b6e901f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd994c6fb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:09:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 360], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715583, 'reachable_time': 37348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377688, 'error': None, 'target': 'ovnmeta-d994c6fb-564e-4523-afe4-89804b993385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.147 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[88de0866-5726-4933-8940-b54a8537bac2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee6:9a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 715583, 'tstamp': 715583}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377689, 'error': None, 'target': 'ovnmeta-d994c6fb-564e-4523-afe4-89804b993385', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.163 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8c8c9f-85da-4cc9-ad30-254977d8cf6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd994c6fb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:09:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 360], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715583, 'reachable_time': 37348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377690, 'error': None, 'target': 'ovnmeta-d994c6fb-564e-4523-afe4-89804b993385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.220 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7942634c-9c75-46be-98f4-e1db184de802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.276 253665 DEBUG nova.compute.manager [req-5cb8a16d-07c4-4f02-b98c-6b7dbd241b0a req-c6efe8d0-b3ed-49b8-9021-64fea95bc304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.277 253665 DEBUG oslo_concurrency.lockutils [req-5cb8a16d-07c4-4f02-b98c-6b7dbd241b0a req-c6efe8d0-b3ed-49b8-9021-64fea95bc304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.277 253665 DEBUG oslo_concurrency.lockutils [req-5cb8a16d-07c4-4f02-b98c-6b7dbd241b0a req-c6efe8d0-b3ed-49b8-9021-64fea95bc304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.277 253665 DEBUG oslo_concurrency.lockutils [req-5cb8a16d-07c4-4f02-b98c-6b7dbd241b0a req-c6efe8d0-b3ed-49b8-9021-64fea95bc304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.278 253665 DEBUG nova.compute.manager [req-5cb8a16d-07c4-4f02-b98c-6b7dbd241b0a req-c6efe8d0-b3ed-49b8-9021-64fea95bc304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] No waiting events found dispatching network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.278 253665 WARNING nova.compute.manager [req-5cb8a16d-07c4-4f02-b98c-6b7dbd241b0a req-c6efe8d0-b3ed-49b8-9021-64fea95bc304 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received unexpected event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.305 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7601e614-6235-4ab4-bf68-8f6f8a7829d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.307 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd994c6fb-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.307 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.308 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd994c6fb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.311 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:25 np0005532048 NetworkManager[48920]: <info>  [1763804305.3126] manager: (tapd994c6fb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/515)
Nov 22 04:38:25 np0005532048 kernel: tapd994c6fb-50: entered promiscuous mode
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.315 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.316 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd994c6fb-50, col_values=(('external_ids', {'iface-id': '01bfc096-4605-4de9-9175-6d95e7483385'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.318 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:25Z|01242|binding|INFO|Releasing lport 01bfc096-4605-4de9-9175-6d95e7483385 from this chassis (sb_readonly=0)
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.334 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.336 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d994c6fb-564e-4523-afe4-89804b993385.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d994c6fb-564e-4523-afe4-89804b993385.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.337 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a75a5f3b-8d1e-4776-a8cb-1b82c5a74fc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.337 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-d994c6fb-564e-4523-afe4-89804b993385
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/d994c6fb-564e-4523-afe4-89804b993385.pid.haproxy
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID d994c6fb-564e-4523-afe4-89804b993385
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:38:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:25.338 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d994c6fb-564e-4523-afe4-89804b993385', 'env', 'PROCESS_TAG=haproxy-d994c6fb-564e-4523-afe4-89804b993385', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d994c6fb-564e-4523-afe4-89804b993385.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.524 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804305.5236468, 6e3727ef-288f-4e26-8d29-f85423546391 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.525 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] VM Started (Lifecycle Event)#033[00m
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.571 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.581 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804305.5239272, 6e3727ef-288f-4e26-8d29-f85423546391 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.581 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.599 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.602 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:38:25 np0005532048 nova_compute[253661]: 2025-11-22 09:38:25.618 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:38:25 np0005532048 podman[377764]: 2025-11-22 09:38:25.78789394 +0000 UTC m=+0.073827898 container create 0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 04:38:25 np0005532048 systemd[1]: Started libpod-conmon-0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5.scope.
Nov 22 04:38:25 np0005532048 podman[377764]: 2025-11-22 09:38:25.745072194 +0000 UTC m=+0.031006142 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:38:25 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:38:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ce9d2c62dc97e3c042547d057ab8624862e781d05baaf78a7c8f0d1a0a1d0c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:38:25 np0005532048 podman[377764]: 2025-11-22 09:38:25.888038671 +0000 UTC m=+0.173972609 container init 0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 04:38:25 np0005532048 podman[377764]: 2025-11-22 09:38:25.895464795 +0000 UTC m=+0.181398713 container start 0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 04:38:25 np0005532048 neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385[377778]: [NOTICE]   (377783) : New worker (377785) forked
Nov 22 04:38:25 np0005532048 neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385[377778]: [NOTICE]   (377783) : Loading success.
Nov 22 04:38:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 371 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 5.7 MiB/s wr, 112 op/s
Nov 22 04:38:26 np0005532048 nova_compute[253661]: 2025-11-22 09:38:26.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 372 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1011 KiB/s rd, 5.7 MiB/s wr, 157 op/s
Nov 22 04:38:27 np0005532048 nova_compute[253661]: 2025-11-22 09:38:27.981 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:27.982 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:27.983 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:27.985 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:28 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:28Z|01243|memory|INFO|peak resident set size grew 53% in last 3294.1 seconds, from 15872 kB to 24268 kB
Nov 22 04:38:28 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:28Z|01244|memory|INFO|idl-cells-OVN_Southbound:10807 idl-cells-Open_vSwitch:1326 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:2 lflow-cache-entries-cache-expr:408 lflow-cache-entries-cache-matches:288 lflow-cache-size-KB:1726 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:738 ofctrl_installed_flow_usage-KB:538 ofctrl_sb_flow_ref_usage-KB:275
Nov 22 04:38:28 np0005532048 podman[377794]: 2025-11-22 09:38:28.470740188 +0000 UTC m=+0.149743515 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:38:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.038 253665 DEBUG nova.compute.manager [req-9f7b5fb9-2454-442a-a2a7-f537dcd529d3 req-75fcfaf4-47ec-4791-bf8f-511f98835934 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received event network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.038 253665 DEBUG oslo_concurrency.lockutils [req-9f7b5fb9-2454-442a-a2a7-f537dcd529d3 req-75fcfaf4-47ec-4791-bf8f-511f98835934 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6e3727ef-288f-4e26-8d29-f85423546391-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.039 253665 DEBUG oslo_concurrency.lockutils [req-9f7b5fb9-2454-442a-a2a7-f537dcd529d3 req-75fcfaf4-47ec-4791-bf8f-511f98835934 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.039 253665 DEBUG oslo_concurrency.lockutils [req-9f7b5fb9-2454-442a-a2a7-f537dcd529d3 req-75fcfaf4-47ec-4791-bf8f-511f98835934 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.039 253665 DEBUG nova.compute.manager [req-9f7b5fb9-2454-442a-a2a7-f537dcd529d3 req-75fcfaf4-47ec-4791-bf8f-511f98835934 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Processing event network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.040 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.044 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804309.0438735, 6e3727ef-288f-4e26-8d29-f85423546391 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.044 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.046 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.055 253665 INFO nova.virt.libvirt.driver [-] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Instance spawned successfully.#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.056 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.087 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.093 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.097 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.098 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.098 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.098 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.099 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.099 253665 DEBUG nova.virt.libvirt.driver [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.127 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.196 253665 INFO nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Took 10.85 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.198 253665 DEBUG nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.300 253665 INFO nova.compute.manager [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Took 12.95 seconds to build instance.#033[00m
Nov 22 04:38:29 np0005532048 nova_compute[253661]: 2025-11-22 09:38:29.318 253665 DEBUG oslo_concurrency.lockutils [None req-7eedf42d-3c29-4b8b-bf36-e7fdded4c644 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:29.339 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 372 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.6 MiB/s wr, 200 op/s
Nov 22 04:38:30 np0005532048 nova_compute[253661]: 2025-11-22 09:38:30.712 253665 DEBUG nova.compute.manager [req-27950ca1-85e4-40fc-ba2c-60b129b2d075 req-1903c4b3-af1c-4687-bdd6-e2fdcd6dedb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-changed-ff0231eb-335b-4acd-98c8-d655d887e97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:30 np0005532048 nova_compute[253661]: 2025-11-22 09:38:30.714 253665 DEBUG nova.compute.manager [req-27950ca1-85e4-40fc-ba2c-60b129b2d075 req-1903c4b3-af1c-4687-bdd6-e2fdcd6dedb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Refreshing instance network info cache due to event network-changed-ff0231eb-335b-4acd-98c8-d655d887e97a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:38:30 np0005532048 nova_compute[253661]: 2025-11-22 09:38:30.714 253665 DEBUG oslo_concurrency.lockutils [req-27950ca1-85e4-40fc-ba2c-60b129b2d075 req-1903c4b3-af1c-4687-bdd6-e2fdcd6dedb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:38:30 np0005532048 nova_compute[253661]: 2025-11-22 09:38:30.714 253665 DEBUG oslo_concurrency.lockutils [req-27950ca1-85e4-40fc-ba2c-60b129b2d075 req-1903c4b3-af1c-4687-bdd6-e2fdcd6dedb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:38:30 np0005532048 nova_compute[253661]: 2025-11-22 09:38:30.714 253665 DEBUG nova.network.neutron [req-27950ca1-85e4-40fc-ba2c-60b129b2d075 req-1903c4b3-af1c-4687-bdd6-e2fdcd6dedb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Refreshing network info cache for port ff0231eb-335b-4acd-98c8-d655d887e97a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:38:30 np0005532048 nova_compute[253661]: 2025-11-22 09:38:30.744 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:30 np0005532048 nova_compute[253661]: 2025-11-22 09:38:30.745 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:30 np0005532048 nova_compute[253661]: 2025-11-22 09:38:30.746 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:30 np0005532048 nova_compute[253661]: 2025-11-22 09:38:30.746 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:30 np0005532048 nova_compute[253661]: 2025-11-22 09:38:30.747 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:30 np0005532048 nova_compute[253661]: 2025-11-22 09:38:30.749 253665 INFO nova.compute.manager [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Terminating instance#033[00m
Nov 22 04:38:30 np0005532048 nova_compute[253661]: 2025-11-22 09:38:30.751 253665 DEBUG nova.compute.manager [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:38:30 np0005532048 kernel: tapff0231eb-33 (unregistering): left promiscuous mode
Nov 22 04:38:30 np0005532048 NetworkManager[48920]: <info>  [1763804310.8469] device (tapff0231eb-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:38:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:30Z|01245|binding|INFO|Releasing lport ff0231eb-335b-4acd-98c8-d655d887e97a from this chassis (sb_readonly=0)
Nov 22 04:38:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:30Z|01246|binding|INFO|Setting lport ff0231eb-335b-4acd-98c8-d655d887e97a down in Southbound
Nov 22 04:38:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:30Z|01247|binding|INFO|Removing iface tapff0231eb-33 ovn-installed in OVS
Nov 22 04:38:30 np0005532048 nova_compute[253661]: 2025-11-22 09:38:30.859 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:30 np0005532048 nova_compute[253661]: 2025-11-22 09:38:30.864 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:30 np0005532048 kernel: tapd7659b3e-35 (unregistering): left promiscuous mode
Nov 22 04:38:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:30.873 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:38:77 10.100.0.14'], port_security=['fa:16:3e:7e:38:77 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f16662c4-9b4f-4060-ac76-ebfb960dbb89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a935b0bb-9a00-49bb-8266-f3d0879d526c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=416fdb0b-60ab-41a3-b089-f86f3fe1761e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ff0231eb-335b-4acd-98c8-d655d887e97a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:38:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:30.876 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ff0231eb-335b-4acd-98c8-d655d887e97a in datapath a1a3f352-95a9-4122-aecd-94a4bbf79683 unbound from our chassis#033[00m
Nov 22 04:38:30 np0005532048 NetworkManager[48920]: <info>  [1763804310.8789] device (tapd7659b3e-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:38:30 np0005532048 nova_compute[253661]: 2025-11-22 09:38:30.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:30.886 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1a3f352-95a9-4122-aecd-94a4bbf79683#033[00m
Nov 22 04:38:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:30Z|01248|binding|INFO|Releasing lport d7659b3e-3579-403f-b319-ceb538d9c201 from this chassis (sb_readonly=0)
Nov 22 04:38:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:30Z|01249|binding|INFO|Setting lport d7659b3e-3579-403f-b319-ceb538d9c201 down in Southbound
Nov 22 04:38:30 np0005532048 nova_compute[253661]: 2025-11-22 09:38:30.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:30Z|01250|binding|INFO|Removing iface tapd7659b3e-35 ovn-installed in OVS
Nov 22 04:38:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:30.908 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:21:1f 2001:db8::f816:3eff:fe88:211f'], port_security=['fa:16:3e:88:21:1f 2001:db8::f816:3eff:fe88:211f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe88:211f/64', 'neutron:device_id': 'f16662c4-9b4f-4060-ac76-ebfb960dbb89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a935b0bb-9a00-49bb-8266-f3d0879d526c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f56771e6-e0a6-4947-ad39-6cb384a012bf, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=d7659b3e-3579-403f-b319-ceb538d9c201) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:38:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:30.914 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a162d837-15b6-4169-bac2-975ae10607c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:30 np0005532048 nova_compute[253661]: 2025-11-22 09:38:30.921 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:30 np0005532048 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000076.scope: Deactivated successfully.
Nov 22 04:38:30 np0005532048 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000076.scope: Consumed 15.077s CPU time.
Nov 22 04:38:30 np0005532048 systemd-machined[215941]: Machine qemu-149-instance-00000076 terminated.
Nov 22 04:38:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:30.958 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1bb07afe-1d49-4e66-94cd-d3ad5d8047c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:30.963 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[da8b3155-0de5-4361-99a0-8fb2647f3354]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:31 np0005532048 NetworkManager[48920]: <info>  [1763804311.0024] manager: (tapd7659b3e-35): new Tun device (/org/freedesktop/NetworkManager/Devices/516)
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.004 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.015 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a77ed8fc-3d38-488f-b880-fcf1da2a4f5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.017 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.027 253665 INFO nova.virt.libvirt.driver [-] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Instance destroyed successfully.#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.027 253665 DEBUG nova.objects.instance [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid f16662c4-9b4f-4060-ac76-ebfb960dbb89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.041 253665 DEBUG nova.virt.libvirt.vif [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:37:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1142729147',display_name='tempest-TestGettingAddress-server-1142729147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1142729147',id=118,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-2jla2sib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:07Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=f16662c4-9b4f-4060-ac76-ebfb960dbb89,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.043 253665 DEBUG nova.network.os_vif_util [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.041 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[561d0619-62e4-46ca-a95d-29351949e367]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1a3f352-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:dc:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707815, 'reachable_time': 26897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377855, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.044 253665 DEBUG nova.network.os_vif_util [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7e:38:77,bridge_name='br-int',has_traffic_filtering=True,id=ff0231eb-335b-4acd-98c8-d655d887e97a,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff0231eb-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.045 253665 DEBUG os_vif [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:38:77,bridge_name='br-int',has_traffic_filtering=True,id=ff0231eb-335b-4acd-98c8-d655d887e97a,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff0231eb-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.048 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.049 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff0231eb-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.051 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.054 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.056 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.061 253665 INFO os_vif [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:38:77,bridge_name='br-int',has_traffic_filtering=True,id=ff0231eb-335b-4acd-98c8-d655d887e97a,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff0231eb-33')#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.062 253665 DEBUG nova.virt.libvirt.vif [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:37:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1142729147',display_name='tempest-TestGettingAddress-server-1142729147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1142729147',id=118,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-2jla2sib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:07Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=f16662c4-9b4f-4060-ac76-ebfb960dbb89,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.062 253665 DEBUG nova.network.os_vif_util [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.063 253665 DEBUG nova.network.os_vif_util [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:21:1f,bridge_name='br-int',has_traffic_filtering=True,id=d7659b3e-3579-403f-b319-ceb538d9c201,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7659b3e-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.063 253665 DEBUG os_vif [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:21:1f,bridge_name='br-int',has_traffic_filtering=True,id=d7659b3e-3579-403f-b319-ceb538d9c201,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7659b3e-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.065 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.065 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7659b3e-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.068 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.068 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c93e00-e4c8-4ad4-82dd-483434847372]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa1a3f352-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707837, 'tstamp': 707837}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377856, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa1a3f352-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707841, 'tstamp': 707841}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377856, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.070 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a3f352-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.070 253665 INFO os_vif [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:21:1f,bridge_name='br-int',has_traffic_filtering=True,id=d7659b3e-3579-403f-b319-ceb538d9c201,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7659b3e-35')#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.072 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1a3f352-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.072 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.073 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1a3f352-90, col_values=(('external_ids', {'iface-id': '6e07e124-b404-4946-958f-042e8d633a40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.073 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.074 162862 INFO neutron.agent.ovn.metadata.agent [-] Port d7659b3e-3579-403f-b319-ceb538d9c201 in datapath c883e14c-ad7e-49eb-b0c3-2571140d1e57 unbound from our chassis#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.076 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c883e14c-ad7e-49eb-b0c3-2571140d1e57#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.089 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.097 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a23113cf-f221-4215-8804-37a0c26497a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.138 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.143 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb2996a-d7bf-4bfe-8d22-a7c89dbe4dfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.147 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[376f4a9f-7a40-41c8-8db1-529bfca029ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.194 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[caa32111-4c3e-410a-927d-1af0fbf6f418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.202 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.203 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.221 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cc448cd7-afa0-4123-b9cd-6d7ff27cf3f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc883e14c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d1:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707921, 'reachable_time': 43324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377881, 'error': None, 'target': 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.228 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.250 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4efdcf04-56d3-4b0c-a026-4cb6204fd3a9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc883e14c-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707938, 'tstamp': 707938}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377882, 'error': None, 'target': 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.252 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc883e14c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.255 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.256 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.257 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc883e14c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.258 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.258 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc883e14c-a0, col_values=(('external_ids', {'iface-id': '8cb4fbf8-c8a1-48f8-bf71-339312c7db31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:31.258 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.301 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.302 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.309 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.310 253665 INFO nova.compute.claims [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.522 253665 INFO nova.virt.libvirt.driver [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Deleting instance files /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89_del#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.523 253665 INFO nova.virt.libvirt.driver [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Deletion of /var/lib/nova/instances/f16662c4-9b4f-4060-ac76-ebfb960dbb89_del complete#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.559 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.612 253665 INFO nova.compute.manager [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.613 253665 DEBUG oslo.service.loopingcall [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.614 253665 DEBUG nova.compute.manager [-] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.614 253665 DEBUG nova.network.neutron [-] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.751 253665 DEBUG nova.compute.manager [req-02e95d8f-780e-468c-82cb-13bed846cb11 req-8adea8ac-6db6-4ae5-82ea-4b9e41c73783 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received event network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.751 253665 DEBUG oslo_concurrency.lockutils [req-02e95d8f-780e-468c-82cb-13bed846cb11 req-8adea8ac-6db6-4ae5-82ea-4b9e41c73783 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6e3727ef-288f-4e26-8d29-f85423546391-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.752 253665 DEBUG oslo_concurrency.lockutils [req-02e95d8f-780e-468c-82cb-13bed846cb11 req-8adea8ac-6db6-4ae5-82ea-4b9e41c73783 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.752 253665 DEBUG oslo_concurrency.lockutils [req-02e95d8f-780e-468c-82cb-13bed846cb11 req-8adea8ac-6db6-4ae5-82ea-4b9e41c73783 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.753 253665 DEBUG nova.compute.manager [req-02e95d8f-780e-468c-82cb-13bed846cb11 req-8adea8ac-6db6-4ae5-82ea-4b9e41c73783 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] No waiting events found dispatching network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:31 np0005532048 nova_compute[253661]: 2025-11-22 09:38:31.754 253665 WARNING nova.compute.manager [req-02e95d8f-780e-468c-82cb-13bed846cb11 req-8adea8ac-6db6-4ae5-82ea-4b9e41c73783 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received unexpected event network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:38:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 348 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 216 op/s
Nov 22 04:38:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:38:32 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3532818359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.048 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.060 253665 DEBUG nova.compute.provider_tree [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.073 253665 DEBUG nova.scheduler.client.report [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.089 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.090 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.130 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.131 253665 DEBUG nova.network.neutron [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.148 253665 INFO nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.171 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.267 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.269 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.270 253665 INFO nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Creating image(s)#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.299 253665 DEBUG nova.storage.rbd_utils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.328 253665 DEBUG nova.storage.rbd_utils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.357 253665 DEBUG nova.storage.rbd_utils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.364 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.426 253665 DEBUG nova.network.neutron [req-27950ca1-85e4-40fc-ba2c-60b129b2d075 req-1903c4b3-af1c-4687-bdd6-e2fdcd6dedb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updated VIF entry in instance network info cache for port ff0231eb-335b-4acd-98c8-d655d887e97a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.427 253665 DEBUG nova.network.neutron [req-27950ca1-85e4-40fc-ba2c-60b129b2d075 req-1903c4b3-af1c-4687-bdd6-e2fdcd6dedb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updating instance_info_cache with network_info: [{"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d7659b3e-3579-403f-b319-ceb538d9c201", "address": "fa:16:3e:88:21:1f", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe88:211f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7659b3e-35", "ovs_interfaceid": "d7659b3e-3579-403f-b319-ceb538d9c201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.431 253665 DEBUG nova.policy [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.453 253665 DEBUG oslo_concurrency.lockutils [req-27950ca1-85e4-40fc-ba2c-60b129b2d075 req-1903c4b3-af1c-4687-bdd6-e2fdcd6dedb3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-f16662c4-9b4f-4060-ac76-ebfb960dbb89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.482 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.483 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.483 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.484 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.518 253665 DEBUG nova.storage.rbd_utils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.525 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.709 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "6e3727ef-288f-4e26-8d29-f85423546391" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.710 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.711 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "6e3727ef-288f-4e26-8d29-f85423546391-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.711 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.711 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.721 253665 INFO nova.compute.manager [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Terminating instance#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.723 253665 DEBUG nova.compute.manager [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.842 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-unplugged-ff0231eb-335b-4acd-98c8-d655d887e97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.843 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.843 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.844 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.844 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] No waiting events found dispatching network-vif-unplugged-ff0231eb-335b-4acd-98c8-d655d887e97a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.844 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-unplugged-ff0231eb-335b-4acd-98c8-d655d887e97a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.844 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.845 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.845 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.845 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.845 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] No waiting events found dispatching network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.845 253665 WARNING nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received unexpected event network-vif-plugged-ff0231eb-335b-4acd-98c8-d655d887e97a for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.845 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-unplugged-d7659b3e-3579-403f-b319-ceb538d9c201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.846 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.846 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.846 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.846 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] No waiting events found dispatching network-vif-unplugged-d7659b3e-3579-403f-b319-ceb538d9c201 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.846 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-unplugged-d7659b3e-3579-403f-b319-ceb538d9c201 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.846 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.846 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.847 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.847 253665 DEBUG oslo_concurrency.lockutils [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.847 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] No waiting events found dispatching network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.847 253665 WARNING nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received unexpected event network-vif-plugged-d7659b3e-3579-403f-b319-ceb538d9c201 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.847 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-deleted-d7659b3e-3579-403f-b319-ceb538d9c201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.847 253665 INFO nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Neutron deleted interface d7659b3e-3579-403f-b319-ceb538d9c201; detaching it from the instance and deleting it from the info cache#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.847 253665 DEBUG nova.network.neutron [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updating instance_info_cache with network_info: [{"id": "ff0231eb-335b-4acd-98c8-d655d887e97a", "address": "fa:16:3e:7e:38:77", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff0231eb-33", "ovs_interfaceid": "ff0231eb-335b-4acd-98c8-d655d887e97a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:32 np0005532048 kernel: tapa28c191e-c7 (unregistering): left promiscuous mode
Nov 22 04:38:32 np0005532048 NetworkManager[48920]: <info>  [1763804312.8677] device (tapa28c191e-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:38:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:32Z|01251|binding|INFO|Releasing lport a28c191e-c725-404b-a4cb-e5b89c914f67 from this chassis (sb_readonly=0)
Nov 22 04:38:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:32Z|01252|binding|INFO|Setting lport a28c191e-c725-404b-a4cb-e5b89c914f67 down in Southbound
Nov 22 04:38:32 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:32Z|01253|binding|INFO|Removing iface tapa28c191e-c7 ovn-installed in OVS
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.880 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:32.886 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:6e:9b 10.100.0.3'], port_security=['fa:16:3e:13:6e:9b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6e3727ef-288f-4e26-8d29-f85423546391', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d994c6fb-564e-4523-afe4-89804b993385', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84cc8edaaa54443997ac9f33f8fab7ce', 'neutron:revision_number': '4', 'neutron:security_group_ids': '621b389d-2096-4ee1-8e3b-c5cb3466897b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=56f26a9d-1a5c-40a1-8f03-488332bb450e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a28c191e-c725-404b-a4cb-e5b89c914f67) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:38:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:32.888 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a28c191e-c725-404b-a4cb-e5b89c914f67 in datapath d994c6fb-564e-4523-afe4-89804b993385 unbound from our chassis#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.889 253665 DEBUG nova.compute.manager [req-9a73bf2d-b60a-4b08-abca-ffa7802314e3 req-a105a24c-b77d-46af-8ff0-010c28471ece 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Detach interface failed, port_id=d7659b3e-3579-403f-b319-ceb538d9c201, reason: Instance f16662c4-9b4f-4060-ac76-ebfb960dbb89 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 22 04:38:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:32.890 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d994c6fb-564e-4523-afe4-89804b993385, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:38:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:32.891 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d69e81-4f3e-42c3-955e-f60a41be8a46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:32.892 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d994c6fb-564e-4523-afe4-89804b993385 namespace which is not needed anymore#033[00m
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.907 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:32 np0005532048 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000078.scope: Deactivated successfully.
Nov 22 04:38:32 np0005532048 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000078.scope: Consumed 4.229s CPU time.
Nov 22 04:38:32 np0005532048 systemd-machined[215941]: Machine qemu-151-instance-00000078 terminated.
Nov 22 04:38:32 np0005532048 nova_compute[253661]: 2025-11-22 09:38:32.934 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.028 253665 INFO nova.virt.libvirt.driver [-] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Instance destroyed successfully.#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.029 253665 DEBUG nova.objects.instance [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lazy-loading 'resources' on Instance uuid 6e3727ef-288f-4e26-8d29-f85423546391 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.036 253665 DEBUG nova.storage.rbd_utils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:38:33 np0005532048 neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385[377778]: [NOTICE]   (377783) : haproxy version is 2.8.14-c23fe91
Nov 22 04:38:33 np0005532048 neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385[377778]: [NOTICE]   (377783) : path to executable is /usr/sbin/haproxy
Nov 22 04:38:33 np0005532048 neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385[377778]: [WARNING]  (377783) : Exiting Master process...
Nov 22 04:38:33 np0005532048 neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385[377778]: [ALERT]    (377783) : Current worker (377785) exited with code 143 (Terminated)
Nov 22 04:38:33 np0005532048 neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385[377778]: [WARNING]  (377783) : All workers exited. Exiting... (0)
Nov 22 04:38:33 np0005532048 systemd[1]: libpod-0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5.scope: Deactivated successfully.
Nov 22 04:38:33 np0005532048 podman[378050]: 2025-11-22 09:38:33.069076802 +0000 UTC m=+0.049652025 container died 0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.078 253665 DEBUG nova.virt.libvirt.vif [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-771105155',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-771105155',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-771105155',id=120,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84cc8edaaa54443997ac9f33f8fab7ce',ramdisk_id='',reservation_id='r-uce6hdys',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-495917723',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-495917723-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:29Z,user_data=None,user_id='58f15faf9ac94307a17022836fe74e23',uuid=6e3727ef-288f-4e26-8d29-f85423546391,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.079 253665 DEBUG nova.network.os_vif_util [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Converting VIF {"id": "a28c191e-c725-404b-a4cb-e5b89c914f67", "address": "fa:16:3e:13:6e:9b", "network": {"id": "d994c6fb-564e-4523-afe4-89804b993385", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1108681350-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84cc8edaaa54443997ac9f33f8fab7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa28c191e-c7", "ovs_interfaceid": "a28c191e-c725-404b-a4cb-e5b89c914f67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.080 253665 DEBUG nova.network.os_vif_util [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:6e:9b,bridge_name='br-int',has_traffic_filtering=True,id=a28c191e-c725-404b-a4cb-e5b89c914f67,network=Network(d994c6fb-564e-4523-afe4-89804b993385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa28c191e-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.081 253665 DEBUG os_vif [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:6e:9b,bridge_name='br-int',has_traffic_filtering=True,id=a28c191e-c725-404b-a4cb-e5b89c914f67,network=Network(d994c6fb-564e-4523-afe4-89804b993385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa28c191e-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.085 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.086 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa28c191e-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.088 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.091 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.095 253665 INFO os_vif [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:6e:9b,bridge_name='br-int',has_traffic_filtering=True,id=a28c191e-c725-404b-a4cb-e5b89c914f67,network=Network(d994c6fb-564e-4523-afe4-89804b993385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa28c191e-c7')#033[00m
Nov 22 04:38:33 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5-userdata-shm.mount: Deactivated successfully.
Nov 22 04:38:33 np0005532048 systemd[1]: var-lib-containers-storage-overlay-59ce9d2c62dc97e3c042547d057ab8624862e781d05baaf78a7c8f0d1a0a1d0c-merged.mount: Deactivated successfully.
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.123 253665 DEBUG nova.network.neutron [-] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:33 np0005532048 podman[378050]: 2025-11-22 09:38:33.124793358 +0000 UTC m=+0.105368601 container cleanup 0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 22 04:38:33 np0005532048 systemd[1]: libpod-conmon-0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5.scope: Deactivated successfully.
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.144 253665 INFO nova.compute.manager [-] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Took 1.53 seconds to deallocate network for instance.#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.192 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.193 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.201 253665 DEBUG nova.objects.instance [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:38:33 np0005532048 podman[378129]: 2025-11-22 09:38:33.212408718 +0000 UTC m=+0.053914423 container remove 0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.213 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.213 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Ensure instance console log exists: /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.214 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.214 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.214 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.220 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[086645fb-c4ea-48d8-98e2-f66f527d5fb6]: (4, ('Sat Nov 22 09:38:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385 (0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5)\n0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5\nSat Nov 22 09:38:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d994c6fb-564e-4523-afe4-89804b993385 (0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5)\n0fdcb2410f443794e9b64079e9f3197163a77cb975a057d62b97aab6603c9ad5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.223 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e401cb84-0360-4df3-b3cc-cdf1fbc206f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.226 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd994c6fb-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:33 np0005532048 kernel: tapd994c6fb-50: left promiscuous mode
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.233 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.247 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.255 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3b013c22-f891-467a-a441-94dfc61ba373]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.274 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9087a841-4e62-44b0-80ec-664445b14f59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.276 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9b909443-d2fe-453b-8f7c-044cc7d79a9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.295 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ac859420-1ebd-46af-beb2-dbc7c9b51ef8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715572, 'reachable_time': 24910, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378165, 'error': None, 'target': 'ovnmeta-d994c6fb-564e-4523-afe4-89804b993385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:33 np0005532048 systemd[1]: run-netns-ovnmeta\x2dd994c6fb\x2d564e\x2d4523\x2dafe4\x2d89804b993385.mount: Deactivated successfully.
Nov 22 04:38:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.299 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d994c6fb-564e-4523-afe4-89804b993385 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:38:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:33.299 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[fac5e1ea-9f3d-4d84-99d3-17c2fc09895d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.336 253665 DEBUG oslo_concurrency.processutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.444 253665 DEBUG nova.network.neutron [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Successfully created port: 12ab8505-5ae2-427c-aaf6-9431683a99c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.602 253665 INFO nova.virt.libvirt.driver [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Deleting instance files /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391_del#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.603 253665 INFO nova.virt.libvirt.driver [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Deletion of /var/lib/nova/instances/6e3727ef-288f-4e26-8d29-f85423546391_del complete#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.653 253665 INFO nova.compute.manager [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.653 253665 DEBUG oslo.service.loopingcall [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.654 253665 DEBUG nova.compute.manager [-] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.654 253665 DEBUG nova.network.neutron [-] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:38:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:38:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:38:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2807006791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.838 253665 DEBUG oslo_concurrency.processutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.845 253665 DEBUG nova.compute.provider_tree [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.859 253665 DEBUG nova.compute.manager [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-changed-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.860 253665 DEBUG nova.compute.manager [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Refreshing instance network info cache due to event network-changed-761d949a-b334-4144-be7a-5f02c905c715. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.860 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.860 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.860 253665 DEBUG nova.network.neutron [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Refreshing network info cache for port 761d949a-b334-4144-be7a-5f02c905c715 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.863 253665 DEBUG nova.scheduler.client.report [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.888 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.924 253665 INFO nova.scheduler.client.report [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance f16662c4-9b4f-4060-ac76-ebfb960dbb89#033[00m
Nov 22 04:38:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 292 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.2 MiB/s wr, 271 op/s
Nov 22 04:38:33 np0005532048 nova_compute[253661]: 2025-11-22 09:38:33.999 253665 DEBUG oslo_concurrency.lockutils [None req-038a19ab-45b9-4556-85cb-9cbe73ffd042 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "f16662c4-9b4f-4060-ac76-ebfb960dbb89" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.390720) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804314390786, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2110, "num_deletes": 253, "total_data_size": 3275166, "memory_usage": 3321744, "flush_reason": "Manual Compaction"}
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804314416618, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3196116, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45301, "largest_seqno": 47410, "table_properties": {"data_size": 3186732, "index_size": 5814, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20186, "raw_average_key_size": 20, "raw_value_size": 3167563, "raw_average_value_size": 3215, "num_data_blocks": 257, "num_entries": 985, "num_filter_entries": 985, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763804113, "oldest_key_time": 1763804113, "file_creation_time": 1763804314, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 25966 microseconds, and 9362 cpu microseconds.
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.416686) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3196116 bytes OK
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.416718) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.419632) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.419689) EVENT_LOG_v1 {"time_micros": 1763804314419675, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.419733) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3266239, prev total WAL file size 3266239, number of live WAL files 2.
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.421024) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3121KB)], [104(8768KB)]
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804314421088, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 12174890, "oldest_snapshot_seqno": -1}
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7122 keys, 10511573 bytes, temperature: kUnknown
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804314515125, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10511573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10462303, "index_size": 30342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17861, "raw_key_size": 183560, "raw_average_key_size": 25, "raw_value_size": 10333346, "raw_average_value_size": 1450, "num_data_blocks": 1191, "num_entries": 7122, "num_filter_entries": 7122, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763804314, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.515502) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10511573 bytes
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.516929) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.3 rd, 111.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 8.6 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 7644, records dropped: 522 output_compression: NoCompression
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.516957) EVENT_LOG_v1 {"time_micros": 1763804314516944, "job": 62, "event": "compaction_finished", "compaction_time_micros": 94136, "compaction_time_cpu_micros": 28926, "output_level": 6, "num_output_files": 1, "total_output_size": 10511573, "num_input_records": 7644, "num_output_records": 7122, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804314518175, "job": 62, "event": "table_file_deletion", "file_number": 106}
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804314520960, "job": 62, "event": "table_file_deletion", "file_number": 104}
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.420871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.521138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.521146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.521148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.521150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:38:34 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:34.521151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:38:34 np0005532048 nova_compute[253661]: 2025-11-22 09:38:34.521 253665 DEBUG nova.network.neutron [-] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:34 np0005532048 nova_compute[253661]: 2025-11-22 09:38:34.536 253665 INFO nova.compute.manager [-] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Took 0.88 seconds to deallocate network for instance.#033[00m
Nov 22 04:38:34 np0005532048 nova_compute[253661]: 2025-11-22 09:38:34.581 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:34 np0005532048 nova_compute[253661]: 2025-11-22 09:38:34.582 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:34 np0005532048 nova_compute[253661]: 2025-11-22 09:38:34.698 253665 DEBUG oslo_concurrency.processutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:34 np0005532048 nova_compute[253661]: 2025-11-22 09:38:34.923 253665 DEBUG nova.network.neutron [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Successfully updated port: 12ab8505-5ae2-427c-aaf6-9431683a99c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:38:34 np0005532048 nova_compute[253661]: 2025-11-22 09:38:34.929 253665 DEBUG nova.compute.manager [req-9477974e-3184-474b-8d7a-33264c0977c4 req-36f5b0fe-3e5b-4c15-ab2d-d45157f71976 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Received event network-vif-deleted-ff0231eb-335b-4acd-98c8-d655d887e97a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:34 np0005532048 nova_compute[253661]: 2025-11-22 09:38:34.929 253665 DEBUG nova.compute.manager [req-9477974e-3184-474b-8d7a-33264c0977c4 req-36f5b0fe-3e5b-4c15-ab2d-d45157f71976 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received event network-vif-deleted-a28c191e-c725-404b-a4cb-e5b89c914f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:34 np0005532048 nova_compute[253661]: 2025-11-22 09:38:34.939 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:38:34 np0005532048 nova_compute[253661]: 2025-11-22 09:38:34.939 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:38:34 np0005532048 nova_compute[253661]: 2025-11-22 09:38:34.939 253665 DEBUG nova.network.neutron [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.094 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.094 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.094 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.095 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.095 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.096 253665 INFO nova.compute.manager [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Terminating instance#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.097 253665 DEBUG nova.compute.manager [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.133 253665 DEBUG nova.network.neutron [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:38:35 np0005532048 kernel: tap18df29f5-36 (unregistering): left promiscuous mode
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.157 253665 DEBUG nova.network.neutron [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Updated VIF entry in instance network info cache for port 761d949a-b334-4144-be7a-5f02c905c715. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.158 253665 DEBUG nova.network.neutron [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Updating instance_info_cache with network_info: [{"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:35 np0005532048 NetworkManager[48920]: <info>  [1763804315.1679] device (tap18df29f5-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:38:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:35Z|01254|binding|INFO|Releasing lport 18df29f5-368d-4b94-ac69-8541de164d02 from this chassis (sb_readonly=0)
Nov 22 04:38:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:35Z|01255|binding|INFO|Setting lport 18df29f5-368d-4b94-ac69-8541de164d02 down in Southbound
Nov 22 04:38:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:35Z|01256|binding|INFO|Removing iface tap18df29f5-36 ovn-installed in OVS
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.182 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.187 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:34:a1 10.100.0.7'], port_security=['fa:16:3e:90:34:a1 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2837c740-6ce1-47d5-ad27-107211f74db7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a935b0bb-9a00-49bb-8266-f3d0879d526c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=416fdb0b-60ab-41a3-b089-f86f3fe1761e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=18df29f5-368d-4b94-ac69-8541de164d02) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.189 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 18df29f5-368d-4b94-ac69-8541de164d02 in datapath a1a3f352-95a9-4122-aecd-94a4bbf79683 unbound from our chassis#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.191 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a1a3f352-95a9-4122-aecd-94a4bbf79683, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.191 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.192 253665 DEBUG nova.compute.manager [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received event network-vif-unplugged-a28c191e-c725-404b-a4cb-e5b89c914f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.192 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6e3727ef-288f-4e26-8d29-f85423546391-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.192 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.193 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.192 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[809a063a-c949-46f4-8de7-b4dabd5c5091]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.193 253665 DEBUG nova.compute.manager [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] No waiting events found dispatching network-vif-unplugged-a28c191e-c725-404b-a4cb-e5b89c914f67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.193 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683 namespace which is not needed anymore#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.193 253665 DEBUG nova.compute.manager [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received event network-vif-unplugged-a28c191e-c725-404b-a4cb-e5b89c914f67 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.193 253665 DEBUG nova.compute.manager [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received event network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.193 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6e3727ef-288f-4e26-8d29-f85423546391-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.193 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.194 253665 DEBUG oslo_concurrency.lockutils [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.194 253665 DEBUG nova.compute.manager [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] No waiting events found dispatching network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.194 253665 WARNING nova.compute.manager [req-31ebb1cd-980d-4347-b821-6bdf6c70cbaf req-be6c41ce-984f-4ace-a1e2-3168eccdf35e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Received unexpected event network-vif-plugged-a28c191e-c725-404b-a4cb-e5b89c914f67 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.195 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:38:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980003339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.203 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:35 np0005532048 kernel: tapa8c9a54f-9f (unregistering): left promiscuous mode
Nov 22 04:38:35 np0005532048 NetworkManager[48920]: <info>  [1763804315.2218] device (tapa8c9a54f-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.225 253665 DEBUG oslo_concurrency.processutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:35Z|01257|binding|INFO|Releasing lport a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de from this chassis (sb_readonly=0)
Nov 22 04:38:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:35Z|01258|binding|INFO|Setting lport a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de down in Southbound
Nov 22 04:38:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:35Z|01259|binding|INFO|Removing iface tapa8c9a54f-9f ovn-installed in OVS
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.236 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.242 253665 DEBUG nova.compute.provider_tree [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.259 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.284 253665 DEBUG nova.scheduler.client.report [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.286 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:fd:83 2001:db8::f816:3eff:fe9d:fd83'], port_security=['fa:16:3e:9d:fd:83 2001:db8::f816:3eff:fe9d:fd83'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9d:fd83/64', 'neutron:device_id': '2837c740-6ce1-47d5-ad27-107211f74db7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a935b0bb-9a00-49bb-8266-f3d0879d526c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f56771e6-e0a6-4947-ad39-6cb384a012bf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:38:35 np0005532048 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000073.scope: Deactivated successfully.
Nov 22 04:38:35 np0005532048 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000073.scope: Consumed 17.731s CPU time.
Nov 22 04:38:35 np0005532048 systemd-machined[215941]: Machine qemu-144-instance-00000073 terminated.
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.306 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.329 253665 INFO nova.scheduler.client.report [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Deleted allocations for instance 6e3727ef-288f-4e26-8d29-f85423546391#033[00m
Nov 22 04:38:35 np0005532048 NetworkManager[48920]: <info>  [1763804315.3360] manager: (tapa8c9a54f-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/517)
Nov 22 04:38:35 np0005532048 neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683[373425]: [NOTICE]   (373429) : haproxy version is 2.8.14-c23fe91
Nov 22 04:38:35 np0005532048 neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683[373425]: [NOTICE]   (373429) : path to executable is /usr/sbin/haproxy
Nov 22 04:38:35 np0005532048 neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683[373425]: [WARNING]  (373429) : Exiting Master process...
Nov 22 04:38:35 np0005532048 neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683[373425]: [ALERT]    (373429) : Current worker (373431) exited with code 143 (Terminated)
Nov 22 04:38:35 np0005532048 neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683[373425]: [WARNING]  (373429) : All workers exited. Exiting... (0)
Nov 22 04:38:35 np0005532048 systemd[1]: libpod-aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd.scope: Deactivated successfully.
Nov 22 04:38:35 np0005532048 podman[378236]: 2025-11-22 09:38:35.350928759 +0000 UTC m=+0.057112421 container died aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.363 253665 INFO nova.virt.libvirt.driver [-] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Instance destroyed successfully.#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.364 253665 DEBUG nova.objects.instance [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 2837c740-6ce1-47d5-ad27-107211f74db7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:38:35 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd-userdata-shm.mount: Deactivated successfully.
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.383 253665 DEBUG nova.virt.libvirt.vif [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:36:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1413808402',display_name='tempest-TestGettingAddress-server-1413808402',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1413808402',id=115,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:37:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rou3pok7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:37:09Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2837c740-6ce1-47d5-ad27-107211f74db7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.385 253665 DEBUG nova.network.os_vif_util [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "18df29f5-368d-4b94-ac69-8541de164d02", "address": "fa:16:3e:90:34:a1", "network": {"id": "a1a3f352-95a9-4122-aecd-94a4bbf79683", "bridge": "br-int", "label": "tempest-network-smoke--899294932", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df29f5-36", "ovs_interfaceid": "18df29f5-368d-4b94-ac69-8541de164d02", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.385 253665 DEBUG nova.network.os_vif_util [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=18df29f5-368d-4b94-ac69-8541de164d02,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df29f5-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:38:35 np0005532048 systemd[1]: var-lib-containers-storage-overlay-871bb839a73d656755f89d39a1db17cd9b58ff25f5a2a0710db15a0e02acd3f2-merged.mount: Deactivated successfully.
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.386 253665 DEBUG os_vif [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=18df29f5-368d-4b94-ac69-8541de164d02,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df29f5-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.389 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18df29f5-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.398 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.403 253665 DEBUG oslo_concurrency.lockutils [None req-ab50ee5b-e03e-4251-b077-b22c9a67d8fe 58f15faf9ac94307a17022836fe74e23 84cc8edaaa54443997ac9f33f8fab7ce - - default default] Lock "6e3727ef-288f-4e26-8d29-f85423546391" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:35 np0005532048 podman[378236]: 2025-11-22 09:38:35.404228455 +0000 UTC m=+0.110412117 container cleanup aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.404 253665 INFO os_vif [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=18df29f5-368d-4b94-ac69-8541de164d02,network=Network(a1a3f352-95a9-4122-aecd-94a4bbf79683),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df29f5-36')#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.405 253665 DEBUG nova.virt.libvirt.vif [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:36:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1413808402',display_name='tempest-TestGettingAddress-server-1413808402',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1413808402',id=115,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAX84BlGpIG8eAxx9m/R2wpehhpvVCQOjeB1Ciib/kXLYPZDVCF+S84upLfjZ4ZNiqFZ3D4h5QwUbj0Cm7KiWEJDTx2fmvpQx32ZLW6ym0oCs47H6Hx/yhUvN7A9UetyOA==',key_name='tempest-TestGettingAddress-1115401783',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:37:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rou3pok7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:37:09Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=2837c740-6ce1-47d5-ad27-107211f74db7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.406 253665 DEBUG nova.network.os_vif_util [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.407 253665 DEBUG nova.network.os_vif_util [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:fd:83,bridge_name='br-int',has_traffic_filtering=True,id=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c9a54f-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.407 253665 DEBUG os_vif [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:fd:83,bridge_name='br-int',has_traffic_filtering=True,id=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c9a54f-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.409 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.410 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8c9a54f-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.413 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.416 253665 INFO os_vif [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:fd:83,bridge_name='br-int',has_traffic_filtering=True,id=a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de,network=Network(c883e14c-ad7e-49eb-b0c3-2571140d1e57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c9a54f-9f')#033[00m
Nov 22 04:38:35 np0005532048 systemd[1]: libpod-conmon-aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd.scope: Deactivated successfully.
Nov 22 04:38:35 np0005532048 podman[378284]: 2025-11-22 09:38:35.517346338 +0000 UTC m=+0.080772480 container remove aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.526 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[337e7e91-7034-4faa-9dde-73e6f04ca691]: (4, ('Sat Nov 22 09:38:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683 (aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd)\naecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd\nSat Nov 22 09:38:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683 (aecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd)\naecd3bc211e6079988450f10e4ff5467da3ee307ab8363d6374d76a7a2834bdd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.528 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c8733e1b-6189-42d4-9630-8599dcc9555d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.529 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1a3f352-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.532 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:35 np0005532048 kernel: tapa1a3f352-90: left promiscuous mode
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.553 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2f9d2307-edcf-463c-adc6-017955e2f729]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.569 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[697b809a-1128-4484-9b01-809161c59350]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.571 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[31221561-68fc-4869-9d07-0469c56e2f68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.591 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4e55965a-87dd-4334-bbc0-cc5a029dcafe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707806, 'reachable_time': 26109, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378317, 'error': None, 'target': 'ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:35 np0005532048 systemd[1]: run-netns-ovnmeta\x2da1a3f352\x2d95a9\x2d4122\x2daecd\x2d94a4bbf79683.mount: Deactivated successfully.
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.595 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a1a3f352-95a9-4122-aecd-94a4bbf79683 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.596 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea8ccc0-b61b-4eb1-97cd-6f7414668902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.596 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de in datapath c883e14c-ad7e-49eb-b0c3-2571140d1e57 unbound from our chassis#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.598 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c883e14c-ad7e-49eb-b0c3-2571140d1e57, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.599 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[937a0986-32d4-4aa7-9d2d-26a0f8648012]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.600 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57 namespace which is not needed anymore#033[00m
Nov 22 04:38:35 np0005532048 neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57[373552]: [NOTICE]   (373557) : haproxy version is 2.8.14-c23fe91
Nov 22 04:38:35 np0005532048 neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57[373552]: [NOTICE]   (373557) : path to executable is /usr/sbin/haproxy
Nov 22 04:38:35 np0005532048 neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57[373552]: [WARNING]  (373557) : Exiting Master process...
Nov 22 04:38:35 np0005532048 neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57[373552]: [ALERT]    (373557) : Current worker (373559) exited with code 143 (Terminated)
Nov 22 04:38:35 np0005532048 neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57[373552]: [WARNING]  (373557) : All workers exited. Exiting... (0)
Nov 22 04:38:35 np0005532048 systemd[1]: libpod-e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868.scope: Deactivated successfully.
Nov 22 04:38:35 np0005532048 conmon[373552]: conmon e8905d359dd14150034f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868.scope/container/memory.events
Nov 22 04:38:35 np0005532048 podman[378335]: 2025-11-22 09:38:35.751295236 +0000 UTC m=+0.048108197 container died e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:38:35 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868-userdata-shm.mount: Deactivated successfully.
Nov 22 04:38:35 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a8abda532727a5eec38dc91090bcebb9f55c2e54221da7f33b18490782350886-merged.mount: Deactivated successfully.
Nov 22 04:38:35 np0005532048 podman[378335]: 2025-11-22 09:38:35.872366017 +0000 UTC m=+0.169178978 container cleanup e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 04:38:35 np0005532048 systemd[1]: libpod-conmon-e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868.scope: Deactivated successfully.
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.946 253665 DEBUG nova.compute.manager [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-changed-12ab8505-5ae2-427c-aaf6-9431683a99c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.947 253665 DEBUG nova.compute.manager [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing instance network info cache due to event network-changed-12ab8505-5ae2-427c-aaf6-9431683a99c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.948 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:38:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 292 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 213 op/s
Nov 22 04:38:35 np0005532048 podman[378366]: 2025-11-22 09:38:35.978373013 +0000 UTC m=+0.079107198 container remove e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.986 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[59be89f3-5750-47fe-8b77-16d13201659d]: (4, ('Sat Nov 22 09:38:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57 (e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868)\ne8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868\nSat Nov 22 09:38:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57 (e8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868)\ne8905d359dd14150034fcaa26ee85f2443263ae8cd5f4187a543a3a0d1f06868\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.988 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd9b9ce-7eac-4009-823d-719c8313d508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:35.989 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc883e14c-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:35 np0005532048 nova_compute[253661]: 2025-11-22 09:38:35.991 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:35 np0005532048 kernel: tapc883e14c-a0: left promiscuous mode
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.009 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:36.015 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eb61b2a3-f419-4500-b76f-c5eb4bcca2a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:36.032 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6242b209-6442-4663-854d-1836ef5e1845]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:36.034 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4220ef93-25f3-49ec-a3e8-cfc462bc51d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:36.055 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[08bd7ff2-930d-45d0-b6bf-d01711b2bedd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707912, 'reachable_time': 35242, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378382, 'error': None, 'target': 'ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:36.058 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c883e14c-ad7e-49eb-b0c3-2571140d1e57 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:38:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:36.058 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[00206ffc-dd3c-480a-be98-db92cf0792fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.140 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.247 253665 INFO nova.virt.libvirt.driver [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Deleting instance files /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7_del#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.248 253665 INFO nova.virt.libvirt.driver [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Deletion of /var/lib/nova/instances/2837c740-6ce1-47d5-ad27-107211f74db7_del complete#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.346 253665 INFO nova.compute.manager [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Took 1.25 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.347 253665 DEBUG oslo.service.loopingcall [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.347 253665 DEBUG nova.compute.manager [-] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.347 253665 DEBUG nova.network.neutron [-] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:38:36 np0005532048 systemd[1]: run-netns-ovnmeta\x2dc883e14c\x2dad7e\x2d49eb\x2db0c3\x2d2571140d1e57.mount: Deactivated successfully.
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.610 253665 DEBUG nova.network.neutron [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.628 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.628 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Instance network_info: |[{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.629 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.629 253665 DEBUG nova.network.neutron [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing network info cache for port 12ab8505-5ae2-427c-aaf6-9431683a99c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.632 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Start _get_guest_xml network_info=[{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.640 253665 WARNING nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.649 253665 DEBUG nova.virt.libvirt.host [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.650 253665 DEBUG nova.virt.libvirt.host [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.655 253665 DEBUG nova.virt.libvirt.host [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.656 253665 DEBUG nova.virt.libvirt.host [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.656 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.656 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.657 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.657 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.657 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.657 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.657 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.658 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.658 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.658 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.658 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.658 253665 DEBUG nova.virt.hardware [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:38:36 np0005532048 nova_compute[253661]: 2025-11-22 09:38:36.661 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.011 253665 DEBUG nova.compute.manager [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-unplugged-18df29f5-368d-4b94-ac69-8541de164d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.013 253665 DEBUG oslo_concurrency.lockutils [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.014 253665 DEBUG oslo_concurrency.lockutils [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.014 253665 DEBUG oslo_concurrency.lockutils [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.015 253665 DEBUG nova.compute.manager [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] No waiting events found dispatching network-vif-unplugged-18df29f5-368d-4b94-ac69-8541de164d02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.015 253665 DEBUG nova.compute.manager [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-unplugged-18df29f5-368d-4b94-ac69-8541de164d02 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.015 253665 DEBUG nova.compute.manager [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.015 253665 DEBUG oslo_concurrency.lockutils [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.015 253665 DEBUG oslo_concurrency.lockutils [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.015 253665 DEBUG oslo_concurrency.lockutils [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.016 253665 DEBUG nova.compute.manager [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] No waiting events found dispatching network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.016 253665 WARNING nova.compute.manager [req-278e4606-5c27-439d-81fe-64b58c96181a req-e54b8afb-6172-4dee-b451-9e062945cefa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received unexpected event network-vif-plugged-18df29f5-368d-4b94-ac69-8541de164d02 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:38:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:38:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3152459162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:38:37 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.155 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.182 253665 DEBUG nova.storage.rbd_utils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.187 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.243 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.243 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.301 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.302 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.304 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:38:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:38:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2518556906' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.692 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.694 253665 DEBUG nova.virt.libvirt.vif [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:38:32Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.694 253665 DEBUG nova.network.os_vif_util [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.695 253665 DEBUG nova.network.os_vif_util [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:a0:d3,bridge_name='br-int',has_traffic_filtering=True,id=12ab8505-5ae2-427c-aaf6-9431683a99c8,network=Network(80502cee-0a40-4541-8461-41de74f7266c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ab8505-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.697 253665 DEBUG nova.objects.instance [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.711 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  <uuid>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</uuid>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  <name>instance-00000079</name>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestNetworkBasicOps-server-985491122</nova:name>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:38:36</nova:creationTime>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:38:37 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:        <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:        <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:        <nova:port uuid="12ab8505-5ae2-427c-aaf6-9431683a99c8">
Nov 22 04:38:37 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <entry name="serial">9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <entry name="uuid">9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk">
Nov 22 04:38:37 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:38:37 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config">
Nov 22 04:38:37 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:38:37 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:30:a0:d3"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <target dev="tap12ab8505-5a"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log" append="off"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:38:37 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:38:37 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:38:37 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:38:37 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.712 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Preparing to wait for external event network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.713 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.713 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.713 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.714 253665 DEBUG nova.virt.libvirt.vif [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:38:32Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.714 253665 DEBUG nova.network.os_vif_util [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.715 253665 DEBUG nova.network.os_vif_util [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:a0:d3,bridge_name='br-int',has_traffic_filtering=True,id=12ab8505-5ae2-427c-aaf6-9431683a99c8,network=Network(80502cee-0a40-4541-8461-41de74f7266c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ab8505-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.715 253665 DEBUG os_vif [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:a0:d3,bridge_name='br-int',has_traffic_filtering=True,id=12ab8505-5ae2-427c-aaf6-9431683a99c8,network=Network(80502cee-0a40-4541-8461-41de74f7266c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ab8505-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.716 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.717 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.717 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.720 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.720 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12ab8505-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.721 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap12ab8505-5a, col_values=(('external_ids', {'iface-id': '12ab8505-5ae2-427c-aaf6-9431683a99c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:a0:d3', 'vm-uuid': '9c45a555-9969-4d8a-bd3b-1ab61ce6f68c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.722 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:37 np0005532048 NetworkManager[48920]: <info>  [1763804317.7236] manager: (tap12ab8505-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/518)
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.725 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.729 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.730 253665 INFO os_vif [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:a0:d3,bridge_name='br-int',has_traffic_filtering=True,id=12ab8505-5ae2-427c-aaf6-9431683a99c8,network=Network(80502cee-0a40-4541-8461-41de74f7266c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ab8505-5a')#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.793 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.793 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.793 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:30:a0:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.794 253665 INFO nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Using config drive#033[00m
Nov 22 04:38:37 np0005532048 nova_compute[253661]: 2025-11-22 09:38:37.815 253665 DEBUG nova.storage.rbd_utils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 252 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.1 MiB/s wr, 271 op/s
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.038 253665 DEBUG nova.compute.manager [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.039 253665 DEBUG oslo_concurrency.lockutils [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.039 253665 DEBUG oslo_concurrency.lockutils [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.039 253665 DEBUG oslo_concurrency.lockutils [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.040 253665 DEBUG nova.compute.manager [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] No waiting events found dispatching network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.040 253665 WARNING nova.compute.manager [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received unexpected event network-vif-plugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.040 253665 DEBUG nova.compute.manager [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-deleted-18df29f5-368d-4b94-ac69-8541de164d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.040 253665 INFO nova.compute.manager [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Neutron deleted interface 18df29f5-368d-4b94-ac69-8541de164d02; detaching it from the instance and deleting it from the info cache#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.040 253665 DEBUG nova.network.neutron [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updating instance_info_cache with network_info: [{"id": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "address": "fa:16:3e:9d:fd:83", "network": {"id": "c883e14c-ad7e-49eb-b0c3-2571140d1e57", "bridge": "br-int", "label": "tempest-network-smoke--178732271", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:fd83", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c9a54f-9f", "ovs_interfaceid": "a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.069 253665 DEBUG nova.compute.manager [req-747b46bd-417c-422c-a1e6-a60fe860814b req-6baf3a94-ed40-45b8-af85-cd1d4f9b022c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Detach interface failed, port_id=18df29f5-368d-4b94-ac69-8541de164d02, reason: Instance 2837c740-6ce1-47d5-ad27-107211f74db7 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.097 253665 DEBUG nova.network.neutron [-] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.113 253665 INFO nova.compute.manager [-] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Took 1.77 seconds to deallocate network for instance.#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.168 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.168 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.284 253665 DEBUG oslo_concurrency.processutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.332 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:38:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:38Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:85:a8:14 10.100.0.8
Nov 22 04:38:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:38Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:a8:14 10.100.0.8
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.341 253665 INFO nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Creating config drive at /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/disk.config#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.348 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9t_sogkw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.505 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9t_sogkw" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.586 253665 DEBUG nova.storage.rbd_utils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.592 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/disk.config 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.637 253665 DEBUG nova.network.neutron [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updated VIF entry in instance network info cache for port 12ab8505-5ae2-427c-aaf6-9431683a99c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.639 253665 DEBUG nova.network.neutron [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.660 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.662 253665 DEBUG nova.compute.manager [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-changed-18df29f5-368d-4b94-ac69-8541de164d02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.662 253665 DEBUG nova.compute.manager [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Refreshing instance network info cache due to event network-changed-18df29f5-368d-4b94-ac69-8541de164d02. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.663 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.663 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.663 253665 DEBUG nova.network.neutron [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Refreshing network info cache for port 18df29f5-368d-4b94-ac69-8541de164d02 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:38:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:38:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4160455238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:38:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.768 253665 DEBUG oslo_concurrency.processutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.774 253665 DEBUG nova.compute.provider_tree [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.788 253665 DEBUG nova.scheduler.client.report [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.806 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.830 253665 DEBUG oslo_concurrency.processutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/disk.config 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.831 253665 INFO nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Deleting local config drive /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/disk.config because it was imported into RBD.#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.839 253665 INFO nova.scheduler.client.report [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 2837c740-6ce1-47d5-ad27-107211f74db7#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.847 253665 DEBUG nova.network.neutron [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:38:38 np0005532048 kernel: tap12ab8505-5a: entered promiscuous mode
Nov 22 04:38:38 np0005532048 NetworkManager[48920]: <info>  [1763804318.8889] manager: (tap12ab8505-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/519)
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.890 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:38Z|01260|binding|INFO|Claiming lport 12ab8505-5ae2-427c-aaf6-9431683a99c8 for this chassis.
Nov 22 04:38:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:38Z|01261|binding|INFO|12ab8505-5ae2-427c-aaf6-9431683a99c8: Claiming fa:16:3e:30:a0:d3 10.100.0.3
Nov 22 04:38:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.901 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:a0:d3 10.100.0.3'], port_security=['fa:16:3e:30:a0:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9c45a555-9969-4d8a-bd3b-1ab61ce6f68c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80502cee-0a40-4541-8461-41de74f7266c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c64167e3-035b-4f1b-bee4-b85857c701f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a1bcb3a6-b65a-4848-8c3b-e1169d9ae614, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=12ab8505-5ae2-427c-aaf6-9431683a99c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:38:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.903 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 12ab8505-5ae2-427c-aaf6-9431683a99c8 in datapath 80502cee-0a40-4541-8461-41de74f7266c bound to our chassis#033[00m
Nov 22 04:38:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.905 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80502cee-0a40-4541-8461-41de74f7266c#033[00m
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.908 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:38Z|01262|binding|INFO|Setting lport 12ab8505-5ae2-427c-aaf6-9431683a99c8 ovn-installed in OVS
Nov 22 04:38:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:38Z|01263|binding|INFO|Setting lport 12ab8505-5ae2-427c-aaf6-9431683a99c8 up in Southbound
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.911 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.923 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0df37f99-2615-4ef7-829d-5783790201d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.924 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap80502cee-01 in ovnmeta-80502cee-0a40-4541-8461-41de74f7266c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:38:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.927 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap80502cee-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:38:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.927 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f8bdc32b-2429-4ac5-9999-c1dcce2ed884]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.929 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bafe0886-c9a9-4542-99f8-e96ca2850d5a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:38 np0005532048 systemd-machined[215941]: New machine qemu-152-instance-00000079.
Nov 22 04:38:38 np0005532048 systemd-udevd[378542]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:38:38 np0005532048 nova_compute[253661]: 2025-11-22 09:38:38.933 253665 DEBUG oslo_concurrency.lockutils [None req-0c9d4aa6-a85f-4370-9f7d-c274a46eda6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.943 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[0a60bcc4-ee81-4191-9c52-d6e51cb0711c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:38 np0005532048 systemd[1]: Started Virtual Machine qemu-152-instance-00000079.
Nov 22 04:38:38 np0005532048 NetworkManager[48920]: <info>  [1763804318.9541] device (tap12ab8505-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:38:38 np0005532048 NetworkManager[48920]: <info>  [1763804318.9562] device (tap12ab8505-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:38:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:38.973 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[11332b59-df49-4738-8473-5dfac97326fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.015 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f2527b8b-3a67-4744-a4ef-6fe973a01997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:39 np0005532048 systemd-udevd[378546]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.023 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a2ddee-0074-44c0-b289-ad373c466fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:39 np0005532048 NetworkManager[48920]: <info>  [1763804319.0244] manager: (tap80502cee-00): new Veth device (/org/freedesktop/NetworkManager/Devices/520)
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.061 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e6407ea4-5d61-4427-8965-76ffe7199a59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.064 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc6f44c-59e5-4353-847e-ccff14c64ac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:39 np0005532048 NetworkManager[48920]: <info>  [1763804319.0936] device (tap80502cee-00): carrier: link connected
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.103 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8a035b0b-0e85-400c-96f6-30a3635e21b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.123 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8bad26af-7fe9-4fad-918b-5b25ca43829a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80502cee-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:2a:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 367], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716983, 'reachable_time': 31024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378574, 'error': None, 'target': 'ovnmeta-80502cee-0a40-4541-8461-41de74f7266c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.141 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0413c694-f8a1-48ba-a490-9f7e97396c51]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:2a77'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 716983, 'tstamp': 716983}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378575, 'error': None, 'target': 'ovnmeta-80502cee-0a40-4541-8461-41de74f7266c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.162 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d4cadfda-edec-453e-862c-befb48df79a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80502cee-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:2a:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 367], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716983, 'reachable_time': 31024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378576, 'error': None, 'target': 'ovnmeta-80502cee-0a40-4541-8461-41de74f7266c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.202 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf4a3cc-5857-4eb7-ba31-0345ea864fe1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.232 253665 DEBUG nova.network.neutron [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.245 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-2837c740-6ce1-47d5-ad27-107211f74db7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.246 253665 DEBUG nova.compute.manager [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-unplugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.246 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.246 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.247 253665 DEBUG oslo_concurrency.lockutils [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "2837c740-6ce1-47d5-ad27-107211f74db7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.247 253665 DEBUG nova.compute.manager [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] No waiting events found dispatching network-vif-unplugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.247 253665 DEBUG nova.compute.manager [req-5485c3a3-09ed-4de6-bb22-203fa95078be req-e0f8c760-4448-4a67-88cb-2864e9bf977b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-unplugged-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.277 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6d3c2d-ebc0-4937-b71f-5ba6c3f0af4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.280 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80502cee-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.280 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.281 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80502cee-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.283 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:39 np0005532048 kernel: tap80502cee-00: entered promiscuous mode
Nov 22 04:38:39 np0005532048 NetworkManager[48920]: <info>  [1763804319.2840] manager: (tap80502cee-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/521)
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.286 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.287 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80502cee-00, col_values=(('external_ids', {'iface-id': 'e81a7283-b7a8-4fa9-8cc9-183f5a17ea6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.288 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:39Z|01264|binding|INFO|Releasing lport e81a7283-b7a8-4fa9-8cc9-183f5a17ea6c from this chassis (sb_readonly=0)
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.305 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.306 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/80502cee-0a40-4541-8461-41de74f7266c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/80502cee-0a40-4541-8461-41de74f7266c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.308 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[984f8d1f-d79d-4561-b318-785505f12384]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.309 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-80502cee-0a40-4541-8461-41de74f7266c
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/80502cee-0a40-4541-8461-41de74f7266c.pid.haproxy
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 80502cee-0a40-4541-8461-41de74f7266c
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:38:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:39.309 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-80502cee-0a40-4541-8461-41de74f7266c', 'env', 'PROCESS_TAG=haproxy-80502cee-0a40-4541-8461-41de74f7266c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/80502cee-0a40-4541-8461-41de74f7266c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.511 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804319.5109382, 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.522 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] VM Started (Lifecycle Event)#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.539 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.547 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804319.512336, 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.548 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.563 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.566 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.587 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:38:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:39Z|01265|binding|INFO|Releasing lport 9e57ed14-a93d-454a-9d37-00035fb43663 from this chassis (sb_readonly=0)
Nov 22 04:38:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:39Z|01266|binding|INFO|Releasing lport eea1332c-6e32-4e52-a7c7-645bf860d501 from this chassis (sb_readonly=0)
Nov 22 04:38:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:39Z|01267|binding|INFO|Releasing lport e81a7283-b7a8-4fa9-8cc9-183f5a17ea6c from this chassis (sb_readonly=0)
Nov 22 04:38:39 np0005532048 nova_compute[253661]: 2025-11-22 09:38:39.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:39 np0005532048 podman[378650]: 2025-11-22 09:38:39.76456182 +0000 UTC m=+0.077436186 container create 7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:38:39 np0005532048 podman[378650]: 2025-11-22 09:38:39.720225728 +0000 UTC m=+0.033100114 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:38:39 np0005532048 systemd[1]: Started libpod-conmon-7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f.scope.
Nov 22 04:38:39 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:38:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f37bc7976ef071fba5e665fbf02212daac86ba36a65621a352596852c57db43/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:38:39 np0005532048 podman[378650]: 2025-11-22 09:38:39.883915609 +0000 UTC m=+0.196790005 container init 7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:38:39 np0005532048 podman[378650]: 2025-11-22 09:38:39.891435946 +0000 UTC m=+0.204310312 container start 7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 04:38:39 np0005532048 neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c[378665]: [NOTICE]   (378669) : New worker (378671) forked
Nov 22 04:38:39 np0005532048 neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c[378665]: [NOTICE]   (378669) : Loading success.
Nov 22 04:38:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 242 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 283 op/s
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.191 253665 DEBUG nova.compute.manager [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Received event network-vif-deleted-a8c9a54f-9f3c-4d5c-98eb-bfb38a7946de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.192 253665 DEBUG nova.compute.manager [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.192 253665 DEBUG oslo_concurrency.lockutils [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.193 253665 DEBUG oslo_concurrency.lockutils [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.193 253665 DEBUG oslo_concurrency.lockutils [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.194 253665 DEBUG nova.compute.manager [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Processing event network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.194 253665 DEBUG nova.compute.manager [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.195 253665 DEBUG oslo_concurrency.lockutils [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.195 253665 DEBUG oslo_concurrency.lockutils [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.196 253665 DEBUG oslo_concurrency.lockutils [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.196 253665 DEBUG nova.compute.manager [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] No waiting events found dispatching network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.197 253665 WARNING nova.compute.manager [req-f27f9e9b-b163-49e6-95c8-ec8c0c2e868f req-80f8b0dd-0623-4ed5-898b-0b05cedf6722 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received unexpected event network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.198 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.204 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804320.2041552, 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.205 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.208 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.215 253665 INFO nova.virt.libvirt.driver [-] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Instance spawned successfully.#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.216 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.240 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.247 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.248 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.248 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.248 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.249 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.249 253665 DEBUG nova.virt.libvirt.driver [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.252 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.292 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.315 253665 INFO nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Took 8.05 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.316 253665 DEBUG nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.379 253665 INFO nova.compute.manager [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Took 9.11 seconds to build instance.#033[00m
Nov 22 04:38:40 np0005532048 nova_compute[253661]: 2025-11-22 09:38:40.394 253665 DEBUG oslo_concurrency.lockutils [None req-44102a0c-d5b1-4f4f-baee-e6ca45eee5b5 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:41 np0005532048 nova_compute[253661]: 2025-11-22 09:38:41.143 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:41 np0005532048 nova_compute[253661]: 2025-11-22 09:38:41.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:38:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 263 op/s
Nov 22 04:38:42 np0005532048 nova_compute[253661]: 2025-11-22 09:38:42.722 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:43 np0005532048 nova_compute[253661]: 2025-11-22 09:38:43.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:38:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:43Z|01268|binding|INFO|Releasing lport 9e57ed14-a93d-454a-9d37-00035fb43663 from this chassis (sb_readonly=0)
Nov 22 04:38:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:43Z|01269|binding|INFO|Releasing lport eea1332c-6e32-4e52-a7c7-645bf860d501 from this chassis (sb_readonly=0)
Nov 22 04:38:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:43Z|01270|binding|INFO|Releasing lport e81a7283-b7a8-4fa9-8cc9-183f5a17ea6c from this chassis (sb_readonly=0)
Nov 22 04:38:43 np0005532048 nova_compute[253661]: 2025-11-22 09:38:43.698 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:38:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 282 op/s
Nov 22 04:38:44 np0005532048 nova_compute[253661]: 2025-11-22 09:38:44.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:38:44 np0005532048 nova_compute[253661]: 2025-11-22 09:38:44.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:38:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:44Z|01271|binding|INFO|Releasing lport 9e57ed14-a93d-454a-9d37-00035fb43663 from this chassis (sb_readonly=0)
Nov 22 04:38:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:44Z|01272|binding|INFO|Releasing lport eea1332c-6e32-4e52-a7c7-645bf860d501 from this chassis (sb_readonly=0)
Nov 22 04:38:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:44Z|01273|binding|INFO|Releasing lport e81a7283-b7a8-4fa9-8cc9-183f5a17ea6c from this chassis (sb_readonly=0)
Nov 22 04:38:44 np0005532048 nova_compute[253661]: 2025-11-22 09:38:44.471 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:44 np0005532048 nova_compute[253661]: 2025-11-22 09:38:44.630 253665 DEBUG nova.compute.manager [req-00d81485-f68e-43d2-aa11-91354754789c req-6974cc3c-092a-4eaf-973d-355a727fa95d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-changed-12ab8505-5ae2-427c-aaf6-9431683a99c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:44 np0005532048 nova_compute[253661]: 2025-11-22 09:38:44.630 253665 DEBUG nova.compute.manager [req-00d81485-f68e-43d2-aa11-91354754789c req-6974cc3c-092a-4eaf-973d-355a727fa95d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing instance network info cache due to event network-changed-12ab8505-5ae2-427c-aaf6-9431683a99c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:38:44 np0005532048 nova_compute[253661]: 2025-11-22 09:38:44.631 253665 DEBUG oslo_concurrency.lockutils [req-00d81485-f68e-43d2-aa11-91354754789c req-6974cc3c-092a-4eaf-973d-355a727fa95d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:38:44 np0005532048 nova_compute[253661]: 2025-11-22 09:38:44.631 253665 DEBUG oslo_concurrency.lockutils [req-00d81485-f68e-43d2-aa11-91354754789c req-6974cc3c-092a-4eaf-973d-355a727fa95d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:38:44 np0005532048 nova_compute[253661]: 2025-11-22 09:38:44.631 253665 DEBUG nova.network.neutron [req-00d81485-f68e-43d2-aa11-91354754789c req-6974cc3c-092a-4eaf-973d-355a727fa95d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing network info cache for port 12ab8505-5ae2-427c-aaf6-9431683a99c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:38:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 191 op/s
Nov 22 04:38:46 np0005532048 nova_compute[253661]: 2025-11-22 09:38:46.019 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804311.0169735, f16662c4-9b4f-4060-ac76-ebfb960dbb89 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:38:46 np0005532048 nova_compute[253661]: 2025-11-22 09:38:46.020 253665 INFO nova.compute.manager [-] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:38:46 np0005532048 nova_compute[253661]: 2025-11-22 09:38:46.042 253665 DEBUG nova.compute.manager [None req-1547c055-69e9-402c-b169-05a7611c7efa - - - - - -] [instance: f16662c4-9b4f-4060-ac76-ebfb960dbb89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:46 np0005532048 nova_compute[253661]: 2025-11-22 09:38:46.146 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:46 np0005532048 nova_compute[253661]: 2025-11-22 09:38:46.200 253665 DEBUG nova.network.neutron [req-00d81485-f68e-43d2-aa11-91354754789c req-6974cc3c-092a-4eaf-973d-355a727fa95d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updated VIF entry in instance network info cache for port 12ab8505-5ae2-427c-aaf6-9431683a99c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:38:46 np0005532048 nova_compute[253661]: 2025-11-22 09:38:46.201 253665 DEBUG nova.network.neutron [req-00d81485-f68e-43d2-aa11-91354754789c req-6974cc3c-092a-4eaf-973d-355a727fa95d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:46 np0005532048 nova_compute[253661]: 2025-11-22 09:38:46.220 253665 DEBUG oslo_concurrency.lockutils [req-00d81485-f68e-43d2-aa11-91354754789c req-6974cc3c-092a-4eaf-973d-355a727fa95d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:38:46 np0005532048 nova_compute[253661]: 2025-11-22 09:38:46.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:38:46 np0005532048 nova_compute[253661]: 2025-11-22 09:38:46.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:46 np0005532048 nova_compute[253661]: 2025-11-22 09:38:46.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:46 np0005532048 nova_compute[253661]: 2025-11-22 09:38:46.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:46 np0005532048 nova_compute[253661]: 2025-11-22 09:38:46.257 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:38:46 np0005532048 nova_compute[253661]: 2025-11-22 09:38:46.258 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:38:46 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4084621905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:38:46 np0005532048 nova_compute[253661]: 2025-11-22 09:38:46.986 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.729s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.066 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.067 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.071 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.072 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.075 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.076 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.298 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.302 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3122MB free_disk=59.87643051147461GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.302 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.303 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.520 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d2f5b215-3a41-451c-8ad8-68b17c96a678 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.521 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.521 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.521 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.521 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.704 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.755 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 191 op/s
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.986 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804312.9588223, 6e3727ef-288f-4e26-8d29-f85423546391 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:38:47 np0005532048 nova_compute[253661]: 2025-11-22 09:38:47.987 253665 INFO nova.compute.manager [-] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:38:48 np0005532048 nova_compute[253661]: 2025-11-22 09:38:48.002 253665 DEBUG nova.compute.manager [None req-00163750-ba67-49c2-a919-07ae60c89bcc - - - - - -] [instance: 6e3727ef-288f-4e26-8d29-f85423546391] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:38:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3797788429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:38:48 np0005532048 nova_compute[253661]: 2025-11-22 09:38:48.206 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:48 np0005532048 nova_compute[253661]: 2025-11-22 09:38:48.213 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:38:48 np0005532048 nova_compute[253661]: 2025-11-22 09:38:48.228 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:38:48 np0005532048 nova_compute[253661]: 2025-11-22 09:38:48.247 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:38:48 np0005532048 nova_compute[253661]: 2025-11-22 09:38:48.247 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:38:49 np0005532048 nova_compute[253661]: 2025-11-22 09:38:49.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:38:49 np0005532048 nova_compute[253661]: 2025-11-22 09:38:49.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:38:49 np0005532048 nova_compute[253661]: 2025-11-22 09:38:49.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 22 04:38:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 909 KiB/s wr, 134 op/s
Nov 22 04:38:50 np0005532048 nova_compute[253661]: 2025-11-22 09:38:50.244 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:38:50 np0005532048 nova_compute[253661]: 2025-11-22 09:38:50.353 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804315.3516412, 2837c740-6ce1-47d5-ad27-107211f74db7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:38:50 np0005532048 nova_compute[253661]: 2025-11-22 09:38:50.353 253665 INFO nova.compute.manager [-] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:38:50 np0005532048 nova_compute[253661]: 2025-11-22 09:38:50.376 253665 DEBUG nova.compute.manager [None req-41fabec6-82e4-4d3f-b70c-3655ff8db9fb - - - - - -] [instance: 2837c740-6ce1-47d5-ad27-107211f74db7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:38:51 np0005532048 nova_compute[253661]: 2025-11-22 09:38:51.150 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 38 KiB/s wr, 77 op/s
Nov 22 04:38:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:38:52
Nov 22 04:38:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:38:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:38:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['vms', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 22 04:38:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:38:52 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 22 04:38:52 np0005532048 podman[378725]: 2025-11-22 09:38:52.383805162 +0000 UTC m=+0.077183241 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 04:38:52 np0005532048 podman[378726]: 2025-11-22 09:38:52.397548854 +0000 UTC m=+0.089706972 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:38:52 np0005532048 nova_compute[253661]: 2025-11-22 09:38:52.482 253665 DEBUG nova.compute.manager [req-489a3753-bbc1-48b3-b569-dd63b3b5bbac req-85915e04-a5e3-4ba7-adf1-439c1ef0dc32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-changed-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:52 np0005532048 nova_compute[253661]: 2025-11-22 09:38:52.482 253665 DEBUG nova.compute.manager [req-489a3753-bbc1-48b3-b569-dd63b3b5bbac req-85915e04-a5e3-4ba7-adf1-439c1ef0dc32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Refreshing instance network info cache due to event network-changed-761d949a-b334-4144-be7a-5f02c905c715. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:38:52 np0005532048 nova_compute[253661]: 2025-11-22 09:38:52.483 253665 DEBUG oslo_concurrency.lockutils [req-489a3753-bbc1-48b3-b569-dd63b3b5bbac req-85915e04-a5e3-4ba7-adf1-439c1ef0dc32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:38:52 np0005532048 nova_compute[253661]: 2025-11-22 09:38:52.483 253665 DEBUG oslo_concurrency.lockutils [req-489a3753-bbc1-48b3-b569-dd63b3b5bbac req-85915e04-a5e3-4ba7-adf1-439c1ef0dc32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:38:52 np0005532048 nova_compute[253661]: 2025-11-22 09:38:52.483 253665 DEBUG nova.network.neutron [req-489a3753-bbc1-48b3-b569-dd63b3b5bbac req-85915e04-a5e3-4ba7-adf1-439c1ef0dc32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Refreshing network info cache for port 761d949a-b334-4144-be7a-5f02c905c715 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:38:52 np0005532048 nova_compute[253661]: 2025-11-22 09:38:52.605 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:52 np0005532048 nova_compute[253661]: 2025-11-22 09:38:52.606 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:52 np0005532048 nova_compute[253661]: 2025-11-22 09:38:52.606 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:52 np0005532048 nova_compute[253661]: 2025-11-22 09:38:52.606 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:52 np0005532048 nova_compute[253661]: 2025-11-22 09:38:52.606 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:52 np0005532048 nova_compute[253661]: 2025-11-22 09:38:52.608 253665 INFO nova.compute.manager [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Terminating instance#033[00m
Nov 22 04:38:52 np0005532048 nova_compute[253661]: 2025-11-22 09:38:52.609 253665 DEBUG nova.compute.manager [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:38:52 np0005532048 nova_compute[253661]: 2025-11-22 09:38:52.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:38:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:38:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:38:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:38:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:38:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:38:52 np0005532048 nova_compute[253661]: 2025-11-22 09:38:52.758 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:52 np0005532048 kernel: tap761d949a-b3 (unregistering): left promiscuous mode
Nov 22 04:38:52 np0005532048 NetworkManager[48920]: <info>  [1763804332.8345] device (tap761d949a-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:38:52 np0005532048 nova_compute[253661]: 2025-11-22 09:38:52.843 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:52Z|01274|binding|INFO|Releasing lport 761d949a-b334-4144-be7a-5f02c905c715 from this chassis (sb_readonly=0)
Nov 22 04:38:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:52Z|01275|binding|INFO|Setting lport 761d949a-b334-4144-be7a-5f02c905c715 down in Southbound
Nov 22 04:38:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:52Z|01276|binding|INFO|Removing iface tap761d949a-b3 ovn-installed in OVS
Nov 22 04:38:52 np0005532048 nova_compute[253661]: 2025-11-22 09:38:52.846 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:52.851 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:a8:14 10.100.0.8'], port_security=['fa:16:3e:85:a8:14 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd1cc6b07-57c8-46b4-abbb-e0a366b6c2c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9ddb669b6144eee90dc043099e8df8c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6befe33f-63d2-41aa-b574-8eb9b323c484 8fecaa1a-36f4-4ef4-bac2-46e5b8b5f461', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1cb9034-c4c3-45e7-9e31-5c5d3f434f14, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=761d949a-b334-4144-be7a-5f02c905c715) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:38:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:52.852 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 761d949a-b334-4144-be7a-5f02c905c715 in datapath a8c9b48b-687a-480f-aff5-bd1fee4c2bbd unbound from our chassis#033[00m
Nov 22 04:38:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:52.854 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:38:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:52.856 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1908c454-5d3e-42d3-8356-17d36aa0a193]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:52.856 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd namespace which is not needed anymore#033[00m
Nov 22 04:38:52 np0005532048 nova_compute[253661]: 2025-11-22 09:38:52.863 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:52 np0005532048 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000077.scope: Deactivated successfully.
Nov 22 04:38:52 np0005532048 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000077.scope: Consumed 14.491s CPU time.
Nov 22 04:38:52 np0005532048 systemd-machined[215941]: Machine qemu-150-instance-00000077 terminated.
Nov 22 04:38:53 np0005532048 neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd[377602]: [NOTICE]   (377606) : haproxy version is 2.8.14-c23fe91
Nov 22 04:38:53 np0005532048 neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd[377602]: [NOTICE]   (377606) : path to executable is /usr/sbin/haproxy
Nov 22 04:38:53 np0005532048 neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd[377602]: [WARNING]  (377606) : Exiting Master process...
Nov 22 04:38:53 np0005532048 neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd[377602]: [ALERT]    (377606) : Current worker (377608) exited with code 143 (Terminated)
Nov 22 04:38:53 np0005532048 neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd[377602]: [WARNING]  (377606) : All workers exited. Exiting... (0)
Nov 22 04:38:53 np0005532048 systemd[1]: libpod-2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e.scope: Deactivated successfully.
Nov 22 04:38:53 np0005532048 kernel: tap761d949a-b3: entered promiscuous mode
Nov 22 04:38:53 np0005532048 kernel: tap761d949a-b3 (unregistering): left promiscuous mode
Nov 22 04:38:53 np0005532048 podman[378786]: 2025-11-22 09:38:53.035933259 +0000 UTC m=+0.070047793 container died 2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 04:38:53 np0005532048 NetworkManager[48920]: <info>  [1763804333.0399] manager: (tap761d949a-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/522)
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.039 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:53Z|01277|binding|INFO|Claiming lport 761d949a-b334-4144-be7a-5f02c905c715 for this chassis.
Nov 22 04:38:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:53Z|01278|binding|INFO|761d949a-b334-4144-be7a-5f02c905c715: Claiming fa:16:3e:85:a8:14 10.100.0.8
Nov 22 04:38:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.052 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:a8:14 10.100.0.8'], port_security=['fa:16:3e:85:a8:14 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd1cc6b07-57c8-46b4-abbb-e0a366b6c2c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9ddb669b6144eee90dc043099e8df8c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6befe33f-63d2-41aa-b574-8eb9b323c484 8fecaa1a-36f4-4ef4-bac2-46e5b8b5f461', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1cb9034-c4c3-45e7-9e31-5c5d3f434f14, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=761d949a-b334-4144-be7a-5f02c905c715) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.059 253665 INFO nova.virt.libvirt.driver [-] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Instance destroyed successfully.#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.059 253665 DEBUG nova.objects.instance [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lazy-loading 'resources' on Instance uuid d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:38:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:53Z|01279|binding|INFO|Setting lport 761d949a-b334-4144-be7a-5f02c905c715 ovn-installed in OVS
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.066 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:53Z|01280|binding|INFO|Setting lport 761d949a-b334-4144-be7a-5f02c905c715 up in Southbound
Nov 22 04:38:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:53Z|01281|binding|INFO|Releasing lport 761d949a-b334-4144-be7a-5f02c905c715 from this chassis (sb_readonly=1)
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.068 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:53Z|01282|if_status|INFO|Dropped 2 log messages in last 235 seconds (most recently, 235 seconds ago) due to excessive rate
Nov 22 04:38:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:53Z|01283|if_status|INFO|Not setting lport 761d949a-b334-4144-be7a-5f02c905c715 down as sb is readonly
Nov 22 04:38:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:53Z|01284|binding|INFO|Removing iface tap761d949a-b3 ovn-installed in OVS
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.071 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:53Z|01285|binding|INFO|Releasing lport 761d949a-b334-4144-be7a-5f02c905c715 from this chassis (sb_readonly=0)
Nov 22 04:38:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:53Z|01286|binding|INFO|Setting lport 761d949a-b334-4144-be7a-5f02c905c715 down in Southbound
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.073 253665 DEBUG nova.virt.libvirt.vif [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2070464237-access_point-937459641',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2070464237-access_point-937459641',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2070464237-ac',id=119,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNtSLVn2f2AjktFMVEQRNrPDPgiu6XGcAVHoUX9ErUANDAfx8scLKesh39J38uCHme4Kr1WaGaUgPEF++ZKW4JdZA91CWGfVEKx+uaYRX1tqW4xZuiIvDOiFoDeabW/cjQ==',key_name='tempest-TestSecurityGroupsBasicOps-580779993',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9ddb669b6144eee90dc043099e8df8c',ramdisk_id='',reservation_id='r-b9fzu52l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-2070464237',owner_user_name='tempest-TestSecurityGroupsBasicOps-2070464237-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:23Z,user_data=None,user_id='24fbabe00a26461eaa9027f7105ae97c',uuid=d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.074 253665 DEBUG nova.network.os_vif_util [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Converting VIF {"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.075 253665 DEBUG nova.network.os_vif_util [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:a8:14,bridge_name='br-int',has_traffic_filtering=True,id=761d949a-b334-4144-be7a-5f02c905c715,network=Network(a8c9b48b-687a-480f-aff5-bd1fee4c2bbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap761d949a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.075 253665 DEBUG os_vif [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:a8:14,bridge_name='br-int',has_traffic_filtering=True,id=761d949a-b334-4144-be7a-5f02c905c715,network=Network(a8c9b48b-687a-480f-aff5-bd1fee4c2bbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap761d949a-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.078 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.078 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap761d949a-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.080 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:a8:14 10.100.0.8'], port_security=['fa:16:3e:85:a8:14 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd1cc6b07-57c8-46b4-abbb-e0a366b6c2c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9ddb669b6144eee90dc043099e8df8c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6befe33f-63d2-41aa-b574-8eb9b323c484 8fecaa1a-36f4-4ef4-bac2-46e5b8b5f461', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1cb9034-c4c3-45e7-9e31-5c5d3f434f14, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=761d949a-b334-4144-be7a-5f02c905c715) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:38:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e-userdata-shm.mount: Deactivated successfully.
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.085 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:38:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay-58bc45d9f28847928b45ee55af7005c19133177176f15c3c95d23db67f15d5f1-merged.mount: Deactivated successfully.
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.090 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.093 253665 INFO os_vif [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:a8:14,bridge_name='br-int',has_traffic_filtering=True,id=761d949a-b334-4144-be7a-5f02c905c715,network=Network(a8c9b48b-687a-480f-aff5-bd1fee4c2bbd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap761d949a-b3')#033[00m
Nov 22 04:38:53 np0005532048 podman[378786]: 2025-11-22 09:38:53.108405732 +0000 UTC m=+0.142520246 container cleanup 2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 04:38:53 np0005532048 systemd[1]: libpod-conmon-2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e.scope: Deactivated successfully.
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.121 253665 DEBUG nova.compute.manager [req-fc593efd-627c-4e0b-9927-137a3afcbf53 req-8c8e2ceb-28d7-4172-9b37-5835737d98f5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-unplugged-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.121 253665 DEBUG oslo_concurrency.lockutils [req-fc593efd-627c-4e0b-9927-137a3afcbf53 req-8c8e2ceb-28d7-4172-9b37-5835737d98f5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.122 253665 DEBUG oslo_concurrency.lockutils [req-fc593efd-627c-4e0b-9927-137a3afcbf53 req-8c8e2ceb-28d7-4172-9b37-5835737d98f5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.122 253665 DEBUG oslo_concurrency.lockutils [req-fc593efd-627c-4e0b-9927-137a3afcbf53 req-8c8e2ceb-28d7-4172-9b37-5835737d98f5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.122 253665 DEBUG nova.compute.manager [req-fc593efd-627c-4e0b-9927-137a3afcbf53 req-8c8e2ceb-28d7-4172-9b37-5835737d98f5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] No waiting events found dispatching network-vif-unplugged-761d949a-b334-4144-be7a-5f02c905c715 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.122 253665 DEBUG nova.compute.manager [req-fc593efd-627c-4e0b-9927-137a3afcbf53 req-8c8e2ceb-28d7-4172-9b37-5835737d98f5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-unplugged-761d949a-b334-4144-be7a-5f02c905c715 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:38:53 np0005532048 podman[378864]: 2025-11-22 09:38:53.202221775 +0000 UTC m=+0.064275940 container remove 2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:38:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.210 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b32bb7e9-c9bf-4978-87c4-070ae19408ae]: (4, ('Sat Nov 22 09:38:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd (2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e)\n2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e\nSat Nov 22 09:38:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd (2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e)\n2df11423c042da2c2e497aabc58bed173a4209ac9037bf4c8355ddf49abc6f4e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.212 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[306c1ebc-6129-4bf3-817d-8f04065901b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.213 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8c9b48b-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:38:53 np0005532048 kernel: tapa8c9b48b-60: left promiscuous mode
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.244 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.248 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.252 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[78d73fec-1749-4972-bf24-d4b98154b2c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.264 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:38:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.268 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ce9788-8d3d-43a0-a9ea-5b05a8fd2a0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.271 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[daeb9bad-3c20-4dae-baf4-2c5bcb80a971]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.294 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1797bbf6-d025-459e-a1b9-fab9aa3ed056]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715347, 'reachable_time': 22668, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378932, 'error': None, 'target': 'ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:53 np0005532048 systemd[1]: run-netns-ovnmeta\x2da8c9b48b\x2d687a\x2d480f\x2daff5\x2dbd1fee4c2bbd.mount: Deactivated successfully.
Nov 22 04:38:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.298 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a8c9b48b-687a-480f-aff5-bd1fee4c2bbd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:38:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.298 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e436cb-3599-4fef-a6a7-e8830891dc7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.303 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 761d949a-b334-4144-be7a-5f02c905c715 in datapath a8c9b48b-687a-480f-aff5-bd1fee4c2bbd unbound from our chassis#033[00m
Nov 22 04:38:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.306 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:38:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.307 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[58beda35-9087-4e39-aaf3-732898eb9985]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.308 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 761d949a-b334-4144-be7a-5f02c905c715 in datapath a8c9b48b-687a-480f-aff5-bd1fee4c2bbd unbound from our chassis#033[00m
Nov 22 04:38:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.310 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8c9b48b-687a-480f-aff5-bd1fee4c2bbd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:38:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:38:53.310 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[061ab4b5-f22f-4a6e-9cac-81d80d9b6177]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:38:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.794 253665 INFO nova.virt.libvirt.driver [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Deleting instance files /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_del#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.795 253665 INFO nova.virt.libvirt.driver [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Deletion of /var/lib/nova/instances/d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3_del complete#033[00m
Nov 22 04:38:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:53Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:a0:d3 10.100.0.3
Nov 22 04:38:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:38:53Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:a0:d3 10.100.0.3
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.874 253665 INFO nova.compute.manager [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Took 1.26 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.874 253665 DEBUG oslo.service.loopingcall [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.874 253665 DEBUG nova.compute.manager [-] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:38:53 np0005532048 nova_compute[253661]: 2025-11-22 09:38:53.875 253665 DEBUG nova.network.neutron [-] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:38:53 np0005532048 podman[379025]: 2025-11-22 09:38:53.919677277 +0000 UTC m=+0.068546276 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 04:38:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 214 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1012 KiB/s wr, 76 op/s
Nov 22 04:38:54 np0005532048 podman[379025]: 2025-11-22 09:38:54.025840077 +0000 UTC m=+0.174709056 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:38:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:38:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:38:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:38:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.220 253665 DEBUG nova.compute.manager [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.223 253665 DEBUG oslo_concurrency.lockutils [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.223 253665 DEBUG oslo_concurrency.lockutils [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.223 253665 DEBUG oslo_concurrency.lockutils [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.224 253665 DEBUG nova.compute.manager [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] No waiting events found dispatching network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.224 253665 WARNING nova.compute.manager [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received unexpected event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.224 253665 DEBUG nova.compute.manager [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.225 253665 DEBUG oslo_concurrency.lockutils [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.225 253665 DEBUG oslo_concurrency.lockutils [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.225 253665 DEBUG oslo_concurrency.lockutils [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.226 253665 DEBUG nova.compute.manager [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] No waiting events found dispatching network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.226 253665 WARNING nova.compute.manager [req-bbf35c15-e78c-4414-834c-c2f6e9b81a05 req-af44cc9f-ee48-439d-8339-a7a89fd0bde3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received unexpected event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.335 253665 DEBUG nova.network.neutron [-] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.352 253665 INFO nova.compute.manager [-] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Took 1.48 seconds to deallocate network for instance.#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.391 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.392 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.502 253665 DEBUG oslo_concurrency.processutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.638 253665 DEBUG nova.network.neutron [req-489a3753-bbc1-48b3-b569-dd63b3b5bbac req-85915e04-a5e3-4ba7-adf1-439c1ef0dc32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Updated VIF entry in instance network info cache for port 761d949a-b334-4144-be7a-5f02c905c715. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.639 253665 DEBUG nova.network.neutron [req-489a3753-bbc1-48b3-b569-dd63b3b5bbac req-85915e04-a5e3-4ba7-adf1-439c1ef0dc32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Updating instance_info_cache with network_info: [{"id": "761d949a-b334-4144-be7a-5f02c905c715", "address": "fa:16:3e:85:a8:14", "network": {"id": "a8c9b48b-687a-480f-aff5-bd1fee4c2bbd", "bridge": "br-int", "label": "tempest-network-smoke--1974403531", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9ddb669b6144eee90dc043099e8df8c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap761d949a-b3", "ovs_interfaceid": "761d949a-b334-4144-be7a-5f02c905c715", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.662 253665 DEBUG oslo_concurrency.lockutils [req-489a3753-bbc1-48b3-b569-dd63b3b5bbac req-85915e04-a5e3-4ba7-adf1-439c1ef0dc32 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:38:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:38:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:38:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:38:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:38:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:38:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:38:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2920870187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:38:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 214 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 1012 KiB/s wr, 25 op/s
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.971 253665 DEBUG oslo_concurrency.processutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.980 253665 DEBUG nova.compute.provider_tree [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:38:55 np0005532048 nova_compute[253661]: 2025-11-22 09:38:55.995 253665 DEBUG nova.scheduler.client.report [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:38:56 np0005532048 nova_compute[253661]: 2025-11-22 09:38:56.010 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:38:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:38:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:38:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:38:56 np0005532048 nova_compute[253661]: 2025-11-22 09:38:56.035 253665 INFO nova.scheduler.client.report [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Deleted allocations for instance d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3#033[00m
Nov 22 04:38:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:38:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:38:56 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 734191c9-652a-4b7a-bf3a-b900a14a3708 does not exist
Nov 22 04:38:56 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev daf05849-ce1f-42bf-a190-62560e25d731 does not exist
Nov 22 04:38:56 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d9b8daa6-e972-47e2-87ce-f1a74bf95add does not exist
Nov 22 04:38:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:38:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:38:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:38:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:38:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:38:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:38:56 np0005532048 nova_compute[253661]: 2025-11-22 09:38:56.087 253665 DEBUG oslo_concurrency.lockutils [None req-dbff76e5-9259-4d4c-866f-4489eda0ef99 24fbabe00a26461eaa9027f7105ae97c a9ddb669b6144eee90dc043099e8df8c - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:38:56 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:38:56 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:38:56 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:38:56 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:38:56 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:38:56 np0005532048 nova_compute[253661]: 2025-11-22 09:38:56.150 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:38:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:38:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:38:56 np0005532048 podman[379476]: 2025-11-22 09:38:56.635816003 +0000 UTC m=+0.039799260 container create a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gauss, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 04:38:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:38:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:38:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:38:56 np0005532048 systemd[1]: Started libpod-conmon-a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96.scope.
Nov 22 04:38:56 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:38:56 np0005532048 podman[379476]: 2025-11-22 09:38:56.619291042 +0000 UTC m=+0.023274329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:38:56 np0005532048 podman[379476]: 2025-11-22 09:38:56.719945735 +0000 UTC m=+0.123929022 container init a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gauss, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:38:56 np0005532048 podman[379476]: 2025-11-22 09:38:56.729592105 +0000 UTC m=+0.133575362 container start a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:38:56 np0005532048 podman[379476]: 2025-11-22 09:38:56.733109093 +0000 UTC m=+0.137092370 container attach a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gauss, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 04:38:56 np0005532048 silly_gauss[379492]: 167 167
Nov 22 04:38:56 np0005532048 systemd[1]: libpod-a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96.scope: Deactivated successfully.
Nov 22 04:38:56 np0005532048 conmon[379492]: conmon a4db668e1f1c776f2f55 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96.scope/container/memory.events
Nov 22 04:38:56 np0005532048 podman[379476]: 2025-11-22 09:38:56.73780166 +0000 UTC m=+0.141784927 container died a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:38:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f7396942851a74e4305ffeb1b48aa39b7853302604fcc518c54b6250a8c757e4-merged.mount: Deactivated successfully.
Nov 22 04:38:56 np0005532048 podman[379476]: 2025-11-22 09:38:56.778391769 +0000 UTC m=+0.182375026 container remove a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 04:38:56 np0005532048 systemd[1]: libpod-conmon-a4db668e1f1c776f2f552e46f5c9404b72c6263da5dc020d99ffa1d836418e96.scope: Deactivated successfully.
Nov 22 04:38:56 np0005532048 podman[379515]: 2025-11-22 09:38:56.971518972 +0000 UTC m=+0.048216171 container create 1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:38:57 np0005532048 systemd[1]: Started libpod-conmon-1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b.scope.
Nov 22 04:38:57 np0005532048 podman[379515]: 2025-11-22 09:38:56.953492114 +0000 UTC m=+0.030189343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:38:57 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:38:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449107cb7696b6a1f167ea90b955a6648f26d0f9b30828b22e5ebef5c197d405/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:38:57 np0005532048 nova_compute[253661]: 2025-11-22 09:38:57.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:38:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449107cb7696b6a1f167ea90b955a6648f26d0f9b30828b22e5ebef5c197d405/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:38:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449107cb7696b6a1f167ea90b955a6648f26d0f9b30828b22e5ebef5c197d405/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:38:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449107cb7696b6a1f167ea90b955a6648f26d0f9b30828b22e5ebef5c197d405/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:38:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449107cb7696b6a1f167ea90b955a6648f26d0f9b30828b22e5ebef5c197d405/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:38:57 np0005532048 podman[379515]: 2025-11-22 09:38:57.084535343 +0000 UTC m=+0.161232562 container init 1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 04:38:57 np0005532048 podman[379515]: 2025-11-22 09:38:57.092517781 +0000 UTC m=+0.169214980 container start 1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 04:38:57 np0005532048 podman[379515]: 2025-11-22 09:38:57.098146491 +0000 UTC m=+0.174843700 container attach 1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 04:38:57 np0005532048 nova_compute[253661]: 2025-11-22 09:38:57.354 253665 DEBUG nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:38:57 np0005532048 nova_compute[253661]: 2025-11-22 09:38:57.354 253665 DEBUG oslo_concurrency.lockutils [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:38:57 np0005532048 nova_compute[253661]: 2025-11-22 09:38:57.355 253665 DEBUG oslo_concurrency.lockutils [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:38:57 np0005532048 nova_compute[253661]: 2025-11-22 09:38:57.355 253665 DEBUG oslo_concurrency.lockutils [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:38:57 np0005532048 nova_compute[253661]: 2025-11-22 09:38:57.355 253665 DEBUG nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] No waiting events found dispatching network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:38:57 np0005532048 nova_compute[253661]: 2025-11-22 09:38:57.355 253665 WARNING nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received unexpected event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 for instance with vm_state deleted and task_state None.
Nov 22 04:38:57 np0005532048 nova_compute[253661]: 2025-11-22 09:38:57.355 253665 DEBUG nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:38:57 np0005532048 nova_compute[253661]: 2025-11-22 09:38:57.356 253665 DEBUG oslo_concurrency.lockutils [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:38:57 np0005532048 nova_compute[253661]: 2025-11-22 09:38:57.356 253665 DEBUG oslo_concurrency.lockutils [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:38:57 np0005532048 nova_compute[253661]: 2025-11-22 09:38:57.356 253665 DEBUG oslo_concurrency.lockutils [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:38:57 np0005532048 nova_compute[253661]: 2025-11-22 09:38:57.356 253665 DEBUG nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] No waiting events found dispatching network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:38:57 np0005532048 nova_compute[253661]: 2025-11-22 09:38:57.356 253665 WARNING nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received unexpected event network-vif-plugged-761d949a-b334-4144-be7a-5f02c905c715 for instance with vm_state deleted and task_state None.
Nov 22 04:38:57 np0005532048 nova_compute[253661]: 2025-11-22 09:38:57.356 253665 DEBUG nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Received event network-vif-deleted-761d949a-b334-4144-be7a-5f02c905c715 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:38:57 np0005532048 nova_compute[253661]: 2025-11-22 09:38:57.357 253665 INFO nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Neutron deleted interface 761d949a-b334-4144-be7a-5f02c905c715; detaching it from the instance and deleting it from the info cache
Nov 22 04:38:57 np0005532048 nova_compute[253661]: 2025-11-22 09:38:57.357 253665 DEBUG nova.network.neutron [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 22 04:38:57 np0005532048 nova_compute[253661]: 2025-11-22 09:38:57.361 253665 DEBUG nova.compute.manager [req-4347fe7e-6397-4385-a348-e3b639d6b224 req-32008205-6290-4d3a-bcf6-224c3e42fa6c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Detach interface failed, port_id=761d949a-b334-4144-be7a-5f02c905c715, reason: Instance d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 04:38:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 195 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Nov 22 04:38:58 np0005532048 nova_compute[253661]: 2025-11-22 09:38:58.084 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:38:58 np0005532048 happy_elgamal[379532]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:38:58 np0005532048 happy_elgamal[379532]: --> relative data size: 1.0
Nov 22 04:38:58 np0005532048 happy_elgamal[379532]: --> All data devices are unavailable
Nov 22 04:38:58 np0005532048 systemd[1]: libpod-1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b.scope: Deactivated successfully.
Nov 22 04:38:58 np0005532048 systemd[1]: libpod-1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b.scope: Consumed 1.106s CPU time.
Nov 22 04:38:58 np0005532048 podman[379515]: 2025-11-22 09:38:58.268043995 +0000 UTC m=+1.344741194 container died 1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:38:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay-449107cb7696b6a1f167ea90b955a6648f26d0f9b30828b22e5ebef5c197d405-merged.mount: Deactivated successfully.
Nov 22 04:38:58 np0005532048 podman[379515]: 2025-11-22 09:38:58.441541379 +0000 UTC m=+1.518238578 container remove 1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:38:58 np0005532048 systemd[1]: libpod-conmon-1c9006baf6ddb8ed91158804fe375bac9595b2ccca5ab0cc251fca97e4774f8b.scope: Deactivated successfully.
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:38:58 np0005532048 podman[379600]: 2025-11-22 09:38:58.769995797 +0000 UTC m=+0.169973508 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:58.888821) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804338888879, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 485, "num_deletes": 256, "total_data_size": 413910, "memory_usage": 423032, "flush_reason": "Manual Compaction"}
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804338940426, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 410299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47411, "largest_seqno": 47895, "table_properties": {"data_size": 407548, "index_size": 787, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6637, "raw_average_key_size": 18, "raw_value_size": 401916, "raw_average_value_size": 1125, "num_data_blocks": 34, "num_entries": 357, "num_filter_entries": 357, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763804315, "oldest_key_time": 1763804315, "file_creation_time": 1763804338, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 51650 microseconds, and 2247 cpu microseconds.
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:58.940469) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 410299 bytes OK
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:58.940494) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:58.996848) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:58.996928) EVENT_LOG_v1 {"time_micros": 1763804338996914, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:58.996960) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 411003, prev total WAL file size 411003, number of live WAL files 2.
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:58.997675) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373534' seq:72057594037927935, type:22 .. '6C6F676D0032303036' seq:0, type:0; will stop at (end)
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(400KB)], [107(10MB)]
Nov 22 04:38:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804338997722, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 10921872, "oldest_snapshot_seqno": -1}
Nov 22 04:38:59 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6955 keys, 10788561 bytes, temperature: kUnknown
Nov 22 04:38:59 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804339132735, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 10788561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10739610, "index_size": 30471, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17413, "raw_key_size": 181053, "raw_average_key_size": 26, "raw_value_size": 10612750, "raw_average_value_size": 1525, "num_data_blocks": 1193, "num_entries": 6955, "num_filter_entries": 6955, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763804338, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:38:59 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:38:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:59.133006) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10788561 bytes
Nov 22 04:38:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:59.149633) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.8 rd, 79.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.0 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(52.9) write-amplify(26.3) OK, records in: 7479, records dropped: 524 output_compression: NoCompression
Nov 22 04:38:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:59.149689) EVENT_LOG_v1 {"time_micros": 1763804339149670, "job": 64, "event": "compaction_finished", "compaction_time_micros": 135106, "compaction_time_cpu_micros": 27682, "output_level": 6, "num_output_files": 1, "total_output_size": 10788561, "num_input_records": 7479, "num_output_records": 6955, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:38:59 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:38:59 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804339149998, "job": 64, "event": "table_file_deletion", "file_number": 109}
Nov 22 04:38:59 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:38:59 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804339154049, "job": 64, "event": "table_file_deletion", "file_number": 107}
Nov 22 04:38:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:58.997526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:38:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:59.154122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:38:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:59.154128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:38:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:59.154130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:38:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:59.154132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:38:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:38:59.154134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:38:59 np0005532048 podman[379746]: 2025-11-22 09:38:59.179476181 +0000 UTC m=+0.025456664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:38:59 np0005532048 podman[379746]: 2025-11-22 09:38:59.28122025 +0000 UTC m=+0.127200743 container create d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elion, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:38:59 np0005532048 systemd[1]: Started libpod-conmon-d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa.scope.
Nov 22 04:38:59 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:38:59 np0005532048 nova_compute[253661]: 2025-11-22 09:38:59.493 253665 INFO nova.compute.manager [None req-20e98732-75d2-48cc-a1c0-1dbd54b757d8 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Get console output#033[00m
Nov 22 04:38:59 np0005532048 nova_compute[253661]: 2025-11-22 09:38:59.502 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:38:59 np0005532048 podman[379746]: 2025-11-22 09:38:59.51157785 +0000 UTC m=+0.357558333 container init d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 04:38:59 np0005532048 podman[379746]: 2025-11-22 09:38:59.519905747 +0000 UTC m=+0.365886190 container start d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 22 04:38:59 np0005532048 compassionate_elion[379763]: 167 167
Nov 22 04:38:59 np0005532048 systemd[1]: libpod-d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa.scope: Deactivated successfully.
Nov 22 04:38:59 np0005532048 podman[379746]: 2025-11-22 09:38:59.599365513 +0000 UTC m=+0.445345996 container attach d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elion, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:38:59 np0005532048 podman[379746]: 2025-11-22 09:38:59.600670355 +0000 UTC m=+0.446650808 container died d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elion, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 04:38:59 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6fc61eb8d3cb3f8a8d3a124395d7538a58a6cd03c98f5aac6cdaa7835ac699bf-merged.mount: Deactivated successfully.
Nov 22 04:38:59 np0005532048 podman[379746]: 2025-11-22 09:38:59.680182322 +0000 UTC m=+0.526162775 container remove d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elion, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 04:38:59 np0005532048 systemd[1]: libpod-conmon-d3795c915ce87e63fcde13293e5c8ec8c08a86a94fabcd500f65138996358afa.scope: Deactivated successfully.
Nov 22 04:38:59 np0005532048 podman[379787]: 2025-11-22 09:38:59.879630373 +0000 UTC m=+0.042411406 container create f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mclean, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:38:59 np0005532048 systemd[1]: Started libpod-conmon-f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163.scope.
Nov 22 04:38:59 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:38:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ab96f68364f6dd50242684df7b3718cbffc0ec58491837e73cc1b3bb583747/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:38:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ab96f68364f6dd50242684df7b3718cbffc0ec58491837e73cc1b3bb583747/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:38:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ab96f68364f6dd50242684df7b3718cbffc0ec58491837e73cc1b3bb583747/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:38:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ab96f68364f6dd50242684df7b3718cbffc0ec58491837e73cc1b3bb583747/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:38:59 np0005532048 podman[379787]: 2025-11-22 09:38:59.862316491 +0000 UTC m=+0.025097564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:38:59 np0005532048 podman[379787]: 2025-11-22 09:38:59.970516633 +0000 UTC m=+0.133297696 container init f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 22 04:38:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 200 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Nov 22 04:38:59 np0005532048 podman[379787]: 2025-11-22 09:38:59.978996214 +0000 UTC m=+0.141777257 container start f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 04:38:59 np0005532048 podman[379787]: 2025-11-22 09:38:59.983814563 +0000 UTC m=+0.146595636 container attach f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mclean, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 04:39:00 np0005532048 musing_mclean[379804]: {
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:    "0": [
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:        {
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "devices": [
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "/dev/loop3"
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            ],
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "lv_name": "ceph_lv0",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "lv_size": "21470642176",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "name": "ceph_lv0",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "tags": {
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.cluster_name": "ceph",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.crush_device_class": "",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.encrypted": "0",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.osd_id": "0",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.type": "block",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.vdo": "0"
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            },
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "type": "block",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "vg_name": "ceph_vg0"
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:        }
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:    ],
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:    "1": [
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:        {
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "devices": [
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "/dev/loop4"
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            ],
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "lv_name": "ceph_lv1",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "lv_size": "21470642176",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "name": "ceph_lv1",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "tags": {
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.cluster_name": "ceph",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.crush_device_class": "",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.encrypted": "0",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.osd_id": "1",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.type": "block",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.vdo": "0"
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            },
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "type": "block",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "vg_name": "ceph_vg1"
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:        }
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:    ],
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:    "2": [
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:        {
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "devices": [
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "/dev/loop5"
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            ],
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "lv_name": "ceph_lv2",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "lv_size": "21470642176",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "name": "ceph_lv2",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "tags": {
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.cluster_name": "ceph",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.crush_device_class": "",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.encrypted": "0",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.osd_id": "2",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.type": "block",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:                "ceph.vdo": "0"
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            },
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "type": "block",
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:            "vg_name": "ceph_vg2"
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:        }
Nov 22 04:39:00 np0005532048 musing_mclean[379804]:    ]
Nov 22 04:39:00 np0005532048 musing_mclean[379804]: }
Nov 22 04:39:00 np0005532048 systemd[1]: libpod-f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163.scope: Deactivated successfully.
Nov 22 04:39:00 np0005532048 podman[379787]: 2025-11-22 09:39:00.82652316 +0000 UTC m=+0.989304203 container died f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:39:00 np0005532048 systemd[1]: var-lib-containers-storage-overlay-93ab96f68364f6dd50242684df7b3718cbffc0ec58491837e73cc1b3bb583747-merged.mount: Deactivated successfully.
Nov 22 04:39:00 np0005532048 podman[379787]: 2025-11-22 09:39:00.891747363 +0000 UTC m=+1.054528416 container remove f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:39:00 np0005532048 systemd[1]: libpod-conmon-f535983969d6a3cbea3e85840cdc1d5fd8662b078471b8814751f4cd57c03163.scope: Deactivated successfully.
Nov 22 04:39:01 np0005532048 nova_compute[253661]: 2025-11-22 09:39:01.183 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:01 np0005532048 podman[379966]: 2025-11-22 09:39:01.600198431 +0000 UTC m=+0.041626416 container create 24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 22 04:39:01 np0005532048 systemd[1]: Started libpod-conmon-24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84.scope.
Nov 22 04:39:01 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:39:01 np0005532048 podman[379966]: 2025-11-22 09:39:01.581990008 +0000 UTC m=+0.023418023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:39:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:01Z|01287|binding|INFO|Releasing lport eea1332c-6e32-4e52-a7c7-645bf860d501 from this chassis (sb_readonly=0)
Nov 22 04:39:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:01Z|01288|binding|INFO|Releasing lport e81a7283-b7a8-4fa9-8cc9-183f5a17ea6c from this chassis (sb_readonly=0)
Nov 22 04:39:01 np0005532048 podman[379966]: 2025-11-22 09:39:01.693095991 +0000 UTC m=+0.134524006 container init 24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 04:39:01 np0005532048 podman[379966]: 2025-11-22 09:39:01.704096705 +0000 UTC m=+0.145524690 container start 24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 04:39:01 np0005532048 podman[379966]: 2025-11-22 09:39:01.707693423 +0000 UTC m=+0.149121408 container attach 24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:39:01 np0005532048 magical_cartwright[379982]: 167 167
Nov 22 04:39:01 np0005532048 systemd[1]: libpod-24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84.scope: Deactivated successfully.
Nov 22 04:39:01 np0005532048 podman[379966]: 2025-11-22 09:39:01.712720169 +0000 UTC m=+0.154148154 container died 24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 04:39:01 np0005532048 systemd[1]: var-lib-containers-storage-overlay-fc40b2a45cfe92dc929fcf04cfea837af13a23c3239fc723483cc00043fe3b88-merged.mount: Deactivated successfully.
Nov 22 04:39:01 np0005532048 nova_compute[253661]: 2025-11-22 09:39:01.747 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:01 np0005532048 podman[379966]: 2025-11-22 09:39:01.761022631 +0000 UTC m=+0.202450606 container remove 24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cartwright, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:39:01 np0005532048 systemd[1]: libpod-conmon-24d554ea7842f00ead4150aed7fe5e0f79dfbbb6719247b985998bbf533b6c84.scope: Deactivated successfully.
Nov 22 04:39:01 np0005532048 podman[380006]: 2025-11-22 09:39:01.96772313 +0000 UTC m=+0.041974714 container create c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williamson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 04:39:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 200 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 22 04:39:02 np0005532048 systemd[1]: Started libpod-conmon-c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e.scope.
Nov 22 04:39:02 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:39:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea9331582558c400598aa9247d43db511be4f2e747d19b320df62a8b85165575/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:39:02 np0005532048 podman[380006]: 2025-11-22 09:39:01.949042156 +0000 UTC m=+0.023293760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:39:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea9331582558c400598aa9247d43db511be4f2e747d19b320df62a8b85165575/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:39:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea9331582558c400598aa9247d43db511be4f2e747d19b320df62a8b85165575/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:39:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea9331582558c400598aa9247d43db511be4f2e747d19b320df62a8b85165575/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:39:02 np0005532048 podman[380006]: 2025-11-22 09:39:02.058347154 +0000 UTC m=+0.132598798 container init c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williamson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:39:02 np0005532048 podman[380006]: 2025-11-22 09:39:02.065573264 +0000 UTC m=+0.139824848 container start c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williamson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:39:02 np0005532048 podman[380006]: 2025-11-22 09:39:02.069245106 +0000 UTC m=+0.143496710 container attach c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:39:02 np0005532048 nova_compute[253661]: 2025-11-22 09:39:02.345 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015189276470013674 of space, bias 1.0, pg target 0.4556782941004102 quantized to 32 (current 32)
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:39:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.087 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]: {
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:        "osd_id": 1,
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:        "type": "bluestore"
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:    },
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:        "osd_id": 0,
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:        "type": "bluestore"
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:    },
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:        "osd_id": 2,
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:        "type": "bluestore"
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]:    }
Nov 22 04:39:03 np0005532048 hardcore_williamson[380022]: }
Nov 22 04:39:03 np0005532048 systemd[1]: libpod-c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e.scope: Deactivated successfully.
Nov 22 04:39:03 np0005532048 systemd[1]: libpod-c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e.scope: Consumed 1.118s CPU time.
Nov 22 04:39:03 np0005532048 podman[380006]: 2025-11-22 09:39:03.187958266 +0000 UTC m=+1.262209890 container died c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williamson, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:39:03 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ea9331582558c400598aa9247d43db511be4f2e747d19b320df62a8b85165575-merged.mount: Deactivated successfully.
Nov 22 04:39:03 np0005532048 podman[380006]: 2025-11-22 09:39:03.290037085 +0000 UTC m=+1.364288679 container remove c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_williamson, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:39:03 np0005532048 systemd[1]: libpod-conmon-c3786bca6b0753450ba2712fd7a88f57e131eddcf8817ea492f6533d552dbc6e.scope: Deactivated successfully.
Nov 22 04:39:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:39:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:39:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:39:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:39:03 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 273ac386-cade-44f8-83c7-d6e1b0186ba8 does not exist
Nov 22 04:39:03 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c1ca2862-edbb-46fe-adc8-bcb53e33f9a1 does not exist
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.556 253665 DEBUG nova.compute.manager [req-100b3643-8d63-4acc-8239-ed6e46048e09 req-39835cde-daac-48b8-a6a4-41545f1435bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Received event network-changed-1e21d7ad-a6e7-4649-91f2-612de75fe16f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.556 253665 DEBUG nova.compute.manager [req-100b3643-8d63-4acc-8239-ed6e46048e09 req-39835cde-daac-48b8-a6a4-41545f1435bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Refreshing instance network info cache due to event network-changed-1e21d7ad-a6e7-4649-91f2-612de75fe16f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.557 253665 DEBUG oslo_concurrency.lockutils [req-100b3643-8d63-4acc-8239-ed6e46048e09 req-39835cde-daac-48b8-a6a4-41545f1435bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.557 253665 DEBUG oslo_concurrency.lockutils [req-100b3643-8d63-4acc-8239-ed6e46048e09 req-39835cde-daac-48b8-a6a4-41545f1435bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.557 253665 DEBUG nova.network.neutron [req-100b3643-8d63-4acc-8239-ed6e46048e09 req-39835cde-daac-48b8-a6a4-41545f1435bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Refreshing network info cache for port 1e21d7ad-a6e7-4649-91f2-612de75fe16f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.652 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "d2f5b215-3a41-451c-8ad8-68b17c96a678" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.653 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.653 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.653 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.654 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.655 253665 INFO nova.compute.manager [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Terminating instance#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.656 253665 DEBUG nova.compute.manager [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:39:03 np0005532048 kernel: tap1e21d7ad-a6 (unregistering): left promiscuous mode
Nov 22 04:39:03 np0005532048 NetworkManager[48920]: <info>  [1763804343.7210] device (tap1e21d7ad-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.730 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:03Z|01289|binding|INFO|Releasing lport 1e21d7ad-a6e7-4649-91f2-612de75fe16f from this chassis (sb_readonly=0)
Nov 22 04:39:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:03Z|01290|binding|INFO|Setting lport 1e21d7ad-a6e7-4649-91f2-612de75fe16f down in Southbound
Nov 22 04:39:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:03Z|01291|binding|INFO|Removing iface tap1e21d7ad-a6 ovn-installed in OVS
Nov 22 04:39:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:03.737 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:0f:ac 10.100.0.14'], port_security=['fa:16:3e:1a:0f:ac 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd2f5b215-3a41-451c-8ad8-68b17c96a678', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-37126bdf-684b-42ae-b38f-88d563755df6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '443f7e2d-f0e9-45ab-9cf5-08268d38e115 d6d16faa-9388-499f-aa74-b3fccde9fbc6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7f8c6c4-9648-452d-b35b-4ce3aef6c8f6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1e21d7ad-a6e7-4649-91f2-612de75fe16f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:39:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:03.739 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1e21d7ad-a6e7-4649-91f2-612de75fe16f in datapath 37126bdf-684b-42ae-b38f-88d563755df6 unbound from our chassis#033[00m
Nov 22 04:39:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:03.742 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 37126bdf-684b-42ae-b38f-88d563755df6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:39:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:03.743 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[82650785-8a33-4844-8b19-6beba649c2f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:03.744 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6 namespace which is not needed anymore#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.749 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:03 np0005532048 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000075.scope: Deactivated successfully.
Nov 22 04:39:03 np0005532048 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000075.scope: Consumed 17.977s CPU time.
Nov 22 04:39:03 np0005532048 systemd-machined[215941]: Machine qemu-147-instance-00000075 terminated.
Nov 22 04:39:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.902 253665 INFO nova.virt.libvirt.driver [-] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Instance destroyed successfully.#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.903 253665 DEBUG nova.objects.instance [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'resources' on Instance uuid d2f5b215-3a41-451c-8ad8-68b17c96a678 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.913 253665 DEBUG nova.virt.libvirt.vif [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:37:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1117485835',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-1117485835',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=117,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhEnav/8bmHhlravIj7ZzbNKEW+UMvBgA2sDDDC11ma4Sh8uEn9mVvYdSzBFRFowvU98Jl7d9jrFKpsv67Pj9Xp0jWGCVRbBnzzKhVjFFyGFkc+DH0al99fQPTR1eXa1A==',key_name='tempest-TestSecurityGroupsBasicOps-1955317373',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:37:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-nt0g0idi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:37:39Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=d2f5b215-3a41-451c-8ad8-68b17c96a678,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.915 253665 DEBUG nova.network.os_vif_util [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:39:03 np0005532048 neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6[375005]: [NOTICE]   (375011) : haproxy version is 2.8.14-c23fe91
Nov 22 04:39:03 np0005532048 neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6[375005]: [NOTICE]   (375011) : path to executable is /usr/sbin/haproxy
Nov 22 04:39:03 np0005532048 neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6[375005]: [WARNING]  (375011) : Exiting Master process...
Nov 22 04:39:03 np0005532048 neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6[375005]: [WARNING]  (375011) : Exiting Master process...
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.916 253665 DEBUG nova.network.os_vif_util [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:0f:ac,bridge_name='br-int',has_traffic_filtering=True,id=1e21d7ad-a6e7-4649-91f2-612de75fe16f,network=Network(37126bdf-684b-42ae-b38f-88d563755df6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e21d7ad-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.917 253665 DEBUG os_vif [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:0f:ac,bridge_name='br-int',has_traffic_filtering=True,id=1e21d7ad-a6e7-4649-91f2-612de75fe16f,network=Network(37126bdf-684b-42ae-b38f-88d563755df6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e21d7ad-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:39:03 np0005532048 neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6[375005]: [ALERT]    (375011) : Current worker (375013) exited with code 143 (Terminated)
Nov 22 04:39:03 np0005532048 neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6[375005]: [WARNING]  (375011) : All workers exited. Exiting... (0)
Nov 22 04:39:03 np0005532048 systemd[1]: libpod-2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1.scope: Deactivated successfully.
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.923 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.923 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e21d7ad-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.926 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:03 np0005532048 podman[380143]: 2025-11-22 09:39:03.927376275 +0000 UTC m=+0.056403245 container died 2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.929 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:03 np0005532048 nova_compute[253661]: 2025-11-22 09:39:03.932 253665 INFO os_vif [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:0f:ac,bridge_name='br-int',has_traffic_filtering=True,id=1e21d7ad-a6e7-4649-91f2-612de75fe16f,network=Network(37126bdf-684b-42ae-b38f-88d563755df6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e21d7ad-a6')#033[00m
Nov 22 04:39:03 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1-userdata-shm.mount: Deactivated successfully.
Nov 22 04:39:03 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2a8e5aadf5ecb0314ed34aef4194ac1d915ee5e6be36aa131d63c38cd863668f-merged.mount: Deactivated successfully.
Nov 22 04:39:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 200 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 22 04:39:03 np0005532048 podman[380143]: 2025-11-22 09:39:03.983862989 +0000 UTC m=+0.112889969 container cleanup 2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:39:03 np0005532048 systemd[1]: libpod-conmon-2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1.scope: Deactivated successfully.
Nov 22 04:39:04 np0005532048 podman[380197]: 2025-11-22 09:39:04.066652718 +0000 UTC m=+0.054205719 container remove 2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:39:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.073 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[495ada6d-603c-4272-9b96-f7f8cf89ede5]: (4, ('Sat Nov 22 09:39:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6 (2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1)\n2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1\nSat Nov 22 09:39:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6 (2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1)\n2c91fb3ddf9c1b4be458d6b8cf931a64f59c3a0027e619751c4cb9f268587ae1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.075 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[acb1db57-38d8-4de6-a3fe-e5f7c3282069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.079 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37126bdf-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:04 np0005532048 nova_compute[253661]: 2025-11-22 09:39:04.081 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:04 np0005532048 kernel: tap37126bdf-60: left promiscuous mode
Nov 22 04:39:04 np0005532048 nova_compute[253661]: 2025-11-22 09:39:04.108 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.113 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[54badb81-e926-4156-95df-94c90caa7151]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.129 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[364b6941-e552-40fe-b156-e68d775c079a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.131 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[73004a20-672b-4c1e-a554-701c0a5d07e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.150 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fa1a21d2-dcf9-48cb-b60c-54f15f3caa82]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710732, 'reachable_time': 29533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380212, 'error': None, 'target': 'ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:04 np0005532048 systemd[1]: run-netns-ovnmeta\x2d37126bdf\x2d684b\x2d42ae\x2db38f\x2d88d563755df6.mount: Deactivated successfully.
Nov 22 04:39:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.155 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-37126bdf-684b-42ae-b38f-88d563755df6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:39:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.155 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[187d5292-dfc7-40ee-8460-721ac66b08d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:04 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:39:04 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:39:04 np0005532048 nova_compute[253661]: 2025-11-22 09:39:04.397 253665 INFO nova.virt.libvirt.driver [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Deleting instance files /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678_del#033[00m
Nov 22 04:39:04 np0005532048 nova_compute[253661]: 2025-11-22 09:39:04.399 253665 INFO nova.virt.libvirt.driver [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Deletion of /var/lib/nova/instances/d2f5b215-3a41-451c-8ad8-68b17c96a678_del complete#033[00m
Nov 22 04:39:04 np0005532048 nova_compute[253661]: 2025-11-22 09:39:04.490 253665 INFO nova.compute.manager [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:39:04 np0005532048 nova_compute[253661]: 2025-11-22 09:39:04.491 253665 DEBUG oslo.service.loopingcall [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:39:04 np0005532048 nova_compute[253661]: 2025-11-22 09:39:04.491 253665 DEBUG nova.compute.manager [-] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:39:04 np0005532048 nova_compute[253661]: 2025-11-22 09:39:04.492 253665 DEBUG nova.network.neutron [-] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:39:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.502 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:0f:0b 2001:db8:0:1:f816:3eff:fe8d:f0b 2001:db8::f816:3eff:fe8d:f0b'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe8d:f0b/64 2001:db8::f816:3eff:fe8d:f0b/64', 'neutron:device_id': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20228844-2184-465b-8bc3-e846cfb6d3cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd6b77e6-a2ac-463b-a37b-14dc60b71e56, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c6eb41f8-4dde-4c2b-a6c7-dd47868a17b1) old=Port_Binding(mac=['fa:16:3e:8d:0f:0b 2001:db8::f816:3eff:fe8d:f0b'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe8d:f0b/64', 'neutron:device_id': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20228844-2184-465b-8bc3-e846cfb6d3cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:39:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.504 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c6eb41f8-4dde-4c2b-a6c7-dd47868a17b1 in datapath 20228844-2184-465b-8bc3-e846cfb6d3cb updated#033[00m
Nov 22 04:39:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.505 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 20228844-2184-465b-8bc3-e846cfb6d3cb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:39:04 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:04.506 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[794ebc08-01ef-4e92-ba26-909fc556eaf8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:04 np0005532048 nova_compute[253661]: 2025-11-22 09:39:04.593 253665 DEBUG oslo_concurrency.lockutils [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "interface-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:04 np0005532048 nova_compute[253661]: 2025-11-22 09:39:04.594 253665 DEBUG oslo_concurrency.lockutils [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "interface-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:04 np0005532048 nova_compute[253661]: 2025-11-22 09:39:04.595 253665 DEBUG nova.objects.instance [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'flavor' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:39:05 np0005532048 nova_compute[253661]: 2025-11-22 09:39:05.291 253665 DEBUG nova.network.neutron [req-100b3643-8d63-4acc-8239-ed6e46048e09 req-39835cde-daac-48b8-a6a4-41545f1435bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Updated VIF entry in instance network info cache for port 1e21d7ad-a6e7-4649-91f2-612de75fe16f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:39:05 np0005532048 nova_compute[253661]: 2025-11-22 09:39:05.292 253665 DEBUG nova.network.neutron [req-100b3643-8d63-4acc-8239-ed6e46048e09 req-39835cde-daac-48b8-a6a4-41545f1435bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Updating instance_info_cache with network_info: [{"id": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "address": "fa:16:3e:1a:0f:ac", "network": {"id": "37126bdf-684b-42ae-b38f-88d563755df6", "bridge": "br-int", "label": "tempest-network-smoke--1122519385", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e21d7ad-a6", "ovs_interfaceid": "1e21d7ad-a6e7-4649-91f2-612de75fe16f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:39:05 np0005532048 nova_compute[253661]: 2025-11-22 09:39:05.375 253665 DEBUG oslo_concurrency.lockutils [req-100b3643-8d63-4acc-8239-ed6e46048e09 req-39835cde-daac-48b8-a6a4-41545f1435bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d2f5b215-3a41-451c-8ad8-68b17c96a678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:39:05 np0005532048 nova_compute[253661]: 2025-11-22 09:39:05.528 253665 DEBUG nova.objects.instance [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:39:05 np0005532048 nova_compute[253661]: 2025-11-22 09:39:05.548 253665 DEBUG nova.network.neutron [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:39:05 np0005532048 nova_compute[253661]: 2025-11-22 09:39:05.754 253665 DEBUG nova.network.neutron [-] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:39:05 np0005532048 nova_compute[253661]: 2025-11-22 09:39:05.773 253665 INFO nova.compute.manager [-] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Took 1.28 seconds to deallocate network for instance.#033[00m
Nov 22 04:39:05 np0005532048 nova_compute[253661]: 2025-11-22 09:39:05.847 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:05 np0005532048 nova_compute[253661]: 2025-11-22 09:39:05.848 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:05 np0005532048 nova_compute[253661]: 2025-11-22 09:39:05.941 253665 DEBUG oslo_concurrency.processutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 200 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 1.2 MiB/s wr, 69 op/s
Nov 22 04:39:06 np0005532048 nova_compute[253661]: 2025-11-22 09:39:06.186 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:39:06 np0005532048 nova_compute[253661]: 2025-11-22 09:39:06.355 253665 DEBUG nova.policy [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:39:06 np0005532048 nova_compute[253661]: 2025-11-22 09:39:06.412 253665 DEBUG nova.compute.manager [req-650388d3-d5e8-4aa7-be8f-7943ae93eee1 req-2eb9d447-061d-4aa7-84bb-b938c752f5f2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Received event network-vif-deleted-1e21d7ad-a6e7-4649-91f2-612de75fe16f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:39:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:39:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/869904138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:39:06 np0005532048 nova_compute[253661]: 2025-11-22 09:39:06.471 253665 DEBUG oslo_concurrency.processutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:39:06 np0005532048 nova_compute[253661]: 2025-11-22 09:39:06.479 253665 DEBUG nova.compute.provider_tree [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:39:06 np0005532048 nova_compute[253661]: 2025-11-22 09:39:06.495 253665 DEBUG nova.scheduler.client.report [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:39:06 np0005532048 nova_compute[253661]: 2025-11-22 09:39:06.520 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:39:06 np0005532048 nova_compute[253661]: 2025-11-22 09:39:06.552 253665 INFO nova.scheduler.client.report [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Deleted allocations for instance d2f5b215-3a41-451c-8ad8-68b17c96a678
Nov 22 04:39:06 np0005532048 nova_compute[253661]: 2025-11-22 09:39:06.658 253665 DEBUG oslo_concurrency.lockutils [None req-701d69b1-1927-414f-befd-59248fd57c21 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "d2f5b215-3a41-451c-8ad8-68b17c96a678" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:39:07 np0005532048 nova_compute[253661]: 2025-11-22 09:39:07.036 253665 DEBUG nova.network.neutron [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Successfully created port: b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:39:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 140 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 1.2 MiB/s wr, 81 op/s
Nov 22 04:39:08 np0005532048 nova_compute[253661]: 2025-11-22 09:39:08.057 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804333.0546021, d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:39:08 np0005532048 nova_compute[253661]: 2025-11-22 09:39:08.057 253665 INFO nova.compute.manager [-] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] VM Stopped (Lifecycle Event)
Nov 22 04:39:08 np0005532048 nova_compute[253661]: 2025-11-22 09:39:08.076 253665 DEBUG nova.compute.manager [None req-8a8c487e-7e58-4555-ab90-36743f4eba9c - - - - - -] [instance: d1cc6b07-57c8-46b4-abbb-e0a366b6c2c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:39:08 np0005532048 nova_compute[253661]: 2025-11-22 09:39:08.231 253665 DEBUG nova.network.neutron [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Successfully updated port: b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:39:08 np0005532048 nova_compute[253661]: 2025-11-22 09:39:08.252 253665 DEBUG oslo_concurrency.lockutils [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:39:08 np0005532048 nova_compute[253661]: 2025-11-22 09:39:08.253 253665 DEBUG oslo_concurrency.lockutils [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:39:08 np0005532048 nova_compute[253661]: 2025-11-22 09:39:08.253 253665 DEBUG nova.network.neutron [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:39:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:39:08 np0005532048 nova_compute[253661]: 2025-11-22 09:39:08.926 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:39:09 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:09Z|01292|binding|INFO|Releasing lport e81a7283-b7a8-4fa9-8cc9-183f5a17ea6c from this chassis (sb_readonly=0)
Nov 22 04:39:09 np0005532048 nova_compute[253661]: 2025-11-22 09:39:09.929 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:39:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 121 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 43 KiB/s wr, 37 op/s
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.451 253665 DEBUG nova.compute.manager [req-dcc4a3cd-c289-407d-af44-23b8081b0cb0 req-2ac5bbe6-64bf-47b5-b5a4-102e31afab76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-changed-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.452 253665 DEBUG nova.compute.manager [req-dcc4a3cd-c289-407d-af44-23b8081b0cb0 req-2ac5bbe6-64bf-47b5-b5a4-102e31afab76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing instance network info cache due to event network-changed-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.452 253665 DEBUG oslo_concurrency.lockutils [req-dcc4a3cd-c289-407d-af44-23b8081b0cb0 req-2ac5bbe6-64bf-47b5-b5a4-102e31afab76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.696 253665 DEBUG nova.network.neutron [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.717 253665 DEBUG oslo_concurrency.lockutils [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.719 253665 DEBUG oslo_concurrency.lockutils [req-dcc4a3cd-c289-407d-af44-23b8081b0cb0 req-2ac5bbe6-64bf-47b5-b5a4-102e31afab76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.720 253665 DEBUG nova.network.neutron [req-dcc4a3cd-c289-407d-af44-23b8081b0cb0 req-2ac5bbe6-64bf-47b5-b5a4-102e31afab76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing network info cache for port b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.723 253665 DEBUG nova.virt.libvirt.vif [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:40Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.723 253665 DEBUG nova.network.os_vif_util [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.724 253665 DEBUG nova.network.os_vif_util [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.725 253665 DEBUG os_vif [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.725 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.726 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.726 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.729 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.729 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9b8fcd6-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.730 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb9b8fcd6-fb, col_values=(('external_ids', {'iface-id': 'b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:d1:9a', 'vm-uuid': '9c45a555-9969-4d8a-bd3b-1ab61ce6f68c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:39:10 np0005532048 NetworkManager[48920]: <info>  [1763804350.7322] manager: (tapb9b8fcd6-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/523)
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.735 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.737 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.741 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.741 253665 INFO os_vif [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb')
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.742 253665 DEBUG nova.virt.libvirt.vif [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:40Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.742 253665 DEBUG nova.network.os_vif_util [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.743 253665 DEBUG nova.network.os_vif_util [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.746 253665 DEBUG nova.virt.libvirt.guest [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] attach device xml: <interface type="ethernet">
Nov 22 04:39:10 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:d7:d1:9a"/>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:  <target dev="tapb9b8fcd6-fb"/>
Nov 22 04:39:10 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:39:10 np0005532048 nova_compute[253661]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 22 04:39:10 np0005532048 NetworkManager[48920]: <info>  [1763804350.7626] manager: (tapb9b8fcd6-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/524)
Nov 22 04:39:10 np0005532048 kernel: tapb9b8fcd6-fb: entered promiscuous mode
Nov 22 04:39:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:10Z|01293|binding|INFO|Claiming lport b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f for this chassis.
Nov 22 04:39:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:10Z|01294|binding|INFO|b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f: Claiming fa:16:3e:d7:d1:9a 10.100.0.24
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.765 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:39:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.773 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:d1:9a 10.100.0.24'], port_security=['fa:16:3e:d7:d1:9a 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '9c45a555-9969-4d8a-bd3b-1ab61ce6f68c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1568c3cc-a804-4f98-8194-b53f79976399', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe5061ce-83c8-4f7d-bdd0-cc8d1c8db63d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:39:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.774 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f in datapath 30756ec6-103b-4571-a5dc-9b4a481bc5b1 bound to our chassis
Nov 22 04:39:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.776 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 30756ec6-103b-4571-a5dc-9b4a481bc5b1
Nov 22 04:39:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.787 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[28d03135-81e8-40c9-b557-5d0a8bca8146]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:39:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.788 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap30756ec6-11 in ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:39:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.790 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap30756ec6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:39:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.790 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8f2098-8f2e-4ebc-87fe-558288f41c33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:39:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.790 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0e988795-c414-4f40-813e-8eb341542057]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:39:10 np0005532048 systemd-udevd[380243]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:39:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.803 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a04e3422-16e1-42d6-bc63-ff683272af51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:10 np0005532048 NetworkManager[48920]: <info>  [1763804350.8081] device (tapb9b8fcd6-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:39:10 np0005532048 NetworkManager[48920]: <info>  [1763804350.8106] device (tapb9b8fcd6-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.810 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:10Z|01295|binding|INFO|Setting lport b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f ovn-installed in OVS
Nov 22 04:39:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:10Z|01296|binding|INFO|Setting lport b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f up in Southbound
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.816 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.822 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3efafc71-d2a2-4572-a365-e7c751fe93c4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.848 253665 DEBUG nova.virt.libvirt.driver [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.848 253665 DEBUG nova.virt.libvirt.driver [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.848 253665 DEBUG nova.virt.libvirt.driver [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:30:a0:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.848 253665 DEBUG nova.virt.libvirt.driver [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:d7:d1:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.866 253665 DEBUG nova.virt.libvirt.guest [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:39:10 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:  <nova:name>tempest-TestNetworkBasicOps-server-985491122</nova:name>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:39:10</nova:creationTime>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:39:10 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:    <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:    <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:    <nova:port uuid="12ab8505-5ae2-427c-aaf6-9431683a99c8">
Nov 22 04:39:10 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:    <nova:port uuid="b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f">
Nov 22 04:39:10 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:39:10 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:39:10 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:39:10 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 22 04:39:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.867 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6c5edf5c-8f2f-4010-82b2-ecab52c7c523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.877 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d8163bf1-39a8-454b-9130-e6a034f6546b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:10 np0005532048 NetworkManager[48920]: <info>  [1763804350.8784] manager: (tap30756ec6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/525)
Nov 22 04:39:10 np0005532048 nova_compute[253661]: 2025-11-22 09:39:10.886 253665 DEBUG oslo_concurrency.lockutils [None req-58ef46b4-44f8-4a22-a6a1-c1655bbf36ae 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "interface-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.917 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[47ca3f5a-6360-48fc-be39-9e8222fba3a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.921 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3bd5b9ee-54da-4661-a645-04376eeb27bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:10 np0005532048 NetworkManager[48920]: <info>  [1763804350.9496] device (tap30756ec6-10): carrier: link connected
Nov 22 04:39:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.958 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[851d65f9-b037-4496-84ca-383cfb0a3158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.979 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c18afa-8e2c-4acd-a796-5e321324fc80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30756ec6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:cb:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 371], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720168, 'reachable_time': 29489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380269, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:10.999 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3383f930-d2be-44c7-a0a3-b6614ee8c913]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:cbf9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 720168, 'tstamp': 720168}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380270, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.023 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c77aca19-1d23-4840-8f09-78a666f6c2c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30756ec6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:cb:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 371], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720168, 'reachable_time': 29489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380271, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.063 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbbab6a-3aca-4787-a403-ab497ddcc8cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.138 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e91721b8-35b8-4d12-93c6-804a2ac3d2c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.140 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30756ec6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.140 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.141 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30756ec6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:11 np0005532048 kernel: tap30756ec6-10: entered promiscuous mode
Nov 22 04:39:11 np0005532048 nova_compute[253661]: 2025-11-22 09:39:11.186 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:11 np0005532048 NetworkManager[48920]: <info>  [1763804351.1876] manager: (tap30756ec6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/526)
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.191 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap30756ec6-10, col_values=(('external_ids', {'iface-id': 'ef3a77cb-c20e-4c0c-b747-f8d33bfa04a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:11 np0005532048 nova_compute[253661]: 2025-11-22 09:39:11.193 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:11Z|01297|binding|INFO|Releasing lport ef3a77cb-c20e-4c0c-b747-f8d33bfa04a5 from this chassis (sb_readonly=0)
Nov 22 04:39:11 np0005532048 nova_compute[253661]: 2025-11-22 09:39:11.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.194 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/30756ec6-103b-4571-a5dc-9b4a481bc5b1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/30756ec6-103b-4571-a5dc-9b4a481bc5b1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.195 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc8e88c-02fe-4686-bf0a-23c838613525]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.196 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-30756ec6-103b-4571-a5dc-9b4a481bc5b1
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/30756ec6-103b-4571-a5dc-9b4a481bc5b1.pid.haproxy
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 30756ec6-103b-4571-a5dc-9b4a481bc5b1
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:39:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:11.196 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'env', 'PROCESS_TAG=haproxy-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/30756ec6-103b-4571-a5dc-9b4a481bc5b1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:39:11 np0005532048 nova_compute[253661]: 2025-11-22 09:39:11.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:11 np0005532048 nova_compute[253661]: 2025-11-22 09:39:11.214 253665 DEBUG nova.compute.manager [req-4e8cd64e-1bd7-408e-8831-cf54087f294a req-751ecf49-30d2-41c4-bff5-95cc11a02689 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:39:11 np0005532048 nova_compute[253661]: 2025-11-22 09:39:11.214 253665 DEBUG oslo_concurrency.lockutils [req-4e8cd64e-1bd7-408e-8831-cf54087f294a req-751ecf49-30d2-41c4-bff5-95cc11a02689 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:11 np0005532048 nova_compute[253661]: 2025-11-22 09:39:11.215 253665 DEBUG oslo_concurrency.lockutils [req-4e8cd64e-1bd7-408e-8831-cf54087f294a req-751ecf49-30d2-41c4-bff5-95cc11a02689 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:11 np0005532048 nova_compute[253661]: 2025-11-22 09:39:11.215 253665 DEBUG oslo_concurrency.lockutils [req-4e8cd64e-1bd7-408e-8831-cf54087f294a req-751ecf49-30d2-41c4-bff5-95cc11a02689 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:11 np0005532048 nova_compute[253661]: 2025-11-22 09:39:11.215 253665 DEBUG nova.compute.manager [req-4e8cd64e-1bd7-408e-8831-cf54087f294a req-751ecf49-30d2-41c4-bff5-95cc11a02689 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] No waiting events found dispatching network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:39:11 np0005532048 nova_compute[253661]: 2025-11-22 09:39:11.215 253665 WARNING nova.compute.manager [req-4e8cd64e-1bd7-408e-8831-cf54087f294a req-751ecf49-30d2-41c4-bff5-95cc11a02689 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received unexpected event network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:39:11 np0005532048 podman[380303]: 2025-11-22 09:39:11.621382883 +0000 UTC m=+0.054701332 container create 87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:39:11 np0005532048 systemd[1]: Started libpod-conmon-87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20.scope.
Nov 22 04:39:11 np0005532048 podman[380303]: 2025-11-22 09:39:11.593180691 +0000 UTC m=+0.026499160 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:39:11 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:39:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f89d97ed2ee8aa5d0ef9bc0221d90f0d63faaffe83c22b65aecd4abf2565382/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:39:11 np0005532048 podman[380303]: 2025-11-22 09:39:11.721133903 +0000 UTC m=+0.154452362 container init 87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 04:39:11 np0005532048 podman[380303]: 2025-11-22 09:39:11.727348558 +0000 UTC m=+0.160666997 container start 87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:39:11 np0005532048 neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1[380318]: [NOTICE]   (380322) : New worker (380324) forked
Nov 22 04:39:11 np0005532048 neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1[380318]: [NOTICE]   (380322) : Loading success.
Nov 22 04:39:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 121 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 28 op/s
Nov 22 04:39:12 np0005532048 nova_compute[253661]: 2025-11-22 09:39:12.348 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:12 np0005532048 nova_compute[253661]: 2025-11-22 09:39:12.349 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:39:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4188009071' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:39:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:39:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4188009071' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:39:12 np0005532048 nova_compute[253661]: 2025-11-22 09:39:12.393 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:39:12 np0005532048 nova_compute[253661]: 2025-11-22 09:39:12.618 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:12 np0005532048 nova_compute[253661]: 2025-11-22 09:39:12.619 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:12 np0005532048 nova_compute[253661]: 2025-11-22 09:39:12.625 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:39:12 np0005532048 nova_compute[253661]: 2025-11-22 09:39:12.625 253665 INFO nova.compute.claims [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:39:12 np0005532048 nova_compute[253661]: 2025-11-22 09:39:12.797 253665 DEBUG nova.network.neutron [req-dcc4a3cd-c289-407d-af44-23b8081b0cb0 req-2ac5bbe6-64bf-47b5-b5a4-102e31afab76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updated VIF entry in instance network info cache for port b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:39:12 np0005532048 nova_compute[253661]: 2025-11-22 09:39:12.798 253665 DEBUG nova.network.neutron [req-dcc4a3cd-c289-407d-af44-23b8081b0cb0 req-2ac5bbe6-64bf-47b5-b5a4-102e31afab76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:39:12 np0005532048 nova_compute[253661]: 2025-11-22 09:39:12.815 253665 DEBUG oslo_concurrency.lockutils [req-dcc4a3cd-c289-407d-af44-23b8081b0cb0 req-2ac5bbe6-64bf-47b5-b5a4-102e31afab76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:39:12 np0005532048 nova_compute[253661]: 2025-11-22 09:39:12.867 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:39:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3802943002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.333 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.341 253665 DEBUG nova.compute.provider_tree [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.355 253665 DEBUG nova.scheduler.client.report [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.377 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.379 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.429 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.430 253665 DEBUG nova.network.neutron [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.447 253665 DEBUG nova.compute.manager [req-e0e19a4e-8640-4f32-80e8-27605569bd44 req-e1150122-9fc3-432b-8503-dff9d214b37f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.447 253665 DEBUG oslo_concurrency.lockutils [req-e0e19a4e-8640-4f32-80e8-27605569bd44 req-e1150122-9fc3-432b-8503-dff9d214b37f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.447 253665 DEBUG oslo_concurrency.lockutils [req-e0e19a4e-8640-4f32-80e8-27605569bd44 req-e1150122-9fc3-432b-8503-dff9d214b37f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.448 253665 DEBUG oslo_concurrency.lockutils [req-e0e19a4e-8640-4f32-80e8-27605569bd44 req-e1150122-9fc3-432b-8503-dff9d214b37f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.448 253665 DEBUG nova.compute.manager [req-e0e19a4e-8640-4f32-80e8-27605569bd44 req-e1150122-9fc3-432b-8503-dff9d214b37f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] No waiting events found dispatching network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.448 253665 WARNING nova.compute.manager [req-e0e19a4e-8640-4f32-80e8-27605569bd44 req-e1150122-9fc3-432b-8503-dff9d214b37f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received unexpected event network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.449 253665 INFO nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.467 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.558 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.560 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.560 253665 INFO nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Creating image(s)#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.585 253665 DEBUG nova.storage.rbd_utils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image ba0b1c52-c98b-4c2f-a213-e203719ada54_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.615 253665 DEBUG nova.storage.rbd_utils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image ba0b1c52-c98b-4c2f-a213-e203719ada54_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.643 253665 DEBUG nova.storage.rbd_utils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image ba0b1c52-c98b-4c2f-a213-e203719ada54_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.648 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.700 253665 DEBUG nova.policy [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:39:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:13Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d7:d1:9a 10.100.0.24
Nov 22 04:39:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:13Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d7:d1:9a 10.100.0.24
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.737 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.738 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.739 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.739 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.769 253665 DEBUG nova.storage.rbd_utils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image ba0b1c52-c98b-4c2f-a213-e203719ada54_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:13 np0005532048 nova_compute[253661]: 2025-11-22 09:39:13.773 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 ba0b1c52-c98b-4c2f-a213-e203719ada54_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:39:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 121 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Nov 22 04:39:14 np0005532048 nova_compute[253661]: 2025-11-22 09:39:14.146 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 ba0b1c52-c98b-4c2f-a213-e203719ada54_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:14 np0005532048 nova_compute[253661]: 2025-11-22 09:39:14.218 253665 DEBUG nova.storage.rbd_utils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image ba0b1c52-c98b-4c2f-a213-e203719ada54_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:39:14 np0005532048 nova_compute[253661]: 2025-11-22 09:39:14.346 253665 DEBUG nova.objects.instance [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid ba0b1c52-c98b-4c2f-a213-e203719ada54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:39:14 np0005532048 nova_compute[253661]: 2025-11-22 09:39:14.380 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:14 np0005532048 nova_compute[253661]: 2025-11-22 09:39:14.384 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:39:14 np0005532048 nova_compute[253661]: 2025-11-22 09:39:14.384 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Ensure instance console log exists: /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:39:14 np0005532048 nova_compute[253661]: 2025-11-22 09:39:14.384 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:14 np0005532048 nova_compute[253661]: 2025-11-22 09:39:14.385 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:14 np0005532048 nova_compute[253661]: 2025-11-22 09:39:14.385 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:15 np0005532048 nova_compute[253661]: 2025-11-22 09:39:15.419 253665 DEBUG nova.network.neutron [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Successfully created port: 2a619e33-769d-4ebf-b212-40975e40d3ca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:39:15 np0005532048 nova_compute[253661]: 2025-11-22 09:39:15.732 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 121 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Nov 22 04:39:16 np0005532048 nova_compute[253661]: 2025-11-22 09:39:16.190 253665 DEBUG nova.network.neutron [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Successfully created port: 27382337-7fe1-4d29-942c-7735f8c98a06 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:39:16 np0005532048 nova_compute[253661]: 2025-11-22 09:39:16.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:39:17 np0005532048 nova_compute[253661]: 2025-11-22 09:39:17.080 253665 DEBUG nova.network.neutron [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Successfully updated port: 2a619e33-769d-4ebf-b212-40975e40d3ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:39:17 np0005532048 nova_compute[253661]: 2025-11-22 09:39:17.199 253665 DEBUG nova.compute.manager [req-42fb9b15-b271-41aa-a81e-c6bdecb012b7 req-0b1518be-f484-41ae-b08e-efd48dda476d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-changed-2a619e33-769d-4ebf-b212-40975e40d3ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:39:17 np0005532048 nova_compute[253661]: 2025-11-22 09:39:17.199 253665 DEBUG nova.compute.manager [req-42fb9b15-b271-41aa-a81e-c6bdecb012b7 req-0b1518be-f484-41ae-b08e-efd48dda476d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Refreshing instance network info cache due to event network-changed-2a619e33-769d-4ebf-b212-40975e40d3ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:39:17 np0005532048 nova_compute[253661]: 2025-11-22 09:39:17.200 253665 DEBUG oslo_concurrency.lockutils [req-42fb9b15-b271-41aa-a81e-c6bdecb012b7 req-0b1518be-f484-41ae-b08e-efd48dda476d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:39:17 np0005532048 nova_compute[253661]: 2025-11-22 09:39:17.200 253665 DEBUG oslo_concurrency.lockutils [req-42fb9b15-b271-41aa-a81e-c6bdecb012b7 req-0b1518be-f484-41ae-b08e-efd48dda476d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:39:17 np0005532048 nova_compute[253661]: 2025-11-22 09:39:17.200 253665 DEBUG nova.network.neutron [req-42fb9b15-b271-41aa-a81e-c6bdecb012b7 req-0b1518be-f484-41ae-b08e-efd48dda476d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Refreshing network info cache for port 2a619e33-769d-4ebf-b212-40975e40d3ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:39:17 np0005532048 nova_compute[253661]: 2025-11-22 09:39:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:39:17 np0005532048 nova_compute[253661]: 2025-11-22 09:39:17.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 04:39:17 np0005532048 nova_compute[253661]: 2025-11-22 09:39:17.248 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 04:39:17 np0005532048 nova_compute[253661]: 2025-11-22 09:39:17.449 253665 DEBUG nova.network.neutron [req-42fb9b15-b271-41aa-a81e-c6bdecb012b7 req-0b1518be-f484-41ae-b08e-efd48dda476d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:39:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 141 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 996 KiB/s wr, 30 op/s
Nov 22 04:39:18 np0005532048 nova_compute[253661]: 2025-11-22 09:39:18.513 253665 DEBUG nova.network.neutron [req-42fb9b15-b271-41aa-a81e-c6bdecb012b7 req-0b1518be-f484-41ae-b08e-efd48dda476d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:39:18 np0005532048 nova_compute[253661]: 2025-11-22 09:39:18.543 253665 DEBUG oslo_concurrency.lockutils [req-42fb9b15-b271-41aa-a81e-c6bdecb012b7 req-0b1518be-f484-41ae-b08e-efd48dda476d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:39:18 np0005532048 nova_compute[253661]: 2025-11-22 09:39:18.632 253665 DEBUG nova.network.neutron [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Successfully updated port: 27382337-7fe1-4d29-942c-7735f8c98a06 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:39:18 np0005532048 nova_compute[253661]: 2025-11-22 09:39:18.656 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:39:18 np0005532048 nova_compute[253661]: 2025-11-22 09:39:18.657 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:39:18 np0005532048 nova_compute[253661]: 2025-11-22 09:39:18.657 253665 DEBUG nova.network.neutron [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:39:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:39:18 np0005532048 nova_compute[253661]: 2025-11-22 09:39:18.900 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804343.8996782, d2f5b215-3a41-451c-8ad8-68b17c96a678 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:39:18 np0005532048 nova_compute[253661]: 2025-11-22 09:39:18.901 253665 INFO nova.compute.manager [-] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] VM Stopped (Lifecycle Event)
Nov 22 04:39:18 np0005532048 nova_compute[253661]: 2025-11-22 09:39:18.929 253665 DEBUG nova.compute.manager [None req-de1cb40f-8975-488a-b16c-040b51901d59 - - - - - -] [instance: d2f5b215-3a41-451c-8ad8-68b17c96a678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:39:19 np0005532048 nova_compute[253661]: 2025-11-22 09:39:19.001 253665 DEBUG nova.network.neutron [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:39:19 np0005532048 nova_compute[253661]: 2025-11-22 09:39:19.410 253665 DEBUG nova.compute.manager [req-d518c746-6166-426b-959a-f066f4b340c3 req-0aca0fe5-1edc-4cd4-bac9-ba1683cb21b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-changed-27382337-7fe1-4d29-942c-7735f8c98a06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:39:19 np0005532048 nova_compute[253661]: 2025-11-22 09:39:19.411 253665 DEBUG nova.compute.manager [req-d518c746-6166-426b-959a-f066f4b340c3 req-0aca0fe5-1edc-4cd4-bac9-ba1683cb21b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Refreshing instance network info cache due to event network-changed-27382337-7fe1-4d29-942c-7735f8c98a06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:39:19 np0005532048 nova_compute[253661]: 2025-11-22 09:39:19.412 253665 DEBUG oslo_concurrency.lockutils [req-d518c746-6166-426b-959a-f066f4b340c3 req-0aca0fe5-1edc-4cd4-bac9-ba1683cb21b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:39:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 167 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Nov 22 04:39:20 np0005532048 nova_compute[253661]: 2025-11-22 09:39:20.400 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:39:20 np0005532048 nova_compute[253661]: 2025-11-22 09:39:20.657 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "d7865a13-0d41-44d6-aac2-10cca6e1348a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:39:20 np0005532048 nova_compute[253661]: 2025-11-22 09:39:20.658 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:39:20 np0005532048 nova_compute[253661]: 2025-11-22 09:39:20.676 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:39:20 np0005532048 nova_compute[253661]: 2025-11-22 09:39:20.735 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:39:20 np0005532048 nova_compute[253661]: 2025-11-22 09:39:20.745 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:39:20 np0005532048 nova_compute[253661]: 2025-11-22 09:39:20.746 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:39:20 np0005532048 nova_compute[253661]: 2025-11-22 09:39:20.752 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:39:20 np0005532048 nova_compute[253661]: 2025-11-22 09:39:20.752 253665 INFO nova.compute.claims [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:39:20 np0005532048 nova_compute[253661]: 2025-11-22 09:39:20.898 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.196 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:39:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:39:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2987396762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.380 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.388 253665 DEBUG nova.compute.provider_tree [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.410 253665 DEBUG nova.scheduler.client.report [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.434 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.435 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.484 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.485 253665 DEBUG nova.network.neutron [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.509 253665 INFO nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.526 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.615 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.617 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.618 253665 INFO nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Creating image(s)
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.643 253665 DEBUG nova.storage.rbd_utils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image d7865a13-0d41-44d6-aac2-10cca6e1348a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.668 253665 DEBUG nova.storage.rbd_utils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image d7865a13-0d41-44d6-aac2-10cca6e1348a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.695 253665 DEBUG nova.storage.rbd_utils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image d7865a13-0d41-44d6-aac2-10cca6e1348a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.701 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.800 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.802 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.803 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.804 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.840 253665 DEBUG nova.storage.rbd_utils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image d7865a13-0d41-44d6-aac2-10cca6e1348a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.845 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d7865a13-0d41-44d6-aac2-10cca6e1348a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:39:21 np0005532048 nova_compute[253661]: 2025-11-22 09:39:21.952 253665 DEBUG nova.policy [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:39:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 167 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.182 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d7865a13-0d41-44d6-aac2-10cca6e1348a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.256 253665 DEBUG nova.storage.rbd_utils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image d7865a13-0d41-44d6-aac2-10cca6e1348a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.362 253665 DEBUG nova.objects.instance [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid d7865a13-0d41-44d6-aac2-10cca6e1348a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.376 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.376 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Ensure instance console log exists: /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.377 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.377 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.378 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.692 253665 DEBUG nova.network.neutron [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updating instance_info_cache with network_info: [{"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.716 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.717 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Instance network_info: |[{"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.717 253665 DEBUG oslo_concurrency.lockutils [req-d518c746-6166-426b-959a-f066f4b340c3 req-0aca0fe5-1edc-4cd4-bac9-ba1683cb21b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.718 253665 DEBUG nova.network.neutron [req-d518c746-6166-426b-959a-f066f4b340c3 req-0aca0fe5-1edc-4cd4-bac9-ba1683cb21b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Refreshing network info cache for port 27382337-7fe1-4d29-942c-7735f8c98a06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.722 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Start _get_guest_xml network_info=[{"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.728 253665 WARNING nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.733 253665 DEBUG nova.virt.libvirt.host [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.734 253665 DEBUG nova.virt.libvirt.host [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.743 253665 DEBUG nova.virt.libvirt.host [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.743 253665 DEBUG nova.virt.libvirt.host [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.744 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.744 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.745 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.745 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.745 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.746 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.746 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.746 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.746 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.747 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.747 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.747 253665 DEBUG nova.virt.hardware [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:39:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:39:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:39:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:39:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:39:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:39:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:39:22 np0005532048 nova_compute[253661]: 2025-11-22 09:39:22.751 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:39:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944524559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.266 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.290 253665 DEBUG nova.storage.rbd_utils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image ba0b1c52-c98b-4c2f-a213-e203719ada54_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.302 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:23 np0005532048 podman[380747]: 2025-11-22 09:39:23.367113063 +0000 UTC m=+0.057032560 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 04:39:23 np0005532048 podman[380751]: 2025-11-22 09:39:23.386441994 +0000 UTC m=+0.069653733 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.696 253665 DEBUG nova.network.neutron [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Successfully created port: 54a61ee9-1fb8-4c5c-8716-613fc3355afb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:39:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:39:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1368777439' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.777 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.780 253665 DEBUG nova.virt.libvirt.vif [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-206573176',display_name='tempest-TestGettingAddress-server-206573176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-206573176',id=122,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-g01s4gn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:13Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=ba0b1c52-c98b-4c2f-a213-e203719ada54,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.780 253665 DEBUG nova.network.os_vif_util [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.782 253665 DEBUG nova.network.os_vif_util [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:e0:cc,bridge_name='br-int',has_traffic_filtering=True,id=2a619e33-769d-4ebf-b212-40975e40d3ca,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a619e33-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.784 253665 DEBUG nova.virt.libvirt.vif [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-206573176',display_name='tempest-TestGettingAddress-server-206573176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-206573176',id=122,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-g01s4gn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:13Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=ba0b1c52-c98b-4c2f-a213-e203719ada54,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.785 253665 DEBUG nova.network.os_vif_util [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.786 253665 DEBUG nova.network.os_vif_util [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:ef:0f,bridge_name='br-int',has_traffic_filtering=True,id=27382337-7fe1-4d29-942c-7735f8c98a06,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27382337-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.788 253665 DEBUG nova.objects.instance [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid ba0b1c52-c98b-4c2f-a213-e203719ada54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.810 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  <uuid>ba0b1c52-c98b-4c2f-a213-e203719ada54</uuid>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  <name>instance-0000007a</name>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestGettingAddress-server-206573176</nova:name>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:39:22</nova:creationTime>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:        <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:        <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:        <nova:port uuid="2a619e33-769d-4ebf-b212-40975e40d3ca">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:        <nova:port uuid="27382337-7fe1-4d29-942c-7735f8c98a06">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe81:ef0f" ipVersion="6"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe81:ef0f" ipVersion="6"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <entry name="serial">ba0b1c52-c98b-4c2f-a213-e203719ada54</entry>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <entry name="uuid">ba0b1c52-c98b-4c2f-a213-e203719ada54</entry>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:39:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/ba0b1c52-c98b-4c2f-a213-e203719ada54_disk">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/ba0b1c52-c98b-4c2f-a213-e203719ada54_disk.config">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:09:e0:cc"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <target dev="tap2a619e33-76"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:81:ef:0f"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <target dev="tap27382337-7f"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54/console.log" append="off"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:39:23 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:39:23 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:39:23 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:39:23 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.811 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Preparing to wait for external event network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.811 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.811 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.812 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.812 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Preparing to wait for external event network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.812 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.812 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.813 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.813 253665 DEBUG nova.virt.libvirt.vif [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-206573176',display_name='tempest-TestGettingAddress-server-206573176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-206573176',id=122,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-g01s4gn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:13Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=ba0b1c52-c98b-4c2f-a213-e203719ada54,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.814 253665 DEBUG nova.network.os_vif_util [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.815 253665 DEBUG nova.network.os_vif_util [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:e0:cc,bridge_name='br-int',has_traffic_filtering=True,id=2a619e33-769d-4ebf-b212-40975e40d3ca,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a619e33-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.815 253665 DEBUG os_vif [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:e0:cc,bridge_name='br-int',has_traffic_filtering=True,id=2a619e33-769d-4ebf-b212-40975e40d3ca,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a619e33-76') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.816 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.817 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.817 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.823 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.823 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a619e33-76, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.824 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a619e33-76, col_values=(('external_ids', {'iface-id': '2a619e33-769d-4ebf-b212-40975e40d3ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:e0:cc', 'vm-uuid': 'ba0b1c52-c98b-4c2f-a213-e203719ada54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.826 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:23 np0005532048 NetworkManager[48920]: <info>  [1763804363.8274] manager: (tap2a619e33-76): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.830 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.835 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.836 253665 INFO os_vif [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:e0:cc,bridge_name='br-int',has_traffic_filtering=True,id=2a619e33-769d-4ebf-b212-40975e40d3ca,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a619e33-76')#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.837 253665 DEBUG nova.virt.libvirt.vif [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-206573176',display_name='tempest-TestGettingAddress-server-206573176',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-206573176',id=122,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-g01s4gn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:13Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=ba0b1c52-c98b-4c2f-a213-e203719ada54,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.838 253665 DEBUG nova.network.os_vif_util [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.839 253665 DEBUG nova.network.os_vif_util [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:ef:0f,bridge_name='br-int',has_traffic_filtering=True,id=27382337-7fe1-4d29-942c-7735f8c98a06,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27382337-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.839 253665 DEBUG os_vif [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:ef:0f,bridge_name='br-int',has_traffic_filtering=True,id=27382337-7fe1-4d29-942c-7735f8c98a06,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27382337-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.840 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.840 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.840 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.843 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.844 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27382337-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.844 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27382337-7f, col_values=(('external_ids', {'iface-id': '27382337-7fe1-4d29-942c-7735f8c98a06', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:ef:0f', 'vm-uuid': 'ba0b1c52-c98b-4c2f-a213-e203719ada54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.846 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:23 np0005532048 NetworkManager[48920]: <info>  [1763804363.8474] manager: (tap27382337-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/528)
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.850 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.854 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.855 253665 INFO os_vif [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:ef:0f,bridge_name='br-int',has_traffic_filtering=True,id=27382337-7fe1-4d29-942c-7735f8c98a06,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27382337-7f')#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.924 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.925 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.925 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:09:e0:cc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.925 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:81:ef:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.926 253665 INFO nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Using config drive#033[00m
Nov 22 04:39:23 np0005532048 nova_compute[253661]: 2025-11-22 09:39:23.949 253665 DEBUG nova.storage.rbd_utils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image ba0b1c52-c98b-4c2f-a213-e203719ada54_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 198 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.9 MiB/s wr, 54 op/s
Nov 22 04:39:24 np0005532048 nova_compute[253661]: 2025-11-22 09:39:24.498 253665 INFO nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Creating config drive at /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54/disk.config#033[00m
Nov 22 04:39:24 np0005532048 nova_compute[253661]: 2025-11-22 09:39:24.504 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5wreox54 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:24 np0005532048 nova_compute[253661]: 2025-11-22 09:39:24.653 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5wreox54" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:24 np0005532048 nova_compute[253661]: 2025-11-22 09:39:24.681 253665 DEBUG nova.storage.rbd_utils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image ba0b1c52-c98b-4c2f-a213-e203719ada54_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:24 np0005532048 nova_compute[253661]: 2025-11-22 09:39:24.685 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54/disk.config ba0b1c52-c98b-4c2f-a213-e203719ada54_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:24.768 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:39:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:24.789 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:39:24 np0005532048 nova_compute[253661]: 2025-11-22 09:39:24.791 253665 DEBUG nova.network.neutron [req-d518c746-6166-426b-959a-f066f4b340c3 req-0aca0fe5-1edc-4cd4-bac9-ba1683cb21b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updated VIF entry in instance network info cache for port 27382337-7fe1-4d29-942c-7735f8c98a06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:39:24 np0005532048 nova_compute[253661]: 2025-11-22 09:39:24.792 253665 DEBUG nova.network.neutron [req-d518c746-6166-426b-959a-f066f4b340c3 req-0aca0fe5-1edc-4cd4-bac9-ba1683cb21b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updating instance_info_cache with network_info: [{"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:39:24 np0005532048 nova_compute[253661]: 2025-11-22 09:39:24.795 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:24 np0005532048 nova_compute[253661]: 2025-11-22 09:39:24.814 253665 DEBUG oslo_concurrency.lockutils [req-d518c746-6166-426b-959a-f066f4b340c3 req-0aca0fe5-1edc-4cd4-bac9-ba1683cb21b5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:39:24 np0005532048 nova_compute[253661]: 2025-11-22 09:39:24.846 253665 DEBUG nova.network.neutron [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Successfully updated port: 54a61ee9-1fb8-4c5c-8716-613fc3355afb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:39:24 np0005532048 nova_compute[253661]: 2025-11-22 09:39:24.892 253665 DEBUG oslo_concurrency.processutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54/disk.config ba0b1c52-c98b-4c2f-a213-e203719ada54_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:24 np0005532048 nova_compute[253661]: 2025-11-22 09:39:24.893 253665 INFO nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Deleting local config drive /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54/disk.config because it was imported into RBD.#033[00m
Nov 22 04:39:24 np0005532048 kernel: tap2a619e33-76: entered promiscuous mode
Nov 22 04:39:24 np0005532048 NetworkManager[48920]: <info>  [1763804364.9695] manager: (tap2a619e33-76): new Tun device (/org/freedesktop/NetworkManager/Devices/529)
Nov 22 04:39:24 np0005532048 nova_compute[253661]: 2025-11-22 09:39:24.976 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:24Z|01298|if_status|INFO|Not updating pb chassis for 2a619e33-769d-4ebf-b212-40975e40d3ca now as sb is readonly
Nov 22 04:39:24 np0005532048 NetworkManager[48920]: <info>  [1763804364.9912] manager: (tap27382337-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/530)
Nov 22 04:39:24 np0005532048 kernel: tap27382337-7f: entered promiscuous mode
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.002 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:25 np0005532048 systemd-udevd[380885]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:39:25 np0005532048 systemd-udevd[380887]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.019 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:25 np0005532048 NetworkManager[48920]: <info>  [1763804365.0308] device (tap27382337-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:39:25 np0005532048 NetworkManager[48920]: <info>  [1763804365.0318] device (tap2a619e33-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:39:25 np0005532048 NetworkManager[48920]: <info>  [1763804365.0325] device (tap27382337-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:39:25 np0005532048 NetworkManager[48920]: <info>  [1763804365.0329] device (tap2a619e33-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:39:25 np0005532048 systemd-machined[215941]: New machine qemu-153-instance-0000007a.
Nov 22 04:39:25 np0005532048 systemd[1]: Started Virtual Machine qemu-153-instance-0000007a.
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.050 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-d7865a13-0d41-44d6-aac2-10cca6e1348a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.051 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-d7865a13-0d41-44d6-aac2-10cca6e1348a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.051 253665 DEBUG nova.network.neutron [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:39:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:25Z|01299|binding|INFO|Claiming lport 27382337-7fe1-4d29-942c-7735f8c98a06 for this chassis.
Nov 22 04:39:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:25Z|01300|binding|INFO|27382337-7fe1-4d29-942c-7735f8c98a06: Claiming fa:16:3e:81:ef:0f 2001:db8:0:1:f816:3eff:fe81:ef0f 2001:db8::f816:3eff:fe81:ef0f
Nov 22 04:39:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:25Z|01301|binding|INFO|Claiming lport 2a619e33-769d-4ebf-b212-40975e40d3ca for this chassis.
Nov 22 04:39:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:25Z|01302|binding|INFO|2a619e33-769d-4ebf-b212-40975e40d3ca: Claiming fa:16:3e:09:e0:cc 10.100.0.10
Nov 22 04:39:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:25Z|01303|binding|INFO|Setting lport 27382337-7fe1-4d29-942c-7735f8c98a06 ovn-installed in OVS
Nov 22 04:39:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:25Z|01304|binding|INFO|Setting lport 2a619e33-769d-4ebf-b212-40975e40d3ca ovn-installed in OVS
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.213 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.272 253665 DEBUG nova.compute.manager [req-7b8b0886-6c3e-4ba7-9a07-ae0dd253507d req-729a8727-f07d-4e64-93a2-fa797957686c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received event network-changed-54a61ee9-1fb8-4c5c-8716-613fc3355afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.273 253665 DEBUG nova.compute.manager [req-7b8b0886-6c3e-4ba7-9a07-ae0dd253507d req-729a8727-f07d-4e64-93a2-fa797957686c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Refreshing instance network info cache due to event network-changed-54a61ee9-1fb8-4c5c-8716-613fc3355afb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.273 253665 DEBUG oslo_concurrency.lockutils [req-7b8b0886-6c3e-4ba7-9a07-ae0dd253507d req-729a8727-f07d-4e64-93a2-fa797957686c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d7865a13-0d41-44d6-aac2-10cca6e1348a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:39:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:25Z|01305|binding|INFO|Setting lport 27382337-7fe1-4d29-942c-7735f8c98a06 up in Southbound
Nov 22 04:39:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:25Z|01306|binding|INFO|Setting lport 2a619e33-769d-4ebf-b212-40975e40d3ca up in Southbound
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.320 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:e0:cc 10.100.0.10'], port_security=['fa:16:3e:09:e0:cc 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ba0b1c52-c98b-4c2f-a213-e203719ada54', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33aa2b15-84be-4fa8-858f-98182293b1b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb584c12-cffa-488f-adbc-2a255a5cdce2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a82afa9d-1a09-411a-8866-4ce961a27350, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2a619e33-769d-4ebf-b212-40975e40d3ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.323 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:ef:0f 2001:db8:0:1:f816:3eff:fe81:ef0f 2001:db8::f816:3eff:fe81:ef0f'], port_security=['fa:16:3e:81:ef:0f 2001:db8:0:1:f816:3eff:fe81:ef0f 2001:db8::f816:3eff:fe81:ef0f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe81:ef0f/64 2001:db8::f816:3eff:fe81:ef0f/64', 'neutron:device_id': 'ba0b1c52-c98b-4c2f-a213-e203719ada54', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20228844-2184-465b-8bc3-e846cfb6d3cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb584c12-cffa-488f-adbc-2a255a5cdce2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd6b77e6-a2ac-463b-a37b-14dc60b71e56, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=27382337-7fe1-4d29-942c-7735f8c98a06) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.324 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2a619e33-769d-4ebf-b212-40975e40d3ca in datapath 33aa2b15-84be-4fa8-858f-98182293b1b2 bound to our chassis#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.327 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33aa2b15-84be-4fa8-858f-98182293b1b2#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.343 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8b27fe9d-edd4-459c-a793-cc41eccfd659]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.345 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap33aa2b15-81 in ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.347 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap33aa2b15-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.347 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ceaff0-d018-4dab-9442-129343c12d46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.348 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7a982d2f-c679-4446-9895-0b068cd533ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.366 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a61b622d-6a2e-4e36-9854-616e69ff37de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.392 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3398218d-4ee4-4abc-97ae-a47dd82b6980]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.431 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[adcad5a4-8cd3-4a84-9154-a23899e43e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:25 np0005532048 NetworkManager[48920]: <info>  [1763804365.4405] manager: (tap33aa2b15-80): new Veth device (/org/freedesktop/NetworkManager/Devices/531)
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.441 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f866c8a1-b323-45ba-bb1e-da27f7fa4cf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.477 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b13d5e-426b-4917-97c0-550f795e5d10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.480 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d3cf6bb0-981b-45f5-aa8c-710e8fdd0ad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:25 np0005532048 NetworkManager[48920]: <info>  [1763804365.5115] device (tap33aa2b15-80): carrier: link connected
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.519 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a2797e92-e5c2-4fe7-879e-6a6402d5ac83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.537 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a658cae8-471a-45e6-9993-444c271f0ca9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33aa2b15-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:23:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721624, 'reachable_time': 19894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380924, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.556 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c102ec71-211b-4080-9367-f4bc92e69258]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:23b6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721624, 'tstamp': 721624}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380925, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.575 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a22f46ba-bedd-43df-b65c-c014796d2ffd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33aa2b15-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:23:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721624, 'reachable_time': 19894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380926, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.612 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d5eac2-446d-4094-9833-f7391abfd886]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.683 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a02bb340-be22-4ef0-8a07-f3d987e76ba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.685 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33aa2b15-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.686 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.687 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33aa2b15-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:25 np0005532048 NetworkManager[48920]: <info>  [1763804365.6896] manager: (tap33aa2b15-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/532)
Nov 22 04:39:25 np0005532048 kernel: tap33aa2b15-80: entered promiscuous mode
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.692 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.692 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33aa2b15-80, col_values=(('external_ids', {'iface-id': 'c8541406-177e-4d49-a6da-f639419da399'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:25 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:25Z|01307|binding|INFO|Releasing lport c8541406-177e-4d49-a6da-f639419da399 from this chassis (sb_readonly=0)
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.710 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.711 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/33aa2b15-84be-4fa8-858f-98182293b1b2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/33aa2b15-84be-4fa8-858f-98182293b1b2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.712 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ea105f7f-a720-46e8-9f26-3cbc420f8b22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.713 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-33aa2b15-84be-4fa8-858f-98182293b1b2
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/33aa2b15-84be-4fa8-858f-98182293b1b2.pid.haproxy
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 33aa2b15-84be-4fa8-858f-98182293b1b2
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:39:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:25.715 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'env', 'PROCESS_TAG=haproxy-33aa2b15-84be-4fa8-858f-98182293b1b2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/33aa2b15-84be-4fa8-858f-98182293b1b2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.855 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804365.8550584, ba0b1c52-c98b-4c2f-a213-e203719ada54 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.856 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] VM Started (Lifecycle Event)#033[00m
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.859 253665 DEBUG nova.network.neutron [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.875 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.880 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804365.8581522, ba0b1c52-c98b-4c2f-a213-e203719ada54 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.881 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.913 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.917 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:39:25 np0005532048 nova_compute[253661]: 2025-11-22 09:39:25.940 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:39:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 198 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.9 MiB/s wr, 53 op/s
Nov 22 04:39:26 np0005532048 podman[381001]: 2025-11-22 09:39:26.110841286 +0000 UTC m=+0.053335689 container create 2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:39:26 np0005532048 systemd[1]: Started libpod-conmon-2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733.scope.
Nov 22 04:39:26 np0005532048 podman[381001]: 2025-11-22 09:39:26.08284554 +0000 UTC m=+0.025339963 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:39:26 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:39:26 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d80866c630f9de6dcc7211912df360c883a2569930a017e86eb8d48a712ac4e8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:39:26 np0005532048 podman[381001]: 2025-11-22 09:39:26.19751138 +0000 UTC m=+0.140005803 container init 2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:39:26 np0005532048 nova_compute[253661]: 2025-11-22 09:39:26.200 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:26 np0005532048 podman[381001]: 2025-11-22 09:39:26.205679594 +0000 UTC m=+0.148173987 container start 2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:39:26 np0005532048 neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2[381016]: [NOTICE]   (381020) : New worker (381022) forked
Nov 22 04:39:26 np0005532048 neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2[381016]: [NOTICE]   (381020) : Loading success.
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.275 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 27382337-7fe1-4d29-942c-7735f8c98a06 in datapath 20228844-2184-465b-8bc3-e846cfb6d3cb unbound from our chassis#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.277 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20228844-2184-465b-8bc3-e846cfb6d3cb#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.294 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9a237e08-7a8c-407f-bdf9-474dba13899a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.295 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap20228844-21 in ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.297 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap20228844-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.298 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b1698afc-6d35-4205-9843-ec9ef1b03b62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.299 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[758ec8ba-b82a-499a-a421-ef0e9a5881a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.313 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[6f12e3a3-811b-49e7-a75f-66b25d4b4822]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.338 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ad1d940d-915e-4c5f-9f02-88d8055e0494]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.374 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d4d57b-15d1-4c31-8076-40aa77fda0d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.381 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4f27ba99-caa6-4c18-a262-c073c811e6c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:26 np0005532048 NetworkManager[48920]: <info>  [1763804366.3827] manager: (tap20228844-20): new Veth device (/org/freedesktop/NetworkManager/Devices/533)
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.422 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[266ccc74-0a1a-48d3-9418-ab27a97f73a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.425 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb2d736-db72-4a85-9c04-cf1f0a7f883f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:26 np0005532048 NetworkManager[48920]: <info>  [1763804366.4517] device (tap20228844-20): carrier: link connected
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.458 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e13bd9-2c28-4d8f-8cdb-070e6793029e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.477 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[13157d32-6fba-4e29-a034-c81b8b659fe3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20228844-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:0f:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 375], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721718, 'reachable_time': 39887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381041, 'error': None, 'target': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.498 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[35fc7dd9-c962-47fd-9723-5b39cfc0193a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8d:f0b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721718, 'tstamp': 721718}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381042, 'error': None, 'target': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.525 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a227d1c3-402c-4512-b5c7-7c7cc612c381]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20228844-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:0f:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 375], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721718, 'reachable_time': 39887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 381043, 'error': None, 'target': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:26 np0005532048 nova_compute[253661]: 2025-11-22 09:39:26.559 253665 DEBUG nova.compute.manager [req-bdaac083-74f3-49ce-bab7-a93291303790 req-6498d552-076f-4931-954f-e33f7b1bb7a7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:39:26 np0005532048 nova_compute[253661]: 2025-11-22 09:39:26.560 253665 DEBUG oslo_concurrency.lockutils [req-bdaac083-74f3-49ce-bab7-a93291303790 req-6498d552-076f-4931-954f-e33f7b1bb7a7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:26 np0005532048 nova_compute[253661]: 2025-11-22 09:39:26.560 253665 DEBUG oslo_concurrency.lockutils [req-bdaac083-74f3-49ce-bab7-a93291303790 req-6498d552-076f-4931-954f-e33f7b1bb7a7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:26 np0005532048 nova_compute[253661]: 2025-11-22 09:39:26.560 253665 DEBUG oslo_concurrency.lockutils [req-bdaac083-74f3-49ce-bab7-a93291303790 req-6498d552-076f-4931-954f-e33f7b1bb7a7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:26 np0005532048 nova_compute[253661]: 2025-11-22 09:39:26.561 253665 DEBUG nova.compute.manager [req-bdaac083-74f3-49ce-bab7-a93291303790 req-6498d552-076f-4931-954f-e33f7b1bb7a7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Processing event network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.563 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0b5069-a903-4466-8077-2316b46da023]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.603 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1ddba5e2-f72e-47a0-8334-c66bbd448297]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.605 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20228844-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.605 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.605 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20228844-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:26 np0005532048 nova_compute[253661]: 2025-11-22 09:39:26.607 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:26 np0005532048 NetworkManager[48920]: <info>  [1763804366.6085] manager: (tap20228844-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/534)
Nov 22 04:39:26 np0005532048 kernel: tap20228844-20: entered promiscuous mode
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.613 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20228844-20, col_values=(('external_ids', {'iface-id': 'c6eb41f8-4dde-4c2b-a6c7-dd47868a17b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:26 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:26Z|01308|binding|INFO|Releasing lport c6eb41f8-4dde-4c2b-a6c7-dd47868a17b1 from this chassis (sb_readonly=0)
Nov 22 04:39:26 np0005532048 nova_compute[253661]: 2025-11-22 09:39:26.615 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:26 np0005532048 nova_compute[253661]: 2025-11-22 09:39:26.616 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.617 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/20228844-2184-465b-8bc3-e846cfb6d3cb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/20228844-2184-465b-8bc3-e846cfb6d3cb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.619 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[58eccc14-f847-4550-8dcc-5af6018a33fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.620 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-20228844-2184-465b-8bc3-e846cfb6d3cb
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/20228844-2184-465b-8bc3-e846cfb6d3cb.pid.haproxy
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 20228844-2184-465b-8bc3-e846cfb6d3cb
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:39:26 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:26.620 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'env', 'PROCESS_TAG=haproxy-20228844-2184-465b-8bc3-e846cfb6d3cb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/20228844-2184-465b-8bc3-e846cfb6d3cb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:39:26 np0005532048 nova_compute[253661]: 2025-11-22 09:39:26.633 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:26 np0005532048 nova_compute[253661]: 2025-11-22 09:39:26.642 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "44f1789d-14f7-46df-a863-e8c3c418f7f3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:26 np0005532048 nova_compute[253661]: 2025-11-22 09:39:26.642 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:26 np0005532048 nova_compute[253661]: 2025-11-22 09:39:26.679 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:39:26 np0005532048 nova_compute[253661]: 2025-11-22 09:39:26.834 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:26 np0005532048 nova_compute[253661]: 2025-11-22 09:39:26.835 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:26 np0005532048 nova_compute[253661]: 2025-11-22 09:39:26.842 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:39:26 np0005532048 nova_compute[253661]: 2025-11-22 09:39:26.843 253665 INFO nova.compute.claims [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:39:26 np0005532048 podman[381073]: 2025-11-22 09:39:26.984647616 +0000 UTC m=+0.051699577 container create fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:39:27 np0005532048 systemd[1]: Started libpod-conmon-fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a.scope.
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.052 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:27 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:39:27 np0005532048 podman[381073]: 2025-11-22 09:39:26.958519306 +0000 UTC m=+0.025571287 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:39:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb633aced8fc5d6611d564ed19dc9e108a0bec8470c6c0ad7e16b832c8c4335b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:39:27 np0005532048 podman[381073]: 2025-11-22 09:39:27.072334506 +0000 UTC m=+0.139386487 container init fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 04:39:27 np0005532048 podman[381073]: 2025-11-22 09:39:27.078075599 +0000 UTC m=+0.145127550 container start fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:39:27 np0005532048 neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb[381088]: [NOTICE]   (381093) : New worker (381095) forked
Nov 22 04:39:27 np0005532048 neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb[381088]: [NOTICE]   (381093) : Loading success.
Nov 22 04:39:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:39:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3584709581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.524 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.534 253665 DEBUG nova.compute.provider_tree [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.552 253665 DEBUG nova.compute.manager [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.552 253665 DEBUG oslo_concurrency.lockutils [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.553 253665 DEBUG oslo_concurrency.lockutils [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.553 253665 DEBUG oslo_concurrency.lockutils [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.553 253665 DEBUG nova.compute.manager [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Processing event network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.554 253665 DEBUG nova.compute.manager [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.554 253665 DEBUG oslo_concurrency.lockutils [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.554 253665 DEBUG oslo_concurrency.lockutils [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.555 253665 DEBUG oslo_concurrency.lockutils [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.555 253665 DEBUG nova.compute.manager [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] No waiting events found dispatching network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.555 253665 WARNING nova.compute.manager [req-d68ba1e2-8856-47c7-95d1-ff1ca634d5e6 req-3d8d3499-e263-4087-b0c2-8a9f02765670 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received unexpected event network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.557 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.558 253665 DEBUG nova.scheduler.client.report [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.565 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804367.5648136, ba0b1c52-c98b-4c2f-a213-e203719ada54 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.565 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.568 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.573 253665 INFO nova.virt.libvirt.driver [-] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Instance spawned successfully.#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.574 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.588 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.592 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.593 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.600 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.621 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.625 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.626 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.626 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.626 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.627 253665 DEBUG nova.virt.libvirt.driver [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.634 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.706 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.707 253665 DEBUG nova.network.neutron [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.738 253665 DEBUG nova.network.neutron [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Updating instance_info_cache with network_info: [{"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.742 253665 INFO nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Took 14.18 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.742 253665 DEBUG nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.744 253665 INFO nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.772 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-d7865a13-0d41-44d6-aac2-10cca6e1348a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.773 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Instance network_info: |[{"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.773 253665 DEBUG oslo_concurrency.lockutils [req-7b8b0886-6c3e-4ba7-9a07-ae0dd253507d req-729a8727-f07d-4e64-93a2-fa797957686c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d7865a13-0d41-44d6-aac2-10cca6e1348a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.775 253665 DEBUG nova.network.neutron [req-7b8b0886-6c3e-4ba7-9a07-ae0dd253507d req-729a8727-f07d-4e64-93a2-fa797957686c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Refreshing network info cache for port 54a61ee9-1fb8-4c5c-8716-613fc3355afb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.782 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Start _get_guest_xml network_info=[{"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.788 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.792 253665 WARNING nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.799 253665 DEBUG nova.virt.libvirt.host [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.799 253665 DEBUG nova.virt.libvirt.host [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.802 253665 DEBUG nova.virt.libvirt.host [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.802 253665 DEBUG nova.virt.libvirt.host [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.803 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.803 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.804 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.804 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.804 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.804 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.804 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.804 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.805 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.805 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.805 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.805 253665 DEBUG nova.virt.hardware [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.808 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:27.983 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:27.984 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:27 np0005532048 nova_compute[253661]: 2025-11-22 09:39:27.978 253665 INFO nova.compute.manager [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Took 15.39 seconds to build instance.#033[00m
Nov 22 04:39:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:27.985 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 213 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.6 MiB/s wr, 58 op/s
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.081 253665 DEBUG oslo_concurrency.lockutils [None req-32997991-480b-463e-ba6d-cacf8e6012cd 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.101 253665 DEBUG nova.policy [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4993d04ad8774a15825d4bea194cd1ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '46d50d652376434585e9da83e40f96bb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.190 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.191 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.192 253665 INFO nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Creating image(s)#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.219 253665 DEBUG nova.storage.rbd_utils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.247 253665 DEBUG nova.storage.rbd_utils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.273 253665 DEBUG nova.storage.rbd_utils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.279 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:39:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1589652562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.339 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.362 253665 DEBUG nova.storage.rbd_utils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image d7865a13-0d41-44d6-aac2-10cca6e1348a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.367 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.413 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.414 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.415 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.415 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.445 253665 DEBUG nova.storage.rbd_utils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.450 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.761 253665 DEBUG nova.compute.manager [req-dce39d99-300c-4eca-abbb-c30f468f9a36 req-b0f3d9a9-8ac1-4f89-a168-9d9a8c8b65e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.762 253665 DEBUG oslo_concurrency.lockutils [req-dce39d99-300c-4eca-abbb-c30f468f9a36 req-b0f3d9a9-8ac1-4f89-a168-9d9a8c8b65e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.762 253665 DEBUG oslo_concurrency.lockutils [req-dce39d99-300c-4eca-abbb-c30f468f9a36 req-b0f3d9a9-8ac1-4f89-a168-9d9a8c8b65e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.762 253665 DEBUG oslo_concurrency.lockutils [req-dce39d99-300c-4eca-abbb-c30f468f9a36 req-b0f3d9a9-8ac1-4f89-a168-9d9a8c8b65e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.762 253665 DEBUG nova.compute.manager [req-dce39d99-300c-4eca-abbb-c30f468f9a36 req-b0f3d9a9-8ac1-4f89-a168-9d9a8c8b65e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] No waiting events found dispatching network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.763 253665 WARNING nova.compute.manager [req-dce39d99-300c-4eca-abbb-c30f468f9a36 req-b0f3d9a9-8ac1-4f89-a168-9d9a8c8b65e2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received unexpected event network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:39:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.848 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:39:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/388129535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.930 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.931 253665 DEBUG nova.virt.libvirt.vif [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1997303369',display_name='tempest-TestNetworkBasicOps-server-1997303369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1997303369',id=123,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG0uOorJ/gmaNrR6qSN8/HnR9fMkzDH2WfxtPrvyBivOyhJCMxJEV6zlpNVePFSMCgazPwKP4Vum9MI8Qs/y/+T2quaiVANmzVrFFYwVnCOps2b+X6LuQ32XNX42/GMXcg==',key_name='tempest-TestNetworkBasicOps-799900934',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-2eyiney1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:21Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=d7865a13-0d41-44d6-aac2-10cca6e1348a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.932 253665 DEBUG nova.network.os_vif_util [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.932 253665 DEBUG nova.network.os_vif_util [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:21:e3,bridge_name='br-int',has_traffic_filtering=True,id=54a61ee9-1fb8-4c5c-8716-613fc3355afb,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54a61ee9-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.934 253665 DEBUG nova.objects.instance [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid d7865a13-0d41-44d6-aac2-10cca6e1348a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.947 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  <uuid>d7865a13-0d41-44d6-aac2-10cca6e1348a</uuid>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  <name>instance-0000007b</name>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestNetworkBasicOps-server-1997303369</nova:name>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:39:27</nova:creationTime>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:39:28 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:        <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:        <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:        <nova:port uuid="54a61ee9-1fb8-4c5c-8716-613fc3355afb">
Nov 22 04:39:28 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <entry name="serial">d7865a13-0d41-44d6-aac2-10cca6e1348a</entry>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <entry name="uuid">d7865a13-0d41-44d6-aac2-10cca6e1348a</entry>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d7865a13-0d41-44d6-aac2-10cca6e1348a_disk">
Nov 22 04:39:28 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:39:28 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d7865a13-0d41-44d6-aac2-10cca6e1348a_disk.config">
Nov 22 04:39:28 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:39:28 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:58:21:e3"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <target dev="tap54a61ee9-1f"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a/console.log" append="off"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:39:28 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:39:28 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:39:28 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:39:28 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
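Annotation: the `_get_guest_xml` dump above is plain libvirt domain XML, so the storage wiring (RBD pool, image name, Ceph monitor endpoint) can be pulled out with the standard library. A minimal sketch using `xml.etree.ElementTree` on an abbreviated fragment copied from the log:

```python
import xml.etree.ElementTree as ET

# Abbreviated fragment of the guest XML logged above.
DOMAIN_XML = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <driver type="raw" cache="none"/>
      <source protocol="rbd" name="vms/d7865a13-0d41-44d6-aac2-10cca6e1348a_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>
"""

root = ET.fromstring(DOMAIN_XML)
# Select the root disk (device="disk", as opposed to the cdrom config drive).
src = root.find("./devices/disk[@device='disk']/source")
mon = src.find("host")
pool, image = src.get("name").split("/", 1)
print(pool, image, mon.get("name"), mon.get("port"))
```

This matches what the log shows: the root disk and the config-drive cdrom both live in Ceph, authenticated as `client.openstack` via the libvirt secret `34829716-a12c-57a6-8915-c1aa615c9d8a`.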
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.948 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Preparing to wait for external event network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.948 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.948 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.948 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.949 253665 DEBUG nova.virt.libvirt.vif [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1997303369',display_name='tempest-TestNetworkBasicOps-server-1997303369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1997303369',id=123,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG0uOorJ/gmaNrR6qSN8/HnR9fMkzDH2WfxtPrvyBivOyhJCMxJEV6zlpNVePFSMCgazPwKP4Vum9MI8Qs/y/+T2quaiVANmzVrFFYwVnCOps2b+X6LuQ32XNX42/GMXcg==',key_name='tempest-TestNetworkBasicOps-799900934',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-2eyiney1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:21Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=d7865a13-0d41-44d6-aac2-10cca6e1348a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.950 253665 DEBUG nova.network.os_vif_util [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.951 253665 DEBUG nova.network.os_vif_util [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:21:e3,bridge_name='br-int',has_traffic_filtering=True,id=54a61ee9-1fb8-4c5c-8716-613fc3355afb,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54a61ee9-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.952 253665 DEBUG os_vif [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:21:e3,bridge_name='br-int',has_traffic_filtering=True,id=54a61ee9-1fb8-4c5c-8716-613fc3355afb,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54a61ee9-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.952 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.953 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.953 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.957 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.957 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54a61ee9-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.958 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54a61ee9-1f, col_values=(('external_ids', {'iface-id': '54a61ee9-1fb8-4c5c-8716-613fc3355afb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:21:e3', 'vm-uuid': 'd7865a13-0d41-44d6-aac2-10cca6e1348a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
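Annotation: the `AddPortCommand` plus `DbSetCommand` pair above is how os-vif plugs the tap device into `br-int` over the OVSDB protocol. For readability, the equivalent `ovs-vsctl` invocation can be assembled as below; this is illustrative only (os-vif does not shell out to the CLI), with the port name and `external_ids` values copied from the logged transaction:

```python
# Build the ovs-vsctl equivalent of the two ovsdbapp commands logged
# above (AddPortCommand + DbSetCommand on table Interface).
external_ids = {
    "iface-id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb",
    "iface-status": "active",
    "attached-mac": "fa:16:3e:58:21:e3",
    "vm-uuid": "d7865a13-0d41-44d6-aac2-10cca6e1348a",
}
cmd = [
    "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap54a61ee9-1f",
    "--", "set", "Interface", "tap54a61ee9-1f",
]
# Each external_ids entry becomes a column assignment on the Interface row.
cmd += [f"external_ids:{k}={v}" for k, v in external_ids.items()]
print(" ".join(cmd))
```

The `iface-id` is what ovn-controller watches for: once it sees the port bound, Neutron emits the `network-vif-plugged` event that Nova is preparing to wait for a few lines earlier.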
Nov 22 04:39:28 np0005532048 NetworkManager[48920]: <info>  [1763804368.9609] manager: (tap54a61ee9-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/535)
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.965 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.968 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:28 np0005532048 nova_compute[253661]: 2025-11-22 09:39:28.970 253665 INFO os_vif [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:21:e3,bridge_name='br-int',has_traffic_filtering=True,id=54a61ee9-1fb8-4c5c-8716-613fc3355afb,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54a61ee9-1f')#033[00m
Nov 22 04:39:29 np0005532048 podman[381283]: 2025-11-22 09:39:29.116906791 +0000 UTC m=+0.106423647 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:39:29 np0005532048 nova_compute[253661]: 2025-11-22 09:39:29.213 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:39:29 np0005532048 nova_compute[253661]: 2025-11-22 09:39:29.214 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:39:29 np0005532048 nova_compute[253661]: 2025-11-22 09:39:29.214 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:58:21:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:39:29 np0005532048 nova_compute[253661]: 2025-11-22 09:39:29.214 253665 INFO nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Using config drive#033[00m
Nov 22 04:39:29 np0005532048 nova_compute[253661]: 2025-11-22 09:39:29.236 253665 DEBUG nova.storage.rbd_utils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image d7865a13-0d41-44d6-aac2-10cca6e1348a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:29 np0005532048 nova_compute[253661]: 2025-11-22 09:39:29.281 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.831s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:29 np0005532048 nova_compute[253661]: 2025-11-22 09:39:29.352 253665 DEBUG nova.storage.rbd_utils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] resizing rbd image 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:39:29 np0005532048 nova_compute[253661]: 2025-11-22 09:39:29.454 253665 DEBUG nova.objects.instance [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'migration_context' on Instance uuid 44f1789d-14f7-46df-a863-e8c3c418f7f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:39:29 np0005532048 nova_compute[253661]: 2025-11-22 09:39:29.467 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:39:29 np0005532048 nova_compute[253661]: 2025-11-22 09:39:29.467 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Ensure instance console log exists: /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:39:29 np0005532048 nova_compute[253661]: 2025-11-22 09:39:29.468 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:29 np0005532048 nova_compute[253661]: 2025-11-22 09:39:29.468 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:29 np0005532048 nova_compute[253661]: 2025-11-22 09:39:29.468 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 213 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.6 MiB/s wr, 74 op/s
Nov 22 04:39:30 np0005532048 nova_compute[253661]: 2025-11-22 09:39:30.100 253665 INFO nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Creating config drive at /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a/disk.config#033[00m
Nov 22 04:39:30 np0005532048 nova_compute[253661]: 2025-11-22 09:39:30.105 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9gecf4_w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:30 np0005532048 nova_compute[253661]: 2025-11-22 09:39:30.259 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9gecf4_w" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:30 np0005532048 nova_compute[253661]: 2025-11-22 09:39:30.299 253665 DEBUG nova.storage.rbd_utils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image d7865a13-0d41-44d6-aac2-10cca6e1348a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:30 np0005532048 nova_compute[253661]: 2025-11-22 09:39:30.304 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a/disk.config d7865a13-0d41-44d6-aac2-10cca6e1348a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:30 np0005532048 nova_compute[253661]: 2025-11-22 09:39:30.480 253665 DEBUG oslo_concurrency.processutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a/disk.config d7865a13-0d41-44d6-aac2-10cca6e1348a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:30 np0005532048 nova_compute[253661]: 2025-11-22 09:39:30.481 253665 INFO nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Deleting local config drive /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a/disk.config because it was imported into RBD.#033[00m
Nov 22 04:39:30 np0005532048 NetworkManager[48920]: <info>  [1763804370.5447] manager: (tap54a61ee9-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/536)
Nov 22 04:39:30 np0005532048 kernel: tap54a61ee9-1f: entered promiscuous mode
Nov 22 04:39:30 np0005532048 nova_compute[253661]: 2025-11-22 09:39:30.550 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:30Z|01309|binding|INFO|Claiming lport 54a61ee9-1fb8-4c5c-8716-613fc3355afb for this chassis.
Nov 22 04:39:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:30Z|01310|binding|INFO|54a61ee9-1fb8-4c5c-8716-613fc3355afb: Claiming fa:16:3e:58:21:e3 10.100.0.20
Nov 22 04:39:30 np0005532048 systemd-udevd[381452]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:39:30 np0005532048 nova_compute[253661]: 2025-11-22 09:39:30.587 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:30Z|01311|binding|INFO|Setting lport 54a61ee9-1fb8-4c5c-8716-613fc3355afb ovn-installed in OVS
Nov 22 04:39:30 np0005532048 nova_compute[253661]: 2025-11-22 09:39:30.594 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:30 np0005532048 nova_compute[253661]: 2025-11-22 09:39:30.596 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:30 np0005532048 NetworkManager[48920]: <info>  [1763804370.6062] device (tap54a61ee9-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:39:30 np0005532048 NetworkManager[48920]: <info>  [1763804370.6073] device (tap54a61ee9-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:39:30 np0005532048 systemd-machined[215941]: New machine qemu-154-instance-0000007b.
Nov 22 04:39:30 np0005532048 systemd[1]: Started Virtual Machine qemu-154-instance-0000007b.
Nov 22 04:39:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:30Z|01312|binding|INFO|Setting lport 54a61ee9-1fb8-4c5c-8716-613fc3355afb up in Southbound
Nov 22 04:39:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.654 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:21:e3 10.100.0.20'], port_security=['fa:16:3e:58:21:e3 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': 'd7865a13-0d41-44d6-aac2-10cca6e1348a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '268f1f5a-a38b-4a4b-99c8-6f247601dc2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe5061ce-83c8-4f7d-bdd0-cc8d1c8db63d, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=54a61ee9-1fb8-4c5c-8716-613fc3355afb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:39:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.656 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 54a61ee9-1fb8-4c5c-8716-613fc3355afb in datapath 30756ec6-103b-4571-a5dc-9b4a481bc5b1 bound to our chassis#033[00m
Nov 22 04:39:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.658 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 30756ec6-103b-4571-a5dc-9b4a481bc5b1#033[00m
Nov 22 04:39:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.678 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3c4feea2-4c01-49e5-ba24-36d14e49ae82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.713 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[43c6bfd9-888b-4d6c-ac78-2b0e56822457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.717 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[cfcda24e-4ad9-42fb-9ebf-42e59e0a30d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:30 np0005532048 nova_compute[253661]: 2025-11-22 09:39:30.728 253665 DEBUG nova.network.neutron [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Successfully created port: 7381e299-12bd-46ec-8abf-df35fe0bf48a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:39:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.758 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[be86a877-c73f-430a-953a-5d17fb9a19bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.776 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c96139b8-d994-4b06-864b-d0fdfc75626a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30756ec6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:cb:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 371], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720168, 'reachable_time': 29489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381467, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.793 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[19554eb2-99f1-4a9f-98e4-90d6ec1f5752]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap30756ec6-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 720183, 'tstamp': 720183}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381468, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap30756ec6-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 720186, 'tstamp': 720186}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381468, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.796 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30756ec6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:30 np0005532048 nova_compute[253661]: 2025-11-22 09:39:30.798 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.800 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30756ec6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.800 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:39:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.800 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap30756ec6-10, col_values=(('external_ids', {'iface-id': 'ef3a77cb-c20e-4c0c-b747-f8d33bfa04a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:30.801 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:39:30 np0005532048 nova_compute[253661]: 2025-11-22 09:39:30.999 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804370.998435, d7865a13-0d41-44d6-aac2-10cca6e1348a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:30.999 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] VM Started (Lifecycle Event)#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.025 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.029 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804370.9986734, d7865a13-0d41-44d6-aac2-10cca6e1348a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.029 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.046 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.051 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.070 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.203 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.266 253665 DEBUG nova.network.neutron [req-7b8b0886-6c3e-4ba7-9a07-ae0dd253507d req-729a8727-f07d-4e64-93a2-fa797957686c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Updated VIF entry in instance network info cache for port 54a61ee9-1fb8-4c5c-8716-613fc3355afb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.267 253665 DEBUG nova.network.neutron [req-7b8b0886-6c3e-4ba7-9a07-ae0dd253507d req-729a8727-f07d-4e64-93a2-fa797957686c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Updating instance_info_cache with network_info: [{"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.280 253665 DEBUG oslo_concurrency.lockutils [req-7b8b0886-6c3e-4ba7-9a07-ae0dd253507d req-729a8727-f07d-4e64-93a2-fa797957686c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d7865a13-0d41-44d6-aac2-10cca6e1348a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.430 253665 DEBUG nova.compute.manager [req-69db77be-c069-4880-99da-d3cd1bc37d4d req-15100244-e18c-4dad-b645-5fb498ff38ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received event network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.431 253665 DEBUG oslo_concurrency.lockutils [req-69db77be-c069-4880-99da-d3cd1bc37d4d req-15100244-e18c-4dad-b645-5fb498ff38ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.431 253665 DEBUG oslo_concurrency.lockutils [req-69db77be-c069-4880-99da-d3cd1bc37d4d req-15100244-e18c-4dad-b645-5fb498ff38ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.432 253665 DEBUG oslo_concurrency.lockutils [req-69db77be-c069-4880-99da-d3cd1bc37d4d req-15100244-e18c-4dad-b645-5fb498ff38ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.432 253665 DEBUG nova.compute.manager [req-69db77be-c069-4880-99da-d3cd1bc37d4d req-15100244-e18c-4dad-b645-5fb498ff38ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Processing event network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.433 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.436 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804371.4364312, d7865a13-0d41-44d6-aac2-10cca6e1348a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.437 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.439 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.443 253665 INFO nova.virt.libvirt.driver [-] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Instance spawned successfully.#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.444 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.459 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.465 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.472 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.473 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.473 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.474 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.474 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.475 253665 DEBUG nova.virt.libvirt.driver [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.519 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.562 253665 INFO nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Took 9.95 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.563 253665 DEBUG nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.639 253665 INFO nova.compute.manager [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Took 10.92 seconds to build instance.#033[00m
Nov 22 04:39:31 np0005532048 nova_compute[253661]: 2025-11-22 09:39:31.661 253665 DEBUG oslo_concurrency.lockutils [None req-6acd39bc-9f54-4418-9e27-b6281370d065 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 230 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 735 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Nov 22 04:39:32 np0005532048 nova_compute[253661]: 2025-11-22 09:39:32.296 253665 DEBUG nova.network.neutron [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Successfully updated port: 7381e299-12bd-46ec-8abf-df35fe0bf48a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:39:32 np0005532048 nova_compute[253661]: 2025-11-22 09:39:32.379 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:39:32 np0005532048 nova_compute[253661]: 2025-11-22 09:39:32.379 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquired lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:39:32 np0005532048 nova_compute[253661]: 2025-11-22 09:39:32.379 253665 DEBUG nova.network.neutron [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:39:32 np0005532048 nova_compute[253661]: 2025-11-22 09:39:32.421 253665 DEBUG nova.compute.manager [req-dcf33696-b335-4f08-93ba-cd2693266cae req-f7470153-3804-4748-b62a-42ae8ec5ccd1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received event network-changed-7381e299-12bd-46ec-8abf-df35fe0bf48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:39:32 np0005532048 nova_compute[253661]: 2025-11-22 09:39:32.422 253665 DEBUG nova.compute.manager [req-dcf33696-b335-4f08-93ba-cd2693266cae req-f7470153-3804-4748-b62a-42ae8ec5ccd1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Refreshing instance network info cache due to event network-changed-7381e299-12bd-46ec-8abf-df35fe0bf48a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:39:32 np0005532048 nova_compute[253661]: 2025-11-22 09:39:32.423 253665 DEBUG oslo_concurrency.lockutils [req-dcf33696-b335-4f08-93ba-cd2693266cae req-f7470153-3804-4748-b62a-42ae8ec5ccd1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:39:32 np0005532048 nova_compute[253661]: 2025-11-22 09:39:32.600 253665 DEBUG nova.network.neutron [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:39:33 np0005532048 nova_compute[253661]: 2025-11-22 09:39:33.568 253665 DEBUG nova.compute.manager [req-e30d15f5-d1c3-49f5-a5ce-fdecca6a6829 req-c96b00b8-4ecb-4c34-9241-ad280161a33b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received event network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:39:33 np0005532048 nova_compute[253661]: 2025-11-22 09:39:33.569 253665 DEBUG oslo_concurrency.lockutils [req-e30d15f5-d1c3-49f5-a5ce-fdecca6a6829 req-c96b00b8-4ecb-4c34-9241-ad280161a33b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:33 np0005532048 nova_compute[253661]: 2025-11-22 09:39:33.569 253665 DEBUG oslo_concurrency.lockutils [req-e30d15f5-d1c3-49f5-a5ce-fdecca6a6829 req-c96b00b8-4ecb-4c34-9241-ad280161a33b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:33 np0005532048 nova_compute[253661]: 2025-11-22 09:39:33.570 253665 DEBUG oslo_concurrency.lockutils [req-e30d15f5-d1c3-49f5-a5ce-fdecca6a6829 req-c96b00b8-4ecb-4c34-9241-ad280161a33b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:33 np0005532048 nova_compute[253661]: 2025-11-22 09:39:33.570 253665 DEBUG nova.compute.manager [req-e30d15f5-d1c3-49f5-a5ce-fdecca6a6829 req-c96b00b8-4ecb-4c34-9241-ad280161a33b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] No waiting events found dispatching network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:39:33 np0005532048 nova_compute[253661]: 2025-11-22 09:39:33.570 253665 WARNING nova.compute.manager [req-e30d15f5-d1c3-49f5-a5ce-fdecca6a6829 req-c96b00b8-4ecb-4c34-9241-ad280161a33b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received unexpected event network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb for instance with vm_state active and task_state None.#033[00m
Nov 22 04:39:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:33.792 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:39:33 np0005532048 nova_compute[253661]: 2025-11-22 09:39:33.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 260 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.584 253665 DEBUG nova.compute.manager [req-539381cf-2df5-448a-a9b7-9cb1c5be2ba5 req-2c61e5e0-0f86-46be-8105-e931a0a9aca1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-changed-2a619e33-769d-4ebf-b212-40975e40d3ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.585 253665 DEBUG nova.compute.manager [req-539381cf-2df5-448a-a9b7-9cb1c5be2ba5 req-2c61e5e0-0f86-46be-8105-e931a0a9aca1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Refreshing instance network info cache due to event network-changed-2a619e33-769d-4ebf-b212-40975e40d3ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.585 253665 DEBUG oslo_concurrency.lockutils [req-539381cf-2df5-448a-a9b7-9cb1c5be2ba5 req-2c61e5e0-0f86-46be-8105-e931a0a9aca1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.586 253665 DEBUG oslo_concurrency.lockutils [req-539381cf-2df5-448a-a9b7-9cb1c5be2ba5 req-2c61e5e0-0f86-46be-8105-e931a0a9aca1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.586 253665 DEBUG nova.network.neutron [req-539381cf-2df5-448a-a9b7-9cb1c5be2ba5 req-2c61e5e0-0f86-46be-8105-e931a0a9aca1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Refreshing network info cache for port 2a619e33-769d-4ebf-b212-40975e40d3ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.717 253665 DEBUG nova.network.neutron [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Updating instance_info_cache with network_info: [{"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.757 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Releasing lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.758 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Instance network_info: |[{"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.760 253665 DEBUG oslo_concurrency.lockutils [req-dcf33696-b335-4f08-93ba-cd2693266cae req-f7470153-3804-4748-b62a-42ae8ec5ccd1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.766 253665 DEBUG nova.network.neutron [req-dcf33696-b335-4f08-93ba-cd2693266cae req-f7470153-3804-4748-b62a-42ae8ec5ccd1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Refreshing network info cache for port 7381e299-12bd-46ec-8abf-df35fe0bf48a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.776 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Start _get_guest_xml network_info=[{"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.784 253665 WARNING nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.790 253665 DEBUG nova.virt.libvirt.host [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.791 253665 DEBUG nova.virt.libvirt.host [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.801 253665 DEBUG nova.virt.libvirt.host [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.802 253665 DEBUG nova.virt.libvirt.host [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.802 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.803 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.803 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.804 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.804 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.804 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.804 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.805 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.805 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.806 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.806 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.806 253665 DEBUG nova.virt.hardware [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:39:35 np0005532048 nova_compute[253661]: 2025-11-22 09:39:35.810 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 260 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.5 MiB/s wr, 152 op/s
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.205 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:39:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3472232399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.281 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.302 253665 DEBUG nova.storage.rbd_utils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.307 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:39:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2027698613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.786 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.790 253665 DEBUG nova.virt.libvirt.vif [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-338358749',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-338358749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=124,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbk5RfudFquhpa5lprQIMNSDd1LWjuKWOiIN353NFhcoF5DkddOnpCLYMTAq6AP8dFFIkCpIG6/In3cki28BBZ+JI0FuFnDsEiRArR4SIm949ArAgIcePLWzUf/qVubsg==',key_name='tempest-TestSecurityGroupsBasicOps-321654172',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-zgn5mokh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:27Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=44f1789d-14f7-46df-a863-e8c3c418f7f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.791 253665 DEBUG nova.network.os_vif_util [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.792 253665 DEBUG nova.network.os_vif_util [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ff:5c,bridge_name='br-int',has_traffic_filtering=True,id=7381e299-12bd-46ec-8abf-df35fe0bf48a,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7381e299-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.794 253665 DEBUG nova.objects.instance [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'pci_devices' on Instance uuid 44f1789d-14f7-46df-a863-e8c3c418f7f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.812 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  <uuid>44f1789d-14f7-46df-a863-e8c3c418f7f3</uuid>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  <name>instance-0000007c</name>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-338358749</nova:name>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:39:35</nova:creationTime>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:39:36 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:        <nova:user uuid="4993d04ad8774a15825d4bea194cd1ca">tempest-TestSecurityGroupsBasicOps-488258979-project-member</nova:user>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:        <nova:project uuid="46d50d652376434585e9da83e40f96bb">tempest-TestSecurityGroupsBasicOps-488258979</nova:project>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:        <nova:port uuid="7381e299-12bd-46ec-8abf-df35fe0bf48a">
Nov 22 04:39:36 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <entry name="serial">44f1789d-14f7-46df-a863-e8c3c418f7f3</entry>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <entry name="uuid">44f1789d-14f7-46df-a863-e8c3c418f7f3</entry>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/44f1789d-14f7-46df-a863-e8c3c418f7f3_disk">
Nov 22 04:39:36 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:39:36 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/44f1789d-14f7-46df-a863-e8c3c418f7f3_disk.config">
Nov 22 04:39:36 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:39:36 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:9b:ff:5c"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <target dev="tap7381e299-12"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3/console.log" append="off"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:39:36 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:39:36 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:39:36 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:39:36 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.819 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Preparing to wait for external event network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.820 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.820 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.821 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.822 253665 DEBUG nova.virt.libvirt.vif [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-338358749',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-338358749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=124,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbk5RfudFquhpa5lprQIMNSDd1LWjuKWOiIN353NFhcoF5DkddOnpCLYMTAq6AP8dFFIkCpIG6/In3cki28BBZ+JI0FuFnDsEiRArR4SIm949ArAgIcePLWzUf/qVubsg==',key_name='tempest-TestSecurityGroupsBasicOps-321654172',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-zgn5mokh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:27Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=44f1789d-14f7-46df-a863-e8c3c418f7f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.822 253665 DEBUG nova.network.os_vif_util [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.823 253665 DEBUG nova.network.os_vif_util [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ff:5c,bridge_name='br-int',has_traffic_filtering=True,id=7381e299-12bd-46ec-8abf-df35fe0bf48a,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7381e299-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.824 253665 DEBUG os_vif [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ff:5c,bridge_name='br-int',has_traffic_filtering=True,id=7381e299-12bd-46ec-8abf-df35fe0bf48a,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7381e299-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.825 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.825 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.826 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.830 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.830 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7381e299-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.831 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7381e299-12, col_values=(('external_ids', {'iface-id': '7381e299-12bd-46ec-8abf-df35fe0bf48a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:ff:5c', 'vm-uuid': '44f1789d-14f7-46df-a863-e8c3c418f7f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:36 np0005532048 NetworkManager[48920]: <info>  [1763804376.8340] manager: (tap7381e299-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/537)
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.833 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.840 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.841 253665 INFO os_vif [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:ff:5c,bridge_name='br-int',has_traffic_filtering=True,id=7381e299-12bd-46ec-8abf-df35fe0bf48a,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7381e299-12')#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.885 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.887 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.887 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No VIF found with MAC fa:16:3e:9b:ff:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.888 253665 INFO nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Using config drive#033[00m
Nov 22 04:39:36 np0005532048 nova_compute[253661]: 2025-11-22 09:39:36.908 253665 DEBUG nova.storage.rbd_utils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:37 np0005532048 nova_compute[253661]: 2025-11-22 09:39:37.798 253665 INFO nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Creating config drive at /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3/disk.config#033[00m
Nov 22 04:39:37 np0005532048 nova_compute[253661]: 2025-11-22 09:39:37.808 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmdlyujis execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:37 np0005532048 nova_compute[253661]: 2025-11-22 09:39:37.973 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmdlyujis" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 260 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.5 MiB/s wr, 176 op/s
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.001 253665 DEBUG nova.storage.rbd_utils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.005 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3/disk.config 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.042 253665 DEBUG nova.network.neutron [req-539381cf-2df5-448a-a9b7-9cb1c5be2ba5 req-2c61e5e0-0f86-46be-8105-e931a0a9aca1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updated VIF entry in instance network info cache for port 2a619e33-769d-4ebf-b212-40975e40d3ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.043 253665 DEBUG nova.network.neutron [req-539381cf-2df5-448a-a9b7-9cb1c5be2ba5 req-2c61e5e0-0f86-46be-8105-e931a0a9aca1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updating instance_info_cache with network_info: [{"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.066 253665 DEBUG oslo_concurrency.lockutils [req-539381cf-2df5-448a-a9b7-9cb1c5be2ba5 req-2c61e5e0-0f86-46be-8105-e931a0a9aca1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.168 253665 DEBUG oslo_concurrency.processutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3/disk.config 44f1789d-14f7-46df-a863-e8c3c418f7f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.169 253665 INFO nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Deleting local config drive /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3/disk.config because it was imported into RBD.#033[00m
Nov 22 04:39:38 np0005532048 kernel: tap7381e299-12: entered promiscuous mode
Nov 22 04:39:38 np0005532048 NetworkManager[48920]: <info>  [1763804378.2118] manager: (tap7381e299-12): new Tun device (/org/freedesktop/NetworkManager/Devices/538)
Nov 22 04:39:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:38Z|01313|binding|INFO|Claiming lport 7381e299-12bd-46ec-8abf-df35fe0bf48a for this chassis.
Nov 22 04:39:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:38Z|01314|binding|INFO|7381e299-12bd-46ec-8abf-df35fe0bf48a: Claiming fa:16:3e:9b:ff:5c 10.100.0.3
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.221 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:38Z|01315|binding|INFO|Setting lport 7381e299-12bd-46ec-8abf-df35fe0bf48a ovn-installed in OVS
Nov 22 04:39:38 np0005532048 systemd-udevd[381645]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.243 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.246 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.248 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.249 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.249 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:39:38 np0005532048 NetworkManager[48920]: <info>  [1763804378.2560] device (tap7381e299-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:39:38 np0005532048 NetworkManager[48920]: <info>  [1763804378.2572] device (tap7381e299-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:39:38 np0005532048 systemd-machined[215941]: New machine qemu-155-instance-0000007c.
Nov 22 04:39:38 np0005532048 systemd[1]: Started Virtual Machine qemu-155-instance-0000007c.
Nov 22 04:39:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:38Z|01316|binding|INFO|Setting lport 7381e299-12bd-46ec-8abf-df35fe0bf48a up in Southbound
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.281 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:ff:5c 10.100.0.3'], port_security=['fa:16:3e:9b:ff:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '44f1789d-14f7-46df-a863-e8c3c418f7f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-705357ee-1033-4907-905f-d41aa6dcfd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5f198579-316d-40d0-ae5d-a4d8440647aa d24f9530-589a-4ee7-9767-0df91de410f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b913e923-e2b2-4479-8913-960bf5f1e614, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7381e299-12bd-46ec-8abf-df35fe0bf48a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.283 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7381e299-12bd-46ec-8abf-df35fe0bf48a in datapath 705357ee-1033-4907-905f-d41aa6dcfd73 bound to our chassis#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.284 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 705357ee-1033-4907-905f-d41aa6dcfd73#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.287 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.304 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2bd56e-dd2e-4b11-8cd4-16c2bfd2c07e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.305 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap705357ee-11 in ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.307 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap705357ee-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.307 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[253cd634-2663-4025-8c92-6992591cae87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.308 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9bfabd50-4e76-45e7-a0f2-dadc2242c8dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.333 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d146c9-83e8-464a-ae67-58bcc7a93c14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.364 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0060b635-21b0-48fa-998b-153e682fd14f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.405 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[616a4d21-77b2-45c3-b67d-57891785116f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:38 np0005532048 NetworkManager[48920]: <info>  [1763804378.4126] manager: (tap705357ee-10): new Veth device (/org/freedesktop/NetworkManager/Devices/539)
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.415 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1957099d-e173-4e88-a18c-248bff98a28f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.461 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2a6ae116-7cf5-40bd-8c73-f45131c621e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.467 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[92f8cfc9-efc9-4bb7-ae5c-49b6d9155c39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:38 np0005532048 NetworkManager[48920]: <info>  [1763804378.4991] device (tap705357ee-10): carrier: link connected
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.509 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[96491370-7509-4228-99c8-4916b6a43e04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.528 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[48509dd7-3a8b-4ced-8548-535966383493]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap705357ee-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:aa:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722923, 'reachable_time': 34041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381680, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.547 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eaffb370-e89a-41cc-a41a-6116c381971b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:aa4a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722923, 'tstamp': 722923}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381682, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.567 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d1224e1b-dae4-4545-8524-1a2aec9dd2bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap705357ee-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:aa:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722923, 'reachable_time': 34041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 381697, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.608 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc6c420-f480-47dd-95eb-bec3b867bb25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.671 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1d553168-20fd-4404-9d60-8ba632a18872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.673 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap705357ee-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.673 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.673 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap705357ee-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.675 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:38 np0005532048 NetworkManager[48920]: <info>  [1763804378.6766] manager: (tap705357ee-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/540)
Nov 22 04:39:38 np0005532048 kernel: tap705357ee-10: entered promiscuous mode
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.680 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.680 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap705357ee-10, col_values=(('external_ids', {'iface-id': 'e4d17104-1aeb-4ffd-be7b-ed782324874a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.681 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:38Z|01317|binding|INFO|Releasing lport e4d17104-1aeb-4ffd-be7b-ed782324874a from this chassis (sb_readonly=0)
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.707 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.707 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/705357ee-1033-4907-905f-d41aa6dcfd73.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/705357ee-1033-4907-905f-d41aa6dcfd73.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.708 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae754ad-1abd-4086-9076-23cc97cbf6bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.709 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-705357ee-1033-4907-905f-d41aa6dcfd73
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/705357ee-1033-4907-905f-d41aa6dcfd73.pid.haproxy
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 705357ee-1033-4907-905f-d41aa6dcfd73
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:39:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:39:38.710 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'env', 'PROCESS_TAG=haproxy-705357ee-1033-4907-905f-d41aa6dcfd73', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/705357ee-1033-4907-905f-d41aa6dcfd73.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.718 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.718 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.718 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.719 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.721 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804378.7206905, 44f1789d-14f7-46df-a863-e8c3c418f7f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.721 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] VM Started (Lifecycle Event)#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.742 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.747 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804378.7209592, 44f1789d-14f7-46df-a863-e8c3c418f7f3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.747 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.762 253665 DEBUG nova.network.neutron [req-dcf33696-b335-4f08-93ba-cd2693266cae req-f7470153-3804-4748-b62a-42ae8ec5ccd1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Updated VIF entry in instance network info cache for port 7381e299-12bd-46ec-8abf-df35fe0bf48a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.764 253665 DEBUG nova.network.neutron [req-dcf33696-b335-4f08-93ba-cd2693266cae req-f7470153-3804-4748-b62a-42ae8ec5ccd1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Updating instance_info_cache with network_info: [{"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.780 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.782 253665 DEBUG oslo_concurrency.lockutils [req-dcf33696-b335-4f08-93ba-cd2693266cae req-f7470153-3804-4748-b62a-42ae8ec5ccd1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.785 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.803 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:39:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.982 253665 DEBUG nova.compute.manager [req-a5055327-a2de-4fb9-873c-e16010203b21 req-f14cc13f-7a2a-4c6f-9e47-6bdd42c2bf76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received event network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.983 253665 DEBUG oslo_concurrency.lockutils [req-a5055327-a2de-4fb9-873c-e16010203b21 req-f14cc13f-7a2a-4c6f-9e47-6bdd42c2bf76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.983 253665 DEBUG oslo_concurrency.lockutils [req-a5055327-a2de-4fb9-873c-e16010203b21 req-f14cc13f-7a2a-4c6f-9e47-6bdd42c2bf76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.983 253665 DEBUG oslo_concurrency.lockutils [req-a5055327-a2de-4fb9-873c-e16010203b21 req-f14cc13f-7a2a-4c6f-9e47-6bdd42c2bf76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.984 253665 DEBUG nova.compute.manager [req-a5055327-a2de-4fb9-873c-e16010203b21 req-f14cc13f-7a2a-4c6f-9e47-6bdd42c2bf76 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Processing event network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.985 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.989 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804378.9890647, 44f1789d-14f7-46df-a863-e8c3c418f7f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.989 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.992 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.995 253665 INFO nova.virt.libvirt.driver [-] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Instance spawned successfully.#033[00m
Nov 22 04:39:38 np0005532048 nova_compute[253661]: 2025-11-22 09:39:38.995 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:39:39 np0005532048 nova_compute[253661]: 2025-11-22 09:39:39.014 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:39:39 np0005532048 nova_compute[253661]: 2025-11-22 09:39:39.025 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:39:39 np0005532048 nova_compute[253661]: 2025-11-22 09:39:39.029 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:39 np0005532048 nova_compute[253661]: 2025-11-22 09:39:39.030 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:39 np0005532048 nova_compute[253661]: 2025-11-22 09:39:39.030 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:39 np0005532048 nova_compute[253661]: 2025-11-22 09:39:39.031 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:39 np0005532048 nova_compute[253661]: 2025-11-22 09:39:39.031 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:39 np0005532048 nova_compute[253661]: 2025-11-22 09:39:39.032 253665 DEBUG nova.virt.libvirt.driver [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:39:39 np0005532048 nova_compute[253661]: 2025-11-22 09:39:39.065 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:39:39 np0005532048 podman[381753]: 2025-11-22 09:39:39.13787202 +0000 UTC m=+0.060230979 container create 91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 04:39:39 np0005532048 systemd[1]: Started libpod-conmon-91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e.scope.
Nov 22 04:39:39 np0005532048 podman[381753]: 2025-11-22 09:39:39.107244888 +0000 UTC m=+0.029603887 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:39:39 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:39:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0ca2159f1a5a63e92ff60160fdc469795b2978d3e0860bcbe102db9710cb41/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:39:39 np0005532048 podman[381753]: 2025-11-22 09:39:39.251512696 +0000 UTC m=+0.173871675 container init 91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:39:39 np0005532048 podman[381753]: 2025-11-22 09:39:39.257851003 +0000 UTC m=+0.180209972 container start 91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 22 04:39:39 np0005532048 neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73[381768]: [NOTICE]   (381772) : New worker (381774) forked
Nov 22 04:39:39 np0005532048 neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73[381768]: [NOTICE]   (381772) : Loading success.
Nov 22 04:39:39 np0005532048 nova_compute[253661]: 2025-11-22 09:39:39.694 253665 INFO nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Took 11.50 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:39:39 np0005532048 nova_compute[253661]: 2025-11-22 09:39:39.695 253665 DEBUG nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:39:39 np0005532048 nova_compute[253661]: 2025-11-22 09:39:39.782 253665 INFO nova.compute.manager [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Took 12.98 seconds to build instance.#033[00m
Nov 22 04:39:39 np0005532048 nova_compute[253661]: 2025-11-22 09:39:39.800 253665 DEBUG oslo_concurrency.lockutils [None req-3455d44a-950b-4988-92a4-107895681d23 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 260 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Nov 22 04:39:41 np0005532048 nova_compute[253661]: 2025-11-22 09:39:41.111 253665 DEBUG nova.compute.manager [req-ce30d82b-f6fb-46f7-9f6a-9787bb6ccc70 req-b145823e-3e33-4a7f-ae52-58979a7ca3c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received event network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:39:41 np0005532048 nova_compute[253661]: 2025-11-22 09:39:41.112 253665 DEBUG oslo_concurrency.lockutils [req-ce30d82b-f6fb-46f7-9f6a-9787bb6ccc70 req-b145823e-3e33-4a7f-ae52-58979a7ca3c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:41 np0005532048 nova_compute[253661]: 2025-11-22 09:39:41.113 253665 DEBUG oslo_concurrency.lockutils [req-ce30d82b-f6fb-46f7-9f6a-9787bb6ccc70 req-b145823e-3e33-4a7f-ae52-58979a7ca3c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:41 np0005532048 nova_compute[253661]: 2025-11-22 09:39:41.114 253665 DEBUG oslo_concurrency.lockutils [req-ce30d82b-f6fb-46f7-9f6a-9787bb6ccc70 req-b145823e-3e33-4a7f-ae52-58979a7ca3c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:41 np0005532048 nova_compute[253661]: 2025-11-22 09:39:41.114 253665 DEBUG nova.compute.manager [req-ce30d82b-f6fb-46f7-9f6a-9787bb6ccc70 req-b145823e-3e33-4a7f-ae52-58979a7ca3c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] No waiting events found dispatching network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:39:41 np0005532048 nova_compute[253661]: 2025-11-22 09:39:41.115 253665 WARNING nova.compute.manager [req-ce30d82b-f6fb-46f7-9f6a-9787bb6ccc70 req-b145823e-3e33-4a7f-ae52-58979a7ca3c0 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received unexpected event network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a for instance with vm_state active and task_state None.#033[00m
Nov 22 04:39:41 np0005532048 nova_compute[253661]: 2025-11-22 09:39:41.192 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:39:41 np0005532048 nova_compute[253661]: 2025-11-22 09:39:41.209 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:41 np0005532048 nova_compute[253661]: 2025-11-22 09:39:41.236 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:39:41 np0005532048 nova_compute[253661]: 2025-11-22 09:39:41.237 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:39:41 np0005532048 nova_compute[253661]: 2025-11-22 09:39:41.834 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 279 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.5 MiB/s wr, 202 op/s
Nov 22 04:39:42 np0005532048 nova_compute[253661]: 2025-11-22 09:39:42.208 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:39:42 np0005532048 nova_compute[253661]: 2025-11-22 09:39:42.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:39:42 np0005532048 nova_compute[253661]: 2025-11-22 09:39:42.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:39:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:42Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:e0:cc 10.100.0.10
Nov 22 04:39:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:42Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:e0:cc 10.100.0.10
Nov 22 04:39:43 np0005532048 nova_compute[253661]: 2025-11-22 09:39:43.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:39:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:39:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 293 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 3.2 MiB/s wr, 269 op/s
Nov 22 04:39:44 np0005532048 nova_compute[253661]: 2025-11-22 09:39:44.399 253665 DEBUG nova.compute.manager [req-46d8623d-23f7-4806-a81d-42354f4ebfc7 req-dbe98209-e25e-4c37-b5f2-ec732889c1b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received event network-changed-7381e299-12bd-46ec-8abf-df35fe0bf48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:39:44 np0005532048 nova_compute[253661]: 2025-11-22 09:39:44.399 253665 DEBUG nova.compute.manager [req-46d8623d-23f7-4806-a81d-42354f4ebfc7 req-dbe98209-e25e-4c37-b5f2-ec732889c1b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Refreshing instance network info cache due to event network-changed-7381e299-12bd-46ec-8abf-df35fe0bf48a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:39:44 np0005532048 nova_compute[253661]: 2025-11-22 09:39:44.400 253665 DEBUG oslo_concurrency.lockutils [req-46d8623d-23f7-4806-a81d-42354f4ebfc7 req-dbe98209-e25e-4c37-b5f2-ec732889c1b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:39:44 np0005532048 nova_compute[253661]: 2025-11-22 09:39:44.400 253665 DEBUG oslo_concurrency.lockutils [req-46d8623d-23f7-4806-a81d-42354f4ebfc7 req-dbe98209-e25e-4c37-b5f2-ec732889c1b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:39:44 np0005532048 nova_compute[253661]: 2025-11-22 09:39:44.400 253665 DEBUG nova.network.neutron [req-46d8623d-23f7-4806-a81d-42354f4ebfc7 req-dbe98209-e25e-4c37-b5f2-ec732889c1b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Refreshing network info cache for port 7381e299-12bd-46ec-8abf-df35fe0bf48a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:39:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 293 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.211 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.250 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.251 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:46Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:58:21:e3 10.100.0.20
Nov 22 04:39:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:46Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:58:21:e3 10.100.0.20
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.559 253665 DEBUG nova.network.neutron [req-46d8623d-23f7-4806-a81d-42354f4ebfc7 req-dbe98209-e25e-4c37-b5f2-ec732889c1b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Updated VIF entry in instance network info cache for port 7381e299-12bd-46ec-8abf-df35fe0bf48a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.560 253665 DEBUG nova.network.neutron [req-46d8623d-23f7-4806-a81d-42354f4ebfc7 req-dbe98209-e25e-4c37-b5f2-ec732889c1b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Updating instance_info_cache with network_info: [{"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.588 253665 DEBUG oslo_concurrency.lockutils [req-46d8623d-23f7-4806-a81d-42354f4ebfc7 req-dbe98209-e25e-4c37-b5f2-ec732889c1b4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:39:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:39:46 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1240403227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.713 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.792 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.792 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.796 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.797 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.799 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.800 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.803 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.803 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:39:46 np0005532048 nova_compute[253661]: 2025-11-22 09:39:46.836 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:47 np0005532048 nova_compute[253661]: 2025-11-22 09:39:47.028 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:39:47 np0005532048 nova_compute[253661]: 2025-11-22 09:39:47.030 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2933MB free_disk=59.85542297363281GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:39:47 np0005532048 nova_compute[253661]: 2025-11-22 09:39:47.030 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:47 np0005532048 nova_compute[253661]: 2025-11-22 09:39:47.031 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:47 np0005532048 nova_compute[253661]: 2025-11-22 09:39:47.115 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:39:47 np0005532048 nova_compute[253661]: 2025-11-22 09:39:47.115 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance ba0b1c52-c98b-4c2f-a213-e203719ada54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:39:47 np0005532048 nova_compute[253661]: 2025-11-22 09:39:47.116 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d7865a13-0d41-44d6-aac2-10cca6e1348a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:39:47 np0005532048 nova_compute[253661]: 2025-11-22 09:39:47.116 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 44f1789d-14f7-46df-a863-e8c3c418f7f3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:39:47 np0005532048 nova_compute[253661]: 2025-11-22 09:39:47.116 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:39:47 np0005532048 nova_compute[253661]: 2025-11-22 09:39:47.117 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:39:47 np0005532048 nova_compute[253661]: 2025-11-22 09:39:47.216 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:39:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3452286703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:39:47 np0005532048 nova_compute[253661]: 2025-11-22 09:39:47.686 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:47 np0005532048 nova_compute[253661]: 2025-11-22 09:39:47.693 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:39:47 np0005532048 nova_compute[253661]: 2025-11-22 09:39:47.710 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:39:47 np0005532048 nova_compute[253661]: 2025-11-22 09:39:47.730 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:39:47 np0005532048 nova_compute[253661]: 2025-11-22 09:39:47.731 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 310 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.4 MiB/s wr, 177 op/s
Nov 22 04:39:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:39:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 326 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 201 op/s
Nov 22 04:39:51 np0005532048 nova_compute[253661]: 2025-11-22 09:39:51.262 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:51 np0005532048 nova_compute[253661]: 2025-11-22 09:39:51.841 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 326 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 198 op/s
Nov 22 04:39:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:39:52
Nov 22 04:39:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:39:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:39:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.log', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'vms']
Nov 22 04:39:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:39:52 np0005532048 nova_compute[253661]: 2025-11-22 09:39:52.732 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:39:52 np0005532048 nova_compute[253661]: 2025-11-22 09:39:52.732 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:39:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:39:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:39:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:39:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:39:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:39:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:39:53 np0005532048 nova_compute[253661]: 2025-11-22 09:39:53.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.826740) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804393826775, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 706, "num_deletes": 250, "total_data_size": 832358, "memory_usage": 845216, "flush_reason": "Manual Compaction"}
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804393831733, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 547992, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47896, "largest_seqno": 48601, "table_properties": {"data_size": 544847, "index_size": 989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8440, "raw_average_key_size": 20, "raw_value_size": 538281, "raw_average_value_size": 1312, "num_data_blocks": 44, "num_entries": 410, "num_filter_entries": 410, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763804339, "oldest_key_time": 1763804339, "file_creation_time": 1763804393, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 5016 microseconds, and 2188 cpu microseconds.
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.831758) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 547992 bytes OK
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.831772) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.836299) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.836337) EVENT_LOG_v1 {"time_micros": 1763804393836332, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.836354) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 828702, prev total WAL file size 828702, number of live WAL files 2.
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.836808) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373534' seq:72057594037927935, type:22 .. '6D6772737461740032303035' seq:0, type:0; will stop at (end)
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(535KB)], [110(10MB)]
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804393836860, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11336553, "oldest_snapshot_seqno": -1}
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6876 keys, 8306246 bytes, temperature: kUnknown
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804393874553, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 8306246, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8262017, "index_size": 25941, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 179595, "raw_average_key_size": 26, "raw_value_size": 8140718, "raw_average_value_size": 1183, "num_data_blocks": 1006, "num_entries": 6876, "num_filter_entries": 6876, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763804393, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.874807) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 8306246 bytes
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.876339) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 300.1 rd, 219.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.3 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(35.8) write-amplify(15.2) OK, records in: 7365, records dropped: 489 output_compression: NoCompression
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.876358) EVENT_LOG_v1 {"time_micros": 1763804393876348, "job": 66, "event": "compaction_finished", "compaction_time_micros": 37780, "compaction_time_cpu_micros": 22266, "output_level": 6, "num_output_files": 1, "total_output_size": 8306246, "num_input_records": 7365, "num_output_records": 6876, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804393876563, "job": 66, "event": "table_file_deletion", "file_number": 112}
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804393878211, "job": 66, "event": "table_file_deletion", "file_number": 110}
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.836734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.878265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.878271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.878272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.878274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:39:53 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:39:53.878275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:39:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 326 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Nov 22 04:39:54 np0005532048 podman[381829]: 2025-11-22 09:39:54.373175811 +0000 UTC m=+0.064622138 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:39:54 np0005532048 podman[381830]: 2025-11-22 09:39:54.390166973 +0000 UTC m=+0.080902963 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:39:54 np0005532048 nova_compute[253661]: 2025-11-22 09:39:54.964 253665 DEBUG nova.compute.manager [req-7f565998-12ed-4b35-811f-05fb4367c507 req-e7d1f652-7599-4940-ac6a-cac80f3aa8ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-changed-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:39:54 np0005532048 nova_compute[253661]: 2025-11-22 09:39:54.964 253665 DEBUG nova.compute.manager [req-7f565998-12ed-4b35-811f-05fb4367c507 req-e7d1f652-7599-4940-ac6a-cac80f3aa8ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing instance network info cache due to event network-changed-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:39:54 np0005532048 nova_compute[253661]: 2025-11-22 09:39:54.965 253665 DEBUG oslo_concurrency.lockutils [req-7f565998-12ed-4b35-811f-05fb4367c507 req-e7d1f652-7599-4940-ac6a-cac80f3aa8ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:39:54 np0005532048 nova_compute[253661]: 2025-11-22 09:39:54.965 253665 DEBUG oslo_concurrency.lockutils [req-7f565998-12ed-4b35-811f-05fb4367c507 req-e7d1f652-7599-4940-ac6a-cac80f3aa8ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:39:54 np0005532048 nova_compute[253661]: 2025-11-22 09:39:54.965 253665 DEBUG nova.network.neutron [req-7f565998-12ed-4b35-811f-05fb4367c507 req-e7d1f652-7599-4940-ac6a-cac80f3aa8ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing network info cache for port b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:39:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:39:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:39:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:39:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:39:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:39:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 326 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Nov 22 04:39:56 np0005532048 nova_compute[253661]: 2025-11-22 09:39:56.262 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:39:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:39:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:39:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:39:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:39:56 np0005532048 nova_compute[253661]: 2025-11-22 09:39:56.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:39:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:56Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9b:ff:5c 10.100.0.3
Nov 22 04:39:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:39:56Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:ff:5c 10.100.0.3
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.124 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.124 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.139 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.228 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.229 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.236 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.236 253665 INFO nova.compute.claims [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.403 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.521 253665 DEBUG nova.network.neutron [req-7f565998-12ed-4b35-811f-05fb4367c507 req-e7d1f652-7599-4940-ac6a-cac80f3aa8ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updated VIF entry in instance network info cache for port b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.522 253665 DEBUG nova.network.neutron [req-7f565998-12ed-4b35-811f-05fb4367c507 req-e7d1f652-7599-4940-ac6a-cac80f3aa8ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.539 253665 DEBUG oslo_concurrency.lockutils [req-7f565998-12ed-4b35-811f-05fb4367c507 req-e7d1f652-7599-4940-ac6a-cac80f3aa8ad 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:39:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:39:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1577341996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.859 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.867 253665 DEBUG nova.compute.provider_tree [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.888 253665 DEBUG nova.scheduler.client.report [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.927 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.928 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.964 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.965 253665 DEBUG nova.network.neutron [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:39:57 np0005532048 nova_compute[253661]: 2025-11-22 09:39:57.992 253665 INFO nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:39:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 334 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 2.7 MiB/s wr, 78 op/s
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.011 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.097 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.098 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.099 253665 INFO nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Creating image(s)#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.121 253665 DEBUG nova.storage.rbd_utils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7f88c3e8-e667-4d9a-8178-c99843560719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.146 253665 DEBUG nova.storage.rbd_utils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7f88c3e8-e667-4d9a-8178-c99843560719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.179 253665 DEBUG nova.storage.rbd_utils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7f88c3e8-e667-4d9a-8178-c99843560719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.185 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.244 253665 DEBUG nova.policy [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.285 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.287 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.288 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.289 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.328 253665 DEBUG nova.storage.rbd_utils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7f88c3e8-e667-4d9a-8178-c99843560719_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.334 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 7f88c3e8-e667-4d9a-8178-c99843560719_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.682 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 7f88c3e8-e667-4d9a-8178-c99843560719_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.758 253665 DEBUG nova.storage.rbd_utils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 7f88c3e8-e667-4d9a-8178-c99843560719_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:39:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.868 253665 DEBUG nova.objects.instance [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 7f88c3e8-e667-4d9a-8178-c99843560719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.886 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.886 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Ensure instance console log exists: /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.887 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.887 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:39:58 np0005532048 nova_compute[253661]: 2025-11-22 09:39:58.887 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:39:59 np0005532048 podman[382057]: 2025-11-22 09:39:59.410149104 +0000 UTC m=+0.098324846 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 04:39:59 np0005532048 nova_compute[253661]: 2025-11-22 09:39:59.562 253665 DEBUG nova.network.neutron [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Successfully created port: 83f684f5-d7e5-44a8-960d-efe4ce81e023 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:40:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 373 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 3.5 MiB/s wr, 125 op/s
Nov 22 04:40:00 np0005532048 nova_compute[253661]: 2025-11-22 09:40:00.256 253665 DEBUG nova.network.neutron [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Successfully created port: 454bebe0-5237-48cb-8cf5-10be46f6d33a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:40:01 np0005532048 nova_compute[253661]: 2025-11-22 09:40:01.267 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:01 np0005532048 nova_compute[253661]: 2025-11-22 09:40:01.716 253665 DEBUG nova.network.neutron [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Successfully updated port: 83f684f5-d7e5-44a8-960d-efe4ce81e023 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:40:01 np0005532048 nova_compute[253661]: 2025-11-22 09:40:01.803 253665 DEBUG nova.compute.manager [req-ff7540ea-35e8-4b8f-a982-d316366c4f77 req-4a408a1c-168d-4cbb-970c-417be46247d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-changed-83f684f5-d7e5-44a8-960d-efe4ce81e023 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:01 np0005532048 nova_compute[253661]: 2025-11-22 09:40:01.804 253665 DEBUG nova.compute.manager [req-ff7540ea-35e8-4b8f-a982-d316366c4f77 req-4a408a1c-168d-4cbb-970c-417be46247d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Refreshing instance network info cache due to event network-changed-83f684f5-d7e5-44a8-960d-efe4ce81e023. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:40:01 np0005532048 nova_compute[253661]: 2025-11-22 09:40:01.804 253665 DEBUG oslo_concurrency.lockutils [req-ff7540ea-35e8-4b8f-a982-d316366c4f77 req-4a408a1c-168d-4cbb-970c-417be46247d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:40:01 np0005532048 nova_compute[253661]: 2025-11-22 09:40:01.804 253665 DEBUG oslo_concurrency.lockutils [req-ff7540ea-35e8-4b8f-a982-d316366c4f77 req-4a408a1c-168d-4cbb-970c-417be46247d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:40:01 np0005532048 nova_compute[253661]: 2025-11-22 09:40:01.804 253665 DEBUG nova.network.neutron [req-ff7540ea-35e8-4b8f-a982-d316366c4f77 req-4a408a1c-168d-4cbb-970c-417be46247d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Refreshing network info cache for port 83f684f5-d7e5-44a8-960d-efe4ce81e023 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:40:01 np0005532048 nova_compute[253661]: 2025-11-22 09:40:01.844 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:01 np0005532048 nova_compute[253661]: 2025-11-22 09:40:01.985 253665 DEBUG nova.network.neutron [req-ff7540ea-35e8-4b8f-a982-d316366c4f77 req-4a408a1c-168d-4cbb-970c-417be46247d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 3.6 MiB/s wr, 81 op/s
Nov 22 04:40:02 np0005532048 nova_compute[253661]: 2025-11-22 09:40:02.427 253665 DEBUG nova.network.neutron [req-ff7540ea-35e8-4b8f-a982-d316366c4f77 req-4a408a1c-168d-4cbb-970c-417be46247d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:40:02 np0005532048 nova_compute[253661]: 2025-11-22 09:40:02.443 253665 DEBUG oslo_concurrency.lockutils [req-ff7540ea-35e8-4b8f-a982-d316366c4f77 req-4a408a1c-168d-4cbb-970c-417be46247d3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:40:02 np0005532048 nova_compute[253661]: 2025-11-22 09:40:02.445 253665 DEBUG nova.network.neutron [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Successfully updated port: 454bebe0-5237-48cb-8cf5-10be46f6d33a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:40:02 np0005532048 nova_compute[253661]: 2025-11-22 09:40:02.454 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:40:02 np0005532048 nova_compute[253661]: 2025-11-22 09:40:02.455 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:40:02 np0005532048 nova_compute[253661]: 2025-11-22 09:40:02.455 253665 DEBUG nova.network.neutron [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:40:02 np0005532048 nova_compute[253661]: 2025-11-22 09:40:02.720 253665 DEBUG nova.network.neutron [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033671915698053004 of space, bias 1.0, pg target 1.0101574709415901 quantized to 32 (current 32)
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:40:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 22 04:40:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:40:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Nov 22 04:40:04 np0005532048 nova_compute[253661]: 2025-11-22 09:40:04.015 253665 DEBUG nova.compute.manager [req-184eb8c0-360c-43fa-8226-f23df689bc41 req-c5b9fa18-3e8e-4a6f-949c-5f21c62f4137 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-changed-454bebe0-5237-48cb-8cf5-10be46f6d33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:04 np0005532048 nova_compute[253661]: 2025-11-22 09:40:04.017 253665 DEBUG nova.compute.manager [req-184eb8c0-360c-43fa-8226-f23df689bc41 req-c5b9fa18-3e8e-4a6f-949c-5f21c62f4137 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Refreshing instance network info cache due to event network-changed-454bebe0-5237-48cb-8cf5-10be46f6d33a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:40:04 np0005532048 nova_compute[253661]: 2025-11-22 09:40:04.017 253665 DEBUG oslo_concurrency.lockutils [req-184eb8c0-360c-43fa-8226-f23df689bc41 req-c5b9fa18-3e8e-4a6f-949c-5f21c62f4137 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:40:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:40:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:40:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:40:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:40:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:40:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:40:04 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 0fa3dd52-8aa3-4d06-b35c-6f7493056435 does not exist
Nov 22 04:40:04 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d3cd356f-5ee3-4816-acac-b56f8350b4a2 does not exist
Nov 22 04:40:04 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ea6f0012-08ba-48d7-92ae-f3c6bd427431 does not exist
Nov 22 04:40:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:40:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:40:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:40:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:40:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:40:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.199 253665 DEBUG nova.network.neutron [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updating instance_info_cache with network_info: [{"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": 
"2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.214 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.214 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Instance network_info: |[{"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.215 253665 DEBUG oslo_concurrency.lockutils [req-184eb8c0-360c-43fa-8226-f23df689bc41 req-c5b9fa18-3e8e-4a6f-949c-5f21c62f4137 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.215 253665 DEBUG nova.network.neutron [req-184eb8c0-360c-43fa-8226-f23df689bc41 req-c5b9fa18-3e8e-4a6f-949c-5f21c62f4137 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Refreshing network info cache for port 454bebe0-5237-48cb-8cf5-10be46f6d33a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.229 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Start _get_guest_xml network_info=[{"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.235 253665 WARNING nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.242 253665 DEBUG nova.virt.libvirt.host [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.243 253665 DEBUG nova.virt.libvirt.host [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.249 253665 DEBUG nova.virt.libvirt.host [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.250 253665 DEBUG nova.virt.libvirt.host [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.250 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.250 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.251 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.251 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.251 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.251 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.252 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.252 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.252 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.252 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.252 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.253 253665 DEBUG nova.virt.hardware [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.255 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:40:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:40:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:40:05 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:40:05 np0005532048 podman[382354]: 2025-11-22 09:40:05.474512497 +0000 UTC m=+0.113731910 container create e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lamarr, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 04:40:05 np0005532048 podman[382354]: 2025-11-22 09:40:05.387879493 +0000 UTC m=+0.027098916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:40:05 np0005532048 systemd[1]: Started libpod-conmon-e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759.scope.
Nov 22 04:40:05 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:40:05 np0005532048 podman[382354]: 2025-11-22 09:40:05.592719947 +0000 UTC m=+0.231939380 container init e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lamarr, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:40:05 np0005532048 podman[382354]: 2025-11-22 09:40:05.602715185 +0000 UTC m=+0.241934608 container start e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:40:05 np0005532048 podman[382354]: 2025-11-22 09:40:05.608004907 +0000 UTC m=+0.247224350 container attach e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 04:40:05 np0005532048 sleepy_lamarr[382389]: 167 167
Nov 22 04:40:05 np0005532048 systemd[1]: libpod-e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759.scope: Deactivated successfully.
Nov 22 04:40:05 np0005532048 podman[382354]: 2025-11-22 09:40:05.611188486 +0000 UTC m=+0.250407899 container died e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:40:05 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b554de41d47141a7a19f8bc69bc620771cbc5d7bd0f10d2875df11f32bbfdd1c-merged.mount: Deactivated successfully.
Nov 22 04:40:05 np0005532048 podman[382354]: 2025-11-22 09:40:05.683900334 +0000 UTC m=+0.323119757 container remove e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:40:05 np0005532048 systemd[1]: libpod-conmon-e73abf11103bd0a43570aab1dee5d9f01de491def8c6a5c0e821677e5df89759.scope: Deactivated successfully.
Nov 22 04:40:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:40:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4050339384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.815 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.842 253665 DEBUG nova.storage.rbd_utils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7f88c3e8-e667-4d9a-8178-c99843560719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:40:05 np0005532048 nova_compute[253661]: 2025-11-22 09:40:05.849 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:40:05 np0005532048 podman[382433]: 2025-11-22 09:40:05.942125366 +0000 UTC m=+0.054801664 container create 44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_khayyam, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 04:40:05 np0005532048 systemd[1]: Started libpod-conmon-44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf.scope.
Nov 22 04:40:06 np0005532048 podman[382433]: 2025-11-22 09:40:05.91658525 +0000 UTC m=+0.029261578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:40:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Nov 22 04:40:06 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:40:06 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e70e50b300f952487c5dcb34cfa4f2f68def9ed28c530d6df216bdc0629fc07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:40:06 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e70e50b300f952487c5dcb34cfa4f2f68def9ed28c530d6df216bdc0629fc07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:40:06 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e70e50b300f952487c5dcb34cfa4f2f68def9ed28c530d6df216bdc0629fc07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:40:06 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e70e50b300f952487c5dcb34cfa4f2f68def9ed28c530d6df216bdc0629fc07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:40:06 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e70e50b300f952487c5dcb34cfa4f2f68def9ed28c530d6df216bdc0629fc07/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:40:06 np0005532048 podman[382433]: 2025-11-22 09:40:06.044022109 +0000 UTC m=+0.156698427 container init 44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:40:06 np0005532048 podman[382433]: 2025-11-22 09:40:06.051084455 +0000 UTC m=+0.163760753 container start 44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:40:06 np0005532048 podman[382433]: 2025-11-22 09:40:06.05891123 +0000 UTC m=+0.171587528 container attach 44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_khayyam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.304 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:40:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3045376775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.368 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.370 253665 DEBUG nova.virt.libvirt.vif [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1324430276',display_name='tempest-TestGettingAddress-server-1324430276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1324430276',id=125,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-blcxy7wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:58Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7f88c3e8-e667-4d9a-8178-c99843560719,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.370 253665 DEBUG nova.network.os_vif_util [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.371 253665 DEBUG nova.network.os_vif_util [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:93:6c,bridge_name='br-int',has_traffic_filtering=True,id=83f684f5-d7e5-44a8-960d-efe4ce81e023,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83f684f5-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.372 253665 DEBUG nova.virt.libvirt.vif [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1324430276',display_name='tempest-TestGettingAddress-server-1324430276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1324430276',id=125,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-blcxy7wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:58Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7f88c3e8-e667-4d9a-8178-c99843560719,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.373 253665 DEBUG nova.network.os_vif_util [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.373 253665 DEBUG nova.network.os_vif_util [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2b:99,bridge_name='br-int',has_traffic_filtering=True,id=454bebe0-5237-48cb-8cf5-10be46f6d33a,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap454bebe0-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.376 253665 DEBUG nova.objects.instance [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f88c3e8-e667-4d9a-8178-c99843560719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.408 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  <uuid>7f88c3e8-e667-4d9a-8178-c99843560719</uuid>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  <name>instance-0000007d</name>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestGettingAddress-server-1324430276</nova:name>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:40:05</nova:creationTime>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:        <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:        <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:        <nova:port uuid="83f684f5-d7e5-44a8-960d-efe4ce81e023">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:        <nova:port uuid="454bebe0-5237-48cb-8cf5-10be46f6d33a">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe0c:2b99" ipVersion="6"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe0c:2b99" ipVersion="6"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <entry name="serial">7f88c3e8-e667-4d9a-8178-c99843560719</entry>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <entry name="uuid">7f88c3e8-e667-4d9a-8178-c99843560719</entry>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/7f88c3e8-e667-4d9a-8178-c99843560719_disk">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/7f88c3e8-e667-4d9a-8178-c99843560719_disk.config">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:29:93:6c"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <target dev="tap83f684f5-d7"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:0c:2b:99"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <target dev="tap454bebe0-52"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719/console.log" append="off"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:40:06 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:40:06 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:40:06 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:40:06 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.409 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Preparing to wait for external event network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.409 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.409 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.409 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.409 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Preparing to wait for external event network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.410 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.410 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.410 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.410 253665 DEBUG nova.virt.libvirt.vif [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1324430276',display_name='tempest-TestGettingAddress-server-1324430276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1324430276',id=125,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-blcxy7wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:58Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7f88c3e8-e667-4d9a-8178-c99843560719,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.411 253665 DEBUG nova.network.os_vif_util [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.411 253665 DEBUG nova.network.os_vif_util [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:93:6c,bridge_name='br-int',has_traffic_filtering=True,id=83f684f5-d7e5-44a8-960d-efe4ce81e023,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83f684f5-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.412 253665 DEBUG os_vif [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:93:6c,bridge_name='br-int',has_traffic_filtering=True,id=83f684f5-d7e5-44a8-960d-efe4ce81e023,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83f684f5-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.415 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.415 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.415 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.419 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.419 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap83f684f5-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.420 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap83f684f5-d7, col_values=(('external_ids', {'iface-id': '83f684f5-d7e5-44a8-960d-efe4ce81e023', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:29:93:6c', 'vm-uuid': '7f88c3e8-e667-4d9a-8178-c99843560719'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.421 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.423 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:40:06 np0005532048 NetworkManager[48920]: <info>  [1763804406.4243] manager: (tap83f684f5-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/541)
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.433 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.434 253665 INFO os_vif [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:93:6c,bridge_name='br-int',has_traffic_filtering=True,id=83f684f5-d7e5-44a8-960d-efe4ce81e023,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83f684f5-d7')#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.435 253665 DEBUG nova.virt.libvirt.vif [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:39:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1324430276',display_name='tempest-TestGettingAddress-server-1324430276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1324430276',id=125,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-blcxy7wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:39:58Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7f88c3e8-e667-4d9a-8178-c99843560719,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.435 253665 DEBUG nova.network.os_vif_util [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.436 253665 DEBUG nova.network.os_vif_util [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2b:99,bridge_name='br-int',has_traffic_filtering=True,id=454bebe0-5237-48cb-8cf5-10be46f6d33a,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap454bebe0-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.436 253665 DEBUG os_vif [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2b:99,bridge_name='br-int',has_traffic_filtering=True,id=454bebe0-5237-48cb-8cf5-10be46f6d33a,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap454bebe0-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.437 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.437 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.437 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.439 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.439 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap454bebe0-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.439 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap454bebe0-52, col_values=(('external_ids', {'iface-id': '454bebe0-5237-48cb-8cf5-10be46f6d33a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:2b:99', 'vm-uuid': '7f88c3e8-e667-4d9a-8178-c99843560719'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.440 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:06 np0005532048 NetworkManager[48920]: <info>  [1763804406.4419] manager: (tap454bebe0-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.443 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.448 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.449 253665 INFO os_vif [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2b:99,bridge_name='br-int',has_traffic_filtering=True,id=454bebe0-5237-48cb-8cf5-10be46f6d33a,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap454bebe0-52')#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.523 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.524 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.524 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:29:93:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.524 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:0c:2b:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.525 253665 INFO nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Using config drive#033[00m
Nov 22 04:40:06 np0005532048 nova_compute[253661]: 2025-11-22 09:40:06.560 253665 DEBUG nova.storage.rbd_utils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7f88c3e8-e667-4d9a-8178-c99843560719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:40:07 np0005532048 wizardly_khayyam[382452]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:40:07 np0005532048 wizardly_khayyam[382452]: --> relative data size: 1.0
Nov 22 04:40:07 np0005532048 wizardly_khayyam[382452]: --> All data devices are unavailable
Nov 22 04:40:07 np0005532048 systemd[1]: libpod-44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf.scope: Deactivated successfully.
Nov 22 04:40:07 np0005532048 systemd[1]: libpod-44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf.scope: Consumed 1.110s CPU time.
Nov 22 04:40:07 np0005532048 podman[382433]: 2025-11-22 09:40:07.323677533 +0000 UTC m=+1.436353851 container died 44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_khayyam, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:40:07 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6e70e50b300f952487c5dcb34cfa4f2f68def9ed28c530d6df216bdc0629fc07-merged.mount: Deactivated successfully.
Nov 22 04:40:07 np0005532048 nova_compute[253661]: 2025-11-22 09:40:07.753 253665 INFO nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Creating config drive at /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719/disk.config#033[00m
Nov 22 04:40:07 np0005532048 nova_compute[253661]: 2025-11-22 09:40:07.758 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp87djkys6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:40:07 np0005532048 podman[382433]: 2025-11-22 09:40:07.760553427 +0000 UTC m=+1.873229765 container remove 44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:40:07 np0005532048 systemd[1]: libpod-conmon-44fb74f482737272d138f75db6d49f784a1766cd61889dd98592dfdc244dfbaf.scope: Deactivated successfully.
Nov 22 04:40:07 np0005532048 nova_compute[253661]: 2025-11-22 09:40:07.811 253665 DEBUG nova.network.neutron [req-184eb8c0-360c-43fa-8226-f23df689bc41 req-c5b9fa18-3e8e-4a6f-949c-5f21c62f4137 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updated VIF entry in instance network info cache for port 454bebe0-5237-48cb-8cf5-10be46f6d33a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:40:07 np0005532048 nova_compute[253661]: 2025-11-22 09:40:07.812 253665 DEBUG nova.network.neutron [req-184eb8c0-360c-43fa-8226-f23df689bc41 req-c5b9fa18-3e8e-4a6f-949c-5f21c62f4137 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updating instance_info_cache with network_info: [{"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:40:07 np0005532048 nova_compute[253661]: 2025-11-22 09:40:07.831 253665 DEBUG oslo_concurrency.lockutils [req-184eb8c0-360c-43fa-8226-f23df689bc41 req-c5b9fa18-3e8e-4a6f-949c-5f21c62f4137 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:40:07 np0005532048 nova_compute[253661]: 2025-11-22 09:40:07.918 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp87djkys6" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:40:07 np0005532048 nova_compute[253661]: 2025-11-22 09:40:07.948 253665 DEBUG nova.storage.rbd_utils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 7f88c3e8-e667-4d9a-8178-c99843560719_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:40:07 np0005532048 nova_compute[253661]: 2025-11-22 09:40:07.954 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719/disk.config 7f88c3e8-e667-4d9a-8178-c99843560719_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:40:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Nov 22 04:40:08 np0005532048 nova_compute[253661]: 2025-11-22 09:40:08.210 253665 DEBUG oslo_concurrency.processutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719/disk.config 7f88c3e8-e667-4d9a-8178-c99843560719_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.256s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:40:08 np0005532048 nova_compute[253661]: 2025-11-22 09:40:08.212 253665 INFO nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Deleting local config drive /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719/disk.config because it was imported into RBD.#033[00m
Nov 22 04:40:08 np0005532048 NetworkManager[48920]: <info>  [1763804408.2841] manager: (tap83f684f5-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/543)
Nov 22 04:40:08 np0005532048 kernel: tap83f684f5-d7: entered promiscuous mode
Nov 22 04:40:08 np0005532048 nova_compute[253661]: 2025-11-22 09:40:08.289 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:08Z|01318|binding|INFO|Claiming lport 83f684f5-d7e5-44a8-960d-efe4ce81e023 for this chassis.
Nov 22 04:40:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:08Z|01319|binding|INFO|83f684f5-d7e5-44a8-960d-efe4ce81e023: Claiming fa:16:3e:29:93:6c 10.100.0.14
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.299 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:93:6c 10.100.0.14'], port_security=['fa:16:3e:29:93:6c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7f88c3e8-e667-4d9a-8178-c99843560719', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33aa2b15-84be-4fa8-858f-98182293b1b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb584c12-cffa-488f-adbc-2a255a5cdce2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a82afa9d-1a09-411a-8866-4ce961a27350, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=83f684f5-d7e5-44a8-960d-efe4ce81e023) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.300 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 83f684f5-d7e5-44a8-960d-efe4ce81e023 in datapath 33aa2b15-84be-4fa8-858f-98182293b1b2 bound to our chassis#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.303 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33aa2b15-84be-4fa8-858f-98182293b1b2#033[00m
Nov 22 04:40:08 np0005532048 NetworkManager[48920]: <info>  [1763804408.3161] manager: (tap454bebe0-52): new Tun device (/org/freedesktop/NetworkManager/Devices/544)
Nov 22 04:40:08 np0005532048 kernel: tap454bebe0-52: entered promiscuous mode
Nov 22 04:40:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:08Z|01320|binding|INFO|Setting lport 83f684f5-d7e5-44a8-960d-efe4ce81e023 ovn-installed in OVS
Nov 22 04:40:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:08Z|01321|binding|INFO|Setting lport 83f684f5-d7e5-44a8-960d-efe4ce81e023 up in Southbound
Nov 22 04:40:08 np0005532048 nova_compute[253661]: 2025-11-22 09:40:08.323 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.327 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[63c9a075-6853-498b-912a-e335acd40e60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:08Z|01322|if_status|INFO|Dropped 14 log messages in last 44 seconds (most recently, 44 seconds ago) due to excessive rate
Nov 22 04:40:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:08Z|01323|if_status|INFO|Not updating pb chassis for 454bebe0-5237-48cb-8cf5-10be46f6d33a now as sb is readonly
Nov 22 04:40:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:08Z|01324|binding|INFO|Claiming lport 454bebe0-5237-48cb-8cf5-10be46f6d33a for this chassis.
Nov 22 04:40:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:08Z|01325|binding|INFO|454bebe0-5237-48cb-8cf5-10be46f6d33a: Claiming fa:16:3e:0c:2b:99 2001:db8:0:1:f816:3eff:fe0c:2b99 2001:db8::f816:3eff:fe0c:2b99
Nov 22 04:40:08 np0005532048 systemd-udevd[382725]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:40:08 np0005532048 systemd-udevd[382721]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.335 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:2b:99 2001:db8:0:1:f816:3eff:fe0c:2b99 2001:db8::f816:3eff:fe0c:2b99'], port_security=['fa:16:3e:0c:2b:99 2001:db8:0:1:f816:3eff:fe0c:2b99 2001:db8::f816:3eff:fe0c:2b99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe0c:2b99/64 2001:db8::f816:3eff:fe0c:2b99/64', 'neutron:device_id': '7f88c3e8-e667-4d9a-8178-c99843560719', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20228844-2184-465b-8bc3-e846cfb6d3cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb584c12-cffa-488f-adbc-2a255a5cdce2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd6b77e6-a2ac-463b-a37b-14dc60b71e56, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=454bebe0-5237-48cb-8cf5-10be46f6d33a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:40:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:08Z|01326|binding|INFO|Setting lport 454bebe0-5237-48cb-8cf5-10be46f6d33a ovn-installed in OVS
Nov 22 04:40:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:08Z|01327|binding|INFO|Setting lport 454bebe0-5237-48cb-8cf5-10be46f6d33a up in Southbound
Nov 22 04:40:08 np0005532048 nova_compute[253661]: 2025-11-22 09:40:08.344 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:08 np0005532048 NetworkManager[48920]: <info>  [1763804408.3492] device (tap454bebe0-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:40:08 np0005532048 NetworkManager[48920]: <info>  [1763804408.3501] device (tap454bebe0-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:40:08 np0005532048 NetworkManager[48920]: <info>  [1763804408.3526] device (tap83f684f5-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:40:08 np0005532048 NetworkManager[48920]: <info>  [1763804408.3536] device (tap83f684f5-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:40:08 np0005532048 systemd-machined[215941]: New machine qemu-156-instance-0000007d.
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.384 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0fedb400-2e5e-4b99-a7d8-e6885e1991e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.388 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6d8fbc20-6412-4adf-be4b-8a00d2ac030e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:08 np0005532048 systemd[1]: Started Virtual Machine qemu-156-instance-0000007d.
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.429 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[19224d84-f53a-4fb0-baf0-e3ee7855b0f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.453 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8d50c73a-da45-41de-a667-9a82d4a7ae4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33aa2b15-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:23:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721624, 'reachable_time': 43164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382754, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:08 np0005532048 podman[382740]: 2025-11-22 09:40:08.465261113 +0000 UTC m=+0.051847581 container create 82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.475 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[427b96c3-dc76-4882-a09e-a0f8ddbe83c4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap33aa2b15-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721638, 'tstamp': 721638}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382761, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap33aa2b15-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721641, 'tstamp': 721641}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382761, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.478 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33aa2b15-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:08 np0005532048 nova_compute[253661]: 2025-11-22 09:40:08.500 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:08 np0005532048 nova_compute[253661]: 2025-11-22 09:40:08.501 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.501 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33aa2b15-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.502 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.502 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33aa2b15-80, col_values=(('external_ids', {'iface-id': 'c8541406-177e-4d49-a6da-f639419da399'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.502 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.503 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 454bebe0-5237-48cb-8cf5-10be46f6d33a in datapath 20228844-2184-465b-8bc3-e846cfb6d3cb unbound from our chassis#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.505 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20228844-2184-465b-8bc3-e846cfb6d3cb#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.526 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d62d1f03-bbba-450b-9126-c1b290c04b5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:08 np0005532048 systemd[1]: Started libpod-conmon-82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151.scope.
Nov 22 04:40:08 np0005532048 podman[382740]: 2025-11-22 09:40:08.445008418 +0000 UTC m=+0.031594906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:40:08 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.568 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fe33ea10-6dba-44bc-a06d-d58e4bb0a025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.572 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ec58e6bf-ff32-45ea-8986-a38f1931b86e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:08 np0005532048 podman[382740]: 2025-11-22 09:40:08.581686038 +0000 UTC m=+0.168272506 container init 82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:40:08 np0005532048 podman[382740]: 2025-11-22 09:40:08.590693861 +0000 UTC m=+0.177280329 container start 82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:40:08 np0005532048 thirsty_tesla[382767]: 167 167
Nov 22 04:40:08 np0005532048 systemd[1]: libpod-82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151.scope: Deactivated successfully.
Nov 22 04:40:08 np0005532048 conmon[382767]: conmon 82edc5e6290b15f0f133 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151.scope/container/memory.events
Nov 22 04:40:08 np0005532048 podman[382740]: 2025-11-22 09:40:08.599393678 +0000 UTC m=+0.185980166 container attach 82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:40:08 np0005532048 podman[382740]: 2025-11-22 09:40:08.600458754 +0000 UTC m=+0.187045222 container died 82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.611 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d0a4df-6d06-495e-88b9-d9483299b4f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay-cf6c8669fd2b72544fd3133b23252ffcfd04eb907d89576f71bf24064b1c1530-merged.mount: Deactivated successfully.
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.638 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cea10039-706d-42b9-9c7a-9cfd81edd90c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20228844-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:0f:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 4, 'rx_bytes': 1846, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 4, 'rx_bytes': 1846, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 375], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721718, 'reachable_time': 22099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382783, 'error': None, 'target': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.660 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ed81a7-4f39-481e-b4ac-15c72830730d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap20228844-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721733, 'tstamp': 721733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382786, 'error': None, 'target': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.662 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20228844-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:08 np0005532048 nova_compute[253661]: 2025-11-22 09:40:08.663 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:08 np0005532048 nova_compute[253661]: 2025-11-22 09:40:08.665 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.665 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20228844-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.666 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.666 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20228844-20, col_values=(('external_ids', {'iface-id': 'c6eb41f8-4dde-4c2b-a6c7-dd47868a17b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:08.666 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:08 np0005532048 podman[382740]: 2025-11-22 09:40:08.666449225 +0000 UTC m=+0.253035693 container remove 82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 04:40:08 np0005532048 systemd[1]: libpod-conmon-82edc5e6290b15f0f13345d746959bc19ac7463c8bf4a29078ef6f1473160151.scope: Deactivated successfully.
Nov 22 04:40:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:40:08 np0005532048 nova_compute[253661]: 2025-11-22 09:40:08.886 253665 DEBUG nova.compute.manager [req-8b6ea385-4c15-4232-a08f-a4d8dd46e2da req-46e2e238-7785-47ed-9364-d2255013fa37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:08 np0005532048 nova_compute[253661]: 2025-11-22 09:40:08.887 253665 DEBUG oslo_concurrency.lockutils [req-8b6ea385-4c15-4232-a08f-a4d8dd46e2da req-46e2e238-7785-47ed-9364-d2255013fa37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:08 np0005532048 nova_compute[253661]: 2025-11-22 09:40:08.887 253665 DEBUG oslo_concurrency.lockutils [req-8b6ea385-4c15-4232-a08f-a4d8dd46e2da req-46e2e238-7785-47ed-9364-d2255013fa37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:08 np0005532048 nova_compute[253661]: 2025-11-22 09:40:08.887 253665 DEBUG oslo_concurrency.lockutils [req-8b6ea385-4c15-4232-a08f-a4d8dd46e2da req-46e2e238-7785-47ed-9364-d2255013fa37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:08 np0005532048 nova_compute[253661]: 2025-11-22 09:40:08.887 253665 DEBUG nova.compute.manager [req-8b6ea385-4c15-4232-a08f-a4d8dd46e2da req-46e2e238-7785-47ed-9364-d2255013fa37 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Processing event network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:40:08 np0005532048 podman[382801]: 2025-11-22 09:40:08.910453134 +0000 UTC m=+0.052473157 container create 933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:40:08 np0005532048 systemd[1]: Started libpod-conmon-933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785.scope.
Nov 22 04:40:08 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:40:08 np0005532048 podman[382801]: 2025-11-22 09:40:08.883422761 +0000 UTC m=+0.025442814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:40:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0487960d7eaf541fc2fa817bca4bcbf2f3516009f8c560b61f7c1cae7fe87e44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:40:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0487960d7eaf541fc2fa817bca4bcbf2f3516009f8c560b61f7c1cae7fe87e44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:40:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0487960d7eaf541fc2fa817bca4bcbf2f3516009f8c560b61f7c1cae7fe87e44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:40:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0487960d7eaf541fc2fa817bca4bcbf2f3516009f8c560b61f7c1cae7fe87e44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:40:09 np0005532048 podman[382801]: 2025-11-22 09:40:09.00279466 +0000 UTC m=+0.144814683 container init 933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 04:40:09 np0005532048 podman[382801]: 2025-11-22 09:40:09.01369071 +0000 UTC m=+0.155710733 container start 933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:40:09 np0005532048 podman[382801]: 2025-11-22 09:40:09.017371852 +0000 UTC m=+0.159391975 container attach 933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 04:40:09 np0005532048 nova_compute[253661]: 2025-11-22 09:40:09.049 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804409.0483472, 7f88c3e8-e667-4d9a-8178-c99843560719 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:40:09 np0005532048 nova_compute[253661]: 2025-11-22 09:40:09.050 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] VM Started (Lifecycle Event)#033[00m
Nov 22 04:40:09 np0005532048 nova_compute[253661]: 2025-11-22 09:40:09.069 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:40:09 np0005532048 nova_compute[253661]: 2025-11-22 09:40:09.075 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804409.049988, 7f88c3e8-e667-4d9a-8178-c99843560719 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:40:09 np0005532048 nova_compute[253661]: 2025-11-22 09:40:09.075 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:40:09 np0005532048 nova_compute[253661]: 2025-11-22 09:40:09.091 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:40:09 np0005532048 nova_compute[253661]: 2025-11-22 09:40:09.095 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:40:09 np0005532048 nova_compute[253661]: 2025-11-22 09:40:09.110 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]: {
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:    "0": [
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:        {
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "devices": [
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "/dev/loop3"
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            ],
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "lv_name": "ceph_lv0",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "lv_size": "21470642176",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "name": "ceph_lv0",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "tags": {
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.cluster_name": "ceph",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.crush_device_class": "",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.encrypted": "0",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.osd_id": "0",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.type": "block",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.vdo": "0"
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            },
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "type": "block",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "vg_name": "ceph_vg0"
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:        }
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:    ],
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:    "1": [
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:        {
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "devices": [
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "/dev/loop4"
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            ],
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "lv_name": "ceph_lv1",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "lv_size": "21470642176",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "name": "ceph_lv1",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "tags": {
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.cluster_name": "ceph",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.crush_device_class": "",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.encrypted": "0",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.osd_id": "1",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.type": "block",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.vdo": "0"
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            },
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "type": "block",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "vg_name": "ceph_vg1"
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:        }
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:    ],
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:    "2": [
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:        {
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "devices": [
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "/dev/loop5"
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            ],
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "lv_name": "ceph_lv2",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "lv_size": "21470642176",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "name": "ceph_lv2",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "tags": {
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.cluster_name": "ceph",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.crush_device_class": "",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.encrypted": "0",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.osd_id": "2",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.type": "block",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:                "ceph.vdo": "0"
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            },
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "type": "block",
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:            "vg_name": "ceph_vg2"
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:        }
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]:    ]
Nov 22 04:40:09 np0005532048 competent_stonebraker[382852]: }
Nov 22 04:40:09 np0005532048 systemd[1]: libpod-933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785.scope: Deactivated successfully.
Nov 22 04:40:09 np0005532048 podman[382801]: 2025-11-22 09:40:09.886275741 +0000 UTC m=+1.028295764 container died 933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:40:09 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0487960d7eaf541fc2fa817bca4bcbf2f3516009f8c560b61f7c1cae7fe87e44-merged.mount: Deactivated successfully.
Nov 22 04:40:09 np0005532048 podman[382801]: 2025-11-22 09:40:09.952296363 +0000 UTC m=+1.094316386 container remove 933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 04:40:09 np0005532048 systemd[1]: libpod-conmon-933f9d424de867ebce74530bfb2eab2b3d9d377707bd409ca587ab5483dec785.scope: Deactivated successfully.
Nov 22 04:40:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 3.4 MiB/s wr, 83 op/s
Nov 22 04:40:10 np0005532048 podman[383011]: 2025-11-22 09:40:10.61324688 +0000 UTC m=+0.045446722 container create 6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:40:10 np0005532048 systemd[1]: Started libpod-conmon-6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75.scope.
Nov 22 04:40:10 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:40:10 np0005532048 podman[383011]: 2025-11-22 09:40:10.685945468 +0000 UTC m=+0.118145330 container init 6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 04:40:10 np0005532048 podman[383011]: 2025-11-22 09:40:10.592741409 +0000 UTC m=+0.024941271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:40:10 np0005532048 podman[383011]: 2025-11-22 09:40:10.695265359 +0000 UTC m=+0.127465201 container start 6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:40:10 np0005532048 bold_gauss[383028]: 167 167
Nov 22 04:40:10 np0005532048 systemd[1]: libpod-6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75.scope: Deactivated successfully.
Nov 22 04:40:10 np0005532048 podman[383011]: 2025-11-22 09:40:10.700920019 +0000 UTC m=+0.133119911 container attach 6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:40:10 np0005532048 podman[383011]: 2025-11-22 09:40:10.702093379 +0000 UTC m=+0.134293221 container died 6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gauss, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:40:10 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9a555f728f317851c9ef9548b238d46db31aacaf547e960bca3b8ff64dd2c642-merged.mount: Deactivated successfully.
Nov 22 04:40:10 np0005532048 podman[383011]: 2025-11-22 09:40:10.768558512 +0000 UTC m=+0.200758354 container remove 6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gauss, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:40:10 np0005532048 systemd[1]: libpod-conmon-6eb8b34bfba5011a5469611e273b1ed069388345adbb9420abca08d26977ab75.scope: Deactivated successfully.
Nov 22 04:40:11 np0005532048 podman[383052]: 2025-11-22 09:40:11.009493144 +0000 UTC m=+0.051730088 container create 231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.021 253665 DEBUG nova.compute.manager [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.024 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.024 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.024 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.025 253665 DEBUG nova.compute.manager [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] No event matching network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 in dict_keys([('network-vif-plugged', '454bebe0-5237-48cb-8cf5-10be46f6d33a')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.025 253665 WARNING nova.compute.manager [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received unexpected event network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.025 253665 DEBUG nova.compute.manager [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.025 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.025 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.026 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.026 253665 DEBUG nova.compute.manager [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Processing event network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.026 253665 DEBUG nova.compute.manager [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.026 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.026 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.026 253665 DEBUG oslo_concurrency.lockutils [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.027 253665 DEBUG nova.compute.manager [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] No waiting events found dispatching network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.027 253665 WARNING nova.compute.manager [req-49b019f0-6b28-4b5f-809d-662d94189362 req-fa5eb1ba-e8d5-4fab-9299-af8d09e87522 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received unexpected event network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a for instance with vm_state building and task_state spawning.
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.028 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.034 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804411.033806, 7f88c3e8-e667-4d9a-8178-c99843560719 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.034 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] VM Resumed (Lifecycle Event)
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.036 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.041 253665 INFO nova.virt.libvirt.driver [-] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Instance spawned successfully.
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.041 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.054 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.064 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.064 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.065 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.066 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.066 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.066 253665 DEBUG nova.virt.libvirt.driver [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:40:11 np0005532048 systemd[1]: Started libpod-conmon-231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156.scope.
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.071 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:40:11 np0005532048 podman[383052]: 2025-11-22 09:40:10.985469577 +0000 UTC m=+0.027706531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.094 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:40:11 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:40:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a486926722bee7e7bd80c81e4e16af55a8d66262eec9d23e3afabb7567f5834/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:40:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a486926722bee7e7bd80c81e4e16af55a8d66262eec9d23e3afabb7567f5834/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:40:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a486926722bee7e7bd80c81e4e16af55a8d66262eec9d23e3afabb7567f5834/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:40:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a486926722bee7e7bd80c81e4e16af55a8d66262eec9d23e3afabb7567f5834/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.119 253665 INFO nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Took 13.02 seconds to spawn the instance on the hypervisor.
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.120 253665 DEBUG nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:40:11 np0005532048 podman[383052]: 2025-11-22 09:40:11.123922929 +0000 UTC m=+0.166159903 container init 231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 04:40:11 np0005532048 podman[383052]: 2025-11-22 09:40:11.134132983 +0000 UTC m=+0.176369927 container start 231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 04:40:11 np0005532048 podman[383052]: 2025-11-22 09:40:11.141544577 +0000 UTC m=+0.183781531 container attach 231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_franklin, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.189 253665 INFO nova.compute.manager [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Took 13.99 seconds to build instance.
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.202 253665 DEBUG oslo_concurrency.lockutils [None req-155b1683-b717-405e-9be4-50b56d47c271 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.310 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:11 np0005532048 nova_compute[253661]: 2025-11-22 09:40:11.441 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.3 MiB/s wr, 21 op/s
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]: {
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:        "osd_id": 1,
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:        "type": "bluestore"
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:    },
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:        "osd_id": 0,
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:        "type": "bluestore"
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:    },
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:        "osd_id": 2,
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:        "type": "bluestore"
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]:    }
Nov 22 04:40:12 np0005532048 wizardly_franklin[383068]: }
Nov 22 04:40:12 np0005532048 systemd[1]: libpod-231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156.scope: Deactivated successfully.
Nov 22 04:40:12 np0005532048 systemd[1]: libpod-231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156.scope: Consumed 1.101s CPU time.
Nov 22 04:40:12 np0005532048 podman[383052]: 2025-11-22 09:40:12.262670828 +0000 UTC m=+1.304907772 container died 231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_franklin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:40:12 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9a486926722bee7e7bd80c81e4e16af55a8d66262eec9d23e3afabb7567f5834-merged.mount: Deactivated successfully.
Nov 22 04:40:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:40:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2432937170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:40:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:40:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2432937170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:40:12 np0005532048 podman[383052]: 2025-11-22 09:40:12.395972833 +0000 UTC m=+1.438209767 container remove 231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:40:12 np0005532048 systemd[1]: libpod-conmon-231abecc55cc488684cfc9a216842305579f89d0746f6db7788c81e2c063a156.scope: Deactivated successfully.
Nov 22 04:40:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:40:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:40:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:40:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:40:12 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev bdbad584-66db-4b20-938f-fc9c4bb40a4d does not exist
Nov 22 04:40:12 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ae7a0bdc-fd07-424c-be03-3bb667de7821 does not exist
Nov 22 04:40:12 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:40:12 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:40:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:40:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 973 KiB/s rd, 378 KiB/s wr, 55 op/s
Nov 22 04:40:14 np0005532048 nova_compute[253661]: 2025-11-22 09:40:14.872 253665 DEBUG nova.compute.manager [req-3fd9bb7e-c353-401e-9485-dfb146f9d811 req-0130f617-c625-4076-8b22-8ee09ff9460d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-changed-83f684f5-d7e5-44a8-960d-efe4ce81e023 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:40:14 np0005532048 nova_compute[253661]: 2025-11-22 09:40:14.873 253665 DEBUG nova.compute.manager [req-3fd9bb7e-c353-401e-9485-dfb146f9d811 req-0130f617-c625-4076-8b22-8ee09ff9460d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Refreshing instance network info cache due to event network-changed-83f684f5-d7e5-44a8-960d-efe4ce81e023. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:40:14 np0005532048 nova_compute[253661]: 2025-11-22 09:40:14.874 253665 DEBUG oslo_concurrency.lockutils [req-3fd9bb7e-c353-401e-9485-dfb146f9d811 req-0130f617-c625-4076-8b22-8ee09ff9460d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:40:14 np0005532048 nova_compute[253661]: 2025-11-22 09:40:14.874 253665 DEBUG oslo_concurrency.lockutils [req-3fd9bb7e-c353-401e-9485-dfb146f9d811 req-0130f617-c625-4076-8b22-8ee09ff9460d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:40:14 np0005532048 nova_compute[253661]: 2025-11-22 09:40:14.874 253665 DEBUG nova.network.neutron [req-3fd9bb7e-c353-401e-9485-dfb146f9d811 req-0130f617-c625-4076-8b22-8ee09ff9460d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Refreshing network info cache for port 83f684f5-d7e5-44a8-960d-efe4ce81e023 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:40:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 964 KiB/s rd, 35 KiB/s wr, 43 op/s
Nov 22 04:40:16 np0005532048 nova_compute[253661]: 2025-11-22 09:40:16.312 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:16 np0005532048 nova_compute[253661]: 2025-11-22 09:40:16.432 253665 DEBUG nova.network.neutron [req-3fd9bb7e-c353-401e-9485-dfb146f9d811 req-0130f617-c625-4076-8b22-8ee09ff9460d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updated VIF entry in instance network info cache for port 83f684f5-d7e5-44a8-960d-efe4ce81e023. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:40:16 np0005532048 nova_compute[253661]: 2025-11-22 09:40:16.432 253665 DEBUG nova.network.neutron [req-3fd9bb7e-c353-401e-9485-dfb146f9d811 req-0130f617-c625-4076-8b22-8ee09ff9460d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updating instance_info_cache with network_info: [{"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:40:16 np0005532048 nova_compute[253661]: 2025-11-22 09:40:16.442 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:16 np0005532048 nova_compute[253661]: 2025-11-22 09:40:16.447 253665 DEBUG oslo_concurrency.lockutils [req-3fd9bb7e-c353-401e-9485-dfb146f9d811 req-0130f617-c625-4076-8b22-8ee09ff9460d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:40:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 38 KiB/s wr, 76 op/s
Nov 22 04:40:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:40:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 75 op/s
Nov 22 04:40:20 np0005532048 nova_compute[253661]: 2025-11-22 09:40:20.985 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:40:20 np0005532048 nova_compute[253661]: 2025-11-22 09:40:20.986 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:40:21 np0005532048 nova_compute[253661]: 2025-11-22 09:40:21.003 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:40:21 np0005532048 nova_compute[253661]: 2025-11-22 09:40:21.093 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:40:21 np0005532048 nova_compute[253661]: 2025-11-22 09:40:21.094 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:40:21 np0005532048 nova_compute[253661]: 2025-11-22 09:40:21.107 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:40:21 np0005532048 nova_compute[253661]: 2025-11-22 09:40:21.107 253665 INFO nova.compute.claims [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:40:21 np0005532048 nova_compute[253661]: 2025-11-22 09:40:21.315 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:21 np0005532048 nova_compute[253661]: 2025-11-22 09:40:21.347 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:40:21 np0005532048 nova_compute[253661]: 2025-11-22 09:40:21.445 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:40:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4259479379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:40:21 np0005532048 nova_compute[253661]: 2025-11-22 09:40:21.814 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:40:21 np0005532048 nova_compute[253661]: 2025-11-22 09:40:21.819 253665 DEBUG nova.compute.provider_tree [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:40:21 np0005532048 nova_compute[253661]: 2025-11-22 09:40:21.844 253665 DEBUG nova.scheduler.client.report [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:40:21 np0005532048 nova_compute[253661]: 2025-11-22 09:40:21.878 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:40:21 np0005532048 nova_compute[253661]: 2025-11-22 09:40:21.878 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:40:21 np0005532048 nova_compute[253661]: 2025-11-22 09:40:21.929 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:40:21 np0005532048 nova_compute[253661]: 2025-11-22 09:40:21.929 253665 DEBUG nova.network.neutron [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:40:21 np0005532048 nova_compute[253661]: 2025-11-22 09:40:21.952 253665 INFO nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:40:21 np0005532048 nova_compute[253661]: 2025-11-22 09:40:21.972 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:40:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 71 op/s
Nov 22 04:40:22 np0005532048 nova_compute[253661]: 2025-11-22 09:40:22.062 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:40:22 np0005532048 nova_compute[253661]: 2025-11-22 09:40:22.064 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:40:22 np0005532048 nova_compute[253661]: 2025-11-22 09:40:22.064 253665 INFO nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Creating image(s)#033[00m
Nov 22 04:40:22 np0005532048 nova_compute[253661]: 2025-11-22 09:40:22.092 253665 DEBUG nova.storage.rbd_utils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:40:22 np0005532048 nova_compute[253661]: 2025-11-22 09:40:22.127 253665 DEBUG nova.storage.rbd_utils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:40:22 np0005532048 nova_compute[253661]: 2025-11-22 09:40:22.159 253665 DEBUG nova.storage.rbd_utils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:40:22 np0005532048 nova_compute[253661]: 2025-11-22 09:40:22.164 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:40:22 np0005532048 nova_compute[253661]: 2025-11-22 09:40:22.258 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:40:22 np0005532048 nova_compute[253661]: 2025-11-22 09:40:22.259 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:22 np0005532048 nova_compute[253661]: 2025-11-22 09:40:22.260 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:22 np0005532048 nova_compute[253661]: 2025-11-22 09:40:22.261 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:22 np0005532048 nova_compute[253661]: 2025-11-22 09:40:22.287 253665 DEBUG nova.storage.rbd_utils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:40:22 np0005532048 nova_compute[253661]: 2025-11-22 09:40:22.292 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:40:22 np0005532048 nova_compute[253661]: 2025-11-22 09:40:22.371 253665 DEBUG nova.policy [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4993d04ad8774a15825d4bea194cd1ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '46d50d652376434585e9da83e40f96bb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:40:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:40:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:40:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:40:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:40:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:40:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:40:23 np0005532048 nova_compute[253661]: 2025-11-22 09:40:23.078 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.786s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:40:23 np0005532048 nova_compute[253661]: 2025-11-22 09:40:23.164 253665 DEBUG nova.storage.rbd_utils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] resizing rbd image aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:40:23 np0005532048 nova_compute[253661]: 2025-11-22 09:40:23.404 253665 DEBUG nova.objects.instance [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'migration_context' on Instance uuid aaeb1088-1220-47e3-9462-ba96b1d4e87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:40:23 np0005532048 nova_compute[253661]: 2025-11-22 09:40:23.420 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:40:23 np0005532048 nova_compute[253661]: 2025-11-22 09:40:23.420 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Ensure instance console log exists: /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:40:23 np0005532048 nova_compute[253661]: 2025-11-22 09:40:23.421 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:23 np0005532048 nova_compute[253661]: 2025-11-22 09:40:23.421 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:23 np0005532048 nova_compute[253661]: 2025-11-22 09:40:23.421 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:23 np0005532048 nova_compute[253661]: 2025-11-22 09:40:23.440 253665 DEBUG nova.network.neutron [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Successfully created port: 01cd64f6-47ab-4640-ae46-6834065ff09b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:40:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:40:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 84 op/s
Nov 22 04:40:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:24.825 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:40:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:24.826 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:40:24 np0005532048 nova_compute[253661]: 2025-11-22 09:40:24.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:24 np0005532048 nova_compute[253661]: 2025-11-22 09:40:24.986 253665 DEBUG nova.network.neutron [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Successfully updated port: 01cd64f6-47ab-4640-ae46-6834065ff09b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:40:25 np0005532048 nova_compute[253661]: 2025-11-22 09:40:25.000 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "refresh_cache-aaeb1088-1220-47e3-9462-ba96b1d4e87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:40:25 np0005532048 nova_compute[253661]: 2025-11-22 09:40:25.001 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquired lock "refresh_cache-aaeb1088-1220-47e3-9462-ba96b1d4e87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:40:25 np0005532048 nova_compute[253661]: 2025-11-22 09:40:25.001 253665 DEBUG nova.network.neutron [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:40:25 np0005532048 nova_compute[253661]: 2025-11-22 09:40:25.069 253665 DEBUG nova.compute.manager [req-ba76b6d9-a835-4e70-8830-8d52e50b0f4b req-3c0352c4-6d8b-4508-a5b8-f4b061da3867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received event network-changed-01cd64f6-47ab-4640-ae46-6834065ff09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:25 np0005532048 nova_compute[253661]: 2025-11-22 09:40:25.069 253665 DEBUG nova.compute.manager [req-ba76b6d9-a835-4e70-8830-8d52e50b0f4b req-3c0352c4-6d8b-4508-a5b8-f4b061da3867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Refreshing instance network info cache due to event network-changed-01cd64f6-47ab-4640-ae46-6834065ff09b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:40:25 np0005532048 nova_compute[253661]: 2025-11-22 09:40:25.070 253665 DEBUG oslo_concurrency.lockutils [req-ba76b6d9-a835-4e70-8830-8d52e50b0f4b req-3c0352c4-6d8b-4508-a5b8-f4b061da3867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-aaeb1088-1220-47e3-9462-ba96b1d4e87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:40:25 np0005532048 nova_compute[253661]: 2025-11-22 09:40:25.150 253665 DEBUG nova.network.neutron [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:40:25 np0005532048 podman[383351]: 2025-11-22 09:40:25.393525284 +0000 UTC m=+0.069667182 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 22 04:40:25 np0005532048 podman[383352]: 2025-11-22 09:40:25.431285256 +0000 UTC m=+0.107228629 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 04:40:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 445 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1007 KiB/s rd, 1.6 MiB/s wr, 47 op/s
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.229 253665 DEBUG nova.network.neutron [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Updating instance_info_cache with network_info: [{"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.245 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Releasing lock "refresh_cache-aaeb1088-1220-47e3-9462-ba96b1d4e87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.246 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Instance network_info: |[{"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.246 253665 DEBUG oslo_concurrency.lockutils [req-ba76b6d9-a835-4e70-8830-8d52e50b0f4b req-3c0352c4-6d8b-4508-a5b8-f4b061da3867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-aaeb1088-1220-47e3-9462-ba96b1d4e87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.247 253665 DEBUG nova.network.neutron [req-ba76b6d9-a835-4e70-8830-8d52e50b0f4b req-3c0352c4-6d8b-4508-a5b8-f4b061da3867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Refreshing network info cache for port 01cd64f6-47ab-4640-ae46-6834065ff09b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.251 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Start _get_guest_xml network_info=[{"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.257 253665 WARNING nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.269 253665 DEBUG nova.virt.libvirt.host [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.270 253665 DEBUG nova.virt.libvirt.host [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.275 253665 DEBUG nova.virt.libvirt.host [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.276 253665 DEBUG nova.virt.libvirt.host [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.276 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.276 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.277 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.278 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.278 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.279 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.279 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.279 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.279 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.280 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.280 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.280 253665 DEBUG nova.virt.hardware [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.285 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.330 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.447 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:40:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3576432687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.768 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.795 253665 DEBUG nova.storage.rbd_utils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:40:26 np0005532048 nova_compute[253661]: 2025-11-22 09:40:26.802 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:40:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:40:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2629522887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.279 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.282 253665 DEBUG nova.virt.libvirt.vif [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:40:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-0-2082001427',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-0-2082001427',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-gen',id=126,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbk5RfudFquhpa5lprQIMNSDd1LWjuKWOiIN353NFhcoF5DkddOnpCLYMTAq6AP8dFFIkCpIG6/In3cki28BBZ+JI0FuFnDsEiRArR4SIm949ArAgIcePLWzUf/qVubsg==',key_name='tempest-TestSecurityGroupsBasicOps-321654172',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-9t0kovo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:40:22Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=aaeb1088-1220-47e3-9462-ba96b1d4e87a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.283 253665 DEBUG nova.network.os_vif_util [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.284 253665 DEBUG nova.network.os_vif_util [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:50:06,bridge_name='br-int',has_traffic_filtering=True,id=01cd64f6-47ab-4640-ae46-6834065ff09b,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd64f6-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.286 253665 DEBUG nova.objects.instance [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'pci_devices' on Instance uuid aaeb1088-1220-47e3-9462-ba96b1d4e87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.300 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  <uuid>aaeb1088-1220-47e3-9462-ba96b1d4e87a</uuid>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  <name>instance-0000007e</name>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-0-2082001427</nova:name>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:40:26</nova:creationTime>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:40:27 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:        <nova:user uuid="4993d04ad8774a15825d4bea194cd1ca">tempest-TestSecurityGroupsBasicOps-488258979-project-member</nova:user>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:        <nova:project uuid="46d50d652376434585e9da83e40f96bb">tempest-TestSecurityGroupsBasicOps-488258979</nova:project>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:        <nova:port uuid="01cd64f6-47ab-4640-ae46-6834065ff09b">
Nov 22 04:40:27 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <entry name="serial">aaeb1088-1220-47e3-9462-ba96b1d4e87a</entry>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <entry name="uuid">aaeb1088-1220-47e3-9462-ba96b1d4e87a</entry>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk">
Nov 22 04:40:27 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:40:27 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk.config">
Nov 22 04:40:27 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:40:27 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:3b:50:06"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <target dev="tap01cd64f6-47"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a/console.log" append="off"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:40:27 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:40:27 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:40:27 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:40:27 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.301 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Preparing to wait for external event network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.301 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.302 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.302 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.303 253665 DEBUG nova.virt.libvirt.vif [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:40:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-0-2082001427',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-0-2082001427',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-gen',id=126,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbk5RfudFquhpa5lprQIMNSDd1LWjuKWOiIN353NFhcoF5DkddOnpCLYMTAq6AP8dFFIkCpIG6/In3cki28BBZ+JI0FuFnDsEiRArR4SIm949ArAgIcePLWzUf/qVubsg==',key_name='tempest-TestSecurityGroupsBasicOps-321654172',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-9t0kovo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:40:22Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=aaeb1088-1220-47e3-9462-ba96b1d4e87a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.303 253665 DEBUG nova.network.os_vif_util [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.304 253665 DEBUG nova.network.os_vif_util [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:50:06,bridge_name='br-int',has_traffic_filtering=True,id=01cd64f6-47ab-4640-ae46-6834065ff09b,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd64f6-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.305 253665 DEBUG os_vif [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:50:06,bridge_name='br-int',has_traffic_filtering=True,id=01cd64f6-47ab-4640-ae46-6834065ff09b,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd64f6-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.306 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.306 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.307 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.313 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.314 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01cd64f6-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.315 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap01cd64f6-47, col_values=(('external_ids', {'iface-id': '01cd64f6-47ab-4640-ae46-6834065ff09b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:50:06', 'vm-uuid': 'aaeb1088-1220-47e3-9462-ba96b1d4e87a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.317 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:27 np0005532048 NetworkManager[48920]: <info>  [1763804427.3185] manager: (tap01cd64f6-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/545)
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.320 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.328 253665 INFO os_vif [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:50:06,bridge_name='br-int',has_traffic_filtering=True,id=01cd64f6-47ab-4640-ae46-6834065ff09b,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd64f6-47')#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.398 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.398 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.399 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No VIF found with MAC fa:16:3e:3b:50:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.399 253665 INFO nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Using config drive#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.422 253665 DEBUG nova.storage.rbd_utils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:40:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:27Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:29:93:6c 10.100.0.14
Nov 22 04:40:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:27Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:29:93:6c 10.100.0.14
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.927 253665 INFO nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Creating config drive at /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a/disk.config#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.933 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ni59n8a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:40:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:27.984 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:27.985 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:27.986 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.993 253665 DEBUG nova.network.neutron [req-ba76b6d9-a835-4e70-8830-8d52e50b0f4b req-3c0352c4-6d8b-4508-a5b8-f4b061da3867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Updated VIF entry in instance network info cache for port 01cd64f6-47ab-4640-ae46-6834065ff09b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:40:27 np0005532048 nova_compute[253661]: 2025-11-22 09:40:27.994 253665 DEBUG nova.network.neutron [req-ba76b6d9-a835-4e70-8830-8d52e50b0f4b req-3c0352c4-6d8b-4508-a5b8-f4b061da3867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Updating instance_info_cache with network_info: [{"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.019 253665 DEBUG oslo_concurrency.lockutils [req-ba76b6d9-a835-4e70-8830-8d52e50b0f4b req-3c0352c4-6d8b-4508-a5b8-f4b061da3867 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-aaeb1088-1220-47e3-9462-ba96b1d4e87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:40:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 454 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.0 MiB/s wr, 68 op/s
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.100 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ni59n8a" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.129 253665 DEBUG nova.storage.rbd_utils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.133 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a/disk.config aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.357 253665 DEBUG oslo_concurrency.processutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a/disk.config aaeb1088-1220-47e3-9462-ba96b1d4e87a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.358 253665 INFO nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Deleting local config drive /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a/disk.config because it was imported into RBD.#033[00m
Nov 22 04:40:28 np0005532048 auditd[703]: Audit daemon rotating log files
Nov 22 04:40:28 np0005532048 kernel: tap01cd64f6-47: entered promiscuous mode
Nov 22 04:40:28 np0005532048 NetworkManager[48920]: <info>  [1763804428.4227] manager: (tap01cd64f6-47): new Tun device (/org/freedesktop/NetworkManager/Devices/546)
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.425 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:28 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:28Z|01328|binding|INFO|Claiming lport 01cd64f6-47ab-4640-ae46-6834065ff09b for this chassis.
Nov 22 04:40:28 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:28Z|01329|binding|INFO|01cd64f6-47ab-4640-ae46-6834065ff09b: Claiming fa:16:3e:3b:50:06 10.100.0.5
Nov 22 04:40:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.448 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:50:06 10.100.0.5'], port_security=['fa:16:3e:3b:50:06 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'aaeb1088-1220-47e3-9462-ba96b1d4e87a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-705357ee-1033-4907-905f-d41aa6dcfd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5f198579-316d-40d0-ae5d-a4d8440647aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b913e923-e2b2-4479-8913-960bf5f1e614, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=01cd64f6-47ab-4640-ae46-6834065ff09b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:40:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.450 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 01cd64f6-47ab-4640-ae46-6834065ff09b in datapath 705357ee-1033-4907-905f-d41aa6dcfd73 bound to our chassis#033[00m
Nov 22 04:40:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.454 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 705357ee-1033-4907-905f-d41aa6dcfd73#033[00m
Nov 22 04:40:28 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:28Z|01330|binding|INFO|Setting lport 01cd64f6-47ab-4640-ae46-6834065ff09b ovn-installed in OVS
Nov 22 04:40:28 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:28Z|01331|binding|INFO|Setting lport 01cd64f6-47ab-4640-ae46-6834065ff09b up in Southbound
Nov 22 04:40:28 np0005532048 systemd-udevd[383525]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.466 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.478 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3d80928e-f9aa-4fc9-ad80-ee589d843340]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:28 np0005532048 NetworkManager[48920]: <info>  [1763804428.4826] device (tap01cd64f6-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:40:28 np0005532048 NetworkManager[48920]: <info>  [1763804428.4838] device (tap01cd64f6-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:40:28 np0005532048 systemd-machined[215941]: New machine qemu-157-instance-0000007e.
Nov 22 04:40:28 np0005532048 systemd[1]: Started Virtual Machine qemu-157-instance-0000007e.
Nov 22 04:40:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.524 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d99c2cc3-6a16-4a21-b24c-48e9ebd5df4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.529 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[44322403-6499-466c-aa57-2283bc7e4a12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.564 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[79d40de6-9d3a-4553-8221-920ce98f9e81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.585 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2290a456-5604-4541-9b09-ab9a4cc71d4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap705357ee-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:aa:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722923, 'reachable_time': 34041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383538, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.605 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[97c85a47-d1a4-4161-ac59-dc28a8e8d925]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap705357ee-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722937, 'tstamp': 722937}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383540, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap705357ee-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722940, 'tstamp': 722940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383540, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.607 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap705357ee-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.610 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap705357ee-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.610 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.610 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap705357ee-10, col_values=(('external_ids', {'iface-id': 'e4d17104-1aeb-4ffd-be7b-ed782324874a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:28.610 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.610 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.806 253665 DEBUG nova.compute.manager [req-371694bf-9d33-48e0-8c0b-c86e85b0277d req-2456c6b3-b273-4da1-b38e-e71a83242416 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received event network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.806 253665 DEBUG oslo_concurrency.lockutils [req-371694bf-9d33-48e0-8c0b-c86e85b0277d req-2456c6b3-b273-4da1-b38e-e71a83242416 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.806 253665 DEBUG oslo_concurrency.lockutils [req-371694bf-9d33-48e0-8c0b-c86e85b0277d req-2456c6b3-b273-4da1-b38e-e71a83242416 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.807 253665 DEBUG oslo_concurrency.lockutils [req-371694bf-9d33-48e0-8c0b-c86e85b0277d req-2456c6b3-b273-4da1-b38e-e71a83242416 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.807 253665 DEBUG nova.compute.manager [req-371694bf-9d33-48e0-8c0b-c86e85b0277d req-2456c6b3-b273-4da1-b38e-e71a83242416 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Processing event network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.867 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.867 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804428.8662567, aaeb1088-1220-47e3-9462-ba96b1d4e87a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.868 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] VM Started (Lifecycle Event)
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.872 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.875 253665 INFO nova.virt.libvirt.driver [-] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Instance spawned successfully.
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.875 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:40:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.891 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.897 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.902 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.902 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.903 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.903 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.903 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:40:28 np0005532048 nova_compute[253661]: 2025-11-22 09:40:28.904 253665 DEBUG nova.virt.libvirt.driver [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:40:29 np0005532048 nova_compute[253661]: 2025-11-22 09:40:29.049 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:40:29 np0005532048 nova_compute[253661]: 2025-11-22 09:40:29.050 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804428.8665924, aaeb1088-1220-47e3-9462-ba96b1d4e87a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:40:29 np0005532048 nova_compute[253661]: 2025-11-22 09:40:29.050 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] VM Paused (Lifecycle Event)
Nov 22 04:40:29 np0005532048 nova_compute[253661]: 2025-11-22 09:40:29.070 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:40:29 np0005532048 nova_compute[253661]: 2025-11-22 09:40:29.075 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804428.8708794, aaeb1088-1220-47e3-9462-ba96b1d4e87a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:40:29 np0005532048 nova_compute[253661]: 2025-11-22 09:40:29.076 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] VM Resumed (Lifecycle Event)
Nov 22 04:40:29 np0005532048 nova_compute[253661]: 2025-11-22 09:40:29.080 253665 INFO nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Took 7.02 seconds to spawn the instance on the hypervisor.
Nov 22 04:40:29 np0005532048 nova_compute[253661]: 2025-11-22 09:40:29.080 253665 DEBUG nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:40:29 np0005532048 nova_compute[253661]: 2025-11-22 09:40:29.095 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:40:29 np0005532048 nova_compute[253661]: 2025-11-22 09:40:29.100 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:40:29 np0005532048 nova_compute[253661]: 2025-11-22 09:40:29.130 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:40:29 np0005532048 nova_compute[253661]: 2025-11-22 09:40:29.158 253665 INFO nova.compute.manager [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Took 8.09 seconds to build instance.
Nov 22 04:40:29 np0005532048 nova_compute[253661]: 2025-11-22 09:40:29.184 253665 DEBUG oslo_concurrency.lockutils [None req-a046d0cc-c321-4b09-9479-7db2b035af6d 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:40:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 480 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 3.5 MiB/s wr, 80 op/s
Nov 22 04:40:30 np0005532048 podman[383584]: 2025-11-22 09:40:30.416771049 +0000 UTC m=+0.100087294 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 22 04:40:30 np0005532048 nova_compute[253661]: 2025-11-22 09:40:30.889 253665 DEBUG nova.compute.manager [req-62188497-275a-4408-b8e1-f02b86d4110c req-20032cf3-1a77-413a-95ac-cdab25c31d81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received event network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:40:30 np0005532048 nova_compute[253661]: 2025-11-22 09:40:30.889 253665 DEBUG oslo_concurrency.lockutils [req-62188497-275a-4408-b8e1-f02b86d4110c req-20032cf3-1a77-413a-95ac-cdab25c31d81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:40:30 np0005532048 nova_compute[253661]: 2025-11-22 09:40:30.890 253665 DEBUG oslo_concurrency.lockutils [req-62188497-275a-4408-b8e1-f02b86d4110c req-20032cf3-1a77-413a-95ac-cdab25c31d81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:40:30 np0005532048 nova_compute[253661]: 2025-11-22 09:40:30.890 253665 DEBUG oslo_concurrency.lockutils [req-62188497-275a-4408-b8e1-f02b86d4110c req-20032cf3-1a77-413a-95ac-cdab25c31d81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:40:30 np0005532048 nova_compute[253661]: 2025-11-22 09:40:30.891 253665 DEBUG nova.compute.manager [req-62188497-275a-4408-b8e1-f02b86d4110c req-20032cf3-1a77-413a-95ac-cdab25c31d81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] No waiting events found dispatching network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:40:30 np0005532048 nova_compute[253661]: 2025-11-22 09:40:30.891 253665 WARNING nova.compute.manager [req-62188497-275a-4408-b8e1-f02b86d4110c req-20032cf3-1a77-413a-95ac-cdab25c31d81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received unexpected event network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b for instance with vm_state active and task_state None.
Nov 22 04:40:31 np0005532048 nova_compute[253661]: 2025-11-22 09:40:31.322 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 484 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 476 KiB/s rd, 3.9 MiB/s wr, 102 op/s
Nov 22 04:40:32 np0005532048 nova_compute[253661]: 2025-11-22 09:40:32.318 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:32.828 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:40:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:40:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 484 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Nov 22 04:40:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 484 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 154 op/s
Nov 22 04:40:36 np0005532048 nova_compute[253661]: 2025-11-22 09:40:36.325 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.224 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "d7865a13-0d41-44d6-aac2-10cca6e1348a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.225 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.225 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.225 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.226 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.227 253665 INFO nova.compute.manager [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Terminating instance
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.228 253665 DEBUG nova.compute.manager [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:40:37 np0005532048 kernel: tap54a61ee9-1f (unregistering): left promiscuous mode
Nov 22 04:40:37 np0005532048 NetworkManager[48920]: <info>  [1763804437.3039] device (tap54a61ee9-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:40:37 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:37Z|01332|binding|INFO|Releasing lport 54a61ee9-1fb8-4c5c-8716-613fc3355afb from this chassis (sb_readonly=0)
Nov 22 04:40:37 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:37Z|01333|binding|INFO|Setting lport 54a61ee9-1fb8-4c5c-8716-613fc3355afb down in Southbound
Nov 22 04:40:37 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:37Z|01334|binding|INFO|Removing iface tap54a61ee9-1f ovn-installed in OVS
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.324 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.333 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:21:e3 10.100.0.20'], port_security=['fa:16:3e:58:21:e3 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': 'd7865a13-0d41-44d6-aac2-10cca6e1348a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '268f1f5a-a38b-4a4b-99c8-6f247601dc2d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe5061ce-83c8-4f7d-bdd0-cc8d1c8db63d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=54a61ee9-1fb8-4c5c-8716-613fc3355afb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:40:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.334 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 54a61ee9-1fb8-4c5c-8716-613fc3355afb in datapath 30756ec6-103b-4571-a5dc-9b4a481bc5b1 unbound from our chassis
Nov 22 04:40:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.336 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 30756ec6-103b-4571-a5dc-9b4a481bc5b1
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.363 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ae60f038-6c80-46ff-adc5-cfcffa3c3a87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:40:37 np0005532048 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Nov 22 04:40:37 np0005532048 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007b.scope: Consumed 16.237s CPU time.
Nov 22 04:40:37 np0005532048 systemd-machined[215941]: Machine qemu-154-instance-0000007b terminated.
Nov 22 04:40:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.410 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[830ea263-1ea1-4223-b2a1-094f8aac1d76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:40:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.414 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[868d9f93-e309-4de5-91ee-83c8270905fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:40:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.468 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed5802b-5032-4110-a232-7c2ca694bdc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.484 253665 INFO nova.virt.libvirt.driver [-] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Instance destroyed successfully.
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.486 253665 DEBUG nova.objects.instance [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid d7865a13-0d41-44d6-aac2-10cca6e1348a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.504 253665 DEBUG nova.virt.libvirt.vif [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:39:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1997303369',display_name='tempest-TestNetworkBasicOps-server-1997303369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1997303369',id=123,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG0uOorJ/gmaNrR6qSN8/HnR9fMkzDH2WfxtPrvyBivOyhJCMxJEV6zlpNVePFSMCgazPwKP4Vum9MI8Qs/y/+T2quaiVANmzVrFFYwVnCOps2b+X6LuQ32XNX42/GMXcg==',key_name='tempest-TestNetworkBasicOps-799900934',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:39:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-2eyiney1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:39:31Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=d7865a13-0d41-44d6-aac2-10cca6e1348a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.506 253665 DEBUG nova.network.os_vif_util [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "address": "fa:16:3e:58:21:e3", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54a61ee9-1f", "ovs_interfaceid": "54a61ee9-1fb8-4c5c-8716-613fc3355afb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.507 253665 DEBUG nova.network.os_vif_util [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:21:e3,bridge_name='br-int',has_traffic_filtering=True,id=54a61ee9-1fb8-4c5c-8716-613fc3355afb,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54a61ee9-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.507 253665 DEBUG os_vif [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:21:e3,bridge_name='br-int',has_traffic_filtering=True,id=54a61ee9-1fb8-4c5c-8716-613fc3355afb,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54a61ee9-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.509 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.510 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54a61ee9-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.513 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.512 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3bea3fd2-1cf4-4df2-9533-176cd9bc1e14]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30756ec6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:cb:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 742, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 742, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 371], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720168, 'reachable_time': 38410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383628, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.519 253665 INFO os_vif [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:21:e3,bridge_name='br-int',has_traffic_filtering=True,id=54a61ee9-1fb8-4c5c-8716-613fc3355afb,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54a61ee9-1f')#033[00m
Nov 22 04:40:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.538 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0ca198-1191-4340-bead-a9e3a030e0be]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap30756ec6-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 720183, 'tstamp': 720183}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383633, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap30756ec6-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 720186, 'tstamp': 720186}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383633, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.541 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30756ec6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.578 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.580 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30756ec6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.581 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.583 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap30756ec6-10, col_values=(('external_ids', {'iface-id': 'ef3a77cb-c20e-4c0c-b747-f8d33bfa04a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:37.583 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.672 253665 DEBUG nova.compute.manager [req-3d8ad95d-5990-495e-b160-e012e36147d7 req-0c271430-52b9-4563-9454-1424c548bc0e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received event network-vif-unplugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.673 253665 DEBUG oslo_concurrency.lockutils [req-3d8ad95d-5990-495e-b160-e012e36147d7 req-0c271430-52b9-4563-9454-1424c548bc0e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.674 253665 DEBUG oslo_concurrency.lockutils [req-3d8ad95d-5990-495e-b160-e012e36147d7 req-0c271430-52b9-4563-9454-1424c548bc0e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.674 253665 DEBUG oslo_concurrency.lockutils [req-3d8ad95d-5990-495e-b160-e012e36147d7 req-0c271430-52b9-4563-9454-1424c548bc0e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.674 253665 DEBUG nova.compute.manager [req-3d8ad95d-5990-495e-b160-e012e36147d7 req-0c271430-52b9-4563-9454-1424c548bc0e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] No waiting events found dispatching network-vif-unplugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:40:37 np0005532048 nova_compute[253661]: 2025-11-22 09:40:37.675 253665 DEBUG nova.compute.manager [req-3d8ad95d-5990-495e-b160-e012e36147d7 req-0c271430-52b9-4563-9454-1424c548bc0e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received event network-vif-unplugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:40:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 484 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 155 op/s
Nov 22 04:40:38 np0005532048 nova_compute[253661]: 2025-11-22 09:40:38.184 253665 INFO nova.virt.libvirt.driver [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Deleting instance files /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a_del#033[00m
Nov 22 04:40:38 np0005532048 nova_compute[253661]: 2025-11-22 09:40:38.186 253665 INFO nova.virt.libvirt.driver [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Deletion of /var/lib/nova/instances/d7865a13-0d41-44d6-aac2-10cca6e1348a_del complete#033[00m
Nov 22 04:40:38 np0005532048 nova_compute[253661]: 2025-11-22 09:40:38.244 253665 INFO nova.compute.manager [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Took 1.02 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:40:38 np0005532048 nova_compute[253661]: 2025-11-22 09:40:38.245 253665 DEBUG oslo.service.loopingcall [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:40:38 np0005532048 nova_compute[253661]: 2025-11-22 09:40:38.246 253665 DEBUG nova.compute.manager [-] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:40:38 np0005532048 nova_compute[253661]: 2025-11-22 09:40:38.246 253665 DEBUG nova.network.neutron [-] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:40:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:40:39 np0005532048 nova_compute[253661]: 2025-11-22 09:40:39.229 253665 DEBUG nova.network.neutron [-] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:40:39 np0005532048 nova_compute[253661]: 2025-11-22 09:40:39.249 253665 INFO nova.compute.manager [-] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Took 1.00 seconds to deallocate network for instance.#033[00m
Nov 22 04:40:39 np0005532048 nova_compute[253661]: 2025-11-22 09:40:39.287 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:39 np0005532048 nova_compute[253661]: 2025-11-22 09:40:39.288 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:39 np0005532048 nova_compute[253661]: 2025-11-22 09:40:39.355 253665 DEBUG nova.compute.manager [req-326ca425-5747-4abd-96c7-d08c2aa7f5db req-7e1caab8-364f-4044-bce5-7ef405fa015a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received event network-vif-deleted-54a61ee9-1fb8-4c5c-8716-613fc3355afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:39 np0005532048 nova_compute[253661]: 2025-11-22 09:40:39.449 253665 DEBUG oslo_concurrency.processutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:40:39 np0005532048 nova_compute[253661]: 2025-11-22 09:40:39.853 253665 DEBUG nova.compute.manager [req-fcc03964-01b8-47d0-b471-73714edcde9c req-66471810-a3cf-4a63-8aa5-2b7b3eebee21 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received event network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:39 np0005532048 nova_compute[253661]: 2025-11-22 09:40:39.854 253665 DEBUG oslo_concurrency.lockutils [req-fcc03964-01b8-47d0-b471-73714edcde9c req-66471810-a3cf-4a63-8aa5-2b7b3eebee21 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:39 np0005532048 nova_compute[253661]: 2025-11-22 09:40:39.854 253665 DEBUG oslo_concurrency.lockutils [req-fcc03964-01b8-47d0-b471-73714edcde9c req-66471810-a3cf-4a63-8aa5-2b7b3eebee21 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:39 np0005532048 nova_compute[253661]: 2025-11-22 09:40:39.854 253665 DEBUG oslo_concurrency.lockutils [req-fcc03964-01b8-47d0-b471-73714edcde9c req-66471810-a3cf-4a63-8aa5-2b7b3eebee21 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:39 np0005532048 nova_compute[253661]: 2025-11-22 09:40:39.855 253665 DEBUG nova.compute.manager [req-fcc03964-01b8-47d0-b471-73714edcde9c req-66471810-a3cf-4a63-8aa5-2b7b3eebee21 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] No waiting events found dispatching network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:40:39 np0005532048 nova_compute[253661]: 2025-11-22 09:40:39.855 253665 WARNING nova.compute.manager [req-fcc03964-01b8-47d0-b471-73714edcde9c req-66471810-a3cf-4a63-8aa5-2b7b3eebee21 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Received unexpected event network-vif-plugged-54a61ee9-1fb8-4c5c-8716-613fc3355afb for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:40:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:40:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/131172211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:40:39 np0005532048 nova_compute[253661]: 2025-11-22 09:40:39.980 253665 DEBUG oslo_concurrency.processutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:40:39 np0005532048 nova_compute[253661]: 2025-11-22 09:40:39.987 253665 DEBUG nova.compute.provider_tree [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:40:40 np0005532048 nova_compute[253661]: 2025-11-22 09:40:40.014 253665 DEBUG nova.scheduler.client.report [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:40:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 432 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 160 op/s
Nov 22 04:40:40 np0005532048 nova_compute[253661]: 2025-11-22 09:40:40.040 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:40 np0005532048 nova_compute[253661]: 2025-11-22 09:40:40.068 253665 INFO nova.scheduler.client.report [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance d7865a13-0d41-44d6-aac2-10cca6e1348a#033[00m
Nov 22 04:40:40 np0005532048 nova_compute[253661]: 2025-11-22 09:40:40.148 253665 DEBUG oslo_concurrency.lockutils [None req-8869d2b1-013b-4cbf-9804-0b1124e6cbf1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "d7865a13-0d41-44d6-aac2-10cca6e1348a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:40 np0005532048 nova_compute[253661]: 2025-11-22 09:40:40.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:40:40 np0005532048 nova_compute[253661]: 2025-11-22 09:40:40.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:40:40 np0005532048 nova_compute[253661]: 2025-11-22 09:40:40.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:40:40 np0005532048 nova_compute[253661]: 2025-11-22 09:40:40.697 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:40:40 np0005532048 nova_compute[253661]: 2025-11-22 09:40:40.698 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:40:40 np0005532048 nova_compute[253661]: 2025-11-22 09:40:40.698 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.065 253665 DEBUG oslo_concurrency.lockutils [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "interface-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.065 253665 DEBUG oslo_concurrency.lockutils [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "interface-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.077 253665 DEBUG nova.objects.instance [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'flavor' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.094 253665 DEBUG nova.virt.libvirt.vif [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:40Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.095 253665 DEBUG nova.network.os_vif_util [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.096 253665 DEBUG nova.network.os_vif_util [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.103 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.108 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.113 253665 DEBUG nova.virt.libvirt.driver [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Attempting to detach device tapb9b8fcd6-fb from instance 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.113 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] detach device xml: <interface type="ethernet">
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:d7:d1:9a"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <target dev="tapb9b8fcd6-fb"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:40:41 np0005532048 nova_compute[253661]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.124 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.131 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface>not found in domain: <domain type='kvm' id='152'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <name>instance-00000079</name>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <uuid>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</uuid>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:name>tempest-TestNetworkBasicOps-server-985491122</nova:name>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:39:10</nova:creationTime>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:port uuid="12ab8505-5ae2-427c-aaf6-9431683a99c8">
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:port uuid="b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f">
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:40:41 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <resource>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <partition>/machine</partition>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </resource>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <entry name='serial'>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <entry name='uuid'>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <cpu mode='custom' match='exact' check='full'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <vendor>AMD</vendor>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='x2apic'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc-deadline'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='hypervisor'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc_adjust'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='spec-ctrl'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='stibp'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='ssbd'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='cmp_legacy'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='overflow-recov'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='succor'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='ibrs'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='amd-ssbd'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='virt-ssbd'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='lbrv'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='tsc-scale'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='vmcb-clean'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='flushbyasid'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pause-filter'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pfthreshold'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='xsaves'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svm'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='topoext'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='npt'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='nrip-save'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk' index='2'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='virtio-disk0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config' index='1'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='sata0-0-0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pcie.0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.1'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.2'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.3'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.4'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.5'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.6'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.7'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.8'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.9'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.10'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.11'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.12'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.13'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.14'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.15'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.16'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.17'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.18'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.19'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.20'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.21'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='22' port='0x25'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.22'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='23' port='0x26'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.23'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='24' port='0x27'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.24'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='25' port='0x28'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.25'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-pci-bridge'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.26'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='usb'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='sata' index='0'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='ide'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:30:a0:d3'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target dev='tap12ab8505-5a'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='net0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:d7:d1:9a'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target dev='tapb9b8fcd6-fb'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='net1'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <serial type='pty'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log' append='off'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target type='isa-serial' port='0'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:        <model name='isa-serial'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      </target>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <console type='pty' tty='/dev/pts/0'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log' append='off'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target type='serial' port='0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </console>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <input type='tablet' bus='usb'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='input0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='usb' bus='0' port='1'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <input type='mouse' bus='ps2'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='input1'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <input type='keyboard' bus='ps2'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='input2'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <listen type='address' address='::0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <audio id='1' type='none'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model type='virtio' heads='1' primary='yes'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='video0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <watchdog model='itco' action='reset'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='watchdog0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </watchdog>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <memballoon model='virtio'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <stats period='10'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='balloon0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <rng model='virtio'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <backend model='random'>/dev/urandom</backend>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='rng0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <label>system_u:system_r:svirt_t:s0:c379,c882</label>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c379,c882</imagelabel>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <label>+107:+107</label>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <imagelabel>+107:+107</imagelabel>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:40:41 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:40:41 np0005532048 nova_compute[253661]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.132 253665 INFO nova.virt.libvirt.driver [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully detached device tapb9b8fcd6-fb from instance 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c from the persistent domain config.#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.132 253665 DEBUG nova.virt.libvirt.driver [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] (1/8): Attempting to detach device tapb9b8fcd6-fb with device alias net1 from instance 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.133 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] detach device xml: <interface type="ethernet">
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <mac address="fa:16:3e:d7:d1:9a"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <model type="virtio"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <mtu size="1442"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <target dev="tapb9b8fcd6-fb"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]: </interface>
Nov 22 04:40:41 np0005532048 nova_compute[253661]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 22 04:40:41 np0005532048 kernel: tapb9b8fcd6-fb (unregistering): left promiscuous mode
Nov 22 04:40:41 np0005532048 NetworkManager[48920]: <info>  [1763804441.2415] device (tapb9b8fcd6-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:41 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:41Z|01335|binding|INFO|Releasing lport b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f from this chassis (sb_readonly=0)
Nov 22 04:40:41 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:41Z|01336|binding|INFO|Setting lport b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f down in Southbound
Nov 22 04:40:41 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:41Z|01337|binding|INFO|Removing iface tapb9b8fcd6-fb ovn-installed in OVS
Nov 22 04:40:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.257 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:d1:9a 10.100.0.24', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': '9c45a555-9969-4d8a-bd3b-1ab61ce6f68c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe5061ce-83c8-4f7d-bdd0-cc8d1c8db63d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:40:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.258 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f in datapath 30756ec6-103b-4571-a5dc-9b4a481bc5b1 unbound from our chassis#033[00m
Nov 22 04:40:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.260 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 30756ec6-103b-4571-a5dc-9b4a481bc5b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:40:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.264 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[188e3300-f4ec-4894-bd13-7e9a9b1402c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.264 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1 namespace which is not needed anymore#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.267 253665 DEBUG nova.virt.libvirt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Received event <DeviceRemovedEvent: 1763804441.2672188, 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.271 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.273 253665 DEBUG nova.virt.libvirt.driver [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Start waiting for the detach event from libvirt for device tapb9b8fcd6-fb with device alias net1 for instance 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.273 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.278 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface>not found in domain: <domain type='kvm' id='152'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <name>instance-00000079</name>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <uuid>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</uuid>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:name>tempest-TestNetworkBasicOps-server-985491122</nova:name>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:39:10</nova:creationTime>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:port uuid="12ab8505-5ae2-427c-aaf6-9431683a99c8">
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:port uuid="b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f">
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:40:41 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <resource>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <partition>/machine</partition>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </resource>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <entry name='serial'>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <entry name='uuid'>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <cpu mode='custom' match='exact' check='full'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <vendor>AMD</vendor>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='x2apic'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc-deadline'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='hypervisor'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc_adjust'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='spec-ctrl'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='stibp'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='ssbd'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='cmp_legacy'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='overflow-recov'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='succor'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='ibrs'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='amd-ssbd'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='virt-ssbd'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='lbrv'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='tsc-scale'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='vmcb-clean'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='flushbyasid'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pause-filter'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pfthreshold'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='xsaves'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svm'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='require' name='topoext'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='npt'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <feature policy='disable' name='nrip-save'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk' index='2'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='virtio-disk0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config' index='1'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='sata0-0-0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pcie.0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.1'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.2'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.3'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.4'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.5'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.6'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.7'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.8'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.9'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.10'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.11'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.12'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.13'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.14'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.15'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.16'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.17'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.18'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.19'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.20'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.21'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='22' port='0x25'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.22'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='23' port='0x26'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.23'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='24' port='0x27'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.24'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target chassis='25' port='0x28'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.25'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model name='pcie-pci-bridge'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='pci.26'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='usb'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <controller type='sata' index='0'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='ide'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:30:a0:d3'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target dev='tap12ab8505-5a'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='net0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <serial type='pty'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log' append='off'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target type='isa-serial' port='0'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:        <model name='isa-serial'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      </target>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <console type='pty' tty='/dev/pts/0'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log' append='off'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <target type='serial' port='0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </console>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <input type='tablet' bus='usb'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='input0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='usb' bus='0' port='1'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <input type='mouse' bus='ps2'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='input1'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <input type='keyboard' bus='ps2'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='input2'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <listen type='address' address='::0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <audio id='1' type='none'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <model type='virtio' heads='1' primary='yes'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='video0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <watchdog model='itco' action='reset'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='watchdog0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </watchdog>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <memballoon model='virtio'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <stats period='10'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='balloon0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <rng model='virtio'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <backend model='random'>/dev/urandom</backend>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <alias name='rng0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <label>system_u:system_r:svirt_t:s0:c379,c882</label>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c379,c882</imagelabel>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <label>+107:+107</label>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <imagelabel>+107:+107</imagelabel>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:40:41 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:40:41 np0005532048 nova_compute[253661]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.278 253665 INFO nova.virt.libvirt.driver [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully detached device tapb9b8fcd6-fb from instance 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c from the live domain config.#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.279 253665 DEBUG nova.virt.libvirt.vif [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:40Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.279 253665 DEBUG nova.network.os_vif_util [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.280 253665 DEBUG nova.network.os_vif_util [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.280 253665 DEBUG os_vif [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.282 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.283 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9b8fcd6-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.285 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.286 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.294 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.296 253665 INFO os_vif [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb')#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.298 253665 DEBUG nova.virt.libvirt.guest [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:name>tempest-TestNetworkBasicOps-server-985491122</nova:name>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:40:41</nova:creationTime>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    <nova:port uuid="12ab8505-5ae2-427c-aaf6-9431683a99c8">
Nov 22 04:40:41 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:40:41 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:40:41 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:40:41 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:41 np0005532048 neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1[380318]: [NOTICE]   (380322) : haproxy version is 2.8.14-c23fe91
Nov 22 04:40:41 np0005532048 neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1[380318]: [NOTICE]   (380322) : path to executable is /usr/sbin/haproxy
Nov 22 04:40:41 np0005532048 neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1[380318]: [WARNING]  (380322) : Exiting Master process...
Nov 22 04:40:41 np0005532048 neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1[380318]: [WARNING]  (380322) : Exiting Master process...
Nov 22 04:40:41 np0005532048 neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1[380318]: [ALERT]    (380322) : Current worker (380324) exited with code 143 (Terminated)
Nov 22 04:40:41 np0005532048 neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1[380318]: [WARNING]  (380322) : All workers exited. Exiting... (0)
Nov 22 04:40:41 np0005532048 systemd[1]: libpod-87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20.scope: Deactivated successfully.
Nov 22 04:40:41 np0005532048 podman[383698]: 2025-11-22 09:40:41.418942888 +0000 UTC m=+0.056065099 container died 87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.448 253665 DEBUG nova.compute.manager [req-ad25dd12-e8da-4e72-bb1c-719a045dcd4a req-bd3bacb4-b305-45dc-93b8-46055f417319 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-unplugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.450 253665 DEBUG oslo_concurrency.lockutils [req-ad25dd12-e8da-4e72-bb1c-719a045dcd4a req-bd3bacb4-b305-45dc-93b8-46055f417319 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.450 253665 DEBUG oslo_concurrency.lockutils [req-ad25dd12-e8da-4e72-bb1c-719a045dcd4a req-bd3bacb4-b305-45dc-93b8-46055f417319 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.450 253665 DEBUG oslo_concurrency.lockutils [req-ad25dd12-e8da-4e72-bb1c-719a045dcd4a req-bd3bacb4-b305-45dc-93b8-46055f417319 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.451 253665 DEBUG nova.compute.manager [req-ad25dd12-e8da-4e72-bb1c-719a045dcd4a req-bd3bacb4-b305-45dc-93b8-46055f417319 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] No waiting events found dispatching network-vif-unplugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.451 253665 WARNING nova.compute.manager [req-ad25dd12-e8da-4e72-bb1c-719a045dcd4a req-bd3bacb4-b305-45dc-93b8-46055f417319 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received unexpected event network-vif-unplugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:40:41 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20-userdata-shm.mount: Deactivated successfully.
Nov 22 04:40:41 np0005532048 systemd[1]: var-lib-containers-storage-overlay-3f89d97ed2ee8aa5d0ef9bc0221d90f0d63faaffe83c22b65aecd4abf2565382-merged.mount: Deactivated successfully.
Nov 22 04:40:41 np0005532048 podman[383698]: 2025-11-22 09:40:41.488909726 +0000 UTC m=+0.126031937 container cleanup 87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:40:41 np0005532048 systemd[1]: libpod-conmon-87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20.scope: Deactivated successfully.
Nov 22 04:40:41 np0005532048 podman[383726]: 2025-11-22 09:40:41.561392445 +0000 UTC m=+0.047493341 container remove 87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 04:40:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.567 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1e5da1-5c8b-4e5a-b55d-18f0fb0651e5]: (4, ('Sat Nov 22 09:40:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1 (87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20)\n87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20\nSat Nov 22 09:40:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1 (87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20)\n87086ad4d39692670a7229938ee0e56c1fe1be7f2ee84cc1998f01f75fa6ca20\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.570 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae01796-7fa4-49b8-91eb-88385fe99530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.571 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30756ec6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.573 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:41 np0005532048 kernel: tap30756ec6-10: left promiscuous mode
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.591 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:41 np0005532048 nova_compute[253661]: 2025-11-22 09:40:41.592 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.597 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f991b9-bf64-4188-91f5-1376bca3c6f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.616 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[34a63f23-6c12-4d94-80da-9fe22f251353]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.618 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[02506e95-5db4-4dbe-bce0-b12e27d8acc0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.643 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[403ef274-8597-4bda-854a-8b51180c1551]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720159, 'reachable_time': 39906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383742, 'error': None, 'target': 'ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:41 np0005532048 systemd[1]: run-netns-ovnmeta\x2d30756ec6\x2d103b\x2d4571\x2da5dc\x2d9b4a481bc5b1.mount: Deactivated successfully.
Nov 22 04:40:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.647 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-30756ec6-103b-4571-a5dc-9b4a481bc5b1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:40:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:41.647 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[89919f23-7776-4f0b-be97-4c294743e25c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.015 253665 DEBUG oslo_concurrency.lockutils [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.016 253665 DEBUG oslo_concurrency.lockutils [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.016 253665 DEBUG nova.network.neutron [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:40:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 494 KiB/s wr, 118 op/s
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.096 253665 DEBUG nova.compute.manager [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-deleted-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.097 253665 INFO nova.compute.manager [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Neutron deleted interface b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f; detaching it from the instance and deleting it from the info cache#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.097 253665 DEBUG nova.network.neutron [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.121 253665 DEBUG nova.objects.instance [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'system_metadata' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.141 253665 DEBUG nova.objects.instance [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lazy-loading 'flavor' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.165 253665 DEBUG nova.virt.libvirt.vif [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:40Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.165 253665 DEBUG nova.network.os_vif_util [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converting VIF {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.166 253665 DEBUG nova.network.os_vif_util [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.169 253665 DEBUG nova.virt.libvirt.guest [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.174 253665 DEBUG nova.virt.libvirt.guest [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface>not found in domain: <domain type='kvm' id='152'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <name>instance-00000079</name>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <uuid>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</uuid>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:name>tempest-TestNetworkBasicOps-server-985491122</nova:name>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:40:41</nova:creationTime>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:port uuid="12ab8505-5ae2-427c-aaf6-9431683a99c8">
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:40:42 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <resource>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <partition>/machine</partition>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </resource>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <entry name='serial'>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <entry name='uuid'>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <cpu mode='custom' match='exact' check='full'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <vendor>AMD</vendor>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='x2apic'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc-deadline'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='hypervisor'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc_adjust'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='spec-ctrl'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='stibp'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='ssbd'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='cmp_legacy'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='overflow-recov'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='succor'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='ibrs'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='amd-ssbd'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='virt-ssbd'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='lbrv'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='tsc-scale'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='vmcb-clean'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='flushbyasid'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pause-filter'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pfthreshold'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='xsaves'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svm'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='topoext'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='npt'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='nrip-save'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk' index='2'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='virtio-disk0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config' index='1'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='sata0-0-0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pcie.0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.1'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.2'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.3'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.4'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.5'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.6'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.7'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.8'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.9'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.10'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.11'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.12'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.13'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.14'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.15'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.16'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.17'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.18'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.19'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.20'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.21'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='22' port='0x25'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.22'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='23' port='0x26'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.23'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='24' port='0x27'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.24'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='25' port='0x28'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.25'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-pci-bridge'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.26'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='usb'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='sata' index='0'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='ide'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:30:a0:d3'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target dev='tap12ab8505-5a'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='net0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <serial type='pty'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log' append='off'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target type='isa-serial' port='0'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:        <model name='isa-serial'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      </target>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <console type='pty' tty='/dev/pts/0'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log' append='off'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target type='serial' port='0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </console>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <input type='tablet' bus='usb'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='input0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='usb' bus='0' port='1'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <input type='mouse' bus='ps2'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='input1'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <input type='keyboard' bus='ps2'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='input2'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <listen type='address' address='::0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <audio id='1' type='none'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model type='virtio' heads='1' primary='yes'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='video0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <watchdog model='itco' action='reset'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='watchdog0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </watchdog>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <memballoon model='virtio'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <stats period='10'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='balloon0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <rng model='virtio'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <backend model='random'>/dev/urandom</backend>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='rng0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <label>system_u:system_r:svirt_t:s0:c379,c882</label>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c379,c882</imagelabel>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <label>+107:+107</label>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <imagelabel>+107:+107</imagelabel>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:40:42 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:40:42 np0005532048 nova_compute[253661]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.175 253665 DEBUG nova.virt.libvirt.guest [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.185 253665 DEBUG nova.virt.libvirt.guest [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d7:d1:9a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapb9b8fcd6-fb"/></interface> not found in domain: <domain type='kvm' id='152'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <name>instance-00000079</name>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <uuid>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</uuid>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:name>tempest-TestNetworkBasicOps-server-985491122</nova:name>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:40:41</nova:creationTime>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:port uuid="12ab8505-5ae2-427c-aaf6-9431683a99c8">
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:40:42 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <memory unit='KiB'>131072</memory>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <vcpu placement='static'>1</vcpu>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <resource>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <partition>/machine</partition>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </resource>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <sysinfo type='smbios'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <entry name='manufacturer'>RDO</entry>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <entry name='product'>OpenStack Compute</entry>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <entry name='serial'>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <entry name='uuid'>9c45a555-9969-4d8a-bd3b-1ab61ce6f68c</entry>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <entry name='family'>Virtual Machine</entry>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <boot dev='hd'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <smbios mode='sysinfo'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <vmcoreinfo state='on'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <cpu mode='custom' match='exact' check='full'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <model fallback='forbid'>EPYC-Rome</model>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <vendor>AMD</vendor>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='x2apic'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc-deadline'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='hypervisor'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='tsc_adjust'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='spec-ctrl'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='stibp'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='ssbd'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='cmp_legacy'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='overflow-recov'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='succor'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='ibrs'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='amd-ssbd'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='virt-ssbd'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='lbrv'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='tsc-scale'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='vmcb-clean'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='flushbyasid'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pause-filter'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='pfthreshold'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='xsaves'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='svm'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='require' name='topoext'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='npt'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <feature policy='disable' name='nrip-save'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <clock offset='utc'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <timer name='pit' tickpolicy='delay'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <timer name='hpet' present='no'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <on_poweroff>destroy</on_poweroff>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <on_reboot>restart</on_reboot>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <on_crash>destroy</on_crash>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <disk type='network' device='disk'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk' index='2'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target dev='vda' bus='virtio'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='virtio-disk0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <disk type='network' device='cdrom'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <driver name='qemu' type='raw' cache='none'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <auth username='openstack'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:        <secret type='ceph' uuid='34829716-a12c-57a6-8915-c1aa615c9d8a'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <source protocol='rbd' name='vms/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_disk.config' index='1'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:        <host name='192.168.122.100' port='6789'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target dev='sda' bus='sata'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <readonly/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='sata0-0-0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='0' model='pcie-root'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pcie.0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='1' port='0x10'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.1'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='2' port='0x11'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.2'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='3' port='0x12'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.3'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='4' port='0x13'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.4'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='5' port='0x14'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.5'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='6' port='0x15'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.6'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='7' port='0x16'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.7'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='8' port='0x17'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.8'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='9' port='0x18'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.9'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='10' port='0x19'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.10'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='11' port='0x1a'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.11'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='12' port='0x1b'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.12'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='13' port='0x1c'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.13'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='14' port='0x1d'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.14'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='15' port='0x1e'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.15'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='16' port='0x1f'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.16'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='17' port='0x20'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.17'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='18' port='0x21'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.18'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='19' port='0x22'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.19'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='20' port='0x23'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.20'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='21' port='0x24'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.21'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='22' port='0x25'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.22'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='23' port='0x26'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.23'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='24' port='0x27'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.24'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-root-port'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target chassis='25' port='0x28'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.25'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model name='pcie-pci-bridge'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='pci.26'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='usb'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <controller type='sata' index='0'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='ide'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </controller>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <interface type='ethernet'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <mac address='fa:16:3e:30:a0:d3'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target dev='tap12ab8505-5a'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model type='virtio'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <driver name='vhost' rx_queue_size='512'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <mtu size='1442'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='net0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <serial type='pty'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log' append='off'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target type='isa-serial' port='0'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:        <model name='isa-serial'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      </target>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <console type='pty' tty='/dev/pts/0'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <source path='/dev/pts/0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <log file='/var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c/console.log' append='off'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <target type='serial' port='0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='serial0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </console>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <input type='tablet' bus='usb'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='input0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='usb' bus='0' port='1'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <input type='mouse' bus='ps2'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='input1'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <input type='keyboard' bus='ps2'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='input2'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </input>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <listen type='address' address='::0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </graphics>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <audio id='1' type='none'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <model type='virtio' heads='1' primary='yes'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='video0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <watchdog model='itco' action='reset'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='watchdog0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </watchdog>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <memballoon model='virtio'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <stats period='10'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='balloon0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <rng model='virtio'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <backend model='random'>/dev/urandom</backend>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <alias name='rng0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <label>system_u:system_r:svirt_t:s0:c379,c882</label>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c379,c882</imagelabel>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <label>+107:+107</label>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <imagelabel>+107:+107</imagelabel>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </seclabel>
Nov 22 04:40:42 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:40:42 np0005532048 nova_compute[253661]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.185 253665 WARNING nova.virt.libvirt.driver [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Detaching interface fa:16:3e:d7:d1:9a failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapb9b8fcd6-fb' not found.#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.186 253665 DEBUG nova.virt.libvirt.vif [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:40Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.186 253665 DEBUG nova.network.os_vif_util [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converting VIF {"id": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "address": "fa:16:3e:d7:d1:9a", "network": {"id": "30756ec6-103b-4571-a5dc-9b4a481bc5b1", "bridge": "br-int", "label": "tempest-network-smoke--199233573", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9b8fcd6-fb", "ovs_interfaceid": "b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.187 253665 DEBUG nova.network.os_vif_util [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.188 253665 DEBUG os_vif [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.189 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.190 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9b8fcd6-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.190 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.192 253665 INFO os_vif [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:d1:9a,bridge_name='br-int',has_traffic_filtering=True,id=b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f,network=Network(30756ec6-103b-4571-a5dc-9b4a481bc5b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9b8fcd6-fb')#033[00m
Nov 22 04:40:42 np0005532048 nova_compute[253661]: 2025-11-22 09:40:42.193 253665 DEBUG nova.virt.libvirt.guest [req-4add7152-54fe-4f95-86e7-5b3154574187 req-3d700e34-883c-4e73-ae99-56ddc6ff5a8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:name>tempest-TestNetworkBasicOps-server-985491122</nova:name>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:creationTime>2025-11-22 09:40:42</nova:creationTime>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:flavor name="m1.nano">
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:memory>128</nova:memory>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:disk>1</nova:disk>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:swap>0</nova:swap>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:vcpus>1</nova:vcpus>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </nova:flavor>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:owner>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </nova:owner>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  <nova:ports>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    <nova:port uuid="12ab8505-5ae2-427c-aaf6-9431683a99c8">
Nov 22 04:40:42 np0005532048 nova_compute[253661]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:    </nova:port>
Nov 22 04:40:42 np0005532048 nova_compute[253661]:  </nova:ports>
Nov 22 04:40:42 np0005532048 nova_compute[253661]: </nova:instance>
Nov 22 04:40:42 np0005532048 nova_compute[253661]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 22 04:40:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:43Z|01338|binding|INFO|Releasing lport c8541406-177e-4d49-a6da-f639419da399 from this chassis (sb_readonly=0)
Nov 22 04:40:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:43Z|01339|binding|INFO|Releasing lport c6eb41f8-4dde-4c2b-a6c7-dd47868a17b1 from this chassis (sb_readonly=0)
Nov 22 04:40:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:43Z|01340|binding|INFO|Releasing lport e4d17104-1aeb-4ffd-be7b-ed782324874a from this chassis (sb_readonly=0)
Nov 22 04:40:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:43Z|01341|binding|INFO|Releasing lport e81a7283-b7a8-4fa9-8cc9-183f5a17ea6c from this chassis (sb_readonly=0)
Nov 22 04:40:43 np0005532048 nova_compute[253661]: 2025-11-22 09:40:43.122 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:43 np0005532048 nova_compute[253661]: 2025-11-22 09:40:43.467 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updating instance_info_cache with network_info: [{"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": 
"2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:40:43 np0005532048 nova_compute[253661]: 2025-11-22 09:40:43.469 253665 INFO nova.network.neutron [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Port b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 22 04:40:43 np0005532048 nova_compute[253661]: 2025-11-22 09:40:43.470 253665 DEBUG nova.network.neutron [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:40:43 np0005532048 nova_compute[253661]: 2025-11-22 09:40:43.488 253665 DEBUG oslo_concurrency.lockutils [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:40:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:43Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3b:50:06 10.100.0.5
Nov 22 04:40:43 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:43Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3b:50:06 10.100.0.5
Nov 22 04:40:43 np0005532048 nova_compute[253661]: 2025-11-22 09:40:43.548 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:40:43 np0005532048 nova_compute[253661]: 2025-11-22 09:40:43.548 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 04:40:43 np0005532048 nova_compute[253661]: 2025-11-22 09:40:43.549 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:40:43 np0005532048 nova_compute[253661]: 2025-11-22 09:40:43.549 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:40:43 np0005532048 nova_compute[253661]: 2025-11-22 09:40:43.551 253665 DEBUG oslo_concurrency.lockutils [None req-759afd53-b721-4828-ada7-ddf5e97694f1 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "interface-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:40:43 np0005532048 nova_compute[253661]: 2025-11-22 09:40:43.565 253665 DEBUG nova.compute.manager [req-de473786-8650-4bf9-a59e-5e5fe69f7152 req-2f382d7c-ffd6-4e6a-8f41-68dd58cf3666 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:40:43 np0005532048 nova_compute[253661]: 2025-11-22 09:40:43.565 253665 DEBUG oslo_concurrency.lockutils [req-de473786-8650-4bf9-a59e-5e5fe69f7152 req-2f382d7c-ffd6-4e6a-8f41-68dd58cf3666 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:40:43 np0005532048 nova_compute[253661]: 2025-11-22 09:40:43.566 253665 DEBUG oslo_concurrency.lockutils [req-de473786-8650-4bf9-a59e-5e5fe69f7152 req-2f382d7c-ffd6-4e6a-8f41-68dd58cf3666 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:40:43 np0005532048 nova_compute[253661]: 2025-11-22 09:40:43.566 253665 DEBUG oslo_concurrency.lockutils [req-de473786-8650-4bf9-a59e-5e5fe69f7152 req-2f382d7c-ffd6-4e6a-8f41-68dd58cf3666 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:40:43 np0005532048 nova_compute[253661]: 2025-11-22 09:40:43.567 253665 DEBUG nova.compute.manager [req-de473786-8650-4bf9-a59e-5e5fe69f7152 req-2f382d7c-ffd6-4e6a-8f41-68dd58cf3666 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] No waiting events found dispatching network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:40:43 np0005532048 nova_compute[253661]: 2025-11-22 09:40:43.567 253665 WARNING nova.compute.manager [req-de473786-8650-4bf9-a59e-5e5fe69f7152 req-2f382d7c-ffd6-4e6a-8f41-68dd58cf3666 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received unexpected event network-vif-plugged-b9b8fcd6-fb13-472b-91c8-4ffd862c7e8f for instance with vm_state active and task_state None.
Nov 22 04:40:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:40:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 421 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 132 op/s
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.232 253665 DEBUG nova.compute.manager [req-22676866-1623-40f4-b19b-15ce25a5ac29 req-7ecd787a-880c-49fc-a76a-0fb992c6aa9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-changed-12ab8505-5ae2-427c-aaf6-9431683a99c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.233 253665 DEBUG nova.compute.manager [req-22676866-1623-40f4-b19b-15ce25a5ac29 req-7ecd787a-880c-49fc-a76a-0fb992c6aa9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing instance network info cache due to event network-changed-12ab8505-5ae2-427c-aaf6-9431683a99c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.233 253665 DEBUG oslo_concurrency.lockutils [req-22676866-1623-40f4-b19b-15ce25a5ac29 req-7ecd787a-880c-49fc-a76a-0fb992c6aa9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.233 253665 DEBUG oslo_concurrency.lockutils [req-22676866-1623-40f4-b19b-15ce25a5ac29 req-7ecd787a-880c-49fc-a76a-0fb992c6aa9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.233 253665 DEBUG nova.network.neutron [req-22676866-1623-40f4-b19b-15ce25a5ac29 req-7ecd787a-880c-49fc-a76a-0fb992c6aa9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Refreshing network info cache for port 12ab8505-5ae2-427c-aaf6-9431683a99c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.277 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.277 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.277 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.277 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.278 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.279 253665 INFO nova.compute.manager [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Terminating instance
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.280 253665 DEBUG nova.compute.manager [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:40:44 np0005532048 kernel: tap12ab8505-5a (unregistering): left promiscuous mode
Nov 22 04:40:44 np0005532048 NetworkManager[48920]: <info>  [1763804444.3272] device (tap12ab8505-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.333 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:44Z|01342|binding|INFO|Releasing lport 12ab8505-5ae2-427c-aaf6-9431683a99c8 from this chassis (sb_readonly=0)
Nov 22 04:40:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:44Z|01343|binding|INFO|Setting lport 12ab8505-5ae2-427c-aaf6-9431683a99c8 down in Southbound
Nov 22 04:40:44 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:44Z|01344|binding|INFO|Removing iface tap12ab8505-5a ovn-installed in OVS
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.337 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.359 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:a0:d3 10.100.0.3'], port_security=['fa:16:3e:30:a0:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9c45a555-9969-4d8a-bd3b-1ab61ce6f68c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80502cee-0a40-4541-8461-41de74f7266c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c64167e3-035b-4f1b-bee4-b85857c701f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a1bcb3a6-b65a-4848-8c3b-e1169d9ae614, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=12ab8505-5ae2-427c-aaf6-9431683a99c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:40:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.361 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 12ab8505-5ae2-427c-aaf6-9431683a99c8 in datapath 80502cee-0a40-4541-8461-41de74f7266c unbound from our chassis
Nov 22 04:40:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.363 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 80502cee-0a40-4541-8461-41de74f7266c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:40:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.363 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e150337b-e8fc-45d4-8897-8b364c653312]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:40:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.364 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-80502cee-0a40-4541-8461-41de74f7266c namespace which is not needed anymore
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.364 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:44 np0005532048 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000079.scope: Deactivated successfully.
Nov 22 04:40:44 np0005532048 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000079.scope: Consumed 19.720s CPU time.
Nov 22 04:40:44 np0005532048 systemd-machined[215941]: Machine qemu-152-instance-00000079 terminated.
Nov 22 04:40:44 np0005532048 neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c[378665]: [NOTICE]   (378669) : haproxy version is 2.8.14-c23fe91
Nov 22 04:40:44 np0005532048 neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c[378665]: [NOTICE]   (378669) : path to executable is /usr/sbin/haproxy
Nov 22 04:40:44 np0005532048 neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c[378665]: [WARNING]  (378669) : Exiting Master process...
Nov 22 04:40:44 np0005532048 neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c[378665]: [ALERT]    (378669) : Current worker (378671) exited with code 143 (Terminated)
Nov 22 04:40:44 np0005532048 neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c[378665]: [WARNING]  (378669) : All workers exited. Exiting... (0)
Nov 22 04:40:44 np0005532048 systemd[1]: libpod-7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f.scope: Deactivated successfully.
Nov 22 04:40:44 np0005532048 conmon[378665]: conmon 7e12ed01fc6355b1a9cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f.scope/container/memory.events
Nov 22 04:40:44 np0005532048 podman[383767]: 2025-11-22 09:40:44.501175154 +0000 UTC m=+0.047656405 container died 7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.501 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.516 253665 INFO nova.virt.libvirt.driver [-] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Instance destroyed successfully.
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.517 253665 DEBUG nova.objects.instance [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.533 253665 DEBUG nova.virt.libvirt.vif [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-985491122',display_name='tempest-TestNetworkBasicOps-server-985491122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-985491122',id=121,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLw/Dv8eQ4xdxRbrgMG3buHckSNpEq6nH01jUXVVCvbrxu9rFYxRPuOE7TkO7Vu7f1jI5X+YFsVT7W5UL14P2eBmD6ljncar+3VoJnWQEW3LGEficcRcO1AHHbfbm+zd4w==',key_name='tempest-TestNetworkBasicOps-1868914483',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:38:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-tl53inqt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:38:40Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=9c45a555-9969-4d8a-bd3b-1ab61ce6f68c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.534 253665 DEBUG nova.network.os_vif_util [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.534 253665 DEBUG nova.network.os_vif_util [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:a0:d3,bridge_name='br-int',has_traffic_filtering=True,id=12ab8505-5ae2-427c-aaf6-9431683a99c8,network=Network(80502cee-0a40-4541-8461-41de74f7266c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ab8505-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:40:44 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f-userdata-shm.mount: Deactivated successfully.
Nov 22 04:40:44 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7f37bc7976ef071fba5e665fbf02212daac86ba36a65621a352596852c57db43-merged.mount: Deactivated successfully.
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.541 253665 DEBUG os_vif [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:a0:d3,bridge_name='br-int',has_traffic_filtering=True,id=12ab8505-5ae2-427c-aaf6-9431683a99c8,network=Network(80502cee-0a40-4541-8461-41de74f7266c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ab8505-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.545 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.546 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12ab8505-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.551 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:40:44 np0005532048 podman[383767]: 2025-11-22 09:40:44.55346133 +0000 UTC m=+0.099942581 container cleanup 7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.554 253665 INFO os_vif [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:a0:d3,bridge_name='br-int',has_traffic_filtering=True,id=12ab8505-5ae2-427c-aaf6-9431683a99c8,network=Network(80502cee-0a40-4541-8461-41de74f7266c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ab8505-5a')#033[00m
Nov 22 04:40:44 np0005532048 systemd[1]: libpod-conmon-7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f.scope: Deactivated successfully.
Nov 22 04:40:44 np0005532048 podman[383805]: 2025-11-22 09:40:44.633147285 +0000 UTC m=+0.053480896 container remove 7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:40:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.641 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f97663a6-1e3b-4560-88d7-8f1eb30e8932]: (4, ('Sat Nov 22 09:40:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c (7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f)\n7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f\nSat Nov 22 09:40:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-80502cee-0a40-4541-8461-41de74f7266c (7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f)\n7e12ed01fc6355b1a9cd5e61d207e2f2cfa893a0e40ef3881c9b1fcc1bf07e5f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.643 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ba7154f2-d704-4bae-9695-dafbd5cfe332]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.645 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80502cee-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:44 np0005532048 kernel: tap80502cee-00: left promiscuous mode
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:44 np0005532048 nova_compute[253661]: 2025-11-22 09:40:44.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.665 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1995449e-280e-4c5d-af9a-786cee99192e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.680 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c78e5a3f-dd42-4e2f-b449-ad718950a040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.681 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0a6015-b13e-4570-9220-2079657ea426]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.699 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5b57f3ee-04fa-4059-8378-01ccac4a2cce]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 716974, 'reachable_time': 15804, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383836, 'error': None, 'target': 'ovnmeta-80502cee-0a40-4541-8461-41de74f7266c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:44 np0005532048 systemd[1]: run-netns-ovnmeta\x2d80502cee\x2d0a40\x2d4541\x2d8461\x2d41de74f7266c.mount: Deactivated successfully.
Nov 22 04:40:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.702 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-80502cee-0a40-4541-8461-41de74f7266c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:40:44 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:44.702 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[3658176e-6086-4607-83c8-ffeeb11ed332]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.071 253665 INFO nova.virt.libvirt.driver [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Deleting instance files /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_del#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.072 253665 INFO nova.virt.libvirt.driver [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Deletion of /var/lib/nova/instances/9c45a555-9969-4d8a-bd3b-1ab61ce6f68c_del complete#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.121 253665 INFO nova.compute.manager [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.122 253665 DEBUG oslo.service.loopingcall [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.123 253665 DEBUG nova.compute.manager [-] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.123 253665 DEBUG nova.network.neutron [-] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.656 253665 DEBUG nova.compute.manager [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-unplugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.656 253665 DEBUG oslo_concurrency.lockutils [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.657 253665 DEBUG oslo_concurrency.lockutils [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.657 253665 DEBUG oslo_concurrency.lockutils [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.657 253665 DEBUG nova.compute.manager [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] No waiting events found dispatching network-vif-unplugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.657 253665 DEBUG nova.compute.manager [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-unplugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.657 253665 DEBUG nova.compute.manager [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.658 253665 DEBUG oslo_concurrency.lockutils [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.658 253665 DEBUG oslo_concurrency.lockutils [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.658 253665 DEBUG oslo_concurrency.lockutils [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.658 253665 DEBUG nova.compute.manager [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] No waiting events found dispatching network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.659 253665 WARNING nova.compute.manager [req-e0c488d1-3059-488c-84a7-4a931b280057 req-313d2c7e-faf8-48a0-9265-525c13222f96 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received unexpected event network-vif-plugged-12ab8505-5ae2-427c-aaf6-9431683a99c8 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.773 253665 DEBUG nova.network.neutron [req-22676866-1623-40f4-b19b-15ce25a5ac29 req-7ecd787a-880c-49fc-a76a-0fb992c6aa9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updated VIF entry in instance network info cache for port 12ab8505-5ae2-427c-aaf6-9431683a99c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.773 253665 DEBUG nova.network.neutron [req-22676866-1623-40f4-b19b-15ce25a5ac29 req-7ecd787a-880c-49fc-a76a-0fb992c6aa9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [{"id": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "address": "fa:16:3e:30:a0:d3", "network": {"id": "80502cee-0a40-4541-8461-41de74f7266c", "bridge": "br-int", "label": "tempest-network-smoke--1950075117", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ab8505-5a", "ovs_interfaceid": "12ab8505-5ae2-427c-aaf6-9431683a99c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.791 253665 DEBUG oslo_concurrency.lockutils [req-22676866-1623-40f4-b19b-15ce25a5ac29 req-7ecd787a-880c-49fc-a76a-0fb992c6aa9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.954 253665 DEBUG nova.network.neutron [-] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:40:45 np0005532048 nova_compute[253661]: 2025-11-22 09:40:45.968 253665 INFO nova.compute.manager [-] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Took 0.84 seconds to deallocate network for instance.#033[00m
Nov 22 04:40:46 np0005532048 nova_compute[253661]: 2025-11-22 09:40:46.003 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:46 np0005532048 nova_compute[253661]: 2025-11-22 09:40:46.004 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 421 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.6 MiB/s wr, 66 op/s
Nov 22 04:40:46 np0005532048 nova_compute[253661]: 2025-11-22 09:40:46.124 253665 DEBUG oslo_concurrency.processutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:40:46 np0005532048 nova_compute[253661]: 2025-11-22 09:40:46.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:40:46 np0005532048 nova_compute[253661]: 2025-11-22 09:40:46.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:46 np0005532048 nova_compute[253661]: 2025-11-22 09:40:46.330 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:40:46 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3808430014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:40:46 np0005532048 nova_compute[253661]: 2025-11-22 09:40:46.586 253665 DEBUG oslo_concurrency.processutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:40:46 np0005532048 nova_compute[253661]: 2025-11-22 09:40:46.593 253665 DEBUG nova.compute.provider_tree [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:40:46 np0005532048 nova_compute[253661]: 2025-11-22 09:40:46.607 253665 DEBUG nova.scheduler.client.report [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:40:46 np0005532048 nova_compute[253661]: 2025-11-22 09:40:46.626 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:46 np0005532048 nova_compute[253661]: 2025-11-22 09:40:46.628 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:46 np0005532048 nova_compute[253661]: 2025-11-22 09:40:46.629 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:46 np0005532048 nova_compute[253661]: 2025-11-22 09:40:46.629 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:40:46 np0005532048 nova_compute[253661]: 2025-11-22 09:40:46.629 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:40:46 np0005532048 nova_compute[253661]: 2025-11-22 09:40:46.702 253665 INFO nova.scheduler.client.report [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c#033[00m
Nov 22 04:40:46 np0005532048 nova_compute[253661]: 2025-11-22 09:40:46.802 253665 DEBUG oslo_concurrency.lockutils [None req-853fe97b-49a5-4ee0-9a7e-3ac9226b9fce 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "9c45a555-9969-4d8a-bd3b-1ab61ce6f68c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:40:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162660923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.132 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.214 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.214 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.220 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.220 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.224 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.225 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.228 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.229 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.480 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.481 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2866MB free_disk=59.76681137084961GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.481 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.481 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.540 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance ba0b1c52-c98b-4c2f-a213-e203719ada54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.541 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 44f1789d-14f7-46df-a863-e8c3c418f7f3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.541 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 7f88c3e8-e667-4d9a-8178-c99843560719 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.541 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance aaeb1088-1220-47e3-9462-ba96b1d4e87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.541 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.541 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.628 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:40:47 np0005532048 nova_compute[253661]: 2025-11-22 09:40:47.833 253665 DEBUG nova.compute.manager [req-24485250-2905-4fa7-b77c-de4593b3ed04 req-89fa3920-df18-4be7-a866-ccc6d989605d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Received event network-vif-deleted-12ab8505-5ae2-427c-aaf6-9431683a99c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 22 04:40:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:40:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/932120452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:40:48 np0005532048 nova_compute[253661]: 2025-11-22 09:40:48.136 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:40:48 np0005532048 nova_compute[253661]: 2025-11-22 09:40:48.142 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:40:48 np0005532048 nova_compute[253661]: 2025-11-22 09:40:48.161 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:40:48 np0005532048 nova_compute[253661]: 2025-11-22 09:40:48.177 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:40:48 np0005532048 nova_compute[253661]: 2025-11-22 09:40:48.178 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:40:49 np0005532048 nova_compute[253661]: 2025-11-22 09:40:49.550 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:49 np0005532048 nova_compute[253661]: 2025-11-22 09:40:49.929 253665 DEBUG nova.compute.manager [req-b4c17b68-1562-48d7-8d55-04dfb1d7dc59 req-8ac8e53e-bbc7-4f91-b20e-65252ed26f43 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-changed-83f684f5-d7e5-44a8-960d-efe4ce81e023 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:49 np0005532048 nova_compute[253661]: 2025-11-22 09:40:49.930 253665 DEBUG nova.compute.manager [req-b4c17b68-1562-48d7-8d55-04dfb1d7dc59 req-8ac8e53e-bbc7-4f91-b20e-65252ed26f43 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Refreshing instance network info cache due to event network-changed-83f684f5-d7e5-44a8-960d-efe4ce81e023. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:40:49 np0005532048 nova_compute[253661]: 2025-11-22 09:40:49.930 253665 DEBUG oslo_concurrency.lockutils [req-b4c17b68-1562-48d7-8d55-04dfb1d7dc59 req-8ac8e53e-bbc7-4f91-b20e-65252ed26f43 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:40:49 np0005532048 nova_compute[253661]: 2025-11-22 09:40:49.930 253665 DEBUG oslo_concurrency.lockutils [req-b4c17b68-1562-48d7-8d55-04dfb1d7dc59 req-8ac8e53e-bbc7-4f91-b20e-65252ed26f43 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:40:49 np0005532048 nova_compute[253661]: 2025-11-22 09:40:49.931 253665 DEBUG nova.network.neutron [req-b4c17b68-1562-48d7-8d55-04dfb1d7dc59 req-8ac8e53e-bbc7-4f91-b20e-65252ed26f43 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Refreshing network info cache for port 83f684f5-d7e5-44a8-960d-efe4ce81e023 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.028 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.028 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.029 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.029 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.029 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.030 253665 INFO nova.compute.manager [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Terminating instance#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.032 253665 DEBUG nova.compute.manager [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:40:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 22 04:40:50 np0005532048 kernel: tap83f684f5-d7 (unregistering): left promiscuous mode
Nov 22 04:40:50 np0005532048 NetworkManager[48920]: <info>  [1763804450.0992] device (tap83f684f5-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.107 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:50Z|01345|binding|INFO|Releasing lport 83f684f5-d7e5-44a8-960d-efe4ce81e023 from this chassis (sb_readonly=0)
Nov 22 04:40:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:50Z|01346|binding|INFO|Setting lport 83f684f5-d7e5-44a8-960d-efe4ce81e023 down in Southbound
Nov 22 04:40:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:50Z|01347|binding|INFO|Removing iface tap83f684f5-d7 ovn-installed in OVS
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.110 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.124 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:50 np0005532048 kernel: tap454bebe0-52 (unregistering): left promiscuous mode
Nov 22 04:40:50 np0005532048 NetworkManager[48920]: <info>  [1763804450.1294] device (tap454bebe0-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:40:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:50Z|01348|binding|INFO|Releasing lport 454bebe0-5237-48cb-8cf5-10be46f6d33a from this chassis (sb_readonly=1)
Nov 22 04:40:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:50Z|01349|binding|INFO|Removing iface tap454bebe0-52 ovn-installed in OVS
Nov 22 04:40:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:50Z|01350|if_status|INFO|Dropped 2 log messages in last 117 seconds (most recently, 117 seconds ago) due to excessive rate
Nov 22 04:40:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:50Z|01351|if_status|INFO|Not setting lport 454bebe0-5237-48cb-8cf5-10be46f6d33a down as sb is readonly
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.142 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.157 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.178 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.178 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:40:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:50Z|01352|binding|INFO|Setting lport 454bebe0-5237-48cb-8cf5-10be46f6d33a down in Southbound
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.196 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:93:6c 10.100.0.14'], port_security=['fa:16:3e:29:93:6c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7f88c3e8-e667-4d9a-8178-c99843560719', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33aa2b15-84be-4fa8-858f-98182293b1b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb584c12-cffa-488f-adbc-2a255a5cdce2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a82afa9d-1a09-411a-8866-4ce961a27350, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=83f684f5-d7e5-44a8-960d-efe4ce81e023) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.199 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 83f684f5-d7e5-44a8-960d-efe4ce81e023 in datapath 33aa2b15-84be-4fa8-858f-98182293b1b2 unbound from our chassis#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.200 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33aa2b15-84be-4fa8-858f-98182293b1b2#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.215 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c632a2f9-994c-459c-8919-175e721d3714]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:50 np0005532048 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Nov 22 04:40:50 np0005532048 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007d.scope: Consumed 18.361s CPU time.
Nov 22 04:40:50 np0005532048 systemd-machined[215941]: Machine qemu-156-instance-0000007d terminated.
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.247 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[104382f8-904d-438d-8ab7-3946c438fa37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.252 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bf84dfb0-5bb9-45ad-a45f-9a4a73327a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.257 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:2b:99 2001:db8:0:1:f816:3eff:fe0c:2b99 2001:db8::f816:3eff:fe0c:2b99'], port_security=['fa:16:3e:0c:2b:99 2001:db8:0:1:f816:3eff:fe0c:2b99 2001:db8::f816:3eff:fe0c:2b99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe0c:2b99/64 2001:db8::f816:3eff:fe0c:2b99/64', 'neutron:device_id': '7f88c3e8-e667-4d9a-8178-c99843560719', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20228844-2184-465b-8bc3-e846cfb6d3cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb584c12-cffa-488f-adbc-2a255a5cdce2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd6b77e6-a2ac-463b-a37b-14dc60b71e56, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=454bebe0-5237-48cb-8cf5-10be46f6d33a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.286 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a446e5d7-9488-472c-9804-091d97dc51d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.304 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a695ef2f-c366-4a75-996e-8d96ed377840]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33aa2b15-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:23:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721624, 'reachable_time': 43164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383921, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.319 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[511e9399-f614-4408-adb0-9f71e8ad1b77]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap33aa2b15-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721638, 'tstamp': 721638}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383922, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap33aa2b15-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721641, 'tstamp': 721641}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383922, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.321 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33aa2b15-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.322 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.330 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.331 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33aa2b15-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.332 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.333 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33aa2b15-80, col_values=(('external_ids', {'iface-id': 'c8541406-177e-4d49-a6da-f639419da399'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.334 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.336 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 454bebe0-5237-48cb-8cf5-10be46f6d33a in datapath 20228844-2184-465b-8bc3-e846cfb6d3cb unbound from our chassis#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.339 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20228844-2184-465b-8bc3-e846cfb6d3cb#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.358 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[20300581-df39-449b-8486-b6c39ee9c191]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.396 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[f0f0eb76-bf54-45f6-b2df-8b87d4b983ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.400 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[262a894a-8d8d-4079-ac36-57bd8b67894b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.439 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[634e46eb-12f5-4b94-9ac5-d4ee73a5c4c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:50 np0005532048 NetworkManager[48920]: <info>  [1763804450.4627] manager: (tap454bebe0-52): new Tun device (/org/freedesktop/NetworkManager/Devices/547)
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.463 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a497838e-8189-4843-8f43-9c40c549066b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20228844-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:0f:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 5, 'rx_bytes': 3160, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 5, 'rx_bytes': 3160, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 375], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721718, 'reachable_time': 22099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383931, 'error': None, 'target': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.483 253665 INFO nova.virt.libvirt.driver [-] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Instance destroyed successfully.#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.483 253665 DEBUG nova.objects.instance [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 7f88c3e8-e667-4d9a-8178-c99843560719 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.491 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3709b3-de89-470e-a2d4-2bd9901026e2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap20228844-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721733, 'tstamp': 721733}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383946, 'error': None, 'target': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.493 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20228844-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.495 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.497 253665 DEBUG nova.virt.libvirt.vif [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:39:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1324430276',display_name='tempest-TestGettingAddress-server-1324430276',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1324430276',id=125,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:40:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-blcxy7wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:40:11Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7f88c3e8-e667-4d9a-8178-c99843560719,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.497 253665 DEBUG nova.network.os_vif_util [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.498 253665 DEBUG nova.network.os_vif_util [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:29:93:6c,bridge_name='br-int',has_traffic_filtering=True,id=83f684f5-d7e5-44a8-960d-efe4ce81e023,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83f684f5-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.499 253665 DEBUG os_vif [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:29:93:6c,bridge_name='br-int',has_traffic_filtering=True,id=83f684f5-d7e5-44a8-960d-efe4ce81e023,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83f684f5-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.500 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.500 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83f684f5-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.502 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.503 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.504 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20228844-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.504 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.505 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20228844-20, col_values=(('external_ids', {'iface-id': 'c6eb41f8-4dde-4c2b-a6c7-dd47868a17b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:50 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:50.505 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.507 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.510 253665 INFO os_vif [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:29:93:6c,bridge_name='br-int',has_traffic_filtering=True,id=83f684f5-d7e5-44a8-960d-efe4ce81e023,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83f684f5-d7')#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.511 253665 DEBUG nova.virt.libvirt.vif [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:39:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1324430276',display_name='tempest-TestGettingAddress-server-1324430276',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1324430276',id=125,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:40:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-blcxy7wc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:40:11Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=7f88c3e8-e667-4d9a-8178-c99843560719,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.511 253665 DEBUG nova.network.os_vif_util [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.512 253665 DEBUG nova.network.os_vif_util [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2b:99,bridge_name='br-int',has_traffic_filtering=True,id=454bebe0-5237-48cb-8cf5-10be46f6d33a,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap454bebe0-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.513 253665 DEBUG os_vif [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2b:99,bridge_name='br-int',has_traffic_filtering=True,id=454bebe0-5237-48cb-8cf5-10be46f6d33a,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap454bebe0-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.514 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.514 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap454bebe0-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.518 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.520 253665 INFO os_vif [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:2b:99,bridge_name='br-int',has_traffic_filtering=True,id=454bebe0-5237-48cb-8cf5-10be46f6d33a,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap454bebe0-52')#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.946 253665 INFO nova.virt.libvirt.driver [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Deleting instance files /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719_del#033[00m
Nov 22 04:40:50 np0005532048 nova_compute[253661]: 2025-11-22 09:40:50.947 253665 INFO nova.virt.libvirt.driver [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Deletion of /var/lib/nova/instances/7f88c3e8-e667-4d9a-8178-c99843560719_del complete#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.081 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.082 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.083 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.083 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.084 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.085 253665 INFO nova.compute.manager [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Terminating instance#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.087 253665 DEBUG nova.compute.manager [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:40:51 np0005532048 kernel: tap01cd64f6-47 (unregistering): left promiscuous mode
Nov 22 04:40:51 np0005532048 NetworkManager[48920]: <info>  [1763804451.1444] device (tap01cd64f6-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:40:51 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:51Z|01353|binding|INFO|Releasing lport 01cd64f6-47ab-4640-ae46-6834065ff09b from this chassis (sb_readonly=0)
Nov 22 04:40:51 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:51Z|01354|binding|INFO|Setting lport 01cd64f6-47ab-4640-ae46-6834065ff09b down in Southbound
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.151 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:51 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:51Z|01355|binding|INFO|Removing iface tap01cd64f6-47 ovn-installed in OVS
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.154 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.166 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.174 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:50:06 10.100.0.5'], port_security=['fa:16:3e:3b:50:06 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'aaeb1088-1220-47e3-9462-ba96b1d4e87a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-705357ee-1033-4907-905f-d41aa6dcfd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f198579-316d-40d0-ae5d-a4d8440647aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b913e923-e2b2-4479-8913-960bf5f1e614, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=01cd64f6-47ab-4640-ae46-6834065ff09b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:40:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.175 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 01cd64f6-47ab-4640-ae46-6834065ff09b in datapath 705357ee-1033-4907-905f-d41aa6dcfd73 unbound from our chassis#033[00m
Nov 22 04:40:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.176 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 705357ee-1033-4907-905f-d41aa6dcfd73#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.186 253665 INFO nova.compute.manager [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Took 1.15 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.186 253665 DEBUG oslo.service.loopingcall [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.187 253665 DEBUG nova.compute.manager [-] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.187 253665 DEBUG nova.network.neutron [-] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:40:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.198 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1d25fb-83d7-4c46-b435-cc59981b07a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:51 np0005532048 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Nov 22 04:40:51 np0005532048 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007e.scope: Consumed 15.575s CPU time.
Nov 22 04:40:51 np0005532048 systemd-machined[215941]: Machine qemu-157-instance-0000007e terminated.
Nov 22 04:40:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.236 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[430c1c73-6b4c-4d38-9fa3-42d06c058b2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.239 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[003befae-b771-4fb9-a4d6-6cb95521617f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.274 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1b3c6b-ae2b-44e7-acf8-2ffab8a786ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.300 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed34f90-49f6-4bd5-b31e-8223cac58e40]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap705357ee-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:aa:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722923, 'reachable_time': 34041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383983, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.313 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.320 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.325 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bf464281-f691-47fa-857a-b7dab6438302]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap705357ee-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722937, 'tstamp': 722937}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383986, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap705357ee-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722940, 'tstamp': 722940}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383986, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.328 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap705357ee-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.328 253665 INFO nova.virt.libvirt.driver [-] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Instance destroyed successfully.#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.329 253665 DEBUG nova.objects.instance [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'resources' on Instance uuid aaeb1088-1220-47e3-9462-ba96b1d4e87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.330 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.334 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.334 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap705357ee-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.335 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.335 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap705357ee-10, col_values=(('external_ids', {'iface-id': 'e4d17104-1aeb-4ffd-be7b-ed782324874a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:51.335 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.336 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.355 253665 DEBUG nova.virt.libvirt.vif [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:40:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-0-2082001427',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-0-2082001427',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-gen',id=126,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbk5RfudFquhpa5lprQIMNSDd1LWjuKWOiIN353NFhcoF5DkddOnpCLYMTAq6AP8dFFIkCpIG6/In3cki28BBZ+JI0FuFnDsEiRArR4SIm949ArAgIcePLWzUf/qVubsg==',key_name='tempest-TestSecurityGroupsBasicOps-321654172',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:40:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-9t0kovo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:40:29Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=aaeb1088-1220-47e3-9462-ba96b1d4e87a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.355 253665 DEBUG nova.network.os_vif_util [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "01cd64f6-47ab-4640-ae46-6834065ff09b", "address": "fa:16:3e:3b:50:06", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cd64f6-47", "ovs_interfaceid": "01cd64f6-47ab-4640-ae46-6834065ff09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.356 253665 DEBUG nova.network.os_vif_util [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:50:06,bridge_name='br-int',has_traffic_filtering=True,id=01cd64f6-47ab-4640-ae46-6834065ff09b,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd64f6-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.356 253665 DEBUG os_vif [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:50:06,bridge_name='br-int',has_traffic_filtering=True,id=01cd64f6-47ab-4640-ae46-6834065ff09b,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd64f6-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.357 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.357 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01cd64f6-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.359 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.361 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.363 253665 INFO os_vif [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:50:06,bridge_name='br-int',has_traffic_filtering=True,id=01cd64f6-47ab-4640-ae46-6834065ff09b,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cd64f6-47')#033[00m
Nov 22 04:40:51 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:51Z|01356|binding|INFO|Releasing lport c8541406-177e-4d49-a6da-f639419da399 from this chassis (sb_readonly=0)
Nov 22 04:40:51 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:51Z|01357|binding|INFO|Releasing lport c6eb41f8-4dde-4c2b-a6c7-dd47868a17b1 from this chassis (sb_readonly=0)
Nov 22 04:40:51 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:51Z|01358|binding|INFO|Releasing lport e4d17104-1aeb-4ffd-be7b-ed782324874a from this chassis (sb_readonly=0)
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.492 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.666 253665 DEBUG nova.compute.manager [req-ae11f765-6e3f-4d75-8bc3-3fb531623c08 req-3d7d8c03-1153-4b6d-8f8b-be799deb9ddf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received event network-vif-unplugged-01cd64f6-47ab-4640-ae46-6834065ff09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.666 253665 DEBUG oslo_concurrency.lockutils [req-ae11f765-6e3f-4d75-8bc3-3fb531623c08 req-3d7d8c03-1153-4b6d-8f8b-be799deb9ddf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.666 253665 DEBUG oslo_concurrency.lockutils [req-ae11f765-6e3f-4d75-8bc3-3fb531623c08 req-3d7d8c03-1153-4b6d-8f8b-be799deb9ddf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.666 253665 DEBUG oslo_concurrency.lockutils [req-ae11f765-6e3f-4d75-8bc3-3fb531623c08 req-3d7d8c03-1153-4b6d-8f8b-be799deb9ddf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.667 253665 DEBUG nova.compute.manager [req-ae11f765-6e3f-4d75-8bc3-3fb531623c08 req-3d7d8c03-1153-4b6d-8f8b-be799deb9ddf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] No waiting events found dispatching network-vif-unplugged-01cd64f6-47ab-4640-ae46-6834065ff09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.667 253665 DEBUG nova.compute.manager [req-ae11f765-6e3f-4d75-8bc3-3fb531623c08 req-3d7d8c03-1153-4b6d-8f8b-be799deb9ddf 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received event network-vif-unplugged-01cd64f6-47ab-4640-ae46-6834065ff09b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.764 253665 INFO nova.virt.libvirt.driver [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Deleting instance files /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a_del#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.766 253665 INFO nova.virt.libvirt.driver [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Deletion of /var/lib/nova/instances/aaeb1088-1220-47e3-9462-ba96b1d4e87a_del complete#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.817 253665 INFO nova.compute.manager [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.818 253665 DEBUG oslo.service.loopingcall [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.818 253665 DEBUG nova.compute.manager [-] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:40:51 np0005532048 nova_compute[253661]: 2025-11-22 09:40:51.818 253665 DEBUG nova.network.neutron [-] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:40:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 319 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 422 KiB/s rd, 2.2 MiB/s wr, 107 op/s
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.076 253665 DEBUG nova.network.neutron [req-b4c17b68-1562-48d7-8d55-04dfb1d7dc59 req-8ac8e53e-bbc7-4f91-b20e-65252ed26f43 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updated VIF entry in instance network info cache for port 83f684f5-d7e5-44a8-960d-efe4ce81e023. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.077 253665 DEBUG nova.network.neutron [req-b4c17b68-1562-48d7-8d55-04dfb1d7dc59 req-8ac8e53e-bbc7-4f91-b20e-65252ed26f43 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updating instance_info_cache with network_info: [{"id": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "address": "fa:16:3e:29:93:6c", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83f684f5-d7", "ovs_interfaceid": "83f684f5-d7e5-44a8-960d-efe4ce81e023", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "address": "fa:16:3e:0c:2b:99", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe0c:2b99", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap454bebe0-52", "ovs_interfaceid": "454bebe0-5237-48cb-8cf5-10be46f6d33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.096 253665 DEBUG oslo_concurrency.lockutils [req-b4c17b68-1562-48d7-8d55-04dfb1d7dc59 req-8ac8e53e-bbc7-4f91-b20e-65252ed26f43 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-7f88c3e8-e667-4d9a-8178-c99843560719" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:40:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:40:52
Nov 22 04:40:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:40:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:40:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'backups', '.rgw.root', 'vms', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'default.rgw.log', 'volumes']
Nov 22 04:40:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.480 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804437.4767847, d7865a13-0d41-44d6-aac2-10cca6e1348a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.481 253665 INFO nova.compute.manager [-] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.503 253665 DEBUG nova.compute.manager [None req-b4ed506e-f46f-49cd-90d7-fec8b90b07a4 - - - - - -] [instance: d7865a13-0d41-44d6-aac2-10cca6e1348a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:40:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:40:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:40:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:40:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:40:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:40:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.768 253665 DEBUG nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-unplugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.768 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.769 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.769 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.769 253665 DEBUG nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] No waiting events found dispatching network-vif-unplugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.769 253665 DEBUG nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-unplugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.770 253665 DEBUG nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.770 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.770 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.770 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.771 253665 DEBUG nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] No waiting events found dispatching network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.771 253665 WARNING nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received unexpected event network-vif-plugged-83f684f5-d7e5-44a8-960d-efe4ce81e023 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.771 253665 DEBUG nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-unplugged-454bebe0-5237-48cb-8cf5-10be46f6d33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.771 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.772 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.772 253665 DEBUG oslo_concurrency.lockutils [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.772 253665 DEBUG nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] No waiting events found dispatching network-vif-unplugged-454bebe0-5237-48cb-8cf5-10be46f6d33a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:40:52 np0005532048 nova_compute[253661]: 2025-11-22 09:40:52.772 253665 DEBUG nova.compute.manager [req-6c17bc85-7fb7-4303-89f2-fc220a3aaf5a req-fab65e99-5655-4d35-83e3-8cddea805a17 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-unplugged-454bebe0-5237-48cb-8cf5-10be46f6d33a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:40:53 np0005532048 nova_compute[253661]: 2025-11-22 09:40:53.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:40:53 np0005532048 nova_compute[253661]: 2025-11-22 09:40:53.245 253665 DEBUG nova.network.neutron [-] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:40:53 np0005532048 nova_compute[253661]: 2025-11-22 09:40:53.265 253665 INFO nova.compute.manager [-] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Took 1.45 seconds to deallocate network for instance.#033[00m
Nov 22 04:40:53 np0005532048 nova_compute[253661]: 2025-11-22 09:40:53.321 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:53 np0005532048 nova_compute[253661]: 2025-11-22 09:40:53.322 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:53 np0005532048 nova_compute[253661]: 2025-11-22 09:40:53.401 253665 DEBUG nova.network.neutron [-] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:40:53 np0005532048 nova_compute[253661]: 2025-11-22 09:40:53.416 253665 INFO nova.compute.manager [-] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Took 2.23 seconds to deallocate network for instance.#033[00m
Nov 22 04:40:53 np0005532048 nova_compute[253661]: 2025-11-22 09:40:53.453 253665 DEBUG oslo_concurrency.processutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:40:53 np0005532048 nova_compute[253661]: 2025-11-22 09:40:53.493 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:53 np0005532048 nova_compute[253661]: 2025-11-22 09:40:53.806 253665 DEBUG nova.compute.manager [req-169b13ff-0dc5-45bf-bf3c-d134133d5ed4 req-8e5b6679-2cbd-4db7-b478-b3f37306919c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received event network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:53 np0005532048 nova_compute[253661]: 2025-11-22 09:40:53.807 253665 DEBUG oslo_concurrency.lockutils [req-169b13ff-0dc5-45bf-bf3c-d134133d5ed4 req-8e5b6679-2cbd-4db7-b478-b3f37306919c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:53 np0005532048 nova_compute[253661]: 2025-11-22 09:40:53.807 253665 DEBUG oslo_concurrency.lockutils [req-169b13ff-0dc5-45bf-bf3c-d134133d5ed4 req-8e5b6679-2cbd-4db7-b478-b3f37306919c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:53 np0005532048 nova_compute[253661]: 2025-11-22 09:40:53.808 253665 DEBUG oslo_concurrency.lockutils [req-169b13ff-0dc5-45bf-bf3c-d134133d5ed4 req-8e5b6679-2cbd-4db7-b478-b3f37306919c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:53 np0005532048 nova_compute[253661]: 2025-11-22 09:40:53.808 253665 DEBUG nova.compute.manager [req-169b13ff-0dc5-45bf-bf3c-d134133d5ed4 req-8e5b6679-2cbd-4db7-b478-b3f37306919c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] No waiting events found dispatching network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:40:53 np0005532048 nova_compute[253661]: 2025-11-22 09:40:53.808 253665 WARNING nova.compute.manager [req-169b13ff-0dc5-45bf-bf3c-d134133d5ed4 req-8e5b6679-2cbd-4db7-b478-b3f37306919c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received unexpected event network-vif-plugged-01cd64f6-47ab-4640-ae46-6834065ff09b for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:40:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:40:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:40:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3149363620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:40:53 np0005532048 nova_compute[253661]: 2025-11-22 09:40:53.975 253665 DEBUG oslo_concurrency.processutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:40:53 np0005532048 nova_compute[253661]: 2025-11-22 09:40:53.983 253665 DEBUG nova.compute.provider_tree [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.003 253665 DEBUG nova.scheduler.client.report [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.033 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.037 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 200 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 2.2 MiB/s wr, 152 op/s
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.069 253665 INFO nova.scheduler.client.report [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Deleted allocations for instance aaeb1088-1220-47e3-9462-ba96b1d4e87a#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.130 253665 DEBUG oslo_concurrency.processutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.166 253665 DEBUG oslo_concurrency.lockutils [None req-068ca422-069b-450e-80e2-5bbc38f5a079 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "aaeb1088-1220-47e3-9462-ba96b1d4e87a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:40:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/37611534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.610 253665 DEBUG oslo_concurrency.processutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.615 253665 DEBUG nova.compute.provider_tree [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.632 253665 DEBUG nova.scheduler.client.report [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.668 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.700 253665 INFO nova.scheduler.client.report [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 7f88c3e8-e667-4d9a-8178-c99843560719#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.784 253665 DEBUG oslo_concurrency.lockutils [None req-adfdca07-1bf0-41c0-bf92-44f2155860b2 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.959 253665 DEBUG nova.compute.manager [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.959 253665 DEBUG oslo_concurrency.lockutils [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.960 253665 DEBUG oslo_concurrency.lockutils [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.960 253665 DEBUG oslo_concurrency.lockutils [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "7f88c3e8-e667-4d9a-8178-c99843560719-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.960 253665 DEBUG nova.compute.manager [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] No waiting events found dispatching network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.961 253665 WARNING nova.compute.manager [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received unexpected event network-vif-plugged-454bebe0-5237-48cb-8cf5-10be46f6d33a for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.961 253665 DEBUG nova.compute.manager [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-deleted-83f684f5-d7e5-44a8-960d-efe4ce81e023 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.961 253665 DEBUG nova.compute.manager [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Received event network-vif-deleted-01cd64f6-47ab-4640-ae46-6834065ff09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:54 np0005532048 nova_compute[253661]: 2025-11-22 09:40:54.962 253665 DEBUG nova.compute.manager [req-1fe3c39b-c1f4-480c-a3e4-e9bd2a16a4ac req-77c9b358-01da-4dc8-ac83-a07ced66d99f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Received event network-vif-deleted-454bebe0-5237-48cb-8cf5-10be46f6d33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:55 np0005532048 nova_compute[253661]: 2025-11-22 09:40:55.160 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:40:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:40:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:40:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:40:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:40:55 np0005532048 nova_compute[253661]: 2025-11-22 09:40:55.873 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "44f1789d-14f7-46df-a863-e8c3c418f7f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:55 np0005532048 nova_compute[253661]: 2025-11-22 09:40:55.873 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:55 np0005532048 nova_compute[253661]: 2025-11-22 09:40:55.874 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:55 np0005532048 nova_compute[253661]: 2025-11-22 09:40:55.874 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:55 np0005532048 nova_compute[253661]: 2025-11-22 09:40:55.874 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:55 np0005532048 nova_compute[253661]: 2025-11-22 09:40:55.876 253665 INFO nova.compute.manager [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Terminating instance#033[00m
Nov 22 04:40:55 np0005532048 nova_compute[253661]: 2025-11-22 09:40:55.877 253665 DEBUG nova.compute.manager [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:40:55 np0005532048 kernel: tap7381e299-12 (unregistering): left promiscuous mode
Nov 22 04:40:55 np0005532048 NetworkManager[48920]: <info>  [1763804455.9367] device (tap7381e299-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:40:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:56Z|01359|binding|INFO|Releasing lport 7381e299-12bd-46ec-8abf-df35fe0bf48a from this chassis (sb_readonly=0)
Nov 22 04:40:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:56Z|01360|binding|INFO|Setting lport 7381e299-12bd-46ec-8abf-df35fe0bf48a down in Southbound
Nov 22 04:40:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:56Z|01361|binding|INFO|Removing iface tap7381e299-12 ovn-installed in OVS
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.016 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.019 253665 DEBUG nova.compute.manager [req-b62ce3e9-6ba7-46d2-be44-b45e1c80171e req-5045e3fc-2983-42f2-8854-57fd5ced55be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received event network-changed-7381e299-12bd-46ec-8abf-df35fe0bf48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.020 253665 DEBUG nova.compute.manager [req-b62ce3e9-6ba7-46d2-be44-b45e1c80171e req-5045e3fc-2983-42f2-8854-57fd5ced55be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Refreshing instance network info cache due to event network-changed-7381e299-12bd-46ec-8abf-df35fe0bf48a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.021 253665 DEBUG oslo_concurrency.lockutils [req-b62ce3e9-6ba7-46d2-be44-b45e1c80171e req-5045e3fc-2983-42f2-8854-57fd5ced55be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.021 253665 DEBUG oslo_concurrency.lockutils [req-b62ce3e9-6ba7-46d2-be44-b45e1c80171e req-5045e3fc-2983-42f2-8854-57fd5ced55be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.021 253665 DEBUG nova.network.neutron [req-b62ce3e9-6ba7-46d2-be44-b45e1c80171e req-5045e3fc-2983-42f2-8854-57fd5ced55be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Refreshing network info cache for port 7381e299-12bd-46ec-8abf-df35fe0bf48a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.026 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:ff:5c 10.100.0.3'], port_security=['fa:16:3e:9b:ff:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '44f1789d-14f7-46df-a863-e8c3c418f7f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-705357ee-1033-4907-905f-d41aa6dcfd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f198579-316d-40d0-ae5d-a4d8440647aa d24f9530-589a-4ee7-9767-0df91de410f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b913e923-e2b2-4479-8913-960bf5f1e614, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7381e299-12bd-46ec-8abf-df35fe0bf48a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.027 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7381e299-12bd-46ec-8abf-df35fe0bf48a in datapath 705357ee-1033-4907-905f-d41aa6dcfd73 unbound from our chassis#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.029 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 705357ee-1033-4907-905f-d41aa6dcfd73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.030 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bbd2fe73-587d-4ff2-9cbd-ea3497f6f07c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.031 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73 namespace which is not needed anymore#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.033 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 200 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 248 KiB/s rd, 618 KiB/s wr, 116 op/s
Nov 22 04:40:56 np0005532048 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Nov 22 04:40:56 np0005532048 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007c.scope: Consumed 19.181s CPU time.
Nov 22 04:40:56 np0005532048 systemd-machined[215941]: Machine qemu-155-instance-0000007c terminated.
Nov 22 04:40:56 np0005532048 podman[384061]: 2025-11-22 09:40:56.122301979 +0000 UTC m=+0.064277301 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 04:40:56 np0005532048 podman[384062]: 2025-11-22 09:40:56.140573795 +0000 UTC m=+0.102772861 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 22 04:40:56 np0005532048 neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73[381768]: [NOTICE]   (381772) : haproxy version is 2.8.14-c23fe91
Nov 22 04:40:56 np0005532048 neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73[381768]: [NOTICE]   (381772) : path to executable is /usr/sbin/haproxy
Nov 22 04:40:56 np0005532048 neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73[381768]: [WARNING]  (381772) : Exiting Master process...
Nov 22 04:40:56 np0005532048 neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73[381768]: [ALERT]    (381772) : Current worker (381774) exited with code 143 (Terminated)
Nov 22 04:40:56 np0005532048 neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73[381768]: [WARNING]  (381772) : All workers exited. Exiting... (0)
Nov 22 04:40:56 np0005532048 systemd[1]: libpod-91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e.scope: Deactivated successfully.
Nov 22 04:40:56 np0005532048 conmon[381768]: conmon 91a8862898354d729f7e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e.scope/container/memory.events
Nov 22 04:40:56 np0005532048 podman[384116]: 2025-11-22 09:40:56.168636059 +0000 UTC m=+0.048907364 container died 91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 22 04:40:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e-userdata-shm.mount: Deactivated successfully.
Nov 22 04:40:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2e0ca2159f1a5a63e92ff60160fdc469795b2978d3e0860bcbe102db9710cb41-merged.mount: Deactivated successfully.
Nov 22 04:40:56 np0005532048 podman[384116]: 2025-11-22 09:40:56.214200152 +0000 UTC m=+0.094471457 container cleanup 91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:40:56 np0005532048 systemd[1]: libpod-conmon-91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e.scope: Deactivated successfully.
Nov 22 04:40:56 np0005532048 podman[384145]: 2025-11-22 09:40:56.284006325 +0000 UTC m=+0.046916365 container remove 91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.291 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9e838e75-ae83-4b94-926f-6f6cd2224282]: (4, ('Sat Nov 22 09:40:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73 (91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e)\n91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e\nSat Nov 22 09:40:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73 (91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e)\n91a8862898354d729f7ed5d047c3b50113823e112f3907e18227e68af005899e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.293 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[01231ce2-1b2c-41e4-8acc-4f49b9ee5156]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.294 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap705357ee-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.296 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:56 np0005532048 kernel: tap705357ee-10: left promiscuous mode
Nov 22 04:40:56 np0005532048 kernel: tap7381e299-12: entered promiscuous mode
Nov 22 04:40:56 np0005532048 kernel: tap7381e299-12 (unregistering): left promiscuous mode
Nov 22 04:40:56 np0005532048 NetworkManager[48920]: <info>  [1763804456.3021] manager: (tap7381e299-12): new Tun device (/org/freedesktop/NetworkManager/Devices/548)
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.315 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:56Z|01362|binding|INFO|Claiming lport 7381e299-12bd-46ec-8abf-df35fe0bf48a for this chassis.
Nov 22 04:40:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:56Z|01363|binding|INFO|7381e299-12bd-46ec-8abf-df35fe0bf48a: Claiming fa:16:3e:9b:ff:5c 10.100.0.3
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.319 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[96144c78-559d-4cb8-a90a-354c13bdc24a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.321 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:ff:5c 10.100.0.3'], port_security=['fa:16:3e:9b:ff:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '44f1789d-14f7-46df-a863-e8c3c418f7f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-705357ee-1033-4907-905f-d41aa6dcfd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f198579-316d-40d0-ae5d-a4d8440647aa d24f9530-589a-4ee7-9767-0df91de410f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b913e923-e2b2-4479-8913-960bf5f1e614, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7381e299-12bd-46ec-8abf-df35fe0bf48a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.327 253665 INFO nova.virt.libvirt.driver [-] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Instance destroyed successfully.#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.328 253665 DEBUG nova.objects.instance [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'resources' on Instance uuid 44f1789d-14f7-46df-a863-e8c3c418f7f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.336 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1bbfd95b-5833-461d-bda0-09e82962ba84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:56Z|01364|binding|INFO|Setting lport 7381e299-12bd-46ec-8abf-df35fe0bf48a ovn-installed in OVS
Nov 22 04:40:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:56Z|01365|binding|INFO|Setting lport 7381e299-12bd-46ec-8abf-df35fe0bf48a up in Southbound
Nov 22 04:40:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:56Z|01366|binding|INFO|Releasing lport 7381e299-12bd-46ec-8abf-df35fe0bf48a from this chassis (sb_readonly=1)
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.337 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:56Z|01367|binding|INFO|Removing iface tap7381e299-12 ovn-installed in OVS
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.338 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f02505ff-a6be-477d-94fe-c45bd61d0088]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.339 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:56Z|01368|binding|INFO|Releasing lport 7381e299-12bd-46ec-8abf-df35fe0bf48a from this chassis (sb_readonly=0)
Nov 22 04:40:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:56Z|01369|binding|INFO|Setting lport 7381e299-12bd-46ec-8abf-df35fe0bf48a down in Southbound
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.342 253665 DEBUG nova.virt.libvirt.vif [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:39:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-338358749',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-338358749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=124,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbk5RfudFquhpa5lprQIMNSDd1LWjuKWOiIN353NFhcoF5DkddOnpCLYMTAq6AP8dFFIkCpIG6/In3cki28BBZ+JI0FuFnDsEiRArR4SIm949ArAgIcePLWzUf/qVubsg==',key_name='tempest-TestSecurityGroupsBasicOps-321654172',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:39:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-zgn5mokh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:39:39Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=44f1789d-14f7-46df-a863-e8c3c418f7f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.343 253665 DEBUG nova.network.os_vif_util [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.343 253665 DEBUG nova.network.os_vif_util [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:ff:5c,bridge_name='br-int',has_traffic_filtering=True,id=7381e299-12bd-46ec-8abf-df35fe0bf48a,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7381e299-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.344 253665 DEBUG os_vif [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:ff:5c,bridge_name='br-int',has_traffic_filtering=True,id=7381e299-12bd-46ec-8abf-df35fe0bf48a,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7381e299-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.345 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.345 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7381e299-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.347 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.349 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.350 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:ff:5c 10.100.0.3'], port_security=['fa:16:3e:9b:ff:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '44f1789d-14f7-46df-a863-e8c3c418f7f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-705357ee-1033-4907-905f-d41aa6dcfd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f198579-316d-40d0-ae5d-a4d8440647aa d24f9530-589a-4ee7-9767-0df91de410f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b913e923-e2b2-4479-8913-960bf5f1e614, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7381e299-12bd-46ec-8abf-df35fe0bf48a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.355 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[abdb6fec-ea43-4fed-ab1f-84fa8c585778]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722913, 'reachable_time': 19787, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384168, 'error': None, 'target': 'ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.360 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:56 np0005532048 systemd[1]: run-netns-ovnmeta\x2d705357ee\x2d1033\x2d4907\x2d905f\x2dd41aa6dcfd73.mount: Deactivated successfully.
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.361 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-705357ee-1033-4907-905f-d41aa6dcfd73 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.362 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[3b320ec7-5940-45ad-8ade-c8b26db05431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.363 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7381e299-12bd-46ec-8abf-df35fe0bf48a in datapath 705357ee-1033-4907-905f-d41aa6dcfd73 unbound from our chassis#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.365 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 705357ee-1033-4907-905f-d41aa6dcfd73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.366 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.366 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94ede78f-8b23-449c-afdc-c6883ddf1afb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.367 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7381e299-12bd-46ec-8abf-df35fe0bf48a in datapath 705357ee-1033-4907-905f-d41aa6dcfd73 unbound from our chassis#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.368 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 705357ee-1033-4907-905f-d41aa6dcfd73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:40:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:56.368 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c7af7885-0958-4490-b3ce-23f178220653]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.369 253665 INFO os_vif [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:ff:5c,bridge_name='br-int',has_traffic_filtering=True,id=7381e299-12bd-46ec-8abf-df35fe0bf48a,network=Network(705357ee-1033-4907-905f-d41aa6dcfd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7381e299-12')#033[00m
Nov 22 04:40:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:40:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:40:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:40:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:40:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.786 253665 INFO nova.virt.libvirt.driver [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Deleting instance files /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3_del#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.787 253665 INFO nova.virt.libvirt.driver [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Deletion of /var/lib/nova/instances/44f1789d-14f7-46df-a863-e8c3c418f7f3_del complete#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.858 253665 INFO nova.compute.manager [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.860 253665 DEBUG oslo.service.loopingcall [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.860 253665 DEBUG nova.compute.manager [-] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:40:56 np0005532048 nova_compute[253661]: 2025-11-22 09:40:56.860 253665 DEBUG nova.network.neutron [-] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.097 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.098 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.098 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.098 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.099 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.100 253665 INFO nova.compute.manager [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Terminating instance#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.101 253665 DEBUG nova.compute.manager [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:40:57 np0005532048 kernel: tap2a619e33-76 (unregistering): left promiscuous mode
Nov 22 04:40:57 np0005532048 NetworkManager[48920]: <info>  [1763804457.1701] device (tap2a619e33-76): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:40:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:57Z|01370|binding|INFO|Releasing lport 2a619e33-769d-4ebf-b212-40975e40d3ca from this chassis (sb_readonly=0)
Nov 22 04:40:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:57Z|01371|binding|INFO|Setting lport 2a619e33-769d-4ebf-b212-40975e40d3ca down in Southbound
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.176 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:57Z|01372|binding|INFO|Removing iface tap2a619e33-76 ovn-installed in OVS
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.181 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.188 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:e0:cc 10.100.0.10'], port_security=['fa:16:3e:09:e0:cc 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ba0b1c52-c98b-4c2f-a213-e203719ada54', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33aa2b15-84be-4fa8-858f-98182293b1b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb584c12-cffa-488f-adbc-2a255a5cdce2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a82afa9d-1a09-411a-8866-4ce961a27350, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2a619e33-769d-4ebf-b212-40975e40d3ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.189 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2a619e33-769d-4ebf-b212-40975e40d3ca in datapath 33aa2b15-84be-4fa8-858f-98182293b1b2 unbound from our chassis#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.190 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 33aa2b15-84be-4fa8-858f-98182293b1b2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.191 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[85710aad-abbc-45b8-9b8a-7b2c01905251]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.192 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2 namespace which is not needed anymore#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.195 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:57 np0005532048 kernel: tap27382337-7f (unregistering): left promiscuous mode
Nov 22 04:40:57 np0005532048 NetworkManager[48920]: <info>  [1763804457.2019] device (tap27382337-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:57Z|01373|binding|INFO|Releasing lport 27382337-7fe1-4d29-942c-7735f8c98a06 from this chassis (sb_readonly=0)
Nov 22 04:40:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:57Z|01374|binding|INFO|Setting lport 27382337-7fe1-4d29-942c-7735f8c98a06 down in Southbound
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.221 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:57 np0005532048 ovn_controller[152872]: 2025-11-22T09:40:57Z|01375|binding|INFO|Removing iface tap27382337-7f ovn-installed in OVS
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.225 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.230 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:ef:0f 2001:db8:0:1:f816:3eff:fe81:ef0f 2001:db8::f816:3eff:fe81:ef0f'], port_security=['fa:16:3e:81:ef:0f 2001:db8:0:1:f816:3eff:fe81:ef0f 2001:db8::f816:3eff:fe81:ef0f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe81:ef0f/64 2001:db8::f816:3eff:fe81:ef0f/64', 'neutron:device_id': 'ba0b1c52-c98b-4c2f-a213-e203719ada54', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20228844-2184-465b-8bc3-e846cfb6d3cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb584c12-cffa-488f-adbc-2a255a5cdce2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd6b77e6-a2ac-463b-a37b-14dc60b71e56, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=27382337-7fe1-4d29-942c-7735f8c98a06) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.238 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:57 np0005532048 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Nov 22 04:40:57 np0005532048 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007a.scope: Consumed 17.770s CPU time.
Nov 22 04:40:57 np0005532048 systemd-machined[215941]: Machine qemu-153-instance-0000007a terminated.
Nov 22 04:40:57 np0005532048 NetworkManager[48920]: <info>  [1763804457.3287] manager: (tap2a619e33-76): new Tun device (/org/freedesktop/NetworkManager/Devices/549)
Nov 22 04:40:57 np0005532048 NetworkManager[48920]: <info>  [1763804457.3413] manager: (tap27382337-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/550)
Nov 22 04:40:57 np0005532048 neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2[381016]: [NOTICE]   (381020) : haproxy version is 2.8.14-c23fe91
Nov 22 04:40:57 np0005532048 neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2[381016]: [NOTICE]   (381020) : path to executable is /usr/sbin/haproxy
Nov 22 04:40:57 np0005532048 neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2[381016]: [WARNING]  (381020) : Exiting Master process...
Nov 22 04:40:57 np0005532048 neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2[381016]: [WARNING]  (381020) : Exiting Master process...
Nov 22 04:40:57 np0005532048 neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2[381016]: [ALERT]    (381020) : Current worker (381022) exited with code 143 (Terminated)
Nov 22 04:40:57 np0005532048 neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2[381016]: [WARNING]  (381020) : All workers exited. Exiting... (0)
Nov 22 04:40:57 np0005532048 systemd[1]: libpod-2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733.scope: Deactivated successfully.
Nov 22 04:40:57 np0005532048 podman[384214]: 2025-11-22 09:40:57.354178598 +0000 UTC m=+0.052388941 container died 2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.355 253665 INFO nova.virt.libvirt.driver [-] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Instance destroyed successfully.#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.355 253665 DEBUG nova.objects.instance [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid ba0b1c52-c98b-4c2f-a213-e203719ada54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.374 253665 DEBUG nova.virt.libvirt.vif [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:39:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-206573176',display_name='tempest-TestGettingAddress-server-206573176',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-206573176',id=122,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:39:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-g01s4gn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:39:27Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=ba0b1c52-c98b-4c2f-a213-e203719ada54,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.375 253665 DEBUG nova.network.os_vif_util [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.376 253665 DEBUG nova.network.os_vif_util [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:e0:cc,bridge_name='br-int',has_traffic_filtering=True,id=2a619e33-769d-4ebf-b212-40975e40d3ca,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a619e33-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.376 253665 DEBUG os_vif [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:e0:cc,bridge_name='br-int',has_traffic_filtering=True,id=2a619e33-769d-4ebf-b212-40975e40d3ca,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a619e33-76') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.379 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.379 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a619e33-76, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.381 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.384 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:40:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733-userdata-shm.mount: Deactivated successfully.
Nov 22 04:40:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d80866c630f9de6dcc7211912df360c883a2569930a017e86eb8d48a712ac4e8-merged.mount: Deactivated successfully.
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.395 253665 INFO os_vif [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:e0:cc,bridge_name='br-int',has_traffic_filtering=True,id=2a619e33-769d-4ebf-b212-40975e40d3ca,network=Network(33aa2b15-84be-4fa8-858f-98182293b1b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a619e33-76')#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.396 253665 DEBUG nova.virt.libvirt.vif [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:39:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-206573176',display_name='tempest-TestGettingAddress-server-206573176',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-206573176',id=122,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH8O9ucNPi9RIyQUH4Rv7azOVqG9BxY1oXDgEf/Je0Ba+x24hNxSVdwnxNt2BouY3QYNeamXyVuhF8EWbqEBklwcppbFczFYpqjUS02lUdcd9ifJPLASVh8RuqmwxQb7fQ==',key_name='tempest-TestGettingAddress-2003508393',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:39:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-g01s4gn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:39:27Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=ba0b1c52-c98b-4c2f-a213-e203719ada54,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.397 253665 DEBUG nova.network.os_vif_util [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.398 253665 DEBUG nova.network.os_vif_util [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:ef:0f,bridge_name='br-int',has_traffic_filtering=True,id=27382337-7fe1-4d29-942c-7735f8c98a06,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27382337-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.398 253665 DEBUG os_vif [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:ef:0f,bridge_name='br-int',has_traffic_filtering=True,id=27382337-7fe1-4d29-942c-7735f8c98a06,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27382337-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:40:57 np0005532048 podman[384214]: 2025-11-22 09:40:57.399002632 +0000 UTC m=+0.097212975 container cleanup 2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.400 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.400 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27382337-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.401 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.405 253665 INFO os_vif [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:ef:0f,bridge_name='br-int',has_traffic_filtering=True,id=27382337-7fe1-4d29-942c-7735f8c98a06,network=Network(20228844-2184-465b-8bc3-e846cfb6d3cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27382337-7f')#033[00m
Nov 22 04:40:57 np0005532048 systemd[1]: libpod-conmon-2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733.scope: Deactivated successfully.
Nov 22 04:40:57 np0005532048 podman[384269]: 2025-11-22 09:40:57.486247621 +0000 UTC m=+0.059975615 container remove 2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.494 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f6aceb-ba8b-4f47-a842-fcdbccb9aae7]: (4, ('Sat Nov 22 09:40:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2 (2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733)\n2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733\nSat Nov 22 09:40:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2 (2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733)\n2fecf369f72f8bb03a7641e639e108af6725711b8dc697dfd91af0228dcc5733\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.496 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ce396e3b-2ff4-488e-a268-78dede27dab3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.497 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33aa2b15-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.498 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:57 np0005532048 kernel: tap33aa2b15-80: left promiscuous mode
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.513 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.517 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c40d7100-6bb4-4f3d-a743-7066b2984d91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.532 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[089fd739-39b4-47ab-ac55-761f40daa47e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.534 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[af34d5f7-593b-46ba-a1a6-f035de2a2f82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.553 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f468303f-c78e-48fa-95ee-6b805b899462]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721616, 'reachable_time': 37303, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384297, 'error': None, 'target': 'ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:57 np0005532048 systemd[1]: run-netns-ovnmeta\x2d33aa2b15\x2d84be\x2d4fa8\x2d858f\x2d98182293b1b2.mount: Deactivated successfully.
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.556 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-33aa2b15-84be-4fa8-858f-98182293b1b2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.556 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[566fcd69-3dcc-4df8-b6a2-603610bf087e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.558 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 27382337-7fe1-4d29-942c-7735f8c98a06 in datapath 20228844-2184-465b-8bc3-e846cfb6d3cb unbound from our chassis#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.559 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 20228844-2184-465b-8bc3-e846cfb6d3cb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.560 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1391dd-8871-41f3-921d-5862570ae2d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.560 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb namespace which is not needed anymore#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.650 253665 DEBUG nova.network.neutron [req-b62ce3e9-6ba7-46d2-be44-b45e1c80171e req-5045e3fc-2983-42f2-8854-57fd5ced55be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Updated VIF entry in instance network info cache for port 7381e299-12bd-46ec-8abf-df35fe0bf48a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.652 253665 DEBUG nova.network.neutron [req-b62ce3e9-6ba7-46d2-be44-b45e1c80171e req-5045e3fc-2983-42f2-8854-57fd5ced55be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Updating instance_info_cache with network_info: [{"id": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "address": "fa:16:3e:9b:ff:5c", "network": {"id": "705357ee-1033-4907-905f-d41aa6dcfd73", "bridge": "br-int", "label": "tempest-network-smoke--1328729763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7381e299-12", "ovs_interfaceid": "7381e299-12bd-46ec-8abf-df35fe0bf48a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:40:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:40:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 10K writes, 49K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1466 writes, 7346 keys, 1466 commit groups, 1.0 writes per commit group, ingest: 9.07 MB, 0.02 MB/s#012Interval WAL: 1466 writes, 1466 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     45.8      1.21              0.21        33    0.037       0      0       0.0       0.0#012  L6      1/0    7.92 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.6     95.3     80.3      3.19              0.80        32    0.100    187K    17K       0.0       0.0#012 Sum      1/0    7.92 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.6     69.0     70.8      4.40              1.01        65    0.068    187K    17K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.7    101.3     99.3      0.80              0.29        16    0.050     58K   4045       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     95.3     80.3      3.19              0.80        32    0.100    187K    17K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     45.9      1.21              0.21        32    0.038       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.054, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.30 GB write, 0.07 MB/s write, 0.30 GB read, 0.07 MB/s read, 4.4 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 33.93 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000263 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2230,32.53 MB,10.7008%) FilterBlock(66,536.73 KB,0.172419%) IndexBlock(66,899.67 KB,0.289008%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.674 253665 DEBUG oslo_concurrency.lockutils [req-b62ce3e9-6ba7-46d2-be44-b45e1c80171e req-5045e3fc-2983-42f2-8854-57fd5ced55be 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-44f1789d-14f7-46df-a863-e8c3c418f7f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:40:57 np0005532048 neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb[381088]: [NOTICE]   (381093) : haproxy version is 2.8.14-c23fe91
Nov 22 04:40:57 np0005532048 neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb[381088]: [NOTICE]   (381093) : path to executable is /usr/sbin/haproxy
Nov 22 04:40:57 np0005532048 neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb[381088]: [WARNING]  (381093) : Exiting Master process...
Nov 22 04:40:57 np0005532048 neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb[381088]: [WARNING]  (381093) : Exiting Master process...
Nov 22 04:40:57 np0005532048 neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb[381088]: [ALERT]    (381093) : Current worker (381095) exited with code 143 (Terminated)
Nov 22 04:40:57 np0005532048 neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb[381088]: [WARNING]  (381093) : All workers exited. Exiting... (0)
Nov 22 04:40:57 np0005532048 systemd[1]: libpod-fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a.scope: Deactivated successfully.
Nov 22 04:40:57 np0005532048 conmon[381088]: conmon fce4d4eaa218605777ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a.scope/container/memory.events
Nov 22 04:40:57 np0005532048 podman[384317]: 2025-11-22 09:40:57.713196861 +0000 UTC m=+0.051715834 container died fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 04:40:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a-userdata-shm.mount: Deactivated successfully.
Nov 22 04:40:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay-fb633aced8fc5d6611d564ed19dc9e108a0bec8470c6c0ad7e16b832c8c4335b-merged.mount: Deactivated successfully.
Nov 22 04:40:57 np0005532048 podman[384317]: 2025-11-22 09:40:57.758665931 +0000 UTC m=+0.097184894 container cleanup fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 04:40:57 np0005532048 systemd[1]: libpod-conmon-fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a.scope: Deactivated successfully.
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.795 253665 DEBUG nova.compute.manager [req-5fa25043-f906-4f53-9ab1-60c3b0985314 req-bb820551-8edc-422b-b2b3-fa51028e6530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-unplugged-2a619e33-769d-4ebf-b212-40975e40d3ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.796 253665 DEBUG oslo_concurrency.lockutils [req-5fa25043-f906-4f53-9ab1-60c3b0985314 req-bb820551-8edc-422b-b2b3-fa51028e6530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.796 253665 DEBUG oslo_concurrency.lockutils [req-5fa25043-f906-4f53-9ab1-60c3b0985314 req-bb820551-8edc-422b-b2b3-fa51028e6530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.796 253665 DEBUG oslo_concurrency.lockutils [req-5fa25043-f906-4f53-9ab1-60c3b0985314 req-bb820551-8edc-422b-b2b3-fa51028e6530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.796 253665 DEBUG nova.compute.manager [req-5fa25043-f906-4f53-9ab1-60c3b0985314 req-bb820551-8edc-422b-b2b3-fa51028e6530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] No waiting events found dispatching network-vif-unplugged-2a619e33-769d-4ebf-b212-40975e40d3ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.797 253665 DEBUG nova.compute.manager [req-5fa25043-f906-4f53-9ab1-60c3b0985314 req-bb820551-8edc-422b-b2b3-fa51028e6530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-unplugged-2a619e33-769d-4ebf-b212-40975e40d3ca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:40:57 np0005532048 podman[384344]: 2025-11-22 09:40:57.829884689 +0000 UTC m=+0.046046785 container remove fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.831 253665 INFO nova.virt.libvirt.driver [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Deleting instance files /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54_del#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.832 253665 INFO nova.virt.libvirt.driver [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Deletion of /var/lib/nova/instances/ba0b1c52-c98b-4c2f-a213-e203719ada54_del complete#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.837 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0299e8-1ef4-407d-98dd-bd043b466d43]: (4, ('Sat Nov 22 09:40:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb (fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a)\nfce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a\nSat Nov 22 09:40:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb (fce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a)\nfce4d4eaa218605777ed2aadfee18b165a0d8e2fc0d511f474c5125ee5dd071a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.839 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[81ac0d9e-3ad2-48e7-a4f5-303bb34a83b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.840 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20228844-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:57 np0005532048 kernel: tap20228844-20: left promiscuous mode
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.858 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.860 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b522ad64-91ee-49ea-908f-86fc9d639edf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.875 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[03c60b60-1349-4436-9c62-d14d98ef69e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.876 253665 INFO nova.compute.manager [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.877 253665 DEBUG oslo.service.loopingcall [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.877 253665 DEBUG nova.compute.manager [-] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:40:57 np0005532048 nova_compute[253661]: 2025-11-22 09:40:57.877 253665 DEBUG nova.network.neutron [-] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.877 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c712acaa-d031-4b9b-b5ce-8e75c41a696d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.904 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5bfa1c66-0e53-47da-9f1a-205571a6b818]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721710, 'reachable_time': 22514, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384360, 'error': None, 'target': 'ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.907 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-20228844-2184-465b-8bc3-e846cfb6d3cb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:40:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:40:57.907 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[be8494b8-306d-4ea5-b365-21b657dcb1a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:40:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 175 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 255 KiB/s rd, 619 KiB/s wr, 126 op/s
Nov 22 04:40:58 np0005532048 systemd[1]: run-netns-ovnmeta\x2d20228844\x2d2184\x2d465b\x2d8bc3\x2de846cfb6d3cb.mount: Deactivated successfully.
Nov 22 04:40:58 np0005532048 nova_compute[253661]: 2025-11-22 09:40:58.534 253665 DEBUG nova.compute.manager [req-fae007dc-8dca-4122-9f10-1f01a0b61377 req-a5098484-8228-432b-8f4d-80d502a14423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-changed-2a619e33-769d-4ebf-b212-40975e40d3ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:40:58 np0005532048 nova_compute[253661]: 2025-11-22 09:40:58.534 253665 DEBUG nova.compute.manager [req-fae007dc-8dca-4122-9f10-1f01a0b61377 req-a5098484-8228-432b-8f4d-80d502a14423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Refreshing instance network info cache due to event network-changed-2a619e33-769d-4ebf-b212-40975e40d3ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:40:58 np0005532048 nova_compute[253661]: 2025-11-22 09:40:58.535 253665 DEBUG oslo_concurrency.lockutils [req-fae007dc-8dca-4122-9f10-1f01a0b61377 req-a5098484-8228-432b-8f4d-80d502a14423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:40:58 np0005532048 nova_compute[253661]: 2025-11-22 09:40:58.535 253665 DEBUG oslo_concurrency.lockutils [req-fae007dc-8dca-4122-9f10-1f01a0b61377 req-a5098484-8228-432b-8f4d-80d502a14423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:40:58 np0005532048 nova_compute[253661]: 2025-11-22 09:40:58.535 253665 DEBUG nova.network.neutron [req-fae007dc-8dca-4122-9f10-1f01a0b61377 req-a5098484-8228-432b-8f4d-80d502a14423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Refreshing network info cache for port 2a619e33-769d-4ebf-b212-40975e40d3ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:40:58 np0005532048 nova_compute[253661]: 2025-11-22 09:40:58.564 253665 DEBUG nova.network.neutron [-] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:40:58 np0005532048 nova_compute[253661]: 2025-11-22 09:40:58.589 253665 INFO nova.compute.manager [-] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Took 1.73 seconds to deallocate network for instance.#033[00m
Nov 22 04:40:58 np0005532048 nova_compute[253661]: 2025-11-22 09:40:58.639 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:40:58 np0005532048 nova_compute[253661]: 2025-11-22 09:40:58.640 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:40:58 np0005532048 nova_compute[253661]: 2025-11-22 09:40:58.706 253665 DEBUG oslo_concurrency.processutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:40:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:40:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:40:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2741029208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:40:59 np0005532048 nova_compute[253661]: 2025-11-22 09:40:59.183 253665 DEBUG oslo_concurrency.processutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:40:59 np0005532048 nova_compute[253661]: 2025-11-22 09:40:59.191 253665 DEBUG nova.compute.provider_tree [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:40:59 np0005532048 nova_compute[253661]: 2025-11-22 09:40:59.211 253665 DEBUG nova.scheduler.client.report [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:40:59 np0005532048 nova_compute[253661]: 2025-11-22 09:40:59.235 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:59 np0005532048 nova_compute[253661]: 2025-11-22 09:40:59.266 253665 INFO nova.scheduler.client.report [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Deleted allocations for instance 44f1789d-14f7-46df-a863-e8c3c418f7f3#033[00m
Nov 22 04:40:59 np0005532048 nova_compute[253661]: 2025-11-22 09:40:59.340 253665 DEBUG oslo_concurrency.lockutils [None req-dfd472df-3183-4ca0-a577-d972be00b631 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:40:59 np0005532048 nova_compute[253661]: 2025-11-22 09:40:59.512 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804444.5116167, 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:40:59 np0005532048 nova_compute[253661]: 2025-11-22 09:40:59.513 253665 INFO nova.compute.manager [-] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:40:59 np0005532048 nova_compute[253661]: 2025-11-22 09:40:59.543 253665 DEBUG nova.compute.manager [None req-09d9529d-daaa-49ce-9677-2acfa98ef7ea - - - - - -] [instance: 9c45a555-9969-4d8a-bd3b-1ab61ce6f68c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.041 253665 DEBUG nova.compute.manager [req-9c376ebd-0f22-42ed-867d-4f28cf232a49 req-afc0d2c9-0b5f-4f0c-a233-6ac78a45c782 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.042 253665 DEBUG oslo_concurrency.lockutils [req-9c376ebd-0f22-42ed-867d-4f28cf232a49 req-afc0d2c9-0b5f-4f0c-a233-6ac78a45c782 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.042 253665 DEBUG oslo_concurrency.lockutils [req-9c376ebd-0f22-42ed-867d-4f28cf232a49 req-afc0d2c9-0b5f-4f0c-a233-6ac78a45c782 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.042 253665 DEBUG oslo_concurrency.lockutils [req-9c376ebd-0f22-42ed-867d-4f28cf232a49 req-afc0d2c9-0b5f-4f0c-a233-6ac78a45c782 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.043 253665 DEBUG nova.compute.manager [req-9c376ebd-0f22-42ed-867d-4f28cf232a49 req-afc0d2c9-0b5f-4f0c-a233-6ac78a45c782 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] No waiting events found dispatching network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.043 253665 WARNING nova.compute.manager [req-9c376ebd-0f22-42ed-867d-4f28cf232a49 req-afc0d2c9-0b5f-4f0c-a233-6ac78a45c782 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received unexpected event network-vif-plugged-2a619e33-769d-4ebf-b212-40975e40d3ca for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.043 253665 DEBUG nova.compute.manager [req-9c376ebd-0f22-42ed-867d-4f28cf232a49 req-afc0d2c9-0b5f-4f0c-a233-6ac78a45c782 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received event network-vif-deleted-7381e299-12bd-46ec-8abf-df35fe0bf48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:41:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 65 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 208 KiB/s rd, 113 KiB/s wr, 138 op/s
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.598 253665 DEBUG nova.network.neutron [-] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.625 253665 INFO nova.compute.manager [-] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Took 2.75 seconds to deallocate network for instance.#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.696 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.697 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.766 253665 DEBUG oslo_concurrency.processutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.811 253665 DEBUG nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received event network-vif-unplugged-7381e299-12bd-46ec-8abf-df35fe0bf48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.812 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.812 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.812 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.813 253665 DEBUG nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] No waiting events found dispatching network-vif-unplugged-7381e299-12bd-46ec-8abf-df35fe0bf48a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.813 253665 WARNING nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received unexpected event network-vif-unplugged-7381e299-12bd-46ec-8abf-df35fe0bf48a for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.813 253665 DEBUG nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received event network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.813 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.814 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.814 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "44f1789d-14f7-46df-a863-e8c3c418f7f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.814 253665 DEBUG nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] No waiting events found dispatching network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.814 253665 WARNING nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Received unexpected event network-vif-plugged-7381e299-12bd-46ec-8abf-df35fe0bf48a for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.815 253665 DEBUG nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-deleted-27382337-7fe1-4d29-942c-7735f8c98a06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.815 253665 DEBUG nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.815 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.815 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.815 253665 DEBUG oslo_concurrency.lockutils [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.816 253665 DEBUG nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] No waiting events found dispatching network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:41:00 np0005532048 nova_compute[253661]: 2025-11-22 09:41:00.816 253665 WARNING nova.compute.manager [req-261ce2d4-fd38-4493-89a1-5f3daaa14a5e req-80563dde-265c-4167-8cf6-7b25220757f9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received unexpected event network-vif-plugged-27382337-7fe1-4d29-942c-7735f8c98a06 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:41:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:41:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1116268201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:41:01 np0005532048 nova_compute[253661]: 2025-11-22 09:41:01.225 253665 DEBUG oslo_concurrency.processutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:41:01 np0005532048 nova_compute[253661]: 2025-11-22 09:41:01.228 253665 DEBUG nova.network.neutron [req-fae007dc-8dca-4122-9f10-1f01a0b61377 req-a5098484-8228-432b-8f4d-80d502a14423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updated VIF entry in instance network info cache for port 2a619e33-769d-4ebf-b212-40975e40d3ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:41:01 np0005532048 nova_compute[253661]: 2025-11-22 09:41:01.228 253665 DEBUG nova.network.neutron [req-fae007dc-8dca-4122-9f10-1f01a0b61377 req-a5098484-8228-432b-8f4d-80d502a14423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Updating instance_info_cache with network_info: [{"id": "2a619e33-769d-4ebf-b212-40975e40d3ca", "address": "fa:16:3e:09:e0:cc", "network": {"id": "33aa2b15-84be-4fa8-858f-98182293b1b2", "bridge": "br-int", "label": "tempest-network-smoke--1706014963", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a619e33-76", "ovs_interfaceid": "2a619e33-769d-4ebf-b212-40975e40d3ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27382337-7fe1-4d29-942c-7735f8c98a06", "address": "fa:16:3e:81:ef:0f", "network": {"id": "20228844-2184-465b-8bc3-e846cfb6d3cb", "bridge": "br-int", "label": "tempest-network-smoke--1555760442", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe81:ef0f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27382337-7f", "ovs_interfaceid": "27382337-7fe1-4d29-942c-7735f8c98a06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:41:01 np0005532048 nova_compute[253661]: 2025-11-22 09:41:01.234 253665 DEBUG nova.compute.provider_tree [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:41:01 np0005532048 nova_compute[253661]: 2025-11-22 09:41:01.246 253665 DEBUG nova.scheduler.client.report [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:41:01 np0005532048 nova_compute[253661]: 2025-11-22 09:41:01.250 253665 DEBUG oslo_concurrency.lockutils [req-fae007dc-8dca-4122-9f10-1f01a0b61377 req-a5098484-8228-432b-8f4d-80d502a14423 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ba0b1c52-c98b-4c2f-a213-e203719ada54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:41:01 np0005532048 nova_compute[253661]: 2025-11-22 09:41:01.267 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:01 np0005532048 nova_compute[253661]: 2025-11-22 09:41:01.298 253665 INFO nova.scheduler.client.report [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance ba0b1c52-c98b-4c2f-a213-e203719ada54#033[00m
Nov 22 04:41:01 np0005532048 nova_compute[253661]: 2025-11-22 09:41:01.361 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:01 np0005532048 nova_compute[253661]: 2025-11-22 09:41:01.371 253665 DEBUG oslo_concurrency.lockutils [None req-cc6a8ab5-7517-4e6f-af96-dc16868e570e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "ba0b1c52-c98b-4c2f-a213-e203719ada54" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:01 np0005532048 podman[384405]: 2025-11-22 09:41:01.432549788 +0000 UTC m=+0.122450650 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 04:41:01 np0005532048 nova_compute[253661]: 2025-11-22 09:41:01.608 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 23 KiB/s wr, 112 op/s
Nov 22 04:41:02 np0005532048 nova_compute[253661]: 2025-11-22 09:41:02.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:41:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:41:02 np0005532048 nova_compute[253661]: 2025-11-22 09:41:02.961 253665 DEBUG nova.compute.manager [req-9f5c4b41-2802-4458-8a7e-c1779ba9b410 req-27cad0f2-ed90-4519-8ee3-6d04ae5ed87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Received event network-vif-deleted-2a619e33-769d-4ebf-b212-40975e40d3ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:41:02 np0005532048 nova_compute[253661]: 2025-11-22 09:41:02.962 253665 INFO nova.compute.manager [req-9f5c4b41-2802-4458-8a7e-c1779ba9b410 req-27cad0f2-ed90-4519-8ee3-6d04ae5ed87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Neutron deleted interface 2a619e33-769d-4ebf-b212-40975e40d3ca; detaching it from the instance and deleting it from the info cache#033[00m
Nov 22 04:41:02 np0005532048 nova_compute[253661]: 2025-11-22 09:41:02.962 253665 DEBUG nova.network.neutron [req-9f5c4b41-2802-4458-8a7e-c1779ba9b410 req-27cad0f2-ed90-4519-8ee3-6d04ae5ed87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Nov 22 04:41:02 np0005532048 nova_compute[253661]: 2025-11-22 09:41:02.966 253665 DEBUG nova.compute.manager [req-9f5c4b41-2802-4458-8a7e-c1779ba9b410 req-27cad0f2-ed90-4519-8ee3-6d04ae5ed87e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Detach interface failed, port_id=2a619e33-769d-4ebf-b212-40975e40d3ca, reason: Instance ba0b1c52-c98b-4c2f-a213-e203719ada54 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 22 04:41:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:41:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 21 KiB/s wr, 104 op/s
Nov 22 04:41:04 np0005532048 nova_compute[253661]: 2025-11-22 09:41:04.714 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:04 np0005532048 nova_compute[253661]: 2025-11-22 09:41:04.924 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:05 np0005532048 nova_compute[253661]: 2025-11-22 09:41:05.480 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804450.4791248, 7f88c3e8-e667-4d9a-8178-c99843560719 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:41:05 np0005532048 nova_compute[253661]: 2025-11-22 09:41:05.482 253665 INFO nova.compute.manager [-] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:41:05 np0005532048 nova_compute[253661]: 2025-11-22 09:41:05.504 253665 DEBUG nova.compute.manager [None req-2c0dcb17-9bff-4c3d-bffe-dae980b54395 - - - - - -] [instance: 7f88c3e8-e667-4d9a-8178-c99843560719] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:41:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.2 KiB/s wr, 56 op/s
Nov 22 04:41:06 np0005532048 nova_compute[253661]: 2025-11-22 09:41:06.326 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804451.3253665, aaeb1088-1220-47e3-9462-ba96b1d4e87a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:41:06 np0005532048 nova_compute[253661]: 2025-11-22 09:41:06.326 253665 INFO nova.compute.manager [-] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:41:06 np0005532048 nova_compute[253661]: 2025-11-22 09:41:06.352 253665 DEBUG nova.compute.manager [None req-372409af-39b3-464b-b494-ca545dbad6ac - - - - - -] [instance: aaeb1088-1220-47e3-9462-ba96b1d4e87a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:41:06 np0005532048 nova_compute[253661]: 2025-11-22 09:41:06.362 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:07 np0005532048 nova_compute[253661]: 2025-11-22 09:41:07.406 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.2 KiB/s wr, 56 op/s
Nov 22 04:41:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:41:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.1 KiB/s wr, 46 op/s
Nov 22 04:41:10 np0005532048 nova_compute[253661]: 2025-11-22 09:41:10.883 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:10 np0005532048 nova_compute[253661]: 2025-11-22 09:41:10.884 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:10 np0005532048 nova_compute[253661]: 2025-11-22 09:41:10.900 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:41:10 np0005532048 nova_compute[253661]: 2025-11-22 09:41:10.995 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:10 np0005532048 nova_compute[253661]: 2025-11-22 09:41:10.996 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.004 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.005 253665 INFO nova.compute.claims [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.115 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.325 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804456.32309, 44f1789d-14f7-46df-a863-e8c3c418f7f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.325 253665 INFO nova.compute.manager [-] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.350 253665 DEBUG nova.compute.manager [None req-527332f5-65bf-4284-8a4f-6d809b5cfd08 - - - - - -] [instance: 44f1789d-14f7-46df-a863-e8c3c418f7f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.365 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:41:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2707255038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.607 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.614 253665 DEBUG nova.compute.provider_tree [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.629 253665 DEBUG nova.scheduler.client.report [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.664 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.665 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.725 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.726 253665 DEBUG nova.network.neutron [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.746 253665 INFO nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.764 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.850 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.851 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.852 253665 INFO nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Creating image(s)#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.872 253665 DEBUG nova.storage.rbd_utils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.894 253665 DEBUG nova.storage.rbd_utils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.915 253665 DEBUG nova.storage.rbd_utils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:41:11 np0005532048 nova_compute[253661]: 2025-11-22 09:41:11.920 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.003 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.004 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.005 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.005 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.025 253665 DEBUG nova.storage.rbd_utils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.028 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:41:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 4.7 KiB/s rd, 1.4 KiB/s wr, 9 op/s
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.317 253665 DEBUG nova.policy [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.331 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.357 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804457.352855, ba0b1c52-c98b-4c2f-a213-e203719ada54 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.357 253665 INFO nova.compute.manager [-] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:41:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:41:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3233261021' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:41:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:41:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3233261021' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.393 253665 DEBUG nova.compute.manager [None req-25f0e0e4-c7cc-4981-a8db-415afb6614f4 - - - - - -] [instance: ba0b1c52-c98b-4c2f-a213-e203719ada54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.399 253665 DEBUG nova.storage.rbd_utils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.433 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.706 253665 DEBUG nova.objects.instance [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid de63fafb-9cce-47c5-8cdc-f5c348b1777a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.725 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.726 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Ensure instance console log exists: /var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.727 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.727 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:12 np0005532048 nova_compute[253661]: 2025-11-22 09:41:12.727 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:41:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:41:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:41:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:41:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:41:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:41:13 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 9ace6b39-f32f-4971-9184-611c8252821f does not exist
Nov 22 04:41:13 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 0dc3dfa4-747f-48f0-b2e4-2cc70cdfce42 does not exist
Nov 22 04:41:13 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 454c5f95-9c08-4aa6-8edb-413f3f3e52ea does not exist
Nov 22 04:41:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:41:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:41:13 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:41:13 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:41:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:41:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:41:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:41:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:41:13 np0005532048 nova_compute[253661]: 2025-11-22 09:41:13.537 253665 DEBUG nova.network.neutron [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Successfully updated port: a86218e5-015d-4324-b94e-b87b21f3333d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:41:13 np0005532048 nova_compute[253661]: 2025-11-22 09:41:13.554 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-de63fafb-9cce-47c5-8cdc-f5c348b1777a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:41:13 np0005532048 nova_compute[253661]: 2025-11-22 09:41:13.554 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-de63fafb-9cce-47c5-8cdc-f5c348b1777a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:41:13 np0005532048 nova_compute[253661]: 2025-11-22 09:41:13.554 253665 DEBUG nova.network.neutron [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:41:13 np0005532048 nova_compute[253661]: 2025-11-22 09:41:13.637 253665 DEBUG nova.compute.manager [req-1d925586-68ed-49a3-ad22-3b6923f6a4c9 req-9cb7332c-73ac-42c7-9bbb-86740a73c485 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Received event network-changed-a86218e5-015d-4324-b94e-b87b21f3333d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:41:13 np0005532048 nova_compute[253661]: 2025-11-22 09:41:13.637 253665 DEBUG nova.compute.manager [req-1d925586-68ed-49a3-ad22-3b6923f6a4c9 req-9cb7332c-73ac-42c7-9bbb-86740a73c485 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Refreshing instance network info cache due to event network-changed-a86218e5-015d-4324-b94e-b87b21f3333d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:41:13 np0005532048 nova_compute[253661]: 2025-11-22 09:41:13.638 253665 DEBUG oslo_concurrency.lockutils [req-1d925586-68ed-49a3-ad22-3b6923f6a4c9 req-9cb7332c-73ac-42c7-9bbb-86740a73c485 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-de63fafb-9cce-47c5-8cdc-f5c348b1777a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:41:13 np0005532048 nova_compute[253661]: 2025-11-22 09:41:13.701 253665 DEBUG nova.network.neutron [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:41:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:41:13 np0005532048 podman[384892]: 2025-11-22 09:41:13.993044212 +0000 UTC m=+0.047828939 container create c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 04:41:14 np0005532048 systemd[1]: Started libpod-conmon-c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488.scope.
Nov 22 04:41:14 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:41:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 65 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 1.4 MiB/s wr, 14 op/s
Nov 22 04:41:14 np0005532048 podman[384892]: 2025-11-22 09:41:13.973175647 +0000 UTC m=+0.027960374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:41:14 np0005532048 podman[384892]: 2025-11-22 09:41:14.067434138 +0000 UTC m=+0.122218865 container init c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rosalind, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:41:14 np0005532048 podman[384892]: 2025-11-22 09:41:14.074261974 +0000 UTC m=+0.129046681 container start c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 04:41:14 np0005532048 podman[384892]: 2025-11-22 09:41:14.077514614 +0000 UTC m=+0.132299321 container attach c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:41:14 np0005532048 goofy_rosalind[384909]: 167 167
Nov 22 04:41:14 np0005532048 systemd[1]: libpod-c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488.scope: Deactivated successfully.
Nov 22 04:41:14 np0005532048 conmon[384909]: conmon c20ef0d8949d7ea1b749 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488.scope/container/memory.events
Nov 22 04:41:14 np0005532048 podman[384892]: 2025-11-22 09:41:14.080968408 +0000 UTC m=+0.135753115 container died c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rosalind, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 22 04:41:14 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e82c79c9f7904ddfa669e4f159bfe8e6f08e92aaed3048656db6388d65b30d7d-merged.mount: Deactivated successfully.
Nov 22 04:41:14 np0005532048 podman[384892]: 2025-11-22 09:41:14.120264717 +0000 UTC m=+0.175049434 container remove c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rosalind, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:41:14 np0005532048 systemd[1]: libpod-conmon-c20ef0d8949d7ea1b7493e55344ac5ad421d21624da0f6e76d2732c841c72488.scope: Deactivated successfully.
Nov 22 04:41:14 np0005532048 podman[384933]: 2025-11-22 09:41:14.287238134 +0000 UTC m=+0.051551320 container create 51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 04:41:14 np0005532048 systemd[1]: Started libpod-conmon-51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800.scope.
Nov 22 04:41:14 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:41:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108db8e747d6db4557d1119a29a33578c38d6e4316980c8fa95fcebf9bf88e30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:41:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108db8e747d6db4557d1119a29a33578c38d6e4316980c8fa95fcebf9bf88e30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:41:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108db8e747d6db4557d1119a29a33578c38d6e4316980c8fa95fcebf9bf88e30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:41:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108db8e747d6db4557d1119a29a33578c38d6e4316980c8fa95fcebf9bf88e30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:41:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108db8e747d6db4557d1119a29a33578c38d6e4316980c8fa95fcebf9bf88e30/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:41:14 np0005532048 podman[384933]: 2025-11-22 09:41:14.358435721 +0000 UTC m=+0.122748927 container init 51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 04:41:14 np0005532048 podman[384933]: 2025-11-22 09:41:14.268861914 +0000 UTC m=+0.033175120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:41:14 np0005532048 podman[384933]: 2025-11-22 09:41:14.36740335 +0000 UTC m=+0.131716536 container start 51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.369 253665 DEBUG nova.network.neutron [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Updating instance_info_cache with network_info: [{"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:41:14 np0005532048 podman[384933]: 2025-11-22 09:41:14.371568192 +0000 UTC m=+0.135881398 container attach 51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.405 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-de63fafb-9cce-47c5-8cdc-f5c348b1777a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.405 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Instance network_info: |[{"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.406 253665 DEBUG oslo_concurrency.lockutils [req-1d925586-68ed-49a3-ad22-3b6923f6a4c9 req-9cb7332c-73ac-42c7-9bbb-86740a73c485 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-de63fafb-9cce-47c5-8cdc-f5c348b1777a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.406 253665 DEBUG nova.network.neutron [req-1d925586-68ed-49a3-ad22-3b6923f6a4c9 req-9cb7332c-73ac-42c7-9bbb-86740a73c485 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Refreshing network info cache for port a86218e5-015d-4324-b94e-b87b21f3333d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.411 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Start _get_guest_xml network_info=[{"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.416 253665 WARNING nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.424 253665 DEBUG nova.virt.libvirt.host [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:41:14 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.427 253665 DEBUG nova.virt.libvirt.host [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.438 253665 DEBUG nova.virt.libvirt.host [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.439 253665 DEBUG nova.virt.libvirt.host [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.439 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.439 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.440 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.440 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.441 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.441 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.441 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.441 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.442 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.442 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.442 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.442 253665 DEBUG nova.virt.hardware [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.445 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:41:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:41:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2171995316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.890 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.921 253665 DEBUG nova.storage.rbd_utils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:41:14 np0005532048 nova_compute[253661]: 2025-11-22 09:41:14.927 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:41:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:41:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/220121214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.410 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.412 253665 DEBUG nova.virt.libvirt.vif [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:41:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-801609584',display_name='tempest-TestNetworkBasicOps-server-801609584',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-801609584',id=127,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBR3KlC+I+BrQmt+UKktZu9qmsBf630tj2ls2EAmBhPrPHG9I1DIKGXJ13OKDXnaKtyixc97nbX6Fgi3vYqBPQ5wohq9YCdMs+5UaDa5kTzpHNni4MDhpWBjxoEExVT1mA==',key_name='tempest-TestNetworkBasicOps-1734152809',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-091r93vu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:41:11Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=de63fafb-9cce-47c5-8cdc-f5c348b1777a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:41:15 np0005532048 stoic_margulis[384949]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:41:15 np0005532048 stoic_margulis[384949]: --> relative data size: 1.0
Nov 22 04:41:15 np0005532048 stoic_margulis[384949]: --> All data devices are unavailable
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.413 253665 DEBUG nova.network.os_vif_util [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.415 253665 DEBUG nova.network.os_vif_util [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.417 253665 DEBUG nova.objects.instance [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid de63fafb-9cce-47c5-8cdc-f5c348b1777a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.429 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  <uuid>de63fafb-9cce-47c5-8cdc-f5c348b1777a</uuid>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  <name>instance-0000007f</name>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestNetworkBasicOps-server-801609584</nova:name>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:41:14</nova:creationTime>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:41:15 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:        <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:        <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:        <nova:port uuid="a86218e5-015d-4324-b94e-b87b21f3333d">
Nov 22 04:41:15 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <entry name="serial">de63fafb-9cce-47c5-8cdc-f5c348b1777a</entry>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <entry name="uuid">de63fafb-9cce-47c5-8cdc-f5c348b1777a</entry>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk">
Nov 22 04:41:15 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:41:15 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk.config">
Nov 22 04:41:15 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:41:15 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:9c:8b:9e"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <target dev="tapa86218e5-01"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a/console.log" append="off"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:41:15 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:41:15 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:41:15 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:41:15 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.430 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Preparing to wait for external event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.431 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.431 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.432 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.433 253665 DEBUG nova.virt.libvirt.vif [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:41:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-801609584',display_name='tempest-TestNetworkBasicOps-server-801609584',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-801609584',id=127,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBR3KlC+I+BrQmt+UKktZu9qmsBf630tj2ls2EAmBhPrPHG9I1DIKGXJ13OKDXnaKtyixc97nbX6Fgi3vYqBPQ5wohq9YCdMs+5UaDa5kTzpHNni4MDhpWBjxoEExVT1mA==',key_name='tempest-TestNetworkBasicOps-1734152809',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-091r93vu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:41:11Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=de63fafb-9cce-47c5-8cdc-f5c348b1777a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.433 253665 DEBUG nova.network.os_vif_util [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.434 253665 DEBUG nova.network.os_vif_util [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.434 253665 DEBUG os_vif [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.435 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.436 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.436 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.441 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.442 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa86218e5-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.442 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa86218e5-01, col_values=(('external_ids', {'iface-id': 'a86218e5-015d-4324-b94e-b87b21f3333d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:8b:9e', 'vm-uuid': 'de63fafb-9cce-47c5-8cdc-f5c348b1777a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.445 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:15 np0005532048 NetworkManager[48920]: <info>  [1763804475.4465] manager: (tapa86218e5-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/551)
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.447 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:41:15 np0005532048 systemd[1]: libpod-51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800.scope: Deactivated successfully.
Nov 22 04:41:15 np0005532048 systemd[1]: libpod-51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800.scope: Consumed 1.018s CPU time.
Nov 22 04:41:15 np0005532048 podman[384933]: 2025-11-22 09:41:15.450525328 +0000 UTC m=+1.214838514 container died 51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.452 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.453 253665 INFO os_vif [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:8b:9e,bridge_name='br-int',has_traffic_filtering=True,id=a86218e5-015d-4324-b94e-b87b21f3333d,network=Network(1915d045-a483-4ba0-9f22-02eb1e398b68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa86218e5-01')#033[00m
Nov 22 04:41:15 np0005532048 systemd[1]: var-lib-containers-storage-overlay-108db8e747d6db4557d1119a29a33578c38d6e4316980c8fa95fcebf9bf88e30-merged.mount: Deactivated successfully.
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.503 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.503 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.503 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:9c:8b:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.504 253665 INFO nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Using config drive#033[00m
Nov 22 04:41:15 np0005532048 podman[384933]: 2025-11-22 09:41:15.511993149 +0000 UTC m=+1.276306335 container remove 51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 04:41:15 np0005532048 systemd[1]: libpod-conmon-51228a2cc7b2f73ce8bb3853388a813ce4735b861242d36407d26e3faf22a800.scope: Deactivated successfully.
Nov 22 04:41:15 np0005532048 nova_compute[253661]: 2025-11-22 09:41:15.527 253665 DEBUG nova.storage.rbd_utils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:41:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 65 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 1.4 MiB/s wr, 14 op/s
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.124 253665 INFO nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Creating config drive at /var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a/disk.config#033[00m
Nov 22 04:41:16 np0005532048 podman[385214]: 2025-11-22 09:41:16.12778725 +0000 UTC m=+0.041605027 container create 484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.129 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpohgzvf82 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:41:16 np0005532048 systemd[1]: Started libpod-conmon-484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84.scope.
Nov 22 04:41:16 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:41:16 np0005532048 podman[385214]: 2025-11-22 09:41:16.200637168 +0000 UTC m=+0.114454985 container init 484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:41:16 np0005532048 podman[385214]: 2025-11-22 09:41:16.109502104 +0000 UTC m=+0.023319911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:41:16 np0005532048 podman[385214]: 2025-11-22 09:41:16.207877205 +0000 UTC m=+0.121694992 container start 484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:41:16 np0005532048 admiring_bhabha[385232]: 167 167
Nov 22 04:41:16 np0005532048 podman[385214]: 2025-11-22 09:41:16.212151659 +0000 UTC m=+0.125969436 container attach 484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:41:16 np0005532048 systemd[1]: libpod-484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84.scope: Deactivated successfully.
Nov 22 04:41:16 np0005532048 podman[385214]: 2025-11-22 09:41:16.212875807 +0000 UTC m=+0.126693594 container died 484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 04:41:16 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b914034428f425237100a5134871ed82feda09b5a5cb67823ed1fc4ba6f219be-merged.mount: Deactivated successfully.
Nov 22 04:41:16 np0005532048 podman[385214]: 2025-11-22 09:41:16.251625264 +0000 UTC m=+0.165443051 container remove 484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhabha, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 22 04:41:16 np0005532048 systemd[1]: libpod-conmon-484cb9866d19f9bb3abd449ae63613b248371f093866a2567a58e4a7495a2f84.scope: Deactivated successfully.
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.271 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpohgzvf82" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.294 253665 DEBUG nova.storage.rbd_utils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.297 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a/disk.config de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.367 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:16 np0005532048 podman[385275]: 2025-11-22 09:41:16.42148612 +0000 UTC m=+0.051865567 container create 2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:41:16 np0005532048 systemd[1]: Started libpod-conmon-2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd.scope.
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.464 253665 DEBUG oslo_concurrency.processutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a/disk.config de63fafb-9cce-47c5-8cdc-f5c348b1777a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.464 253665 INFO nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Deleting local config drive /var/lib/nova/instances/de63fafb-9cce-47c5-8cdc-f5c348b1777a/disk.config because it was imported into RBD.#033[00m
Nov 22 04:41:16 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:41:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16bffa5653772c33444d0be3ae5d71f65400d10b538073b0e057f4abdd0b37ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:41:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16bffa5653772c33444d0be3ae5d71f65400d10b538073b0e057f4abdd0b37ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:41:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16bffa5653772c33444d0be3ae5d71f65400d10b538073b0e057f4abdd0b37ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:41:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16bffa5653772c33444d0be3ae5d71f65400d10b538073b0e057f4abdd0b37ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:41:16 np0005532048 podman[385275]: 2025-11-22 09:41:16.492839041 +0000 UTC m=+0.123218548 container init 2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:41:16 np0005532048 podman[385275]: 2025-11-22 09:41:16.40103219 +0000 UTC m=+0.031411687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:41:16 np0005532048 podman[385275]: 2025-11-22 09:41:16.501970544 +0000 UTC m=+0.132349991 container start 2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:41:16 np0005532048 podman[385275]: 2025-11-22 09:41:16.505985552 +0000 UTC m=+0.136365009 container attach 2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 04:41:16 np0005532048 kernel: tapa86218e5-01: entered promiscuous mode
Nov 22 04:41:16 np0005532048 NetworkManager[48920]: <info>  [1763804476.5231] manager: (tapa86218e5-01): new Tun device (/org/freedesktop/NetworkManager/Devices/552)
Nov 22 04:41:16 np0005532048 ovn_controller[152872]: 2025-11-22T09:41:16Z|01376|binding|INFO|Claiming lport a86218e5-015d-4324-b94e-b87b21f3333d for this chassis.
Nov 22 04:41:16 np0005532048 ovn_controller[152872]: 2025-11-22T09:41:16Z|01377|binding|INFO|a86218e5-015d-4324-b94e-b87b21f3333d: Claiming fa:16:3e:9c:8b:9e 10.100.0.7
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.528 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.530 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.545 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:8b:9e 10.100.0.7'], port_security=['fa:16:3e:9c:8b:9e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-305883851', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'de63fafb-9cce-47c5-8cdc-f5c348b1777a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1915d045-a483-4ba0-9f22-02eb1e398b68', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-305883851', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1568c3cc-a804-4f98-8194-b53f79976399', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c342dd41-b1eb-43d0-a96a-717d17dead9b, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a86218e5-015d-4324-b94e-b87b21f3333d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.546 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a86218e5-015d-4324-b94e-b87b21f3333d in datapath 1915d045-a483-4ba0-9f22-02eb1e398b68 bound to our chassis#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.548 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1915d045-a483-4ba0-9f22-02eb1e398b68#033[00m
Nov 22 04:41:16 np0005532048 systemd-udevd[385326]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.562 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c5675a80-c5c1-48e8-b083-43174f239fa6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.563 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1915d045-a1 in ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.566 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1915d045-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.566 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[556111d1-d26c-4cd3-97b8-ebadb53547cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.567 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1b75e8e6-c35b-4a1f-968e-0a96bdca61af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:41:16 np0005532048 systemd-machined[215941]: New machine qemu-158-instance-0000007f.
Nov 22 04:41:16 np0005532048 NetworkManager[48920]: <info>  [1763804476.5746] device (tapa86218e5-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:41:16 np0005532048 NetworkManager[48920]: <info>  [1763804476.5754] device (tapa86218e5-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.586 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[7560725c-2607-46cc-af1b-aaa727deed3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:41:16 np0005532048 systemd[1]: Started Virtual Machine qemu-158-instance-0000007f.
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.599 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:16 np0005532048 ovn_controller[152872]: 2025-11-22T09:41:16Z|01378|binding|INFO|Setting lport a86218e5-015d-4324-b94e-b87b21f3333d ovn-installed in OVS
Nov 22 04:41:16 np0005532048 ovn_controller[152872]: 2025-11-22T09:41:16Z|01379|binding|INFO|Setting lport a86218e5-015d-4324-b94e-b87b21f3333d up in Southbound
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.604 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.609 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[050cd6b4-71bc-4113-aee4-e1be99ccaf61]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.611 253665 DEBUG nova.network.neutron [req-1d925586-68ed-49a3-ad22-3b6923f6a4c9 req-9cb7332c-73ac-42c7-9bbb-86740a73c485 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Updated VIF entry in instance network info cache for port a86218e5-015d-4324-b94e-b87b21f3333d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.612 253665 DEBUG nova.network.neutron [req-1d925586-68ed-49a3-ad22-3b6923f6a4c9 req-9cb7332c-73ac-42c7-9bbb-86740a73c485 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Updating instance_info_cache with network_info: [{"id": "a86218e5-015d-4324-b94e-b87b21f3333d", "address": "fa:16:3e:9c:8b:9e", "network": {"id": "1915d045-a483-4ba0-9f22-02eb1e398b68", "bridge": "br-int", "label": "tempest-network-smoke--1392685052", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa86218e5-01", "ovs_interfaceid": "a86218e5-015d-4324-b94e-b87b21f3333d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.625 253665 DEBUG oslo_concurrency.lockutils [req-1d925586-68ed-49a3-ad22-3b6923f6a4c9 req-9cb7332c-73ac-42c7-9bbb-86740a73c485 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-de63fafb-9cce-47c5-8cdc-f5c348b1777a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.643 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1f061489-1284-445d-a8f4-abf72e5d5054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:41:16 np0005532048 NetworkManager[48920]: <info>  [1763804476.6573] manager: (tap1915d045-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/553)
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.656 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8891ae50-4e1d-4b5f-a8af-4e241eccab31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.689 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[75a9ace3-f2e7-478f-b97c-93f88d9e5462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.693 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3cb50d-2b12-4151-8b3c-47d9c69ea77b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:41:16 np0005532048 NetworkManager[48920]: <info>  [1763804476.7168] device (tap1915d045-a0): carrier: link connected
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.721 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4a0360-5116-4f79-ae27-2c5c82b7b638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.736 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[78178f2c-1c0c-41c3-8ef8-de2355669794]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1915d045-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:8a:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 391], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732745, 'reachable_time': 31586, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385361, 'error': None, 'target': 'ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.753 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b20a3114-4ffd-4050-a97d-13e0d2f658d0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:8a9c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 732745, 'tstamp': 732745}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385362, 'error': None, 'target': 'ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.772 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[14ba98a6-adc3-49c2-b9d2-ae40f76a2f4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1915d045-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:8a:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 391], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 732745, 'reachable_time': 31586, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 385363, 'error': None, 'target': 'ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.800 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7c3199-b5ce-4aa6-9457-bffce2607a95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.865 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e8af71a5-7802-455d-b54b-f3000a73ab72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.866 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1915d045-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.867 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.867 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1915d045-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.912 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:16 np0005532048 NetworkManager[48920]: <info>  [1763804476.9135] manager: (tap1915d045-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/554)
Nov 22 04:41:16 np0005532048 kernel: tap1915d045-a0: entered promiscuous mode
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.919 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1915d045-a0, col_values=(('external_ids', {'iface-id': '3f753c2a-471e-42be-9ebf-5498238bbd2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.920 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:16 np0005532048 ovn_controller[152872]: 2025-11-22T09:41:16Z|01380|binding|INFO|Releasing lport 3f753c2a-471e-42be-9ebf-5498238bbd2c from this chassis (sb_readonly=0)
Nov 22 04:41:16 np0005532048 nova_compute[253661]: 2025-11-22 09:41:16.935 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.936 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1915d045-a483-4ba0-9f22-02eb1e398b68.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1915d045-a483-4ba0-9f22-02eb1e398b68.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.937 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[27440e6a-5f8b-4402-98f9-1c2bcd4f868b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.938 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-1915d045-a483-4ba0-9f22-02eb1e398b68
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/1915d045-a483-4ba0-9f22-02eb1e398b68.pid.haproxy
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 1915d045-a483-4ba0-9f22-02eb1e398b68
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:41:16 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:16.938 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68', 'env', 'PROCESS_TAG=haproxy-1915d045-a483-4ba0-9f22-02eb1e398b68', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1915d045-a483-4ba0-9f22-02eb1e398b68.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 04:41:17 np0005532048 nova_compute[253661]: 2025-11-22 09:41:17.123 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804477.1225963, de63fafb-9cce-47c5-8cdc-f5c348b1777a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:41:17 np0005532048 nova_compute[253661]: 2025-11-22 09:41:17.123 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] VM Started (Lifecycle Event)
Nov 22 04:41:17 np0005532048 nova_compute[253661]: 2025-11-22 09:41:17.139 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:41:17 np0005532048 nova_compute[253661]: 2025-11-22 09:41:17.144 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804477.1228218, de63fafb-9cce-47c5-8cdc-f5c348b1777a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:41:17 np0005532048 nova_compute[253661]: 2025-11-22 09:41:17.144 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] VM Paused (Lifecycle Event)
Nov 22 04:41:17 np0005532048 nova_compute[253661]: 2025-11-22 09:41:17.160 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:41:17 np0005532048 nova_compute[253661]: 2025-11-22 09:41:17.163 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:41:17 np0005532048 nova_compute[253661]: 2025-11-22 09:41:17.181 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:41:17 np0005532048 funny_swirles[385310]: {
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:    "0": [
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:        {
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "devices": [
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "/dev/loop3"
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            ],
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "lv_name": "ceph_lv0",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "lv_size": "21470642176",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "name": "ceph_lv0",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "tags": {
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.cluster_name": "ceph",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.crush_device_class": "",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.encrypted": "0",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.osd_id": "0",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.type": "block",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.vdo": "0"
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            },
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "type": "block",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "vg_name": "ceph_vg0"
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:        }
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:    ],
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:    "1": [
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:        {
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "devices": [
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "/dev/loop4"
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            ],
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "lv_name": "ceph_lv1",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "lv_size": "21470642176",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "name": "ceph_lv1",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "tags": {
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.cluster_name": "ceph",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.crush_device_class": "",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.encrypted": "0",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.osd_id": "1",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.type": "block",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.vdo": "0"
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            },
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "type": "block",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "vg_name": "ceph_vg1"
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:        }
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:    ],
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:    "2": [
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:        {
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "devices": [
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "/dev/loop5"
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            ],
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "lv_name": "ceph_lv2",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "lv_size": "21470642176",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "name": "ceph_lv2",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "tags": {
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.cluster_name": "ceph",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.crush_device_class": "",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.encrypted": "0",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.osd_id": "2",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.type": "block",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:                "ceph.vdo": "0"
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            },
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "type": "block",
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:            "vg_name": "ceph_vg2"
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:        }
Nov 22 04:41:17 np0005532048 funny_swirles[385310]:    ]
Nov 22 04:41:17 np0005532048 funny_swirles[385310]: }
Nov 22 04:41:17 np0005532048 systemd[1]: libpod-2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd.scope: Deactivated successfully.
Nov 22 04:41:17 np0005532048 podman[385437]: 2025-11-22 09:41:17.362014008 +0000 UTC m=+0.093776840 container create 4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 22 04:41:17 np0005532048 podman[385437]: 2025-11-22 09:41:17.294224202 +0000 UTC m=+0.025987074 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:41:17 np0005532048 systemd[1]: Started libpod-conmon-4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85.scope.
Nov 22 04:41:17 np0005532048 podman[385452]: 2025-11-22 09:41:17.398365355 +0000 UTC m=+0.024502970 container died 2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:41:17 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:41:17 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b7b2a7a5a23716c4c3002911f51c4d5a41209bbe3eaadcfdb8becdce955832/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:41:17 np0005532048 systemd[1]: var-lib-containers-storage-overlay-16bffa5653772c33444d0be3ae5d71f65400d10b538073b0e057f4abdd0b37ab-merged.mount: Deactivated successfully.
Nov 22 04:41:17 np0005532048 podman[385437]: 2025-11-22 09:41:17.455491559 +0000 UTC m=+0.187254421 container init 4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 04:41:17 np0005532048 podman[385437]: 2025-11-22 09:41:17.462114261 +0000 UTC m=+0.193877093 container start 4657850d6e83ad0800fc41135bcdd0fc07b72e1e82170a3f5097ff0fe2488f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:41:17 np0005532048 podman[385452]: 2025-11-22 09:41:17.466232831 +0000 UTC m=+0.092370426 container remove 2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swirles, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:41:17 np0005532048 systemd[1]: libpod-conmon-2b60833f7cafa87a750f6e7d4e20b52d91e597c109edfd6edac65a430cda79bd.scope: Deactivated successfully.
Nov 22 04:41:17 np0005532048 neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68[385468]: [NOTICE]   (385474) : New worker (385476) forked
Nov 22 04:41:17 np0005532048 neutron-haproxy-ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68[385468]: [NOTICE]   (385474) : Loading success.
Nov 22 04:41:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:41:18 np0005532048 podman[385625]: 2025-11-22 09:41:18.120553783 +0000 UTC m=+0.069649281 container create 3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:41:18 np0005532048 systemd[1]: Started libpod-conmon-3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514.scope.
Nov 22 04:41:18 np0005532048 podman[385625]: 2025-11-22 09:41:18.072739066 +0000 UTC m=+0.021834594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:41:18 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:41:18 np0005532048 podman[385625]: 2025-11-22 09:41:18.205154148 +0000 UTC m=+0.154249646 container init 3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:41:18 np0005532048 podman[385625]: 2025-11-22 09:41:18.215408008 +0000 UTC m=+0.164503516 container start 3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 04:41:18 np0005532048 podman[385625]: 2025-11-22 09:41:18.21998376 +0000 UTC m=+0.169079328 container attach 3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 04:41:18 np0005532048 exciting_leavitt[385641]: 167 167
Nov 22 04:41:18 np0005532048 systemd[1]: libpod-3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514.scope: Deactivated successfully.
Nov 22 04:41:18 np0005532048 podman[385625]: 2025-11-22 09:41:18.222251336 +0000 UTC m=+0.171346844 container died 3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 04:41:18 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5226e1b02dc803701ca7876db97f61c1b1065c3673c23cc6a65eed7fafed48b4-merged.mount: Deactivated successfully.
Nov 22 04:41:18 np0005532048 podman[385625]: 2025-11-22 09:41:18.268121645 +0000 UTC m=+0.217217153 container remove 3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:41:18 np0005532048 systemd[1]: libpod-conmon-3b2f9de35fdea2827205b6ba18a9c2c5bb722cdb0f471b09fe785d200071f514.scope: Deactivated successfully.
Nov 22 04:41:18 np0005532048 podman[385664]: 2025-11-22 09:41:18.417722327 +0000 UTC m=+0.023912894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:41:18 np0005532048 podman[385664]: 2025-11-22 09:41:18.554248129 +0000 UTC m=+0.160438686 container create 1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curran, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:41:18 np0005532048 systemd[1]: Started libpod-conmon-1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9.scope.
Nov 22 04:41:18 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:41:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/721e84e695cba21b6d7ae961383e0129a30e62c748fcd32a3e7e688940f4c88b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:41:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/721e84e695cba21b6d7ae961383e0129a30e62c748fcd32a3e7e688940f4c88b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:41:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/721e84e695cba21b6d7ae961383e0129a30e62c748fcd32a3e7e688940f4c88b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:41:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/721e84e695cba21b6d7ae961383e0129a30e62c748fcd32a3e7e688940f4c88b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:41:18 np0005532048 podman[385664]: 2025-11-22 09:41:18.641095979 +0000 UTC m=+0.247286556 container init 1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 04:41:18 np0005532048 podman[385664]: 2025-11-22 09:41:18.646423549 +0000 UTC m=+0.252614096 container start 1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:41:18 np0005532048 podman[385664]: 2025-11-22 09:41:18.649780601 +0000 UTC m=+0.255971158 container attach 1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curran, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 04:41:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:41:19 np0005532048 elegant_curran[385680]: {
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:        "osd_id": 1,
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:        "type": "bluestore"
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:    },
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:        "osd_id": 0,
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:        "type": "bluestore"
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:    },
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:        "osd_id": 2,
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:        "type": "bluestore"
Nov 22 04:41:19 np0005532048 elegant_curran[385680]:    }
Nov 22 04:41:19 np0005532048 elegant_curran[385680]: }
Nov 22 04:41:19 np0005532048 systemd[1]: libpod-1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9.scope: Deactivated successfully.
Nov 22 04:41:19 np0005532048 systemd[1]: libpod-1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9.scope: Consumed 1.029s CPU time.
Nov 22 04:41:19 np0005532048 podman[385664]: 2025-11-22 09:41:19.670220399 +0000 UTC m=+1.276410946 container died 1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 04:41:19 np0005532048 systemd[1]: var-lib-containers-storage-overlay-721e84e695cba21b6d7ae961383e0129a30e62c748fcd32a3e7e688940f4c88b-merged.mount: Deactivated successfully.
Nov 22 04:41:19 np0005532048 podman[385664]: 2025-11-22 09:41:19.744001111 +0000 UTC m=+1.350191648 container remove 1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 04:41:19 np0005532048 systemd[1]: libpod-conmon-1d230e874a432c949b7a4f037cf092652fb47c6b5c69306e564b0cd5422380b9.scope: Deactivated successfully.
Nov 22 04:41:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:41:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:41:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:41:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:41:19 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev fe1d546c-0be4-4a9d-9c15-da9eedf7d7e5 does not exist
Nov 22 04:41:19 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 354cdb3b-33c6-4463-9e2a-17488542c0fb does not exist
Nov 22 04:41:19 np0005532048 nova_compute[253661]: 2025-11-22 09:41:19.825 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:19 np0005532048 nova_compute[253661]: 2025-11-22 09:41:19.826 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "a20eee04-e3b6-4162-91f7-e6c92d8a07fa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:19 np0005532048 nova_compute[253661]: 2025-11-22 09:41:19.840 253665 DEBUG nova.compute.manager [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:41:19 np0005532048 nova_compute[253661]: 2025-11-22 09:41:19.915 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:19 np0005532048 nova_compute[253661]: 2025-11-22 09:41:19.916 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:19 np0005532048 nova_compute[253661]: 2025-11-22 09:41:19.929 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:41:19 np0005532048 nova_compute[253661]: 2025-11-22 09:41:19.929 253665 INFO nova.compute.claims [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.045 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:41:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 88 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.446 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:41:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2620661734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.506 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.513 253665 DEBUG nova.compute.provider_tree [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.529 253665 DEBUG nova.scheduler.client.report [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.559 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.560 253665 DEBUG nova.compute.manager [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.609 253665 DEBUG nova.compute.manager [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.609 253665 DEBUG nova.network.neutron [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.624 253665 INFO nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.642 253665 DEBUG nova.compute.manager [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.723 253665 DEBUG nova.compute.manager [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.724 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.725 253665 INFO nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Creating image(s)#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.757 253665 DEBUG nova.storage.rbd_utils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.784 253665 DEBUG nova.storage.rbd_utils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:41:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:41:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.834 253665 DEBUG nova.storage.rbd_utils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.840 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.950 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.951 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.952 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.952 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.978 253665 DEBUG nova.storage.rbd_utils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:41:20 np0005532048 nova_compute[253661]: 2025-11-22 09:41:20.984 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.042 253665 DEBUG nova.policy [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4993d04ad8774a15825d4bea194cd1ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '46d50d652376434585e9da83e40f96bb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.204 253665 DEBUG nova.compute.manager [req-650d071c-1d11-44cf-a1e9-304d55e84ce4 req-68bfe5c4-2db9-4834-abaf-3d7357914530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Received event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.205 253665 DEBUG oslo_concurrency.lockutils [req-650d071c-1d11-44cf-a1e9-304d55e84ce4 req-68bfe5c4-2db9-4834-abaf-3d7357914530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.206 253665 DEBUG oslo_concurrency.lockutils [req-650d071c-1d11-44cf-a1e9-304d55e84ce4 req-68bfe5c4-2db9-4834-abaf-3d7357914530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.206 253665 DEBUG oslo_concurrency.lockutils [req-650d071c-1d11-44cf-a1e9-304d55e84ce4 req-68bfe5c4-2db9-4834-abaf-3d7357914530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.206 253665 DEBUG nova.compute.manager [req-650d071c-1d11-44cf-a1e9-304d55e84ce4 req-68bfe5c4-2db9-4834-abaf-3d7357914530 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Processing event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.208 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.216 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804481.2140331, de63fafb-9cce-47c5-8cdc-f5c348b1777a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.217 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.223 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.229 253665 INFO nova.virt.libvirt.driver [-] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Instance spawned successfully.#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.232 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.236 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.240 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.251 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.253 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.253 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.253 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.254 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.254 253665 DEBUG nova.virt.libvirt.driver [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.261 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.321 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.357 253665 INFO nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Took 9.51 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.358 253665 DEBUG nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.390 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.402 253665 DEBUG nova.storage.rbd_utils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] resizing rbd image a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.504 253665 DEBUG nova.objects.instance [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'migration_context' on Instance uuid a20eee04-e3b6-4162-91f7-e6c92d8a07fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.612 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.612 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Ensure instance console log exists: /var/lib/nova/instances/a20eee04-e3b6-4162-91f7-e6c92d8a07fa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.613 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.613 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.613 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.652 253665 INFO nova.compute.manager [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Took 10.70 seconds to build instance.#033[00m
Nov 22 04:41:21 np0005532048 nova_compute[253661]: 2025-11-22 09:41:21.674 253665 DEBUG oslo_concurrency.lockutils [None req-3c250d88-e774-4cc7-95f2-0c8116e93cec 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 107 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.6 MiB/s wr, 39 op/s
Nov 22 04:41:22 np0005532048 nova_compute[253661]: 2025-11-22 09:41:22.585 253665 DEBUG nova.network.neutron [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Successfully created port: c9b1b309-4443-4694-8649-f59d1739cdaf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:41:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:41:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:41:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:41:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:41:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:41:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:41:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:41:23 np0005532048 nova_compute[253661]: 2025-11-22 09:41:23.965 253665 DEBUG nova.compute.manager [req-a573a748-000e-4cad-82ba-bee9248b8f55 req-c4234156-7a62-4557-aebe-d2ce2d96d870 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Received event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:41:23 np0005532048 nova_compute[253661]: 2025-11-22 09:41:23.965 253665 DEBUG oslo_concurrency.lockutils [req-a573a748-000e-4cad-82ba-bee9248b8f55 req-c4234156-7a62-4557-aebe-d2ce2d96d870 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:23 np0005532048 nova_compute[253661]: 2025-11-22 09:41:23.965 253665 DEBUG oslo_concurrency.lockutils [req-a573a748-000e-4cad-82ba-bee9248b8f55 req-c4234156-7a62-4557-aebe-d2ce2d96d870 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:23 np0005532048 nova_compute[253661]: 2025-11-22 09:41:23.965 253665 DEBUG oslo_concurrency.lockutils [req-a573a748-000e-4cad-82ba-bee9248b8f55 req-c4234156-7a62-4557-aebe-d2ce2d96d870 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:23 np0005532048 nova_compute[253661]: 2025-11-22 09:41:23.965 253665 DEBUG nova.compute.manager [req-a573a748-000e-4cad-82ba-bee9248b8f55 req-c4234156-7a62-4557-aebe-d2ce2d96d870 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] No waiting events found dispatching network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:41:23 np0005532048 nova_compute[253661]: 2025-11-22 09:41:23.966 253665 WARNING nova.compute.manager [req-a573a748-000e-4cad-82ba-bee9248b8f55 req-c4234156-7a62-4557-aebe-d2ce2d96d870 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Received unexpected event network-vif-plugged-a86218e5-015d-4324-b94e-b87b21f3333d for instance with vm_state active and task_state None.#033[00m
Nov 22 04:41:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 134 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 114 op/s
Nov 22 04:41:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:25.049 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:41:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:25.050 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:41:25 np0005532048 nova_compute[253661]: 2025-11-22 09:41:25.050 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:25 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:25.051 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:41:25 np0005532048 nova_compute[253661]: 2025-11-22 09:41:25.448 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:25 np0005532048 nova_compute[253661]: 2025-11-22 09:41:25.670 253665 DEBUG nova.network.neutron [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Successfully updated port: c9b1b309-4443-4694-8649-f59d1739cdaf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:41:25 np0005532048 nova_compute[253661]: 2025-11-22 09:41:25.681 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "refresh_cache-a20eee04-e3b6-4162-91f7-e6c92d8a07fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:41:25 np0005532048 nova_compute[253661]: 2025-11-22 09:41:25.682 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquired lock "refresh_cache-a20eee04-e3b6-4162-91f7-e6c92d8a07fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:41:25 np0005532048 nova_compute[253661]: 2025-11-22 09:41:25.682 253665 DEBUG nova.network.neutron [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:41:26 np0005532048 nova_compute[253661]: 2025-11-22 09:41:25.999 253665 DEBUG nova.network.neutron [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:41:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 134 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 99 op/s
Nov 22 04:41:26 np0005532048 nova_compute[253661]: 2025-11-22 09:41:26.068 253665 DEBUG nova.compute.manager [req-99d03b31-284e-4fac-ab44-d7ca142eb1b4 req-384b9059-659a-4b1a-8684-65b981a87ae7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Received event network-changed-c9b1b309-4443-4694-8649-f59d1739cdaf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:41:26 np0005532048 nova_compute[253661]: 2025-11-22 09:41:26.069 253665 DEBUG nova.compute.manager [req-99d03b31-284e-4fac-ab44-d7ca142eb1b4 req-384b9059-659a-4b1a-8684-65b981a87ae7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Refreshing instance network info cache due to event network-changed-c9b1b309-4443-4694-8649-f59d1739cdaf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:41:26 np0005532048 nova_compute[253661]: 2025-11-22 09:41:26.069 253665 DEBUG oslo_concurrency.lockutils [req-99d03b31-284e-4fac-ab44-d7ca142eb1b4 req-384b9059-659a-4b1a-8684-65b981a87ae7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-a20eee04-e3b6-4162-91f7-e6c92d8a07fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:41:26 np0005532048 podman[385965]: 2025-11-22 09:41:26.367753274 +0000 UTC m=+0.062915647 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 22 04:41:26 np0005532048 nova_compute[253661]: 2025-11-22 09:41:26.371 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:26 np0005532048 podman[385966]: 2025-11-22 09:41:26.38233595 +0000 UTC m=+0.077482502 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.052 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:27 np0005532048 NetworkManager[48920]: <info>  [1763804487.0553] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/555)
Nov 22 04:41:27 np0005532048 NetworkManager[48920]: <info>  [1763804487.0569] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.171 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:41:27Z|01381|binding|INFO|Releasing lport 3f753c2a-471e-42be-9ebf-5498238bbd2c from this chassis (sb_readonly=0)
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.187 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.807 253665 DEBUG nova.network.neutron [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Updating instance_info_cache with network_info: [{"id": "c9b1b309-4443-4694-8649-f59d1739cdaf", "address": "fa:16:3e:b5:31:0e", "network": {"id": "67335e4a-26f1-458f-8b59-73c6186dbd75", "bridge": "br-int", "label": "tempest-network-smoke--1213415235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b1b309-44", "ovs_interfaceid": "c9b1b309-4443-4694-8649-f59d1739cdaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.957 253665 DEBUG oslo_concurrency.lockutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Releasing lock "refresh_cache-a20eee04-e3b6-4162-91f7-e6c92d8a07fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.958 253665 DEBUG nova.compute.manager [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Instance network_info: |[{"id": "c9b1b309-4443-4694-8649-f59d1739cdaf", "address": "fa:16:3e:b5:31:0e", "network": {"id": "67335e4a-26f1-458f-8b59-73c6186dbd75", "bridge": "br-int", "label": "tempest-network-smoke--1213415235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b1b309-44", "ovs_interfaceid": "c9b1b309-4443-4694-8649-f59d1739cdaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.958 253665 DEBUG oslo_concurrency.lockutils [req-99d03b31-284e-4fac-ab44-d7ca142eb1b4 req-384b9059-659a-4b1a-8684-65b981a87ae7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-a20eee04-e3b6-4162-91f7-e6c92d8a07fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.958 253665 DEBUG nova.network.neutron [req-99d03b31-284e-4fac-ab44-d7ca142eb1b4 req-384b9059-659a-4b1a-8684-65b981a87ae7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Refreshing network info cache for port c9b1b309-4443-4694-8649-f59d1739cdaf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.961 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: a20eee04-e3b6-4162-91f7-e6c92d8a07fa] Start _get_guest_xml network_info=[{"id": "c9b1b309-4443-4694-8649-f59d1739cdaf", "address": "fa:16:3e:b5:31:0e", "network": {"id": "67335e4a-26f1-458f-8b59-73c6186dbd75", "bridge": "br-int", "label": "tempest-network-smoke--1213415235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9b1b309-44", "ovs_interfaceid": "c9b1b309-4443-4694-8649-f59d1739cdaf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.965 253665 WARNING nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.971 253665 DEBUG nova.virt.libvirt.host [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.972 253665 DEBUG nova.virt.libvirt.host [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.976 253665 DEBUG nova.virt.libvirt.host [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.977 253665 DEBUG nova.virt.libvirt.host [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.977 253665 DEBUG nova.virt.libvirt.driver [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.977 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.978 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.978 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.978 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.978 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.979 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.979 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.979 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.979 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.979 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.980 253665 DEBUG nova.virt.hardware [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:41:27 np0005532048 nova_compute[253661]: 2025-11-22 09:41:27.982 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:41:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:27.985 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:27.986 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:27.986 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 134 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 113 op/s
Nov 22 04:41:28 np0005532048 nova_compute[253661]: 2025-11-22 09:41:28.185 253665 DEBUG nova.compute.manager [req-0b2d9e5c-f6dc-4b69-ba03-bb964f4d8097 req-6cd3c101-167a-4012-8977-2301756eb986 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Received event network-changed-a86218e5-015d-4324-b94e-b87b21f3333d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:41:28 np0005532048 nova_compute[253661]: 2025-11-22 09:41:28.185 253665 DEBUG nova.compute.manager [req-0b2d9e5c-f6dc-4b69-ba03-bb964f4d8097 req-6cd3c101-167a-4012-8977-2301756eb986 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Refreshing instance network info cache due to event network-changed-a86218e5-015d-4324-b94e-b87b21f3333d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:41:28 np0005532048 nova_compute[253661]: 2025-11-22 09:41:28.186 253665 DEBUG oslo_concurrency.lockutils [req-0b2d9e5c-f6dc-4b69-ba03-bb964f4d8097 req-6cd3c101-167a-4012-8977-2301756eb986 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-de63fafb-9cce-47c5-8cdc-f5c348b1777a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:41:28 np0005532048 nova_compute[253661]: 2025-11-22 09:41:28.186 253665 DEBUG oslo_concurrency.lockutils [req-0b2d9e5c-f6dc-4b69-ba03-bb964f4d8097 req-6cd3c101-167a-4012-8977-2301756eb986 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-de63fafb-9cce-47c5-8cdc-f5c348b1777a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:41:28 np0005532048 nova_compute[253661]: 2025-11-22 09:41:28.186 253665 DEBUG nova.network.neutron [req-0b2d9e5c-f6dc-4b69-ba03-bb964f4d8097 req-6cd3c101-167a-4012-8977-2301756eb986 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Refreshing network info cache for port a86218e5-015d-4324-b94e-b87b21f3333d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:41:28 np0005532048 nova_compute[253661]: 2025-11-22 09:41:28.391 253665 DEBUG oslo_concurrency.lockutils [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:28 np0005532048 nova_compute[253661]: 2025-11-22 09:41:28.391 253665 DEBUG oslo_concurrency.lockutils [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:28 np0005532048 nova_compute[253661]: 2025-11-22 09:41:28.392 253665 DEBUG oslo_concurrency.lockutils [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:41:28 np0005532048 nova_compute[253661]: 2025-11-22 09:41:28.392 253665 DEBUG oslo_concurrency.lockutils [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:41:28 np0005532048 nova_compute[253661]: 2025-11-22 09:41:28.392 253665 DEBUG oslo_concurrency.lockutils [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "de63fafb-9cce-47c5-8cdc-f5c348b1777a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:41:28 np0005532048 nova_compute[253661]: 2025-11-22 09:41:28.393 253665 INFO nova.compute.manager [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Terminating instance#033[00m
Nov 22 04:41:28 np0005532048 nova_compute[253661]: 2025-11-22 09:41:28.394 253665 DEBUG nova.compute.manager [None req-0d0839ce-4aab-4247-a780-f5931031100a 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: de63fafb-9cce-47c5-8cdc-f5c348b1777a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:41:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:41:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2300531492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:41:28 np0005532048 kernel: tapa86218e5-01 (unregistering): left promiscuous mode
Nov 22 04:41:28 np0005532048 NetworkManager[48920]: <info>  [1763804488.4344] device (tapa86218e5-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:41:28 np0005532048 nova_compute[253661]: 2025-11-22 09:41:28.443 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:41:28 np0005532048 ovn_controller[152872]: 2025-11-22T09:41:28Z|01382|binding|INFO|Releasing lport a86218e5-015d-4324-b94e-b87b21f3333d from this chassis (sb_readonly=0)
Nov 22 04:41:28 np0005532048 ovn_controller[152872]: 2025-11-22T09:41:28Z|01383|binding|INFO|Setting lport a86218e5-015d-4324-b94e-b87b21f3333d down in Southbound
Nov 22 04:41:28 np0005532048 ovn_controller[152872]: 2025-11-22T09:41:28Z|01384|binding|INFO|Removing iface tapa86218e5-01 ovn-installed in OVS
Nov 22 04:41:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.467 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:8b:9e 10.100.0.7'], port_security=['fa:16:3e:9c:8b:9e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-305883851', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'de63fafb-9cce-47c5-8cdc-f5c348b1777a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1915d045-a483-4ba0-9f22-02eb1e398b68', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-305883851', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1568c3cc-a804-4f98-8194-b53f79976399', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c342dd41-b1eb-43d0-a96a-717d17dead9b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=a86218e5-015d-4324-b94e-b87b21f3333d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:41:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.468 162862 INFO neutron.agent.ovn.metadata.agent [-] Port a86218e5-015d-4324-b94e-b87b21f3333d in datapath 1915d045-a483-4ba0-9f22-02eb1e398b68 unbound from our chassis#033[00m
Nov 22 04:41:28 np0005532048 nova_compute[253661]: 2025-11-22 09:41:28.469 253665 DEBUG nova.storage.rbd_utils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image a20eee04-e3b6-4162-91f7-e6c92d8a07fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:41:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.470 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1915d045-a483-4ba0-9f22-02eb1e398b68, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:41:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.472 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[29596ad0-1853-46f0-9cb4-bb86bed5bac6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:41:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:41:28.473 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1915d045-a483-4ba0-9f22-02eb1e398b68 namespace which is not needed anymore#033[00m
Nov 22 04:41:28 np0005532048 nova_compute[253661]: 2025-11-22 09:41:28.478 253665 DEBUG oslo_concurrency.processutils [None req-60eeb7e5-ab11-4be2-b90e-611963b2af14 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:41:28 np0005532048 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Nov 22 04:44:31 np0005532048 nova_compute[253661]: 2025-11-22 09:44:31.671 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:32 np0005532048 rsyslogd[1005]: imjournal: 6187 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 22 04:44:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 269 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.3 MiB/s wr, 203 op/s
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.528 253665 DEBUG nova.network.neutron [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updating instance_info_cache with network_info: [{"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.551 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Releasing lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.551 253665 DEBUG nova.compute.manager [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Instance network_info: |[{"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.552 253665 DEBUG oslo_concurrency.lockutils [req-b77e640e-c10d-45b3-8b25-ba0f4f849e17 req-923f8de5-3550-4380-9324-52e4ef967e5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.552 253665 DEBUG nova.network.neutron [req-b77e640e-c10d-45b3-8b25-ba0f4f849e17 req-923f8de5-3550-4380-9324-52e4ef967e5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Refreshing network info cache for port 491b9f04-4133-4553-a044-0dffe6278421 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.554 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Start _get_guest_xml network_info=[{"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.560 253665 WARNING nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.569 253665 DEBUG nova.virt.libvirt.host [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.570 253665 DEBUG nova.virt.libvirt.host [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.574 253665 DEBUG nova.virt.libvirt.host [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.574 253665 DEBUG nova.virt.libvirt.host [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.574 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.575 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.575 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.575 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.576 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.576 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.576 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.576 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.577 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.577 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.577 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.577 253665 DEBUG nova.virt.hardware [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.580 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.638 253665 DEBUG nova.compute.manager [req-1f0ed2af-d1ae-44d1-806d-e745511c8ed8 req-8836e90d-adcb-4d1c-a8f1-a53a27ac21fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received event network-changed-ae02b780-c76c-4fec-9f50-a8fb17aec607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.638 253665 DEBUG nova.compute.manager [req-1f0ed2af-d1ae-44d1-806d-e745511c8ed8 req-8836e90d-adcb-4d1c-a8f1-a53a27ac21fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Refreshing instance network info cache due to event network-changed-ae02b780-c76c-4fec-9f50-a8fb17aec607. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.639 253665 DEBUG oslo_concurrency.lockutils [req-1f0ed2af-d1ae-44d1-806d-e745511c8ed8 req-8836e90d-adcb-4d1c-a8f1-a53a27ac21fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.639 253665 DEBUG oslo_concurrency.lockutils [req-1f0ed2af-d1ae-44d1-806d-e745511c8ed8 req-8836e90d-adcb-4d1c-a8f1-a53a27ac21fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:44:32 np0005532048 nova_compute[253661]: 2025-11-22 09:44:32.639 253665 DEBUG nova.network.neutron [req-1f0ed2af-d1ae-44d1-806d-e745511c8ed8 req-8836e90d-adcb-4d1c-a8f1-a53a27ac21fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Refreshing network info cache for port ae02b780-c76c-4fec-9f50-a8fb17aec607 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:44:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:44:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3868522305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.046 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.111 253665 DEBUG nova.storage.rbd_utils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.115 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:44:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:44:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3844498296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.580 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.583 253665 DEBUG nova.virt.libvirt.vif [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:44:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-211837653',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-211837653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=138,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBOwRQlDAdo+g60Ps/HwU/VMS64eGZhSkvI6bOPavIrg+ELfIh5TkgiKpEGXEdq5ORKgO91xQXWepwxlqtHh67VkaK6Xf3kHKOB8vlHPEMg4W1PVvZy7W3qb1i+rXVHWpw==',key_name='tempest-TestSecurityGroupsBasicOps-584634060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-97r64zcs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:26Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=71ef7514-c6bd-40ee-852a-4b850ca0a05c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.583 253665 DEBUG nova.network.os_vif_util [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.584 253665 DEBUG nova.network.os_vif_util [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:7d:61,bridge_name='br-int',has_traffic_filtering=True,id=491b9f04-4133-4553-a044-0dffe6278421,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap491b9f04-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.585 253665 DEBUG nova.objects.instance [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'pci_devices' on Instance uuid 71ef7514-c6bd-40ee-852a-4b850ca0a05c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.597 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  <uuid>71ef7514-c6bd-40ee-852a-4b850ca0a05c</uuid>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  <name>instance-0000008a</name>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-211837653</nova:name>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:44:32</nova:creationTime>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:44:33 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:        <nova:user uuid="4993d04ad8774a15825d4bea194cd1ca">tempest-TestSecurityGroupsBasicOps-488258979-project-member</nova:user>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:        <nova:project uuid="46d50d652376434585e9da83e40f96bb">tempest-TestSecurityGroupsBasicOps-488258979</nova:project>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:        <nova:port uuid="491b9f04-4133-4553-a044-0dffe6278421">
Nov 22 04:44:33 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <entry name="serial">71ef7514-c6bd-40ee-852a-4b850ca0a05c</entry>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <entry name="uuid">71ef7514-c6bd-40ee-852a-4b850ca0a05c</entry>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk">
Nov 22 04:44:33 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:44:33 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk.config">
Nov 22 04:44:33 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:44:33 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:a8:7d:61"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <target dev="tap491b9f04-41"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c/console.log" append="off"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:44:33 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:44:33 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:44:33 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:44:33 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.604 253665 DEBUG nova.compute.manager [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Preparing to wait for external event network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.604 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.605 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.605 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.606 253665 DEBUG nova.virt.libvirt.vif [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:44:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-211837653',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-211837653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=138,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBOwRQlDAdo+g60Ps/HwU/VMS64eGZhSkvI6bOPavIrg+ELfIh5TkgiKpEGXEdq5ORKgO91xQXWepwxlqtHh67VkaK6Xf3kHKOB8vlHPEMg4W1PVvZy7W3qb1i+rXVHWpw==',key_name='tempest-TestSecurityGroupsBasicOps-584634060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-97r64zcs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:26Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=71ef7514-c6bd-40ee-852a-4b850ca0a05c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.607 253665 DEBUG nova.network.os_vif_util [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.607 253665 DEBUG nova.network.os_vif_util [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:7d:61,bridge_name='br-int',has_traffic_filtering=True,id=491b9f04-4133-4553-a044-0dffe6278421,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap491b9f04-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.608 253665 DEBUG os_vif [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:7d:61,bridge_name='br-int',has_traffic_filtering=True,id=491b9f04-4133-4553-a044-0dffe6278421,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap491b9f04-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.612 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.612 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.613 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.616 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.616 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap491b9f04-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.617 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap491b9f04-41, col_values=(('external_ids', {'iface-id': '491b9f04-4133-4553-a044-0dffe6278421', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:7d:61', 'vm-uuid': '71ef7514-c6bd-40ee-852a-4b850ca0a05c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:33 np0005532048 NetworkManager[48920]: <info>  [1763804673.6195] manager: (tap491b9f04-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/607)
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.622 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.624 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.625 253665 INFO os_vif [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:7d:61,bridge_name='br-int',has_traffic_filtering=True,id=491b9f04-4133-4553-a044-0dffe6278421,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap491b9f04-41')#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.732 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.733 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.733 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No VIF found with MAC fa:16:3e:a8:7d:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.734 253665 INFO nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Using config drive#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.754 253665 DEBUG nova.storage.rbd_utils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.824 253665 DEBUG nova.network.neutron [req-1f0ed2af-d1ae-44d1-806d-e745511c8ed8 req-8836e90d-adcb-4d1c-a8f1-a53a27ac21fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Updated VIF entry in instance network info cache for port ae02b780-c76c-4fec-9f50-a8fb17aec607. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.825 253665 DEBUG nova.network.neutron [req-1f0ed2af-d1ae-44d1-806d-e745511c8ed8 req-8836e90d-adcb-4d1c-a8f1-a53a27ac21fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Updating instance_info_cache with network_info: [{"id": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "address": "fa:16:3e:82:76:7a", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae02b780-c7", "ovs_interfaceid": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:44:33 np0005532048 nova_compute[253661]: 2025-11-22 09:44:33.843 253665 DEBUG oslo_concurrency.lockutils [req-1f0ed2af-d1ae-44d1-806d-e745511c8ed8 req-8836e90d-adcb-4d1c-a8f1-a53a27ac21fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:44:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:44:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.5 MiB/s wr, 179 op/s
Nov 22 04:44:34 np0005532048 nova_compute[253661]: 2025-11-22 09:44:34.175 253665 DEBUG nova.network.neutron [req-b77e640e-c10d-45b3-8b25-ba0f4f849e17 req-923f8de5-3550-4380-9324-52e4ef967e5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updated VIF entry in instance network info cache for port 491b9f04-4133-4553-a044-0dffe6278421. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:44:34 np0005532048 nova_compute[253661]: 2025-11-22 09:44:34.176 253665 DEBUG nova.network.neutron [req-b77e640e-c10d-45b3-8b25-ba0f4f849e17 req-923f8de5-3550-4380-9324-52e4ef967e5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updating instance_info_cache with network_info: [{"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:44:34 np0005532048 nova_compute[253661]: 2025-11-22 09:44:34.192 253665 DEBUG oslo_concurrency.lockutils [req-b77e640e-c10d-45b3-8b25-ba0f4f849e17 req-923f8de5-3550-4380-9324-52e4ef967e5b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:44:34 np0005532048 nova_compute[253661]: 2025-11-22 09:44:34.832 253665 INFO nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Creating config drive at /var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c/disk.config#033[00m
Nov 22 04:44:34 np0005532048 nova_compute[253661]: 2025-11-22 09:44:34.837 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr2rekoic execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:44:34 np0005532048 nova_compute[253661]: 2025-11-22 09:44:34.986 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr2rekoic" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:44:35 np0005532048 nova_compute[253661]: 2025-11-22 09:44:35.019 253665 DEBUG nova.storage.rbd_utils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image 71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:44:35 np0005532048 nova_compute[253661]: 2025-11-22 09:44:35.025 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c/disk.config 71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:44:35 np0005532048 nova_compute[253661]: 2025-11-22 09:44:35.279 253665 DEBUG oslo_concurrency.processutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c/disk.config 71ef7514-c6bd-40ee-852a-4b850ca0a05c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:44:35 np0005532048 nova_compute[253661]: 2025-11-22 09:44:35.280 253665 INFO nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Deleting local config drive /var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c/disk.config because it was imported into RBD.#033[00m
Nov 22 04:44:35 np0005532048 kernel: tap491b9f04-41: entered promiscuous mode
Nov 22 04:44:35 np0005532048 NetworkManager[48920]: <info>  [1763804675.3599] manager: (tap491b9f04-41): new Tun device (/org/freedesktop/NetworkManager/Devices/608)
Nov 22 04:44:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:35Z|01483|binding|INFO|Claiming lport 491b9f04-4133-4553-a044-0dffe6278421 for this chassis.
Nov 22 04:44:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:35Z|01484|binding|INFO|491b9f04-4133-4553-a044-0dffe6278421: Claiming fa:16:3e:a8:7d:61 10.100.0.11
Nov 22 04:44:35 np0005532048 nova_compute[253661]: 2025-11-22 09:44:35.363 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.372 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:7d:61 10.100.0.11'], port_security=['fa:16:3e:a8:7d:61 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '71ef7514-c6bd-40ee-852a-4b850ca0a05c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a734f39d-baf0-4591-94dc-9057caf53bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c524ade6-1430-48f4-af9a-629e8a61db96 d6471b4e-7bc5-407e-a8cc-88aa50b6222f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ce1fe74-6934-45b2-a6d9-4702f1b2307a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=491b9f04-4133-4553-a044-0dffe6278421) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.373 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 491b9f04-4133-4553-a044-0dffe6278421 in datapath a734f39d-baf0-4591-94dc-9057caf53bb4 bound to our chassis#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.375 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a734f39d-baf0-4591-94dc-9057caf53bb4#033[00m
Nov 22 04:44:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:35Z|01485|binding|INFO|Setting lport 491b9f04-4133-4553-a044-0dffe6278421 ovn-installed in OVS
Nov 22 04:44:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:35Z|01486|binding|INFO|Setting lport 491b9f04-4133-4553-a044-0dffe6278421 up in Southbound
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.388 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5930566e-2fd1-4114-9db3-088320638997]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.389 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa734f39d-b1 in ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.392 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa734f39d-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.392 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9763f857-782c-4668-a7ad-8a3c9cb4dc26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.394 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a3fddd9b-ace2-4dd9-9517-0af902c7929d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.407 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[3e352436-8d88-4466-939a-847bb71d7756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:35 np0005532048 nova_compute[253661]: 2025-11-22 09:44:35.423 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.427 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f1de5d21-0dee-4551-9496-ae9b972d409c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:44:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:44:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:44:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:44:35 np0005532048 systemd-machined[215941]: New machine qemu-169-instance-0000008a.
Nov 22 04:44:35 np0005532048 systemd[1]: Started Virtual Machine qemu-169-instance-0000008a.
Nov 22 04:44:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:44:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:44:35 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev b35c8a45-723a-42ed-841c-77e29070833d does not exist
Nov 22 04:44:35 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 719f2cb5-6785-4554-be52-a1adbbccdb43 does not exist
Nov 22 04:44:35 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 8c981303-3a76-4da1-aea9-25e8ab62f99f does not exist
Nov 22 04:44:35 np0005532048 systemd-udevd[394070]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:44:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:44:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:44:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:44:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:44:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:44:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.466 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[124d9bf9-ec97-4c17-a06f-a5e532da57bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.472 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c34ffa6b-7f0f-48a9-85ed-554fe1abea68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:35 np0005532048 NetworkManager[48920]: <info>  [1763804675.4742] manager: (tapa734f39d-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/609)
Nov 22 04:44:35 np0005532048 systemd-udevd[394079]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:44:35 np0005532048 NetworkManager[48920]: <info>  [1763804675.4780] device (tap491b9f04-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:44:35 np0005532048 NetworkManager[48920]: <info>  [1763804675.4794] device (tap491b9f04-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.514 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[145ac6cb-9ccd-47ca-92d5-c7baac0c3efa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.518 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b893da07-e96a-4122-a81f-975dd7d2443d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:35 np0005532048 NetworkManager[48920]: <info>  [1763804675.5531] device (tapa734f39d-b0): carrier: link connected
Nov 22 04:44:35 np0005532048 podman[394062]: 2025-11-22 09:44:35.570141374 +0000 UTC m=+0.127185732 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.572 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[303885cc-b6d2-411e-ae2a-fec10bf175c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.592 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[198c1914-d5d9-46af-a424-56a00e0070d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa734f39d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:4f:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752629, 'reachable_time': 30146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394150, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.610 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[76699df7-65bd-434d-be80-6ee12dcf56ba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:4fef'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752629, 'tstamp': 752629}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394166, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.631 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb5ae2d-a9db-480b-9a66-a2ea2719d80f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa734f39d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:4f:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752629, 'reachable_time': 30146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394169, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.665 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1851ae-492c-4e08-8fef-8519dc07d312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.735 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce5ee26-891d-45a3-96ab-53f1190896ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.736 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa734f39d-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.736 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.736 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa734f39d-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:44:35 np0005532048 nova_compute[253661]: 2025-11-22 09:44:35.738 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:35 np0005532048 NetworkManager[48920]: <info>  [1763804675.7385] manager: (tapa734f39d-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/610)
Nov 22 04:44:35 np0005532048 kernel: tapa734f39d-b0: entered promiscuous mode
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.740 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa734f39d-b0, col_values=(('external_ids', {'iface-id': '3db82a3e-3c50-4f8e-b5b4-8b4657d60723'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:44:35 np0005532048 nova_compute[253661]: 2025-11-22 09:44:35.741 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:35 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:35Z|01487|binding|INFO|Releasing lport 3db82a3e-3c50-4f8e-b5b4-8b4657d60723 from this chassis (sb_readonly=0)
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.745 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a734f39d-baf0-4591-94dc-9057caf53bb4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a734f39d-baf0-4591-94dc-9057caf53bb4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:44:35 np0005532048 nova_compute[253661]: 2025-11-22 09:44:35.747 253665 DEBUG nova.compute.manager [req-e6f262c4-e247-479f-b73e-fbbc8fd2c5e9 req-3851b7b2-9fbf-4902-a8e4-518aa551fc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received event network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:44:35 np0005532048 nova_compute[253661]: 2025-11-22 09:44:35.748 253665 DEBUG oslo_concurrency.lockutils [req-e6f262c4-e247-479f-b73e-fbbc8fd2c5e9 req-3851b7b2-9fbf-4902-a8e4-518aa551fc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:35 np0005532048 nova_compute[253661]: 2025-11-22 09:44:35.749 253665 DEBUG oslo_concurrency.lockutils [req-e6f262c4-e247-479f-b73e-fbbc8fd2c5e9 req-3851b7b2-9fbf-4902-a8e4-518aa551fc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:44:35 np0005532048 nova_compute[253661]: 2025-11-22 09:44:35.750 253665 DEBUG oslo_concurrency.lockutils [req-e6f262c4-e247-479f-b73e-fbbc8fd2c5e9 req-3851b7b2-9fbf-4902-a8e4-518aa551fc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:44:35 np0005532048 nova_compute[253661]: 2025-11-22 09:44:35.750 253665 DEBUG nova.compute.manager [req-e6f262c4-e247-479f-b73e-fbbc8fd2c5e9 req-3851b7b2-9fbf-4902-a8e4-518aa551fc6d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Processing event network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:44:35 np0005532048 nova_compute[253661]: 2025-11-22 09:44:35.751 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.747 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c7bc49db-0b06-4db2-9e4d-9a466eae8126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.748 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-a734f39d-baf0-4591-94dc-9057caf53bb4
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/a734f39d-baf0-4591-94dc-9057caf53bb4.pid.haproxy
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID a734f39d-baf0-4591-94dc-9057caf53bb4
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.749 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'env', 'PROCESS_TAG=haproxy-a734f39d-baf0-4591-94dc-9057caf53bb4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a734f39d-baf0-4591-94dc-9057caf53bb4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:44:35 np0005532048 nova_compute[253661]: 2025-11-22 09:44:35.762 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:35 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:44:35 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:44:35 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:44:35 np0005532048 nova_compute[253661]: 2025-11-22 09:44:35.836 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:35 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:35.836 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.016 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804676.0157568, 71ef7514-c6bd-40ee-852a-4b850ca0a05c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.016 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] VM Started (Lifecycle Event)#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.020 253665 DEBUG nova.compute.manager [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.024 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.028 253665 INFO nova.virt.libvirt.driver [-] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Instance spawned successfully.#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.029 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.049 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.059 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.063 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.068 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.069 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.069 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.069 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.070 253665 DEBUG nova.virt.libvirt.driver [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.083 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.084 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804676.0188131, 71ef7514-c6bd-40ee-852a-4b850ca0a05c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.084 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.124 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.131 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804676.0230613, 71ef7514-c6bd-40ee-852a-4b850ca0a05c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.131 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:44:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.153 253665 INFO nova.compute.manager [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Took 9.25 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.154 253665 DEBUG nova.compute.manager [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.166 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.169 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:44:36 np0005532048 podman[394322]: 2025-11-22 09:44:36.109595454 +0000 UTC m=+0.023885361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.205 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.244 253665 INFO nova.compute.manager [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Took 10.38 seconds to build instance.#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.260 253665 DEBUG oslo_concurrency.lockutils [None req-14d27819-32d1-42f8-b961-88d2f4f72b99 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:44:36 np0005532048 nova_compute[253661]: 2025-11-22 09:44:36.673 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:36 np0005532048 podman[394322]: 2025-11-22 09:44:36.680641225 +0000 UTC m=+0.594931112 container create cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:44:36 np0005532048 systemd[1]: Started libpod-conmon-cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715.scope.
Nov 22 04:44:36 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:44:37 np0005532048 podman[394341]: 2025-11-22 09:44:36.980221413 +0000 UTC m=+0.868381495 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:44:37 np0005532048 podman[394322]: 2025-11-22 09:44:37.311608195 +0000 UTC m=+1.225898102 container init cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:44:37 np0005532048 podman[394322]: 2025-11-22 09:44:37.319862289 +0000 UTC m=+1.234152176 container start cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 04:44:37 np0005532048 affectionate_bhaskara[394358]: 167 167
Nov 22 04:44:37 np0005532048 systemd[1]: libpod-cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715.scope: Deactivated successfully.
Nov 22 04:44:37 np0005532048 podman[394322]: 2025-11-22 09:44:37.575300417 +0000 UTC m=+1.489590324 container attach cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 04:44:37 np0005532048 podman[394322]: 2025-11-22 09:44:37.575964523 +0000 UTC m=+1.490254410 container died cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 04:44:37 np0005532048 nova_compute[253661]: 2025-11-22 09:44:37.856 253665 DEBUG nova.compute.manager [req-416e1cfd-5ba0-458e-8968-3d0d5ae6ae71 req-a5f37c5c-f9e2-4da1-b2d9-364481562bba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received event network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:44:37 np0005532048 nova_compute[253661]: 2025-11-22 09:44:37.857 253665 DEBUG oslo_concurrency.lockutils [req-416e1cfd-5ba0-458e-8968-3d0d5ae6ae71 req-a5f37c5c-f9e2-4da1-b2d9-364481562bba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:37 np0005532048 nova_compute[253661]: 2025-11-22 09:44:37.858 253665 DEBUG oslo_concurrency.lockutils [req-416e1cfd-5ba0-458e-8968-3d0d5ae6ae71 req-a5f37c5c-f9e2-4da1-b2d9-364481562bba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:44:37 np0005532048 nova_compute[253661]: 2025-11-22 09:44:37.858 253665 DEBUG oslo_concurrency.lockutils [req-416e1cfd-5ba0-458e-8968-3d0d5ae6ae71 req-a5f37c5c-f9e2-4da1-b2d9-364481562bba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:44:37 np0005532048 nova_compute[253661]: 2025-11-22 09:44:37.858 253665 DEBUG nova.compute.manager [req-416e1cfd-5ba0-458e-8968-3d0d5ae6ae71 req-a5f37c5c-f9e2-4da1-b2d9-364481562bba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] No waiting events found dispatching network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:44:37 np0005532048 nova_compute[253661]: 2025-11-22 09:44:37.858 253665 WARNING nova.compute.manager [req-416e1cfd-5ba0-458e-8968-3d0d5ae6ae71 req-a5f37c5c-f9e2-4da1-b2d9-364481562bba 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received unexpected event network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:44:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 172 op/s
Nov 22 04:44:38 np0005532048 podman[394341]: 2025-11-22 09:44:38.390405094 +0000 UTC m=+2.278565156 container create e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 04:44:38 np0005532048 nova_compute[253661]: 2025-11-22 09:44:38.620 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:38 np0005532048 systemd[1]: Started libpod-conmon-e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852.scope.
Nov 22 04:44:38 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:44:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e778d95479f50b481d89a1bee1b1080db3f4d51fefff371a7f567798dd2494e6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:44:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:44:38 np0005532048 systemd[1]: var-lib-containers-storage-overlay-77fdfc0ca1d8ee2bb979ba324b331acee102c41c3e79416595c3e29b1cd4b6c4-merged.mount: Deactivated successfully.
Nov 22 04:44:39 np0005532048 podman[394322]: 2025-11-22 09:44:39.599253393 +0000 UTC m=+3.513543290 container remove cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:44:39 np0005532048 systemd[1]: libpod-conmon-cb2cf2c063679c48a7f0eefa9266bcf8a9fd6701b536981b9e782302838c5715.scope: Deactivated successfully.
Nov 22 04:44:39 np0005532048 podman[394341]: 2025-11-22 09:44:39.745793212 +0000 UTC m=+3.633953324 container init e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 04:44:39 np0005532048 podman[394341]: 2025-11-22 09:44:39.755476991 +0000 UTC m=+3.643637053 container start e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:44:39 np0005532048 neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4[394378]: [NOTICE]   (394397) : New worker (394405) forked
Nov 22 04:44:39 np0005532048 neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4[394378]: [NOTICE]   (394397) : Loading success.
Nov 22 04:44:39 np0005532048 podman[394389]: 2025-11-22 09:44:39.847493564 +0000 UTC m=+0.091811619 container create 86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 04:44:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:39.851 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:44:39 np0005532048 podman[394389]: 2025-11-22 09:44:39.799934739 +0000 UTC m=+0.044252824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:44:39 np0005532048 systemd[1]: Started libpod-conmon-86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357.scope.
Nov 22 04:44:39 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:44:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f2b937ce4d89fd9033ba923e60e9fa66e287b24718ea07641a7b2f4ef81d50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:44:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f2b937ce4d89fd9033ba923e60e9fa66e287b24718ea07641a7b2f4ef81d50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:44:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f2b937ce4d89fd9033ba923e60e9fa66e287b24718ea07641a7b2f4ef81d50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:44:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f2b937ce4d89fd9033ba923e60e9fa66e287b24718ea07641a7b2f4ef81d50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:44:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f2b937ce4d89fd9033ba923e60e9fa66e287b24718ea07641a7b2f4ef81d50/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:44:40 np0005532048 podman[394389]: 2025-11-22 09:44:40.037738851 +0000 UTC m=+0.282056896 container init 86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 04:44:40 np0005532048 podman[394389]: 2025-11-22 09:44:40.04579838 +0000 UTC m=+0.290116435 container start 86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:44:40 np0005532048 podman[394389]: 2025-11-22 09:44:40.076101779 +0000 UTC m=+0.320419834 container attach 86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:44:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 234 op/s
Nov 22 04:44:40 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:40Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:82:76:7a 10.100.0.3
Nov 22 04:44:40 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:40Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:82:76:7a 10.100.0.3
Nov 22 04:44:41 np0005532048 blissful_driscoll[394416]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:44:41 np0005532048 blissful_driscoll[394416]: --> relative data size: 1.0
Nov 22 04:44:41 np0005532048 blissful_driscoll[394416]: --> All data devices are unavailable
Nov 22 04:44:41 np0005532048 systemd[1]: libpod-86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357.scope: Deactivated successfully.
Nov 22 04:44:41 np0005532048 systemd[1]: libpod-86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357.scope: Consumed 1.027s CPU time.
Nov 22 04:44:41 np0005532048 podman[394389]: 2025-11-22 09:44:41.178459519 +0000 UTC m=+1.422777584 container died 86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:44:41 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b3f2b937ce4d89fd9033ba923e60e9fa66e287b24718ea07641a7b2f4ef81d50-merged.mount: Deactivated successfully.
Nov 22 04:44:41 np0005532048 podman[394389]: 2025-11-22 09:44:41.282426646 +0000 UTC m=+1.526744701 container remove 86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 22 04:44:41 np0005532048 systemd[1]: libpod-conmon-86afeeb18d1c270d7827ea4ae489c3ec3c4d25483c8c332d46f431b414c22357.scope: Deactivated successfully.
Nov 22 04:44:41 np0005532048 nova_compute[253661]: 2025-11-22 09:44:41.676 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:41 np0005532048 podman[394600]: 2025-11-22 09:44:41.865925614 +0000 UTC m=+0.021417959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:44:42 np0005532048 podman[394600]: 2025-11-22 09:44:42.025488164 +0000 UTC m=+0.180980479 container create 00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swirles, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 04:44:42 np0005532048 systemd[1]: Started libpod-conmon-00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b.scope.
Nov 22 04:44:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 298 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 177 op/s
Nov 22 04:44:42 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:44:42 np0005532048 nova_compute[253661]: 2025-11-22 09:44:42.344 253665 DEBUG nova.compute.manager [req-e473d303-f8da-4378-9391-f6efaebcb71e req-b7a6b182-33c7-4808-8974-798dbba15d39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received event network-changed-491b9f04-4133-4553-a044-0dffe6278421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:44:42 np0005532048 nova_compute[253661]: 2025-11-22 09:44:42.345 253665 DEBUG nova.compute.manager [req-e473d303-f8da-4378-9391-f6efaebcb71e req-b7a6b182-33c7-4808-8974-798dbba15d39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Refreshing instance network info cache due to event network-changed-491b9f04-4133-4553-a044-0dffe6278421. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:44:42 np0005532048 nova_compute[253661]: 2025-11-22 09:44:42.346 253665 DEBUG oslo_concurrency.lockutils [req-e473d303-f8da-4378-9391-f6efaebcb71e req-b7a6b182-33c7-4808-8974-798dbba15d39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:44:42 np0005532048 nova_compute[253661]: 2025-11-22 09:44:42.346 253665 DEBUG oslo_concurrency.lockutils [req-e473d303-f8da-4378-9391-f6efaebcb71e req-b7a6b182-33c7-4808-8974-798dbba15d39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:44:42 np0005532048 nova_compute[253661]: 2025-11-22 09:44:42.346 253665 DEBUG nova.network.neutron [req-e473d303-f8da-4378-9391-f6efaebcb71e req-b7a6b182-33c7-4808-8974-798dbba15d39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Refreshing network info cache for port 491b9f04-4133-4553-a044-0dffe6278421 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:44:42 np0005532048 podman[394600]: 2025-11-22 09:44:42.479830874 +0000 UTC m=+0.635323209 container init 00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 04:44:42 np0005532048 podman[394600]: 2025-11-22 09:44:42.488466487 +0000 UTC m=+0.643958802 container start 00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swirles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:44:42 np0005532048 goofy_swirles[394616]: 167 167
Nov 22 04:44:42 np0005532048 systemd[1]: libpod-00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b.scope: Deactivated successfully.
Nov 22 04:44:42 np0005532048 podman[394600]: 2025-11-22 09:44:42.849660066 +0000 UTC m=+1.005152421 container attach 00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 04:44:42 np0005532048 podman[394600]: 2025-11-22 09:44:42.851662545 +0000 UTC m=+1.007154930 container died 00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 04:44:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:42.854 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.007 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.008 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.028 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.160 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.160 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.167 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.167 253665 INFO nova.compute.claims [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.244 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.361 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:44:43 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1f0226f17d3150a2c2e0684c2ccdba66d17da2aaa0560eabe8ba1d7710d8b0cc-merged.mount: Deactivated successfully.
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.621 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.678 253665 DEBUG nova.network.neutron [req-e473d303-f8da-4378-9391-f6efaebcb71e req-b7a6b182-33c7-4808-8974-798dbba15d39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updated VIF entry in instance network info cache for port 491b9f04-4133-4553-a044-0dffe6278421. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.679 253665 DEBUG nova.network.neutron [req-e473d303-f8da-4378-9391-f6efaebcb71e req-b7a6b182-33c7-4808-8974-798dbba15d39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updating instance_info_cache with network_info: [{"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.706 253665 DEBUG oslo_concurrency.lockutils [req-e473d303-f8da-4378-9391-f6efaebcb71e req-b7a6b182-33c7-4808-8974-798dbba15d39 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:44:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:44:43 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1117140290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.866 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.873 253665 DEBUG nova.compute.provider_tree [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.886 253665 DEBUG nova.scheduler.client.report [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:44:43 np0005532048 podman[394600]: 2025-11-22 09:44:43.893418249 +0000 UTC m=+2.048910564 container remove 00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.918 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.919 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:44:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.964 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.965 253665 DEBUG nova.network.neutron [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:44:43 np0005532048 systemd[1]: libpod-conmon-00474f774ccfe0db126042756b0c9a47c931ec11682d865825709de410700c5b.scope: Deactivated successfully.
Nov 22 04:44:43 np0005532048 nova_compute[253661]: 2025-11-22 09:44:43.990 253665 INFO nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.007 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:44:44 np0005532048 podman[394662]: 2025-11-22 09:44:44.094489664 +0000 UTC m=+0.042944962 container create 16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.103 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.105 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.105 253665 INFO nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Creating image(s)#033[00m
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.131 253665 DEBUG nova.storage.rbd_utils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:44:44 np0005532048 systemd[1]: Started libpod-conmon-16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae.scope.
Nov 22 04:44:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 319 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Nov 22 04:44:44 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:44:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6a15d5b569d1cc14bd1f972a3e605cb9b80b3d5b8a50397932611172c7634f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:44:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6a15d5b569d1cc14bd1f972a3e605cb9b80b3d5b8a50397932611172c7634f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:44:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6a15d5b569d1cc14bd1f972a3e605cb9b80b3d5b8a50397932611172c7634f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:44:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6a15d5b569d1cc14bd1f972a3e605cb9b80b3d5b8a50397932611172c7634f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.166 253665 DEBUG nova.storage.rbd_utils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:44:44 np0005532048 podman[394662]: 2025-11-22 09:44:44.076337276 +0000 UTC m=+0.024792594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:44:44 np0005532048 podman[394662]: 2025-11-22 09:44:44.179098603 +0000 UTC m=+0.127553921 container init 16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:44:44 np0005532048 podman[394662]: 2025-11-22 09:44:44.186430104 +0000 UTC m=+0.134885402 container start 16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 04:44:44 np0005532048 podman[394662]: 2025-11-22 09:44:44.193736205 +0000 UTC m=+0.142191503 container attach 16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.205 253665 DEBUG nova.storage.rbd_utils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.210 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.256 253665 DEBUG nova.policy [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.259 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.259 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.293 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.303 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.304 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.305 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.305 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.331 253665 DEBUG nova.storage.rbd_utils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:44:44 np0005532048 nova_compute[253661]: 2025-11-22 09:44:44.335 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]: {
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:    "0": [
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:        {
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "devices": [
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "/dev/loop3"
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            ],
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "lv_name": "ceph_lv0",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "lv_size": "21470642176",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "name": "ceph_lv0",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "tags": {
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.cluster_name": "ceph",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.crush_device_class": "",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.encrypted": "0",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.osd_id": "0",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.type": "block",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.vdo": "0"
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            },
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "type": "block",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "vg_name": "ceph_vg0"
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:        }
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:    ],
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:    "1": [
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:        {
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "devices": [
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "/dev/loop4"
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            ],
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "lv_name": "ceph_lv1",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "lv_size": "21470642176",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "name": "ceph_lv1",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "tags": {
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.cluster_name": "ceph",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.crush_device_class": "",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.encrypted": "0",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.osd_id": "1",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.type": "block",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.vdo": "0"
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            },
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "type": "block",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "vg_name": "ceph_vg1"
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:        }
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:    ],
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:    "2": [
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:        {
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "devices": [
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "/dev/loop5"
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            ],
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "lv_name": "ceph_lv2",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "lv_size": "21470642176",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "name": "ceph_lv2",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "tags": {
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.cluster_name": "ceph",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.crush_device_class": "",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.encrypted": "0",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.osd_id": "2",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.type": "block",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:                "ceph.vdo": "0"
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            },
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "type": "block",
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:            "vg_name": "ceph_vg2"
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:        }
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]:    ]
Nov 22 04:44:45 np0005532048 objective_mcnulty[394696]: }
Nov 22 04:44:45 np0005532048 systemd[1]: libpod-16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae.scope: Deactivated successfully.
Nov 22 04:44:45 np0005532048 nova_compute[253661]: 2025-11-22 09:44:45.121 253665 DEBUG nova.network.neutron [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Successfully created port: 2979286f-0fdd-4b20-9c29-da29aac8e5ab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:44:45 np0005532048 nova_compute[253661]: 2025-11-22 09:44:45.171 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.836s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:44:45 np0005532048 podman[394781]: 2025-11-22 09:44:45.172610306 +0000 UTC m=+0.048582271 container died 16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:44:45 np0005532048 systemd[1]: var-lib-containers-storage-overlay-aa6a15d5b569d1cc14bd1f972a3e605cb9b80b3d5b8a50397932611172c7634f-merged.mount: Deactivated successfully.
Nov 22 04:44:45 np0005532048 nova_compute[253661]: 2025-11-22 09:44:45.252 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:44:45 np0005532048 podman[394781]: 2025-11-22 09:44:45.260671561 +0000 UTC m=+0.136643506 container remove 16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 04:44:45 np0005532048 nova_compute[253661]: 2025-11-22 09:44:45.261 253665 DEBUG nova.storage.rbd_utils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:44:45 np0005532048 systemd[1]: libpod-conmon-16611f24e01fdfbf0d91d5188d1be0e1b7b4526f91a350a5b7e7280a1f6083ae.scope: Deactivated successfully.
Nov 22 04:44:45 np0005532048 nova_compute[253661]: 2025-11-22 09:44:45.434 253665 DEBUG nova.objects.instance [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:44:45 np0005532048 nova_compute[253661]: 2025-11-22 09:44:45.454 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:44:45 np0005532048 nova_compute[253661]: 2025-11-22 09:44:45.455 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Ensure instance console log exists: /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:44:45 np0005532048 nova_compute[253661]: 2025-11-22 09:44:45.455 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:45 np0005532048 nova_compute[253661]: 2025-11-22 09:44:45.456 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:44:45 np0005532048 nova_compute[253661]: 2025-11-22 09:44:45.456 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:44:45 np0005532048 nova_compute[253661]: 2025-11-22 09:44:45.822 253665 DEBUG nova.network.neutron [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Successfully created port: 7b663864-2935-4127-ab02-75e4a0acfc73 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:44:45 np0005532048 podman[395009]: 2025-11-22 09:44:45.926760428 +0000 UTC m=+0.040413598 container create 927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 04:44:45 np0005532048 systemd[1]: Started libpod-conmon-927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced.scope.
Nov 22 04:44:45 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:44:46 np0005532048 podman[395009]: 2025-11-22 09:44:46.003613825 +0000 UTC m=+0.117267025 container init 927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:44:46 np0005532048 podman[395009]: 2025-11-22 09:44:45.908921218 +0000 UTC m=+0.022574408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:44:46 np0005532048 podman[395009]: 2025-11-22 09:44:46.012109645 +0000 UTC m=+0.125762825 container start 927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 04:44:46 np0005532048 podman[395009]: 2025-11-22 09:44:46.018032562 +0000 UTC m=+0.131685762 container attach 927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 04:44:46 np0005532048 busy_dhawan[395025]: 167 167
Nov 22 04:44:46 np0005532048 systemd[1]: libpod-927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced.scope: Deactivated successfully.
Nov 22 04:44:46 np0005532048 podman[395009]: 2025-11-22 09:44:46.020656477 +0000 UTC m=+0.134309647 container died 927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 04:44:46 np0005532048 systemd[1]: var-lib-containers-storage-overlay-70d534d345ba8b3d6c668ca7dc973b2edf6aa868e6b9de4f98b420164746a38f-merged.mount: Deactivated successfully.
Nov 22 04:44:46 np0005532048 podman[395009]: 2025-11-22 09:44:46.092530112 +0000 UTC m=+0.206183282 container remove 927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 04:44:46 np0005532048 systemd[1]: libpod-conmon-927b7c2fa30995526eea7802612881d150a89d077509fd1b947537ceb69eaced.scope: Deactivated successfully.
Nov 22 04:44:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 319 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Nov 22 04:44:46 np0005532048 nova_compute[253661]: 2025-11-22 09:44:46.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:44:46 np0005532048 podman[395049]: 2025-11-22 09:44:46.330683902 +0000 UTC m=+0.076609653 container create a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:44:46 np0005532048 podman[395049]: 2025-11-22 09:44:46.281280583 +0000 UTC m=+0.027206364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:44:46 np0005532048 systemd[1]: Started libpod-conmon-a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf.scope.
Nov 22 04:44:46 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:44:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7a09fa9a1db1c083bebb66ad11a2346da80ce70abd587f31cf52ee87693c9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:44:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7a09fa9a1db1c083bebb66ad11a2346da80ce70abd587f31cf52ee87693c9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:44:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7a09fa9a1db1c083bebb66ad11a2346da80ce70abd587f31cf52ee87693c9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:44:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7a09fa9a1db1c083bebb66ad11a2346da80ce70abd587f31cf52ee87693c9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:44:46 np0005532048 podman[395049]: 2025-11-22 09:44:46.477833176 +0000 UTC m=+0.223758937 container init a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cerf, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 04:44:46 np0005532048 podman[395049]: 2025-11-22 09:44:46.483832853 +0000 UTC m=+0.229758604 container start a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cerf, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:44:46 np0005532048 podman[395049]: 2025-11-22 09:44:46.502730081 +0000 UTC m=+0.248655842 container attach a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:44:46 np0005532048 nova_compute[253661]: 2025-11-22 09:44:46.679 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:47 np0005532048 nova_compute[253661]: 2025-11-22 09:44:47.116 253665 DEBUG nova.network.neutron [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Successfully updated port: 2979286f-0fdd-4b20-9c29-da29aac8e5ab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:44:47 np0005532048 nova_compute[253661]: 2025-11-22 09:44:47.244 253665 DEBUG nova.compute.manager [req-50d84586-e439-44df-a7b7-e2a237ac937f req-4fd3e7f9-95b6-44e8-88ad-b87eaa6b8699 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-changed-2979286f-0fdd-4b20-9c29-da29aac8e5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:44:47 np0005532048 nova_compute[253661]: 2025-11-22 09:44:47.245 253665 DEBUG nova.compute.manager [req-50d84586-e439-44df-a7b7-e2a237ac937f req-4fd3e7f9-95b6-44e8-88ad-b87eaa6b8699 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Refreshing instance network info cache due to event network-changed-2979286f-0fdd-4b20-9c29-da29aac8e5ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:44:47 np0005532048 nova_compute[253661]: 2025-11-22 09:44:47.245 253665 DEBUG oslo_concurrency.lockutils [req-50d84586-e439-44df-a7b7-e2a237ac937f req-4fd3e7f9-95b6-44e8-88ad-b87eaa6b8699 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:44:47 np0005532048 nova_compute[253661]: 2025-11-22 09:44:47.246 253665 DEBUG oslo_concurrency.lockutils [req-50d84586-e439-44df-a7b7-e2a237ac937f req-4fd3e7f9-95b6-44e8-88ad-b87eaa6b8699 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:44:47 np0005532048 nova_compute[253661]: 2025-11-22 09:44:47.246 253665 DEBUG nova.network.neutron [req-50d84586-e439-44df-a7b7-e2a237ac937f req-4fd3e7f9-95b6-44e8-88ad-b87eaa6b8699 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Refreshing network info cache for port 2979286f-0fdd-4b20-9c29-da29aac8e5ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]: {
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:        "osd_id": 1,
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:        "type": "bluestore"
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:    },
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:        "osd_id": 0,
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:        "type": "bluestore"
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:    },
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:        "osd_id": 2,
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:        "type": "bluestore"
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]:    }
Nov 22 04:44:47 np0005532048 infallible_cerf[395066]: }
Nov 22 04:44:47 np0005532048 systemd[1]: libpod-a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf.scope: Deactivated successfully.
Nov 22 04:44:47 np0005532048 podman[395049]: 2025-11-22 09:44:47.441729267 +0000 UTC m=+1.187655018 container died a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 04:44:47 np0005532048 nova_compute[253661]: 2025-11-22 09:44:47.507 253665 DEBUG nova.network.neutron [req-50d84586-e439-44df-a7b7-e2a237ac937f req-4fd3e7f9-95b6-44e8-88ad-b87eaa6b8699 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:44:47 np0005532048 systemd[1]: var-lib-containers-storage-overlay-dd7a09fa9a1db1c083bebb66ad11a2346da80ce70abd587f31cf52ee87693c9d-merged.mount: Deactivated successfully.
Nov 22 04:44:47 np0005532048 nova_compute[253661]: 2025-11-22 09:44:47.996 253665 DEBUG nova.network.neutron [req-50d84586-e439-44df-a7b7-e2a237ac937f req-4fd3e7f9-95b6-44e8-88ad-b87eaa6b8699 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:44:48 np0005532048 nova_compute[253661]: 2025-11-22 09:44:48.010 253665 DEBUG oslo_concurrency.lockutils [req-50d84586-e439-44df-a7b7-e2a237ac937f req-4fd3e7f9-95b6-44e8-88ad-b87eaa6b8699 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:44:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 335 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 163 op/s
Nov 22 04:44:48 np0005532048 nova_compute[253661]: 2025-11-22 09:44:48.174 253665 DEBUG nova.network.neutron [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Successfully updated port: 7b663864-2935-4127-ab02-75e4a0acfc73 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:44:48 np0005532048 nova_compute[253661]: 2025-11-22 09:44:48.193 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:44:48 np0005532048 nova_compute[253661]: 2025-11-22 09:44:48.193 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:44:48 np0005532048 nova_compute[253661]: 2025-11-22 09:44:48.193 253665 DEBUG nova.network.neutron [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:44:48 np0005532048 podman[395049]: 2025-11-22 09:44:48.362201666 +0000 UTC m=+2.108127417 container remove a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:44:48 np0005532048 nova_compute[253661]: 2025-11-22 09:44:48.383 253665 DEBUG nova.network.neutron [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:44:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:44:48 np0005532048 systemd[1]: libpod-conmon-a007bc7812d949118cfabde81039a2155daef6bedef0d07d6bc8136185d30cdf.scope: Deactivated successfully.
Nov 22 04:44:48 np0005532048 nova_compute[253661]: 2025-11-22 09:44:48.460 253665 INFO nova.compute.manager [None req-de14b6b8-e2b7-4e17-8607-154efc33cb04 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Get console output#033[00m
Nov 22 04:44:48 np0005532048 nova_compute[253661]: 2025-11-22 09:44:48.471 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:44:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:44:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:44:48 np0005532048 nova_compute[253661]: 2025-11-22 09:44:48.624 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:44:48 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d711e591-04ce-4bef-874d-20e8426dad10 does not exist
Nov 22 04:44:48 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a19af43e-ff9d-44c6-b0af-20c31e7af3a3 does not exist
Nov 22 04:44:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:44:49 np0005532048 nova_compute[253661]: 2025-11-22 09:44:49.373 253665 DEBUG nova.compute.manager [req-62415fb7-f76a-4a4f-a76b-44641a812729 req-a7cdc60d-5119-490f-8e74-ec64507517ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-changed-7b663864-2935-4127-ab02-75e4a0acfc73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:44:49 np0005532048 nova_compute[253661]: 2025-11-22 09:44:49.374 253665 DEBUG nova.compute.manager [req-62415fb7-f76a-4a4f-a76b-44641a812729 req-a7cdc60d-5119-490f-8e74-ec64507517ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Refreshing instance network info cache due to event network-changed-7b663864-2935-4127-ab02-75e4a0acfc73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:44:49 np0005532048 nova_compute[253661]: 2025-11-22 09:44:49.374 253665 DEBUG oslo_concurrency.lockutils [req-62415fb7-f76a-4a4f-a76b-44641a812729 req-a7cdc60d-5119-490f-8e74-ec64507517ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:44:49 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:44:49 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:44:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 375 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 178 op/s
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.234 253665 DEBUG nova.compute.manager [req-f754d838-7529-4374-ae61-ccd176f09bc8 req-9c0d4141-3592-41a8-956d-c6f010643503 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-changed-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.234 253665 DEBUG nova.compute.manager [req-f754d838-7529-4374-ae61-ccd176f09bc8 req-9c0d4141-3592-41a8-956d-c6f010643503 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Refreshing instance network info cache due to event network-changed-7f5f15bb-83ef-4c81-8585-c447323ac70f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.235 253665 DEBUG oslo_concurrency.lockutils [req-f754d838-7529-4374-ae61-ccd176f09bc8 req-9c0d4141-3592-41a8-956d-c6f010643503 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.235 253665 DEBUG oslo_concurrency.lockutils [req-f754d838-7529-4374-ae61-ccd176f09bc8 req-9c0d4141-3592-41a8-956d-c6f010643503 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.235 253665 DEBUG nova.network.neutron [req-f754d838-7529-4374-ae61-ccd176f09bc8 req-9c0d4141-3592-41a8-956d-c6f010643503 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Refreshing network info cache for port 7f5f15bb-83ef-4c81-8585-c447323ac70f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.256 253665 DEBUG nova.network.neutron [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updating instance_info_cache with network_info: [{"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.284 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.284 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Instance network_info: |[{"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.285 253665 DEBUG oslo_concurrency.lockutils [req-62415fb7-f76a-4a4f-a76b-44641a812729 req-a7cdc60d-5119-490f-8e74-ec64507517ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.285 253665 DEBUG nova.network.neutron [req-62415fb7-f76a-4a4f-a76b-44641a812729 req-a7cdc60d-5119-490f-8e74-ec64507517ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Refreshing network info cache for port 7b663864-2935-4127-ab02-75e4a0acfc73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.289 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Start _get_guest_xml network_info=[{"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.294 253665 WARNING nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.300 253665 DEBUG nova.virt.libvirt.host [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.301 253665 DEBUG nova.virt.libvirt.host [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.309 253665 DEBUG nova.virt.libvirt.host [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.309 253665 DEBUG nova.virt.libvirt.host [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.310 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.310 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.310 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.311 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.311 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.311 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.311 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.312 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.312 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.312 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.312 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.312 253665 DEBUG nova.virt.hardware [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.316 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:44:50 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:44:50 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/44717686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.757 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.782 253665 DEBUG nova.storage.rbd_utils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:44:50 np0005532048 nova_compute[253661]: 2025-11-22 09:44:50.787 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:44:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:44:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2062114375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.245 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.246 253665 DEBUG nova.virt.libvirt.vif [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-63266585',display_name='tempest-TestGettingAddress-server-63266585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-63266585',id=139,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-y946c4e6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:44Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=b4a045a0-0a46-4644-8d2e-9ec4a6d893b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.247 253665 DEBUG nova.network.os_vif_util [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.248 253665 DEBUG nova.network.os_vif_util [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a0:45,bridge_name='br-int',has_traffic_filtering=True,id=2979286f-0fdd-4b20-9c29-da29aac8e5ab,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2979286f-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.248 253665 DEBUG nova.virt.libvirt.vif [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-63266585',display_name='tempest-TestGettingAddress-server-63266585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-63266585',id=139,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-y946c4e6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:44Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=b4a045a0-0a46-4644-8d2e-9ec4a6d893b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.249 253665 DEBUG nova.network.os_vif_util [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.249 253665 DEBUG nova.network.os_vif_util [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:10:42,bridge_name='br-int',has_traffic_filtering=True,id=7b663864-2935-4127-ab02-75e4a0acfc73,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b663864-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.250 253665 DEBUG nova.objects.instance [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.262 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.262 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.263 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.263 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.263 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.304 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  <uuid>b4a045a0-0a46-4644-8d2e-9ec4a6d893b9</uuid>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  <name>instance-0000008b</name>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestGettingAddress-server-63266585</nova:name>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:44:50</nova:creationTime>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:        <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:        <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:        <nova:port uuid="2979286f-0fdd-4b20-9c29-da29aac8e5ab">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:        <nova:port uuid="7b663864-2935-4127-ab02-75e4a0acfc73">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe6f:1042" ipVersion="6"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <entry name="serial">b4a045a0-0a46-4644-8d2e-9ec4a6d893b9</entry>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <entry name="uuid">b4a045a0-0a46-4644-8d2e-9ec4a6d893b9</entry>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk.config">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:f9:a0:45"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <target dev="tap2979286f-0f"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:6f:10:42"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <target dev="tap7b663864-29"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9/console.log" append="off"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:44:51 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:44:51 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:44:51 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:44:51 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.305 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Preparing to wait for external event network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.305 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.305 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.306 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.306 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Preparing to wait for external event network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.307 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.307 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.307 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.308 253665 DEBUG nova.virt.libvirt.vif [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-63266585',display_name='tempest-TestGettingAddress-server-63266585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-63266585',id=139,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-y946c4e6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:44Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=b4a045a0-0a46-4644-8d2e-9ec4a6d893b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.308 253665 DEBUG nova.network.os_vif_util [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.309 253665 DEBUG nova.network.os_vif_util [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a0:45,bridge_name='br-int',has_traffic_filtering=True,id=2979286f-0fdd-4b20-9c29-da29aac8e5ab,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2979286f-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.309 253665 DEBUG os_vif [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a0:45,bridge_name='br-int',has_traffic_filtering=True,id=2979286f-0fdd-4b20-9c29-da29aac8e5ab,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2979286f-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.309 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.310 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.310 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.314 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.314 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2979286f-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.315 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2979286f-0f, col_values=(('external_ids', {'iface-id': '2979286f-0fdd-4b20-9c29-da29aac8e5ab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:a0:45', 'vm-uuid': 'b4a045a0-0a46-4644-8d2e-9ec4a6d893b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.316 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:51 np0005532048 NetworkManager[48920]: <info>  [1763804691.3176] manager: (tap2979286f-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/611)
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.319 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.326 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.328 253665 INFO os_vif [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a0:45,bridge_name='br-int',has_traffic_filtering=True,id=2979286f-0fdd-4b20-9c29-da29aac8e5ab,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2979286f-0f')#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.329 253665 DEBUG nova.virt.libvirt.vif [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-63266585',display_name='tempest-TestGettingAddress-server-63266585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-63266585',id=139,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-y946c4e6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:44:44Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=b4a045a0-0a46-4644-8d2e-9ec4a6d893b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.329 253665 DEBUG nova.network.os_vif_util [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.329 253665 DEBUG nova.network.os_vif_util [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:10:42,bridge_name='br-int',has_traffic_filtering=True,id=7b663864-2935-4127-ab02-75e4a0acfc73,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b663864-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.330 253665 DEBUG os_vif [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:10:42,bridge_name='br-int',has_traffic_filtering=True,id=7b663864-2935-4127-ab02-75e4a0acfc73,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b663864-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.330 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.330 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.330 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.332 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.332 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b663864-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.333 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7b663864-29, col_values=(('external_ids', {'iface-id': '7b663864-2935-4127-ab02-75e4a0acfc73', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:10:42', 'vm-uuid': 'b4a045a0-0a46-4644-8d2e-9ec4a6d893b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.334 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:51 np0005532048 NetworkManager[48920]: <info>  [1763804691.3356] manager: (tap7b663864-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/612)
Nov 22 04:44:51 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:51Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:7d:61 10.100.0.11
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.337 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:44:51 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:51Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:7d:61 10.100.0.11
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.344 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.346 253665 INFO os_vif [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:10:42,bridge_name='br-int',has_traffic_filtering=True,id=7b663864-2935-4127-ab02-75e4a0acfc73,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b663864-29')#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.350 253665 INFO nova.compute.manager [None req-0eab2b6d-b6af-470d-a51b-425eb681dde3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Get console output#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.373 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.406 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.406 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.406 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:f9:a0:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.406 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:6f:10:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.407 253665 INFO nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Using config drive#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.434 253665 DEBUG nova.storage.rbd_utils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.451 253665 DEBUG nova.compute.manager [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-unplugged-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.452 253665 DEBUG oslo_concurrency.lockutils [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.452 253665 DEBUG oslo_concurrency.lockutils [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.452 253665 DEBUG oslo_concurrency.lockutils [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.452 253665 DEBUG nova.compute.manager [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] No waiting events found dispatching network-vif-unplugged-7f5f15bb-83ef-4c81-8585-c447323ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.452 253665 WARNING nova.compute.manager [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received unexpected event network-vif-unplugged-7f5f15bb-83ef-4c81-8585-c447323ac70f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.452 253665 DEBUG nova.compute.manager [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.452 253665 DEBUG oslo_concurrency.lockutils [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.453 253665 DEBUG oslo_concurrency.lockutils [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.453 253665 DEBUG oslo_concurrency.lockutils [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.453 253665 DEBUG nova.compute.manager [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] No waiting events found dispatching network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.453 253665 WARNING nova.compute.manager [req-91fed647-4c38-419f-92aa-d78e5f68e6a0 req-c08f2647-bf62-4255-b9ac-8290187689c5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received unexpected event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:44:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1580475424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.760 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.779 253665 INFO nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Creating config drive at /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9/disk.config#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.784 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46vh0yht execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.910 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.911 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.914 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.914 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.918 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.918 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.921 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.921 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.924 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.924 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.926 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46vh0yht" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.949 253665 DEBUG nova.storage.rbd_utils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:44:51 np0005532048 nova_compute[253661]: 2025-11-22 09:44:51.952 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9/disk.config b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:44:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 386 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 618 KiB/s rd, 4.5 MiB/s wr, 114 op/s
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.246 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.247 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2811MB free_disk=59.80126190185547GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.248 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.248 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:44:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:44:52
Nov 22 04:44:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:44:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:44:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.mgr']
Nov 22 04:44:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.286 253665 DEBUG oslo_concurrency.processutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9/disk.config b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.287 253665 INFO nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Deleting local config drive /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9/disk.config because it was imported into RBD.#033[00m
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.335 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance dab57683-82b6-44b3-b663-556a4f0e3dab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.335 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 48af02cd-94c5-473f-a6f9-4d2caad8483f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.336 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.336 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 71ef7514-c6bd-40ee-852a-4b850ca0a05c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.336 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.336 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.336 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:44:52 np0005532048 NetworkManager[48920]: <info>  [1763804692.3497] manager: (tap2979286f-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/613)
Nov 22 04:44:52 np0005532048 kernel: tap2979286f-0f: entered promiscuous mode
Nov 22 04:44:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:52Z|01488|binding|INFO|Claiming lport 2979286f-0fdd-4b20-9c29-da29aac8e5ab for this chassis.
Nov 22 04:44:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:52Z|01489|binding|INFO|2979286f-0fdd-4b20-9c29-da29aac8e5ab: Claiming fa:16:3e:f9:a0:45 10.100.0.13
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.356 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.367 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:a0:45 10.100.0.13'], port_security=['fa:16:3e:f9:a0:45 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'b4a045a0-0a46-4644-8d2e-9ec4a6d893b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621dd092-e20a-432f-8488-41d7fcd69532', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fffaaec8-1dee-4e16-9a50-50b2fc979aa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=586feb4c-523c-413f-8bd3-6bc87edbdf4c, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2979286f-0fdd-4b20-9c29-da29aac8e5ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.369 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2979286f-0fdd-4b20-9c29-da29aac8e5ab in datapath 621dd092-e20a-432f-8488-41d7fcd69532 bound to our chassis#033[00m
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.371 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 621dd092-e20a-432f-8488-41d7fcd69532#033[00m
Nov 22 04:44:52 np0005532048 NetworkManager[48920]: <info>  [1763804692.3745] manager: (tap7b663864-29): new Tun device (/org/freedesktop/NetworkManager/Devices/614)
Nov 22 04:44:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:52Z|01490|binding|INFO|Setting lport 2979286f-0fdd-4b20-9c29-da29aac8e5ab ovn-installed in OVS
Nov 22 04:44:52 np0005532048 kernel: tap7b663864-29: entered promiscuous mode
Nov 22 04:44:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:52Z|01491|binding|INFO|Setting lport 2979286f-0fdd-4b20-9c29-da29aac8e5ab up in Southbound
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.381 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:52Z|01492|if_status|INFO|Not updating pb chassis for 7b663864-2935-4127-ab02-75e4a0acfc73 now as sb is readonly
Nov 22 04:44:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:52Z|01493|binding|INFO|Claiming lport 7b663864-2935-4127-ab02-75e4a0acfc73 for this chassis.
Nov 22 04:44:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:52Z|01494|binding|INFO|7b663864-2935-4127-ab02-75e4a0acfc73: Claiming fa:16:3e:6f:10:42 2001:db8::f816:3eff:fe6f:1042
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.392 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[68c69a2a-2867-43fa-b7ca-c8e35461e9f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:52Z|01495|binding|INFO|Setting lport 7b663864-2935-4127-ab02-75e4a0acfc73 up in Southbound
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.401 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:44:52 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:52Z|01496|binding|INFO|Setting lport 7b663864-2935-4127-ab02-75e4a0acfc73 ovn-installed in OVS
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.404 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:44:52 np0005532048 systemd-udevd[395329]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:44:52 np0005532048 systemd-udevd[395328]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.408 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:10:42 2001:db8::f816:3eff:fe6f:1042'], port_security=['fa:16:3e:6f:10:42 2001:db8::f816:3eff:fe6f:1042'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6f:1042/64', 'neutron:device_id': 'b4a045a0-0a46-4644-8d2e-9ec4a6d893b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fffaaec8-1dee-4e16-9a50-50b2fc979aa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7556820e-db50-4efa-817c-86d63f0b8b71, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7b663864-2935-4127-ab02-75e4a0acfc73) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:44:52 np0005532048 systemd-machined[215941]: New machine qemu-170-instance-0000008b.
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.424 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4618d677-8db0-4b23-9fd1-c80b49c82f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:44:52 np0005532048 NetworkManager[48920]: <info>  [1763804692.4295] device (tap7b663864-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:44:52 np0005532048 systemd[1]: Started Virtual Machine qemu-170-instance-0000008b.
Nov 22 04:44:52 np0005532048 NetworkManager[48920]: <info>  [1763804692.4311] device (tap7b663864-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.430 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1641da5b-023c-4515-b51d-afdb22f3dc03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:44:52 np0005532048 NetworkManager[48920]: <info>  [1763804692.4318] device (tap2979286f-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:44:52 np0005532048 NetworkManager[48920]: <info>  [1763804692.4327] device (tap2979286f-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.434 253665 DEBUG nova.network.neutron [req-62415fb7-f76a-4a4f-a76b-44641a812729 req-a7cdc60d-5119-490f-8e74-ec64507517ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updated VIF entry in instance network info cache for port 7b663864-2935-4127-ab02-75e4a0acfc73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.435 253665 DEBUG nova.network.neutron [req-62415fb7-f76a-4a4f-a76b-44641a812729 req-a7cdc60d-5119-490f-8e74-ec64507517ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updating instance_info_cache with network_info: [{"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.454 253665 DEBUG oslo_concurrency.lockutils [req-62415fb7-f76a-4a4f-a76b-44641a812729 req-a7cdc60d-5119-490f-8e74-ec64507517ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.467 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.469 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5323798a-6eb8-4ef9-af28-800b373672cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.493 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3705857a-05a8-4abf-b27e-70677e982f10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621dd092-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:9d:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 421], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750283, 'reachable_time': 20749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395340, 'error': None, 'target': 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.512 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c6cd3958-07b4-427f-931b-16ba782c413a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap621dd092-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750294, 'tstamp': 750294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395343, 'error': None, 'target': 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap621dd092-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750297, 'tstamp': 750297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395343, 'error': None, 'target': 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.514 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621dd092-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.517 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap621dd092-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.517 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.518 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap621dd092-e0, col_values=(('external_ids', {'iface-id': 'ce538828-218d-4def-9bed-efeb786012c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.518 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.519 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7b663864-2935-4127-ab02-75e4a0acfc73 in datapath 7a504de2-27b2-4d01-a183-d9b0331ca31e unbound from our chassis
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.521 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a504de2-27b2-4d01-a183-d9b0331ca31e
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.539 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d55c36-ffdd-4ff4-83ea-8579136fa0c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.580 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ad9c8606-d63e-4ab5-9086-b8d1ba00faf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.583 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[79102230-f426-4bbd-a155-402fb6421f45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.602 253665 DEBUG nova.network.neutron [req-f754d838-7529-4374-ae61-ccd176f09bc8 req-9c0d4141-3592-41a8-956d-c6f010643503 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updated VIF entry in instance network info cache for port 7f5f15bb-83ef-4c81-8585-c447323ac70f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.603 253665 DEBUG nova.network.neutron [req-f754d838-7529-4374-ae61-ccd176f09bc8 req-9c0d4141-3592-41a8-956d-c6f010643503 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updating instance_info_cache with network_info: [{"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.616 253665 DEBUG nova.compute.manager [req-d74ea61e-f5bf-4c9d-9a3e-b4c3bc08ff7f req-0956529d-6fc5-4efa-93e6-055158a999f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.617 253665 DEBUG oslo_concurrency.lockutils [req-d74ea61e-f5bf-4c9d-9a3e-b4c3bc08ff7f req-0956529d-6fc5-4efa-93e6-055158a999f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.617 253665 DEBUG oslo_concurrency.lockutils [req-d74ea61e-f5bf-4c9d-9a3e-b4c3bc08ff7f req-0956529d-6fc5-4efa-93e6-055158a999f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.618 253665 DEBUG oslo_concurrency.lockutils [req-d74ea61e-f5bf-4c9d-9a3e-b4c3bc08ff7f req-0956529d-6fc5-4efa-93e6-055158a999f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.618 253665 DEBUG nova.compute.manager [req-d74ea61e-f5bf-4c9d-9a3e-b4c3bc08ff7f req-0956529d-6fc5-4efa-93e6-055158a999f8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Processing event network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.623 253665 DEBUG oslo_concurrency.lockutils [req-f754d838-7529-4374-ae61-ccd176f09bc8 req-9c0d4141-3592-41a8-956d-c6f010643503 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.625 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fd19cb40-e856-4a03-b3d2-f91bd3e48adc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.656 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[365f23e0-08e3-4658-9dab-c62e4a6f5a3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a504de2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:4b:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750378, 'reachable_time': 21380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395384, 'error': None, 'target': 'ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.674 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[888aec68-c68d-4f77-a357-bf9149120a73]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7a504de2-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750393, 'tstamp': 750393}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395388, 'error': None, 'target': 'ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.676 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a504de2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.677 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.678 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.679 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a504de2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.679 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.680 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a504de2-20, col_values=(('external_ids', {'iface-id': 'b35ca171-2b2e-44d8-96a4-4559f6282fda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:44:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:52.680 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:44:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:44:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:44:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:44:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:44:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:44:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.822 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804692.821428, b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.822 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] VM Started (Lifecycle Event)
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.858 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.862 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804692.8215432, b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.862 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] VM Paused (Lifecycle Event)
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.886 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.889 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.924 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:44:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:44:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1042584425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.951 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.956 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:44:52 np0005532048 nova_compute[253661]: 2025-11-22 09:44:52.971 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:44:53 np0005532048 nova_compute[253661]: 2025-11-22 09:44:53.001 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:44:53 np0005532048 nova_compute[253661]: 2025-11-22 09:44:53.001 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:44:53 np0005532048 nova_compute[253661]: 2025-11-22 09:44:53.308 253665 INFO nova.compute.manager [None req-114c33d1-d55b-4874-af54-bb98c8643c8c 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Get console output#033[00m
Nov 22 04:44:53 np0005532048 nova_compute[253661]: 2025-11-22 09:44:53.314 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:44:53 np0005532048 nova_compute[253661]: 2025-11-22 09:44:53.591 253665 DEBUG nova.compute.manager [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:44:53 np0005532048 nova_compute[253661]: 2025-11-22 09:44:53.592 253665 DEBUG oslo_concurrency.lockutils [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:53 np0005532048 nova_compute[253661]: 2025-11-22 09:44:53.593 253665 DEBUG oslo_concurrency.lockutils [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:44:53 np0005532048 nova_compute[253661]: 2025-11-22 09:44:53.593 253665 DEBUG oslo_concurrency.lockutils [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:44:53 np0005532048 nova_compute[253661]: 2025-11-22 09:44:53.593 253665 DEBUG nova.compute.manager [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] No waiting events found dispatching network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:44:53 np0005532048 nova_compute[253661]: 2025-11-22 09:44:53.593 253665 WARNING nova.compute.manager [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received unexpected event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:44:53 np0005532048 nova_compute[253661]: 2025-11-22 09:44:53.594 253665 DEBUG nova.compute.manager [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:44:53 np0005532048 nova_compute[253661]: 2025-11-22 09:44:53.594 253665 DEBUG oslo_concurrency.lockutils [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:53 np0005532048 nova_compute[253661]: 2025-11-22 09:44:53.594 253665 DEBUG oslo_concurrency.lockutils [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:44:53 np0005532048 nova_compute[253661]: 2025-11-22 09:44:53.594 253665 DEBUG oslo_concurrency.lockutils [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:44:53 np0005532048 nova_compute[253661]: 2025-11-22 09:44:53.595 253665 DEBUG nova.compute.manager [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] No waiting events found dispatching network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:44:53 np0005532048 nova_compute[253661]: 2025-11-22 09:44:53.595 253665 WARNING nova.compute.manager [req-2c67f964-73b0-4e50-9a73-cf9184d60332 req-3083b0fb-c4cf-480a-ad11-b7a3b829b425 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received unexpected event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f for instance with vm_state active and task_state None.#033[00m
Nov 22 04:44:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:44:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 550 KiB/s rd, 5.4 MiB/s wr, 148 op/s
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.501 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.502 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.502 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.502 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.502 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.503 253665 INFO nova.compute.manager [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Terminating instance#033[00m
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.504 253665 DEBUG nova.compute.manager [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:44:54 np0005532048 kernel: tapae02b780-c7 (unregistering): left promiscuous mode
Nov 22 04:44:54 np0005532048 NetworkManager[48920]: <info>  [1763804694.5760] device (tapae02b780-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.583 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:54 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:54Z|01497|binding|INFO|Releasing lport ae02b780-c76c-4fec-9f50-a8fb17aec607 from this chassis (sb_readonly=0)
Nov 22 04:44:54 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:54Z|01498|binding|INFO|Setting lport ae02b780-c76c-4fec-9f50-a8fb17aec607 down in Southbound
Nov 22 04:44:54 np0005532048 ovn_controller[152872]: 2025-11-22T09:44:54Z|01499|binding|INFO|Removing iface tapae02b780-c7 ovn-installed in OVS
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.587 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.591 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:76:7a 10.100.0.3'], port_security=['fa:16:3e:82:76:7a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b59ce93-bc6e-4f8c-b65e-e937db06426e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cdf5ea7-dfee-4f0a-9b99-06484e8f93dc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=ae02b780-c76c-4fec-9f50-a8fb17aec607) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:44:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.593 162862 INFO neutron.agent.ovn.metadata.agent [-] Port ae02b780-c76c-4fec-9f50-a8fb17aec607 in datapath ccaaf7d7-d083-4f4d-9c25-562b3924cdc3 unbound from our chassis#033[00m
Nov 22 04:44:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.595 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ccaaf7d7-d083-4f4d-9c25-562b3924cdc3#033[00m
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.601 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.614 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0e25e247-8640-411a-94ef-360120403d79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:54 np0005532048 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000089.scope: Deactivated successfully.
Nov 22 04:44:54 np0005532048 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000089.scope: Consumed 13.822s CPU time.
Nov 22 04:44:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.644 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d4508397-40b7-4526-a2be-eb07e803b296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:54 np0005532048 systemd-machined[215941]: Machine qemu-168-instance-00000089 terminated.
Nov 22 04:44:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.649 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[e8eae088-748e-4559-9280-fdedf9936572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.686 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[7082a234-1c11-430e-b8aa-a8de62c2f5d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.706 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f7c68c-f078-4348-b899-fd98a38c4b24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapccaaf7d7-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:e3:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749404, 'reachable_time': 23939, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395425, 'error': None, 'target': 'ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.724 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[22467d93-fd77-49b8-8e3f-1ef5dabebbcc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapccaaf7d7-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749418, 'tstamp': 749418}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395426, 'error': None, 'target': 'ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapccaaf7d7-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749421, 'tstamp': 749421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395426, 'error': None, 'target': 'ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.726 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.726 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccaaf7d7-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.727 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.732 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:44:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.733 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapccaaf7d7-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:44:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.733 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:44:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.734 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapccaaf7d7-d0, col_values=(('external_ids', {'iface-id': '9302a453-ce7d-475e-8f32-fad5f9f06dff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:44:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:44:54.734 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.738 253665 INFO nova.virt.libvirt.driver [-] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Instance destroyed successfully.#033[00m
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.739 253665 DEBUG nova.objects.instance [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.744 253665 DEBUG nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.745 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.745 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.745 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.745 253665 DEBUG nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] No event matching network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab in dict_keys([('network-vif-plugged', '7b663864-2935-4127-ab02-75e4a0acfc73')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.745 253665 WARNING nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received unexpected event network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab for instance with vm_state building and task_state spawning.
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.746 253665 DEBUG nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-changed-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.746 253665 DEBUG nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Refreshing instance network info cache due to event network-changed-7f5f15bb-83ef-4c81-8585-c447323ac70f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.746 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.746 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.746 253665 DEBUG nova.network.neutron [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Refreshing network info cache for port 7f5f15bb-83ef-4c81-8585-c447323ac70f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.750 253665 DEBUG nova.virt.libvirt.vif [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:44:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-394306021',display_name='tempest-TestNetworkBasicOps-server-394306021',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-394306021',id=137,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKV6hdIegm8gqIp/u4iZ0PF1QzxEfr8miYRpA6YX5J6vj9O+Yx765rE9yt47fSsnx60um/BGJdUGLYgUR7QR+U6SMxydnwIMFxPr8weXhUUIM0aYuUrJ1oofcn2oF77DhQ==',key_name='tempest-TestNetworkBasicOps-1881101001',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:44:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-i0tmx233',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:44:27Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "address": "fa:16:3e:82:76:7a", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae02b780-c7", "ovs_interfaceid": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.750 253665 DEBUG nova.network.os_vif_util [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "address": "fa:16:3e:82:76:7a", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae02b780-c7", "ovs_interfaceid": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.751 253665 DEBUG nova.network.os_vif_util [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:82:76:7a,bridge_name='br-int',has_traffic_filtering=True,id=ae02b780-c76c-4fec-9f50-a8fb17aec607,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae02b780-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.751 253665 DEBUG os_vif [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:76:7a,bridge_name='br-int',has_traffic_filtering=True,id=ae02b780-c76c-4fec-9f50-a8fb17aec607,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae02b780-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.752 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.753 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapae02b780-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.755 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.760 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:44:54 np0005532048 nova_compute[253661]: 2025-11-22 09:44:54.763 253665 INFO os_vif [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:76:7a,bridge_name='br-int',has_traffic_filtering=True,id=ae02b780-c76c-4fec-9f50-a8fb17aec607,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae02b780-c7')
Nov 22 04:44:55 np0005532048 nova_compute[253661]: 2025-11-22 09:44:55.639 253665 INFO nova.virt.libvirt.driver [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Deleting instance files /var/lib/nova/instances/8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_del
Nov 22 04:44:55 np0005532048 nova_compute[253661]: 2025-11-22 09:44:55.640 253665 INFO nova.virt.libvirt.driver [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Deletion of /var/lib/nova/instances/8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb_del complete
Nov 22 04:44:55 np0005532048 nova_compute[253661]: 2025-11-22 09:44:55.686 253665 DEBUG nova.compute.manager [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received event network-changed-ae02b780-c76c-4fec-9f50-a8fb17aec607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:44:55 np0005532048 nova_compute[253661]: 2025-11-22 09:44:55.686 253665 DEBUG nova.compute.manager [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Refreshing instance network info cache due to event network-changed-ae02b780-c76c-4fec-9f50-a8fb17aec607. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:44:55 np0005532048 nova_compute[253661]: 2025-11-22 09:44:55.686 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:44:55 np0005532048 nova_compute[253661]: 2025-11-22 09:44:55.687 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:44:55 np0005532048 nova_compute[253661]: 2025-11-22 09:44:55.687 253665 DEBUG nova.network.neutron [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Refreshing network info cache for port ae02b780-c76c-4fec-9f50-a8fb17aec607 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:44:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:44:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:44:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:44:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:44:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:44:55 np0005532048 nova_compute[253661]: 2025-11-22 09:44:55.725 253665 INFO nova.compute.manager [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Took 1.22 seconds to destroy the instance on the hypervisor.
Nov 22 04:44:55 np0005532048 nova_compute[253661]: 2025-11-22 09:44:55.725 253665 DEBUG oslo.service.loopingcall [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:44:55 np0005532048 nova_compute[253661]: 2025-11-22 09:44:55.726 253665 DEBUG nova.compute.manager [-] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:44:55 np0005532048 nova_compute[253661]: 2025-11-22 09:44:55.726 253665 DEBUG nova.network.neutron [-] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:44:55 np0005532048 nova_compute[253661]: 2025-11-22 09:44:55.991 253665 DEBUG nova.network.neutron [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updated VIF entry in instance network info cache for port 7f5f15bb-83ef-4c81-8585-c447323ac70f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:44:55 np0005532048 nova_compute[253661]: 2025-11-22 09:44:55.992 253665 DEBUG nova.network.neutron [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updating instance_info_cache with network_info: [{"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.006 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.007 253665 DEBUG nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.007 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.007 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.008 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.008 253665 DEBUG nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Processing event network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.008 253665 DEBUG nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.008 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.009 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.009 253665 DEBUG oslo_concurrency.lockutils [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.009 253665 DEBUG nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] No waiting events found dispatching network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.010 253665 WARNING nova.compute.manager [req-d5dbc7cd-052e-4a84-84f7-606d2511a4e6 req-fead6626-4d60-4dc3-9060-e4686a9bbb81 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received unexpected event network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 for instance with vm_state building and task_state spawning.
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.010 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.013 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804696.0133588, b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.014 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] VM Resumed (Lifecycle Event)
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.015 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.018 253665 INFO nova.virt.libvirt.driver [-] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Instance spawned successfully.
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.019 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.033 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.037 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.040 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.040 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.040 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.041 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.041 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.042 253665 DEBUG nova.virt.libvirt.driver [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.079 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.114 253665 INFO nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Took 12.01 seconds to spawn the instance on the hypervisor.
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.115 253665 DEBUG nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:44:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 3.9 MiB/s wr, 106 op/s
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.168 253665 INFO nova.compute.manager [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Took 13.07 seconds to build instance.
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.181 253665 DEBUG oslo_concurrency.lockutils [None req-a89a2d7a-71bf-4024-9dac-52a9cb46d1df 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.339 253665 DEBUG nova.network.neutron [-] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.353 253665 INFO nova.compute.manager [-] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Took 0.63 seconds to deallocate network for instance.
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.404 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.405 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.520 253665 DEBUG oslo_concurrency.processutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:44:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:44:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:44:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:44:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:44:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.717 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.861 253665 DEBUG nova.compute.manager [req-cf4a034a-e0a7-4f12-8f6a-844dd155985d req-66a7d5e1-344e-4bfd-92e1-145237524d55 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received event network-vif-deleted-ae02b780-c76c-4fec-9f50-a8fb17aec607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:44:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:44:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3942560802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.988 253665 DEBUG oslo_concurrency.processutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:44:56 np0005532048 nova_compute[253661]: 2025-11-22 09:44:56.994 253665 DEBUG nova.compute.provider_tree [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.001 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.008 253665 DEBUG nova.scheduler.client.report [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.035 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.074 253665 INFO nova.scheduler.client.report [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.158 253665 DEBUG nova.network.neutron [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Updated VIF entry in instance network info cache for port ae02b780-c76c-4fec-9f50-a8fb17aec607. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.159 253665 DEBUG nova.network.neutron [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Updating instance_info_cache with network_info: [{"id": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "address": "fa:16:3e:82:76:7a", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae02b780-c7", "ovs_interfaceid": "ae02b780-c76c-4fec-9f50-a8fb17aec607", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.164 253665 DEBUG oslo_concurrency.lockutils [None req-7abe50a3-eaaa-4c64-8c90-89e7e75c5108 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.186 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.187 253665 DEBUG nova.compute.manager [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received event network-vif-unplugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.187 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.187 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.188 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.188 253665 DEBUG nova.compute.manager [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] No waiting events found dispatching network-vif-unplugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.188 253665 DEBUG nova.compute.manager [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received event network-vif-unplugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.188 253665 DEBUG nova.compute.manager [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received event network-vif-plugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.189 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.189 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.189 253665 DEBUG oslo_concurrency.lockutils [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.189 253665 DEBUG nova.compute.manager [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] No waiting events found dispatching network-vif-plugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.190 253665 WARNING nova.compute.manager [req-fb9c1d50-e8c3-4c91-87a2-3f6839361fc7 req-dd2f30bd-2072-4b44-9a3b-0fb9185c60aa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Received unexpected event network-vif-plugged-ae02b780-c76c-4fec-9f50-a8fb17aec607 for instance with vm_state active and task_state deleting.
Nov 22 04:44:57 np0005532048 nova_compute[253661]: 2025-11-22 09:44:57.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:44:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 393 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 4.0 MiB/s wr, 131 op/s
Nov 22 04:44:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:44:59 np0005532048 nova_compute[253661]: 2025-11-22 09:44:59.780 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.070 253665 DEBUG nova.compute.manager [req-e41ac691-574f-40d7-b371-232f491f22c7 req-0aa8169d-ac8b-4beb-9ece-8329ef558413 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-changed-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.071 253665 DEBUG nova.compute.manager [req-e41ac691-574f-40d7-b371-232f491f22c7 req-0aa8169d-ac8b-4beb-9ece-8329ef558413 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Refreshing instance network info cache due to event network-changed-7f5f15bb-83ef-4c81-8585-c447323ac70f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.071 253665 DEBUG oslo_concurrency.lockutils [req-e41ac691-574f-40d7-b371-232f491f22c7 req-0aa8169d-ac8b-4beb-9ece-8329ef558413 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.071 253665 DEBUG oslo_concurrency.lockutils [req-e41ac691-574f-40d7-b371-232f491f22c7 req-0aa8169d-ac8b-4beb-9ece-8329ef558413 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.072 253665 DEBUG nova.network.neutron [req-e41ac691-574f-40d7-b371-232f491f22c7 req-0aa8169d-ac8b-4beb-9ece-8329ef558413 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Refreshing network info cache for port 7f5f15bb-83ef-4c81-8585-c447323ac70f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.120 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.121 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.121 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.121 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.122 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.122 253665 INFO nova.compute.manager [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Terminating instance
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.123 253665 DEBUG nova.compute.manager [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:45:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.1 MiB/s wr, 145 op/s
Nov 22 04:45:00 np0005532048 kernel: tap7f5f15bb-83 (unregistering): left promiscuous mode
Nov 22 04:45:00 np0005532048 NetworkManager[48920]: <info>  [1763804700.2245] device (tap7f5f15bb-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.232 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:45:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:00Z|01500|binding|INFO|Releasing lport 7f5f15bb-83ef-4c81-8585-c447323ac70f from this chassis (sb_readonly=0)
Nov 22 04:45:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:00Z|01501|binding|INFO|Setting lport 7f5f15bb-83ef-4c81-8585-c447323ac70f down in Southbound
Nov 22 04:45:00 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:00Z|01502|binding|INFO|Removing iface tap7f5f15bb-83 ovn-installed in OVS
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.234 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:45:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.239 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:14:00 10.100.0.12'], port_security=['fa:16:3e:e3:14:00 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'dab57683-82b6-44b3-b663-556a4f0e3dab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7139b3cb-5e3b-45f1-be1c-957199bdba02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cdf5ea7-dfee-4f0a-9b99-06484e8f93dc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7f5f15bb-83ef-4c81-8585-c447323ac70f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:45:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.240 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7f5f15bb-83ef-4c81-8585-c447323ac70f in datapath ccaaf7d7-d083-4f4d-9c25-562b3924cdc3 unbound from our chassis
Nov 22 04:45:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.242 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ccaaf7d7-d083-4f4d-9c25-562b3924cdc3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:45:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.243 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[824db2d7-b16b-4603-8971-15f5910207bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:45:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.243 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3 namespace which is not needed anymore
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.257 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:45:00 np0005532048 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000087.scope: Deactivated successfully.
Nov 22 04:45:00 np0005532048 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000087.scope: Consumed 15.003s CPU time.
Nov 22 04:45:00 np0005532048 systemd-machined[215941]: Machine qemu-166-instance-00000087 terminated.
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.352 253665 INFO nova.virt.libvirt.driver [-] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Instance destroyed successfully.
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.353 253665 DEBUG nova.objects.instance [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid dab57683-82b6-44b3-b663-556a4f0e3dab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.367 253665 DEBUG nova.virt.libvirt.vif [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:43:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1917813057',display_name='tempest-TestNetworkBasicOps-server-1917813057',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1917813057',id=135,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7RKy18xj59Mrr1Qz0apb8VM0RVo9aCtcuaRLK/Njyb/8H+0bdEC3XqXWMpAl+tfEMf3lBrH+nx/Y/xtjJAjEL/9WZ1nk79dDJUwDIjACi8kN3FW6TUbGYNm9djoYtMsA==',key_name='tempest-TestNetworkBasicOps-1265217701',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:44:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-1ixz1m3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:44:04Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=dab57683-82b6-44b3-b663-556a4f0e3dab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.367 253665 DEBUG nova.network.os_vif_util [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.368 253665 DEBUG nova.network.os_vif_util [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e3:14:00,bridge_name='br-int',has_traffic_filtering=True,id=7f5f15bb-83ef-4c81-8585-c447323ac70f,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f5f15bb-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.369 253665 DEBUG os_vif [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:14:00,bridge_name='br-int',has_traffic_filtering=True,id=7f5f15bb-83ef-4c81-8585-c447323ac70f,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f5f15bb-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.372 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.372 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f5f15bb-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.374 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.377 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.377 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.379 253665 INFO os_vif [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:14:00,bridge_name='br-int',has_traffic_filtering=True,id=7f5f15bb-83ef-4c81-8585-c447323ac70f,network=Network(ccaaf7d7-d083-4f4d-9c25-562b3924cdc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f5f15bb-83')
Nov 22 04:45:00 np0005532048 neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3[392787]: [NOTICE]   (392791) : haproxy version is 2.8.14-c23fe91
Nov 22 04:45:00 np0005532048 neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3[392787]: [NOTICE]   (392791) : path to executable is /usr/sbin/haproxy
Nov 22 04:45:00 np0005532048 neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3[392787]: [WARNING]  (392791) : Exiting Master process...
Nov 22 04:45:00 np0005532048 neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3[392787]: [ALERT]    (392791) : Current worker (392793) exited with code 143 (Terminated)
Nov 22 04:45:00 np0005532048 neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3[392787]: [WARNING]  (392791) : All workers exited. Exiting... (0)
Nov 22 04:45:00 np0005532048 systemd[1]: libpod-9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3.scope: Deactivated successfully.
Nov 22 04:45:00 np0005532048 podman[395497]: 2025-11-22 09:45:00.395138714 +0000 UTC m=+0.060593197 container died 9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:45:00 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3-userdata-shm.mount: Deactivated successfully.
Nov 22 04:45:00 np0005532048 systemd[1]: var-lib-containers-storage-overlay-743f2a86a0af71728ae8cd7ad6b829b53bc590a580aa2c574008ce5dfdbe39ae-merged.mount: Deactivated successfully.
Nov 22 04:45:00 np0005532048 podman[395497]: 2025-11-22 09:45:00.476812841 +0000 UTC m=+0.142267304 container cleanup 9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 04:45:00 np0005532048 podman[395529]: 2025-11-22 09:45:00.478085212 +0000 UTC m=+0.064445822 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:45:00 np0005532048 podman[395539]: 2025-11-22 09:45:00.482743228 +0000 UTC m=+0.064823183 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 22 04:45:00 np0005532048 systemd[1]: libpod-conmon-9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3.scope: Deactivated successfully.
Nov 22 04:45:00 np0005532048 podman[395590]: 2025-11-22 09:45:00.572066633 +0000 UTC m=+0.073175368 container remove 9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 04:45:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.577 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bd7b06db-6ffd-476e-97ca-b15c52bb40a3]: (4, ('Sat Nov 22 09:45:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3 (9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3)\n9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3\nSat Nov 22 09:45:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3 (9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3)\n9c4939edab187deb2c3107c0b8f32a540b34ce19fe152e96a1da42cdc6fe2ed3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.579 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ade4ab52-1899-4ee7-b7f1-9d22b0a7282f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.579 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapccaaf7d7-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.581 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:00 np0005532048 kernel: tapccaaf7d7-d0: left promiscuous mode
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.598 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.601 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[44843621-c1b0-4131-bd05-f581c15f94f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.616 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7dad3086-01f6-412a-b5cf-8542b6a818e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.617 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[23bead0c-d47b-46a1-8b82-21b6254d3193]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.633 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[195c74a4-b05b-49a7-9ca5-a8bbf406d4ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749395, 'reachable_time': 31571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395605, 'error': None, 'target': 'ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:00 np0005532048 systemd[1]: run-netns-ovnmeta\x2dccaaf7d7\x2dd083\x2d4f4d\x2d9c25\x2d562b3924cdc3.mount: Deactivated successfully.
Nov 22 04:45:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.635 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ccaaf7d7-d083-4f4d-9c25-562b3924cdc3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:45:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:00.635 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[95b606ca-e31d-4670-acf9-85714cc6554d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.743 253665 DEBUG nova.compute.manager [req-ac960867-6edf-4c04-984f-5b6bd4bca0cf req-ca964c15-8865-4e3d-acd0-ec63df9fbe83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-unplugged-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.743 253665 DEBUG oslo_concurrency.lockutils [req-ac960867-6edf-4c04-984f-5b6bd4bca0cf req-ca964c15-8865-4e3d-acd0-ec63df9fbe83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.744 253665 DEBUG oslo_concurrency.lockutils [req-ac960867-6edf-4c04-984f-5b6bd4bca0cf req-ca964c15-8865-4e3d-acd0-ec63df9fbe83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.744 253665 DEBUG oslo_concurrency.lockutils [req-ac960867-6edf-4c04-984f-5b6bd4bca0cf req-ca964c15-8865-4e3d-acd0-ec63df9fbe83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.744 253665 DEBUG nova.compute.manager [req-ac960867-6edf-4c04-984f-5b6bd4bca0cf req-ca964c15-8865-4e3d-acd0-ec63df9fbe83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] No waiting events found dispatching network-vif-unplugged-7f5f15bb-83ef-4c81-8585-c447323ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:45:00 np0005532048 nova_compute[253661]: 2025-11-22 09:45:00.744 253665 DEBUG nova.compute.manager [req-ac960867-6edf-4c04-984f-5b6bd4bca0cf req-ca964c15-8865-4e3d-acd0-ec63df9fbe83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-unplugged-7f5f15bb-83ef-4c81-8585-c447323ac70f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:45:01 np0005532048 nova_compute[253661]: 2025-11-22 09:45:01.401 253665 INFO nova.virt.libvirt.driver [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Deleting instance files /var/lib/nova/instances/dab57683-82b6-44b3-b663-556a4f0e3dab_del#033[00m
Nov 22 04:45:01 np0005532048 nova_compute[253661]: 2025-11-22 09:45:01.402 253665 INFO nova.virt.libvirt.driver [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Deletion of /var/lib/nova/instances/dab57683-82b6-44b3-b663-556a4f0e3dab_del complete#033[00m
Nov 22 04:45:01 np0005532048 nova_compute[253661]: 2025-11-22 09:45:01.449 253665 INFO nova.compute.manager [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Took 1.33 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:45:01 np0005532048 nova_compute[253661]: 2025-11-22 09:45:01.450 253665 DEBUG oslo.service.loopingcall [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:45:01 np0005532048 nova_compute[253661]: 2025-11-22 09:45:01.450 253665 DEBUG nova.compute.manager [-] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:45:01 np0005532048 nova_compute[253661]: 2025-11-22 09:45:01.450 253665 DEBUG nova.network.neutron [-] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:45:01 np0005532048 nova_compute[253661]: 2025-11-22 09:45:01.719 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Nov 22 04:45:02 np0005532048 nova_compute[253661]: 2025-11-22 09:45:02.854 253665 DEBUG nova.compute.manager [req-d52d95c7-da37-4a00-bffa-8a6a86fd1cb0 req-4da8468c-1582-4994-9121-46e28ac5e367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:02 np0005532048 nova_compute[253661]: 2025-11-22 09:45:02.855 253665 DEBUG oslo_concurrency.lockutils [req-d52d95c7-da37-4a00-bffa-8a6a86fd1cb0 req-4da8468c-1582-4994-9121-46e28ac5e367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:02 np0005532048 nova_compute[253661]: 2025-11-22 09:45:02.855 253665 DEBUG oslo_concurrency.lockutils [req-d52d95c7-da37-4a00-bffa-8a6a86fd1cb0 req-4da8468c-1582-4994-9121-46e28ac5e367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:02 np0005532048 nova_compute[253661]: 2025-11-22 09:45:02.855 253665 DEBUG oslo_concurrency.lockutils [req-d52d95c7-da37-4a00-bffa-8a6a86fd1cb0 req-4da8468c-1582-4994-9121-46e28ac5e367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:02 np0005532048 nova_compute[253661]: 2025-11-22 09:45:02.855 253665 DEBUG nova.compute.manager [req-d52d95c7-da37-4a00-bffa-8a6a86fd1cb0 req-4da8468c-1582-4994-9121-46e28ac5e367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] No waiting events found dispatching network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:45:02 np0005532048 nova_compute[253661]: 2025-11-22 09:45:02.855 253665 WARNING nova.compute.manager [req-d52d95c7-da37-4a00-bffa-8a6a86fd1cb0 req-4da8468c-1582-4994-9121-46e28ac5e367 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received unexpected event network-vif-plugged-7f5f15bb-83ef-4c81-8585-c447323ac70f for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0023620132512860584 of space, bias 1.0, pg target 0.7086039753858175 quantized to 32 (current 32)
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:45:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:45:03 np0005532048 nova_compute[253661]: 2025-11-22 09:45:03.093 253665 DEBUG nova.network.neutron [req-e41ac691-574f-40d7-b371-232f491f22c7 req-0aa8169d-ac8b-4beb-9ece-8329ef558413 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updated VIF entry in instance network info cache for port 7f5f15bb-83ef-4c81-8585-c447323ac70f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:45:03 np0005532048 nova_compute[253661]: 2025-11-22 09:45:03.093 253665 DEBUG nova.network.neutron [req-e41ac691-574f-40d7-b371-232f491f22c7 req-0aa8169d-ac8b-4beb-9ece-8329ef558413 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updating instance_info_cache with network_info: [{"id": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "address": "fa:16:3e:e3:14:00", "network": {"id": "ccaaf7d7-d083-4f4d-9c25-562b3924cdc3", "bridge": "br-int", "label": "tempest-network-smoke--2092358508", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f5f15bb-83", "ovs_interfaceid": "7f5f15bb-83ef-4c81-8585-c447323ac70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:45:03 np0005532048 nova_compute[253661]: 2025-11-22 09:45:03.114 253665 DEBUG oslo_concurrency.lockutils [req-e41ac691-574f-40d7-b371-232f491f22c7 req-0aa8169d-ac8b-4beb-9ece-8329ef558413 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-dab57683-82b6-44b3-b663-556a4f0e3dab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:45:03 np0005532048 nova_compute[253661]: 2025-11-22 09:45:03.428 253665 DEBUG nova.network.neutron [-] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:45:03 np0005532048 nova_compute[253661]: 2025-11-22 09:45:03.451 253665 INFO nova.compute.manager [-] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Took 2.00 seconds to deallocate network for instance.
Nov 22 04:45:03 np0005532048 nova_compute[253661]: 2025-11-22 09:45:03.516 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:45:03 np0005532048 nova_compute[253661]: 2025-11-22 09:45:03.516 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:45:03 np0005532048 nova_compute[253661]: 2025-11-22 09:45:03.663 253665 DEBUG oslo_concurrency.processutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:45:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.032 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.033 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.050 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:45:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:45:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1267366814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.112 253665 DEBUG oslo_concurrency.processutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.116 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.118 253665 DEBUG nova.compute.provider_tree [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.129 253665 DEBUG nova.scheduler.client.report [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.146 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.148 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:45:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 180 op/s
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.154 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.155 253665 INFO nova.compute.claims [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.192 253665 INFO nova.scheduler.client.report [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance dab57683-82b6-44b3-b663-556a4f0e3dab
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.274 253665 DEBUG oslo_concurrency.lockutils [None req-969c53c7-a280-44b6-8f67-55c66fdd3b76 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "dab57683-82b6-44b3-b663-556a4f0e3dab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.342 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.478 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.513 253665 WARNING nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] While synchronizing instance power states, found 4 instances in the database and 3 instances on the hypervisor.
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.513 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid 48af02cd-94c5-473f-a6f9-4d2caad8483f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.513 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid 71ef7514-c6bd-40ee-852a-4b850ca0a05c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.513 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.513 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid ed3583b5-6d93-4e3f-83e0-3b36f25f08f1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.514 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.514 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.514 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.515 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.515 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.515 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.515 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.558 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.563 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.592 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:45:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:45:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2352016269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.783 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.789 253665 DEBUG nova.compute.provider_tree [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.809 253665 DEBUG nova.scheduler.client.report [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.837 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.838 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.884 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.885 253665 DEBUG nova.network.neutron [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.906 253665 INFO nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.922 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.946 253665 DEBUG nova.compute.manager [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Received event network-vif-deleted-7f5f15bb-83ef-4c81-8585-c447323ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.946 253665 DEBUG nova.compute.manager [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-changed-2979286f-0fdd-4b20-9c29-da29aac8e5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.947 253665 DEBUG nova.compute.manager [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Refreshing instance network info cache due to event network-changed-2979286f-0fdd-4b20-9c29-da29aac8e5ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.947 253665 DEBUG oslo_concurrency.lockutils [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.947 253665 DEBUG oslo_concurrency.lockutils [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:45:04 np0005532048 nova_compute[253661]: 2025-11-22 09:45:04.948 253665 DEBUG nova.network.neutron [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Refreshing network info cache for port 2979286f-0fdd-4b20-9c29-da29aac8e5ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.018 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.019 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.019 253665 INFO nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Creating image(s)
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.040 253665 DEBUG nova.storage.rbd_utils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.059 253665 DEBUG nova.storage.rbd_utils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.086 253665 DEBUG nova.storage.rbd_utils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.090 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.141 253665 DEBUG nova.policy [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4993d04ad8774a15825d4bea194cd1ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '46d50d652376434585e9da83e40f96bb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.161 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.162 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.163 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.163 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.185 253665 DEBUG nova.storage.rbd_utils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.190 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.423 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:45:05 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.719 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.781 253665 DEBUG nova.storage.rbd_utils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] resizing rbd image ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.889 253665 DEBUG nova.objects.instance [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'migration_context' on Instance uuid ed3583b5-6d93-4e3f-83e0-3b36f25f08f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.905 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.905 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Ensure instance console log exists: /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.905 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.906 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:45:05 np0005532048 nova_compute[253661]: 2025-11-22 09:45:05.906 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:45:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 127 op/s
Nov 22 04:45:06 np0005532048 podman[395817]: 2025-11-22 09:45:06.408374649 +0000 UTC m=+0.108477270 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Nov 22 04:45:06 np0005532048 nova_compute[253661]: 2025-11-22 09:45:06.722 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:06 np0005532048 nova_compute[253661]: 2025-11-22 09:45:06.746 253665 DEBUG nova.network.neutron [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Successfully created port: f0192978-0953-4171-b70f-7f21bd6af5a0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:45:06 np0005532048 nova_compute[253661]: 2025-11-22 09:45:06.848 253665 DEBUG nova.network.neutron [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updated VIF entry in instance network info cache for port 2979286f-0fdd-4b20-9c29-da29aac8e5ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:45:06 np0005532048 nova_compute[253661]: 2025-11-22 09:45:06.849 253665 DEBUG nova.network.neutron [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updating instance_info_cache with network_info: [{"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:06 np0005532048 nova_compute[253661]: 2025-11-22 09:45:06.871 253665 DEBUG oslo_concurrency.lockutils [req-fa3828ae-635b-4b7e-a602-a9e9374dc7da req-9e99cd01-0fb8-40cc-ab69-fce184ae88ee 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:45:07 np0005532048 nova_compute[253661]: 2025-11-22 09:45:07.563 253665 DEBUG nova.network.neutron [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Successfully updated port: f0192978-0953-4171-b70f-7f21bd6af5a0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:45:07 np0005532048 nova_compute[253661]: 2025-11-22 09:45:07.586 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:45:07 np0005532048 nova_compute[253661]: 2025-11-22 09:45:07.586 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquired lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:45:07 np0005532048 nova_compute[253661]: 2025-11-22 09:45:07.587 253665 DEBUG nova.network.neutron [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:45:07 np0005532048 nova_compute[253661]: 2025-11-22 09:45:07.656 253665 DEBUG nova.compute.manager [req-93d91050-095a-4fa3-91ef-799471e8225a req-15b62dea-521e-4cb2-ac5c-c2eecd8238a2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received event network-changed-f0192978-0953-4171-b70f-7f21bd6af5a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:07 np0005532048 nova_compute[253661]: 2025-11-22 09:45:07.656 253665 DEBUG nova.compute.manager [req-93d91050-095a-4fa3-91ef-799471e8225a req-15b62dea-521e-4cb2-ac5c-c2eecd8238a2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Refreshing instance network info cache due to event network-changed-f0192978-0953-4171-b70f-7f21bd6af5a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:45:07 np0005532048 nova_compute[253661]: 2025-11-22 09:45:07.656 253665 DEBUG oslo_concurrency.lockutils [req-93d91050-095a-4fa3-91ef-799471e8225a req-15b62dea-521e-4cb2-ac5c-c2eecd8238a2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:45:07 np0005532048 nova_compute[253661]: 2025-11-22 09:45:07.815 253665 DEBUG nova.network.neutron [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:45:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 255 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 222 KiB/s wr, 138 op/s
Nov 22 04:45:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:08Z|01503|binding|INFO|Releasing lport 3db82a3e-3c50-4f8e-b5b4-8b4657d60723 from this chassis (sb_readonly=0)
Nov 22 04:45:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:08Z|01504|binding|INFO|Releasing lport b35ca171-2b2e-44d8-96a4-4559f6282fda from this chassis (sb_readonly=0)
Nov 22 04:45:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:08Z|01505|binding|INFO|Releasing lport ce538828-218d-4def-9bed-efeb786012c8 from this chassis (sb_readonly=0)
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.698 253665 DEBUG nova.network.neutron [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Updating instance_info_cache with network_info: [{"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.719 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Releasing lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.719 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Instance network_info: |[{"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.720 253665 DEBUG oslo_concurrency.lockutils [req-93d91050-095a-4fa3-91ef-799471e8225a req-15b62dea-521e-4cb2-ac5c-c2eecd8238a2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.720 253665 DEBUG nova.network.neutron [req-93d91050-095a-4fa3-91ef-799471e8225a req-15b62dea-521e-4cb2-ac5c-c2eecd8238a2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Refreshing network info cache for port f0192978-0953-4171-b70f-7f21bd6af5a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.723 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Start _get_guest_xml network_info=[{"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.735 253665 WARNING nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.743 253665 DEBUG nova.virt.libvirt.host [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.744 253665 DEBUG nova.virt.libvirt.host [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.747 253665 DEBUG nova.virt.libvirt.host [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.748 253665 DEBUG nova.virt.libvirt.host [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.748 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.748 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.749 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.749 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.749 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.749 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.749 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.749 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.749 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.750 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.750 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.750 253665 DEBUG nova.virt.hardware [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:45:08 np0005532048 nova_compute[253661]: 2025-11-22 09:45:08.756 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:45:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:45:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:45:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1185236624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.243 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.264 253665 DEBUG nova.storage.rbd_utils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.268 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.737 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804694.7360916, 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.738 253665 INFO nova.compute.manager [-] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.762 253665 DEBUG nova.compute.manager [None req-4f6ed317-5c6e-485d-8473-2fe3cec84725 - - - - - -] [instance: 8d1b4da3-0ec0-4b9a-91cb-d0380f768ceb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:45:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:45:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/633205509' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.817 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.819 253665 DEBUG nova.virt.libvirt.vif [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:45:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-2008595118',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-2008595118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-gen',id=140,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBOwRQlDAdo+g60Ps/HwU/VMS64eGZhSkvI6bOPavIrg+ELfIh5TkgiKpEGXEdq5ORKgO91xQXWepwxlqtHh67VkaK6Xf3kHKOB8vlHPEMg4W1PVvZy7W3qb1i+rXVHWpw==',key_name='tempest-TestSecurityGroupsBasicOps-584634060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-h3590ebr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:45:04Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=ed3583b5-6d93-4e3f-83e0-3b36f25f08f1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.819 253665 DEBUG nova.network.os_vif_util [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.820 253665 DEBUG nova.network.os_vif_util [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:98:16,bridge_name='br-int',has_traffic_filtering=True,id=f0192978-0953-4171-b70f-7f21bd6af5a0,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0192978-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.821 253665 DEBUG nova.objects.instance [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'pci_devices' on Instance uuid ed3583b5-6d93-4e3f-83e0-3b36f25f08f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.834 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  <uuid>ed3583b5-6d93-4e3f-83e0-3b36f25f08f1</uuid>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  <name>instance-0000008c</name>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-2008595118</nova:name>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:45:08</nova:creationTime>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:45:09 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:        <nova:user uuid="4993d04ad8774a15825d4bea194cd1ca">tempest-TestSecurityGroupsBasicOps-488258979-project-member</nova:user>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:        <nova:project uuid="46d50d652376434585e9da83e40f96bb">tempest-TestSecurityGroupsBasicOps-488258979</nova:project>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:        <nova:port uuid="f0192978-0953-4171-b70f-7f21bd6af5a0">
Nov 22 04:45:09 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <entry name="serial">ed3583b5-6d93-4e3f-83e0-3b36f25f08f1</entry>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <entry name="uuid">ed3583b5-6d93-4e3f-83e0-3b36f25f08f1</entry>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk">
Nov 22 04:45:09 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:45:09 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk.config">
Nov 22 04:45:09 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:45:09 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:73:98:16"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <target dev="tapf0192978-09"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1/console.log" append="off"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:45:09 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:45:09 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:45:09 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:45:09 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.835 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Preparing to wait for external event network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.835 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.835 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.836 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.836 253665 DEBUG nova.virt.libvirt.vif [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:45:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-2008595118',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-2008595118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-gen',id=140,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBOwRQlDAdo+g60Ps/HwU/VMS64eGZhSkvI6bOPavIrg+ELfIh5TkgiKpEGXEdq5ORKgO91xQXWepwxlqtHh67VkaK6Xf3kHKOB8vlHPEMg4W1PVvZy7W3qb1i+rXVHWpw==',key_name='tempest-TestSecurityGroupsBasicOps-584634060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-h3590ebr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:45:04Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=ed3583b5-6d93-4e3f-83e0-3b36f25f08f1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.836 253665 DEBUG nova.network.os_vif_util [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.837 253665 DEBUG nova.network.os_vif_util [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:98:16,bridge_name='br-int',has_traffic_filtering=True,id=f0192978-0953-4171-b70f-7f21bd6af5a0,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0192978-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.837 253665 DEBUG os_vif [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:98:16,bridge_name='br-int',has_traffic_filtering=True,id=f0192978-0953-4171-b70f-7f21bd6af5a0,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0192978-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.838 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.838 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.839 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.842 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0192978-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.842 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf0192978-09, col_values=(('external_ids', {'iface-id': 'f0192978-0953-4171-b70f-7f21bd6af5a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:98:16', 'vm-uuid': 'ed3583b5-6d93-4e3f-83e0-3b36f25f08f1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.890 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:09 np0005532048 NetworkManager[48920]: <info>  [1763804709.8908] manager: (tapf0192978-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/615)
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.893 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.897 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.898 253665 INFO os_vif [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:98:16,bridge_name='br-int',has_traffic_filtering=True,id=f0192978-0953-4171-b70f-7f21bd6af5a0,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0192978-09')#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.958 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.959 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.959 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] No VIF found with MAC fa:16:3e:73:98:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.959 253665 INFO nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Using config drive#033[00m
Nov 22 04:45:09 np0005532048 nova_compute[253661]: 2025-11-22 09:45:09.980 253665 DEBUG nova.storage.rbd_utils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:45:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 297 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 138 op/s
Nov 22 04:45:10 np0005532048 nova_compute[253661]: 2025-11-22 09:45:10.453 253665 INFO nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Creating config drive at /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1/disk.config#033[00m
Nov 22 04:45:10 np0005532048 nova_compute[253661]: 2025-11-22 09:45:10.459 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9lod7y93 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:45:10 np0005532048 nova_compute[253661]: 2025-11-22 09:45:10.601 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9lod7y93" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:45:10 np0005532048 nova_compute[253661]: 2025-11-22 09:45:10.631 253665 DEBUG nova.storage.rbd_utils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] rbd image ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:45:10 np0005532048 nova_compute[253661]: 2025-11-22 09:45:10.635 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1/disk.config ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:45:10 np0005532048 nova_compute[253661]: 2025-11-22 09:45:10.674 253665 DEBUG nova.network.neutron [req-93d91050-095a-4fa3-91ef-799471e8225a req-15b62dea-521e-4cb2-ac5c-c2eecd8238a2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Updated VIF entry in instance network info cache for port f0192978-0953-4171-b70f-7f21bd6af5a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:45:10 np0005532048 nova_compute[253661]: 2025-11-22 09:45:10.676 253665 DEBUG nova.network.neutron [req-93d91050-095a-4fa3-91ef-799471e8225a req-15b62dea-521e-4cb2-ac5c-c2eecd8238a2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Updating instance_info_cache with network_info: [{"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:10 np0005532048 nova_compute[253661]: 2025-11-22 09:45:10.690 253665 DEBUG oslo_concurrency.lockutils [req-93d91050-095a-4fa3-91ef-799471e8225a req-15b62dea-521e-4cb2-ac5c-c2eecd8238a2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:45:10 np0005532048 nova_compute[253661]: 2025-11-22 09:45:10.866 253665 DEBUG oslo_concurrency.processutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1/disk.config ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:45:10 np0005532048 nova_compute[253661]: 2025-11-22 09:45:10.867 253665 INFO nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Deleting local config drive /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1/disk.config because it was imported into RBD.#033[00m
Nov 22 04:45:10 np0005532048 kernel: tapf0192978-09: entered promiscuous mode
Nov 22 04:45:10 np0005532048 nova_compute[253661]: 2025-11-22 09:45:10.915 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:10Z|01506|binding|INFO|Claiming lport f0192978-0953-4171-b70f-7f21bd6af5a0 for this chassis.
Nov 22 04:45:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:10Z|01507|binding|INFO|f0192978-0953-4171-b70f-7f21bd6af5a0: Claiming fa:16:3e:73:98:16 10.100.0.9
Nov 22 04:45:10 np0005532048 NetworkManager[48920]: <info>  [1763804710.9171] manager: (tapf0192978-09): new Tun device (/org/freedesktop/NetworkManager/Devices/616)
Nov 22 04:45:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:10.921 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:98:16 10.100.0.9'], port_security=['fa:16:3e:73:98:16 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ed3583b5-6d93-4e3f-83e0-3b36f25f08f1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a734f39d-baf0-4591-94dc-9057caf53bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c524ade6-1430-48f4-af9a-629e8a61db96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ce1fe74-6934-45b2-a6d9-4702f1b2307a, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f0192978-0953-4171-b70f-7f21bd6af5a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:45:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:10.923 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f0192978-0953-4171-b70f-7f21bd6af5a0 in datapath a734f39d-baf0-4591-94dc-9057caf53bb4 bound to our chassis#033[00m
Nov 22 04:45:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:10.925 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a734f39d-baf0-4591-94dc-9057caf53bb4#033[00m
Nov 22 04:45:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:10Z|01508|binding|INFO|Setting lport f0192978-0953-4171-b70f-7f21bd6af5a0 ovn-installed in OVS
Nov 22 04:45:10 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:10Z|01509|binding|INFO|Setting lport f0192978-0953-4171-b70f-7f21bd6af5a0 up in Southbound
Nov 22 04:45:10 np0005532048 nova_compute[253661]: 2025-11-22 09:45:10.936 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:10 np0005532048 nova_compute[253661]: 2025-11-22 09:45:10.940 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:10.945 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f93e47f3-10ae-4e2c-b7ca-f1465845cfcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:10 np0005532048 systemd-machined[215941]: New machine qemu-171-instance-0000008c.
Nov 22 04:45:10 np0005532048 systemd[1]: Started Virtual Machine qemu-171-instance-0000008c.
Nov 22 04:45:10 np0005532048 systemd-udevd[395981]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:45:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:10.984 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[403fd93c-f30b-469e-b0a0-d0fa15fab46f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:10.989 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb30e9d-f05a-4e53-94cb-3299a40b8e40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:11 np0005532048 NetworkManager[48920]: <info>  [1763804710.9982] device (tapf0192978-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:45:11 np0005532048 NetworkManager[48920]: <info>  [1763804711.0004] device (tapf0192978-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:45:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:11.020 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4c32ccf3-4e85-4b16-9fd3-770bebbe246d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:11.037 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ade384-4785-42ef-b606-f64d608fe801]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa734f39d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:4f:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752629, 'reachable_time': 34911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395992, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:11.052 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9a5d70-3e37-4857-8290-5c9fffa25c09]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa734f39d-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752643, 'tstamp': 752643}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395993, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa734f39d-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752646, 'tstamp': 752646}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395993, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:11.053 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa734f39d-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.106 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.107 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:11.108 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa734f39d-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:11.108 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:45:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:11.109 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa734f39d-b0, col_values=(('external_ids', {'iface-id': '3db82a3e-3c50-4f8e-b5b4-8b4657d60723'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:11 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:11.109 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.139 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.187 253665 DEBUG nova.compute.manager [req-1ed50ad8-c216-4ed9-be3b-684c8c064d02 req-f8df1ec0-39f6-471c-ac6c-599c6c58d0dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received event network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.188 253665 DEBUG oslo_concurrency.lockutils [req-1ed50ad8-c216-4ed9-be3b-684c8c064d02 req-f8df1ec0-39f6-471c-ac6c-599c6c58d0dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.188 253665 DEBUG oslo_concurrency.lockutils [req-1ed50ad8-c216-4ed9-be3b-684c8c064d02 req-f8df1ec0-39f6-471c-ac6c-599c6c58d0dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.188 253665 DEBUG oslo_concurrency.lockutils [req-1ed50ad8-c216-4ed9-be3b-684c8c064d02 req-f8df1ec0-39f6-471c-ac6c-599c6c58d0dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.188 253665 DEBUG nova.compute.manager [req-1ed50ad8-c216-4ed9-be3b-684c8c064d02 req-f8df1ec0-39f6-471c-ac6c-599c6c58d0dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Processing event network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:45:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:11Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f9:a0:45 10.100.0.13
Nov 22 04:45:11 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:11Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f9:a0:45 10.100.0.13
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.724 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.750 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804711.7499137, ed3583b5-6d93-4e3f-83e0-3b36f25f08f1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.751 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] VM Started (Lifecycle Event)#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.752 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.756 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.760 253665 INFO nova.virt.libvirt.driver [-] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Instance spawned successfully.#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.761 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.787 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.793 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.796 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.796 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.796 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.797 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.797 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.797 253665 DEBUG nova.virt.libvirt.driver [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.825 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.825 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804711.7500281, ed3583b5-6d93-4e3f-83e0-3b36f25f08f1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.825 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.849 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.853 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804711.7555947, ed3583b5-6d93-4e3f-83e0-3b36f25f08f1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.853 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.860 253665 INFO nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Took 6.84 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.861 253665 DEBUG nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.867 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.869 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.887 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.912 253665 INFO nova.compute.manager [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Took 7.81 seconds to build instance.#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.924 253665 DEBUG oslo_concurrency.lockutils [None req-2b1870b1-4a59-4b31-8957-f6bedd45ee63 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.925 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 7.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:11 np0005532048 nova_compute[253661]: 2025-11-22 09:45:11.943 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 311 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.1 MiB/s wr, 133 op/s
Nov 22 04:45:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:45:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3910842087' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:45:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:45:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3910842087' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:45:13 np0005532048 nova_compute[253661]: 2025-11-22 09:45:13.329 253665 DEBUG nova.compute.manager [req-f1de9ae0-dacd-4409-9da5-9f1167ab8085 req-95550cff-a7ea-4383-8ba0-ae03e0377798 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received event network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:13 np0005532048 nova_compute[253661]: 2025-11-22 09:45:13.330 253665 DEBUG oslo_concurrency.lockutils [req-f1de9ae0-dacd-4409-9da5-9f1167ab8085 req-95550cff-a7ea-4383-8ba0-ae03e0377798 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:13 np0005532048 nova_compute[253661]: 2025-11-22 09:45:13.330 253665 DEBUG oslo_concurrency.lockutils [req-f1de9ae0-dacd-4409-9da5-9f1167ab8085 req-95550cff-a7ea-4383-8ba0-ae03e0377798 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:13 np0005532048 nova_compute[253661]: 2025-11-22 09:45:13.330 253665 DEBUG oslo_concurrency.lockutils [req-f1de9ae0-dacd-4409-9da5-9f1167ab8085 req-95550cff-a7ea-4383-8ba0-ae03e0377798 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:13 np0005532048 nova_compute[253661]: 2025-11-22 09:45:13.331 253665 DEBUG nova.compute.manager [req-f1de9ae0-dacd-4409-9da5-9f1167ab8085 req-95550cff-a7ea-4383-8ba0-ae03e0377798 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] No waiting events found dispatching network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:45:13 np0005532048 nova_compute[253661]: 2025-11-22 09:45:13.331 253665 WARNING nova.compute.manager [req-f1de9ae0-dacd-4409-9da5-9f1167ab8085 req-95550cff-a7ea-4383-8ba0-ae03e0377798 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received unexpected event network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:45:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:45:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Nov 22 04:45:14 np0005532048 nova_compute[253661]: 2025-11-22 09:45:14.891 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:15 np0005532048 nova_compute[253661]: 2025-11-22 09:45:15.279 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:15 np0005532048 nova_compute[253661]: 2025-11-22 09:45:15.351 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804700.3500113, dab57683-82b6-44b3-b663-556a4f0e3dab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:45:15 np0005532048 nova_compute[253661]: 2025-11-22 09:45:15.351 253665 INFO nova.compute.manager [-] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:45:15 np0005532048 nova_compute[253661]: 2025-11-22 09:45:15.380 253665 DEBUG nova.compute.manager [None req-d50896d9-1a46-4c2a-ba6a-ac607b4aaa67 - - - - - -] [instance: dab57683-82b6-44b3-b663-556a4f0e3dab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:45:15 np0005532048 nova_compute[253661]: 2025-11-22 09:45:15.881 253665 DEBUG nova.compute.manager [req-18c90cb6-f538-48c4-8346-018c0ae7bacf req-9ccbd0c6-6bb5-4ad9-9dc5-d95bd5ee3085 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received event network-changed-f0192978-0953-4171-b70f-7f21bd6af5a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:15 np0005532048 nova_compute[253661]: 2025-11-22 09:45:15.881 253665 DEBUG nova.compute.manager [req-18c90cb6-f538-48c4-8346-018c0ae7bacf req-9ccbd0c6-6bb5-4ad9-9dc5-d95bd5ee3085 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Refreshing instance network info cache due to event network-changed-f0192978-0953-4171-b70f-7f21bd6af5a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:45:15 np0005532048 nova_compute[253661]: 2025-11-22 09:45:15.881 253665 DEBUG oslo_concurrency.lockutils [req-18c90cb6-f538-48c4-8346-018c0ae7bacf req-9ccbd0c6-6bb5-4ad9-9dc5-d95bd5ee3085 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:45:15 np0005532048 nova_compute[253661]: 2025-11-22 09:45:15.881 253665 DEBUG oslo_concurrency.lockutils [req-18c90cb6-f538-48c4-8346-018c0ae7bacf req-9ccbd0c6-6bb5-4ad9-9dc5-d95bd5ee3085 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:45:15 np0005532048 nova_compute[253661]: 2025-11-22 09:45:15.881 253665 DEBUG nova.network.neutron [req-18c90cb6-f538-48c4-8346-018c0ae7bacf req-9ccbd0c6-6bb5-4ad9-9dc5-d95bd5ee3085 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Refreshing network info cache for port f0192978-0953-4171-b70f-7f21bd6af5a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:45:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Nov 22 04:45:16 np0005532048 nova_compute[253661]: 2025-11-22 09:45:16.775 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:18 np0005532048 nova_compute[253661]: 2025-11-22 09:45:18.069 253665 DEBUG nova.network.neutron [req-18c90cb6-f538-48c4-8346-018c0ae7bacf req-9ccbd0c6-6bb5-4ad9-9dc5-d95bd5ee3085 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Updated VIF entry in instance network info cache for port f0192978-0953-4171-b70f-7f21bd6af5a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:45:18 np0005532048 nova_compute[253661]: 2025-11-22 09:45:18.070 253665 DEBUG nova.network.neutron [req-18c90cb6-f538-48c4-8346-018c0ae7bacf req-9ccbd0c6-6bb5-4ad9-9dc5-d95bd5ee3085 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Updating instance_info_cache with network_info: [{"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:18 np0005532048 nova_compute[253661]: 2025-11-22 09:45:18.126 253665 DEBUG oslo_concurrency.lockutils [req-18c90cb6-f538-48c4-8346-018c0ae7bacf req-9ccbd0c6-6bb5-4ad9-9dc5-d95bd5ee3085 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:45:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Nov 22 04:45:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:45:19 np0005532048 nova_compute[253661]: 2025-11-22 09:45:19.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 154 op/s
Nov 22 04:45:21 np0005532048 nova_compute[253661]: 2025-11-22 09:45:21.812 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.004 253665 DEBUG nova.compute.manager [req-729fbc00-b8da-4671-8a50-a4e573df1a14 req-ba5e6b33-6f41-4004-ab2e-8a6e220896a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-changed-2979286f-0fdd-4b20-9c29-da29aac8e5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.004 253665 DEBUG nova.compute.manager [req-729fbc00-b8da-4671-8a50-a4e573df1a14 req-ba5e6b33-6f41-4004-ab2e-8a6e220896a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Refreshing instance network info cache due to event network-changed-2979286f-0fdd-4b20-9c29-da29aac8e5ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.004 253665 DEBUG oslo_concurrency.lockutils [req-729fbc00-b8da-4671-8a50-a4e573df1a14 req-ba5e6b33-6f41-4004-ab2e-8a6e220896a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.005 253665 DEBUG oslo_concurrency.lockutils [req-729fbc00-b8da-4671-8a50-a4e573df1a14 req-ba5e6b33-6f41-4004-ab2e-8a6e220896a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.005 253665 DEBUG nova.network.neutron [req-729fbc00-b8da-4671-8a50-a4e573df1a14 req-ba5e6b33-6f41-4004-ab2e-8a6e220896a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Refreshing network info cache for port 2979286f-0fdd-4b20-9c29-da29aac8e5ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.093 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.094 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.095 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.095 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.095 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.097 253665 INFO nova.compute.manager [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Terminating instance#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.098 253665 DEBUG nova.compute.manager [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:45:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 132 op/s
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.227 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.229 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.246 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.340 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.341 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.504 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.505 253665 INFO nova.compute.claims [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:45:22 np0005532048 nova_compute[253661]: 2025-11-22 09:45:22.675 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:45:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:45:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:45:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:45:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:45:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:45:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:45:22 np0005532048 kernel: tap2979286f-0f (unregistering): left promiscuous mode
Nov 22 04:45:23 np0005532048 NetworkManager[48920]: <info>  [1763804723.0086] device (tap2979286f-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:45:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:23Z|01510|binding|INFO|Releasing lport 2979286f-0fdd-4b20-9c29-da29aac8e5ab from this chassis (sb_readonly=0)
Nov 22 04:45:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:23Z|01511|binding|INFO|Setting lport 2979286f-0fdd-4b20-9c29-da29aac8e5ab down in Southbound
Nov 22 04:45:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:23Z|01512|binding|INFO|Removing iface tap2979286f-0f ovn-installed in OVS
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.019 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:23 np0005532048 kernel: tap7b663864-29 (unregistering): left promiscuous mode
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.029 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:a0:45 10.100.0.13'], port_security=['fa:16:3e:f9:a0:45 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'b4a045a0-0a46-4644-8d2e-9ec4a6d893b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621dd092-e20a-432f-8488-41d7fcd69532', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fffaaec8-1dee-4e16-9a50-50b2fc979aa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=586feb4c-523c-413f-8bd3-6bc87edbdf4c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2979286f-0fdd-4b20-9c29-da29aac8e5ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.030 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2979286f-0fdd-4b20-9c29-da29aac8e5ab in datapath 621dd092-e20a-432f-8488-41d7fcd69532 unbound from our chassis#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.033 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 621dd092-e20a-432f-8488-41d7fcd69532#033[00m
Nov 22 04:45:23 np0005532048 NetworkManager[48920]: <info>  [1763804723.0342] device (tap7b663864-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.047 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[43a91cbd-e19c-4a77-8b9c-ba2c8f4e4179]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.063 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:23Z|01513|binding|INFO|Releasing lport 7b663864-2935-4127-ab02-75e4a0acfc73 from this chassis (sb_readonly=0)
Nov 22 04:45:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:23Z|01514|binding|INFO|Setting lport 7b663864-2935-4127-ab02-75e4a0acfc73 down in Southbound
Nov 22 04:45:23 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:23Z|01515|binding|INFO|Removing iface tap7b663864-29 ovn-installed in OVS
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.069 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:10:42 2001:db8::f816:3eff:fe6f:1042'], port_security=['fa:16:3e:6f:10:42 2001:db8::f816:3eff:fe6f:1042'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6f:1042/64', 'neutron:device_id': 'b4a045a0-0a46-4644-8d2e-9ec4a6d893b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fffaaec8-1dee-4e16-9a50-50b2fc979aa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7556820e-db50-4efa-817c-86d63f0b8b71, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=7b663864-2935-4127-ab02-75e4a0acfc73) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.080 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.081 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[bad4c1dd-dd3d-4dfc-b220-73b4c00e5c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.084 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ca668d3a-7420-4ed5-87ca-7497ed4b3bc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:23 np0005532048 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Nov 22 04:45:23 np0005532048 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008b.scope: Consumed 14.076s CPU time.
Nov 22 04:45:23 np0005532048 systemd-machined[215941]: Machine qemu-170-instance-0000008b terminated.
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.117 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ba106420-07c4-45ba-8668-4436e4e819df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:23 np0005532048 NetworkManager[48920]: <info>  [1763804723.1329] manager: (tap7b663864-29): new Tun device (/org/freedesktop/NetworkManager/Devices/617)
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.137 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b801ae-fbaf-403b-8105-9f07441b93f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap621dd092-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:9d:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 421], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750283, 'reachable_time': 20749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396074, 'error': None, 'target': 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.158 253665 INFO nova.virt.libvirt.driver [-] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Instance destroyed successfully.#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.161 253665 DEBUG nova.objects.instance [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:45:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:45:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2767172878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.176 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b85f49be-eeba-4ccd-a6ed-9f0f9894f1dd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap621dd092-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750294, 'tstamp': 750294}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396088, 'error': None, 'target': 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap621dd092-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750297, 'tstamp': 750297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396088, 'error': None, 'target': 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.178 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621dd092-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.185 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.187 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap621dd092-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.187 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.187 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap621dd092-e0, col_values=(('external_ids', {'iface-id': 'ce538828-218d-4def-9bed-efeb786012c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.187 253665 DEBUG nova.virt.libvirt.vif [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-63266585',display_name='tempest-TestGettingAddress-server-63266585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-63266585',id=139,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:44:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-y946c4e6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:44:56Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=b4a045a0-0a46-4644-8d2e-9ec4a6d893b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.188 253665 DEBUG nova.network.os_vif_util [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.188 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.188 253665 DEBUG nova.network.os_vif_util [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:a0:45,bridge_name='br-int',has_traffic_filtering=True,id=2979286f-0fdd-4b20-9c29-da29aac8e5ab,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2979286f-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.189 253665 DEBUG os_vif [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:a0:45,bridge_name='br-int',has_traffic_filtering=True,id=2979286f-0fdd-4b20-9c29-da29aac8e5ab,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2979286f-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.190 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 7b663864-2935-4127-ab02-75e4a0acfc73 in datapath 7a504de2-27b2-4d01-a183-d9b0331ca31e unbound from our chassis#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.191 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.191 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2979286f-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.192 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a504de2-27b2-4d01-a183-d9b0331ca31e#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.193 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.200 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.203 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.204 253665 INFO os_vif [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:a0:45,bridge_name='br-int',has_traffic_filtering=True,id=2979286f-0fdd-4b20-9c29-da29aac8e5ab,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2979286f-0f')#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.205 253665 DEBUG nova.virt.libvirt.vif [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:44:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-63266585',display_name='tempest-TestGettingAddress-server-63266585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-63266585',id=139,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:44:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-y946c4e6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:44:56Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=b4a045a0-0a46-4644-8d2e-9ec4a6d893b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.206 253665 DEBUG nova.network.os_vif_util [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.207 253665 DEBUG nova.network.os_vif_util [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:10:42,bridge_name='br-int',has_traffic_filtering=True,id=7b663864-2935-4127-ab02-75e4a0acfc73,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b663864-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.207 253665 DEBUG os_vif [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:10:42,bridge_name='br-int',has_traffic_filtering=True,id=7b663864-2935-4127-ab02-75e4a0acfc73,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b663864-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.209 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.209 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c590d1bb-df49-48e5-85c7-70372e4f0566]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.210 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b663864-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.212 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.213 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.215 253665 INFO os_vif [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:10:42,bridge_name='br-int',has_traffic_filtering=True,id=7b663864-2935-4127-ab02-75e4a0acfc73,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b663864-29')#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.236 253665 DEBUG nova.compute.provider_tree [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.239 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b117d142-4e39-4d69-a21c-e6d16adf05fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.244 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[477768dd-f97e-4b4d-9153-d4c37124adb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.267 253665 DEBUG nova.scheduler.client.report [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.283 253665 DEBUG nova.compute.manager [req-d2b03bdd-f7f3-466b-b519-5b81355049a5 req-514fb16b-b9df-4c1c-9646-d7202fb4d411 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-unplugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.283 253665 DEBUG oslo_concurrency.lockutils [req-d2b03bdd-f7f3-466b-b519-5b81355049a5 req-514fb16b-b9df-4c1c-9646-d7202fb4d411 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.283 253665 DEBUG oslo_concurrency.lockutils [req-d2b03bdd-f7f3-466b-b519-5b81355049a5 req-514fb16b-b9df-4c1c-9646-d7202fb4d411 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.283 253665 DEBUG oslo_concurrency.lockutils [req-d2b03bdd-f7f3-466b-b519-5b81355049a5 req-514fb16b-b9df-4c1c-9646-d7202fb4d411 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.284 253665 DEBUG nova.compute.manager [req-d2b03bdd-f7f3-466b-b519-5b81355049a5 req-514fb16b-b9df-4c1c-9646-d7202fb4d411 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] No waiting events found dispatching network-vif-unplugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.284 253665 DEBUG nova.compute.manager [req-d2b03bdd-f7f3-466b-b519-5b81355049a5 req-514fb16b-b9df-4c1c-9646-d7202fb4d411 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-unplugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.287 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[adf121f8-8c0a-4e52-b6aa-68148e040ed2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.296 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.297 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.306 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f0452f-75bf-42d0-b7c6-65441cfbd930]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a504de2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:4b:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750378, 'reachable_time': 21380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396120, 'error': None, 'target': 'ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.325 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e24a7f8d-ce58-4c32-ae93-262fc0e0733f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7a504de2-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 750393, 'tstamp': 750393}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396121, 'error': None, 'target': 'ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.327 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a504de2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.328 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.329 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.329 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a504de2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.330 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.330 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a504de2-20, col_values=(('external_ids', {'iface-id': 'b35ca171-2b2e-44d8-96a4-4559f6282fda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:23 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:23.330 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.365 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.366 253665 DEBUG nova.network.neutron [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.377 253665 DEBUG nova.network.neutron [req-729fbc00-b8da-4671-8a50-a4e573df1a14 req-ba5e6b33-6f41-4004-ab2e-8a6e220896a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updated VIF entry in instance network info cache for port 2979286f-0fdd-4b20-9c29-da29aac8e5ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.378 253665 DEBUG nova.network.neutron [req-729fbc00-b8da-4671-8a50-a4e573df1a14 req-ba5e6b33-6f41-4004-ab2e-8a6e220896a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updating instance_info_cache with network_info: [{"id": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "address": "fa:16:3e:f9:a0:45", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2979286f-0f", "ovs_interfaceid": "2979286f-0fdd-4b20-9c29-da29aac8e5ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.391 253665 INFO nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.407 253665 DEBUG oslo_concurrency.lockutils [req-729fbc00-b8da-4671-8a50-a4e573df1a14 req-ba5e6b33-6f41-4004-ab2e-8a6e220896a6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.419 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.523 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.524 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.524 253665 INFO nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Creating image(s)#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.542 253665 DEBUG nova.storage.rbd_utils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.564 253665 DEBUG nova.storage.rbd_utils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.585 253665 DEBUG nova.storage.rbd_utils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.589 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.669 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.670 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.671 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.671 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.694 253665 DEBUG nova.storage.rbd_utils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.698 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:45:23 np0005532048 nova_compute[253661]: 2025-11-22 09:45:23.858 253665 DEBUG nova.policy [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '36208b8910e24c06bcc7f958bab2adf1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:45:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:45:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 877 KiB/s wr, 111 op/s
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.357 253665 DEBUG nova.network.neutron [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Successfully created port: 2bf46f44-05ff-4af4-ba41-f280a21be09e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.432 253665 DEBUG nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.432 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.432 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.433 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.433 253665 DEBUG nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] No waiting events found dispatching network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.433 253665 WARNING nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received unexpected event network-vif-plugged-2979286f-0fdd-4b20-9c29-da29aac8e5ab for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.433 253665 DEBUG nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-unplugged-7b663864-2935-4127-ab02-75e4a0acfc73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.433 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.433 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.434 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.434 253665 DEBUG nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] No waiting events found dispatching network-vif-unplugged-7b663864-2935-4127-ab02-75e4a0acfc73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.434 253665 DEBUG nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-unplugged-7b663864-2935-4127-ab02-75e4a0acfc73 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.434 253665 DEBUG nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.434 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.434 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.434 253665 DEBUG oslo_concurrency.lockutils [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.435 253665 DEBUG nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] No waiting events found dispatching network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.435 253665 WARNING nova.compute.manager [req-9fdfc110-7be8-44b1-acf8-e7e9df6b575c req-ed3c595d-896b-405b-aace-5e2bd6eb9690 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received unexpected event network-vif-plugged-7b663864-2935-4127-ab02-75e4a0acfc73 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.613 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.916s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.679 253665 DEBUG nova.storage.rbd_utils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] resizing rbd image e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.881 253665 DEBUG nova.objects.instance [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'migration_context' on Instance uuid e1b6c07e-b79f-4b39-a2b8-a952e54f4972 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.895 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.896 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Ensure instance console log exists: /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.897 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.897 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:25 np0005532048 nova_compute[253661]: 2025-11-22 09:45:25.897 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 248 KiB/s rd, 25 KiB/s wr, 27 op/s
Nov 22 04:45:26 np0005532048 nova_compute[253661]: 2025-11-22 09:45:26.379 253665 INFO nova.virt.libvirt.driver [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Deleting instance files /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_del#033[00m
Nov 22 04:45:26 np0005532048 nova_compute[253661]: 2025-11-22 09:45:26.381 253665 INFO nova.virt.libvirt.driver [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Deletion of /var/lib/nova/instances/b4a045a0-0a46-4644-8d2e-9ec4a6d893b9_del complete#033[00m
Nov 22 04:45:26 np0005532048 nova_compute[253661]: 2025-11-22 09:45:26.473 253665 INFO nova.compute.manager [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Took 4.38 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:45:26 np0005532048 nova_compute[253661]: 2025-11-22 09:45:26.474 253665 DEBUG oslo.service.loopingcall [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:45:26 np0005532048 nova_compute[253661]: 2025-11-22 09:45:26.475 253665 DEBUG nova.compute.manager [-] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:45:26 np0005532048 nova_compute[253661]: 2025-11-22 09:45:26.475 253665 DEBUG nova.network.neutron [-] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:45:26 np0005532048 nova_compute[253661]: 2025-11-22 09:45:26.814 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:26 np0005532048 nova_compute[253661]: 2025-11-22 09:45:26.839 253665 DEBUG nova.network.neutron [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Successfully updated port: 2bf46f44-05ff-4af4-ba41-f280a21be09e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:45:26 np0005532048 nova_compute[253661]: 2025-11-22 09:45:26.867 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:45:26 np0005532048 nova_compute[253661]: 2025-11-22 09:45:26.867 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquired lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:45:26 np0005532048 nova_compute[253661]: 2025-11-22 09:45:26.868 253665 DEBUG nova.network.neutron [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:45:27 np0005532048 nova_compute[253661]: 2025-11-22 09:45:27.050 253665 DEBUG nova.network.neutron [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:45:27 np0005532048 nova_compute[253661]: 2025-11-22 09:45:27.535 253665 DEBUG nova.compute.manager [req-a93dc0ed-4f01-4f00-9b69-54a88c6e9f80 req-eee139d7-2f2a-4260-9c47-b828f2c444e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-deleted-2979286f-0fdd-4b20-9c29-da29aac8e5ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:27 np0005532048 nova_compute[253661]: 2025-11-22 09:45:27.536 253665 INFO nova.compute.manager [req-a93dc0ed-4f01-4f00-9b69-54a88c6e9f80 req-eee139d7-2f2a-4260-9c47-b828f2c444e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Neutron deleted interface 2979286f-0fdd-4b20-9c29-da29aac8e5ab; detaching it from the instance and deleting it from the info cache#033[00m
Nov 22 04:45:27 np0005532048 nova_compute[253661]: 2025-11-22 09:45:27.536 253665 DEBUG nova.network.neutron [req-a93dc0ed-4f01-4f00-9b69-54a88c6e9f80 req-eee139d7-2f2a-4260-9c47-b828f2c444e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updating instance_info_cache with network_info: [{"id": "7b663864-2935-4127-ab02-75e4a0acfc73", "address": "fa:16:3e:6f:10:42", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6f:1042", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b663864-29", "ovs_interfaceid": "7b663864-2935-4127-ab02-75e4a0acfc73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:27 np0005532048 nova_compute[253661]: 2025-11-22 09:45:27.568 253665 DEBUG nova.compute.manager [req-a93dc0ed-4f01-4f00-9b69-54a88c6e9f80 req-eee139d7-2f2a-4260-9c47-b828f2c444e7 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Detach interface failed, port_id=2979286f-0fdd-4b20-9c29-da29aac8e5ab, reason: Instance b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 22 04:45:27 np0005532048 nova_compute[253661]: 2025-11-22 09:45:27.619 253665 DEBUG nova.compute.manager [req-98a92980-a6d5-4576-98af-df27853384da req-38fc1bc9-b7d3-4cb3-b572-bbb176aafd9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received event network-changed-2bf46f44-05ff-4af4-ba41-f280a21be09e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:27 np0005532048 nova_compute[253661]: 2025-11-22 09:45:27.619 253665 DEBUG nova.compute.manager [req-98a92980-a6d5-4576-98af-df27853384da req-38fc1bc9-b7d3-4cb3-b572-bbb176aafd9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Refreshing instance network info cache due to event network-changed-2bf46f44-05ff-4af4-ba41-f280a21be09e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:45:27 np0005532048 nova_compute[253661]: 2025-11-22 09:45:27.620 253665 DEBUG oslo_concurrency.lockutils [req-98a92980-a6d5-4576-98af-df27853384da req-38fc1bc9-b7d3-4cb3-b572-bbb176aafd9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:45:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:27Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:73:98:16 10.100.0.9
Nov 22 04:45:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:27Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:73:98:16 10.100.0.9
Nov 22 04:45:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:27.988 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:27.988 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:27.989 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.083 253665 DEBUG nova.network.neutron [-] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.098 253665 INFO nova.compute.manager [-] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Took 1.62 seconds to deallocate network for instance.#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.140 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.141 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 307 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 287 KiB/s rd, 791 KiB/s wr, 65 op/s
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.212 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.255 253665 DEBUG oslo_concurrency.processutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.293 253665 DEBUG nova.network.neutron [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updating instance_info_cache with network_info: [{"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.309 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Releasing lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.310 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Instance network_info: |[{"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.310 253665 DEBUG oslo_concurrency.lockutils [req-98a92980-a6d5-4576-98af-df27853384da req-38fc1bc9-b7d3-4cb3-b572-bbb176aafd9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.310 253665 DEBUG nova.network.neutron [req-98a92980-a6d5-4576-98af-df27853384da req-38fc1bc9-b7d3-4cb3-b572-bbb176aafd9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Refreshing network info cache for port 2bf46f44-05ff-4af4-ba41-f280a21be09e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.313 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Start _get_guest_xml network_info=[{"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.317 253665 WARNING nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.326 253665 DEBUG nova.virt.libvirt.host [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.326 253665 DEBUG nova.virt.libvirt.host [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.329 253665 DEBUG nova.virt.libvirt.host [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.330 253665 DEBUG nova.virt.libvirt.host [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.330 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.330 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.330 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.331 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.331 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.331 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.331 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.331 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.332 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.332 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.332 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.332 253665 DEBUG nova.virt.hardware [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.335 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:45:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:45:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/651576619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.709 253665 DEBUG oslo_concurrency.processutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.715 253665 DEBUG nova.compute.provider_tree [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.729 253665 DEBUG nova.scheduler.client.report [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.747 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:45:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478967708' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.775 253665 INFO nova.scheduler.client.report [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance b4a045a0-0a46-4644-8d2e-9ec4a6d893b9#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.779 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.808 253665 DEBUG nova.storage.rbd_utils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.812 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:45:28 np0005532048 nova_compute[253661]: 2025-11-22 09:45:28.916 253665 DEBUG oslo_concurrency.lockutils [None req-b1540861-b683-4164-9bad-a35fcd4de08b 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "b4a045a0-0a46-4644-8d2e-9ec4a6d893b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:45:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:45:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2142577657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.287 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.289 253665 DEBUG nova.virt.libvirt.vif [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:45:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-221401421',display_name='tempest-TestNetworkBasicOps-server-221401421',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-221401421',id=141,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFpRaOd0MA6ods5Fgu/bePJdKNA6xJzpwKTamybJrRd4vBorrEhiuMwvVBW2vy+fN3+ZAEzEiG8NI9LxFAosf7VdPZQ2Hzoq936Yx2tDHAB+5D4UznxlVut3DWP76u/ISw==',key_name='tempest-TestNetworkBasicOps-1054111100',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-qlv4m0ht',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:45:23Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=e1b6c07e-b79f-4b39-a2b8-a952e54f4972,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.289 253665 DEBUG nova.network.os_vif_util [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.290 253665 DEBUG nova.network.os_vif_util [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:a9:1c,bridge_name='br-int',has_traffic_filtering=True,id=2bf46f44-05ff-4af4-ba41-f280a21be09e,network=Network(32b06b6f-2dbe-45a6-a0ed-07f342aa967b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf46f44-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.291 253665 DEBUG nova.objects.instance [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'pci_devices' on Instance uuid e1b6c07e-b79f-4b39-a2b8-a952e54f4972 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.302 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  <uuid>e1b6c07e-b79f-4b39-a2b8-a952e54f4972</uuid>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  <name>instance-0000008d</name>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestNetworkBasicOps-server-221401421</nova:name>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:45:28</nova:creationTime>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:45:29 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:        <nova:user uuid="36208b8910e24c06bcc7f958bab2adf1">tempest-TestNetworkBasicOps-927877775-project-member</nova:user>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:        <nova:project uuid="8acfdd432cb64d45993b2a66a2d29b82">tempest-TestNetworkBasicOps-927877775</nova:project>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:        <nova:port uuid="2bf46f44-05ff-4af4-ba41-f280a21be09e">
Nov 22 04:45:29 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <entry name="serial">e1b6c07e-b79f-4b39-a2b8-a952e54f4972</entry>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <entry name="uuid">e1b6c07e-b79f-4b39-a2b8-a952e54f4972</entry>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk">
Nov 22 04:45:29 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:45:29 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk.config">
Nov 22 04:45:29 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:45:29 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:ec:a9:1c"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <target dev="tap2bf46f44-05"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972/console.log" append="off"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:45:29 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:45:29 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:45:29 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:45:29 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.303 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Preparing to wait for external event network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.303 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.303 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.303 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.304 253665 DEBUG nova.virt.libvirt.vif [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:45:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-221401421',display_name='tempest-TestNetworkBasicOps-server-221401421',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-221401421',id=141,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFpRaOd0MA6ods5Fgu/bePJdKNA6xJzpwKTamybJrRd4vBorrEhiuMwvVBW2vy+fN3+ZAEzEiG8NI9LxFAosf7VdPZQ2Hzoq936Yx2tDHAB+5D4UznxlVut3DWP76u/ISw==',key_name='tempest-TestNetworkBasicOps-1054111100',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-qlv4m0ht',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:45:23Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=e1b6c07e-b79f-4b39-a2b8-a952e54f4972,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.305 253665 DEBUG nova.network.os_vif_util [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.305 253665 DEBUG nova.network.os_vif_util [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:a9:1c,bridge_name='br-int',has_traffic_filtering=True,id=2bf46f44-05ff-4af4-ba41-f280a21be09e,network=Network(32b06b6f-2dbe-45a6-a0ed-07f342aa967b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf46f44-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.306 253665 DEBUG os_vif [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:a9:1c,bridge_name='br-int',has_traffic_filtering=True,id=2bf46f44-05ff-4af4-ba41-f280a21be09e,network=Network(32b06b6f-2dbe-45a6-a0ed-07f342aa967b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf46f44-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.306 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.307 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.307 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.312 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.312 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2bf46f44-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.313 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2bf46f44-05, col_values=(('external_ids', {'iface-id': '2bf46f44-05ff-4af4-ba41-f280a21be09e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:a9:1c', 'vm-uuid': 'e1b6c07e-b79f-4b39-a2b8-a952e54f4972'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.314 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:29 np0005532048 NetworkManager[48920]: <info>  [1763804729.3154] manager: (tap2bf46f44-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/618)
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.316 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.319 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.320 253665 INFO os_vif [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:a9:1c,bridge_name='br-int',has_traffic_filtering=True,id=2bf46f44-05ff-4af4-ba41-f280a21be09e,network=Network(32b06b6f-2dbe-45a6-a0ed-07f342aa967b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf46f44-05')#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.381 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.381 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.382 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] No VIF found with MAC fa:16:3e:ec:a9:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.382 253665 INFO nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Using config drive#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.400 253665 DEBUG nova.storage.rbd_utils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.626 253665 DEBUG nova.compute.manager [req-eb9a8ee3-3122-4d09-98b8-d3e89ad7a5c0 req-6003503d-d8cd-4aa5-a4a7-d3d5f489080f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Received event network-vif-deleted-7b663864-2935-4127-ab02-75e4a0acfc73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.643 253665 DEBUG nova.network.neutron [req-98a92980-a6d5-4576-98af-df27853384da req-38fc1bc9-b7d3-4cb3-b572-bbb176aafd9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updated VIF entry in instance network info cache for port 2bf46f44-05ff-4af4-ba41-f280a21be09e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.644 253665 DEBUG nova.network.neutron [req-98a92980-a6d5-4576-98af-df27853384da req-38fc1bc9-b7d3-4cb3-b572-bbb176aafd9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updating instance_info_cache with network_info: [{"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.655 253665 DEBUG oslo_concurrency.lockutils [req-98a92980-a6d5-4576-98af-df27853384da req-38fc1bc9-b7d3-4cb3-b572-bbb176aafd9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.843 253665 INFO nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Creating config drive at /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972/disk.config#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.848 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxymbl1u6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:45:29 np0005532048 nova_compute[253661]: 2025-11-22 09:45:29.988 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxymbl1u6" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.011 253665 DEBUG nova.storage.rbd_utils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] rbd image e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.014 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972/disk.config e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:45:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 324 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 3.9 MiB/s wr, 166 op/s
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.179 253665 DEBUG oslo_concurrency.processutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972/disk.config e1b6c07e-b79f-4b39-a2b8-a952e54f4972_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.180 253665 INFO nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Deleting local config drive /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972/disk.config because it was imported into RBD.#033[00m
Nov 22 04:45:30 np0005532048 kernel: tap2bf46f44-05: entered promiscuous mode
Nov 22 04:45:30 np0005532048 NetworkManager[48920]: <info>  [1763804730.2291] manager: (tap2bf46f44-05): new Tun device (/org/freedesktop/NetworkManager/Devices/619)
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.229 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:30Z|01516|binding|INFO|Claiming lport 2bf46f44-05ff-4af4-ba41-f280a21be09e for this chassis.
Nov 22 04:45:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:30Z|01517|binding|INFO|2bf46f44-05ff-4af4-ba41-f280a21be09e: Claiming fa:16:3e:ec:a9:1c 10.100.0.9
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.234 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.244 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:a9:1c 10.100.0.9'], port_security=['fa:16:3e:ec:a9:1c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e1b6c07e-b79f-4b39-a2b8-a952e54f4972', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32b06b6f-2dbe-45a6-a0ed-07f342aa967b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bf4ccd55-5049-48da-a040-7bc492278d9b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bf69086-9ee8-4131-a2f6-8ce3890c821e, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2bf46f44-05ff-4af4-ba41-f280a21be09e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.244 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2bf46f44-05ff-4af4-ba41-f280a21be09e in datapath 32b06b6f-2dbe-45a6-a0ed-07f342aa967b bound to our chassis#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.246 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32b06b6f-2dbe-45a6-a0ed-07f342aa967b#033[00m
Nov 22 04:45:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:30Z|01518|binding|INFO|Setting lport 2bf46f44-05ff-4af4-ba41-f280a21be09e ovn-installed in OVS
Nov 22 04:45:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:30Z|01519|binding|INFO|Setting lport 2bf46f44-05ff-4af4-ba41-f280a21be09e up in Southbound
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.249 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.252 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 systemd-udevd[396446]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.262 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a81404af-0571-45fc-9c80-59bfba61550b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.263 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap32b06b6f-21 in ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.265 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap32b06b6f-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.265 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0e217332-26a8-403f-96e1-efbdfaa0b1af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.267 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7020dd-582b-4328-8146-4c906a6bcdce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:30 np0005532048 systemd-machined[215941]: New machine qemu-172-instance-0000008d.
Nov 22 04:45:30 np0005532048 NetworkManager[48920]: <info>  [1763804730.2729] device (tap2bf46f44-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:45:30 np0005532048 NetworkManager[48920]: <info>  [1763804730.2740] device (tap2bf46f44-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.278 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[542323b0-27d4-4373-991d-87c52dba0f47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.283 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.283 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.283 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.284 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.284 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:30 np0005532048 systemd[1]: Started Virtual Machine qemu-172-instance-0000008d.
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.285 253665 INFO nova.compute.manager [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Terminating instance#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.286 253665 DEBUG nova.compute.manager [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.302 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d4566bee-e78f-4f6d-b342-2f1acd31bb7f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.332 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8eef854b-8a1f-4d1b-9c10-53f294b846f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.338 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4055970a-ba2f-44ef-8557-a2aa52ec68ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:30 np0005532048 systemd-udevd[396450]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:45:30 np0005532048 NetworkManager[48920]: <info>  [1763804730.3400] manager: (tap32b06b6f-20): new Veth device (/org/freedesktop/NetworkManager/Devices/620)
Nov 22 04:45:30 np0005532048 kernel: tap8e5490c3-8e (unregistering): left promiscuous mode
Nov 22 04:45:30 np0005532048 NetworkManager[48920]: <info>  [1763804730.3556] device (tap8e5490c3-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:45:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:30Z|01520|binding|INFO|Releasing lport 8e5490c3-8e77-4f49-a612-31f17e0a3586 from this chassis (sb_readonly=0)
Nov 22 04:45:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:30Z|01521|binding|INFO|Setting lport 8e5490c3-8e77-4f49-a612-31f17e0a3586 down in Southbound
Nov 22 04:45:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:30Z|01522|binding|INFO|Removing iface tap8e5490c3-8e ovn-installed in OVS
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.367 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.370 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 kernel: tap1010674e-1b (unregistering): left promiscuous mode
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.376 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:f8:55 10.100.0.4'], port_security=['fa:16:3e:ec:f8:55 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '48af02cd-94c5-473f-a6f9-4d2caad8483f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-621dd092-e20a-432f-8488-41d7fcd69532', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fffaaec8-1dee-4e16-9a50-50b2fc979aa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=586feb4c-523c-413f-8bd3-6bc87edbdf4c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8e5490c3-8e77-4f49-a612-31f17e0a3586) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:45:30 np0005532048 NetworkManager[48920]: <info>  [1763804730.3789] device (tap1010674e-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.383 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc14a8a-59e6-4b3c-8958-7b0fd88b9adc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.385 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.385 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2b09daa8-91bf-4b72-8360-1cc68048e388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:30Z|01523|binding|INFO|Releasing lport 1010674e-1b87-43cb-97bd-6bca4325a7f9 from this chassis (sb_readonly=0)
Nov 22 04:45:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:30Z|01524|binding|INFO|Setting lport 1010674e-1b87-43cb-97bd-6bca4325a7f9 down in Southbound
Nov 22 04:45:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:30Z|01525|binding|INFO|Removing iface tap1010674e-1b ovn-installed in OVS
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.402 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.404 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:03:fd 2001:db8::f816:3eff:fee5:3fd'], port_security=['fa:16:3e:e5:03:fd 2001:db8::f816:3eff:fee5:3fd'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee5:3fd/64', 'neutron:device_id': '48af02cd-94c5-473f-a6f9-4d2caad8483f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fffaaec8-1dee-4e16-9a50-50b2fc979aa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7556820e-db50-4efa-817c-86d63f0b8b71, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1010674e-1b87-43cb-97bd-6bca4325a7f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:45:30 np0005532048 NetworkManager[48920]: <info>  [1763804730.4147] device (tap32b06b6f-20): carrier: link connected
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.420 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4d273153-1f60-4959-a088-ea81190f5a5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.427 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000088.scope: Deactivated successfully.
Nov 22 04:45:30 np0005532048 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000088.scope: Consumed 15.873s CPU time.
Nov 22 04:45:30 np0005532048 systemd-machined[215941]: Machine qemu-167-instance-00000088 terminated.
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.440 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa7d72c-dd0d-4057-a67b-64b01f897067]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32b06b6f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:46:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758115, 'reachable_time': 17068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396489, 'error': None, 'target': 'ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.455 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[23368851-e2f9-499b-9cb4-a1018ae5ada1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:464b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758115, 'tstamp': 758115}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396491, 'error': None, 'target': 'ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.471 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4429b106-41aa-4cdd-bcdb-ab6d97f0f082]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32b06b6f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:46:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758115, 'reachable_time': 17068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 396492, 'error': None, 'target': 'ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.503 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[739fe10d-4bc9-4992-979f-3392a8652d9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:30 np0005532048 NetworkManager[48920]: <info>  [1763804730.5066] manager: (tap8e5490c3-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/621)
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.514 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.534 253665 INFO nova.virt.libvirt.driver [-] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Instance destroyed successfully.#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.534 253665 DEBUG nova.objects.instance [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 48af02cd-94c5-473f-a6f9-4d2caad8483f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.549 253665 DEBUG nova.virt.libvirt.vif [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:43:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1350591661',display_name='tempest-TestGettingAddress-server-1350591661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1350591661',id=136,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:44:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-002uioix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:44:17Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=48af02cd-94c5-473f-a6f9-4d2caad8483f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "address": "fa:16:3e:ec:f8:55", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e5490c3-8e", "ovs_interfaceid": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.549 253665 DEBUG nova.network.os_vif_util [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "address": "fa:16:3e:ec:f8:55", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e5490c3-8e", "ovs_interfaceid": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.550 253665 DEBUG nova.network.os_vif_util [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:f8:55,bridge_name='br-int',has_traffic_filtering=True,id=8e5490c3-8e77-4f49-a612-31f17e0a3586,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e5490c3-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.550 253665 DEBUG os_vif [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:f8:55,bridge_name='br-int',has_traffic_filtering=True,id=8e5490c3-8e77-4f49-a612-31f17e0a3586,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e5490c3-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.552 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e5490c3-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.553 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.556 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.559 253665 INFO os_vif [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:f8:55,bridge_name='br-int',has_traffic_filtering=True,id=8e5490c3-8e77-4f49-a612-31f17e0a3586,network=Network(621dd092-e20a-432f-8488-41d7fcd69532),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e5490c3-8e')#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.561 253665 DEBUG nova.virt.libvirt.vif [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:43:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1350591661',display_name='tempest-TestGettingAddress-server-1350591661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1350591661',id=136,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB7KT6FM/eX2C8VJWcFTS+ctOwuksvajEhYgZZglXe2OYMEsygDuc9AwLPcAAKz0jmJOu+gLFZHLAGzCVVDUltE/K61B9A8dvcRKxsKjfuYVPb0DbzptQUCSjkum26i4Mw==',key_name='tempest-TestGettingAddress-358778407',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:44:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-002uioix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:44:17Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=48af02cd-94c5-473f-a6f9-4d2caad8483f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "address": "fa:16:3e:e5:03:fd", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:3fd", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1010674e-1b", "ovs_interfaceid": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.561 253665 DEBUG nova.network.os_vif_util [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "address": "fa:16:3e:e5:03:fd", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:3fd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1010674e-1b", "ovs_interfaceid": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.562 253665 DEBUG nova.network.os_vif_util [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:03:fd,bridge_name='br-int',has_traffic_filtering=True,id=1010674e-1b87-43cb-97bd-6bca4325a7f9,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1010674e-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.562 253665 DEBUG os_vif [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:03:fd,bridge_name='br-int',has_traffic_filtering=True,id=1010674e-1b87-43cb-97bd-6bca4325a7f9,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1010674e-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.563 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.564 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1010674e-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.565 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.566 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.568 253665 INFO os_vif [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:03:fd,bridge_name='br-int',has_traffic_filtering=True,id=1010674e-1b87-43cb-97bd-6bca4325a7f9,network=Network(7a504de2-27b2-4d01-a183-d9b0331ca31e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1010674e-1b')#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.598 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[68d0c5b8-0c65-4f03-b975-fa98b1dd6dd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.600 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32b06b6f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.600 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.600 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32b06b6f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:30 np0005532048 NetworkManager[48920]: <info>  [1763804730.6032] manager: (tap32b06b6f-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/622)
Nov 22 04:45:30 np0005532048 kernel: tap32b06b6f-20: entered promiscuous mode
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.607 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32b06b6f-20, col_values=(('external_ids', {'iface-id': 'acb44b8b-e586-4d56-8c91-42b393fbe8ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.608 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:30Z|01526|binding|INFO|Releasing lport acb44b8b-e586-4d56-8c91-42b393fbe8ed from this chassis (sb_readonly=0)
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.625 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/32b06b6f-2dbe-45a6-a0ed-07f342aa967b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/32b06b6f-2dbe-45a6-a0ed-07f342aa967b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.625 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.626 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9d78bde6-821a-4de7-ad88-f5c46e568ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.628 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-32b06b6f-2dbe-45a6-a0ed-07f342aa967b
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/32b06b6f-2dbe-45a6-a0ed-07f342aa967b.pid.haproxy
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 32b06b6f-2dbe-45a6-a0ed-07f342aa967b
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:45:30 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:30.628 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b', 'env', 'PROCESS_TAG=haproxy-32b06b6f-2dbe-45a6-a0ed-07f342aa967b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/32b06b6f-2dbe-45a6-a0ed-07f342aa967b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:45:30 np0005532048 podman[396552]: 2025-11-22 09:45:30.639202689 +0000 UTC m=+0.067838226 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:45:30 np0005532048 podman[396549]: 2025-11-22 09:45:30.639206279 +0000 UTC m=+0.069188390 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.698 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804730.6985383, e1b6c07e-b79f-4b39-a2b8-a952e54f4972 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.699 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] VM Started (Lifecycle Event)#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.725 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.729 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804730.6986473, e1b6c07e-b79f-4b39-a2b8-a952e54f4972 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.729 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.757 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.760 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:45:30 np0005532048 nova_compute[253661]: 2025-11-22 09:45:30.783 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.020 253665 INFO nova.virt.libvirt.driver [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Deleting instance files /var/lib/nova/instances/48af02cd-94c5-473f-a6f9-4d2caad8483f_del#033[00m
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.021 253665 INFO nova.virt.libvirt.driver [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Deletion of /var/lib/nova/instances/48af02cd-94c5-473f-a6f9-4d2caad8483f_del complete#033[00m
Nov 22 04:45:31 np0005532048 podman[396642]: 2025-11-22 09:45:31.0333084 +0000 UTC m=+0.072436669 container create 990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:45:31 np0005532048 systemd[1]: Started libpod-conmon-990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322.scope.
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.077 253665 INFO nova.compute.manager [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.078 253665 DEBUG oslo.service.loopingcall [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.078 253665 DEBUG nova.compute.manager [-] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.078 253665 DEBUG nova.network.neutron [-] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:45:31 np0005532048 podman[396642]: 2025-11-22 09:45:30.994566643 +0000 UTC m=+0.033694942 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:45:31 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:45:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05d1c1e0ba78584dff2dec95748fc513d262bc386f559cea8bbfce0b30478ef5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:45:31 np0005532048 podman[396642]: 2025-11-22 09:45:31.14385218 +0000 UTC m=+0.182980489 container init 990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:45:31 np0005532048 podman[396642]: 2025-11-22 09:45:31.150558186 +0000 UTC m=+0.189686465 container start 990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 04:45:31 np0005532048 neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b[396657]: [NOTICE]   (396661) : New worker (396663) forked
Nov 22 04:45:31 np0005532048 neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b[396657]: [NOTICE]   (396661) : Loading success.
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.213 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8e5490c3-8e77-4f49-a612-31f17e0a3586 in datapath 621dd092-e20a-432f-8488-41d7fcd69532 unbound from our chassis#033[00m
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.215 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 621dd092-e20a-432f-8488-41d7fcd69532, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.217 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2824d233-183c-41ff-b85e-54d46134d9a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.217 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532 namespace which is not needed anymore#033[00m
Nov 22 04:45:31 np0005532048 neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532[393100]: [NOTICE]   (393104) : haproxy version is 2.8.14-c23fe91
Nov 22 04:45:31 np0005532048 neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532[393100]: [NOTICE]   (393104) : path to executable is /usr/sbin/haproxy
Nov 22 04:45:31 np0005532048 neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532[393100]: [WARNING]  (393104) : Exiting Master process...
Nov 22 04:45:31 np0005532048 neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532[393100]: [ALERT]    (393104) : Current worker (393106) exited with code 143 (Terminated)
Nov 22 04:45:31 np0005532048 neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532[393100]: [WARNING]  (393104) : All workers exited. Exiting... (0)
Nov 22 04:45:31 np0005532048 systemd[1]: libpod-fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6.scope: Deactivated successfully.
Nov 22 04:45:31 np0005532048 conmon[393100]: conmon fd2df0d2ca6f704d4075 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6.scope/container/memory.events
Nov 22 04:45:31 np0005532048 podman[396689]: 2025-11-22 09:45:31.400889998 +0000 UTC m=+0.068059852 container died fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 04:45:31 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6-userdata-shm.mount: Deactivated successfully.
Nov 22 04:45:31 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0ec57aa6059e798a6228ab70569192414cfc2ac4701797ee13c0ca77e6733e50-merged.mount: Deactivated successfully.
Nov 22 04:45:31 np0005532048 podman[396689]: 2025-11-22 09:45:31.464685362 +0000 UTC m=+0.131855216 container cleanup fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:45:31 np0005532048 systemd[1]: libpod-conmon-fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6.scope: Deactivated successfully.
Nov 22 04:45:31 np0005532048 podman[396720]: 2025-11-22 09:45:31.526536079 +0000 UTC m=+0.040243104 container remove fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.532 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cf6844d2-7baf-45bc-99f0-e621545b495e]: (4, ('Sat Nov 22 09:45:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532 (fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6)\nfd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6\nSat Nov 22 09:45:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532 (fd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6)\nfd2df0d2ca6f704d4075dec0f1d59718717b9057e5406ce5bc16061e26c947c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.533 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e72be594-9926-4ce2-ac24-74ff0930ff51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.534 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap621dd092-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:31 np0005532048 kernel: tap621dd092-e0: left promiscuous mode
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.536 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.551 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.555 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2198c3-7d18-4a3e-adb5-a2a9090c52dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.573 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e95be6e1-919a-40f7-b03c-c65523a87eea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.574 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[60d6ab1c-c97b-4781-9641-4f75515f923c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.590 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c83c410e-fea1-4a60-877b-fd7aa2e8a25a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750275, 'reachable_time': 16057, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396735, 'error': None, 'target': 'ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.593 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-621dd092-e20a-432f-8488-41d7fcd69532 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.593 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[4db22d86-a9cd-422d-93f8-95a67e8763c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.594 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1010674e-1b87-43cb-97bd-6bca4325a7f9 in datapath 7a504de2-27b2-4d01-a183-d9b0331ca31e unbound from our chassis#033[00m
Nov 22 04:45:31 np0005532048 systemd[1]: run-netns-ovnmeta\x2d621dd092\x2de20a\x2d432f\x2d8488\x2d41d7fcd69532.mount: Deactivated successfully.
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.596 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7a504de2-27b2-4d01-a183-d9b0331ca31e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.597 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa74460-e1d2-4f46-924d-d093a5087c60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.597 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e namespace which is not needed anymore#033[00m
Nov 22 04:45:31 np0005532048 neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e[393170]: [NOTICE]   (393174) : haproxy version is 2.8.14-c23fe91
Nov 22 04:45:31 np0005532048 neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e[393170]: [NOTICE]   (393174) : path to executable is /usr/sbin/haproxy
Nov 22 04:45:31 np0005532048 neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e[393170]: [WARNING]  (393174) : Exiting Master process...
Nov 22 04:45:31 np0005532048 neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e[393170]: [ALERT]    (393174) : Current worker (393176) exited with code 143 (Terminated)
Nov 22 04:45:31 np0005532048 neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e[393170]: [WARNING]  (393174) : All workers exited. Exiting... (0)
Nov 22 04:45:31 np0005532048 systemd[1]: libpod-f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7.scope: Deactivated successfully.
Nov 22 04:45:31 np0005532048 podman[396752]: 2025-11-22 09:45:31.736040923 +0000 UTC m=+0.043467364 container died f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.749 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-changed-8e5490c3-8e77-4f49-a612-31f17e0a3586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.750 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Refreshing instance network info cache due to event network-changed-8e5490c3-8e77-4f49-a612-31f17e0a3586. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.750 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.750 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.751 253665 DEBUG nova.network.neutron [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Refreshing network info cache for port 8e5490c3-8e77-4f49-a612-31f17e0a3586 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:45:31 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7-userdata-shm.mount: Deactivated successfully.
Nov 22 04:45:31 np0005532048 systemd[1]: var-lib-containers-storage-overlay-29096a98af37d370bc2562ae093d356474f4646e4103cb41d2d7d5290f21d167-merged.mount: Deactivated successfully.
Nov 22 04:45:31 np0005532048 podman[396752]: 2025-11-22 09:45:31.773930159 +0000 UTC m=+0.081356600 container cleanup f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:45:31 np0005532048 systemd[1]: libpod-conmon-f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7.scope: Deactivated successfully.
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.885 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.888 253665 DEBUG nova.compute.manager [req-4563dcd1-701d-4572-9f68-9e7a64ef6692 req-227bf369-557b-4d36-bedc-71c2c8b8f4a5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-deleted-1010674e-1b87-43cb-97bd-6bca4325a7f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.889 253665 INFO nova.compute.manager [req-4563dcd1-701d-4572-9f68-9e7a64ef6692 req-227bf369-557b-4d36-bedc-71c2c8b8f4a5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Neutron deleted interface 1010674e-1b87-43cb-97bd-6bca4325a7f9; detaching it from the instance and deleting it from the info cache#033[00m
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.889 253665 DEBUG nova.network.neutron [req-4563dcd1-701d-4572-9f68-9e7a64ef6692 req-227bf369-557b-4d36-bedc-71c2c8b8f4a5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Updating instance_info_cache with network_info: [{"id": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "address": "fa:16:3e:ec:f8:55", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e5490c3-8e", "ovs_interfaceid": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.912 253665 DEBUG nova.compute.manager [req-4563dcd1-701d-4572-9f68-9e7a64ef6692 req-227bf369-557b-4d36-bedc-71c2c8b8f4a5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Detach interface failed, port_id=1010674e-1b87-43cb-97bd-6bca4325a7f9, reason: Instance 48af02cd-94c5-473f-a6f9-4d2caad8483f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 22 04:45:31 np0005532048 podman[396781]: 2025-11-22 09:45:31.987476302 +0000 UTC m=+0.195388386 container remove f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.992 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6585ce0a-e6f4-433c-b879-baf2fbdf0e44]: (4, ('Sat Nov 22 09:45:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e (f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7)\nf1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7\nSat Nov 22 09:45:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e (f1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7)\nf1f6f2ed20c0dc26fdbcae40db0bbe720226c81da15e781da757ddd670a42eb7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.994 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d364bcf5-fc3a-4b2b-9783-f4927635e458]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:31.995 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a504de2-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:31 np0005532048 kernel: tap7a504de2-20: left promiscuous mode
Nov 22 04:45:31 np0005532048 nova_compute[253661]: 2025-11-22 09:45:31.997 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:32 np0005532048 nova_compute[253661]: 2025-11-22 09:45:32.011 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:32.016 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb3b3b0-fb08-451d-8f42-cc80689c8544]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:32.041 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[235e1442-49a6-4f01-a297-ba5ce4c05571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:32.042 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[32f22b5b-bfd8-4551-8c31-deec5bfc7293]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:32.059 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f2a937-61aa-4a15-a36d-7fca02547ab4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 750368, 'reachable_time': 37441, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396797, 'error': None, 'target': 'ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:32.061 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7a504de2-27b2-4d01-a183-d9b0331ca31e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:45:32 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:32.061 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[37a134ef-49cb-4467-8006-0d4c7cae973d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 303 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 444 KiB/s rd, 3.9 MiB/s wr, 197 op/s
Nov 22 04:45:32 np0005532048 systemd[1]: run-netns-ovnmeta\x2d7a504de2\x2d27b2\x2d4d01\x2da183\x2dd9b0331ca31e.mount: Deactivated successfully.
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.049 253665 DEBUG nova.network.neutron [-] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.070 253665 INFO nova.compute.manager [-] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Took 1.99 seconds to deallocate network for instance.#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.120 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.120 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.242 253665 DEBUG oslo_concurrency.processutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.278 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.278 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.279 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.279 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.279 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.281 253665 INFO nova.compute.manager [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Terminating instance#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.281 253665 DEBUG nova.compute.manager [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:45:33 np0005532048 kernel: tapf0192978-09 (unregistering): left promiscuous mode
Nov 22 04:45:33 np0005532048 NetworkManager[48920]: <info>  [1763804733.3894] device (tapf0192978-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:45:33 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:33Z|01527|binding|INFO|Releasing lport f0192978-0953-4171-b70f-7f21bd6af5a0 from this chassis (sb_readonly=0)
Nov 22 04:45:33 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:33Z|01528|binding|INFO|Setting lport f0192978-0953-4171-b70f-7f21bd6af5a0 down in Southbound
Nov 22 04:45:33 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:33Z|01529|binding|INFO|Removing iface tapf0192978-09 ovn-installed in OVS
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.398 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.412 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:98:16 10.100.0.9'], port_security=['fa:16:3e:73:98:16 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ed3583b5-6d93-4e3f-83e0-3b36f25f08f1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a734f39d-baf0-4591-94dc-9057caf53bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '5', 'neutron:security_group_ids': '049b93fb-a2d2-4853-99c3-bef4a7dfe745', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ce1fe74-6934-45b2-a6d9-4702f1b2307a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=f0192978-0953-4171-b70f-7f21bd6af5a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:45:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.413 162862 INFO neutron.agent.ovn.metadata.agent [-] Port f0192978-0953-4171-b70f-7f21bd6af5a0 in datapath a734f39d-baf0-4591-94dc-9057caf53bb4 unbound from our chassis#033[00m
Nov 22 04:45:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.415 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a734f39d-baf0-4591-94dc-9057caf53bb4#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.432 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.434 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5967577e-899d-48ac-af32-8ac4fd5b6e0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:33 np0005532048 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Nov 22 04:45:33 np0005532048 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008c.scope: Consumed 13.562s CPU time.
Nov 22 04:45:33 np0005532048 systemd-machined[215941]: Machine qemu-171-instance-0000008c terminated.
Nov 22 04:45:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.467 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[fe464810-656b-49d3-a608-855f461dd046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.471 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[cc6dfdbf-1380-4f5e-a7c5-775f3ec3ca68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.505 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[af40aa40-96bd-45d9-a621-fe2a1a4f5e00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.525 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[037f1221-0aea-4c27-b048-562669b46c6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa734f39d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:4f:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752629, 'reachable_time': 34911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396835, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.528 253665 INFO nova.virt.libvirt.driver [-] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Instance destroyed successfully.#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.529 253665 DEBUG nova.objects.instance [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'resources' on Instance uuid ed3583b5-6d93-4e3f-83e0-3b36f25f08f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.547 253665 DEBUG nova.virt.libvirt.vif [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:45:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-2008595118',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-gen-1-2008595118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-gen',id=140,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBOwRQlDAdo+g60Ps/HwU/VMS64eGZhSkvI6bOPavIrg+ELfIh5TkgiKpEGXEdq5ORKgO91xQXWepwxlqtHh67VkaK6Xf3kHKOB8vlHPEMg4W1PVvZy7W3qb1i+rXVHWpw==',key_name='tempest-TestSecurityGroupsBasicOps-584634060',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:45:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-h3590ebr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:45:11Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=ed3583b5-6d93-4e3f-83e0-3b36f25f08f1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.548 253665 DEBUG nova.network.os_vif_util [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "f0192978-0953-4171-b70f-7f21bd6af5a0", "address": "fa:16:3e:73:98:16", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0192978-09", "ovs_interfaceid": "f0192978-0953-4171-b70f-7f21bd6af5a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:45:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.547 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a610acb8-32df-4903-9996-6e3a7306eda0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa734f39d-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752643, 'tstamp': 752643}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396841, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa734f39d-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752646, 'tstamp': 752646}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396841, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.549 253665 DEBUG nova.network.os_vif_util [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:73:98:16,bridge_name='br-int',has_traffic_filtering=True,id=f0192978-0953-4171-b70f-7f21bd6af5a0,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0192978-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.549 253665 DEBUG os_vif [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:98:16,bridge_name='br-int',has_traffic_filtering=True,id=f0192978-0953-4171-b70f-7f21bd6af5a0,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0192978-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:45:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.549 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa734f39d-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.551 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.552 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0192978-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.555 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:45:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.555 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa734f39d-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.555 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:45:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.556 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa734f39d-b0, col_values=(('external_ids', {'iface-id': '3db82a3e-3c50-4f8e-b5b4-8b4657d60723'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.556 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:33 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:33.556 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.558 253665 INFO os_vif [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:98:16,bridge_name='br-int',has_traffic_filtering=True,id=f0192978-0953-4171-b70f-7f21bd6af5a0,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0192978-09')#033[00m
Nov 22 04:45:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:45:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3329337882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.716 253665 DEBUG oslo_concurrency.processutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.722 253665 DEBUG nova.compute.provider_tree [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.738 253665 DEBUG nova.scheduler.client.report [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.771 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.809 253665 INFO nova.scheduler.client.report [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 48af02cd-94c5-473f-a6f9-4d2caad8483f#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.863 253665 DEBUG nova.compute.manager [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-plugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.864 253665 DEBUG oslo_concurrency.lockutils [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.864 253665 DEBUG oslo_concurrency.lockutils [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.864 253665 DEBUG oslo_concurrency.lockutils [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.864 253665 DEBUG nova.compute.manager [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] No waiting events found dispatching network-vif-plugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.865 253665 WARNING nova.compute.manager [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received unexpected event network-vif-plugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.865 253665 DEBUG nova.compute.manager [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received event network-vif-unplugged-f0192978-0953-4171-b70f-7f21bd6af5a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.865 253665 DEBUG oslo_concurrency.lockutils [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.865 253665 DEBUG oslo_concurrency.lockutils [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.866 253665 DEBUG oslo_concurrency.lockutils [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.866 253665 DEBUG nova.compute.manager [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] No waiting events found dispatching network-vif-unplugged-f0192978-0953-4171-b70f-7f21bd6af5a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.866 253665 DEBUG nova.compute.manager [req-70c915e5-f724-4723-aefd-4bb912174bb8 req-70dc0f2a-8477-459d-8e6a-3f79d40bdd22 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received event network-vif-unplugged-f0192978-0953-4171-b70f-7f21bd6af5a0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:45:33 np0005532048 nova_compute[253661]: 2025-11-22 09:45:33.882 253665 DEBUG oslo_concurrency.lockutils [None req-4cd947bb-8323-41dd-8eab-2c7871a048de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.002 253665 DEBUG nova.compute.manager [req-ceb51883-689c-4644-b50d-103bbebb8806 req-3d5e576c-2b43-4224-9bd4-b5fc32a1252f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-deleted-8e5490c3-8e77-4f49-a612-31f17e0a3586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.099 253665 INFO nova.virt.libvirt.driver [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Deleting instance files /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_del#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.100 253665 INFO nova.virt.libvirt.driver [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Deletion of /var/lib/nova/instances/ed3583b5-6d93-4e3f-83e0-3b36f25f08f1_del complete#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.146 253665 INFO nova.compute.manager [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.147 253665 DEBUG oslo.service.loopingcall [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.147 253665 DEBUG nova.compute.manager [-] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.147 253665 DEBUG nova.network.neutron [-] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:45:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 196 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 464 KiB/s rd, 3.9 MiB/s wr, 226 op/s
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.423 253665 DEBUG nova.network.neutron [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Updated VIF entry in instance network info cache for port 8e5490c3-8e77-4f49-a612-31f17e0a3586. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.424 253665 DEBUG nova.network.neutron [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Updating instance_info_cache with network_info: [{"id": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "address": "fa:16:3e:ec:f8:55", "network": {"id": "621dd092-e20a-432f-8488-41d7fcd69532", "bridge": "br-int", "label": "tempest-network-smoke--1349909520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e5490c3-8e", "ovs_interfaceid": "8e5490c3-8e77-4f49-a612-31f17e0a3586", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "address": "fa:16:3e:e5:03:fd", "network": {"id": "7a504de2-27b2-4d01-a183-d9b0331ca31e", "bridge": "br-int", "label": "tempest-network-smoke--648653697", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee5:3fd", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1010674e-1b", "ovs_interfaceid": "1010674e-1b87-43cb-97bd-6bca4325a7f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.437 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-48af02cd-94c5-473f-a6f9-4d2caad8483f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.437 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received event network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.437 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.438 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.438 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.438 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Processing event network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.438 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received event network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.438 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.439 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.439 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.439 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] No waiting events found dispatching network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.439 253665 WARNING nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received unexpected event network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e for instance with vm_state building and task_state spawning.#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.439 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-unplugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.440 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.440 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.440 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.440 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] No waiting events found dispatching network-vif-unplugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.440 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-unplugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.440 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-plugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.441 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.441 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.441 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.441 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] No waiting events found dispatching network-vif-plugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.441 253665 WARNING nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received unexpected event network-vif-plugged-8e5490c3-8e77-4f49-a612-31f17e0a3586 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.442 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-unplugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.442 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.442 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.442 253665 DEBUG oslo_concurrency.lockutils [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "48af02cd-94c5-473f-a6f9-4d2caad8483f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.442 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] No waiting events found dispatching network-vif-unplugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.443 253665 DEBUG nova.compute.manager [req-4694f3e2-1e57-48f4-8b88-a2fc8a7131ec req-ed5b0059-620e-42e6-9821-91e63e9eb410 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Received event network-vif-unplugged-1010674e-1b87-43cb-97bd-6bca4325a7f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.443 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.448 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804734.4478645, e1b6c07e-b79f-4b39-a2b8-a952e54f4972 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.448 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.450 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.453 253665 INFO nova.virt.libvirt.driver [-] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Instance spawned successfully.#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.454 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.464 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.472 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.474 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.475 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.475 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.475 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.476 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.476 253665 DEBUG nova.virt.libvirt.driver [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.521 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.553 253665 INFO nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Took 11.03 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.553 253665 DEBUG nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.608 253665 INFO nova.compute.manager [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Took 12.31 seconds to build instance.#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.624 253665 DEBUG oslo_concurrency.lockutils [None req-40652b6b-b776-4ba1-a893-76b5470fd3af 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.754 253665 DEBUG nova.network.neutron [-] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.775 253665 INFO nova.compute.manager [-] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Took 0.63 seconds to deallocate network for instance.#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.824 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.825 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:34 np0005532048 nova_compute[253661]: 2025-11-22 09:45:34.919 253665 DEBUG oslo_concurrency.processutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:45:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:45:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1402981110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:45:35 np0005532048 nova_compute[253661]: 2025-11-22 09:45:35.380 253665 DEBUG oslo_concurrency.processutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:45:35 np0005532048 nova_compute[253661]: 2025-11-22 09:45:35.387 253665 DEBUG nova.compute.provider_tree [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:45:35 np0005532048 nova_compute[253661]: 2025-11-22 09:45:35.403 253665 DEBUG nova.scheduler.client.report [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:45:35 np0005532048 nova_compute[253661]: 2025-11-22 09:45:35.427 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:35 np0005532048 nova_compute[253661]: 2025-11-22 09:45:35.466 253665 INFO nova.scheduler.client.report [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Deleted allocations for instance ed3583b5-6d93-4e3f-83e0-3b36f25f08f1#033[00m
Nov 22 04:45:35 np0005532048 nova_compute[253661]: 2025-11-22 09:45:35.534 253665 DEBUG oslo_concurrency.lockutils [None req-e9f0b2ac-9438-4dae-8715-715feaaae3f1 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:35 np0005532048 nova_compute[253661]: 2025-11-22 09:45:35.946 253665 DEBUG nova.compute.manager [req-3739bab9-d90c-452d-ab27-76d20061a4cf req-f2a7cebd-1867-4d9a-9f3d-9e10ac5b32db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received event network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:35 np0005532048 nova_compute[253661]: 2025-11-22 09:45:35.946 253665 DEBUG oslo_concurrency.lockutils [req-3739bab9-d90c-452d-ab27-76d20061a4cf req-f2a7cebd-1867-4d9a-9f3d-9e10ac5b32db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:35 np0005532048 nova_compute[253661]: 2025-11-22 09:45:35.947 253665 DEBUG oslo_concurrency.lockutils [req-3739bab9-d90c-452d-ab27-76d20061a4cf req-f2a7cebd-1867-4d9a-9f3d-9e10ac5b32db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:35 np0005532048 nova_compute[253661]: 2025-11-22 09:45:35.947 253665 DEBUG oslo_concurrency.lockutils [req-3739bab9-d90c-452d-ab27-76d20061a4cf req-f2a7cebd-1867-4d9a-9f3d-9e10ac5b32db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "ed3583b5-6d93-4e3f-83e0-3b36f25f08f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:35 np0005532048 nova_compute[253661]: 2025-11-22 09:45:35.947 253665 DEBUG nova.compute.manager [req-3739bab9-d90c-452d-ab27-76d20061a4cf req-f2a7cebd-1867-4d9a-9f3d-9e10ac5b32db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] No waiting events found dispatching network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:45:35 np0005532048 nova_compute[253661]: 2025-11-22 09:45:35.947 253665 WARNING nova.compute.manager [req-3739bab9-d90c-452d-ab27-76d20061a4cf req-f2a7cebd-1867-4d9a-9f3d-9e10ac5b32db 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received unexpected event network-vif-plugged-f0192978-0953-4171-b70f-7f21bd6af5a0 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:45:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 196 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 418 KiB/s rd, 3.9 MiB/s wr, 209 op/s
Nov 22 04:45:36 np0005532048 nova_compute[253661]: 2025-11-22 09:45:36.218 253665 DEBUG nova.compute.manager [req-ef57665d-fcb2-4098-a1a3-0f6fc623c93e req-263fd912-2d84-48df-8598-67aa05ab31cc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Received event network-vif-deleted-f0192978-0953-4171-b70f-7f21bd6af5a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:36.332 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:45:36 np0005532048 nova_compute[253661]: 2025-11-22 09:45:36.333 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:36.334 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:45:36 np0005532048 nova_compute[253661]: 2025-11-22 09:45:36.887 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:37 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:37.336 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:37 np0005532048 podman[396886]: 2025-11-22 09:45:37.40516126 +0000 UTC m=+0.089825369 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.153 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804723.1486976, b4a045a0-0a46-4644-8d2e-9ec4a6d893b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.153 253665 INFO nova.compute.manager [-] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:45:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 178 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 474 KiB/s rd, 3.9 MiB/s wr, 226 op/s
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.183 253665 DEBUG nova.compute.manager [None req-a2738896-3169-4a6c-bcdb-82b1e82b86c4 - - - - - -] [instance: b4a045a0-0a46-4644-8d2e-9ec4a6d893b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.380 253665 DEBUG nova.compute.manager [req-eeb58b5f-8e5a-4db6-a17d-c65e817961f5 req-6fa8ba6a-3260-451f-ac6f-dbfe587fd08d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received event network-changed-491b9f04-4133-4553-a044-0dffe6278421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.380 253665 DEBUG nova.compute.manager [req-eeb58b5f-8e5a-4db6-a17d-c65e817961f5 req-6fa8ba6a-3260-451f-ac6f-dbfe587fd08d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Refreshing instance network info cache due to event network-changed-491b9f04-4133-4553-a044-0dffe6278421. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.380 253665 DEBUG oslo_concurrency.lockutils [req-eeb58b5f-8e5a-4db6-a17d-c65e817961f5 req-6fa8ba6a-3260-451f-ac6f-dbfe587fd08d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.381 253665 DEBUG oslo_concurrency.lockutils [req-eeb58b5f-8e5a-4db6-a17d-c65e817961f5 req-6fa8ba6a-3260-451f-ac6f-dbfe587fd08d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.381 253665 DEBUG nova.network.neutron [req-eeb58b5f-8e5a-4db6-a17d-c65e817961f5 req-6fa8ba6a-3260-451f-ac6f-dbfe587fd08d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Refreshing network info cache for port 491b9f04-4133-4553-a044-0dffe6278421 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.582 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.582 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.582 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.583 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.583 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.584 253665 INFO nova.compute.manager [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Terminating instance#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.585 253665 DEBUG nova.compute.manager [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:45:38 np0005532048 kernel: tap491b9f04-41 (unregistering): left promiscuous mode
Nov 22 04:45:38 np0005532048 NetworkManager[48920]: <info>  [1763804738.6381] device (tap491b9f04-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:45:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:38Z|01530|binding|INFO|Releasing lport 491b9f04-4133-4553-a044-0dffe6278421 from this chassis (sb_readonly=0)
Nov 22 04:45:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:38Z|01531|binding|INFO|Setting lport 491b9f04-4133-4553-a044-0dffe6278421 down in Southbound
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.648 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:38Z|01532|binding|INFO|Removing iface tap491b9f04-41 ovn-installed in OVS
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.663 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:38 np0005532048 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Nov 22 04:45:38 np0005532048 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d0000008a.scope: Consumed 15.607s CPU time.
Nov 22 04:45:38 np0005532048 systemd-machined[215941]: Machine qemu-169-instance-0000008a terminated.
Nov 22 04:45:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:38.710 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:7d:61 10.100.0.11'], port_security=['fa:16:3e:a8:7d:61 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '71ef7514-c6bd-40ee-852a-4b850ca0a05c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a734f39d-baf0-4591-94dc-9057caf53bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '46d50d652376434585e9da83e40f96bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c524ade6-1430-48f4-af9a-629e8a61db96 d6471b4e-7bc5-407e-a8cc-88aa50b6222f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ce1fe74-6934-45b2-a6d9-4702f1b2307a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=491b9f04-4133-4553-a044-0dffe6278421) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:45:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:38.711 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 491b9f04-4133-4553-a044-0dffe6278421 in datapath a734f39d-baf0-4591-94dc-9057caf53bb4 unbound from our chassis#033[00m
Nov 22 04:45:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:38.713 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a734f39d-baf0-4591-94dc-9057caf53bb4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:45:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:38.714 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f388ac8f-b2a4-4740-bed3-f333cd25cecc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:38.714 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4 namespace which is not needed anymore#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.816 253665 INFO nova.virt.libvirt.driver [-] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Instance destroyed successfully.#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.817 253665 DEBUG nova.objects.instance [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lazy-loading 'resources' on Instance uuid 71ef7514-c6bd-40ee-852a-4b850ca0a05c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.827 253665 DEBUG nova.virt.libvirt.vif [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:44:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-211837653',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-488258979-access_point-211837653',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-488258979-acc',id=138,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBOwRQlDAdo+g60Ps/HwU/VMS64eGZhSkvI6bOPavIrg+ELfIh5TkgiKpEGXEdq5ORKgO91xQXWepwxlqtHh67VkaK6Xf3kHKOB8vlHPEMg4W1PVvZy7W3qb1i+rXVHWpw==',key_name='tempest-TestSecurityGroupsBasicOps-584634060',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:44:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='46d50d652376434585e9da83e40f96bb',ramdisk_id='',reservation_id='r-97r64zcs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-488258979',owner_user_name='tempest-TestSecurityGroupsBasicOps-488258979-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:44:36Z,user_data=None,user_id='4993d04ad8774a15825d4bea194cd1ca',uuid=71ef7514-c6bd-40ee-852a-4b850ca0a05c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.827 253665 DEBUG nova.network.os_vif_util [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converting VIF {"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.828 253665 DEBUG nova.network.os_vif_util [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:7d:61,bridge_name='br-int',has_traffic_filtering=True,id=491b9f04-4133-4553-a044-0dffe6278421,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap491b9f04-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.829 253665 DEBUG os_vif [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:7d:61,bridge_name='br-int',has_traffic_filtering=True,id=491b9f04-4133-4553-a044-0dffe6278421,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap491b9f04-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.830 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.830 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap491b9f04-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.832 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.833 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:38 np0005532048 nova_compute[253661]: 2025-11-22 09:45:38.838 253665 INFO os_vif [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:7d:61,bridge_name='br-int',has_traffic_filtering=True,id=491b9f04-4133-4553-a044-0dffe6278421,network=Network(a734f39d-baf0-4591-94dc-9057caf53bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap491b9f04-41')#033[00m
Nov 22 04:45:38 np0005532048 neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4[394378]: [NOTICE]   (394397) : haproxy version is 2.8.14-c23fe91
Nov 22 04:45:38 np0005532048 neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4[394378]: [NOTICE]   (394397) : path to executable is /usr/sbin/haproxy
Nov 22 04:45:38 np0005532048 neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4[394378]: [WARNING]  (394397) : Exiting Master process...
Nov 22 04:45:38 np0005532048 neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4[394378]: [ALERT]    (394397) : Current worker (394405) exited with code 143 (Terminated)
Nov 22 04:45:38 np0005532048 neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4[394378]: [WARNING]  (394397) : All workers exited. Exiting... (0)
Nov 22 04:45:38 np0005532048 systemd[1]: libpod-e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852.scope: Deactivated successfully.
Nov 22 04:45:38 np0005532048 podman[396936]: 2025-11-22 09:45:38.87600112 +0000 UTC m=+0.064043703 container died e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:45:38 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852-userdata-shm.mount: Deactivated successfully.
Nov 22 04:45:38 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e778d95479f50b481d89a1bee1b1080db3f4d51fefff371a7f567798dd2494e6-merged.mount: Deactivated successfully.
Nov 22 04:45:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:45:38 np0005532048 podman[396936]: 2025-11-22 09:45:38.938555884 +0000 UTC m=+0.126598467 container cleanup e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:45:38 np0005532048 systemd[1]: libpod-conmon-e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852.scope: Deactivated successfully.
Nov 22 04:45:39 np0005532048 podman[396993]: 2025-11-22 09:45:39.023743038 +0000 UTC m=+0.057352658 container remove e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:45:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.033 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9a1965c7-0a74-443d-899d-7915b1c44f86]: (4, ('Sat Nov 22 09:45:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4 (e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852)\ne02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852\nSat Nov 22 09:45:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4 (e02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852)\ne02d4b4482ddf730f48b88f79ba299aab10840eeac3a335f3d02a1f69448a852\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.035 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dc657208-fb3a-477e-8fd2-f35b43fbf2fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.037 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa734f39d-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:45:39 np0005532048 nova_compute[253661]: 2025-11-22 09:45:39.087 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:39 np0005532048 kernel: tapa734f39d-b0: left promiscuous mode
Nov 22 04:45:39 np0005532048 nova_compute[253661]: 2025-11-22 09:45:39.093 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.097 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[99541157-c87c-495a-9f06-20d9efd5cde8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:39 np0005532048 nova_compute[253661]: 2025-11-22 09:45:39.112 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.116 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a099bd28-4423-469e-9432-f00464ee74a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.117 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[791be265-f481-4776-bdd0-da19db410be0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.138 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c527661e-1604-4eac-881e-64aed9c483ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752619, 'reachable_time': 39111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397008, 'error': None, 'target': 'ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:39 np0005532048 systemd[1]: run-netns-ovnmeta\x2da734f39d\x2dbaf0\x2d4591\x2d94dc\x2d9057caf53bb4.mount: Deactivated successfully.
Nov 22 04:45:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.144 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a734f39d-baf0-4591-94dc-9057caf53bb4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:45:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:39.144 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e48c2eda-19b0-44ff-bc0d-036dfc23f855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:39 np0005532048 nova_compute[253661]: 2025-11-22 09:45:39.355 253665 INFO nova.virt.libvirt.driver [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Deleting instance files /var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c_del#033[00m
Nov 22 04:45:39 np0005532048 nova_compute[253661]: 2025-11-22 09:45:39.357 253665 INFO nova.virt.libvirt.driver [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Deletion of /var/lib/nova/instances/71ef7514-c6bd-40ee-852a-4b850ca0a05c_del complete#033[00m
Nov 22 04:45:39 np0005532048 nova_compute[253661]: 2025-11-22 09:45:39.443 253665 INFO nova.compute.manager [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:45:39 np0005532048 nova_compute[253661]: 2025-11-22 09:45:39.445 253665 DEBUG oslo.service.loopingcall [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:45:39 np0005532048 nova_compute[253661]: 2025-11-22 09:45:39.445 253665 DEBUG nova.compute.manager [-] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:45:39 np0005532048 nova_compute[253661]: 2025-11-22 09:45:39.445 253665 DEBUG nova.network.neutron [-] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:45:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 255 op/s
Nov 22 04:45:40 np0005532048 nova_compute[253661]: 2025-11-22 09:45:40.506 253665 DEBUG nova.compute.manager [req-7d7e8530-ae1e-4834-8050-9f02fcdaa9ef req-f341ff2e-b145-4506-a2c6-9939faac2536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received event network-vif-unplugged-491b9f04-4133-4553-a044-0dffe6278421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:40 np0005532048 nova_compute[253661]: 2025-11-22 09:45:40.506 253665 DEBUG oslo_concurrency.lockutils [req-7d7e8530-ae1e-4834-8050-9f02fcdaa9ef req-f341ff2e-b145-4506-a2c6-9939faac2536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:40 np0005532048 nova_compute[253661]: 2025-11-22 09:45:40.507 253665 DEBUG oslo_concurrency.lockutils [req-7d7e8530-ae1e-4834-8050-9f02fcdaa9ef req-f341ff2e-b145-4506-a2c6-9939faac2536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:40 np0005532048 nova_compute[253661]: 2025-11-22 09:45:40.507 253665 DEBUG oslo_concurrency.lockutils [req-7d7e8530-ae1e-4834-8050-9f02fcdaa9ef req-f341ff2e-b145-4506-a2c6-9939faac2536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:40 np0005532048 nova_compute[253661]: 2025-11-22 09:45:40.507 253665 DEBUG nova.compute.manager [req-7d7e8530-ae1e-4834-8050-9f02fcdaa9ef req-f341ff2e-b145-4506-a2c6-9939faac2536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] No waiting events found dispatching network-vif-unplugged-491b9f04-4133-4553-a044-0dffe6278421 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:45:40 np0005532048 nova_compute[253661]: 2025-11-22 09:45:40.508 253665 DEBUG nova.compute.manager [req-7d7e8530-ae1e-4834-8050-9f02fcdaa9ef req-f341ff2e-b145-4506-a2c6-9939faac2536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received event network-vif-unplugged-491b9f04-4133-4553-a044-0dffe6278421 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:45:40 np0005532048 nova_compute[253661]: 2025-11-22 09:45:40.880 253665 DEBUG nova.network.neutron [-] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:40 np0005532048 nova_compute[253661]: 2025-11-22 09:45:40.890 253665 DEBUG nova.compute.manager [req-d18c5feb-c26c-4926-9368-27b26692b2ee req-62565068-ade0-4c30-a34e-70ec10cb5252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received event network-vif-deleted-491b9f04-4133-4553-a044-0dffe6278421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:40 np0005532048 nova_compute[253661]: 2025-11-22 09:45:40.890 253665 INFO nova.compute.manager [req-d18c5feb-c26c-4926-9368-27b26692b2ee req-62565068-ade0-4c30-a34e-70ec10cb5252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Neutron deleted interface 491b9f04-4133-4553-a044-0dffe6278421; detaching it from the instance and deleting it from the info cache#033[00m
Nov 22 04:45:40 np0005532048 nova_compute[253661]: 2025-11-22 09:45:40.890 253665 DEBUG nova.network.neutron [req-d18c5feb-c26c-4926-9368-27b26692b2ee req-62565068-ade0-4c30-a34e-70ec10cb5252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:40 np0005532048 nova_compute[253661]: 2025-11-22 09:45:40.943 253665 INFO nova.compute.manager [-] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Took 1.50 seconds to deallocate network for instance.#033[00m
Nov 22 04:45:40 np0005532048 nova_compute[253661]: 2025-11-22 09:45:40.950 253665 DEBUG nova.compute.manager [req-d18c5feb-c26c-4926-9368-27b26692b2ee req-62565068-ade0-4c30-a34e-70ec10cb5252 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Detach interface failed, port_id=491b9f04-4133-4553-a044-0dffe6278421, reason: Instance 71ef7514-c6bd-40ee-852a-4b850ca0a05c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 22 04:45:40 np0005532048 nova_compute[253661]: 2025-11-22 09:45:40.993 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:40 np0005532048 nova_compute[253661]: 2025-11-22 09:45:40.994 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:41 np0005532048 nova_compute[253661]: 2025-11-22 09:45:41.056 253665 DEBUG oslo_concurrency.processutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:45:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:45:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2554038413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:45:41 np0005532048 nova_compute[253661]: 2025-11-22 09:45:41.496 253665 DEBUG oslo_concurrency.processutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:45:41 np0005532048 nova_compute[253661]: 2025-11-22 09:45:41.503 253665 DEBUG nova.compute.provider_tree [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:45:41 np0005532048 nova_compute[253661]: 2025-11-22 09:45:41.517 253665 DEBUG nova.scheduler.client.report [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:45:41 np0005532048 nova_compute[253661]: 2025-11-22 09:45:41.598 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:41 np0005532048 nova_compute[253661]: 2025-11-22 09:45:41.650 253665 INFO nova.scheduler.client.report [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Deleted allocations for instance 71ef7514-c6bd-40ee-852a-4b850ca0a05c#033[00m
Nov 22 04:45:41 np0005532048 nova_compute[253661]: 2025-11-22 09:45:41.726 253665 DEBUG oslo_concurrency.lockutils [None req-3a8782cf-8850-4d1c-8b9e-1d14312e9824 4993d04ad8774a15825d4bea194cd1ca 46d50d652376434585e9da83e40f96bb - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:41 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:41Z|01533|binding|INFO|Releasing lport acb44b8b-e586-4d56-8c91-42b393fbe8ed from this chassis (sb_readonly=0)
Nov 22 04:45:41 np0005532048 nova_compute[253661]: 2025-11-22 09:45:41.865 253665 DEBUG nova.network.neutron [req-eeb58b5f-8e5a-4db6-a17d-c65e817961f5 req-6fa8ba6a-3260-451f-ac6f-dbfe587fd08d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updated VIF entry in instance network info cache for port 491b9f04-4133-4553-a044-0dffe6278421. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:45:41 np0005532048 nova_compute[253661]: 2025-11-22 09:45:41.865 253665 DEBUG nova.network.neutron [req-eeb58b5f-8e5a-4db6-a17d-c65e817961f5 req-6fa8ba6a-3260-451f-ac6f-dbfe587fd08d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Updating instance_info_cache with network_info: [{"id": "491b9f04-4133-4553-a044-0dffe6278421", "address": "fa:16:3e:a8:7d:61", "network": {"id": "a734f39d-baf0-4591-94dc-9057caf53bb4", "bridge": "br-int", "label": "tempest-network-smoke--860148507", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "46d50d652376434585e9da83e40f96bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap491b9f04-41", "ovs_interfaceid": "491b9f04-4133-4553-a044-0dffe6278421", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:41 np0005532048 nova_compute[253661]: 2025-11-22 09:45:41.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:41 np0005532048 nova_compute[253661]: 2025-11-22 09:45:41.890 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:41 np0005532048 nova_compute[253661]: 2025-11-22 09:45:41.892 253665 DEBUG oslo_concurrency.lockutils [req-eeb58b5f-8e5a-4db6-a17d-c65e817961f5 req-6fa8ba6a-3260-451f-ac6f-dbfe587fd08d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-71ef7514-c6bd-40ee-852a-4b850ca0a05c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:45:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 152 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 43 KiB/s wr, 165 op/s
Nov 22 04:45:42 np0005532048 nova_compute[253661]: 2025-11-22 09:45:42.752 253665 DEBUG nova.compute.manager [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received event network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:42 np0005532048 nova_compute[253661]: 2025-11-22 09:45:42.753 253665 DEBUG oslo_concurrency.lockutils [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:42 np0005532048 nova_compute[253661]: 2025-11-22 09:45:42.753 253665 DEBUG oslo_concurrency.lockutils [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:42 np0005532048 nova_compute[253661]: 2025-11-22 09:45:42.754 253665 DEBUG oslo_concurrency.lockutils [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "71ef7514-c6bd-40ee-852a-4b850ca0a05c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:42 np0005532048 nova_compute[253661]: 2025-11-22 09:45:42.754 253665 DEBUG nova.compute.manager [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] No waiting events found dispatching network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:45:42 np0005532048 nova_compute[253661]: 2025-11-22 09:45:42.754 253665 WARNING nova.compute.manager [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Received unexpected event network-vif-plugged-491b9f04-4133-4553-a044-0dffe6278421 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:45:42 np0005532048 nova_compute[253661]: 2025-11-22 09:45:42.754 253665 DEBUG nova.compute.manager [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received event network-changed-2bf46f44-05ff-4af4-ba41-f280a21be09e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:45:42 np0005532048 nova_compute[253661]: 2025-11-22 09:45:42.754 253665 DEBUG nova.compute.manager [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Refreshing instance network info cache due to event network-changed-2bf46f44-05ff-4af4-ba41-f280a21be09e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:45:42 np0005532048 nova_compute[253661]: 2025-11-22 09:45:42.755 253665 DEBUG oslo_concurrency.lockutils [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:45:42 np0005532048 nova_compute[253661]: 2025-11-22 09:45:42.755 253665 DEBUG oslo_concurrency.lockutils [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:45:42 np0005532048 nova_compute[253661]: 2025-11-22 09:45:42.755 253665 DEBUG nova.network.neutron [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Refreshing network info cache for port 2bf46f44-05ff-4af4-ba41-f280a21be09e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:45:43 np0005532048 nova_compute[253661]: 2025-11-22 09:45:43.833 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:45:44 np0005532048 nova_compute[253661]: 2025-11-22 09:45:44.128 253665 DEBUG nova.network.neutron [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updated VIF entry in instance network info cache for port 2bf46f44-05ff-4af4-ba41-f280a21be09e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:45:44 np0005532048 nova_compute[253661]: 2025-11-22 09:45:44.130 253665 DEBUG nova.network.neutron [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updating instance_info_cache with network_info: [{"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:44 np0005532048 nova_compute[253661]: 2025-11-22 09:45:44.161 253665 DEBUG oslo_concurrency.lockutils [req-c1154c15-b529-4ec0-9eea-6bb9d30d6d81 req-b02b17fb-cf7e-4c06-b0ec-4930651506ec 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:45:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 88 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 38 KiB/s wr, 144 op/s
Nov 22 04:45:44 np0005532048 nova_compute[253661]: 2025-11-22 09:45:44.256 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:45:45 np0005532048 nova_compute[253661]: 2025-11-22 09:45:45.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:45:45 np0005532048 nova_compute[253661]: 2025-11-22 09:45:45.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:45:45 np0005532048 nova_compute[253661]: 2025-11-22 09:45:45.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:45:45 np0005532048 nova_compute[253661]: 2025-11-22 09:45:45.437 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:45:45 np0005532048 nova_compute[253661]: 2025-11-22 09:45:45.437 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:45:45 np0005532048 nova_compute[253661]: 2025-11-22 09:45:45.437 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:45:45 np0005532048 nova_compute[253661]: 2025-11-22 09:45:45.437 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e1b6c07e-b79f-4b39-a2b8-a952e54f4972 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:45:45 np0005532048 nova_compute[253661]: 2025-11-22 09:45:45.527 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804730.5260248, 48af02cd-94c5-473f-a6f9-4d2caad8483f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:45:45 np0005532048 nova_compute[253661]: 2025-11-22 09:45:45.528 253665 INFO nova.compute.manager [-] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:45:45 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:45Z|01534|binding|INFO|Releasing lport acb44b8b-e586-4d56-8c91-42b393fbe8ed from this chassis (sb_readonly=0)
Nov 22 04:45:45 np0005532048 nova_compute[253661]: 2025-11-22 09:45:45.548 253665 DEBUG nova.compute.manager [None req-8cae69bc-5698-4e33-9435-94e6b2db11f6 - - - - - -] [instance: 48af02cd-94c5-473f-a6f9-4d2caad8483f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:45:45 np0005532048 nova_compute[253661]: 2025-11-22 09:45:45.601 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:45 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Nov 22 04:45:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 88 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.7 KiB/s wr, 111 op/s
Nov 22 04:45:46 np0005532048 nova_compute[253661]: 2025-11-22 09:45:46.891 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:47 np0005532048 nova_compute[253661]: 2025-11-22 09:45:47.446 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updating instance_info_cache with network_info: [{"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:45:47 np0005532048 nova_compute[253661]: 2025-11-22 09:45:47.458 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:45:47 np0005532048 nova_compute[253661]: 2025-11-22 09:45:47.458 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:45:47 np0005532048 nova_compute[253661]: 2025-11-22 09:45:47.459 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:45:47 np0005532048 nova_compute[253661]: 2025-11-22 09:45:47.459 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:45:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 96 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 511 KiB/s wr, 117 op/s
Nov 22 04:45:48 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:48Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:a9:1c 10.100.0.9
Nov 22 04:45:48 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:48Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:a9:1c 10.100.0.9
Nov 22 04:45:48 np0005532048 nova_compute[253661]: 2025-11-22 09:45:48.524 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804733.5205002, ed3583b5-6d93-4e3f-83e0-3b36f25f08f1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:45:48 np0005532048 nova_compute[253661]: 2025-11-22 09:45:48.524 253665 INFO nova.compute.manager [-] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:45:48 np0005532048 nova_compute[253661]: 2025-11-22 09:45:48.567 253665 DEBUG nova.compute.manager [None req-62b85729-f9b5-49a8-9f64-c829ed74e638 - - - - - -] [instance: ed3583b5-6d93-4e3f-83e0-3b36f25f08f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:45:48 np0005532048 nova_compute[253661]: 2025-11-22 09:45:48.835 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.246968) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804749247047, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2076, "num_deletes": 251, "total_data_size": 3349266, "memory_usage": 3394248, "flush_reason": "Manual Compaction"}
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804749267028, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3293594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49980, "largest_seqno": 52055, "table_properties": {"data_size": 3284181, "index_size": 5907, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19543, "raw_average_key_size": 20, "raw_value_size": 3265357, "raw_average_value_size": 3394, "num_data_blocks": 261, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763804527, "oldest_key_time": 1763804527, "file_creation_time": 1763804749, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 20110 microseconds, and 10394 cpu microseconds.
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.267083) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3293594 bytes OK
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.267103) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.269579) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.269592) EVENT_LOG_v1 {"time_micros": 1763804749269588, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.269612) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3340507, prev total WAL file size 3340507, number of live WAL files 2.
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.270425) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3216KB)], [116(8424KB)]
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804749270489, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 11920429, "oldest_snapshot_seqno": -1}
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7501 keys, 10263728 bytes, temperature: kUnknown
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804749321163, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10263728, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10213399, "index_size": 30441, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18821, "raw_key_size": 194100, "raw_average_key_size": 25, "raw_value_size": 10079283, "raw_average_value_size": 1343, "num_data_blocks": 1189, "num_entries": 7501, "num_filter_entries": 7501, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763804749, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.321558) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10263728 bytes
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.323034) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.7 rd, 202.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.2 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 8015, records dropped: 514 output_compression: NoCompression
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.323050) EVENT_LOG_v1 {"time_micros": 1763804749323043, "job": 70, "event": "compaction_finished", "compaction_time_micros": 50800, "compaction_time_cpu_micros": 25020, "output_level": 6, "num_output_files": 1, "total_output_size": 10263728, "num_input_records": 8015, "num_output_records": 7501, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804749323649, "job": 70, "event": "table_file_deletion", "file_number": 118}
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804749324951, "job": 70, "event": "table_file_deletion", "file_number": 116}
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.270331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.325044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.325050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.325077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.325079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:49.325080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:45:49 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 13982a1a-0d5e-49d1-a709-5b066d8063c9 does not exist
Nov 22 04:45:49 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev feb23909-56fd-4244-a562-357bbf836155 does not exist
Nov 22 04:45:49 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d325f99f-9083-428b-a3fd-cd98673860d3 does not exist
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:45:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:45:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 117 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 158 op/s
Nov 22 04:45:50 np0005532048 nova_compute[253661]: 2025-11-22 09:45:50.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:45:50 np0005532048 nova_compute[253661]: 2025-11-22 09:45:50.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:45:50 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:45:50 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:45:50 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:45:50 np0005532048 podman[397301]: 2025-11-22 09:45:50.424400355 +0000 UTC m=+0.043427044 container create b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 04:45:50 np0005532048 systemd[1]: Started libpod-conmon-b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5.scope.
Nov 22 04:45:50 np0005532048 podman[397301]: 2025-11-22 09:45:50.405892657 +0000 UTC m=+0.024919356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:45:50 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:45:50 np0005532048 podman[397301]: 2025-11-22 09:45:50.524208789 +0000 UTC m=+0.143235478 container init b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noyce, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:45:50 np0005532048 podman[397301]: 2025-11-22 09:45:50.534247827 +0000 UTC m=+0.153274526 container start b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noyce, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:45:50 np0005532048 condescending_noyce[397317]: 167 167
Nov 22 04:45:50 np0005532048 systemd[1]: libpod-b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5.scope: Deactivated successfully.
Nov 22 04:45:50 np0005532048 podman[397301]: 2025-11-22 09:45:50.541389633 +0000 UTC m=+0.160416322 container attach b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noyce, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:45:50 np0005532048 conmon[397317]: conmon b9eae249f28d8cd89514 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5.scope/container/memory.events
Nov 22 04:45:50 np0005532048 podman[397301]: 2025-11-22 09:45:50.542705116 +0000 UTC m=+0.161731805 container died b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:45:50 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f097ba5c088ee27d0a1201d016b5d2d00cd064a2ef7aa12e4ea866a1d8e0eefd-merged.mount: Deactivated successfully.
Nov 22 04:45:50 np0005532048 podman[397301]: 2025-11-22 09:45:50.589284495 +0000 UTC m=+0.208311174 container remove b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:45:50 np0005532048 systemd[1]: libpod-conmon-b9eae249f28d8cd895142c5a78092ea4bc870d3c1218841b53d2cf20ccbed5b5.scope: Deactivated successfully.
Nov 22 04:45:50 np0005532048 podman[397341]: 2025-11-22 09:45:50.755425148 +0000 UTC m=+0.043586827 container create a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:45:50 np0005532048 systemd[1]: Started libpod-conmon-a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95.scope.
Nov 22 04:45:50 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:45:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706572ca95733d184462c5dc3039b16e3c3ed61f32fec3e811d645649e889e21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:45:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706572ca95733d184462c5dc3039b16e3c3ed61f32fec3e811d645649e889e21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:45:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706572ca95733d184462c5dc3039b16e3c3ed61f32fec3e811d645649e889e21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:45:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706572ca95733d184462c5dc3039b16e3c3ed61f32fec3e811d645649e889e21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:45:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706572ca95733d184462c5dc3039b16e3c3ed61f32fec3e811d645649e889e21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:45:50 np0005532048 podman[397341]: 2025-11-22 09:45:50.737904675 +0000 UTC m=+0.026066384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:45:50 np0005532048 podman[397341]: 2025-11-22 09:45:50.843229307 +0000 UTC m=+0.131390996 container init a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:45:50 np0005532048 podman[397341]: 2025-11-22 09:45:50.849298536 +0000 UTC m=+0.137460215 container start a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 04:45:50 np0005532048 podman[397341]: 2025-11-22 09:45:50.853166452 +0000 UTC m=+0.141328141 container attach a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 04:45:51 np0005532048 nova_compute[253661]: 2025-11-22 09:45:51.349 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:45:51 np0005532048 boring_brahmagupta[397356]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:45:51 np0005532048 boring_brahmagupta[397356]: --> relative data size: 1.0
Nov 22 04:45:51 np0005532048 boring_brahmagupta[397356]: --> All data devices are unavailable
Nov 22 04:45:51 np0005532048 nova_compute[253661]: 2025-11-22 09:45:51.892 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:45:51 np0005532048 systemd[1]: libpod-a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95.scope: Deactivated successfully.
Nov 22 04:45:51 np0005532048 podman[397341]: 2025-11-22 09:45:51.908443229 +0000 UTC m=+1.196604908 container died a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 04:45:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay-706572ca95733d184462c5dc3039b16e3c3ed61f32fec3e811d645649e889e21-merged.mount: Deactivated successfully.
Nov 22 04:45:51 np0005532048 podman[397341]: 2025-11-22 09:45:51.970522393 +0000 UTC m=+1.258684072 container remove a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 04:45:51 np0005532048 systemd[1]: libpod-conmon-a63d0f0a675cd29426f98089425dfb943db449663b1ef3e340f5ac6981b75f95.scope: Deactivated successfully.
Nov 22 04:45:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 22 04:45:52 np0005532048 nova_compute[253661]: 2025-11-22 09:45:52.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:45:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:45:52
Nov 22 04:45:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:45:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:45:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'volumes', '.rgw.root', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', '.mgr', 'cephfs.cephfs.data']
Nov 22 04:45:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:45:52 np0005532048 podman[397541]: 2025-11-22 09:45:52.575732037 +0000 UTC m=+0.047847103 container create bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dijkstra, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:45:52 np0005532048 systemd[1]: Started libpod-conmon-bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8.scope.
Nov 22 04:45:52 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:45:52 np0005532048 podman[397541]: 2025-11-22 09:45:52.650802461 +0000 UTC m=+0.122917547 container init bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dijkstra, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:45:52 np0005532048 podman[397541]: 2025-11-22 09:45:52.556783769 +0000 UTC m=+0.028898865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:45:52 np0005532048 podman[397541]: 2025-11-22 09:45:52.659865684 +0000 UTC m=+0.131980750 container start bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:45:52 np0005532048 wizardly_dijkstra[397557]: 167 167
Nov 22 04:45:52 np0005532048 podman[397541]: 2025-11-22 09:45:52.664214691 +0000 UTC m=+0.136329757 container attach bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dijkstra, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:45:52 np0005532048 systemd[1]: libpod-bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8.scope: Deactivated successfully.
Nov 22 04:45:52 np0005532048 podman[397541]: 2025-11-22 09:45:52.665181266 +0000 UTC m=+0.137296352 container died bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 04:45:52 np0005532048 systemd[1]: var-lib-containers-storage-overlay-23fef82c69ace631f1a831194362a094e8191c5c4ca48a156aa946cc27d19f33-merged.mount: Deactivated successfully.
Nov 22 04:45:52 np0005532048 podman[397541]: 2025-11-22 09:45:52.70584038 +0000 UTC m=+0.177955446 container remove bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dijkstra, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 04:45:52 np0005532048 systemd[1]: libpod-conmon-bef02bcc66d747381087585b3c985322b6c304b90056189e9eda6f93ab6f7dd8.scope: Deactivated successfully.
Nov 22 04:45:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:45:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:45:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:45:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:45:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:45:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:45:52 np0005532048 podman[397580]: 2025-11-22 09:45:52.866979068 +0000 UTC m=+0.037899216 container create 72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:45:52 np0005532048 systemd[1]: Started libpod-conmon-72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f.scope.
Nov 22 04:45:52 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:45:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f601991e271b3a1e0ead416723811745aba65ac2111e3167a8ebd9d89f1c6128/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:45:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f601991e271b3a1e0ead416723811745aba65ac2111e3167a8ebd9d89f1c6128/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:45:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f601991e271b3a1e0ead416723811745aba65ac2111e3167a8ebd9d89f1c6128/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:45:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f601991e271b3a1e0ead416723811745aba65ac2111e3167a8ebd9d89f1c6128/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:45:52 np0005532048 podman[397580]: 2025-11-22 09:45:52.850728397 +0000 UTC m=+0.021648565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:45:52 np0005532048 podman[397580]: 2025-11-22 09:45:52.950929771 +0000 UTC m=+0.121849939 container init 72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:45:52 np0005532048 podman[397580]: 2025-11-22 09:45:52.95897042 +0000 UTC m=+0.129890578 container start 72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 04:45:52 np0005532048 podman[397580]: 2025-11-22 09:45:52.962518927 +0000 UTC m=+0.133439085 container attach 72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:45:53 np0005532048 nova_compute[253661]: 2025-11-22 09:45:53.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:45:53 np0005532048 nova_compute[253661]: 2025-11-22 09:45:53.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:45:53 np0005532048 nova_compute[253661]: 2025-11-22 09:45:53.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:45:53 np0005532048 nova_compute[253661]: 2025-11-22 09:45:53.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:45:53 np0005532048 nova_compute[253661]: 2025-11-22 09:45:53.257 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:45:53 np0005532048 nova_compute[253661]: 2025-11-22 09:45:53.258 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:45:53 np0005532048 nova_compute[253661]: 2025-11-22 09:45:53.705 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:45:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:45:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4229626796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:45:53 np0005532048 nova_compute[253661]: 2025-11-22 09:45:53.785 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:45:53 np0005532048 nova_compute[253661]: 2025-11-22 09:45:53.815 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804738.8142846, 71ef7514-c6bd-40ee-852a-4b850ca0a05c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:45:53 np0005532048 nova_compute[253661]: 2025-11-22 09:45:53.815 253665 INFO nova.compute.manager [-] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]: {
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:    "0": [
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:        {
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "devices": [
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "/dev/loop3"
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            ],
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "lv_name": "ceph_lv0",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "lv_size": "21470642176",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "name": "ceph_lv0",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "tags": {
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.cluster_name": "ceph",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.crush_device_class": "",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.encrypted": "0",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.osd_id": "0",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.type": "block",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.vdo": "0"
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            },
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "type": "block",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "vg_name": "ceph_vg0"
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:        }
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:    ],
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:    "1": [
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:        {
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "devices": [
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "/dev/loop4"
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            ],
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "lv_name": "ceph_lv1",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "lv_size": "21470642176",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "name": "ceph_lv1",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "tags": {
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.cluster_name": "ceph",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.crush_device_class": "",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.encrypted": "0",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.osd_id": "1",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.type": "block",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.vdo": "0"
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            },
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "type": "block",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "vg_name": "ceph_vg1"
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:        }
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:    ],
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:    "2": [
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:        {
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "devices": [
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "/dev/loop5"
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            ],
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "lv_name": "ceph_lv2",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "lv_size": "21470642176",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "name": "ceph_lv2",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "tags": {
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.cluster_name": "ceph",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.crush_device_class": "",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.encrypted": "0",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.osd_id": "2",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.type": "block",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:                "ceph.vdo": "0"
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            },
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "type": "block",
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:            "vg_name": "ceph_vg2"
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:        }
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]:    ]
Nov 22 04:45:53 np0005532048 affectionate_germain[397598]: }
Nov 22 04:45:53 np0005532048 nova_compute[253661]: 2025-11-22 09:45:53.837 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:53 np0005532048 nova_compute[253661]: 2025-11-22 09:45:53.843 253665 DEBUG nova.compute.manager [None req-dc53686c-0146-472f-bdab-e254b47e6830 - - - - - -] [instance: 71ef7514-c6bd-40ee-852a-4b850ca0a05c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:45:53 np0005532048 systemd[1]: libpod-72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f.scope: Deactivated successfully.
Nov 22 04:45:53 np0005532048 podman[397580]: 2025-11-22 09:45:53.853515759 +0000 UTC m=+1.024435907 container died 72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:45:53 np0005532048 nova_compute[253661]: 2025-11-22 09:45:53.868 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:45:53 np0005532048 nova_compute[253661]: 2025-11-22 09:45:53.869 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:45:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f601991e271b3a1e0ead416723811745aba65ac2111e3167a8ebd9d89f1c6128-merged.mount: Deactivated successfully.
Nov 22 04:45:53 np0005532048 podman[397580]: 2025-11-22 09:45:53.921891007 +0000 UTC m=+1.092811145 container remove 72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 04:45:53 np0005532048 systemd[1]: libpod-conmon-72a8e2f1630ae2e66c46629dedb64c86fae2672f22b162428cd698e167a4551f.scope: Deactivated successfully.
Nov 22 04:45:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:45:54 np0005532048 nova_compute[253661]: 2025-11-22 09:45:54.042 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:45:54 np0005532048 nova_compute[253661]: 2025-11-22 09:45:54.044 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3360MB free_disk=59.94289016723633GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:45:54 np0005532048 nova_compute[253661]: 2025-11-22 09:45:54.044 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:45:54 np0005532048 nova_compute[253661]: 2025-11-22 09:45:54.044 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:45:54 np0005532048 nova_compute[253661]: 2025-11-22 09:45:54.127 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance e1b6c07e-b79f-4b39-a2b8-a952e54f4972 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:45:54 np0005532048 nova_compute[253661]: 2025-11-22 09:45:54.128 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:45:54 np0005532048 nova_compute[253661]: 2025-11-22 09:45:54.128 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:45:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 386 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Nov 22 04:45:54 np0005532048 nova_compute[253661]: 2025-11-22 09:45:54.187 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:45:54 np0005532048 podman[397800]: 2025-11-22 09:45:54.524806285 +0000 UTC m=+0.045912195 container create c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wilson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:45:54 np0005532048 systemd[1]: Started libpod-conmon-c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14.scope.
Nov 22 04:45:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:45:54 np0005532048 podman[397800]: 2025-11-22 09:45:54.504782681 +0000 UTC m=+0.025888651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:45:54 np0005532048 podman[397800]: 2025-11-22 09:45:54.60480505 +0000 UTC m=+0.125910980 container init c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:45:54 np0005532048 podman[397800]: 2025-11-22 09:45:54.612824449 +0000 UTC m=+0.133930359 container start c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:45:54 np0005532048 podman[397800]: 2025-11-22 09:45:54.616445118 +0000 UTC m=+0.137551118 container attach c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wilson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:45:54 np0005532048 intelligent_wilson[397816]: 167 167
Nov 22 04:45:54 np0005532048 systemd[1]: libpod-c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14.scope: Deactivated successfully.
Nov 22 04:45:54 np0005532048 podman[397800]: 2025-11-22 09:45:54.621998285 +0000 UTC m=+0.143104195 container died c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wilson, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 04:45:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:45:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/690095993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:45:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d31b9923edef04c69e0041813fab3f3ca3614c81c054bd6914110691100e0483-merged.mount: Deactivated successfully.
Nov 22 04:45:54 np0005532048 nova_compute[253661]: 2025-11-22 09:45:54.645 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:45:54 np0005532048 nova_compute[253661]: 2025-11-22 09:45:54.654 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:45:54 np0005532048 podman[397800]: 2025-11-22 09:45:54.657387779 +0000 UTC m=+0.178493689 container remove c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 04:45:54 np0005532048 systemd[1]: libpod-conmon-c4f33b7e8cb4265ee52dfcf5218e8ae8f8683b199807a31d805c5cf6c7466c14.scope: Deactivated successfully.
Nov 22 04:45:54 np0005532048 nova_compute[253661]: 2025-11-22 09:45:54.681 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:45:54 np0005532048 nova_compute[253661]: 2025-11-22 09:45:54.718 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:45:54 np0005532048 nova_compute[253661]: 2025-11-22 09:45:54.718 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:45:54 np0005532048 podman[397842]: 2025-11-22 09:45:54.843376992 +0000 UTC m=+0.043679300 container create 57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 04:45:54 np0005532048 systemd[1]: Started libpod-conmon-57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee.scope.
Nov 22 04:45:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:45:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0707808a8124fa5f57f5d9926f6a73f38f3bfb0a36bf79b50426f664bbca88f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:45:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0707808a8124fa5f57f5d9926f6a73f38f3bfb0a36bf79b50426f664bbca88f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:45:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0707808a8124fa5f57f5d9926f6a73f38f3bfb0a36bf79b50426f664bbca88f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:45:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0707808a8124fa5f57f5d9926f6a73f38f3bfb0a36bf79b50426f664bbca88f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:45:54 np0005532048 podman[397842]: 2025-11-22 09:45:54.825702085 +0000 UTC m=+0.026004413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:45:54 np0005532048 podman[397842]: 2025-11-22 09:45:54.926238078 +0000 UTC m=+0.126540386 container init 57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:45:54 np0005532048 podman[397842]: 2025-11-22 09:45:54.942218702 +0000 UTC m=+0.142520990 container start 57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 04:45:54 np0005532048 podman[397842]: 2025-11-22 09:45:54.946079608 +0000 UTC m=+0.146381906 container attach 57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:45:55 np0005532048 nova_compute[253661]: 2025-11-22 09:45:55.046 253665 INFO nova.compute.manager [None req-2a338163-4df8-4940-b164-898080d580ed 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Get console output#033[00m
Nov 22 04:45:55 np0005532048 nova_compute[253661]: 2025-11-22 09:45:55.052 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:45:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:55Z|01535|binding|INFO|Releasing lport acb44b8b-e586-4d56-8c91-42b393fbe8ed from this chassis (sb_readonly=0)
Nov 22 04:45:55 np0005532048 nova_compute[253661]: 2025-11-22 09:45:55.631 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:45:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:45:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:45:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:45:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:45:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:55Z|01536|binding|INFO|Releasing lport acb44b8b-e586-4d56-8c91-42b393fbe8ed from this chassis (sb_readonly=0)
Nov 22 04:45:55 np0005532048 nova_compute[253661]: 2025-11-22 09:45:55.740 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:55 np0005532048 exciting_cori[397858]: {
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:        "osd_id": 1,
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:        "type": "bluestore"
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:    },
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:        "osd_id": 0,
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:        "type": "bluestore"
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:    },
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:        "osd_id": 2,
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:        "type": "bluestore"
Nov 22 04:45:55 np0005532048 exciting_cori[397858]:    }
Nov 22 04:45:55 np0005532048 exciting_cori[397858]: }
Nov 22 04:45:55 np0005532048 systemd[1]: libpod-57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee.scope: Deactivated successfully.
Nov 22 04:45:55 np0005532048 podman[397892]: 2025-11-22 09:45:55.947404043 +0000 UTC m=+0.021749717 container died 57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:45:55 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0707808a8124fa5f57f5d9926f6a73f38f3bfb0a36bf79b50426f664bbca88f2-merged.mount: Deactivated successfully.
Nov 22 04:45:56 np0005532048 podman[397892]: 2025-11-22 09:45:56.011174288 +0000 UTC m=+0.085519942 container remove 57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cori, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:45:56 np0005532048 systemd[1]: libpod-conmon-57ba937154e57ad8509dbdf2f52c387fdb14d125d6912da5500df06fb7e582ee.scope: Deactivated successfully.
Nov 22 04:45:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:45:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:45:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:45:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:45:56 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 1293a0ad-8670-482e-9e8a-acdf317ef2a4 does not exist
Nov 22 04:45:56 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 5234f4eb-510d-4bdd-8c92-93dbe8abad78 does not exist
Nov 22 04:45:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 22 04:45:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:56.228 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:f0:a2 10.100.0.2 2001:db8::f816:3eff:fe38:f0a2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe38:f0a2/64', 'neutron:device_id': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eb6cfbf-9d17-4d61-b927-87a60dc61782, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b8d092bb-b893-4593-9090-1acdc081ae18) old=Port_Binding(mac=['fa:16:3e:38:f0:a2 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:45:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:56.229 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b8d092bb-b893-4593-9090-1acdc081ae18 in datapath b6b9221a-729b-4988-afa8-72f95360d9ea updated#033[00m
Nov 22 04:45:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:56.230 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6b9221a-729b-4988-afa8-72f95360d9ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:45:56 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:56.232 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dc4f2cd3-9272-4fd2-92fd-e30f8c14d4f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:45:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:45:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:45:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:45:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:45:56 np0005532048 nova_compute[253661]: 2025-11-22 09:45:56.719 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:45:56 np0005532048 nova_compute[253661]: 2025-11-22 09:45:56.740 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:45:56 np0005532048 nova_compute[253661]: 2025-11-22 09:45:56.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:56 np0005532048 nova_compute[253661]: 2025-11-22 09:45:56.920 253665 INFO nova.compute.manager [None req-c862ba58-47ea-4467-8afc-4297b5472aa3 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Get console output#033[00m
Nov 22 04:45:56 np0005532048 nova_compute[253661]: 2025-11-22 09:45:56.925 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:45:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:57.018 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:19:23 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-c6337497-985f-4f89-84be-8d10ca67dfa1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6337497-985f-4f89-84be-8d10ca67dfa1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb042667d47c4d07a7e9967c65430c7b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8239e99d-886b-4c86-bd64-ce377c2ec6f6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=47f379dc-c905-4e14-9660-00b10a28ef04) old=Port_Binding(mac=['fa:16:3e:45:19:23 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-c6337497-985f-4f89-84be-8d10ca67dfa1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6337497-985f-4f89-84be-8d10ca67dfa1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb042667d47c4d07a7e9967c65430c7b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:45:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:57.020 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 47f379dc-c905-4e14-9660-00b10a28ef04 in datapath c6337497-985f-4f89-84be-8d10ca67dfa1 updated#033[00m
Nov 22 04:45:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:57.021 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c6337497-985f-4f89-84be-8d10ca67dfa1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:45:57 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:45:57.021 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[98d13600-d0c0-4739-b20a-dedf83602143]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:45:57 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:45:57 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:45:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 22 04:45:58 np0005532048 nova_compute[253661]: 2025-11-22 09:45:58.208 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:58 np0005532048 NetworkManager[48920]: <info>  [1763804758.2090] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/623)
Nov 22 04:45:58 np0005532048 NetworkManager[48920]: <info>  [1763804758.2106] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/624)
Nov 22 04:45:58 np0005532048 nova_compute[253661]: 2025-11-22 09:45:58.349 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:58 np0005532048 ovn_controller[152872]: 2025-11-22T09:45:58Z|01537|binding|INFO|Releasing lport acb44b8b-e586-4d56-8c91-42b393fbe8ed from this chassis (sb_readonly=0)
Nov 22 04:45:58 np0005532048 nova_compute[253661]: 2025-11-22 09:45:58.360 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:58 np0005532048 nova_compute[253661]: 2025-11-22 09:45:58.626 253665 INFO nova.compute.manager [None req-1291d30d-9dcb-4021-8fea-8638cbf08b80 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Get console output#033[00m
Nov 22 04:45:58 np0005532048 nova_compute[253661]: 2025-11-22 09:45:58.631 311943 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 22 04:45:58 np0005532048 nova_compute[253661]: 2025-11-22 09:45:58.838 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:58.972692) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804758972736, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 370, "num_deletes": 257, "total_data_size": 210915, "memory_usage": 218712, "flush_reason": "Manual Compaction"}
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804758987699, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 209513, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52056, "largest_seqno": 52425, "table_properties": {"data_size": 207189, "index_size": 424, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5706, "raw_average_key_size": 18, "raw_value_size": 202516, "raw_average_value_size": 647, "num_data_blocks": 18, "num_entries": 313, "num_filter_entries": 313, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763804749, "oldest_key_time": 1763804749, "file_creation_time": 1763804758, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 15066 microseconds, and 1496 cpu microseconds.
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:58.987754) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 209513 bytes OK
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:58.987780) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:58.990308) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:58.990321) EVENT_LOG_v1 {"time_micros": 1763804758990318, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:58.990370) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 208428, prev total WAL file size 208428, number of live WAL files 2.
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:58.990785) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303035' seq:72057594037927935, type:22 .. '6C6F676D0032323538' seq:0, type:0; will stop at (end)
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(204KB)], [119(10023KB)]
Nov 22 04:45:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804758990851, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 10473241, "oldest_snapshot_seqno": -1}
Nov 22 04:45:59 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7288 keys, 10350214 bytes, temperature: kUnknown
Nov 22 04:45:59 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804759151504, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 10350214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10300623, "index_size": 30279, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 190628, "raw_average_key_size": 26, "raw_value_size": 10169514, "raw_average_value_size": 1395, "num_data_blocks": 1179, "num_entries": 7288, "num_filter_entries": 7288, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763804758, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:45:59 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:45:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:59.151836) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 10350214 bytes
Nov 22 04:45:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:59.156901) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.2 rd, 64.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.8 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(99.4) write-amplify(49.4) OK, records in: 7814, records dropped: 526 output_compression: NoCompression
Nov 22 04:45:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:59.156923) EVENT_LOG_v1 {"time_micros": 1763804759156911, "job": 72, "event": "compaction_finished", "compaction_time_micros": 160755, "compaction_time_cpu_micros": 23834, "output_level": 6, "num_output_files": 1, "total_output_size": 10350214, "num_input_records": 7814, "num_output_records": 7288, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:45:59 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:45:59 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804759157110, "job": 72, "event": "table_file_deletion", "file_number": 121}
Nov 22 04:45:59 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:45:59 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804759159489, "job": 72, "event": "table_file_deletion", "file_number": 119}
Nov 22 04:45:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:58.990620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:45:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:59.159524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:45:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:59.159529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:45:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:59.159531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:45:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:59.159532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:45:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:45:59.159534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:45:59 np0005532048 nova_compute[253661]: 2025-11-22 09:45:59.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:46:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 1.6 MiB/s wr, 60 op/s
Nov 22 04:46:01 np0005532048 podman[397958]: 2025-11-22 09:46:01.387845324 +0000 UTC m=+0.071981319 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 04:46:01 np0005532048 podman[397959]: 2025-11-22 09:46:01.388197303 +0000 UTC m=+0.072661826 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 04:46:01 np0005532048 nova_compute[253661]: 2025-11-22 09:46:01.897 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:01 np0005532048 nova_compute[253661]: 2025-11-22 09:46:01.935 253665 DEBUG nova.compute.manager [req-7229fd0c-53ae-4f04-97df-1409dd547740 req-54d1c048-033c-44ea-b03e-85e0147d3718 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received event network-changed-2bf46f44-05ff-4af4-ba41-f280a21be09e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:46:01 np0005532048 nova_compute[253661]: 2025-11-22 09:46:01.935 253665 DEBUG nova.compute.manager [req-7229fd0c-53ae-4f04-97df-1409dd547740 req-54d1c048-033c-44ea-b03e-85e0147d3718 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Refreshing instance network info cache due to event network-changed-2bf46f44-05ff-4af4-ba41-f280a21be09e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:46:01 np0005532048 nova_compute[253661]: 2025-11-22 09:46:01.935 253665 DEBUG oslo_concurrency.lockutils [req-7229fd0c-53ae-4f04-97df-1409dd547740 req-54d1c048-033c-44ea-b03e-85e0147d3718 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:46:01 np0005532048 nova_compute[253661]: 2025-11-22 09:46:01.935 253665 DEBUG oslo_concurrency.lockutils [req-7229fd0c-53ae-4f04-97df-1409dd547740 req-54d1c048-033c-44ea-b03e-85e0147d3718 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:46:01 np0005532048 nova_compute[253661]: 2025-11-22 09:46:01.936 253665 DEBUG nova.network.neutron [req-7229fd0c-53ae-4f04-97df-1409dd547740 req-54d1c048-033c-44ea-b03e-85e0147d3718 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Refreshing network info cache for port 2bf46f44-05ff-4af4-ba41-f280a21be09e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:46:01 np0005532048 nova_compute[253661]: 2025-11-22 09:46:01.957 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:01 np0005532048 nova_compute[253661]: 2025-11-22 09:46:01.958 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:01 np0005532048 nova_compute[253661]: 2025-11-22 09:46:01.958 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:01 np0005532048 nova_compute[253661]: 2025-11-22 09:46:01.958 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:01 np0005532048 nova_compute[253661]: 2025-11-22 09:46:01.958 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:01 np0005532048 nova_compute[253661]: 2025-11-22 09:46:01.959 253665 INFO nova.compute.manager [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Terminating instance#033[00m
Nov 22 04:46:01 np0005532048 nova_compute[253661]: 2025-11-22 09:46:01.960 253665 DEBUG nova.compute.manager [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:46:02 np0005532048 kernel: tap2bf46f44-05 (unregistering): left promiscuous mode
Nov 22 04:46:02 np0005532048 NetworkManager[48920]: <info>  [1763804762.0276] device (tap2bf46f44-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.036 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:46:02Z|01538|binding|INFO|Releasing lport 2bf46f44-05ff-4af4-ba41-f280a21be09e from this chassis (sb_readonly=0)
Nov 22 04:46:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:46:02Z|01539|binding|INFO|Setting lport 2bf46f44-05ff-4af4-ba41-f280a21be09e down in Southbound
Nov 22 04:46:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:46:02Z|01540|binding|INFO|Removing iface tap2bf46f44-05 ovn-installed in OVS
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.040 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.046 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:a9:1c 10.100.0.9'], port_security=['fa:16:3e:ec:a9:1c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e1b6c07e-b79f-4b39-a2b8-a952e54f4972', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32b06b6f-2dbe-45a6-a0ed-07f342aa967b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8acfdd432cb64d45993b2a66a2d29b82', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf4ccd55-5049-48da-a040-7bc492278d9b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bf69086-9ee8-4131-a2f6-8ce3890c821e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=2bf46f44-05ff-4af4-ba41-f280a21be09e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:46:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.048 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 2bf46f44-05ff-4af4-ba41-f280a21be09e in datapath 32b06b6f-2dbe-45a6-a0ed-07f342aa967b unbound from our chassis#033[00m
Nov 22 04:46:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.050 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32b06b6f-2dbe-45a6-a0ed-07f342aa967b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:46:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.052 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb4d291-74ca-4c97-aefe-a6ee343fe257]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.053 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b namespace which is not needed anymore#033[00m
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:02 np0005532048 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Nov 22 04:46:02 np0005532048 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008d.scope: Consumed 13.290s CPU time.
Nov 22 04:46:02 np0005532048 systemd-machined[215941]: Machine qemu-172-instance-0000008d terminated.
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 30 KiB/s wr, 1 op/s
Nov 22 04:46:02 np0005532048 neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b[396657]: [NOTICE]   (396661) : haproxy version is 2.8.14-c23fe91
Nov 22 04:46:02 np0005532048 neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b[396657]: [NOTICE]   (396661) : path to executable is /usr/sbin/haproxy
Nov 22 04:46:02 np0005532048 neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b[396657]: [WARNING]  (396661) : Exiting Master process...
Nov 22 04:46:02 np0005532048 neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b[396657]: [ALERT]    (396661) : Current worker (396663) exited with code 143 (Terminated)
Nov 22 04:46:02 np0005532048 neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b[396657]: [WARNING]  (396661) : All workers exited. Exiting... (0)
Nov 22 04:46:02 np0005532048 systemd[1]: libpod-990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322.scope: Deactivated successfully.
Nov 22 04:46:02 np0005532048 podman[398021]: 2025-11-22 09:46:02.19155953 +0000 UTC m=+0.050289083 container died 990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.202 253665 INFO nova.virt.libvirt.driver [-] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Instance destroyed successfully.#033[00m
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.203 253665 DEBUG nova.objects.instance [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lazy-loading 'resources' on Instance uuid e1b6c07e-b79f-4b39-a2b8-a952e54f4972 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:46:02 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322-userdata-shm.mount: Deactivated successfully.
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.223 253665 DEBUG nova.virt.libvirt.vif [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:45:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-221401421',display_name='tempest-TestNetworkBasicOps-server-221401421',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-221401421',id=141,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFpRaOd0MA6ods5Fgu/bePJdKNA6xJzpwKTamybJrRd4vBorrEhiuMwvVBW2vy+fN3+ZAEzEiG8NI9LxFAosf7VdPZQ2Hzoq936Yx2tDHAB+5D4UznxlVut3DWP76u/ISw==',key_name='tempest-TestNetworkBasicOps-1054111100',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:45:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8acfdd432cb64d45993b2a66a2d29b82',ramdisk_id='',reservation_id='r-qlv4m0ht',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-927877775',owner_user_name='tempest-TestNetworkBasicOps-927877775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:45:34Z,user_data=None,user_id='36208b8910e24c06bcc7f958bab2adf1',uuid=e1b6c07e-b79f-4b39-a2b8-a952e54f4972,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.223 253665 DEBUG nova.network.os_vif_util [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converting VIF {"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.224 253665 DEBUG nova.network.os_vif_util [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:a9:1c,bridge_name='br-int',has_traffic_filtering=True,id=2bf46f44-05ff-4af4-ba41-f280a21be09e,network=Network(32b06b6f-2dbe-45a6-a0ed-07f342aa967b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf46f44-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.225 253665 DEBUG os_vif [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:a9:1c,bridge_name='br-int',has_traffic_filtering=True,id=2bf46f44-05ff-4af4-ba41-f280a21be09e,network=Network(32b06b6f-2dbe-45a6-a0ed-07f342aa967b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf46f44-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.227 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.227 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2bf46f44-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.229 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:02 np0005532048 systemd[1]: var-lib-containers-storage-overlay-05d1c1e0ba78584dff2dec95748fc513d262bc386f559cea8bbfce0b30478ef5-merged.mount: Deactivated successfully.
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.232 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.234 253665 INFO os_vif [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:a9:1c,bridge_name='br-int',has_traffic_filtering=True,id=2bf46f44-05ff-4af4-ba41-f280a21be09e,network=Network(32b06b6f-2dbe-45a6-a0ed-07f342aa967b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bf46f44-05')#033[00m
Nov 22 04:46:02 np0005532048 podman[398021]: 2025-11-22 09:46:02.239827582 +0000 UTC m=+0.098557135 container cleanup 990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 04:46:02 np0005532048 systemd[1]: libpod-conmon-990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322.scope: Deactivated successfully.
Nov 22 04:46:02 np0005532048 podman[398078]: 2025-11-22 09:46:02.314357713 +0000 UTC m=+0.051168065 container remove 990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:46:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.321 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[082e9408-4f17-425c-8df5-d45d4daefd29]: (4, ('Sat Nov 22 09:46:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b (990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322)\n990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322\nSat Nov 22 09:46:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b (990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322)\n990d18a54a71316a89283433d5d651004eb95fea6a5eca26fdcf1b93fe6a7322\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.324 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[844ad775-4436-4da6-9105-62b3347775fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.325 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32b06b6f-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:02 np0005532048 kernel: tap32b06b6f-20: left promiscuous mode
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.345 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[361989cd-acf9-442e-9c1f-74711d703c93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.359 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d225522e-cd11-421f-9840-8275343bb6c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.361 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2b4d538e-2841-4831-bc64-6b769352dc5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.377 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[36f01dd9-bd70-4274-9aa5-4f5a6d5ea575]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758106, 'reachable_time': 15612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398096, 'error': None, 'target': 'ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.379 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-32b06b6f-2dbe-45a6-a0ed-07f342aa967b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:46:02 np0005532048 systemd[1]: run-netns-ovnmeta\x2d32b06b6f\x2d2dbe\x2d45a6\x2da0ed\x2d07f342aa967b.mount: Deactivated successfully.
Nov 22 04:46:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:02.379 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[6c88735b-39f7-4420-88e0-f6950d95f0a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.667 253665 INFO nova.virt.libvirt.driver [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Deleting instance files /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972_del#033[00m
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.668 253665 INFO nova.virt.libvirt.driver [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Deletion of /var/lib/nova/instances/e1b6c07e-b79f-4b39-a2b8-a952e54f4972_del complete#033[00m
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.769 253665 INFO nova.compute.manager [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.769 253665 DEBUG oslo.service.loopingcall [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.769 253665 DEBUG nova.compute.manager [-] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:46:02 np0005532048 nova_compute[253661]: 2025-11-22 09:46:02.770 253665 DEBUG nova.network.neutron [-] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007596545956241453 of space, bias 1.0, pg target 0.22789637868724358 quantized to 32 (current 32)
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:46:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:46:03 np0005532048 nova_compute[253661]: 2025-11-22 09:46:03.776 253665 DEBUG nova.network.neutron [req-7229fd0c-53ae-4f04-97df-1409dd547740 req-54d1c048-033c-44ea-b03e-85e0147d3718 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updated VIF entry in instance network info cache for port 2bf46f44-05ff-4af4-ba41-f280a21be09e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:46:03 np0005532048 nova_compute[253661]: 2025-11-22 09:46:03.776 253665 DEBUG nova.network.neutron [req-7229fd0c-53ae-4f04-97df-1409dd547740 req-54d1c048-033c-44ea-b03e-85e0147d3718 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updating instance_info_cache with network_info: [{"id": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "address": "fa:16:3e:ec:a9:1c", "network": {"id": "32b06b6f-2dbe-45a6-a0ed-07f342aa967b", "bridge": "br-int", "label": "tempest-network-smoke--1797200609", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8acfdd432cb64d45993b2a66a2d29b82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bf46f44-05", "ovs_interfaceid": "2bf46f44-05ff-4af4-ba41-f280a21be09e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:46:03 np0005532048 nova_compute[253661]: 2025-11-22 09:46:03.792 253665 DEBUG nova.network.neutron [-] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:46:03 np0005532048 nova_compute[253661]: 2025-11-22 09:46:03.833 253665 DEBUG oslo_concurrency.lockutils [req-7229fd0c-53ae-4f04-97df-1409dd547740 req-54d1c048-033c-44ea-b03e-85e0147d3718 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-e1b6c07e-b79f-4b39-a2b8-a952e54f4972" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:46:03 np0005532048 nova_compute[253661]: 2025-11-22 09:46:03.849 253665 INFO nova.compute.manager [-] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Took 1.08 seconds to deallocate network for instance.#033[00m
Nov 22 04:46:03 np0005532048 nova_compute[253661]: 2025-11-22 09:46:03.900 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:03 np0005532048 nova_compute[253661]: 2025-11-22 09:46:03.911 253665 DEBUG nova.compute.manager [req-1cc8c360-0aa9-4b88-af78-3776a3878bcf req-caaa8977-6731-4a2c-a071-eea85a1c44eb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received event network-vif-deleted-2bf46f44-05ff-4af4-ba41-f280a21be09e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:46:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:46:03 np0005532048 nova_compute[253661]: 2025-11-22 09:46:03.963 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:03 np0005532048 nova_compute[253661]: 2025-11-22 09:46:03.963 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.026 253665 DEBUG oslo_concurrency.processutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.080 253665 DEBUG nova.compute.manager [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received event network-vif-unplugged-2bf46f44-05ff-4af4-ba41-f280a21be09e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.080 253665 DEBUG oslo_concurrency.lockutils [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.080 253665 DEBUG oslo_concurrency.lockutils [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.081 253665 DEBUG oslo_concurrency.lockutils [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.081 253665 DEBUG nova.compute.manager [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] No waiting events found dispatching network-vif-unplugged-2bf46f44-05ff-4af4-ba41-f280a21be09e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.081 253665 WARNING nova.compute.manager [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received unexpected event network-vif-unplugged-2bf46f44-05ff-4af4-ba41-f280a21be09e for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.081 253665 DEBUG nova.compute.manager [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received event network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.082 253665 DEBUG oslo_concurrency.lockutils [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.082 253665 DEBUG oslo_concurrency.lockutils [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.082 253665 DEBUG oslo_concurrency.lockutils [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.082 253665 DEBUG nova.compute.manager [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] No waiting events found dispatching network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.082 253665 WARNING nova.compute.manager [req-302ca41a-07c7-4d83-8df1-d71ef38a484d req-0ae670e1-cf04-47ad-b9c1-30a1a8ec0fe2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Received unexpected event network-vif-plugged-2bf46f44-05ff-4af4-ba41-f280a21be09e for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:46:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 57 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 16 KiB/s wr, 23 op/s
Nov 22 04:46:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:46:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2472057837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.462 253665 DEBUG oslo_concurrency.processutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.468 253665 DEBUG nova.compute.provider_tree [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.488 253665 DEBUG nova.scheduler.client.report [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.519 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.569 253665 INFO nova.scheduler.client.report [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Deleted allocations for instance e1b6c07e-b79f-4b39-a2b8-a952e54f4972#033[00m
Nov 22 04:46:04 np0005532048 nova_compute[253661]: 2025-11-22 09:46:04.667 253665 DEBUG oslo_concurrency.lockutils [None req-ee12a98c-dce6-45f2-9883-5b0b577be4ad 36208b8910e24c06bcc7f958bab2adf1 8acfdd432cb64d45993b2a66a2d29b82 - - default default] Lock "e1b6c07e-b79f-4b39-a2b8-a952e54f4972" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:05.051 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:f0:a2 10.100.0.2 2001:db8:0:1:f816:3eff:fe38:f0a2 2001:db8::f816:3eff:fe38:f0a2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe38:f0a2/64 2001:db8::f816:3eff:fe38:f0a2/64', 'neutron:device_id': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eb6cfbf-9d17-4d61-b927-87a60dc61782, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b8d092bb-b893-4593-9090-1acdc081ae18) old=Port_Binding(mac=['fa:16:3e:38:f0:a2 10.100.0.2 2001:db8::f816:3eff:fe38:f0a2'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe38:f0a2/64', 'neutron:device_id': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:46:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:05.052 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b8d092bb-b893-4593-9090-1acdc081ae18 in datapath b6b9221a-729b-4988-afa8-72f95360d9ea updated#033[00m
Nov 22 04:46:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:05.053 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6b9221a-729b-4988-afa8-72f95360d9ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:46:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:05.054 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ab645c66-0fc9-41ab-9bdb-7402f184531e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 57 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 5.7 KiB/s wr, 23 op/s
Nov 22 04:46:06 np0005532048 nova_compute[253661]: 2025-11-22 09:46:06.900 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:07 np0005532048 nova_compute[253661]: 2025-11-22 09:46:07.230 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Nov 22 04:46:08 np0005532048 podman[398120]: 2025-11-22 09:46:08.373672495 +0000 UTC m=+0.071284891 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 22 04:46:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:46:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Nov 22 04:46:11 np0005532048 nova_compute[253661]: 2025-11-22 09:46:11.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 KiB/s wr, 27 op/s
Nov 22 04:46:12 np0005532048 nova_compute[253661]: 2025-11-22 09:46:12.231 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:46:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2683379569' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:46:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:46:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2683379569' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:46:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:46:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Nov 22 04:46:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 852 B/s wr, 5 op/s
Nov 22 04:46:16 np0005532048 nova_compute[253661]: 2025-11-22 09:46:16.903 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:17 np0005532048 nova_compute[253661]: 2025-11-22 09:46:17.199 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804762.198196, e1b6c07e-b79f-4b39-a2b8-a952e54f4972 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:46:17 np0005532048 nova_compute[253661]: 2025-11-22 09:46:17.199 253665 INFO nova.compute.manager [-] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:46:17 np0005532048 nova_compute[253661]: 2025-11-22 09:46:17.225 253665 DEBUG nova.compute.manager [None req-cefc2a13-754c-4093-bee5-f64f99e3d4ac - - - - - -] [instance: e1b6c07e-b79f-4b39-a2b8-a952e54f4972] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:46:17 np0005532048 nova_compute[253661]: 2025-11-22 09:46:17.232 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 853 B/s wr, 5 op/s
Nov 22 04:46:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:46:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:46:21 np0005532048 nova_compute[253661]: 2025-11-22 09:46:21.905 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:46:22 np0005532048 nova_compute[253661]: 2025-11-22 09:46:22.233 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:46:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:46:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:46:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:46:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:46:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:46:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:46:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:46:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:46:26 np0005532048 nova_compute[253661]: 2025-11-22 09:46:26.907 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:26 np0005532048 nova_compute[253661]: 2025-11-22 09:46:26.941 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:27 np0005532048 nova_compute[253661]: 2025-11-22 09:46:27.109 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:27 np0005532048 nova_compute[253661]: 2025-11-22 09:46:27.234 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:27.989 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:27.989 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:27.989 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:46:28 np0005532048 nova_compute[253661]: 2025-11-22 09:46:28.233 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "63134c6f-fc14-4157-9874-e7c6227f8d0a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:28 np0005532048 nova_compute[253661]: 2025-11-22 09:46:28.233 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:28 np0005532048 nova_compute[253661]: 2025-11-22 09:46:28.256 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:46:28 np0005532048 nova_compute[253661]: 2025-11-22 09:46:28.381 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:28 np0005532048 nova_compute[253661]: 2025-11-22 09:46:28.381 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:28 np0005532048 nova_compute[253661]: 2025-11-22 09:46:28.388 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:46:28 np0005532048 nova_compute[253661]: 2025-11-22 09:46:28.388 253665 INFO nova.compute.claims [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:46:28 np0005532048 nova_compute[253661]: 2025-11-22 09:46:28.779 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:46:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:46:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:46:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3520195221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:46:29 np0005532048 nova_compute[253661]: 2025-11-22 09:46:29.205 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:46:29 np0005532048 nova_compute[253661]: 2025-11-22 09:46:29.211 253665 DEBUG nova.compute.provider_tree [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:46:29 np0005532048 nova_compute[253661]: 2025-11-22 09:46:29.230 253665 DEBUG nova.scheduler.client.report [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:46:29 np0005532048 nova_compute[253661]: 2025-11-22 09:46:29.436 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:29 np0005532048 nova_compute[253661]: 2025-11-22 09:46:29.436 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:46:29 np0005532048 nova_compute[253661]: 2025-11-22 09:46:29.614 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:46:29 np0005532048 nova_compute[253661]: 2025-11-22 09:46:29.615 253665 DEBUG nova.network.neutron [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:46:29 np0005532048 nova_compute[253661]: 2025-11-22 09:46:29.725 253665 INFO nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:46:29 np0005532048 nova_compute[253661]: 2025-11-22 09:46:29.760 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:46:29 np0005532048 nova_compute[253661]: 2025-11-22 09:46:29.917 253665 DEBUG nova.policy [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:46:29 np0005532048 nova_compute[253661]: 2025-11-22 09:46:29.920 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:46:29 np0005532048 nova_compute[253661]: 2025-11-22 09:46:29.921 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:46:29 np0005532048 nova_compute[253661]: 2025-11-22 09:46:29.922 253665 INFO nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Creating image(s)#033[00m
Nov 22 04:46:29 np0005532048 nova_compute[253661]: 2025-11-22 09:46:29.942 253665 DEBUG nova.storage.rbd_utils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:46:29 np0005532048 nova_compute[253661]: 2025-11-22 09:46:29.961 253665 DEBUG nova.storage.rbd_utils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:46:29 np0005532048 nova_compute[253661]: 2025-11-22 09:46:29.978 253665 DEBUG nova.storage.rbd_utils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:46:29 np0005532048 nova_compute[253661]: 2025-11-22 09:46:29.982 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:46:30 np0005532048 nova_compute[253661]: 2025-11-22 09:46:30.062 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:46:30 np0005532048 nova_compute[253661]: 2025-11-22 09:46:30.063 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:30 np0005532048 nova_compute[253661]: 2025-11-22 09:46:30.063 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:30 np0005532048 nova_compute[253661]: 2025-11-22 09:46:30.064 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:30 np0005532048 nova_compute[253661]: 2025-11-22 09:46:30.083 253665 DEBUG nova.storage.rbd_utils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:46:30 np0005532048 nova_compute[253661]: 2025-11-22 09:46:30.087 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:46:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 41 MiB data, 970 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:46:30 np0005532048 nova_compute[253661]: 2025-11-22 09:46:30.342 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:46:30 np0005532048 nova_compute[253661]: 2025-11-22 09:46:30.392 253665 DEBUG nova.storage.rbd_utils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:46:30 np0005532048 nova_compute[253661]: 2025-11-22 09:46:30.473 253665 DEBUG nova.objects.instance [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 63134c6f-fc14-4157-9874-e7c6227f8d0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:46:30 np0005532048 nova_compute[253661]: 2025-11-22 09:46:30.487 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:46:30 np0005532048 nova_compute[253661]: 2025-11-22 09:46:30.487 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Ensure instance console log exists: /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:46:30 np0005532048 nova_compute[253661]: 2025-11-22 09:46:30.488 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:30 np0005532048 nova_compute[253661]: 2025-11-22 09:46:30.488 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:30 np0005532048 nova_compute[253661]: 2025-11-22 09:46:30.488 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:31 np0005532048 nova_compute[253661]: 2025-11-22 09:46:31.910 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:31 np0005532048 nova_compute[253661]: 2025-11-22 09:46:31.970 253665 DEBUG nova.network.neutron [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Successfully created port: be2ad403-fc37-4e1b-a9b8-f0e116595caf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:46:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 59 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 749 KiB/s wr, 11 op/s
Nov 22 04:46:32 np0005532048 nova_compute[253661]: 2025-11-22 09:46:32.236 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:32 np0005532048 podman[398336]: 2025-11-22 09:46:32.359168836 +0000 UTC m=+0.055309677 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 04:46:32 np0005532048 podman[398335]: 2025-11-22 09:46:32.379774186 +0000 UTC m=+0.077941106 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 04:46:33 np0005532048 nova_compute[253661]: 2025-11-22 09:46:33.809 253665 DEBUG nova.network.neutron [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Successfully updated port: be2ad403-fc37-4e1b-a9b8-f0e116595caf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:46:33 np0005532048 nova_compute[253661]: 2025-11-22 09:46:33.874 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:46:33 np0005532048 nova_compute[253661]: 2025-11-22 09:46:33.875 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:46:33 np0005532048 nova_compute[253661]: 2025-11-22 09:46:33.875 253665 DEBUG nova.network.neutron [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:46:33 np0005532048 nova_compute[253661]: 2025-11-22 09:46:33.934 253665 DEBUG nova.compute.manager [req-a5f68da7-50b8-4623-86fc-7a30a01575bb req-7d9bdc15-5100-4206-bb03-85c30afa44b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-changed-be2ad403-fc37-4e1b-a9b8-f0e116595caf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:46:33 np0005532048 nova_compute[253661]: 2025-11-22 09:46:33.935 253665 DEBUG nova.compute.manager [req-a5f68da7-50b8-4623-86fc-7a30a01575bb req-7d9bdc15-5100-4206-bb03-85c30afa44b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Refreshing instance network info cache due to event network-changed-be2ad403-fc37-4e1b-a9b8-f0e116595caf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:46:33 np0005532048 nova_compute[253661]: 2025-11-22 09:46:33.935 253665 DEBUG oslo_concurrency.lockutils [req-a5f68da7-50b8-4623-86fc-7a30a01575bb req-7d9bdc15-5100-4206-bb03-85c30afa44b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:46:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:46:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:46:34 np0005532048 nova_compute[253661]: 2025-11-22 09:46:34.242 253665 DEBUG nova.network.neutron [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:46:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.893 253665 DEBUG nova.network.neutron [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updating instance_info_cache with network_info: [{"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.912 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.934 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.934 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Instance network_info: |[{"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.935 253665 DEBUG oslo_concurrency.lockutils [req-a5f68da7-50b8-4623-86fc-7a30a01575bb req-7d9bdc15-5100-4206-bb03-85c30afa44b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.935 253665 DEBUG nova.network.neutron [req-a5f68da7-50b8-4623-86fc-7a30a01575bb req-7d9bdc15-5100-4206-bb03-85c30afa44b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Refreshing network info cache for port be2ad403-fc37-4e1b-a9b8-f0e116595caf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.938 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Start _get_guest_xml network_info=[{"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.941 253665 WARNING nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.947 253665 DEBUG nova.virt.libvirt.host [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.948 253665 DEBUG nova.virt.libvirt.host [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.953 253665 DEBUG nova.virt.libvirt.host [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.954 253665 DEBUG nova.virt.libvirt.host [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.954 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.954 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.955 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.955 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.955 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.955 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.956 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.956 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.956 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.956 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.956 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.957 253665 DEBUG nova.virt.hardware [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:46:36 np0005532048 nova_compute[253661]: 2025-11-22 09:46:36.959 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.238 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:46:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1324791487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.398 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.420 253665 DEBUG nova.storage.rbd_utils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.424 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:46:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:46:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/864849740' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.878 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.880 253665 DEBUG nova.virt.libvirt.vif [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:46:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-533516475',display_name='tempest-TestGettingAddress-server-533516475',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-533516475',id=142,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpILKTYWQ3kfrev/53VAY+pIDp4KWqBaIuz4XZlRuV7cYP/3tSjynSwyzK2UmsUCSjsXQFLnnvZ6v16tA6+0Is85ND23t1ywaxzBRdcHpQBUN3ph/tnW10JsUxuXJTUFw==',key_name='tempest-TestGettingAddress-1100634772',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-khcmddwq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:46:29Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=63134c6f-fc14-4157-9874-e7c6227f8d0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.880 253665 DEBUG nova.network.os_vif_util [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.881 253665 DEBUG nova.network.os_vif_util [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:08:93,bridge_name='br-int',has_traffic_filtering=True,id=be2ad403-fc37-4e1b-a9b8-f0e116595caf,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2ad403-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.882 253665 DEBUG nova.objects.instance [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 63134c6f-fc14-4157-9874-e7c6227f8d0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.894 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  <uuid>63134c6f-fc14-4157-9874-e7c6227f8d0a</uuid>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  <name>instance-0000008e</name>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestGettingAddress-server-533516475</nova:name>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:46:36</nova:creationTime>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:46:37 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:        <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:        <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:        <nova:port uuid="be2ad403-fc37-4e1b-a9b8-f0e116595caf">
Nov 22 04:46:37 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feca:893" ipVersion="6"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feca:893" ipVersion="6"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <entry name="serial">63134c6f-fc14-4157-9874-e7c6227f8d0a</entry>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <entry name="uuid">63134c6f-fc14-4157-9874-e7c6227f8d0a</entry>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/63134c6f-fc14-4157-9874-e7c6227f8d0a_disk">
Nov 22 04:46:37 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:46:37 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/63134c6f-fc14-4157-9874-e7c6227f8d0a_disk.config">
Nov 22 04:46:37 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:46:37 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:ca:08:93"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <target dev="tapbe2ad403-fc"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a/console.log" append="off"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:46:37 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:46:37 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:46:37 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:46:37 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.894 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Preparing to wait for external event network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.895 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.896 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.896 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.896 253665 DEBUG nova.virt.libvirt.vif [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:46:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-533516475',display_name='tempest-TestGettingAddress-server-533516475',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-533516475',id=142,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpILKTYWQ3kfrev/53VAY+pIDp4KWqBaIuz4XZlRuV7cYP/3tSjynSwyzK2UmsUCSjsXQFLnnvZ6v16tA6+0Is85ND23t1ywaxzBRdcHpQBUN3ph/tnW10JsUxuXJTUFw==',key_name='tempest-TestGettingAddress-1100634772',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-khcmddwq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:46:29Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=63134c6f-fc14-4157-9874-e7c6227f8d0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.897 253665 DEBUG nova.network.os_vif_util [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.897 253665 DEBUG nova.network.os_vif_util [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:08:93,bridge_name='br-int',has_traffic_filtering=True,id=be2ad403-fc37-4e1b-a9b8-f0e116595caf,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2ad403-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.898 253665 DEBUG os_vif [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:08:93,bridge_name='br-int',has_traffic_filtering=True,id=be2ad403-fc37-4e1b-a9b8-f0e116595caf,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2ad403-fc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.898 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.899 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.899 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.901 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe2ad403-fc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.902 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe2ad403-fc, col_values=(('external_ids', {'iface-id': 'be2ad403-fc37-4e1b-a9b8-f0e116595caf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ca:08:93', 'vm-uuid': '63134c6f-fc14-4157-9874-e7c6227f8d0a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.903 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:37 np0005532048 NetworkManager[48920]: <info>  [1763804797.9046] manager: (tapbe2ad403-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/625)
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.906 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.909 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.910 253665 INFO os_vif [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:08:93,bridge_name='br-int',has_traffic_filtering=True,id=be2ad403-fc37-4e1b-a9b8-f0e116595caf,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2ad403-fc')#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.962 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.962 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.962 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:ca:08:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.962 253665 INFO nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Using config drive#033[00m
Nov 22 04:46:37 np0005532048 nova_compute[253661]: 2025-11-22 09:46:37.983 253665 DEBUG nova.storage.rbd_utils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:46:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:46:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.427 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:46:38 np0005532048 nova_compute[253661]: 2025-11-22 09:46:38.428 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.428 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:46:38 np0005532048 nova_compute[253661]: 2025-11-22 09:46:38.483 253665 INFO nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Creating config drive at /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a/disk.config#033[00m
Nov 22 04:46:38 np0005532048 nova_compute[253661]: 2025-11-22 09:46:38.492 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf_9sg_v_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:46:38 np0005532048 nova_compute[253661]: 2025-11-22 09:46:38.643 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf_9sg_v_" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:46:38 np0005532048 nova_compute[253661]: 2025-11-22 09:46:38.682 253665 DEBUG nova.storage.rbd_utils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:46:38 np0005532048 nova_compute[253661]: 2025-11-22 09:46:38.687 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a/disk.config 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:46:38 np0005532048 nova_compute[253661]: 2025-11-22 09:46:38.866 253665 DEBUG oslo_concurrency.processutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a/disk.config 63134c6f-fc14-4157-9874-e7c6227f8d0a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:46:38 np0005532048 nova_compute[253661]: 2025-11-22 09:46:38.867 253665 INFO nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Deleting local config drive /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a/disk.config because it was imported into RBD.#033[00m
Nov 22 04:46:38 np0005532048 kernel: tapbe2ad403-fc: entered promiscuous mode
Nov 22 04:46:38 np0005532048 NetworkManager[48920]: <info>  [1763804798.9222] manager: (tapbe2ad403-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/626)
Nov 22 04:46:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:46:38Z|01541|binding|INFO|Claiming lport be2ad403-fc37-4e1b-a9b8-f0e116595caf for this chassis.
Nov 22 04:46:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:46:38Z|01542|binding|INFO|be2ad403-fc37-4e1b-a9b8-f0e116595caf: Claiming fa:16:3e:ca:08:93 10.100.0.6 2001:db8:0:1:f816:3eff:feca:893 2001:db8::f816:3eff:feca:893
Nov 22 04:46:38 np0005532048 nova_compute[253661]: 2025-11-22 09:46:38.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:38 np0005532048 nova_compute[253661]: 2025-11-22 09:46:38.930 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:46:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.961 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:08:93 10.100.0.6 2001:db8:0:1:f816:3eff:feca:893 2001:db8::f816:3eff:feca:893'], port_security=['fa:16:3e:ca:08:93 10.100.0.6 2001:db8:0:1:f816:3eff:feca:893 2001:db8::f816:3eff:feca:893'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:feca:893/64 2001:db8::f816:3eff:feca:893/64', 'neutron:device_id': '63134c6f-fc14-4157-9874-e7c6227f8d0a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b8f1ae80-edda-4d40-9085-393558ac5aa1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eb6cfbf-9d17-4d61-b927-87a60dc61782, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=be2ad403-fc37-4e1b-a9b8-f0e116595caf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:46:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.962 162862 INFO neutron.agent.ovn.metadata.agent [-] Port be2ad403-fc37-4e1b-a9b8-f0e116595caf in datapath b6b9221a-729b-4988-afa8-72f95360d9ea bound to our chassis#033[00m
Nov 22 04:46:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.964 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6b9221a-729b-4988-afa8-72f95360d9ea#033[00m
Nov 22 04:46:38 np0005532048 systemd-machined[215941]: New machine qemu-173-instance-0000008e.
Nov 22 04:46:38 np0005532048 systemd[1]: Started Virtual Machine qemu-173-instance-0000008e.
Nov 22 04:46:38 np0005532048 systemd-udevd[398521]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:46:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.977 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[798ba77f-5661-4a63-b1a4-9059f866358a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.978 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb6b9221a-71 in ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:46:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.979 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb6b9221a-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:46:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.980 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f23af61a-b242-45cb-a132-1d355f6cbcb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.980 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[448a1106-fd1b-4fbc-9b82-97ef3e2e3128]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:38 np0005532048 NetworkManager[48920]: <info>  [1763804798.9880] device (tapbe2ad403-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:46:38 np0005532048 NetworkManager[48920]: <info>  [1763804798.9891] device (tapbe2ad403-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:46:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:38.994 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[397ce3e6-730c-4459-989c-cf471caf0957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:38 np0005532048 nova_compute[253661]: 2025-11-22 09:46:38.994 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:46:39Z|01543|binding|INFO|Setting lport be2ad403-fc37-4e1b-a9b8-f0e116595caf ovn-installed in OVS
Nov 22 04:46:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:46:39Z|01544|binding|INFO|Setting lport be2ad403-fc37-4e1b-a9b8-f0e116595caf up in Southbound
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.003 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.008 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8918ce58-4d61-4bda-a517-76c3cc46163e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.039 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa94ee3-b2ce-47d6-82d1-dc9f53cb5471]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:39 np0005532048 systemd-udevd[398526]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.046 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6b483b14-a47e-4d36-9edb-01e45594cf3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:39 np0005532048 NetworkManager[48920]: <info>  [1763804799.0477] manager: (tapb6b9221a-70): new Veth device (/org/freedesktop/NetworkManager/Devices/627)
Nov 22 04:46:39 np0005532048 podman[398507]: 2025-11-22 09:46:39.054296739 +0000 UTC m=+0.100504023 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.076 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a756565f-fc19-4188-aaad-d334200d58b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.079 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf5df24-b80d-4041-9fa9-c11ea7f66a41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:39 np0005532048 NetworkManager[48920]: <info>  [1763804799.0977] device (tapb6b9221a-70): carrier: link connected
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.102 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa4a1ba-b6bc-4d34-a682-1c325f88e466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.121 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94672775-2453-4c4a-9d40-a08db899a684]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6b9221a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:f0:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764983, 'reachable_time': 18873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398569, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.139 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc38b0e-1518-4a17-9c70-ed50f9b16bce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:f0a2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 764983, 'tstamp': 764983}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398570, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.155 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[632baa45-1ecc-4cb3-9cd5-8a295eb2d684]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6b9221a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:f0:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764983, 'reachable_time': 18873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398571, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.183 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[21759096-2cc4-46ad-bc54-b5e54df510f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.244 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c573f1f7-13d6-40ed-8f85-0d9d514c76cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.245 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6b9221a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.245 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.246 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6b9221a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.247 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:39 np0005532048 kernel: tapb6b9221a-70: entered promiscuous mode
Nov 22 04:46:39 np0005532048 NetworkManager[48920]: <info>  [1763804799.2501] manager: (tapb6b9221a-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/628)
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.250 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6b9221a-70, col_values=(('external_ids', {'iface-id': 'b8d092bb-b893-4593-9090-1acdc081ae18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.253 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.253 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b6b9221a-729b-4988-afa8-72f95360d9ea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b6b9221a-729b-4988-afa8-72f95360d9ea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:46:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:46:39Z|01545|binding|INFO|Releasing lport b8d092bb-b893-4593-9090-1acdc081ae18 from this chassis (sb_readonly=0)
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.254 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b86c5c7e-050a-4864-8cb2-3b21a07f8157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.255 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-b6b9221a-729b-4988-afa8-72f95360d9ea
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/b6b9221a-729b-4988-afa8-72f95360d9ea.pid.haproxy
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID b6b9221a-729b-4988-afa8-72f95360d9ea
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.255 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'env', 'PROCESS_TAG=haproxy-b6b9221a-729b-4988-afa8-72f95360d9ea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b6b9221a-729b-4988-afa8-72f95360d9ea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.266 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.308 253665 DEBUG nova.compute.manager [req-3f205d78-d04a-422f-99b3-b09dd8e1a906 req-cdfe801f-77ff-421f-bd2b-7fc167ed3512 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.308 253665 DEBUG oslo_concurrency.lockutils [req-3f205d78-d04a-422f-99b3-b09dd8e1a906 req-cdfe801f-77ff-421f-bd2b-7fc167ed3512 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.309 253665 DEBUG oslo_concurrency.lockutils [req-3f205d78-d04a-422f-99b3-b09dd8e1a906 req-cdfe801f-77ff-421f-bd2b-7fc167ed3512 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.309 253665 DEBUG oslo_concurrency.lockutils [req-3f205d78-d04a-422f-99b3-b09dd8e1a906 req-cdfe801f-77ff-421f-bd2b-7fc167ed3512 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.309 253665 DEBUG nova.compute.manager [req-3f205d78-d04a-422f-99b3-b09dd8e1a906 req-cdfe801f-77ff-421f-bd2b-7fc167ed3512 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Processing event network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:46:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:39.430 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.465 253665 DEBUG nova.network.neutron [req-a5f68da7-50b8-4623-86fc-7a30a01575bb req-7d9bdc15-5100-4206-bb03-85c30afa44b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updated VIF entry in instance network info cache for port be2ad403-fc37-4e1b-a9b8-f0e116595caf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.466 253665 DEBUG nova.network.neutron [req-a5f68da7-50b8-4623-86fc-7a30a01575bb req-7d9bdc15-5100-4206-bb03-85c30afa44b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updating instance_info_cache with network_info: [{"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.483 253665 DEBUG oslo_concurrency.lockutils [req-a5f68da7-50b8-4623-86fc-7a30a01575bb req-7d9bdc15-5100-4206-bb03-85c30afa44b2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:46:39 np0005532048 podman[398644]: 2025-11-22 09:46:39.598912417 +0000 UTC m=+0.050442826 container create effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.614 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804799.612993, 63134c6f-fc14-4157-9874-e7c6227f8d0a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.615 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] VM Started (Lifecycle Event)#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.617 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.620 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.627 253665 INFO nova.virt.libvirt.driver [-] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Instance spawned successfully.#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.627 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:46:39 np0005532048 systemd[1]: Started libpod-conmon-effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e.scope.
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.651 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.657 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:46:39 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.665 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.665 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.666 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.666 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.667 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.667 253665 DEBUG nova.virt.libvirt.driver [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:46:39 np0005532048 podman[398644]: 2025-11-22 09:46:39.574055303 +0000 UTC m=+0.025585742 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:46:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd7c84ba3a988d7548558565c563363ed9d49bd874d1c96a511c8c8772c831a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:46:39 np0005532048 podman[398644]: 2025-11-22 09:46:39.68324636 +0000 UTC m=+0.134776809 container init effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:46:39 np0005532048 podman[398644]: 2025-11-22 09:46:39.690657812 +0000 UTC m=+0.142188221 container start effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.708 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.709 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804799.6145248, 63134c6f-fc14-4157-9874-e7c6227f8d0a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.709 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:46:39 np0005532048 neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea[398661]: [NOTICE]   (398665) : New worker (398667) forked
Nov 22 04:46:39 np0005532048 neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea[398661]: [NOTICE]   (398665) : Loading success.
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.740 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.744 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804799.6196914, 63134c6f-fc14-4157-9874-e7c6227f8d0a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.744 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.771 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.774 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.790 253665 INFO nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Took 9.87 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.790 253665 DEBUG nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.797 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.874 253665 INFO nova.compute.manager [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Took 11.53 seconds to build instance.#033[00m
Nov 22 04:46:39 np0005532048 nova_compute[253661]: 2025-11-22 09:46:39.902 253665 DEBUG oslo_concurrency.lockutils [None req-1ee46a13-b70f-448d-baff-d38a4a2db7cb 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:46:41 np0005532048 nova_compute[253661]: 2025-11-22 09:46:41.409 253665 DEBUG nova.compute.manager [req-22e2b2e8-62e9-4d90-b78d-c72686e57c5a req-cae85b19-c407-4696-a1af-ef2dbb7de534 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:46:41 np0005532048 nova_compute[253661]: 2025-11-22 09:46:41.410 253665 DEBUG oslo_concurrency.lockutils [req-22e2b2e8-62e9-4d90-b78d-c72686e57c5a req-cae85b19-c407-4696-a1af-ef2dbb7de534 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:41 np0005532048 nova_compute[253661]: 2025-11-22 09:46:41.410 253665 DEBUG oslo_concurrency.lockutils [req-22e2b2e8-62e9-4d90-b78d-c72686e57c5a req-cae85b19-c407-4696-a1af-ef2dbb7de534 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:41 np0005532048 nova_compute[253661]: 2025-11-22 09:46:41.410 253665 DEBUG oslo_concurrency.lockutils [req-22e2b2e8-62e9-4d90-b78d-c72686e57c5a req-cae85b19-c407-4696-a1af-ef2dbb7de534 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:41 np0005532048 nova_compute[253661]: 2025-11-22 09:46:41.411 253665 DEBUG nova.compute.manager [req-22e2b2e8-62e9-4d90-b78d-c72686e57c5a req-cae85b19-c407-4696-a1af-ef2dbb7de534 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] No waiting events found dispatching network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:46:41 np0005532048 nova_compute[253661]: 2025-11-22 09:46:41.411 253665 WARNING nova.compute.manager [req-22e2b2e8-62e9-4d90-b78d-c72686e57c5a req-cae85b19-c407-4696-a1af-ef2dbb7de534 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received unexpected event network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf for instance with vm_state active and task_state None.#033[00m
Nov 22 04:46:41 np0005532048 nova_compute[253661]: 2025-11-22 09:46:41.959 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 938 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Nov 22 04:46:42 np0005532048 NetworkManager[48920]: <info>  [1763804802.2067] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/629)
Nov 22 04:46:42 np0005532048 NetworkManager[48920]: <info>  [1763804802.2074] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/630)
Nov 22 04:46:42 np0005532048 nova_compute[253661]: 2025-11-22 09:46:42.209 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:42 np0005532048 nova_compute[253661]: 2025-11-22 09:46:42.315 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:46:42Z|01546|binding|INFO|Releasing lport b8d092bb-b893-4593-9090-1acdc081ae18 from this chassis (sb_readonly=0)
Nov 22 04:46:42 np0005532048 nova_compute[253661]: 2025-11-22 09:46:42.329 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:42 np0005532048 nova_compute[253661]: 2025-11-22 09:46:42.395 253665 DEBUG nova.compute.manager [req-afff51e5-4e49-4845-8e00-5cde56fd8f6b req-6d70a12d-845e-418c-8f59-846f192a6834 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-changed-be2ad403-fc37-4e1b-a9b8-f0e116595caf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:46:42 np0005532048 nova_compute[253661]: 2025-11-22 09:46:42.396 253665 DEBUG nova.compute.manager [req-afff51e5-4e49-4845-8e00-5cde56fd8f6b req-6d70a12d-845e-418c-8f59-846f192a6834 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Refreshing instance network info cache due to event network-changed-be2ad403-fc37-4e1b-a9b8-f0e116595caf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:46:42 np0005532048 nova_compute[253661]: 2025-11-22 09:46:42.396 253665 DEBUG oslo_concurrency.lockutils [req-afff51e5-4e49-4845-8e00-5cde56fd8f6b req-6d70a12d-845e-418c-8f59-846f192a6834 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:46:42 np0005532048 nova_compute[253661]: 2025-11-22 09:46:42.396 253665 DEBUG oslo_concurrency.lockutils [req-afff51e5-4e49-4845-8e00-5cde56fd8f6b req-6d70a12d-845e-418c-8f59-846f192a6834 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:46:42 np0005532048 nova_compute[253661]: 2025-11-22 09:46:42.397 253665 DEBUG nova.network.neutron [req-afff51e5-4e49-4845-8e00-5cde56fd8f6b req-6d70a12d-845e-418c-8f59-846f192a6834 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Refreshing network info cache for port be2ad403-fc37-4e1b-a9b8-f0e116595caf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:46:42 np0005532048 nova_compute[253661]: 2025-11-22 09:46:42.904 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:43 np0005532048 nova_compute[253661]: 2025-11-22 09:46:43.768 253665 DEBUG nova.network.neutron [req-afff51e5-4e49-4845-8e00-5cde56fd8f6b req-6d70a12d-845e-418c-8f59-846f192a6834 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updated VIF entry in instance network info cache for port be2ad403-fc37-4e1b-a9b8-f0e116595caf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:46:43 np0005532048 nova_compute[253661]: 2025-11-22 09:46:43.769 253665 DEBUG nova.network.neutron [req-afff51e5-4e49-4845-8e00-5cde56fd8f6b req-6d70a12d-845e-418c-8f59-846f192a6834 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updating instance_info_cache with network_info: [{"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:46:43 np0005532048 nova_compute[253661]: 2025-11-22 09:46:43.801 253665 DEBUG oslo_concurrency.lockutils [req-afff51e5-4e49-4845-8e00-5cde56fd8f6b req-6d70a12d-845e-418c-8f59-846f192a6834 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:46:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:46:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 88 op/s
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.223 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.223 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.254 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.329 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.329 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.335 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.335 253665 INFO nova.compute.claims [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.360 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.360 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.360 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.360 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 63134c6f-fc14-4157-9874-e7c6227f8d0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.447 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:46:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:46:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578900337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.907 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.914 253665 DEBUG nova.compute.provider_tree [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.928 253665 DEBUG nova.scheduler.client.report [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.960 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:45 np0005532048 nova_compute[253661]: 2025-11-22 09:46:45.961 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.008 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.008 253665 DEBUG nova.network.neutron [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.030 253665 INFO nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.050 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:46:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.263 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.265 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.265 253665 INFO nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Creating image(s)#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.292 253665 DEBUG nova.storage.rbd_utils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.316 253665 DEBUG nova.storage.rbd_utils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.338 253665 DEBUG nova.storage.rbd_utils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.342 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.406 253665 DEBUG nova.policy [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '15f54ba9d7eb4efd9b760da5c85ec22e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2a86e5c3f3c34f2285b7958147f6bbd3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.446 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.447 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.448 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.448 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.468 253665 DEBUG nova.storage.rbd_utils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.474 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.857 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.383s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.925 253665 DEBUG nova.storage.rbd_utils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] resizing rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:46:46 np0005532048 nova_compute[253661]: 2025-11-22 09:46:46.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:47 np0005532048 nova_compute[253661]: 2025-11-22 09:46:47.043 253665 DEBUG nova.objects.instance [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'migration_context' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:46:47 np0005532048 nova_compute[253661]: 2025-11-22 09:46:47.056 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:46:47 np0005532048 nova_compute[253661]: 2025-11-22 09:46:47.056 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Ensure instance console log exists: /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:46:47 np0005532048 nova_compute[253661]: 2025-11-22 09:46:47.057 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:47 np0005532048 nova_compute[253661]: 2025-11-22 09:46:47.057 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:47 np0005532048 nova_compute[253661]: 2025-11-22 09:46:47.057 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:47 np0005532048 nova_compute[253661]: 2025-11-22 09:46:47.360 253665 DEBUG nova.network.neutron [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Successfully created port: 88d574be-cb53-4693-a025-34a039ee625c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:46:47 np0005532048 nova_compute[253661]: 2025-11-22 09:46:47.907 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:48 np0005532048 nova_compute[253661]: 2025-11-22 09:46:48.083 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updating instance_info_cache with network_info: [{"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:46:48 np0005532048 nova_compute[253661]: 2025-11-22 09:46:48.097 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:46:48 np0005532048 nova_compute[253661]: 2025-11-22 09:46:48.097 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:46:48 np0005532048 nova_compute[253661]: 2025-11-22 09:46:48.097 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:46:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 88 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Nov 22 04:46:48 np0005532048 nova_compute[253661]: 2025-11-22 09:46:48.368 253665 DEBUG nova.network.neutron [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Successfully updated port: 88d574be-cb53-4693-a025-34a039ee625c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:46:48 np0005532048 nova_compute[253661]: 2025-11-22 09:46:48.383 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:46:48 np0005532048 nova_compute[253661]: 2025-11-22 09:46:48.384 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:46:48 np0005532048 nova_compute[253661]: 2025-11-22 09:46:48.384 253665 DEBUG nova.network.neutron [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:46:48 np0005532048 nova_compute[253661]: 2025-11-22 09:46:48.480 253665 DEBUG nova.compute.manager [req-5cb982cd-2d92-49e7-93e9-5f64c7c37d4f req-4856d797-adcf-4097-8452-18036085687b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-changed-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:46:48 np0005532048 nova_compute[253661]: 2025-11-22 09:46:48.481 253665 DEBUG nova.compute.manager [req-5cb982cd-2d92-49e7-93e9-5f64c7c37d4f req-4856d797-adcf-4097-8452-18036085687b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing instance network info cache due to event network-changed-88d574be-cb53-4693-a025-34a039ee625c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:46:48 np0005532048 nova_compute[253661]: 2025-11-22 09:46:48.481 253665 DEBUG oslo_concurrency.lockutils [req-5cb982cd-2d92-49e7-93e9-5f64c7c37d4f req-4856d797-adcf-4097-8452-18036085687b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:46:48 np0005532048 nova_compute[253661]: 2025-11-22 09:46:48.519 253665 DEBUG nova.network.neutron [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:46:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:46:49 np0005532048 nova_compute[253661]: 2025-11-22 09:46:49.090 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:46:49 np0005532048 nova_compute[253661]: 2025-11-22 09:46:49.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:46:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 121 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 941 KiB/s wr, 99 op/s
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.910 253665 DEBUG nova.network.neutron [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.936 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.937 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance network_info: |[{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.937 253665 DEBUG oslo_concurrency.lockutils [req-5cb982cd-2d92-49e7-93e9-5f64c7c37d4f req-4856d797-adcf-4097-8452-18036085687b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.937 253665 DEBUG nova.network.neutron [req-5cb982cd-2d92-49e7-93e9-5f64c7c37d4f req-4856d797-adcf-4097-8452-18036085687b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing network info cache for port 88d574be-cb53-4693-a025-34a039ee625c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.940 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Start _get_guest_xml network_info=[{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.944 253665 WARNING nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.949 253665 DEBUG nova.virt.libvirt.host [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.950 253665 DEBUG nova.virt.libvirt.host [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.958 253665 DEBUG nova.virt.libvirt.host [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.959 253665 DEBUG nova.virt.libvirt.host [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.959 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.960 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.960 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.960 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.961 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.961 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.961 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.961 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.962 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.962 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.962 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.962 253665 DEBUG nova.virt.hardware [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:46:50 np0005532048 nova_compute[253661]: 2025-11-22 09:46:50.965 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:46:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:46:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3925495255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:46:51 np0005532048 nova_compute[253661]: 2025-11-22 09:46:51.500 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:46:51 np0005532048 nova_compute[253661]: 2025-11-22 09:46:51.539 253665 DEBUG nova.storage.rbd_utils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:46:51 np0005532048 nova_compute[253661]: 2025-11-22 09:46:51.547 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:51.999 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:46:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2473614015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.088 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.090 253665 DEBUG nova.virt.libvirt.vif [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:46:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-140973884',display_name='tempest-TestShelveInstance-server-140973884',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-140973884',id=143,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJMfV5BjTM8GJujok7HYi2H1JqAcE7EEyl3AluUOeV8mGOJe1kvDgduzG9FjqiMj3IyTkvrleTcL49x3Y3dHrfp4PbZT/WUxBgqL6QlOxXbuGaO695U0GzmKtLI552+pbw==',key_name='tempest-TestShelveInstance-1840126280',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a86e5c3f3c34f2285b7958147f6bbd3',ramdisk_id='',reservation_id='r-4322pjah',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-463882348',owner_user_name='tempest-TestShelveInstance-463882348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:46:46Z,user_data=None,user_id='15f54ba9d7eb4efd9b760da5c85ec22e',uuid=91cfde9c-3aa6-4946-92d6-471c8f63eb2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.091 253665 DEBUG nova.network.os_vif_util [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converting VIF {"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.092 253665 DEBUG nova.network.os_vif_util [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.093 253665 DEBUG nova.objects.instance [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.117 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  <uuid>91cfde9c-3aa6-4946-92d6-471c8f63eb2f</uuid>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  <name>instance-0000008f</name>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestShelveInstance-server-140973884</nova:name>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:46:50</nova:creationTime>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:46:52 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:        <nova:user uuid="15f54ba9d7eb4efd9b760da5c85ec22e">tempest-TestShelveInstance-463882348-project-member</nova:user>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:        <nova:project uuid="2a86e5c3f3c34f2285b7958147f6bbd3">tempest-TestShelveInstance-463882348</nova:project>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:        <nova:port uuid="88d574be-cb53-4693-a025-34a039ee625c">
Nov 22 04:46:52 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <entry name="serial">91cfde9c-3aa6-4946-92d6-471c8f63eb2f</entry>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <entry name="uuid">91cfde9c-3aa6-4946-92d6-471c8f63eb2f</entry>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk">
Nov 22 04:46:52 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:46:52 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config">
Nov 22 04:46:52 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:46:52 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:88:cb:74"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <target dev="tap88d574be-cb"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/console.log" append="off"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:46:52 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:46:52 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:46:52 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:46:52 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.119 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Preparing to wait for external event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.120 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.120 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.120 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.121 253665 DEBUG nova.virt.libvirt.vif [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:46:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-140973884',display_name='tempest-TestShelveInstance-server-140973884',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-140973884',id=143,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJMfV5BjTM8GJujok7HYi2H1JqAcE7EEyl3AluUOeV8mGOJe1kvDgduzG9FjqiMj3IyTkvrleTcL49x3Y3dHrfp4PbZT/WUxBgqL6QlOxXbuGaO695U0GzmKtLI552+pbw==',key_name='tempest-TestShelveInstance-1840126280',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a86e5c3f3c34f2285b7958147f6bbd3',ramdisk_id='',reservation_id='r-4322pjah',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-463882348',owner_user_name='tempest-TestShelveInstance-463882348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:46:46Z,user_data=None,user_id='15f54ba9d7eb4efd9b760da5c85ec22e',uuid=91cfde9c-3aa6-4946-92d6-471c8f63eb2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.122 253665 DEBUG nova.network.os_vif_util [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converting VIF {"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.122 253665 DEBUG nova.network.os_vif_util [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.123 253665 DEBUG os_vif [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.124 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.124 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.125 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.127 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.128 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88d574be-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.128 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88d574be-cb, col_values=(('external_ids', {'iface-id': '88d574be-cb53-4693-a025-34a039ee625c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:cb:74', 'vm-uuid': '91cfde9c-3aa6-4946-92d6-471c8f63eb2f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:46:52 np0005532048 NetworkManager[48920]: <info>  [1763804812.1314] manager: (tap88d574be-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/631)
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.132 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.138 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.140 253665 INFO os_vif [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb')#033[00m
Nov 22 04:46:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 138 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 108 op/s
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.222 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.223 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.223 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] No VIF found with MAC fa:16:3e:88:cb:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.224 253665 INFO nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Using config drive#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.250 253665 DEBUG nova.storage.rbd_utils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:46:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:46:52
Nov 22 04:46:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:46:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:46:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'backups', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 22 04:46:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.678 253665 DEBUG nova.network.neutron [req-5cb982cd-2d92-49e7-93e9-5f64c7c37d4f req-4856d797-adcf-4097-8452-18036085687b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updated VIF entry in instance network info cache for port 88d574be-cb53-4693-a025-34a039ee625c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.678 253665 DEBUG nova.network.neutron [req-5cb982cd-2d92-49e7-93e9-5f64c7c37d4f req-4856d797-adcf-4097-8452-18036085687b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.698 253665 DEBUG oslo_concurrency.lockutils [req-5cb982cd-2d92-49e7-93e9-5f64c7c37d4f req-4856d797-adcf-4097-8452-18036085687b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:46:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:46:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:46:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:46:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:46:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:46:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.833 253665 INFO nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Creating config drive at /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.839 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_c4e2qz2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:46:52 np0005532048 nova_compute[253661]: 2025-11-22 09:46:52.983 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_c4e2qz2" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.020 253665 DEBUG nova.storage.rbd_utils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.025 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.163 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:e1:fd 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-2420418b-f976-4644-88b8-5c9c24d72ca2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2420418b-f976-4644-88b8-5c9c24d72ca2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb042667d47c4d07a7e9967c65430c7b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99fc36cb-9a4c-4a60-8325-715974e22da5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c512420e-0b9a-4ee2-8cc1-60a3bae398ca) old=Port_Binding(mac=['fa:16:3e:5f:e1:fd 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-2420418b-f976-4644-88b8-5c9c24d72ca2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2420418b-f976-4644-88b8-5c9c24d72ca2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb042667d47c4d07a7e9967c65430c7b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.165 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c512420e-0b9a-4ee2-8cc1-60a3bae398ca in datapath 2420418b-f976-4644-88b8-5c9c24d72ca2 updated#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.168 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2420418b-f976-4644-88b8-5c9c24d72ca2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.169 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[004c1179-9af7-4ea3-9a82-0a0ea5be4e35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.205 253665 DEBUG oslo_concurrency.processutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.207 253665 INFO nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Deleting local config drive /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config because it was imported into RBD.#033[00m
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.263 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.263 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.263 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.264 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.264 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:46:53 np0005532048 kernel: tap88d574be-cb: entered promiscuous mode
Nov 22 04:46:53 np0005532048 NetworkManager[48920]: <info>  [1763804813.2690] manager: (tap88d574be-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/632)
Nov 22 04:46:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:46:53Z|01547|binding|INFO|Claiming lport 88d574be-cb53-4693-a025-34a039ee625c for this chassis.
Nov 22 04:46:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:46:53Z|01548|binding|INFO|88d574be-cb53-4693-a025-34a039ee625c: Claiming fa:16:3e:88:cb:74 10.100.0.14
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.279 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:cb:74 10.100.0.14'], port_security=['fa:16:3e:88:cb:74 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '91cfde9c-3aa6-4946-92d6-471c8f63eb2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-449be411-464c-4d69-be15-6372ecacd778', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a86e5c3f3c34f2285b7958147f6bbd3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'da881b1b-2aad-4a91-9422-a708cc3c5d34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a67d762-85ed-414e-ab70-eac2ab54b109, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=88d574be-cb53-4693-a025-34a039ee625c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.280 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 88d574be-cb53-4693-a025-34a039ee625c in datapath 449be411-464c-4d69-be15-6372ecacd778 bound to our chassis#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.285 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 449be411-464c-4d69-be15-6372ecacd778#033[00m
Nov 22 04:46:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:46:53Z|01549|binding|INFO|Setting lport 88d574be-cb53-4693-a025-34a039ee625c ovn-installed in OVS
Nov 22 04:46:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:46:53Z|01550|binding|INFO|Setting lport 88d574be-cb53-4693-a025-34a039ee625c up in Southbound
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.299 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf22108-3a2a-4db7-9823-a6875d7c38ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.303 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap449be411-41 in ovnmeta-449be411-464c-4d69-be15-6372ecacd778 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.307 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap449be411-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.308 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb0249f-f0f2-4d73-b285-aa53783cfe1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.309 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e8efb8-e0e2-4028-9e92-353468434247]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.306 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:53 np0005532048 systemd-machined[215941]: New machine qemu-174-instance-0000008f.
Nov 22 04:46:53 np0005532048 systemd-udevd[399004]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.323 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[ca433674-0899-4f0f-b12b-37152c90ec81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:53 np0005532048 ceph-mgr[75315]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1636168236
Nov 22 04:46:53 np0005532048 systemd[1]: Started Virtual Machine qemu-174-instance-0000008f.
Nov 22 04:46:53 np0005532048 NetworkManager[48920]: <info>  [1763804813.3412] device (tap88d574be-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:46:53 np0005532048 NetworkManager[48920]: <info>  [1763804813.3448] device (tap88d574be-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.345 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d0eed014-2683-40ab-860f-6bf5ccdf604d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.388 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[484562ee-5185-4b82-a658-c0c7372b090c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:53 np0005532048 NetworkManager[48920]: <info>  [1763804813.3966] manager: (tap449be411-40): new Veth device (/org/freedesktop/NetworkManager/Devices/633)
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.395 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab81a66-7da9-4457-918f-9822339756a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.446 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[2815de9e-a9b6-4709-9461-394f1196c1ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.451 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[32c50c5c-df82-44c7-8c58-6ec2b7ca414b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:53 np0005532048 NetworkManager[48920]: <info>  [1763804813.4906] device (tap449be411-40): carrier: link connected
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.498 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[539f12ec-90c3-44fe-8721-b863c3c946cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.523 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ac44893a-e308-432b-80e3-e17fc6d54e49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap449be411-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:5a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 443], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766422, 'reachable_time': 20612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399055, 'error': None, 'target': 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.548 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aca67983-0ba6-4031-976e-00d4a8f10968]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:5a86'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766422, 'tstamp': 766422}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399056, 'error': None, 'target': 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.569 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[06369b4d-fe19-4143-ad99-062c74caf7d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap449be411-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:5a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 443], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766422, 'reachable_time': 20612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399057, 'error': None, 'target': 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:46:53Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ca:08:93 10.100.0.6
Nov 22 04:46:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:46:53Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ca:08:93 10.100.0.6
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.613 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[24800f5a-e9fa-4e8f-971e-a61f9f45a34c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.687 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e26876fe-7e8d-4bb4-947e-de4a92ca32de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.689 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap449be411-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.689 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.689 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap449be411-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:46:53 np0005532048 kernel: tap449be411-40: entered promiscuous mode
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.691 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:53 np0005532048 NetworkManager[48920]: <info>  [1763804813.6922] manager: (tap449be411-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/634)
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.693 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.694 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap449be411-40, col_values=(('external_ids', {'iface-id': '02bcb711-03d1-4bf4-b274-247c09a1af89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.696 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:46:53Z|01551|binding|INFO|Releasing lport 02bcb711-03d1-4bf4-b274-247c09a1af89 from this chassis (sb_readonly=0)
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.710 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.712 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/449be411-464c-4d69-be15-6372ecacd778.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/449be411-464c-4d69-be15-6372ecacd778.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.713 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c58441d8-6e6d-453e-8931-ffd7b5318aa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.714 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-449be411-464c-4d69-be15-6372ecacd778
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/449be411-464c-4d69-be15-6372ecacd778.pid.haproxy
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 449be411-464c-4d69-be15-6372ecacd778
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:46:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:46:53.715 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'env', 'PROCESS_TAG=haproxy-449be411-464c-4d69-be15-6372ecacd778', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/449be411-464c-4d69-be15-6372ecacd778.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:46:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:46:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1668907189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.763 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.858 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.859 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.862 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:46:53 np0005532048 nova_compute[253661]: 2025-11-22 09:46:53.863 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:46:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.080 253665 DEBUG nova.compute.manager [req-d88f7591-229d-43dc-896e-7b72d65b9ef9 req-0b7499ca-15a4-4a6b-b84b-3607bffdec9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.082 253665 DEBUG oslo_concurrency.lockutils [req-d88f7591-229d-43dc-896e-7b72d65b9ef9 req-0b7499ca-15a4-4a6b-b84b-3607bffdec9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.082 253665 DEBUG oslo_concurrency.lockutils [req-d88f7591-229d-43dc-896e-7b72d65b9ef9 req-0b7499ca-15a4-4a6b-b84b-3607bffdec9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.082 253665 DEBUG oslo_concurrency.lockutils [req-d88f7591-229d-43dc-896e-7b72d65b9ef9 req-0b7499ca-15a4-4a6b-b84b-3607bffdec9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.082 253665 DEBUG nova.compute.manager [req-d88f7591-229d-43dc-896e-7b72d65b9ef9 req-0b7499ca-15a4-4a6b-b84b-3607bffdec9f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Processing event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.086 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.087 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3366MB free_disk=59.940032958984375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.087 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.087 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.179 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.182 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804814.1781473, 91cfde9c-3aa6-4946-92d6-471c8f63eb2f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.182 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] VM Started (Lifecycle Event)#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.186 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 63134c6f-fc14-4157-9874-e7c6227f8d0a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.187 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 91cfde9c-3aa6-4946-92d6-471c8f63eb2f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:46:54 np0005532048 podman[399129]: 2025-11-22 09:46:54.187889851 +0000 UTC m=+0.055715817 container create 1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.187 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.189 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:46:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 159 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.5 MiB/s wr, 106 op/s
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.196 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.205 253665 INFO nova.virt.libvirt.driver [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance spawned successfully.#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.205 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.210 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.213 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.217 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.229 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.230 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.234 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.235 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.236 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.237 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.237 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.237 253665 DEBUG nova.virt.libvirt.driver [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:46:54 np0005532048 systemd[1]: Started libpod-conmon-1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec.scope.
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.245 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.245 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804814.1783524, 91cfde9c-3aa6-4946-92d6-471c8f63eb2f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.245 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.249 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 22 04:46:54 np0005532048 podman[399129]: 2025-11-22 09:46:54.157243384 +0000 UTC m=+0.025069380 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:46:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:46:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e471b3ba63f09a57e7dc430fa813007eb58d33dbbf5330f5c3a4f5390b8ed3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.276 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.283 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.290 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804814.1866298, 91cfde9c-3aa6-4946-92d6-471c8f63eb2f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.290 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.301 253665 INFO nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Took 8.04 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.301 253665 DEBUG nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:46:54 np0005532048 podman[399129]: 2025-11-22 09:46:54.30609764 +0000 UTC m=+0.173923686 container init 1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.311 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:46:54 np0005532048 podman[399129]: 2025-11-22 09:46:54.311846652 +0000 UTC m=+0.179672658 container start 1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.318 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:46:54 np0005532048 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[399145]: [NOTICE]   (399149) : New worker (399151) forked
Nov 22 04:46:54 np0005532048 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[399145]: [NOTICE]   (399149) : Loading success.
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.343 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.394 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.405 253665 INFO nova.compute.manager [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Took 9.10 seconds to build instance.#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.425 253665 DEBUG oslo_concurrency.lockutils [None req-d81bfb9e-cff7-4b20-b25d-59bf2a655df0 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:46:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1512643873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.808 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.815 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.834 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.858 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:46:54 np0005532048 nova_compute[253661]: 2025-11-22 09:46:54.859 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:46:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:46:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:46:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:46:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:46:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 159 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 228 KiB/s rd, 3.5 MiB/s wr, 70 op/s
Nov 22 04:46:56 np0005532048 nova_compute[253661]: 2025-11-22 09:46:56.444 253665 DEBUG nova.compute.manager [req-b31f9599-ae6d-482f-945a-37de4db74af3 req-7fe15820-8524-497d-95ce-17340b1420bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:46:56 np0005532048 nova_compute[253661]: 2025-11-22 09:46:56.444 253665 DEBUG oslo_concurrency.lockutils [req-b31f9599-ae6d-482f-945a-37de4db74af3 req-7fe15820-8524-497d-95ce-17340b1420bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:46:56 np0005532048 nova_compute[253661]: 2025-11-22 09:46:56.445 253665 DEBUG oslo_concurrency.lockutils [req-b31f9599-ae6d-482f-945a-37de4db74af3 req-7fe15820-8524-497d-95ce-17340b1420bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:46:56 np0005532048 nova_compute[253661]: 2025-11-22 09:46:56.446 253665 DEBUG oslo_concurrency.lockutils [req-b31f9599-ae6d-482f-945a-37de4db74af3 req-7fe15820-8524-497d-95ce-17340b1420bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:46:56 np0005532048 nova_compute[253661]: 2025-11-22 09:46:56.446 253665 DEBUG nova.compute.manager [req-b31f9599-ae6d-482f-945a-37de4db74af3 req-7fe15820-8524-497d-95ce-17340b1420bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] No waiting events found dispatching network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:46:56 np0005532048 nova_compute[253661]: 2025-11-22 09:46:56.446 253665 WARNING nova.compute.manager [req-b31f9599-ae6d-482f-945a-37de4db74af3 req-7fe15820-8524-497d-95ce-17340b1420bd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received unexpected event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c for instance with vm_state active and task_state None.#033[00m
Nov 22 04:46:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:46:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:46:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:46:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:46:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:46:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:46:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:46:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:46:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:46:57 np0005532048 nova_compute[253661]: 2025-11-22 09:46:57.001 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:57 np0005532048 nova_compute[253661]: 2025-11-22 09:46:57.130 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:46:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:46:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:46:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:46:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:46:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:46:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:46:57 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 2592f2bc-30b4-4b0a-9408-27efdbb8b69b does not exist
Nov 22 04:46:57 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 5facc5d3-2a21-41da-a18c-5c5e3283f372 does not exist
Nov 22 04:46:57 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 5d3d7e95-7ec1-4c21-a2b6-1f24ccd4a2e2 does not exist
Nov 22 04:46:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:46:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:46:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:46:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:46:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:46:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:46:57 np0005532048 nova_compute[253661]: 2025-11-22 09:46:57.859 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:46:57 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:46:57 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:46:57 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:46:57 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:46:57 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:46:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 125 op/s
Nov 22 04:46:58 np0005532048 podman[399574]: 2025-11-22 09:46:58.306808369 +0000 UTC m=+0.039722591 container create 57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 04:46:58 np0005532048 systemd[1]: Started libpod-conmon-57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c.scope.
Nov 22 04:46:58 np0005532048 podman[399574]: 2025-11-22 09:46:58.289305337 +0000 UTC m=+0.022219579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:46:58 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:46:58 np0005532048 podman[399574]: 2025-11-22 09:46:58.412482359 +0000 UTC m=+0.145396691 container init 57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 04:46:58 np0005532048 podman[399574]: 2025-11-22 09:46:58.422726902 +0000 UTC m=+0.155641124 container start 57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 04:46:58 np0005532048 podman[399574]: 2025-11-22 09:46:58.42713146 +0000 UTC m=+0.160045732 container attach 57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 04:46:58 np0005532048 hopeful_pike[399590]: 167 167
Nov 22 04:46:58 np0005532048 systemd[1]: libpod-57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c.scope: Deactivated successfully.
Nov 22 04:46:58 np0005532048 podman[399574]: 2025-11-22 09:46:58.433386944 +0000 UTC m=+0.166301196 container died 57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:46:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay-89d40256d28c4db50d655cf531a306ca1eb3d410f331b53ade1c4b95bf67ef68-merged.mount: Deactivated successfully.
Nov 22 04:46:58 np0005532048 podman[399574]: 2025-11-22 09:46:58.47690647 +0000 UTC m=+0.209820702 container remove 57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:46:58 np0005532048 systemd[1]: libpod-conmon-57367b15e0c5a5108c438fa364b670335dd2bf37bf1598af057d895ff5f7cb6c.scope: Deactivated successfully.
Nov 22 04:46:58 np0005532048 nova_compute[253661]: 2025-11-22 09:46:58.625 253665 DEBUG nova.compute.manager [req-a73b1874-7e35-49ca-ae01-bc604662702e req-f1e5b4ba-003c-4c6e-8d37-16f3f1baf379 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-changed-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:46:58 np0005532048 nova_compute[253661]: 2025-11-22 09:46:58.626 253665 DEBUG nova.compute.manager [req-a73b1874-7e35-49ca-ae01-bc604662702e req-f1e5b4ba-003c-4c6e-8d37-16f3f1baf379 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing instance network info cache due to event network-changed-88d574be-cb53-4693-a025-34a039ee625c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:46:58 np0005532048 nova_compute[253661]: 2025-11-22 09:46:58.626 253665 DEBUG oslo_concurrency.lockutils [req-a73b1874-7e35-49ca-ae01-bc604662702e req-f1e5b4ba-003c-4c6e-8d37-16f3f1baf379 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:46:58 np0005532048 nova_compute[253661]: 2025-11-22 09:46:58.627 253665 DEBUG oslo_concurrency.lockutils [req-a73b1874-7e35-49ca-ae01-bc604662702e req-f1e5b4ba-003c-4c6e-8d37-16f3f1baf379 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:46:58 np0005532048 nova_compute[253661]: 2025-11-22 09:46:58.627 253665 DEBUG nova.network.neutron [req-a73b1874-7e35-49ca-ae01-bc604662702e req-f1e5b4ba-003c-4c6e-8d37-16f3f1baf379 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing network info cache for port 88d574be-cb53-4693-a025-34a039ee625c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:46:58 np0005532048 podman[399614]: 2025-11-22 09:46:58.674398796 +0000 UTC m=+0.049018431 container create 9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ellis, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 04:46:58 np0005532048 systemd[1]: Started libpod-conmon-9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2.scope.
Nov 22 04:46:58 np0005532048 podman[399614]: 2025-11-22 09:46:58.654971327 +0000 UTC m=+0.029590982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:46:58 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:46:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3800a5fba01c3b7aec8492d004a57958832bd08db1da1ea8efeca883f4c946/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:46:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3800a5fba01c3b7aec8492d004a57958832bd08db1da1ea8efeca883f4c946/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:46:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3800a5fba01c3b7aec8492d004a57958832bd08db1da1ea8efeca883f4c946/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:46:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3800a5fba01c3b7aec8492d004a57958832bd08db1da1ea8efeca883f4c946/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:46:58 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3800a5fba01c3b7aec8492d004a57958832bd08db1da1ea8efeca883f4c946/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:46:58 np0005532048 podman[399614]: 2025-11-22 09:46:58.81022021 +0000 UTC m=+0.184839865 container init 9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:46:58 np0005532048 podman[399614]: 2025-11-22 09:46:58.821690433 +0000 UTC m=+0.196310108 container start 9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ellis, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:46:58 np0005532048 podman[399614]: 2025-11-22 09:46:58.826785929 +0000 UTC m=+0.201405574 container attach 9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ellis, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:46:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:46:59 np0005532048 thirsty_ellis[399630]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:46:59 np0005532048 thirsty_ellis[399630]: --> relative data size: 1.0
Nov 22 04:46:59 np0005532048 thirsty_ellis[399630]: --> All data devices are unavailable
Nov 22 04:46:59 np0005532048 systemd[1]: libpod-9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2.scope: Deactivated successfully.
Nov 22 04:46:59 np0005532048 systemd[1]: libpod-9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2.scope: Consumed 1.068s CPU time.
Nov 22 04:46:59 np0005532048 podman[399614]: 2025-11-22 09:46:59.957846178 +0000 UTC m=+1.332465813 container died 9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ellis, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:47:00 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ff3800a5fba01c3b7aec8492d004a57958832bd08db1da1ea8efeca883f4c946-merged.mount: Deactivated successfully.
Nov 22 04:47:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Nov 22 04:47:00 np0005532048 nova_compute[253661]: 2025-11-22 09:47:00.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:47:00 np0005532048 podman[399614]: 2025-11-22 09:47:00.246920307 +0000 UTC m=+1.621539952 container remove 9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:47:00 np0005532048 systemd[1]: libpod-conmon-9bd4c7049866bbc82042564c9406417d7e6fd3080bd673d5b696dd207b5f52c2.scope: Deactivated successfully.
Nov 22 04:47:00 np0005532048 nova_compute[253661]: 2025-11-22 09:47:00.268 253665 DEBUG nova.network.neutron [req-a73b1874-7e35-49ca-ae01-bc604662702e req-f1e5b4ba-003c-4c6e-8d37-16f3f1baf379 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updated VIF entry in instance network info cache for port 88d574be-cb53-4693-a025-34a039ee625c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:47:00 np0005532048 nova_compute[253661]: 2025-11-22 09:47:00.268 253665 DEBUG nova.network.neutron [req-a73b1874-7e35-49ca-ae01-bc604662702e req-f1e5b4ba-003c-4c6e-8d37-16f3f1baf379 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:47:00 np0005532048 nova_compute[253661]: 2025-11-22 09:47:00.296 253665 DEBUG oslo_concurrency.lockutils [req-a73b1874-7e35-49ca-ae01-bc604662702e req-f1e5b4ba-003c-4c6e-8d37-16f3f1baf379 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:47:00 np0005532048 podman[399813]: 2025-11-22 09:47:00.961624944 +0000 UTC m=+0.092682479 container create 9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goodall, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 22 04:47:00 np0005532048 podman[399813]: 2025-11-22 09:47:00.898139457 +0000 UTC m=+0.029196992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:47:01 np0005532048 systemd[1]: Started libpod-conmon-9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7.scope.
Nov 22 04:47:01 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:47:01 np0005532048 podman[399813]: 2025-11-22 09:47:01.159673535 +0000 UTC m=+0.290731080 container init 9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goodall, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 04:47:01 np0005532048 podman[399813]: 2025-11-22 09:47:01.167837026 +0000 UTC m=+0.298894551 container start 9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:47:01 np0005532048 goofy_goodall[399829]: 167 167
Nov 22 04:47:01 np0005532048 systemd[1]: libpod-9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7.scope: Deactivated successfully.
Nov 22 04:47:01 np0005532048 podman[399813]: 2025-11-22 09:47:01.268755959 +0000 UTC m=+0.399813484 container attach 9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goodall, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:47:01 np0005532048 podman[399813]: 2025-11-22 09:47:01.269576239 +0000 UTC m=+0.400633764 container died 9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goodall, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:47:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:01.448 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:e1:fd 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-2420418b-f976-4644-88b8-5c9c24d72ca2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2420418b-f976-4644-88b8-5c9c24d72ca2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb042667d47c4d07a7e9967c65430c7b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99fc36cb-9a4c-4a60-8325-715974e22da5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c512420e-0b9a-4ee2-8cc1-60a3bae398ca) old=Port_Binding(mac=['fa:16:3e:5f:e1:fd 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-2420418b-f976-4644-88b8-5c9c24d72ca2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2420418b-f976-4644-88b8-5c9c24d72ca2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb042667d47c4d07a7e9967c65430c7b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:47:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:01.450 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c512420e-0b9a-4ee2-8cc1-60a3bae398ca in datapath 2420418b-f976-4644-88b8-5c9c24d72ca2 updated#033[00m
Nov 22 04:47:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:01.453 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2420418b-f976-4644-88b8-5c9c24d72ca2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:47:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:01.454 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5b0d4c71-0b86-4971-a697-55bb5841ecdf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:47:01 np0005532048 systemd[1]: var-lib-containers-storage-overlay-35dcf518a6cbd241e895628bf2316d01fc511d8c53d10870dd60bea615d00c68-merged.mount: Deactivated successfully.
Nov 22 04:47:01 np0005532048 podman[399813]: 2025-11-22 09:47:01.896710804 +0000 UTC m=+1.027768319 container remove 9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goodall, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:47:01 np0005532048 systemd[1]: libpod-conmon-9c2faab9408e755c145626b40338901d0d41f484040143ebc6c168b348160bd7.scope: Deactivated successfully.
Nov 22 04:47:02 np0005532048 nova_compute[253661]: 2025-11-22 09:47:02.044 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:02 np0005532048 nova_compute[253661]: 2025-11-22 09:47:02.132 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:02 np0005532048 podman[399853]: 2025-11-22 09:47:02.096192491 +0000 UTC m=+0.024535898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 139 op/s
Nov 22 04:47:02 np0005532048 podman[399853]: 2025-11-22 09:47:02.216693026 +0000 UTC m=+0.145036403 container create 6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:47:02 np0005532048 systemd[1]: Started libpod-conmon-6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6.scope.
Nov 22 04:47:02 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:47:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4efc877db160df9023300d30fdae736abc3e4ac307bed2ff88d87af1925abf28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:47:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4efc877db160df9023300d30fdae736abc3e4ac307bed2ff88d87af1925abf28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:47:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4efc877db160df9023300d30fdae736abc3e4ac307bed2ff88d87af1925abf28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:47:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4efc877db160df9023300d30fdae736abc3e4ac307bed2ff88d87af1925abf28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:47:02 np0005532048 podman[399853]: 2025-11-22 09:47:02.321508134 +0000 UTC m=+0.249851511 container init 6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 04:47:02 np0005532048 podman[399853]: 2025-11-22 09:47:02.32781692 +0000 UTC m=+0.256160277 container start 6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 22 04:47:02 np0005532048 podman[399853]: 2025-11-22 09:47:02.368525375 +0000 UTC m=+0.296868752 container attach 6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011071142231555643 of space, bias 1.0, pg target 0.3321342669466693 quantized to 32 (current 32)
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:47:02 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:47:03 np0005532048 infallible_borg[399869]: {
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:    "0": [
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:        {
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "devices": [
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "/dev/loop3"
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            ],
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "lv_name": "ceph_lv0",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "lv_size": "21470642176",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "name": "ceph_lv0",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "tags": {
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.cluster_name": "ceph",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.crush_device_class": "",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.encrypted": "0",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.osd_id": "0",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.type": "block",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.vdo": "0"
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            },
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "type": "block",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "vg_name": "ceph_vg0"
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:        }
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:    ],
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:    "1": [
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:        {
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "devices": [
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "/dev/loop4"
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            ],
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "lv_name": "ceph_lv1",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "lv_size": "21470642176",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "name": "ceph_lv1",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "tags": {
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.cluster_name": "ceph",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.crush_device_class": "",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.encrypted": "0",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.osd_id": "1",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.type": "block",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.vdo": "0"
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            },
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "type": "block",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "vg_name": "ceph_vg1"
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:        }
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:    ],
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:    "2": [
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:        {
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "devices": [
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "/dev/loop5"
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            ],
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "lv_name": "ceph_lv2",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "lv_size": "21470642176",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "name": "ceph_lv2",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "tags": {
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.cluster_name": "ceph",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.crush_device_class": "",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.encrypted": "0",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.osd_id": "2",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.type": "block",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:                "ceph.vdo": "0"
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            },
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "type": "block",
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:            "vg_name": "ceph_vg2"
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:        }
Nov 22 04:47:03 np0005532048 infallible_borg[399869]:    ]
Nov 22 04:47:03 np0005532048 infallible_borg[399869]: }
Nov 22 04:47:03 np0005532048 systemd[1]: libpod-6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6.scope: Deactivated successfully.
Nov 22 04:47:03 np0005532048 podman[399853]: 2025-11-22 09:47:03.11468559 +0000 UTC m=+1.043028947 container died 6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 04:47:03 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4efc877db160df9023300d30fdae736abc3e4ac307bed2ff88d87af1925abf28-merged.mount: Deactivated successfully.
Nov 22 04:47:03 np0005532048 podman[399853]: 2025-11-22 09:47:03.225894265 +0000 UTC m=+1.154237622 container remove 6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:47:03 np0005532048 podman[399879]: 2025-11-22 09:47:03.234357944 +0000 UTC m=+0.091399827 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:47:03 np0005532048 systemd[1]: libpod-conmon-6c8c779a92197a75ae5511a7340ab2cda372fa5277184c02ddf8ffd17e3934b6.scope: Deactivated successfully.
Nov 22 04:47:03 np0005532048 podman[399885]: 2025-11-22 09:47:03.26211795 +0000 UTC m=+0.118161278 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 04:47:03 np0005532048 podman[400070]: 2025-11-22 09:47:03.842371909 +0000 UTC m=+0.038084132 container create 74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:47:03 np0005532048 systemd[1]: Started libpod-conmon-74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1.scope.
Nov 22 04:47:03 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:47:03 np0005532048 podman[400070]: 2025-11-22 09:47:03.917849013 +0000 UTC m=+0.113561256 container init 74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:47:03 np0005532048 podman[400070]: 2025-11-22 09:47:03.827802178 +0000 UTC m=+0.023514421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:47:03 np0005532048 podman[400070]: 2025-11-22 09:47:03.924542037 +0000 UTC m=+0.120254260 container start 74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 04:47:03 np0005532048 podman[400070]: 2025-11-22 09:47:03.928788442 +0000 UTC m=+0.124500665 container attach 74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 04:47:03 np0005532048 nostalgic_heisenberg[400085]: 167 167
Nov 22 04:47:03 np0005532048 systemd[1]: libpod-74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1.scope: Deactivated successfully.
Nov 22 04:47:03 np0005532048 podman[400070]: 2025-11-22 09:47:03.930917115 +0000 UTC m=+0.126629348 container died 74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:47:03 np0005532048 systemd[1]: var-lib-containers-storage-overlay-97b54562e4751b3371ada03443a59148331cf97b1783f7fb3ed9536a6c656d83-merged.mount: Deactivated successfully.
Nov 22 04:47:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:47:03 np0005532048 podman[400070]: 2025-11-22 09:47:03.968442612 +0000 UTC m=+0.164154835 container remove 74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 04:47:03 np0005532048 systemd[1]: libpod-conmon-74eff131488d3b1f92402f7241d10ea93f38707a3bd4e30e00a1eae00fd3fad1.scope: Deactivated successfully.
Nov 22 04:47:04 np0005532048 podman[400109]: 2025-11-22 09:47:04.145804931 +0000 UTC m=+0.039024815 container create c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:47:04 np0005532048 systemd[1]: Started libpod-conmon-c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9.scope.
Nov 22 04:47:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Nov 22 04:47:04 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:47:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d971efd03788542d967f279d94acb86eac1984712c4f3915294614dfa00dfbb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:47:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d971efd03788542d967f279d94acb86eac1984712c4f3915294614dfa00dfbb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:47:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d971efd03788542d967f279d94acb86eac1984712c4f3915294614dfa00dfbb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:47:04 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d971efd03788542d967f279d94acb86eac1984712c4f3915294614dfa00dfbb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:47:04 np0005532048 podman[400109]: 2025-11-22 09:47:04.127971871 +0000 UTC m=+0.021191775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:47:04 np0005532048 podman[400109]: 2025-11-22 09:47:04.237500505 +0000 UTC m=+0.130720419 container init c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:47:04 np0005532048 podman[400109]: 2025-11-22 09:47:04.247345068 +0000 UTC m=+0.140564952 container start c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:47:04 np0005532048 podman[400109]: 2025-11-22 09:47:04.250913306 +0000 UTC m=+0.144133220 container attach c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]: {
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:        "osd_id": 1,
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:        "type": "bluestore"
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:    },
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:        "osd_id": 0,
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:        "type": "bluestore"
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:    },
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:        "osd_id": 2,
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:        "type": "bluestore"
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]:    }
Nov 22 04:47:05 np0005532048 vigilant_jang[400125]: }
Nov 22 04:47:05 np0005532048 systemd[1]: libpod-c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9.scope: Deactivated successfully.
Nov 22 04:47:05 np0005532048 podman[400109]: 2025-11-22 09:47:05.379026653 +0000 UTC m=+1.272246557 container died c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 04:47:05 np0005532048 systemd[1]: libpod-c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9.scope: Consumed 1.134s CPU time.
Nov 22 04:47:05 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d971efd03788542d967f279d94acb86eac1984712c4f3915294614dfa00dfbb5-merged.mount: Deactivated successfully.
Nov 22 04:47:05 np0005532048 podman[400109]: 2025-11-22 09:47:05.52065804 +0000 UTC m=+1.413877924 container remove c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:47:05 np0005532048 systemd[1]: libpod-conmon-c9b4a39a64198cd58ad22cd27f2a3b498c867517fdb2cc8e0319ae91c165a5d9.scope: Deactivated successfully.
Nov 22 04:47:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:47:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:47:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:47:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:47:05 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 652ebff0-9c3e-46e3-ab94-ec377e25351f does not exist
Nov 22 04:47:05 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 1b6d76a4-074c-46e2-baa6-94da4cee413f does not exist
Nov 22 04:47:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 460 KiB/s wr, 93 op/s
Nov 22 04:47:06 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:47:06 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:47:07 np0005532048 nova_compute[253661]: 2025-11-22 09:47:07.047 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:47:07Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:cb:74 10.100.0.14
Nov 22 04:47:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:47:07Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:cb:74 10.100.0.14
Nov 22 04:47:07 np0005532048 nova_compute[253661]: 2025-11-22 09:47:07.134 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 178 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 98 op/s
Nov 22 04:47:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:47:09 np0005532048 podman[400222]: 2025-11-22 09:47:09.403005478 +0000 UTC m=+0.091126571 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 04:47:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 198 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Nov 22 04:47:12 np0005532048 nova_compute[253661]: 2025-11-22 09:47:12.048 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:12 np0005532048 nova_compute[253661]: 2025-11-22 09:47:12.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 22 04:47:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:47:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4242003812' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:47:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:47:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4242003812' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:47:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:47:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 22 04:47:14 np0005532048 nova_compute[253661]: 2025-11-22 09:47:14.743 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:47:14 np0005532048 nova_compute[253661]: 2025-11-22 09:47:14.744 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:47:14 np0005532048 nova_compute[253661]: 2025-11-22 09:47:14.744 253665 INFO nova.compute.manager [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Shelving#033[00m
Nov 22 04:47:14 np0005532048 nova_compute[253661]: 2025-11-22 09:47:14.769 253665 DEBUG nova.virt.libvirt.driver [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 22 04:47:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 22 04:47:17 np0005532048 kernel: tap88d574be-cb (unregistering): left promiscuous mode
Nov 22 04:47:17 np0005532048 NetworkManager[48920]: <info>  [1763804837.0314] device (tap88d574be-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:47:17 np0005532048 ovn_controller[152872]: 2025-11-22T09:47:17Z|01552|binding|INFO|Releasing lport 88d574be-cb53-4693-a025-34a039ee625c from this chassis (sb_readonly=0)
Nov 22 04:47:17 np0005532048 ovn_controller[152872]: 2025-11-22T09:47:17Z|01553|binding|INFO|Setting lport 88d574be-cb53-4693-a025-34a039ee625c down in Southbound
Nov 22 04:47:17 np0005532048 ovn_controller[152872]: 2025-11-22T09:47:17Z|01554|binding|INFO|Removing iface tap88d574be-cb ovn-installed in OVS
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.077 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.091 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:47:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.090 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:cb:74 10.100.0.14'], port_security=['fa:16:3e:88:cb:74 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '91cfde9c-3aa6-4946-92d6-471c8f63eb2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-449be411-464c-4d69-be15-6372ecacd778', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a86e5c3f3c34f2285b7958147f6bbd3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'da881b1b-2aad-4a91-9422-a708cc3c5d34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a67d762-85ed-414e-ab70-eac2ab54b109, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=88d574be-cb53-4693-a025-34a039ee625c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:47:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.093 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 88d574be-cb53-4693-a025-34a039ee625c in datapath 449be411-464c-4d69-be15-6372ecacd778 unbound from our chassis
Nov 22 04:47:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.095 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 449be411-464c-4d69-be15-6372ecacd778, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:47:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.098 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8d55b2a3-ccd8-452d-940e-b2801407b830]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:47:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.099 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-449be411-464c-4d69-be15-6372ecacd778 namespace which is not needed anymore
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.138 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:47:17 np0005532048 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Nov 22 04:47:17 np0005532048 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008f.scope: Consumed 14.060s CPU time.
Nov 22 04:47:17 np0005532048 systemd-machined[215941]: Machine qemu-174-instance-0000008f terminated.
Nov 22 04:47:17 np0005532048 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[399145]: [NOTICE]   (399149) : haproxy version is 2.8.14-c23fe91
Nov 22 04:47:17 np0005532048 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[399145]: [NOTICE]   (399149) : path to executable is /usr/sbin/haproxy
Nov 22 04:47:17 np0005532048 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[399145]: [WARNING]  (399149) : Exiting Master process...
Nov 22 04:47:17 np0005532048 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[399145]: [ALERT]    (399149) : Current worker (399151) exited with code 143 (Terminated)
Nov 22 04:47:17 np0005532048 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[399145]: [WARNING]  (399149) : All workers exited. Exiting... (0)
Nov 22 04:47:17 np0005532048 systemd[1]: libpod-1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec.scope: Deactivated successfully.
Nov 22 04:47:17 np0005532048 conmon[399145]: conmon 1c25e6f021647f540dfd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec.scope/container/memory.events
Nov 22 04:47:17 np0005532048 podman[400272]: 2025-11-22 09:47:17.241389642 +0000 UTC m=+0.046768875 container died 1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:47:17 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec-userdata-shm.mount: Deactivated successfully.
Nov 22 04:47:17 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c2e471b3ba63f09a57e7dc430fa813007eb58d33dbbf5330f5c3a4f5390b8ed3-merged.mount: Deactivated successfully.
Nov 22 04:47:17 np0005532048 podman[400272]: 2025-11-22 09:47:17.281043122 +0000 UTC m=+0.086422355 container cleanup 1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 04:47:17 np0005532048 systemd[1]: libpod-conmon-1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec.scope: Deactivated successfully.
Nov 22 04:47:17 np0005532048 podman[400306]: 2025-11-22 09:47:17.34251216 +0000 UTC m=+0.041526197 container remove 1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 04:47:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.348 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5c55b06f-a662-47a0-ad90-2b72d038bdec]: (4, ('Sat Nov 22 09:47:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778 (1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec)\n1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec\nSat Nov 22 09:47:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778 (1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec)\n1c25e6f021647f540dfd2e4a065fd0c074a599bd7fe5f4c6f1ba084c1f22e1ec\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:47:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.349 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b9cf10-e215-4a39-9743-822194fc1469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:47:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.350 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap449be411-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.351 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:47:17 np0005532048 kernel: tap449be411-40: left promiscuous mode
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.368 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:47:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.371 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8c552472-d0ee-4309-9c23-b16bfc85accc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.380 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.380 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:47:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.387 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[78076d3c-15fb-4615-9f52-f851283e8402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:47:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.388 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9424ad82-b556-4191-80a8-8e060f46871a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.389 253665 DEBUG nova.compute.manager [req-08e2d457-17ad-4f21-a1e3-ebedeb2a66d2 req-0aa80fb5-0794-4a22-9543-45dd4a14801a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-unplugged-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.390 253665 DEBUG oslo_concurrency.lockutils [req-08e2d457-17ad-4f21-a1e3-ebedeb2a66d2 req-0aa80fb5-0794-4a22-9543-45dd4a14801a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.390 253665 DEBUG oslo_concurrency.lockutils [req-08e2d457-17ad-4f21-a1e3-ebedeb2a66d2 req-0aa80fb5-0794-4a22-9543-45dd4a14801a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.390 253665 DEBUG oslo_concurrency.lockutils [req-08e2d457-17ad-4f21-a1e3-ebedeb2a66d2 req-0aa80fb5-0794-4a22-9543-45dd4a14801a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.391 253665 DEBUG nova.compute.manager [req-08e2d457-17ad-4f21-a1e3-ebedeb2a66d2 req-0aa80fb5-0794-4a22-9543-45dd4a14801a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] No waiting events found dispatching network-vif-unplugged-88d574be-cb53-4693-a025-34a039ee625c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.391 253665 WARNING nova.compute.manager [req-08e2d457-17ad-4f21-a1e3-ebedeb2a66d2 req-0aa80fb5-0794-4a22-9543-45dd4a14801a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received unexpected event network-vif-unplugged-88d574be-cb53-4693-a025-34a039ee625c for instance with vm_state active and task_state shelving.
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.400 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:47:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.404 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8b10da92-f1c9-4a14-a0ec-1de311e2c20e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766411, 'reachable_time': 30458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400331, 'error': None, 'target': 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:47:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.407 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-449be411-464c-4d69-be15-6372ecacd778 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:47:17 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:17.407 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[cfbce1ba-dc94-4c16-b13c-15f8242383eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:47:17 np0005532048 systemd[1]: run-netns-ovnmeta\x2d449be411\x2d464c\x2d4d69\x2dbe15\x2d6372ecacd778.mount: Deactivated successfully.
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.578 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.579 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.586 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.587 253665 INFO nova.compute.claims [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.787 253665 INFO nova.virt.libvirt.driver [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance shutdown successfully after 3 seconds.
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.791 253665 INFO nova.virt.libvirt.driver [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance destroyed successfully.
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.792 253665 DEBUG nova.objects.instance [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:47:17 np0005532048 nova_compute[253661]: 2025-11-22 09:47:17.808 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:47:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.227 253665 INFO nova.virt.libvirt.driver [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Beginning cold snapshot process
Nov 22 04:47:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:47:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2883190485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.251 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.256 253665 DEBUG nova.compute.provider_tree [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.267 253665 DEBUG nova.scheduler.client.report [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.289 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.290 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.431 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.431 253665 DEBUG nova.network.neutron [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.437 253665 DEBUG nova.virt.libvirt.imagebackend [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.459 253665 INFO nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.486 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.599 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.601 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.601 253665 INFO nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Creating image(s)
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.619 253665 DEBUG nova.storage.rbd_utils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.641 253665 DEBUG nova.storage.rbd_utils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.663 253665 DEBUG nova.storage.rbd_utils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.666 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.708 253665 DEBUG nova.policy [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.714 253665 DEBUG nova.storage.rbd_utils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] creating snapshot(9d9a2ea52f604f159154dc1633b490bd) on rbd image(91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.748 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.748 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.749 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.749 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.770 253665 DEBUG nova.storage.rbd_utils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:47:18 np0005532048 nova_compute[253661]: 2025-11-22 09:47:18.773 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:47:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:47:19 np0005532048 nova_compute[253661]: 2025-11-22 09:47:19.104 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:47:19 np0005532048 nova_compute[253661]: 2025-11-22 09:47:19.161 253665 DEBUG nova.storage.rbd_utils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:47:19 np0005532048 nova_compute[253661]: 2025-11-22 09:47:19.254 253665 DEBUG nova.objects.instance [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 6973b14c-b2af-4012-9d0c-1e86b6eb3a28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:47:19 np0005532048 nova_compute[253661]: 2025-11-22 09:47:19.271 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:47:19 np0005532048 nova_compute[253661]: 2025-11-22 09:47:19.271 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Ensure instance console log exists: /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:47:19 np0005532048 nova_compute[253661]: 2025-11-22 09:47:19.272 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:47:19 np0005532048 nova_compute[253661]: 2025-11-22 09:47:19.272 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:47:19 np0005532048 nova_compute[253661]: 2025-11-22 09:47:19.272 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:47:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Nov 22 04:47:19 np0005532048 nova_compute[253661]: 2025-11-22 09:47:19.480 253665 DEBUG nova.compute.manager [req-e6c4b469-5e40-4e91-a809-9081d41266a6 req-4ddede5e-cdbc-4ece-a92e-5c623ab7ef9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:47:19 np0005532048 nova_compute[253661]: 2025-11-22 09:47:19.480 253665 DEBUG oslo_concurrency.lockutils [req-e6c4b469-5e40-4e91-a809-9081d41266a6 req-4ddede5e-cdbc-4ece-a92e-5c623ab7ef9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:47:19 np0005532048 nova_compute[253661]: 2025-11-22 09:47:19.480 253665 DEBUG oslo_concurrency.lockutils [req-e6c4b469-5e40-4e91-a809-9081d41266a6 req-4ddede5e-cdbc-4ece-a92e-5c623ab7ef9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:47:19 np0005532048 nova_compute[253661]: 2025-11-22 09:47:19.481 253665 DEBUG oslo_concurrency.lockutils [req-e6c4b469-5e40-4e91-a809-9081d41266a6 req-4ddede5e-cdbc-4ece-a92e-5c623ab7ef9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:47:19 np0005532048 nova_compute[253661]: 2025-11-22 09:47:19.481 253665 DEBUG nova.compute.manager [req-e6c4b469-5e40-4e91-a809-9081d41266a6 req-4ddede5e-cdbc-4ece-a92e-5c623ab7ef9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] No waiting events found dispatching network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:47:19 np0005532048 nova_compute[253661]: 2025-11-22 09:47:19.481 253665 WARNING nova.compute.manager [req-e6c4b469-5e40-4e91-a809-9081d41266a6 req-4ddede5e-cdbc-4ece-a92e-5c623ab7ef9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received unexpected event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 22 04:47:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Nov 22 04:47:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Nov 22 04:47:19 np0005532048 nova_compute[253661]: 2025-11-22 09:47:19.603 253665 DEBUG nova.storage.rbd_utils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] cloning vms/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk@9d9a2ea52f604f159154dc1633b490bd to images/7427dc9c-0c7d-45bc-9904-89241d5b4e4d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:47:19 np0005532048 nova_compute[253661]: 2025-11-22 09:47:19.787 253665 DEBUG nova.storage.rbd_utils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] flattening images/7427dc9c-0c7d-45bc-9904-89241d5b4e4d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 22 04:47:19 np0005532048 nova_compute[253661]: 2025-11-22 09:47:19.901 253665 DEBUG nova.network.neutron [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Successfully created port: 1a443391-105a-4568-ba24-7748b702e21d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:47:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 202 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 197 KiB/s wr, 20 op/s
Nov 22 04:47:20 np0005532048 nova_compute[253661]: 2025-11-22 09:47:20.483 253665 DEBUG nova.storage.rbd_utils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] removing snapshot(9d9a2ea52f604f159154dc1633b490bd) on rbd image(91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 22 04:47:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Nov 22 04:47:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Nov 22 04:47:20 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Nov 22 04:47:20 np0005532048 nova_compute[253661]: 2025-11-22 09:47:20.601 253665 DEBUG nova.storage.rbd_utils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] creating snapshot(snap) on rbd image(7427dc9c-0c7d-45bc-9904-89241d5b4e4d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:47:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Nov 22 04:47:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Nov 22 04:47:21 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Nov 22 04:47:22 np0005532048 nova_compute[253661]: 2025-11-22 09:47:22.092 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:22 np0005532048 nova_compute[253661]: 2025-11-22 09:47:22.140 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 272 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 7.3 MiB/s wr, 70 op/s
Nov 22 04:47:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:47:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:47:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:47:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:47:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:47:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:47:22 np0005532048 nova_compute[253661]: 2025-11-22 09:47:22.927 253665 DEBUG nova.network.neutron [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Successfully updated port: 1a443391-105a-4568-ba24-7748b702e21d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:47:23 np0005532048 nova_compute[253661]: 2025-11-22 09:47:23.037 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:47:23 np0005532048 nova_compute[253661]: 2025-11-22 09:47:23.037 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:47:23 np0005532048 nova_compute[253661]: 2025-11-22 09:47:23.038 253665 DEBUG nova.network.neutron [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:47:23 np0005532048 nova_compute[253661]: 2025-11-22 09:47:23.931 253665 DEBUG nova.network.neutron [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:47:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:47:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 221 op/s
Nov 22 04:47:24 np0005532048 nova_compute[253661]: 2025-11-22 09:47:24.362 253665 DEBUG nova.compute.manager [req-4ab32b34-9f9c-42f4-9369-79da729e6ec4 req-c2f524e5-445c-4618-b932-1a6806926ed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-changed-1a443391-105a-4568-ba24-7748b702e21d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:47:24 np0005532048 nova_compute[253661]: 2025-11-22 09:47:24.363 253665 DEBUG nova.compute.manager [req-4ab32b34-9f9c-42f4-9369-79da729e6ec4 req-c2f524e5-445c-4618-b932-1a6806926ed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Refreshing instance network info cache due to event network-changed-1a443391-105a-4568-ba24-7748b702e21d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:47:24 np0005532048 nova_compute[253661]: 2025-11-22 09:47:24.363 253665 DEBUG oslo_concurrency.lockutils [req-4ab32b34-9f9c-42f4-9369-79da729e6ec4 req-c2f524e5-445c-4618-b932-1a6806926ed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:47:24 np0005532048 nova_compute[253661]: 2025-11-22 09:47:24.855 253665 INFO nova.virt.libvirt.driver [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Snapshot image upload complete#033[00m
Nov 22 04:47:24 np0005532048 nova_compute[253661]: 2025-11-22 09:47:24.856 253665 DEBUG nova.compute.manager [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:47:24 np0005532048 nova_compute[253661]: 2025-11-22 09:47:24.968 253665 INFO nova.compute.manager [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Shelve offloading#033[00m
Nov 22 04:47:24 np0005532048 nova_compute[253661]: 2025-11-22 09:47:24.981 253665 INFO nova.virt.libvirt.driver [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance destroyed successfully.#033[00m
Nov 22 04:47:24 np0005532048 nova_compute[253661]: 2025-11-22 09:47:24.981 253665 DEBUG nova.compute.manager [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:47:24 np0005532048 nova_compute[253661]: 2025-11-22 09:47:24.985 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:47:24 np0005532048 nova_compute[253661]: 2025-11-22 09:47:24.985 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:47:24 np0005532048 nova_compute[253661]: 2025-11-22 09:47:24.986 253665 DEBUG nova.network.neutron [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:47:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 10 MiB/s wr, 181 op/s
Nov 22 04:47:27 np0005532048 nova_compute[253661]: 2025-11-22 09:47:27.101 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:27 np0005532048 nova_compute[253661]: 2025-11-22 09:47:27.141 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:27.990 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:47:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:27.990 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:47:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:27.990 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:47:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.4 MiB/s wr, 153 op/s
Nov 22 04:47:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:47:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Nov 22 04:47:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Nov 22 04:47:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Nov 22 04:47:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 7.7 MiB/s wr, 150 op/s
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.628 253665 DEBUG nova.network.neutron [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Updating instance_info_cache with network_info: [{"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.708 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.709 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Instance network_info: |[{"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.710 253665 DEBUG oslo_concurrency.lockutils [req-4ab32b34-9f9c-42f4-9369-79da729e6ec4 req-c2f524e5-445c-4618-b932-1a6806926ed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.711 253665 DEBUG nova.network.neutron [req-4ab32b34-9f9c-42f4-9369-79da729e6ec4 req-c2f524e5-445c-4618-b932-1a6806926ed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Refreshing network info cache for port 1a443391-105a-4568-ba24-7748b702e21d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.716 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Start _get_guest_xml network_info=[{"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.722 253665 WARNING nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.736 253665 DEBUG nova.virt.libvirt.host [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.737 253665 DEBUG nova.virt.libvirt.host [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.743 253665 DEBUG nova.virt.libvirt.host [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.743 253665 DEBUG nova.virt.libvirt.host [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.744 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.745 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.746 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.746 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.746 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.747 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.747 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.748 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.748 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.749 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.749 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.750 253665 DEBUG nova.virt.hardware [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:47:30 np0005532048 nova_compute[253661]: 2025-11-22 09:47:30.755 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:47:31 np0005532048 nova_compute[253661]: 2025-11-22 09:47:31.268 253665 DEBUG nova.network.neutron [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:47:31 np0005532048 nova_compute[253661]: 2025-11-22 09:47:31.334 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:47:32 np0005532048 nova_compute[253661]: 2025-11-22 09:47:32.104 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:32 np0005532048 nova_compute[253661]: 2025-11-22 09:47:32.143 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 100 op/s
Nov 22 04:47:32 np0005532048 nova_compute[253661]: 2025-11-22 09:47:32.302 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804837.3013563, 91cfde9c-3aa6-4946-92d6-471c8f63eb2f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:47:32 np0005532048 nova_compute[253661]: 2025-11-22 09:47:32.303 253665 INFO nova.compute.manager [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:47:32 np0005532048 nova_compute[253661]: 2025-11-22 09:47:32.331 253665 DEBUG nova.compute.manager [None req-bf8c5ef0-060c-4a12-b059-9545f00169d0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:47:32 np0005532048 nova_compute[253661]: 2025-11-22 09:47:32.338 253665 DEBUG nova.compute.manager [None req-bf8c5ef0-060c-4a12-b059-9545f00169d0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:47:32 np0005532048 nova_compute[253661]: 2025-11-22 09:47:32.399 253665 INFO nova.compute.manager [None req-bf8c5ef0-060c-4a12-b059-9545f00169d0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
Nov 22 04:47:33 np0005532048 podman[400672]: 2025-11-22 09:47:33.362621296 +0000 UTC m=+0.059419637 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 04:47:33 np0005532048 podman[400673]: 2025-11-22 09:47:33.397636011 +0000 UTC m=+0.087739787 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:47:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 2.0 KiB/s wr, 8 op/s
Nov 22 04:47:34 np0005532048 ceph-mds[101348]: mds.beacon.cephfs.compute-0.myffln missed beacon ack from the monitors
Nov 22 04:47:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 2.0 KiB/s wr, 8 op/s
Nov 22 04:47:37 np0005532048 nova_compute[253661]: 2025-11-22 09:47:37.106 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:37 np0005532048 nova_compute[253661]: 2025-11-22 09:47:37.145 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.6 KiB/s rd, 409 B/s wr, 6 op/s
Nov 22 04:47:38 np0005532048 ceph-mds[101348]: mds.beacon.cephfs.compute-0.myffln missed beacon ack from the monitors
Nov 22 04:47:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:47:40 np0005532048 podman[400713]: 2025-11-22 09:47:40.39920092 +0000 UTC m=+0.089746967 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:47:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 12.7173 seconds
Nov 22 04:47:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:47:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:47:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/432132538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:47:42 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.456047058s, txc = 0x56208096c000
Nov 22 04:47:42 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 12.455920219s
Nov 22 04:47:42 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 12.455920219s
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.124 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 11.370s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:47:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.219 253665 DEBUG nova.storage.rbd_utils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.225 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.271 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:47:42 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4006889541' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.781 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.782 253665 DEBUG nova.virt.libvirt.vif [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:47:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-338385867',display_name='tempest-TestGettingAddress-server-338385867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-338385867',id=144,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpILKTYWQ3kfrev/53VAY+pIDp4KWqBaIuz4XZlRuV7cYP/3tSjynSwyzK2UmsUCSjsXQFLnnvZ6v16tA6+0Is85ND23t1ywaxzBRdcHpQBUN3ph/tnW10JsUxuXJTUFw==',key_name='tempest-TestGettingAddress-1100634772',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rpa99d70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:47:18Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=6973b14c-b2af-4012-9d0c-1e86b6eb3a28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.783 253665 DEBUG nova.network.os_vif_util [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.784 253665 DEBUG nova.network.os_vif_util [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:0f:53,bridge_name='br-int',has_traffic_filtering=True,id=1a443391-105a-4568-ba24-7748b702e21d,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a443391-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.785 253665 DEBUG nova.objects.instance [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6973b14c-b2af-4012-9d0c-1e86b6eb3a28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.795 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  <uuid>6973b14c-b2af-4012-9d0c-1e86b6eb3a28</uuid>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  <name>instance-00000090</name>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestGettingAddress-server-338385867</nova:name>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:47:30</nova:creationTime>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:47:42 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:        <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:        <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:        <nova:port uuid="1a443391-105a-4568-ba24-7748b702e21d">
Nov 22 04:47:42 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe1b:f53" ipVersion="6"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe1b:f53" ipVersion="6"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <entry name="serial">6973b14c-b2af-4012-9d0c-1e86b6eb3a28</entry>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <entry name="uuid">6973b14c-b2af-4012-9d0c-1e86b6eb3a28</entry>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk">
Nov 22 04:47:42 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:47:42 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk.config">
Nov 22 04:47:42 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:47:42 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:1b:0f:53"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <target dev="tap1a443391-10"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28/console.log" append="off"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:47:42 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:47:42 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:47:42 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:47:42 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.796 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Preparing to wait for external event network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.797 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.797 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.797 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.798 253665 DEBUG nova.virt.libvirt.vif [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:47:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-338385867',display_name='tempest-TestGettingAddress-server-338385867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-338385867',id=144,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpILKTYWQ3kfrev/53VAY+pIDp4KWqBaIuz4XZlRuV7cYP/3tSjynSwyzK2UmsUCSjsXQFLnnvZ6v16tA6+0Is85ND23t1ywaxzBRdcHpQBUN3ph/tnW10JsUxuXJTUFw==',key_name='tempest-TestGettingAddress-1100634772',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rpa99d70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:47:18Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=6973b14c-b2af-4012-9d0c-1e86b6eb3a28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.798 253665 DEBUG nova.network.os_vif_util [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.798 253665 DEBUG nova.network.os_vif_util [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:0f:53,bridge_name='br-int',has_traffic_filtering=True,id=1a443391-105a-4568-ba24-7748b702e21d,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a443391-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.799 253665 DEBUG os_vif [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:0f:53,bridge_name='br-int',has_traffic_filtering=True,id=1a443391-105a-4568-ba24-7748b702e21d,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a443391-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.799 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.799 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.800 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.802 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.802 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a443391-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.803 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1a443391-10, col_values=(('external_ids', {'iface-id': '1a443391-105a-4568-ba24-7748b702e21d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:0f:53', 'vm-uuid': '6973b14c-b2af-4012-9d0c-1e86b6eb3a28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.804 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:42 np0005532048 NetworkManager[48920]: <info>  [1763804862.8055] manager: (tap1a443391-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/635)
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.807 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.813 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.814 253665 INFO os_vif [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:0f:53,bridge_name='br-int',has_traffic_filtering=True,id=1a443391-105a-4568-ba24-7748b702e21d,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a443391-10')#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.916 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.916 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.916 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:1b:0f:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.917 253665 INFO nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Using config drive#033[00m
Nov 22 04:47:42 np0005532048 nova_compute[253661]: 2025-11-22 09:47:42.937 253665 DEBUG nova.storage.rbd_utils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:47:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Nov 22 04:47:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Nov 22 04:47:46 np0005532048 nova_compute[253661]: 2025-11-22 09:47:46.231 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:47:46 np0005532048 nova_compute[253661]: 2025-11-22 09:47:46.232 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:47:46 np0005532048 nova_compute[253661]: 2025-11-22 09:47:46.232 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:47:46 np0005532048 nova_compute[253661]: 2025-11-22 09:47:46.250 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 22 04:47:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:47:47 np0005532048 nova_compute[253661]: 2025-11-22 09:47:47.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:47 np0005532048 nova_compute[253661]: 2025-11-22 09:47:47.225 253665 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 2.95 sec#033[00m
Nov 22 04:47:47 np0005532048 nova_compute[253661]: 2025-11-22 09:47:47.805 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.448 253665 INFO nova.virt.libvirt.driver [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance destroyed successfully.#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.449 253665 DEBUG nova.objects.instance [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'resources' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.468 253665 DEBUG nova.virt.libvirt.vif [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:46:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-140973884',display_name='tempest-TestShelveInstance-server-140973884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-140973884',id=143,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJMfV5BjTM8GJujok7HYi2H1JqAcE7EEyl3AluUOeV8mGOJe1kvDgduzG9FjqiMj3IyTkvrleTcL49x3Y3dHrfp4PbZT/WUxBgqL6QlOxXbuGaO695U0GzmKtLI552+pbw==',key_name='tempest-TestShelveInstance-1840126280',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:46:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2a86e5c3f3c34f2285b7958147f6bbd3',ramdisk_id='',reservation_id='r-4322pjah',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-463882348',owner_user_name='tempest-TestShelveInstance-463882348-project-member',shelved_at='2025-11-22T09:47:24.856148',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='7427dc9c-0c7d-45bc-9904-89241d5b4e4d'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:47:18Z,user_data=None,user_id='15f54ba9d7eb4efd9b760da5c85ec22e',uuid=91cfde9c-3aa6-4946-92d6-471c8f63eb2f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.469 253665 DEBUG nova.network.os_vif_util [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converting VIF {"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.470 253665 DEBUG nova.network.os_vif_util [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.470 253665 DEBUG os_vif [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.471 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.472 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88d574be-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.473 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.475 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.478 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.480 253665 INFO os_vif [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb')#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.681 253665 INFO nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Creating config drive at /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28/disk.config#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.686 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4p_yv5nx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.728 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.729 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.730 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.730 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 63134c6f-fc14-4157-9874-e7c6227f8d0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.733 253665 DEBUG nova.network.neutron [req-4ab32b34-9f9c-42f4-9369-79da729e6ec4 req-c2f524e5-445c-4618-b932-1a6806926ed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Updated VIF entry in instance network info cache for port 1a443391-105a-4568-ba24-7748b702e21d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.734 253665 DEBUG nova.network.neutron [req-4ab32b34-9f9c-42f4-9369-79da729e6ec4 req-c2f524e5-445c-4618-b932-1a6806926ed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Updating instance_info_cache with network_info: [{"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.769 253665 DEBUG oslo_concurrency.lockutils [req-4ab32b34-9f9c-42f4-9369-79da729e6ec4 req-c2f524e5-445c-4618-b932-1a6806926ed3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.838 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4p_yv5nx" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.864 253665 DEBUG nova.storage.rbd_utils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:47:49 np0005532048 nova_compute[253661]: 2025-11-22 09:47:49.869 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28/disk.config 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:47:50 np0005532048 nova_compute[253661]: 2025-11-22 09:47:50.002 253665 DEBUG nova.compute.manager [req-27380d09-4944-4480-a8ae-91a5aff2fafb req-43528b82-b84c-4660-a7e3-8676634321e3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-changed-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:47:50 np0005532048 nova_compute[253661]: 2025-11-22 09:47:50.003 253665 DEBUG nova.compute.manager [req-27380d09-4944-4480-a8ae-91a5aff2fafb req-43528b82-b84c-4660-a7e3-8676634321e3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing instance network info cache due to event network-changed-88d574be-cb53-4693-a025-34a039ee625c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:47:50 np0005532048 nova_compute[253661]: 2025-11-22 09:47:50.003 253665 DEBUG oslo_concurrency.lockutils [req-27380d09-4944-4480-a8ae-91a5aff2fafb req-43528b82-b84c-4660-a7e3-8676634321e3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:47:50 np0005532048 nova_compute[253661]: 2025-11-22 09:47:50.003 253665 DEBUG oslo_concurrency.lockutils [req-27380d09-4944-4480-a8ae-91a5aff2fafb req-43528b82-b84c-4660-a7e3-8676634321e3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:47:50 np0005532048 nova_compute[253661]: 2025-11-22 09:47:50.004 253665 DEBUG nova.network.neutron [req-27380d09-4944-4480-a8ae-91a5aff2fafb req-43528b82-b84c-4660-a7e3-8676634321e3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing network info cache for port 88d574be-cb53-4693-a025-34a039ee625c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:47:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 1 op/s
Nov 22 04:47:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:51.528 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:47:51 np0005532048 nova_compute[253661]: 2025-11-22 09:47:51.529 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:51 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:51.529 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:47:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:47:52 np0005532048 nova_compute[253661]: 2025-11-22 09:47:52.166 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 8.7 KiB/s wr, 2 op/s
Nov 22 04:47:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:47:52
Nov 22 04:47:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:47:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:47:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'images']
Nov 22 04:47:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:47:52 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:52.531 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:47:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:47:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:47:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:47:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:47:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:47:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:47:53 np0005532048 nova_compute[253661]: 2025-11-22 09:47:53.074 253665 DEBUG oslo_concurrency.processutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28/disk.config 6973b14c-b2af-4012-9d0c-1e86b6eb3a28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:47:53 np0005532048 nova_compute[253661]: 2025-11-22 09:47:53.075 253665 INFO nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Deleting local config drive /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28/disk.config because it was imported into RBD.#033[00m
Nov 22 04:47:53 np0005532048 kernel: tap1a443391-10: entered promiscuous mode
Nov 22 04:47:53 np0005532048 NetworkManager[48920]: <info>  [1763804873.1284] manager: (tap1a443391-10): new Tun device (/org/freedesktop/NetworkManager/Devices/636)
Nov 22 04:47:53 np0005532048 nova_compute[253661]: 2025-11-22 09:47:53.130 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:47:53Z|01555|binding|INFO|Claiming lport 1a443391-105a-4568-ba24-7748b702e21d for this chassis.
Nov 22 04:47:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:47:53Z|01556|binding|INFO|1a443391-105a-4568-ba24-7748b702e21d: Claiming fa:16:3e:1b:0f:53 10.100.0.8 2001:db8:0:1:f816:3eff:fe1b:f53 2001:db8::f816:3eff:fe1b:f53
Nov 22 04:47:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:47:53Z|01557|binding|INFO|Setting lport 1a443391-105a-4568-ba24-7748b702e21d ovn-installed in OVS
Nov 22 04:47:53 np0005532048 nova_compute[253661]: 2025-11-22 09:47:53.151 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:53 np0005532048 systemd-udevd[400885]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:47:53 np0005532048 nova_compute[253661]: 2025-11-22 09:47:53.154 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:53 np0005532048 systemd-machined[215941]: New machine qemu-175-instance-00000090.
Nov 22 04:47:53 np0005532048 NetworkManager[48920]: <info>  [1763804873.1682] device (tap1a443391-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:47:53 np0005532048 NetworkManager[48920]: <info>  [1763804873.1692] device (tap1a443391-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:47:53 np0005532048 systemd[1]: Started Virtual Machine qemu-175-instance-00000090.
Nov 22 04:47:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:47:53Z|01558|binding|INFO|Setting lport 1a443391-105a-4568-ba24-7748b702e21d up in Southbound
Nov 22 04:47:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.350 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:0f:53 10.100.0.8 2001:db8:0:1:f816:3eff:fe1b:f53 2001:db8::f816:3eff:fe1b:f53'], port_security=['fa:16:3e:1b:0f:53 10.100.0.8 2001:db8:0:1:f816:3eff:fe1b:f53 2001:db8::f816:3eff:fe1b:f53'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8:0:1:f816:3eff:fe1b:f53/64 2001:db8::f816:3eff:fe1b:f53/64', 'neutron:device_id': '6973b14c-b2af-4012-9d0c-1e86b6eb3a28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b8f1ae80-edda-4d40-9085-393558ac5aa1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eb6cfbf-9d17-4d61-b927-87a60dc61782, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1a443391-105a-4568-ba24-7748b702e21d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:47:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.351 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1a443391-105a-4568-ba24-7748b702e21d in datapath b6b9221a-729b-4988-afa8-72f95360d9ea bound to our chassis#033[00m
Nov 22 04:47:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.353 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6b9221a-729b-4988-afa8-72f95360d9ea#033[00m
Nov 22 04:47:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.367 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[33197980-f24e-4253-8bc3-c434a8b2d363]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:47:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.398 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[a57cf19e-c4eb-4c85-b480-5425c0635876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:47:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.401 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[dd314488-f8b3-46a2-8267-a84d246d391a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:47:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.449 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4591b1ed-29bd-4da7-87e8-ad68935a17bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:47:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.483 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d5964b7a-3027-4d72-a132-4b773a89fe84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6b9221a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:f0:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764983, 'reachable_time': 18873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400900, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:47:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.512 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[05d7db40-cd0c-4a2c-b827-ca5d370a7504]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb6b9221a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 764994, 'tstamp': 764994}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400901, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb6b9221a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 764997, 'tstamp': 764997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400901, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:47:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.514 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6b9221a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:47:53 np0005532048 nova_compute[253661]: 2025-11-22 09:47:53.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:53 np0005532048 nova_compute[253661]: 2025-11-22 09:47:53.517 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.518 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6b9221a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:47:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.518 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:47:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.518 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6b9221a-70, col_values=(('external_ids', {'iface-id': 'b8d092bb-b893-4593-9090-1acdc081ae18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:47:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:47:53.519 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:47:53 np0005532048 nova_compute[253661]: 2025-11-22 09:47:53.943 253665 DEBUG nova.network.neutron [req-27380d09-4944-4480-a8ae-91a5aff2fafb req-43528b82-b84c-4660-a7e3-8676634321e3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updated VIF entry in instance network info cache for port 88d574be-cb53-4693-a025-34a039ee625c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:47:53 np0005532048 nova_compute[253661]: 2025-11-22 09:47:53.944 253665 DEBUG nova.network.neutron [req-27380d09-4944-4480-a8ae-91a5aff2fafb req-43528b82-b84c-4660-a7e3-8676634321e3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": null, "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap88d574be-cb", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:47:54 np0005532048 nova_compute[253661]: 2025-11-22 09:47:54.216 253665 DEBUG oslo_concurrency.lockutils [req-27380d09-4944-4480-a8ae-91a5aff2fafb req-43528b82-b84c-4660-a7e3-8676634321e3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:47:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 8.9 KiB/s wr, 13 op/s
Nov 22 04:47:54 np0005532048 nova_compute[253661]: 2025-11-22 09:47:54.475 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:55 np0005532048 nova_compute[253661]: 2025-11-22 09:47:55.501 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804875.5006216, 6973b14c-b2af-4012-9d0c-1e86b6eb3a28 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:47:55 np0005532048 nova_compute[253661]: 2025-11-22 09:47:55.501 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] VM Started (Lifecycle Event)#033[00m
Nov 22 04:47:55 np0005532048 nova_compute[253661]: 2025-11-22 09:47:55.533 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:47:55 np0005532048 nova_compute[253661]: 2025-11-22 09:47:55.537 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804875.5008843, 6973b14c-b2af-4012-9d0c-1e86b6eb3a28 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:47:55 np0005532048 nova_compute[253661]: 2025-11-22 09:47:55.538 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:47:55 np0005532048 nova_compute[253661]: 2025-11-22 09:47:55.560 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:47:55 np0005532048 nova_compute[253661]: 2025-11-22 09:47:55.563 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:47:55 np0005532048 nova_compute[253661]: 2025-11-22 09:47:55.579 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:47:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:47:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:47:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:47:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:47:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:47:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 8.8 KiB/s wr, 12 op/s
Nov 22 04:47:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:47:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:47:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:47:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:47:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.021 253665 DEBUG nova.compute.manager [req-824a1a07-7258-47de-8e58-d5823459e458 req-a902dd38-534d-4f71-aa14-5bae17993d8d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.022 253665 DEBUG oslo_concurrency.lockutils [req-824a1a07-7258-47de-8e58-d5823459e458 req-a902dd38-534d-4f71-aa14-5bae17993d8d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.022 253665 DEBUG oslo_concurrency.lockutils [req-824a1a07-7258-47de-8e58-d5823459e458 req-a902dd38-534d-4f71-aa14-5bae17993d8d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.022 253665 DEBUG oslo_concurrency.lockutils [req-824a1a07-7258-47de-8e58-d5823459e458 req-a902dd38-534d-4f71-aa14-5bae17993d8d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.023 253665 DEBUG nova.compute.manager [req-824a1a07-7258-47de-8e58-d5823459e458 req-a902dd38-534d-4f71-aa14-5bae17993d8d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Processing event network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.023 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.027 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804877.0274258, 6973b14c-b2af-4012-9d0c-1e86b6eb3a28 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.028 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.030 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.034 253665 INFO nova.virt.libvirt.driver [-] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Instance spawned successfully.#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.034 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.050 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.056 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.059 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.060 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.060 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.060 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.061 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.061 253665 DEBUG nova.virt.libvirt.driver [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.092 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.169 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:47:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.351 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updating instance_info_cache with network_info: [{"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.376 253665 INFO nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Took 38.78 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.376 253665 DEBUG nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.377 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.377 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.378 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.378 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.379 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.379 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.379 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.379 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.379 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.417 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.417 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.417 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.418 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.418 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.639 253665 INFO nova.compute.manager [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Took 40.09 seconds to build instance.#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.694 253665 DEBUG oslo_concurrency.lockutils [None req-1cd8ec3c-1359-4c59-8ef6-03ac5bbf855f 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 40.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:47:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:47:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/697904501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.869 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.963 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.964 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.969 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.970 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.974 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:47:57 np0005532048 nova_compute[253661]: 2025-11-22 09:47:57.974 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:47:58 np0005532048 nova_compute[253661]: 2025-11-22 09:47:58.176 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:47:58 np0005532048 nova_compute[253661]: 2025-11-22 09:47:58.177 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3347MB free_disk=59.87614822387695GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:47:58 np0005532048 nova_compute[253661]: 2025-11-22 09:47:58.177 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:47:58 np0005532048 nova_compute[253661]: 2025-11-22 09:47:58.178 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:47:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 302 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.1 KiB/s rd, 21 KiB/s wr, 14 op/s
Nov 22 04:47:58 np0005532048 nova_compute[253661]: 2025-11-22 09:47:58.243 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 63134c6f-fc14-4157-9874-e7c6227f8d0a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:47:58 np0005532048 nova_compute[253661]: 2025-11-22 09:47:58.244 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 91cfde9c-3aa6-4946-92d6-471c8f63eb2f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:47:58 np0005532048 nova_compute[253661]: 2025-11-22 09:47:58.244 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 6973b14c-b2af-4012-9d0c-1e86b6eb3a28 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:47:58 np0005532048 nova_compute[253661]: 2025-11-22 09:47:58.244 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:47:58 np0005532048 nova_compute[253661]: 2025-11-22 09:47:58.244 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:47:58 np0005532048 nova_compute[253661]: 2025-11-22 09:47:58.340 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:47:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:47:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1990508250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:47:59 np0005532048 nova_compute[253661]: 2025-11-22 09:47:59.040 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.699s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:47:59 np0005532048 nova_compute[253661]: 2025-11-22 09:47:59.047 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:47:59 np0005532048 nova_compute[253661]: 2025-11-22 09:47:59.063 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:47:59 np0005532048 nova_compute[253661]: 2025-11-22 09:47:59.134 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:47:59 np0005532048 nova_compute[253661]: 2025-11-22 09:47:59.135 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:47:59 np0005532048 nova_compute[253661]: 2025-11-22 09:47:59.174 253665 DEBUG nova.compute.manager [req-f793b22b-0295-4775-98be-807546e51c26 req-ae0e4330-23cf-4f81-8233-b13960ff2b0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:47:59 np0005532048 nova_compute[253661]: 2025-11-22 09:47:59.175 253665 DEBUG oslo_concurrency.lockutils [req-f793b22b-0295-4775-98be-807546e51c26 req-ae0e4330-23cf-4f81-8233-b13960ff2b0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:47:59 np0005532048 nova_compute[253661]: 2025-11-22 09:47:59.175 253665 DEBUG oslo_concurrency.lockutils [req-f793b22b-0295-4775-98be-807546e51c26 req-ae0e4330-23cf-4f81-8233-b13960ff2b0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:47:59 np0005532048 nova_compute[253661]: 2025-11-22 09:47:59.175 253665 DEBUG oslo_concurrency.lockutils [req-f793b22b-0295-4775-98be-807546e51c26 req-ae0e4330-23cf-4f81-8233-b13960ff2b0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:47:59 np0005532048 nova_compute[253661]: 2025-11-22 09:47:59.175 253665 DEBUG nova.compute.manager [req-f793b22b-0295-4775-98be-807546e51c26 req-ae0e4330-23cf-4f81-8233-b13960ff2b0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] No waiting events found dispatching network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:47:59 np0005532048 nova_compute[253661]: 2025-11-22 09:47:59.176 253665 WARNING nova.compute.manager [req-f793b22b-0295-4775-98be-807546e51c26 req-ae0e4330-23cf-4f81-8233-b13960ff2b0b 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received unexpected event network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d for instance with vm_state active and task_state None.#033[00m
Nov 22 04:47:59 np0005532048 nova_compute[253661]: 2025-11-22 09:47:59.478 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 249 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 749 KiB/s rd, 13 KiB/s wr, 49 op/s
Nov 22 04:48:00 np0005532048 nova_compute[253661]: 2025-11-22 09:48:00.419 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:01 np0005532048 nova_compute[253661]: 2025-11-22 09:48:01.129 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:48:01 np0005532048 nova_compute[253661]: 2025-11-22 09:48:01.130 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:48:01 np0005532048 nova_compute[253661]: 2025-11-22 09:48:01.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:48:02 np0005532048 nova_compute[253661]: 2025-11-22 09:48:02.172 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 92 op/s
Nov 22 04:48:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:48:02 np0005532048 nova_compute[253661]: 2025-11-22 09:48:02.767 253665 INFO nova.virt.libvirt.driver [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Deleting instance files /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_del#033[00m
Nov 22 04:48:02 np0005532048 nova_compute[253661]: 2025-11-22 09:48:02.769 253665 INFO nova.virt.libvirt.driver [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Deletion of /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_del complete#033[00m
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011106117120856939 of space, bias 1.0, pg target 0.33318351362570814 quantized to 32 (current 32)
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014249405808426125 of space, bias 1.0, pg target 0.42748217425278373 quantized to 32 (current 32)
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:48:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:48:03 np0005532048 nova_compute[253661]: 2025-11-22 09:48:03.314 253665 INFO nova.scheduler.client.report [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Deleted allocations for instance 91cfde9c-3aa6-4946-92d6-471c8f63eb2f#033[00m
Nov 22 04:48:03 np0005532048 nova_compute[253661]: 2025-11-22 09:48:03.912 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:03 np0005532048 nova_compute[253661]: 2025-11-22 09:48:03.913 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:04 np0005532048 nova_compute[253661]: 2025-11-22 09:48:04.008 253665 DEBUG oslo_concurrency.processutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:48:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 99 op/s
Nov 22 04:48:04 np0005532048 podman[401010]: 2025-11-22 09:48:04.399348636 +0000 UTC m=+0.078834357 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 04:48:04 np0005532048 podman[401011]: 2025-11-22 09:48:04.410741588 +0000 UTC m=+0.088767443 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:48:04 np0005532048 nova_compute[253661]: 2025-11-22 09:48:04.483 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:48:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2002951629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:48:04 np0005532048 nova_compute[253661]: 2025-11-22 09:48:04.522 253665 DEBUG oslo_concurrency.processutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:48:04 np0005532048 nova_compute[253661]: 2025-11-22 09:48:04.530 253665 DEBUG nova.compute.provider_tree [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:48:04 np0005532048 nova_compute[253661]: 2025-11-22 09:48:04.567 253665 DEBUG nova.scheduler.client.report [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:48:04 np0005532048 nova_compute[253661]: 2025-11-22 09:48:04.647 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:04 np0005532048 nova_compute[253661]: 2025-11-22 09:48:04.759 253665 DEBUG oslo_concurrency.lockutils [None req-3fb795bb-7363-4178-92c2-0fe22a0e1bcd 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 50.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 89 op/s
Nov 22 04:48:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 04:48:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 04:48:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:48:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:48:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:48:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:48:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:48:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:48:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev b25f7685-ade7-43a4-95d9-3ecf1d48878f does not exist
Nov 22 04:48:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 1ea8fd3f-ecc5-4fd2-a00d-eb40aaa3b22c does not exist
Nov 22 04:48:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 5dd8e59b-5e79-4f98-a50a-caf64e08a137 does not exist
Nov 22 04:48:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:48:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:48:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:48:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:48:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:48:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:48:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 04:48:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:48:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:48:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:48:07 np0005532048 nova_compute[253661]: 2025-11-22 09:48:07.164 253665 DEBUG nova.compute.manager [req-9f5a656b-9b23-4fae-93af-00a17c6ea294 req-400c23b8-b4ac-4299-982c-24e1fbd9320f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-changed-1a443391-105a-4568-ba24-7748b702e21d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:07 np0005532048 nova_compute[253661]: 2025-11-22 09:48:07.167 253665 DEBUG nova.compute.manager [req-9f5a656b-9b23-4fae-93af-00a17c6ea294 req-400c23b8-b4ac-4299-982c-24e1fbd9320f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Refreshing instance network info cache due to event network-changed-1a443391-105a-4568-ba24-7748b702e21d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:48:07 np0005532048 nova_compute[253661]: 2025-11-22 09:48:07.168 253665 DEBUG oslo_concurrency.lockutils [req-9f5a656b-9b23-4fae-93af-00a17c6ea294 req-400c23b8-b4ac-4299-982c-24e1fbd9320f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:48:07 np0005532048 nova_compute[253661]: 2025-11-22 09:48:07.168 253665 DEBUG oslo_concurrency.lockutils [req-9f5a656b-9b23-4fae-93af-00a17c6ea294 req-400c23b8-b4ac-4299-982c-24e1fbd9320f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:48:07 np0005532048 nova_compute[253661]: 2025-11-22 09:48:07.169 253665 DEBUG nova.network.neutron [req-9f5a656b-9b23-4fae-93af-00a17c6ea294 req-400c23b8-b4ac-4299-982c-24e1fbd9320f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Refreshing network info cache for port 1a443391-105a-4568-ba24-7748b702e21d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:48:07 np0005532048 nova_compute[253661]: 2025-11-22 09:48:07.236 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:48:07 np0005532048 podman[401314]: 2025-11-22 09:48:07.438554644 +0000 UTC m=+0.026754422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:48:07 np0005532048 podman[401314]: 2025-11-22 09:48:07.537956558 +0000 UTC m=+0.126156316 container create 772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:48:07 np0005532048 systemd[1]: Started libpod-conmon-772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55.scope.
Nov 22 04:48:07 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:48:07 np0005532048 podman[401314]: 2025-11-22 09:48:07.694044442 +0000 UTC m=+0.282244200 container init 772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 04:48:07 np0005532048 podman[401314]: 2025-11-22 09:48:07.705463245 +0000 UTC m=+0.293663003 container start 772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:48:07 np0005532048 compassionate_mestorf[401330]: 167 167
Nov 22 04:48:07 np0005532048 systemd[1]: libpod-772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55.scope: Deactivated successfully.
Nov 22 04:48:07 np0005532048 podman[401314]: 2025-11-22 09:48:07.786776252 +0000 UTC m=+0.374976040 container attach 772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 04:48:07 np0005532048 podman[401314]: 2025-11-22 09:48:07.787834858 +0000 UTC m=+0.376034616 container died 772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 04:48:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay-503f5ffa285e7be60851db40ccbb20339112461cbb7d125faf80d128dabac895-merged.mount: Deactivated successfully.
Nov 22 04:48:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 89 op/s
Nov 22 04:48:09 np0005532048 podman[401314]: 2025-11-22 09:48:09.144105909 +0000 UTC m=+1.732305677 container remove 772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mestorf, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:48:09 np0005532048 systemd[1]: libpod-conmon-772314bd0a74c01150583666ad9b9327742db3a88a330bc71f39b60625ac2f55.scope: Deactivated successfully.
Nov 22 04:48:09 np0005532048 podman[401354]: 2025-11-22 09:48:09.384146125 +0000 UTC m=+0.045104734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:48:09 np0005532048 nova_compute[253661]: 2025-11-22 09:48:09.486 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:09 np0005532048 podman[401354]: 2025-11-22 09:48:09.720949103 +0000 UTC m=+0.381907702 container create b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wozniak, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:48:10 np0005532048 systemd[1]: Started libpod-conmon-b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd.scope.
Nov 22 04:48:10 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:48:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec857b6763a714834ed149b6974c98d9624b7fa3ddf8c015125ed8f3cb6f0f1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:48:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec857b6763a714834ed149b6974c98d9624b7fa3ddf8c015125ed8f3cb6f0f1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:48:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec857b6763a714834ed149b6974c98d9624b7fa3ddf8c015125ed8f3cb6f0f1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:48:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec857b6763a714834ed149b6974c98d9624b7fa3ddf8c015125ed8f3cb6f0f1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:48:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec857b6763a714834ed149b6974c98d9624b7fa3ddf8c015125ed8f3cb6f0f1d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:48:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 938 B/s wr, 88 op/s
Nov 22 04:48:10 np0005532048 nova_compute[253661]: 2025-11-22 09:48:10.273 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:10 np0005532048 nova_compute[253661]: 2025-11-22 09:48:10.274 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:10 np0005532048 nova_compute[253661]: 2025-11-22 09:48:10.274 253665 INFO nova.compute.manager [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Unshelving#033[00m
Nov 22 04:48:10 np0005532048 nova_compute[253661]: 2025-11-22 09:48:10.351 253665 DEBUG nova.network.neutron [req-9f5a656b-9b23-4fae-93af-00a17c6ea294 req-400c23b8-b4ac-4299-982c-24e1fbd9320f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Updated VIF entry in instance network info cache for port 1a443391-105a-4568-ba24-7748b702e21d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:48:10 np0005532048 nova_compute[253661]: 2025-11-22 09:48:10.352 253665 DEBUG nova.network.neutron [req-9f5a656b-9b23-4fae-93af-00a17c6ea294 req-400c23b8-b4ac-4299-982c-24e1fbd9320f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Updating instance_info_cache with network_info: [{"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:48:10 np0005532048 nova_compute[253661]: 2025-11-22 09:48:10.356 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:10 np0005532048 nova_compute[253661]: 2025-11-22 09:48:10.356 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:10 np0005532048 nova_compute[253661]: 2025-11-22 09:48:10.363 253665 DEBUG nova.objects.instance [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'pci_requests' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:48:10 np0005532048 nova_compute[253661]: 2025-11-22 09:48:10.374 253665 DEBUG nova.objects.instance [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:48:10 np0005532048 nova_compute[253661]: 2025-11-22 09:48:10.382 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:48:10 np0005532048 nova_compute[253661]: 2025-11-22 09:48:10.383 253665 INFO nova.compute.claims [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:48:10 np0005532048 nova_compute[253661]: 2025-11-22 09:48:10.446 253665 DEBUG oslo_concurrency.lockutils [req-9f5a656b-9b23-4fae-93af-00a17c6ea294 req-400c23b8-b4ac-4299-982c-24e1fbd9320f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:48:10 np0005532048 nova_compute[253661]: 2025-11-22 09:48:10.549 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:48:11 np0005532048 nova_compute[253661]: 2025-11-22 09:48:11.007 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:11 np0005532048 podman[401354]: 2025-11-22 09:48:11.044236868 +0000 UTC m=+1.705195467 container init b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:48:11 np0005532048 podman[401354]: 2025-11-22 09:48:11.070168019 +0000 UTC m=+1.731126578 container start b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:48:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:48:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2008884121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:48:11 np0005532048 nova_compute[253661]: 2025-11-22 09:48:11.142 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:48:11 np0005532048 nova_compute[253661]: 2025-11-22 09:48:11.150 253665 DEBUG nova.compute.provider_tree [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:48:11 np0005532048 nova_compute[253661]: 2025-11-22 09:48:11.162 253665 DEBUG nova.scheduler.client.report [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:48:11 np0005532048 nova_compute[253661]: 2025-11-22 09:48:11.183 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:11 np0005532048 nova_compute[253661]: 2025-11-22 09:48:11.312 253665 INFO nova.network.neutron [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating port 88d574be-cb53-4693-a025-34a039ee625c with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 22 04:48:11 np0005532048 podman[401354]: 2025-11-22 09:48:11.988983097 +0000 UTC m=+2.649941766 container attach b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wozniak, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:48:12 np0005532048 podman[401397]: 2025-11-22 09:48:12.200151282 +0000 UTC m=+0.878728060 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 04:48:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 249 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 293 KiB/s wr, 58 op/s
Nov 22 04:48:12 np0005532048 nova_compute[253661]: 2025-11-22 09:48:12.238 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:48:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:48:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3015865944' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:48:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:48:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3015865944' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:48:12 np0005532048 nova_compute[253661]: 2025-11-22 09:48:12.918 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:48:12 np0005532048 nova_compute[253661]: 2025-11-22 09:48:12.919 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:48:12 np0005532048 nova_compute[253661]: 2025-11-22 09:48:12.919 253665 DEBUG nova.network.neutron [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:48:13 np0005532048 nova_compute[253661]: 2025-11-22 09:48:13.054 253665 DEBUG nova.compute.manager [req-e2ebf338-f1c5-43e5-b515-597a6f79b468 req-f3fb2a7a-6f6e-4a74-983f-82b7f9772451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-changed-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:13 np0005532048 nova_compute[253661]: 2025-11-22 09:48:13.054 253665 DEBUG nova.compute.manager [req-e2ebf338-f1c5-43e5-b515-597a6f79b468 req-f3fb2a7a-6f6e-4a74-983f-82b7f9772451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing instance network info cache due to event network-changed-88d574be-cb53-4693-a025-34a039ee625c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:48:13 np0005532048 nova_compute[253661]: 2025-11-22 09:48:13.055 253665 DEBUG oslo_concurrency.lockutils [req-e2ebf338-f1c5-43e5-b515-597a6f79b468 req-f3fb2a7a-6f6e-4a74-983f-82b7f9772451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:48:13 np0005532048 magical_wozniak[401370]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:48:13 np0005532048 magical_wozniak[401370]: --> relative data size: 1.0
Nov 22 04:48:13 np0005532048 magical_wozniak[401370]: --> All data devices are unavailable
Nov 22 04:48:13 np0005532048 systemd[1]: libpod-b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd.scope: Deactivated successfully.
Nov 22 04:48:13 np0005532048 systemd[1]: libpod-b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd.scope: Consumed 1.126s CPU time.
Nov 22 04:48:13 np0005532048 podman[401447]: 2025-11-22 09:48:13.393124889 +0000 UTC m=+0.028252888 container died b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wozniak, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 04:48:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 254 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Nov 22 04:48:14 np0005532048 nova_compute[253661]: 2025-11-22 09:48:14.380 253665 DEBUG nova.network.neutron [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:48:14 np0005532048 nova_compute[253661]: 2025-11-22 09:48:14.397 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:48:14 np0005532048 nova_compute[253661]: 2025-11-22 09:48:14.399 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:48:14 np0005532048 nova_compute[253661]: 2025-11-22 09:48:14.400 253665 INFO nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Creating image(s)
Nov 22 04:48:14 np0005532048 nova_compute[253661]: 2025-11-22 09:48:14.428 253665 DEBUG nova.storage.rbd_utils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:48:14 np0005532048 nova_compute[253661]: 2025-11-22 09:48:14.434 253665 DEBUG nova.objects.instance [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:48:14 np0005532048 nova_compute[253661]: 2025-11-22 09:48:14.437 253665 DEBUG oslo_concurrency.lockutils [req-e2ebf338-f1c5-43e5-b515-597a6f79b468 req-f3fb2a7a-6f6e-4a74-983f-82b7f9772451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:48:14 np0005532048 nova_compute[253661]: 2025-11-22 09:48:14.437 253665 DEBUG nova.network.neutron [req-e2ebf338-f1c5-43e5-b515-597a6f79b468 req-f3fb2a7a-6f6e-4a74-983f-82b7f9772451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing network info cache for port 88d574be-cb53-4693-a025-34a039ee625c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:48:14 np0005532048 nova_compute[253661]: 2025-11-22 09:48:14.488 253665 DEBUG nova.storage.rbd_utils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:48:14 np0005532048 nova_compute[253661]: 2025-11-22 09:48:14.681 253665 DEBUG nova.storage.rbd_utils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:48:14 np0005532048 nova_compute[253661]: 2025-11-22 09:48:14.687 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "137b68f7209b57adc5bfe7c053ca10718182857d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:48:14 np0005532048 nova_compute[253661]: 2025-11-22 09:48:14.688 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "137b68f7209b57adc5bfe7c053ca10718182857d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:48:14 np0005532048 nova_compute[253661]: 2025-11-22 09:48:14.694 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:48:14 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ec857b6763a714834ed149b6974c98d9624b7fa3ddf8c015125ed8f3cb6f0f1d-merged.mount: Deactivated successfully.
Nov 22 04:48:14 np0005532048 nova_compute[253661]: 2025-11-22 09:48:14.955 253665 DEBUG nova.virt.libvirt.imagebackend [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Image locations are: [{'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/7427dc9c-0c7d-45bc-9904-89241d5b4e4d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/7427dc9c-0c7d-45bc-9904-89241d5b4e4d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 22 04:48:15 np0005532048 nova_compute[253661]: 2025-11-22 09:48:15.176 253665 DEBUG nova.virt.libvirt.imagebackend [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Selected location: {'url': 'rbd://34829716-a12c-57a6-8915-c1aa615c9d8a/images/7427dc9c-0c7d-45bc-9904-89241d5b4e4d/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 22 04:48:15 np0005532048 nova_compute[253661]: 2025-11-22 09:48:15.176 253665 DEBUG nova.storage.rbd_utils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] cloning images/7427dc9c-0c7d-45bc-9904-89241d5b4e4d@snap to None/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 04:48:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 254 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.1 MiB/s wr, 17 op/s
Nov 22 04:48:16 np0005532048 nova_compute[253661]: 2025-11-22 09:48:16.236 253665 DEBUG nova.network.neutron [req-e2ebf338-f1c5-43e5-b515-597a6f79b468 req-f3fb2a7a-6f6e-4a74-983f-82b7f9772451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updated VIF entry in instance network info cache for port 88d574be-cb53-4693-a025-34a039ee625c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:48:16 np0005532048 nova_compute[253661]: 2025-11-22 09:48:16.238 253665 DEBUG nova.network.neutron [req-e2ebf338-f1c5-43e5-b515-597a6f79b468 req-f3fb2a7a-6f6e-4a74-983f-82b7f9772451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:48:16 np0005532048 nova_compute[253661]: 2025-11-22 09:48:16.250 253665 DEBUG oslo_concurrency.lockutils [req-e2ebf338-f1c5-43e5-b515-597a6f79b468 req-f3fb2a7a-6f6e-4a74-983f-82b7f9772451 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:48:16 np0005532048 nova_compute[253661]: 2025-11-22 09:48:16.372 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:48:16 np0005532048 podman[401447]: 2025-11-22 09:48:16.450626408 +0000 UTC m=+3.085754387 container remove b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:48:16 np0005532048 systemd[1]: libpod-conmon-b1ce89ae6d88664eaca68df09d5ca66bfa078cd7e2fbba71915d85409c356afd.scope: Deactivated successfully.
Nov 22 04:48:17 np0005532048 nova_compute[253661]: 2025-11-22 09:48:17.240 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:48:17 np0005532048 podman[401723]: 2025-11-22 09:48:17.178878042 +0000 UTC m=+0.027043530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:48:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:48:18 np0005532048 podman[401723]: 2025-11-22 09:48:18.197385981 +0000 UTC m=+1.045551469 container create f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_easley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 22 04:48:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 259 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 1.5 MiB/s wr, 22 op/s
Nov 22 04:48:18 np0005532048 systemd[1]: Started libpod-conmon-f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951.scope.
Nov 22 04:48:18 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:48:18 np0005532048 podman[401723]: 2025-11-22 09:48:18.758253171 +0000 UTC m=+1.606418739 container init f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_easley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 04:48:18 np0005532048 podman[401723]: 2025-11-22 09:48:18.769712074 +0000 UTC m=+1.617877562 container start f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_easley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:48:18 np0005532048 gallant_easley[401743]: 167 167
Nov 22 04:48:18 np0005532048 systemd[1]: libpod-f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951.scope: Deactivated successfully.
Nov 22 04:48:19 np0005532048 podman[401723]: 2025-11-22 09:48:19.120401883 +0000 UTC m=+1.968567451 container attach f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_easley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 04:48:19 np0005532048 podman[401723]: 2025-11-22 09:48:19.122146946 +0000 UTC m=+1.970312434 container died f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_easley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:48:19 np0005532048 nova_compute[253661]: 2025-11-22 09:48:19.328 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:48:19 np0005532048 nova_compute[253661]: 2025-11-22 09:48:19.697 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:48:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 264 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 22 04:48:20 np0005532048 systemd[1]: var-lib-containers-storage-overlay-17e76301868f54794deab97962d82984139fb90daca31db140d6d60a2ec6afc9-merged.mount: Deactivated successfully.
Nov 22 04:48:20 np0005532048 nova_compute[253661]: 2025-11-22 09:48:20.715 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "137b68f7209b57adc5bfe7c053ca10718182857d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 6.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:48:20 np0005532048 nova_compute[253661]: 2025-11-22 09:48:20.881 253665 DEBUG nova.objects.instance [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'migration_context' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:48:20 np0005532048 nova_compute[253661]: 2025-11-22 09:48:20.951 253665 DEBUG nova.storage.rbd_utils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] flattening vms/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 04:48:21 np0005532048 podman[401723]: 2025-11-22 09:48:21.175113981 +0000 UTC m=+4.023279459 container remove f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_easley, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:48:21 np0005532048 systemd[1]: libpod-conmon-f478061ad53ce8f3852dd89d33d61b7f29fa9ab3fe306073404820ab9756a951.scope: Deactivated successfully.
Nov 22 04:48:21 np0005532048 podman[401858]: 2025-11-22 09:48:21.394024396 +0000 UTC m=+0.026188318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:48:21 np0005532048 podman[401858]: 2025-11-22 09:48:21.541693023 +0000 UTC m=+0.173856925 container create e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:48:21 np0005532048 systemd[1]: Started libpod-conmon-e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577.scope.
Nov 22 04:48:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:48:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9798464308605702f1c84346ca15c2eacd12f467332aaf86357cab41986bf9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:48:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9798464308605702f1c84346ca15c2eacd12f467332aaf86357cab41986bf9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:48:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9798464308605702f1c84346ca15c2eacd12f467332aaf86357cab41986bf9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:48:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9798464308605702f1c84346ca15c2eacd12f467332aaf86357cab41986bf9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:48:22 np0005532048 nova_compute[253661]: 2025-11-22 09:48:22.242 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:48:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 267 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 47 op/s
Nov 22 04:48:22 np0005532048 podman[401858]: 2025-11-22 09:48:22.293696592 +0000 UTC m=+0.925860514 container init e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_matsumoto, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:48:22 np0005532048 podman[401858]: 2025-11-22 09:48:22.301392312 +0000 UTC m=+0.933556204 container start e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 04:48:22 np0005532048 podman[401858]: 2025-11-22 09:48:22.560851648 +0000 UTC m=+1.193015550 container attach e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:48:22 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 22 04:48:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:48:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:48:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:48:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:48:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:48:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:48:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]: {
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:    "0": [
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:        {
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "devices": [
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "/dev/loop3"
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            ],
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "lv_name": "ceph_lv0",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "lv_size": "21470642176",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "name": "ceph_lv0",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "tags": {
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.cluster_name": "ceph",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.crush_device_class": "",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.encrypted": "0",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.osd_id": "0",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.type": "block",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.vdo": "0"
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            },
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "type": "block",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "vg_name": "ceph_vg0"
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:        }
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:    ],
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:    "1": [
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:        {
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "devices": [
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "/dev/loop4"
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            ],
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "lv_name": "ceph_lv1",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "lv_size": "21470642176",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "name": "ceph_lv1",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "tags": {
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.cluster_name": "ceph",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.crush_device_class": "",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.encrypted": "0",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.osd_id": "1",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.type": "block",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.vdo": "0"
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            },
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "type": "block",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "vg_name": "ceph_vg1"
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:        }
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:    ],
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:    "2": [
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:        {
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "devices": [
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "/dev/loop5"
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            ],
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "lv_name": "ceph_lv2",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "lv_size": "21470642176",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "name": "ceph_lv2",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "tags": {
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.cluster_name": "ceph",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.crush_device_class": "",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.encrypted": "0",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.osd_id": "2",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.type": "block",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:                "ceph.vdo": "0"
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            },
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "type": "block",
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:            "vg_name": "ceph_vg2"
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:        }
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]:    ]
Nov 22 04:48:23 np0005532048 admiring_matsumoto[401875]: }
Nov 22 04:48:23 np0005532048 systemd[1]: libpod-e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577.scope: Deactivated successfully.
Nov 22 04:48:23 np0005532048 podman[401858]: 2025-11-22 09:48:23.511021241 +0000 UTC m=+2.143185143 container died e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:48:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 280 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 72 op/s
Nov 22 04:48:24 np0005532048 nova_compute[253661]: 2025-11-22 09:48:24.701 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f9798464308605702f1c84346ca15c2eacd12f467332aaf86357cab41986bf9b-merged.mount: Deactivated successfully.
Nov 22 04:48:25 np0005532048 podman[401858]: 2025-11-22 09:48:25.702883365 +0000 UTC m=+4.335047267 container remove e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:48:25 np0005532048 systemd[1]: libpod-conmon-e326c9597e016203ad058c79486cdf9e3d7823d5e80a528b4163341946cf0577.scope: Deactivated successfully.
Nov 22 04:48:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 318 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.8 MiB/s wr, 71 op/s
Nov 22 04:48:26 np0005532048 podman[402037]: 2025-11-22 09:48:26.372616343 +0000 UTC m=+0.026737141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:48:26 np0005532048 podman[402037]: 2025-11-22 09:48:26.515783657 +0000 UTC m=+0.169904425 container create ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nash, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.579 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Image rbd:vms/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.582 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.583 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Ensure instance console log exists: /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.584 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.584 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.584 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.586 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Start _get_guest_xml network_info=[{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-22T09:47:14Z,direct_url=<?>,disk_format='raw',id=7427dc9c-0c7d-45bc-9904-89241d5b4e4d,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-140973884-shelved',owner='2a86e5c3f3c34f2285b7958147f6bbd3',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-22T09:47:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.592 253665 WARNING nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:48:26 np0005532048 systemd[1]: Started libpod-conmon-ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0.scope.
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.603 253665 DEBUG nova.virt.libvirt.host [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.604 253665 DEBUG nova.virt.libvirt.host [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.607 253665 DEBUG nova.virt.libvirt.host [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.608 253665 DEBUG nova.virt.libvirt.host [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.608 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.608 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-22T09:47:14Z,direct_url=<?>,disk_format='raw',id=7427dc9c-0c7d-45bc-9904-89241d5b4e4d,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-140973884-shelved',owner='2a86e5c3f3c34f2285b7958147f6bbd3',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-22T09:47:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.609 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.609 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.609 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.610 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.610 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.610 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.610 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.611 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.611 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.611 253665 DEBUG nova.virt.hardware [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.611 253665 DEBUG nova.objects.instance [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:48:26 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:48:26 np0005532048 nova_compute[253661]: 2025-11-22 09:48:26.633 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:48:26 np0005532048 podman[402037]: 2025-11-22 09:48:26.890436029 +0000 UTC m=+0.544556877 container init ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nash, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:48:26 np0005532048 podman[402037]: 2025-11-22 09:48:26.900716503 +0000 UTC m=+0.554837291 container start ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nash, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 04:48:26 np0005532048 nifty_nash[402053]: 167 167
Nov 22 04:48:26 np0005532048 systemd[1]: libpod-ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0.scope: Deactivated successfully.
Nov 22 04:48:26 np0005532048 conmon[402053]: conmon ed85f4140273c1be404b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0.scope/container/memory.events
Nov 22 04:48:26 np0005532048 podman[402037]: 2025-11-22 09:48:26.957648989 +0000 UTC m=+0.611769787 container attach ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nash, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 04:48:26 np0005532048 podman[402037]: 2025-11-22 09:48:26.958465918 +0000 UTC m=+0.612586686 container died ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:48:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:48:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2346501455' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.151 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.179 253665 DEBUG nova.storage.rbd_utils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.184 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.245 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:27 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4334e4ed212c0f16f05b9b8692a0d0d63380db2af75dc02e67829d1c996b9892-merged.mount: Deactivated successfully.
Nov 22 04:48:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:48:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3776956792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.690 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.692 253665 DEBUG nova.virt.libvirt.vif [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:46:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-140973884',display_name='tempest-TestShelveInstance-server-140973884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-140973884',id=143,image_ref='7427dc9c-0c7d-45bc-9904-89241d5b4e4d',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1840126280',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:46:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='2a86e5c3f3c34f2285b7958147f6bbd3',ramdisk_id='',reservation_id='r-4322pjah',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image
_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-463882348',owner_user_name='tempest-TestShelveInstance-463882348-project-member',shelved_at='2025-11-22T09:47:24.856148',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='7427dc9c-0c7d-45bc-9904-89241d5b4e4d'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:48:10Z,user_data=None,user_id='15f54ba9d7eb4efd9b760da5c85ec22e',uuid=91cfde9c-3aa6-4946-92d6-471c8f63eb2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.693 253665 DEBUG nova.network.os_vif_util [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converting VIF {"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.694 253665 DEBUG nova.network.os_vif_util [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.696 253665 DEBUG nova.objects.instance [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.710 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  <uuid>91cfde9c-3aa6-4946-92d6-471c8f63eb2f</uuid>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  <name>instance-0000008f</name>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestShelveInstance-server-140973884</nova:name>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:48:26</nova:creationTime>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:48:27 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:        <nova:user uuid="15f54ba9d7eb4efd9b760da5c85ec22e">tempest-TestShelveInstance-463882348-project-member</nova:user>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:        <nova:project uuid="2a86e5c3f3c34f2285b7958147f6bbd3">tempest-TestShelveInstance-463882348</nova:project>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="7427dc9c-0c7d-45bc-9904-89241d5b4e4d"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:        <nova:port uuid="88d574be-cb53-4693-a025-34a039ee625c">
Nov 22 04:48:27 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <entry name="serial">91cfde9c-3aa6-4946-92d6-471c8f63eb2f</entry>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <entry name="uuid">91cfde9c-3aa6-4946-92d6-471c8f63eb2f</entry>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk">
Nov 22 04:48:27 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:48:27 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config">
Nov 22 04:48:27 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:48:27 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:88:cb:74"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <target dev="tap88d574be-cb"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/console.log" append="off"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <input type="keyboard" bus="usb"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:48:27 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:48:27 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:48:27 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:48:27 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.711 253665 DEBUG nova.compute.manager [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Preparing to wait for external event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.712 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.712 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.713 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.713 253665 DEBUG nova.virt.libvirt.vif [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:46:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-140973884',display_name='tempest-TestShelveInstance-server-140973884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-140973884',id=143,image_ref='7427dc9c-0c7d-45bc-9904-89241d5b4e4d',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1840126280',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:46:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='2a86e5c3f3c34f2285b7958147f6bbd3',ramdisk_id='',reservation_id='r-4322pjah',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-463882348',owner_user_name='tempest-TestShelveInstance-463882348-project-member',shelved_at='2025-11-22T09:47:24.856148',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='7427dc9c-0c7d-45bc-9904-89241d5b4e4d'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:48:10Z,user_data=None,user_id='15f54ba9d7eb4efd9b760da5c85ec22e',uuid=91cfde9c-3aa6-4946-92d6-471c8f63eb2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.714 253665 DEBUG nova.network.os_vif_util [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converting VIF {"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.714 253665 DEBUG nova.network.os_vif_util [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.715 253665 DEBUG os_vif [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.715 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.716 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.716 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.719 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.719 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88d574be-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.720 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88d574be-cb, col_values=(('external_ids', {'iface-id': '88d574be-cb53-4693-a025-34a039ee625c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:cb:74', 'vm-uuid': '91cfde9c-3aa6-4946-92d6-471c8f63eb2f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.722 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:27 np0005532048 NetworkManager[48920]: <info>  [1763804907.7231] manager: (tap88d574be-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/637)
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.724 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.731 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.733 253665 INFO os_vif [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb')#033[00m
Nov 22 04:48:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:48:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:27Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1b:0f:53 10.100.0.8
Nov 22 04:48:27 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:27Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1b:0f:53 10.100.0.8
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.901 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.902 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.902 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] No VIF found with MAC fa:16:3e:88:cb:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.903 253665 INFO nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Using config drive#033[00m
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.926 253665 DEBUG nova.storage.rbd_utils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:48:27 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Nov 22 04:48:27 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:27.935899) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:48:27 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Nov 22 04:48:27 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804907935941, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1416, "num_deletes": 253, "total_data_size": 2146440, "memory_usage": 2187328, "flush_reason": "Manual Compaction"}
Nov 22 04:48:27 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Nov 22 04:48:27 np0005532048 nova_compute[253661]: 2025-11-22 09:48:27.952 253665 DEBUG nova.objects.instance [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:48:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:27.990 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:27.991 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:27.992 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:28 np0005532048 nova_compute[253661]: 2025-11-22 09:48:28.005 253665 DEBUG nova.objects.instance [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'keypairs' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804908122750, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1285403, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52426, "largest_seqno": 53841, "table_properties": {"data_size": 1280301, "index_size": 2370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13594, "raw_average_key_size": 21, "raw_value_size": 1269039, "raw_average_value_size": 1961, "num_data_blocks": 107, "num_entries": 647, "num_filter_entries": 647, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763804759, "oldest_key_time": 1763804759, "file_creation_time": 1763804907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 186897 microseconds, and 4576 cpu microseconds.
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:48:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 334 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.6 MiB/s wr, 93 op/s
Nov 22 04:48:28 np0005532048 nova_compute[253661]: 2025-11-22 09:48:28.289 253665 INFO nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Creating config drive at /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config#033[00m
Nov 22 04:48:28 np0005532048 nova_compute[253661]: 2025-11-22 09:48:28.294 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ervky0t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:48:28 np0005532048 podman[402037]: 2025-11-22 09:48:28.391675459 +0000 UTC m=+2.045796227 container remove ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_nash, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:48:28 np0005532048 nova_compute[253661]: 2025-11-22 09:48:28.437 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ervky0t" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:48:28 np0005532048 systemd[1]: libpod-conmon-ed85f4140273c1be404ba739321e62a8f1b4d266bc1432aa9d307fc572d408d0.scope: Deactivated successfully.
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.122793) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1285403 bytes OK
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.122815) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.615412) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.615462) EVENT_LOG_v1 {"time_micros": 1763804908615451, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.615489) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2140115, prev total WAL file size 2142652, number of live WAL files 2.
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.616878) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303034' seq:72057594037927935, type:22 .. '6D6772737461740032323537' seq:0, type:0; will stop at (end)
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1255KB)], [122(10107KB)]
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804908616955, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 11635617, "oldest_snapshot_seqno": -1}
Nov 22 04:48:28 np0005532048 podman[402169]: 2025-11-22 09:48:28.57919863 +0000 UTC m=+0.025419249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7476 keys, 9135298 bytes, temperature: kUnknown
Nov 22 04:48:28 np0005532048 podman[402169]: 2025-11-22 09:48:28.76387914 +0000 UTC m=+0.210099759 container create 9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804908763638, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 9135298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9087305, "index_size": 28169, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18757, "raw_key_size": 194807, "raw_average_key_size": 26, "raw_value_size": 8955844, "raw_average_value_size": 1197, "num_data_blocks": 1096, "num_entries": 7476, "num_filter_entries": 7476, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763804908, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:48:28 np0005532048 nova_compute[253661]: 2025-11-22 09:48:28.789 253665 DEBUG nova.storage.rbd_utils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] rbd image 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:48:28 np0005532048 nova_compute[253661]: 2025-11-22 09:48:28.797 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.764026) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 9135298 bytes
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.839502) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 79.3 rd, 62.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.9 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(16.2) write-amplify(7.1) OK, records in: 7935, records dropped: 459 output_compression: NoCompression
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.839545) EVENT_LOG_v1 {"time_micros": 1763804908839528, "job": 74, "event": "compaction_finished", "compaction_time_micros": 146784, "compaction_time_cpu_micros": 25687, "output_level": 6, "num_output_files": 1, "total_output_size": 9135298, "num_input_records": 7935, "num_output_records": 7476, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804908840044, "job": 74, "event": "table_file_deletion", "file_number": 124}
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804908842301, "job": 74, "event": "table_file_deletion", "file_number": 122}
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.616430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.842461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.842470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.842472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.842474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:48:28 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:48:28.842477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:48:28 np0005532048 systemd[1]: Started libpod-conmon-9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0.scope.
Nov 22 04:48:28 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:48:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab490d84df5c007d07ea081a4c82fa976b421815df7b3679e949e225e65ac928/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:48:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab490d84df5c007d07ea081a4c82fa976b421815df7b3679e949e225e65ac928/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:48:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab490d84df5c007d07ea081a4c82fa976b421815df7b3679e949e225e65ac928/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:48:28 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab490d84df5c007d07ea081a4c82fa976b421815df7b3679e949e225e65ac928/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:48:29 np0005532048 podman[402169]: 2025-11-22 09:48:29.214835875 +0000 UTC m=+0.661056494 container init 9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:48:29 np0005532048 podman[402169]: 2025-11-22 09:48:29.223888758 +0000 UTC m=+0.670109397 container start 9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:48:29 np0005532048 podman[402169]: 2025-11-22 09:48:29.478301211 +0000 UTC m=+0.924521840 container attach 9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 04:48:29 np0005532048 nova_compute[253661]: 2025-11-22 09:48:29.602 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "73f1da2d-d075-455d-94dd-f10146df7d30" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:29 np0005532048 nova_compute[253661]: 2025-11-22 09:48:29.603 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:29 np0005532048 nova_compute[253661]: 2025-11-22 09:48:29.617 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:48:29 np0005532048 nova_compute[253661]: 2025-11-22 09:48:29.733 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:29 np0005532048 nova_compute[253661]: 2025-11-22 09:48:29.733 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:29 np0005532048 nova_compute[253661]: 2025-11-22 09:48:29.745 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:48:29 np0005532048 nova_compute[253661]: 2025-11-22 09:48:29.745 253665 INFO nova.compute.claims [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:48:29 np0005532048 nova_compute[253661]: 2025-11-22 09:48:29.914 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:48:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 345 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.4 MiB/s wr, 100 op/s
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]: {
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:        "osd_id": 1,
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:        "type": "bluestore"
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:    },
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:        "osd_id": 0,
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:        "type": "bluestore"
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:    },
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:        "osd_id": 2,
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:        "type": "bluestore"
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]:    }
Nov 22 04:48:30 np0005532048 eloquent_ride[402213]: }
Nov 22 04:48:30 np0005532048 systemd[1]: libpod-9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0.scope: Deactivated successfully.
Nov 22 04:48:30 np0005532048 systemd[1]: libpod-9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0.scope: Consumed 1.054s CPU time.
Nov 22 04:48:30 np0005532048 podman[402169]: 2025-11-22 09:48:30.299629732 +0000 UTC m=+1.745850331 container died 9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 04:48:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:48:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/588911086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:48:30 np0005532048 nova_compute[253661]: 2025-11-22 09:48:30.359 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:48:30 np0005532048 nova_compute[253661]: 2025-11-22 09:48:30.366 253665 DEBUG nova.compute.provider_tree [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:48:30 np0005532048 nova_compute[253661]: 2025-11-22 09:48:30.379 253665 DEBUG nova.scheduler.client.report [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:48:30 np0005532048 nova_compute[253661]: 2025-11-22 09:48:30.402 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:30 np0005532048 nova_compute[253661]: 2025-11-22 09:48:30.403 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:48:30 np0005532048 nova_compute[253661]: 2025-11-22 09:48:30.443 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:48:30 np0005532048 nova_compute[253661]: 2025-11-22 09:48:30.444 253665 DEBUG nova.network.neutron [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:48:30 np0005532048 nova_compute[253661]: 2025-11-22 09:48:30.473 253665 INFO nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:48:30 np0005532048 nova_compute[253661]: 2025-11-22 09:48:30.495 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:48:30 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ab490d84df5c007d07ea081a4c82fa976b421815df7b3679e949e225e65ac928-merged.mount: Deactivated successfully.
Nov 22 04:48:30 np0005532048 nova_compute[253661]: 2025-11-22 09:48:30.603 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:48:30 np0005532048 nova_compute[253661]: 2025-11-22 09:48:30.605 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:48:30 np0005532048 nova_compute[253661]: 2025-11-22 09:48:30.606 253665 INFO nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Creating image(s)#033[00m
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.170 253665 DEBUG nova.storage.rbd_utils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] rbd image 73f1da2d-d075-455d-94dd-f10146df7d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.198 253665 DEBUG nova.storage.rbd_utils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] rbd image 73f1da2d-d075-455d-94dd-f10146df7d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.225 253665 DEBUG nova.storage.rbd_utils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] rbd image 73f1da2d-d075-455d-94dd-f10146df7d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.230 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.271 253665 DEBUG nova.policy [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aff683c22adc499393a2037bae323af6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce09df5a051f4f24bbb216fbe5785dcb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.308 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.309 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.309 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.310 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.331 253665 DEBUG nova.storage.rbd_utils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] rbd image 73f1da2d-d075-455d-94dd-f10146df7d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.335 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 73f1da2d-d075-455d-94dd-f10146df7d30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:48:31 np0005532048 podman[402169]: 2025-11-22 09:48:31.349589959 +0000 UTC m=+2.795810558 container remove 9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 04:48:31 np0005532048 systemd[1]: libpod-conmon-9245fa7ef8b90b8e01cd4dc99fb02875b56f5d1b0413279a5710231a1e5339d0.scope: Deactivated successfully.
Nov 22 04:48:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:48:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:48:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:48:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.492 253665 DEBUG oslo_concurrency.processutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config 91cfde9c-3aa6-4946-92d6-471c8f63eb2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.493 253665 INFO nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Deleting local config drive /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f/disk.config because it was imported into RBD.#033[00m
Nov 22 04:48:31 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 497f58a1-1d12-4d32-ac68-d00a404ee57a does not exist
Nov 22 04:48:31 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 35dfff9a-c14e-4e0b-b2cb-8d0414c47bed does not exist
Nov 22 04:48:31 np0005532048 kernel: tap88d574be-cb: entered promiscuous mode
Nov 22 04:48:31 np0005532048 NetworkManager[48920]: <info>  [1763804911.5784] manager: (tap88d574be-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/638)
Nov 22 04:48:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:31Z|01559|binding|INFO|Claiming lport 88d574be-cb53-4693-a025-34a039ee625c for this chassis.
Nov 22 04:48:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:31Z|01560|binding|INFO|88d574be-cb53-4693-a025-34a039ee625c: Claiming fa:16:3e:88:cb:74 10.100.0.14
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.584 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.591 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:cb:74 10.100.0.14'], port_security=['fa:16:3e:88:cb:74 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '91cfde9c-3aa6-4946-92d6-471c8f63eb2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-449be411-464c-4d69-be15-6372ecacd778', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a86e5c3f3c34f2285b7958147f6bbd3', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'da881b1b-2aad-4a91-9422-a708cc3c5d34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a67d762-85ed-414e-ab70-eac2ab54b109, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=88d574be-cb53-4693-a025-34a039ee625c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.593 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 88d574be-cb53-4693-a025-34a039ee625c in datapath 449be411-464c-4d69-be15-6372ecacd778 bound to our chassis#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.595 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 449be411-464c-4d69-be15-6372ecacd778#033[00m
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.607 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:31Z|01561|binding|INFO|Setting lport 88d574be-cb53-4693-a025-34a039ee625c ovn-installed in OVS
Nov 22 04:48:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:31Z|01562|binding|INFO|Setting lport 88d574be-cb53-4693-a025-34a039ee625c up in Southbound
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.608 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e9cc995e-11c6-41ff-ae56-ed2d1568274d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.609 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap449be411-41 in ovnmeta-449be411-464c-4d69-be15-6372ecacd778 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.612 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap449be411-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.612 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[edcba10a-7ea2-4513-b59f-9cba98817aad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.614 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba23774-6705-4d6d-85be-8a1d98eb3b7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.627 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[7919bd15-ca38-4266-8a8c-8d830ccc4cc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:31 np0005532048 systemd-udevd[402440]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:48:31 np0005532048 systemd-machined[215941]: New machine qemu-176-instance-0000008f.
Nov 22 04:48:31 np0005532048 systemd[1]: Started Virtual Machine qemu-176-instance-0000008f.
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.655 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b65cf79f-40a4-4b4e-907b-433431ec83aa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:31 np0005532048 NetworkManager[48920]: <info>  [1763804911.6617] device (tap88d574be-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:48:31 np0005532048 NetworkManager[48920]: <info>  [1763804911.6628] device (tap88d574be-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.690 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb4af6e-fdf8-4589-900d-142a3e7d949e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.697 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc7c1a3-505c-484b-ad7b-beb795cb0ccc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:31 np0005532048 NetworkManager[48920]: <info>  [1763804911.6990] manager: (tap449be411-40): new Veth device (/org/freedesktop/NetworkManager/Devices/639)
Nov 22 04:48:31 np0005532048 systemd-udevd[402449]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.731 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[61b46d8e-cc58-4507-aca1-d8fe9613a972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.735 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[14d5fd3c-59d1-4b0b-ab59-bb121760b41b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:31 np0005532048 NetworkManager[48920]: <info>  [1763804911.7632] device (tap449be411-40): carrier: link connected
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.773 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3306891a-8766-4442-88dc-bf9122fe85a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.796 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6bee20c3-8f31-4f75-bd6b-063c6f7f4a65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap449be411-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:5a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776250, 'reachable_time': 36535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402476, 'error': None, 'target': 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.820 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8c49eb6e-c087-413c-b489-5f48952960ab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:5a86'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 776250, 'tstamp': 776250}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402477, 'error': None, 'target': 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.835 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 73f1da2d-d075-455d-94dd-f10146df7d30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.846 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3afd4060-c728-4dd4-806a-076316b6080d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap449be411-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:5a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776250, 'reachable_time': 36535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 402478, 'error': None, 'target': 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.886 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0d147129-9418-464b-9bfb-f1f1bc171b6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.918 253665 DEBUG nova.storage.rbd_utils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] resizing rbd image 73f1da2d-d075-455d-94dd-f10146df7d30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.960 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0ad9dd-a07b-41a3-965b-4ae45be61ae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.964 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap449be411-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.964 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.966 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap449be411-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:48:31 np0005532048 NetworkManager[48920]: <info>  [1763804911.9694] manager: (tap449be411-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/640)
Nov 22 04:48:31 np0005532048 kernel: tap449be411-40: entered promiscuous mode
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.968 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.971 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap449be411-40, col_values=(('external_ids', {'iface-id': '02bcb711-03d1-4bf4-b274-247c09a1af89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:48:31 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:31Z|01563|binding|INFO|Releasing lport 02bcb711-03d1-4bf4-b274-247c09a1af89 from this chassis (sb_readonly=0)
Nov 22 04:48:31 np0005532048 nova_compute[253661]: 2025-11-22 09:48:31.988 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.993 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/449be411-464c-4d69-be15-6372ecacd778.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/449be411-464c-4d69-be15-6372ecacd778.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.994 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[97df544c-c1b0-4d1f-a4d4-1b55aa051fbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.996 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-449be411-464c-4d69-be15-6372ecacd778
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/449be411-464c-4d69-be15-6372ecacd778.pid.haproxy
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 449be411-464c-4d69-be15-6372ecacd778
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:48:31 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:31.997 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'env', 'PROCESS_TAG=haproxy-449be411-464c-4d69-be15-6372ecacd778', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/449be411-464c-4d69-be15-6372ecacd778.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.030 253665 DEBUG nova.objects.instance [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lazy-loading 'migration_context' on Instance uuid 73f1da2d-d075-455d-94dd-f10146df7d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.069 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.070 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Ensure instance console log exists: /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.070 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.071 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.071 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.247 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.3 MiB/s wr, 113 op/s
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.289 253665 DEBUG nova.compute.manager [req-549c25ba-4d35-437c-9a23-02102e914220 req-039bb5b0-fb42-430d-90d5-7938d80d0ef1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.290 253665 DEBUG oslo_concurrency.lockutils [req-549c25ba-4d35-437c-9a23-02102e914220 req-039bb5b0-fb42-430d-90d5-7938d80d0ef1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.290 253665 DEBUG oslo_concurrency.lockutils [req-549c25ba-4d35-437c-9a23-02102e914220 req-039bb5b0-fb42-430d-90d5-7938d80d0ef1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.290 253665 DEBUG oslo_concurrency.lockutils [req-549c25ba-4d35-437c-9a23-02102e914220 req-039bb5b0-fb42-430d-90d5-7938d80d0ef1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.291 253665 DEBUG nova.compute.manager [req-549c25ba-4d35-437c-9a23-02102e914220 req-039bb5b0-fb42-430d-90d5-7938d80d0ef1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Processing event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:48:32 np0005532048 podman[402582]: 2025-11-22 09:48:32.43213937 +0000 UTC m=+0.064420692 container create b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:48:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:48:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:48:32 np0005532048 systemd[1]: Started libpod-conmon-b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd.scope.
Nov 22 04:48:32 np0005532048 podman[402582]: 2025-11-22 09:48:32.399577455 +0000 UTC m=+0.031858797 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:48:32 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:48:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf74c07a4a6889725f3f85eec25642fe7fcac08ba09ce686a69d19763d2ff09/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:48:32 np0005532048 podman[402582]: 2025-11-22 09:48:32.546448342 +0000 UTC m=+0.178729684 container init b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:48:32 np0005532048 podman[402582]: 2025-11-22 09:48:32.553290311 +0000 UTC m=+0.185571633 container start b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 04:48:32 np0005532048 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[402612]: [NOTICE]   (402638) : New worker (402644) forked
Nov 22 04:48:32 np0005532048 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[402612]: [NOTICE]   (402638) : Loading success.
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.652 253665 DEBUG nova.compute.manager [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.654 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804912.6522875, 91cfde9c-3aa6-4946-92d6-471c8f63eb2f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.654 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] VM Started (Lifecycle Event)#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.658 253665 DEBUG nova.virt.libvirt.driver [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.662 253665 INFO nova.virt.libvirt.driver [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance spawned successfully.#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.673 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.678 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.696 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.696 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804912.653604, 91cfde9c-3aa6-4946-92d6-471c8f63eb2f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.696 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.711 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.714 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804912.6583905, 91cfde9c-3aa6-4946-92d6-471c8f63eb2f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.714 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.722 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.729 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.732 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.761 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:48:32 np0005532048 nova_compute[253661]: 2025-11-22 09:48:32.811 253665 DEBUG nova.network.neutron [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Successfully created port: 8d1f2012-aa57-4dfc-a744-a852d1353ad2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:48:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:48:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Nov 22 04:48:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Nov 22 04:48:33 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Nov 22 04:48:34 np0005532048 nova_compute[253661]: 2025-11-22 09:48:34.144 253665 DEBUG nova.compute.manager [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:48:34 np0005532048 nova_compute[253661]: 2025-11-22 09:48:34.230 253665 DEBUG oslo_concurrency.lockutils [None req-61b0d173-fbf2-455b-93c8-9c36da076d91 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 23.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.0 MiB/s wr, 124 op/s
Nov 22 04:48:34 np0005532048 nova_compute[253661]: 2025-11-22 09:48:34.352 253665 DEBUG nova.compute.manager [req-380fcceb-75b1-4e84-936a-0597c2a30c79 req-8f53a5a3-1717-493c-bf57-accd199b911e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:34 np0005532048 nova_compute[253661]: 2025-11-22 09:48:34.352 253665 DEBUG oslo_concurrency.lockutils [req-380fcceb-75b1-4e84-936a-0597c2a30c79 req-8f53a5a3-1717-493c-bf57-accd199b911e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:34 np0005532048 nova_compute[253661]: 2025-11-22 09:48:34.353 253665 DEBUG oslo_concurrency.lockutils [req-380fcceb-75b1-4e84-936a-0597c2a30c79 req-8f53a5a3-1717-493c-bf57-accd199b911e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:34 np0005532048 nova_compute[253661]: 2025-11-22 09:48:34.353 253665 DEBUG oslo_concurrency.lockutils [req-380fcceb-75b1-4e84-936a-0597c2a30c79 req-8f53a5a3-1717-493c-bf57-accd199b911e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:34 np0005532048 nova_compute[253661]: 2025-11-22 09:48:34.353 253665 DEBUG nova.compute.manager [req-380fcceb-75b1-4e84-936a-0597c2a30c79 req-8f53a5a3-1717-493c-bf57-accd199b911e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] No waiting events found dispatching network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:48:34 np0005532048 nova_compute[253661]: 2025-11-22 09:48:34.353 253665 WARNING nova.compute.manager [req-380fcceb-75b1-4e84-936a-0597c2a30c79 req-8f53a5a3-1717-493c-bf57-accd199b911e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received unexpected event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c for instance with vm_state active and task_state None.#033[00m
Nov 22 04:48:35 np0005532048 nova_compute[253661]: 2025-11-22 09:48:35.016 253665 DEBUG nova.network.neutron [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Successfully updated port: 8d1f2012-aa57-4dfc-a744-a852d1353ad2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:48:35 np0005532048 nova_compute[253661]: 2025-11-22 09:48:35.029 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:48:35 np0005532048 nova_compute[253661]: 2025-11-22 09:48:35.029 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquired lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:48:35 np0005532048 nova_compute[253661]: 2025-11-22 09:48:35.029 253665 DEBUG nova.network.neutron [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:48:35 np0005532048 nova_compute[253661]: 2025-11-22 09:48:35.185 253665 DEBUG nova.network.neutron [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:48:35 np0005532048 podman[402655]: 2025-11-22 09:48:35.391597787 +0000 UTC m=+0.068042460 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:48:35 np0005532048 podman[402654]: 2025-11-22 09:48:35.41151163 +0000 UTC m=+0.090738102 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 22 04:48:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.5 MiB/s wr, 196 op/s
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.339 253665 DEBUG nova.network.neutron [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Updating instance_info_cache with network_info: [{"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.366 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Releasing lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.367 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Instance network_info: |[{"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.369 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Start _get_guest_xml network_info=[{"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.373 253665 WARNING nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.380 253665 DEBUG nova.virt.libvirt.host [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.380 253665 DEBUG nova.virt.libvirt.host [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.384 253665 DEBUG nova.virt.libvirt.host [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.384 253665 DEBUG nova.virt.libvirt.host [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.384 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.385 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.385 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.385 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.386 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.386 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.386 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.386 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.387 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.387 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.387 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.387 253665 DEBUG nova.virt.hardware [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.390 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.446 253665 DEBUG nova.compute.manager [req-d90f91f7-c379-43be-b68e-6dc8561b6a20 req-a32785aa-860e-44a9-9f5e-812e6691374a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received event network-changed-8d1f2012-aa57-4dfc-a744-a852d1353ad2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.446 253665 DEBUG nova.compute.manager [req-d90f91f7-c379-43be-b68e-6dc8561b6a20 req-a32785aa-860e-44a9-9f5e-812e6691374a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Refreshing instance network info cache due to event network-changed-8d1f2012-aa57-4dfc-a744-a852d1353ad2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.447 253665 DEBUG oslo_concurrency.lockutils [req-d90f91f7-c379-43be-b68e-6dc8561b6a20 req-a32785aa-860e-44a9-9f5e-812e6691374a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.447 253665 DEBUG oslo_concurrency.lockutils [req-d90f91f7-c379-43be-b68e-6dc8561b6a20 req-a32785aa-860e-44a9-9f5e-812e6691374a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.447 253665 DEBUG nova.network.neutron [req-d90f91f7-c379-43be-b68e-6dc8561b6a20 req-a32785aa-860e-44a9-9f5e-812e6691374a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Refreshing network info cache for port 8d1f2012-aa57-4dfc-a744-a852d1353ad2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:48:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:48:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/604616291' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.853 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.879 253665 DEBUG nova.storage.rbd_utils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] rbd image 73f1da2d-d075-455d-94dd-f10146df7d30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:48:36 np0005532048 nova_compute[253661]: 2025-11-22 09:48:36.885 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.249 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:48:37 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1060518713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.354 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.356 253665 DEBUG nova.virt.libvirt.vif [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:48:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-762381395',display_name='tempest-TestServerBasicOps-server-762381395',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-762381395',id=145,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCtTbF8bJfFddW96zTLdAkxE2iVzgX9zcT4Pj4BnA20Jji6o4SOv+z2CVEObDH8w0qoNYti5+X9zzKmkIowUY67LzvbSFwG+M1TtD6ysNGURVyIwLTyMSUq/al9LkPsMvg==',key_name='tempest-TestServerBasicOps-443879583',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce09df5a051f4f24bbb216fbe5785dcb',ramdisk_id='',reservation_id='r-b7997295',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1909013265',owner_user_name='tempest-TestServerBasicOps-1909013265-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:48:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aff683c22adc499393a2037bae323af6',uuid=73f1da2d-d075-455d-94dd-f10146df7d30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.357 253665 DEBUG nova.network.os_vif_util [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Converting VIF {"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.358 253665 DEBUG nova.network.os_vif_util [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:7c:5c,bridge_name='br-int',has_traffic_filtering=True,id=8d1f2012-aa57-4dfc-a744-a852d1353ad2,network=Network(473e817e-09da-452b-aec0-d46546489b36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d1f2012-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.359 253665 DEBUG nova.objects.instance [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lazy-loading 'pci_devices' on Instance uuid 73f1da2d-d075-455d-94dd-f10146df7d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.375 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  <uuid>73f1da2d-d075-455d-94dd-f10146df7d30</uuid>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  <name>instance-00000091</name>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestServerBasicOps-server-762381395</nova:name>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:48:36</nova:creationTime>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:48:37 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:        <nova:user uuid="aff683c22adc499393a2037bae323af6">tempest-TestServerBasicOps-1909013265-project-member</nova:user>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:        <nova:project uuid="ce09df5a051f4f24bbb216fbe5785dcb">tempest-TestServerBasicOps-1909013265</nova:project>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:        <nova:port uuid="8d1f2012-aa57-4dfc-a744-a852d1353ad2">
Nov 22 04:48:37 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <entry name="serial">73f1da2d-d075-455d-94dd-f10146df7d30</entry>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <entry name="uuid">73f1da2d-d075-455d-94dd-f10146df7d30</entry>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/73f1da2d-d075-455d-94dd-f10146df7d30_disk">
Nov 22 04:48:37 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:48:37 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/73f1da2d-d075-455d-94dd-f10146df7d30_disk.config">
Nov 22 04:48:37 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:48:37 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:43:7c:5c"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <target dev="tap8d1f2012-aa"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30/console.log" append="off"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:48:37 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:48:37 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:48:37 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:48:37 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.376 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Preparing to wait for external event network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.376 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.376 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.377 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.378 253665 DEBUG nova.virt.libvirt.vif [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:48:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-762381395',display_name='tempest-TestServerBasicOps-server-762381395',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-762381395',id=145,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCtTbF8bJfFddW96zTLdAkxE2iVzgX9zcT4Pj4BnA20Jji6o4SOv+z2CVEObDH8w0qoNYti5+X9zzKmkIowUY67LzvbSFwG+M1TtD6ysNGURVyIwLTyMSUq/al9LkPsMvg==',key_name='tempest-TestServerBasicOps-443879583',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce09df5a051f4f24bbb216fbe5785dcb',ramdisk_id='',reservation_id='r-b7997295',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1909013265',owner_user_name='tempest-TestServerBasicOps-1909013265-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:48:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aff683c22adc499393a2037bae323af6',uuid=73f1da2d-d075-455d-94dd-f10146df7d30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.378 253665 DEBUG nova.network.os_vif_util [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Converting VIF {"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.379 253665 DEBUG nova.network.os_vif_util [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:7c:5c,bridge_name='br-int',has_traffic_filtering=True,id=8d1f2012-aa57-4dfc-a744-a852d1353ad2,network=Network(473e817e-09da-452b-aec0-d46546489b36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d1f2012-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.379 253665 DEBUG os_vif [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:7c:5c,bridge_name='br-int',has_traffic_filtering=True,id=8d1f2012-aa57-4dfc-a744-a852d1353ad2,network=Network(473e817e-09da-452b-aec0-d46546489b36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d1f2012-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.380 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.380 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.384 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.389 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d1f2012-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.390 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d1f2012-aa, col_values=(('external_ids', {'iface-id': '8d1f2012-aa57-4dfc-a744-a852d1353ad2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:7c:5c', 'vm-uuid': '73f1da2d-d075-455d-94dd-f10146df7d30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.391 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:48:37 np0005532048 NetworkManager[48920]: <info>  [1763804917.3927] manager: (tap8d1f2012-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.393 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.399 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.400 253665 INFO os_vif [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:7c:5c,bridge_name='br-int',has_traffic_filtering=True,id=8d1f2012-aa57-4dfc-a744-a852d1353ad2,network=Network(473e817e-09da-452b-aec0-d46546489b36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d1f2012-aa')
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.458 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.459 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.460 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] No VIF found with MAC fa:16:3e:43:7c:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.460 253665 INFO nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Using config drive
Nov 22 04:48:37 np0005532048 nova_compute[253661]: 2025-11-22 09:48:37.484 253665 DEBUG nova.storage.rbd_utils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] rbd image 73f1da2d-d075-455d-94dd-f10146df7d30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:48:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:48:38 np0005532048 nova_compute[253661]: 2025-11-22 09:48:38.081 253665 INFO nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Creating config drive at /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30/disk.config
Nov 22 04:48:38 np0005532048 nova_compute[253661]: 2025-11-22 09:48:38.087 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp50wf67b2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:48:38 np0005532048 nova_compute[253661]: 2025-11-22 09:48:38.139 253665 DEBUG nova.network.neutron [req-d90f91f7-c379-43be-b68e-6dc8561b6a20 req-a32785aa-860e-44a9-9f5e-812e6691374a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Updated VIF entry in instance network info cache for port 8d1f2012-aa57-4dfc-a744-a852d1353ad2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:48:38 np0005532048 nova_compute[253661]: 2025-11-22 09:48:38.140 253665 DEBUG nova.network.neutron [req-d90f91f7-c379-43be-b68e-6dc8561b6a20 req-a32785aa-860e-44a9-9f5e-812e6691374a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Updating instance_info_cache with network_info: [{"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:48:38 np0005532048 nova_compute[253661]: 2025-11-22 09:48:38.155 253665 DEBUG oslo_concurrency.lockutils [req-d90f91f7-c379-43be-b68e-6dc8561b6a20 req-a32785aa-860e-44a9-9f5e-812e6691374a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:48:38 np0005532048 nova_compute[253661]: 2025-11-22 09:48:38.247 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp50wf67b2" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:48:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 187 op/s
Nov 22 04:48:38 np0005532048 nova_compute[253661]: 2025-11-22 09:48:38.274 253665 DEBUG nova.storage.rbd_utils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] rbd image 73f1da2d-d075-455d-94dd-f10146df7d30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:48:38 np0005532048 nova_compute[253661]: 2025-11-22 09:48:38.279 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30/disk.config 73f1da2d-d075-455d-94dd-f10146df7d30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:48:38 np0005532048 nova_compute[253661]: 2025-11-22 09:48:38.464 253665 DEBUG oslo_concurrency.processutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30/disk.config 73f1da2d-d075-455d-94dd-f10146df7d30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:48:38 np0005532048 nova_compute[253661]: 2025-11-22 09:48:38.466 253665 INFO nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Deleting local config drive /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30/disk.config because it was imported into RBD.
Nov 22 04:48:38 np0005532048 kernel: tap8d1f2012-aa: entered promiscuous mode
Nov 22 04:48:38 np0005532048 NetworkManager[48920]: <info>  [1763804918.5516] manager: (tap8d1f2012-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/642)
Nov 22 04:48:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:38Z|01564|binding|INFO|Claiming lport 8d1f2012-aa57-4dfc-a744-a852d1353ad2 for this chassis.
Nov 22 04:48:38 np0005532048 nova_compute[253661]: 2025-11-22 09:48:38.560 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:48:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:38Z|01565|binding|INFO|8d1f2012-aa57-4dfc-a744-a852d1353ad2: Claiming fa:16:3e:43:7c:5c 10.100.0.9
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.572 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:7c:5c 10.100.0.9'], port_security=['fa:16:3e:43:7c:5c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '73f1da2d-d075-455d-94dd-f10146df7d30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473e817e-09da-452b-aec0-d46546489b36', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce09df5a051f4f24bbb216fbe5785dcb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '134111d9-6c1c-466d-8bad-cdc68aa178a5 f24a4b05-cdda-49a7-af44-458e15bd9a13', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf052f1b-89e1-46ee-8169-e44075a76fcb, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8d1f2012-aa57-4dfc-a744-a852d1353ad2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.574 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8d1f2012-aa57-4dfc-a744-a852d1353ad2 in datapath 473e817e-09da-452b-aec0-d46546489b36 bound to our chassis
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.577 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 473e817e-09da-452b-aec0-d46546489b36
Nov 22 04:48:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:38Z|01566|binding|INFO|Setting lport 8d1f2012-aa57-4dfc-a744-a852d1353ad2 ovn-installed in OVS
Nov 22 04:48:38 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:38Z|01567|binding|INFO|Setting lport 8d1f2012-aa57-4dfc-a744-a852d1353ad2 up in Southbound
Nov 22 04:48:38 np0005532048 nova_compute[253661]: 2025-11-22 09:48:38.587 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.592 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[07dee0ab-6220-4cef-b22b-d1dc661c857a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.593 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap473e817e-01 in ovnmeta-473e817e-09da-452b-aec0-d46546489b36 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.596 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap473e817e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.596 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[29b9d308-55a1-414a-ae88-e50af46d117d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.597 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ac026fae-3bb6-445a-9ae5-703d0264376a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:48:38 np0005532048 systemd-udevd[402825]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:48:38 np0005532048 systemd-machined[215941]: New machine qemu-177-instance-00000091.
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.617 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a49010-2a7a-47d7-8b71-f7df8519bbca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:48:38 np0005532048 NetworkManager[48920]: <info>  [1763804918.6242] device (tap8d1f2012-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:48:38 np0005532048 NetworkManager[48920]: <info>  [1763804918.6252] device (tap8d1f2012-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:48:38 np0005532048 systemd[1]: Started Virtual Machine qemu-177-instance-00000091.
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.648 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ab296e-d8c9-47f5-b0d0-b796e7809440]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.702 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[8f4cba8b-52b1-4a62-b58d-a9669da4c4cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:48:38 np0005532048 systemd-udevd[402831]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:48:38 np0005532048 NetworkManager[48920]: <info>  [1763804918.7092] manager: (tap473e817e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/643)
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.708 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a855f93b-1bc7-48e3-ac14-9489db07e916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.759 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[ce974b5f-8d24-4629-a172-43bba36b99d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.763 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0b92f34a-48eb-42f5-a4af-5ecbc6a69777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:48:38 np0005532048 NetworkManager[48920]: <info>  [1763804918.8003] device (tap473e817e-00): carrier: link connected
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.812 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[d4afd9e3-2376-4510-933a-be1366354bd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.836 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a505fcf3-448f-41c6-b722-687904348cdd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473e817e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:74:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776953, 'reachable_time': 43715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402860, 'error': None, 'target': 'ovnmeta-473e817e-09da-452b-aec0-d46546489b36', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:38 np0005532048 nova_compute[253661]: 2025-11-22 09:48:38.848 253665 DEBUG nova.compute.manager [req-efb9426e-b1d9-4caa-a5f6-b610beb0c1c1 req-9bd7317c-1279-447f-9fcf-8bfd64f5761a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received event network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:48:38 np0005532048 nova_compute[253661]: 2025-11-22 09:48:38.849 253665 DEBUG oslo_concurrency.lockutils [req-efb9426e-b1d9-4caa-a5f6-b610beb0c1c1 req-9bd7317c-1279-447f-9fcf-8bfd64f5761a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:48:38 np0005532048 nova_compute[253661]: 2025-11-22 09:48:38.849 253665 DEBUG oslo_concurrency.lockutils [req-efb9426e-b1d9-4caa-a5f6-b610beb0c1c1 req-9bd7317c-1279-447f-9fcf-8bfd64f5761a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:48:38 np0005532048 nova_compute[253661]: 2025-11-22 09:48:38.850 253665 DEBUG oslo_concurrency.lockutils [req-efb9426e-b1d9-4caa-a5f6-b610beb0c1c1 req-9bd7317c-1279-447f-9fcf-8bfd64f5761a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:48:38 np0005532048 nova_compute[253661]: 2025-11-22 09:48:38.850 253665 DEBUG nova.compute.manager [req-efb9426e-b1d9-4caa-a5f6-b610beb0c1c1 req-9bd7317c-1279-447f-9fcf-8bfd64f5761a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Processing event network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.857 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[108b0b4b-edcc-46df-af7a-311038647983]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6a:74ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 776953, 'tstamp': 776953}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402861, 'error': None, 'target': 'ovnmeta-473e817e-09da-452b-aec0-d46546489b36', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.887 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[813521c5-8ad8-4216-8f05-41c05c17fd53]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap473e817e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:74:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776953, 'reachable_time': 43715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 402862, 'error': None, 'target': 'ovnmeta-473e817e-09da-452b-aec0-d46546489b36', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:38 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:38.933 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[03053e16-b21f-45d7-bd13-2554b48c8126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.015 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[04c39bc0-f803-4c05-8320-15a2bc7a452c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.017 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473e817e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.017 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.017 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap473e817e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:48:39 np0005532048 kernel: tap473e817e-00: entered promiscuous mode
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.019 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:39 np0005532048 NetworkManager[48920]: <info>  [1763804919.0205] manager: (tap473e817e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/644)
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.025 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.026 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap473e817e-00, col_values=(('external_ids', {'iface-id': 'fb48fac2-f19f-4ef4-a7bc-e07e49098585'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.027 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:39 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:39Z|01568|binding|INFO|Releasing lport fb48fac2-f19f-4ef4-a7bc-e07e49098585 from this chassis (sb_readonly=0)
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.045 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.050 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.052 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/473e817e-09da-452b-aec0-d46546489b36.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/473e817e-09da-452b-aec0-d46546489b36.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.053 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b0bc166c-4901-4721-9a8f-01ab034d8169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.053 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-473e817e-09da-452b-aec0-d46546489b36
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/473e817e-09da-452b-aec0-d46546489b36.pid.haproxy
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 473e817e-09da-452b-aec0-d46546489b36
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:48:39 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:39.054 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-473e817e-09da-452b-aec0-d46546489b36', 'env', 'PROCESS_TAG=haproxy-473e817e-09da-452b-aec0-d46546489b36', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/473e817e-09da-452b-aec0-d46546489b36.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.140 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804919.139901, 73f1da2d-d075-455d-94dd-f10146df7d30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.141 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] VM Started (Lifecycle Event)#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.143 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.148 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.152 253665 INFO nova.virt.libvirt.driver [-] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Instance spawned successfully.#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.152 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.180 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.195 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.201 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.202 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.202 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.203 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.203 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.204 253665 DEBUG nova.virt.libvirt.driver [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.233 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.233 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804919.1400502, 73f1da2d-d075-455d-94dd-f10146df7d30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.233 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.263 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.267 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804919.1471126, 73f1da2d-d075-455d-94dd-f10146df7d30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.267 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.273 253665 INFO nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Took 8.67 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.273 253665 DEBUG nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.282 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.286 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.317 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.334 253665 INFO nova.compute.manager [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Took 9.63 seconds to build instance.#033[00m
Nov 22 04:48:39 np0005532048 nova_compute[253661]: 2025-11-22 09:48:39.348 253665 DEBUG oslo_concurrency.lockutils [None req-5971c867-fdc0-4f65-861d-4785eb7f9b6f aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:39 np0005532048 podman[402936]: 2025-11-22 09:48:39.476857694 +0000 UTC m=+0.069120357 container create e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 04:48:39 np0005532048 systemd[1]: Started libpod-conmon-e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf.scope.
Nov 22 04:48:39 np0005532048 podman[402936]: 2025-11-22 09:48:39.444043105 +0000 UTC m=+0.036305888 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:48:39 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:48:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3b593cb210fa24aaae4778a6a8c263ac8ff16d9e064d9f8dea041fd87cb8a31/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:48:39 np0005532048 podman[402936]: 2025-11-22 09:48:39.574893685 +0000 UTC m=+0.167156378 container init e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:48:39 np0005532048 podman[402936]: 2025-11-22 09:48:39.588091982 +0000 UTC m=+0.180354645 container start e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 04:48:39 np0005532048 neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36[402951]: [NOTICE]   (402955) : New worker (402957) forked
Nov 22 04:48:39 np0005532048 neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36[402951]: [NOTICE]   (402955) : Loading success.
Nov 22 04:48:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.3 MiB/s wr, 172 op/s
Nov 22 04:48:40 np0005532048 nova_compute[253661]: 2025-11-22 09:48:40.940 253665 DEBUG nova.compute.manager [req-da456b51-888a-4b0a-a7af-9f957a0c04df req-acdc72ec-fd89-48bb-be28-b3f2e5be2033 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received event network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:40 np0005532048 nova_compute[253661]: 2025-11-22 09:48:40.941 253665 DEBUG oslo_concurrency.lockutils [req-da456b51-888a-4b0a-a7af-9f957a0c04df req-acdc72ec-fd89-48bb-be28-b3f2e5be2033 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:40 np0005532048 nova_compute[253661]: 2025-11-22 09:48:40.941 253665 DEBUG oslo_concurrency.lockutils [req-da456b51-888a-4b0a-a7af-9f957a0c04df req-acdc72ec-fd89-48bb-be28-b3f2e5be2033 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:40 np0005532048 nova_compute[253661]: 2025-11-22 09:48:40.941 253665 DEBUG oslo_concurrency.lockutils [req-da456b51-888a-4b0a-a7af-9f957a0c04df req-acdc72ec-fd89-48bb-be28-b3f2e5be2033 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:40 np0005532048 nova_compute[253661]: 2025-11-22 09:48:40.941 253665 DEBUG nova.compute.manager [req-da456b51-888a-4b0a-a7af-9f957a0c04df req-acdc72ec-fd89-48bb-be28-b3f2e5be2033 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] No waiting events found dispatching network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:48:40 np0005532048 nova_compute[253661]: 2025-11-22 09:48:40.942 253665 WARNING nova.compute.manager [req-da456b51-888a-4b0a-a7af-9f957a0c04df req-acdc72ec-fd89-48bb-be28-b3f2e5be2033 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received unexpected event network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.147 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.147 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.148 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.148 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.148 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.149 253665 INFO nova.compute.manager [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Terminating instance#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.150 253665 DEBUG nova.compute.manager [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:48:42 np0005532048 kernel: tap1a443391-10 (unregistering): left promiscuous mode
Nov 22 04:48:42 np0005532048 NetworkManager[48920]: <info>  [1763804922.2271] device (tap1a443391-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.243 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:42Z|01569|binding|INFO|Releasing lport 1a443391-105a-4568-ba24-7748b702e21d from this chassis (sb_readonly=0)
Nov 22 04:48:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:42Z|01570|binding|INFO|Setting lport 1a443391-105a-4568-ba24-7748b702e21d down in Southbound
Nov 22 04:48:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:42Z|01571|binding|INFO|Removing iface tap1a443391-10 ovn-installed in OVS
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.249 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 207 op/s
Nov 22 04:48:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.260 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:0f:53 10.100.0.8 2001:db8:0:1:f816:3eff:fe1b:f53 2001:db8::f816:3eff:fe1b:f53'], port_security=['fa:16:3e:1b:0f:53 10.100.0.8 2001:db8:0:1:f816:3eff:fe1b:f53 2001:db8::f816:3eff:fe1b:f53'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8:0:1:f816:3eff:fe1b:f53/64 2001:db8::f816:3eff:fe1b:f53/64', 'neutron:device_id': '6973b14c-b2af-4012-9d0c-1e86b6eb3a28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b8f1ae80-edda-4d40-9085-393558ac5aa1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eb6cfbf-9d17-4d61-b927-87a60dc61782, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=1a443391-105a-4568-ba24-7748b702e21d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:48:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.262 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 1a443391-105a-4568-ba24-7748b702e21d in datapath b6b9221a-729b-4988-afa8-72f95360d9ea unbound from our chassis#033[00m
Nov 22 04:48:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.266 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b6b9221a-729b-4988-afa8-72f95360d9ea#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.265 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:42 np0005532048 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000090.scope: Deactivated successfully.
Nov 22 04:48:42 np0005532048 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000090.scope: Consumed 15.808s CPU time.
Nov 22 04:48:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.291 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[746f0df0-966c-4fe5-bdd9-436eec434158]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:42 np0005532048 systemd-machined[215941]: Machine qemu-175-instance-00000090 terminated.
Nov 22 04:48:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.324 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4e404c0f-5986-43e2-acd0-1b44409969f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.330 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3a671407-1592-4dd0-a019-015bb6be0337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:42 np0005532048 podman[402966]: 2025-11-22 09:48:42.354147303 +0000 UTC m=+0.102430380 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 04:48:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.364 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[baaa86e0-96ea-4f55-9c75-ad58788e9597]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.387 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f03b1737-ade6-4ac7-9d87-7d4214df7d6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb6b9221a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:f0:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 441], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764983, 'reachable_time': 18873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403004, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.396 253665 INFO nova.virt.libvirt.driver [-] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Instance destroyed successfully.#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.396 253665 DEBUG nova.objects.instance [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 6973b14c-b2af-4012-9d0c-1e86b6eb3a28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:48:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.406 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[47aaaa71-2d29-4620-8e8d-db7d0fb10bb8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb6b9221a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 764994, 'tstamp': 764994}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403009, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb6b9221a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 764997, 'tstamp': 764997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403009, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.408 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6b9221a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.411 253665 DEBUG nova.virt.libvirt.vif [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:47:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-338385867',display_name='tempest-TestGettingAddress-server-338385867',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-338385867',id=144,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpILKTYWQ3kfrev/53VAY+pIDp4KWqBaIuz4XZlRuV7cYP/3tSjynSwyzK2UmsUCSjsXQFLnnvZ6v16tA6+0Is85ND23t1ywaxzBRdcHpQBUN3ph/tnW10JsUxuXJTUFw==',key_name='tempest-TestGettingAddress-1100634772',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:47:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-rpa99d70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:47:57Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=6973b14c-b2af-4012-9d0c-1e86b6eb3a28,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.412 253665 DEBUG nova.network.os_vif_util [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.412 253665 DEBUG nova.network.os_vif_util [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1b:0f:53,bridge_name='br-int',has_traffic_filtering=True,id=1a443391-105a-4568-ba24-7748b702e21d,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a443391-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.413 253665 DEBUG os_vif [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:0f:53,bridge_name='br-int',has_traffic_filtering=True,id=1a443391-105a-4568-ba24-7748b702e21d,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a443391-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.414 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.415 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a443391-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.416 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.417 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.417 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.418 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6b9221a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:48:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.418 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:48:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.419 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb6b9221a-70, col_values=(('external_ids', {'iface-id': 'b8d092bb-b893-4593-9090-1acdc081ae18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.420 253665 INFO os_vif [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:0f:53,bridge_name='br-int',has_traffic_filtering=True,id=1a443391-105a-4568-ba24-7748b702e21d,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a443391-10')#033[00m
Nov 22 04:48:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:42.420 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.818 253665 INFO nova.virt.libvirt.driver [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Deleting instance files /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28_del#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.819 253665 INFO nova.virt.libvirt.driver [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Deletion of /var/lib/nova/instances/6973b14c-b2af-4012-9d0c-1e86b6eb3a28_del complete#033[00m
Nov 22 04:48:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:48:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.871 253665 INFO nova.compute.manager [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.872 253665 DEBUG oslo.service.loopingcall [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.872 253665 DEBUG nova.compute.manager [-] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:48:42 np0005532048 nova_compute[253661]: 2025-11-22 09:48:42.872 253665 DEBUG nova.network.neutron [-] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:48:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Nov 22 04:48:42 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Nov 22 04:48:43 np0005532048 nova_compute[253661]: 2025-11-22 09:48:43.033 253665 DEBUG nova.compute.manager [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-changed-1a443391-105a-4568-ba24-7748b702e21d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:43 np0005532048 nova_compute[253661]: 2025-11-22 09:48:43.034 253665 DEBUG nova.compute.manager [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Refreshing instance network info cache due to event network-changed-1a443391-105a-4568-ba24-7748b702e21d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:48:43 np0005532048 nova_compute[253661]: 2025-11-22 09:48:43.034 253665 DEBUG oslo_concurrency.lockutils [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:48:43 np0005532048 nova_compute[253661]: 2025-11-22 09:48:43.034 253665 DEBUG oslo_concurrency.lockutils [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:48:43 np0005532048 nova_compute[253661]: 2025-11-22 09:48:43.034 253665 DEBUG nova.network.neutron [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Refreshing network info cache for port 1a443391-105a-4568-ba24-7748b702e21d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:48:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 299 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.2 MiB/s wr, 192 op/s
Nov 22 04:48:44 np0005532048 nova_compute[253661]: 2025-11-22 09:48:44.649 253665 DEBUG nova.network.neutron [-] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:48:44 np0005532048 nova_compute[253661]: 2025-11-22 09:48:44.665 253665 INFO nova.compute.manager [-] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Took 1.79 seconds to deallocate network for instance.#033[00m
Nov 22 04:48:44 np0005532048 nova_compute[253661]: 2025-11-22 09:48:44.712 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:44 np0005532048 nova_compute[253661]: 2025-11-22 09:48:44.713 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:44 np0005532048 nova_compute[253661]: 2025-11-22 09:48:44.739 253665 DEBUG nova.compute.manager [req-5884cfa9-152d-4f16-a08a-1c0a68f360a4 req-6a027da9-78d8-44bf-bc82-5d971da3c95e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-vif-deleted-1a443391-105a-4568-ba24-7748b702e21d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:44 np0005532048 nova_compute[253661]: 2025-11-22 09:48:44.806 253665 DEBUG oslo_concurrency.processutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:48:45 np0005532048 nova_compute[253661]: 2025-11-22 09:48:45.114 253665 DEBUG nova.compute.manager [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:45 np0005532048 nova_compute[253661]: 2025-11-22 09:48:45.115 253665 DEBUG oslo_concurrency.lockutils [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:45 np0005532048 nova_compute[253661]: 2025-11-22 09:48:45.115 253665 DEBUG oslo_concurrency.lockutils [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:45 np0005532048 nova_compute[253661]: 2025-11-22 09:48:45.116 253665 DEBUG oslo_concurrency.lockutils [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:45 np0005532048 nova_compute[253661]: 2025-11-22 09:48:45.116 253665 DEBUG nova.compute.manager [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] No waiting events found dispatching network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:48:45 np0005532048 nova_compute[253661]: 2025-11-22 09:48:45.116 253665 WARNING nova.compute.manager [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received unexpected event network-vif-plugged-1a443391-105a-4568-ba24-7748b702e21d for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:48:45 np0005532048 nova_compute[253661]: 2025-11-22 09:48:45.116 253665 DEBUG nova.compute.manager [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received event network-changed-8d1f2012-aa57-4dfc-a744-a852d1353ad2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:45 np0005532048 nova_compute[253661]: 2025-11-22 09:48:45.117 253665 DEBUG nova.compute.manager [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Refreshing instance network info cache due to event network-changed-8d1f2012-aa57-4dfc-a744-a852d1353ad2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:48:45 np0005532048 nova_compute[253661]: 2025-11-22 09:48:45.117 253665 DEBUG oslo_concurrency.lockutils [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:48:45 np0005532048 nova_compute[253661]: 2025-11-22 09:48:45.117 253665 DEBUG oslo_concurrency.lockutils [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:48:45 np0005532048 nova_compute[253661]: 2025-11-22 09:48:45.117 253665 DEBUG nova.network.neutron [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Refreshing network info cache for port 8d1f2012-aa57-4dfc-a744-a852d1353ad2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:48:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:48:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876950918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:48:45 np0005532048 nova_compute[253661]: 2025-11-22 09:48:45.331 253665 DEBUG oslo_concurrency.processutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:48:45 np0005532048 nova_compute[253661]: 2025-11-22 09:48:45.336 253665 DEBUG nova.compute.provider_tree [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:48:45 np0005532048 nova_compute[253661]: 2025-11-22 09:48:45.350 253665 DEBUG nova.scheduler.client.report [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:48:45 np0005532048 nova_compute[253661]: 2025-11-22 09:48:45.378 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:45 np0005532048 nova_compute[253661]: 2025-11-22 09:48:45.408 253665 INFO nova.scheduler.client.report [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 6973b14c-b2af-4012-9d0c-1e86b6eb3a28#033[00m
Nov 22 04:48:45 np0005532048 nova_compute[253661]: 2025-11-22 09:48:45.500 253665 DEBUG oslo_concurrency.lockutils [None req-35d2c07a-790e-48ca-94b7-90a608a26b4e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.176 253665 DEBUG nova.network.neutron [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Updated VIF entry in instance network info cache for port 1a443391-105a-4568-ba24-7748b702e21d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.176 253665 DEBUG nova.network.neutron [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Updating instance_info_cache with network_info: [{"id": "1a443391-105a-4568-ba24-7748b702e21d", "address": "fa:16:3e:1b:0f:53", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe1b:f53", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a443391-10", "ovs_interfaceid": "1a443391-105a-4568-ba24-7748b702e21d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.205 253665 DEBUG oslo_concurrency.lockutils [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-6973b14c-b2af-4012-9d0c-1e86b6eb3a28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.205 253665 DEBUG nova.compute.manager [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-vif-unplugged-1a443391-105a-4568-ba24-7748b702e21d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.205 253665 DEBUG oslo_concurrency.lockutils [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.206 253665 DEBUG oslo_concurrency.lockutils [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.206 253665 DEBUG oslo_concurrency.lockutils [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "6973b14c-b2af-4012-9d0c-1e86b6eb3a28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.206 253665 DEBUG nova.compute.manager [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] No waiting events found dispatching network-vif-unplugged-1a443391-105a-4568-ba24-7748b702e21d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.206 253665 DEBUG nova.compute.manager [req-0bca521d-1d18-4f41-a6e7-6aaff2cd8861 req-5e6617de-5c6f-4b9c-baee-b6d89b4dc3dc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Received event network-vif-unplugged-1a443391-105a-4568-ba24-7748b702e21d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:48:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 282 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 972 KiB/s wr, 155 op/s
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.451 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "63134c6f-fc14-4157-9874-e7c6227f8d0a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.452 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.452 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.453 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.453 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.454 253665 INFO nova.compute.manager [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Terminating instance#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.456 253665 DEBUG nova.compute.manager [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:48:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:46Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:cb:74 10.100.0.14
Nov 22 04:48:46 np0005532048 kernel: tapbe2ad403-fc (unregistering): left promiscuous mode
Nov 22 04:48:46 np0005532048 NetworkManager[48920]: <info>  [1763804926.5330] device (tapbe2ad403-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:48:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:46Z|01572|binding|INFO|Releasing lport be2ad403-fc37-4e1b-a9b8-f0e116595caf from this chassis (sb_readonly=0)
Nov 22 04:48:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:46Z|01573|binding|INFO|Setting lport be2ad403-fc37-4e1b-a9b8-f0e116595caf down in Southbound
Nov 22 04:48:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:46Z|01574|binding|INFO|Removing iface tapbe2ad403-fc ovn-installed in OVS
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.555 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.566 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:08:93 10.100.0.6 2001:db8:0:1:f816:3eff:feca:893 2001:db8::f816:3eff:feca:893'], port_security=['fa:16:3e:ca:08:93 10.100.0.6 2001:db8:0:1:f816:3eff:feca:893 2001:db8::f816:3eff:feca:893'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:feca:893/64 2001:db8::f816:3eff:feca:893/64', 'neutron:device_id': '63134c6f-fc14-4157-9874-e7c6227f8d0a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b6b9221a-729b-4988-afa8-72f95360d9ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b8f1ae80-edda-4d40-9085-393558ac5aa1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eb6cfbf-9d17-4d61-b927-87a60dc61782, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=be2ad403-fc37-4e1b-a9b8-f0e116595caf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:48:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.567 162862 INFO neutron.agent.ovn.metadata.agent [-] Port be2ad403-fc37-4e1b-a9b8-f0e116595caf in datapath b6b9221a-729b-4988-afa8-72f95360d9ea unbound from our chassis#033[00m
Nov 22 04:48:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.574 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b6b9221a-729b-4988-afa8-72f95360d9ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:48:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.575 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa17a8f-d1f6-4a37-a4bd-a6dd7e6ba554]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.577 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea namespace which is not needed anymore#033[00m
Nov 22 04:48:46 np0005532048 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Nov 22 04:48:46 np0005532048 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008e.scope: Consumed 18.967s CPU time.
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.588 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:46 np0005532048 systemd-machined[215941]: Machine qemu-173-instance-0000008e terminated.
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.697 253665 INFO nova.virt.libvirt.driver [-] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Instance destroyed successfully.#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.698 253665 DEBUG nova.objects.instance [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 63134c6f-fc14-4157-9874-e7c6227f8d0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.709 253665 DEBUG nova.virt.libvirt.vif [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:46:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-533516475',display_name='tempest-TestGettingAddress-server-533516475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-533516475',id=142,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpILKTYWQ3kfrev/53VAY+pIDp4KWqBaIuz4XZlRuV7cYP/3tSjynSwyzK2UmsUCSjsXQFLnnvZ6v16tA6+0Is85ND23t1ywaxzBRdcHpQBUN3ph/tnW10JsUxuXJTUFw==',key_name='tempest-TestGettingAddress-1100634772',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:46:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-khcmddwq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:46:39Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=63134c6f-fc14-4157-9874-e7c6227f8d0a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.712 253665 DEBUG nova.network.os_vif_util [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.714 253665 DEBUG nova.network.os_vif_util [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ca:08:93,bridge_name='br-int',has_traffic_filtering=True,id=be2ad403-fc37-4e1b-a9b8-f0e116595caf,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2ad403-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.714 253665 DEBUG os_vif [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ca:08:93,bridge_name='br-int',has_traffic_filtering=True,id=be2ad403-fc37-4e1b-a9b8-f0e116595caf,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2ad403-fc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.716 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.717 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe2ad403-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.718 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.722 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.724 253665 INFO os_vif [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ca:08:93,bridge_name='br-int',has_traffic_filtering=True,id=be2ad403-fc37-4e1b-a9b8-f0e116595caf,network=Network(b6b9221a-729b-4988-afa8-72f95360d9ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2ad403-fc')#033[00m
Nov 22 04:48:46 np0005532048 neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea[398661]: [NOTICE]   (398665) : haproxy version is 2.8.14-c23fe91
Nov 22 04:48:46 np0005532048 neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea[398661]: [NOTICE]   (398665) : path to executable is /usr/sbin/haproxy
Nov 22 04:48:46 np0005532048 neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea[398661]: [WARNING]  (398665) : Exiting Master process...
Nov 22 04:48:46 np0005532048 neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea[398661]: [WARNING]  (398665) : Exiting Master process...
Nov 22 04:48:46 np0005532048 neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea[398661]: [ALERT]    (398665) : Current worker (398667) exited with code 143 (Terminated)
Nov 22 04:48:46 np0005532048 neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea[398661]: [WARNING]  (398665) : All workers exited. Exiting... (0)
Nov 22 04:48:46 np0005532048 systemd[1]: libpod-effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e.scope: Deactivated successfully.
Nov 22 04:48:46 np0005532048 podman[403079]: 2025-11-22 09:48:46.752406549 +0000 UTC m=+0.052688622 container died effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:48:46 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e-userdata-shm.mount: Deactivated successfully.
Nov 22 04:48:46 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ecd7c84ba3a988d7548558565c563363ed9d49bd874d1c96a511c8c8772c831a-merged.mount: Deactivated successfully.
Nov 22 04:48:46 np0005532048 podman[403079]: 2025-11-22 09:48:46.797817451 +0000 UTC m=+0.098099524 container cleanup effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:48:46 np0005532048 systemd[1]: libpod-conmon-effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e.scope: Deactivated successfully.
Nov 22 04:48:46 np0005532048 podman[403133]: 2025-11-22 09:48:46.870666559 +0000 UTC m=+0.044604372 container remove effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 04:48:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.877 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[683e3d25-5780-472c-9489-4f3468679325]: (4, ('Sat Nov 22 09:48:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea (effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e)\neffa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e\nSat Nov 22 09:48:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea (effa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e)\neffa53bbdaf5f0df9b57e5b7edd2a295b10ce49727395e3fa3f3e65a1672945e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.880 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd1d444-4f50-4441-852d-cec1a069a847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.881 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6b9221a-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.883 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:46 np0005532048 kernel: tapb6b9221a-70: left promiscuous mode
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.890 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[aaaf7f8a-4dc4-4392-93d0-ee07f42aaced]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:46 np0005532048 nova_compute[253661]: 2025-11-22 09:48:46.905 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.912 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[efac061d-d5cb-4214-bbcb-7e79067ed562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.914 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8698e341-5303-468f-bc68-b0335a59ae73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.934 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a2b2a3-d29f-4262-939b-ab8a8c6a5d97]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764977, 'reachable_time': 21527, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403148, 'error': None, 'target': 'ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:46 np0005532048 systemd[1]: run-netns-ovnmeta\x2db6b9221a\x2d729b\x2d4988\x2dafa8\x2d72f95360d9ea.mount: Deactivated successfully.
Nov 22 04:48:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.940 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b6b9221a-729b-4988-afa8-72f95360d9ea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:48:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:46.940 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[e68af062-2da9-4c12-a4f7-2bd5b45e5c2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.175 253665 INFO nova.virt.libvirt.driver [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Deleting instance files /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a_del#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.176 253665 INFO nova.virt.libvirt.driver [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Deletion of /var/lib/nova/instances/63134c6f-fc14-4157-9874-e7c6227f8d0a_del complete#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.184 253665 DEBUG nova.network.neutron [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Updated VIF entry in instance network info cache for port 8d1f2012-aa57-4dfc-a744-a852d1353ad2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.185 253665 DEBUG nova.network.neutron [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Updating instance_info_cache with network_info: [{"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.199 253665 DEBUG nova.compute.manager [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-changed-be2ad403-fc37-4e1b-a9b8-f0e116595caf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.200 253665 DEBUG nova.compute.manager [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Refreshing instance network info cache due to event network-changed-be2ad403-fc37-4e1b-a9b8-f0e116595caf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.200 253665 DEBUG oslo_concurrency.lockutils [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.200 253665 DEBUG oslo_concurrency.lockutils [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.201 253665 DEBUG nova.network.neutron [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Refreshing network info cache for port be2ad403-fc37-4e1b-a9b8-f0e116595caf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.219 253665 DEBUG oslo_concurrency.lockutils [req-99ee1f5c-f65a-4e6d-969c-77e595c40e90 req-b5467805-4227-48f4-83d9-873a990f9648 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-73f1da2d-d075-455d-94dd-f10146df7d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.235 253665 INFO nova.compute.manager [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.236 253665 DEBUG oslo.service.loopingcall [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.236 253665 DEBUG nova.compute.manager [-] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.236 253665 DEBUG nova.network.neutron [-] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.268 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.389 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.390 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:48:47 np0005532048 nova_compute[253661]: 2025-11-22 09:48:47.390 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:48:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:48:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 52 KiB/s wr, 171 op/s
Nov 22 04:48:48 np0005532048 nova_compute[253661]: 2025-11-22 09:48:48.403 253665 DEBUG nova.network.neutron [-] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:48:48 np0005532048 nova_compute[253661]: 2025-11-22 09:48:48.423 253665 INFO nova.compute.manager [-] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Took 1.19 seconds to deallocate network for instance.#033[00m
Nov 22 04:48:48 np0005532048 nova_compute[253661]: 2025-11-22 09:48:48.461 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:48 np0005532048 nova_compute[253661]: 2025-11-22 09:48:48.461 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:48 np0005532048 nova_compute[253661]: 2025-11-22 09:48:48.537 253665 DEBUG oslo_concurrency.processutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:48:48 np0005532048 nova_compute[253661]: 2025-11-22 09:48:48.907 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:48:48 np0005532048 nova_compute[253661]: 2025-11-22 09:48:48.928 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:48:48 np0005532048 nova_compute[253661]: 2025-11-22 09:48:48.929 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:48:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:48:49 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1305776052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.049 253665 DEBUG oslo_concurrency.processutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.054 253665 DEBUG nova.compute.provider_tree [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.072 253665 DEBUG nova.scheduler.client.report [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.091 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.121 253665 INFO nova.scheduler.client.report [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 63134c6f-fc14-4157-9874-e7c6227f8d0a#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.181 253665 DEBUG oslo_concurrency.lockutils [None req-71282b54-473b-413a-aeca-d3ec0dbf30de 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.297 253665 DEBUG nova.compute.manager [req-58e609c6-0787-4451-8be2-d63b328a27c1 req-713b1f83-3e97-4a43-b0fd-d42d190653df 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.298 253665 DEBUG oslo_concurrency.lockutils [req-58e609c6-0787-4451-8be2-d63b328a27c1 req-713b1f83-3e97-4a43-b0fd-d42d190653df 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.298 253665 DEBUG oslo_concurrency.lockutils [req-58e609c6-0787-4451-8be2-d63b328a27c1 req-713b1f83-3e97-4a43-b0fd-d42d190653df 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.298 253665 DEBUG oslo_concurrency.lockutils [req-58e609c6-0787-4451-8be2-d63b328a27c1 req-713b1f83-3e97-4a43-b0fd-d42d190653df 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.299 253665 DEBUG nova.compute.manager [req-58e609c6-0787-4451-8be2-d63b328a27c1 req-713b1f83-3e97-4a43-b0fd-d42d190653df 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] No waiting events found dispatching network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.299 253665 WARNING nova.compute.manager [req-58e609c6-0787-4451-8be2-d63b328a27c1 req-713b1f83-3e97-4a43-b0fd-d42d190653df 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received unexpected event network-vif-plugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.299 253665 DEBUG nova.compute.manager [req-58e609c6-0787-4451-8be2-d63b328a27c1 req-713b1f83-3e97-4a43-b0fd-d42d190653df 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-vif-deleted-be2ad403-fc37-4e1b-a9b8-f0e116595caf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.477 253665 DEBUG nova.network.neutron [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updated VIF entry in instance network info cache for port be2ad403-fc37-4e1b-a9b8-f0e116595caf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.478 253665 DEBUG nova.network.neutron [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Updating instance_info_cache with network_info: [{"id": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "address": "fa:16:3e:ca:08:93", "network": {"id": "b6b9221a-729b-4988-afa8-72f95360d9ea", "bridge": "br-int", "label": "tempest-network-smoke--496876643", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feca:893", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2ad403-fc", "ovs_interfaceid": "be2ad403-fc37-4e1b-a9b8-f0e116595caf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.496 253665 DEBUG oslo_concurrency.lockutils [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-63134c6f-fc14-4157-9874-e7c6227f8d0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.497 253665 DEBUG nova.compute.manager [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-vif-unplugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.497 253665 DEBUG oslo_concurrency.lockutils [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.497 253665 DEBUG oslo_concurrency.lockutils [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.497 253665 DEBUG oslo_concurrency.lockutils [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "63134c6f-fc14-4157-9874-e7c6227f8d0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.498 253665 DEBUG nova.compute.manager [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] No waiting events found dispatching network-vif-unplugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:48:49 np0005532048 nova_compute[253661]: 2025-11-22 09:48:49.498 253665 DEBUG nova.compute.manager [req-bdb5f25d-cb5d-408a-9a08-12c5b97878b2 req-006260f5-4fa3-4c3b-b4a4-d569869c87fd 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Received event network-vif-unplugged-be2ad403-fc37-4e1b-a9b8-f0e116595caf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:48:50 np0005532048 nova_compute[253661]: 2025-11-22 09:48:50.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:48:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 228 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 52 KiB/s wr, 186 op/s
Nov 22 04:48:51 np0005532048 nova_compute[253661]: 2025-11-22 09:48:51.720 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:52 np0005532048 nova_compute[253661]: 2025-11-22 09:48:52.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:48:52 np0005532048 nova_compute[253661]: 2025-11-22 09:48:52.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:48:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 22 KiB/s wr, 149 op/s
Nov 22 04:48:52 np0005532048 nova_compute[253661]: 2025-11-22 09:48:52.271 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:48:52
Nov 22 04:48:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:48:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:48:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'vms', 'backups', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data']
Nov 22 04:48:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:48:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:48:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:48:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:48:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:48:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:48:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:48:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:48:53 np0005532048 nova_compute[253661]: 2025-11-22 09:48:53.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:48:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:53Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:7c:5c 10.100.0.9
Nov 22 04:48:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:53Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:7c:5c 10.100.0.9
Nov 22 04:48:54 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.251 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.251 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:48:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 175 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 573 KiB/s wr, 137 op/s
Nov 22 04:48:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.281 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:48:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.282 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.300 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.470 253665 DEBUG nova.compute.manager [req-0f308d90-f74d-4bbd-819a-7b1f6e5b21ab req-a8b43acb-f78a-433c-bef8-507e2d218ed2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-changed-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.471 253665 DEBUG nova.compute.manager [req-0f308d90-f74d-4bbd-819a-7b1f6e5b21ab req-a8b43acb-f78a-433c-bef8-507e2d218ed2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing instance network info cache due to event network-changed-88d574be-cb53-4693-a025-34a039ee625c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.472 253665 DEBUG oslo_concurrency.lockutils [req-0f308d90-f74d-4bbd-819a-7b1f6e5b21ab req-a8b43acb-f78a-433c-bef8-507e2d218ed2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.472 253665 DEBUG oslo_concurrency.lockutils [req-0f308d90-f74d-4bbd-819a-7b1f6e5b21ab req-a8b43acb-f78a-433c-bef8-507e2d218ed2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.473 253665 DEBUG nova.network.neutron [req-0f308d90-f74d-4bbd-819a-7b1f6e5b21ab req-a8b43acb-f78a-433c-bef8-507e2d218ed2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Refreshing network info cache for port 88d574be-cb53-4693-a025-34a039ee625c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.540 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.541 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.541 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.542 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.542 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.543 253665 INFO nova.compute.manager [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Terminating instance#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.544 253665 DEBUG nova.compute.manager [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:48:54 np0005532048 kernel: tap88d574be-cb (unregistering): left promiscuous mode
Nov 22 04:48:54 np0005532048 NetworkManager[48920]: <info>  [1763804934.5960] device (tap88d574be-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:48:54 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:54Z|01575|binding|INFO|Releasing lport 88d574be-cb53-4693-a025-34a039ee625c from this chassis (sb_readonly=0)
Nov 22 04:48:54 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:54Z|01576|binding|INFO|Setting lport 88d574be-cb53-4693-a025-34a039ee625c down in Southbound
Nov 22 04:48:54 np0005532048 ovn_controller[152872]: 2025-11-22T09:48:54Z|01577|binding|INFO|Removing iface tap88d574be-cb ovn-installed in OVS
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.604 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:cb:74 10.100.0.14'], port_security=['fa:16:3e:88:cb:74 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '91cfde9c-3aa6-4946-92d6-471c8f63eb2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-449be411-464c-4d69-be15-6372ecacd778', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a86e5c3f3c34f2285b7958147f6bbd3', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'da881b1b-2aad-4a91-9422-a708cc3c5d34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a67d762-85ed-414e-ab70-eac2ab54b109, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=88d574be-cb53-4693-a025-34a039ee625c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:48:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.605 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 88d574be-cb53-4693-a025-34a039ee625c in datapath 449be411-464c-4d69-be15-6372ecacd778 unbound from our chassis#033[00m
Nov 22 04:48:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.607 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 449be411-464c-4d69-be15-6372ecacd778, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:48:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.611 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d0729055-77da-4d47-a14a-4e7b95d0a9b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.612 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-449be411-464c-4d69-be15-6372ecacd778 namespace which is not needed anymore#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.625 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:54 np0005532048 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Nov 22 04:48:54 np0005532048 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008f.scope: Consumed 14.700s CPU time.
Nov 22 04:48:54 np0005532048 systemd-machined[215941]: Machine qemu-176-instance-0000008f terminated.
Nov 22 04:48:54 np0005532048 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[402612]: [NOTICE]   (402638) : haproxy version is 2.8.14-c23fe91
Nov 22 04:48:54 np0005532048 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[402612]: [NOTICE]   (402638) : path to executable is /usr/sbin/haproxy
Nov 22 04:48:54 np0005532048 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[402612]: [ALERT]    (402638) : Current worker (402644) exited with code 143 (Terminated)
Nov 22 04:48:54 np0005532048 neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778[402612]: [WARNING]  (402638) : All workers exited. Exiting... (0)
Nov 22 04:48:54 np0005532048 systemd[1]: libpod-b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd.scope: Deactivated successfully.
Nov 22 04:48:54 np0005532048 podman[403214]: 2025-11-22 09:48:54.753563771 +0000 UTC m=+0.044917230 container died b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:48:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:48:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2513861184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.766 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.776 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.783 253665 INFO nova.virt.libvirt.driver [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Instance destroyed successfully.#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.784 253665 DEBUG nova.objects.instance [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lazy-loading 'resources' on Instance uuid 91cfde9c-3aa6-4946-92d6-471c8f63eb2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:48:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd-userdata-shm.mount: Deactivated successfully.
Nov 22 04:48:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6cf74c07a4a6889725f3f85eec25642fe7fcac08ba09ce686a69d19763d2ff09-merged.mount: Deactivated successfully.
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.799 253665 DEBUG nova.virt.libvirt.vif [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-22T09:46:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-140973884',display_name='tempest-TestShelveInstance-server-140973884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-140973884',id=143,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJMfV5BjTM8GJujok7HYi2H1JqAcE7EEyl3AluUOeV8mGOJe1kvDgduzG9FjqiMj3IyTkvrleTcL49x3Y3dHrfp4PbZT/WUxBgqL6QlOxXbuGaO695U0GzmKtLI552+pbw==',key_name='tempest-TestShelveInstance-1840126280',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:48:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2a86e5c3f3c34f2285b7958147f6bbd3',ramdisk_id='',reservation_id='r-4322pjah',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-463882348',owner_user_name='tempest-TestShelveInstance-463882348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:48:34Z,user_data=None,user_id='15f54ba9d7eb4efd9b760da5c85ec22e',uuid=91cfde9c-3aa6-4946-92d6-471c8f63eb2f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.800 253665 DEBUG nova.network.os_vif_util [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converting VIF {"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:48:54 np0005532048 podman[403214]: 2025-11-22 09:48:54.801141066 +0000 UTC m=+0.092494515 container cleanup b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.801 253665 DEBUG nova.network.os_vif_util [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.802 253665 DEBUG os_vif [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.804 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.805 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88d574be-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.808 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.809 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.811 253665 INFO os_vif [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:cb:74,bridge_name='br-int',has_traffic_filtering=True,id=88d574be-cb53-4693-a025-34a039ee625c,network=Network(449be411-464c-4d69-be15-6372ecacd778),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88d574be-cb')#033[00m
Nov 22 04:48:54 np0005532048 systemd[1]: libpod-conmon-b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd.scope: Deactivated successfully.
Nov 22 04:48:54 np0005532048 podman[403254]: 2025-11-22 09:48:54.873256706 +0000 UTC m=+0.048501698 container remove b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:48:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.879 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7cabffca-da77-4bef-abd8-ce09b52889d5]: (4, ('Sat Nov 22 09:48:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778 (b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd)\nb300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd\nSat Nov 22 09:48:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-449be411-464c-4d69-be15-6372ecacd778 (b300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd)\nb300adb070f643821f88c36ad80f049323fad27c65c5d69adde3765a038e81dd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.880 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3615d060-226d-45bc-b8d0-97133dc9725b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.881 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap449be411-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:48:54 np0005532048 kernel: tap449be411-40: left promiscuous mode
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.883 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.888 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8399c637-09d8-43d1-97ff-d041a2ea6b25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.899 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.906 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1185e2fa-8397-4521-9157-d514a15915ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.907 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[20918a84-5242-4f69-8a37-97c6d3c91621]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.924 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c51cefd8-bb38-4f5e-a5d0-10924aa3c5bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776242, 'reachable_time': 38129, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403289, 'error': None, 'target': 'ovnmeta-449be411-464c-4d69-be15-6372ecacd778', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
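The privsep reply above is a pyroute2-style netlink `RTM_NEWLINK` message: link attributes arrive as a list of `[name, value]` pairs under `attrs` rather than as a dict. A minimal sketch of pulling values out of that shape (the message below is a heavily trimmed copy of the one in the log; the `get_attr` helper is an illustrative assumption, mirroring how pyroute2 messages are typically read):

```python
# A trimmed pyroute2-style link message, as in the privsep reply above.
msg = {
    'index': 1, 'state': 'up', 'event': 'RTM_NEWLINK',
    'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_MTU', 65536],
              ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_CARRIER', 1]],
}

def get_attr(message, name):
    """Return the first value for a named attribute, or None if absent.

    Netlink attrs are [name, value] pairs, not a dict, because the same
    attribute type may legally appear more than once in a message.
    """
    for key, value in message['attrs']:
        if key == name:
            return value
    return None

print(get_attr(msg, 'IFLA_IFNAME'), get_attr(msg, 'IFLA_MTU'))  # → lo 65536
```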
Nov 22 04:48:54 np0005532048 systemd[1]: run-netns-ovnmeta\x2d449be411\x2d464c\x2d4d69\x2dbe15\x2d6372ecacd778.mount: Deactivated successfully.
Nov 22 04:48:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.929 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-449be411-464c-4d69-be15-6372ecacd778 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:48:54 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:48:54.929 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[d05ff1ae-4081-4459-bd4e-9ef3a645489d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.931 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.931 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.937 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:48:54 np0005532048 nova_compute[253661]: 2025-11-22 09:48:54.938 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:48:55 np0005532048 nova_compute[253661]: 2025-11-22 09:48:55.121 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:48:55 np0005532048 nova_compute[253661]: 2025-11-22 09:48:55.122 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3320MB free_disk=59.91588592529297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:48:55 np0005532048 nova_compute[253661]: 2025-11-22 09:48:55.122 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:55 np0005532048 nova_compute[253661]: 2025-11-22 09:48:55.123 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:55 np0005532048 nova_compute[253661]: 2025-11-22 09:48:55.224 253665 INFO nova.virt.libvirt.driver [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Deleting instance files /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_del#033[00m
Nov 22 04:48:55 np0005532048 nova_compute[253661]: 2025-11-22 09:48:55.225 253665 INFO nova.virt.libvirt.driver [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Deletion of /var/lib/nova/instances/91cfde9c-3aa6-4946-92d6-471c8f63eb2f_del complete#033[00m
Nov 22 04:48:55 np0005532048 nova_compute[253661]: 2025-11-22 09:48:55.276 253665 INFO nova.compute.manager [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:48:55 np0005532048 nova_compute[253661]: 2025-11-22 09:48:55.278 253665 DEBUG oslo.service.loopingcall [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:48:55 np0005532048 nova_compute[253661]: 2025-11-22 09:48:55.278 253665 DEBUG nova.compute.manager [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:48:55 np0005532048 nova_compute[253661]: 2025-11-22 09:48:55.278 253665 DEBUG nova.network.neutron [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:48:55 np0005532048 nova_compute[253661]: 2025-11-22 09:48:55.303 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 91cfde9c-3aa6-4946-92d6-471c8f63eb2f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:48:55 np0005532048 nova_compute[253661]: 2025-11-22 09:48:55.304 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 73f1da2d-d075-455d-94dd-f10146df7d30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:48:55 np0005532048 nova_compute[253661]: 2025-11-22 09:48:55.304 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:48:55 np0005532048 nova_compute[253661]: 2025-11-22 09:48:55.304 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:48:55 np0005532048 nova_compute[253661]: 2025-11-22 09:48:55.511 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:48:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:48:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:48:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:48:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:48:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:48:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:48:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4042091176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:48:55 np0005532048 nova_compute[253661]: 2025-11-22 09:48:55.994 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:48:56 np0005532048 nova_compute[253661]: 2025-11-22 09:48:56.000 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:48:56 np0005532048 nova_compute[253661]: 2025-11-22 09:48:56.015 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
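The inventory record above is what the scheduler's placement service turns into schedulable capacity. A minimal sketch of that arithmetic, assuming the standard placement capacity formula `(total - reserved) * allocation_ratio` (the inventory values are copied from the log; the `capacity` helper is illustrative, not Nova's code):

```python
# Inventory data as reported for provider f0c5987a-... in the log above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}

def capacity(inv):
    """Effective schedulable capacity per resource class:
    (total - reserved) * allocation_ratio, truncated to an integer."""
    return {rc: int((v['total'] - v['reserved']) * v['allocation_ratio'])
            for rc, v in inv.items()}

print(capacity(inventory))
# → {'VCPU': 32, 'MEMORY_MB': 7167, 'DISK_GB': 52}
```

This is why the resource tracker can report `total allocated vcpus: 2` against only 8 physical vCPUs while placement still accepts many more instances: the 4.0 allocation ratio overcommits CPU, while DISK_GB is undercommitted at 0.9.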
Nov 22 04:48:56 np0005532048 nova_compute[253661]: 2025-11-22 09:48:56.047 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:48:56 np0005532048 nova_compute[253661]: 2025-11-22 09:48:56.048 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 138 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 873 KiB/s rd, 1.0 MiB/s wr, 142 op/s
Nov 22 04:48:56 np0005532048 nova_compute[253661]: 2025-11-22 09:48:56.608 253665 DEBUG nova.compute.manager [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-unplugged-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:56 np0005532048 nova_compute[253661]: 2025-11-22 09:48:56.608 253665 DEBUG oslo_concurrency.lockutils [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:56 np0005532048 nova_compute[253661]: 2025-11-22 09:48:56.608 253665 DEBUG oslo_concurrency.lockutils [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:56 np0005532048 nova_compute[253661]: 2025-11-22 09:48:56.608 253665 DEBUG oslo_concurrency.lockutils [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:56 np0005532048 nova_compute[253661]: 2025-11-22 09:48:56.609 253665 DEBUG nova.compute.manager [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] No waiting events found dispatching network-vif-unplugged-88d574be-cb53-4693-a025-34a039ee625c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:48:56 np0005532048 nova_compute[253661]: 2025-11-22 09:48:56.609 253665 DEBUG nova.compute.manager [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-unplugged-88d574be-cb53-4693-a025-34a039ee625c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:48:56 np0005532048 nova_compute[253661]: 2025-11-22 09:48:56.609 253665 DEBUG nova.compute.manager [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:56 np0005532048 nova_compute[253661]: 2025-11-22 09:48:56.610 253665 DEBUG oslo_concurrency.lockutils [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:56 np0005532048 nova_compute[253661]: 2025-11-22 09:48:56.610 253665 DEBUG oslo_concurrency.lockutils [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:56 np0005532048 nova_compute[253661]: 2025-11-22 09:48:56.610 253665 DEBUG oslo_concurrency.lockutils [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:56 np0005532048 nova_compute[253661]: 2025-11-22 09:48:56.610 253665 DEBUG nova.compute.manager [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] No waiting events found dispatching network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:48:56 np0005532048 nova_compute[253661]: 2025-11-22 09:48:56.611 253665 WARNING nova.compute.manager [req-f517343a-d59b-435f-b0db-ade4d8e1bc61 req-e24d5e7c-feb2-4a3c-9c0c-d7308bd6e1fa 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received unexpected event network-vif-plugged-88d574be-cb53-4693-a025-34a039ee625c for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:48:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:48:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:48:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:48:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:48:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:48:57 np0005532048 nova_compute[253661]: 2025-11-22 09:48:57.048 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:48:57 np0005532048 nova_compute[253661]: 2025-11-22 09:48:57.132 253665 DEBUG nova.network.neutron [req-0f308d90-f74d-4bbd-819a-7b1f6e5b21ab req-a8b43acb-f78a-433c-bef8-507e2d218ed2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updated VIF entry in instance network info cache for port 88d574be-cb53-4693-a025-34a039ee625c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:48:57 np0005532048 nova_compute[253661]: 2025-11-22 09:48:57.132 253665 DEBUG nova.network.neutron [req-0f308d90-f74d-4bbd-819a-7b1f6e5b21ab req-a8b43acb-f78a-433c-bef8-507e2d218ed2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [{"id": "88d574be-cb53-4693-a025-34a039ee625c", "address": "fa:16:3e:88:cb:74", "network": {"id": "449be411-464c-4d69-be15-6372ecacd778", "bridge": "br-int", "label": "tempest-TestShelveInstance-1930241389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a86e5c3f3c34f2285b7958147f6bbd3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88d574be-cb", "ovs_interfaceid": "88d574be-cb53-4693-a025-34a039ee625c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:48:57 np0005532048 nova_compute[253661]: 2025-11-22 09:48:57.150 253665 DEBUG oslo_concurrency.lockutils [req-0f308d90-f74d-4bbd-819a-7b1f6e5b21ab req-a8b43acb-f78a-433c-bef8-507e2d218ed2 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-91cfde9c-3aa6-4946-92d6-471c8f63eb2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:48:57 np0005532048 nova_compute[253661]: 2025-11-22 09:48:57.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:48:57 np0005532048 nova_compute[253661]: 2025-11-22 09:48:57.274 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:48:57 np0005532048 nova_compute[253661]: 2025-11-22 09:48:57.378 253665 DEBUG nova.network.neutron [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:48:57 np0005532048 nova_compute[253661]: 2025-11-22 09:48:57.388 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804922.387406, 6973b14c-b2af-4012-9d0c-1e86b6eb3a28 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:48:57 np0005532048 nova_compute[253661]: 2025-11-22 09:48:57.388 253665 INFO nova.compute.manager [-] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:48:57 np0005532048 nova_compute[253661]: 2025-11-22 09:48:57.401 253665 INFO nova.compute.manager [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Took 2.12 seconds to deallocate network for instance.#033[00m
Nov 22 04:48:57 np0005532048 nova_compute[253661]: 2025-11-22 09:48:57.411 253665 DEBUG nova.compute.manager [None req-36ae3343-eb16-4e99-809f-9adc9d680151 - - - - - -] [instance: 6973b14c-b2af-4012-9d0c-1e86b6eb3a28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:48:57 np0005532048 nova_compute[253661]: 2025-11-22 09:48:57.453 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:48:57 np0005532048 nova_compute[253661]: 2025-11-22 09:48:57.453 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:48:57 np0005532048 nova_compute[253661]: 2025-11-22 09:48:57.467 253665 DEBUG nova.compute.manager [req-ebb9a721-03f9-4088-af7b-c4f0a9dcfe2b req-f8abd34f-ba66-4fc6-8d09-42314795a12d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Received event network-vif-deleted-88d574be-cb53-4693-a025-34a039ee625c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:48:57 np0005532048 nova_compute[253661]: 2025-11-22 09:48:57.519 253665 DEBUG oslo_concurrency.processutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:48:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:48:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:48:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1879184415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:48:58 np0005532048 nova_compute[253661]: 2025-11-22 09:48:58.033 253665 DEBUG oslo_concurrency.processutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:48:58 np0005532048 nova_compute[253661]: 2025-11-22 09:48:58.038 253665 DEBUG nova.compute.provider_tree [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:48:58 np0005532048 nova_compute[253661]: 2025-11-22 09:48:58.053 253665 DEBUG nova.scheduler.client.report [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:48:58 np0005532048 nova_compute[253661]: 2025-11-22 09:48:58.072 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:58 np0005532048 nova_compute[253661]: 2025-11-22 09:48:58.106 253665 INFO nova.scheduler.client.report [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Deleted allocations for instance 91cfde9c-3aa6-4946-92d6-471c8f63eb2f#033[00m
Nov 22 04:48:58 np0005532048 nova_compute[253661]: 2025-11-22 09:48:58.159 253665 DEBUG oslo_concurrency.lockutils [None req-d809f282-4fd8-489b-9910-090f7b6d3402 15f54ba9d7eb4efd9b760da5c85ec22e 2a86e5c3f3c34f2285b7958147f6bbd3 - - default default] Lock "91cfde9c-3aa6-4946-92d6-471c8f63eb2f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:48:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 140 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 847 KiB/s rd, 2.1 MiB/s wr, 151 op/s
Nov 22 04:48:59 np0005532048 nova_compute[253661]: 2025-11-22 09:48:59.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:48:59 np0005532048 nova_compute[253661]: 2025-11-22 09:48:59.807 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 625 KiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 22 04:49:01 np0005532048 nova_compute[253661]: 2025-11-22 09:49:01.238 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:49:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:01Z|01578|binding|INFO|Releasing lport fb48fac2-f19f-4ef4-a7bc-e07e49098585 from this chassis (sb_readonly=0)
Nov 22 04:49:01 np0005532048 nova_compute[253661]: 2025-11-22 09:49:01.688 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:01 np0005532048 nova_compute[253661]: 2025-11-22 09:49:01.692 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804926.6905787, 63134c6f-fc14-4157-9874-e7c6227f8d0a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:49:01 np0005532048 nova_compute[253661]: 2025-11-22 09:49:01.692 253665 INFO nova.compute.manager [-] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:49:01 np0005532048 nova_compute[253661]: 2025-11-22 09:49:01.720 253665 DEBUG nova.compute.manager [None req-d77a7d9c-635c-4a9d-800c-57df3ab9eae1 - - - - - -] [instance: 63134c6f-fc14-4157-9874-e7c6227f8d0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:49:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 572 KiB/s rd, 2.1 MiB/s wr, 116 op/s
Nov 22 04:49:02 np0005532048 nova_compute[253661]: 2025-11-22 09:49:02.276 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:02.284 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:49:02 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:02Z|01579|binding|INFO|Releasing lport fb48fac2-f19f-4ef4-a7bc-e07e49098585 from this chassis (sb_readonly=0)
Nov 22 04:49:02 np0005532048 nova_compute[253661]: 2025-11-22 09:49:02.486 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007573653301426059 of space, bias 1.0, pg target 0.2272095990427818 quantized to 32 (current 32)
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:49:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:49:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 559 KiB/s rd, 2.2 MiB/s wr, 98 op/s
Nov 22 04:49:04 np0005532048 nova_compute[253661]: 2025-11-22 09:49:04.811 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:06 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:06Z|01580|binding|INFO|Releasing lport fb48fac2-f19f-4ef4-a7bc-e07e49098585 from this chassis (sb_readonly=0)
Nov 22 04:49:06 np0005532048 nova_compute[253661]: 2025-11-22 09:49:06.191 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 519 KiB/s rd, 1.6 MiB/s wr, 93 op/s
Nov 22 04:49:06 np0005532048 podman[403336]: 2025-11-22 09:49:06.401131432 +0000 UTC m=+0.080116829 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:49:06 np0005532048 podman[403335]: 2025-11-22 09:49:06.418011639 +0000 UTC m=+0.093964571 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:49:07 np0005532048 nova_compute[253661]: 2025-11-22 09:49:07.278 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:49:08 np0005532048 nova_compute[253661]: 2025-11-22 09:49:08.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:49:08 np0005532048 nova_compute[253661]: 2025-11-22 09:49:08.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 22 04:49:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 263 KiB/s rd, 1.2 MiB/s wr, 53 op/s
Nov 22 04:49:09 np0005532048 nova_compute[253661]: 2025-11-22 09:49:09.364 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:09 np0005532048 nova_compute[253661]: 2025-11-22 09:49:09.780 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804934.7778037, 91cfde9c-3aa6-4946-92d6-471c8f63eb2f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:49:09 np0005532048 nova_compute[253661]: 2025-11-22 09:49:09.780 253665 INFO nova.compute.manager [-] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:49:09 np0005532048 nova_compute[253661]: 2025-11-22 09:49:09.795 253665 DEBUG nova.compute.manager [None req-4279b867-7e63-4f4b-ba16-ab1b0ff66a86 - - - - - -] [instance: 91cfde9c-3aa6-4946-92d6-471c8f63eb2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:49:09 np0005532048 nova_compute[253661]: 2025-11-22 09:49:09.814 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 15 KiB/s wr, 5 op/s
Nov 22 04:49:11 np0005532048 nova_compute[253661]: 2025-11-22 09:49:11.978 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 22 04:49:12 np0005532048 nova_compute[253661]: 2025-11-22 09:49:12.280 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:49:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3573905986' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:49:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:49:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3573905986' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:49:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:49:13 np0005532048 nova_compute[253661]: 2025-11-22 09:49:13.238 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:13 np0005532048 podman[403373]: 2025-11-22 09:49:13.405765498 +0000 UTC m=+0.090113907 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 04:49:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:13.697 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:d3:2f 10.100.0.2 2001:db8::f816:3eff:fe02:d32f'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe02:d32f/64', 'neutron:device_id': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=572cc1a4-6889-45f5-9ccb-1d24fa3ab232, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=da001788-faa3-412b-9b6a-82fe1a808a87) old=Port_Binding(mac=['fa:16:3e:02:d3:2f 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:49:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:13.698 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port da001788-faa3-412b-9b6a-82fe1a808a87 in datapath 9b64819a-274e-4eb7-988b-ceb1ea73c9ce updated#033[00m
Nov 22 04:49:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:13.700 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9b64819a-274e-4eb7-988b-ceb1ea73c9ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:49:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:13.701 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a52eb677-5b7d-4761-a674-aff35ff58e9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 22 04:49:14 np0005532048 nova_compute[253661]: 2025-11-22 09:49:14.817 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 22 04:49:17 np0005532048 nova_compute[253661]: 2025-11-22 09:49:17.282 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:49:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 22 04:49:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:19.192 162970 DEBUG eventlet.wsgi.server [-] (162970) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 22 04:49:19 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:19.193 162970 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Nov 22 04:49:19 np0005532048 ovn_metadata_agent[162856]: Accept: */*#015
Nov 22 04:49:19 np0005532048 ovn_metadata_agent[162856]: Connection: close#015
Nov 22 04:49:19 np0005532048 ovn_metadata_agent[162856]: Content-Type: text/plain#015
Nov 22 04:49:19 np0005532048 ovn_metadata_agent[162856]: Host: 169.254.169.254#015
Nov 22 04:49:19 np0005532048 ovn_metadata_agent[162856]: User-Agent: curl/7.84.0#015
Nov 22 04:49:19 np0005532048 ovn_metadata_agent[162856]: X-Forwarded-For: 10.100.0.9#015
Nov 22 04:49:19 np0005532048 ovn_metadata_agent[162856]: X-Ovn-Network-Id: 473e817e-09da-452b-aec0-d46546489b36 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 22 04:49:19 np0005532048 nova_compute[253661]: 2025-11-22 09:49:19.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 22 04:49:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:22.023 162970 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 22 04:49:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:22.024 162970 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 2.8306718#033[00m
Nov 22 04:49:22 np0005532048 haproxy-metadata-proxy-473e817e-09da-452b-aec0-d46546489b36[402957]: 10.100.0.9:50110 [22/Nov/2025:09:49:19.190] listener listener/metadata 0/0/0/2833/2833 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Nov 22 04:49:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:22.099 162970 DEBUG eventlet.wsgi.server [-] (162970) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 22 04:49:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:22.100 162970 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Nov 22 04:49:22 np0005532048 ovn_metadata_agent[162856]: Accept: */*#015
Nov 22 04:49:22 np0005532048 ovn_metadata_agent[162856]: Connection: close#015
Nov 22 04:49:22 np0005532048 ovn_metadata_agent[162856]: Content-Length: 100#015
Nov 22 04:49:22 np0005532048 ovn_metadata_agent[162856]: Content-Type: application/x-www-form-urlencoded#015
Nov 22 04:49:22 np0005532048 ovn_metadata_agent[162856]: Host: 169.254.169.254#015
Nov 22 04:49:22 np0005532048 ovn_metadata_agent[162856]: User-Agent: curl/7.84.0#015
Nov 22 04:49:22 np0005532048 ovn_metadata_agent[162856]: X-Forwarded-For: 10.100.0.9#015
Nov 22 04:49:22 np0005532048 ovn_metadata_agent[162856]: X-Ovn-Network-Id: 473e817e-09da-452b-aec0-d46546489b36#015
Nov 22 04:49:22 np0005532048 ovn_metadata_agent[162856]: #015
Nov 22 04:49:22 np0005532048 ovn_metadata_agent[162856]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 22 04:49:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 22 04:49:22 np0005532048 nova_compute[253661]: 2025-11-22 09:49:22.286 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:22.380 162970 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 22 04:49:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:22.380 162970 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2798340#033[00m
Nov 22 04:49:22 np0005532048 haproxy-metadata-proxy-473e817e-09da-452b-aec0-d46546489b36[402957]: 10.100.0.9:50116 [22/Nov/2025:09:49:22.098] listener listener/metadata 0/0/0/282/282 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Nov 22 04:49:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:49:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:49:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:49:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:49:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:49:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:49:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:49:23 np0005532048 nova_compute[253661]: 2025-11-22 09:49:23.158 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.446 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "73f1da2d-d075-455d-94dd-f10146df7d30" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.447 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.447 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.447 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.448 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.449 253665 INFO nova.compute.manager [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Terminating instance#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.450 253665 DEBUG nova.compute.manager [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:49:24 np0005532048 kernel: tap8d1f2012-aa (unregistering): left promiscuous mode
Nov 22 04:49:24 np0005532048 NetworkManager[48920]: <info>  [1763804964.5035] device (tap8d1f2012-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:49:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:24Z|01581|binding|INFO|Releasing lport 8d1f2012-aa57-4dfc-a744-a852d1353ad2 from this chassis (sb_readonly=0)
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.509 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:24Z|01582|binding|INFO|Setting lport 8d1f2012-aa57-4dfc-a744-a852d1353ad2 down in Southbound
Nov 22 04:49:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:24Z|01583|binding|INFO|Removing iface tap8d1f2012-aa ovn-installed in OVS
Nov 22 04:49:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.516 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:7c:5c 10.100.0.9'], port_security=['fa:16:3e:43:7c:5c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '73f1da2d-d075-455d-94dd-f10146df7d30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-473e817e-09da-452b-aec0-d46546489b36', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce09df5a051f4f24bbb216fbe5785dcb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '134111d9-6c1c-466d-8bad-cdc68aa178a5 f24a4b05-cdda-49a7-af44-458e15bd9a13', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf052f1b-89e1-46ee-8169-e44075a76fcb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=8d1f2012-aa57-4dfc-a744-a852d1353ad2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:49:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.517 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 8d1f2012-aa57-4dfc-a744-a852d1353ad2 in datapath 473e817e-09da-452b-aec0-d46546489b36 unbound from our chassis#033[00m
Nov 22 04:49:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.518 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 473e817e-09da-452b-aec0-d46546489b36, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:49:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.519 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[74707c23-a43c-4ce4-876c-62b9a55912eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.520 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-473e817e-09da-452b-aec0-d46546489b36 namespace which is not needed anymore#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.535 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:24 np0005532048 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000091.scope: Deactivated successfully.
Nov 22 04:49:24 np0005532048 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000091.scope: Consumed 16.803s CPU time.
Nov 22 04:49:24 np0005532048 systemd-machined[215941]: Machine qemu-177-instance-00000091 terminated.
Nov 22 04:49:24 np0005532048 neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36[402951]: [NOTICE]   (402955) : haproxy version is 2.8.14-c23fe91
Nov 22 04:49:24 np0005532048 neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36[402951]: [NOTICE]   (402955) : path to executable is /usr/sbin/haproxy
Nov 22 04:49:24 np0005532048 neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36[402951]: [WARNING]  (402955) : Exiting Master process...
Nov 22 04:49:24 np0005532048 neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36[402951]: [ALERT]    (402955) : Current worker (402957) exited with code 143 (Terminated)
Nov 22 04:49:24 np0005532048 neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36[402951]: [WARNING]  (402955) : All workers exited. Exiting... (0)
Nov 22 04:49:24 np0005532048 systemd[1]: libpod-e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf.scope: Deactivated successfully.
Nov 22 04:49:24 np0005532048 podman[403425]: 2025-11-22 09:49:24.653745704 +0000 UTC m=+0.045700809 container died e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.671 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.678 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf-userdata-shm.mount: Deactivated successfully.
Nov 22 04:49:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e3b593cb210fa24aaae4778a6a8c263ac8ff16d9e064d9f8dea041fd87cb8a31-merged.mount: Deactivated successfully.
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.690 253665 INFO nova.virt.libvirt.driver [-] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Instance destroyed successfully.#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.690 253665 DEBUG nova.objects.instance [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lazy-loading 'resources' on Instance uuid 73f1da2d-d075-455d-94dd-f10146df7d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:49:24 np0005532048 podman[403425]: 2025-11-22 09:49:24.700244952 +0000 UTC m=+0.092200057 container cleanup e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.701 253665 DEBUG nova.virt.libvirt.vif [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:48:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-762381395',display_name='tempest-TestServerBasicOps-server-762381395',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-762381395',id=145,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCtTbF8bJfFddW96zTLdAkxE2iVzgX9zcT4Pj4BnA20Jji6o4SOv+z2CVEObDH8w0qoNYti5+X9zzKmkIowUY67LzvbSFwG+M1TtD6ysNGURVyIwLTyMSUq/al9LkPsMvg==',key_name='tempest-TestServerBasicOps-443879583',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:48:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce09df5a051f4f24bbb216fbe5785dcb',ramdisk_id='',reservation_id='r-b7997295',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1909013265',owner_user_name='tempest-TestServerBasicOps-1909013265-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:49:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aff683c22adc499393a2037bae323af6',uuid=73f1da2d-d075-455d-94dd-f10146df7d30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.701 253665 DEBUG nova.network.os_vif_util [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Converting VIF {"id": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "address": "fa:16:3e:43:7c:5c", "network": {"id": "473e817e-09da-452b-aec0-d46546489b36", "bridge": "br-int", "label": "tempest-TestServerBasicOps-612847367-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce09df5a051f4f24bbb216fbe5785dcb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d1f2012-aa", "ovs_interfaceid": "8d1f2012-aa57-4dfc-a744-a852d1353ad2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.702 253665 DEBUG nova.network.os_vif_util [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:43:7c:5c,bridge_name='br-int',has_traffic_filtering=True,id=8d1f2012-aa57-4dfc-a744-a852d1353ad2,network=Network(473e817e-09da-452b-aec0-d46546489b36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d1f2012-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.703 253665 DEBUG os_vif [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:7c:5c,bridge_name='br-int',has_traffic_filtering=True,id=8d1f2012-aa57-4dfc-a744-a852d1353ad2,network=Network(473e817e-09da-452b-aec0-d46546489b36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d1f2012-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.705 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.705 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d1f2012-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:49:24 np0005532048 systemd[1]: libpod-conmon-e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf.scope: Deactivated successfully.
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.711 253665 INFO os_vif [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:7c:5c,bridge_name='br-int',has_traffic_filtering=True,id=8d1f2012-aa57-4dfc-a744-a852d1353ad2,network=Network(473e817e-09da-452b-aec0-d46546489b36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d1f2012-aa')#033[00m
Nov 22 04:49:24 np0005532048 podman[403461]: 2025-11-22 09:49:24.768875257 +0000 UTC m=+0.045888003 container remove e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.771 253665 DEBUG nova.compute.manager [req-37e4f945-55e3-421d-8949-76513331cf77 req-e27b6c24-6a2e-493b-a0e9-df6605b5c0e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received event network-vif-unplugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.772 253665 DEBUG oslo_concurrency.lockutils [req-37e4f945-55e3-421d-8949-76513331cf77 req-e27b6c24-6a2e-493b-a0e9-df6605b5c0e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.772 253665 DEBUG oslo_concurrency.lockutils [req-37e4f945-55e3-421d-8949-76513331cf77 req-e27b6c24-6a2e-493b-a0e9-df6605b5c0e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.772 253665 DEBUG oslo_concurrency.lockutils [req-37e4f945-55e3-421d-8949-76513331cf77 req-e27b6c24-6a2e-493b-a0e9-df6605b5c0e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.772 253665 DEBUG nova.compute.manager [req-37e4f945-55e3-421d-8949-76513331cf77 req-e27b6c24-6a2e-493b-a0e9-df6605b5c0e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] No waiting events found dispatching network-vif-unplugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.773 253665 DEBUG nova.compute.manager [req-37e4f945-55e3-421d-8949-76513331cf77 req-e27b6c24-6a2e-493b-a0e9-df6605b5c0e5 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received event network-vif-unplugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:49:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.776 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7d1a30-6565-4d0f-85d7-12de2c468124]: (4, ('Sat Nov 22 09:49:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36 (e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf)\ne938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf\nSat Nov 22 09:49:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-473e817e-09da-452b-aec0-d46546489b36 (e938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf)\ne938f9f9f20fcafcfd9d1d894f88ec68a373dc0aa2534cccfd4b8f1a626586cf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.778 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1e425842-98c4-4589-9f12-27b8b7cbb008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.779 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473e817e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.780 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:24 np0005532048 kernel: tap473e817e-00: left promiscuous mode
Nov 22 04:49:24 np0005532048 nova_compute[253661]: 2025-11-22 09:49:24.792 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.796 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8e98f0-ef9e-46c2-9806-cf2207f46db4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.820 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[99c63cab-5ff9-4bda-83fd-89d1ce81db56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.822 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a00750e0-631b-4a66-be28-1fe086eeae98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.840 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1fe880-c431-4811-9d96-3d5aee3508bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776942, 'reachable_time': 16846, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403494, 'error': None, 'target': 'ovnmeta-473e817e-09da-452b-aec0-d46546489b36', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.842 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-473e817e-09da-452b-aec0-d46546489b36 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:49:24 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:24.842 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[b39b9569-8aaf-45e4-8371-61a4de106b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:24 np0005532048 systemd[1]: run-netns-ovnmeta\x2d473e817e\x2d09da\x2d452b\x2daec0\x2dd46546489b36.mount: Deactivated successfully.
Nov 22 04:49:25 np0005532048 nova_compute[253661]: 2025-11-22 09:49:25.076 253665 INFO nova.virt.libvirt.driver [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Deleting instance files /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30_del#033[00m
Nov 22 04:49:25 np0005532048 nova_compute[253661]: 2025-11-22 09:49:25.077 253665 INFO nova.virt.libvirt.driver [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Deletion of /var/lib/nova/instances/73f1da2d-d075-455d-94dd-f10146df7d30_del complete#033[00m
Nov 22 04:49:25 np0005532048 nova_compute[253661]: 2025-11-22 09:49:25.137 253665 INFO nova.compute.manager [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:49:25 np0005532048 nova_compute[253661]: 2025-11-22 09:49:25.137 253665 DEBUG oslo.service.loopingcall [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:49:25 np0005532048 nova_compute[253661]: 2025-11-22 09:49:25.137 253665 DEBUG nova.compute.manager [-] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:49:25 np0005532048 nova_compute[253661]: 2025-11-22 09:49:25.138 253665 DEBUG nova.network.neutron [-] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:49:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.7 KiB/s wr, 0 op/s
Nov 22 04:49:26 np0005532048 nova_compute[253661]: 2025-11-22 09:49:26.485 253665 DEBUG nova.network.neutron [-] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:49:26 np0005532048 nova_compute[253661]: 2025-11-22 09:49:26.498 253665 INFO nova.compute.manager [-] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Took 1.36 seconds to deallocate network for instance.#033[00m
Nov 22 04:49:26 np0005532048 nova_compute[253661]: 2025-11-22 09:49:26.538 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:49:26 np0005532048 nova_compute[253661]: 2025-11-22 09:49:26.539 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:49:26 np0005532048 nova_compute[253661]: 2025-11-22 09:49:26.601 253665 DEBUG oslo_concurrency.processutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:49:26 np0005532048 nova_compute[253661]: 2025-11-22 09:49:26.853 253665 DEBUG nova.compute.manager [req-9bc53de9-e08f-4dd6-a530-50c28dfa0a37 req-4b435c8c-5a45-4b3d-8faf-3ed873ea3e66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received event network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:49:26 np0005532048 nova_compute[253661]: 2025-11-22 09:49:26.853 253665 DEBUG oslo_concurrency.lockutils [req-9bc53de9-e08f-4dd6-a530-50c28dfa0a37 req-4b435c8c-5a45-4b3d-8faf-3ed873ea3e66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:49:26 np0005532048 nova_compute[253661]: 2025-11-22 09:49:26.854 253665 DEBUG oslo_concurrency.lockutils [req-9bc53de9-e08f-4dd6-a530-50c28dfa0a37 req-4b435c8c-5a45-4b3d-8faf-3ed873ea3e66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:49:26 np0005532048 nova_compute[253661]: 2025-11-22 09:49:26.854 253665 DEBUG oslo_concurrency.lockutils [req-9bc53de9-e08f-4dd6-a530-50c28dfa0a37 req-4b435c8c-5a45-4b3d-8faf-3ed873ea3e66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:26 np0005532048 nova_compute[253661]: 2025-11-22 09:49:26.854 253665 DEBUG nova.compute.manager [req-9bc53de9-e08f-4dd6-a530-50c28dfa0a37 req-4b435c8c-5a45-4b3d-8faf-3ed873ea3e66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] No waiting events found dispatching network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:49:26 np0005532048 nova_compute[253661]: 2025-11-22 09:49:26.854 253665 WARNING nova.compute.manager [req-9bc53de9-e08f-4dd6-a530-50c28dfa0a37 req-4b435c8c-5a45-4b3d-8faf-3ed873ea3e66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received unexpected event network-vif-plugged-8d1f2012-aa57-4dfc-a744-a852d1353ad2 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:49:26 np0005532048 nova_compute[253661]: 2025-11-22 09:49:26.855 253665 DEBUG nova.compute.manager [req-9bc53de9-e08f-4dd6-a530-50c28dfa0a37 req-4b435c8c-5a45-4b3d-8faf-3ed873ea3e66 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Received event network-vif-deleted-8d1f2012-aa57-4dfc-a744-a852d1353ad2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:49:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:49:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3043213952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:49:27 np0005532048 nova_compute[253661]: 2025-11-22 09:49:27.034 253665 DEBUG oslo_concurrency.processutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:49:27 np0005532048 nova_compute[253661]: 2025-11-22 09:49:27.041 253665 DEBUG nova.compute.provider_tree [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:49:27 np0005532048 nova_compute[253661]: 2025-11-22 09:49:27.055 253665 DEBUG nova.scheduler.client.report [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:49:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:27.078 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:d3:2f 10.100.0.2 2001:db8:0:1:f816:3eff:fe02:d32f 2001:db8::f816:3eff:fe02:d32f'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe02:d32f/64 2001:db8::f816:3eff:fe02:d32f/64', 'neutron:device_id': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=572cc1a4-6889-45f5-9ccb-1d24fa3ab232, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=da001788-faa3-412b-9b6a-82fe1a808a87) old=Port_Binding(mac=['fa:16:3e:02:d3:2f 10.100.0.2 2001:db8::f816:3eff:fe02:d32f'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe02:d32f/64', 'neutron:device_id': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:49:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:27.080 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port da001788-faa3-412b-9b6a-82fe1a808a87 in datapath 9b64819a-274e-4eb7-988b-ceb1ea73c9ce updated#033[00m
Nov 22 04:49:27 np0005532048 nova_compute[253661]: 2025-11-22 09:49:27.081 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:27.081 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9b64819a-274e-4eb7-988b-ceb1ea73c9ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:49:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:27.082 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4f36db4d-6bbf-437d-a908-092dcc0fc514]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:27 np0005532048 nova_compute[253661]: 2025-11-22 09:49:27.128 253665 INFO nova.scheduler.client.report [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Deleted allocations for instance 73f1da2d-d075-455d-94dd-f10146df7d30#033[00m
Nov 22 04:49:27 np0005532048 nova_compute[253661]: 2025-11-22 09:49:27.189 253665 DEBUG oslo_concurrency.lockutils [None req-c7b6add6-c649-4967-a78e-023969fc80d9 aff683c22adc499393a2037bae323af6 ce09df5a051f4f24bbb216fbe5785dcb - - default default] Lock "73f1da2d-d075-455d-94dd-f10146df7d30" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:27 np0005532048 nova_compute[253661]: 2025-11-22 09:49:27.287 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:49:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:27.992 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:49:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:27.992 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:49:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:27.993 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Nov 22 04:49:29 np0005532048 nova_compute[253661]: 2025-11-22 09:49:29.708 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Nov 22 04:49:31 np0005532048 nova_compute[253661]: 2025-11-22 09:49:31.140 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Nov 22 04:49:32 np0005532048 nova_compute[253661]: 2025-11-22 09:49:32.290 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:32 np0005532048 podman[403690]: 2025-11-22 09:49:32.416951472 +0000 UTC m=+0.125644454 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:49:32 np0005532048 podman[403690]: 2025-11-22 09:49:32.527180903 +0000 UTC m=+0.235873885 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 04:49:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.293594) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804973293713, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 819, "num_deletes": 252, "total_data_size": 1075306, "memory_usage": 1109504, "flush_reason": "Manual Compaction"}
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804973305261, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 1054047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53842, "largest_seqno": 54660, "table_properties": {"data_size": 1049873, "index_size": 1890, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9388, "raw_average_key_size": 19, "raw_value_size": 1041427, "raw_average_value_size": 2183, "num_data_blocks": 84, "num_entries": 477, "num_filter_entries": 477, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763804908, "oldest_key_time": 1763804908, "file_creation_time": 1763804973, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 11702 microseconds, and 3827 cpu microseconds.
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.305336) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 1054047 bytes OK
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.305362) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.307243) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.307261) EVENT_LOG_v1 {"time_micros": 1763804973307256, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.307282) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1071217, prev total WAL file size 1071217, number of live WAL files 2.
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.307891) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(1029KB)], [125(8921KB)]
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804973307963, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 10189345, "oldest_snapshot_seqno": -1}
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7434 keys, 8526463 bytes, temperature: kUnknown
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804973358248, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8526463, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8479290, "index_size": 27476, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18629, "raw_key_size": 194617, "raw_average_key_size": 26, "raw_value_size": 8349066, "raw_average_value_size": 1123, "num_data_blocks": 1061, "num_entries": 7434, "num_filter_entries": 7434, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763804973, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.358964) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8526463 bytes
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.363046) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.8 rd, 168.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.7 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(17.8) write-amplify(8.1) OK, records in: 7953, records dropped: 519 output_compression: NoCompression
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.363067) EVENT_LOG_v1 {"time_micros": 1763804973363058, "job": 76, "event": "compaction_finished", "compaction_time_micros": 50755, "compaction_time_cpu_micros": 21639, "output_level": 6, "num_output_files": 1, "total_output_size": 8526463, "num_input_records": 7953, "num_output_records": 7434, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804973363370, "job": 76, "event": "table_file_deletion", "file_number": 127}
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763804973365140, "job": 76, "event": "table_file_deletion", "file_number": 125}
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.307764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.365198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.365204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.365206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.365207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:49:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:49:33.365209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:49:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:49:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:49:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:49:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:49:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:49:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:49:34 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c8ae9e43-4599-4d30-92c2-cfabe3708307 does not exist
Nov 22 04:49:34 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 4505e737-c35b-4316-a5da-310b3c92c44d does not exist
Nov 22 04:49:34 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 64387ff9-72cf-4ca4-bf10-b3dd0aea340d does not exist
Nov 22 04:49:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:49:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:49:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:49:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:49:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:49:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:49:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Nov 22 04:49:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:49:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:49:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:49:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:49:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:49:34 np0005532048 nova_compute[253661]: 2025-11-22 09:49:34.529 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:49:34 np0005532048 nova_compute[253661]: 2025-11-22 09:49:34.531 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:49:34 np0005532048 nova_compute[253661]: 2025-11-22 09:49:34.547 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:49:34 np0005532048 nova_compute[253661]: 2025-11-22 09:49:34.624 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:49:34 np0005532048 nova_compute[253661]: 2025-11-22 09:49:34.624 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:49:34 np0005532048 podman[404118]: 2025-11-22 09:49:34.626229756 +0000 UTC m=+0.043863245 container create bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_darwin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:49:34 np0005532048 nova_compute[253661]: 2025-11-22 09:49:34.632 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:49:34 np0005532048 nova_compute[253661]: 2025-11-22 09:49:34.634 253665 INFO nova.compute.claims [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:49:34 np0005532048 systemd[1]: Started libpod-conmon-bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303.scope.
Nov 22 04:49:34 np0005532048 podman[404118]: 2025-11-22 09:49:34.605328039 +0000 UTC m=+0.022961548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:49:34 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:49:34 np0005532048 nova_compute[253661]: 2025-11-22 09:49:34.734 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:49:34 np0005532048 podman[404118]: 2025-11-22 09:49:34.772129928 +0000 UTC m=+0.189763447 container init bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:49:34 np0005532048 nova_compute[253661]: 2025-11-22 09:49:34.772 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:34 np0005532048 podman[404118]: 2025-11-22 09:49:34.782837682 +0000 UTC m=+0.200471171 container start bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_darwin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 04:49:34 np0005532048 bold_darwin[404135]: 167 167
Nov 22 04:49:34 np0005532048 systemd[1]: libpod-bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303.scope: Deactivated successfully.
Nov 22 04:49:34 np0005532048 podman[404118]: 2025-11-22 09:49:34.789517047 +0000 UTC m=+0.207150536 container attach bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_darwin, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:49:34 np0005532048 podman[404118]: 2025-11-22 09:49:34.790393819 +0000 UTC m=+0.208027318 container died bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:49:34 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ec631dcf51993cea591b54591683a6e4057396d3cff769b7ed0a1f5707515e2d-merged.mount: Deactivated successfully.
Nov 22 04:49:34 np0005532048 podman[404118]: 2025-11-22 09:49:34.830901239 +0000 UTC m=+0.248534718 container remove bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_darwin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 04:49:34 np0005532048 systemd[1]: libpod-conmon-bfdabfff70b59ef6884d1f810217bd2dd06cfb9f225b95750d0541f5fb5f1303.scope: Deactivated successfully.
Nov 22 04:49:35 np0005532048 podman[404180]: 2025-11-22 09:49:35.01523535 +0000 UTC m=+0.060764281 container create 56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 04:49:35 np0005532048 systemd[1]: Started libpod-conmon-56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35.scope.
Nov 22 04:49:35 np0005532048 podman[404180]: 2025-11-22 09:49:34.981286022 +0000 UTC m=+0.026814963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:49:35 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:49:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e93839208f592f284920d037ec3212f61e749c8b0b2704a0772201b54df604a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:49:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e93839208f592f284920d037ec3212f61e749c8b0b2704a0772201b54df604a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:49:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e93839208f592f284920d037ec3212f61e749c8b0b2704a0772201b54df604a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:49:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e93839208f592f284920d037ec3212f61e749c8b0b2704a0772201b54df604a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:49:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e93839208f592f284920d037ec3212f61e749c8b0b2704a0772201b54df604a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:49:35 np0005532048 podman[404180]: 2025-11-22 09:49:35.186191342 +0000 UTC m=+0.231720273 container init 56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_taussig, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:49:35 np0005532048 podman[404180]: 2025-11-22 09:49:35.194739264 +0000 UTC m=+0.240268175 container start 56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 04:49:35 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:49:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/177278564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.227 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.236 253665 DEBUG nova.compute.provider_tree [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:49:35 np0005532048 podman[404180]: 2025-11-22 09:49:35.24278769 +0000 UTC m=+0.288316631 container attach 56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_taussig, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.254 253665 DEBUG nova.scheduler.client.report [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.281 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.282 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.369 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.370 253665 DEBUG nova.network.neutron [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.394 253665 INFO nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.419 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.490 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.526 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.527 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.528 253665 INFO nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Creating image(s)#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.569 253665 DEBUG nova.storage.rbd_utils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.596 253665 DEBUG nova.storage.rbd_utils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.617 253665 DEBUG nova.storage.rbd_utils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.620 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.703 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.705 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.706 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.706 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.737 253665 DEBUG nova.storage.rbd_utils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.741 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:49:35 np0005532048 nova_compute[253661]: 2025-11-22 09:49:35.785 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:36 np0005532048 nova_compute[253661]: 2025-11-22 09:49:36.055 253665 DEBUG nova.policy [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:49:36 np0005532048 nova_compute[253661]: 2025-11-22 09:49:36.113 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:49:36 np0005532048 nova_compute[253661]: 2025-11-22 09:49:36.171 253665 DEBUG nova.storage.rbd_utils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:49:36 np0005532048 nova_compute[253661]: 2025-11-22 09:49:36.270 253665 DEBUG nova.objects.instance [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 3a65f84a-3072-4b94-b08a-0ba7b1529a07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:49:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Nov 22 04:49:36 np0005532048 nova_compute[253661]: 2025-11-22 09:49:36.293 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:49:36 np0005532048 nova_compute[253661]: 2025-11-22 09:49:36.294 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Ensure instance console log exists: /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:49:36 np0005532048 nova_compute[253661]: 2025-11-22 09:49:36.295 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:49:36 np0005532048 nova_compute[253661]: 2025-11-22 09:49:36.295 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:49:36 np0005532048 nova_compute[253661]: 2025-11-22 09:49:36.295 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:36 np0005532048 condescending_taussig[404197]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:49:36 np0005532048 condescending_taussig[404197]: --> relative data size: 1.0
Nov 22 04:49:36 np0005532048 condescending_taussig[404197]: --> All data devices are unavailable
Nov 22 04:49:36 np0005532048 systemd[1]: libpod-56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35.scope: Deactivated successfully.
Nov 22 04:49:36 np0005532048 systemd[1]: libpod-56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35.scope: Consumed 1.148s CPU time.
Nov 22 04:49:36 np0005532048 podman[404180]: 2025-11-22 09:49:36.438135687 +0000 UTC m=+1.483664598 container died 56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_taussig, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 22 04:49:36 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9e93839208f592f284920d037ec3212f61e749c8b0b2704a0772201b54df604a-merged.mount: Deactivated successfully.
Nov 22 04:49:36 np0005532048 podman[404180]: 2025-11-22 09:49:36.519901305 +0000 UTC m=+1.565430216 container remove 56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_taussig, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:49:36 np0005532048 systemd[1]: libpod-conmon-56e35ae61ed534ade55a4179d6430dacb5715bfd85b67d0b207980d564eeed35.scope: Deactivated successfully.
Nov 22 04:49:36 np0005532048 podman[404396]: 2025-11-22 09:49:36.557109295 +0000 UTC m=+0.074327557 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:49:36 np0005532048 podman[404404]: 2025-11-22 09:49:36.561117853 +0000 UTC m=+0.078109879 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 22 04:49:36 np0005532048 nova_compute[253661]: 2025-11-22 09:49:36.861 253665 DEBUG nova.network.neutron [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Successfully created port: 9c015dd3-d340-40c6-bcc6-efef0a914d39 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:49:37 np0005532048 podman[404582]: 2025-11-22 09:49:37.142310175 +0000 UTC m=+0.043159747 container create 3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:49:37 np0005532048 systemd[1]: Started libpod-conmon-3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48.scope.
Nov 22 04:49:37 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:49:37 np0005532048 podman[404582]: 2025-11-22 09:49:37.124873954 +0000 UTC m=+0.025723546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:49:37 np0005532048 podman[404582]: 2025-11-22 09:49:37.228791 +0000 UTC m=+0.129640572 container init 3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 04:49:37 np0005532048 podman[404582]: 2025-11-22 09:49:37.235190148 +0000 UTC m=+0.136039720 container start 3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:49:37 np0005532048 podman[404582]: 2025-11-22 09:49:37.238342166 +0000 UTC m=+0.139191768 container attach 3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:49:37 np0005532048 agitated_galois[404597]: 167 167
Nov 22 04:49:37 np0005532048 systemd[1]: libpod-3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48.scope: Deactivated successfully.
Nov 22 04:49:37 np0005532048 podman[404582]: 2025-11-22 09:49:37.241832682 +0000 UTC m=+0.142682254 container died 3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 04:49:37 np0005532048 nova_compute[253661]: 2025-11-22 09:49:37.246 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:49:37 np0005532048 nova_compute[253661]: 2025-11-22 09:49:37.247 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 22 04:49:37 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8b307f6bfca4bfc560150cabdc9bee60b715ed53a7ca32b8050f8ec307b59116-merged.mount: Deactivated successfully.
Nov 22 04:49:37 np0005532048 nova_compute[253661]: 2025-11-22 09:49:37.271 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 22 04:49:37 np0005532048 podman[404582]: 2025-11-22 09:49:37.281100712 +0000 UTC m=+0.181950284 container remove 3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:49:37 np0005532048 systemd[1]: libpod-conmon-3f9430e56937994dc12dcdecd11aa631ed89b5274d58c1ebf8000465bb050b48.scope: Deactivated successfully.
Nov 22 04:49:37 np0005532048 nova_compute[253661]: 2025-11-22 09:49:37.292 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:37 np0005532048 podman[404621]: 2025-11-22 09:49:37.444579218 +0000 UTC m=+0.040610854 container create dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rubin, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 04:49:37 np0005532048 systemd[1]: Started libpod-conmon-dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684.scope.
Nov 22 04:49:37 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:49:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8551c4d1d5ca0b6ba0a78eb28ba3fc3cd71a7a47c6831d371a583a5b6242a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:49:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8551c4d1d5ca0b6ba0a78eb28ba3fc3cd71a7a47c6831d371a583a5b6242a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:49:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8551c4d1d5ca0b6ba0a78eb28ba3fc3cd71a7a47c6831d371a583a5b6242a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:49:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f8551c4d1d5ca0b6ba0a78eb28ba3fc3cd71a7a47c6831d371a583a5b6242a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:49:37 np0005532048 podman[404621]: 2025-11-22 09:49:37.428599574 +0000 UTC m=+0.024631240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:49:37 np0005532048 podman[404621]: 2025-11-22 09:49:37.530401778 +0000 UTC m=+0.126433434 container init dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rubin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 22 04:49:37 np0005532048 podman[404621]: 2025-11-22 09:49:37.53897873 +0000 UTC m=+0.135010386 container start dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:49:37 np0005532048 podman[404621]: 2025-11-22 09:49:37.542822585 +0000 UTC m=+0.138854311 container attach dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rubin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 22 04:49:37 np0005532048 nova_compute[253661]: 2025-11-22 09:49:37.584 253665 DEBUG nova.network.neutron [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Successfully updated port: 9c015dd3-d340-40c6-bcc6-efef0a914d39 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:49:37 np0005532048 nova_compute[253661]: 2025-11-22 09:49:37.601 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:49:37 np0005532048 nova_compute[253661]: 2025-11-22 09:49:37.601 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:49:37 np0005532048 nova_compute[253661]: 2025-11-22 09:49:37.601 253665 DEBUG nova.network.neutron [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:49:37 np0005532048 nova_compute[253661]: 2025-11-22 09:49:37.670 253665 DEBUG nova.compute.manager [req-03e63020-ccc1-4af2-821e-f6134a5b06fc req-9ffce517-ca41-4284-aab0-21cb9b29ef4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-changed-9c015dd3-d340-40c6-bcc6-efef0a914d39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:49:37 np0005532048 nova_compute[253661]: 2025-11-22 09:49:37.670 253665 DEBUG nova.compute.manager [req-03e63020-ccc1-4af2-821e-f6134a5b06fc req-9ffce517-ca41-4284-aab0-21cb9b29ef4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Refreshing instance network info cache due to event network-changed-9c015dd3-d340-40c6-bcc6-efef0a914d39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:49:37 np0005532048 nova_compute[253661]: 2025-11-22 09:49:37.670 253665 DEBUG oslo_concurrency.lockutils [req-03e63020-ccc1-4af2-821e-f6134a5b06fc req-9ffce517-ca41-4284-aab0-21cb9b29ef4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:49:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:49:37 np0005532048 nova_compute[253661]: 2025-11-22 09:49:37.928 253665 DEBUG nova.network.neutron [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:49:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Nov 22 04:49:38 np0005532048 silly_rubin[404637]: {
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:    "0": [
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:        {
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "devices": [
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "/dev/loop3"
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            ],
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "lv_name": "ceph_lv0",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "lv_size": "21470642176",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "name": "ceph_lv0",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "tags": {
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.cluster_name": "ceph",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.crush_device_class": "",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.encrypted": "0",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.osd_id": "0",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.type": "block",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.vdo": "0"
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            },
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "type": "block",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "vg_name": "ceph_vg0"
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:        }
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:    ],
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:    "1": [
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:        {
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "devices": [
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "/dev/loop4"
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            ],
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "lv_name": "ceph_lv1",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "lv_size": "21470642176",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "name": "ceph_lv1",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "tags": {
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.cluster_name": "ceph",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.crush_device_class": "",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.encrypted": "0",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.osd_id": "1",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.type": "block",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.vdo": "0"
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            },
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "type": "block",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "vg_name": "ceph_vg1"
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:        }
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:    ],
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:    "2": [
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:        {
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "devices": [
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "/dev/loop5"
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            ],
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "lv_name": "ceph_lv2",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "lv_size": "21470642176",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "name": "ceph_lv2",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "tags": {
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.cluster_name": "ceph",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.crush_device_class": "",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.encrypted": "0",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.osd_id": "2",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.type": "block",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:                "ceph.vdo": "0"
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            },
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "type": "block",
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:            "vg_name": "ceph_vg2"
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:        }
Nov 22 04:49:38 np0005532048 silly_rubin[404637]:    ]
Nov 22 04:49:38 np0005532048 silly_rubin[404637]: }
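The JSON block emitted above by the short-lived `silly_rubin` container has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to the logical volumes backing that OSD. A minimal sketch of turning such a payload into an OSD-id → device table, assuming that structure (the sample below is a hypothetical trimmed copy of the log payload, keeping only the fields the code touches):

```python
import json

# Hypothetical sample shaped like the ceph-volume JSON in the log above,
# trimmed to the fields used below.
RAW = """
{
  "0": [{"devices": ["/dev/loop3"], "lv_path": "/dev/ceph_vg0/ceph_lv0",
         "tags": {"ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd"}}],
  "1": [{"devices": ["/dev/loop4"], "lv_path": "/dev/ceph_vg1/ceph_lv1",
         "tags": {"ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555"}}]
}
"""

def osd_table(raw: str) -> dict:
    """Map each OSD id to its backing devices, LV path, and OSD fsid."""
    data = json.loads(raw)
    return {
        osd_id: {
            "devices": lv["devices"],
            "lv_path": lv["lv_path"],
            "osd_fsid": lv["tags"]["ceph.osd_fsid"],
        }
        for osd_id, lvs in data.items()
        for lv in lvs  # one block LV per OSD in this layout
    }

table = osd_table(RAW)
print(table["0"]["devices"])  # → ['/dev/loop3']
```

`osd_table` and `RAW` are illustrative names, not anything from ceph-volume itself; in the log, OSDs 0–2 map to `/dev/loop3`–`/dev/loop5` via `ceph_vg0`–`ceph_vg2` in exactly this way.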
Nov 22 04:49:38 np0005532048 systemd[1]: libpod-dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684.scope: Deactivated successfully.
Nov 22 04:49:38 np0005532048 podman[404621]: 2025-11-22 09:49:38.404305567 +0000 UTC m=+1.000337203 container died dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 04:49:38 np0005532048 systemd[1]: var-lib-containers-storage-overlay-3f8551c4d1d5ca0b6ba0a78eb28ba3fc3cd71a7a47c6831d371a583a5b6242a8-merged.mount: Deactivated successfully.
Nov 22 04:49:38 np0005532048 podman[404621]: 2025-11-22 09:49:38.461486549 +0000 UTC m=+1.057518185 container remove dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_rubin, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 04:49:38 np0005532048 systemd[1]: libpod-conmon-dcb6b31318f94590a1794658c104b346706fe611ca04201e9ce9bf0e2be67684.scope: Deactivated successfully.
Nov 22 04:49:39 np0005532048 podman[404799]: 2025-11-22 09:49:39.070585009 +0000 UTC m=+0.039033234 container create 0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 04:49:39 np0005532048 systemd[1]: Started libpod-conmon-0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186.scope.
Nov 22 04:49:39 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:49:39 np0005532048 podman[404799]: 2025-11-22 09:49:39.146450123 +0000 UTC m=+0.114898378 container init 0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 04:49:39 np0005532048 podman[404799]: 2025-11-22 09:49:39.054967954 +0000 UTC m=+0.023416199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:49:39 np0005532048 podman[404799]: 2025-11-22 09:49:39.153136248 +0000 UTC m=+0.121584473 container start 0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:49:39 np0005532048 podman[404799]: 2025-11-22 09:49:39.157596078 +0000 UTC m=+0.126044323 container attach 0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:49:39 np0005532048 hardcore_shockley[404816]: 167 167
Nov 22 04:49:39 np0005532048 systemd[1]: libpod-0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186.scope: Deactivated successfully.
Nov 22 04:49:39 np0005532048 podman[404799]: 2025-11-22 09:49:39.159973687 +0000 UTC m=+0.128421912 container died 0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 04:49:39 np0005532048 systemd[1]: var-lib-containers-storage-overlay-6655698ea932c3440df08305a6fc870c70e91825db24f91a61ad8f61fe57a253-merged.mount: Deactivated successfully.
Nov 22 04:49:39 np0005532048 podman[404799]: 2025-11-22 09:49:39.20101948 +0000 UTC m=+0.169467705 container remove 0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 04:49:39 np0005532048 systemd[1]: libpod-conmon-0635453fe990664a4e3ed0d2dea16fe7e4f247efad3bc4fb364328ec6dadc186.scope: Deactivated successfully.
Nov 22 04:49:39 np0005532048 podman[404840]: 2025-11-22 09:49:39.354957271 +0000 UTC m=+0.040667115 container create acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:49:39 np0005532048 systemd[1]: Started libpod-conmon-acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62.scope.
Nov 22 04:49:39 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:49:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b5a28722f40cfa509fc7d3bcbf3e02cc7b9ad43c5b5eb0f79eb213932e1794/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:49:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b5a28722f40cfa509fc7d3bcbf3e02cc7b9ad43c5b5eb0f79eb213932e1794/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:49:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b5a28722f40cfa509fc7d3bcbf3e02cc7b9ad43c5b5eb0f79eb213932e1794/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:49:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b5a28722f40cfa509fc7d3bcbf3e02cc7b9ad43c5b5eb0f79eb213932e1794/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:49:39 np0005532048 podman[404840]: 2025-11-22 09:49:39.423204157 +0000 UTC m=+0.108914021 container init acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:49:39 np0005532048 podman[404840]: 2025-11-22 09:49:39.430390254 +0000 UTC m=+0.116100098 container start acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 04:49:39 np0005532048 podman[404840]: 2025-11-22 09:49:39.43348271 +0000 UTC m=+0.119192574 container attach acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 04:49:39 np0005532048 podman[404840]: 2025-11-22 09:49:39.33908645 +0000 UTC m=+0.024796314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.504 253665 DEBUG nova.network.neutron [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updating instance_info_cache with network_info: [{"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.525 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.526 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Instance network_info: |[{"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
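The network_info payload logged above is a list of VIF dicts, each carrying its port id and a `network.subnets` list with per-subnet `ips`. A minimal sketch of pulling the fixed IPs per port out of that structure, with field names taken from the log record and a hypothetical trimmed sample standing in for the full payload:

```python
# Trimmed, hypothetical sample shaped like the Nova network_info list in the
# log record above; only the fields the code reads are kept.
network_info = [{
    "id": "9c015dd3-d340-40c6-bcc6-efef0a914d39",
    "address": "fa:16:3e:7b:47:56",
    "network": {"subnets": [
        {"version": 6, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756"}]},
        {"version": 4, "ips": [{"address": "10.100.0.9"}]},
    ]},
}]

def fixed_ips(nw_info):
    """Return {port_id: [ip, ...]} covering every subnet on every VIF."""
    return {
        vif["id"]: [
            ip["address"]
            for subnet in vif["network"]["subnets"]
            for ip in subnet["ips"]
        ]
        for vif in nw_info
    }

print(fixed_ips(network_info))
```

`fixed_ips` is an illustrative helper, not a Nova API; against the full log payload it would also pick up the second SLAAC address on the `2001:db8:0:1::/64` subnet.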
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.527 253665 DEBUG oslo_concurrency.lockutils [req-03e63020-ccc1-4af2-821e-f6134a5b06fc req-9ffce517-ca41-4284-aab0-21cb9b29ef4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.527 253665 DEBUG nova.network.neutron [req-03e63020-ccc1-4af2-821e-f6134a5b06fc req-9ffce517-ca41-4284-aab0-21cb9b29ef4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Refreshing network info cache for port 9c015dd3-d340-40c6-bcc6-efef0a914d39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.532 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Start _get_guest_xml network_info=[{"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.537 253665 WARNING nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.545 253665 DEBUG nova.virt.libvirt.host [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.546 253665 DEBUG nova.virt.libvirt.host [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.556 253665 DEBUG nova.virt.libvirt.host [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.557 253665 DEBUG nova.virt.libvirt.host [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.557 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.557 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.558 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.558 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.558 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.559 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.559 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.559 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.560 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.560 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.560 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.560 253665 DEBUG nova.virt.hardware [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.564 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.687 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763804964.6856081, 73f1da2d-d075-455d-94dd-f10146df7d30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.692 253665 INFO nova.compute.manager [-] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.721 253665 DEBUG nova.compute.manager [None req-51a9e37d-eead-4fc9-8a66-69ad6257eb38 - - - - - -] [instance: 73f1da2d-d075-455d-94dd-f10146df7d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:49:39 np0005532048 nova_compute[253661]: 2025-11-22 09:49:39.776 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:49:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3871077078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.021 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.046 253665 DEBUG nova.storage.rbd_utils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.051 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:49:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]: {
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:        "osd_id": 1,
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:        "type": "bluestore"
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:    },
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:        "osd_id": 0,
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:        "type": "bluestore"
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:    },
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:        "osd_id": 2,
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:        "type": "bluestore"
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]:    }
Nov 22 04:49:40 np0005532048 elastic_faraday[404856]: }
Nov 22 04:49:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:49:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1469356678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:49:40 np0005532048 systemd[1]: libpod-acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62.scope: Deactivated successfully.
Nov 22 04:49:40 np0005532048 systemd[1]: libpod-acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62.scope: Consumed 1.081s CPU time.
Nov 22 04:49:40 np0005532048 podman[404840]: 2025-11-22 09:49:40.515343575 +0000 UTC m=+1.201053449 container died acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.534 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.538 253665 DEBUG nova.virt.libvirt.vif [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:49:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1502632540',display_name='tempest-TestGettingAddress-server-1502632540',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1502632540',id=146,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcB35ow6uk6IMlUBwbOGuOK3V7CtaZ2yJV3EZplxoxOQmEddDgKs5J+v7KXl9WfxkSmq+Acn+6POKmEHRfjGgaOghqPwK+UcBY92I7fBGtxwwkl4TxWcumLZptxfN80TA==',key_name='tempest-TestGettingAddress-174680913',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-97p9p4ep',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:49:35Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=3a65f84a-3072-4b94-b08a-0ba7b1529a07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.538 253665 DEBUG nova.network.os_vif_util [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.539 253665 DEBUG nova.network.os_vif_util [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:47:56,bridge_name='br-int',has_traffic_filtering=True,id=9c015dd3-d340-40c6-bcc6-efef0a914d39,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c015dd3-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.541 253665 DEBUG nova.objects.instance [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a65f84a-3072-4b94-b08a-0ba7b1529a07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.558 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  <uuid>3a65f84a-3072-4b94-b08a-0ba7b1529a07</uuid>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  <name>instance-00000092</name>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestGettingAddress-server-1502632540</nova:name>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:49:39</nova:creationTime>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:49:40 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:        <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:        <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:        <nova:port uuid="9c015dd3-d340-40c6-bcc6-efef0a914d39">
Nov 22 04:49:40 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe7b:4756" ipVersion="6"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe7b:4756" ipVersion="6"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <entry name="serial">3a65f84a-3072-4b94-b08a-0ba7b1529a07</entry>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <entry name="uuid">3a65f84a-3072-4b94-b08a-0ba7b1529a07</entry>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk">
Nov 22 04:49:40 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:49:40 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk.config">
Nov 22 04:49:40 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:49:40 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:7b:47:56"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <target dev="tap9c015dd3-d3"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07/console.log" append="off"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:49:40 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:49:40 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:49:40 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:49:40 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.559 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Preparing to wait for external event network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.559 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.559 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.559 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.560 253665 DEBUG nova.virt.libvirt.vif [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:49:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1502632540',display_name='tempest-TestGettingAddress-server-1502632540',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1502632540',id=146,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcB35ow6uk6IMlUBwbOGuOK3V7CtaZ2yJV3EZplxoxOQmEddDgKs5J+v7KXl9WfxkSmq+Acn+6POKmEHRfjGgaOghqPwK+UcBY92I7fBGtxwwkl4TxWcumLZptxfN80TA==',key_name='tempest-TestGettingAddress-174680913',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-97p9p4ep',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:49:35Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=3a65f84a-3072-4b94-b08a-0ba7b1529a07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.560 253665 DEBUG nova.network.os_vif_util [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.561 253665 DEBUG nova.network.os_vif_util [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:47:56,bridge_name='br-int',has_traffic_filtering=True,id=9c015dd3-d340-40c6-bcc6-efef0a914d39,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c015dd3-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.561 253665 DEBUG os_vif [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:47:56,bridge_name='br-int',has_traffic_filtering=True,id=9c015dd3-d340-40c6-bcc6-efef0a914d39,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c015dd3-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.562 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.562 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.563 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.568 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.569 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9c015dd3-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.570 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9c015dd3-d3, col_values=(('external_ids', {'iface-id': '9c015dd3-d340-40c6-bcc6-efef0a914d39', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:47:56', 'vm-uuid': '3a65f84a-3072-4b94-b08a-0ba7b1529a07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.571 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:40 np0005532048 NetworkManager[48920]: <info>  [1763804980.5725] manager: (tap9c015dd3-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/645)
Nov 22 04:49:40 np0005532048 systemd[1]: var-lib-containers-storage-overlay-38b5a28722f40cfa509fc7d3bcbf3e02cc7b9ad43c5b5eb0f79eb213932e1794-merged.mount: Deactivated successfully.
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.574 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.582 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.585 253665 INFO os_vif [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:47:56,bridge_name='br-int',has_traffic_filtering=True,id=9c015dd3-d340-40c6-bcc6-efef0a914d39,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c015dd3-d3')#033[00m
Nov 22 04:49:40 np0005532048 podman[404840]: 2025-11-22 09:49:40.608143626 +0000 UTC m=+1.293853470 container remove acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 22 04:49:40 np0005532048 systemd[1]: libpod-conmon-acd4bd21af0b35c4e929106432de0b714461e04d673675534ccf75b535c54d62.scope: Deactivated successfully.
Nov 22 04:49:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.666 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.667 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.667 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:7b:47:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.668 253665 INFO nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Using config drive#033[00m
Nov 22 04:49:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:49:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:49:40 np0005532048 nova_compute[253661]: 2025-11-22 09:49:40.694 253665 DEBUG nova.storage.rbd_utils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:49:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:49:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 2a044c15-b5b6-4a99-b839-54272f15b4e9 does not exist
Nov 22 04:49:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 6b65dd4a-9be5-472a-9e51-02175682cd7e does not exist
Nov 22 04:49:41 np0005532048 nova_compute[253661]: 2025-11-22 09:49:41.154 253665 INFO nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Creating config drive at /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07/disk.config#033[00m
Nov 22 04:49:41 np0005532048 nova_compute[253661]: 2025-11-22 09:49:41.159 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdtra0f3x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:49:41 np0005532048 nova_compute[253661]: 2025-11-22 09:49:41.307 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdtra0f3x" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:49:41 np0005532048 nova_compute[253661]: 2025-11-22 09:49:41.339 253665 DEBUG nova.storage.rbd_utils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:49:41 np0005532048 nova_compute[253661]: 2025-11-22 09:49:41.342 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07/disk.config 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:49:41 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:49:41 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:49:41 np0005532048 nova_compute[253661]: 2025-11-22 09:49:41.722 253665 DEBUG oslo_concurrency.processutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07/disk.config 3a65f84a-3072-4b94-b08a-0ba7b1529a07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:49:41 np0005532048 nova_compute[253661]: 2025-11-22 09:49:41.723 253665 INFO nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Deleting local config drive /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07/disk.config because it was imported into RBD.#033[00m
Nov 22 04:49:41 np0005532048 kernel: tap9c015dd3-d3: entered promiscuous mode
Nov 22 04:49:41 np0005532048 NetworkManager[48920]: <info>  [1763804981.7675] manager: (tap9c015dd3-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/646)
Nov 22 04:49:41 np0005532048 nova_compute[253661]: 2025-11-22 09:49:41.767 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:41 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:41Z|01584|binding|INFO|Claiming lport 9c015dd3-d340-40c6-bcc6-efef0a914d39 for this chassis.
Nov 22 04:49:41 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:41Z|01585|binding|INFO|9c015dd3-d340-40c6-bcc6-efef0a914d39: Claiming fa:16:3e:7b:47:56 10.100.0.9 2001:db8:0:1:f816:3eff:fe7b:4756 2001:db8::f816:3eff:fe7b:4756
Nov 22 04:49:41 np0005532048 nova_compute[253661]: 2025-11-22 09:49:41.772 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.783 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:47:56 10.100.0.9 2001:db8:0:1:f816:3eff:fe7b:4756 2001:db8::f816:3eff:fe7b:4756'], port_security=['fa:16:3e:7b:47:56 10.100.0.9 2001:db8:0:1:f816:3eff:fe7b:4756 2001:db8::f816:3eff:fe7b:4756'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8:0:1:f816:3eff:fe7b:4756/64 2001:db8::f816:3eff:fe7b:4756/64', 'neutron:device_id': '3a65f84a-3072-4b94-b08a-0ba7b1529a07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a71aa19e-d298-43f1-b9d0-7f952a63c1fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=572cc1a4-6889-45f5-9ccb-1d24fa3ab232, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=9c015dd3-d340-40c6-bcc6-efef0a914d39) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:49:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.784 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 9c015dd3-d340-40c6-bcc6-efef0a914d39 in datapath 9b64819a-274e-4eb7-988b-ceb1ea73c9ce bound to our chassis#033[00m
Nov 22 04:49:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.786 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9b64819a-274e-4eb7-988b-ceb1ea73c9ce#033[00m
Nov 22 04:49:41 np0005532048 systemd-udevd[405089]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:49:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.798 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cf091125-1ee7-46e5-bcc5-4dad3ff3eee1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.799 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9b64819a-21 in ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:49:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.800 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9b64819a-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:49:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.800 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[db69ebf5-276c-4a21-adfd-ea9bb7978a59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.801 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[18769b0c-9c6b-4bbf-8a03-af27f228634d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:41 np0005532048 systemd-machined[215941]: New machine qemu-178-instance-00000092.
Nov 22 04:49:41 np0005532048 NetworkManager[48920]: <info>  [1763804981.8107] device (tap9c015dd3-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:49:41 np0005532048 NetworkManager[48920]: <info>  [1763804981.8121] device (tap9c015dd3-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:49:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.814 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[5da6c25c-aea2-4c3e-8f36-fd16aeea96d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:41 np0005532048 systemd[1]: Started Virtual Machine qemu-178-instance-00000092.
Nov 22 04:49:41 np0005532048 nova_compute[253661]: 2025-11-22 09:49:41.837 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.840 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a24a2d0b-0215-4c85-bba8-d15769408fcc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:41 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:41Z|01586|binding|INFO|Setting lport 9c015dd3-d340-40c6-bcc6-efef0a914d39 ovn-installed in OVS
Nov 22 04:49:41 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:41Z|01587|binding|INFO|Setting lport 9c015dd3-d340-40c6-bcc6-efef0a914d39 up in Southbound
Nov 22 04:49:41 np0005532048 nova_compute[253661]: 2025-11-22 09:49:41.845 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.875 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c9baa8-c369-4cc4-a902-4deff76131c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:41 np0005532048 systemd-udevd[405094]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:49:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.881 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[803087e8-f125-41e9-b7c4-9dc8f6d40210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:41 np0005532048 NetworkManager[48920]: <info>  [1763804981.8829] manager: (tap9b64819a-20): new Veth device (/org/freedesktop/NetworkManager/Devices/647)
Nov 22 04:49:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.923 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad99793-3d94-4f75-98f6-717e07b2ff75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.926 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9a66ba16-5551-4e7d-9147-4de88d1a6666]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:41 np0005532048 NetworkManager[48920]: <info>  [1763804981.9602] device (tap9b64819a-20): carrier: link connected
Nov 22 04:49:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.965 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5f80acab-d80e-4249-a0d0-1d0556a3972b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:41 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:41.983 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[281da5a1-1f6f-4e21-b0e0-de3daf66a601]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b64819a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:d3:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783269, 'reachable_time': 15193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405123, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.004 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f4fa1afe-bcc2-46b2-9dbb-ab2e7ebedfd0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:d32f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 783269, 'tstamp': 783269}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405124, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.017 253665 DEBUG nova.network.neutron [req-03e63020-ccc1-4af2-821e-f6134a5b06fc req-9ffce517-ca41-4284-aab0-21cb9b29ef4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updated VIF entry in instance network info cache for port 9c015dd3-d340-40c6-bcc6-efef0a914d39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.017 253665 DEBUG nova.network.neutron [req-03e63020-ccc1-4af2-821e-f6134a5b06fc req-9ffce517-ca41-4284-aab0-21cb9b29ef4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updating instance_info_cache with network_info: [{"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.025 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fb495800-0541-44f1-890c-0ddf621a38ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b64819a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:d3:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783269, 'reachable_time': 15193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 405125, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.035 253665 DEBUG oslo_concurrency.lockutils [req-03e63020-ccc1-4af2-821e-f6134a5b06fc req-9ffce517-ca41-4284-aab0-21cb9b29ef4e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.070 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e84d19cf-739b-4d1c-a8f7-f84b0041a6d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.142 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe9ac65-3673-4d40-b964-df78745f9ca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.144 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b64819a-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.144 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.145 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b64819a-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.148 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:42 np0005532048 kernel: tap9b64819a-20: entered promiscuous mode
Nov 22 04:49:42 np0005532048 NetworkManager[48920]: <info>  [1763804982.1494] manager: (tap9b64819a-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/648)
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.151 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9b64819a-20, col_values=(('external_ids', {'iface-id': 'da001788-faa3-412b-9b6a-82fe1a808a87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:49:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:42Z|01588|binding|INFO|Releasing lport da001788-faa3-412b-9b6a-82fe1a808a87 from this chassis (sb_readonly=0)
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.153 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.169 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.171 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9b64819a-274e-4eb7-988b-ceb1ea73c9ce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9b64819a-274e-4eb7-988b-ceb1ea73c9ce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.172 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eb850b4d-b64e-430e-9b42-2a3e08cbf1da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.173 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-9b64819a-274e-4eb7-988b-ceb1ea73c9ce
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/9b64819a-274e-4eb7-988b-ceb1ea73c9ce.pid.haproxy
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 9b64819a-274e-4eb7-988b-ceb1ea73c9ce
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:49:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:42.174 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'env', 'PROCESS_TAG=haproxy-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9b64819a-274e-4eb7-988b-ceb1ea73c9ce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.217 253665 DEBUG nova.compute.manager [req-b7d622f9-aa23-4769-9864-f879305baeef req-2e6363eb-4c86-4a00-b103-052b637ae0f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.218 253665 DEBUG oslo_concurrency.lockutils [req-b7d622f9-aa23-4769-9864-f879305baeef req-2e6363eb-4c86-4a00-b103-052b637ae0f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.218 253665 DEBUG oslo_concurrency.lockutils [req-b7d622f9-aa23-4769-9864-f879305baeef req-2e6363eb-4c86-4a00-b103-052b637ae0f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.218 253665 DEBUG oslo_concurrency.lockutils [req-b7d622f9-aa23-4769-9864-f879305baeef req-2e6363eb-4c86-4a00-b103-052b637ae0f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.218 253665 DEBUG nova.compute.manager [req-b7d622f9-aa23-4769-9864-f879305baeef req-2e6363eb-4c86-4a00-b103-052b637ae0f3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Processing event network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.231 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804982.2310464, 3a65f84a-3072-4b94-b08a-0ba7b1529a07 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.232 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] VM Started (Lifecycle Event)#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.234 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.237 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.242 253665 INFO nova.virt.libvirt.driver [-] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Instance spawned successfully.#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.242 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.246 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.249 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.260 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.260 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.260 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.261 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.261 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.261 253665 DEBUG nova.virt.libvirt.driver [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.264 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.265 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804982.2312224, 3a65f84a-3072-4b94-b08a-0ba7b1529a07 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.265 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:49:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.288 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.292 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804982.2370152, 3a65f84a-3072-4b94-b08a-0ba7b1529a07 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.292 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.293 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.316 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.319 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.341 253665 INFO nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Took 6.82 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.342 253665 DEBUG nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.343 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.424 253665 INFO nova.compute.manager [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Took 7.82 seconds to build instance.#033[00m
Nov 22 04:49:42 np0005532048 nova_compute[253661]: 2025-11-22 09:49:42.441 253665 DEBUG oslo_concurrency.lockutils [None req-1666144a-eb05-4bd1-9c7a-9bdb0a2e9a6a 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:42 np0005532048 podman[405199]: 2025-11-22 09:49:42.517094843 +0000 UTC m=+0.023657055 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:49:42 np0005532048 podman[405199]: 2025-11-22 09:49:42.651356619 +0000 UTC m=+0.157918811 container create e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 04:49:42 np0005532048 systemd[1]: Started libpod-conmon-e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e.scope.
Nov 22 04:49:42 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:49:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d720b8b444377e6f53dfc930e4e7925a5c19a33403959e8730a0a92df22382/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:49:42 np0005532048 podman[405199]: 2025-11-22 09:49:42.790604848 +0000 UTC m=+0.297167040 container init e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:49:42 np0005532048 podman[405199]: 2025-11-22 09:49:42.797653742 +0000 UTC m=+0.304215934 container start e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:49:42 np0005532048 neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce[405212]: [NOTICE]   (405216) : New worker (405218) forked
Nov 22 04:49:42 np0005532048 neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce[405212]: [NOTICE]   (405216) : Loading success.
Nov 22 04:49:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.039 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.040 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.061 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.137 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.137 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.144 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.144 253665 INFO nova.compute.claims [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.265 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:49:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:49:43 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3012163552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.702 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.708 253665 DEBUG nova.compute.provider_tree [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.722 253665 DEBUG nova.scheduler.client.report [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.747 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.748 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.807 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.807 253665 DEBUG nova.network.neutron [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.845 253665 INFO nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.864 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.975 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.976 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:49:43 np0005532048 nova_compute[253661]: 2025-11-22 09:49:43.976 253665 INFO nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Creating image(s)
Nov 22 04:49:44 np0005532048 nova_compute[253661]: 2025-11-22 09:49:44.002 253665 DEBUG nova.storage.rbd_utils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] rbd image 027bdffc-9e8e-4a33-9b06-844890912dc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:49:44 np0005532048 nova_compute[253661]: 2025-11-22 09:49:44.023 253665 DEBUG nova.storage.rbd_utils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] rbd image 027bdffc-9e8e-4a33-9b06-844890912dc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:49:44 np0005532048 nova_compute[253661]: 2025-11-22 09:49:44.044 253665 DEBUG nova.storage.rbd_utils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] rbd image 027bdffc-9e8e-4a33-9b06-844890912dc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:49:44 np0005532048 nova_compute[253661]: 2025-11-22 09:49:44.047 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:49:44 np0005532048 nova_compute[253661]: 2025-11-22 09:49:44.128 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:49:44 np0005532048 nova_compute[253661]: 2025-11-22 09:49:44.129 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:49:44 np0005532048 nova_compute[253661]: 2025-11-22 09:49:44.130 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:49:44 np0005532048 nova_compute[253661]: 2025-11-22 09:49:44.130 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:49:44 np0005532048 nova_compute[253661]: 2025-11-22 09:49:44.152 253665 DEBUG nova.storage.rbd_utils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] rbd image 027bdffc-9e8e-4a33-9b06-844890912dc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:49:44 np0005532048 nova_compute[253661]: 2025-11-22 09:49:44.157 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 027bdffc-9e8e-4a33-9b06-844890912dc9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:49:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 22 04:49:44 np0005532048 nova_compute[253661]: 2025-11-22 09:49:44.288 253665 DEBUG nova.compute.manager [req-777dbc7c-c37a-4641-ac30-4bcc220e6072 req-03e07904-dcb6-4bf6-91c1-8133d10ae1c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:49:44 np0005532048 nova_compute[253661]: 2025-11-22 09:49:44.289 253665 DEBUG oslo_concurrency.lockutils [req-777dbc7c-c37a-4641-ac30-4bcc220e6072 req-03e07904-dcb6-4bf6-91c1-8133d10ae1c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:49:44 np0005532048 nova_compute[253661]: 2025-11-22 09:49:44.289 253665 DEBUG oslo_concurrency.lockutils [req-777dbc7c-c37a-4641-ac30-4bcc220e6072 req-03e07904-dcb6-4bf6-91c1-8133d10ae1c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:49:44 np0005532048 nova_compute[253661]: 2025-11-22 09:49:44.289 253665 DEBUG oslo_concurrency.lockutils [req-777dbc7c-c37a-4641-ac30-4bcc220e6072 req-03e07904-dcb6-4bf6-91c1-8133d10ae1c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:49:44 np0005532048 nova_compute[253661]: 2025-11-22 09:49:44.289 253665 DEBUG nova.compute.manager [req-777dbc7c-c37a-4641-ac30-4bcc220e6072 req-03e07904-dcb6-4bf6-91c1-8133d10ae1c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] No waiting events found dispatching network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:49:44 np0005532048 nova_compute[253661]: 2025-11-22 09:49:44.289 253665 WARNING nova.compute.manager [req-777dbc7c-c37a-4641-ac30-4bcc220e6072 req-03e07904-dcb6-4bf6-91c1-8133d10ae1c6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received unexpected event network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 for instance with vm_state active and task_state None.
Nov 22 04:49:44 np0005532048 podman[405340]: 2025-11-22 09:49:44.393838346 +0000 UTC m=+0.083598385 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:49:44 np0005532048 nova_compute[253661]: 2025-11-22 09:49:44.957 253665 DEBUG nova.policy [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1edb692a8ff443038839784febd964b1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6ffacc46512445d8b5c24899a0053196', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:49:45 np0005532048 nova_compute[253661]: 2025-11-22 09:49:45.264 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 027bdffc-9e8e-4a33-9b06-844890912dc9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:49:45 np0005532048 nova_compute[253661]: 2025-11-22 09:49:45.331 253665 DEBUG nova.storage.rbd_utils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] resizing rbd image 027bdffc-9e8e-4a33-9b06-844890912dc9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:49:45 np0005532048 nova_compute[253661]: 2025-11-22 09:49:45.573 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:49:45 np0005532048 nova_compute[253661]: 2025-11-22 09:49:45.619 253665 DEBUG nova.objects.instance [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lazy-loading 'migration_context' on Instance uuid 027bdffc-9e8e-4a33-9b06-844890912dc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:49:45 np0005532048 nova_compute[253661]: 2025-11-22 09:49:45.635 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:49:45 np0005532048 nova_compute[253661]: 2025-11-22 09:49:45.635 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Ensure instance console log exists: /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:49:45 np0005532048 nova_compute[253661]: 2025-11-22 09:49:45.636 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:49:45 np0005532048 nova_compute[253661]: 2025-11-22 09:49:45.636 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:49:45 np0005532048 nova_compute[253661]: 2025-11-22 09:49:45.636 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:49:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 22 04:49:46 np0005532048 nova_compute[253661]: 2025-11-22 09:49:46.838 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:49:46 np0005532048 NetworkManager[48920]: <info>  [1763804986.8400] manager: (patch-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/649)
Nov 22 04:49:46 np0005532048 NetworkManager[48920]: <info>  [1763804986.8411] manager: (patch-br-int-to-provnet-0b764f0d-bbdf-439e-97ab-c0931e058bda): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/650)
Nov 22 04:49:46 np0005532048 nova_compute[253661]: 2025-11-22 09:49:46.957 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:49:46 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:46Z|01589|binding|INFO|Releasing lport da001788-faa3-412b-9b6a-82fe1a808a87 from this chassis (sb_readonly=0)
Nov 22 04:49:46 np0005532048 nova_compute[253661]: 2025-11-22 09:49:46.971 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:49:47 np0005532048 nova_compute[253661]: 2025-11-22 09:49:47.049 253665 DEBUG nova.network.neutron [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Successfully created port: 62358b95-9f4a-404c-8165-dc98c7e3b042 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:49:47 np0005532048 nova_compute[253661]: 2025-11-22 09:49:47.121 253665 DEBUG nova.compute.manager [req-ad274c3b-c9d9-461e-a730-9aa1d0d2bcf1 req-12f6b4d7-7705-4ecc-ad45-1449c0cfcd10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-changed-9c015dd3-d340-40c6-bcc6-efef0a914d39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:49:47 np0005532048 nova_compute[253661]: 2025-11-22 09:49:47.122 253665 DEBUG nova.compute.manager [req-ad274c3b-c9d9-461e-a730-9aa1d0d2bcf1 req-12f6b4d7-7705-4ecc-ad45-1449c0cfcd10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Refreshing instance network info cache due to event network-changed-9c015dd3-d340-40c6-bcc6-efef0a914d39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:49:47 np0005532048 nova_compute[253661]: 2025-11-22 09:49:47.122 253665 DEBUG oslo_concurrency.lockutils [req-ad274c3b-c9d9-461e-a730-9aa1d0d2bcf1 req-12f6b4d7-7705-4ecc-ad45-1449c0cfcd10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:49:47 np0005532048 nova_compute[253661]: 2025-11-22 09:49:47.123 253665 DEBUG oslo_concurrency.lockutils [req-ad274c3b-c9d9-461e-a730-9aa1d0d2bcf1 req-12f6b4d7-7705-4ecc-ad45-1449c0cfcd10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:49:47 np0005532048 nova_compute[253661]: 2025-11-22 09:49:47.123 253665 DEBUG nova.network.neutron [req-ad274c3b-c9d9-461e-a730-9aa1d0d2bcf1 req-12f6b4d7-7705-4ecc-ad45-1449c0cfcd10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Refreshing network info cache for port 9c015dd3-d340-40c6-bcc6-efef0a914d39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:49:47 np0005532048 nova_compute[253661]: 2025-11-22 09:49:47.253 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:49:47 np0005532048 nova_compute[253661]: 2025-11-22 09:49:47.255 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:49:47 np0005532048 nova_compute[253661]: 2025-11-22 09:49:47.255 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:49:47 np0005532048 nova_compute[253661]: 2025-11-22 09:49:47.269 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 04:49:47 np0005532048 nova_compute[253661]: 2025-11-22 09:49:47.295 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:49:47 np0005532048 nova_compute[253661]: 2025-11-22 09:49:47.520 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:49:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:49:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Nov 22 04:49:48 np0005532048 nova_compute[253661]: 2025-11-22 09:49:48.681 253665 DEBUG nova.network.neutron [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Successfully updated port: 62358b95-9f4a-404c-8165-dc98c7e3b042 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:49:48 np0005532048 nova_compute[253661]: 2025-11-22 09:49:48.695 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:49:48 np0005532048 nova_compute[253661]: 2025-11-22 09:49:48.696 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquired lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:49:48 np0005532048 nova_compute[253661]: 2025-11-22 09:49:48.696 253665 DEBUG nova.network.neutron [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:49:48 np0005532048 nova_compute[253661]: 2025-11-22 09:49:48.766 253665 DEBUG nova.compute.manager [req-37cb27e3-334d-448f-a87a-3872057b04f8 req-c872b7c8-6bc8-4b16-bd09-5ce3d1cd3a31 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-changed-62358b95-9f4a-404c-8165-dc98c7e3b042 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:49:48 np0005532048 nova_compute[253661]: 2025-11-22 09:49:48.767 253665 DEBUG nova.compute.manager [req-37cb27e3-334d-448f-a87a-3872057b04f8 req-c872b7c8-6bc8-4b16-bd09-5ce3d1cd3a31 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Refreshing instance network info cache due to event network-changed-62358b95-9f4a-404c-8165-dc98c7e3b042. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:49:48 np0005532048 nova_compute[253661]: 2025-11-22 09:49:48.768 253665 DEBUG oslo_concurrency.lockutils [req-37cb27e3-334d-448f-a87a-3872057b04f8 req-c872b7c8-6bc8-4b16-bd09-5ce3d1cd3a31 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:49:48 np0005532048 nova_compute[253661]: 2025-11-22 09:49:48.828 253665 DEBUG nova.network.neutron [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:49:48 np0005532048 nova_compute[253661]: 2025-11-22 09:49:48.979 253665 DEBUG nova.network.neutron [req-ad274c3b-c9d9-461e-a730-9aa1d0d2bcf1 req-12f6b4d7-7705-4ecc-ad45-1449c0cfcd10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updated VIF entry in instance network info cache for port 9c015dd3-d340-40c6-bcc6-efef0a914d39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:49:48 np0005532048 nova_compute[253661]: 2025-11-22 09:49:48.980 253665 DEBUG nova.network.neutron [req-ad274c3b-c9d9-461e-a730-9aa1d0d2bcf1 req-12f6b4d7-7705-4ecc-ad45-1449c0cfcd10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updating instance_info_cache with network_info: [{"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:49:48 np0005532048 nova_compute[253661]: 2025-11-22 09:49:48.997 253665 DEBUG oslo_concurrency.lockutils [req-ad274c3b-c9d9-461e-a730-9aa1d0d2bcf1 req-12f6b4d7-7705-4ecc-ad45-1449c0cfcd10 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:49:48 np0005532048 nova_compute[253661]: 2025-11-22 09:49:48.998 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:49:48 np0005532048 nova_compute[253661]: 2025-11-22 09:49:48.998 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:49:48 np0005532048 nova_compute[253661]: 2025-11-22 09:49:48.999 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a65f84a-3072-4b94-b08a-0ba7b1529a07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:49:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.576 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.937 253665 DEBUG nova.network.neutron [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updating instance_info_cache with network_info: [{"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.955 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Releasing lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.956 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Instance network_info: |[{"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.957 253665 DEBUG oslo_concurrency.lockutils [req-37cb27e3-334d-448f-a87a-3872057b04f8 req-c872b7c8-6bc8-4b16-bd09-5ce3d1cd3a31 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.958 253665 DEBUG nova.network.neutron [req-37cb27e3-334d-448f-a87a-3872057b04f8 req-c872b7c8-6bc8-4b16-bd09-5ce3d1cd3a31 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Refreshing network info cache for port 62358b95-9f4a-404c-8165-dc98c7e3b042 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.964 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Start _get_guest_xml network_info=[{"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.971 253665 WARNING nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.981 253665 DEBUG nova.virt.libvirt.host [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.982 253665 DEBUG nova.virt.libvirt.host [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.987 253665 DEBUG nova.virt.libvirt.host [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.988 253665 DEBUG nova.virt.libvirt.host [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.988 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.989 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.990 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.991 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.991 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.992 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.992 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.993 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.994 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.994 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.995 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:49:50 np0005532048 nova_compute[253661]: 2025-11-22 09:49:50.995 253665 DEBUG nova.virt.hardware [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:49:51 np0005532048 nova_compute[253661]: 2025-11-22 09:49:51.000 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:49:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:49:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2782051890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:49:51 np0005532048 nova_compute[253661]: 2025-11-22 09:49:51.491 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:49:51 np0005532048 nova_compute[253661]: 2025-11-22 09:49:51.515 253665 DEBUG nova.storage.rbd_utils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] rbd image 027bdffc-9e8e-4a33-9b06-844890912dc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:49:51 np0005532048 nova_compute[253661]: 2025-11-22 09:49:51.519 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:49:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:49:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2246561523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:49:51 np0005532048 nova_compute[253661]: 2025-11-22 09:49:51.973 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:49:51 np0005532048 nova_compute[253661]: 2025-11-22 09:49:51.976 253665 DEBUG nova.virt.libvirt.vif [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:49:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1978624834',display_name='tempest-TestSnapshotPattern-server-1978624834',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1978624834',id=147,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPbs4cvcme9ivACjshW3GrHRutsNNtC8JsYxZJpO7Wdm0wymVGG4uq7MUY+cUVsrxl6cn1THXZxHPADM3ZJF4hahzevBsWxtyjQn+l0NA1XlnmuhoCdb7kymP1eYu1QPUA==',key_name='tempest-TestSnapshotPattern-1057806612',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ffacc46512445d8b5c24899a0053196',ramdisk_id='',reservation_id='r-c1keooq8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-98475773',owner_user_name='tempest-TestSnapshotPattern-98475773-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:49:43Z,user_data=None,user_id='1edb692a8ff443038839784febd964b1',uuid=027bdffc-9e8e-4a33-9b06-844890912dc9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:49:51 np0005532048 nova_compute[253661]: 2025-11-22 09:49:51.977 253665 DEBUG nova.network.os_vif_util [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Converting VIF {"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:49:51 np0005532048 nova_compute[253661]: 2025-11-22 09:49:51.978 253665 DEBUG nova.network.os_vif_util [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a0:65,bridge_name='br-int',has_traffic_filtering=True,id=62358b95-9f4a-404c-8165-dc98c7e3b042,network=Network(768d62d5-f993-4383-9edf-3d68f19e409c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62358b95-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:49:51 np0005532048 nova_compute[253661]: 2025-11-22 09:49:51.980 253665 DEBUG nova.objects.instance [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lazy-loading 'pci_devices' on Instance uuid 027bdffc-9e8e-4a33-9b06-844890912dc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:49:51 np0005532048 nova_compute[253661]: 2025-11-22 09:49:51.994 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:49:51 np0005532048 nova_compute[253661]:  <uuid>027bdffc-9e8e-4a33-9b06-844890912dc9</uuid>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:  <name>instance-00000093</name>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestSnapshotPattern-server-1978624834</nova:name>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:49:50</nova:creationTime>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:49:51 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:        <nova:user uuid="1edb692a8ff443038839784febd964b1">tempest-TestSnapshotPattern-98475773-project-member</nova:user>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:        <nova:project uuid="6ffacc46512445d8b5c24899a0053196">tempest-TestSnapshotPattern-98475773</nova:project>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:        <nova:port uuid="62358b95-9f4a-404c-8165-dc98c7e3b042">
Nov 22 04:49:51 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <entry name="serial">027bdffc-9e8e-4a33-9b06-844890912dc9</entry>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <entry name="uuid">027bdffc-9e8e-4a33-9b06-844890912dc9</entry>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/027bdffc-9e8e-4a33-9b06-844890912dc9_disk">
Nov 22 04:49:51 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:49:51 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/027bdffc-9e8e-4a33-9b06-844890912dc9_disk.config">
Nov 22 04:49:51 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:49:51 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:bc:a0:65"/>
Nov 22 04:49:51 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:      <target dev="tap62358b95-9f"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:49:52 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9/console.log" append="off"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:49:52 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:49:52 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:49:52 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:49:52 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:49:52 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.002 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Preparing to wait for external event network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.003 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.004 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.004 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.005 253665 DEBUG nova.virt.libvirt.vif [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:49:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1978624834',display_name='tempest-TestSnapshotPattern-server-1978624834',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1978624834',id=147,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPbs4cvcme9ivACjshW3GrHRutsNNtC8JsYxZJpO7Wdm0wymVGG4uq7MUY+cUVsrxl6cn1THXZxHPADM3ZJF4hahzevBsWxtyjQn+l0NA1XlnmuhoCdb7kymP1eYu1QPUA==',key_name='tempest-TestSnapshotPattern-1057806612',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ffacc46512445d8b5c24899a0053196',ramdisk_id='',reservation_id='r-c1keooq8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-98475773',owner_user_name='tempest-TestSnapshotPattern-98475773-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:49:43Z,user_data=None,user_id='1edb692a8ff443038839784febd964b1',uuid=027bdffc-9e8e-4a33-9b06-844890912dc9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.006 253665 DEBUG nova.network.os_vif_util [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Converting VIF {"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.007 253665 DEBUG nova.network.os_vif_util [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a0:65,bridge_name='br-int',has_traffic_filtering=True,id=62358b95-9f4a-404c-8165-dc98c7e3b042,network=Network(768d62d5-f993-4383-9edf-3d68f19e409c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62358b95-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.007 253665 DEBUG os_vif [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a0:65,bridge_name='br-int',has_traffic_filtering=True,id=62358b95-9f4a-404c-8165-dc98c7e3b042,network=Network(768d62d5-f993-4383-9edf-3d68f19e409c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62358b95-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.008 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.009 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.009 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.012 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.012 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62358b95-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.013 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap62358b95-9f, col_values=(('external_ids', {'iface-id': '62358b95-9f4a-404c-8165-dc98c7e3b042', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:a0:65', 'vm-uuid': '027bdffc-9e8e-4a33-9b06-844890912dc9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.014 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:52 np0005532048 NetworkManager[48920]: <info>  [1763804992.0158] manager: (tap62358b95-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/651)
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.018 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.024 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.025 253665 INFO os_vif [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a0:65,bridge_name='br-int',has_traffic_filtering=True,id=62358b95-9f4a-404c-8165-dc98c7e3b042,network=Network(768d62d5-f993-4383-9edf-3d68f19e409c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62358b95-9f')#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.089 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.090 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.091 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] No VIF found with MAC fa:16:3e:bc:a0:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.092 253665 INFO nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Using config drive#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.117 253665 DEBUG nova.storage.rbd_utils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] rbd image 027bdffc-9e8e-4a33-9b06-844890912dc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:49:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 04:49:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:49:52
Nov 22 04:49:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:49:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:49:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'backups', '.mgr', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'default.rgw.log']
Nov 22 04:49:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.297 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:49:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:49:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:49:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:49:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:49:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:49:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.951 253665 INFO nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Creating config drive at /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9/disk.config#033[00m
Nov 22 04:49:52 np0005532048 nova_compute[253661]: 2025-11-22 09:49:52.958 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7z6g3r12 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.013 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updating instance_info_cache with network_info: [{"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.032 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.033 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.034 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.122 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7z6g3r12" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.150 253665 DEBUG nova.storage.rbd_utils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] rbd image 027bdffc-9e8e-4a33-9b06-844890912dc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.155 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9/disk.config 027bdffc-9e8e-4a33-9b06-844890912dc9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.187 253665 DEBUG nova.network.neutron [req-37cb27e3-334d-448f-a87a-3872057b04f8 req-c872b7c8-6bc8-4b16-bd09-5ce3d1cd3a31 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updated VIF entry in instance network info cache for port 62358b95-9f4a-404c-8165-dc98c7e3b042. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.188 253665 DEBUG nova.network.neutron [req-37cb27e3-334d-448f-a87a-3872057b04f8 req-c872b7c8-6bc8-4b16-bd09-5ce3d1cd3a31 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updating instance_info_cache with network_info: [{"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.204 253665 DEBUG oslo_concurrency.lockutils [req-37cb27e3-334d-448f-a87a-3872057b04f8 req-c872b7c8-6bc8-4b16-bd09-5ce3d1cd3a31 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.308 253665 DEBUG oslo_concurrency.processutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9/disk.config 027bdffc-9e8e-4a33-9b06-844890912dc9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.309 253665 INFO nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Deleting local config drive /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9/disk.config because it was imported into RBD.#033[00m
Nov 22 04:49:53 np0005532048 NetworkManager[48920]: <info>  [1763804993.3676] manager: (tap62358b95-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/652)
Nov 22 04:49:53 np0005532048 kernel: tap62358b95-9f: entered promiscuous mode
Nov 22 04:49:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:53Z|01590|binding|INFO|Claiming lport 62358b95-9f4a-404c-8165-dc98c7e3b042 for this chassis.
Nov 22 04:49:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:53Z|01591|binding|INFO|62358b95-9f4a-404c-8165-dc98c7e3b042: Claiming fa:16:3e:bc:a0:65 10.100.0.3
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.375 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.381 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:a0:65 10.100.0.3'], port_security=['fa:16:3e:bc:a0:65 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '027bdffc-9e8e-4a33-9b06-844890912dc9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-768d62d5-f993-4383-9edf-3d68f19e409c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ffacc46512445d8b5c24899a0053196', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aef6f84b-f5db-4e86-b5ce-afacad080f10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eea3b39d-a626-45c2-a32c-ad267efc3243, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=62358b95-9f4a-404c-8165-dc98c7e3b042) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.384 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 62358b95-9f4a-404c-8165-dc98c7e3b042 in datapath 768d62d5-f993-4383-9edf-3d68f19e409c bound to our chassis#033[00m
Nov 22 04:49:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:53Z|01592|binding|INFO|Setting lport 62358b95-9f4a-404c-8165-dc98c7e3b042 ovn-installed in OVS
Nov 22 04:49:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:53Z|01593|binding|INFO|Setting lport 62358b95-9f4a-404c-8165-dc98c7e3b042 up in Southbound
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.387 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 768d62d5-f993-4383-9edf-3d68f19e409c#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.401 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0a321c9c-45f1-40ce-bed2-cddb695f51ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.402 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap768d62d5-f1 in ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.405 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap768d62d5-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.405 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e421b477-4082-442f-8be0-4b910b09f835]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.406 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5d8444ad-198c-44e8-99fc-7ae3b30afbb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:53 np0005532048 systemd-udevd[405582]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:49:53 np0005532048 systemd-machined[215941]: New machine qemu-179-instance-00000093.
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.418 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[ee90c2e0-81d8-474a-9189-790519d23742]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:53 np0005532048 systemd[1]: Started Virtual Machine qemu-179-instance-00000093.
Nov 22 04:49:53 np0005532048 NetworkManager[48920]: <info>  [1763804993.4293] device (tap62358b95-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:49:53 np0005532048 NetworkManager[48920]: <info>  [1763804993.4305] device (tap62358b95-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.438 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c9de9c7c-9243-457e-a7dd-34c2639f528b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.473 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[68ce526f-26ce-485b-a670-f10e45f098c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:53 np0005532048 NetworkManager[48920]: <info>  [1763804993.4820] manager: (tap768d62d5-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/653)
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.481 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[63182330-9c10-44cc-ab31-a1765c6a2d56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.518 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[6578b734-2bff-47bf-b675-17566d952546]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.522 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9caf2b8b-801e-448d-9e4d-f8cb5cd84eb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:53 np0005532048 NetworkManager[48920]: <info>  [1763804993.5470] device (tap768d62d5-f0): carrier: link connected
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.551 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[630e83f7-4dc4-4f69-8706-55b2434d3c32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.569 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5f158187-47dc-4df5-bb5e-98b0b741f8dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap768d62d5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:b9:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 457], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784428, 'reachable_time': 28560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405614, 'error': None, 'target': 'ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.589 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[387e6895-071c-4de9-90dc-64b505b51efb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:b99d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 784428, 'tstamp': 784428}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405615, 'error': None, 'target': 'ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.606 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a12441-8fab-4f3f-9531-8ceb842df7cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap768d62d5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:b9:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 457], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784428, 'reachable_time': 28560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 405616, 'error': None, 'target': 'ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.645 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[e08ab24b-b145-4a93-8ef7-e6c8fc4e5c16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.741 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[7676d62c-af9f-4cfa-8d37-ad13933e861f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.742 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap768d62d5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.742 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.743 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap768d62d5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.744 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:53 np0005532048 kernel: tap768d62d5-f0: entered promiscuous mode
Nov 22 04:49:53 np0005532048 NetworkManager[48920]: <info>  [1763804993.7454] manager: (tap768d62d5-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/654)
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.747 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap768d62d5-f0, col_values=(('external_ids', {'iface-id': 'e20358df-1297-4b78-9482-59841121a4d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:49:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:53Z|01594|binding|INFO|Releasing lport e20358df-1297-4b78-9482-59841121a4d7 from this chassis (sb_readonly=0)
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.749 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/768d62d5-f993-4383-9edf-3d68f19e409c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/768d62d5-f993-4383-9edf-3d68f19e409c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.750 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c84b2508-32f4-4481-87a8-11738260d556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.751 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-768d62d5-f993-4383-9edf-3d68f19e409c
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/768d62d5-f993-4383-9edf-3d68f19e409c.pid.haproxy
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 768d62d5-f993-4383-9edf-3d68f19e409c
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:49:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:53.751 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c', 'env', 'PROCESS_TAG=haproxy-768d62d5-f993-4383-9edf-3d68f19e409c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/768d62d5-f993-4383-9edf-3d68f19e409c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:49:53 np0005532048 nova_compute[253661]: 2025-11-22 09:49:53.763 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:54 np0005532048 nova_compute[253661]: 2025-11-22 09:49:54.084 253665 DEBUG nova.compute.manager [req-01b8372a-adbb-4d06-98ee-16105c7a7b06 req-20930cc5-3d5d-4595-a2af-9d1bc654c677 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:49:54 np0005532048 nova_compute[253661]: 2025-11-22 09:49:54.086 253665 DEBUG oslo_concurrency.lockutils [req-01b8372a-adbb-4d06-98ee-16105c7a7b06 req-20930cc5-3d5d-4595-a2af-9d1bc654c677 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:49:54 np0005532048 nova_compute[253661]: 2025-11-22 09:49:54.087 253665 DEBUG oslo_concurrency.lockutils [req-01b8372a-adbb-4d06-98ee-16105c7a7b06 req-20930cc5-3d5d-4595-a2af-9d1bc654c677 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:49:54 np0005532048 nova_compute[253661]: 2025-11-22 09:49:54.087 253665 DEBUG oslo_concurrency.lockutils [req-01b8372a-adbb-4d06-98ee-16105c7a7b06 req-20930cc5-3d5d-4595-a2af-9d1bc654c677 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:54 np0005532048 nova_compute[253661]: 2025-11-22 09:49:54.087 253665 DEBUG nova.compute.manager [req-01b8372a-adbb-4d06-98ee-16105c7a7b06 req-20930cc5-3d5d-4595-a2af-9d1bc654c677 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Processing event network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:49:54 np0005532048 podman[405648]: 2025-11-22 09:49:54.16762839 +0000 UTC m=+0.089457111 container create 5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 04:49:54 np0005532048 podman[405648]: 2025-11-22 09:49:54.10971349 +0000 UTC m=+0.031542211 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:49:54 np0005532048 nova_compute[253661]: 2025-11-22 09:49:54.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:49:54 np0005532048 systemd[1]: Started libpod-conmon-5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab.scope.
Nov 22 04:49:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 04:49:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:49:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3a35e44d987f37dea2ee9bb4f5402e567d895145b73865842e34bbbf02d3f6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:49:54 np0005532048 podman[405648]: 2025-11-22 09:49:54.322578166 +0000 UTC m=+0.244406917 container init 5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:49:54 np0005532048 podman[405648]: 2025-11-22 09:49:54.32964739 +0000 UTC m=+0.251476111 container start 5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:49:54 np0005532048 neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c[405662]: [NOTICE]   (405666) : New worker (405668) forked
Nov 22 04:49:54 np0005532048 neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c[405662]: [NOTICE]   (405666) : Loading success.
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.067 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804995.0670545, 027bdffc-9e8e-4a33-9b06-844890912dc9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.068 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] VM Started (Lifecycle Event)#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.070 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.074 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.077 253665 INFO nova.virt.libvirt.driver [-] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Instance spawned successfully.#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.078 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.098 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.104 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.109 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.110 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.111 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.111 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.112 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.112 253665 DEBUG nova.virt.libvirt.driver [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.143 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.144 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804995.0672367, 027bdffc-9e8e-4a33-9b06-844890912dc9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.144 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.172 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.175 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763804995.0732617, 027bdffc-9e8e-4a33-9b06-844890912dc9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.175 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.192 253665 INFO nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Took 11.22 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.193 253665 DEBUG nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.196 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.203 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.229 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.253 253665 INFO nova.compute.manager [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Took 12.15 seconds to build instance.#033[00m
Nov 22 04:49:55 np0005532048 nova_compute[253661]: 2025-11-22 09:49:55.268 253665 DEBUG oslo_concurrency.lockutils [None req-3d1fdfcb-e3cc-4159-bd3d-1b0fa5f4bb5a 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:49:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:49:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:49:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:49:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:49:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:55Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:47:56 10.100.0.9
Nov 22 04:49:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:49:55Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:47:56 10.100.0.9
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.151 253665 DEBUG nova.compute.manager [req-5879cc65-bd54-4bc8-889f-a7ff3e6dae36 req-c7faf57b-984c-450d-95e2-061ac3dece45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.152 253665 DEBUG oslo_concurrency.lockutils [req-5879cc65-bd54-4bc8-889f-a7ff3e6dae36 req-c7faf57b-984c-450d-95e2-061ac3dece45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.152 253665 DEBUG oslo_concurrency.lockutils [req-5879cc65-bd54-4bc8-889f-a7ff3e6dae36 req-c7faf57b-984c-450d-95e2-061ac3dece45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.152 253665 DEBUG oslo_concurrency.lockutils [req-5879cc65-bd54-4bc8-889f-a7ff3e6dae36 req-c7faf57b-984c-450d-95e2-061ac3dece45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.153 253665 DEBUG nova.compute.manager [req-5879cc65-bd54-4bc8-889f-a7ff3e6dae36 req-c7faf57b-984c-450d-95e2-061ac3dece45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] No waiting events found dispatching network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.153 253665 WARNING nova.compute.manager [req-5879cc65-bd54-4bc8-889f-a7ff3e6dae36 req-c7faf57b-984c-450d-95e2-061ac3dece45 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received unexpected event network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.249 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.250 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.250 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:49:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Nov 22 04:49:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:49:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:49:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:49:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:49:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:49:56 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:49:56 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2811333471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.720 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.788 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.789 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.792 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.792 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.967 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.968 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3192MB free_disk=59.94662857055664GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.969 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:49:56 np0005532048 nova_compute[253661]: 2025-11-22 09:49:56.969 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:49:57 np0005532048 nova_compute[253661]: 2025-11-22 09:49:57.016 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:57 np0005532048 nova_compute[253661]: 2025-11-22 09:49:57.039 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 3a65f84a-3072-4b94-b08a-0ba7b1529a07 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:49:57 np0005532048 nova_compute[253661]: 2025-11-22 09:49:57.039 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 027bdffc-9e8e-4a33-9b06-844890912dc9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:49:57 np0005532048 nova_compute[253661]: 2025-11-22 09:49:57.040 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:49:57 np0005532048 nova_compute[253661]: 2025-11-22 09:49:57.040 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:49:57 np0005532048 nova_compute[253661]: 2025-11-22 09:49:57.116 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:49:57 np0005532048 nova_compute[253661]: 2025-11-22 09:49:57.300 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:49:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:49:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/307993980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:49:57 np0005532048 nova_compute[253661]: 2025-11-22 09:49:57.565 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:49:57 np0005532048 nova_compute[253661]: 2025-11-22 09:49:57.571 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:49:57 np0005532048 nova_compute[253661]: 2025-11-22 09:49:57.584 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:49:57 np0005532048 nova_compute[253661]: 2025-11-22 09:49:57.610 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:49:57 np0005532048 nova_compute[253661]: 2025-11-22 09:49:57.611 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:49:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:49:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 224 op/s
Nov 22 04:49:58 np0005532048 nova_compute[253661]: 2025-11-22 09:49:58.611 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:49:58 np0005532048 nova_compute[253661]: 2025-11-22 09:49:58.637 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:49:58 np0005532048 nova_compute[253661]: 2025-11-22 09:49:58.638 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:49:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:59.013 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:49:59 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:49:59.014 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:49:59 np0005532048 nova_compute[253661]: 2025-11-22 09:49:59.014 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Nov 22 04:50:01 np0005532048 nova_compute[253661]: 2025-11-22 09:50:01.150 253665 DEBUG nova.compute.manager [req-f0aad5e7-0fb8-4d2c-b47a-fffa84d96da0 req-20a1823b-8546-4a06-969c-c50e1322b4c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-changed-62358b95-9f4a-404c-8165-dc98c7e3b042 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:50:01 np0005532048 nova_compute[253661]: 2025-11-22 09:50:01.151 253665 DEBUG nova.compute.manager [req-f0aad5e7-0fb8-4d2c-b47a-fffa84d96da0 req-20a1823b-8546-4a06-969c-c50e1322b4c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Refreshing instance network info cache due to event network-changed-62358b95-9f4a-404c-8165-dc98c7e3b042. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:50:01 np0005532048 nova_compute[253661]: 2025-11-22 09:50:01.152 253665 DEBUG oslo_concurrency.lockutils [req-f0aad5e7-0fb8-4d2c-b47a-fffa84d96da0 req-20a1823b-8546-4a06-969c-c50e1322b4c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:50:01 np0005532048 nova_compute[253661]: 2025-11-22 09:50:01.152 253665 DEBUG oslo_concurrency.lockutils [req-f0aad5e7-0fb8-4d2c-b47a-fffa84d96da0 req-20a1823b-8546-4a06-969c-c50e1322b4c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:50:01 np0005532048 nova_compute[253661]: 2025-11-22 09:50:01.152 253665 DEBUG nova.network.neutron [req-f0aad5e7-0fb8-4d2c-b47a-fffa84d96da0 req-20a1823b-8546-4a06-969c-c50e1322b4c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Refreshing network info cache for port 62358b95-9f4a-404c-8165-dc98c7e3b042 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:50:02 np0005532048 nova_compute[253661]: 2025-11-22 09:50:02.051 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:02 np0005532048 nova_compute[253661]: 2025-11-22 09:50:02.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:50:02 np0005532048 nova_compute[253661]: 2025-11-22 09:50:02.303 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Nov 22 04:50:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011051429112131277 of space, bias 1.0, pg target 0.3315428733639383 quantized to 32 (current 32)
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:50:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:50:03 np0005532048 nova_compute[253661]: 2025-11-22 09:50:03.128 253665 DEBUG nova.network.neutron [req-f0aad5e7-0fb8-4d2c-b47a-fffa84d96da0 req-20a1823b-8546-4a06-969c-c50e1322b4c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updated VIF entry in instance network info cache for port 62358b95-9f4a-404c-8165-dc98c7e3b042. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:50:03 np0005532048 nova_compute[253661]: 2025-11-22 09:50:03.129 253665 DEBUG nova.network.neutron [req-f0aad5e7-0fb8-4d2c-b47a-fffa84d96da0 req-20a1823b-8546-4a06-969c-c50e1322b4c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updating instance_info_cache with network_info: [{"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:50:03 np0005532048 nova_compute[253661]: 2025-11-22 09:50:03.144 253665 DEBUG oslo_concurrency.lockutils [req-f0aad5e7-0fb8-4d2c-b47a-fffa84d96da0 req-20a1823b-8546-4a06-969c-c50e1322b4c9 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:50:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Nov 22 04:50:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:05.017 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:50:05 np0005532048 nova_compute[253661]: 2025-11-22 09:50:05.638 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "34b8226a-40bd-46d4-99ee-1be44f56e142" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:50:05 np0005532048 nova_compute[253661]: 2025-11-22 09:50:05.639 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:50:05 np0005532048 nova_compute[253661]: 2025-11-22 09:50:05.657 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:50:05 np0005532048 nova_compute[253661]: 2025-11-22 09:50:05.717 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:50:05 np0005532048 nova_compute[253661]: 2025-11-22 09:50:05.717 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:50:05 np0005532048 nova_compute[253661]: 2025-11-22 09:50:05.725 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:50:05 np0005532048 nova_compute[253661]: 2025-11-22 09:50:05.725 253665 INFO nova.compute.claims [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:50:05 np0005532048 nova_compute[253661]: 2025-11-22 09:50:05.875 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:50:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Nov 22 04:50:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:50:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2469611548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.372 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.378 253665 DEBUG nova.compute.provider_tree [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.393 253665 DEBUG nova.scheduler.client.report [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.421 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.422 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.471 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.472 253665 DEBUG nova.network.neutron [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.498 253665 INFO nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.514 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.599 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.601 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.602 253665 INFO nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Creating image(s)#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.623 253665 DEBUG nova.storage.rbd_utils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 34b8226a-40bd-46d4-99ee-1be44f56e142_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.649 253665 DEBUG nova.storage.rbd_utils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 34b8226a-40bd-46d4-99ee-1be44f56e142_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.672 253665 DEBUG nova.storage.rbd_utils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 34b8226a-40bd-46d4-99ee-1be44f56e142_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.676 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.793 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.794 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.796 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.797 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.825 253665 DEBUG nova.storage.rbd_utils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 34b8226a-40bd-46d4-99ee-1be44f56e142_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.830 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 34b8226a-40bd-46d4-99ee-1be44f56e142_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:50:06 np0005532048 nova_compute[253661]: 2025-11-22 09:50:06.980 253665 DEBUG nova.policy [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:50:07 np0005532048 nova_compute[253661]: 2025-11-22 09:50:07.053 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:07 np0005532048 nova_compute[253661]: 2025-11-22 09:50:07.127 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 34b8226a-40bd-46d4-99ee-1be44f56e142_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:50:07 np0005532048 nova_compute[253661]: 2025-11-22 09:50:07.201 253665 DEBUG nova.storage.rbd_utils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 34b8226a-40bd-46d4-99ee-1be44f56e142_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:50:07 np0005532048 nova_compute[253661]: 2025-11-22 09:50:07.305 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:07 np0005532048 podman[405934]: 2025-11-22 09:50:07.386549294 +0000 UTC m=+0.060368172 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:50:07 np0005532048 podman[405935]: 2025-11-22 09:50:07.408568848 +0000 UTC m=+0.090175038 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:50:07 np0005532048 nova_compute[253661]: 2025-11-22 09:50:07.485 253665 DEBUG nova.objects.instance [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 34b8226a-40bd-46d4-99ee-1be44f56e142 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:50:07 np0005532048 nova_compute[253661]: 2025-11-22 09:50:07.500 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:50:07 np0005532048 nova_compute[253661]: 2025-11-22 09:50:07.501 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Ensure instance console log exists: /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:50:07 np0005532048 nova_compute[253661]: 2025-11-22 09:50:07.501 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:50:07 np0005532048 nova_compute[253661]: 2025-11-22 09:50:07.501 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:50:07 np0005532048 nova_compute[253661]: 2025-11-22 09:50:07.502 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:50:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:50:08 np0005532048 nova_compute[253661]: 2025-11-22 09:50:08.045 253665 DEBUG nova.network.neutron [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Successfully created port: fbec9736-25e9-44be-80ed-974c1de2bf0d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:50:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:50:08Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:a0:65 10.100.0.3
Nov 22 04:50:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:50:08Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:a0:65 10.100.0.3
Nov 22 04:50:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 223 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.8 MiB/s wr, 179 op/s
Nov 22 04:50:08 np0005532048 nova_compute[253661]: 2025-11-22 09:50:08.821 253665 DEBUG nova.network.neutron [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Successfully updated port: fbec9736-25e9-44be-80ed-974c1de2bf0d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:50:08 np0005532048 nova_compute[253661]: 2025-11-22 09:50:08.837 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:50:08 np0005532048 nova_compute[253661]: 2025-11-22 09:50:08.838 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:50:08 np0005532048 nova_compute[253661]: 2025-11-22 09:50:08.838 253665 DEBUG nova.network.neutron [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:50:08 np0005532048 nova_compute[253661]: 2025-11-22 09:50:08.907 253665 DEBUG nova.compute.manager [req-abf6ef6a-d879-4e84-8667-f2e38538a026 req-516c8b15-0d08-4af9-91ed-d5ac1cc0e478 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-changed-fbec9736-25e9-44be-80ed-974c1de2bf0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:50:08 np0005532048 nova_compute[253661]: 2025-11-22 09:50:08.908 253665 DEBUG nova.compute.manager [req-abf6ef6a-d879-4e84-8667-f2e38538a026 req-516c8b15-0d08-4af9-91ed-d5ac1cc0e478 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Refreshing instance network info cache due to event network-changed-fbec9736-25e9-44be-80ed-974c1de2bf0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:50:08 np0005532048 nova_compute[253661]: 2025-11-22 09:50:08.908 253665 DEBUG oslo_concurrency.lockutils [req-abf6ef6a-d879-4e84-8667-f2e38538a026 req-516c8b15-0d08-4af9-91ed-d5ac1cc0e478 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:50:08 np0005532048 nova_compute[253661]: 2025-11-22 09:50:08.985 253665 DEBUG nova.network.neutron [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:50:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 223 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.471 253665 DEBUG nova.network.neutron [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Updating instance_info_cache with network_info: [{"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.771 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.772 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Instance network_info: |[{"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.772 253665 DEBUG oslo_concurrency.lockutils [req-abf6ef6a-d879-4e84-8667-f2e38538a026 req-516c8b15-0d08-4af9-91ed-d5ac1cc0e478 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.773 253665 DEBUG nova.network.neutron [req-abf6ef6a-d879-4e84-8667-f2e38538a026 req-516c8b15-0d08-4af9-91ed-d5ac1cc0e478 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Refreshing network info cache for port fbec9736-25e9-44be-80ed-974c1de2bf0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.776 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Start _get_guest_xml network_info=[{"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.780 253665 WARNING nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.786 253665 DEBUG nova.virt.libvirt.host [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.788 253665 DEBUG nova.virt.libvirt.host [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.796 253665 DEBUG nova.virt.libvirt.host [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.797 253665 DEBUG nova.virt.libvirt.host [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.797 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.797 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.798 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.798 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.798 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.798 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.798 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.799 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.799 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.799 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.799 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.799 253665 DEBUG nova.virt.hardware [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:50:10 np0005532048 nova_compute[253661]: 2025-11-22 09:50:10.802 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:50:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:50:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3444718572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.261 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.294 253665 DEBUG nova.storage.rbd_utils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 34b8226a-40bd-46d4-99ee-1be44f56e142_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.299 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:50:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:50:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/968768954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.764 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.765 253665 DEBUG nova.virt.libvirt.vif [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:50:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1818854274',display_name='tempest-TestGettingAddress-server-1818854274',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1818854274',id=148,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcB35ow6uk6IMlUBwbOGuOK3V7CtaZ2yJV3EZplxoxOQmEddDgKs5J+v7KXl9WfxkSmq+Acn+6POKmEHRfjGgaOghqPwK+UcBY92I7fBGtxwwkl4TxWcumLZptxfN80TA==',key_name='tempest-TestGettingAddress-174680913',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-no44603j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:50:06Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=34b8226a-40bd-46d4-99ee-1be44f56e142,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.766 253665 DEBUG nova.network.os_vif_util [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.767 253665 DEBUG nova.network.os_vif_util [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:98:0b,bridge_name='br-int',has_traffic_filtering=True,id=fbec9736-25e9-44be-80ed-974c1de2bf0d,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbec9736-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.768 253665 DEBUG nova.objects.instance [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 34b8226a-40bd-46d4-99ee-1be44f56e142 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.785 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  <uuid>34b8226a-40bd-46d4-99ee-1be44f56e142</uuid>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  <name>instance-00000094</name>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestGettingAddress-server-1818854274</nova:name>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:50:10</nova:creationTime>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:50:11 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:        <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:        <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:        <nova:port uuid="fbec9736-25e9-44be-80ed-974c1de2bf0d">
Nov 22 04:50:11 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:febc:980b" ipVersion="6"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:febc:980b" ipVersion="6"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <entry name="serial">34b8226a-40bd-46d4-99ee-1be44f56e142</entry>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <entry name="uuid">34b8226a-40bd-46d4-99ee-1be44f56e142</entry>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/34b8226a-40bd-46d4-99ee-1be44f56e142_disk">
Nov 22 04:50:11 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:50:11 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/34b8226a-40bd-46d4-99ee-1be44f56e142_disk.config">
Nov 22 04:50:11 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:50:11 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:bc:98:0b"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <target dev="tapfbec9736-25"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142/console.log" append="off"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:50:11 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:50:11 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:50:11 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:50:11 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.786 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Preparing to wait for external event network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.786 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.786 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.786 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.787 253665 DEBUG nova.virt.libvirt.vif [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:50:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1818854274',display_name='tempest-TestGettingAddress-server-1818854274',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1818854274',id=148,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcB35ow6uk6IMlUBwbOGuOK3V7CtaZ2yJV3EZplxoxOQmEddDgKs5J+v7KXl9WfxkSmq+Acn+6POKmEHRfjGgaOghqPwK+UcBY92I7fBGtxwwkl4TxWcumLZptxfN80TA==',key_name='tempest-TestGettingAddress-174680913',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-no44603j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:50:06Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=34b8226a-40bd-46d4-99ee-1be44f56e142,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.788 253665 DEBUG nova.network.os_vif_util [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.788 253665 DEBUG nova.network.os_vif_util [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:98:0b,bridge_name='br-int',has_traffic_filtering=True,id=fbec9736-25e9-44be-80ed-974c1de2bf0d,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbec9736-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.789 253665 DEBUG os_vif [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:98:0b,bridge_name='br-int',has_traffic_filtering=True,id=fbec9736-25e9-44be-80ed-974c1de2bf0d,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbec9736-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.789 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.790 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.790 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.793 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.793 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbec9736-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.793 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfbec9736-25, col_values=(('external_ids', {'iface-id': 'fbec9736-25e9-44be-80ed-974c1de2bf0d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:98:0b', 'vm-uuid': '34b8226a-40bd-46d4-99ee-1be44f56e142'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.795 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:11 np0005532048 NetworkManager[48920]: <info>  [1763805011.7959] manager: (tapfbec9736-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/655)
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.800 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.801 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.802 253665 INFO os_vif [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:98:0b,bridge_name='br-int',has_traffic_filtering=True,id=fbec9736-25e9-44be-80ed-974c1de2bf0d,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbec9736-25')#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.840 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.840 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.840 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:bc:98:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.841 253665 INFO nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Using config drive#033[00m
Nov 22 04:50:11 np0005532048 nova_compute[253661]: 2025-11-22 09:50:11.859 253665 DEBUG nova.storage.rbd_utils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 34b8226a-40bd-46d4-99ee-1be44f56e142_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:50:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 238 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 285 KiB/s rd, 3.9 MiB/s wr, 75 op/s
Nov 22 04:50:12 np0005532048 nova_compute[253661]: 2025-11-22 09:50:12.337 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:50:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2229073963' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:50:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:50:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2229073963' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:50:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:50:13 np0005532048 nova_compute[253661]: 2025-11-22 09:50:13.939 253665 INFO nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Creating config drive at /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142/disk.config#033[00m
Nov 22 04:50:13 np0005532048 nova_compute[253661]: 2025-11-22 09:50:13.947 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8xvu_tq_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.097 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8xvu_tq_" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.125 253665 DEBUG nova.storage.rbd_utils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 34b8226a-40bd-46d4-99ee-1be44f56e142_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.128 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142/disk.config 34b8226a-40bd-46d4-99ee-1be44f56e142_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:50:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.325 253665 DEBUG oslo_concurrency.processutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142/disk.config 34b8226a-40bd-46d4-99ee-1be44f56e142_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.326 253665 INFO nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Deleting local config drive /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142/disk.config because it was imported into RBD.#033[00m
Nov 22 04:50:14 np0005532048 kernel: tapfbec9736-25: entered promiscuous mode
Nov 22 04:50:14 np0005532048 NetworkManager[48920]: <info>  [1763805014.3722] manager: (tapfbec9736-25): new Tun device (/org/freedesktop/NetworkManager/Devices/656)
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.375 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:14 np0005532048 ovn_controller[152872]: 2025-11-22T09:50:14Z|01595|binding|INFO|Claiming lport fbec9736-25e9-44be-80ed-974c1de2bf0d for this chassis.
Nov 22 04:50:14 np0005532048 ovn_controller[152872]: 2025-11-22T09:50:14Z|01596|binding|INFO|fbec9736-25e9-44be-80ed-974c1de2bf0d: Claiming fa:16:3e:bc:98:0b 10.100.0.8 2001:db8:0:1:f816:3eff:febc:980b 2001:db8::f816:3eff:febc:980b
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.395 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:14 np0005532048 ovn_controller[152872]: 2025-11-22T09:50:14Z|01597|binding|INFO|Setting lport fbec9736-25e9-44be-80ed-974c1de2bf0d ovn-installed in OVS
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.399 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:14 np0005532048 ovn_controller[152872]: 2025-11-22T09:50:14Z|01598|binding|INFO|Setting lport fbec9736-25e9-44be-80ed-974c1de2bf0d up in Southbound
Nov 22 04:50:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.415 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:98:0b 10.100.0.8 2001:db8:0:1:f816:3eff:febc:980b 2001:db8::f816:3eff:febc:980b'], port_security=['fa:16:3e:bc:98:0b 10.100.0.8 2001:db8:0:1:f816:3eff:febc:980b 2001:db8::f816:3eff:febc:980b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8:0:1:f816:3eff:febc:980b/64 2001:db8::f816:3eff:febc:980b/64', 'neutron:device_id': '34b8226a-40bd-46d4-99ee-1be44f56e142', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a71aa19e-d298-43f1-b9d0-7f952a63c1fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=572cc1a4-6889-45f5-9ccb-1d24fa3ab232, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=fbec9736-25e9-44be-80ed-974c1de2bf0d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:50:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.416 162862 INFO neutron.agent.ovn.metadata.agent [-] Port fbec9736-25e9-44be-80ed-974c1de2bf0d in datapath 9b64819a-274e-4eb7-988b-ceb1ea73c9ce bound to our chassis#033[00m
Nov 22 04:50:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.418 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9b64819a-274e-4eb7-988b-ceb1ea73c9ce#033[00m
Nov 22 04:50:14 np0005532048 systemd-machined[215941]: New machine qemu-180-instance-00000094.
Nov 22 04:50:14 np0005532048 systemd[1]: Started Virtual Machine qemu-180-instance-00000094.
Nov 22 04:50:14 np0005532048 systemd-udevd[406129]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:50:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.435 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f541e6c3-6ffa-46c5-a706-e6e341c72537]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:50:14 np0005532048 NetworkManager[48920]: <info>  [1763805014.4505] device (tapfbec9736-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:50:14 np0005532048 NetworkManager[48920]: <info>  [1763805014.4527] device (tapfbec9736-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:50:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.474 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[55cdf009-6b10-49f1-be5b-511f7ca2aa8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:50:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.480 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[13f05718-effb-40f5-9487-8f8d2abb13ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:50:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.512 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[c87faf24-482f-485f-ae62-cc3a56deb421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:50:14 np0005532048 podman[406127]: 2025-11-22 09:50:14.526256075 +0000 UTC m=+0.090996628 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 22 04:50:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.528 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ce1c5125-7178-4bbb-af22-3ad0d17c922b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b64819a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:d3:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783269, 'reachable_time': 17873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406167, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:50:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.545 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c10a7487-fc0e-4e58-aebe-13a7f4c49963]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9b64819a-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 783283, 'tstamp': 783283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406168, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9b64819a-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 783287, 'tstamp': 783287}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406168, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:50:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.547 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b64819a-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.549 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.550 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b64819a-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:50:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.550 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:50:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.551 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9b64819a-20, col_values=(('external_ids', {'iface-id': 'da001788-faa3-412b-9b6a-82fe1a808a87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:50:14 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:14.551 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.761 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805014.761172, 34b8226a-40bd-46d4-99ee-1be44f56e142 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.762 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] VM Started (Lifecycle Event)#033[00m
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.782 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.787 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805014.761292, 34b8226a-40bd-46d4-99ee-1be44f56e142 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.787 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.806 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.809 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:50:14 np0005532048 nova_compute[253661]: 2025-11-22 09:50:14.831 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.178 253665 DEBUG nova.compute.manager [req-2a13c24a-aa7c-4471-acf9-ff3707f6ea6e req-18f0bbb8-e2d3-4e9c-9542-3dfdc46b18fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.178 253665 DEBUG oslo_concurrency.lockutils [req-2a13c24a-aa7c-4471-acf9-ff3707f6ea6e req-18f0bbb8-e2d3-4e9c-9542-3dfdc46b18fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.179 253665 DEBUG oslo_concurrency.lockutils [req-2a13c24a-aa7c-4471-acf9-ff3707f6ea6e req-18f0bbb8-e2d3-4e9c-9542-3dfdc46b18fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.179 253665 DEBUG oslo_concurrency.lockutils [req-2a13c24a-aa7c-4471-acf9-ff3707f6ea6e req-18f0bbb8-e2d3-4e9c-9542-3dfdc46b18fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.179 253665 DEBUG nova.compute.manager [req-2a13c24a-aa7c-4471-acf9-ff3707f6ea6e req-18f0bbb8-e2d3-4e9c-9542-3dfdc46b18fc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Processing event network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.180 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.183 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805015.1828246, 34b8226a-40bd-46d4-99ee-1be44f56e142 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.183 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.185 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.188 253665 INFO nova.virt.libvirt.driver [-] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Instance spawned successfully.#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.189 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.206 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.211 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.211 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.212 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.212 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.213 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.213 253665 DEBUG nova.virt.libvirt.driver [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.221 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.254 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.309 253665 INFO nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Took 8.71 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.310 253665 DEBUG nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.404 253665 INFO nova.compute.manager [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Took 9.71 seconds to build instance.#033[00m
Nov 22 04:50:15 np0005532048 nova_compute[253661]: 2025-11-22 09:50:15.429 253665 DEBUG oslo_concurrency.lockutils [None req-2295c1d4-a2c1-41de-8a3b-e660ac44ba1e 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:50:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 22 04:50:16 np0005532048 nova_compute[253661]: 2025-11-22 09:50:16.796 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:17 np0005532048 nova_compute[253661]: 2025-11-22 09:50:17.340 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:17 np0005532048 nova_compute[253661]: 2025-11-22 09:50:17.586 253665 DEBUG nova.network.neutron [req-abf6ef6a-d879-4e84-8667-f2e38538a026 req-516c8b15-0d08-4af9-91ed-d5ac1cc0e478 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Updated VIF entry in instance network info cache for port fbec9736-25e9-44be-80ed-974c1de2bf0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:50:17 np0005532048 nova_compute[253661]: 2025-11-22 09:50:17.587 253665 DEBUG nova.network.neutron [req-abf6ef6a-d879-4e84-8667-f2e38538a026 req-516c8b15-0d08-4af9-91ed-d5ac1cc0e478 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Updating instance_info_cache with network_info: [{"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:50:17 np0005532048 nova_compute[253661]: 2025-11-22 09:50:17.608 253665 DEBUG oslo_concurrency.lockutils [req-abf6ef6a-d879-4e84-8667-f2e38538a026 req-516c8b15-0d08-4af9-91ed-d5ac1cc0e478 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:50:17 np0005532048 nova_compute[253661]: 2025-11-22 09:50:17.669 253665 DEBUG nova.compute.manager [None req-4c2c9903-78ff-4588-937c-8b6c6a7dcb14 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:50:17 np0005532048 nova_compute[253661]: 2025-11-22 09:50:17.676 253665 DEBUG nova.compute.manager [req-6b827def-4eda-4d8a-956a-a023c580d903 req-074458b2-824f-4ead-b298-7b78efb7f62a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:50:17 np0005532048 nova_compute[253661]: 2025-11-22 09:50:17.676 253665 DEBUG oslo_concurrency.lockutils [req-6b827def-4eda-4d8a-956a-a023c580d903 req-074458b2-824f-4ead-b298-7b78efb7f62a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:50:17 np0005532048 nova_compute[253661]: 2025-11-22 09:50:17.677 253665 DEBUG oslo_concurrency.lockutils [req-6b827def-4eda-4d8a-956a-a023c580d903 req-074458b2-824f-4ead-b298-7b78efb7f62a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:50:17 np0005532048 nova_compute[253661]: 2025-11-22 09:50:17.677 253665 DEBUG oslo_concurrency.lockutils [req-6b827def-4eda-4d8a-956a-a023c580d903 req-074458b2-824f-4ead-b298-7b78efb7f62a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:50:17 np0005532048 nova_compute[253661]: 2025-11-22 09:50:17.677 253665 DEBUG nova.compute.manager [req-6b827def-4eda-4d8a-956a-a023c580d903 req-074458b2-824f-4ead-b298-7b78efb7f62a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] No waiting events found dispatching network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:50:17 np0005532048 nova_compute[253661]: 2025-11-22 09:50:17.677 253665 WARNING nova.compute.manager [req-6b827def-4eda-4d8a-956a-a023c580d903 req-074458b2-824f-4ead-b298-7b78efb7f62a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received unexpected event network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d for instance with vm_state active and task_state None.#033[00m
Nov 22 04:50:17 np0005532048 nova_compute[253661]: 2025-11-22 09:50:17.716 253665 INFO nova.compute.manager [None req-4c2c9903-78ff-4588-937c-8b6c6a7dcb14 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] instance snapshotting#033[00m
Nov 22 04:50:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:50:18 np0005532048 nova_compute[253661]: 2025-11-22 09:50:18.125 253665 INFO nova.virt.libvirt.driver [None req-4c2c9903-78ff-4588-937c-8b6c6a7dcb14 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Beginning live snapshot process#033[00m
Nov 22 04:50:18 np0005532048 nova_compute[253661]: 2025-11-22 09:50:18.263 253665 DEBUG nova.virt.libvirt.imagebackend [None req-4c2c9903-78ff-4588-937c-8b6c6a7dcb14 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] No parent info for 878156d4-57f6-4a8b-8f4c-cbde182bb832; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 22 04:50:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Nov 22 04:50:18 np0005532048 nova_compute[253661]: 2025-11-22 09:50:18.545 253665 DEBUG nova.storage.rbd_utils [None req-4c2c9903-78ff-4588-937c-8b6c6a7dcb14 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] creating snapshot(1c148e2a9d904db5a74c4a842a0649a6) on rbd image(027bdffc-9e8e-4a33-9b06-844890912dc9_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 22 04:50:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Nov 22 04:50:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Nov 22 04:50:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Nov 22 04:50:19 np0005532048 nova_compute[253661]: 2025-11-22 09:50:19.456 253665 DEBUG nova.storage.rbd_utils [None req-4c2c9903-78ff-4588-937c-8b6c6a7dcb14 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] cloning vms/027bdffc-9e8e-4a33-9b06-844890912dc9_disk@1c148e2a9d904db5a74c4a842a0649a6 to images/39d89ba1-0559-4b27-814a-561c7d3add70 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 22 04:50:19 np0005532048 nova_compute[253661]: 2025-11-22 09:50:19.733 253665 DEBUG nova.storage.rbd_utils [None req-4c2c9903-78ff-4588-937c-8b6c6a7dcb14 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] flattening images/39d89ba1-0559-4b27-814a-561c7d3add70 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 22 04:50:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.5 MiB/s wr, 145 op/s
Nov 22 04:50:21 np0005532048 nova_compute[253661]: 2025-11-22 09:50:21.050 253665 DEBUG nova.compute.manager [req-108667a9-2e86-4b98-8cf6-e959666efcf6 req-d2953273-0440-44e8-9760-a048de20b8c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-changed-fbec9736-25e9-44be-80ed-974c1de2bf0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:50:21 np0005532048 nova_compute[253661]: 2025-11-22 09:50:21.052 253665 DEBUG nova.compute.manager [req-108667a9-2e86-4b98-8cf6-e959666efcf6 req-d2953273-0440-44e8-9760-a048de20b8c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Refreshing instance network info cache due to event network-changed-fbec9736-25e9-44be-80ed-974c1de2bf0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:50:21 np0005532048 nova_compute[253661]: 2025-11-22 09:50:21.052 253665 DEBUG oslo_concurrency.lockutils [req-108667a9-2e86-4b98-8cf6-e959666efcf6 req-d2953273-0440-44e8-9760-a048de20b8c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:50:21 np0005532048 nova_compute[253661]: 2025-11-22 09:50:21.053 253665 DEBUG oslo_concurrency.lockutils [req-108667a9-2e86-4b98-8cf6-e959666efcf6 req-d2953273-0440-44e8-9760-a048de20b8c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:50:21 np0005532048 nova_compute[253661]: 2025-11-22 09:50:21.053 253665 DEBUG nova.network.neutron [req-108667a9-2e86-4b98-8cf6-e959666efcf6 req-d2953273-0440-44e8-9760-a048de20b8c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Refreshing network info cache for port fbec9736-25e9-44be-80ed-974c1de2bf0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:50:21 np0005532048 nova_compute[253661]: 2025-11-22 09:50:21.799 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 101 KiB/s wr, 105 op/s
Nov 22 04:50:22 np0005532048 nova_compute[253661]: 2025-11-22 09:50:22.381 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:50:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:50:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:50:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:50:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:50:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:50:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 268 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.6 MiB/s wr, 110 op/s
Nov 22 04:50:25 np0005532048 nova_compute[253661]: 2025-11-22 09:50:25.961 253665 DEBUG nova.network.neutron [req-108667a9-2e86-4b98-8cf6-e959666efcf6 req-d2953273-0440-44e8-9760-a048de20b8c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Updated VIF entry in instance network info cache for port fbec9736-25e9-44be-80ed-974c1de2bf0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:50:25 np0005532048 nova_compute[253661]: 2025-11-22 09:50:25.962 253665 DEBUG nova.network.neutron [req-108667a9-2e86-4b98-8cf6-e959666efcf6 req-d2953273-0440-44e8-9760-a048de20b8c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Updating instance_info_cache with network_info: [{"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:50:26 np0005532048 nova_compute[253661]: 2025-11-22 09:50:26.000 253665 DEBUG oslo_concurrency.lockutils [req-108667a9-2e86-4b98-8cf6-e959666efcf6 req-d2953273-0440-44e8-9760-a048de20b8c8 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:50:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 268 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.6 MiB/s wr, 110 op/s
Nov 22 04:50:26 np0005532048 ceph-mds[101348]: mds.beacon.cephfs.compute-0.myffln missed beacon ack from the monitors
Nov 22 04:50:26 np0005532048 nova_compute[253661]: 2025-11-22 09:50:26.851 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:27 np0005532048 nova_compute[253661]: 2025-11-22 09:50:27.382 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:27.993 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:50:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:27.994 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:50:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:27.994 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:50:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 305 active+clean; 268 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 23 op/s
Nov 22 04:50:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 268 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.5 MiB/s wr, 21 op/s
Nov 22 04:50:30 np0005532048 ceph-mds[101348]: mds.beacon.cephfs.compute-0.myffln missed beacon ack from the monitors
Nov 22 04:50:31 np0005532048 nova_compute[253661]: 2025-11-22 09:50:31.854 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 268 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 19 op/s
Nov 22 04:50:32 np0005532048 nova_compute[253661]: 2025-11-22 09:50:32.385 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 14.7733 seconds
Nov 22 04:50:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:50:33 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 11.245452881s
Nov 22 04:50:33 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 11.245452881s
Nov 22 04:50:33 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 11.123932838s
Nov 22 04:50:33 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 11.124113083s
Nov 22 04:50:33 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.257675171s, txc = 0x56138410c900
Nov 22 04:50:33 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 6.877428532s
Nov 22 04:50:33 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 6.877429485s
Nov 22 04:50:33 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.877703667s, txc = 0x55a6cbc8a900
Nov 22 04:50:33 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.878297806s, txc = 0x55a6cc8b7500
Nov 22 04:50:33 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 11.123982430s
Nov 22 04:50:33 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 11.123887062s
Nov 22 04:50:33 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 11.716888428s
Nov 22 04:50:33 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 11.715475082s
Nov 22 04:50:33 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 11.732536316s
Nov 22 04:50:33 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for submit_transact, latency = 11.717301369s
Nov 22 04:50:33 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 11.715861320s
Nov 22 04:50:33 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for throttle_transact, latency = 11.729718208s
Nov 22 04:50:33 np0005532048 ceph-osd[88656]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 11.245686531s, txc = 0x56207faafb00
Nov 22 04:50:34 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.962179661s, txc = 0x55a6cbd4db00
Nov 22 04:50:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 1 active+clean+laggy, 304 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.7 MiB/s wr, 55 op/s
Nov 22 04:50:34 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.335518837s, txc = 0x561383c42600
Nov 22 04:50:35 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.676224709s, txc = 0x561383c42f00
Nov 22 04:50:35 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 14.269107819s, txc = 0x56138410d200
Nov 22 04:50:35 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 14.269145966s, txc = 0x561383331800
Nov 22 04:50:35 np0005532048 ceph-osd[89679]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 14.284407616s, txc = 0x561385106000
Nov 22 04:50:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 1 active+clean+laggy, 304 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 36 op/s
Nov 22 04:50:36 np0005532048 nova_compute[253661]: 2025-11-22 09:50:36.856 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:37 np0005532048 nova_compute[253661]: 2025-11-22 09:50:37.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:50:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 1 active+clean+laggy, 304 active+clean; 318 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 52 op/s
Nov 22 04:50:38 np0005532048 podman[406318]: 2025-11-22 09:50:38.376352126 +0000 UTC m=+0.059799118 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 04:50:38 np0005532048 podman[406319]: 2025-11-22 09:50:38.39677345 +0000 UTC m=+0.076908000 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 04:50:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 1 active+clean+laggy, 304 active+clean; 318 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 52 op/s
Nov 22 04:50:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:50:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:50:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:50:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:50:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:50:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:50:41 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 060260d5-3384-41d6-af25-daa9ad2f23e5 does not exist
Nov 22 04:50:41 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a42d49b3-5f86-4642-8511-eba7972b10ad does not exist
Nov 22 04:50:41 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 633a37a8-374f-4538-baf6-33cd85138f08 does not exist
Nov 22 04:50:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:50:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:50:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:50:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:50:41 np0005532048 nova_compute[253661]: 2025-11-22 09:50:41.858 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:50:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:50:42 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:50:42 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:50:42 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:50:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 1 active+clean+laggy, 304 active+clean; 318 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 52 op/s
Nov 22 04:50:42 np0005532048 nova_compute[253661]: 2025-11-22 09:50:42.391 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:42 np0005532048 podman[406631]: 2025-11-22 09:50:42.462118067 +0000 UTC m=+0.026974207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:50:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:50:42 np0005532048 podman[406631]: 2025-11-22 09:50:42.776619932 +0000 UTC m=+0.341476052 container create cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 04:50:43 np0005532048 systemd[1]: Started libpod-conmon-cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3.scope.
Nov 22 04:50:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:50:43 np0005532048 podman[406631]: 2025-11-22 09:50:43.487467416 +0000 UTC m=+1.052323556 container init cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ganguly, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:50:43 np0005532048 podman[406631]: 2025-11-22 09:50:43.49413062 +0000 UTC m=+1.058986740 container start cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ganguly, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:50:43 np0005532048 nifty_ganguly[406647]: 167 167
Nov 22 04:50:43 np0005532048 systemd[1]: libpod-cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3.scope: Deactivated successfully.
Nov 22 04:50:43 np0005532048 podman[406631]: 2025-11-22 09:50:43.766939237 +0000 UTC m=+1.331795357 container attach cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ganguly, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:50:43 np0005532048 podman[406631]: 2025-11-22 09:50:43.767504171 +0000 UTC m=+1.332360291 container died cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ganguly, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:50:44 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c3e3c9ea16e0c0b64db721f23e7e0e1e6e2a9e0dc8edbf89b766d59b1a0b7615-merged.mount: Deactivated successfully.
Nov 22 04:50:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 1 active+clean+laggy, 304 active+clean; 336 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 67 op/s
Nov 22 04:50:44 np0005532048 podman[406631]: 2025-11-22 09:50:44.537391401 +0000 UTC m=+2.102247521 container remove cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ganguly, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 04:50:44 np0005532048 systemd[1]: libpod-conmon-cc361fb435e13249cf8169b50f91f9055cd32d49dd9cda149bada6754ca55ff3.scope: Deactivated successfully.
Nov 22 04:50:44 np0005532048 podman[406666]: 2025-11-22 09:50:44.716305159 +0000 UTC m=+0.110420088 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:50:44 np0005532048 podman[406692]: 2025-11-22 09:50:44.704025536 +0000 UTC m=+0.027376087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:50:44 np0005532048 podman[406692]: 2025-11-22 09:50:44.803969593 +0000 UTC m=+0.127320124 container create 9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 04:50:44 np0005532048 systemd[1]: Started libpod-conmon-9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600.scope.
Nov 22 04:50:44 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:50:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8887166bc50051c789800a10ec87214b51a64db772a83ac3bfa17adfef33c3aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:50:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8887166bc50051c789800a10ec87214b51a64db772a83ac3bfa17adfef33c3aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:50:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8887166bc50051c789800a10ec87214b51a64db772a83ac3bfa17adfef33c3aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:50:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8887166bc50051c789800a10ec87214b51a64db772a83ac3bfa17adfef33c3aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:50:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8887166bc50051c789800a10ec87214b51a64db772a83ac3bfa17adfef33c3aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:50:45 np0005532048 podman[406692]: 2025-11-22 09:50:45.048540713 +0000 UTC m=+0.371891264 container init 9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:50:45 np0005532048 podman[406692]: 2025-11-22 09:50:45.056288135 +0000 UTC m=+0.379638666 container start 9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 04:50:45 np0005532048 podman[406692]: 2025-11-22 09:50:45.147024934 +0000 UTC m=+0.470375465 container attach 9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:50:46 np0005532048 stupefied_lovelace[406713]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:50:46 np0005532048 stupefied_lovelace[406713]: --> relative data size: 1.0
Nov 22 04:50:46 np0005532048 stupefied_lovelace[406713]: --> All data devices are unavailable
Nov 22 04:50:46 np0005532048 systemd[1]: libpod-9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600.scope: Deactivated successfully.
Nov 22 04:50:46 np0005532048 systemd[1]: libpod-9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600.scope: Consumed 1.027s CPU time.
Nov 22 04:50:46 np0005532048 podman[406692]: 2025-11-22 09:50:46.197343151 +0000 UTC m=+1.520693682 container died 9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:50:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 1 active+clean+laggy, 304 active+clean; 336 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.6 MiB/s wr, 31 op/s
Nov 22 04:50:46 np0005532048 nova_compute[253661]: 2025-11-22 09:50:46.861 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:46 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8887166bc50051c789800a10ec87214b51a64db772a83ac3bfa17adfef33c3aa-merged.mount: Deactivated successfully.
Nov 22 04:50:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:50:47Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:98:0b 10.100.0.8
Nov 22 04:50:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:50:47Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:98:0b 10.100.0.8
Nov 22 04:50:47 np0005532048 nova_compute[253661]: 2025-11-22 09:50:47.231 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:50:47 np0005532048 nova_compute[253661]: 2025-11-22 09:50:47.232 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:50:47 np0005532048 nova_compute[253661]: 2025-11-22 09:50:47.232 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:50:47 np0005532048 nova_compute[253661]: 2025-11-22 09:50:47.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:47 np0005532048 podman[406692]: 2025-11-22 09:50:47.44380425 +0000 UTC m=+2.767154781 container remove 9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 04:50:47 np0005532048 systemd[1]: libpod-conmon-9bd5d8ca9dfa40e69ad679bf2ca07453746b8319956b9acc0a71e93769220600.scope: Deactivated successfully.
Nov 22 04:50:47 np0005532048 nova_compute[253661]: 2025-11-22 09:50:47.509 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:50:47 np0005532048 nova_compute[253661]: 2025-11-22 09:50:47.509 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:50:47 np0005532048 nova_compute[253661]: 2025-11-22 09:50:47.510 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:50:47 np0005532048 nova_compute[253661]: 2025-11-22 09:50:47.510 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a65f84a-3072-4b94-b08a-0ba7b1529a07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:50:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:50:48 np0005532048 podman[406896]: 2025-11-22 09:50:48.055258489 +0000 UTC m=+0.040875661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:50:48 np0005532048 podman[406896]: 2025-11-22 09:50:48.238110993 +0000 UTC m=+0.223728055 container create e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 04:50:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 1 active+clean+laggy, 304 active+clean; 348 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 197 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 22 04:50:48 np0005532048 systemd[1]: Started libpod-conmon-e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4.scope.
Nov 22 04:50:48 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:50:48 np0005532048 podman[406896]: 2025-11-22 09:50:48.606529951 +0000 UTC m=+0.592147043 container init e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 04:50:48 np0005532048 podman[406896]: 2025-11-22 09:50:48.613418501 +0000 UTC m=+0.599035573 container start e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:50:48 np0005532048 crazy_chatelet[406912]: 167 167
Nov 22 04:50:48 np0005532048 systemd[1]: libpod-e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4.scope: Deactivated successfully.
Nov 22 04:50:48 np0005532048 podman[406896]: 2025-11-22 09:50:48.780358303 +0000 UTC m=+0.765975365 container attach e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 04:50:48 np0005532048 podman[406896]: 2025-11-22 09:50:48.781522102 +0000 UTC m=+0.767139194 container died e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 04:50:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay-690d0fda1d9dd49e18050356e6f5dbfe306ee9257e60e8e8ad0637115c7cbb26-merged.mount: Deactivated successfully.
Nov 22 04:50:49 np0005532048 nova_compute[253661]: 2025-11-22 09:50:49.312 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updating instance_info_cache with network_info: [{"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:50:49 np0005532048 nova_compute[253661]: 2025-11-22 09:50:49.325 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:50:49 np0005532048 nova_compute[253661]: 2025-11-22 09:50:49.326 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:50:49 np0005532048 podman[406896]: 2025-11-22 09:50:49.355488975 +0000 UTC m=+1.341106037 container remove e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 04:50:49 np0005532048 systemd[1]: libpod-conmon-e60b00c0fc20d669e808cc9ad7b00684df10158e3165802d4c3c07680e494dd4.scope: Deactivated successfully.
Nov 22 04:50:49 np0005532048 podman[406936]: 2025-11-22 09:50:49.540233737 +0000 UTC m=+0.027177322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:50:49 np0005532048 podman[406936]: 2025-11-22 09:50:49.686640582 +0000 UTC m=+0.173584147 container create 423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 04:50:49 np0005532048 systemd[1]: Started libpod-conmon-423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80.scope.
Nov 22 04:50:49 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:50:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdc6df9cec2a7b3a3ab32bd369ca518464cf68192c8f8f113bab06893e51e6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:50:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdc6df9cec2a7b3a3ab32bd369ca518464cf68192c8f8f113bab06893e51e6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:50:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdc6df9cec2a7b3a3ab32bd369ca518464cf68192c8f8f113bab06893e51e6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:50:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cdc6df9cec2a7b3a3ab32bd369ca518464cf68192c8f8f113bab06893e51e6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:50:50 np0005532048 podman[406936]: 2025-11-22 09:50:50.086197208 +0000 UTC m=+0.573140803 container init 423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:50:50 np0005532048 podman[406936]: 2025-11-22 09:50:50.092375771 +0000 UTC m=+0.579319346 container start 423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:50:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:50:50Z|01599|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 22 04:50:50 np0005532048 nova_compute[253661]: 2025-11-22 09:50:50.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:50:50 np0005532048 podman[406936]: 2025-11-22 09:50:50.299983768 +0000 UTC m=+0.786932803 container attach 423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 04:50:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 1 active+clean+laggy, 304 active+clean; 348 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 161 KiB/s rd, 2.1 MiB/s wr, 41 op/s
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]: {
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:    "0": [
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:        {
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "devices": [
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "/dev/loop3"
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            ],
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "lv_name": "ceph_lv0",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "lv_size": "21470642176",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "name": "ceph_lv0",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "tags": {
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.cluster_name": "ceph",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.crush_device_class": "",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.encrypted": "0",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.osd_id": "0",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.type": "block",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.vdo": "0"
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            },
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "type": "block",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "vg_name": "ceph_vg0"
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:        }
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:    ],
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:    "1": [
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:        {
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "devices": [
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "/dev/loop4"
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            ],
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "lv_name": "ceph_lv1",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "lv_size": "21470642176",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "name": "ceph_lv1",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "tags": {
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.cluster_name": "ceph",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.crush_device_class": "",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.encrypted": "0",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.osd_id": "1",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.type": "block",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.vdo": "0"
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            },
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "type": "block",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "vg_name": "ceph_vg1"
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:        }
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:    ],
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:    "2": [
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:        {
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "devices": [
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "/dev/loop5"
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            ],
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "lv_name": "ceph_lv2",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "lv_size": "21470642176",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "name": "ceph_lv2",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "tags": {
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.cluster_name": "ceph",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.crush_device_class": "",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.encrypted": "0",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.osd_id": "2",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.type": "block",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:                "ceph.vdo": "0"
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            },
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "type": "block",
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:            "vg_name": "ceph_vg2"
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:        }
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]:    ]
Nov 22 04:50:50 np0005532048 keen_keldysh[406953]: }
Nov 22 04:50:50 np0005532048 systemd[1]: libpod-423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80.scope: Deactivated successfully.
Nov 22 04:50:50 np0005532048 podman[406936]: 2025-11-22 09:50:50.942958415 +0000 UTC m=+1.429901970 container died 423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 04:50:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:50:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:51.185+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1cdc6df9cec2a7b3a3ab32bd369ca518464cf68192c8f8f113bab06893e51e6d-merged.mount: Deactivated successfully.
Nov 22 04:50:51 np0005532048 podman[406936]: 2025-11-22 09:50:51.514010186 +0000 UTC m=+2.000953751 container remove 423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_keldysh, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 04:50:51 np0005532048 systemd[1]: libpod-conmon-423d49c4edf1c02b5d7dd7208e51ecd11f4b3d4205b086eb6fb918aebe19ff80.scope: Deactivated successfully.
Nov 22 04:50:51 np0005532048 nova_compute[253661]: 2025-11-22 09:50:51.896 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:51 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:50:52 np0005532048 podman[407116]: 2025-11-22 09:50:52.158878989 +0000 UTC m=+0.057333437 container create 4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chaum, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:50:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:52.181+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:50:52 np0005532048 systemd[1]: Started libpod-conmon-4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c.scope.
Nov 22 04:50:52 np0005532048 podman[407116]: 2025-11-22 09:50:52.125122755 +0000 UTC m=+0.023577223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:50:52 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:50:52 np0005532048 podman[407116]: 2025-11-22 09:50:52.256851698 +0000 UTC m=+0.155306166 container init 4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 04:50:52 np0005532048 podman[407116]: 2025-11-22 09:50:52.263234666 +0000 UTC m=+0.161689114 container start 4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 04:50:52 np0005532048 podman[407116]: 2025-11-22 09:50:52.267843039 +0000 UTC m=+0.166297487 container attach 4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chaum, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:50:52 np0005532048 brave_chaum[407132]: 167 167
Nov 22 04:50:52 np0005532048 systemd[1]: libpod-4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c.scope: Deactivated successfully.
Nov 22 04:50:52 np0005532048 podman[407116]: 2025-11-22 09:50:52.269115931 +0000 UTC m=+0.167570379 container died 4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 04:50:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:50:52
Nov 22 04:50:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:50:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:50:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'images', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'volumes', 'backups']
Nov 22 04:50:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:50:52 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5fd43736adee5c64a61ab7566de62de8bc1f4674cbf6f3b34da08f26d76c3047-merged.mount: Deactivated successfully.
Nov 22 04:50:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 1 active+clean+laggy, 304 active+clean; 348 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 161 KiB/s rd, 2.1 MiB/s wr, 41 op/s
Nov 22 04:50:52 np0005532048 nova_compute[253661]: 2025-11-22 09:50:52.394 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:52 np0005532048 podman[407116]: 2025-11-22 09:50:52.39867001 +0000 UTC m=+0.297124458 container remove 4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chaum, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 22 04:50:52 np0005532048 systemd[1]: libpod-conmon-4142a8bad3eabce1d0845710e797c326cfc37741ea1b064342d96fc4398b463c.scope: Deactivated successfully.
Nov 22 04:50:52 np0005532048 podman[407156]: 2025-11-22 09:50:52.575742972 +0000 UTC m=+0.044091989 container create 76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_proskuriakova, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 04:50:52 np0005532048 systemd[1]: Started libpod-conmon-76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034.scope.
Nov 22 04:50:52 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:50:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3279e73c343314dccac9a15238c9ada0104224e6dd86134ecf56abf3058c68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:50:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3279e73c343314dccac9a15238c9ada0104224e6dd86134ecf56abf3058c68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:50:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3279e73c343314dccac9a15238c9ada0104224e6dd86134ecf56abf3058c68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:50:52 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3279e73c343314dccac9a15238c9ada0104224e6dd86134ecf56abf3058c68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:50:52 np0005532048 podman[407156]: 2025-11-22 09:50:52.556001885 +0000 UTC m=+0.024350922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:50:52 np0005532048 podman[407156]: 2025-11-22 09:50:52.658991399 +0000 UTC m=+0.127340436 container init 76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_proskuriakova, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:50:52 np0005532048 podman[407156]: 2025-11-22 09:50:52.665841037 +0000 UTC m=+0.134190054 container start 76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 04:50:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:50:52 np0005532048 podman[407156]: 2025-11-22 09:50:52.674397839 +0000 UTC m=+0.142746856 container attach 76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_proskuriakova, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 04:50:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:50:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:50:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:50:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:50:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:50:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:50:52 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:50:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:50:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:53.145+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:53 np0005532048 nova_compute[253661]: 2025-11-22 09:50:53.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:50:53 np0005532048 nova_compute[253661]: 2025-11-22 09:50:53.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:50:53 np0005532048 nova_compute[253661]: 2025-11-22 09:50:53.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]: {
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:        "osd_id": 1,
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:        "type": "bluestore"
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:    },
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:        "osd_id": 0,
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:        "type": "bluestore"
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:    },
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:        "osd_id": 2,
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:        "type": "bluestore"
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]:    }
Nov 22 04:50:53 np0005532048 beautiful_proskuriakova[407172]: }
Nov 22 04:50:53 np0005532048 systemd[1]: libpod-76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034.scope: Deactivated successfully.
Nov 22 04:50:53 np0005532048 podman[407206]: 2025-11-22 09:50:53.696055486 +0000 UTC m=+0.024793122 container died 76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:50:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay-da3279e73c343314dccac9a15238c9ada0104224e6dd86134ecf56abf3058c68-merged.mount: Deactivated successfully.
Nov 22 04:50:53 np0005532048 podman[407206]: 2025-11-22 09:50:53.811695192 +0000 UTC m=+0.140432828 container remove 76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_proskuriakova, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:50:53 np0005532048 systemd[1]: libpod-conmon-76983a277905223d25bb37b0b603a4351155dc90b3415ce48114cd47dfd3c034.scope: Deactivated successfully.
Nov 22 04:50:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:50:53 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:50:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:50:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:50:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:54.151+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:54 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:50:54 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 160e9175-67e6-4370-b321-bfe7b41c5d0e does not exist
Nov 22 04:50:54 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 6addafff-1fc7-4600-b85d-ad37c31bee18 does not exist
Nov 22 04:50:54 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:50:54 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:50:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 22 04:50:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:55.190+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:50:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.226 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:50:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.226 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.227 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:50:55 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check failed: 1 slow ops, oldest one blocked for 31 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:50:55 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:50:55 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.446 253665 DEBUG nova.compute.manager [req-12e25f07-a6d3-4dbe-9c2c-eab0a8db5c2d req-eb3034d4-5971-4fc0-908b-486d8d225843 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-changed-fbec9736-25e9-44be-80ed-974c1de2bf0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.446 253665 DEBUG nova.compute.manager [req-12e25f07-a6d3-4dbe-9c2c-eab0a8db5c2d req-eb3034d4-5971-4fc0-908b-486d8d225843 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Refreshing instance network info cache due to event network-changed-fbec9736-25e9-44be-80ed-974c1de2bf0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.447 253665 DEBUG oslo_concurrency.lockutils [req-12e25f07-a6d3-4dbe-9c2c-eab0a8db5c2d req-eb3034d4-5971-4fc0-908b-486d8d225843 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.448 253665 DEBUG oslo_concurrency.lockutils [req-12e25f07-a6d3-4dbe-9c2c-eab0a8db5c2d req-eb3034d4-5971-4fc0-908b-486d8d225843 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.448 253665 DEBUG nova.network.neutron [req-12e25f07-a6d3-4dbe-9c2c-eab0a8db5c2d req-eb3034d4-5971-4fc0-908b-486d8d225843 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Refreshing network info cache for port fbec9736-25e9-44be-80ed-974c1de2bf0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.515 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "34b8226a-40bd-46d4-99ee-1be44f56e142" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.516 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.516 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.516 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.516 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.517 253665 INFO nova.compute.manager [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Terminating instance#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.518 253665 DEBUG nova.compute.manager [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:50:55 np0005532048 kernel: tapfbec9736-25 (unregistering): left promiscuous mode
Nov 22 04:50:55 np0005532048 NetworkManager[48920]: <info>  [1763805055.6939] device (tapfbec9736-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:50:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:50:55Z|01600|binding|INFO|Releasing lport fbec9736-25e9-44be-80ed-974c1de2bf0d from this chassis (sb_readonly=0)
Nov 22 04:50:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:50:55Z|01601|binding|INFO|Setting lport fbec9736-25e9-44be-80ed-974c1de2bf0d down in Southbound
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.706 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:55 np0005532048 ovn_controller[152872]: 2025-11-22T09:50:55Z|01602|binding|INFO|Removing iface tapfbec9736-25 ovn-installed in OVS
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.710 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.725 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.725 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:98:0b 10.100.0.8 2001:db8:0:1:f816:3eff:febc:980b 2001:db8::f816:3eff:febc:980b'], port_security=['fa:16:3e:bc:98:0b 10.100.0.8 2001:db8:0:1:f816:3eff:febc:980b 2001:db8::f816:3eff:febc:980b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8:0:1:f816:3eff:febc:980b/64 2001:db8::f816:3eff:febc:980b/64', 'neutron:device_id': '34b8226a-40bd-46d4-99ee-1be44f56e142', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a71aa19e-d298-43f1-b9d0-7f952a63c1fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=572cc1a4-6889-45f5-9ccb-1d24fa3ab232, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=fbec9736-25e9-44be-80ed-974c1de2bf0d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:50:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.727 162862 INFO neutron.agent.ovn.metadata.agent [-] Port fbec9736-25e9-44be-80ed-974c1de2bf0d in datapath 9b64819a-274e-4eb7-988b-ceb1ea73c9ce unbound from our chassis#033[00m
Nov 22 04:50:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.729 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9b64819a-274e-4eb7-988b-ceb1ea73c9ce#033[00m
Nov 22 04:50:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:50:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:50:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:50:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.749 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[bf58ef30-e596-4377-b127-7490e68b0cf7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:50:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:50:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:50:55 np0005532048 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000094.scope: Deactivated successfully.
Nov 22 04:50:55 np0005532048 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000094.scope: Consumed 14.952s CPU time.
Nov 22 04:50:55 np0005532048 systemd-machined[215941]: Machine qemu-180-instance-00000094 terminated.
Nov 22 04:50:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.787 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[519ae2b5-dc65-47e2-b9e1-0a0aba6425ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:50:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.791 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0969b1-164a-414b-a113-e5ed893063d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:50:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.832 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[b2fbab38-38bd-45af-b84b-57c7d8bed5f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:50:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.857 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94ff2fad-b771-4c75-9966-460509a71ca4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b64819a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:d3:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783269, 'reachable_time': 17873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407282, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:50:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.881 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c320cd39-8afd-4930-b1ab-00c50462143b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9b64819a-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 783283, 'tstamp': 783283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407283, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9b64819a-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 783287, 'tstamp': 783287}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407283, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:50:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.884 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b64819a-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.892 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.892 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b64819a-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:50:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.893 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:50:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.893 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9b64819a-20, col_values=(('external_ids', {'iface-id': 'da001788-faa3-412b-9b6a-82fe1a808a87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:50:55 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:55.894 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.971 253665 INFO nova.virt.libvirt.driver [-] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Instance destroyed successfully.#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.972 253665 DEBUG nova.objects.instance [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 34b8226a-40bd-46d4-99ee-1be44f56e142 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.987 253665 DEBUG nova.virt.libvirt.vif [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:50:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1818854274',display_name='tempest-TestGettingAddress-server-1818854274',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1818854274',id=148,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcB35ow6uk6IMlUBwbOGuOK3V7CtaZ2yJV3EZplxoxOQmEddDgKs5J+v7KXl9WfxkSmq+Acn+6POKmEHRfjGgaOghqPwK+UcBY92I7fBGtxwwkl4TxWcumLZptxfN80TA==',key_name='tempest-TestGettingAddress-174680913',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:50:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-no44603j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:50:15Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=34b8226a-40bd-46d4-99ee-1be44f56e142,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.988 253665 DEBUG nova.network.os_vif_util [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.989 253665 DEBUG nova.network.os_vif_util [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bc:98:0b,bridge_name='br-int',has_traffic_filtering=True,id=fbec9736-25e9-44be-80ed-974c1de2bf0d,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbec9736-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.989 253665 DEBUG os_vif [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:98:0b,bridge_name='br-int',has_traffic_filtering=True,id=fbec9736-25e9-44be-80ed-974c1de2bf0d,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbec9736-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.993 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.993 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbec9736-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:50:55 np0005532048 nova_compute[253661]: 2025-11-22 09:50:55.998 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:50:56 np0005532048 nova_compute[253661]: 2025-11-22 09:50:56.001 253665 INFO os_vif [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:98:0b,bridge_name='br-int',has_traffic_filtering=True,id=fbec9736-25e9-44be-80ed-974c1de2bf0d,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbec9736-25')#033[00m
Nov 22 04:50:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:56.150+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:50:56 np0005532048 nova_compute[253661]: 2025-11-22 09:50:56.161 253665 DEBUG nova.compute.manager [req-0c117465-e13e-46e2-bd28-dacfea41fdcc req-620565c2-636e-4e83-a27c-6b3f91e00f01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-vif-unplugged-fbec9736-25e9-44be-80ed-974c1de2bf0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:50:56 np0005532048 nova_compute[253661]: 2025-11-22 09:50:56.162 253665 DEBUG oslo_concurrency.lockutils [req-0c117465-e13e-46e2-bd28-dacfea41fdcc req-620565c2-636e-4e83-a27c-6b3f91e00f01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:50:56 np0005532048 nova_compute[253661]: 2025-11-22 09:50:56.162 253665 DEBUG oslo_concurrency.lockutils [req-0c117465-e13e-46e2-bd28-dacfea41fdcc req-620565c2-636e-4e83-a27c-6b3f91e00f01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:50:56 np0005532048 nova_compute[253661]: 2025-11-22 09:50:56.162 253665 DEBUG oslo_concurrency.lockutils [req-0c117465-e13e-46e2-bd28-dacfea41fdcc req-620565c2-636e-4e83-a27c-6b3f91e00f01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:50:56 np0005532048 nova_compute[253661]: 2025-11-22 09:50:56.162 253665 DEBUG nova.compute.manager [req-0c117465-e13e-46e2-bd28-dacfea41fdcc req-620565c2-636e-4e83-a27c-6b3f91e00f01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] No waiting events found dispatching network-vif-unplugged-fbec9736-25e9-44be-80ed-974c1de2bf0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:50:56 np0005532048 nova_compute[253661]: 2025-11-22 09:50:56.163 253665 DEBUG nova.compute.manager [req-0c117465-e13e-46e2-bd28-dacfea41fdcc req-620565c2-636e-4e83-a27c-6b3f91e00f01 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-vif-unplugged-fbec9736-25e9-44be-80ed-974c1de2bf0d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:50:56 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:50:56 np0005532048 ceph-mon[75021]: Health check failed: 1 slow ops, oldest one blocked for 31 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:50:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 604 KiB/s wr, 42 op/s
Nov 22 04:50:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:50:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:50:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:50:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:50:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:50:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:50:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:57.183+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.250 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.251 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.252 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:50:57 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.428 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.631 253665 INFO nova.virt.libvirt.driver [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Deleting instance files /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142_del#033[00m
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.632 253665 INFO nova.virt.libvirt.driver [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Deletion of /var/lib/nova/instances/34b8226a-40bd-46d4-99ee-1be44f56e142_del complete#033[00m
Nov 22 04:50:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:50:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:50:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 12K writes, 55K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1355 writes, 6193 keys, 1355 commit groups, 1.0 writes per commit group, ingest: 8.58 MB, 0.01 MB/s#012Interval WAL: 1355 writes, 1355 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     43.2      1.46              0.23        38    0.038       0      0       0.0       0.0#012  L6      1/0    8.13 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.8     97.8     82.7      3.64              0.92        37    0.098    227K    19K       0.0       0.0#012 Sum      1/0    8.13 MB   0.0      0.3     0.1      0.3       0.4      0.1       0.0   5.8     69.8     71.4      5.10              1.15        75    0.068    227K    19K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.0     74.8     75.1      0.70              0.14        10    0.070     39K   2532       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0     97.8     82.7      3.64              0.92        37    0.098    227K    19K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     43.3      1.46              0.23        37    0.039       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.062, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.36 GB write, 0.08 MB/s write, 0.35 GB read, 0.07 MB/s read, 5.1 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 40.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.0005 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2658,39.02 MB,12.8354%) FilterBlock(76,645.17 KB,0.207254%) IndexBlock(76,1.05 MB,0.345456%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.680 253665 INFO nova.compute.manager [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Took 2.16 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.681 253665 DEBUG oslo.service.loopingcall [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.681 253665 DEBUG nova.compute.manager [-] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.682 253665 DEBUG nova.network.neutron [-] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:50:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:50:57 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1194823485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.779 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.851 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.852 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.857 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:50:57 np0005532048 nova_compute[253661]: 2025-11-22 09:50:57.857 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.060 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.061 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3150MB free_disk=59.85171127319336GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.061 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.062 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.143 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 3a65f84a-3072-4b94-b08a-0ba7b1529a07 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.144 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 027bdffc-9e8e-4a33-9b06-844890912dc9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.144 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 34b8226a-40bd-46d4-99ee-1be44f56e142 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.144 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.144 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:50:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:50:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:58.186+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.208 253665 DEBUG nova.network.neutron [req-12e25f07-a6d3-4dbe-9c2c-eab0a8db5c2d req-eb3034d4-5971-4fc0-908b-486d8d225843 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Updated VIF entry in instance network info cache for port fbec9736-25e9-44be-80ed-974c1de2bf0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.209 253665 DEBUG nova.network.neutron [req-12e25f07-a6d3-4dbe-9c2c-eab0a8db5c2d req-eb3034d4-5971-4fc0-908b-486d8d225843 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Updating instance_info_cache with network_info: [{"id": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "address": "fa:16:3e:bc:98:0b", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:febc:980b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbec9736-25", "ovs_interfaceid": "fbec9736-25e9-44be-80ed-974c1de2bf0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:50:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:50:58.230 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.233 253665 DEBUG oslo_concurrency.lockutils [req-12e25f07-a6d3-4dbe-9c2c-eab0a8db5c2d req-eb3034d4-5971-4fc0-908b-486d8d225843 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-34b8226a-40bd-46d4-99ee-1be44f56e142" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.238 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.288 253665 DEBUG nova.compute.manager [req-bb5987bb-9172-43be-9fe7-e6ee9412cf33 req-ac3e4b6f-deaa-4f95-8825-1f491069a140 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.289 253665 DEBUG oslo_concurrency.lockutils [req-bb5987bb-9172-43be-9fe7-e6ee9412cf33 req-ac3e4b6f-deaa-4f95-8825-1f491069a140 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.290 253665 DEBUG oslo_concurrency.lockutils [req-bb5987bb-9172-43be-9fe7-e6ee9412cf33 req-ac3e4b6f-deaa-4f95-8825-1f491069a140 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.290 253665 DEBUG oslo_concurrency.lockutils [req-bb5987bb-9172-43be-9fe7-e6ee9412cf33 req-ac3e4b6f-deaa-4f95-8825-1f491069a140 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.290 253665 DEBUG nova.compute.manager [req-bb5987bb-9172-43be-9fe7-e6ee9412cf33 req-ac3e4b6f-deaa-4f95-8825-1f491069a140 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] No waiting events found dispatching network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.290 253665 WARNING nova.compute.manager [req-bb5987bb-9172-43be-9fe7-e6ee9412cf33 req-ac3e4b6f-deaa-4f95-8825-1f491069a140 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received unexpected event network-vif-plugged-fbec9736-25e9-44be-80ed-974c1de2bf0d for instance with vm_state active and task_state deleting.
Nov 22 04:50:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 605 KiB/s wr, 71 op/s
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.513 253665 DEBUG nova.network.neutron [-] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:50:58 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.532 253665 INFO nova.compute.manager [-] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Took 0.85 seconds to deallocate network for instance.
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.591 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:50:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:50:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571257852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.731 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.738 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.754 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.787 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.788 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.788 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:50:58 np0005532048 nova_compute[253661]: 2025-11-22 09:50:58.868 253665 DEBUG oslo_concurrency.processutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:50:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:50:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:50:59.198+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:50:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:50:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3665110708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:50:59 np0005532048 nova_compute[253661]: 2025-11-22 09:50:59.338 253665 DEBUG oslo_concurrency.processutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:50:59 np0005532048 nova_compute[253661]: 2025-11-22 09:50:59.345 253665 DEBUG nova.compute.provider_tree [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:50:59 np0005532048 nova_compute[253661]: 2025-11-22 09:50:59.362 253665 DEBUG nova.scheduler.client.report [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:50:59 np0005532048 nova_compute[253661]: 2025-11-22 09:50:59.387 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:50:59 np0005532048 nova_compute[253661]: 2025-11-22 09:50:59.415 253665 INFO nova.scheduler.client.report [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 34b8226a-40bd-46d4-99ee-1be44f56e142
Nov 22 04:50:59 np0005532048 nova_compute[253661]: 2025-11-22 09:50:59.482 253665 DEBUG oslo_concurrency.lockutils [None req-b8361b26-14a5-4cea-bb3e-8882d6fa7141 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "34b8226a-40bd-46d4-99ee-1be44f56e142" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:50:59 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:50:59 np0005532048 nova_compute[253661]: 2025-11-22 09:50:59.788 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:50:59 np0005532048 nova_compute[253661]: 2025-11-22 09:50:59.789 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:51:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:00.217+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 62 KiB/s wr, 44 op/s
Nov 22 04:51:00 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:00 np0005532048 nova_compute[253661]: 2025-11-22 09:51:00.861 253665 DEBUG nova.compute.manager [req-e9df2005-2592-48cd-aa30-2dc2d82cc05b req-8c7dcc9c-666a-4756-b827-2399579e9313 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Received event network-vif-deleted-fbec9736-25e9-44be-80ed-974c1de2bf0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:51:00 np0005532048 nova_compute[253661]: 2025-11-22 09:51:00.996 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:51:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:01.250+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.476 253665 DEBUG nova.compute.manager [req-236b77f9-a06d-43f4-8512-a01ad2ac1646 req-807ae5db-a260-4e8c-b553-4fb792ecddb4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-changed-9c015dd3-d340-40c6-bcc6-efef0a914d39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.476 253665 DEBUG nova.compute.manager [req-236b77f9-a06d-43f4-8512-a01ad2ac1646 req-807ae5db-a260-4e8c-b553-4fb792ecddb4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Refreshing instance network info cache due to event network-changed-9c015dd3-d340-40c6-bcc6-efef0a914d39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.476 253665 DEBUG oslo_concurrency.lockutils [req-236b77f9-a06d-43f4-8512-a01ad2ac1646 req-807ae5db-a260-4e8c-b553-4fb792ecddb4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.477 253665 DEBUG oslo_concurrency.lockutils [req-236b77f9-a06d-43f4-8512-a01ad2ac1646 req-807ae5db-a260-4e8c-b553-4fb792ecddb4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.477 253665 DEBUG nova.network.neutron [req-236b77f9-a06d-43f4-8512-a01ad2ac1646 req-807ae5db-a260-4e8c-b553-4fb792ecddb4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Refreshing network info cache for port 9c015dd3-d340-40c6-bcc6-efef0a914d39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.551 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.551 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.552 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.552 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.552 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.553 253665 INFO nova.compute.manager [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Terminating instance
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.554 253665 DEBUG nova.compute.manager [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:51:01 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:01 np0005532048 kernel: tap9c015dd3-d3 (unregistering): left promiscuous mode
Nov 22 04:51:01 np0005532048 NetworkManager[48920]: <info>  [1763805061.6283] device (tap9c015dd3-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:51:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:51:01Z|01603|binding|INFO|Releasing lport 9c015dd3-d340-40c6-bcc6-efef0a914d39 from this chassis (sb_readonly=0)
Nov 22 04:51:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:51:01Z|01604|binding|INFO|Setting lport 9c015dd3-d340-40c6-bcc6-efef0a914d39 down in Southbound
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.636 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:51:01 np0005532048 ovn_controller[152872]: 2025-11-22T09:51:01Z|01605|binding|INFO|Removing iface tap9c015dd3-d3 ovn-installed in OVS
Nov 22 04:51:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:01.648 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:47:56 10.100.0.9 2001:db8:0:1:f816:3eff:fe7b:4756 2001:db8::f816:3eff:fe7b:4756'], port_security=['fa:16:3e:7b:47:56 10.100.0.9 2001:db8:0:1:f816:3eff:fe7b:4756 2001:db8::f816:3eff:fe7b:4756'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8:0:1:f816:3eff:fe7b:4756/64 2001:db8::f816:3eff:fe7b:4756/64', 'neutron:device_id': '3a65f84a-3072-4b94-b08a-0ba7b1529a07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a71aa19e-d298-43f1-b9d0-7f952a63c1fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=572cc1a4-6889-45f5-9ccb-1d24fa3ab232, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=9c015dd3-d340-40c6-bcc6-efef0a914d39) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:51:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:01.649 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 9c015dd3-d340-40c6-bcc6-efef0a914d39 in datapath 9b64819a-274e-4eb7-988b-ceb1ea73c9ce unbound from our chassis
Nov 22 04:51:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:01.651 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9b64819a-274e-4eb7-988b-ceb1ea73c9ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:51:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:01.652 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[a94064bf-1995-4da3-9dd6-9f197b61cfea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:51:01 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:01.652 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce namespace which is not needed anymore
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.656 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:51:01 np0005532048 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000092.scope: Deactivated successfully.
Nov 22 04:51:01 np0005532048 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000092.scope: Consumed 16.245s CPU time.
Nov 22 04:51:01 np0005532048 systemd-machined[215941]: Machine qemu-178-instance-00000092 terminated.
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.794 253665 INFO nova.virt.libvirt.driver [-] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Instance destroyed successfully.
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.795 253665 DEBUG nova.objects.instance [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 3a65f84a-3072-4b94-b08a-0ba7b1529a07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.805 253665 DEBUG nova.virt.libvirt.vif [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:49:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1502632540',display_name='tempest-TestGettingAddress-server-1502632540',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1502632540',id=146,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCcB35ow6uk6IMlUBwbOGuOK3V7CtaZ2yJV3EZplxoxOQmEddDgKs5J+v7KXl9WfxkSmq+Acn+6POKmEHRfjGgaOghqPwK+UcBY92I7fBGtxwwkl4TxWcumLZptxfN80TA==',key_name='tempest-TestGettingAddress-174680913',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:49:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-97p9p4ep',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:49:42Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=3a65f84a-3072-4b94-b08a-0ba7b1529a07,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.806 253665 DEBUG nova.network.os_vif_util [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.807 253665 DEBUG nova.network.os_vif_util [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:47:56,bridge_name='br-int',has_traffic_filtering=True,id=9c015dd3-d340-40c6-bcc6-efef0a914d39,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c015dd3-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.807 253665 DEBUG os_vif [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:47:56,bridge_name='br-int',has_traffic_filtering=True,id=9c015dd3-d340-40c6-bcc6-efef0a914d39,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c015dd3-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.810 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.810 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9c015dd3-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:51:01 np0005532048 neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce[405212]: [NOTICE]   (405216) : haproxy version is 2.8.14-c23fe91
Nov 22 04:51:01 np0005532048 neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce[405212]: [NOTICE]   (405216) : path to executable is /usr/sbin/haproxy
Nov 22 04:51:01 np0005532048 neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce[405212]: [WARNING]  (405216) : Exiting Master process...
Nov 22 04:51:01 np0005532048 neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce[405212]: [WARNING]  (405216) : Exiting Master process...
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.812 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:01 np0005532048 neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce[405212]: [ALERT]    (405216) : Current worker (405218) exited with code 143 (Terminated)
Nov 22 04:51:01 np0005532048 neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce[405212]: [WARNING]  (405216) : All workers exited. Exiting... (0)
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.813 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:01 np0005532048 nova_compute[253661]: 2025-11-22 09:51:01.816 253665 INFO os_vif [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:47:56,bridge_name='br-int',has_traffic_filtering=True,id=9c015dd3-d340-40c6-bcc6-efef0a914d39,network=Network(9b64819a-274e-4eb7-988b-ceb1ea73c9ce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c015dd3-d3')#033[00m
Nov 22 04:51:01 np0005532048 systemd[1]: libpod-e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e.scope: Deactivated successfully.
Nov 22 04:51:01 np0005532048 podman[407405]: 2025-11-22 09:51:01.824160934 +0000 UTC m=+0.054052865 container died e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:51:01 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e-userdata-shm.mount: Deactivated successfully.
Nov 22 04:51:01 np0005532048 systemd[1]: var-lib-containers-storage-overlay-04d720b8b444377e6f53dfc930e4e7925a5c19a33403959e8730a0a92df22382-merged.mount: Deactivated successfully.
Nov 22 04:51:01 np0005532048 podman[407405]: 2025-11-22 09:51:01.91873844 +0000 UTC m=+0.148630361 container cleanup e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:51:01 np0005532048 systemd[1]: libpod-conmon-e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e.scope: Deactivated successfully.
Nov 22 04:51:02 np0005532048 podman[407462]: 2025-11-22 09:51:02.007227654 +0000 UTC m=+0.063723864 container remove e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 04:51:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.015 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[397c7cbf-f5b3-4ed5-8f56-61864c8a5937]: (4, ('Sat Nov 22 09:51:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce (e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e)\ne0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e\nSat Nov 22 09:51:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce (e0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e)\ne0bec075401e993e53dad9eabd1aa8e2d51224ac185dc79bd9f2ca0582baee9e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:51:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.017 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[13e8ac76-6547-4c18-8723-1d6e83806f85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:51:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.019 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b64819a-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:51:02 np0005532048 nova_compute[253661]: 2025-11-22 09:51:02.080 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:02 np0005532048 kernel: tap9b64819a-20: left promiscuous mode
Nov 22 04:51:02 np0005532048 nova_compute[253661]: 2025-11-22 09:51:02.098 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.101 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[0f02506b-07b6-4263-ad65-18bd7e5b0df8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:51:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.117 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[c4cd4e6b-f014-4fd2-b1ad-d752a5cec976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:51:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.118 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[04efd892-4cdc-4592-8b35-ff325b9f6c91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:51:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.139 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[be336b49-be83-46ef-b5dc-6f3c3498efa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783260, 'reachable_time': 20424, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407475, 'error': None, 'target': 'ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:51:02 np0005532048 systemd[1]: run-netns-ovnmeta\x2d9b64819a\x2d274e\x2d4eb7\x2d988b\x2dceb1ea73c9ce.mount: Deactivated successfully.
Nov 22 04:51:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.142 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9b64819a-274e-4eb7-988b-ceb1ea73c9ce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:51:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:02.144 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[13ec574a-a328-48ae-8249-cce5466ad0f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:51:02 np0005532048 nova_compute[253661]: 2025-11-22 09:51:02.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:51:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:02.297+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 62 KiB/s wr, 44 op/s
Nov 22 04:51:02 np0005532048 nova_compute[253661]: 2025-11-22 09:51:02.391 253665 INFO nova.virt.libvirt.driver [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Deleting instance files /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07_del#033[00m
Nov 22 04:51:02 np0005532048 nova_compute[253661]: 2025-11-22 09:51:02.392 253665 INFO nova.virt.libvirt.driver [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Deletion of /var/lib/nova/instances/3a65f84a-3072-4b94-b08a-0ba7b1529a07_del complete#033[00m
Nov 22 04:51:02 np0005532048 nova_compute[253661]: 2025-11-22 09:51:02.429 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:02 np0005532048 nova_compute[253661]: 2025-11-22 09:51:02.555 253665 INFO nova.compute.manager [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:51:02 np0005532048 nova_compute[253661]: 2025-11-22 09:51:02.556 253665 DEBUG oslo.service.loopingcall [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:51:02 np0005532048 nova_compute[253661]: 2025-11-22 09:51:02.556 253665 DEBUG nova.compute.manager [-] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:51:02 np0005532048 nova_compute[253661]: 2025-11-22 09:51:02.556 253665 DEBUG nova.network.neutron [-] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:51:02 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:02 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 36 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:51:02 np0005532048 nova_compute[253661]: 2025-11-22 09:51:02.925 253665 DEBUG nova.compute.manager [req-c28ef962-ff83-4938-9369-cc08201670f9 req-ca6d60e1-a0aa-4df6-8f9f-995733fca32f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-vif-unplugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:51:02 np0005532048 nova_compute[253661]: 2025-11-22 09:51:02.926 253665 DEBUG oslo_concurrency.lockutils [req-c28ef962-ff83-4938-9369-cc08201670f9 req-ca6d60e1-a0aa-4df6-8f9f-995733fca32f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:51:02 np0005532048 nova_compute[253661]: 2025-11-22 09:51:02.926 253665 DEBUG oslo_concurrency.lockutils [req-c28ef962-ff83-4938-9369-cc08201670f9 req-ca6d60e1-a0aa-4df6-8f9f-995733fca32f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:51:02 np0005532048 nova_compute[253661]: 2025-11-22 09:51:02.926 253665 DEBUG oslo_concurrency.lockutils [req-c28ef962-ff83-4938-9369-cc08201670f9 req-ca6d60e1-a0aa-4df6-8f9f-995733fca32f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:51:02 np0005532048 nova_compute[253661]: 2025-11-22 09:51:02.926 253665 DEBUG nova.compute.manager [req-c28ef962-ff83-4938-9369-cc08201670f9 req-ca6d60e1-a0aa-4df6-8f9f-995733fca32f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] No waiting events found dispatching network-vif-unplugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:51:02 np0005532048 nova_compute[253661]: 2025-11-22 09:51:02.927 253665 DEBUG nova.compute.manager [req-c28ef962-ff83-4938-9369-cc08201670f9 req-ca6d60e1-a0aa-4df6-8f9f-995733fca32f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-vif-unplugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001519945098326496 of space, bias 1.0, pg target 0.4559835294979488 quantized to 32 (current 32)
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:51:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:51:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:03.325+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:03 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:03 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 36 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:03 np0005532048 nova_compute[253661]: 2025-11-22 09:51:03.949 253665 DEBUG nova.network.neutron [-] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:51:03 np0005532048 nova_compute[253661]: 2025-11-22 09:51:03.966 253665 INFO nova.compute.manager [-] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Took 1.41 seconds to deallocate network for instance.#033[00m
Nov 22 04:51:04 np0005532048 nova_compute[253661]: 2025-11-22 09:51:04.007 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:51:04 np0005532048 nova_compute[253661]: 2025-11-22 09:51:04.008 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:51:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 118 KiB/s rd, 66 KiB/s wr, 72 op/s
Nov 22 04:51:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:04.352+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:04 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:04 np0005532048 nova_compute[253661]: 2025-11-22 09:51:04.863 253665 DEBUG oslo_concurrency.processutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:51:05 np0005532048 nova_compute[253661]: 2025-11-22 09:51:05.007 253665 DEBUG nova.compute.manager [req-c13b4e1e-15c8-4936-8f77-fac11dc409a4 req-5268ec61-fc3b-4afb-8959-e494d08e3489 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:51:05 np0005532048 nova_compute[253661]: 2025-11-22 09:51:05.008 253665 DEBUG oslo_concurrency.lockutils [req-c13b4e1e-15c8-4936-8f77-fac11dc409a4 req-5268ec61-fc3b-4afb-8959-e494d08e3489 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:51:05 np0005532048 nova_compute[253661]: 2025-11-22 09:51:05.008 253665 DEBUG oslo_concurrency.lockutils [req-c13b4e1e-15c8-4936-8f77-fac11dc409a4 req-5268ec61-fc3b-4afb-8959-e494d08e3489 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:51:05 np0005532048 nova_compute[253661]: 2025-11-22 09:51:05.009 253665 DEBUG oslo_concurrency.lockutils [req-c13b4e1e-15c8-4936-8f77-fac11dc409a4 req-5268ec61-fc3b-4afb-8959-e494d08e3489 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:51:05 np0005532048 nova_compute[253661]: 2025-11-22 09:51:05.009 253665 DEBUG nova.compute.manager [req-c13b4e1e-15c8-4936-8f77-fac11dc409a4 req-5268ec61-fc3b-4afb-8959-e494d08e3489 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] No waiting events found dispatching network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:51:05 np0005532048 nova_compute[253661]: 2025-11-22 09:51:05.009 253665 WARNING nova.compute.manager [req-c13b4e1e-15c8-4936-8f77-fac11dc409a4 req-5268ec61-fc3b-4afb-8959-e494d08e3489 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received unexpected event network-vif-plugged-9c015dd3-d340-40c6-bcc6-efef0a914d39 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:51:05 np0005532048 nova_compute[253661]: 2025-11-22 09:51:05.010 253665 DEBUG nova.compute.manager [req-c13b4e1e-15c8-4936-8f77-fac11dc409a4 req-5268ec61-fc3b-4afb-8959-e494d08e3489 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Received event network-vif-deleted-9c015dd3-d340-40c6-bcc6-efef0a914d39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:51:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:05.307+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:51:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3443649570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:51:05 np0005532048 nova_compute[253661]: 2025-11-22 09:51:05.347 253665 DEBUG oslo_concurrency.processutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
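The resource tracker shells out to `ceph df --format=json` (lines above) to size its `DISK_GB` inventory. A sketch of parsing that output; the field names follow the usual `ceph df` JSON layout, but treat the exact schema as an assumption, and the sample below is abbreviated, not real cluster output:

```python
import json

# Abbreviated sample in the shape `ceph df --format=json` typically emits
# (schema assumed; real output carries more fields plus per-pool stats).
sample = json.dumps({
    "stats": {
        "total_bytes": 64424509440,        # 60 GiB, matching the pgmap lines
        "total_avail_bytes": 63350767616,  # 59 GiB
        "total_used_raw_bytes": 1181116006,
    }
})

def disk_gb_from_ceph_df(raw):
    """Reduce ceph df JSON to (total, avail) in whole GiB."""
    stats = json.loads(raw)["stats"]
    gib = 1024 ** 3
    return stats["total_bytes"] // gib, stats["total_avail_bytes"] // gib

total, avail = disk_gb_from_ceph_df(sample)
print(f"{avail} GiB / {total} GiB avail")
```

With these sample numbers the result matches the `59 GiB / 60 GiB avail` figure the ceph-mgr pgmap lines report.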
Nov 22 04:51:05 np0005532048 nova_compute[253661]: 2025-11-22 09:51:05.355 253665 DEBUG nova.compute.provider_tree [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:51:05 np0005532048 nova_compute[253661]: 2025-11-22 09:51:05.374 253665 DEBUG nova.scheduler.client.report [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
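The inventory dict in the line above carries totals, reservations, and allocation ratios per resource class; Placement admits allocations while usage stays within `(total - reserved) * allocation_ratio`. A sketch of that arithmetic using the exact values logged:

```python
# Inventory as reported for provider f0c5987a-d277-4022-aba2-19e7fecb4518
# (only the fields needed for the capacity rule are kept).
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}

def effective_capacity(inv):
    # Placement's admission rule: usage must stay within
    # (total - reserved) * allocation_ratio for each resource class.
    return {
        rc: (fields["total"] - fields["reserved"]) * fields["allocation_ratio"]
        for rc, fields in inv.items()
    }

print(effective_capacity(inventory))
```

So this node schedules as 32 vCPUs (8 physical at 4.0 overcommit), 7167 MB of RAM, and roughly 52 GB of disk after the 0.9 ratio.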
Nov 22 04:51:05 np0005532048 nova_compute[253661]: 2025-11-22 09:51:05.420 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:51:05 np0005532048 nova_compute[253661]: 2025-11-22 09:51:05.532 253665 INFO nova.scheduler.client.report [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 3a65f84a-3072-4b94-b08a-0ba7b1529a07#033[00m
Nov 22 04:51:05 np0005532048 nova_compute[253661]: 2025-11-22 09:51:05.663 253665 DEBUG oslo_concurrency.lockutils [None req-20843ca9-417a-4049-af19-8db10be2a0d7 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "3a65f84a-3072-4b94-b08a-0ba7b1529a07" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:51:05 np0005532048 nova_compute[253661]: 2025-11-22 09:51:05.676 253665 DEBUG nova.network.neutron [req-236b77f9-a06d-43f4-8512-a01ad2ac1646 req-807ae5db-a260-4e8c-b553-4fb792ecddb4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updated VIF entry in instance network info cache for port 9c015dd3-d340-40c6-bcc6-efef0a914d39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:51:05 np0005532048 nova_compute[253661]: 2025-11-22 09:51:05.677 253665 DEBUG nova.network.neutron [req-236b77f9-a06d-43f4-8512-a01ad2ac1646 req-807ae5db-a260-4e8c-b553-4fb792ecddb4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Updating instance_info_cache with network_info: [{"id": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "address": "fa:16:3e:7b:47:56", "network": {"id": "9b64819a-274e-4eb7-988b-ceb1ea73c9ce", "bridge": "br-int", "label": "tempest-network-smoke--2109183656", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c015dd3-d3", "ovs_interfaceid": "9c015dd3-d340-40c6-bcc6-efef0a914d39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
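The `instance_info_cache` payload above is a list of VIF dicts, each nesting subnets and their IPs. A sketch (hypothetical helper, trimmed sample) of flattening it to the instance's fixed addresses:

```python
# Trimmed copy of the network_info structure logged above, keeping only
# the fields the helper reads.
network_info = [{
    "id": "9c015dd3-d340-40c6-bcc6-efef0a914d39",
    "network": {
        "subnets": [
            {"version": 6, "ips": [{"address": "2001:db8::f816:3eff:fe7b:4756", "type": "fixed"}]},
            {"version": 6, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe7b:4756", "type": "fixed"}]},
            {"version": 4, "ips": [{"address": "10.100.0.9", "type": "fixed"}]},
        ]
    },
}]

def fixed_ips(nw_info):
    # Collect (ip_version, address) for every fixed IP across all VIFs.
    return [
        (subnet["version"], ip["address"])
        for vif in nw_info
        for subnet in vif["network"]["subnets"]
        for ip in subnet["ips"]
        if ip["type"] == "fixed"
    ]

print(fixed_ips(network_info))
```

For this port that yields two SLAAC IPv6 addresses and one IPv4 address (10.100.0.9), matching the cache entry being refreshed even as the instance is torn down.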
Nov 22 04:51:05 np0005532048 nova_compute[253661]: 2025-11-22 09:51:05.715 253665 DEBUG oslo_concurrency.lockutils [req-236b77f9-a06d-43f4-8512-a01ad2ac1646 req-807ae5db-a260-4e8c-b553-4fb792ecddb4 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-3a65f84a-3072-4b94-b08a-0ba7b1529a07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:51:05 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:06.283+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.1 KiB/s wr, 56 op/s
Nov 22 04:51:06 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:06 np0005532048 nova_compute[253661]: 2025-11-22 09:51:06.813 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:07.280+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:07 np0005532048 nova_compute[253661]: 2025-11-22 09:51:07.431 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:07 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 41 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:51:07 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:07 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 41 sec, osd.1 has slow ops (SLOW_OPS)
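The mon repeats this SLOW_OPS health check with a growing age (`blocked for 41 sec` here, `46 sec` at 04:51:12). When triaging, a sketch like the following (hypothetical regex, matching the message shape above) pulls the op count, age, and offending OSD out of such lines:

```python
import re

line = ("Health check update: 1 slow ops, oldest one blocked for 41 sec, "
        "osd.1 has slow ops (SLOW_OPS)")

# Pattern assumed from the wording of the mon log lines above.
SLOW_OPS_RE = re.compile(
    r"(?P<count>\d+) slow ops, oldest one blocked for (?P<age>\d+) sec, "
    r"(?P<osd>osd\.\d+) has slow ops"
)

def parse_slow_ops(msg):
    """Return {count, age_sec, osd} for a SLOW_OPS line, or None."""
    m = SLOW_OPS_RE.search(msg)
    if not m:
        return None
    return {"count": int(m["count"]), "age_sec": int(m["age"]), "osd": m["osd"]}

print(parse_slow_ops(line))
```

Tracking `age_sec` across successive health updates distinguishes a single stuck op (age grows monotonically, as here) from churn of many briefly slow ops.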
Nov 22 04:51:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:08.309+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.1 KiB/s wr, 56 op/s
Nov 22 04:51:08 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:08 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:09.299+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:09 np0005532048 podman[407499]: 2025-11-22 09:51:09.37671394 +0000 UTC m=+0.067166640 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 04:51:09 np0005532048 podman[407500]: 2025-11-22 09:51:09.38322598 +0000 UTC m=+0.073061555 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible)
Nov 22 04:51:09 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:10.305+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Nov 22 04:51:10 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:10 np0005532048 nova_compute[253661]: 2025-11-22 09:51:10.968 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763805055.9676287, 34b8226a-40bd-46d4-99ee-1be44f56e142 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:51:10 np0005532048 nova_compute[253661]: 2025-11-22 09:51:10.969 253665 INFO nova.compute.manager [-] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:51:10 np0005532048 nova_compute[253661]: 2025-11-22 09:51:10.989 253665 DEBUG nova.compute.manager [None req-7a8235dd-e4ad-4a76-802c-f6b005dfe445 - - - - - -] [instance: 34b8226a-40bd-46d4-99ee-1be44f56e142] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:51:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:11.338+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:11 np0005532048 nova_compute[253661]: 2025-11-22 09:51:11.814 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:11 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Nov 22 04:51:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:12.362+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:51:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/303551982' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:51:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:51:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/303551982' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:51:12 np0005532048 nova_compute[253661]: 2025-11-22 09:51:12.433 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:12 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 46 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:51:12 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:12 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 46 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:13.397+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:14 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Nov 22 04:51:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:14.363+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:15 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:15.314+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:15 np0005532048 podman[407534]: 2025-11-22 09:51:15.399217973 +0000 UTC m=+0.092418573 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 04:51:16 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:16.294+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:16 np0005532048 nova_compute[253661]: 2025-11-22 09:51:16.791 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763805061.7896166, 3a65f84a-3072-4b94-b08a-0ba7b1529a07 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:51:16 np0005532048 nova_compute[253661]: 2025-11-22 09:51:16.792 253665 INFO nova.compute.manager [-] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:51:16 np0005532048 nova_compute[253661]: 2025-11-22 09:51:16.832 253665 DEBUG nova.compute.manager [None req-29d5bc73-d091-44ae-9ca3-b6ee9e7326e9 - - - - - -] [instance: 3a65f84a-3072-4b94-b08a-0ba7b1529a07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:51:16 np0005532048 nova_compute[253661]: 2025-11-22 09:51:16.865 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:17 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:17.267+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:17 np0005532048 ovn_controller[152872]: 2025-11-22T09:51:17Z|01606|binding|INFO|Releasing lport e20358df-1297-4b78-9482-59841121a4d7 from this chassis (sb_readonly=0)
Nov 22 04:51:17 np0005532048 nova_compute[253661]: 2025-11-22 09:51:17.435 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:17 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 51 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:51:18 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:18 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 51 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:18.297+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:19 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:19.262+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:20 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:20.283+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:21.243+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:21 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:21 np0005532048 nova_compute[253661]: 2025-11-22 09:51:21.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:22.196+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:22 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:22 np0005532048 nova_compute[253661]: 2025-11-22 09:51:22.481 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:22 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 56 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:51:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:51:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:51:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:51:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:51:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:51:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:51:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:23.224+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:23 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:23 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 56 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:24.239+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:24 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:25.232+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:25 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:26.271+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:26 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:26 np0005532048 nova_compute[253661]: 2025-11-22 09:51:26.869 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:27.247+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:27 np0005532048 nova_compute[253661]: 2025-11-22 09:51:27.483 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:27 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 61 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:51:27 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:27.994 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:51:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:27.994 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:51:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:27.995 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:51:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:28.252+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:28 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:28 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 61 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:29.221+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:29 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:29 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:30.192+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:30 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:31.237+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:31 np0005532048 nova_compute[253661]: 2025-11-22 09:51:31.871 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:31 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:32.212+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:32 np0005532048 nova_compute[253661]: 2025-11-22 09:51:32.486 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:32 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 66 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:51:33 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:33 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 66 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:33.176+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:33 np0005532048 nova_compute[253661]: 2025-11-22 09:51:33.267 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:34 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:34.171+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:35 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:35.218+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:36.225+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:36 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:36 np0005532048 nova_compute[253661]: 2025-11-22 09:51:36.873 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:37.195+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:37 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:37 np0005532048 nova_compute[253661]: 2025-11-22 09:51:37.487 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:37 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 71 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:51:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:38.213+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:38 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:38 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 71 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:39.239+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:39 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:40.201+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:40 np0005532048 podman[407561]: 2025-11-22 09:51:40.361659379 +0000 UTC m=+0.051826811 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:51:40 np0005532048 podman[407562]: 2025-11-22 09:51:40.380670039 +0000 UTC m=+0.063004097 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 22 04:51:40 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:41.207+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:41 np0005532048 nova_compute[253661]: 2025-11-22 09:51:41.875 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:42.206+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:42 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:42 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:42 np0005532048 nova_compute[253661]: 2025-11-22 09:51:42.490 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:42 np0005532048 nova_compute[253661]: 2025-11-22 09:51:42.583 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:42 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 76 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:51:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:43.228+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:43 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:43 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 76 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:44.249+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:44 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:45.220+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:45 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:46.175+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:46.297 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:0e:e6 10.100.0.2 2001:db8::f816:3eff:fe94:ee6'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe94:ee6/64', 'neutron:device_id': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63d2b202-7cdb-46d8-a16a-63cc2d81bd37, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=318882b5-a140-4600-8260-0040c058e797) old=Port_Binding(mac=['fa:16:3e:94:0e:e6 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:51:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:46.298 162862 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 318882b5-a140-4600-8260-0040c058e797 in datapath 58b95ca9-260c-49de-9bd2-c16568d51c7e updated#033[00m
Nov 22 04:51:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:46.300 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58b95ca9-260c-49de-9bd2-c16568d51c7e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:51:46 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:51:46.301 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[5562f392-e162-4353-99f5-f73070eaf8ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:51:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:46 np0005532048 podman[407600]: 2025-11-22 09:51:46.426124679 +0000 UTC m=+0.122564668 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:51:46 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:46 np0005532048 nova_compute[253661]: 2025-11-22 09:51:46.878 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:47.215+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:47 np0005532048 nova_compute[253661]: 2025-11-22 09:51:47.538 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:51:47 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 81 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:51:47 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:47 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 81 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:48.229+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:48 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:49 np0005532048 nova_compute[253661]: 2025-11-22 09:51:49.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:51:49 np0005532048 nova_compute[253661]: 2025-11-22 09:51:49.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:51:49 np0005532048 nova_compute[253661]: 2025-11-22 09:51:49.269 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:51:49 np0005532048 nova_compute[253661]: 2025-11-22 09:51:49.269 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:51:49 np0005532048 nova_compute[253661]: 2025-11-22 09:51:49.269 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 22 04:51:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:49.271+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:49 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:49 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:50.290+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:50 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:51 np0005532048 nova_compute[253661]: 2025-11-22 09:51:51.308 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updating instance_info_cache with network_info: [{"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:51:51 np0005532048 nova_compute[253661]: 2025-11-22 09:51:51.336 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:51:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:51.337+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:51 np0005532048 nova_compute[253661]: 2025-11-22 09:51:51.337 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 04:51:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:51 np0005532048 nova_compute[253661]: 2025-11-22 09:51:51.338 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:51:51 np0005532048 nova_compute[253661]: 2025-11-22 09:51:51.881 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:51:52 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:51:52
Nov 22 04:51:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:51:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:51:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.mgr', '.rgw.root', 'vms', 'default.rgw.log', 'default.rgw.meta', 'backups', 'images']
Nov 22 04:51:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:51:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:52.350+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:52 np0005532048 nova_compute[253661]: 2025-11-22 09:51:52.540 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:51:52 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 86 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:51:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:51:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:51:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:51:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:51:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:51:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:51:53 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:53 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 86 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:53.375+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:54 np0005532048 nova_compute[253661]: 2025-11-22 09:51:54.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:51:54 np0005532048 nova_compute[253661]: 2025-11-22 09:51:54.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:51:54 np0005532048 nova_compute[253661]: 2025-11-22 09:51:54.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:51:54 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:54.408+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:55 np0005532048 nova_compute[253661]: 2025-11-22 09:51:55.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:51:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:51:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:51:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:51:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:51:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:51:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:51:55 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 60ff2f3b-a4b1-4785-a814-2e98fe2812ac does not exist
Nov 22 04:51:55 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev dcb66370-02b4-44b7-ac11-541774b5949e does not exist
Nov 22 04:51:55 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 0eb21d3e-6abc-4f2e-a00b-c1589f316085 does not exist
Nov 22 04:51:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:51:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:51:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:51:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:51:55 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:51:55 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:51:55 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:55 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:51:55 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:51:55 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:51:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:55.427+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:51:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:51:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:51:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:51:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:51:56 np0005532048 podman[407897]: 2025-11-22 09:51:55.922069303 +0000 UTC m=+0.031350405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:51:56 np0005532048 podman[407897]: 2025-11-22 09:51:56.050026072 +0000 UTC m=+0.159307094 container create 739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:51:56 np0005532048 systemd[1]: Started libpod-conmon-739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960.scope.
Nov 22 04:51:56 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:51:56 np0005532048 podman[407897]: 2025-11-22 09:51:56.211019908 +0000 UTC m=+0.320300950 container init 739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hofstadter, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 04:51:56 np0005532048 podman[407897]: 2025-11-22 09:51:56.220220675 +0000 UTC m=+0.329501697 container start 739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hofstadter, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:51:56 np0005532048 podman[407897]: 2025-11-22 09:51:56.228276604 +0000 UTC m=+0.337557626 container attach 739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hofstadter, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 04:51:56 np0005532048 interesting_hofstadter[407914]: 167 167
Nov 22 04:51:56 np0005532048 systemd[1]: libpod-739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960.scope: Deactivated successfully.
Nov 22 04:51:56 np0005532048 conmon[407914]: conmon 739a10b089a7c7317d39 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960.scope/container/memory.events
Nov 22 04:51:56 np0005532048 podman[407897]: 2025-11-22 09:51:56.233698848 +0000 UTC m=+0.342979860 container died 739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hofstadter, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 04:51:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-aba8d81932232959c5c25e09d9aae245c46c5f7ba500f6bd0e226376393e2bbd-merged.mount: Deactivated successfully.
Nov 22 04:51:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:56 np0005532048 podman[407897]: 2025-11-22 09:51:56.368752703 +0000 UTC m=+0.478033745 container remove 739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:51:56 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:56 np0005532048 systemd[1]: libpod-conmon-739a10b089a7c7317d3971918a2c5730ddeb14fe843f48c81a182d8ddbb3f960.scope: Deactivated successfully.
Nov 22 04:51:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:56.466+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:56 np0005532048 podman[407940]: 2025-11-22 09:51:56.57029496 +0000 UTC m=+0.024486897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:51:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:51:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:51:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:51:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:51:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:51:56 np0005532048 podman[407940]: 2025-11-22 09:51:56.830010602 +0000 UTC m=+0.284202519 container create bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khayyam, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:51:56 np0005532048 nova_compute[253661]: 2025-11-22 09:51:56.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:51:56 np0005532048 systemd[1]: Started libpod-conmon-bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e.scope.
Nov 22 04:51:56 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:51:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e96527a1bb2dead1513b2e9979600ac981319df676c1f803c9f4e66e383d01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:51:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e96527a1bb2dead1513b2e9979600ac981319df676c1f803c9f4e66e383d01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:51:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e96527a1bb2dead1513b2e9979600ac981319df676c1f803c9f4e66e383d01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:51:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e96527a1bb2dead1513b2e9979600ac981319df676c1f803c9f4e66e383d01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:51:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e96527a1bb2dead1513b2e9979600ac981319df676c1f803c9f4e66e383d01/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:51:57 np0005532048 podman[407940]: 2025-11-22 09:51:57.016756384 +0000 UTC m=+0.470948331 container init bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:51:57 np0005532048 podman[407940]: 2025-11-22 09:51:57.024017263 +0000 UTC m=+0.478209190 container start bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:51:57 np0005532048 podman[407940]: 2025-11-22 09:51:57.065448037 +0000 UTC m=+0.519639964 container attach bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khayyam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 04:51:57 np0005532048 nova_compute[253661]: 2025-11-22 09:51:57.388 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "d986b43b-ea74-42e0-903b-eef7a997e4ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:51:57 np0005532048 nova_compute[253661]: 2025-11-22 09:51:57.390 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:51:57 np0005532048 nova_compute[253661]: 2025-11-22 09:51:57.408 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:51:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:57.437+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:57 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:57 np0005532048 nova_compute[253661]: 2025-11-22 09:51:57.512 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:51:57 np0005532048 nova_compute[253661]: 2025-11-22 09:51:57.513 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:51:57 np0005532048 nova_compute[253661]: 2025-11-22 09:51:57.518 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:51:57 np0005532048 nova_compute[253661]: 2025-11-22 09:51:57.519 253665 INFO nova.compute.claims [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:51:57 np0005532048 nova_compute[253661]: 2025-11-22 09:51:57.574 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:51:57 np0005532048 nova_compute[253661]: 2025-11-22 09:51:57.611 253665 DEBUG nova.scheduler.client.report [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 04:51:57 np0005532048 nova_compute[253661]: 2025-11-22 09:51:57.629 253665 DEBUG nova.scheduler.client.report [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 04:51:57 np0005532048 nova_compute[253661]: 2025-11-22 09:51:57.630 253665 DEBUG nova.compute.provider_tree [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 04:51:57 np0005532048 nova_compute[253661]: 2025-11-22 09:51:57.645 253665 DEBUG nova.scheduler.client.report [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 22 04:51:57 np0005532048 nova_compute[253661]: 2025-11-22 09:51:57.678 253665 DEBUG nova.scheduler.client.report [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 22 04:51:57 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 91 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:51:57 np0005532048 nova_compute[253661]: 2025-11-22 09:51:57.729 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:51:58 np0005532048 cranky_khayyam[407956]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:51:58 np0005532048 cranky_khayyam[407956]: --> relative data size: 1.0
Nov 22 04:51:58 np0005532048 cranky_khayyam[407956]: --> All data devices are unavailable
Nov 22 04:51:58 np0005532048 systemd[1]: libpod-bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e.scope: Deactivated successfully.
Nov 22 04:51:58 np0005532048 systemd[1]: libpod-bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e.scope: Consumed 1.056s CPU time.
Nov 22 04:51:58 np0005532048 podman[407940]: 2025-11-22 09:51:58.129390643 +0000 UTC m=+1.583582560 container died bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:51:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:51:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3125127614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.188 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.194 253665 DEBUG nova.compute.provider_tree [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.208 253665 DEBUG nova.scheduler.client.report [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.229 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.230 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.259 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.259 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.306 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.306 253665 DEBUG nova.network.neutron [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.324 253665 INFO nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.345 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:51:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.441 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.443 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.444 253665 INFO nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Creating image(s)#033[00m
Nov 22 04:51:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:58.467+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay-11e96527a1bb2dead1513b2e9979600ac981319df676c1f803c9f4e66e383d01-merged.mount: Deactivated successfully.
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.613 253665 DEBUG nova.storage.rbd_utils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d986b43b-ea74-42e0-903b-eef7a997e4ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:51:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:51:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1499355171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.772 253665 DEBUG nova.storage.rbd_utils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d986b43b-ea74-42e0-903b-eef7a997e4ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.791 253665 DEBUG nova.storage.rbd_utils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d986b43b-ea74-42e0-903b-eef7a997e4ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.794 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.828 253665 DEBUG nova.policy [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.831 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.871 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:51:58 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.872 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:51:58 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 91 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.872 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:51:58 np0005532048 nova_compute[253661]: 2025-11-22 09:51:58.872 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.023 253665 DEBUG nova.storage.rbd_utils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d986b43b-ea74-42e0-903b-eef7a997e4ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.027 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d986b43b-ea74-42e0-903b-eef7a997e4ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.107 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.108 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.263 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.264 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3293MB free_disk=59.942649841308594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.265 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:51:59 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Nov 22 04:51:59 np0005532048 podman[407940]: 2025-11-22 09:51:59.321512714 +0000 UTC m=+2.775704631 container remove bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_khayyam, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.326 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 027bdffc-9e8e-4a33-9b06-844890912dc9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.326 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d986b43b-ea74-42e0-903b-eef7a997e4ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.326 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.326 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:51:59 np0005532048 systemd[1]: libpod-conmon-bdf1b0ec6ecf55d89f3062568e3e124c8072c1aecac19c19bc542768704ca51e.scope: Deactivated successfully.
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.400 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:51:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:51:59.493+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:51:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:51:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:51:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/985646846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.856 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.864 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.885 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.903 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:51:59 np0005532048 nova_compute[253661]: 2025-11-22 09:51:59.903 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:52:00 np0005532048 podman[408297]: 2025-11-22 09:51:59.917371084 +0000 UTC m=+0.020186673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:52:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:00.047 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:52:00 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:00.048 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:52:00 np0005532048 nova_compute[253661]: 2025-11-22 09:52:00.083 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:00 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:00 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:00 np0005532048 nova_compute[253661]: 2025-11-22 09:52:00.164 253665 DEBUG nova.network.neutron [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Successfully created port: b10caa5b-0659-423b-9bcf-57a9a1ed30c0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:52:00 np0005532048 podman[408297]: 2025-11-22 09:52:00.192100872 +0000 UTC m=+0.294916441 container create bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 04:52:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:52:00 np0005532048 systemd[1]: Started libpod-conmon-bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48.scope.
Nov 22 04:52:00 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:52:00 np0005532048 podman[408297]: 2025-11-22 09:52:00.440659535 +0000 UTC m=+0.543475134 container init bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:52:00 np0005532048 podman[408297]: 2025-11-22 09:52:00.448238357 +0000 UTC m=+0.551053926 container start bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 04:52:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:00.450+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:00 np0005532048 sharp_chandrasekhar[408315]: 167 167
Nov 22 04:52:00 np0005532048 systemd[1]: libpod-bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48.scope: Deactivated successfully.
Nov 22 04:52:00 np0005532048 podman[408297]: 2025-11-22 09:52:00.477414366 +0000 UTC m=+0.580229955 container attach bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 04:52:00 np0005532048 podman[408297]: 2025-11-22 09:52:00.478174396 +0000 UTC m=+0.580989965 container died bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chandrasekhar, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:52:00 np0005532048 nova_compute[253661]: 2025-11-22 09:52:00.518 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 d986b43b-ea74-42e0-903b-eef7a997e4ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:52:00 np0005532048 systemd[1]: var-lib-containers-storage-overlay-faad95947cc632d4432333ff8c81b735696584f6184133eb6ad4eaa9e1297ebc-merged.mount: Deactivated successfully.
Nov 22 04:52:00 np0005532048 nova_compute[253661]: 2025-11-22 09:52:00.579 253665 DEBUG nova.storage.rbd_utils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image d986b43b-ea74-42e0-903b-eef7a997e4ce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:52:00 np0005532048 nova_compute[253661]: 2025-11-22 09:52:00.896 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:52:00 np0005532048 nova_compute[253661]: 2025-11-22 09:52:00.914 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:52:00 np0005532048 nova_compute[253661]: 2025-11-22 09:52:00.915 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:52:00 np0005532048 podman[408297]: 2025-11-22 09:52:00.946634356 +0000 UTC m=+1.049449925 container remove bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:52:01 np0005532048 systemd[1]: libpod-conmon-bdc926cfbc66dd342800a0eca5e2346ba99d93f1b970e26f6bb948e8157b0d48.scope: Deactivated successfully.
Nov 22 04:52:01 np0005532048 podman[408392]: 2025-11-22 09:52:01.170937894 +0000 UTC m=+0.090909307 container create c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_curran, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:52:01 np0005532048 podman[408392]: 2025-11-22 09:52:01.104704654 +0000 UTC m=+0.024676087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:52:01 np0005532048 nova_compute[253661]: 2025-11-22 09:52:01.233 253665 DEBUG nova.network.neutron [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Successfully updated port: b10caa5b-0659-423b-9bcf-57a9a1ed30c0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:52:01 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:01 np0005532048 nova_compute[253661]: 2025-11-22 09:52:01.246 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:52:01 np0005532048 nova_compute[253661]: 2025-11-22 09:52:01.246 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:52:01 np0005532048 nova_compute[253661]: 2025-11-22 09:52:01.246 253665 DEBUG nova.network.neutron [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:52:01 np0005532048 systemd[1]: Started libpod-conmon-c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8.scope.
Nov 22 04:52:01 np0005532048 nova_compute[253661]: 2025-11-22 09:52:01.287 253665 DEBUG nova.objects.instance [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid d986b43b-ea74-42e0-903b-eef7a997e4ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:52:01 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:52:01 np0005532048 nova_compute[253661]: 2025-11-22 09:52:01.296 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:52:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdfccf717e5aa4163c6b0338eb8359bb3ae6fc1063d89e12b63da55812345b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:52:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdfccf717e5aa4163c6b0338eb8359bb3ae6fc1063d89e12b63da55812345b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:52:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdfccf717e5aa4163c6b0338eb8359bb3ae6fc1063d89e12b63da55812345b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:52:01 np0005532048 nova_compute[253661]: 2025-11-22 09:52:01.297 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Ensure instance console log exists: /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:52:01 np0005532048 nova_compute[253661]: 2025-11-22 09:52:01.297 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:52:01 np0005532048 nova_compute[253661]: 2025-11-22 09:52:01.297 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:52:01 np0005532048 nova_compute[253661]: 2025-11-22 09:52:01.297 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:52:01 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdfccf717e5aa4163c6b0338eb8359bb3ae6fc1063d89e12b63da55812345b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:52:01 np0005532048 nova_compute[253661]: 2025-11-22 09:52:01.310 253665 DEBUG nova.compute.manager [req-7f1af84c-9661-48da-b5b1-9b11a4adda73 req-95dc1ce6-d573-4d94-b8e9-9aea4dbc3b29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-changed-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:52:01 np0005532048 nova_compute[253661]: 2025-11-22 09:52:01.310 253665 DEBUG nova.compute.manager [req-7f1af84c-9661-48da-b5b1-9b11a4adda73 req-95dc1ce6-d573-4d94-b8e9-9aea4dbc3b29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Refreshing instance network info cache due to event network-changed-b10caa5b-0659-423b-9bcf-57a9a1ed30c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:52:01 np0005532048 nova_compute[253661]: 2025-11-22 09:52:01.311 253665 DEBUG oslo_concurrency.lockutils [req-7f1af84c-9661-48da-b5b1-9b11a4adda73 req-95dc1ce6-d573-4d94-b8e9-9aea4dbc3b29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:52:01 np0005532048 podman[408392]: 2025-11-22 09:52:01.342648168 +0000 UTC m=+0.262619601 container init c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_curran, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 04:52:01 np0005532048 podman[408392]: 2025-11-22 09:52:01.353616156 +0000 UTC m=+0.273587549 container start c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:52:01 np0005532048 podman[408392]: 2025-11-22 09:52:01.383816882 +0000 UTC m=+0.303788305 container attach c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_curran, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 04:52:01 np0005532048 nova_compute[253661]: 2025-11-22 09:52:01.389 253665 DEBUG nova.network.neutron [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:52:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:01.445+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:01 np0005532048 nova_compute[253661]: 2025-11-22 09:52:01.883 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:02 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:02.050 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:52:02 np0005532048 nifty_curran[408423]: {
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:    "0": [
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:        {
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "devices": [
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "/dev/loop3"
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            ],
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "lv_name": "ceph_lv0",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "lv_size": "21470642176",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "name": "ceph_lv0",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "tags": {
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.cluster_name": "ceph",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.crush_device_class": "",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.encrypted": "0",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.osd_id": "0",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.type": "block",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.vdo": "0"
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            },
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "type": "block",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "vg_name": "ceph_vg0"
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:        }
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:    ],
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:    "1": [
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:        {
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "devices": [
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "/dev/loop4"
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            ],
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "lv_name": "ceph_lv1",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "lv_size": "21470642176",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "name": "ceph_lv1",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "tags": {
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.cluster_name": "ceph",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.crush_device_class": "",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.encrypted": "0",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.osd_id": "1",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.type": "block",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.vdo": "0"
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            },
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "type": "block",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "vg_name": "ceph_vg1"
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:        }
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:    ],
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:    "2": [
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:        {
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "devices": [
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "/dev/loop5"
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            ],
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "lv_name": "ceph_lv2",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "lv_size": "21470642176",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "name": "ceph_lv2",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "tags": {
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.cluster_name": "ceph",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.crush_device_class": "",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.encrypted": "0",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.osd_id": "2",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.type": "block",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:                "ceph.vdo": "0"
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            },
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "type": "block",
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:            "vg_name": "ceph_vg2"
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:        }
Nov 22 04:52:02 np0005532048 nifty_curran[408423]:    ]
Nov 22 04:52:02 np0005532048 nifty_curran[408423]: }
Nov 22 04:52:02 np0005532048 systemd[1]: libpod-c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8.scope: Deactivated successfully.
Nov 22 04:52:02 np0005532048 podman[408392]: 2025-11-22 09:52:02.168945033 +0000 UTC m=+1.088916436 container died c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:52:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:52:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:02.495+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.575 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.599 253665 DEBUG nova.network.neutron [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Updating instance_info_cache with network_info: [{"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:52:02 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.630 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.631 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Instance network_info: |[{"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.632 253665 DEBUG oslo_concurrency.lockutils [req-7f1af84c-9661-48da-b5b1-9b11a4adda73 req-95dc1ce6-d573-4d94-b8e9-9aea4dbc3b29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.632 253665 DEBUG nova.network.neutron [req-7f1af84c-9661-48da-b5b1-9b11a4adda73 req-95dc1ce6-d573-4d94-b8e9-9aea4dbc3b29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Refreshing network info cache for port b10caa5b-0659-423b-9bcf-57a9a1ed30c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.636 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Start _get_guest_xml network_info=[{"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.644 253665 WARNING nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.656 253665 DEBUG nova.virt.libvirt.host [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.658 253665 DEBUG nova.virt.libvirt.host [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.662 253665 DEBUG nova.virt.libvirt.host [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.663 253665 DEBUG nova.virt.libvirt.host [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.664 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.664 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.665 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.666 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.666 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.666 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.667 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.667 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.667 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.668 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.668 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.668 253665 DEBUG nova.virt.hardware [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:52:02 np0005532048 nova_compute[253661]: 2025-11-22 09:52:02.673 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:52:02 np0005532048 systemd[1]: var-lib-containers-storage-overlay-dbdfccf717e5aa4163c6b0338eb8359bb3ae6fc1063d89e12b63da55812345b6-merged.mount: Deactivated successfully.
Nov 22 04:52:02 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 96 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007606720469492739 of space, bias 1.0, pg target 0.22820161408478218 quantized to 32 (current 32)
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:52:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:52:03 np0005532048 podman[408392]: 2025-11-22 09:52:03.061826825 +0000 UTC m=+1.981798228 container remove c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 04:52:03 np0005532048 systemd[1]: libpod-conmon-c98e36c1c3ceab1719b78fba35bef430960a5e372faf7be3d04cd9d5a860e9d8.scope: Deactivated successfully.
Nov 22 04:52:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:52:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2647914078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.193 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.219 253665 DEBUG nova.storage.rbd_utils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d986b43b-ea74-42e0-903b-eef7a997e4ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.225 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.277 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:52:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:03.455+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:03 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:03 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 96 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:52:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1226863174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.792 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.795 253665 DEBUG nova.virt.libvirt.vif [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:51:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-355287164',display_name='tempest-TestGettingAddress-server-355287164',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-355287164',id=149,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSb60+tm2QuEWINNrbY2Z4T8shyuVj5ORFNm8DDF4ERr5xc1TwTNbRvBPI6FjbgHdIsPrc+izgcvAijbwtfNpo3Q7dk/qm1p9ZZITdtksKMPJb7o1jSKDouF16N0zCqOA==',key_name='tempest-TestGettingAddress-1774595184',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-913op7wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:51:58Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=d986b43b-ea74-42e0-903b-eef7a997e4ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.795 253665 DEBUG nova.network.os_vif_util [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.796 253665 DEBUG nova.network.os_vif_util [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:a9:7e,bridge_name='br-int',has_traffic_filtering=True,id=b10caa5b-0659-423b-9bcf-57a9a1ed30c0,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb10caa5b-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.798 253665 DEBUG nova.objects.instance [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid d986b43b-ea74-42e0-903b-eef7a997e4ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.817 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  <uuid>d986b43b-ea74-42e0-903b-eef7a997e4ce</uuid>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  <name>instance-00000095</name>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestGettingAddress-server-355287164</nova:name>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:52:02</nova:creationTime>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:52:03 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:        <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:        <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:        <nova:port uuid="b10caa5b-0659-423b-9bcf-57a9a1ed30c0">
Nov 22 04:52:03 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe12:a97e" ipVersion="6"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <entry name="serial">d986b43b-ea74-42e0-903b-eef7a997e4ce</entry>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <entry name="uuid">d986b43b-ea74-42e0-903b-eef7a997e4ce</entry>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d986b43b-ea74-42e0-903b-eef7a997e4ce_disk">
Nov 22 04:52:03 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:52:03 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/d986b43b-ea74-42e0-903b-eef7a997e4ce_disk.config">
Nov 22 04:52:03 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:52:03 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:12:a9:7e"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <target dev="tapb10caa5b-06"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce/console.log" append="off"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:52:03 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:52:03 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:52:03 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:52:03 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.819 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Preparing to wait for external event network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.819 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.819 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.820 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.820 253665 DEBUG nova.virt.libvirt.vif [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:51:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-355287164',display_name='tempest-TestGettingAddress-server-355287164',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-355287164',id=149,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSb60+tm2QuEWINNrbY2Z4T8shyuVj5ORFNm8DDF4ERr5xc1TwTNbRvBPI6FjbgHdIsPrc+izgcvAijbwtfNpo3Q7dk/qm1p9ZZITdtksKMPJb7o1jSKDouF16N0zCqOA==',key_name='tempest-TestGettingAddress-1774595184',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-913op7wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:51:58Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=d986b43b-ea74-42e0-903b-eef7a997e4ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.821 253665 DEBUG nova.network.os_vif_util [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.822 253665 DEBUG nova.network.os_vif_util [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:a9:7e,bridge_name='br-int',has_traffic_filtering=True,id=b10caa5b-0659-423b-9bcf-57a9a1ed30c0,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb10caa5b-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.822 253665 DEBUG os_vif [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:a9:7e,bridge_name='br-int',has_traffic_filtering=True,id=b10caa5b-0659-423b-9bcf-57a9a1ed30c0,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb10caa5b-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.823 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.824 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.825 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.830 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.830 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb10caa5b-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.830 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb10caa5b-06, col_values=(('external_ids', {'iface-id': 'b10caa5b-0659-423b-9bcf-57a9a1ed30c0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:a9:7e', 'vm-uuid': 'd986b43b-ea74-42e0-903b-eef7a997e4ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.832 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:03 np0005532048 NetworkManager[48920]: <info>  [1763805123.8336] manager: (tapb10caa5b-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/657)
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.834 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.844 253665 INFO os_vif [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:a9:7e,bridge_name='br-int',has_traffic_filtering=True,id=b10caa5b-0659-423b-9bcf-57a9a1ed30c0,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb10caa5b-06')#033[00m
Nov 22 04:52:03 np0005532048 podman[408651]: 2025-11-22 09:52:03.814689417 +0000 UTC m=+0.032641628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:52:03 np0005532048 podman[408651]: 2025-11-22 09:52:03.980882642 +0000 UTC m=+0.198834803 container create c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.993 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.993 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.993 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:12:a9:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:52:03 np0005532048 nova_compute[253661]: 2025-11-22 09:52:03.994 253665 INFO nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Using config drive#033[00m
Nov 22 04:52:04 np0005532048 nova_compute[253661]: 2025-11-22 09:52:04.018 253665 DEBUG nova.storage.rbd_utils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d986b43b-ea74-42e0-903b-eef7a997e4ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:52:04 np0005532048 systemd[1]: Started libpod-conmon-c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379.scope.
Nov 22 04:52:04 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:52:04 np0005532048 podman[408651]: 2025-11-22 09:52:04.205652221 +0000 UTC m=+0.423604412 container init c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:52:04 np0005532048 podman[408651]: 2025-11-22 09:52:04.218961519 +0000 UTC m=+0.436913720 container start c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 04:52:04 np0005532048 xenodochial_jones[408688]: 167 167
Nov 22 04:52:04 np0005532048 systemd[1]: libpod-c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379.scope: Deactivated successfully.
Nov 22 04:52:04 np0005532048 nova_compute[253661]: 2025-11-22 09:52:04.268 253665 DEBUG nova.network.neutron [req-7f1af84c-9661-48da-b5b1-9b11a4adda73 req-95dc1ce6-d573-4d94-b8e9-9aea4dbc3b29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Updated VIF entry in instance network info cache for port b10caa5b-0659-423b-9bcf-57a9a1ed30c0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:52:04 np0005532048 nova_compute[253661]: 2025-11-22 09:52:04.270 253665 DEBUG nova.network.neutron [req-7f1af84c-9661-48da-b5b1-9b11a4adda73 req-95dc1ce6-d573-4d94-b8e9-9aea4dbc3b29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Updating instance_info_cache with network_info: [{"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:52:04 np0005532048 nova_compute[253661]: 2025-11-22 09:52:04.283 253665 DEBUG oslo_concurrency.lockutils [req-7f1af84c-9661-48da-b5b1-9b11a4adda73 req-95dc1ce6-d573-4d94-b8e9-9aea4dbc3b29 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:52:04 np0005532048 nova_compute[253661]: 2025-11-22 09:52:04.344 253665 INFO nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Creating config drive at /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce/disk.config
Nov 22 04:52:04 np0005532048 nova_compute[253661]: 2025-11-22 09:52:04.350 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjq3z9yo_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:52:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:52:04 np0005532048 podman[408651]: 2025-11-22 09:52:04.364684645 +0000 UTC m=+0.582636856 container attach c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:52:04 np0005532048 podman[408651]: 2025-11-22 09:52:04.365113065 +0000 UTC m=+0.583065246 container died c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_jones, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 04:52:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:04.483+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:04 np0005532048 nova_compute[253661]: 2025-11-22 09:52:04.498 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjq3z9yo_" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:52:04 np0005532048 nova_compute[253661]: 2025-11-22 09:52:04.524 253665 DEBUG nova.storage.rbd_utils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image d986b43b-ea74-42e0-903b-eef7a997e4ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:52:04 np0005532048 nova_compute[253661]: 2025-11-22 09:52:04.528 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce/disk.config d986b43b-ea74-42e0-903b-eef7a997e4ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:52:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay-dc1f2a45ed5ac16da445967a74145bcdbfca2fbe13f46ad9ecfce1d4ed05a2ee-merged.mount: Deactivated successfully.
Nov 22 04:52:04 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:05 np0005532048 podman[408651]: 2025-11-22 09:52:05.105631365 +0000 UTC m=+1.323583556 container remove c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_jones, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:52:05 np0005532048 systemd[1]: libpod-conmon-c061cc7bb66bb708f681c5efd35b34c77150c969b9281fee4e45a2efd6f5b379.scope: Deactivated successfully.
Nov 22 04:52:05 np0005532048 nova_compute[253661]: 2025-11-22 09:52:05.252 253665 DEBUG oslo_concurrency.processutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce/disk.config d986b43b-ea74-42e0-903b-eef7a997e4ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:52:05 np0005532048 nova_compute[253661]: 2025-11-22 09:52:05.255 253665 INFO nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Deleting local config drive /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce/disk.config because it was imported into RBD.
Nov 22 04:52:05 np0005532048 kernel: tapb10caa5b-06: entered promiscuous mode
Nov 22 04:52:05 np0005532048 NetworkManager[48920]: <info>  [1763805125.3180] manager: (tapb10caa5b-06): new Tun device (/org/freedesktop/NetworkManager/Devices/658)
Nov 22 04:52:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:52:05Z|01607|binding|INFO|Claiming lport b10caa5b-0659-423b-9bcf-57a9a1ed30c0 for this chassis.
Nov 22 04:52:05 np0005532048 nova_compute[253661]: 2025-11-22 09:52:05.317 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:52:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:52:05Z|01608|binding|INFO|b10caa5b-0659-423b-9bcf-57a9a1ed30c0: Claiming fa:16:3e:12:a9:7e 10.100.0.6 2001:db8::f816:3eff:fe12:a97e
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.328 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:a9:7e 10.100.0.6 2001:db8::f816:3eff:fe12:a97e'], port_security=['fa:16:3e:12:a9:7e 10.100.0.6 2001:db8::f816:3eff:fe12:a97e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8::f816:3eff:fe12:a97e/64', 'neutron:device_id': 'd986b43b-ea74-42e0-903b-eef7a997e4ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0beff345-fc2f-4a68-a4a7-1d4c0960ae91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63d2b202-7cdb-46d8-a16a-63cc2d81bd37, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b10caa5b-0659-423b-9bcf-57a9a1ed30c0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.329 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b10caa5b-0659-423b-9bcf-57a9a1ed30c0 in datapath 58b95ca9-260c-49de-9bd2-c16568d51c7e bound to our chassis
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.330 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58b95ca9-260c-49de-9bd2-c16568d51c7e
Nov 22 04:52:05 np0005532048 nova_compute[253661]: 2025-11-22 09:52:05.332 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:52:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:52:05Z|01609|binding|INFO|Setting lport b10caa5b-0659-423b-9bcf-57a9a1ed30c0 ovn-installed in OVS
Nov 22 04:52:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:52:05Z|01610|binding|INFO|Setting lport b10caa5b-0659-423b-9bcf-57a9a1ed30c0 up in Southbound
Nov 22 04:52:05 np0005532048 nova_compute[253661]: 2025-11-22 09:52:05.334 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:52:05 np0005532048 nova_compute[253661]: 2025-11-22 09:52:05.338 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.342 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d551d9d5-cc6c-4a62-805a-5285a6bd4f03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.343 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58b95ca9-21 in ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:52:05 np0005532048 podman[408752]: 2025-11-22 09:52:05.34638729 +0000 UTC m=+0.074553851 container create 31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.346 270751 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58b95ca9-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.346 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fb058665-4838-4ade-8885-9cf917d5d160]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.347 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f0619e-404d-4c5f-9257-9ecac8c09d80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:52:05 np0005532048 systemd-udevd[408776]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.360 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[ceba34db-5b60-495d-8e09-8d00eb16876e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:52:05 np0005532048 NetworkManager[48920]: <info>  [1763805125.3682] device (tapb10caa5b-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:52:05 np0005532048 NetworkManager[48920]: <info>  [1763805125.3698] device (tapb10caa5b-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:52:05 np0005532048 systemd-machined[215941]: New machine qemu-181-instance-00000095.
Nov 22 04:52:05 np0005532048 systemd[1]: Started Virtual Machine qemu-181-instance-00000095.
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.390 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[94948bd3-25e4-40e5-b18b-28a848fb0531]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:52:05 np0005532048 systemd[1]: Started libpod-conmon-31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34.scope.
Nov 22 04:52:05 np0005532048 podman[408752]: 2025-11-22 09:52:05.299442319 +0000 UTC m=+0.027608900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:52:05 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:52:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f131c71116df6069d5841017c100d3760e998dff149a91f5acdea4d85d29a90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:52:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f131c71116df6069d5841017c100d3760e998dff149a91f5acdea4d85d29a90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:52:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f131c71116df6069d5841017c100d3760e998dff149a91f5acdea4d85d29a90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.422 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[5d19faf2-9d75-4b19-ab31-765de95e2a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:52:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f131c71116df6069d5841017c100d3760e998dff149a91f5acdea4d85d29a90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:52:05 np0005532048 NetworkManager[48920]: <info>  [1763805125.4368] manager: (tap58b95ca9-20): new Veth device (/org/freedesktop/NetworkManager/Devices/659)
Nov 22 04:52:05 np0005532048 systemd-udevd[408780]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.436 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[1070addd-e679-48b9-8fba-621bee52c836]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.473 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[40d0ba2c-fa37-4178-981b-89ebcb6b5fd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:52:05 np0005532048 podman[408752]: 2025-11-22 09:52:05.477340011 +0000 UTC m=+0.205506572 container init 31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.476 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[37e154a1-6cae-4d16-9c46-f04268bf93bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:52:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:05.484+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:05 np0005532048 podman[408752]: 2025-11-22 09:52:05.487429657 +0000 UTC m=+0.215596218 container start 31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 04:52:05 np0005532048 podman[408752]: 2025-11-22 09:52:05.504055829 +0000 UTC m=+0.232222410 container attach 31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:52:05 np0005532048 NetworkManager[48920]: <info>  [1763805125.5078] device (tap58b95ca9-20): carrier: link connected
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.512 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[49d2232a-5831-4cba-b445-2db44cadfa2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.531 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[38a3d71f-8b8b-4dd2-a923-2e9b8a9dc28f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58b95ca9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:0e:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 462], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 797624, 'reachable_time': 32921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408817, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:52:05 np0005532048 nova_compute[253661]: 2025-11-22 09:52:05.545 253665 DEBUG nova.compute.manager [req-330bfd00-18dd-4b74-a7dd-a5a1706fcd60 req-dc106981-dd29-4d00-a5f2-9891c763510f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:52:05 np0005532048 nova_compute[253661]: 2025-11-22 09:52:05.545 253665 DEBUG oslo_concurrency.lockutils [req-330bfd00-18dd-4b74-a7dd-a5a1706fcd60 req-dc106981-dd29-4d00-a5f2-9891c763510f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:52:05 np0005532048 nova_compute[253661]: 2025-11-22 09:52:05.545 253665 DEBUG oslo_concurrency.lockutils [req-330bfd00-18dd-4b74-a7dd-a5a1706fcd60 req-dc106981-dd29-4d00-a5f2-9891c763510f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:52:05 np0005532048 nova_compute[253661]: 2025-11-22 09:52:05.546 253665 DEBUG oslo_concurrency.lockutils [req-330bfd00-18dd-4b74-a7dd-a5a1706fcd60 req-dc106981-dd29-4d00-a5f2-9891c763510f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:52:05 np0005532048 nova_compute[253661]: 2025-11-22 09:52:05.546 253665 DEBUG nova.compute.manager [req-330bfd00-18dd-4b74-a7dd-a5a1706fcd60 req-dc106981-dd29-4d00-a5f2-9891c763510f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Processing event network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.549 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ba0cc1ab-30cd-4917-b5d6-68ae280ca954]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:ee6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 797624, 'tstamp': 797624}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408818, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.565 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f38e643c-d8f7-4283-bcc6-cb8f88ad2995]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58b95ca9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:0e:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 462], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 797624, 'reachable_time': 32921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 408819, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.596 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[eced9b79-7379-4209-b2eb-785b356be4f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.661 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7fb409-76be-42f8-9c04-e8842bed3762]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.663 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58b95ca9-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.664 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.665 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58b95ca9-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:52:05 np0005532048 nova_compute[253661]: 2025-11-22 09:52:05.667 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:05 np0005532048 NetworkManager[48920]: <info>  [1763805125.6677] manager: (tap58b95ca9-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/660)
Nov 22 04:52:05 np0005532048 kernel: tap58b95ca9-20: entered promiscuous mode
Nov 22 04:52:05 np0005532048 nova_compute[253661]: 2025-11-22 09:52:05.671 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.672 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58b95ca9-20, col_values=(('external_ids', {'iface-id': '318882b5-a140-4600-8260-0040c058e797'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:52:05 np0005532048 nova_compute[253661]: 2025-11-22 09:52:05.673 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:05 np0005532048 ovn_controller[152872]: 2025-11-22T09:52:05Z|01611|binding|INFO|Releasing lport 318882b5-a140-4600-8260-0040c058e797 from this chassis (sb_readonly=0)
Nov 22 04:52:05 np0005532048 nova_compute[253661]: 2025-11-22 09:52:05.689 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.690 162862 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58b95ca9-260c-49de-9bd2-c16568d51c7e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58b95ca9-260c-49de-9bd2-c16568d51c7e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.691 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[218a370d-dc7a-46a7-8574-399579ad77b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.692 162862 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: global
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    log         /dev/log local0 debug
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    log-tag     haproxy-metadata-proxy-58b95ca9-260c-49de-9bd2-c16568d51c7e
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    user        root
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    group       root
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    maxconn     1024
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    pidfile     /var/lib/neutron/external/pids/58b95ca9-260c-49de-9bd2-c16568d51c7e.pid.haproxy
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    daemon
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: defaults
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    log global
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    mode http
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    option httplog
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    option dontlognull
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    option http-server-close
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    option forwardfor
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    retries                 3
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    timeout http-request    30s
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    timeout connect         30s
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    timeout client          32s
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    timeout server          32s
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    timeout http-keep-alive 30s
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: listen listener
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    bind 169.254.169.254:80
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]:    http-request add-header X-OVN-Network-ID 58b95ca9-260c-49de-9bd2-c16568d51c7e
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 22 04:52:05 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:05.693 162862 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'env', 'PROCESS_TAG=haproxy-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58b95ca9-260c-49de-9bd2-c16568d51c7e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 22 04:52:05 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:05 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:06 np0005532048 podman[408876]: 2025-11-22 09:52:06.107986334 +0000 UTC m=+0.098811787 container create 2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:52:06 np0005532048 podman[408876]: 2025-11-22 09:52:06.032659354 +0000 UTC m=+0.023484837 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 22 04:52:06 np0005532048 systemd[1]: Started libpod-conmon-2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47.scope.
Nov 22 04:52:06 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.167 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805126.166405, d986b43b-ea74-42e0-903b-eef7a997e4ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.168 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] VM Started (Lifecycle Event)#033[00m
Nov 22 04:52:06 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ea709103a9d5440c95c0740482f664efdf69029657dc148bca198fcea4ae2e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.171 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.175 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.179 253665 INFO nova.virt.libvirt.driver [-] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Instance spawned successfully.#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.179 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:52:06 np0005532048 podman[408876]: 2025-11-22 09:52:06.192069256 +0000 UTC m=+0.182894739 container init 2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.192 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:52:06 np0005532048 podman[408876]: 2025-11-22 09:52:06.19890608 +0000 UTC m=+0.189731533 container start 2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.201 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.204 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.205 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.205 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.206 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.206 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.207 253665 DEBUG nova.virt.libvirt.driver [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:52:06 np0005532048 neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e[408908]: [NOTICE]   (408912) : New worker (408915) forked
Nov 22 04:52:06 np0005532048 neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e[408908]: [NOTICE]   (408912) : Loading success.
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.229 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.230 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805126.1667562, d986b43b-ea74-42e0-903b-eef7a997e4ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.230 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.250 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.253 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805126.1744428, d986b43b-ea74-42e0-903b-eef7a997e4ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.253 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.270 253665 INFO nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Took 7.83 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.271 253665 DEBUG nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.273 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.282 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.325 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.350 253665 INFO nova.compute.manager [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Took 8.86 seconds to build instance.#033[00m
Nov 22 04:52:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:52:06 np0005532048 nova_compute[253661]: 2025-11-22 09:52:06.369 253665 DEBUG oslo_concurrency.lockutils [None req-b91f2a8d-1d71-4c64-a4f1-4fa60995f410 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:52:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:06.476+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]: {
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:        "osd_id": 1,
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:        "type": "bluestore"
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:    },
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:        "osd_id": 0,
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:        "type": "bluestore"
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:    },
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:        "osd_id": 2,
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:        "type": "bluestore"
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]:    }
Nov 22 04:52:06 np0005532048 loving_heyrovsky[408784]: }
Nov 22 04:52:06 np0005532048 systemd[1]: libpod-31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34.scope: Deactivated successfully.
Nov 22 04:52:06 np0005532048 systemd[1]: libpod-31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34.scope: Consumed 1.068s CPU time.
Nov 22 04:52:06 np0005532048 podman[408752]: 2025-11-22 09:52:06.569253061 +0000 UTC m=+1.297419622 container died 31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 04:52:06 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4f131c71116df6069d5841017c100d3760e998dff149a91f5acdea4d85d29a90-merged.mount: Deactivated successfully.
Nov 22 04:52:06 np0005532048 podman[408752]: 2025-11-22 09:52:06.688879955 +0000 UTC m=+1.417046516 container remove 31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 04:52:06 np0005532048 systemd[1]: libpod-conmon-31f6c7e665df8ff6ee160b944db33b3ddbe409e18f845952c34c80ea3a52af34.scope: Deactivated successfully.
Nov 22 04:52:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:52:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:52:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:52:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:52:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 14a0f2a2-1e58-4472-8e10-cb832dfeca9a does not exist
Nov 22 04:52:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 234ce324-3e16-4b1e-bd62-9036c637b423 does not exist
Nov 22 04:52:07 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:52:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:52:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:07.515+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:07 np0005532048 nova_compute[253661]: 2025-11-22 09:52:07.614 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:07 np0005532048 nova_compute[253661]: 2025-11-22 09:52:07.632 253665 DEBUG nova.compute.manager [req-3bb24430-5059-4c84-b79d-d8c89ae653df req-03fb92cc-fb76-4142-9fa8-c15712d69c8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:52:07 np0005532048 nova_compute[253661]: 2025-11-22 09:52:07.632 253665 DEBUG oslo_concurrency.lockutils [req-3bb24430-5059-4c84-b79d-d8c89ae653df req-03fb92cc-fb76-4142-9fa8-c15712d69c8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:52:07 np0005532048 nova_compute[253661]: 2025-11-22 09:52:07.632 253665 DEBUG oslo_concurrency.lockutils [req-3bb24430-5059-4c84-b79d-d8c89ae653df req-03fb92cc-fb76-4142-9fa8-c15712d69c8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:52:07 np0005532048 nova_compute[253661]: 2025-11-22 09:52:07.632 253665 DEBUG oslo_concurrency.lockutils [req-3bb24430-5059-4c84-b79d-d8c89ae653df req-03fb92cc-fb76-4142-9fa8-c15712d69c8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:52:07 np0005532048 nova_compute[253661]: 2025-11-22 09:52:07.633 253665 DEBUG nova.compute.manager [req-3bb24430-5059-4c84-b79d-d8c89ae653df req-03fb92cc-fb76-4142-9fa8-c15712d69c8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] No waiting events found dispatching network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:52:07 np0005532048 nova_compute[253661]: 2025-11-22 09:52:07.633 253665 WARNING nova.compute.manager [req-3bb24430-5059-4c84-b79d-d8c89ae653df req-03fb92cc-fb76-4142-9fa8-c15712d69c8f 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received unexpected event network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:52:07 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 101 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:52:08 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:08 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 101 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 04:52:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:08.524+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:08 np0005532048 nova_compute[253661]: 2025-11-22 09:52:08.833 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:09 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:09.493+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:10 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 04:52:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:10.498+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:11 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:11 np0005532048 podman[409014]: 2025-11-22 09:52:11.35441898 +0000 UTC m=+0.051815875 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:52:11 np0005532048 podman[409015]: 2025-11-22 09:52:11.366227719 +0000 UTC m=+0.061747207 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:52:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:11.475+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:12 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 04:52:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:52:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3369010267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:52:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:52:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3369010267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:52:12 np0005532048 nova_compute[253661]: 2025-11-22 09:52:12.469 253665 DEBUG nova.compute.manager [req-e673eef9-b368-4d2c-968e-e031bf90bd9c req-2f2b58ac-bc54-4db3-acb8-edbb77764d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-changed-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:52:12 np0005532048 nova_compute[253661]: 2025-11-22 09:52:12.470 253665 DEBUG nova.compute.manager [req-e673eef9-b368-4d2c-968e-e031bf90bd9c req-2f2b58ac-bc54-4db3-acb8-edbb77764d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Refreshing instance network info cache due to event network-changed-b10caa5b-0659-423b-9bcf-57a9a1ed30c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:52:12 np0005532048 nova_compute[253661]: 2025-11-22 09:52:12.470 253665 DEBUG oslo_concurrency.lockutils [req-e673eef9-b368-4d2c-968e-e031bf90bd9c req-2f2b58ac-bc54-4db3-acb8-edbb77764d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:52:12 np0005532048 nova_compute[253661]: 2025-11-22 09:52:12.470 253665 DEBUG oslo_concurrency.lockutils [req-e673eef9-b368-4d2c-968e-e031bf90bd9c req-2f2b58ac-bc54-4db3-acb8-edbb77764d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:52:12 np0005532048 nova_compute[253661]: 2025-11-22 09:52:12.470 253665 DEBUG nova.network.neutron [req-e673eef9-b368-4d2c-968e-e031bf90bd9c req-2f2b58ac-bc54-4db3-acb8-edbb77764d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Refreshing network info cache for port b10caa5b-0659-423b-9bcf-57a9a1ed30c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:52:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:12.523+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:12 np0005532048 nova_compute[253661]: 2025-11-22 09:52:12.616 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:12 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 106 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:52:13 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:13 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 106 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:13.530+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:13 np0005532048 nova_compute[253661]: 2025-11-22 09:52:13.835 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:14 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 04:52:14 np0005532048 nova_compute[253661]: 2025-11-22 09:52:14.445 253665 DEBUG nova.network.neutron [req-e673eef9-b368-4d2c-968e-e031bf90bd9c req-2f2b58ac-bc54-4db3-acb8-edbb77764d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Updated VIF entry in instance network info cache for port b10caa5b-0659-423b-9bcf-57a9a1ed30c0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:52:14 np0005532048 nova_compute[253661]: 2025-11-22 09:52:14.446 253665 DEBUG nova.network.neutron [req-e673eef9-b368-4d2c-968e-e031bf90bd9c req-2f2b58ac-bc54-4db3-acb8-edbb77764d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Updating instance_info_cache with network_info: [{"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:52:14 np0005532048 nova_compute[253661]: 2025-11-22 09:52:14.466 253665 DEBUG oslo_concurrency.lockutils [req-e673eef9-b368-4d2c-968e-e031bf90bd9c req-2f2b58ac-bc54-4db3-acb8-edbb77764d9c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:52:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:14.561+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:15 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:15.521+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:52:16 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:16.549+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:17 np0005532048 podman[409053]: 2025-11-22 09:52:17.407729318 +0000 UTC m=+0.093820120 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 04:52:17 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:17.524+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:17 np0005532048 nova_compute[253661]: 2025-11-22 09:52:17.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:17 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 111 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:52:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 1 active+clean+laggy, 304 active+clean; 257 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 93 op/s
Nov 22 04:52:18 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:18 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 111 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:18.570+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:18 np0005532048 nova_compute[253661]: 2025-11-22 09:52:18.839 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:19 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:19.594+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:52:20Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:a9:7e 10.100.0.6
Nov 22 04:52:20 np0005532048 ovn_controller[152872]: 2025-11-22T09:52:20Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:a9:7e 10.100.0.6
Nov 22 04:52:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 1 active+clean+laggy, 304 active+clean; 257 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.4 MiB/s wr, 19 op/s
Nov 22 04:52:20 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:20.616+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:21.585+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:21 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 1 active+clean+laggy, 304 active+clean; 257 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.4 MiB/s wr, 19 op/s
Nov 22 04:52:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:22.609+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:22 np0005532048 nova_compute[253661]: 2025-11-22 09:52:22.621 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:22 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:22 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 116 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:52:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:52:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:52:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:52:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:52:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:52:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:52:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:23.601+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:23 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:23 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 116 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:23 np0005532048 nova_compute[253661]: 2025-11-22 09:52:23.841 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 04:52:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:24.624+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:24 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:25.660+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:25 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 04:52:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:26.617+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:26 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:26 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:27.582+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:27 np0005532048 nova_compute[253661]: 2025-11-22 09:52:27.652 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:27 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 121 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:52:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:27.995 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:52:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:27.996 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:52:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:27.997 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:52:28 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:28 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 121 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 04:52:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:28.594+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:28 np0005532048 nova_compute[253661]: 2025-11-22 09:52:28.846 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:29 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:29.634+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:30 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 740 KiB/s wr, 44 op/s
Nov 22 04:52:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:30.675+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:31 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:31.699+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 740 KiB/s wr, 44 op/s
Nov 22 04:52:32 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:32 np0005532048 nova_compute[253661]: 2025-11-22 09:52:32.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:32.717+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:32 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 126 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:52:32 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Nov 22 04:52:32 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:32.775368) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:52:32 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Nov 22 04:52:32 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805152775441, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2008, "num_deletes": 258, "total_data_size": 2740122, "memory_usage": 2780040, "flush_reason": "Manual Compaction"}
Nov 22 04:52:32 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Nov 22 04:52:32 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805152975506, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 2681387, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54661, "largest_seqno": 56668, "table_properties": {"data_size": 2672672, "index_size": 5080, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20786, "raw_average_key_size": 20, "raw_value_size": 2653984, "raw_average_value_size": 2651, "num_data_blocks": 224, "num_entries": 1001, "num_filter_entries": 1001, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763804974, "oldest_key_time": 1763804974, "file_creation_time": 1763805152, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:52:32 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 200195 microseconds, and 7417 cpu microseconds.
Nov 22 04:52:32 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:32.975556) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 2681387 bytes OK
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:32.975592) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.224424) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.224478) EVENT_LOG_v1 {"time_micros": 1763805153224467, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.224503) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 2731364, prev total WAL file size 2731364, number of live WAL files 2.
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.225704) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323537' seq:72057594037927935, type:22 .. '6C6F676D0032353131' seq:0, type:0; will stop at (end)
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(2618KB)], [128(8326KB)]
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805153225778, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11207850, "oldest_snapshot_seqno": -1}
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7903 keys, 11080519 bytes, temperature: kUnknown
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805153445987, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 11080519, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11027560, "index_size": 32110, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19781, "raw_key_size": 206083, "raw_average_key_size": 26, "raw_value_size": 10886274, "raw_average_value_size": 1377, "num_data_blocks": 1257, "num_entries": 7903, "num_filter_entries": 7903, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805153, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.446543) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11080519 bytes
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.590486) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 50.9 rd, 50.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 8.1 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(8.3) write-amplify(4.1) OK, records in: 8435, records dropped: 532 output_compression: NoCompression
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.590526) EVENT_LOG_v1 {"time_micros": 1763805153590511, "job": 78, "event": "compaction_finished", "compaction_time_micros": 220373, "compaction_time_cpu_micros": 32527, "output_level": 6, "num_output_files": 1, "total_output_size": 11080519, "num_input_records": 8435, "num_output_records": 7903, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805153591270, "job": 78, "event": "table_file_deletion", "file_number": 130}
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805153593037, "job": 78, "event": "table_file_deletion", "file_number": 128}
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.225549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.593145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.593154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.593157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.593161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:33.593165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:33 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 126 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:33.728+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:33 np0005532048 nova_compute[253661]: 2025-11-22 09:52:33.849 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 745 KiB/s wr, 45 op/s
Nov 22 04:52:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:34.692+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:34 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:35 np0005532048 nova_compute[253661]: 2025-11-22 09:52:35.259 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:52:35 np0005532048 nova_compute[253661]: 2025-11-22 09:52:35.260 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:52:35 np0005532048 nova_compute[253661]: 2025-11-22 09:52:35.277 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:52:35 np0005532048 nova_compute[253661]: 2025-11-22 09:52:35.365 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:52:35 np0005532048 nova_compute[253661]: 2025-11-22 09:52:35.365 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:52:35 np0005532048 nova_compute[253661]: 2025-11-22 09:52:35.377 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:52:35 np0005532048 nova_compute[253661]: 2025-11-22 09:52:35.377 253665 INFO nova.compute.claims [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:52:35 np0005532048 nova_compute[253661]: 2025-11-22 09:52:35.530 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:52:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:35.713+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:35 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:52:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4283250128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.019 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.028 253665 DEBUG nova.compute.provider_tree [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.051 253665 DEBUG nova.scheduler.client.report [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.082 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.083 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.145 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.145 253665 DEBUG nova.network.neutron [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.167 253665 INFO nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.187 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.272 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.273 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.274 253665 INFO nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Creating image(s)#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.294 253665 DEBUG nova.storage.rbd_utils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.316 253665 DEBUG nova.storage.rbd_utils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.339 253665 DEBUG nova.storage.rbd_utils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.343 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:52:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.414 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.415 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.416 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.416 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.435 253665 DEBUG nova.storage.rbd_utils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:52:36 np0005532048 nova_compute[253661]: 2025-11-22 09:52:36.439 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:52:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:36.718+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:36 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:36 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:37 np0005532048 nova_compute[253661]: 2025-11-22 09:52:37.005 253665 DEBUG nova.policy [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '175451870b324a779f0096d0d5c2a4c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aa4944a176f4fd4b020666526e48e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:52:37 np0005532048 nova_compute[253661]: 2025-11-22 09:52:37.240 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.801s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:52:37 np0005532048 nova_compute[253661]: 2025-11-22 09:52:37.327 253665 DEBUG nova.storage.rbd_utils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] resizing rbd image 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:52:37 np0005532048 nova_compute[253661]: 2025-11-22 09:52:37.448 253665 DEBUG nova.objects.instance [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'migration_context' on Instance uuid 566d6c71-a9a6-49f3-9f46-f9d31e71936b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:52:37 np0005532048 nova_compute[253661]: 2025-11-22 09:52:37.461 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:52:37 np0005532048 nova_compute[253661]: 2025-11-22 09:52:37.462 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Ensure instance console log exists: /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:52:37 np0005532048 nova_compute[253661]: 2025-11-22 09:52:37.463 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:52:37 np0005532048 nova_compute[253661]: 2025-11-22 09:52:37.463 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:52:37 np0005532048 nova_compute[253661]: 2025-11-22 09:52:37.464 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:52:37 np0005532048 nova_compute[253661]: 2025-11-22 09:52:37.656 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:37.717+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:37 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 131 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:52:37 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:37 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 131 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:37 np0005532048 nova_compute[253661]: 2025-11-22 09:52:37.946 253665 DEBUG nova.network.neutron [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Successfully created port: 43d41379-bbc3-4a75-be24-8943b05e7a8e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:52:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:52:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:38.725+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:38 np0005532048 nova_compute[253661]: 2025-11-22 09:52:38.727 253665 DEBUG nova.network.neutron [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Successfully updated port: 43d41379-bbc3-4a75-be24-8943b05e7a8e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:52:38 np0005532048 nova_compute[253661]: 2025-11-22 09:52:38.739 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:52:38 np0005532048 nova_compute[253661]: 2025-11-22 09:52:38.740 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquired lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:52:38 np0005532048 nova_compute[253661]: 2025-11-22 09:52:38.740 253665 DEBUG nova.network.neutron [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:52:38 np0005532048 nova_compute[253661]: 2025-11-22 09:52:38.816 253665 DEBUG nova.compute.manager [req-e6aaae1f-f85a-4411-94af-679da4f9f6c4 req-253b6769-66a9-4a85-aa3a-b6f36f27fe70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-changed-43d41379-bbc3-4a75-be24-8943b05e7a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:52:38 np0005532048 nova_compute[253661]: 2025-11-22 09:52:38.817 253665 DEBUG nova.compute.manager [req-e6aaae1f-f85a-4411-94af-679da4f9f6c4 req-253b6769-66a9-4a85-aa3a-b6f36f27fe70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Refreshing instance network info cache due to event network-changed-43d41379-bbc3-4a75-be24-8943b05e7a8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:52:38 np0005532048 nova_compute[253661]: 2025-11-22 09:52:38.817 253665 DEBUG oslo_concurrency.lockutils [req-e6aaae1f-f85a-4411-94af-679da4f9f6c4 req-253b6769-66a9-4a85-aa3a-b6f36f27fe70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:52:38 np0005532048 nova_compute[253661]: 2025-11-22 09:52:38.852 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:38 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:38 np0005532048 nova_compute[253661]: 2025-11-22 09:52:38.891 253665 DEBUG nova.network.neutron [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:52:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:39.708+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:39 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.455 253665 DEBUG nova.network.neutron [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updating instance_info_cache with network_info: [{"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.484 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Releasing lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.485 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Instance network_info: |[{"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.485 253665 DEBUG oslo_concurrency.lockutils [req-e6aaae1f-f85a-4411-94af-679da4f9f6c4 req-253b6769-66a9-4a85-aa3a-b6f36f27fe70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.485 253665 DEBUG nova.network.neutron [req-e6aaae1f-f85a-4411-94af-679da4f9f6c4 req-253b6769-66a9-4a85-aa3a-b6f36f27fe70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Refreshing network info cache for port 43d41379-bbc3-4a75-be24-8943b05e7a8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.488 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Start _get_guest_xml network_info=[{"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.492 253665 WARNING nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.498 253665 DEBUG nova.virt.libvirt.host [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.498 253665 DEBUG nova.virt.libvirt.host [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.506 253665 DEBUG nova.virt.libvirt.host [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.507 253665 DEBUG nova.virt.libvirt.host [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.507 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.508 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.508 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.508 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.509 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.509 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.509 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.509 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.510 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.510 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.510 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.510 253665 DEBUG nova.virt.hardware [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.513 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:52:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:40.678+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:40 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:52:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3983528387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.971 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.991 253665 DEBUG nova.storage.rbd_utils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:52:40 np0005532048 nova_compute[253661]: 2025-11-22 09:52:40.995 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:52:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:52:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/67735884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.414 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.417 253665 DEBUG nova.virt.libvirt.vif [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-262851612',display_name='tempest-TestGettingAddress-server-262851612',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-262851612',id=150,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSb60+tm2QuEWINNrbY2Z4T8shyuVj5ORFNm8DDF4ERr5xc1TwTNbRvBPI6FjbgHdIsPrc+izgcvAijbwtfNpo3Q7dk/qm1p9ZZITdtksKMPJb7o1jSKDouF16N0zCqOA==',key_name='tempest-TestGettingAddress-1774595184',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-drb6i5m1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:52:36Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=566d6c71-a9a6-49f3-9f46-f9d31e71936b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.417 253665 DEBUG nova.network.os_vif_util [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.418 253665 DEBUG nova.network.os_vif_util [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:c9:9b,bridge_name='br-int',has_traffic_filtering=True,id=43d41379-bbc3-4a75-be24-8943b05e7a8e,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43d41379-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.420 253665 DEBUG nova.objects.instance [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 566d6c71-a9a6-49f3-9f46-f9d31e71936b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.432 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  <uuid>566d6c71-a9a6-49f3-9f46-f9d31e71936b</uuid>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  <name>instance-00000096</name>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestGettingAddress-server-262851612</nova:name>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:52:40</nova:creationTime>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:52:41 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:        <nova:user uuid="175451870b324a779f0096d0d5c2a4c0">tempest-TestGettingAddress-647364936-project-member</nova:user>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:        <nova:project uuid="9aa4944a176f4fd4b020666526e48e84">tempest-TestGettingAddress-647364936</nova:project>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:        <nova:port uuid="43d41379-bbc3-4a75-be24-8943b05e7a8e">
Nov 22 04:52:41 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe95:c99b" ipVersion="6"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <entry name="serial">566d6c71-a9a6-49f3-9f46-f9d31e71936b</entry>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <entry name="uuid">566d6c71-a9a6-49f3-9f46-f9d31e71936b</entry>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk">
Nov 22 04:52:41 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:52:41 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk.config">
Nov 22 04:52:41 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:52:41 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:95:c9:9b"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <target dev="tap43d41379-bb"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b/console.log" append="off"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:52:41 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:52:41 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:52:41 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:52:41 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.433 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Preparing to wait for external event network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.434 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.434 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.435 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.435 253665 DEBUG nova.virt.libvirt.vif [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-262851612',display_name='tempest-TestGettingAddress-server-262851612',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-262851612',id=150,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSb60+tm2QuEWINNrbY2Z4T8shyuVj5ORFNm8DDF4ERr5xc1TwTNbRvBPI6FjbgHdIsPrc+izgcvAijbwtfNpo3Q7dk/qm1p9ZZITdtksKMPJb7o1jSKDouF16N0zCqOA==',key_name='tempest-TestGettingAddress-1774595184',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-drb6i5m1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:52:36Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=566d6c71-a9a6-49f3-9f46-f9d31e71936b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.436 253665 DEBUG nova.network.os_vif_util [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.436 253665 DEBUG nova.network.os_vif_util [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:c9:9b,bridge_name='br-int',has_traffic_filtering=True,id=43d41379-bbc3-4a75-be24-8943b05e7a8e,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43d41379-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.437 253665 DEBUG os_vif [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:c9:9b,bridge_name='br-int',has_traffic_filtering=True,id=43d41379-bbc3-4a75-be24-8943b05e7a8e,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43d41379-bb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.437 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.438 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.438 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.441 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.441 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43d41379-bb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.442 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43d41379-bb, col_values=(('external_ids', {'iface-id': '43d41379-bbc3-4a75-be24-8943b05e7a8e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:c9:9b', 'vm-uuid': '566d6c71-a9a6-49f3-9f46-f9d31e71936b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.443 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:41 np0005532048 NetworkManager[48920]: <info>  [1763805161.4444] manager: (tap43d41379-bb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.446 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.449 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.450 253665 INFO os_vif [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:c9:9b,bridge_name='br-int',has_traffic_filtering=True,id=43d41379-bbc3-4a75-be24-8943b05e7a8e,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43d41379-bb')#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.498 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.499 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.499 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] No VIF found with MAC fa:16:3e:95:c9:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.500 253665 INFO nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Using config drive#033[00m
Nov 22 04:52:41 np0005532048 nova_compute[253661]: 2025-11-22 09:52:41.522 253665 DEBUG nova.storage.rbd_utils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:52:41 np0005532048 podman[409336]: 2025-11-22 09:52:41.533504711 +0000 UTC m=+0.047639769 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 04:52:41 np0005532048 podman[409338]: 2025-11-22 09:52:41.571290629 +0000 UTC m=+0.084017311 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 22 04:52:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:41.638+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:41 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:42 np0005532048 nova_compute[253661]: 2025-11-22 09:52:42.129 253665 INFO nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Creating config drive at /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b/disk.config#033[00m
Nov 22 04:52:42 np0005532048 nova_compute[253661]: 2025-11-22 09:52:42.133 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp57kqn_8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:52:42 np0005532048 nova_compute[253661]: 2025-11-22 09:52:42.273 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp57kqn_8" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:52:42 np0005532048 nova_compute[253661]: 2025-11-22 09:52:42.296 253665 DEBUG nova.storage.rbd_utils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] rbd image 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:52:42 np0005532048 nova_compute[253661]: 2025-11-22 09:52:42.299 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b/disk.config 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:52:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:52:42 np0005532048 nova_compute[253661]: 2025-11-22 09:52:42.409 253665 DEBUG nova.network.neutron [req-e6aaae1f-f85a-4411-94af-679da4f9f6c4 req-253b6769-66a9-4a85-aa3a-b6f36f27fe70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updated VIF entry in instance network info cache for port 43d41379-bbc3-4a75-be24-8943b05e7a8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:52:42 np0005532048 nova_compute[253661]: 2025-11-22 09:52:42.410 253665 DEBUG nova.network.neutron [req-e6aaae1f-f85a-4411-94af-679da4f9f6c4 req-253b6769-66a9-4a85-aa3a-b6f36f27fe70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updating instance_info_cache with network_info: [{"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:52:42 np0005532048 nova_compute[253661]: 2025-11-22 09:52:42.426 253665 DEBUG oslo_concurrency.lockutils [req-e6aaae1f-f85a-4411-94af-679da4f9f6c4 req-253b6769-66a9-4a85-aa3a-b6f36f27fe70 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:52:42 np0005532048 nova_compute[253661]: 2025-11-22 09:52:42.458 253665 DEBUG oslo_concurrency.processutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b/disk.config 566d6c71-a9a6-49f3-9f46-f9d31e71936b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:52:42 np0005532048 nova_compute[253661]: 2025-11-22 09:52:42.459 253665 INFO nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Deleting local config drive /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b/disk.config because it was imported into RBD.#033[00m
Nov 22 04:52:42 np0005532048 kernel: tap43d41379-bb: entered promiscuous mode
Nov 22 04:52:42 np0005532048 NetworkManager[48920]: <info>  [1763805162.5038] manager: (tap43d41379-bb): new Tun device (/org/freedesktop/NetworkManager/Devices/662)
Nov 22 04:52:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:52:42Z|01612|binding|INFO|Claiming lport 43d41379-bbc3-4a75-be24-8943b05e7a8e for this chassis.
Nov 22 04:52:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:52:42Z|01613|binding|INFO|43d41379-bbc3-4a75-be24-8943b05e7a8e: Claiming fa:16:3e:95:c9:9b 10.100.0.12 2001:db8::f816:3eff:fe95:c99b
Nov 22 04:52:42 np0005532048 nova_compute[253661]: 2025-11-22 09:52:42.559 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.571 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:c9:9b 10.100.0.12 2001:db8::f816:3eff:fe95:c99b'], port_security=['fa:16:3e:95:c9:9b 10.100.0.12 2001:db8::f816:3eff:fe95:c99b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe95:c99b/64', 'neutron:device_id': '566d6c71-a9a6-49f3-9f46-f9d31e71936b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0beff345-fc2f-4a68-a4a7-1d4c0960ae91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63d2b202-7cdb-46d8-a16a-63cc2d81bd37, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=43d41379-bbc3-4a75-be24-8943b05e7a8e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:52:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.572 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 43d41379-bbc3-4a75-be24-8943b05e7a8e in datapath 58b95ca9-260c-49de-9bd2-c16568d51c7e bound to our chassis#033[00m
Nov 22 04:52:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:52:42Z|01614|binding|INFO|Setting lport 43d41379-bbc3-4a75-be24-8943b05e7a8e up in Southbound
Nov 22 04:52:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.573 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58b95ca9-260c-49de-9bd2-c16568d51c7e#033[00m
Nov 22 04:52:42 np0005532048 ovn_controller[152872]: 2025-11-22T09:52:42Z|01615|binding|INFO|Setting lport 43d41379-bbc3-4a75-be24-8943b05e7a8e ovn-installed in OVS
Nov 22 04:52:42 np0005532048 nova_compute[253661]: 2025-11-22 09:52:42.574 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:42 np0005532048 nova_compute[253661]: 2025-11-22 09:52:42.576 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:42 np0005532048 systemd-machined[215941]: New machine qemu-182-instance-00000096.
Nov 22 04:52:42 np0005532048 systemd-udevd[409444]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:52:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.590 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[477f7fa3-6863-4a8e-b0fc-2604963e308e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:52:42 np0005532048 NetworkManager[48920]: <info>  [1763805162.5987] device (tap43d41379-bb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:52:42 np0005532048 NetworkManager[48920]: <info>  [1763805162.5997] device (tap43d41379-bb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:52:42 np0005532048 systemd[1]: Started Virtual Machine qemu-182-instance-00000096.
Nov 22 04:52:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:42.604+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.618 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[0237e3f1-d00c-42ec-babc-6b70ac69fa95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:52:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.620 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[216ebe6f-f201-437f-9362-89a0f4538066]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:52:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.648 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[4370038f-0aa1-4392-9e65-107e4a42453a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:52:42 np0005532048 nova_compute[253661]: 2025-11-22 09:52:42.659 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.666 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[dcad49f0-d825-4794-b596-930edff52475]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58b95ca9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:0e:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1586, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1586, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 462], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 797624, 'reachable_time': 32921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409456, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:52:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.681 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[77382257-33f2-435e-8655-bc6c625ea204]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap58b95ca9-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 797636, 'tstamp': 797636}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409457, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap58b95ca9-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 797639, 'tstamp': 797639}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409457, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:52:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.683 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58b95ca9-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:52:42 np0005532048 nova_compute[253661]: 2025-11-22 09:52:42.684 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:42 np0005532048 nova_compute[253661]: 2025-11-22 09:52:42.685 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.686 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58b95ca9-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:52:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.687 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:52:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.687 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58b95ca9-20, col_values=(('external_ids', {'iface-id': '318882b5-a140-4600-8260-0040c058e797'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:52:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:52:42.688 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:52:42 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 136 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:52:43 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:43 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 136 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.061 253665 DEBUG nova.compute.manager [req-f0648304-4349-4392-80d3-c5dfc2a8e2f8 req-fa6a4d70-304c-4cba-a1e9-1b1e12598536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.061 253665 DEBUG oslo_concurrency.lockutils [req-f0648304-4349-4392-80d3-c5dfc2a8e2f8 req-fa6a4d70-304c-4cba-a1e9-1b1e12598536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.062 253665 DEBUG oslo_concurrency.lockutils [req-f0648304-4349-4392-80d3-c5dfc2a8e2f8 req-fa6a4d70-304c-4cba-a1e9-1b1e12598536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.062 253665 DEBUG oslo_concurrency.lockutils [req-f0648304-4349-4392-80d3-c5dfc2a8e2f8 req-fa6a4d70-304c-4cba-a1e9-1b1e12598536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.062 253665 DEBUG nova.compute.manager [req-f0648304-4349-4392-80d3-c5dfc2a8e2f8 req-fa6a4d70-304c-4cba-a1e9-1b1e12598536 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Processing event network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.143 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805163.1428714, 566d6c71-a9a6-49f3-9f46-f9d31e71936b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.143 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] VM Started (Lifecycle Event)#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.146 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.150 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.154 253665 INFO nova.virt.libvirt.driver [-] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Instance spawned successfully.#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.154 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.161 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.165 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.174 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.174 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.175 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.175 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.175 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.176 253665 DEBUG nova.virt.libvirt.driver [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.183 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.184 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805163.1430955, 566d6c71-a9a6-49f3-9f46-f9d31e71936b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.184 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] VM Paused (Lifecycle Event)#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.203 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.205 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805163.14976, 566d6c71-a9a6-49f3-9f46-f9d31e71936b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.206 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.221 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.224 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.259 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.284 253665 INFO nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Took 7.01 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.284 253665 DEBUG nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.354 253665 INFO nova.compute.manager [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Took 8.02 seconds to build instance.#033[00m
Nov 22 04:52:43 np0005532048 nova_compute[253661]: 2025-11-22 09:52:43.370 253665 DEBUG oslo_concurrency.lockutils [None req-90275a32-5615-4be9-acbb-61c8ec3ee277 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:52:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:43.630+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:44 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 22 04:52:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:44.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:45 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:45.591+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:45 np0005532048 nova_compute[253661]: 2025-11-22 09:52:45.701 253665 DEBUG nova.compute.manager [req-45dcaf9a-1f69-435a-b004-438cd4468316 req-32273fe5-8a23-4b48-9315-bed49c8bbb16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:52:45 np0005532048 nova_compute[253661]: 2025-11-22 09:52:45.701 253665 DEBUG oslo_concurrency.lockutils [req-45dcaf9a-1f69-435a-b004-438cd4468316 req-32273fe5-8a23-4b48-9315-bed49c8bbb16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:52:45 np0005532048 nova_compute[253661]: 2025-11-22 09:52:45.701 253665 DEBUG oslo_concurrency.lockutils [req-45dcaf9a-1f69-435a-b004-438cd4468316 req-32273fe5-8a23-4b48-9315-bed49c8bbb16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:52:45 np0005532048 nova_compute[253661]: 2025-11-22 09:52:45.701 253665 DEBUG oslo_concurrency.lockutils [req-45dcaf9a-1f69-435a-b004-438cd4468316 req-32273fe5-8a23-4b48-9315-bed49c8bbb16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:52:45 np0005532048 nova_compute[253661]: 2025-11-22 09:52:45.702 253665 DEBUG nova.compute.manager [req-45dcaf9a-1f69-435a-b004-438cd4468316 req-32273fe5-8a23-4b48-9315-bed49c8bbb16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] No waiting events found dispatching network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:52:45 np0005532048 nova_compute[253661]: 2025-11-22 09:52:45.702 253665 WARNING nova.compute.manager [req-45dcaf9a-1f69-435a-b004-438cd4468316 req-32273fe5-8a23-4b48-9315-bed49c8bbb16 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received unexpected event network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e for instance with vm_state active and task_state None.
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.202110) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805166202139, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 403, "num_deletes": 251, "total_data_size": 206532, "memory_usage": 215608, "flush_reason": "Manual Compaction"}
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805166219149, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 203430, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56669, "largest_seqno": 57071, "table_properties": {"data_size": 201095, "index_size": 434, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6179, "raw_average_key_size": 19, "raw_value_size": 196345, "raw_average_value_size": 606, "num_data_blocks": 19, "num_entries": 324, "num_filter_entries": 324, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805153, "oldest_key_time": 1763805153, "file_creation_time": 1763805166, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 17090 microseconds, and 1479 cpu microseconds.
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.219195) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 203430 bytes OK
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.219218) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.223864) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.223898) EVENT_LOG_v1 {"time_micros": 1763805166223890, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.223920) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 203940, prev total WAL file size 203940, number of live WAL files 2.
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.224426) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(198KB)], [131(10MB)]
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805166224491, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 11283949, "oldest_snapshot_seqno": -1}
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7718 keys, 9601038 bytes, temperature: kUnknown
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805166316487, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 9601038, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9550689, "index_size": 29948, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19333, "raw_key_size": 203043, "raw_average_key_size": 26, "raw_value_size": 9414158, "raw_average_value_size": 1219, "num_data_blocks": 1157, "num_entries": 7718, "num_filter_entries": 7718, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805166, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.316697) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 9601038 bytes
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.321409) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.6 rd, 104.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.6 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(102.7) write-amplify(47.2) OK, records in: 8227, records dropped: 509 output_compression: NoCompression
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.321435) EVENT_LOG_v1 {"time_micros": 1763805166321425, "job": 80, "event": "compaction_finished", "compaction_time_micros": 92059, "compaction_time_cpu_micros": 24556, "output_level": 6, "num_output_files": 1, "total_output_size": 9601038, "num_input_records": 8227, "num_output_records": 7718, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805166321598, "job": 80, "event": "table_file_deletion", "file_number": 133}
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805166323235, "job": 80, "event": "table_file_deletion", "file_number": 131}
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.224290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.323262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.323266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.323268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.323269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:52:46 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:52:46.323270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:52:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 22 04:52:46 np0005532048 nova_compute[253661]: 2025-11-22 09:52:46.445 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:52:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:46.602+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:47 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:47.604+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:47 np0005532048 nova_compute[253661]: 2025-11-22 09:52:47.660 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:52:47 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 141 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:52:48 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:48 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 141 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 04:52:48 np0005532048 podman[409501]: 2025-11-22 09:52:48.422659936 +0000 UTC m=+0.108215996 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:52:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:48.638+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:49 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:49.595+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:49 np0005532048 nova_compute[253661]: 2025-11-22 09:52:49.900 253665 DEBUG nova.compute.manager [req-10d81ed1-f2ed-46cd-bd90-d8277dcb8829 req-ce4fe925-6e91-416c-82e9-8fe7ce9be1e6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-changed-43d41379-bbc3-4a75-be24-8943b05e7a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:52:49 np0005532048 nova_compute[253661]: 2025-11-22 09:52:49.900 253665 DEBUG nova.compute.manager [req-10d81ed1-f2ed-46cd-bd90-d8277dcb8829 req-ce4fe925-6e91-416c-82e9-8fe7ce9be1e6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Refreshing instance network info cache due to event network-changed-43d41379-bbc3-4a75-be24-8943b05e7a8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:52:49 np0005532048 nova_compute[253661]: 2025-11-22 09:52:49.900 253665 DEBUG oslo_concurrency.lockutils [req-10d81ed1-f2ed-46cd-bd90-d8277dcb8829 req-ce4fe925-6e91-416c-82e9-8fe7ce9be1e6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:52:49 np0005532048 nova_compute[253661]: 2025-11-22 09:52:49.901 253665 DEBUG oslo_concurrency.lockutils [req-10d81ed1-f2ed-46cd-bd90-d8277dcb8829 req-ce4fe925-6e91-416c-82e9-8fe7ce9be1e6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:52:49 np0005532048 nova_compute[253661]: 2025-11-22 09:52:49.901 253665 DEBUG nova.network.neutron [req-10d81ed1-f2ed-46cd-bd90-d8277dcb8829 req-ce4fe925-6e91-416c-82e9-8fe7ce9be1e6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Refreshing network info cache for port 43d41379-bbc3-4a75-be24-8943b05e7a8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:52:50 np0005532048 nova_compute[253661]: 2025-11-22 09:52:50.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:52:50 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:52:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:50.597+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:51 np0005532048 nova_compute[253661]: 2025-11-22 09:52:51.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:52:51 np0005532048 nova_compute[253661]: 2025-11-22 09:52:51.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:52:51 np0005532048 nova_compute[253661]: 2025-11-22 09:52:51.248 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:52:51 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:51 np0005532048 nova_compute[253661]: 2025-11-22 09:52:51.447 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:52:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:51.618+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:52:52
Nov 22 04:52:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:52:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:52:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'images', 'volumes', 'default.rgw.control', '.rgw.root', 'vms', 'default.rgw.meta']
Nov 22 04:52:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:52:52 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:52:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:52.581+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:52 np0005532048 nova_compute[253661]: 2025-11-22 09:52:52.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:52:52 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 146 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:52:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:52:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:52:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:52:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:52:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:52:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:52:53 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:53 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 146 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:53.619+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:54 np0005532048 nova_compute[253661]: 2025-11-22 09:52:54.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:52:54 np0005532048 nova_compute[253661]: 2025-11-22 09:52:54.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:52:54 np0005532048 nova_compute[253661]: 2025-11-22 09:52:54.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:52:54 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 22 04:52:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:54.582+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:55 np0005532048 nova_compute[253661]: 2025-11-22 09:52:55.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:52:55 np0005532048 nova_compute[253661]: 2025-11-22 09:52:55.275 253665 DEBUG nova.network.neutron [req-10d81ed1-f2ed-46cd-bd90-d8277dcb8829 req-ce4fe925-6e91-416c-82e9-8fe7ce9be1e6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updated VIF entry in instance network info cache for port 43d41379-bbc3-4a75-be24-8943b05e7a8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:52:55 np0005532048 nova_compute[253661]: 2025-11-22 09:52:55.275 253665 DEBUG nova.network.neutron [req-10d81ed1-f2ed-46cd-bd90-d8277dcb8829 req-ce4fe925-6e91-416c-82e9-8fe7ce9be1e6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updating instance_info_cache with network_info: [{"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:52:55 np0005532048 nova_compute[253661]: 2025-11-22 09:52:55.297 253665 DEBUG oslo_concurrency.lockutils [req-10d81ed1-f2ed-46cd-bd90-d8277dcb8829 req-ce4fe925-6e91-416c-82e9-8fe7ce9be1e6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:52:55 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:52:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:52:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:52:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:52:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:52:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 1 active+clean+laggy, 304 active+clean; 322 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Nov 22 04:52:56 np0005532048 nova_compute[253661]: 2025-11-22 09:52:56.449 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:52:56Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:95:c9:9b 10.100.0.12
Nov 22 04:52:56 np0005532048 ovn_controller[152872]: 2025-11-22T09:52:56Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:95:c9:9b 10.100.0.12
Nov 22 04:52:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:56.592+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:52:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:52:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:52:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:52:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:52:57 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:57.544+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:57 np0005532048 nova_compute[253661]: 2025-11-22 09:52:57.665 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:52:57 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 151 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:57 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:52:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 22 04:52:58 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:58 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 151 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:52:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:58.572+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:59 np0005532048 nova_compute[253661]: 2025-11-22 09:52:59.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:52:59 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:52:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:52:59.541+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:52:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.254 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:53:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 04:53:00 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:00.516+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:53:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3470028346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.717 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.804 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.805 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.808 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.809 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.812 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.812 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.964 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.965 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2943MB free_disk=59.8516960144043GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.966 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:00 np0005532048 nova_compute[253661]: 2025-11-22 09:53:00.966 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:01 np0005532048 nova_compute[253661]: 2025-11-22 09:53:01.045 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 027bdffc-9e8e-4a33-9b06-844890912dc9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:53:01 np0005532048 nova_compute[253661]: 2025-11-22 09:53:01.046 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance d986b43b-ea74-42e0-903b-eef7a997e4ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:53:01 np0005532048 nova_compute[253661]: 2025-11-22 09:53:01.046 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 566d6c71-a9a6-49f3-9f46-f9d31e71936b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:53:01 np0005532048 nova_compute[253661]: 2025-11-22 09:53:01.046 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:53:01 np0005532048 nova_compute[253661]: 2025-11-22 09:53:01.046 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:53:01 np0005532048 nova_compute[253661]: 2025-11-22 09:53:01.124 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:53:01 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:01 np0005532048 nova_compute[253661]: 2025-11-22 09:53:01.451 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:01.513+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:53:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3356258732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:53:01 np0005532048 nova_compute[253661]: 2025-11-22 09:53:01.563 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:53:01 np0005532048 nova_compute[253661]: 2025-11-22 09:53:01.569 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:53:01 np0005532048 nova_compute[253661]: 2025-11-22 09:53:01.589 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:53:01 np0005532048 nova_compute[253661]: 2025-11-22 09:53:01.760 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:53:01 np0005532048 nova_compute[253661]: 2025-11-22 09:53:01.760 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:53:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 04:53:02 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:02.562+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:02 np0005532048 nova_compute[253661]: 2025-11-22 09:53:02.667 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:02 np0005532048 nova_compute[253661]: 2025-11-22 09:53:02.761 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:53:02 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 156 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002276865293514358 of space, bias 1.0, pg target 0.6830595880543074 quantized to 32 (current 32)
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:53:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:53:03 np0005532048 nova_compute[253661]: 2025-11-22 09:53:03.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:53:03 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:03 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 156 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:03.544+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 22 04:53:04 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:04.581+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:05 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 1 active+clean+laggy, 304 active+clean; 355 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 04:53:06 np0005532048 nova_compute[253661]: 2025-11-22 09:53:06.454 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:06.556+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.366 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.368 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.369 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.567 253665 DEBUG nova.compute.manager [req-d49cef4a-779b-4413-aa53-72e642a09933 req-0036225a-1903-4554-9810-7457dc175688 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-changed-43d41379-bbc3-4a75-be24-8943b05e7a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.567 253665 DEBUG nova.compute.manager [req-d49cef4a-779b-4413-aa53-72e642a09933 req-0036225a-1903-4554-9810-7457dc175688 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Refreshing instance network info cache due to event network-changed-43d41379-bbc3-4a75-be24-8943b05e7a8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.568 253665 DEBUG oslo_concurrency.lockutils [req-d49cef4a-779b-4413-aa53-72e642a09933 req-0036225a-1903-4554-9810-7457dc175688 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.568 253665 DEBUG oslo_concurrency.lockutils [req-d49cef4a-779b-4413-aa53-72e642a09933 req-0036225a-1903-4554-9810-7457dc175688 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.568 253665 DEBUG nova.network.neutron [req-d49cef4a-779b-4413-aa53-72e642a09933 req-0036225a-1903-4554-9810-7457dc175688 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Refreshing network info cache for port 43d41379-bbc3-4a75-be24-8943b05e7a8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:53:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:07.582+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:07 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.633 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.633 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.634 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.634 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.634 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.635 253665 INFO nova.compute.manager [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Terminating instance#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.636 253665 DEBUG nova.compute.manager [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.669 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:07 np0005532048 kernel: tap43d41379-bb (unregistering): left promiscuous mode
Nov 22 04:53:07 np0005532048 NetworkManager[48920]: <info>  [1763805187.6931] device (tap43d41379-bb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:53:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:07Z|01616|binding|INFO|Releasing lport 43d41379-bbc3-4a75-be24-8943b05e7a8e from this chassis (sb_readonly=0)
Nov 22 04:53:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:07Z|01617|binding|INFO|Setting lport 43d41379-bbc3-4a75-be24-8943b05e7a8e down in Southbound
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.702 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:07Z|01618|binding|INFO|Removing iface tap43d41379-bb ovn-installed in OVS
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.705 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:53:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:53:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:53:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:53:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.709 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:c9:9b 10.100.0.12 2001:db8::f816:3eff:fe95:c99b'], port_security=['fa:16:3e:95:c9:9b 10.100.0.12 2001:db8::f816:3eff:fe95:c99b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe95:c99b/64', 'neutron:device_id': '566d6c71-a9a6-49f3-9f46-f9d31e71936b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0beff345-fc2f-4a68-a4a7-1d4c0960ae91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63d2b202-7cdb-46d8-a16a-63cc2d81bd37, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=43d41379-bbc3-4a75-be24-8943b05e7a8e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:53:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.710 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 43d41379-bbc3-4a75-be24-8943b05e7a8e in datapath 58b95ca9-260c-49de-9bd2-c16568d51c7e unbound from our chassis#033[00m
Nov 22 04:53:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.712 162862 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58b95ca9-260c-49de-9bd2-c16568d51c7e#033[00m
Nov 22 04:53:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.721 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:53:07 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 7686f124-bcfe-45ff-94c6-57b59d5cdedb does not exist
Nov 22 04:53:07 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d671c9ee-ca13-4133-9509-35bc56bd136e does not exist
Nov 22 04:53:07 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev b4c0fd4e-e6da-4545-b345-ffabb46d5309 does not exist
Nov 22 04:53:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.728 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[44072d92-f1e4-414c-96a1-fd7a88ddbaf0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:53:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:53:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:53:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:53:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:53:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:53:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:53:07 np0005532048 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000096.scope: Deactivated successfully.
Nov 22 04:53:07 np0005532048 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000096.scope: Consumed 14.219s CPU time.
Nov 22 04:53:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.759 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[41fd2978-2bbc-4bb3-811a-1c19d3a4ee3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:53:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.763 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0dfbc3-57b6-42fa-8590-4b9daeadcfcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:53:07 np0005532048 systemd-machined[215941]: Machine qemu-182-instance-00000096 terminated.
Nov 22 04:53:07 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 161 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:53:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.793 271173 DEBUG oslo.privsep.daemon [-] privsep: reply[97f1876e-8f2e-4447-8bf4-f4c345afbdec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:53:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.812 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[26b5e6cd-9904-4222-b900-d359c6d4b860]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58b95ca9-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:0e:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 7, 'rx_bytes': 2640, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 7, 'rx_bytes': 2640, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 462], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 797624, 'reachable_time': 32921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409740, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:53:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.827 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[36f92329-494d-493b-9148-40ac0f79b56b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap58b95ca9-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 797636, 'tstamp': 797636}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409753, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap58b95ca9-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 797639, 'tstamp': 797639}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409753, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:53:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.829 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58b95ca9-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.831 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.835 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.835 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58b95ca9-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:53:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.835 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:53:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.836 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58b95ca9-20, col_values=(('external_ids', {'iface-id': '318882b5-a140-4600-8260-0040c058e797'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:53:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:07.836 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.868 253665 INFO nova.virt.libvirt.driver [-] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Instance destroyed successfully.#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.869 253665 DEBUG nova.objects.instance [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid 566d6c71-a9a6-49f3-9f46-f9d31e71936b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.877 253665 DEBUG nova.virt.libvirt.vif [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:52:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-262851612',display_name='tempest-TestGettingAddress-server-262851612',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-262851612',id=150,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSb60+tm2QuEWINNrbY2Z4T8shyuVj5ORFNm8DDF4ERr5xc1TwTNbRvBPI6FjbgHdIsPrc+izgcvAijbwtfNpo3Q7dk/qm1p9ZZITdtksKMPJb7o1jSKDouF16N0zCqOA==',key_name='tempest-TestGettingAddress-1774595184',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:52:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-drb6i5m1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:52:43Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=566d6c71-a9a6-49f3-9f46-f9d31e71936b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.878 253665 DEBUG nova.network.os_vif_util [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.878 253665 DEBUG nova.network.os_vif_util [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:95:c9:9b,bridge_name='br-int',has_traffic_filtering=True,id=43d41379-bbc3-4a75-be24-8943b05e7a8e,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43d41379-bb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.879 253665 DEBUG os_vif [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:c9:9b,bridge_name='br-int',has_traffic_filtering=True,id=43d41379-bbc3-4a75-be24-8943b05e7a8e,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43d41379-bb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.880 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.880 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43d41379-bb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.884 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:07 np0005532048 nova_compute[253661]: 2025-11-22 09:53:07.886 253665 INFO os_vif [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:c9:9b,bridge_name='br-int',has_traffic_filtering=True,id=43d41379-bbc3-4a75-be24-8943b05e7a8e,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43d41379-bb')#033[00m
Nov 22 04:53:08 np0005532048 nova_compute[253661]: 2025-11-22 09:53:08.271 253665 DEBUG nova.compute.manager [req-fefb420b-e661-4940-8ef3-5293e65b7e4f req-ab56d1e0-9ba7-471b-befe-79a3eff460cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-vif-unplugged-43d41379-bbc3-4a75-be24-8943b05e7a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:53:08 np0005532048 nova_compute[253661]: 2025-11-22 09:53:08.273 253665 DEBUG oslo_concurrency.lockutils [req-fefb420b-e661-4940-8ef3-5293e65b7e4f req-ab56d1e0-9ba7-471b-befe-79a3eff460cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:08 np0005532048 nova_compute[253661]: 2025-11-22 09:53:08.273 253665 DEBUG oslo_concurrency.lockutils [req-fefb420b-e661-4940-8ef3-5293e65b7e4f req-ab56d1e0-9ba7-471b-befe-79a3eff460cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:08 np0005532048 nova_compute[253661]: 2025-11-22 09:53:08.274 253665 DEBUG oslo_concurrency.lockutils [req-fefb420b-e661-4940-8ef3-5293e65b7e4f req-ab56d1e0-9ba7-471b-befe-79a3eff460cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:53:08 np0005532048 nova_compute[253661]: 2025-11-22 09:53:08.275 253665 DEBUG nova.compute.manager [req-fefb420b-e661-4940-8ef3-5293e65b7e4f req-ab56d1e0-9ba7-471b-befe-79a3eff460cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] No waiting events found dispatching network-vif-unplugged-43d41379-bbc3-4a75-be24-8943b05e7a8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:53:08 np0005532048 nova_compute[253661]: 2025-11-22 09:53:08.275 253665 DEBUG nova.compute.manager [req-fefb420b-e661-4940-8ef3-5293e65b7e4f req-ab56d1e0-9ba7-471b-befe-79a3eff460cb 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-vif-unplugged-43d41379-bbc3-4a75-be24-8943b05e7a8e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:53:08 np0005532048 podman[409888]: 2025-11-22 09:53:08.322224734 +0000 UTC m=+0.043498684 container create 40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:53:08 np0005532048 nova_compute[253661]: 2025-11-22 09:53:08.334 253665 INFO nova.virt.libvirt.driver [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Deleting instance files /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b_del#033[00m
Nov 22 04:53:08 np0005532048 nova_compute[253661]: 2025-11-22 09:53:08.334 253665 INFO nova.virt.libvirt.driver [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Deletion of /var/lib/nova/instances/566d6c71-a9a6-49f3-9f46-f9d31e71936b_del complete#033[00m
Nov 22 04:53:08 np0005532048 systemd[1]: Started libpod-conmon-40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9.scope.
Nov 22 04:53:08 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:53:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 1 active+clean+laggy, 304 active+clean; 288 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 386 KiB/s rd, 2.2 MiB/s wr, 81 op/s
Nov 22 04:53:08 np0005532048 podman[409888]: 2025-11-22 09:53:08.302151045 +0000 UTC m=+0.023425005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:53:08 np0005532048 podman[409888]: 2025-11-22 09:53:08.401800722 +0000 UTC m=+0.123074682 container init 40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 04:53:08 np0005532048 podman[409888]: 2025-11-22 09:53:08.408372619 +0000 UTC m=+0.129646569 container start 40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackburn, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:53:08 np0005532048 podman[409888]: 2025-11-22 09:53:08.411672722 +0000 UTC m=+0.132946672 container attach 40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:53:08 np0005532048 elastic_blackburn[409904]: 167 167
Nov 22 04:53:08 np0005532048 systemd[1]: libpod-40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9.scope: Deactivated successfully.
Nov 22 04:53:08 np0005532048 podman[409888]: 2025-11-22 09:53:08.414105604 +0000 UTC m=+0.135379554 container died 40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:53:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay-168a7b75f0a66bddaad31d4eb15df6627186efca7f2eb488a3dcb1bebec7e566-merged.mount: Deactivated successfully.
Nov 22 04:53:08 np0005532048 podman[409888]: 2025-11-22 09:53:08.456446568 +0000 UTC m=+0.177720518 container remove 40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:53:08 np0005532048 systemd[1]: libpod-conmon-40c78d2f661e21f683b20ce4cf336caef1a9267c4562bc657629f2eac0c5fcf9.scope: Deactivated successfully.
Nov 22 04:53:08 np0005532048 nova_compute[253661]: 2025-11-22 09:53:08.508 253665 INFO nova.compute.manager [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:53:08 np0005532048 nova_compute[253661]: 2025-11-22 09:53:08.510 253665 DEBUG oslo.service.loopingcall [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:53:08 np0005532048 nova_compute[253661]: 2025-11-22 09:53:08.510 253665 DEBUG nova.compute.manager [-] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:53:08 np0005532048 nova_compute[253661]: 2025-11-22 09:53:08.510 253665 DEBUG nova.network.neutron [-] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:53:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:08.546+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:08 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:53:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:53:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:53:08 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 161 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:08 np0005532048 podman[409928]: 2025-11-22 09:53:08.670744692 +0000 UTC m=+0.053871146 container create d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:53:08 np0005532048 systemd[1]: Started libpod-conmon-d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561.scope.
Nov 22 04:53:08 np0005532048 podman[409928]: 2025-11-22 09:53:08.648098548 +0000 UTC m=+0.031225032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:53:08 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:53:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a467f9a23bd12dad189cafba18539d81a57bfcb05324f4980d07343599df7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:53:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a467f9a23bd12dad189cafba18539d81a57bfcb05324f4980d07343599df7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:53:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a467f9a23bd12dad189cafba18539d81a57bfcb05324f4980d07343599df7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:53:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a467f9a23bd12dad189cafba18539d81a57bfcb05324f4980d07343599df7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:53:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a467f9a23bd12dad189cafba18539d81a57bfcb05324f4980d07343599df7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:53:08 np0005532048 podman[409928]: 2025-11-22 09:53:08.773716894 +0000 UTC m=+0.156843368 container init d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:53:08 np0005532048 podman[409928]: 2025-11-22 09:53:08.781363247 +0000 UTC m=+0.164489701 container start d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wilson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 04:53:08 np0005532048 podman[409928]: 2025-11-22 09:53:08.785277457 +0000 UTC m=+0.168403941 container attach d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:53:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:09.547+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:09 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:09 np0005532048 bold_wilson[409945]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:53:09 np0005532048 bold_wilson[409945]: --> relative data size: 1.0
Nov 22 04:53:09 np0005532048 bold_wilson[409945]: --> All data devices are unavailable
Nov 22 04:53:09 np0005532048 systemd[1]: libpod-d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561.scope: Deactivated successfully.
Nov 22 04:53:09 np0005532048 podman[409928]: 2025-11-22 09:53:09.825705491 +0000 UTC m=+1.208831945 container died d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:53:09 np0005532048 systemd[1]: var-lib-containers-storage-overlay-86a467f9a23bd12dad189cafba18539d81a57bfcb05324f4980d07343599df7f-merged.mount: Deactivated successfully.
Nov 22 04:53:09 np0005532048 podman[409928]: 2025-11-22 09:53:09.897675486 +0000 UTC m=+1.280801950 container remove d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wilson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:53:09 np0005532048 systemd[1]: libpod-conmon-d90e13831a6e05f69054214a47cc0513905552313a806321d3ea9fc007966561.scope: Deactivated successfully.
Nov 22 04:53:09 np0005532048 nova_compute[253661]: 2025-11-22 09:53:09.917 253665 DEBUG nova.network.neutron [req-d49cef4a-779b-4413-aa53-72e642a09933 req-0036225a-1903-4554-9810-7457dc175688 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updated VIF entry in instance network info cache for port 43d41379-bbc3-4a75-be24-8943b05e7a8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:53:09 np0005532048 nova_compute[253661]: 2025-11-22 09:53:09.919 253665 DEBUG nova.network.neutron [req-d49cef4a-779b-4413-aa53-72e642a09933 req-0036225a-1903-4554-9810-7457dc175688 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updating instance_info_cache with network_info: [{"id": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "address": "fa:16:3e:95:c9:9b", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c99b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43d41379-bb", "ovs_interfaceid": "43d41379-bbc3-4a75-be24-8943b05e7a8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:53:09 np0005532048 nova_compute[253661]: 2025-11-22 09:53:09.922 253665 DEBUG nova.network.neutron [-] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:53:09 np0005532048 nova_compute[253661]: 2025-11-22 09:53:09.936 253665 INFO nova.compute.manager [-] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Took 1.43 seconds to deallocate network for instance.
Nov 22 04:53:09 np0005532048 nova_compute[253661]: 2025-11-22 09:53:09.941 253665 DEBUG oslo_concurrency.lockutils [req-d49cef4a-779b-4413-aa53-72e642a09933 req-0036225a-1903-4554-9810-7457dc175688 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-566d6c71-a9a6-49f3-9f46-f9d31e71936b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:53:09 np0005532048 nova_compute[253661]: 2025-11-22 09:53:09.981 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:53:09 np0005532048 nova_compute[253661]: 2025-11-22 09:53:09.982 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:53:10 np0005532048 nova_compute[253661]: 2025-11-22 09:53:10.298 253665 DEBUG oslo_concurrency.processutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:53:10 np0005532048 nova_compute[253661]: 2025-11-22 09:53:10.361 253665 DEBUG nova.compute.manager [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:53:10 np0005532048 nova_compute[253661]: 2025-11-22 09:53:10.362 253665 DEBUG oslo_concurrency.lockutils [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:53:10 np0005532048 nova_compute[253661]: 2025-11-22 09:53:10.363 253665 DEBUG oslo_concurrency.lockutils [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:53:10 np0005532048 nova_compute[253661]: 2025-11-22 09:53:10.363 253665 DEBUG oslo_concurrency.lockutils [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:53:10 np0005532048 nova_compute[253661]: 2025-11-22 09:53:10.363 253665 DEBUG nova.compute.manager [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] No waiting events found dispatching network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:53:10 np0005532048 nova_compute[253661]: 2025-11-22 09:53:10.364 253665 WARNING nova.compute.manager [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received unexpected event network-vif-plugged-43d41379-bbc3-4a75-be24-8943b05e7a8e for instance with vm_state deleted and task_state None.
Nov 22 04:53:10 np0005532048 nova_compute[253661]: 2025-11-22 09:53:10.364 253665 DEBUG nova.compute.manager [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Received event network-vif-deleted-43d41379-bbc3-4a75-be24-8943b05e7a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:53:10 np0005532048 nova_compute[253661]: 2025-11-22 09:53:10.364 253665 INFO nova.compute.manager [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Neutron deleted interface 43d41379-bbc3-4a75-be24-8943b05e7a8e; detaching it from the instance and deleting it from the info cache
Nov 22 04:53:10 np0005532048 nova_compute[253661]: 2025-11-22 09:53:10.365 253665 DEBUG nova.network.neutron [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:53:10 np0005532048 nova_compute[253661]: 2025-11-22 09:53:10.382 253665 DEBUG nova.compute.manager [req-16f344b5-3537-4516-82d7-e23c215cccbf req-3f349a73-cd10-4319-8177-a2eba997c4f6 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Detach interface failed, port_id=43d41379-bbc3-4a75-be24-8943b05e7a8e, reason: Instance 566d6c71-a9a6-49f3-9f46-f9d31e71936b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 04:53:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 1 active+clean+laggy, 304 active+clean; 288 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 21 KiB/s wr, 17 op/s
Nov 22 04:53:10 np0005532048 podman[410147]: 2025-11-22 09:53:10.544194862 +0000 UTC m=+0.067127404 container create ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:53:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:10.576+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:10 np0005532048 systemd[1]: Started libpod-conmon-ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba.scope.
Nov 22 04:53:10 np0005532048 podman[410147]: 2025-11-22 09:53:10.513991195 +0000 UTC m=+0.036923757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:53:10 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:53:10 np0005532048 podman[410147]: 2025-11-22 09:53:10.632211564 +0000 UTC m=+0.155144116 container init ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:53:10 np0005532048 podman[410147]: 2025-11-22 09:53:10.638689689 +0000 UTC m=+0.161622231 container start ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bell, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:53:10 np0005532048 podman[410147]: 2025-11-22 09:53:10.642422873 +0000 UTC m=+0.165355415 container attach ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 04:53:10 np0005532048 vibrant_bell[410163]: 167 167
Nov 22 04:53:10 np0005532048 systemd[1]: libpod-ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba.scope: Deactivated successfully.
Nov 22 04:53:10 np0005532048 podman[410147]: 2025-11-22 09:53:10.644861885 +0000 UTC m=+0.167794427 container died ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:53:10 np0005532048 systemd[1]: var-lib-containers-storage-overlay-337c04d95e716c14cd124fb8b5c51ad069140ad14b1365eb0497fad421f2ab82-merged.mount: Deactivated successfully.
Nov 22 04:53:10 np0005532048 podman[410147]: 2025-11-22 09:53:10.686232724 +0000 UTC m=+0.209165266 container remove ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 04:53:10 np0005532048 systemd[1]: libpod-conmon-ccc90a5fb0418dbb3faa820652db38c1f57bf695dfb1037da297a64eddca96ba.scope: Deactivated successfully.
Nov 22 04:53:10 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:53:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/385091223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:53:10 np0005532048 nova_compute[253661]: 2025-11-22 09:53:10.801 253665 DEBUG oslo_concurrency.processutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:53:10 np0005532048 nova_compute[253661]: 2025-11-22 09:53:10.807 253665 DEBUG nova.compute.provider_tree [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:53:10 np0005532048 nova_compute[253661]: 2025-11-22 09:53:10.832 253665 DEBUG nova.scheduler.client.report [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:53:10 np0005532048 podman[410188]: 2025-11-22 09:53:10.856554033 +0000 UTC m=+0.040135529 container create 202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:53:10 np0005532048 nova_compute[253661]: 2025-11-22 09:53:10.872 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:53:10 np0005532048 systemd[1]: Started libpod-conmon-202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e.scope.
Nov 22 04:53:10 np0005532048 nova_compute[253661]: 2025-11-22 09:53:10.918 253665 INFO nova.scheduler.client.report [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance 566d6c71-a9a6-49f3-9f46-f9d31e71936b
Nov 22 04:53:10 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:53:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f55f87279da6c64929aa6d8c45548b0141cbfbbdac6589ee4d164aadf2c35f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:53:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f55f87279da6c64929aa6d8c45548b0141cbfbbdac6589ee4d164aadf2c35f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:53:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f55f87279da6c64929aa6d8c45548b0141cbfbbdac6589ee4d164aadf2c35f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:53:10 np0005532048 podman[410188]: 2025-11-22 09:53:10.838929046 +0000 UTC m=+0.022510342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:53:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f55f87279da6c64929aa6d8c45548b0141cbfbbdac6589ee4d164aadf2c35f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:53:10 np0005532048 podman[410188]: 2025-11-22 09:53:10.952517967 +0000 UTC m=+0.136099273 container init 202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:53:10 np0005532048 podman[410188]: 2025-11-22 09:53:10.960338345 +0000 UTC m=+0.143919621 container start 202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:53:10 np0005532048 podman[410188]: 2025-11-22 09:53:10.963873225 +0000 UTC m=+0.147454531 container attach 202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:53:10 np0005532048 nova_compute[253661]: 2025-11-22 09:53:10.997 253665 DEBUG oslo_concurrency.lockutils [None req-ae83697e-00d1-48b7-b065-a3603868d6f0 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "566d6c71-a9a6-49f3-9f46-f9d31e71936b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:53:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:11.548+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]: {
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:    "0": [
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:        {
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "devices": [
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "/dev/loop3"
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            ],
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "lv_name": "ceph_lv0",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "lv_size": "21470642176",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "name": "ceph_lv0",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "tags": {
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.cluster_name": "ceph",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.crush_device_class": "",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.encrypted": "0",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.osd_id": "0",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.type": "block",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.vdo": "0"
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            },
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "type": "block",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "vg_name": "ceph_vg0"
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:        }
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:    ],
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:    "1": [
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:        {
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "devices": [
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "/dev/loop4"
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            ],
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "lv_name": "ceph_lv1",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "lv_size": "21470642176",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "name": "ceph_lv1",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "tags": {
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.cluster_name": "ceph",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.crush_device_class": "",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.encrypted": "0",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.osd_id": "1",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.type": "block",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.vdo": "0"
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            },
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "type": "block",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "vg_name": "ceph_vg1"
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:        }
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:    ],
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:    "2": [
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:        {
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "devices": [
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "/dev/loop5"
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            ],
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "lv_name": "ceph_lv2",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "lv_size": "21470642176",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "name": "ceph_lv2",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "tags": {
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.cluster_name": "ceph",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.crush_device_class": "",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.encrypted": "0",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.osd_id": "2",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.type": "block",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:                "ceph.vdo": "0"
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            },
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "type": "block",
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:            "vg_name": "ceph_vg2"
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:        }
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]:    ]
Nov 22 04:53:11 np0005532048 eloquent_shockley[410204]: }
Nov 22 04:53:11 np0005532048 systemd[1]: libpod-202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e.scope: Deactivated successfully.
Nov 22 04:53:11 np0005532048 conmon[410204]: conmon 202d9112c254275e6cc9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e.scope/container/memory.events
Nov 22 04:53:11 np0005532048 podman[410188]: 2025-11-22 09:53:11.735596405 +0000 UTC m=+0.919177681 container died 202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 04:53:11 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:11 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5f55f87279da6c64929aa6d8c45548b0141cbfbbdac6589ee4d164aadf2c35f9-merged.mount: Deactivated successfully.
Nov 22 04:53:11 np0005532048 podman[410188]: 2025-11-22 09:53:11.813410179 +0000 UTC m=+0.996991445 container remove 202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shockley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:53:11 np0005532048 systemd[1]: libpod-conmon-202d9112c254275e6cc9f9397bfe4ae74cc77c7645a7e70d45ad888819c0298e.scope: Deactivated successfully.
Nov 22 04:53:11 np0005532048 podman[410213]: 2025-11-22 09:53:11.852666884 +0000 UTC m=+0.093458181 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 22 04:53:11 np0005532048 podman[410216]: 2025-11-22 09:53:11.859130188 +0000 UTC m=+0.099940576 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 04:53:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 1 active+clean+laggy, 304 active+clean; 288 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 21 KiB/s wr, 17 op/s
Nov 22 04:53:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:53:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/490340778' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:53:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:53:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/490340778' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:53:12 np0005532048 podman[410401]: 2025-11-22 09:53:12.409679439 +0000 UTC m=+0.041631596 container create 26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:53:12 np0005532048 systemd[1]: Started libpod-conmon-26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685.scope.
Nov 22 04:53:12 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:53:12 np0005532048 podman[410401]: 2025-11-22 09:53:12.476389241 +0000 UTC m=+0.108341418 container init 26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:53:12 np0005532048 podman[410401]: 2025-11-22 09:53:12.389543549 +0000 UTC m=+0.021495736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:53:12 np0005532048 podman[410401]: 2025-11-22 09:53:12.485303447 +0000 UTC m=+0.117255604 container start 26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 04:53:12 np0005532048 kind_perlman[410417]: 167 167
Nov 22 04:53:12 np0005532048 systemd[1]: libpod-26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685.scope: Deactivated successfully.
Nov 22 04:53:12 np0005532048 podman[410401]: 2025-11-22 09:53:12.490894608 +0000 UTC m=+0.122846755 container attach 26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:53:12 np0005532048 podman[410401]: 2025-11-22 09:53:12.491221147 +0000 UTC m=+0.123173304 container died 26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 04:53:12 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2c07c440e8eb984e1cab11feba0b6fbfecb6a3f0600934051405be9c25212ad2-merged.mount: Deactivated successfully.
Nov 22 04:53:12 np0005532048 podman[410401]: 2025-11-22 09:53:12.548298184 +0000 UTC m=+0.180250341 container remove 26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:53:12 np0005532048 systemd[1]: libpod-conmon-26136550210c5f60747c68225737756fecd79fb928ce85f7dd6b855d2b6fd685.scope: Deactivated successfully.
Nov 22 04:53:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:12.592+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:12 np0005532048 podman[410442]: 2025-11-22 09:53:12.724164954 +0000 UTC m=+0.044220622 container create 9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:53:12 np0005532048 nova_compute[253661]: 2025-11-22 09:53:12.724 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:12 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:12 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 166 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:53:12 np0005532048 systemd[1]: Started libpod-conmon-9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073.scope.
Nov 22 04:53:12 np0005532048 podman[410442]: 2025-11-22 09:53:12.70349514 +0000 UTC m=+0.023550828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:53:12 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:53:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e339efcc221112192db2c66c83a610e0cfba7f00983d1504c8d9132994c2f638/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:53:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e339efcc221112192db2c66c83a610e0cfba7f00983d1504c8d9132994c2f638/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:53:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e339efcc221112192db2c66c83a610e0cfba7f00983d1504c8d9132994c2f638/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:53:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e339efcc221112192db2c66c83a610e0cfba7f00983d1504c8d9132994c2f638/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:53:12 np0005532048 podman[410442]: 2025-11-22 09:53:12.826833348 +0000 UTC m=+0.146889036 container init 9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_robinson, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 22 04:53:12 np0005532048 podman[410442]: 2025-11-22 09:53:12.834107973 +0000 UTC m=+0.154163641 container start 9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:53:12 np0005532048 podman[410442]: 2025-11-22 09:53:12.83914093 +0000 UTC m=+0.159196598 container attach 9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:53:12 np0005532048 nova_compute[253661]: 2025-11-22 09:53:12.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.169 253665 DEBUG nova.compute.manager [req-eff0b36f-ec5e-4a51-8267-f807ef289b38 req-20d47a5c-c5f3-4b36-b1f8-28328824d241 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-changed-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.169 253665 DEBUG nova.compute.manager [req-eff0b36f-ec5e-4a51-8267-f807ef289b38 req-20d47a5c-c5f3-4b36-b1f8-28328824d241 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Refreshing instance network info cache due to event network-changed-b10caa5b-0659-423b-9bcf-57a9a1ed30c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.169 253665 DEBUG oslo_concurrency.lockutils [req-eff0b36f-ec5e-4a51-8267-f807ef289b38 req-20d47a5c-c5f3-4b36-b1f8-28328824d241 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.169 253665 DEBUG oslo_concurrency.lockutils [req-eff0b36f-ec5e-4a51-8267-f807ef289b38 req-20d47a5c-c5f3-4b36-b1f8-28328824d241 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.170 253665 DEBUG nova.network.neutron [req-eff0b36f-ec5e-4a51-8267-f807ef289b38 req-20d47a5c-c5f3-4b36-b1f8-28328824d241 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Refreshing network info cache for port b10caa5b-0659-423b-9bcf-57a9a1ed30c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.252 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "d986b43b-ea74-42e0-903b-eef7a997e4ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.252 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.255 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.256 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.256 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.257 253665 INFO nova.compute.manager [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Terminating instance#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.258 253665 DEBUG nova.compute.manager [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:53:13 np0005532048 kernel: tapb10caa5b-06 (unregistering): left promiscuous mode
Nov 22 04:53:13 np0005532048 NetworkManager[48920]: <info>  [1763805193.3492] device (tapb10caa5b-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:53:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:13Z|01619|binding|INFO|Releasing lport b10caa5b-0659-423b-9bcf-57a9a1ed30c0 from this chassis (sb_readonly=0)
Nov 22 04:53:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:13Z|01620|binding|INFO|Setting lport b10caa5b-0659-423b-9bcf-57a9a1ed30c0 down in Southbound
Nov 22 04:53:13 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:13Z|01621|binding|INFO|Removing iface tapb10caa5b-06 ovn-installed in OVS
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.362 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.368 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:a9:7e 10.100.0.6 2001:db8::f816:3eff:fe12:a97e'], port_security=['fa:16:3e:12:a9:7e 10.100.0.6 2001:db8::f816:3eff:fe12:a97e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8::f816:3eff:fe12:a97e/64', 'neutron:device_id': 'd986b43b-ea74-42e0-903b-eef7a997e4ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aa4944a176f4fd4b020666526e48e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0beff345-fc2f-4a68-a4a7-1d4c0960ae91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63d2b202-7cdb-46d8-a16a-63cc2d81bd37, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=b10caa5b-0659-423b-9bcf-57a9a1ed30c0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:53:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.369 162862 INFO neutron.agent.ovn.metadata.agent [-] Port b10caa5b-0659-423b-9bcf-57a9a1ed30c0 in datapath 58b95ca9-260c-49de-9bd2-c16568d51c7e unbound from our chassis#033[00m
Nov 22 04:53:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.371 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58b95ca9-260c-49de-9bd2-c16568d51c7e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:53:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.372 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[6d569b78-c595-4530-8fd8-27b9861f7c4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:53:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.372 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e namespace which is not needed anymore#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.380 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:13 np0005532048 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000095.scope: Deactivated successfully.
Nov 22 04:53:13 np0005532048 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000095.scope: Consumed 15.889s CPU time.
Nov 22 04:53:13 np0005532048 systemd-machined[215941]: Machine qemu-181-instance-00000095 terminated.
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.500 253665 INFO nova.virt.libvirt.driver [-] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Instance destroyed successfully.#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.501 253665 DEBUG nova.objects.instance [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lazy-loading 'resources' on Instance uuid d986b43b-ea74-42e0-903b-eef7a997e4ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.513 253665 DEBUG nova.virt.libvirt.vif [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:51:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-355287164',display_name='tempest-TestGettingAddress-server-355287164',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-355287164',id=149,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSb60+tm2QuEWINNrbY2Z4T8shyuVj5ORFNm8DDF4ERr5xc1TwTNbRvBPI6FjbgHdIsPrc+izgcvAijbwtfNpo3Q7dk/qm1p9ZZITdtksKMPJb7o1jSKDouF16N0zCqOA==',key_name='tempest-TestGettingAddress-1774595184',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:52:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aa4944a176f4fd4b020666526e48e84',ramdisk_id='',reservation_id='r-913op7wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-647364936',owner_user_name='tempest-TestGettingAddress-647364936-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:52:06Z,user_data=None,user_id='175451870b324a779f0096d0d5c2a4c0',uuid=d986b43b-ea74-42e0-903b-eef7a997e4ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.514 253665 DEBUG nova.network.os_vif_util [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converting VIF {"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.515 253665 DEBUG nova.network.os_vif_util [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:12:a9:7e,bridge_name='br-int',has_traffic_filtering=True,id=b10caa5b-0659-423b-9bcf-57a9a1ed30c0,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb10caa5b-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.515 253665 DEBUG os_vif [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:a9:7e,bridge_name='br-int',has_traffic_filtering=True,id=b10caa5b-0659-423b-9bcf-57a9a1ed30c0,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb10caa5b-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.517 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.517 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb10caa5b-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.521 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.525 253665 INFO os_vif [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:a9:7e,bridge_name='br-int',has_traffic_filtering=True,id=b10caa5b-0659-423b-9bcf-57a9a1ed30c0,network=Network(58b95ca9-260c-49de-9bd2-c16568d51c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb10caa5b-06')#033[00m
Nov 22 04:53:13 np0005532048 neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e[408908]: [NOTICE]   (408912) : haproxy version is 2.8.14-c23fe91
Nov 22 04:53:13 np0005532048 neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e[408908]: [NOTICE]   (408912) : path to executable is /usr/sbin/haproxy
Nov 22 04:53:13 np0005532048 neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e[408908]: [WARNING]  (408912) : Exiting Master process...
Nov 22 04:53:13 np0005532048 neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e[408908]: [WARNING]  (408912) : Exiting Master process...
Nov 22 04:53:13 np0005532048 neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e[408908]: [ALERT]    (408912) : Current worker (408915) exited with code 143 (Terminated)
Nov 22 04:53:13 np0005532048 neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e[408908]: [WARNING]  (408912) : All workers exited. Exiting... (0)
Nov 22 04:53:13 np0005532048 systemd[1]: libpod-2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47.scope: Deactivated successfully.
Nov 22 04:53:13 np0005532048 podman[410486]: 2025-11-22 09:53:13.550271634 +0000 UTC m=+0.059918171 container died 2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:53:13 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47-userdata-shm.mount: Deactivated successfully.
Nov 22 04:53:13 np0005532048 systemd[1]: var-lib-containers-storage-overlay-04ea709103a9d5440c95c0740482f664efdf69029657dc148bca198fcea4ae2e-merged.mount: Deactivated successfully.
Nov 22 04:53:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:13.633+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:13 np0005532048 podman[410486]: 2025-11-22 09:53:13.651755818 +0000 UTC m=+0.161402365 container cleanup 2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:53:13 np0005532048 systemd[1]: libpod-conmon-2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47.scope: Deactivated successfully.
Nov 22 04:53:13 np0005532048 podman[410554]: 2025-11-22 09:53:13.737767089 +0000 UTC m=+0.052927014 container remove 2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:53:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.746 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[d2885377-6dab-4618-a6e8-55641f23981a]: (4, ('Sat Nov 22 09:53:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e (2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47)\n2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47\nSat Nov 22 09:53:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e (2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47)\n2cffe105c5626cf852dfbb346e1029c8b10c8b933ce8f2b23b73ef302e900f47\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:53:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.749 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[04fd4ec4-8687-45fd-9ce7-800c2ecf61a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:53:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.750 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58b95ca9-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.752 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:13 np0005532048 kernel: tap58b95ca9-20: left promiscuous mode
Nov 22 04:53:13 np0005532048 nova_compute[253661]: 2025-11-22 09:53:13.812 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:13 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:13 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 166 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.822 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[76caa4ad-6657-4f94-8f7b-e76bf0f4a189]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:53:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.837 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[fc41b2b6-c7e4-45af-9915-dabd79c2e7b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:53:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.839 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3a300e06-9af3-438b-bc1e-bc7c539f31d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:53:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.858 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5fe158-6392-4490-9629-0bf35a37147b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 797615, 'reachable_time': 23597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410582, 'error': None, 'target': 'ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:53:13 np0005532048 systemd[1]: run-netns-ovnmeta\x2d58b95ca9\x2d260c\x2d49de\x2d9bd2\x2dc16568d51c7e.mount: Deactivated successfully.
Nov 22 04:53:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.869 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58b95ca9-260c-49de-9bd2-c16568d51c7e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:53:13 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:13.869 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5eadc0-03f1-4dea-a9b5-700b03a7260b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]: {
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:        "osd_id": 1,
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:        "type": "bluestore"
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:    },
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:        "osd_id": 0,
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:        "type": "bluestore"
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:    },
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:        "osd_id": 2,
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:        "type": "bluestore"
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]:    }
Nov 22 04:53:13 np0005532048 awesome_robinson[410458]: }
Nov 22 04:53:13 np0005532048 systemd[1]: libpod-9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073.scope: Deactivated successfully.
Nov 22 04:53:13 np0005532048 systemd[1]: libpod-9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073.scope: Consumed 1.071s CPU time.
Nov 22 04:53:13 np0005532048 podman[410442]: 2025-11-22 09:53:13.968619903 +0000 UTC m=+1.288675581 container died 9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_robinson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:53:14 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e339efcc221112192db2c66c83a610e0cfba7f00983d1504c8d9132994c2f638-merged.mount: Deactivated successfully.
Nov 22 04:53:14 np0005532048 podman[410442]: 2025-11-22 09:53:14.046241321 +0000 UTC m=+1.366296989 container remove 9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_robinson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:53:14 np0005532048 systemd[1]: libpod-conmon-9905c0cfdaa3b78d1d82077570c4740e5300e087e8d8dc164c6b54d0ff51f073.scope: Deactivated successfully.
Nov 22 04:53:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:53:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:53:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:53:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:53:14 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev dd05be6f-8442-4f06-af87-598df44caa8e does not exist
Nov 22 04:53:14 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev fbe91504-54f2-4464-8245-d6b96a981f85 does not exist
Nov 22 04:53:14 np0005532048 nova_compute[253661]: 2025-11-22 09:53:14.110 253665 INFO nova.virt.libvirt.driver [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Deleting instance files /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce_del#033[00m
Nov 22 04:53:14 np0005532048 nova_compute[253661]: 2025-11-22 09:53:14.111 253665 INFO nova.virt.libvirt.driver [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Deletion of /var/lib/nova/instances/d986b43b-ea74-42e0-903b-eef7a997e4ce_del complete#033[00m
Nov 22 04:53:14 np0005532048 nova_compute[253661]: 2025-11-22 09:53:14.187 253665 INFO nova.compute.manager [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:53:14 np0005532048 nova_compute[253661]: 2025-11-22 09:53:14.188 253665 DEBUG oslo.service.loopingcall [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:53:14 np0005532048 nova_compute[253661]: 2025-11-22 09:53:14.188 253665 DEBUG nova.compute.manager [-] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:53:14 np0005532048 nova_compute[253661]: 2025-11-22 09:53:14.188 253665 DEBUG nova.network.neutron [-] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:53:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 25 KiB/s wr, 30 op/s
Nov 22 04:53:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:14.655+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:14 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:14 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:53:14 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:53:15 np0005532048 nova_compute[253661]: 2025-11-22 09:53:15.270 253665 DEBUG nova.compute.manager [req-e51bb0a9-7044-4258-9a41-5acad505093c req-0bebae7d-cfcc-46ef-8c22-010a27db1a09 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-vif-unplugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:53:15 np0005532048 nova_compute[253661]: 2025-11-22 09:53:15.270 253665 DEBUG oslo_concurrency.lockutils [req-e51bb0a9-7044-4258-9a41-5acad505093c req-0bebae7d-cfcc-46ef-8c22-010a27db1a09 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:15 np0005532048 nova_compute[253661]: 2025-11-22 09:53:15.270 253665 DEBUG oslo_concurrency.lockutils [req-e51bb0a9-7044-4258-9a41-5acad505093c req-0bebae7d-cfcc-46ef-8c22-010a27db1a09 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:15 np0005532048 nova_compute[253661]: 2025-11-22 09:53:15.271 253665 DEBUG oslo_concurrency.lockutils [req-e51bb0a9-7044-4258-9a41-5acad505093c req-0bebae7d-cfcc-46ef-8c22-010a27db1a09 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:53:15 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 04:53:15 np0005532048 nova_compute[253661]: 2025-11-22 09:53:15.271 253665 DEBUG nova.compute.manager [req-e51bb0a9-7044-4258-9a41-5acad505093c req-0bebae7d-cfcc-46ef-8c22-010a27db1a09 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] No waiting events found dispatching network-vif-unplugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:53:15 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 04:53:15 np0005532048 nova_compute[253661]: 2025-11-22 09:53:15.271 253665 DEBUG nova.compute.manager [req-e51bb0a9-7044-4258-9a41-5acad505093c req-0bebae7d-cfcc-46ef-8c22-010a27db1a09 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-vif-unplugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:53:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:15.371 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:53:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:15.666+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:15 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:16 np0005532048 nova_compute[253661]: 2025-11-22 09:53:16.234 253665 DEBUG nova.network.neutron [-] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:53:16 np0005532048 nova_compute[253661]: 2025-11-22 09:53:16.252 253665 INFO nova.compute.manager [-] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Took 2.06 seconds to deallocate network for instance.#033[00m
Nov 22 04:53:16 np0005532048 nova_compute[253661]: 2025-11-22 09:53:16.292 253665 DEBUG nova.compute.manager [req-a1c6e121-3de5-4d1e-aa6b-d02f98817c34 req-649c6371-e131-4c37-8048-0d66a198902d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-vif-deleted-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:53:16 np0005532048 nova_compute[253661]: 2025-11-22 09:53:16.307 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:16 np0005532048 nova_compute[253661]: 2025-11-22 09:53:16.308 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 1 active+clean+laggy, 304 active+clean; 275 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 24 KiB/s wr, 30 op/s
Nov 22 04:53:16 np0005532048 nova_compute[253661]: 2025-11-22 09:53:16.387 253665 DEBUG nova.network.neutron [req-eff0b36f-ec5e-4a51-8267-f807ef289b38 req-20d47a5c-c5f3-4b36-b1f8-28328824d241 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Updated VIF entry in instance network info cache for port b10caa5b-0659-423b-9bcf-57a9a1ed30c0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:53:16 np0005532048 nova_compute[253661]: 2025-11-22 09:53:16.387 253665 DEBUG nova.network.neutron [req-eff0b36f-ec5e-4a51-8267-f807ef289b38 req-20d47a5c-c5f3-4b36-b1f8-28328824d241 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Updating instance_info_cache with network_info: [{"id": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "address": "fa:16:3e:12:a9:7e", "network": {"id": "58b95ca9-260c-49de-9bd2-c16568d51c7e", "bridge": "br-int", "label": "tempest-network-smoke--986128646", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a97e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "9aa4944a176f4fd4b020666526e48e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb10caa5b-06", "ovs_interfaceid": "b10caa5b-0659-423b-9bcf-57a9a1ed30c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:53:16 np0005532048 nova_compute[253661]: 2025-11-22 09:53:16.402 253665 DEBUG oslo_concurrency.lockutils [req-eff0b36f-ec5e-4a51-8267-f807ef289b38 req-20d47a5c-c5f3-4b36-b1f8-28328824d241 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-d986b43b-ea74-42e0-903b-eef7a997e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:53:16 np0005532048 nova_compute[253661]: 2025-11-22 09:53:16.404 253665 DEBUG oslo_concurrency.processutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:53:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:16.680+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:16 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:53:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/849791293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:53:16 np0005532048 nova_compute[253661]: 2025-11-22 09:53:16.888 253665 DEBUG oslo_concurrency.processutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:53:16 np0005532048 nova_compute[253661]: 2025-11-22 09:53:16.894 253665 DEBUG nova.compute.provider_tree [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:53:16 np0005532048 nova_compute[253661]: 2025-11-22 09:53:16.908 253665 DEBUG nova.scheduler.client.report [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:53:16 np0005532048 nova_compute[253661]: 2025-11-22 09:53:16.931 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:53:16 np0005532048 nova_compute[253661]: 2025-11-22 09:53:16.958 253665 INFO nova.scheduler.client.report [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Deleted allocations for instance d986b43b-ea74-42e0-903b-eef7a997e4ce#033[00m
Nov 22 04:53:17 np0005532048 nova_compute[253661]: 2025-11-22 09:53:17.022 253665 DEBUG oslo_concurrency.lockutils [None req-88845b3e-50c8-4fbb-bdac-8115a37a6356 175451870b324a779f0096d0d5c2a4c0 9aa4944a176f4fd4b020666526e48e84 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:53:17 np0005532048 nova_compute[253661]: 2025-11-22 09:53:17.345 253665 DEBUG nova.compute.manager [req-b31ed94a-39a2-4916-824d-456a9c8d2689 req-87b082b9-484f-457f-b6bc-ca496a22dbf1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received event network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:53:17 np0005532048 nova_compute[253661]: 2025-11-22 09:53:17.346 253665 DEBUG oslo_concurrency.lockutils [req-b31ed94a-39a2-4916-824d-456a9c8d2689 req-87b082b9-484f-457f-b6bc-ca496a22dbf1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:17 np0005532048 nova_compute[253661]: 2025-11-22 09:53:17.346 253665 DEBUG oslo_concurrency.lockutils [req-b31ed94a-39a2-4916-824d-456a9c8d2689 req-87b082b9-484f-457f-b6bc-ca496a22dbf1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:17 np0005532048 nova_compute[253661]: 2025-11-22 09:53:17.346 253665 DEBUG oslo_concurrency.lockutils [req-b31ed94a-39a2-4916-824d-456a9c8d2689 req-87b082b9-484f-457f-b6bc-ca496a22dbf1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "d986b43b-ea74-42e0-903b-eef7a997e4ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:53:17 np0005532048 nova_compute[253661]: 2025-11-22 09:53:17.347 253665 DEBUG nova.compute.manager [req-b31ed94a-39a2-4916-824d-456a9c8d2689 req-87b082b9-484f-457f-b6bc-ca496a22dbf1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] No waiting events found dispatching network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:53:17 np0005532048 nova_compute[253661]: 2025-11-22 09:53:17.347 253665 WARNING nova.compute.manager [req-b31ed94a-39a2-4916-824d-456a9c8d2689 req-87b082b9-484f-457f-b6bc-ca496a22dbf1 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Received unexpected event network-vif-plugged-b10caa5b-0659-423b-9bcf-57a9a1ed30c0 for instance with vm_state deleted and task_state None.#033[00m
Nov 22 04:53:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:17.722+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:17 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 171 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:53:17 np0005532048 nova_compute[253661]: 2025-11-22 09:53:17.775 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:17 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:17 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 171 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 25 KiB/s wr, 58 op/s
Nov 22 04:53:18 np0005532048 nova_compute[253661]: 2025-11-22 09:53:18.520 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:18.679+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:18 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:18 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:19 np0005532048 podman[410677]: 2025-11-22 09:53:19.430533433 +0000 UTC m=+0.114633108 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 04:53:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:19.677+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:20 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.7 KiB/s wr, 40 op/s
Nov 22 04:53:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:20.682+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:21 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:21.694+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:22 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.7 KiB/s wr, 40 op/s
Nov 22 04:53:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:22.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:53:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:53:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:53:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:53:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:53:22 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 176 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:53:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:53:22 np0005532048 nova_compute[253661]: 2025-11-22 09:53:22.816 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:22 np0005532048 nova_compute[253661]: 2025-11-22 09:53:22.866 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763805187.8656547, 566d6c71-a9a6-49f3-9f46-f9d31e71936b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:53:22 np0005532048 nova_compute[253661]: 2025-11-22 09:53:22.867 253665 INFO nova.compute.manager [-] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:53:22 np0005532048 nova_compute[253661]: 2025-11-22 09:53:22.886 253665 DEBUG nova.compute.manager [None req-6fab7022-6e32-4c11-b939-44bf327a2d90 - - - - - -] [instance: 566d6c71-a9a6-49f3-9f46-f9d31e71936b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:53:23 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:23 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 176 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:23 np0005532048 nova_compute[253661]: 2025-11-22 09:53:23.521 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:23.712+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.7 KiB/s wr, 40 op/s
Nov 22 04:53:24 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:24Z|01622|binding|INFO|Releasing lport e20358df-1297-4b78-9482-59841121a4d7 from this chassis (sb_readonly=0)
Nov 22 04:53:24 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:24 np0005532048 nova_compute[253661]: 2025-11-22 09:53:24.491 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:24.720+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:25 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:25.717+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 22 04:53:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:26.730+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:26 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:27.730+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:27 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 181 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:53:27 np0005532048 nova_compute[253661]: 2025-11-22 09:53:27.819 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:27.996 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:27.997 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:27.997 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:53:28 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:28 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:28 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 181 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 22 04:53:28 np0005532048 nova_compute[253661]: 2025-11-22 09:53:28.499 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763805193.4984808, d986b43b-ea74-42e0-903b-eef7a997e4ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:53:28 np0005532048 nova_compute[253661]: 2025-11-22 09:53:28.499 253665 INFO nova.compute.manager [-] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:53:28 np0005532048 nova_compute[253661]: 2025-11-22 09:53:28.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:28 np0005532048 nova_compute[253661]: 2025-11-22 09:53:28.526 253665 DEBUG nova.compute.manager [None req-2f86d777-6543-4f1c-ab65-07815c648ffa - - - - - -] [instance: d986b43b-ea74-42e0-903b-eef7a997e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:53:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:28.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:29 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:29.719+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:30 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:53:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:30.688+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:31 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:31.710+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:32 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:53:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:32.727+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:32 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 186 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:32 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:53:32 np0005532048 nova_compute[253661]: 2025-11-22 09:53:32.820 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:33 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:33 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 186 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:33 np0005532048 nova_compute[253661]: 2025-11-22 09:53:33.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:33.717+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:53:34 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:34.688+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:35 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:35.732+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:53:36 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:36.719+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:37 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:37.722+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:37 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 191 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:37 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:53:37 np0005532048 nova_compute[253661]: 2025-11-22 09:53:37.822 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:53:38 np0005532048 nova_compute[253661]: 2025-11-22 09:53:38.529 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:38.683+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:38 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:38 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 191 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:39 np0005532048 nova_compute[253661]: 2025-11-22 09:53:39.424 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:39 np0005532048 nova_compute[253661]: 2025-11-22 09:53:39.425 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:39 np0005532048 nova_compute[253661]: 2025-11-22 09:53:39.438 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 22 04:53:39 np0005532048 nova_compute[253661]: 2025-11-22 09:53:39.522 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:39 np0005532048 nova_compute[253661]: 2025-11-22 09:53:39.523 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:39 np0005532048 nova_compute[253661]: 2025-11-22 09:53:39.532 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 22 04:53:39 np0005532048 nova_compute[253661]: 2025-11-22 09:53:39.532 253665 INFO nova.compute.claims [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 22 04:53:39 np0005532048 nova_compute[253661]: 2025-11-22 09:53:39.689 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:53:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:39.706+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:39 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:53:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3508051849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.158 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.165 253665 DEBUG nova.compute.provider_tree [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.180 253665 DEBUG nova.scheduler.client.report [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.207 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.208 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.251 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.252 253665 DEBUG nova.network.neutron [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.273 253665 INFO nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.295 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 22 04:53:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.421 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.422 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.422 253665 INFO nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Creating image(s)#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.443 253665 DEBUG nova.storage.rbd_utils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] rbd image 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.463 253665 DEBUG nova.storage.rbd_utils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] rbd image 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.483 253665 DEBUG nova.storage.rbd_utils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] rbd image 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.486 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.570 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.571 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.571 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.572 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.591 253665 DEBUG nova.storage.rbd_utils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] rbd image 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.594 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:53:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:40.710+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:40 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:40 np0005532048 nova_compute[253661]: 2025-11-22 09:53:40.935 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:53:41 np0005532048 nova_compute[253661]: 2025-11-22 09:53:41.005 253665 DEBUG nova.storage.rbd_utils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] resizing rbd image 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:53:41 np0005532048 nova_compute[253661]: 2025-11-22 09:53:41.045 253665 DEBUG nova.policy [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dd4a4c13ace640b98e8ff1360f0112e8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0b724ea495b4cef9085881ad518a4f0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 22 04:53:41 np0005532048 nova_compute[253661]: 2025-11-22 09:53:41.095 253665 DEBUG nova.objects.instance [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'migration_context' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:53:41 np0005532048 nova_compute[253661]: 2025-11-22 09:53:41.109 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:53:41 np0005532048 nova_compute[253661]: 2025-11-22 09:53:41.110 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Ensure instance console log exists: /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:53:41 np0005532048 nova_compute[253661]: 2025-11-22 09:53:41.111 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:41 np0005532048 nova_compute[253661]: 2025-11-22 09:53:41.111 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:41 np0005532048 nova_compute[253661]: 2025-11-22 09:53:41.111 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:53:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:41.694+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:41 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:42 np0005532048 podman[410893]: 2025-11-22 09:53:42.355510892 +0000 UTC m=+0.049218379 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:53:42 np0005532048 nova_compute[253661]: 2025-11-22 09:53:42.362 253665 DEBUG nova.network.neutron [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Successfully created port: 57ba7057-9293-4134-9246-25ddf0b3af07 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 22 04:53:42 np0005532048 podman[410894]: 2025-11-22 09:53:42.369247191 +0000 UTC m=+0.060661479 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 04:53:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:53:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:42.715+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:42 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 196 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:42 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:53:42 np0005532048 nova_compute[253661]: 2025-11-22 09:53:42.823 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:42 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:42 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 196 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:43 np0005532048 nova_compute[253661]: 2025-11-22 09:53:43.531 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:43.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:43 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:44 np0005532048 nova_compute[253661]: 2025-11-22 09:53:44.268 253665 DEBUG nova.network.neutron [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Successfully updated port: 57ba7057-9293-4134-9246-25ddf0b3af07 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 22 04:53:44 np0005532048 nova_compute[253661]: 2025-11-22 09:53:44.303 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:53:44 np0005532048 nova_compute[253661]: 2025-11-22 09:53:44.304 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquired lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:53:44 np0005532048 nova_compute[253661]: 2025-11-22 09:53:44.305 253665 DEBUG nova.network.neutron [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:53:44 np0005532048 nova_compute[253661]: 2025-11-22 09:53:44.377 253665 DEBUG nova.compute.manager [req-b6bb6df2-a0d0-4b3c-a928-05c93828e518 req-d16c0d10-505d-4fe9-bc7d-2983407d9d28 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-changed-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:53:44 np0005532048 nova_compute[253661]: 2025-11-22 09:53:44.378 253665 DEBUG nova.compute.manager [req-b6bb6df2-a0d0-4b3c-a928-05c93828e518 req-d16c0d10-505d-4fe9-bc7d-2983407d9d28 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Refreshing instance network info cache due to event network-changed-57ba7057-9293-4134-9246-25ddf0b3af07. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:53:44 np0005532048 nova_compute[253661]: 2025-11-22 09:53:44.378 253665 DEBUG oslo_concurrency.lockutils [req-b6bb6df2-a0d0-4b3c-a928-05c93828e518 req-d16c0d10-505d-4fe9-bc7d-2983407d9d28 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:53:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:53:44 np0005532048 nova_compute[253661]: 2025-11-22 09:53:44.452 253665 DEBUG nova.network.neutron [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 22 04:53:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:44.739+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:45 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.422 253665 DEBUG nova.network.neutron [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Updating instance_info_cache with network_info: [{"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.502 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Releasing lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.503 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance network_info: |[{"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.503 253665 DEBUG oslo_concurrency.lockutils [req-b6bb6df2-a0d0-4b3c-a928-05c93828e518 req-d16c0d10-505d-4fe9-bc7d-2983407d9d28 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.503 253665 DEBUG nova.network.neutron [req-b6bb6df2-a0d0-4b3c-a928-05c93828e518 req-d16c0d10-505d-4fe9-bc7d-2983407d9d28 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Refreshing network info cache for port 57ba7057-9293-4134-9246-25ddf0b3af07 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.506 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Start _get_guest_xml network_info=[{"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.510 253665 WARNING nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.519 253665 DEBUG nova.virt.libvirt.host [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.519 253665 DEBUG nova.virt.libvirt.host [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.523 253665 DEBUG nova.virt.libvirt.host [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.524 253665 DEBUG nova.virt.libvirt.host [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.524 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.524 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.524 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.525 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.525 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.525 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.525 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.526 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.526 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.526 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.526 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.527 253665 DEBUG nova.virt.hardware [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.530 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:53:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:45.782+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:53:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506280532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:53:45 np0005532048 nova_compute[253661]: 2025-11-22 09:53:45.979 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.003 253665 DEBUG nova.storage.rbd_utils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] rbd image 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.008 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:53:46 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:53:46 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:53:46 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3257460261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.484 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.488 253665 DEBUG nova.virt.libvirt.vif [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:53:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2052922089',display_name='tempest-TestServerAdvancedOps-server-2052922089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2052922089',id=151,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0b724ea495b4cef9085881ad518a4f0',ramdisk_id='',reservation_id='r-81r06j8q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-654297744',owner_user_name='tempest-TestServerAdvancedO
ps-654297744-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:53:40Z,user_data=None,user_id='dd4a4c13ace640b98e8ff1360f0112e8',uuid=38b3cb94-17e4-4fd5-830f-1934cf3ee3a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.488 253665 DEBUG nova.network.os_vif_util [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converting VIF {"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.489 253665 DEBUG nova.network.os_vif_util [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.490 253665 DEBUG nova.objects.instance [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.503 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  <uuid>38b3cb94-17e4-4fd5-830f-1934cf3ee3a1</uuid>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  <name>instance-00000097</name>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <nova:name>tempest-TestServerAdvancedOps-server-2052922089</nova:name>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:53:45</nova:creationTime>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:53:46 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:        <nova:user uuid="dd4a4c13ace640b98e8ff1360f0112e8">tempest-TestServerAdvancedOps-654297744-project-member</nova:user>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:        <nova:project uuid="b0b724ea495b4cef9085881ad518a4f0">tempest-TestServerAdvancedOps-654297744</nova:project>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <nova:ports>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:        <nova:port uuid="57ba7057-9293-4134-9246-25ddf0b3af07">
Nov 22 04:53:46 np0005532048 nova_compute[253661]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:        </nova:port>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      </nova:ports>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <entry name="serial">38b3cb94-17e4-4fd5-830f-1934cf3ee3a1</entry>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <entry name="uuid">38b3cb94-17e4-4fd5-830f-1934cf3ee3a1</entry>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk">
Nov 22 04:53:46 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:53:46 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk.config">
Nov 22 04:53:46 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:53:46 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <interface type="ethernet">
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <mac address="fa:16:3e:5b:5c:38"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <mtu size="1442"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <target dev="tap57ba7057-92"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    </interface>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1/console.log" append="off"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:53:46 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:53:46 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:53:46 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:53:46 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.505 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Preparing to wait for external event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.506 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.506 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.507 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.508 253665 DEBUG nova.virt.libvirt.vif [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T09:53:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2052922089',display_name='tempest-TestServerAdvancedOps-server-2052922089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2052922089',id=151,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0b724ea495b4cef9085881ad518a4f0',ramdisk_id='',reservation_id='r-81r06j8q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-654297744',owner_user_name='tempest-TestServe
rAdvancedOps-654297744-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T09:53:40Z,user_data=None,user_id='dd4a4c13ace640b98e8ff1360f0112e8',uuid=38b3cb94-17e4-4fd5-830f-1934cf3ee3a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.509 253665 DEBUG nova.network.os_vif_util [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converting VIF {"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.510 253665 DEBUG nova.network.os_vif_util [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.511 253665 DEBUG os_vif [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.512 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.513 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.514 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.520 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57ba7057-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.520 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap57ba7057-92, col_values=(('external_ids', {'iface-id': '57ba7057-9293-4134-9246-25ddf0b3af07', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:5c:38', 'vm-uuid': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.522 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:46 np0005532048 NetworkManager[48920]: <info>  [1763805226.5233] manager: (tap57ba7057-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/663)
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.530 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.532 253665 INFO os_vif [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92')#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.590 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.591 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.591 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] No VIF found with MAC fa:16:3e:5b:5c:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.592 253665 INFO nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Using config drive#033[00m
Nov 22 04:53:46 np0005532048 nova_compute[253661]: 2025-11-22 09:53:46.616 253665 DEBUG nova.storage.rbd_utils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] rbd image 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:53:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:46.765+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:47 np0005532048 nova_compute[253661]: 2025-11-22 09:53:47.158 253665 INFO nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Creating config drive at /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1/disk.config#033[00m
Nov 22 04:53:47 np0005532048 nova_compute[253661]: 2025-11-22 09:53:47.168 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzczupkfh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:53:47 np0005532048 nova_compute[253661]: 2025-11-22 09:53:47.331 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzczupkfh" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:53:47 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:47 np0005532048 nova_compute[253661]: 2025-11-22 09:53:47.380 253665 DEBUG nova.storage.rbd_utils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] rbd image 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:53:47 np0005532048 nova_compute[253661]: 2025-11-22 09:53:47.385 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1/disk.config 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:53:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:47.743+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:47 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 201 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:53:47 np0005532048 nova_compute[253661]: 2025-11-22 09:53:47.825 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:47 np0005532048 nova_compute[253661]: 2025-11-22 09:53:47.918 253665 DEBUG oslo_concurrency.processutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1/disk.config 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:53:47 np0005532048 nova_compute[253661]: 2025-11-22 09:53:47.919 253665 INFO nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Deleting local config drive /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1/disk.config because it was imported into RBD.#033[00m
Nov 22 04:53:47 np0005532048 kernel: tap57ba7057-92: entered promiscuous mode
Nov 22 04:53:47 np0005532048 NetworkManager[48920]: <info>  [1763805227.9941] manager: (tap57ba7057-92): new Tun device (/org/freedesktop/NetworkManager/Devices/664)
Nov 22 04:53:47 np0005532048 nova_compute[253661]: 2025-11-22 09:53:47.993 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:47Z|01623|binding|INFO|Claiming lport 57ba7057-9293-4134-9246-25ddf0b3af07 for this chassis.
Nov 22 04:53:47 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:47Z|01624|binding|INFO|57ba7057-9293-4134-9246-25ddf0b3af07: Claiming fa:16:3e:5b:5c:38 10.100.0.12
Nov 22 04:53:48 np0005532048 systemd-udevd[411067]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:53:48 np0005532048 systemd-machined[215941]: New machine qemu-183-instance-00000097.
Nov 22 04:53:48 np0005532048 nova_compute[253661]: 2025-11-22 09:53:48.035 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:48 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:48Z|01625|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 ovn-installed in OVS
Nov 22 04:53:48 np0005532048 nova_compute[253661]: 2025-11-22 09:53:48.038 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:48 np0005532048 NetworkManager[48920]: <info>  [1763805228.0423] device (tap57ba7057-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:53:48 np0005532048 NetworkManager[48920]: <info>  [1763805228.0432] device (tap57ba7057-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:53:48 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:48Z|01626|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 up in Southbound
Nov 22 04:53:48 np0005532048 systemd[1]: Started Virtual Machine qemu-183-instance-00000097.
Nov 22 04:53:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:48.047 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:5c:38 10.100.0.12'], port_security=['fa:16:3e:5b:5c:38 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3d44f1-0406-4ebd-915f-8d5452433943', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0b724ea495b4cef9085881ad518a4f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '183eda70-2911-4904-8548-d3fbd3d654dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d066f18c-0e7f-4577-9555-41a5aab599d6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=57ba7057-9293-4134-9246-25ddf0b3af07) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:53:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:48.048 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 57ba7057-9293-4134-9246-25ddf0b3af07 in datapath 1e3d44f1-0406-4ebd-915f-8d5452433943 bound to our chassis#033[00m
Nov 22 04:53:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:48.050 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1e3d44f1-0406-4ebd-915f-8d5452433943 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:53:48 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:48.051 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[32c9241d-ccea-47b7-9503-48cc4f80dfd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:53:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 22 04:53:48 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:48 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 201 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:48 np0005532048 nova_compute[253661]: 2025-11-22 09:53:48.530 253665 DEBUG nova.compute.manager [req-a45ecd51-d1d1-4e9c-9347-47c68e00bd1e req-12c784fb-f0fc-4ea8-9f63-8644a72377bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:53:48 np0005532048 nova_compute[253661]: 2025-11-22 09:53:48.531 253665 DEBUG oslo_concurrency.lockutils [req-a45ecd51-d1d1-4e9c-9347-47c68e00bd1e req-12c784fb-f0fc-4ea8-9f63-8644a72377bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:48 np0005532048 nova_compute[253661]: 2025-11-22 09:53:48.531 253665 DEBUG oslo_concurrency.lockutils [req-a45ecd51-d1d1-4e9c-9347-47c68e00bd1e req-12c784fb-f0fc-4ea8-9f63-8644a72377bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:48 np0005532048 nova_compute[253661]: 2025-11-22 09:53:48.531 253665 DEBUG oslo_concurrency.lockutils [req-a45ecd51-d1d1-4e9c-9347-47c68e00bd1e req-12c784fb-f0fc-4ea8-9f63-8644a72377bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:53:48 np0005532048 nova_compute[253661]: 2025-11-22 09:53:48.531 253665 DEBUG nova.compute.manager [req-a45ecd51-d1d1-4e9c-9347-47c68e00bd1e req-12c784fb-f0fc-4ea8-9f63-8644a72377bc 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Processing event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 22 04:53:48 np0005532048 nova_compute[253661]: 2025-11-22 09:53:48.584 253665 DEBUG nova.network.neutron [req-b6bb6df2-a0d0-4b3c-a928-05c93828e518 req-d16c0d10-505d-4fe9-bc7d-2983407d9d28 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Updated VIF entry in instance network info cache for port 57ba7057-9293-4134-9246-25ddf0b3af07. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:53:48 np0005532048 nova_compute[253661]: 2025-11-22 09:53:48.584 253665 DEBUG nova.network.neutron [req-b6bb6df2-a0d0-4b3c-a928-05c93828e518 req-d16c0d10-505d-4fe9-bc7d-2983407d9d28 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Updating instance_info_cache with network_info: [{"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:53:48 np0005532048 nova_compute[253661]: 2025-11-22 09:53:48.598 253665 DEBUG oslo_concurrency.lockutils [req-b6bb6df2-a0d0-4b3c-a928-05c93828e518 req-d16c0d10-505d-4fe9-bc7d-2983407d9d28 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:53:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:53:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.1 total, 600.0 interval
Cumulative writes: 44K writes, 184K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
Cumulative WAL: 44K writes, 15K syncs, 2.93 writes per sync, written: 0.19 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4662 writes, 21K keys, 4662 commit groups, 1.0 writes per commit group, ingest: 26.32 MB, 0.04 MB/s
Interval WAL: 4662 writes, 1573 syncs, 2.96 writes per sync, written: 0.03 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:53:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:48.789+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.075 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805229.0745053, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.075 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Started (Lifecycle Event)
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.077 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.080 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.084 253665 INFO nova.virt.libvirt.driver [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance spawned successfully.
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.084 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.092 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.096 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.105 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.105 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.106 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.106 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.107 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.107 253665 DEBUG nova.virt.libvirt.driver [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.131 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.132 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805229.074664, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.132 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Paused (Lifecycle Event)
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.149 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.153 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805229.0801437, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.153 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Resumed (Lifecycle Event)
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.169 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.173 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.197 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.254 253665 INFO nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Took 8.83 seconds to spawn the instance on the hypervisor.
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.255 253665 DEBUG nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.315 253665 INFO nova.compute.manager [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Took 9.83 seconds to build instance.
Nov 22 04:53:49 np0005532048 nova_compute[253661]: 2025-11-22 09:53:49.334 253665 DEBUG oslo_concurrency.lockutils [None req-b6647ef2-6da8-432e-8638-bc200da7267c dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:53:49 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:49.800+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:50 np0005532048 nova_compute[253661]: 2025-11-22 09:53:50.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:53:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 22 04:53:50 np0005532048 podman[411119]: 2025-11-22 09:53:50.427501612 +0000 UTC m=+0.097083712 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:53:50 np0005532048 nova_compute[253661]: 2025-11-22 09:53:50.619 253665 DEBUG nova.compute.manager [req-2530a56f-e1d3-4a6a-bfbe-f84112a6f028 req-afcf4d95-55a8-49d8-a69f-b94e79f65537 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:53:50 np0005532048 nova_compute[253661]: 2025-11-22 09:53:50.620 253665 DEBUG oslo_concurrency.lockutils [req-2530a56f-e1d3-4a6a-bfbe-f84112a6f028 req-afcf4d95-55a8-49d8-a69f-b94e79f65537 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:53:50 np0005532048 nova_compute[253661]: 2025-11-22 09:53:50.621 253665 DEBUG oslo_concurrency.lockutils [req-2530a56f-e1d3-4a6a-bfbe-f84112a6f028 req-afcf4d95-55a8-49d8-a69f-b94e79f65537 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:53:50 np0005532048 nova_compute[253661]: 2025-11-22 09:53:50.621 253665 DEBUG oslo_concurrency.lockutils [req-2530a56f-e1d3-4a6a-bfbe-f84112a6f028 req-afcf4d95-55a8-49d8-a69f-b94e79f65537 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:53:50 np0005532048 nova_compute[253661]: 2025-11-22 09:53:50.621 253665 DEBUG nova.compute.manager [req-2530a56f-e1d3-4a6a-bfbe-f84112a6f028 req-afcf4d95-55a8-49d8-a69f-b94e79f65537 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:53:50 np0005532048 nova_compute[253661]: 2025-11-22 09:53:50.621 253665 WARNING nova.compute.manager [req-2530a56f-e1d3-4a6a-bfbe-f84112a6f028 req-afcf4d95-55a8-49d8-a69f-b94e79f65537 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state active and task_state None.
Nov 22 04:53:50 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:50.760+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:51 np0005532048 nova_compute[253661]: 2025-11-22 09:53:51.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:53:51 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:51.772+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:52 np0005532048 nova_compute[253661]: 2025-11-22 09:53:52.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:53:52 np0005532048 nova_compute[253661]: 2025-11-22 09:53:52.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:53:52 np0005532048 nova_compute[253661]: 2025-11-22 09:53:52.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:53:52 np0005532048 nova_compute[253661]: 2025-11-22 09:53:52.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:53:52 np0005532048 nova_compute[253661]: 2025-11-22 09:53:52.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquired lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:53:52 np0005532048 nova_compute[253661]: 2025-11-22 09:53:52.259 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 04:53:52 np0005532048 nova_compute[253661]: 2025-11-22 09:53:52.260 253665 DEBUG nova.objects.instance [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 027bdffc-9e8e-4a33-9b06-844890912dc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:53:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:53:52
Nov 22 04:53:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:53:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:53:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'volumes', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', '.rgw.root']
Nov 22 04:53:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:53:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 22 04:53:52 np0005532048 nova_compute[253661]: 2025-11-22 09:53:52.596 253665 DEBUG nova.objects.instance [None req-be2d0fb1-1bb3-48e6-8ff9-d5fdde4e962a dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:53:52 np0005532048 nova_compute[253661]: 2025-11-22 09:53:52.615 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805232.6150796, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:53:52 np0005532048 nova_compute[253661]: 2025-11-22 09:53:52.615 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Paused (Lifecycle Event)
Nov 22 04:53:52 np0005532048 nova_compute[253661]: 2025-11-22 09:53:52.631 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:53:52 np0005532048 nova_compute[253661]: 2025-11-22 09:53:52.634 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:53:52 np0005532048 nova_compute[253661]: 2025-11-22 09:53:52.650 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 22 04:53:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:52.730+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:53:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:53:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:53:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:53:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:53:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:53:52 np0005532048 nova_compute[253661]: 2025-11-22 09:53:52.828 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:53:53 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 206 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:53:53 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:53.756+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:53 np0005532048 kernel: tap57ba7057-92 (unregistering): left promiscuous mode
Nov 22 04:53:53 np0005532048 NetworkManager[48920]: <info>  [1763805233.8575] device (tap57ba7057-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:53:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:53Z|01627|binding|INFO|Releasing lport 57ba7057-9293-4134-9246-25ddf0b3af07 from this chassis (sb_readonly=0)
Nov 22 04:53:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:53Z|01628|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 down in Southbound
Nov 22 04:53:53 np0005532048 nova_compute[253661]: 2025-11-22 09:53:53.865 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:53:53 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:53Z|01629|binding|INFO|Removing iface tap57ba7057-92 ovn-installed in OVS
Nov 22 04:53:53 np0005532048 nova_compute[253661]: 2025-11-22 09:53:53.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:53:53 np0005532048 nova_compute[253661]: 2025-11-22 09:53:53.887 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:53:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:53.897 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:5c:38 10.100.0.12'], port_security=['fa:16:3e:5b:5c:38 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3d44f1-0406-4ebd-915f-8d5452433943', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0b724ea495b4cef9085881ad518a4f0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '183eda70-2911-4904-8548-d3fbd3d654dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d066f18c-0e7f-4577-9555-41a5aab599d6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=57ba7057-9293-4134-9246-25ddf0b3af07) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:53:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:53.898 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 57ba7057-9293-4134-9246-25ddf0b3af07 in datapath 1e3d44f1-0406-4ebd-915f-8d5452433943 unbound from our chassis
Nov 22 04:53:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:53.899 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1e3d44f1-0406-4ebd-915f-8d5452433943 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 04:53:53 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:53.900 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4e14bd9c-dccc-4e38-9901-b378f2a3f29c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:53:53 np0005532048 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000097.scope: Deactivated successfully.
Nov 22 04:53:53 np0005532048 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000097.scope: Consumed 4.697s CPU time.
Nov 22 04:53:53 np0005532048 systemd-machined[215941]: Machine qemu-183-instance-00000097 terminated.
Nov 22 04:53:54 np0005532048 nova_compute[253661]: 2025-11-22 09:53:54.035 253665 DEBUG nova.compute.manager [None req-be2d0fb1-1bb3-48e6-8ff9-d5fdde4e962a dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:53:54 np0005532048 nova_compute[253661]: 2025-11-22 09:53:54.287 253665 DEBUG nova.compute.manager [req-87390a49-9918-4a82-b635-6eb36e36d86b req-c1aa6930-dfc5-4712-bf38-1fad2c5d9084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:53:54 np0005532048 nova_compute[253661]: 2025-11-22 09:53:54.288 253665 DEBUG oslo_concurrency.lockutils [req-87390a49-9918-4a82-b635-6eb36e36d86b req-c1aa6930-dfc5-4712-bf38-1fad2c5d9084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:54 np0005532048 nova_compute[253661]: 2025-11-22 09:53:54.288 253665 DEBUG oslo_concurrency.lockutils [req-87390a49-9918-4a82-b635-6eb36e36d86b req-c1aa6930-dfc5-4712-bf38-1fad2c5d9084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:54 np0005532048 nova_compute[253661]: 2025-11-22 09:53:54.289 253665 DEBUG oslo_concurrency.lockutils [req-87390a49-9918-4a82-b635-6eb36e36d86b req-c1aa6930-dfc5-4712-bf38-1fad2c5d9084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:53:54 np0005532048 nova_compute[253661]: 2025-11-22 09:53:54.289 253665 DEBUG nova.compute.manager [req-87390a49-9918-4a82-b635-6eb36e36d86b req-c1aa6930-dfc5-4712-bf38-1fad2c5d9084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:53:54 np0005532048 nova_compute[253661]: 2025-11-22 09:53:54.289 253665 WARNING nova.compute.manager [req-87390a49-9918-4a82-b635-6eb36e36d86b req-c1aa6930-dfc5-4712-bf38-1fad2c5d9084 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state suspended and task_state None.#033[00m
Nov 22 04:53:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 04:53:54 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:54 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 206 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:54 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:54 np0005532048 nova_compute[253661]: 2025-11-22 09:53:54.668 253665 DEBUG nova.network.neutron [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updating instance_info_cache with network_info: [{"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:53:54 np0005532048 nova_compute[253661]: 2025-11-22 09:53:54.687 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Releasing lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:53:54 np0005532048 nova_compute[253661]: 2025-11-22 09:53:54.688 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 22 04:53:54 np0005532048 nova_compute[253661]: 2025-11-22 09:53:54.688 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:53:54 np0005532048 nova_compute[253661]: 2025-11-22 09:53:54.688 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:53:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:54.802+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:54 np0005532048 nova_compute[253661]: 2025-11-22 09:53:54.869 253665 INFO nova.compute.manager [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Resuming#033[00m
Nov 22 04:53:54 np0005532048 nova_compute[253661]: 2025-11-22 09:53:54.870 253665 DEBUG nova.objects.instance [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'flavor' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:53:54 np0005532048 nova_compute[253661]: 2025-11-22 09:53:54.900 253665 DEBUG oslo_concurrency.lockutils [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:53:54 np0005532048 nova_compute[253661]: 2025-11-22 09:53:54.901 253665 DEBUG oslo_concurrency.lockutils [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquired lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:53:54 np0005532048 nova_compute[253661]: 2025-11-22 09:53:54.901 253665 DEBUG nova.network.neutron [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:53:55 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:53:55 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4801.2 total, 600.0 interval#012Cumulative writes: 44K writes, 177K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s#012Cumulative WAL: 44K writes, 15K syncs, 2.85 writes per sync, written: 0.17 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4300 writes, 17K keys, 4300 commit groups, 1.0 writes per commit group, ingest: 22.17 MB, 0.04 MB/s#012Interval WAL: 4300 writes, 1636 syncs, 2.63 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:53:55 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:53:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:53:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:53:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:53:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:53:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:55.792+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:56 np0005532048 nova_compute[253661]: 2025-11-22 09:53:56.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:53:56 np0005532048 nova_compute[253661]: 2025-11-22 09:53:56.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:53:56 np0005532048 nova_compute[253661]: 2025-11-22 09:53:56.371 253665 DEBUG nova.compute.manager [req-60891b7b-9f3e-4327-b85b-eb71be2bd948 req-ccac04d3-db37-445a-a086-8aebfc8c8b3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:53:56 np0005532048 nova_compute[253661]: 2025-11-22 09:53:56.372 253665 DEBUG oslo_concurrency.lockutils [req-60891b7b-9f3e-4327-b85b-eb71be2bd948 req-ccac04d3-db37-445a-a086-8aebfc8c8b3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:53:56 np0005532048 nova_compute[253661]: 2025-11-22 09:53:56.372 253665 DEBUG oslo_concurrency.lockutils [req-60891b7b-9f3e-4327-b85b-eb71be2bd948 req-ccac04d3-db37-445a-a086-8aebfc8c8b3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:53:56 np0005532048 nova_compute[253661]: 2025-11-22 09:53:56.372 253665 DEBUG oslo_concurrency.lockutils [req-60891b7b-9f3e-4327-b85b-eb71be2bd948 req-ccac04d3-db37-445a-a086-8aebfc8c8b3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:53:56 np0005532048 nova_compute[253661]: 2025-11-22 09:53:56.372 253665 DEBUG nova.compute.manager [req-60891b7b-9f3e-4327-b85b-eb71be2bd948 req-ccac04d3-db37-445a-a086-8aebfc8c8b3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:53:56 np0005532048 nova_compute[253661]: 2025-11-22 09:53:56.373 253665 WARNING nova.compute.manager [req-60891b7b-9f3e-4327-b85b-eb71be2bd948 req-ccac04d3-db37-445a-a086-8aebfc8c8b3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state suspended and task_state resuming.#033[00m
Nov 22 04:53:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:53:56 np0005532048 nova_compute[253661]: 2025-11-22 09:53:56.528 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:56.768+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:53:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:53:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:53:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:53:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:53:57 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:57.778+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:57 np0005532048 nova_compute[253661]: 2025-11-22 09:53:57.829 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.053 253665 DEBUG nova.network.neutron [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Updating instance_info_cache with network_info: [{"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.068 253665 DEBUG oslo_concurrency.lockutils [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Releasing lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.072 253665 DEBUG nova.virt.libvirt.vif [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:53:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2052922089',display_name='tempest-TestServerAdvancedOps-server-2052922089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2052922089',id=151,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:53:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b0b724ea495b4cef9085881ad518a4f0',ramdisk_id='',reservation_id='r-81r06j8q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-654297744',owner_user_name='tempest-TestServerAdvancedOps-654297744-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:53:54Z,user_data=None,user_id='dd4a4c13ace640b98e8ff1360f0112e8',uuid=38b3cb94-17e4-4fd5-830f-1934cf3ee3a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.073 253665 DEBUG nova.network.os_vif_util [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converting VIF {"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.073 253665 DEBUG nova.network.os_vif_util [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.074 253665 DEBUG os_vif [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.074 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.074 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.075 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.077 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.077 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57ba7057-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.077 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap57ba7057-92, col_values=(('external_ids', {'iface-id': '57ba7057-9293-4134-9246-25ddf0b3af07', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:5c:38', 'vm-uuid': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.078 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.078 253665 INFO os_vif [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92')#033[00m
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.115 253665 DEBUG nova.objects.instance [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'numa_topology' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:53:58 np0005532048 kernel: tap57ba7057-92: entered promiscuous mode
Nov 22 04:53:58 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:58Z|01630|binding|INFO|Claiming lport 57ba7057-9293-4134-9246-25ddf0b3af07 for this chassis.
Nov 22 04:53:58 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:58Z|01631|binding|INFO|57ba7057-9293-4134-9246-25ddf0b3af07: Claiming fa:16:3e:5b:5c:38 10.100.0.12
Nov 22 04:53:58 np0005532048 NetworkManager[48920]: <info>  [1763805238.1816] manager: (tap57ba7057-92): new Tun device (/org/freedesktop/NetworkManager/Devices/665)
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.181 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:53:58 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:58Z|01632|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 ovn-installed in OVS
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.197 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:53:58 np0005532048 systemd-udevd[411177]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:53:58 np0005532048 systemd-machined[215941]: New machine qemu-184-instance-00000097.
Nov 22 04:53:58 np0005532048 NetworkManager[48920]: <info>  [1763805238.2149] device (tap57ba7057-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:53:58 np0005532048 NetworkManager[48920]: <info>  [1763805238.2156] device (tap57ba7057-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:53:58 np0005532048 ovn_controller[152872]: 2025-11-22T09:53:58Z|01633|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 up in Southbound
Nov 22 04:53:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:58.224 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:5c:38 10.100.0.12'], port_security=['fa:16:3e:5b:5c:38 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3d44f1-0406-4ebd-915f-8d5452433943', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0b724ea495b4cef9085881ad518a4f0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '183eda70-2911-4904-8548-d3fbd3d654dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d066f18c-0e7f-4577-9555-41a5aab599d6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=57ba7057-9293-4134-9246-25ddf0b3af07) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:53:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:58.225 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 57ba7057-9293-4134-9246-25ddf0b3af07 in datapath 1e3d44f1-0406-4ebd-915f-8d5452433943 bound to our chassis
Nov 22 04:53:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:58.226 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1e3d44f1-0406-4ebd-915f-8d5452433943 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 22 04:53:58 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:53:58.226 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[267168d6-f43b-46ac-804c-b2a2bfa93b5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:53:58 np0005532048 systemd[1]: Started Virtual Machine qemu-184-instance-00000097.
Nov 22 04:53:58 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:58 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 211 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:53:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:53:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:58.738+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.769 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.769 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805238.7686427, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.769 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Started (Lifecycle Event)
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.784 253665 DEBUG nova.compute.manager [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.784 253665 DEBUG nova.objects.instance [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.786 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.790 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.795 253665 INFO nova.virt.libvirt.driver [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance running successfully.
Nov 22 04:53:58 np0005532048 virtqemud[254229]: argument unsupported: QEMU guest agent is not configured
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.798 253665 DEBUG nova.virt.libvirt.guest [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.798 253665 DEBUG nova.compute.manager [None req-b62bf021-e30e-43c2-a922-867fc0c4cd71 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.803 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.804 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805238.771586, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.804 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Resumed (Lifecycle Event)
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.824 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.827 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.843 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.989 253665 DEBUG nova.compute.manager [req-76337ee2-ccbb-4dc5-ad8e-f3f71b23208f req-07c3d273-c461-485a-ab5b-a0f70b689b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.990 253665 DEBUG oslo_concurrency.lockutils [req-76337ee2-ccbb-4dc5-ad8e-f3f71b23208f req-07c3d273-c461-485a-ab5b-a0f70b689b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.990 253665 DEBUG oslo_concurrency.lockutils [req-76337ee2-ccbb-4dc5-ad8e-f3f71b23208f req-07c3d273-c461-485a-ab5b-a0f70b689b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.990 253665 DEBUG oslo_concurrency.lockutils [req-76337ee2-ccbb-4dc5-ad8e-f3f71b23208f req-07c3d273-c461-485a-ab5b-a0f70b689b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.991 253665 DEBUG nova.compute.manager [req-76337ee2-ccbb-4dc5-ad8e-f3f71b23208f req-07c3d273-c461-485a-ab5b-a0f70b689b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:53:58 np0005532048 nova_compute[253661]: 2025-11-22 09:53:58.991 253665 WARNING nova.compute.manager [req-76337ee2-ccbb-4dc5-ad8e-f3f71b23208f req-07c3d273-c461-485a-ab5b-a0f70b689b9e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state active and task_state None.
Nov 22 04:53:59 np0005532048 nova_compute[253661]: 2025-11-22 09:53:59.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:53:59 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 211 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:53:59 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:53:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:53:59.691+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:53:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:00 np0005532048 nova_compute[253661]: 2025-11-22 09:54:00.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:54:00 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:54:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.1 total, 600.0 interval
Cumulative writes: 35K writes, 144K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
Cumulative WAL: 35K writes, 12K syncs, 2.92 writes per sync, written: 0.14 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 3530 writes, 14K keys, 3530 commit groups, 1.0 writes per commit group, ingest: 17.14 MB, 0.03 MB/s
Interval WAL: 3530 writes, 1327 syncs, 2.66 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:54:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 69 op/s
Nov 22 04:54:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:00.644+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.076 253665 DEBUG nova.compute.manager [req-6a463be2-f0cd-4903-a212-f3e3a98fb66e req-b4472e4c-bf7c-43d1-a442-616ae5d7012a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.076 253665 DEBUG oslo_concurrency.lockutils [req-6a463be2-f0cd-4903-a212-f3e3a98fb66e req-b4472e4c-bf7c-43d1-a442-616ae5d7012a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.077 253665 DEBUG oslo_concurrency.lockutils [req-6a463be2-f0cd-4903-a212-f3e3a98fb66e req-b4472e4c-bf7c-43d1-a442-616ae5d7012a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.077 253665 DEBUG oslo_concurrency.lockutils [req-6a463be2-f0cd-4903-a212-f3e3a98fb66e req-b4472e4c-bf7c-43d1-a442-616ae5d7012a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.077 253665 DEBUG nova.compute.manager [req-6a463be2-f0cd-4903-a212-f3e3a98fb66e req-b4472e4c-bf7c-43d1-a442-616ae5d7012a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.077 253665 WARNING nova.compute.manager [req-6a463be2-f0cd-4903-a212-f3e3a98fb66e req-b4472e4c-bf7c-43d1-a442-616ae5d7012a 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state active and task_state None.
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.256 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.256 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.316 253665 DEBUG nova.objects.instance [None req-85dd401f-7155-4360-9fd6-45ac5bf8d048 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.337 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805241.3361979, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.339 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Paused (Lifecycle Event)
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.362 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.367 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.383 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 22 04:54:01 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:01 np0005532048 nova_compute[253661]: 2025-11-22 09:54:01.532 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:54:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:01.645+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 69 op/s
Nov 22 04:54:02 np0005532048 ceph-mgr[75315]: [devicehealth INFO root] Check health
Nov 22 04:54:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:02.674+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:02 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:54:02 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/780034624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:54:02 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:02 np0005532048 nova_compute[253661]: 2025-11-22 09:54:02.786 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:54:02 np0005532048 nova_compute[253661]: 2025-11-22 09:54:02.862 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:54:03 np0005532048 kernel: tap57ba7057-92 (unregistering): left promiscuous mode
Nov 22 04:54:03 np0005532048 NetworkManager[48920]: <info>  [1763805243.0230] device (tap57ba7057-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.035 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:54:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:54:03Z|01634|binding|INFO|Releasing lport 57ba7057-9293-4134-9246-25ddf0b3af07 from this chassis (sb_readonly=0)
Nov 22 04:54:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:54:03Z|01635|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 down in Southbound
Nov 22 04:54:03 np0005532048 ovn_controller[152872]: 2025-11-22T09:54:03Z|01636|binding|INFO|Removing iface tap57ba7057-92 ovn-installed in OVS
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.037 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.048 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011090219443901804 of space, bias 1.0, pg target 0.3327065833170541 quantized to 32 (current 32)
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:54:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:54:03 np0005532048 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000097.scope: Deactivated successfully.
Nov 22 04:54:03 np0005532048 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000097.scope: Consumed 3.162s CPU time.
Nov 22 04:54:03 np0005532048 systemd-machined[215941]: Machine qemu-184-instance-00000097 terminated.
Nov 22 04:54:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:03.125 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:5c:38 10.100.0.12'], port_security=['fa:16:3e:5b:5c:38 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3d44f1-0406-4ebd-915f-8d5452433943', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0b724ea495b4cef9085881ad518a4f0', 'neutron:revision_number': '6', 'neutron:security_group_ids': '183eda70-2911-4904-8548-d3fbd3d654dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d066f18c-0e7f-4577-9555-41a5aab599d6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=57ba7057-9293-4134-9246-25ddf0b3af07) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:54:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:03.126 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 57ba7057-9293-4134-9246-25ddf0b3af07 in datapath 1e3d44f1-0406-4ebd-915f-8d5452433943 unbound from our chassis#033[00m
Nov 22 04:54:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:03.127 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1e3d44f1-0406-4ebd-915f-8d5452433943 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:54:03 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:03.127 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[f81fbeef-5c02-407f-b0df-38521b550b3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.144 253665 DEBUG nova.compute.manager [None req-85dd401f-7155-4360-9fd6-45ac5bf8d048 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.225 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.226 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.230 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.230 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.352 253665 DEBUG nova.compute.manager [req-078f249b-d859-4306-8264-5fd8ce5c89bd req-65421c59-1024-43b2-a191-ec05693a78c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.352 253665 DEBUG oslo_concurrency.lockutils [req-078f249b-d859-4306-8264-5fd8ce5c89bd req-65421c59-1024-43b2-a191-ec05693a78c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.353 253665 DEBUG oslo_concurrency.lockutils [req-078f249b-d859-4306-8264-5fd8ce5c89bd req-65421c59-1024-43b2-a191-ec05693a78c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.353 253665 DEBUG oslo_concurrency.lockutils [req-078f249b-d859-4306-8264-5fd8ce5c89bd req-65421c59-1024-43b2-a191-ec05693a78c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.353 253665 DEBUG nova.compute.manager [req-078f249b-d859-4306-8264-5fd8ce5c89bd req-65421c59-1024-43b2-a191-ec05693a78c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.354 253665 WARNING nova.compute.manager [req-078f249b-d859-4306-8264-5fd8ce5c89bd req-65421c59-1024-43b2-a191-ec05693a78c3 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state suspended and task_state None.#033[00m
Nov 22 04:54:03 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 216 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.402 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.403 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3176MB free_disk=59.9217529296875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.403 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.403 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.483 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 027bdffc-9e8e-4a33-9b06-844890912dc9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.483 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.484 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.484 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:54:03 np0005532048 nova_compute[253661]: 2025-11-22 09:54:03.549 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:54:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:03.653+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:54:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2629260196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:54:04 np0005532048 nova_compute[253661]: 2025-11-22 09:54:04.258 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:54:04 np0005532048 nova_compute[253661]: 2025-11-22 09:54:04.265 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:54:04 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:04 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 216 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:04 np0005532048 nova_compute[253661]: 2025-11-22 09:54:04.280 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:54:04 np0005532048 nova_compute[253661]: 2025-11-22 09:54:04.323 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:54:04 np0005532048 nova_compute[253661]: 2025-11-22 09:54:04.323 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:54:04 np0005532048 nova_compute[253661]: 2025-11-22 09:54:04.323 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:54:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 74 op/s
Nov 22 04:54:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:04.669+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:05 np0005532048 nova_compute[253661]: 2025-11-22 09:54:05.323 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:54:05 np0005532048 nova_compute[253661]: 2025-11-22 09:54:05.347 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:54:05 np0005532048 nova_compute[253661]: 2025-11-22 09:54:05.360 253665 INFO nova.compute.manager [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Resuming#033[00m
Nov 22 04:54:05 np0005532048 nova_compute[253661]: 2025-11-22 09:54:05.361 253665 DEBUG nova.objects.instance [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'flavor' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:54:05 np0005532048 nova_compute[253661]: 2025-11-22 09:54:05.397 253665 DEBUG oslo_concurrency.lockutils [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:54:05 np0005532048 nova_compute[253661]: 2025-11-22 09:54:05.398 253665 DEBUG oslo_concurrency.lockutils [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquired lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:54:05 np0005532048 nova_compute[253661]: 2025-11-22 09:54:05.398 253665 DEBUG nova.network.neutron [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 22 04:54:05 np0005532048 nova_compute[253661]: 2025-11-22 09:54:05.439 253665 DEBUG nova.compute.manager [req-c6c163f1-65a2-49e3-95c1-544a53d48aee req-c5fd1260-12e4-4f28-b1e3-0ade0c68389c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:54:05 np0005532048 nova_compute[253661]: 2025-11-22 09:54:05.440 253665 DEBUG oslo_concurrency.lockutils [req-c6c163f1-65a2-49e3-95c1-544a53d48aee req-c5fd1260-12e4-4f28-b1e3-0ade0c68389c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:54:05 np0005532048 nova_compute[253661]: 2025-11-22 09:54:05.440 253665 DEBUG oslo_concurrency.lockutils [req-c6c163f1-65a2-49e3-95c1-544a53d48aee req-c5fd1260-12e4-4f28-b1e3-0ade0c68389c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:54:05 np0005532048 nova_compute[253661]: 2025-11-22 09:54:05.440 253665 DEBUG oslo_concurrency.lockutils [req-c6c163f1-65a2-49e3-95c1-544a53d48aee req-c5fd1260-12e4-4f28-b1e3-0ade0c68389c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:54:05 np0005532048 nova_compute[253661]: 2025-11-22 09:54:05.441 253665 DEBUG nova.compute.manager [req-c6c163f1-65a2-49e3-95c1-544a53d48aee req-c5fd1260-12e4-4f28-b1e3-0ade0c68389c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:54:05 np0005532048 nova_compute[253661]: 2025-11-22 09:54:05.441 253665 WARNING nova.compute.manager [req-c6c163f1-65a2-49e3-95c1-544a53d48aee req-c5fd1260-12e4-4f28-b1e3-0ade0c68389c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state suspended and task_state resuming.#033[00m
Nov 22 04:54:05 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:05 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:05.666+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Nov 22 04:54:06 np0005532048 nova_compute[253661]: 2025-11-22 09:54:06.534 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:06.675+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:06 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.360 253665 DEBUG nova.network.neutron [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Updating instance_info_cache with network_info: [{"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.374 253665 DEBUG oslo_concurrency.lockutils [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Releasing lock "refresh_cache-38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.380 253665 DEBUG nova.virt.libvirt.vif [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:53:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2052922089',display_name='tempest-TestServerAdvancedOps-server-2052922089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2052922089',id=151,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:53:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b0b724ea495b4cef9085881ad518a4f0',ramdisk_id='',reservation_id='r-81r06j8q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-654297744',owner_user_name='tempest-TestServerAdvancedOps-654297744-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:54:03Z,user_data=None,user_id='dd4a4c13ace640b98e8ff1360f0112e8',uuid=38b3cb94-17e4-4fd5-830f-1934cf3ee3a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.381 253665 DEBUG nova.network.os_vif_util [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converting VIF {"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.382 253665 DEBUG nova.network.os_vif_util [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.382 253665 DEBUG os_vif [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.383 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.383 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.384 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.388 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57ba7057-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.388 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap57ba7057-92, col_values=(('external_ids', {'iface-id': '57ba7057-9293-4134-9246-25ddf0b3af07', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:5c:38', 'vm-uuid': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.389 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.389 253665 INFO os_vif [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92')#033[00m
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.409 253665 DEBUG nova.objects.instance [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'numa_topology' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:54:07 np0005532048 kernel: tap57ba7057-92: entered promiscuous mode
Nov 22 04:54:07 np0005532048 NetworkManager[48920]: <info>  [1763805247.4839] manager: (tap57ba7057-92): new Tun device (/org/freedesktop/NetworkManager/Devices/666)
Nov 22 04:54:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:54:07Z|01637|binding|INFO|Claiming lport 57ba7057-9293-4134-9246-25ddf0b3af07 for this chassis.
Nov 22 04:54:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:54:07Z|01638|binding|INFO|57ba7057-9293-4134-9246-25ddf0b3af07: Claiming fa:16:3e:5b:5c:38 10.100.0.12
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.485 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:07.491 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:5c:38 10.100.0.12'], port_security=['fa:16:3e:5b:5c:38 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3d44f1-0406-4ebd-915f-8d5452433943', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0b724ea495b4cef9085881ad518a4f0', 'neutron:revision_number': '7', 'neutron:security_group_ids': '183eda70-2911-4904-8548-d3fbd3d654dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d066f18c-0e7f-4577-9555-41a5aab599d6, chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=57ba7057-9293-4134-9246-25ddf0b3af07) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:54:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:07.492 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 57ba7057-9293-4134-9246-25ddf0b3af07 in datapath 1e3d44f1-0406-4ebd-915f-8d5452433943 bound to our chassis#033[00m
Nov 22 04:54:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:07.493 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1e3d44f1-0406-4ebd-915f-8d5452433943 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:54:07 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:07.494 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea32594-2e04-4f63-a8d9-9955478d608a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:54:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:54:07Z|01639|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 ovn-installed in OVS
Nov 22 04:54:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:54:07Z|01640|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 up in Southbound
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.497 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.499 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.500 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:07 np0005532048 systemd-udevd[411307]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:54:07 np0005532048 systemd-machined[215941]: New machine qemu-185-instance-00000097.
Nov 22 04:54:07 np0005532048 NetworkManager[48920]: <info>  [1763805247.5291] device (tap57ba7057-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:54:07 np0005532048 NetworkManager[48920]: <info>  [1763805247.5311] device (tap57ba7057-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:54:07 np0005532048 systemd[1]: Started Virtual Machine qemu-185-instance-00000097.
Nov 22 04:54:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:07.679+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:07 np0005532048 nova_compute[253661]: 2025-11-22 09:54:07.864 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:08 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:08 np0005532048 nova_compute[253661]: 2025-11-22 09:54:08.087 253665 DEBUG nova.compute.manager [req-47dead26-b4b4-4de2-a7d3-9765aac587d9 req-720275c2-58b1-4e9f-9a61-66c4fea44e15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:54:08 np0005532048 nova_compute[253661]: 2025-11-22 09:54:08.087 253665 DEBUG oslo_concurrency.lockutils [req-47dead26-b4b4-4de2-a7d3-9765aac587d9 req-720275c2-58b1-4e9f-9a61-66c4fea44e15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:54:08 np0005532048 nova_compute[253661]: 2025-11-22 09:54:08.088 253665 DEBUG oslo_concurrency.lockutils [req-47dead26-b4b4-4de2-a7d3-9765aac587d9 req-720275c2-58b1-4e9f-9a61-66c4fea44e15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:54:08 np0005532048 nova_compute[253661]: 2025-11-22 09:54:08.088 253665 DEBUG oslo_concurrency.lockutils [req-47dead26-b4b4-4de2-a7d3-9765aac587d9 req-720275c2-58b1-4e9f-9a61-66c4fea44e15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:54:08 np0005532048 nova_compute[253661]: 2025-11-22 09:54:08.088 253665 DEBUG nova.compute.manager [req-47dead26-b4b4-4de2-a7d3-9765aac587d9 req-720275c2-58b1-4e9f-9a61-66c4fea44e15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:54:08 np0005532048 nova_compute[253661]: 2025-11-22 09:54:08.088 253665 WARNING nova.compute.manager [req-47dead26-b4b4-4de2-a7d3-9765aac587d9 req-720275c2-58b1-4e9f-9a61-66c4fea44e15 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state suspended and task_state resuming.#033[00m
Nov 22 04:54:08 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 221 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:54:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Nov 22 04:54:08 np0005532048 nova_compute[253661]: 2025-11-22 09:54:08.544 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:08.543 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:54:08 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:08.544 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:54:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:08.673+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:09 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:09 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 221 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.133 253665 DEBUG nova.virt.libvirt.host [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Removed pending event for 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.134 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805249.133124, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.134 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Started (Lifecycle Event)#033[00m
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.151 253665 DEBUG nova.compute.manager [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.152 253665 DEBUG nova.objects.instance [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.158 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.163 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.169 253665 INFO nova.virt.libvirt.driver [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance running successfully.#033[00m
Nov 22 04:54:09 np0005532048 virtqemud[254229]: argument unsupported: QEMU guest agent is not configured
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.171 253665 DEBUG nova.virt.libvirt.guest [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.172 253665 DEBUG nova.compute.manager [None req-b3f01653-314c-4597-a91f-4a4cb0f509d5 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.196 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.197 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805249.138165, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.198 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.217 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.220 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 22 04:54:09 np0005532048 nova_compute[253661]: 2025-11-22 09:54:09.243 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 22 04:54:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:09.715+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:10 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:10 np0005532048 nova_compute[253661]: 2025-11-22 09:54:10.399 253665 DEBUG nova.compute.manager [req-db2fca9d-4925-4c2e-bfa6-7ae13a49b9de req-c03a3230-2ea2-4618-aa1e-1e198fa8d77e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:54:10 np0005532048 nova_compute[253661]: 2025-11-22 09:54:10.399 253665 DEBUG oslo_concurrency.lockutils [req-db2fca9d-4925-4c2e-bfa6-7ae13a49b9de req-c03a3230-2ea2-4618-aa1e-1e198fa8d77e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:54:10 np0005532048 nova_compute[253661]: 2025-11-22 09:54:10.399 253665 DEBUG oslo_concurrency.lockutils [req-db2fca9d-4925-4c2e-bfa6-7ae13a49b9de req-c03a3230-2ea2-4618-aa1e-1e198fa8d77e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:54:10 np0005532048 nova_compute[253661]: 2025-11-22 09:54:10.400 253665 DEBUG oslo_concurrency.lockutils [req-db2fca9d-4925-4c2e-bfa6-7ae13a49b9de req-c03a3230-2ea2-4618-aa1e-1e198fa8d77e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:54:10 np0005532048 nova_compute[253661]: 2025-11-22 09:54:10.400 253665 DEBUG nova.compute.manager [req-db2fca9d-4925-4c2e-bfa6-7ae13a49b9de req-c03a3230-2ea2-4618-aa1e-1e198fa8d77e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:54:10 np0005532048 nova_compute[253661]: 2025-11-22 09:54:10.400 253665 WARNING nova.compute.manager [req-db2fca9d-4925-4c2e-bfa6-7ae13a49b9de req-c03a3230-2ea2-4618-aa1e-1e198fa8d77e 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state active and task_state None.#033[00m
Nov 22 04:54:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Nov 22 04:54:10 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:10.547 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:54:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:10.729+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:11 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:11 np0005532048 nova_compute[253661]: 2025-11-22 09:54:11.537 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:11.768+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:12 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:12 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.489 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.490 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.490 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.490 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.490 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.491 253665 INFO nova.compute.manager [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Terminating instance#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.492 253665 DEBUG nova.compute.manager [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:54:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:54:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1082317192' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:54:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:54:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1082317192' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:54:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:12.753+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:12 np0005532048 kernel: tap57ba7057-92 (unregistering): left promiscuous mode
Nov 22 04:54:12 np0005532048 NetworkManager[48920]: <info>  [1763805252.7898] device (tap57ba7057-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:54:12 np0005532048 ovn_controller[152872]: 2025-11-22T09:54:12Z|01641|binding|INFO|Releasing lport 57ba7057-9293-4134-9246-25ddf0b3af07 from this chassis (sb_readonly=0)
Nov 22 04:54:12 np0005532048 ovn_controller[152872]: 2025-11-22T09:54:12Z|01642|binding|INFO|Setting lport 57ba7057-9293-4134-9246-25ddf0b3af07 down in Southbound
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.796 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:12 np0005532048 ovn_controller[152872]: 2025-11-22T09:54:12Z|01643|binding|INFO|Removing iface tap57ba7057-92 ovn-installed in OVS
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.798 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.811 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:12.812 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:5c:38 10.100.0.12'], port_security=['fa:16:3e:5b:5c:38 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '38b3cb94-17e4-4fd5-830f-1934cf3ee3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3d44f1-0406-4ebd-915f-8d5452433943', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0b724ea495b4cef9085881ad518a4f0', 'neutron:revision_number': '8', 'neutron:security_group_ids': '183eda70-2911-4904-8548-d3fbd3d654dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d066f18c-0e7f-4577-9555-41a5aab599d6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=57ba7057-9293-4134-9246-25ddf0b3af07) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:54:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:12.814 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 57ba7057-9293-4134-9246-25ddf0b3af07 in datapath 1e3d44f1-0406-4ebd-915f-8d5452433943 unbound from our chassis#033[00m
Nov 22 04:54:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:12.815 162862 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1e3d44f1-0406-4ebd-915f-8d5452433943 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 22 04:54:12 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:12.816 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9d3f0923-65f3-4f1d-a41d-40b0e4051a91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:54:12 np0005532048 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Deactivated successfully.
Nov 22 04:54:12 np0005532048 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Consumed 4.745s CPU time.
Nov 22 04:54:12 np0005532048 systemd-machined[215941]: Machine qemu-185-instance-00000097 terminated.
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.865 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:12 np0005532048 podman[411360]: 2025-11-22 09:54:12.891193398 +0000 UTC m=+0.061693866 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 22 04:54:12 np0005532048 podman[411361]: 2025-11-22 09:54:12.897354155 +0000 UTC m=+0.068082798 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.929 253665 INFO nova.virt.libvirt.driver [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Instance destroyed successfully.#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.930 253665 DEBUG nova.objects.instance [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lazy-loading 'resources' on Instance uuid 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.941 253665 DEBUG nova.virt.libvirt.vif [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:53:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-2052922089',display_name='tempest-TestServerAdvancedOps-server-2052922089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-2052922089',id=151,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:53:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b0b724ea495b4cef9085881ad518a4f0',ramdisk_id='',reservation_id='r-81r06j8q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-654297744',owner_user_name='tempest-TestServerAdvancedOps-654297744-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:54:09Z,user_data=None,user_id='dd4a4c13ace640b98e8ff1360f0112e8',uuid=38b3cb94-17e4-4fd5-830f-1934cf3ee3a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.942 253665 DEBUG nova.network.os_vif_util [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converting VIF {"id": "57ba7057-9293-4134-9246-25ddf0b3af07", "address": "fa:16:3e:5b:5c:38", "network": {"id": "1e3d44f1-0406-4ebd-915f-8d5452433943", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1244899419-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "b0b724ea495b4cef9085881ad518a4f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap57ba7057-92", "ovs_interfaceid": "57ba7057-9293-4134-9246-25ddf0b3af07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.942 253665 DEBUG nova.network.os_vif_util [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.943 253665 DEBUG os_vif [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.945 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.945 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57ba7057-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.946 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.948 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.949 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:12 np0005532048 nova_compute[253661]: 2025-11-22 09:54:12.951 253665 INFO os_vif [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:5c:38,bridge_name='br-int',has_traffic_filtering=True,id=57ba7057-9293-4134-9246-25ddf0b3af07,network=Network(1e3d44f1-0406-4ebd-915f-8d5452433943),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap57ba7057-92')#033[00m
Nov 22 04:54:13 np0005532048 nova_compute[253661]: 2025-11-22 09:54:13.121 253665 DEBUG nova.compute.manager [req-76cd3241-4ec0-4897-a888-6f2197e8f949 req-f220bf95-3ceb-4771-a693-ad1d3cf49734 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:54:13 np0005532048 nova_compute[253661]: 2025-11-22 09:54:13.122 253665 DEBUG oslo_concurrency.lockutils [req-76cd3241-4ec0-4897-a888-6f2197e8f949 req-f220bf95-3ceb-4771-a693-ad1d3cf49734 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:54:13 np0005532048 nova_compute[253661]: 2025-11-22 09:54:13.122 253665 DEBUG oslo_concurrency.lockutils [req-76cd3241-4ec0-4897-a888-6f2197e8f949 req-f220bf95-3ceb-4771-a693-ad1d3cf49734 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:54:13 np0005532048 nova_compute[253661]: 2025-11-22 09:54:13.123 253665 DEBUG oslo_concurrency.lockutils [req-76cd3241-4ec0-4897-a888-6f2197e8f949 req-f220bf95-3ceb-4771-a693-ad1d3cf49734 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:54:13 np0005532048 nova_compute[253661]: 2025-11-22 09:54:13.123 253665 DEBUG nova.compute.manager [req-76cd3241-4ec0-4897-a888-6f2197e8f949 req-f220bf95-3ceb-4771-a693-ad1d3cf49734 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:54:13 np0005532048 nova_compute[253661]: 2025-11-22 09:54:13.123 253665 DEBUG nova.compute.manager [req-76cd3241-4ec0-4897-a888-6f2197e8f949 req-f220bf95-3ceb-4771-a693-ad1d3cf49734 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-unplugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:54:13 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 226 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:54:13 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:13 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 226 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:13.740+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.8 KiB/s rd, 10 op/s
Nov 22 04:54:14 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:14.697+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:54:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:54:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:54:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:54:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:54:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:54:15 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev e38c6843-7a75-422b-b1b1-53a05607812c does not exist
Nov 22 04:54:15 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 0cfeb4b7-9d05-4556-af09-515318eab703 does not exist
Nov 22 04:54:15 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 9fa37ce1-e10b-4287-9148-f2a8c66d70fb does not exist
Nov 22 04:54:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:54:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:54:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:54:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:54:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:54:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:54:15 np0005532048 nova_compute[253661]: 2025-11-22 09:54:15.105 253665 INFO nova.virt.libvirt.driver [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Deleting instance files /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_del
Nov 22 04:54:15 np0005532048 nova_compute[253661]: 2025-11-22 09:54:15.107 253665 INFO nova.virt.libvirt.driver [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Deletion of /var/lib/nova/instances/38b3cb94-17e4-4fd5-830f-1934cf3ee3a1_del complete
Nov 22 04:54:15 np0005532048 nova_compute[253661]: 2025-11-22 09:54:15.202 253665 DEBUG nova.compute.manager [req-26833e5f-4810-4d89-b533-841349a6eb46 req-4392d2ba-3fbf-4818-8d6e-be2938f7c0ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:54:15 np0005532048 nova_compute[253661]: 2025-11-22 09:54:15.203 253665 DEBUG oslo_concurrency.lockutils [req-26833e5f-4810-4d89-b533-841349a6eb46 req-4392d2ba-3fbf-4818-8d6e-be2938f7c0ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:54:15 np0005532048 nova_compute[253661]: 2025-11-22 09:54:15.203 253665 DEBUG oslo_concurrency.lockutils [req-26833e5f-4810-4d89-b533-841349a6eb46 req-4392d2ba-3fbf-4818-8d6e-be2938f7c0ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:54:15 np0005532048 nova_compute[253661]: 2025-11-22 09:54:15.203 253665 DEBUG oslo_concurrency.lockutils [req-26833e5f-4810-4d89-b533-841349a6eb46 req-4392d2ba-3fbf-4818-8d6e-be2938f7c0ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:54:15 np0005532048 nova_compute[253661]: 2025-11-22 09:54:15.204 253665 DEBUG nova.compute.manager [req-26833e5f-4810-4d89-b533-841349a6eb46 req-4392d2ba-3fbf-4818-8d6e-be2938f7c0ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] No waiting events found dispatching network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:54:15 np0005532048 nova_compute[253661]: 2025-11-22 09:54:15.204 253665 WARNING nova.compute.manager [req-26833e5f-4810-4d89-b533-841349a6eb46 req-4392d2ba-3fbf-4818-8d6e-be2938f7c0ea 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received unexpected event network-vif-plugged-57ba7057-9293-4134-9246-25ddf0b3af07 for instance with vm_state active and task_state deleting.
Nov 22 04:54:15 np0005532048 nova_compute[253661]: 2025-11-22 09:54:15.248 253665 INFO nova.compute.manager [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Took 2.76 seconds to destroy the instance on the hypervisor.
Nov 22 04:54:15 np0005532048 nova_compute[253661]: 2025-11-22 09:54:15.249 253665 DEBUG oslo.service.loopingcall [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:54:15 np0005532048 nova_compute[253661]: 2025-11-22 09:54:15.249 253665 DEBUG nova.compute.manager [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:54:15 np0005532048 nova_compute[253661]: 2025-11-22 09:54:15.249 253665 DEBUG nova.network.neutron [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:54:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:15.672+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:15 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:15 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:54:15 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:54:15 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:54:15 np0005532048 podman[411706]: 2025-11-22 09:54:15.693112882 +0000 UTC m=+0.052259805 container create 4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:54:15 np0005532048 systemd[1]: Started libpod-conmon-4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de.scope.
Nov 22 04:54:15 np0005532048 podman[411706]: 2025-11-22 09:54:15.668399046 +0000 UTC m=+0.027545969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:54:15 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:54:15 np0005532048 podman[411706]: 2025-11-22 09:54:15.829278946 +0000 UTC m=+0.188425869 container init 4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 04:54:15 np0005532048 podman[411706]: 2025-11-22 09:54:15.837964306 +0000 UTC m=+0.197111209 container start 4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 04:54:15 np0005532048 brave_yonath[411723]: 167 167
Nov 22 04:54:15 np0005532048 systemd[1]: libpod-4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de.scope: Deactivated successfully.
Nov 22 04:54:15 np0005532048 conmon[411723]: conmon 4f9d4667b12e308ef064 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de.scope/container/memory.events
Nov 22 04:54:15 np0005532048 podman[411706]: 2025-11-22 09:54:15.859568674 +0000 UTC m=+0.218715607 container attach 4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 04:54:15 np0005532048 podman[411706]: 2025-11-22 09:54:15.860063117 +0000 UTC m=+0.219210020 container died 4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 22 04:54:15 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5f14ff6af7cd1306c695e55ceaca8a97e6780c684d31e9b13726ec349b390f88-merged.mount: Deactivated successfully.
Nov 22 04:54:15 np0005532048 podman[411706]: 2025-11-22 09:54:15.95957595 +0000 UTC m=+0.318722853 container remove 4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 04:54:15 np0005532048 systemd[1]: libpod-conmon-4f9d4667b12e308ef06484141aef0f31e43de2c809469ae8cd2deb069de702de.scope: Deactivated successfully.
Nov 22 04:54:16 np0005532048 podman[411749]: 2025-11-22 09:54:16.138900408 +0000 UTC m=+0.055128840 container create fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:54:16 np0005532048 systemd[1]: Started libpod-conmon-fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931.scope.
Nov 22 04:54:16 np0005532048 podman[411749]: 2025-11-22 09:54:16.108358473 +0000 UTC m=+0.024586925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:54:16 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:54:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e1e59ecd1d8386fb4cfe4912d1b2e3d14832628014a042f0c0296ef07c2e0ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:54:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e1e59ecd1d8386fb4cfe4912d1b2e3d14832628014a042f0c0296ef07c2e0ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:54:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e1e59ecd1d8386fb4cfe4912d1b2e3d14832628014a042f0c0296ef07c2e0ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:54:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e1e59ecd1d8386fb4cfe4912d1b2e3d14832628014a042f0c0296ef07c2e0ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:54:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e1e59ecd1d8386fb4cfe4912d1b2e3d14832628014a042f0c0296ef07c2e0ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:54:16 np0005532048 podman[411749]: 2025-11-22 09:54:16.255466914 +0000 UTC m=+0.171695366 container init fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:54:16 np0005532048 podman[411749]: 2025-11-22 09:54:16.267783756 +0000 UTC m=+0.184012188 container start fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:54:16 np0005532048 podman[411749]: 2025-11-22 09:54:16.28139082 +0000 UTC m=+0.197619252 container attach fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:54:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 1 active+clean+laggy, 304 active+clean; 243 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.6 KiB/s rd, 5 op/s
Nov 22 04:54:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:16.657+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:16 np0005532048 nova_compute[253661]: 2025-11-22 09:54:16.672 253665 DEBUG nova.network.neutron [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:54:16 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:16 np0005532048 nova_compute[253661]: 2025-11-22 09:54:16.732 253665 INFO nova.compute.manager [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Took 1.48 seconds to deallocate network for instance.
Nov 22 04:54:16 np0005532048 nova_compute[253661]: 2025-11-22 09:54:16.872 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:54:16 np0005532048 nova_compute[253661]: 2025-11-22 09:54:16.873 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:54:16 np0005532048 nova_compute[253661]: 2025-11-22 09:54:16.941 253665 DEBUG oslo_concurrency.processutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:54:17 np0005532048 nova_compute[253661]: 2025-11-22 09:54:17.276 253665 DEBUG nova.compute.manager [req-9371b582-d6eb-49c7-8258-06195107665a req-aac5a369-688f-4e6c-b6c6-f84fceabd311 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Received event network-vif-deleted-57ba7057-9293-4134-9246-25ddf0b3af07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:54:17 np0005532048 brave_ishizaka[411766]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:54:17 np0005532048 brave_ishizaka[411766]: --> relative data size: 1.0
Nov 22 04:54:17 np0005532048 brave_ishizaka[411766]: --> All data devices are unavailable
Nov 22 04:54:17 np0005532048 systemd[1]: libpod-fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931.scope: Deactivated successfully.
Nov 22 04:54:17 np0005532048 podman[411749]: 2025-11-22 09:54:17.335372529 +0000 UTC m=+1.251600981 container died fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:54:17 np0005532048 systemd[1]: libpod-fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931.scope: Consumed 1.013s CPU time.
Nov 22 04:54:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:54:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1229528339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:54:17 np0005532048 nova_compute[253661]: 2025-11-22 09:54:17.392 253665 DEBUG oslo_concurrency.processutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:54:17 np0005532048 nova_compute[253661]: 2025-11-22 09:54:17.398 253665 DEBUG nova.compute.provider_tree [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:54:17 np0005532048 nova_compute[253661]: 2025-11-22 09:54:17.412 253665 DEBUG nova.scheduler.client.report [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:54:17 np0005532048 nova_compute[253661]: 2025-11-22 09:54:17.515 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:54:17 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5e1e59ecd1d8386fb4cfe4912d1b2e3d14832628014a042f0c0296ef07c2e0ff-merged.mount: Deactivated successfully.
Nov 22 04:54:17 np0005532048 nova_compute[253661]: 2025-11-22 09:54:17.637 253665 INFO nova.scheduler.client.report [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Deleted allocations for instance 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1
Nov 22 04:54:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:17.678+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:17 np0005532048 nova_compute[253661]: 2025-11-22 09:54:17.836 253665 DEBUG oslo_concurrency.lockutils [None req-30119234-34c5-4e16-a3c5-6e0a36a6f8a7 dd4a4c13ace640b98e8ff1360f0112e8 b0b724ea495b4cef9085881ad518a4f0 - - default default] Lock "38b3cb94-17e4-4fd5-830f-1934cf3ee3a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:54:17 np0005532048 nova_compute[253661]: 2025-11-22 09:54:17.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:54:17 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:17 np0005532048 podman[411749]: 2025-11-22 09:54:17.94544084 +0000 UTC m=+1.861669272 container remove fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:54:17 np0005532048 nova_compute[253661]: 2025-11-22 09:54:17.947 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:54:17 np0005532048 systemd[1]: libpod-conmon-fdc10360f56b70398bc46b430644ddb004e3dbfe683427cd5a5ec892dff0a931.scope: Deactivated successfully.
Nov 22 04:54:18 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 231 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:54:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 22 04:54:18 np0005532048 podman[411968]: 2025-11-22 09:54:18.558336313 +0000 UTC m=+0.068806566 container create 81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:54:18 np0005532048 podman[411968]: 2025-11-22 09:54:18.513359933 +0000 UTC m=+0.023830206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:54:18 np0005532048 systemd[1]: Started libpod-conmon-81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1.scope.
Nov 22 04:54:18 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:54:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:18.684+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:18 np0005532048 podman[411968]: 2025-11-22 09:54:18.712648586 +0000 UTC m=+0.223118869 container init 81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 04:54:18 np0005532048 podman[411968]: 2025-11-22 09:54:18.719780607 +0000 UTC m=+0.230250860 container start 81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:54:18 np0005532048 vigilant_noyce[411984]: 167 167
Nov 22 04:54:18 np0005532048 systemd[1]: libpod-81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1.scope: Deactivated successfully.
Nov 22 04:54:18 np0005532048 conmon[411984]: conmon 81a7104a8ffb43e83be1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1.scope/container/memory.events
Nov 22 04:54:18 np0005532048 systemd[1]: Starting dnf makecache...
Nov 22 04:54:18 np0005532048 podman[411968]: 2025-11-22 09:54:18.813237477 +0000 UTC m=+0.323707730 container attach 81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:54:18 np0005532048 podman[411968]: 2025-11-22 09:54:18.813690668 +0000 UTC m=+0.324160921 container died 81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 04:54:19 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:19 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 231 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:19 np0005532048 systemd[1]: var-lib-containers-storage-overlay-834d0e0a2589eac454f44cc9f404ce95604ca5ca2154f6b3a80cac6805ae3184-merged.mount: Deactivated successfully.
Nov 22 04:54:19 np0005532048 dnf[411989]: Metadata cache refreshed recently.
Nov 22 04:54:19 np0005532048 podman[411968]: 2025-11-22 09:54:19.273719265 +0000 UTC m=+0.784189518 container remove 81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:54:19 np0005532048 systemd[1]: libpod-conmon-81a7104a8ffb43e83be1f66849bbfd54fc6798733765953407ac84f93ee6b8b1.scope: Deactivated successfully.
Nov 22 04:54:19 np0005532048 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 22 04:54:19 np0005532048 systemd[1]: Finished dnf makecache.
Nov 22 04:54:19 np0005532048 podman[412009]: 2025-11-22 09:54:19.478032516 +0000 UTC m=+0.065923614 container create 8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_grothendieck, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 04:54:19 np0005532048 systemd[1]: Started libpod-conmon-8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7.scope.
Nov 22 04:54:19 np0005532048 podman[412009]: 2025-11-22 09:54:19.43994936 +0000 UTC m=+0.027840468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:54:19 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:54:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb4e698d748fe8431897871e1123ea15f21b32d9f064600bc984fbb55efce24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:54:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb4e698d748fe8431897871e1123ea15f21b32d9f064600bc984fbb55efce24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:54:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb4e698d748fe8431897871e1123ea15f21b32d9f064600bc984fbb55efce24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:54:19 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb4e698d748fe8431897871e1123ea15f21b32d9f064600bc984fbb55efce24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:54:19 np0005532048 podman[412009]: 2025-11-22 09:54:19.599572667 +0000 UTC m=+0.187463785 container init 8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:54:19 np0005532048 podman[412009]: 2025-11-22 09:54:19.60912683 +0000 UTC m=+0.197017928 container start 8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_grothendieck, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 04:54:19 np0005532048 podman[412009]: 2025-11-22 09:54:19.618698553 +0000 UTC m=+0.206589681 container attach 8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_grothendieck, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:54:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:19.684+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:20 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]: {
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:    "0": [
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:        {
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "devices": [
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "/dev/loop3"
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            ],
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "lv_name": "ceph_lv0",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "lv_size": "21470642176",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "name": "ceph_lv0",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "tags": {
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.cluster_name": "ceph",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.crush_device_class": "",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.encrypted": "0",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.osd_id": "0",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.type": "block",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.vdo": "0"
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            },
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "type": "block",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "vg_name": "ceph_vg0"
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:        }
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:    ],
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:    "1": [
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:        {
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "devices": [
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "/dev/loop4"
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            ],
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "lv_name": "ceph_lv1",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "lv_size": "21470642176",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "name": "ceph_lv1",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "tags": {
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.cluster_name": "ceph",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.crush_device_class": "",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.encrypted": "0",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.osd_id": "1",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.type": "block",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.vdo": "0"
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            },
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "type": "block",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "vg_name": "ceph_vg1"
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:        }
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:    ],
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:    "2": [
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:        {
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "devices": [
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "/dev/loop5"
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            ],
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "lv_name": "ceph_lv2",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "lv_size": "21470642176",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "name": "ceph_lv2",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "tags": {
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.cluster_name": "ceph",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.crush_device_class": "",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.encrypted": "0",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.osd_id": "2",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.type": "block",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:                "ceph.vdo": "0"
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            },
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "type": "block",
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:            "vg_name": "ceph_vg2"
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:        }
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]:    ]
Nov 22 04:54:20 np0005532048 relaxed_grothendieck[412025]: }
Nov 22 04:54:20 np0005532048 systemd[1]: libpod-8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7.scope: Deactivated successfully.
Nov 22 04:54:20 np0005532048 podman[412009]: 2025-11-22 09:54:20.448173367 +0000 UTC m=+1.036064465 container died 8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_grothendieck, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:54:20 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4bb4e698d748fe8431897871e1123ea15f21b32d9f064600bc984fbb55efce24-merged.mount: Deactivated successfully.
Nov 22 04:54:20 np0005532048 podman[412009]: 2025-11-22 09:54:20.549150089 +0000 UTC m=+1.137041187 container remove 8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_grothendieck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 04:54:20 np0005532048 systemd[1]: libpod-conmon-8c26a274f0881adaf543d938892f3e79e65c9b0a143997d5ce7e621519f585e7.scope: Deactivated successfully.
Nov 22 04:54:20 np0005532048 podman[412034]: 2025-11-22 09:54:20.617327667 +0000 UTC m=+0.135941288 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:54:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:20.653+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:21 np0005532048 podman[412208]: 2025-11-22 09:54:21.179041611 +0000 UTC m=+0.054237685 container create 1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kapitsa, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:54:21 np0005532048 systemd[1]: Started libpod-conmon-1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b.scope.
Nov 22 04:54:21 np0005532048 podman[412208]: 2025-11-22 09:54:21.14662464 +0000 UTC m=+0.021820734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:54:21 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:54:21 np0005532048 podman[412208]: 2025-11-22 09:54:21.268232824 +0000 UTC m=+0.143428918 container init 1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kapitsa, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:54:21 np0005532048 podman[412208]: 2025-11-22 09:54:21.276046242 +0000 UTC m=+0.151242306 container start 1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kapitsa, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:54:21 np0005532048 musing_kapitsa[412223]: 167 167
Nov 22 04:54:21 np0005532048 systemd[1]: libpod-1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b.scope: Deactivated successfully.
Nov 22 04:54:21 np0005532048 podman[412208]: 2025-11-22 09:54:21.288147038 +0000 UTC m=+0.163343132 container attach 1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:54:21 np0005532048 podman[412208]: 2025-11-22 09:54:21.288808306 +0000 UTC m=+0.164004370 container died 1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:54:21 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c9a7a7a5f2e6b658f309a96fcdc53615a8fcb6667b0db721dd310bfdb98fc8cf-merged.mount: Deactivated successfully.
Nov 22 04:54:21 np0005532048 podman[412208]: 2025-11-22 09:54:21.346683513 +0000 UTC m=+0.221879587 container remove 1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:54:21 np0005532048 systemd[1]: libpod-conmon-1aac4141bfb004f6b510a456f7f389c33c6cb66479ed54ff9d35d036a692fa7b.scope: Deactivated successfully.
Nov 22 04:54:21 np0005532048 podman[412246]: 2025-11-22 09:54:21.520305326 +0000 UTC m=+0.048222904 container create a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 04:54:21 np0005532048 systemd[1]: Started libpod-conmon-a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46.scope.
Nov 22 04:54:21 np0005532048 podman[412246]: 2025-11-22 09:54:21.496121123 +0000 UTC m=+0.024038731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:54:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:54:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed5a92e2f480228d1924993a9f120c42b03a74218f71052ded4045d505791d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:54:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed5a92e2f480228d1924993a9f120c42b03a74218f71052ded4045d505791d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:54:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed5a92e2f480228d1924993a9f120c42b03a74218f71052ded4045d505791d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:54:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed5a92e2f480228d1924993a9f120c42b03a74218f71052ded4045d505791d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:54:21 np0005532048 podman[412246]: 2025-11-22 09:54:21.62692639 +0000 UTC m=+0.154843988 container init a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:54:21 np0005532048 podman[412246]: 2025-11-22 09:54:21.635171379 +0000 UTC m=+0.163088957 container start a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 04:54:21 np0005532048 podman[412246]: 2025-11-22 09:54:21.649959764 +0000 UTC m=+0.177877362 container attach a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 04:54:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:21.674+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 22 04:54:22 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:22 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:22 np0005532048 ovn_controller[152872]: 2025-11-22T09:54:22Z|01644|binding|INFO|Releasing lport e20358df-1297-4b78-9482-59841121a4d7 from this chassis (sb_readonly=0)
Nov 22 04:54:22 np0005532048 nova_compute[253661]: 2025-11-22 09:54:22.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:22 np0005532048 epic_hermann[412262]: {
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:        "osd_id": 1,
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:        "type": "bluestore"
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:    },
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:        "osd_id": 0,
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:        "type": "bluestore"
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:    },
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:        "osd_id": 2,
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:        "type": "bluestore"
Nov 22 04:54:22 np0005532048 epic_hermann[412262]:    }
Nov 22 04:54:22 np0005532048 epic_hermann[412262]: }
Nov 22 04:54:22 np0005532048 systemd[1]: libpod-a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46.scope: Deactivated successfully.
Nov 22 04:54:22 np0005532048 podman[412246]: 2025-11-22 09:54:22.668125694 +0000 UTC m=+1.196043292 container died a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hermann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:54:22 np0005532048 systemd[1]: libpod-a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46.scope: Consumed 1.027s CPU time.
Nov 22 04:54:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:22.716+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:54:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:54:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:54:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:54:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:54:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:54:22 np0005532048 nova_compute[253661]: 2025-11-22 09:54:22.870 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:22 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2ed5a92e2f480228d1924993a9f120c42b03a74218f71052ded4045d505791d4-merged.mount: Deactivated successfully.
Nov 22 04:54:22 np0005532048 nova_compute[253661]: 2025-11-22 09:54:22.948 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:23 np0005532048 podman[412246]: 2025-11-22 09:54:23.276154813 +0000 UTC m=+1.804072431 container remove a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:54:23 np0005532048 systemd[1]: libpod-conmon-a32478761689b402e4464155f5961814472a8b084ba2044ae565e80007038f46.scope: Deactivated successfully.
Nov 22 04:54:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:54:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:54:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:54:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:54:23 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 38c02cb6-c09e-4989-9586-7166c1af971d does not exist
Nov 22 04:54:23 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 274ae1f1-086d-412b-a54c-2b4578ee2876 does not exist
Nov 22 04:54:23 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 236 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:54:23 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:23 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:54:23 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:54:23 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 236 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:23.741+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 22 04:54:24 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:24.726+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:25 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:25.699+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Nov 22 04:54:26 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:26.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:27.735+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:27 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:27 np0005532048 nova_compute[253661]: 2025-11-22 09:54:27.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:27 np0005532048 nova_compute[253661]: 2025-11-22 09:54:27.927 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763805252.9265118, 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:54:27 np0005532048 nova_compute[253661]: 2025-11-22 09:54:27.928 253665 INFO nova.compute.manager [-] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:54:27 np0005532048 nova_compute[253661]: 2025-11-22 09:54:27.951 253665 DEBUG nova.compute.manager [None req-ea684394-2df9-48f4-96e9-5ab22e9989e9 - - - - - -] [instance: 38b3cb94-17e4-4fd5-830f-1934cf3ee3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:54:27 np0005532048 nova_compute[253661]: 2025-11-22 09:54:27.952 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:27.997 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:54:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:27.997 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:54:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:54:27.998 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:54:28 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 241 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:54:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Nov 22 04:54:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:28.711+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:28 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:28 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 241 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:29.707+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:30 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:54:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:30.730+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:31 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:31.778+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:32 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:32 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:54:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:32.749+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:32 np0005532048 nova_compute[253661]: 2025-11-22 09:54:32.883 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:32 np0005532048 nova_compute[253661]: 2025-11-22 09:54:32.953 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:33 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 246 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:54:33 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:33.755+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:54:34 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 246 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:34 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:34.743+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:35.745+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:35 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:54:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:36.790+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:37 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:37 np0005532048 nova_compute[253661]: 2025-11-22 09:54:37.244 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:54:37 np0005532048 nova_compute[253661]: 2025-11-22 09:54:37.244 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 22 04:54:37 np0005532048 nova_compute[253661]: 2025-11-22 09:54:37.260 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 22 04:54:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:37.742+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:37 np0005532048 nova_compute[253661]: 2025-11-22 09:54:37.921 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:37 np0005532048 nova_compute[253661]: 2025-11-22 09:54:37.955 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:38 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:38 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 251 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:54:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:54:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:38.721+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:39 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:39 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 251 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:39.723+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:40 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:40 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:54:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:40.751+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:41 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:41.756+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:42 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:54:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:42.714+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:42 np0005532048 nova_compute[253661]: 2025-11-22 09:54:42.922 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:42 np0005532048 nova_compute[253661]: 2025-11-22 09:54:42.957 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:43 np0005532048 podman[412359]: 2025-11-22 09:54:43.364105452 +0000 UTC m=+0.053236681 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:54:43 np0005532048 podman[412360]: 2025-11-22 09:54:43.369624032 +0000 UTC m=+0.058702189 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 22 04:54:43 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:43 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 256 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:54:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:43.728+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:44 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 256 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:44 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:54:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:44.687+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:45 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:45.638+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:54:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:46.612+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:46 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:47.639+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:47 np0005532048 nova_compute[253661]: 2025-11-22 09:54:47.923 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:47 np0005532048 nova_compute[253661]: 2025-11-22 09:54:47.959 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:47 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:54:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:48.599+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:48 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 261 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:54:49 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:49 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 261 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:49.553+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:50 np0005532048 nova_compute[253661]: 2025-11-22 09:54:50.245 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:54:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:54:50 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:50 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:50.569+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:51 np0005532048 podman[412396]: 2025-11-22 09:54:51.389301855 +0000 UTC m=+0.086605847 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 22 04:54:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:51.576+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:51 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:51 np0005532048 nova_compute[253661]: 2025-11-22 09:54:51.939 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:52 np0005532048 nova_compute[253661]: 2025-11-22 09:54:52.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:54:52 np0005532048 nova_compute[253661]: 2025-11-22 09:54:52.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:54:52 np0005532048 nova_compute[253661]: 2025-11-22 09:54:52.245 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:54:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:54:52
Nov 22 04:54:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:54:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:54:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.log']
Nov 22 04:54:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:54:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:54:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:52.556+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:54:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:54:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:54:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:54:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:54:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:54:52 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:52 np0005532048 nova_compute[253661]: 2025-11-22 09:54:52.925 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:52 np0005532048 nova_compute[253661]: 2025-11-22 09:54:52.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:53.539+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:53 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 266 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:54:54 np0005532048 nova_compute[253661]: 2025-11-22 09:54:54.236 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:54:54 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:54 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 266 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:54:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:54.550+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:55.549+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:55 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:55 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:54:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:54:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:54:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:54:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:54:55 np0005532048 nova_compute[253661]: 2025-11-22 09:54:55.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:56 np0005532048 nova_compute[253661]: 2025-11-22 09:54:56.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:54:56 np0005532048 nova_compute[253661]: 2025-11-22 09:54:56.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:54:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:54:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:56.539+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:57 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:57 np0005532048 nova_compute[253661]: 2025-11-22 09:54:57.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:54:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:54:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:54:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:54:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:54:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:54:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:57.530+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:57 np0005532048 nova_compute[253661]: 2025-11-22 09:54:57.926 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:57 np0005532048 nova_compute[253661]: 2025-11-22 09:54:57.962 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:54:58 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:54:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:58.540+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:58 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 271 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:54:59 np0005532048 nova_compute[253661]: 2025-11-22 09:54:59.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:54:59 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:59 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:54:59 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 271 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:54:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:54:59.575+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:54:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:55:00 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:00.531+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:01.501+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:01 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:02 np0005532048 nova_compute[253661]: 2025-11-22 09:55:02.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:55:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:55:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:02.527+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:02 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:02 np0005532048 nova_compute[253661]: 2025-11-22 09:55:02.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:02 np0005532048 nova_compute[253661]: 2025-11-22 09:55:02.964 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007606720469492739 of space, bias 1.0, pg target 0.22820161408478218 quantized to 32 (current 32)
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:55:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:55:03 np0005532048 nova_compute[253661]: 2025-11-22 09:55:03.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:55:03 np0005532048 nova_compute[253661]: 2025-11-22 09:55:03.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:55:03 np0005532048 nova_compute[253661]: 2025-11-22 09:55:03.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:55:03 np0005532048 nova_compute[253661]: 2025-11-22 09:55:03.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:55:03 np0005532048 nova_compute[253661]: 2025-11-22 09:55:03.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:55:03 np0005532048 nova_compute[253661]: 2025-11-22 09:55:03.258 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:55:03 np0005532048 nova_compute[253661]: 2025-11-22 09:55:03.258 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:55:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:03.480+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:55:03 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1929090523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:55:03 np0005532048 nova_compute[253661]: 2025-11-22 09:55:03.705 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:55:03 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:03 np0005532048 nova_compute[253661]: 2025-11-22 09:55:03.772 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:55:03 np0005532048 nova_compute[253661]: 2025-11-22 09:55:03.772 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:55:03 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 276 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:55:03 np0005532048 nova_compute[253661]: 2025-11-22 09:55:03.942 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:55:03 np0005532048 nova_compute[253661]: 2025-11-22 09:55:03.943 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3260MB free_disk=59.942649841308594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:55:03 np0005532048 nova_compute[253661]: 2025-11-22 09:55:03.944 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:55:03 np0005532048 nova_compute[253661]: 2025-11-22 09:55:03.944 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:55:04 np0005532048 nova_compute[253661]: 2025-11-22 09:55:04.043 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 027bdffc-9e8e-4a33-9b06-844890912dc9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:55:04 np0005532048 nova_compute[253661]: 2025-11-22 09:55:04.044 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:55:04 np0005532048 nova_compute[253661]: 2025-11-22 09:55:04.044 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:55:04 np0005532048 nova_compute[253661]: 2025-11-22 09:55:04.085 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:55:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:55:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:55:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3397147431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:55:04 np0005532048 nova_compute[253661]: 2025-11-22 09:55:04.520 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:55:04 np0005532048 nova_compute[253661]: 2025-11-22 09:55:04.526 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:55:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:04.525+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:04 np0005532048 nova_compute[253661]: 2025-11-22 09:55:04.541 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:55:04 np0005532048 nova_compute[253661]: 2025-11-22 09:55:04.649 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:55:04 np0005532048 nova_compute[253661]: 2025-11-22 09:55:04.649 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:55:04 np0005532048 nova_compute[253661]: 2025-11-22 09:55:04.650 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:55:04 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:04 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 276 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:05.476+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:05 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:55:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:06.479+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:06 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:07.512+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:07 np0005532048 ovn_controller[152872]: 2025-11-22T09:55:07Z|01645|binding|INFO|Releasing lport e20358df-1297-4b78-9482-59841121a4d7 from this chassis (sb_readonly=0)
Nov 22 04:55:07 np0005532048 nova_compute[253661]: 2025-11-22 09:55:07.737 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:07 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:07 np0005532048 nova_compute[253661]: 2025-11-22 09:55:07.930 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:07 np0005532048 nova_compute[253661]: 2025-11-22 09:55:07.965 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:55:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:08.517+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:08 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:08 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 286 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:55:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:09.507+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:09.800 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:55:09 np0005532048 nova_compute[253661]: 2025-11-22 09:55:09.801 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:09 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:09.801 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:55:09 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:09 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 286 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:55:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:10.522+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:10 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:11.492+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:11 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:55:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:12.488+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:12 np0005532048 nova_compute[253661]: 2025-11-22 09:55:12.932 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:12 np0005532048 nova_compute[253661]: 2025-11-22 09:55:12.966 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:13 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:13.489+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:55:14 np0005532048 podman[412467]: 2025-11-22 09:55:14.355248569 +0000 UTC m=+0.054229206 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 04:55:14 np0005532048 podman[412468]: 2025-11-22 09:55:14.361163949 +0000 UTC m=+0.058503675 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:55:14 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:55:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:14.508+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:15 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 291 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:15.484+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:15 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:15 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:15 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:15.803 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:55:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:55:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:16.509+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:16 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 291 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:16 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:17.545+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:17 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:17 np0005532048 nova_compute[253661]: 2025-11-22 09:55:17.920 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:55:17 np0005532048 nova_compute[253661]: 2025-11-22 09:55:17.937 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Triggering sync for uuid 027bdffc-9e8e-4a33-9b06-844890912dc9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 22 04:55:17 np0005532048 nova_compute[253661]: 2025-11-22 09:55:17.939 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:55:17 np0005532048 nova_compute[253661]: 2025-11-22 09:55:17.939 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:55:17 np0005532048 nova_compute[253661]: 2025-11-22 09:55:17.939 253665 INFO nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] During sync_power_state the instance has a pending task (image_uploading). Skip.#033[00m
Nov 22 04:55:17 np0005532048 nova_compute[253661]: 2025-11-22 09:55:17.939 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:55:17 np0005532048 nova_compute[253661]: 2025-11-22 09:55:17.969 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:55:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:18.579+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:18 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:55:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:19.584+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:19 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:55:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:20.555+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:20 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:20 np0005532048 nova_compute[253661]: 2025-11-22 09:55:20.862 253665 DEBUG nova.compute.manager [req-88ba7f57-b906-4628-8537-2d4269933ae4 req-f8ad04f2-5cc7-4519-b094-c91cefdcfb83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-changed-62358b95-9f4a-404c-8165-dc98c7e3b042 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:55:20 np0005532048 nova_compute[253661]: 2025-11-22 09:55:20.862 253665 DEBUG nova.compute.manager [req-88ba7f57-b906-4628-8537-2d4269933ae4 req-f8ad04f2-5cc7-4519-b094-c91cefdcfb83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Refreshing instance network info cache due to event network-changed-62358b95-9f4a-404c-8165-dc98c7e3b042. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 22 04:55:20 np0005532048 nova_compute[253661]: 2025-11-22 09:55:20.862 253665 DEBUG oslo_concurrency.lockutils [req-88ba7f57-b906-4628-8537-2d4269933ae4 req-f8ad04f2-5cc7-4519-b094-c91cefdcfb83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 22 04:55:20 np0005532048 nova_compute[253661]: 2025-11-22 09:55:20.862 253665 DEBUG oslo_concurrency.lockutils [req-88ba7f57-b906-4628-8537-2d4269933ae4 req-f8ad04f2-5cc7-4519-b094-c91cefdcfb83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquired lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 22 04:55:20 np0005532048 nova_compute[253661]: 2025-11-22 09:55:20.863 253665 DEBUG nova.network.neutron [req-88ba7f57-b906-4628-8537-2d4269933ae4 req-f8ad04f2-5cc7-4519-b094-c91cefdcfb83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Refreshing network info cache for port 62358b95-9f4a-404c-8165-dc98c7e3b042 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.237 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.237 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.238 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.239 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.239 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.241 253665 INFO nova.compute.manager [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Terminating instance#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.241 253665 DEBUG nova.compute.manager [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 22 04:55:21 np0005532048 kernel: tap62358b95-9f (unregistering): left promiscuous mode
Nov 22 04:55:21 np0005532048 NetworkManager[48920]: <info>  [1763805321.3232] device (tap62358b95-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:55:21 np0005532048 ovn_controller[152872]: 2025-11-22T09:55:21Z|01646|binding|INFO|Releasing lport 62358b95-9f4a-404c-8165-dc98c7e3b042 from this chassis (sb_readonly=0)
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.373 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:21 np0005532048 ovn_controller[152872]: 2025-11-22T09:55:21Z|01647|binding|INFO|Setting lport 62358b95-9f4a-404c-8165-dc98c7e3b042 down in Southbound
Nov 22 04:55:21 np0005532048 ovn_controller[152872]: 2025-11-22T09:55:21Z|01648|binding|INFO|Removing iface tap62358b95-9f ovn-installed in OVS
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.375 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.387 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:21 np0005532048 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000093.scope: Deactivated successfully.
Nov 22 04:55:21 np0005532048 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000093.scope: Consumed 27.810s CPU time.
Nov 22 04:55:21 np0005532048 systemd-machined[215941]: Machine qemu-179-instance-00000093 terminated.
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.467 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.470 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.482 253665 INFO nova.virt.libvirt.driver [-] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Instance destroyed successfully.#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.482 253665 DEBUG nova.objects.instance [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lazy-loading 'resources' on Instance uuid 027bdffc-9e8e-4a33-9b06-844890912dc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.496 253665 DEBUG nova.virt.libvirt.vif [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T09:49:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1978624834',display_name='tempest-TestSnapshotPattern-server-1978624834',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1978624834',id=147,image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPbs4cvcme9ivACjshW3GrHRutsNNtC8JsYxZJpO7Wdm0wymVGG4uq7MUY+cUVsrxl6cn1THXZxHPADM3ZJF4hahzevBsWxtyjQn+l0NA1XlnmuhoCdb7kymP1eYu1QPUA==',key_name='tempest-TestSnapshotPattern-1057806612',keypairs=<?>,launch_index=0,launched_at=2025-11-22T09:49:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ffacc46512445d8b5c24899a0053196',ramdisk_id='',reservation_id='r-c1keooq8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='878156d4-57f6-4a8b-8f4c-cbde182bb832',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-98475773',owner_user_name='tempest-TestSnapshotPattern-98475773-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T09:50:18Z,user_data=None,user_id='1edb692a8ff443038839784febd964b1',uuid=027bdffc-9e8e-4a33-9b06-844890912dc9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.496 253665 DEBUG nova.network.os_vif_util [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Converting VIF {"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.497 253665 DEBUG nova.network.os_vif_util [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bc:a0:65,bridge_name='br-int',has_traffic_filtering=True,id=62358b95-9f4a-404c-8165-dc98c7e3b042,network=Network(768d62d5-f993-4383-9edf-3d68f19e409c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62358b95-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.497 253665 DEBUG os_vif [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:a0:65,bridge_name='br-int',has_traffic_filtering=True,id=62358b95-9f4a-404c-8165-dc98c7e3b042,network=Network(768d62d5-f993-4383-9edf-3d68f19e409c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62358b95-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.499 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.500 253665 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62358b95-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.501 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.503 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.505 253665 INFO os_vif [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:a0:65,bridge_name='br-int',has_traffic_filtering=True,id=62358b95-9f4a-404c-8165-dc98c7e3b042,network=Network(768d62d5-f993-4383-9edf-3d68f19e409c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62358b95-9f')#033[00m
Nov 22 04:55:21 np0005532048 podman[412512]: 2025-11-22 09:55:21.54546729 +0000 UTC m=+0.093104013 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 04:55:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:21.594+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.621 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:a0:65 10.100.0.3'], port_security=['fa:16:3e:bc:a0:65 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '027bdffc-9e8e-4a33-9b06-844890912dc9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-768d62d5-f993-4383-9edf-3d68f19e409c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ffacc46512445d8b5c24899a0053196', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aef6f84b-f5db-4e86-b5ce-afacad080f10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eea3b39d-a626-45c2-a32c-ad267efc3243, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>], logical_port=62358b95-9f4a-404c-8165-dc98c7e3b042) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fc8a7fd7d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:55:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.622 162862 INFO neutron.agent.ovn.metadata.agent [-] Port 62358b95-9f4a-404c-8165-dc98c7e3b042 in datapath 768d62d5-f993-4383-9edf-3d68f19e409c unbound from our chassis#033[00m
Nov 22 04:55:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.623 162862 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 768d62d5-f993-4383-9edf-3d68f19e409c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 22 04:55:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.625 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9b52a8-cf21-4b5d-913f-7212ebd42805]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:55:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.625 162862 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c namespace which is not needed anymore#033[00m
Nov 22 04:55:21 np0005532048 neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c[405662]: [NOTICE]   (405666) : haproxy version is 2.8.14-c23fe91
Nov 22 04:55:21 np0005532048 neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c[405662]: [NOTICE]   (405666) : path to executable is /usr/sbin/haproxy
Nov 22 04:55:21 np0005532048 neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c[405662]: [WARNING]  (405666) : Exiting Master process...
Nov 22 04:55:21 np0005532048 neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c[405662]: [WARNING]  (405666) : Exiting Master process...
Nov 22 04:55:21 np0005532048 neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c[405662]: [ALERT]    (405666) : Current worker (405668) exited with code 143 (Terminated)
Nov 22 04:55:21 np0005532048 neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c[405662]: [WARNING]  (405666) : All workers exited. Exiting... (0)
Nov 22 04:55:21 np0005532048 systemd[1]: libpod-5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab.scope: Deactivated successfully.
Nov 22 04:55:21 np0005532048 podman[412598]: 2025-11-22 09:55:21.756571633 +0000 UTC m=+0.051838035 container died 5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:55:21 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:21 np0005532048 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab-userdata-shm.mount: Deactivated successfully.
Nov 22 04:55:21 np0005532048 systemd[1]: var-lib-containers-storage-overlay-3a3a35e44d987f37dea2ee9bb4f5402e567d895145b73865842e34bbbf02d3f6-merged.mount: Deactivated successfully.
Nov 22 04:55:21 np0005532048 podman[412598]: 2025-11-22 09:55:21.820387631 +0000 UTC m=+0.115654033 container cleanup 5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.823 253665 DEBUG nova.network.neutron [req-88ba7f57-b906-4628-8537-2d4269933ae4 req-f8ad04f2-5cc7-4519-b094-c91cefdcfb83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updated VIF entry in instance network info cache for port 62358b95-9f4a-404c-8165-dc98c7e3b042. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.823 253665 DEBUG nova.network.neutron [req-88ba7f57-b906-4628-8537-2d4269933ae4 req-f8ad04f2-5cc7-4519-b094-c91cefdcfb83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updating instance_info_cache with network_info: [{"id": "62358b95-9f4a-404c-8165-dc98c7e3b042", "address": "fa:16:3e:bc:a0:65", "network": {"id": "768d62d5-f993-4383-9edf-3d68f19e409c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-73694349-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ffacc46512445d8b5c24899a0053196", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62358b95-9f", "ovs_interfaceid": "62358b95-9f4a-404c-8165-dc98c7e3b042", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:55:21 np0005532048 systemd[1]: libpod-conmon-5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab.scope: Deactivated successfully.
Nov 22 04:55:21 np0005532048 podman[412628]: 2025-11-22 09:55:21.883176913 +0000 UTC m=+0.043990836 container remove 5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:55:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.888 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[28a539a4-5d6d-495e-b4c1-5af702a6af1c]: (4, ('Sat Nov 22 09:55:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c (5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab)\n5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab\nSat Nov 22 09:55:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c (5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab)\n5f0efdaaf4f07e6230ed5bdfdd1b062418911b4256e5e8448d9fd49e37bc96ab\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:55:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.890 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[df2a4825-11ee-4629-8c28-657600907748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:55:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.891 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap768d62d5-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:55:21 np0005532048 kernel: tap768d62d5-f0: left promiscuous mode
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.893 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:21 np0005532048 nova_compute[253661]: 2025-11-22 09:55:21.905 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.907 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1eb193-01b2-453f-bb5f-90f5735362a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:55:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.924 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[b43b1c0d-fcf7-4f01-91b0-cba40feebbe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:55:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.926 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[43cca04e-ad7e-4ef7-8dc0-fa1f0f57bc8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:55:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.940 270751 DEBUG oslo.privsep.daemon [-] privsep: reply[cab5ae7b-e932-462d-bd46-4180174bf47a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 784420, 'reachable_time': 17395, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412643, 'error': None, 'target': 'ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:55:21 np0005532048 systemd[1]: run-netns-ovnmeta\x2d768d62d5\x2df993\x2d4383\x2d9edf\x2d3d68f19e409c.mount: Deactivated successfully.
Nov 22 04:55:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.943 162975 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-768d62d5-f993-4383-9edf-3d68f19e409c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 22 04:55:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:21.945 162975 DEBUG oslo.privsep.daemon [-] privsep: reply[a122e836-65c5-4f94-be6c-c88c3e69da7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 22 04:55:22 np0005532048 nova_compute[253661]: 2025-11-22 09:55:22.130 253665 DEBUG oslo_concurrency.lockutils [req-88ba7f57-b906-4628-8537-2d4269933ae4 req-f8ad04f2-5cc7-4519-b094-c91cefdcfb83 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Releasing lock "refresh_cache-027bdffc-9e8e-4a33-9b06-844890912dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 22 04:55:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:55:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:22.612+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:55:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:55:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:55:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:55:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:55:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:55:22 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:22 np0005532048 nova_compute[253661]: 2025-11-22 09:55:22.971 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:23 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:23 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 296 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:55:24 np0005532048 nova_compute[253661]: 2025-11-22 09:55:24.241 253665 DEBUG nova.compute.manager [req-ebadedfd-e949-49a7-afaf-308cdb709c67 req-8f5003ca-f3b9-4a27-b9f7-224147f51043 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-vif-unplugged-62358b95-9f4a-404c-8165-dc98c7e3b042 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:55:24 np0005532048 nova_compute[253661]: 2025-11-22 09:55:24.242 253665 DEBUG oslo_concurrency.lockutils [req-ebadedfd-e949-49a7-afaf-308cdb709c67 req-8f5003ca-f3b9-4a27-b9f7-224147f51043 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:55:24 np0005532048 nova_compute[253661]: 2025-11-22 09:55:24.242 253665 DEBUG oslo_concurrency.lockutils [req-ebadedfd-e949-49a7-afaf-308cdb709c67 req-8f5003ca-f3b9-4a27-b9f7-224147f51043 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:55:24 np0005532048 nova_compute[253661]: 2025-11-22 09:55:24.242 253665 DEBUG oslo_concurrency.lockutils [req-ebadedfd-e949-49a7-afaf-308cdb709c67 req-8f5003ca-f3b9-4a27-b9f7-224147f51043 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:55:24 np0005532048 nova_compute[253661]: 2025-11-22 09:55:24.242 253665 DEBUG nova.compute.manager [req-ebadedfd-e949-49a7-afaf-308cdb709c67 req-8f5003ca-f3b9-4a27-b9f7-224147f51043 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] No waiting events found dispatching network-vif-unplugged-62358b95-9f4a-404c-8165-dc98c7e3b042 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:55:24 np0005532048 nova_compute[253661]: 2025-11-22 09:55:24.243 253665 DEBUG nova.compute.manager [req-ebadedfd-e949-49a7-afaf-308cdb709c67 req-8f5003ca-f3b9-4a27-b9f7-224147f51043 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-vif-unplugged-62358b95-9f4a-404c-8165-dc98c7e3b042 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 22 04:55:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:55:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:55:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:55:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:55:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:55:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:55:24 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a46bdeaa-1f29-4e35-ae12-4749e35ac7ea does not exist
Nov 22 04:55:24 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 30fc49c2-c787-457b-842e-a62244ad408a does not exist
Nov 22 04:55:24 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 78624c8c-6a52-4408-9b64-b35e2d8c353f does not exist
Nov 22 04:55:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:55:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:55:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:55:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:55:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:55:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:55:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 21 op/s
Nov 22 04:55:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:24.602+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:24 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 296 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:24 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:55:24 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:55:24 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:55:24 np0005532048 podman[412913]: 2025-11-22 09:55:24.95886552 +0000 UTC m=+0.041516984 container create 95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_fermat, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 04:55:25 np0005532048 systemd[1]: Started libpod-conmon-95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19.scope.
Nov 22 04:55:25 np0005532048 podman[412913]: 2025-11-22 09:55:24.940514565 +0000 UTC m=+0.023166049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:55:25 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:55:25 np0005532048 podman[412913]: 2025-11-22 09:55:25.080137095 +0000 UTC m=+0.162788639 container init 95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 04:55:25 np0005532048 podman[412913]: 2025-11-22 09:55:25.095180707 +0000 UTC m=+0.177832181 container start 95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_fermat, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:55:25 np0005532048 podman[412913]: 2025-11-22 09:55:25.101184408 +0000 UTC m=+0.183835962 container attach 95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_fermat, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 04:55:25 np0005532048 modest_fermat[412930]: 167 167
Nov 22 04:55:25 np0005532048 systemd[1]: libpod-95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19.scope: Deactivated successfully.
Nov 22 04:55:25 np0005532048 podman[412913]: 2025-11-22 09:55:25.104618226 +0000 UTC m=+0.187269690 container died 95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_fermat, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:55:25 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f9231c2f50588bdc240bc3850e214df812e3bf3dd5ec889a9d986a915cbc40a5-merged.mount: Deactivated successfully.
Nov 22 04:55:25 np0005532048 podman[412913]: 2025-11-22 09:55:25.175230776 +0000 UTC m=+0.257882250 container remove 95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_fermat, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:55:25 np0005532048 systemd[1]: libpod-conmon-95986f20e81878ddddc317998a05800281325d946bafa7928564272b80b99c19.scope: Deactivated successfully.
Nov 22 04:55:25 np0005532048 podman[412954]: 2025-11-22 09:55:25.346516811 +0000 UTC m=+0.048968113 container create b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 04:55:25 np0005532048 systemd[1]: Started libpod-conmon-b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc.scope.
Nov 22 04:55:25 np0005532048 podman[412954]: 2025-11-22 09:55:25.322268185 +0000 UTC m=+0.024719497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:55:25 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:55:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a8ae64b5a948a370c9f2144a281c0924d6c0f120e74780995c382363175850/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:55:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a8ae64b5a948a370c9f2144a281c0924d6c0f120e74780995c382363175850/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:55:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a8ae64b5a948a370c9f2144a281c0924d6c0f120e74780995c382363175850/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:55:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a8ae64b5a948a370c9f2144a281c0924d6c0f120e74780995c382363175850/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:55:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55a8ae64b5a948a370c9f2144a281c0924d6c0f120e74780995c382363175850/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:55:25 np0005532048 podman[412954]: 2025-11-22 09:55:25.4561241 +0000 UTC m=+0.158575412 container init b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 04:55:25 np0005532048 podman[412954]: 2025-11-22 09:55:25.462949613 +0000 UTC m=+0.165400905 container start b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:55:25 np0005532048 podman[412954]: 2025-11-22 09:55:25.467061117 +0000 UTC m=+0.169512429 container attach b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:55:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:25.555+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:25 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:26 np0005532048 nova_compute[253661]: 2025-11-22 09:55:26.395 253665 DEBUG nova.compute.manager [req-7958d13a-d3c5-4289-96d6-1d2d283578df req-04e9ee6b-63c3-4bc3-b0f0-b1a83519eb3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:55:26 np0005532048 nova_compute[253661]: 2025-11-22 09:55:26.396 253665 DEBUG oslo_concurrency.lockutils [req-7958d13a-d3c5-4289-96d6-1d2d283578df req-04e9ee6b-63c3-4bc3-b0f0-b1a83519eb3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Acquiring lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:55:26 np0005532048 nova_compute[253661]: 2025-11-22 09:55:26.396 253665 DEBUG oslo_concurrency.lockutils [req-7958d13a-d3c5-4289-96d6-1d2d283578df req-04e9ee6b-63c3-4bc3-b0f0-b1a83519eb3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:55:26 np0005532048 nova_compute[253661]: 2025-11-22 09:55:26.396 253665 DEBUG oslo_concurrency.lockutils [req-7958d13a-d3c5-4289-96d6-1d2d283578df req-04e9ee6b-63c3-4bc3-b0f0-b1a83519eb3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:55:26 np0005532048 nova_compute[253661]: 2025-11-22 09:55:26.396 253665 DEBUG nova.compute.manager [req-7958d13a-d3c5-4289-96d6-1d2d283578df req-04e9ee6b-63c3-4bc3-b0f0-b1a83519eb3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] No waiting events found dispatching network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 22 04:55:26 np0005532048 nova_compute[253661]: 2025-11-22 09:55:26.396 253665 WARNING nova.compute.manager [req-7958d13a-d3c5-4289-96d6-1d2d283578df req-04e9ee6b-63c3-4bc3-b0f0-b1a83519eb3c 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received unexpected event network-vif-plugged-62358b95-9f4a-404c-8165-dc98c7e3b042 for instance with vm_state active and task_state deleting.#033[00m
Nov 22 04:55:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 21 op/s
Nov 22 04:55:26 np0005532048 stoic_mclaren[412970]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:55:26 np0005532048 stoic_mclaren[412970]: --> relative data size: 1.0
Nov 22 04:55:26 np0005532048 stoic_mclaren[412970]: --> All data devices are unavailable
Nov 22 04:55:26 np0005532048 nova_compute[253661]: 2025-11-22 09:55:26.505 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:26 np0005532048 systemd[1]: libpod-b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc.scope: Deactivated successfully.
Nov 22 04:55:26 np0005532048 podman[412954]: 2025-11-22 09:55:26.506811864 +0000 UTC m=+1.209263176 container died b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:55:26 np0005532048 systemd[1]: var-lib-containers-storage-overlay-55a8ae64b5a948a370c9f2144a281c0924d6c0f120e74780995c382363175850-merged.mount: Deactivated successfully.
Nov 22 04:55:26 np0005532048 podman[412954]: 2025-11-22 09:55:26.610883204 +0000 UTC m=+1.313334516 container remove b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mclaren, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 04:55:26 np0005532048 systemd[1]: libpod-conmon-b3fbe02719e922a4ccfe2cebeba5b51ee6ae5a066a3268eb30db65af1b51befc.scope: Deactivated successfully.
Nov 22 04:55:26 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:27 np0005532048 podman[413171]: 2025-11-22 09:55:27.264832368 +0000 UTC m=+0.048415319 container create d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 04:55:27 np0005532048 systemd[1]: Started libpod-conmon-d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b.scope.
Nov 22 04:55:27 np0005532048 podman[413171]: 2025-11-22 09:55:27.242980743 +0000 UTC m=+0.026563704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:55:27 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:55:27 np0005532048 podman[413171]: 2025-11-22 09:55:27.379217458 +0000 UTC m=+0.162800429 container init d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 04:55:27 np0005532048 podman[413171]: 2025-11-22 09:55:27.390823873 +0000 UTC m=+0.174406854 container start d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 04:55:27 np0005532048 podman[413171]: 2025-11-22 09:55:27.396897566 +0000 UTC m=+0.180480527 container attach d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:55:27 np0005532048 gracious_shockley[413187]: 167 167
Nov 22 04:55:27 np0005532048 systemd[1]: libpod-d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b.scope: Deactivated successfully.
Nov 22 04:55:27 np0005532048 podman[413171]: 2025-11-22 09:55:27.400120088 +0000 UTC m=+0.183703039 container died d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:55:27 np0005532048 systemd[1]: var-lib-containers-storage-overlay-563fdb97761385ff99ecb316c8eaf3838ceedffe89121d3d5dfbfda4f34b87a3-merged.mount: Deactivated successfully.
Nov 22 04:55:27 np0005532048 podman[413171]: 2025-11-22 09:55:27.45146421 +0000 UTC m=+0.235047151 container remove d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shockley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:55:27 np0005532048 systemd[1]: libpod-conmon-d658b941fe643dcab42cc5e3566119be4e128ed396a9e7dda5b37b2a32d9c84b.scope: Deactivated successfully.
Nov 22 04:55:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:27.564+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:27 np0005532048 podman[413211]: 2025-11-22 09:55:27.70541761 +0000 UTC m=+0.074566662 container create c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brown, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 04:55:27 np0005532048 systemd[1]: Started libpod-conmon-c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f.scope.
Nov 22 04:55:27 np0005532048 podman[413211]: 2025-11-22 09:55:27.672107606 +0000 UTC m=+0.041256688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:55:27 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:55:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a30859f75cbe352bf3ad9caa5c12bd8346009716166caeede807b513c8ad1b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:55:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a30859f75cbe352bf3ad9caa5c12bd8346009716166caeede807b513c8ad1b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:55:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a30859f75cbe352bf3ad9caa5c12bd8346009716166caeede807b513c8ad1b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:55:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a30859f75cbe352bf3ad9caa5c12bd8346009716166caeede807b513c8ad1b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:55:27 np0005532048 podman[413211]: 2025-11-22 09:55:27.797166587 +0000 UTC m=+0.166315689 container init c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brown, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:55:27 np0005532048 podman[413211]: 2025-11-22 09:55:27.80439503 +0000 UTC m=+0.173544092 container start c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 04:55:27 np0005532048 podman[413211]: 2025-11-22 09:55:27.811518931 +0000 UTC m=+0.180667983 container attach c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 04:55:27 np0005532048 nova_compute[253661]: 2025-11-22 09:55:27.973 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:27.998 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:55:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:27.998 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:55:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:55:27.998 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:55:28 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 0 B/s wr, 73 op/s
Nov 22 04:55:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:28.541+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]: {
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:    "0": [
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:        {
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "devices": [
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "/dev/loop3"
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            ],
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "lv_name": "ceph_lv0",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "lv_size": "21470642176",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "name": "ceph_lv0",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "tags": {
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.cluster_name": "ceph",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.crush_device_class": "",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.encrypted": "0",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.osd_id": "0",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.type": "block",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.vdo": "0"
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            },
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "type": "block",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "vg_name": "ceph_vg0"
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:        }
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:    ],
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:    "1": [
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:        {
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "devices": [
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "/dev/loop4"
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            ],
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "lv_name": "ceph_lv1",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "lv_size": "21470642176",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "name": "ceph_lv1",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "tags": {
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.cluster_name": "ceph",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.crush_device_class": "",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.encrypted": "0",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.osd_id": "1",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.type": "block",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.vdo": "0"
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            },
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "type": "block",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "vg_name": "ceph_vg1"
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:        }
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:    ],
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:    "2": [
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:        {
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "devices": [
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "/dev/loop5"
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            ],
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "lv_name": "ceph_lv2",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "lv_size": "21470642176",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "name": "ceph_lv2",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "tags": {
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.cluster_name": "ceph",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.crush_device_class": "",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.encrypted": "0",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.osd_id": "2",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.type": "block",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:                "ceph.vdo": "0"
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            },
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "type": "block",
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:            "vg_name": "ceph_vg2"
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:        }
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]:    ]
Nov 22 04:55:28 np0005532048 flamboyant_brown[413227]: }
Nov 22 04:55:28 np0005532048 systemd[1]: libpod-c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f.scope: Deactivated successfully.
Nov 22 04:55:28 np0005532048 conmon[413227]: conmon c60ff675507505e6acd0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f.scope/container/memory.events
Nov 22 04:55:28 np0005532048 podman[413211]: 2025-11-22 09:55:28.675128072 +0000 UTC m=+1.044277124 container died c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 04:55:28 np0005532048 systemd[1]: var-lib-containers-storage-overlay-3a30859f75cbe352bf3ad9caa5c12bd8346009716166caeede807b513c8ad1b6-merged.mount: Deactivated successfully.
Nov 22 04:55:28 np0005532048 podman[413211]: 2025-11-22 09:55:28.763031991 +0000 UTC m=+1.132181043 container remove c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:55:28 np0005532048 systemd[1]: libpod-conmon-c60ff675507505e6acd02d4499ac2e7a56331da38c8770f33953bea5dc92dc5f.scope: Deactivated successfully.
Nov 22 04:55:28 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 301 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:55:29 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:29 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 301 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:29 np0005532048 podman[413389]: 2025-11-22 09:55:29.439549767 +0000 UTC m=+0.045785872 container create ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:55:29 np0005532048 systemd[1]: Started libpod-conmon-ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743.scope.
Nov 22 04:55:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:29.497+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:29 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:55:29 np0005532048 podman[413389]: 2025-11-22 09:55:29.417367744 +0000 UTC m=+0.023603879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:55:29 np0005532048 podman[413389]: 2025-11-22 09:55:29.521337761 +0000 UTC m=+0.127573886 container init ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 04:55:29 np0005532048 podman[413389]: 2025-11-22 09:55:29.530400841 +0000 UTC m=+0.136636946 container start ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 04:55:29 np0005532048 podman[413389]: 2025-11-22 09:55:29.533955261 +0000 UTC m=+0.140191396 container attach ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:55:29 np0005532048 nifty_margulis[413405]: 167 167
Nov 22 04:55:29 np0005532048 systemd[1]: libpod-ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743.scope: Deactivated successfully.
Nov 22 04:55:29 np0005532048 podman[413389]: 2025-11-22 09:55:29.536017733 +0000 UTC m=+0.142253858 container died ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:55:29 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d4b0a6fdbd65baa9983a685c607189bc84610d972a27d106072bc05cddcc0b9d-merged.mount: Deactivated successfully.
Nov 22 04:55:29 np0005532048 podman[413389]: 2025-11-22 09:55:29.57020761 +0000 UTC m=+0.176443715 container remove ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:55:29 np0005532048 systemd[1]: libpod-conmon-ccc692fd752b4076de53d63bcc4b0a2c96e5a5314df8bec7423e697e0fe98743.scope: Deactivated successfully.
Nov 22 04:55:29 np0005532048 podman[413428]: 2025-11-22 09:55:29.802492441 +0000 UTC m=+0.071513915 container create b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:55:29 np0005532048 systemd[1]: Started libpod-conmon-b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4.scope.
Nov 22 04:55:29 np0005532048 podman[413428]: 2025-11-22 09:55:29.779858357 +0000 UTC m=+0.048879811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:55:29 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:55:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8a74559e4ab93d84dc2f47999c5afd20f3d65e10af0421df0df0cf26f477c50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:55:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8a74559e4ab93d84dc2f47999c5afd20f3d65e10af0421df0df0cf26f477c50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:55:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8a74559e4ab93d84dc2f47999c5afd20f3d65e10af0421df0df0cf26f477c50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:55:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8a74559e4ab93d84dc2f47999c5afd20f3d65e10af0421df0df0cf26f477c50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:55:29 np0005532048 podman[413428]: 2025-11-22 09:55:29.902983689 +0000 UTC m=+0.172005143 container init b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 04:55:29 np0005532048 podman[413428]: 2025-11-22 09:55:29.909666139 +0000 UTC m=+0.178687573 container start b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 04:55:29 np0005532048 podman[413428]: 2025-11-22 09:55:29.914084391 +0000 UTC m=+0.183105825 container attach b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:55:30 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 0 B/s wr, 73 op/s
Nov 22 04:55:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:30.542+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]: {
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:        "osd_id": 1,
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:        "type": "bluestore"
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:    },
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:        "osd_id": 0,
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:        "type": "bluestore"
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:    },
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:        "osd_id": 2,
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:        "type": "bluestore"
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]:    }
Nov 22 04:55:30 np0005532048 pedantic_hertz[413444]: }
Nov 22 04:55:30 np0005532048 systemd[1]: libpod-b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4.scope: Deactivated successfully.
Nov 22 04:55:30 np0005532048 podman[413477]: 2025-11-22 09:55:30.943371753 +0000 UTC m=+0.024216805 container died b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:55:30 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c8a74559e4ab93d84dc2f47999c5afd20f3d65e10af0421df0df0cf26f477c50-merged.mount: Deactivated successfully.
Nov 22 04:55:30 np0005532048 podman[413477]: 2025-11-22 09:55:30.997019253 +0000 UTC m=+0.077864285 container remove b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 04:55:31 np0005532048 systemd[1]: libpod-conmon-b4fe6d50ebed2714a0317ddda7296ba20665206bb991b8a2f1c36b08cae3dfc4.scope: Deactivated successfully.
Nov 22 04:55:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:55:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:55:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:55:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:55:31 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d38c747c-c60e-4d94-b1d1-252751066413 does not exist
Nov 22 04:55:31 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev f980b5e6-355e-4ffd-9bb5-b1ba0b615f6a does not exist
Nov 22 04:55:31 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:31 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:55:31 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:55:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:31.495+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:31 np0005532048 nova_compute[253661]: 2025-11-22 09:55:31.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:32 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 0 B/s wr, 73 op/s
Nov 22 04:55:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:32.510+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:32 np0005532048 nova_compute[253661]: 2025-11-22 09:55:32.974 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:33 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:33.478+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:33 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 306 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:55:34 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:34 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 306 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 0 B/s wr, 80 op/s
Nov 22 04:55:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:34.516+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:35 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:35.556+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 04:55:36 np0005532048 nova_compute[253661]: 2025-11-22 09:55:36.481 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763805321.4796615, 027bdffc-9e8e-4a33-9b06-844890912dc9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:55:36 np0005532048 nova_compute[253661]: 2025-11-22 09:55:36.481 253665 INFO nova.compute.manager [-] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:55:36 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:36 np0005532048 nova_compute[253661]: 2025-11-22 09:55:36.503 253665 DEBUG nova.compute.manager [None req-d28dab39-4da1-4624-982e-9b8a52211a84 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:55:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:36.508+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:36 np0005532048 nova_compute[253661]: 2025-11-22 09:55:36.510 253665 DEBUG nova.compute.manager [None req-d28dab39-4da1-4624-982e-9b8a52211a84 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:55:36 np0005532048 nova_compute[253661]: 2025-11-22 09:55:36.534 253665 INFO nova.compute.manager [None req-d28dab39-4da1-4624-982e-9b8a52211a84 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Nov 22 04:55:36 np0005532048 nova_compute[253661]: 2025-11-22 09:55:36.560 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:37.541+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.663482) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805337663624, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 2183, "num_deletes": 251, "total_data_size": 2852217, "memory_usage": 2907632, "flush_reason": "Manual Compaction"}
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805337713307, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 2786034, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57072, "largest_seqno": 59254, "table_properties": {"data_size": 2776888, "index_size": 5319, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23348, "raw_average_key_size": 21, "raw_value_size": 2756754, "raw_average_value_size": 2522, "num_data_blocks": 233, "num_entries": 1093, "num_filter_entries": 1093, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805167, "oldest_key_time": 1763805167, "file_creation_time": 1763805337, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 49946 microseconds, and 12080 cpu microseconds.
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.713448) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 2786034 bytes OK
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.713476) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.715590) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.715609) EVENT_LOG_v1 {"time_micros": 1763805337715602, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.715632) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 2842709, prev total WAL file size 2842709, number of live WAL files 2.
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.716786) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(2720KB)], [134(9376KB)]
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805337716828, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 12387072, "oldest_snapshot_seqno": -1}
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 8297 keys, 10648309 bytes, temperature: kUnknown
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805337929468, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 10648309, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10593817, "index_size": 32619, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20805, "raw_key_size": 216964, "raw_average_key_size": 26, "raw_value_size": 10446658, "raw_average_value_size": 1259, "num_data_blocks": 1263, "num_entries": 8297, "num_filter_entries": 8297, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805337, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:55:37 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:55:37 np0005532048 nova_compute[253661]: 2025-11-22 09:55:37.976 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:38 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.929765) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 10648309 bytes
Nov 22 04:55:38 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:38.124871) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 58.2 rd, 50.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 9.2 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(8.3) write-amplify(3.8) OK, records in: 8811, records dropped: 514 output_compression: NoCompression
Nov 22 04:55:38 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:38.124928) EVENT_LOG_v1 {"time_micros": 1763805338124908, "job": 82, "event": "compaction_finished", "compaction_time_micros": 212724, "compaction_time_cpu_micros": 40225, "output_level": 6, "num_output_files": 1, "total_output_size": 10648309, "num_input_records": 8811, "num_output_records": 8297, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:55:38 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:55:38 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805338125914, "job": 82, "event": "table_file_deletion", "file_number": 136}
Nov 22 04:55:38 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:55:38 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805338128127, "job": 82, "event": "table_file_deletion", "file_number": 134}
Nov 22 04:55:38 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:37.716720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:55:38 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:38.128218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:55:38 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:38.128224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:55:38 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:38.128226) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:55:38 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:38.128228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:55:38 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:55:38.128229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:55:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 66 op/s
Nov 22 04:55:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:38.553+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:38 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:38 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 316 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:55:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:39.515+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:39 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:39 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 316 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 04:55:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:40.511+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:40 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:41 np0005532048 nova_compute[253661]: 2025-11-22 09:55:41.564 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:41 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 04:55:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:42.535+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:42 np0005532048 nova_compute[253661]: 2025-11-22 09:55:42.977 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:43.550+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:43 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:55:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 04:55:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:44.558+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 321 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:44 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:45 np0005532048 podman[413597]: 2025-11-22 09:55:45.38452321 +0000 UTC m=+0.068536749 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 22 04:55:45 np0005532048 podman[413598]: 2025-11-22 09:55:45.38454058 +0000 UTC m=+0.068093128 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 04:55:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:45.540+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:45 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:45 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 321 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 04:55:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:46.507+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:46 np0005532048 nova_compute[253661]: 2025-11-22 09:55:46.591 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:46 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:47.475+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:47 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:48 np0005532048 nova_compute[253661]: 2025-11-22 09:55:48.024 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:55:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 04:55:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:48.468+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:48 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:55:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:49.516+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:49 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 04:55:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:50.497+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:50 np0005532048 ovn_controller[152872]: 2025-11-22T09:55:50Z|01649|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Nov 22 04:55:50 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:51.490+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:51 np0005532048 nova_compute[253661]: 2025-11-22 09:55:51.596 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:55:51 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:52 np0005532048 nova_compute[253661]: 2025-11-22 09:55:52.247 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:55:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:55:52
Nov 22 04:55:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:55:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:55:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'default.rgw.log', '.rgw.root', 'default.rgw.meta']
Nov 22 04:55:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:55:52 np0005532048 podman[413672]: 2025-11-22 09:55:52.391454952 +0000 UTC m=+0.079638071 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 22 04:55:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:52.441+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 04:55:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:55:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:55:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:55:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:55:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:55:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:55:52 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:53 np0005532048 nova_compute[253661]: 2025-11-22 09:55:53.024 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:55:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:53.403+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:53 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 326 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:55:53 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:53 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 326 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:54 np0005532048 nova_compute[253661]: 2025-11-22 09:55:54.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:55:54 np0005532048 nova_compute[253661]: 2025-11-22 09:55:54.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:55:54 np0005532048 nova_compute[253661]: 2025-11-22 09:55:54.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:55:54 np0005532048 nova_compute[253661]: 2025-11-22 09:55:54.252 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 22 04:55:54 np0005532048 nova_compute[253661]: 2025-11-22 09:55:54.252 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:55:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:54.390+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 04:55:54 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:55 np0005532048 nova_compute[253661]: 2025-11-22 09:55:55.243 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:55:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:55.431+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:55:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:55:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:55:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:55:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:55:55 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 04:55:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:56.454+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:56 np0005532048 nova_compute[253661]: 2025-11-22 09:55:56.599 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:55:56 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:57 np0005532048 nova_compute[253661]: 2025-11-22 09:55:57.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:55:57 np0005532048 nova_compute[253661]: 2025-11-22 09:55:57.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:55:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:55:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:55:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:55:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:55:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:55:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:57.485+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:57 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:58 np0005532048 nova_compute[253661]: 2025-11-22 09:55:58.062 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:55:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 04:55:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:58.525+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:58 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 331 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:55:58 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:58 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 331 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:55:59 np0005532048 nova_compute[253661]: 2025-11-22 09:55:59.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:55:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:55:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:55:59.491+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:55:59 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 04:56:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:00.471+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:00 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:01 np0005532048 nova_compute[253661]: 2025-11-22 09:56:01.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:56:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:01.448+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:01 np0005532048 nova_compute[253661]: 2025-11-22 09:56:01.633 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:56:01 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:02.437+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 04:56:03 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:03 np0005532048 nova_compute[253661]: 2025-11-22 09:56:03.064 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007606720469492739 of space, bias 1.0, pg target 0.22820161408478218 quantized to 32 (current 32)
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:56:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:56:03 np0005532048 nova_compute[253661]: 2025-11-22 09:56:03.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:56:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:03.462+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:03 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 336 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:56:04 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:04 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 336 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:04 np0005532048 nova_compute[253661]: 2025-11-22 09:56:04.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:56:04 np0005532048 nova_compute[253661]: 2025-11-22 09:56:04.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:56:04 np0005532048 nova_compute[253661]: 2025-11-22 09:56:04.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:56:04 np0005532048 nova_compute[253661]: 2025-11-22 09:56:04.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:56:04 np0005532048 nova_compute[253661]: 2025-11-22 09:56:04.254 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:56:04 np0005532048 nova_compute[253661]: 2025-11-22 09:56:04.254 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:56:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 04:56:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:04.462+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:56:04 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4201112681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:56:04 np0005532048 nova_compute[253661]: 2025-11-22 09:56:04.694 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:56:04 np0005532048 nova_compute[253661]: 2025-11-22 09:56:04.770 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:56:04 np0005532048 nova_compute[253661]: 2025-11-22 09:56:04.770 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:56:04 np0005532048 nova_compute[253661]: 2025-11-22 09:56:04.932 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:56:04 np0005532048 nova_compute[253661]: 2025-11-22 09:56:04.934 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3522MB free_disk=59.942649841308594GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:56:04 np0005532048 nova_compute[253661]: 2025-11-22 09:56:04.934 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:56:04 np0005532048 nova_compute[253661]: 2025-11-22 09:56:04.934 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:56:05 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:05 np0005532048 nova_compute[253661]: 2025-11-22 09:56:05.157 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance 027bdffc-9e8e-4a33-9b06-844890912dc9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:56:05 np0005532048 nova_compute[253661]: 2025-11-22 09:56:05.157 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:56:05 np0005532048 nova_compute[253661]: 2025-11-22 09:56:05.158 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:56:05 np0005532048 nova_compute[253661]: 2025-11-22 09:56:05.281 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:56:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:05.444+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:56:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/303169481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:56:05 np0005532048 nova_compute[253661]: 2025-11-22 09:56:05.696 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:56:05 np0005532048 nova_compute[253661]: 2025-11-22 09:56:05.703 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:56:05 np0005532048 nova_compute[253661]: 2025-11-22 09:56:05.717 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:56:05 np0005532048 nova_compute[253661]: 2025-11-22 09:56:05.865 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:56:05 np0005532048 nova_compute[253661]: 2025-11-22 09:56:05.866 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:56:06 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:06.401+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 04:56:06 np0005532048 nova_compute[253661]: 2025-11-22 09:56:06.640 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:06 np0005532048 nova_compute[253661]: 2025-11-22 09:56:06.867 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:56:07 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:07 np0005532048 nova_compute[253661]: 2025-11-22 09:56:07.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:56:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:07.373+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:07 np0005532048 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 22 04:56:07 np0005532048 systemd[1]: virtsecretd.service: Consumed 1.223s CPU time.
Nov 22 04:56:08 np0005532048 nova_compute[253661]: 2025-11-22 09:56:08.066 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:08 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:08.395+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 04:56:08 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 341 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:56:09 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:09 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 341 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:09.414+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:10 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:10 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:10.404+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 04:56:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:11.383+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:11 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:11 np0005532048 nova_compute[253661]: 2025-11-22 09:56:11.694 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:12.371+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:56:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2673727272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:56:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:56:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2673727272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:56:12 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 04:56:13 np0005532048 nova_compute[253661]: 2025-11-22 09:56:13.113 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:13.378+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:13 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:13 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 346 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:56:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:14.378+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 04:56:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:15.383+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:15 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:15 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 346 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:16.334+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:16 np0005532048 podman[413816]: 2025-11-22 09:56:16.356206854 +0000 UTC m=+0.051642910 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 04:56:16 np0005532048 podman[413815]: 2025-11-22 09:56:16.37853162 +0000 UTC m=+0.075574858 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 04:56:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 04:56:16 np0005532048 nova_compute[253661]: 2025-11-22 09:56:16.750 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:16 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:16 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:17.334+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:18 np0005532048 nova_compute[253661]: 2025-11-22 09:56:18.114 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:18 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:18.319+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 23 op/s
Nov 22 04:56:18 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 351 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:56:19 np0005532048 nova_compute[253661]: 2025-11-22 09:56:19.038 253665 INFO nova.virt.libvirt.driver [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Deleting instance files /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9_del#033[00m
Nov 22 04:56:19 np0005532048 nova_compute[253661]: 2025-11-22 09:56:19.039 253665 INFO nova.virt.libvirt.driver [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Deletion of /var/lib/nova/instances/027bdffc-9e8e-4a33-9b06-844890912dc9_del complete#033[00m
Nov 22 04:56:19 np0005532048 nova_compute[253661]: 2025-11-22 09:56:19.231 253665 INFO nova.compute.manager [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Took 57.99 seconds to destroy the instance on the hypervisor.#033[00m
Nov 22 04:56:19 np0005532048 nova_compute[253661]: 2025-11-22 09:56:19.232 253665 DEBUG oslo.service.loopingcall [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 22 04:56:19 np0005532048 nova_compute[253661]: 2025-11-22 09:56:19.234 253665 DEBUG nova.compute.manager [-] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 22 04:56:19 np0005532048 nova_compute[253661]: 2025-11-22 09:56:19.234 253665 DEBUG nova.network.neutron [-] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 22 04:56:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:19.281+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:19 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:19 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:19 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 351 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:20.276+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:20 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 16 op/s
Nov 22 04:56:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:21.304+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:21 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:21 np0005532048 nova_compute[253661]: 2025-11-22 09:56:21.755 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:22.282+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:22 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:56:22.406 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:56:22 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:56:22.407 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:56:22 np0005532048 nova_compute[253661]: 2025-11-22 09:56:22.433 253665 DEBUG nova.network.neutron [-] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:56:22 np0005532048 nova_compute[253661]: 2025-11-22 09:56:22.456 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 16 op/s
Nov 22 04:56:22 np0005532048 nova_compute[253661]: 2025-11-22 09:56:22.489 253665 DEBUG nova.compute.manager [req-29407488-4d86-4589-bc36-cb436d054e4d req-77fa5bd1-fe6d-46e8-9f47-0310fc9eaa3d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Received event network-vif-deleted-62358b95-9f4a-404c-8165-dc98c7e3b042 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 22 04:56:22 np0005532048 nova_compute[253661]: 2025-11-22 09:56:22.490 253665 INFO nova.compute.manager [req-29407488-4d86-4589-bc36-cb436d054e4d req-77fa5bd1-fe6d-46e8-9f47-0310fc9eaa3d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Neutron deleted interface 62358b95-9f4a-404c-8165-dc98c7e3b042; detaching it from the instance and deleting it from the info cache#033[00m
Nov 22 04:56:22 np0005532048 nova_compute[253661]: 2025-11-22 09:56:22.490 253665 DEBUG nova.network.neutron [req-29407488-4d86-4589-bc36-cb436d054e4d req-77fa5bd1-fe6d-46e8-9f47-0310fc9eaa3d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 22 04:56:22 np0005532048 nova_compute[253661]: 2025-11-22 09:56:22.544 253665 INFO nova.compute.manager [-] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Took 3.31 seconds to deallocate network for instance.#033[00m
Nov 22 04:56:22 np0005532048 nova_compute[253661]: 2025-11-22 09:56:22.551 253665 DEBUG nova.compute.manager [req-29407488-4d86-4589-bc36-cb436d054e4d req-77fa5bd1-fe6d-46e8-9f47-0310fc9eaa3d 7021da3f689a42549d918cbddb330ab8 77774d23b76e426d870c9cea132a91e4 - - default default] [instance: 027bdffc-9e8e-4a33-9b06-844890912dc9] Detach interface failed, port_id=62358b95-9f4a-404c-8165-dc98c7e3b042, reason: Instance 027bdffc-9e8e-4a33-9b06-844890912dc9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 22 04:56:22 np0005532048 nova_compute[253661]: 2025-11-22 09:56:22.632 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:56:22 np0005532048 nova_compute[253661]: 2025-11-22 09:56:22.632 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:56:22 np0005532048 nova_compute[253661]: 2025-11-22 09:56:22.688 253665 DEBUG oslo_concurrency.processutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:56:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:56:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:56:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:56:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:56:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:56:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:56:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:56:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1811789837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:56:23 np0005532048 nova_compute[253661]: 2025-11-22 09:56:23.104 253665 DEBUG oslo_concurrency.processutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:56:23 np0005532048 nova_compute[253661]: 2025-11-22 09:56:23.111 253665 DEBUG nova.compute.provider_tree [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:56:23 np0005532048 nova_compute[253661]: 2025-11-22 09:56:23.115 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:23 np0005532048 nova_compute[253661]: 2025-11-22 09:56:23.128 253665 DEBUG nova.scheduler.client.report [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:56:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:23.236+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:23 np0005532048 nova_compute[253661]: 2025-11-22 09:56:23.351 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:56:23 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:23 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:23 np0005532048 podman[413912]: 2025-11-22 09:56:23.378601257 +0000 UTC m=+0.078104972 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 04:56:23 np0005532048 nova_compute[253661]: 2025-11-22 09:56:23.776 253665 INFO nova.scheduler.client.report [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Deleted allocations for instance 027bdffc-9e8e-4a33-9b06-844890912dc9#033[00m
Nov 22 04:56:23 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 356 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:56:24 np0005532048 nova_compute[253661]: 2025-11-22 09:56:24.017 253665 DEBUG oslo_concurrency.lockutils [None req-515d6c34-706f-4cdd-a728-959e11656606 1edb692a8ff443038839784febd964b1 6ffacc46512445d8b5c24899a0053196 - - default default] Lock "027bdffc-9e8e-4a33-9b06-844890912dc9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 62.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:56:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:24.193+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:24 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 356 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:24 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 597 B/s wr, 33 op/s
Nov 22 04:56:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:25.199+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:25 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:26.225+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:26 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 597 B/s wr, 26 op/s
Nov 22 04:56:26 np0005532048 nova_compute[253661]: 2025-11-22 09:56:26.758 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:27.204+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:27 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:27 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:56:27.998 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:56:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:56:27.999 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:56:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:56:27.999 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:56:28 np0005532048 nova_compute[253661]: 2025-11-22 09:56:28.163 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:28.247+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:28 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:56:28.410 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:56:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 597 B/s wr, 26 op/s
Nov 22 04:56:28 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 361 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:56:29 np0005532048 nova_compute[253661]: 2025-11-22 09:56:29.000 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:29 np0005532048 nova_compute[253661]: 2025-11-22 09:56:29.122 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:29.262+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:29 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 361 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:29 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:30.246+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:30 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 597 B/s wr, 16 op/s
Nov 22 04:56:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:31.270+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:31 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:31 np0005532048 nova_compute[253661]: 2025-11-22 09:56:31.802 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:56:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:56:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:56:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:56:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:56:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:56:31 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 3e7402e4-3a4a-4226-ac0f-0b8ef682f6f9 does not exist
Nov 22 04:56:31 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 6cc49166-2612-4ea1-b232-b023f80bdccc does not exist
Nov 22 04:56:31 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 442e7215-c2fd-4fc2-8175-2915a963fcbc does not exist
Nov 22 04:56:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:56:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:56:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:56:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:56:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:56:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:56:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:32.224+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:56:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:56:32 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:56:32 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 597 B/s wr, 16 op/s
Nov 22 04:56:32 np0005532048 podman[414212]: 2025-11-22 09:56:32.569993504 +0000 UTC m=+0.040312943 container create b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermat, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:56:32 np0005532048 systemd[1]: Started libpod-conmon-b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03.scope.
Nov 22 04:56:32 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:56:32 np0005532048 podman[414212]: 2025-11-22 09:56:32.64753785 +0000 UTC m=+0.117857209 container init b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermat, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 04:56:32 np0005532048 podman[414212]: 2025-11-22 09:56:32.551733641 +0000 UTC m=+0.022053020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:56:32 np0005532048 podman[414212]: 2025-11-22 09:56:32.656268102 +0000 UTC m=+0.126587461 container start b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:56:32 np0005532048 podman[414212]: 2025-11-22 09:56:32.659524304 +0000 UTC m=+0.129843663 container attach b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermat, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 04:56:32 np0005532048 dazzling_fermat[414228]: 167 167
Nov 22 04:56:32 np0005532048 systemd[1]: libpod-b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03.scope: Deactivated successfully.
Nov 22 04:56:32 np0005532048 podman[414212]: 2025-11-22 09:56:32.663435063 +0000 UTC m=+0.133754452 container died b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermat, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 04:56:32 np0005532048 systemd[1]: var-lib-containers-storage-overlay-227735bb3029afc857e96e406b24bddec5636afe38884ddb9af5a801bd6c54d7-merged.mount: Deactivated successfully.
Nov 22 04:56:32 np0005532048 podman[414212]: 2025-11-22 09:56:32.70666019 +0000 UTC m=+0.176979549 container remove b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermat, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:56:32 np0005532048 systemd[1]: libpod-conmon-b3e17c660f9371be64ef626ac365415906c64019ea9327cd7d96cc858b9cca03.scope: Deactivated successfully.
Nov 22 04:56:32 np0005532048 podman[414253]: 2025-11-22 09:56:32.875934772 +0000 UTC m=+0.049677250 container create c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:56:32 np0005532048 systemd[1]: Started libpod-conmon-c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29.scope.
Nov 22 04:56:32 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:56:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113243c704264216b5745f81126f0b02451e6192ad635cf7913141582653740f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:56:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113243c704264216b5745f81126f0b02451e6192ad635cf7913141582653740f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:56:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113243c704264216b5745f81126f0b02451e6192ad635cf7913141582653740f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:56:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113243c704264216b5745f81126f0b02451e6192ad635cf7913141582653740f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:56:32 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113243c704264216b5745f81126f0b02451e6192ad635cf7913141582653740f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:56:32 np0005532048 podman[414253]: 2025-11-22 09:56:32.855890864 +0000 UTC m=+0.029633362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:56:32 np0005532048 podman[414253]: 2025-11-22 09:56:32.95036196 +0000 UTC m=+0.124104468 container init c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:56:32 np0005532048 podman[414253]: 2025-11-22 09:56:32.959510122 +0000 UTC m=+0.133252600 container start c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 22 04:56:32 np0005532048 podman[414253]: 2025-11-22 09:56:32.964874198 +0000 UTC m=+0.138616726 container attach c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:56:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:33.197+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:33 np0005532048 nova_compute[253661]: 2025-11-22 09:56:33.199 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:33 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:33 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 366 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:56:34 np0005532048 musing_faraday[414269]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:56:34 np0005532048 musing_faraday[414269]: --> relative data size: 1.0
Nov 22 04:56:34 np0005532048 musing_faraday[414269]: --> All data devices are unavailable
Nov 22 04:56:34 np0005532048 systemd[1]: libpod-c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29.scope: Deactivated successfully.
Nov 22 04:56:34 np0005532048 systemd[1]: libpod-c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29.scope: Consumed 1.065s CPU time.
Nov 22 04:56:34 np0005532048 podman[414253]: 2025-11-22 09:56:34.069011468 +0000 UTC m=+1.242753976 container died c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:56:34 np0005532048 systemd[1]: var-lib-containers-storage-overlay-113243c704264216b5745f81126f0b02451e6192ad635cf7913141582653740f-merged.mount: Deactivated successfully.
Nov 22 04:56:34 np0005532048 podman[414253]: 2025-11-22 09:56:34.131500783 +0000 UTC m=+1.305243261 container remove c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:56:34 np0005532048 systemd[1]: libpod-conmon-c01c1f67b97524c5c106e1d8b0f714afd0ee38f9ccc5cc99ebc0dc46fab11c29.scope: Deactivated successfully.
Nov 22 04:56:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:34.158+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:34 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 366 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:34 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 597 B/s wr, 16 op/s
Nov 22 04:56:34 np0005532048 podman[414451]: 2025-11-22 09:56:34.772803486 +0000 UTC m=+0.043267608 container create 6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:56:34 np0005532048 systemd[1]: Started libpod-conmon-6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b.scope.
Nov 22 04:56:34 np0005532048 podman[414451]: 2025-11-22 09:56:34.754881411 +0000 UTC m=+0.025345553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:56:34 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:56:34 np0005532048 podman[414451]: 2025-11-22 09:56:34.870299348 +0000 UTC m=+0.140763490 container init 6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 04:56:34 np0005532048 podman[414451]: 2025-11-22 09:56:34.881242046 +0000 UTC m=+0.151706168 container start 6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 04:56:34 np0005532048 podman[414451]: 2025-11-22 09:56:34.885130584 +0000 UTC m=+0.155594766 container attach 6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:56:34 np0005532048 naughty_ellis[414467]: 167 167
Nov 22 04:56:34 np0005532048 systemd[1]: libpod-6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b.scope: Deactivated successfully.
Nov 22 04:56:34 np0005532048 podman[414451]: 2025-11-22 09:56:34.890542962 +0000 UTC m=+0.161007094 container died 6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 04:56:34 np0005532048 systemd[1]: var-lib-containers-storage-overlay-65214dd0e8399f487b5c6190ec07f4d528e8f7aef7e11ab40341329cd2c0b65f-merged.mount: Deactivated successfully.
Nov 22 04:56:34 np0005532048 podman[414451]: 2025-11-22 09:56:34.939986215 +0000 UTC m=+0.210450337 container remove 6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ellis, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:56:34 np0005532048 systemd[1]: libpod-conmon-6a533411a45195ecf05575cf6a0b2e9f95b84f3d7d652fbc0236137f8a5f762b.scope: Deactivated successfully.
Nov 22 04:56:35 np0005532048 podman[414490]: 2025-11-22 09:56:35.117325283 +0000 UTC m=+0.038620441 container create 7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 04:56:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:35.119+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:35 np0005532048 systemd[1]: Started libpod-conmon-7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f.scope.
Nov 22 04:56:35 np0005532048 podman[414490]: 2025-11-22 09:56:35.10143924 +0000 UTC m=+0.022734418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:56:35 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:56:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ced7fd19fb6072e1f4c8edff2e5872f8c429eb1f133c55ed66a556a5a29a59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:56:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ced7fd19fb6072e1f4c8edff2e5872f8c429eb1f133c55ed66a556a5a29a59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:56:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ced7fd19fb6072e1f4c8edff2e5872f8c429eb1f133c55ed66a556a5a29a59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:56:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ced7fd19fb6072e1f4c8edff2e5872f8c429eb1f133c55ed66a556a5a29a59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:56:35 np0005532048 podman[414490]: 2025-11-22 09:56:35.219791721 +0000 UTC m=+0.141086899 container init 7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bartik, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 04:56:35 np0005532048 podman[414490]: 2025-11-22 09:56:35.226505662 +0000 UTC m=+0.147800810 container start 7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bartik, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 04:56:35 np0005532048 podman[414490]: 2025-11-22 09:56:35.229816245 +0000 UTC m=+0.151111423 container attach 7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bartik, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:56:35 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:35 np0005532048 epic_bartik[414506]: {
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:    "0": [
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:        {
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "devices": [
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "/dev/loop3"
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            ],
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "lv_name": "ceph_lv0",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "lv_size": "21470642176",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "name": "ceph_lv0",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "tags": {
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.cluster_name": "ceph",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.crush_device_class": "",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.encrypted": "0",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.osd_id": "0",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.type": "block",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.vdo": "0"
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            },
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "type": "block",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "vg_name": "ceph_vg0"
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:        }
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:    ],
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:    "1": [
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:        {
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "devices": [
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "/dev/loop4"
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            ],
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "lv_name": "ceph_lv1",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "lv_size": "21470642176",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "name": "ceph_lv1",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "tags": {
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.cluster_name": "ceph",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.crush_device_class": "",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.encrypted": "0",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.osd_id": "1",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.type": "block",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.vdo": "0"
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            },
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "type": "block",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "vg_name": "ceph_vg1"
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:        }
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:    ],
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:    "2": [
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:        {
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "devices": [
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "/dev/loop5"
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            ],
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "lv_name": "ceph_lv2",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "lv_size": "21470642176",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "name": "ceph_lv2",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "tags": {
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.cluster_name": "ceph",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.crush_device_class": "",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.encrypted": "0",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.osd_id": "2",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.type": "block",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:                "ceph.vdo": "0"
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            },
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "type": "block",
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:            "vg_name": "ceph_vg2"
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:        }
Nov 22 04:56:35 np0005532048 epic_bartik[414506]:    ]
Nov 22 04:56:35 np0005532048 epic_bartik[414506]: }
Nov 22 04:56:35 np0005532048 systemd[1]: libpod-7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f.scope: Deactivated successfully.
Nov 22 04:56:35 np0005532048 podman[414490]: 2025-11-22 09:56:35.972963141 +0000 UTC m=+0.894258299 container died 7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bartik, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 04:56:36 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e6ced7fd19fb6072e1f4c8edff2e5872f8c429eb1f133c55ed66a556a5a29a59-merged.mount: Deactivated successfully.
Nov 22 04:56:36 np0005532048 podman[414490]: 2025-11-22 09:56:36.03874955 +0000 UTC m=+0.960044708 container remove 7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bartik, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 22 04:56:36 np0005532048 systemd[1]: libpod-conmon-7de3a1f2b7c9de76343ed75a53f50151c1bcebc7593a5f9ab5f5af7dccc3d56f.scope: Deactivated successfully.
Nov 22 04:56:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:36.136+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:56:36 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:36 np0005532048 podman[414669]: 2025-11-22 09:56:36.666423717 +0000 UTC m=+0.041670557 container create 913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:56:36 np0005532048 systemd[1]: Started libpod-conmon-913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0.scope.
Nov 22 04:56:36 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:56:36 np0005532048 podman[414669]: 2025-11-22 09:56:36.650097793 +0000 UTC m=+0.025344663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:56:36 np0005532048 podman[414669]: 2025-11-22 09:56:36.756010909 +0000 UTC m=+0.131257769 container init 913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:56:36 np0005532048 podman[414669]: 2025-11-22 09:56:36.769285026 +0000 UTC m=+0.144531866 container start 913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_solomon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 04:56:36 np0005532048 podman[414669]: 2025-11-22 09:56:36.772966749 +0000 UTC m=+0.148213789 container attach 913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_solomon, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 04:56:36 np0005532048 youthful_solomon[414686]: 167 167
Nov 22 04:56:36 np0005532048 systemd[1]: libpod-913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0.scope: Deactivated successfully.
Nov 22 04:56:36 np0005532048 podman[414669]: 2025-11-22 09:56:36.778885469 +0000 UTC m=+0.154132309 container died 913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 04:56:36 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4fc35e9ea41ba6dd7acae83ea7e8f3b60cb35d97155683a028fa0dae6e161ec4-merged.mount: Deactivated successfully.
Nov 22 04:56:36 np0005532048 nova_compute[253661]: 2025-11-22 09:56:36.842 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:36 np0005532048 podman[414669]: 2025-11-22 09:56:36.855679246 +0000 UTC m=+0.230926086 container remove 913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 04:56:36 np0005532048 systemd[1]: libpod-conmon-913a4cade5e0ec36a15761f416f5b4e93762e57c9debfa304b732a05cbc3f2e0.scope: Deactivated successfully.
Nov 22 04:56:37 np0005532048 podman[414709]: 2025-11-22 09:56:37.042612907 +0000 UTC m=+0.058753591 container create 664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:56:37 np0005532048 systemd[1]: Started libpod-conmon-664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd.scope.
Nov 22 04:56:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:37.094+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:37 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:56:37 np0005532048 podman[414709]: 2025-11-22 09:56:37.018731682 +0000 UTC m=+0.034872396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:56:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e010ecf02cc558e9cac1f367e18812e0df888b06060eb5e09fec2cc9ec00c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:56:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e010ecf02cc558e9cac1f367e18812e0df888b06060eb5e09fec2cc9ec00c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:56:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e010ecf02cc558e9cac1f367e18812e0df888b06060eb5e09fec2cc9ec00c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:56:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e010ecf02cc558e9cac1f367e18812e0df888b06060eb5e09fec2cc9ec00c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:56:37 np0005532048 podman[414709]: 2025-11-22 09:56:37.131187503 +0000 UTC m=+0.147328187 container init 664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 04:56:37 np0005532048 podman[414709]: 2025-11-22 09:56:37.13699569 +0000 UTC m=+0.153136354 container start 664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 04:56:37 np0005532048 podman[414709]: 2025-11-22 09:56:37.140081599 +0000 UTC m=+0.156222263 container attach 664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:56:37 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:38.115+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]: {
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:        "osd_id": 1,
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:        "type": "bluestore"
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:    },
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:        "osd_id": 0,
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:        "type": "bluestore"
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:    },
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:        "osd_id": 2,
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:        "type": "bluestore"
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]:    }
Nov 22 04:56:38 np0005532048 exciting_jepsen[414725]: }
Nov 22 04:56:38 np0005532048 systemd[1]: libpod-664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd.scope: Deactivated successfully.
Nov 22 04:56:38 np0005532048 podman[414709]: 2025-11-22 09:56:38.188709792 +0000 UTC m=+1.204850476 container died 664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 04:56:38 np0005532048 systemd[1]: libpod-664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd.scope: Consumed 1.060s CPU time.
Nov 22 04:56:38 np0005532048 nova_compute[253661]: 2025-11-22 09:56:38.202 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:38 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b9e010ecf02cc558e9cac1f367e18812e0df888b06060eb5e09fec2cc9ec00c5-merged.mount: Deactivated successfully.
Nov 22 04:56:38 np0005532048 podman[414709]: 2025-11-22 09:56:38.267689954 +0000 UTC m=+1.283830618 container remove 664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jepsen, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:56:38 np0005532048 systemd[1]: libpod-conmon-664149aa3cc7994f40663a8a493b5a6e3996bb2aad6ce5a8db438bcbf039f2fd.scope: Deactivated successfully.
Nov 22 04:56:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:56:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:56:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:56:38 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:56:38 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 403a83a7-6df9-4dbc-aac7-3471d1636031 does not exist
Nov 22 04:56:38 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev eea6a6db-a4b2-4cda-aab0-3d7a9c7686a4 does not exist
Nov 22 04:56:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:56:38 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:38 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:56:38 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:56:38 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 371 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:56:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:39.066+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:39 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 371 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:39 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:40.027+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:56:40 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:41.063+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:41 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:41 np0005532048 nova_compute[253661]: 2025-11-22 09:56:41.849 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:42.056+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:56:42 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:43.085+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:43 np0005532048 nova_compute[253661]: 2025-11-22 09:56:43.204 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:43 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:43 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 376 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:56:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:44.093+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:56:44 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 376 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:44 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:45.101+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:45 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:46.097+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:56:46 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:46 np0005532048 nova_compute[253661]: 2025-11-22 09:56:46.854 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:47.133+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:47 np0005532048 podman[414823]: 2025-11-22 09:56:47.411513775 +0000 UTC m=+0.092575809 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 22 04:56:47 np0005532048 podman[414824]: 2025-11-22 09:56:47.425749636 +0000 UTC m=+0.097749360 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:56:47 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:48.160+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:48 np0005532048 nova_compute[253661]: 2025-11-22 09:56:48.207 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:56:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:56:48 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:48 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 387 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:56:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:49.119+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:49 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 387 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:49 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:50.114+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:56:50 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:51.120+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:51 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:51 np0005532048 nova_compute[253661]: 2025-11-22 09:56:51.904 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:56:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:52.126+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:56:52
Nov 22 04:56:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:56:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:56:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['vms', '.mgr', 'volumes', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.meta']
Nov 22 04:56:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:56:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:56:52 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:56:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:56:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:56:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:56:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:56:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:56:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:53.175+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:53 np0005532048 nova_compute[253661]: 2025-11-22 09:56:53.208 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:56:53 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:56:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:54.140+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:54 np0005532048 nova_compute[253661]: 2025-11-22 09:56:54.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:56:54 np0005532048 nova_compute[253661]: 2025-11-22 09:56:54.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:56:54 np0005532048 nova_compute[253661]: 2025-11-22 09:56:54.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:56:54 np0005532048 nova_compute[253661]: 2025-11-22 09:56:54.246 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:56:54 np0005532048 nova_compute[253661]: 2025-11-22 09:56:54.246 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:56:54 np0005532048 podman[414859]: 2025-11-22 09:56:54.427769611 +0000 UTC m=+0.117703395 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 04:56:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:56:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 392 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:54 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:55.107+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:55 np0005532048 nova_compute[253661]: 2025-11-22 09:56:55.239 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:56:55 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 392 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:56:55 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:56:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:56:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:56:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:56:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:56:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:56.086+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:56:56 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:56 np0005532048 nova_compute[253661]: 2025-11-22 09:56:56.909 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:56:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:57.060+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:57 np0005532048 nova_compute[253661]: 2025-11-22 09:56:57.215 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquiring lock "a7eff414-1d1e-4670-a9ca-5477d690015b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:56:57 np0005532048 nova_compute[253661]: 2025-11-22 09:56:57.216 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "a7eff414-1d1e-4670-a9ca-5477d690015b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:56:57 np0005532048 nova_compute[253661]: 2025-11-22 09:56:57.304 253665 DEBUG nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:56:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:56:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:56:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:56:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:56:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:56:57 np0005532048 nova_compute[253661]: 2025-11-22 09:56:57.464 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:56:57 np0005532048 nova_compute[253661]: 2025-11-22 09:56:57.465 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:56:57 np0005532048 nova_compute[253661]: 2025-11-22 09:56:57.474 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:56:57 np0005532048 nova_compute[253661]: 2025-11-22 09:56:57.475 253665 INFO nova.compute.claims [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:56:57 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:57 np0005532048 nova_compute[253661]: 2025-11-22 09:56:57.964 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:56:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:58.035+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.211 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2257337994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:56:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.489 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.495 253665 DEBUG nova.compute.provider_tree [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.513 253665 DEBUG nova.scheduler.client.report [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.544 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.545 253665 DEBUG nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.608 253665 DEBUG nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.609 253665 DEBUG nova.network.neutron [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.663 253665 INFO nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.682 253665 DEBUG nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.849 253665 DEBUG nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.851 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.852 253665 INFO nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Creating image(s)
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.887 253665 DEBUG nova.storage.rbd_utils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] rbd image a7eff414-1d1e-4670-a9ca-5477d690015b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.919 253665 DEBUG nova.storage.rbd_utils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] rbd image a7eff414-1d1e-4670-a9ca-5477d690015b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:58.924553) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805418924635, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 1184, "num_deletes": 250, "total_data_size": 1344088, "memory_usage": 1377912, "flush_reason": "Manual Compaction"}
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.950 253665 DEBUG nova.storage.rbd_utils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] rbd image a7eff414-1d1e-4670-a9ca-5477d690015b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:56:58 np0005532048 nova_compute[253661]: 2025-11-22 09:56:58.955 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805418967458, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 856888, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59255, "largest_seqno": 60438, "table_properties": {"data_size": 852505, "index_size": 1714, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13468, "raw_average_key_size": 21, "raw_value_size": 842304, "raw_average_value_size": 1349, "num_data_blocks": 75, "num_entries": 624, "num_filter_entries": 624, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805338, "oldest_key_time": 1763805338, "file_creation_time": 1763805418, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 42976 microseconds, and 3410 cpu microseconds.
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:58.967538) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 856888 bytes OK
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:58.967563) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:58.981729) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:58.981789) EVENT_LOG_v1 {"time_micros": 1763805418981776, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:58.981820) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 1338502, prev total WAL file size 1338502, number of live WAL files 2.
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:58.982719) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323536' seq:72057594037927935, type:22 .. '6D6772737461740032353037' seq:0, type:0; will stop at (end)
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(836KB)], [137(10MB)]
Nov 22 04:56:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805418982761, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 11505197, "oldest_snapshot_seqno": -1}
Nov 22 04:56:59 np0005532048 nova_compute[253661]: 2025-11-22 09:56:59.012 253665 DEBUG nova.network.neutron [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 22 04:56:59 np0005532048 nova_compute[253661]: 2025-11-22 09:56:59.012 253665 DEBUG nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 22 04:56:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:56:59.041+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:56:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:56:59 np0005532048 nova_compute[253661]: 2025-11-22 09:56:59.049 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:56:59 np0005532048 nova_compute[253661]: 2025-11-22 09:56:59.050 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquiring lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:56:59 np0005532048 nova_compute[253661]: 2025-11-22 09:56:59.051 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:56:59 np0005532048 nova_compute[253661]: 2025-11-22 09:56:59.052 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "82db50257fd208421e31241f1b0ae2cc5ee8c9c4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:56:59 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 8444 keys, 8636317 bytes, temperature: kUnknown
Nov 22 04:56:59 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805419071961, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 8636317, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8584937, "index_size": 29187, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21125, "raw_key_size": 220739, "raw_average_key_size": 26, "raw_value_size": 8439245, "raw_average_value_size": 999, "num_data_blocks": 1121, "num_entries": 8444, "num_filter_entries": 8444, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805418, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:56:59 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:56:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:59.072368) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 8636317 bytes
Nov 22 04:56:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:59.075380) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.8 rd, 96.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.2 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(23.5) write-amplify(10.1) OK, records in: 8921, records dropped: 477 output_compression: NoCompression
Nov 22 04:56:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:59.075407) EVENT_LOG_v1 {"time_micros": 1763805419075394, "job": 84, "event": "compaction_finished", "compaction_time_micros": 89316, "compaction_time_cpu_micros": 26015, "output_level": 6, "num_output_files": 1, "total_output_size": 8636317, "num_input_records": 8921, "num_output_records": 8444, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:56:59 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:56:59 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805419075740, "job": 84, "event": "table_file_deletion", "file_number": 139}
Nov 22 04:56:59 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:56:59 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805419077789, "job": 84, "event": "table_file_deletion", "file_number": 137}
Nov 22 04:56:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:58.982611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:56:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:59.077849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:56:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:59.077856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:56:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:59.077859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:56:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:59.077862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:56:59 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:56:59.077864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:56:59 np0005532048 nova_compute[253661]: 2025-11-22 09:56:59.081 253665 DEBUG nova.storage.rbd_utils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] rbd image a7eff414-1d1e-4670-a9ca-5477d690015b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:56:59 np0005532048 nova_compute[253661]: 2025-11-22 09:56:59.086 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a7eff414-1d1e-4670-a9ca-5477d690015b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:56:59 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.080 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 a7eff414-1d1e-4670-a9ca-5477d690015b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.994s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.154 253665 DEBUG nova.storage.rbd_utils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] resizing rbd image a7eff414-1d1e-4670-a9ca-5477d690015b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.422 253665 DEBUG nova.objects.instance [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lazy-loading 'migration_context' on Instance uuid a7eff414-1d1e-4670-a9ca-5477d690015b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.439 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.440 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Ensure instance console log exists: /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.441 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.441 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.441 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.443 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '878156d4-57f6-4a8b-8f4c-cbde182bb832'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.446 253665 WARNING nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.452 253665 DEBUG nova.virt.libvirt.host [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.453 253665 DEBUG nova.virt.libvirt.host [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.456 253665 DEBUG nova.virt.libvirt.host [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.457 253665 DEBUG nova.virt.libvirt.host [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.457 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.458 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T09:04:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='677a0b1c-7eab-430d-9a1d-f98f8f7fc4a1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T09:04:48Z,direct_url=<?>,disk_format='qcow2',id=878156d4-57f6-4a8b-8f4c-cbde182bb832,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='eb30b774cfce4a009d1e952539a71a13',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T09:04:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.458 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.459 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.459 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.459 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.460 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.460 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.460 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.461 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.461 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.461 253665 DEBUG nova.virt.hardware [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.464 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:57:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:57:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/253056536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.919 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.944 253665 DEBUG nova.storage.rbd_utils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] rbd image a7eff414-1d1e-4670-a9ca-5477d690015b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 22 04:57:00 np0005532048 nova_compute[253661]: 2025-11-22 09:57:00.948 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:57:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:01.075+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:01 np0005532048 nova_compute[253661]: 2025-11-22 09:57:01.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:57:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:57:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1335997378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:57:01 np0005532048 nova_compute[253661]: 2025-11-22 09:57:01.422 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:57:01 np0005532048 nova_compute[253661]: 2025-11-22 09:57:01.424 253665 DEBUG nova.objects.instance [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid a7eff414-1d1e-4670-a9ca-5477d690015b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 22 04:57:01 np0005532048 nova_compute[253661]: 2025-11-22 09:57:01.442 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  <uuid>a7eff414-1d1e-4670-a9ca-5477d690015b</uuid>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  <name>instance-00000098</name>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  <memory>131072</memory>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  <vcpu>1</vcpu>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  <metadata>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <nova:name>tempest-AggregatesAdminTestJSON-server-1241444193</nova:name>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <nova:creationTime>2025-11-22 09:57:00</nova:creationTime>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <nova:flavor name="m1.nano">
Nov 22 04:57:01 np0005532048 nova_compute[253661]:        <nova:memory>128</nova:memory>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:        <nova:disk>1</nova:disk>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:        <nova:swap>0</nova:swap>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:        <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:        <nova:vcpus>1</nova:vcpus>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      </nova:flavor>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <nova:owner>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:        <nova:user uuid="c1b227b0892a49698bf98933a153ab9c">tempest-AggregatesAdminTestJSON-562303690-project-member</nova:user>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:        <nova:project uuid="1aa9495592994e239bcee4c6e795c5d2">tempest-AggregatesAdminTestJSON-562303690</nova:project>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      </nova:owner>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <nova:root type="image" uuid="878156d4-57f6-4a8b-8f4c-cbde182bb832"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <nova:ports/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    </nova:instance>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  </metadata>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  <sysinfo type="smbios">
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <system>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <entry name="manufacturer">RDO</entry>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <entry name="product">OpenStack Compute</entry>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <entry name="serial">a7eff414-1d1e-4670-a9ca-5477d690015b</entry>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <entry name="uuid">a7eff414-1d1e-4670-a9ca-5477d690015b</entry>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <entry name="family">Virtual Machine</entry>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    </system>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  </sysinfo>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  <os>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <boot dev="hd"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <smbios mode="sysinfo"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  </os>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  <features>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <acpi/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <apic/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <vmcoreinfo/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  </features>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  <clock offset="utc">
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <timer name="pit" tickpolicy="delay"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <timer name="hpet" present="no"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  </clock>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  <cpu mode="host-model" match="exact">
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  </cpu>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  <devices>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <disk type="network" device="disk">
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/a7eff414-1d1e-4670-a9ca-5477d690015b_disk">
Nov 22 04:57:01 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:57:01 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <target dev="vda" bus="virtio"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <disk type="network" device="cdrom">
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <driver type="raw" cache="none"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <source protocol="rbd" name="vms/a7eff414-1d1e-4670-a9ca-5477d690015b_disk.config">
Nov 22 04:57:01 np0005532048 nova_compute[253661]:        <host name="192.168.122.100" port="6789"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      </source>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <auth username="openstack">
Nov 22 04:57:01 np0005532048 nova_compute[253661]:        <secret type="ceph" uuid="34829716-a12c-57a6-8915-c1aa615c9d8a"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      </auth>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <target dev="sda" bus="sata"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    </disk>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <serial type="pty">
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <log file="/var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b/console.log" append="off"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    </serial>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <video>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <model type="virtio"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    </video>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <input type="tablet" bus="usb"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <rng model="virtio">
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <backend model="random">/dev/urandom</backend>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    </rng>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="pci" model="pcie-root-port"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <controller type="usb" index="0"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    <memballoon model="virtio">
Nov 22 04:57:01 np0005532048 nova_compute[253661]:      <stats period="10"/>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:    </memballoon>
Nov 22 04:57:01 np0005532048 nova_compute[253661]:  </devices>
Nov 22 04:57:01 np0005532048 nova_compute[253661]: </domain>
Nov 22 04:57:01 np0005532048 nova_compute[253661]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:57:01 np0005532048 nova_compute[253661]: 2025-11-22 09:57:01.492 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:57:01 np0005532048 nova_compute[253661]: 2025-11-22 09:57:01.493 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:57:01 np0005532048 nova_compute[253661]: 2025-11-22 09:57:01.493 253665 INFO nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Using config drive
Nov 22 04:57:01 np0005532048 nova_compute[253661]: 2025-11-22 09:57:01.511 253665 DEBUG nova.storage.rbd_utils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] rbd image a7eff414-1d1e-4670-a9ca-5477d690015b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:57:01 np0005532048 nova_compute[253661]: 2025-11-22 09:57:01.913 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:57:01 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:02.121+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:02 np0005532048 nova_compute[253661]: 2025-11-22 09:57:02.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:57:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:57:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:57:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:03.121+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:03 np0005532048 nova_compute[253661]: 2025-11-22 09:57:03.221 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:57:03 np0005532048 nova_compute[253661]: 2025-11-22 09:57:03.275 253665 INFO nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Creating config drive at /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b/disk.config
Nov 22 04:57:03 np0005532048 nova_compute[253661]: 2025-11-22 09:57:03.280 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8h1uymm8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:57:03 np0005532048 nova_compute[253661]: 2025-11-22 09:57:03.432 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8h1uymm8" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:57:03 np0005532048 nova_compute[253661]: 2025-11-22 09:57:03.466 253665 DEBUG nova.storage.rbd_utils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] rbd image a7eff414-1d1e-4670-a9ca-5477d690015b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:57:03 np0005532048 nova_compute[253661]: 2025-11-22 09:57:03.471 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b/disk.config a7eff414-1d1e-4670-a9ca-5477d690015b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:57:03 np0005532048 nova_compute[253661]: 2025-11-22 09:57:03.668 253665 DEBUG oslo_concurrency.processutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b/disk.config a7eff414-1d1e-4670-a9ca-5477d690015b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:57:03 np0005532048 nova_compute[253661]: 2025-11-22 09:57:03.669 253665 INFO nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Deleting local config drive /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b/disk.config because it was imported into RBD.
Nov 22 04:57:03 np0005532048 systemd[1]: Starting libvirt secret daemon...
Nov 22 04:57:03 np0005532048 systemd[1]: Started libvirt secret daemon.
Nov 22 04:57:03 np0005532048 systemd-machined[215941]: New machine qemu-186-instance-00000098.
Nov 22 04:57:03 np0005532048 systemd[1]: Started Virtual Machine qemu-186-instance-00000098.
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 396 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.923022) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805423923095, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 305, "num_deletes": 255, "total_data_size": 75399, "memory_usage": 81608, "flush_reason": "Manual Compaction"}
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805423927008, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 74821, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60439, "largest_seqno": 60743, "table_properties": {"data_size": 72831, "index_size": 153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4975, "raw_average_key_size": 17, "raw_value_size": 68928, "raw_average_value_size": 244, "num_data_blocks": 7, "num_entries": 282, "num_filter_entries": 282, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805419, "oldest_key_time": 1763805419, "file_creation_time": 1763805423, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 4044 microseconds, and 1488 cpu microseconds.
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.927071) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 74821 bytes OK
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.927097) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.928891) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.928921) EVENT_LOG_v1 {"time_micros": 1763805423928912, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.928951) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 73177, prev total WAL file size 73177, number of live WAL files 2.
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.929984) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353130' seq:72057594037927935, type:22 .. '6C6F676D0032373631' seq:0, type:0; will stop at (end)
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(73KB)], [140(8433KB)]
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805423930051, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 8711138, "oldest_snapshot_seqno": -1}
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8209 keys, 8600641 bytes, temperature: kUnknown
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805423988852, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 8600641, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8550284, "index_size": 28740, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20549, "raw_key_size": 216812, "raw_average_key_size": 26, "raw_value_size": 8408273, "raw_average_value_size": 1024, "num_data_blocks": 1100, "num_entries": 8209, "num_filter_entries": 8209, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805423, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.989101) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 8600641 bytes
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.991107) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.0 rd, 146.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 8.2 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(231.4) write-amplify(114.9) OK, records in: 8726, records dropped: 517 output_compression: NoCompression
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.991127) EVENT_LOG_v1 {"time_micros": 1763805423991117, "job": 86, "event": "compaction_finished", "compaction_time_micros": 58878, "compaction_time_cpu_micros": 22447, "output_level": 6, "num_output_files": 1, "total_output_size": 8600641, "num_input_records": 8726, "num_output_records": 8209, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805423991241, "job": 86, "event": "table_file_deletion", "file_number": 142}
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805423992523, "job": 86, "event": "table_file_deletion", "file_number": 140}
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.929805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.992620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.992626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.992628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.992630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:57:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:57:03.992636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:57:04 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:04 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 396 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:04.133+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.523 253665 DEBUG nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.524 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.524 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805424.5221908, a7eff414-1d1e-4670-a9ca-5477d690015b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.524 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] VM Resumed (Lifecycle Event)#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.532 253665 INFO nova.virt.libvirt.driver [-] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Instance spawned successfully.#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.533 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.561 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.568 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.571 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.571 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.572 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.572 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.572 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.573 253665 DEBUG nova.virt.libvirt.driver [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.607 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.608 253665 DEBUG nova.virt.driver [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] Emitting event <LifecycleEvent: 1763805424.5248044, a7eff414-1d1e-4670-a9ca-5477d690015b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.609 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] VM Started (Lifecycle Event)#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.626 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.632 253665 DEBUG nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.662 253665 INFO nova.compute.manager [None req-029fe46a-b588-4c74-a21a-33d32a411fd0 - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.682 253665 INFO nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Took 5.83 seconds to spawn the instance on the hypervisor.#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.682 253665 DEBUG nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:57:04 np0005532048 nova_compute[253661]: 2025-11-22 09:57:04.862 253665 INFO nova.compute.manager [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Took 7.43 seconds to build instance.#033[00m
Nov 22 04:57:05 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:05 np0005532048 nova_compute[253661]: 2025-11-22 09:57:05.056 253665 DEBUG oslo_concurrency.lockutils [None req-db1c7701-5e4a-4e04-b047-cc7c7078160d c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "a7eff414-1d1e-4670-a9ca-5477d690015b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:57:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:05.170+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:05 np0005532048 nova_compute[253661]: 2025-11-22 09:57:05.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:57:05 np0005532048 nova_compute[253661]: 2025-11-22 09:57:05.232 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:57:05 np0005532048 nova_compute[253661]: 2025-11-22 09:57:05.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:57:05 np0005532048 nova_compute[253661]: 2025-11-22 09:57:05.260 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:57:05 np0005532048 nova_compute[253661]: 2025-11-22 09:57:05.260 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:57:05 np0005532048 nova_compute[253661]: 2025-11-22 09:57:05.261 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:57:05 np0005532048 nova_compute[253661]: 2025-11-22 09:57:05.261 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:57:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:57:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209909598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:57:05 np0005532048 nova_compute[253661]: 2025-11-22 09:57:05.805 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:57:05 np0005532048 nova_compute[253661]: 2025-11-22 09:57:05.898 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:57:05 np0005532048 nova_compute[253661]: 2025-11-22 09:57:05.898 253665 DEBUG nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 22 04:57:06 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:06 np0005532048 nova_compute[253661]: 2025-11-22 09:57:06.088 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:57:06 np0005532048 nova_compute[253661]: 2025-11-22 09:57:06.089 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3350MB free_disk=59.92204284667969GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:57:06 np0005532048 nova_compute[253661]: 2025-11-22 09:57:06.089 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:57:06 np0005532048 nova_compute[253661]: 2025-11-22 09:57:06.090 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:57:06 np0005532048 nova_compute[253661]: 2025-11-22 09:57:06.192 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Instance a7eff414-1d1e-4670-a9ca-5477d690015b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 22 04:57:06 np0005532048 nova_compute[253661]: 2025-11-22 09:57:06.193 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:57:06 np0005532048 nova_compute[253661]: 2025-11-22 09:57:06.193 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:57:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:06.198+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:06 np0005532048 nova_compute[253661]: 2025-11-22 09:57:06.225 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 22 04:57:06 np0005532048 nova_compute[253661]: 2025-11-22 09:57:06.253 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 22 04:57:06 np0005532048 nova_compute[253661]: 2025-11-22 09:57:06.254 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 04:57:06 np0005532048 nova_compute[253661]: 2025-11-22 09:57:06.278 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: 0fcd87d5-8036-4d27-8f48-a032d34c7fdf _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 22 04:57:06 np0005532048 nova_compute[253661]: 2025-11-22 09:57:06.325 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 22 04:57:06 np0005532048 nova_compute[253661]: 2025-11-22 09:57:06.421 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:57:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:57:06 np0005532048 nova_compute[253661]: 2025-11-22 09:57:06.918 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:57:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/412150792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:57:06 np0005532048 nova_compute[253661]: 2025-11-22 09:57:06.978 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:57:06 np0005532048 nova_compute[253661]: 2025-11-22 09:57:06.985 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:57:07 np0005532048 nova_compute[253661]: 2025-11-22 09:57:07.004 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:57:07 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:07 np0005532048 nova_compute[253661]: 2025-11-22 09:57:07.039 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:57:07 np0005532048 nova_compute[253661]: 2025-11-22 09:57:07.040 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:57:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:07.208+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:07 np0005532048 nova_compute[253661]: 2025-11-22 09:57:07.643 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquiring lock "a7eff414-1d1e-4670-a9ca-5477d690015b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:57:07 np0005532048 nova_compute[253661]: 2025-11-22 09:57:07.644 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "a7eff414-1d1e-4670-a9ca-5477d690015b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:57:07 np0005532048 nova_compute[253661]: 2025-11-22 09:57:07.645 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquiring lock "a7eff414-1d1e-4670-a9ca-5477d690015b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:57:07 np0005532048 nova_compute[253661]: 2025-11-22 09:57:07.645 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "a7eff414-1d1e-4670-a9ca-5477d690015b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:57:07 np0005532048 nova_compute[253661]: 2025-11-22 09:57:07.645 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "a7eff414-1d1e-4670-a9ca-5477d690015b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:57:07 np0005532048 nova_compute[253661]: 2025-11-22 09:57:07.646 253665 INFO nova.compute.manager [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Terminating instance
Nov 22 04:57:07 np0005532048 nova_compute[253661]: 2025-11-22 09:57:07.647 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquiring lock "refresh_cache-a7eff414-1d1e-4670-a9ca-5477d690015b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:57:07 np0005532048 nova_compute[253661]: 2025-11-22 09:57:07.647 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquired lock "refresh_cache-a7eff414-1d1e-4670-a9ca-5477d690015b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:57:07 np0005532048 nova_compute[253661]: 2025-11-22 09:57:07.648 253665 DEBUG nova.network.neutron [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:57:08 np0005532048 nova_compute[253661]: 2025-11-22 09:57:08.037 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:57:08 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:08 np0005532048 nova_compute[253661]: 2025-11-22 09:57:08.072 253665 DEBUG nova.network.neutron [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:57:08 np0005532048 ovn_controller[152872]: 2025-11-22T09:57:08Z|01650|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Nov 22 04:57:08 np0005532048 nova_compute[253661]: 2025-11-22 09:57:08.214 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:57:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:08.234+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:08 np0005532048 nova_compute[253661]: 2025-11-22 09:57:08.286 253665 DEBUG nova.network.neutron [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:57:08 np0005532048 nova_compute[253661]: 2025-11-22 09:57:08.298 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Releasing lock "refresh_cache-a7eff414-1d1e-4670-a9ca-5477d690015b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:57:08 np0005532048 nova_compute[253661]: 2025-11-22 09:57:08.299 253665 DEBUG nova.compute.manager [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:57:08 np0005532048 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Deactivated successfully.
Nov 22 04:57:08 np0005532048 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Consumed 4.536s CPU time.
Nov 22 04:57:08 np0005532048 systemd-machined[215941]: Machine qemu-186-instance-00000098 terminated.
Nov 22 04:57:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 04:57:08 np0005532048 nova_compute[253661]: 2025-11-22 09:57:08.532 253665 INFO nova.virt.libvirt.driver [-] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Instance destroyed successfully.
Nov 22 04:57:08 np0005532048 nova_compute[253661]: 2025-11-22 09:57:08.533 253665 DEBUG nova.objects.instance [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lazy-loading 'resources' on Instance uuid a7eff414-1d1e-4670-a9ca-5477d690015b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:57:08 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 401 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:57:09 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:09 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 401 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:09 np0005532048 nova_compute[253661]: 2025-11-22 09:57:09.050 253665 INFO nova.virt.libvirt.driver [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Deleting instance files /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b_del
Nov 22 04:57:09 np0005532048 nova_compute[253661]: 2025-11-22 09:57:09.051 253665 INFO nova.virt.libvirt.driver [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Deletion of /var/lib/nova/instances/a7eff414-1d1e-4670-a9ca-5477d690015b_del complete
Nov 22 04:57:09 np0005532048 nova_compute[253661]: 2025-11-22 09:57:09.130 253665 INFO nova.compute.manager [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Took 0.83 seconds to destroy the instance on the hypervisor.
Nov 22 04:57:09 np0005532048 nova_compute[253661]: 2025-11-22 09:57:09.131 253665 DEBUG oslo.service.loopingcall [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:57:09 np0005532048 nova_compute[253661]: 2025-11-22 09:57:09.132 253665 DEBUG nova.compute.manager [-] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:57:09 np0005532048 nova_compute[253661]: 2025-11-22 09:57:09.132 253665 DEBUG nova.network.neutron [-] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:57:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:09.197+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:10 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:10 np0005532048 nova_compute[253661]: 2025-11-22 09:57:10.068 253665 DEBUG nova.network.neutron [-] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:57:10 np0005532048 nova_compute[253661]: 2025-11-22 09:57:10.077 253665 DEBUG nova.network.neutron [-] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:57:10 np0005532048 nova_compute[253661]: 2025-11-22 09:57:10.087 253665 INFO nova.compute.manager [-] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Took 0.95 seconds to deallocate network for instance.
Nov 22 04:57:10 np0005532048 nova_compute[253661]: 2025-11-22 09:57:10.122 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:57:10 np0005532048 nova_compute[253661]: 2025-11-22 09:57:10.123 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:57:10 np0005532048 nova_compute[253661]: 2025-11-22 09:57:10.172 253665 DEBUG oslo_concurrency.processutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:57:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:10.205+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 04:57:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:57:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145690875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:57:10 np0005532048 nova_compute[253661]: 2025-11-22 09:57:10.620 253665 DEBUG oslo_concurrency.processutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:57:10 np0005532048 nova_compute[253661]: 2025-11-22 09:57:10.626 253665 DEBUG nova.compute.provider_tree [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:57:10 np0005532048 nova_compute[253661]: 2025-11-22 09:57:10.639 253665 DEBUG nova.scheduler.client.report [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:57:10 np0005532048 nova_compute[253661]: 2025-11-22 09:57:10.656 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:57:10 np0005532048 nova_compute[253661]: 2025-11-22 09:57:10.681 253665 INFO nova.scheduler.client.report [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Deleted allocations for instance a7eff414-1d1e-4670-a9ca-5477d690015b
Nov 22 04:57:10 np0005532048 nova_compute[253661]: 2025-11-22 09:57:10.738 253665 DEBUG oslo_concurrency.lockutils [None req-be458019-18eb-450b-b970-6023a9f0a820 c1b227b0892a49698bf98933a153ab9c 1aa9495592994e239bcee4c6e795c5d2 - - default default] Lock "a7eff414-1d1e-4670-a9ca-5477d690015b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:57:11 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:11.212+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:11 np0005532048 nova_compute[253661]: 2025-11-22 09:57:11.921 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:57:12 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:12.210+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:57:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1567971391' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:57:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:57:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1567971391' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:57:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 1 active+clean+laggy, 304 active+clean; 242 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 22 04:57:13 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:13.161+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:13 np0005532048 nova_compute[253661]: 2025-11-22 09:57:13.217 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:57:13 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 406 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:57:14 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:14 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 406 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:14.197+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 22 04:57:15 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:15.175+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:16 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:16.206+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 22 04:57:16 np0005532048 nova_compute[253661]: 2025-11-22 09:57:16.957 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:57:17 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:17.251+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:18 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:18.224+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:18 np0005532048 nova_compute[253661]: 2025-11-22 09:57:18.251 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:57:18 np0005532048 podman[415361]: 2025-11-22 09:57:18.385352028 +0000 UTC m=+0.072794117 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 22 04:57:18 np0005532048 podman[415362]: 2025-11-22 09:57:18.434439404 +0000 UTC m=+0.111722885 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 22 04:57:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 22 04:57:18 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 412 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:57:19 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:19 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 412 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:19.181+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:20 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:20.160+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 22 04:57:21 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:21.191+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:21 np0005532048 nova_compute[253661]: 2025-11-22 09:57:21.961 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:22 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:22.171+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 22 04:57:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:57:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:57:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:57:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:57:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:57:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:57:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:23.155+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:23 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:23 np0005532048 nova_compute[253661]: 2025-11-22 09:57:23.251 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:23 np0005532048 nova_compute[253661]: 2025-11-22 09:57:23.531 253665 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763805428.52945, a7eff414-1d1e-4670-a9ca-5477d690015b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 22 04:57:23 np0005532048 nova_compute[253661]: 2025-11-22 09:57:23.531 253665 INFO nova.compute.manager [-] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] VM Stopped (Lifecycle Event)#033[00m
Nov 22 04:57:23 np0005532048 nova_compute[253661]: 2025-11-22 09:57:23.565 253665 DEBUG nova.compute.manager [None req-79fec480-9453-4174-a1a4-224de9db142c - - - - - -] [instance: a7eff414-1d1e-4670-a9ca-5477d690015b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 22 04:57:23 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 416 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:57:24 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:24 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 416 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:24.173+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 22 04:57:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:25.214+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:25 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:25 np0005532048 podman[415399]: 2025-11-22 09:57:25.427068811 +0000 UTC m=+0.114913025 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 04:57:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:26.168+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:26 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:26 np0005532048 nova_compute[253661]: 2025-11-22 09:57:26.963 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:27.188+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:27 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:57:27.999 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:57:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:57:27.999 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:57:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:57:27.999 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:57:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:28.170+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:28 np0005532048 nova_compute[253661]: 2025-11-22 09:57:28.253 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:28 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:28 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:57:28.694 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 04:57:28 np0005532048 nova_compute[253661]: 2025-11-22 09:57:28.695 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:57:28.697 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 04:57:28 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 422 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:57:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:29.204+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:29 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 422 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:29 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:30.157+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:30 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:31.134+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:31 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:31 np0005532048 nova_compute[253661]: 2025-11-22 09:57:31.964 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:32.151+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:32 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:33.100+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:33 np0005532048 nova_compute[253661]: 2025-11-22 09:57:33.292 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:33 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:33 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 427 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:57:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:34.091+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:34 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 427 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:34 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:34 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:57:34.699 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 04:57:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:35.099+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:35 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:36.111+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:36 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:36 np0005532048 nova_compute[253661]: 2025-11-22 09:57:36.968 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:37.116+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:37 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:38.154+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:38 np0005532048 nova_compute[253661]: 2025-11-22 09:57:38.294 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:38 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:38 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 431 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:57:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:57:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:39.176+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:57:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:57:39 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:57:39 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 431 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:39 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:39 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:57:39 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:57:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:40.146+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:57:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:57:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:57:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:57:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:57:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:57:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 59348556-2c42-49a5-8e19-e046b1317b24 does not exist
Nov 22 04:57:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 788079e9-07f7-42c5-8121-3022c6f562c1 does not exist
Nov 22 04:57:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev e9e63380-c8d5-4858-9c2c-7b2a211a5ea4 does not exist
Nov 22 04:57:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:57:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:57:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:57:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:57:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:57:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:57:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:40 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:57:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:57:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:57:40 np0005532048 podman[415816]: 2025-11-22 09:57:40.936055427 +0000 UTC m=+0.066182149 container create b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:57:40 np0005532048 systemd[1]: Started libpod-conmon-b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356.scope.
Nov 22 04:57:40 np0005532048 podman[415816]: 2025-11-22 09:57:40.89470443 +0000 UTC m=+0.024831202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:57:41 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:57:41 np0005532048 podman[415816]: 2025-11-22 09:57:41.041958522 +0000 UTC m=+0.172085284 container init b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:57:41 np0005532048 podman[415816]: 2025-11-22 09:57:41.050990074 +0000 UTC m=+0.181116806 container start b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:57:41 np0005532048 podman[415816]: 2025-11-22 09:57:41.055109556 +0000 UTC m=+0.185236278 container attach b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 04:57:41 np0005532048 vigilant_yonath[415832]: 167 167
Nov 22 04:57:41 np0005532048 systemd[1]: libpod-b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356.scope: Deactivated successfully.
Nov 22 04:57:41 np0005532048 podman[415816]: 2025-11-22 09:57:41.060779605 +0000 UTC m=+0.190906327 container died b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:57:41 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f40dea2b50875389bf41afa93007b03c92144d4ce0158b951b08c047c0349db0-merged.mount: Deactivated successfully.
Nov 22 04:57:41 np0005532048 podman[415816]: 2025-11-22 09:57:41.102350997 +0000 UTC m=+0.232477699 container remove b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 04:57:41 np0005532048 systemd[1]: libpod-conmon-b07de3d9dabf68f83bae75b89d4e2d814061d6fe5e3f8ecebc70efc5c8477356.scope: Deactivated successfully.
Nov 22 04:57:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:41.143+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:41 np0005532048 podman[415856]: 2025-11-22 09:57:41.260196069 +0000 UTC m=+0.038321333 container create b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:57:41 np0005532048 systemd[1]: Started libpod-conmon-b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992.scope.
Nov 22 04:57:41 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:57:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f64bc85dcabc5c2d52a5a91051a99a1718c02c28be5301afd6647b1dee1a6cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:57:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f64bc85dcabc5c2d52a5a91051a99a1718c02c28be5301afd6647b1dee1a6cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:57:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f64bc85dcabc5c2d52a5a91051a99a1718c02c28be5301afd6647b1dee1a6cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:57:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f64bc85dcabc5c2d52a5a91051a99a1718c02c28be5301afd6647b1dee1a6cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:57:41 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f64bc85dcabc5c2d52a5a91051a99a1718c02c28be5301afd6647b1dee1a6cc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:57:41 np0005532048 podman[415856]: 2025-11-22 09:57:41.242563195 +0000 UTC m=+0.020688469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:57:41 np0005532048 podman[415856]: 2025-11-22 09:57:41.341616181 +0000 UTC m=+0.119741435 container init b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:57:41 np0005532048 podman[415856]: 2025-11-22 09:57:41.35133219 +0000 UTC m=+0.129457444 container start b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 04:57:41 np0005532048 podman[415856]: 2025-11-22 09:57:41.354239172 +0000 UTC m=+0.132364416 container attach b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:57:41 np0005532048 nova_compute[253661]: 2025-11-22 09:57:41.972 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:42.192+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:42 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:42 np0005532048 sweet_archimedes[415872]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:57:42 np0005532048 sweet_archimedes[415872]: --> relative data size: 1.0
Nov 22 04:57:42 np0005532048 sweet_archimedes[415872]: --> All data devices are unavailable
Nov 22 04:57:42 np0005532048 systemd[1]: libpod-b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992.scope: Deactivated successfully.
Nov 22 04:57:42 np0005532048 systemd[1]: libpod-b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992.scope: Consumed 1.087s CPU time.
Nov 22 04:57:42 np0005532048 podman[415856]: 2025-11-22 09:57:42.48245486 +0000 UTC m=+1.260580234 container died b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 04:57:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:42 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4f64bc85dcabc5c2d52a5a91051a99a1718c02c28be5301afd6647b1dee1a6cc-merged.mount: Deactivated successfully.
Nov 22 04:57:42 np0005532048 podman[415856]: 2025-11-22 09:57:42.545346597 +0000 UTC m=+1.323471841 container remove b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:57:42 np0005532048 systemd[1]: libpod-conmon-b88d940d465315fe1401f560e6ee0ed097c8445d8f16f025b00dcd3596e08992.scope: Deactivated successfully.
Nov 22 04:57:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:43.201+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:43 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:43 np0005532048 podman[416051]: 2025-11-22 09:57:43.276419437 +0000 UTC m=+0.045373687 container create 689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:57:43 np0005532048 nova_compute[253661]: 2025-11-22 09:57:43.296 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:43 np0005532048 systemd[1]: Started libpod-conmon-689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822.scope.
Nov 22 04:57:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:57:43 np0005532048 podman[416051]: 2025-11-22 09:57:43.255213526 +0000 UTC m=+0.024167796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:57:43 np0005532048 podman[416051]: 2025-11-22 09:57:43.360630418 +0000 UTC m=+0.129584688 container init 689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:57:43 np0005532048 podman[416051]: 2025-11-22 09:57:43.369898896 +0000 UTC m=+0.138853146 container start 689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 22 04:57:43 np0005532048 podman[416051]: 2025-11-22 09:57:43.373514285 +0000 UTC m=+0.142468545 container attach 689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:57:43 np0005532048 laughing_carson[416067]: 167 167
Nov 22 04:57:43 np0005532048 systemd[1]: libpod-689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822.scope: Deactivated successfully.
Nov 22 04:57:43 np0005532048 podman[416051]: 2025-11-22 09:57:43.377216196 +0000 UTC m=+0.146170486 container died 689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 04:57:43 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f2682cfe65998ff588fdc02a48f6ad3d19b6d44746db664ed58e964066e7e899-merged.mount: Deactivated successfully.
Nov 22 04:57:43 np0005532048 podman[416051]: 2025-11-22 09:57:43.422985572 +0000 UTC m=+0.191939842 container remove 689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 22 04:57:43 np0005532048 systemd[1]: libpod-conmon-689429eebb0f0a84a89e19845b15f63565eb7896de5576fa3cf6d85c4bb03822.scope: Deactivated successfully.
Nov 22 04:57:43 np0005532048 podman[416091]: 2025-11-22 09:57:43.615328462 +0000 UTC m=+0.057612248 container create b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:57:43 np0005532048 systemd[1]: Started libpod-conmon-b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21.scope.
Nov 22 04:57:43 np0005532048 podman[416091]: 2025-11-22 09:57:43.590757508 +0000 UTC m=+0.033041344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:57:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:57:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a940dd3035c563cb471693caba8c0b33ec0969673ae500fb4377d3b2fdc0af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:57:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a940dd3035c563cb471693caba8c0b33ec0969673ae500fb4377d3b2fdc0af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:57:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a940dd3035c563cb471693caba8c0b33ec0969673ae500fb4377d3b2fdc0af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:57:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a940dd3035c563cb471693caba8c0b33ec0969673ae500fb4377d3b2fdc0af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:57:43 np0005532048 podman[416091]: 2025-11-22 09:57:43.706293859 +0000 UTC m=+0.148577665 container init b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 04:57:43 np0005532048 podman[416091]: 2025-11-22 09:57:43.720293954 +0000 UTC m=+0.162577740 container start b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:57:43 np0005532048 podman[416091]: 2025-11-22 09:57:43.723923233 +0000 UTC m=+0.166207019 container attach b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:57:43 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 437 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:57:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:44.215+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:44 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:44 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 437 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]: {
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:    "0": [
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:        {
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "devices": [
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "/dev/loop3"
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            ],
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "lv_name": "ceph_lv0",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "lv_size": "21470642176",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "name": "ceph_lv0",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "tags": {
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.cluster_name": "ceph",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.crush_device_class": "",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.encrypted": "0",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.osd_id": "0",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.type": "block",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.vdo": "0"
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            },
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "type": "block",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "vg_name": "ceph_vg0"
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:        }
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:    ],
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:    "1": [
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:        {
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "devices": [
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "/dev/loop4"
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            ],
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "lv_name": "ceph_lv1",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "lv_size": "21470642176",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "name": "ceph_lv1",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "tags": {
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.cluster_name": "ceph",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.crush_device_class": "",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.encrypted": "0",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.osd_id": "1",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.type": "block",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.vdo": "0"
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            },
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "type": "block",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "vg_name": "ceph_vg1"
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:        }
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:    ],
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:    "2": [
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:        {
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "devices": [
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "/dev/loop5"
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            ],
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "lv_name": "ceph_lv2",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "lv_size": "21470642176",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "name": "ceph_lv2",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "tags": {
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.cluster_name": "ceph",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.crush_device_class": "",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.encrypted": "0",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.osd_id": "2",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.type": "block",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:                "ceph.vdo": "0"
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            },
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "type": "block",
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:            "vg_name": "ceph_vg2"
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:        }
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]:    ]
Nov 22 04:57:44 np0005532048 awesome_johnson[416108]: }
Nov 22 04:57:44 np0005532048 systemd[1]: libpod-b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21.scope: Deactivated successfully.
Nov 22 04:57:44 np0005532048 podman[416117]: 2025-11-22 09:57:44.683676808 +0000 UTC m=+0.035747020 container died b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 04:57:44 np0005532048 systemd[1]: var-lib-containers-storage-overlay-52a940dd3035c563cb471693caba8c0b33ec0969673ae500fb4377d3b2fdc0af-merged.mount: Deactivated successfully.
Nov 22 04:57:44 np0005532048 podman[416117]: 2025-11-22 09:57:44.73824647 +0000 UTC m=+0.090316662 container remove b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 04:57:44 np0005532048 systemd[1]: libpod-conmon-b31eb0871d3285dadfd9973b4ede88758fbecf4f41b64e73bfcdaa3e77047f21.scope: Deactivated successfully.
Nov 22 04:57:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:45.248+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:45 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:45 np0005532048 podman[416271]: 2025-11-22 09:57:45.465153167 +0000 UTC m=+0.051437576 container create 04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:57:45 np0005532048 systemd[1]: Started libpod-conmon-04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e.scope.
Nov 22 04:57:45 np0005532048 podman[416271]: 2025-11-22 09:57:45.437302182 +0000 UTC m=+0.023586681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:57:45 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:57:45 np0005532048 podman[416271]: 2025-11-22 09:57:45.563047205 +0000 UTC m=+0.149331634 container init 04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lewin, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:57:45 np0005532048 podman[416271]: 2025-11-22 09:57:45.574944078 +0000 UTC m=+0.161228487 container start 04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lewin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:57:45 np0005532048 unruffled_lewin[416287]: 167 167
Nov 22 04:57:45 np0005532048 podman[416271]: 2025-11-22 09:57:45.578591357 +0000 UTC m=+0.164875766 container attach 04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:57:45 np0005532048 systemd[1]: libpod-04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e.scope: Deactivated successfully.
Nov 22 04:57:45 np0005532048 podman[416271]: 2025-11-22 09:57:45.581393196 +0000 UTC m=+0.167677605 container died 04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 04:57:45 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c99be7e3fbb9b366c0015f8ff7617f856e744a09d4167313b6c1147bb2779f9d-merged.mount: Deactivated successfully.
Nov 22 04:57:45 np0005532048 podman[416271]: 2025-11-22 09:57:45.621025802 +0000 UTC m=+0.207310221 container remove 04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lewin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:57:45 np0005532048 systemd[1]: libpod-conmon-04208d9acca9b2a35026e18d651b05c74cb7443fcfb8f116617c510c438c635e.scope: Deactivated successfully.
Nov 22 04:57:45 np0005532048 podman[416311]: 2025-11-22 09:57:45.824148187 +0000 UTC m=+0.048959115 container create ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 04:57:45 np0005532048 systemd[1]: Started libpod-conmon-ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb.scope.
Nov 22 04:57:45 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:57:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e29f631afe077f12afa210a1016db55b7e92b94d3ccdf0098ec3dc2156c188f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:57:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e29f631afe077f12afa210a1016db55b7e92b94d3ccdf0098ec3dc2156c188f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:57:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e29f631afe077f12afa210a1016db55b7e92b94d3ccdf0098ec3dc2156c188f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:57:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e29f631afe077f12afa210a1016db55b7e92b94d3ccdf0098ec3dc2156c188f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:57:45 np0005532048 podman[416311]: 2025-11-22 09:57:45.806162075 +0000 UTC m=+0.030973023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:57:45 np0005532048 podman[416311]: 2025-11-22 09:57:45.911043244 +0000 UTC m=+0.135854172 container init ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:57:45 np0005532048 podman[416311]: 2025-11-22 09:57:45.918541128 +0000 UTC m=+0.143352056 container start ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 04:57:45 np0005532048 podman[416311]: 2025-11-22 09:57:45.921574413 +0000 UTC m=+0.146385371 container attach ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 04:57:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:46.216+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:46 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]: {
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:        "osd_id": 1,
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:        "type": "bluestore"
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:    },
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:        "osd_id": 0,
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:        "type": "bluestore"
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:    },
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:        "osd_id": 2,
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:        "type": "bluestore"
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]:    }
Nov 22 04:57:46 np0005532048 cranky_lovelace[416328]: }
Nov 22 04:57:46 np0005532048 systemd[1]: libpod-ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb.scope: Deactivated successfully.
Nov 22 04:57:46 np0005532048 podman[416311]: 2025-11-22 09:57:46.969601879 +0000 UTC m=+1.194412847 container died ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 04:57:46 np0005532048 systemd[1]: libpod-ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb.scope: Consumed 1.055s CPU time.
Nov 22 04:57:47 np0005532048 nova_compute[253661]: 2025-11-22 09:57:47.015 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:47 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7e29f631afe077f12afa210a1016db55b7e92b94d3ccdf0098ec3dc2156c188f-merged.mount: Deactivated successfully.
Nov 22 04:57:47 np0005532048 podman[416311]: 2025-11-22 09:57:47.061208692 +0000 UTC m=+1.286019640 container remove ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:57:47 np0005532048 systemd[1]: libpod-conmon-ff6457f3ba6eba1aa5958eca498af151657bff74bf0b266c6b66fc19ea9d83fb.scope: Deactivated successfully.
Nov 22 04:57:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:57:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:57:47 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:57:47 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:57:47 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 0ce397da-5f87-49a7-905a-5f182341d52c does not exist
Nov 22 04:57:47 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev bbc07c5e-5de1-46ea-93e4-4f587066cc3f does not exist
Nov 22 04:57:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:47.196+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:47 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:47 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:57:47 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:57:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:48.239+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:48 np0005532048 nova_compute[253661]: 2025-11-22 09:57:48.297 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:48 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:48 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 441 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:57:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:49.267+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:49 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:49 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 441 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:49 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:49 np0005532048 podman[416425]: 2025-11-22 09:57:49.412011648 +0000 UTC m=+0.095462818 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 04:57:49 np0005532048 podman[416426]: 2025-11-22 09:57:49.416357285 +0000 UTC m=+0.091574573 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 04:57:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:50.310+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:50 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:51.289+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:51 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:52 np0005532048 nova_compute[253661]: 2025-11-22 09:57:52.018 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:52.256+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:57:52
Nov 22 04:57:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:57:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:57:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['images', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'backups', 'vms', 'volumes']
Nov 22 04:57:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:57:52 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:57:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:57:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:57:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:57:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:57:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:57:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:53.272+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:53 np0005532048 nova_compute[253661]: 2025-11-22 09:57:53.299 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:53 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:53 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 446 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:53 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:57:54 np0005532048 nova_compute[253661]: 2025-11-22 09:57:54.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:57:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:54.230+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:54 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 446 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:54 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:55 np0005532048 nova_compute[253661]: 2025-11-22 09:57:55.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:57:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:55.227+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:55 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:57:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:57:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:57:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:57:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:57:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:56.197+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:56 np0005532048 nova_compute[253661]: 2025-11-22 09:57:56.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:57:56 np0005532048 nova_compute[253661]: 2025-11-22 09:57:56.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:57:56 np0005532048 nova_compute[253661]: 2025-11-22 09:57:56.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:57:56 np0005532048 nova_compute[253661]: 2025-11-22 09:57:56.246 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:57:56 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:56 np0005532048 podman[416460]: 2025-11-22 09:57:56.448354173 +0000 UTC m=+0.134925910 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 22 04:57:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:57 np0005532048 nova_compute[253661]: 2025-11-22 09:57:57.066 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:57.191+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:57 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:57:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:57:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:57:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:57:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:57:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:58.184+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:58 np0005532048 nova_compute[253661]: 2025-11-22 09:57:58.341 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:57:58 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:57:58 np0005532048 ovn_controller[152872]: 2025-11-22T09:57:58Z|01651|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 22 04:57:58 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 451 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:57:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:57:59.138+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:57:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:57:59 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 451 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:57:59 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:00.101+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:00 np0005532048 nova_compute[253661]: 2025-11-22 09:58:00.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:58:00 np0005532048 nova_compute[253661]: 2025-11-22 09:58:00.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:58:00 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:01.084+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:01 np0005532048 nova_compute[253661]: 2025-11-22 09:58:01.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:58:01 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:02 np0005532048 nova_compute[253661]: 2025-11-22 09:58:02.070 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:02.130+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:02 np0005532048 nova_compute[253661]: 2025-11-22 09:58:02.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:58:02 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:58:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:58:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:03.156+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:03 np0005532048 nova_compute[253661]: 2025-11-22 09:58:03.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:03 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:03 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 457 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:03 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:58:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:04.144+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:04 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 457 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:04 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:05.119+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:05 np0005532048 nova_compute[253661]: 2025-11-22 09:58:05.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:58:05 np0005532048 nova_compute[253661]: 2025-11-22 09:58:05.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:58:05 np0005532048 nova_compute[253661]: 2025-11-22 09:58:05.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:58:05 np0005532048 nova_compute[253661]: 2025-11-22 09:58:05.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:58:05 np0005532048 nova_compute[253661]: 2025-11-22 09:58:05.251 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:58:05 np0005532048 nova_compute[253661]: 2025-11-22 09:58:05.252 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:58:05 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:58:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3625945087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:58:05 np0005532048 nova_compute[253661]: 2025-11-22 09:58:05.732 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:58:05 np0005532048 nova_compute[253661]: 2025-11-22 09:58:05.944 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:58:05 np0005532048 nova_compute[253661]: 2025-11-22 09:58:05.945 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3506MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:58:05 np0005532048 nova_compute[253661]: 2025-11-22 09:58:05.945 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:58:05 np0005532048 nova_compute[253661]: 2025-11-22 09:58:05.946 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:58:06 np0005532048 nova_compute[253661]: 2025-11-22 09:58:06.038 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:58:06 np0005532048 nova_compute[253661]: 2025-11-22 09:58:06.039 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:58:06 np0005532048 nova_compute[253661]: 2025-11-22 09:58:06.061 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:58:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:06.155+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:06 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:58:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1771040852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:58:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:06 np0005532048 nova_compute[253661]: 2025-11-22 09:58:06.525 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:58:06 np0005532048 nova_compute[253661]: 2025-11-22 09:58:06.532 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:58:06 np0005532048 nova_compute[253661]: 2025-11-22 09:58:06.546 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:58:06 np0005532048 nova_compute[253661]: 2025-11-22 09:58:06.751 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:58:06 np0005532048 nova_compute[253661]: 2025-11-22 09:58:06.752 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:58:07 np0005532048 nova_compute[253661]: 2025-11-22 09:58:07.074 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:07.195+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:07 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:07 np0005532048 nova_compute[253661]: 2025-11-22 09:58:07.753 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:58:07 np0005532048 nova_compute[253661]: 2025-11-22 09:58:07.754 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:58:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:08.154+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:08 np0005532048 nova_compute[253661]: 2025-11-22 09:58:08.374 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:08 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:08 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 462 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:58:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:09.196+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:09 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 462 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:09 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:10.222+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:10 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 04:58:11 np0005532048 nova_compute[253661]: 2025-11-22 09:58:11.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:58:12 np0005532048 nova_compute[253661]: 2025-11-22 09:58:12.078 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:58:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/789421861' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:58:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:58:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/789421861' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:58:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:13 np0005532048 nova_compute[253661]: 2025-11-22 09:58:13.377 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:13 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 467 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:58:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 04:58:14 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 467 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 04:58:17 np0005532048 nova_compute[253661]: 2025-11-22 09:58:17.082 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:18 np0005532048 nova_compute[253661]: 2025-11-22 09:58:18.380 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 04:58:18 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 471 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:58:19 np0005532048 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 471 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:20 np0005532048 podman[416532]: 2025-11-22 09:58:20.404671823 +0000 UTC m=+0.087827671 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 22 04:58:20 np0005532048 podman[416533]: 2025-11-22 09:58:20.418712799 +0000 UTC m=+0.098823372 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:58:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 04:58:22 np0005532048 nova_compute[253661]: 2025-11-22 09:58:22.112 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 04:58:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:58:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:58:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:58:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:58:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:58:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.150826) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805503150919, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1167, "num_deletes": 251, "total_data_size": 1353463, "memory_usage": 1386800, "flush_reason": "Manual Compaction"}
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805503182018, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 1321670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60744, "largest_seqno": 61910, "table_properties": {"data_size": 1316476, "index_size": 2461, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13356, "raw_average_key_size": 20, "raw_value_size": 1305164, "raw_average_value_size": 2014, "num_data_blocks": 108, "num_entries": 648, "num_filter_entries": 648, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805424, "oldest_key_time": 1763805424, "file_creation_time": 1763805503, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 31245 microseconds, and 4666 cpu microseconds.
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.182083) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 1321670 bytes OK
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.182113) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.185959) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.185977) EVENT_LOG_v1 {"time_micros": 1763805503185971, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.186002) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 1347947, prev total WAL file size 1347947, number of live WAL files 2.
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.186662) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(1290KB)], [143(8399KB)]
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805503186757, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 9922311, "oldest_snapshot_seqno": -1}
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 8343 keys, 8428177 bytes, temperature: kUnknown
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805503269065, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 8428177, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8377374, "index_size": 28866, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20869, "raw_key_size": 220741, "raw_average_key_size": 26, "raw_value_size": 8233321, "raw_average_value_size": 986, "num_data_blocks": 1097, "num_entries": 8343, "num_filter_entries": 8343, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805503, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.269375) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 8428177 bytes
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.298766) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.4 rd, 102.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.2 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(13.9) write-amplify(6.4) OK, records in: 8857, records dropped: 514 output_compression: NoCompression
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.298822) EVENT_LOG_v1 {"time_micros": 1763805503298803, "job": 88, "event": "compaction_finished", "compaction_time_micros": 82398, "compaction_time_cpu_micros": 24586, "output_level": 6, "num_output_files": 1, "total_output_size": 8428177, "num_input_records": 8857, "num_output_records": 8343, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805503299631, "job": 88, "event": "table_file_deletion", "file_number": 145}
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805503302079, "job": 88, "event": "table_file_deletion", "file_number": 143}
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.186486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.302223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.302231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.302232) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.302236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-09:58:23.302238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:58:23 np0005532048 nova_compute[253661]: 2025-11-22 09:58:23.382 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 476 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:58:24 np0005532048 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 476 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 04:58:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:27 np0005532048 nova_compute[253661]: 2025-11-22 09:58:27.117 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:27 np0005532048 podman[416572]: 2025-11-22 09:58:27.394516406 +0000 UTC m=+0.084727056 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:58:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:58:28.000 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:58:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:58:28.000 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:58:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:58:28.001 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:58:28 np0005532048 nova_compute[253661]: 2025-11-22 09:58:28.383 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:28 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 481 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:58:29 np0005532048 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 481 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:32 np0005532048 nova_compute[253661]: 2025-11-22 09:58:32.121 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:33 np0005532048 nova_compute[253661]: 2025-11-22 09:58:33.402 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:33 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 486 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:58:34 np0005532048 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 486 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:37 np0005532048 nova_compute[253661]: 2025-11-22 09:58:37.152 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:38 np0005532048 nova_compute[253661]: 2025-11-22 09:58:38.438 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:38 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 496 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:38 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:58:39 np0005532048 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 496 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:41.140+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:41 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:42.096+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:42 np0005532048 nova_compute[253661]: 2025-11-22 09:58:42.157 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:58:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:42 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:43.097+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:43 np0005532048 nova_compute[253661]: 2025-11-22 09:58:43.471 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:58:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:58:44 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:44.124+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:45 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 501 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:45.172+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:45 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:46.125+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:46 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 501 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:46 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:46 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:47.095+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:47 np0005532048 nova_compute[253661]: 2025-11-22 09:58:47.160 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:58:47 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:48.101+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:58:48 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev bbffb866-1d01-4afc-a484-4968f512f85e does not exist
Nov 22 04:58:48 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 79c69271-abb1-452d-b994-531febaca8e1 does not exist
Nov 22 04:58:48 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev fc4a4a62-9f0e-4ec3-957b-c956458be006 does not exist
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:58:48 np0005532048 nova_compute[253661]: 2025-11-22 09:58:48.473 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:58:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:58:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:48 np0005532048 podman[416870]: 2025-11-22 09:58:48.868798952 +0000 UTC m=+0.026998485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:58:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:58:49 np0005532048 podman[416870]: 2025-11-22 09:58:49.074261545 +0000 UTC m=+0.232461038 container create d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 04:58:49 np0005532048 systemd[1]: Started libpod-conmon-d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c.scope.
Nov 22 04:58:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:49.137+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:49 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:58:49 np0005532048 podman[416870]: 2025-11-22 09:58:49.217417236 +0000 UTC m=+0.375616779 container init d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 04:58:49 np0005532048 podman[416870]: 2025-11-22 09:58:49.23182081 +0000 UTC m=+0.390020303 container start d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_payne, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 04:58:49 np0005532048 xenodochial_payne[416888]: 167 167
Nov 22 04:58:49 np0005532048 systemd[1]: libpod-d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c.scope: Deactivated successfully.
Nov 22 04:58:49 np0005532048 podman[416870]: 2025-11-22 09:58:49.255286927 +0000 UTC m=+0.413486480 container attach d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_payne, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:58:49 np0005532048 podman[416870]: 2025-11-22 09:58:49.256025716 +0000 UTC m=+0.414225189 container died d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 04:58:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d8348f6c8cdb7d6520d3f94f373f2e17afff0c2f69c19d03107d94586c517329-merged.mount: Deactivated successfully.
Nov 22 04:58:49 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:49 np0005532048 podman[416870]: 2025-11-22 09:58:49.606215358 +0000 UTC m=+0.764414821 container remove d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_payne, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:58:49 np0005532048 systemd[1]: libpod-conmon-d15b41a0ec396de02f765539d32ca4aa3e76bd322220897ebdf9c2457304bb7c.scope: Deactivated successfully.
Nov 22 04:58:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:50.100+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:50 np0005532048 podman[416911]: 2025-11-22 09:58:50.037013203 +0000 UTC m=+0.256991051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:58:50 np0005532048 podman[416911]: 2025-11-22 09:58:50.127745684 +0000 UTC m=+0.347723512 container create 34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:58:50 np0005532048 systemd[1]: Started libpod-conmon-34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c.scope.
Nov 22 04:58:50 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:58:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ace32a43f65788eadfc91e9450e868a42a725e0fcc64b46aa99ecd8a5a2990/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:58:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ace32a43f65788eadfc91e9450e868a42a725e0fcc64b46aa99ecd8a5a2990/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:58:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ace32a43f65788eadfc91e9450e868a42a725e0fcc64b46aa99ecd8a5a2990/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:58:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ace32a43f65788eadfc91e9450e868a42a725e0fcc64b46aa99ecd8a5a2990/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:58:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ace32a43f65788eadfc91e9450e868a42a725e0fcc64b46aa99ecd8a5a2990/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:58:50 np0005532048 podman[416911]: 2025-11-22 09:58:50.44916179 +0000 UTC m=+0.669139718 container init 34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 22 04:58:50 np0005532048 podman[416911]: 2025-11-22 09:58:50.459138745 +0000 UTC m=+0.679116573 container start 34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 04:58:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:50 np0005532048 podman[416911]: 2025-11-22 09:58:50.656672063 +0000 UTC m=+0.876649931 container attach 34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 04:58:50 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:51.063+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:51 np0005532048 podman[416941]: 2025-11-22 09:58:51.392529321 +0000 UTC m=+0.081023843 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 04:58:51 np0005532048 podman[416942]: 2025-11-22 09:58:51.397486893 +0000 UTC m=+0.086010406 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 22 04:58:51 np0005532048 zealous_gagarin[416928]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:58:51 np0005532048 zealous_gagarin[416928]: --> relative data size: 1.0
Nov 22 04:58:51 np0005532048 zealous_gagarin[416928]: --> All data devices are unavailable
Nov 22 04:58:51 np0005532048 systemd[1]: libpod-34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c.scope: Deactivated successfully.
Nov 22 04:58:51 np0005532048 systemd[1]: libpod-34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c.scope: Consumed 1.114s CPU time.
Nov 22 04:58:51 np0005532048 podman[416911]: 2025-11-22 09:58:51.644051317 +0000 UTC m=+1.864029165 container died 34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:58:51 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:52.080+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:52 np0005532048 nova_compute[253661]: 2025-11-22 09:58:52.162 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:52 np0005532048 systemd[1]: var-lib-containers-storage-overlay-75ace32a43f65788eadfc91e9450e868a42a725e0fcc64b46aa99ecd8a5a2990-merged.mount: Deactivated successfully.
Nov 22 04:58:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:58:52
Nov 22 04:58:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:58:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:58:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr']
Nov 22 04:58:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:58:52 np0005532048 podman[416911]: 2025-11-22 09:58:52.51141654 +0000 UTC m=+2.731394358 container remove 34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:58:52 np0005532048 systemd[1]: libpod-conmon-34ec3cd7b286f627c0d06c716e345930b16b3bc8db69b9bc83ac95d4e0f8053c.scope: Deactivated successfully.
Nov 22 04:58:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:58:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:58:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:58:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:58:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:58:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:58:52 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:53.077+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:53 np0005532048 podman[417151]: 2025-11-22 09:58:53.345674977 +0000 UTC m=+0.119315225 container create f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:58:53 np0005532048 podman[417151]: 2025-11-22 09:58:53.251556442 +0000 UTC m=+0.025196730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:58:53 np0005532048 nova_compute[253661]: 2025-11-22 09:58:53.475 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:53 np0005532048 systemd[1]: Started libpod-conmon-f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55.scope.
Nov 22 04:58:53 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:58:53 np0005532048 podman[417151]: 2025-11-22 09:58:53.672955066 +0000 UTC m=+0.446595324 container init f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:58:53 np0005532048 podman[417151]: 2025-11-22 09:58:53.687319839 +0000 UTC m=+0.460960107 container start f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 04:58:53 np0005532048 sleepy_cartwright[417167]: 167 167
Nov 22 04:58:53 np0005532048 systemd[1]: libpod-f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55.scope: Deactivated successfully.
Nov 22 04:58:53 np0005532048 conmon[417167]: conmon f41cad74bf44254bfd78 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55.scope/container/memory.events
Nov 22 04:58:53 np0005532048 podman[417151]: 2025-11-22 09:58:53.813499074 +0000 UTC m=+0.587139412 container attach f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 04:58:53 np0005532048 podman[417151]: 2025-11-22 09:58:53.81541221 +0000 UTC m=+0.589052538 container died f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:58:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay-12d217134b9be0a1fcc6e0363a32d5fd4ae94c7dec834662298988483b160bc8-merged.mount: Deactivated successfully.
Nov 22 04:58:53 np0005532048 podman[417151]: 2025-11-22 09:58:53.968594747 +0000 UTC m=+0.742234985 container remove f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:58:53 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:53 np0005532048 systemd[1]: libpod-conmon-f41cad74bf44254bfd78c4b769881983451aa58876e0eb774ba53d67fe59fb55.scope: Deactivated successfully.
Nov 22 04:58:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 506 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:58:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:54.115+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:54 np0005532048 podman[417192]: 2025-11-22 09:58:54.206844807 +0000 UTC m=+0.116550558 container create 0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 04:58:54 np0005532048 podman[417192]: 2025-11-22 09:58:54.132590081 +0000 UTC m=+0.042295802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:58:54 np0005532048 systemd[1]: Started libpod-conmon-0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39.scope.
Nov 22 04:58:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:58:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752a4aa12d0e2fe50394d84fdd10ac87117adfd69dfd1c66c65609e679199211/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:58:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752a4aa12d0e2fe50394d84fdd10ac87117adfd69dfd1c66c65609e679199211/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:58:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752a4aa12d0e2fe50394d84fdd10ac87117adfd69dfd1c66c65609e679199211/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:58:54 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752a4aa12d0e2fe50394d84fdd10ac87117adfd69dfd1c66c65609e679199211/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:58:54 np0005532048 podman[417192]: 2025-11-22 09:58:54.478016977 +0000 UTC m=+0.387722778 container init 0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:58:54 np0005532048 podman[417192]: 2025-11-22 09:58:54.493357484 +0000 UTC m=+0.403063195 container start 0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:58:54 np0005532048 podman[417192]: 2025-11-22 09:58:54.497422224 +0000 UTC m=+0.407127975 container attach 0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 04:58:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:55.089+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:55 np0005532048 nova_compute[253661]: 2025-11-22 09:58:55.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:58:55 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 506 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:55 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]: {
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:    "0": [
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:        {
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "devices": [
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "/dev/loop3"
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            ],
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "lv_name": "ceph_lv0",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "lv_size": "21470642176",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "name": "ceph_lv0",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "tags": {
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.cluster_name": "ceph",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.crush_device_class": "",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.encrypted": "0",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.osd_id": "0",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.type": "block",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.vdo": "0"
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            },
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "type": "block",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "vg_name": "ceph_vg0"
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:        }
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:    ],
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:    "1": [
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:        {
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "devices": [
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "/dev/loop4"
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            ],
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "lv_name": "ceph_lv1",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "lv_size": "21470642176",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "name": "ceph_lv1",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "tags": {
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.cluster_name": "ceph",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.crush_device_class": "",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.encrypted": "0",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.osd_id": "1",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.type": "block",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.vdo": "0"
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            },
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "type": "block",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "vg_name": "ceph_vg1"
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:        }
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:    ],
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:    "2": [
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:        {
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "devices": [
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "/dev/loop5"
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            ],
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "lv_name": "ceph_lv2",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "lv_size": "21470642176",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "name": "ceph_lv2",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "tags": {
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.cephx_lockbox_secret": "",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.cluster_name": "ceph",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.crush_device_class": "",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.encrypted": "0",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.osd_id": "2",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.type": "block",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:                "ceph.vdo": "0"
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            },
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "type": "block",
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:            "vg_name": "ceph_vg2"
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:        }
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]:    ]
Nov 22 04:58:55 np0005532048 funny_chebyshev[417209]: }
Nov 22 04:58:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:58:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:58:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:58:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:58:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:58:55 np0005532048 systemd[1]: libpod-0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39.scope: Deactivated successfully.
Nov 22 04:58:55 np0005532048 systemd[1]: libpod-0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39.scope: Consumed 1.290s CPU time.
Nov 22 04:58:55 np0005532048 podman[417192]: 2025-11-22 09:58:55.824283288 +0000 UTC m=+1.733989029 container died 0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 04:58:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:56.116+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-752a4aa12d0e2fe50394d84fdd10ac87117adfd69dfd1c66c65609e679199211-merged.mount: Deactivated successfully.
Nov 22 04:58:56 np0005532048 nova_compute[253661]: 2025-11-22 09:58:56.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:58:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:56 np0005532048 podman[417192]: 2025-11-22 09:58:56.580879637 +0000 UTC m=+2.490585348 container remove 0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:58:56 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:56 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:56 np0005532048 systemd[1]: libpod-conmon-0f9f3d9148e0efbde2b464ed06285003ab1aa9c20eb0ea99dfdd189e010d9b39.scope: Deactivated successfully.
Nov 22 04:58:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:57.112+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:57 np0005532048 nova_compute[253661]: 2025-11-22 09:58:57.166 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:57 np0005532048 podman[417371]: 2025-11-22 09:58:57.195984764 +0000 UTC m=+0.040073976 container create 1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 04:58:57 np0005532048 systemd[1]: Started libpod-conmon-1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87.scope.
Nov 22 04:58:57 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:58:57 np0005532048 podman[417371]: 2025-11-22 09:58:57.269628366 +0000 UTC m=+0.113717588 container init 1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:58:57 np0005532048 podman[417371]: 2025-11-22 09:58:57.179590702 +0000 UTC m=+0.023679924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:58:57 np0005532048 podman[417371]: 2025-11-22 09:58:57.275806327 +0000 UTC m=+0.119895529 container start 1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 04:58:57 np0005532048 podman[417371]: 2025-11-22 09:58:57.279234472 +0000 UTC m=+0.123323674 container attach 1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_knuth, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:58:57 np0005532048 angry_knuth[417388]: 167 167
Nov 22 04:58:57 np0005532048 systemd[1]: libpod-1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87.scope: Deactivated successfully.
Nov 22 04:58:57 np0005532048 podman[417371]: 2025-11-22 09:58:57.28197482 +0000 UTC m=+0.126064022 container died 1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_knuth, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 04:58:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay-816483efdfee3468a12ddce582312409b4bc8cdcb2e1d837ad8ec1058b04fff6-merged.mount: Deactivated successfully.
Nov 22 04:58:57 np0005532048 podman[417371]: 2025-11-22 09:58:57.319998715 +0000 UTC m=+0.164087917 container remove 1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_knuth, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:58:57 np0005532048 systemd[1]: libpod-conmon-1b769a434f6252558b8a3d275113f67c8ca79667795638bfe1caee6258917f87.scope: Deactivated successfully.
Nov 22 04:58:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:58:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:58:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:58:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:58:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:58:57 np0005532048 podman[417413]: 2025-11-22 09:58:57.475742005 +0000 UTC m=+0.039013771 container create db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_benz, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 04:58:57 np0005532048 systemd[1]: Started libpod-conmon-db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19.scope.
Nov 22 04:58:57 np0005532048 systemd[1]: Started libcrun container.
Nov 22 04:58:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54c9809b40ee6f96592545529caf74a46f2e9b2b4a9f684ce54634815270c59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:58:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54c9809b40ee6f96592545529caf74a46f2e9b2b4a9f684ce54634815270c59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:58:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54c9809b40ee6f96592545529caf74a46f2e9b2b4a9f684ce54634815270c59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:58:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54c9809b40ee6f96592545529caf74a46f2e9b2b4a9f684ce54634815270c59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:58:57 np0005532048 podman[417413]: 2025-11-22 09:58:57.546896155 +0000 UTC m=+0.110167951 container init db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 04:58:57 np0005532048 podman[417413]: 2025-11-22 09:58:57.458942772 +0000 UTC m=+0.022214558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:58:57 np0005532048 podman[417413]: 2025-11-22 09:58:57.557615469 +0000 UTC m=+0.120887225 container start db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_benz, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:58:57 np0005532048 podman[417413]: 2025-11-22 09:58:57.562034808 +0000 UTC m=+0.125306594 container attach db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:58:57 np0005532048 podman[417427]: 2025-11-22 09:58:57.624604447 +0000 UTC m=+0.111580606 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:58:57 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:58.090+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:58 np0005532048 nova_compute[253661]: 2025-11-22 09:58:58.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:58:58 np0005532048 nova_compute[253661]: 2025-11-22 09:58:58.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 04:58:58 np0005532048 nova_compute[253661]: 2025-11-22 09:58:58.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 04:58:58 np0005532048 nova_compute[253661]: 2025-11-22 09:58:58.242 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 04:58:58 np0005532048 nova_compute[253661]: 2025-11-22 09:58:58.477 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:58:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:58:58 np0005532048 confident_benz[417430]: {
Nov 22 04:58:58 np0005532048 confident_benz[417430]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 04:58:58 np0005532048 confident_benz[417430]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:58:58 np0005532048 confident_benz[417430]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:58:58 np0005532048 confident_benz[417430]:        "osd_id": 1,
Nov 22 04:58:58 np0005532048 confident_benz[417430]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 04:58:58 np0005532048 confident_benz[417430]:        "type": "bluestore"
Nov 22 04:58:58 np0005532048 confident_benz[417430]:    },
Nov 22 04:58:58 np0005532048 confident_benz[417430]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 04:58:58 np0005532048 confident_benz[417430]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:58:58 np0005532048 confident_benz[417430]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:58:58 np0005532048 confident_benz[417430]:        "osd_id": 0,
Nov 22 04:58:58 np0005532048 confident_benz[417430]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 04:58:58 np0005532048 confident_benz[417430]:        "type": "bluestore"
Nov 22 04:58:58 np0005532048 confident_benz[417430]:    },
Nov 22 04:58:58 np0005532048 confident_benz[417430]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 04:58:58 np0005532048 confident_benz[417430]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 04:58:58 np0005532048 confident_benz[417430]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:58:58 np0005532048 confident_benz[417430]:        "osd_id": 2,
Nov 22 04:58:58 np0005532048 confident_benz[417430]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 04:58:58 np0005532048 confident_benz[417430]:        "type": "bluestore"
Nov 22 04:58:58 np0005532048 confident_benz[417430]:    }
Nov 22 04:58:58 np0005532048 confident_benz[417430]: }
Nov 22 04:58:58 np0005532048 systemd[1]: libpod-db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19.scope: Deactivated successfully.
Nov 22 04:58:58 np0005532048 systemd[1]: libpod-db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19.scope: Consumed 1.039s CPU time.
Nov 22 04:58:58 np0005532048 podman[417413]: 2025-11-22 09:58:58.592707816 +0000 UTC m=+1.155979582 container died db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_benz, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 04:58:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f54c9809b40ee6f96592545529caf74a46f2e9b2b4a9f684ce54634815270c59-merged.mount: Deactivated successfully.
Nov 22 04:58:58 np0005532048 podman[417413]: 2025-11-22 09:58:58.682285669 +0000 UTC m=+1.245557435 container remove db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_benz, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:58:58 np0005532048 systemd[1]: libpod-conmon-db2ad05d6d209094f22106834ed0ff1d6a54a0a1ba274a85942343c95bccdc19.scope: Deactivated successfully.
Nov 22 04:58:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:58:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:58:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:58:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:58:58 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 793b5e67-da40-4dea-993a-ee9a773df2ff does not exist
Nov 22 04:58:58 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 57cbaa32-fd3e-44f4-94da-ce8dbcf2155a does not exist
Nov 22 04:58:58 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:58 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:58:58 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 04:58:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 516 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:58:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:58:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:58:59.072+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:58:59 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 516 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:58:59 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:00.024+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:00 np0005532048 nova_compute[253661]: 2025-11-22 09:59:00.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:59:00 np0005532048 nova_compute[253661]: 2025-11-22 09:59:00.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 04:59:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:00 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:00.984+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:01.934+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:02 np0005532048 nova_compute[253661]: 2025-11-22 09:59:02.170 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:59:02 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:02.921+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:59:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:59:03 np0005532048 nova_compute[253661]: 2025-11-22 09:59:03.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:59:03 np0005532048 nova_compute[253661]: 2025-11-22 09:59:03.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:59:03 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:03 np0005532048 nova_compute[253661]: 2025-11-22 09:59:03.479 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:59:03 np0005532048 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 22 04:59:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:03.883+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:59:04 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:04.898+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:05 np0005532048 nova_compute[253661]: 2025-11-22 09:59:05.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:59:05 np0005532048 nova_compute[253661]: 2025-11-22 09:59:05.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:59:05 np0005532048 nova_compute[253661]: 2025-11-22 09:59:05.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:59:05 np0005532048 nova_compute[253661]: 2025-11-22 09:59:05.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:59:05 np0005532048 nova_compute[253661]: 2025-11-22 09:59:05.258 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 04:59:05 np0005532048 nova_compute[253661]: 2025-11-22 09:59:05.258 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:59:05 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 521 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:05 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:05 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:59:05 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2484235646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:59:05 np0005532048 nova_compute[253661]: 2025-11-22 09:59:05.701 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:59:05 np0005532048 nova_compute[253661]: 2025-11-22 09:59:05.851 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 04:59:05 np0005532048 nova_compute[253661]: 2025-11-22 09:59:05.852 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3487MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 04:59:05 np0005532048 nova_compute[253661]: 2025-11-22 09:59:05.852 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 04:59:05 np0005532048 nova_compute[253661]: 2025-11-22 09:59:05.852 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 04:59:05 np0005532048 nova_compute[253661]: 2025-11-22 09:59:05.917 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 04:59:05 np0005532048 nova_compute[253661]: 2025-11-22 09:59:05.918 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 04:59:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:05.922+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:05 np0005532048 nova_compute[253661]: 2025-11-22 09:59:05.934 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 04:59:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:59:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4267733888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:59:06 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:06 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 521 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:06 np0005532048 nova_compute[253661]: 2025-11-22 09:59:06.570 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 04:59:06 np0005532048 nova_compute[253661]: 2025-11-22 09:59:06.576 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 04:59:06 np0005532048 nova_compute[253661]: 2025-11-22 09:59:06.590 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 04:59:06 np0005532048 nova_compute[253661]: 2025-11-22 09:59:06.591 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 04:59:06 np0005532048 nova_compute[253661]: 2025-11-22 09:59:06.592 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 04:59:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:06.916+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:07 np0005532048 nova_compute[253661]: 2025-11-22 09:59:07.175 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:59:07 np0005532048 nova_compute[253661]: 2025-11-22 09:59:07.592 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:59:07 np0005532048 nova_compute[253661]: 2025-11-22 09:59:07.592 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:59:07 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:07 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:07.915+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:08 np0005532048 nova_compute[253661]: 2025-11-22 09:59:08.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:59:08 np0005532048 nova_compute[253661]: 2025-11-22 09:59:08.480 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:59:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:08.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:09 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:59:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:09.894+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:10 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:10.927+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:11 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:11 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:11.897+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:12 np0005532048 nova_compute[253661]: 2025-11-22 09:59:12.179 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:59:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:59:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2051394240' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:59:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:59:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2051394240' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:59:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:12 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:12.939+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:13 np0005532048 nova_compute[253661]: 2025-11-22 09:59:13.241 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:59:13 np0005532048 nova_compute[253661]: 2025-11-22 09:59:13.241 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 04:59:13 np0005532048 nova_compute[253661]: 2025-11-22 09:59:13.533 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:59:13 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:13.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 526 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:59:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:14 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:14 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 526 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:14.932+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:15.975+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:16 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:16.969+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:17 np0005532048 nova_compute[253661]: 2025-11-22 09:59:17.182 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:59:17 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:17 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:18.006+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:18 np0005532048 nova_compute[253661]: 2025-11-22 09:59:18.535 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:59:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:18 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:18.977+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 536 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:59:19 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:19 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 536 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:19.995+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:20 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:21.032+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:21 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:22.080+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:22 np0005532048 nova_compute[253661]: 2025-11-22 09:59:22.186 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:59:22 np0005532048 podman[417594]: 2025-11-22 09:59:22.368680703 +0000 UTC m=+0.055103026 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 04:59:22 np0005532048 podman[417593]: 2025-11-22 09:59:22.392307704 +0000 UTC m=+0.082451949 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 22 04:59:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:59:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:59:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:59:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:59:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:59:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:59:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:23.100+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:23 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:23 np0005532048 nova_compute[253661]: 2025-11-22 09:59:23.540 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:59:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:24.149+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:59:24 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:25.183+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:25 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 541 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:25 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:26.216+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:26 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:26 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 541 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:27 np0005532048 nova_compute[253661]: 2025-11-22 09:59:27.190 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:59:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:27.253+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:27 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:27 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:59:28.002 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:59:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:59:28.003 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:59:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 09:59:28.003 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:59:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:28.247+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:28 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:28 np0005532048 podman[417631]: 2025-11-22 09:59:28.456859339 +0000 UTC m=+0.144044884 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 04:59:28 np0005532048 nova_compute[253661]: 2025-11-22 09:59:28.541 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:59:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:59:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:29.245+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:29 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:30.294+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:30 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:31.263+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:31 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:32 np0005532048 nova_compute[253661]: 2025-11-22 09:59:32.194 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:59:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:32.287+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:32 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:33.333+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:33 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:33 np0005532048 nova_compute[253661]: 2025-11-22 09:59:33.543 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:59:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 546 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:59:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:34.320+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:34 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 546 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:34 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:35.336+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:35 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:36.381+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:37 np0005532048 nova_compute[253661]: 2025-11-22 09:59:37.197 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:59:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:37.425+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:37 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:38 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:38.470+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:38 np0005532048 nova_compute[253661]: 2025-11-22 09:59:38.545 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:59:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 551 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:59:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:39.451+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:39 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:39 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 551 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:40.471+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:40 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:41.479+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:41 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:42 np0005532048 nova_compute[253661]: 2025-11-22 09:59:42.202 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:59:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:42.475+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:42 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:43.470+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:43 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:43 np0005532048 nova_compute[253661]: 2025-11-22 09:59:43.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:59:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 556 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:59:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:44.475+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:44 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:44 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 556 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:45 np0005532048 nova_compute[253661]: 2025-11-22 09:59:45.240 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 04:59:45 np0005532048 nova_compute[253661]: 2025-11-22 09:59:45.241 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 22 04:59:45 np0005532048 nova_compute[253661]: 2025-11-22 09:59:45.258 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 22 04:59:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:45.451+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:45 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:46.448+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:46 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:47 np0005532048 nova_compute[253661]: 2025-11-22 09:59:47.206 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:59:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:47.450+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:47 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:48.441+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:48 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:48 np0005532048 nova_compute[253661]: 2025-11-22 09:59:48.550 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:59:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 561 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:59:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:49.399+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:49 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:49 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 561 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:50.444+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:50 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:51.419+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:51 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:52 np0005532048 nova_compute[253661]: 2025-11-22 09:59:52.210 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 04:59:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_09:59:52
Nov 22 04:59:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:59:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 04:59:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'vms', '.mgr', 'default.rgw.log', 'backups', '.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images']
Nov 22 04:59:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:59:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:52.453+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:52 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:59:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:59:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:59:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:59:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:59:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:59:53 np0005532048 podman[417658]: 2025-11-22 09:59:53.362122054 +0000 UTC m=+0.051713973 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:59:53 np0005532048 podman[417659]: 2025-11-22 09:59:53.369036264 +0000 UTC m=+0.053495756 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 22 04:59:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:53.464+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:53 np0005532048 nova_compute[253661]: 2025-11-22 09:59:53.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:59:53 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 566 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:59:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:54.490+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:54 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:54 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 566 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:55.476+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:55 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:59:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:59:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:59:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:59:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:59:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:56.471+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:56 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:57 np0005532048 nova_compute[253661]: 2025-11-22 09:59:57.214 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:59:57 np0005532048 nova_compute[253661]: 2025-11-22 09:59:57.237 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:59:57 np0005532048 nova_compute[253661]: 2025-11-22 09:59:57.238 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:59:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:57.426+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:59:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:59:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:59:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:59:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:59:57 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:58.474+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:58 np0005532048 nova_compute[253661]: 2025-11-22 09:59:58.554 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:59:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 04:59:58 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:59 np0005532048 podman[417721]: 2025-11-22 09:59:59.109872266 +0000 UTC m=+0.096875103 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 22 04:59:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 576 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:59:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T09:59:59.467+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 04:59:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:59 np0005532048 podman[417896]: 2025-11-22 09:59:59.835970684 +0000 UTC m=+0.093253855 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 22 04:59:59 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 04:59:59 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 576 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 04:59:59 np0005532048 podman[417896]: 2025-11-22 09:59:59.987356267 +0000 UTC m=+0.244639418 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 05:00:00 np0005532048 nova_compute[253661]: 2025-11-22 10:00:00.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:00:00 np0005532048 nova_compute[253661]: 2025-11-22 10:00:00.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:00:00 np0005532048 nova_compute[253661]: 2025-11-22 10:00:00.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:00:00 np0005532048 nova_compute[253661]: 2025-11-22 10:00:00.244 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:00:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:00.451+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:00:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:00:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:00:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:00:00 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:00:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:00:01 np0005532048 nova_compute[253661]: 2025-11-22 10:00:01.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:00:01 np0005532048 nova_compute[253661]: 2025-11-22 10:00:01.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 05:00:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:01.415+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:00:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:00:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:00:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:00:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:00:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:00:01 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 3b6478a8-886a-4444-b2bd-d3d56268310a does not exist
Nov 22 05:00:01 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d5d9bc11-da27-47f6-b276-8491c1020729 does not exist
Nov 22 05:00:01 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 68891905-0d9d-4af8-9e98-24e2cc03d3e6 does not exist
Nov 22 05:00:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:00:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:00:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:00:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:00:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:00:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:00:02 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:00:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:00:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:00:02 np0005532048 nova_compute[253661]: 2025-11-22 10:00:02.216 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:00:02 np0005532048 podman[418328]: 2025-11-22 10:00:02.367795483 +0000 UTC m=+0.046737600 container create d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:00:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:02.402+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:02 np0005532048 systemd[1]: Started libpod-conmon-d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9.scope.
Nov 22 05:00:02 np0005532048 podman[418328]: 2025-11-22 10:00:02.34651213 +0000 UTC m=+0.025454277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:00:02 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:00:02 np0005532048 podman[418328]: 2025-11-22 10:00:02.465933887 +0000 UTC m=+0.144876004 container init d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:00:02 np0005532048 podman[418328]: 2025-11-22 10:00:02.481708895 +0000 UTC m=+0.160651012 container start d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 05:00:02 np0005532048 podman[418328]: 2025-11-22 10:00:02.485542819 +0000 UTC m=+0.164484936 container attach d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:00:02 np0005532048 inspiring_euclid[418344]: 167 167
Nov 22 05:00:02 np0005532048 systemd[1]: libpod-d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9.scope: Deactivated successfully.
Nov 22 05:00:02 np0005532048 conmon[418344]: conmon d030a9ac915a14735241 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9.scope/container/memory.events
Nov 22 05:00:02 np0005532048 podman[418328]: 2025-11-22 10:00:02.48963806 +0000 UTC m=+0.168580177 container died d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 05:00:02 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b537fefaa098117ee6d2a785d2b301c57794800a81744576fdc76f9183aaccd6-merged.mount: Deactivated successfully.
Nov 22 05:00:02 np0005532048 podman[418328]: 2025-11-22 10:00:02.531663933 +0000 UTC m=+0.210606040 container remove d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 22 05:00:02 np0005532048 systemd[1]: libpod-conmon-d030a9ac915a14735241d455e344fdc7a262a1cdc9617b8bd4d58841b603b6a9.scope: Deactivated successfully.
Nov 22 05:00:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:02 np0005532048 podman[418367]: 2025-11-22 10:00:02.692197552 +0000 UTC m=+0.041121493 container create 98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:00:02 np0005532048 systemd[1]: Started libpod-conmon-98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a.scope.
Nov 22 05:00:02 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:00:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc22d9517b5b7d5f36b92b36888225136775e1f5e8d6a37ce9b01762596739/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:00:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc22d9517b5b7d5f36b92b36888225136775e1f5e8d6a37ce9b01762596739/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:00:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc22d9517b5b7d5f36b92b36888225136775e1f5e8d6a37ce9b01762596739/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:00:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc22d9517b5b7d5f36b92b36888225136775e1f5e8d6a37ce9b01762596739/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:00:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28dc22d9517b5b7d5f36b92b36888225136775e1f5e8d6a37ce9b01762596739/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:00:02 np0005532048 podman[418367]: 2025-11-22 10:00:02.675385299 +0000 UTC m=+0.024309270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:00:02 np0005532048 podman[418367]: 2025-11-22 10:00:02.778110985 +0000 UTC m=+0.127034966 container init 98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:00:02 np0005532048 podman[418367]: 2025-11-22 10:00:02.785408044 +0000 UTC m=+0.134331985 container start 98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 05:00:02 np0005532048 podman[418367]: 2025-11-22 10:00:02.789872234 +0000 UTC m=+0.138796185 container attach 98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:00:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:00:03 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:03.376+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:03 np0005532048 nova_compute[253661]: 2025-11-22 10:00:03.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:03 np0005532048 frosty_payne[418383]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:00:03 np0005532048 frosty_payne[418383]: --> relative data size: 1.0
Nov 22 05:00:03 np0005532048 frosty_payne[418383]: --> All data devices are unavailable
Nov 22 05:00:03 np0005532048 systemd[1]: libpod-98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a.scope: Deactivated successfully.
Nov 22 05:00:03 np0005532048 systemd[1]: libpod-98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a.scope: Consumed 1.034s CPU time.
Nov 22 05:00:03 np0005532048 podman[418367]: 2025-11-22 10:00:03.894334477 +0000 UTC m=+1.243258418 container died 98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:00:03 np0005532048 systemd[1]: var-lib-containers-storage-overlay-28dc22d9517b5b7d5f36b92b36888225136775e1f5e8d6a37ce9b01762596739-merged.mount: Deactivated successfully.
Nov 22 05:00:03 np0005532048 podman[418367]: 2025-11-22 10:00:03.957116872 +0000 UTC m=+1.306040803 container remove 98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:00:03 np0005532048 systemd[1]: libpod-conmon-98cf17412f4771448c3d817a22e9f4123a054fd1560083b99f14736dc907fa0a.scope: Deactivated successfully.
Nov 22 05:00:04 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:04 np0005532048 nova_compute[253661]: 2025-11-22 10:00:04.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:00:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:00:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:04.369+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:04 np0005532048 podman[418561]: 2025-11-22 10:00:04.743748899 +0000 UTC m=+0.053226431 container create 7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 05:00:04 np0005532048 systemd[1]: Started libpod-conmon-7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0.scope.
Nov 22 05:00:04 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:00:04 np0005532048 podman[418561]: 2025-11-22 10:00:04.722056825 +0000 UTC m=+0.031534367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:00:04 np0005532048 podman[418561]: 2025-11-22 10:00:04.829079937 +0000 UTC m=+0.138557549 container init 7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:00:04 np0005532048 podman[418561]: 2025-11-22 10:00:04.838605152 +0000 UTC m=+0.148082674 container start 7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:00:04 np0005532048 podman[418561]: 2025-11-22 10:00:04.84220406 +0000 UTC m=+0.151681622 container attach 7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:00:04 np0005532048 hopeful_blackwell[418577]: 167 167
Nov 22 05:00:04 np0005532048 systemd[1]: libpod-7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0.scope: Deactivated successfully.
Nov 22 05:00:04 np0005532048 podman[418561]: 2025-11-22 10:00:04.847653234 +0000 UTC m=+0.157130826 container died 7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:00:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0e441218bf327d6770faea2f2354d9b069853742344a600b2ce8950edaa90b60-merged.mount: Deactivated successfully.
Nov 22 05:00:04 np0005532048 podman[418561]: 2025-11-22 10:00:04.898661019 +0000 UTC m=+0.208138551 container remove 7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:00:04 np0005532048 systemd[1]: libpod-conmon-7eb6c5ba6ce074ae3afdedf23ece899b1572f0e1dd2fa26633eca3320d3a45e0.scope: Deactivated successfully.
Nov 22 05:00:05 np0005532048 podman[418601]: 2025-11-22 10:00:05.090958998 +0000 UTC m=+0.049724774 container create abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:00:05 np0005532048 systemd[1]: Started libpod-conmon-abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600.scope.
Nov 22 05:00:05 np0005532048 podman[418601]: 2025-11-22 10:00:05.067254045 +0000 UTC m=+0.026019861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:00:05 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:00:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8bd00d52b8bd399a0b28d50b400e859ed27e301ad650ee456e9f4edb0cf36c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:00:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8bd00d52b8bd399a0b28d50b400e859ed27e301ad650ee456e9f4edb0cf36c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:00:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8bd00d52b8bd399a0b28d50b400e859ed27e301ad650ee456e9f4edb0cf36c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:00:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f8bd00d52b8bd399a0b28d50b400e859ed27e301ad650ee456e9f4edb0cf36c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:00:05 np0005532048 podman[418601]: 2025-11-22 10:00:05.187267857 +0000 UTC m=+0.146033723 container init abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:00:05 np0005532048 podman[418601]: 2025-11-22 10:00:05.200190144 +0000 UTC m=+0.158955960 container start abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:00:05 np0005532048 podman[418601]: 2025-11-22 10:00:05.20897329 +0000 UTC m=+0.167739096 container attach abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:00:05 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 581 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:05 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:05 np0005532048 nova_compute[253661]: 2025-11-22 10:00:05.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:00:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:05.370+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]: {
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:    "0": [
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:        {
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "devices": [
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "/dev/loop3"
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            ],
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "lv_name": "ceph_lv0",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "lv_size": "21470642176",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "name": "ceph_lv0",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "tags": {
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.cluster_name": "ceph",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.crush_device_class": "",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.encrypted": "0",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.osd_id": "0",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.type": "block",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.vdo": "0"
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            },
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "type": "block",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "vg_name": "ceph_vg0"
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:        }
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:    ],
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:    "1": [
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:        {
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "devices": [
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "/dev/loop4"
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            ],
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "lv_name": "ceph_lv1",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "lv_size": "21470642176",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "name": "ceph_lv1",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "tags": {
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.cluster_name": "ceph",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.crush_device_class": "",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.encrypted": "0",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.osd_id": "1",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.type": "block",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.vdo": "0"
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            },
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "type": "block",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "vg_name": "ceph_vg1"
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:        }
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:    ],
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:    "2": [
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:        {
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "devices": [
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "/dev/loop5"
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            ],
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "lv_name": "ceph_lv2",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "lv_size": "21470642176",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "name": "ceph_lv2",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "tags": {
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.cluster_name": "ceph",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.crush_device_class": "",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.encrypted": "0",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.osd_id": "2",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.type": "block",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:                "ceph.vdo": "0"
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            },
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "type": "block",
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:            "vg_name": "ceph_vg2"
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:        }
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]:    ]
Nov 22 05:00:06 np0005532048 festive_chebyshev[418617]: }
Nov 22 05:00:06 np0005532048 systemd[1]: libpod-abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600.scope: Deactivated successfully.
Nov 22 05:00:06 np0005532048 podman[418601]: 2025-11-22 10:00:06.053138802 +0000 UTC m=+1.011904578 container died abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 05:00:06 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4f8bd00d52b8bd399a0b28d50b400e859ed27e301ad650ee456e9f4edb0cf36c-merged.mount: Deactivated successfully.
Nov 22 05:00:06 np0005532048 podman[418601]: 2025-11-22 10:00:06.11892361 +0000 UTC m=+1.077689386 container remove abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:00:06 np0005532048 systemd[1]: libpod-conmon-abf38dba143e4ff93068a7bc5e3ace4c8af59e0ad5baee36945c775d02680600.scope: Deactivated successfully.
Nov 22 05:00:06 np0005532048 nova_compute[253661]: 2025-11-22 10:00:06.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:00:06 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:06 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 581 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:06.378+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:06 np0005532048 podman[418781]: 2025-11-22 10:00:06.761021522 +0000 UTC m=+0.041034791 container create 2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 05:00:06 np0005532048 systemd[1]: Started libpod-conmon-2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a.scope.
Nov 22 05:00:06 np0005532048 podman[418781]: 2025-11-22 10:00:06.742099226 +0000 UTC m=+0.022112495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:00:06 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:00:06 np0005532048 podman[418781]: 2025-11-22 10:00:06.867137221 +0000 UTC m=+0.147150480 container init 2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:00:06 np0005532048 podman[418781]: 2025-11-22 10:00:06.873259993 +0000 UTC m=+0.153273252 container start 2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:00:06 np0005532048 podman[418781]: 2025-11-22 10:00:06.87682527 +0000 UTC m=+0.156838549 container attach 2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 05:00:06 np0005532048 mystifying_mcclintock[418797]: 167 167
Nov 22 05:00:06 np0005532048 systemd[1]: libpod-2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a.scope: Deactivated successfully.
Nov 22 05:00:06 np0005532048 podman[418781]: 2025-11-22 10:00:06.880266535 +0000 UTC m=+0.160279804 container died 2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 05:00:06 np0005532048 systemd[1]: var-lib-containers-storage-overlay-65c185d7b8c4fa91be9879b6c0e20b0490b0f769965a949b7ba0a822423ba297-merged.mount: Deactivated successfully.
Nov 22 05:00:06 np0005532048 podman[418781]: 2025-11-22 10:00:06.921378036 +0000 UTC m=+0.201391295 container remove 2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcclintock, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:00:06 np0005532048 systemd[1]: libpod-conmon-2d9b865ee22cd790e0578eaa06d36b0fddbe59d78150e6e31d93dca2d597fe2a.scope: Deactivated successfully.
Nov 22 05:00:07 np0005532048 podman[418824]: 2025-11-22 10:00:07.123014695 +0000 UTC m=+0.046229428 container create ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:00:07 np0005532048 systemd[1]: Started libpod-conmon-ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54.scope.
Nov 22 05:00:07 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:00:07 np0005532048 podman[418824]: 2025-11-22 10:00:07.104072719 +0000 UTC m=+0.027287452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:00:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7983d74f721d719e83e9f77dfd858922025c7d2e69bfaa0052a46a8a4d8ddb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:00:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7983d74f721d719e83e9f77dfd858922025c7d2e69bfaa0052a46a8a4d8ddb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:00:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7983d74f721d719e83e9f77dfd858922025c7d2e69bfaa0052a46a8a4d8ddb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:00:07 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7983d74f721d719e83e9f77dfd858922025c7d2e69bfaa0052a46a8a4d8ddb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:00:07 np0005532048 podman[418824]: 2025-11-22 10:00:07.215992462 +0000 UTC m=+0.139207215 container init ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:00:07 np0005532048 nova_compute[253661]: 2025-11-22 10:00:07.220 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:07 np0005532048 nova_compute[253661]: 2025-11-22 10:00:07.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:00:07 np0005532048 podman[418824]: 2025-11-22 10:00:07.231059762 +0000 UTC m=+0.154274505 container start ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:00:07 np0005532048 podman[418824]: 2025-11-22 10:00:07.235758358 +0000 UTC m=+0.158973111 container attach ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:00:07 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:07 np0005532048 nova_compute[253661]: 2025-11-22 10:00:07.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:00:07 np0005532048 nova_compute[253661]: 2025-11-22 10:00:07.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:00:07 np0005532048 nova_compute[253661]: 2025-11-22 10:00:07.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:00:07 np0005532048 nova_compute[253661]: 2025-11-22 10:00:07.252 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:00:07 np0005532048 nova_compute[253661]: 2025-11-22 10:00:07.252 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:00:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:07.376+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:00:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/465728619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:00:07 np0005532048 nova_compute[253661]: 2025-11-22 10:00:07.674 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:00:07 np0005532048 nova_compute[253661]: 2025-11-22 10:00:07.840 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:00:07 np0005532048 nova_compute[253661]: 2025-11-22 10:00:07.842 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3442MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:00:07 np0005532048 nova_compute[253661]: 2025-11-22 10:00:07.842 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:00:07 np0005532048 nova_compute[253661]: 2025-11-22 10:00:07.842 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:00:07 np0005532048 nova_compute[253661]: 2025-11-22 10:00:07.915 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:00:07 np0005532048 nova_compute[253661]: 2025-11-22 10:00:07.916 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:00:07 np0005532048 nova_compute[253661]: 2025-11-22 10:00:07.932 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:00:08 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]: {
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:        "osd_id": 1,
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:        "type": "bluestore"
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:    },
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:        "osd_id": 0,
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:        "type": "bluestore"
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:    },
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:        "osd_id": 2,
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:        "type": "bluestore"
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]:    }
Nov 22 05:00:08 np0005532048 trusting_fermi[418841]: }
Nov 22 05:00:08 np0005532048 systemd[1]: libpod-ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54.scope: Deactivated successfully.
Nov 22 05:00:08 np0005532048 systemd[1]: libpod-ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54.scope: Consumed 1.045s CPU time.
Nov 22 05:00:08 np0005532048 podman[418824]: 2025-11-22 10:00:08.283335083 +0000 UTC m=+1.206549826 container died ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 05:00:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0b7983d74f721d719e83e9f77dfd858922025c7d2e69bfaa0052a46a8a4d8ddb-merged.mount: Deactivated successfully.
Nov 22 05:00:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:08.337+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:08 np0005532048 podman[418824]: 2025-11-22 10:00:08.349805927 +0000 UTC m=+1.273020660 container remove ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 05:00:08 np0005532048 systemd[1]: libpod-conmon-ca82e204fcdc2c4a04f662473703be1ec36985b4cffd51fecf5a63b95beccf54.scope: Deactivated successfully.
Nov 22 05:00:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:00:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:00:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:00:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3312017244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:00:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:00:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:00:08 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 6b2cca4f-b5bb-410c-9a7c-85a192240ec0 does not exist
Nov 22 05:00:08 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev fc4db60e-7048-421d-98b9-204e5efa0693 does not exist
Nov 22 05:00:08 np0005532048 nova_compute[253661]: 2025-11-22 10:00:08.418 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:00:08 np0005532048 nova_compute[253661]: 2025-11-22 10:00:08.426 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:00:08 np0005532048 nova_compute[253661]: 2025-11-22 10:00:08.446 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:00:08 np0005532048 nova_compute[253661]: 2025-11-22 10:00:08.448 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:00:08 np0005532048 nova_compute[253661]: 2025-11-22 10:00:08.448 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:00:08 np0005532048 nova_compute[253661]: 2025-11-22 10:00:08.557 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:00:09 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:09 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:09 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:00:09 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:00:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:09.315+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:10 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:10.350+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:10 np0005532048 nova_compute[253661]: 2025-11-22 10:00:10.450 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:00:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:11 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:11.377+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:12 np0005532048 nova_compute[253661]: 2025-11-22 10:00:12.224 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:12.386+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:00:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1348051008' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:00:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:00:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1348051008' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:00:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:13 np0005532048 nova_compute[253661]: 2025-11-22 10:00:13.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:00:13 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:13.374+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:13 np0005532048 nova_compute[253661]: 2025-11-22 10:00:13.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 586 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:00:14 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:14 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 586 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:14.379+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:15 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:15.358+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:16 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:16 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:16.348+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:17 np0005532048 nova_compute[253661]: 2025-11-22 10:00:17.229 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:17.302+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:17 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:18.282+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:18 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:18 np0005532048 nova_compute[253661]: 2025-11-22 10:00:18.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 591 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:00:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:19.251+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:19 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:19 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 591 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:20.217+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:20 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:21.174+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:21 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:22.143+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:22 np0005532048 nova_compute[253661]: 2025-11-22 10:00:22.233 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:22 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:00:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:00:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:00:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:00:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:00:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:00:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:23.172+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:23 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:23 np0005532048 nova_compute[253661]: 2025-11-22 10:00:23.603 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:24.216+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 596 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:00:24 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:24 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 596 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:24 np0005532048 podman[418979]: 2025-11-22 10:00:24.390268706 +0000 UTC m=+0.069004139 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 22 05:00:24 np0005532048 podman[418980]: 2025-11-22 10:00:24.398046717 +0000 UTC m=+0.077414615 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 05:00:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:25.180+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:25 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:25 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:26.149+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:26 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:27.112+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:27 np0005532048 nova_compute[253661]: 2025-11-22 10:00:27.236 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:27 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:00:28.004 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:00:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:00:28.005 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:00:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:00:28.005 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:00:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:28.087+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:28 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:28 np0005532048 nova_compute[253661]: 2025-11-22 10:00:28.604 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:29.079+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 602 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:00:29 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:29 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 602 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:29 np0005532048 podman[419016]: 2025-11-22 10:00:29.426123789 +0000 UTC m=+0.123543039 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:00:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:30.044+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:30 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:31.035+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:31 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:32.011+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:32 np0005532048 nova_compute[253661]: 2025-11-22 10:00:32.241 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:32 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:33.024+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:33 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:33 np0005532048 nova_compute[253661]: 2025-11-22 10:00:33.606 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:34.032+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 606 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:00:34 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:34 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 606 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:35.009+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:35 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:36.023+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:36 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:37.059+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:37 np0005532048 nova_compute[253661]: 2025-11-22 10:00:37.245 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:37 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:38.101+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:38 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:38 np0005532048 nova_compute[253661]: 2025-11-22 10:00:38.608 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:39.144+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 611 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:00:39 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:39 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 611 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:40.141+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:40 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:41.176+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:41 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:42.199+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:42 np0005532048 nova_compute[253661]: 2025-11-22 10:00:42.249 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:42 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:43.243+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:43 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:43 np0005532048 nova_compute[253661]: 2025-11-22 10:00:43.632 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 616 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:00:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:44.290+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:44 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 616 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:44 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:45.245+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:45 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:46.196+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:46 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:47.194+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:47 np0005532048 nova_compute[253661]: 2025-11-22 10:00:47.254 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:47 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:48.153+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:48 np0005532048 nova_compute[253661]: 2025-11-22 10:00:48.631 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:48 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:49.111+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 627 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:00:49 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:49 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 627 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:50.090+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:50 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:51.130+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:51 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:52.123+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:52 np0005532048 nova_compute[253661]: 2025-11-22 10:00:52.257 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:00:52
Nov 22 05:00:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:00:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:00:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'images', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta']
Nov 22 05:00:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:00:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:52 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:00:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:00:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:00:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:00:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:00:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:00:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:53.091+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:53 np0005532048 nova_compute[253661]: 2025-11-22 10:00:53.633 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:53 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:54.057+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:00:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 631 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:54 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:55.027+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:55 np0005532048 podman[419043]: 2025-11-22 10:00:55.372048251 +0000 UTC m=+0.063366891 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 05:00:55 np0005532048 podman[419044]: 2025-11-22 10:00:55.375468256 +0000 UTC m=+0.062993482 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:00:55 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 631 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:00:55 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:00:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:00:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:00:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:00:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:00:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:56.071+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:56 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:57.027+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:57 np0005532048 nova_compute[253661]: 2025-11-22 10:00:57.262 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:00:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:00:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:00:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:00:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:00:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:00:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 13K writes, 63K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1772 writes, 8277 keys, 1772 commit groups, 1.0 writes per commit group, ingest: 9.43 MB, 0.02 MB/s#012Interval WAL: 1772 writes, 1772 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     39.1      1.81              0.26        44    0.041       0      0       0.0       0.0#012  L6      1/0    8.04 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   5.0     95.1     80.8      4.39              1.09        43    0.102    279K    23K       0.0       0.0#012 Sum      1/0    8.04 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   6.0     67.4     68.7      6.20              1.35        87    0.071    279K    23K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.2     56.3     56.2      1.10              0.20        12    0.092     51K   3063       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     95.1     80.8      4.39              1.09        43    0.102    279K    23K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     39.2      1.80              0.26        43    0.042       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.069, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.42 GB write, 0.08 MB/s write, 0.41 GB read, 0.08 MB/s read, 6.2 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 47.61 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000397 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3097,45.58 MB,14.9942%) FilterBlock(88,792.30 KB,0.254516%) IndexBlock(88,1.25 MB,0.412524%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 22 05:00:57 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:57.988+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:58 np0005532048 nova_compute[253661]: 2025-11-22 10:00:58.247 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:00:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:00:58 np0005532048 nova_compute[253661]: 2025-11-22 10:00:58.636 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:00:58 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:58.955+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:59 np0005532048 nova_compute[253661]: 2025-11-22 10:00:59.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:00:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:00:59 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:00:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:00:59.922+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:00:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:01:00 np0005532048 podman[419082]: 2025-11-22 10:01:00.390383328 +0000 UTC m=+0.089246468 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 05:01:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:00 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:01:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:00.942+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:01:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:01.933+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:01:01 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:01:02 np0005532048 nova_compute[253661]: 2025-11-22 10:01:02.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:01:02 np0005532048 nova_compute[253661]: 2025-11-22 10:01:02.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:01:02 np0005532048 nova_compute[253661]: 2025-11-22 10:01:02.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:01:02 np0005532048 nova_compute[253661]: 2025-11-22 10:01:02.265 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:02 np0005532048 nova_compute[253661]: 2025-11-22 10:01:02.906 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:01:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:02.925+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:01:02 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:01:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:01:03 np0005532048 nova_compute[253661]: 2025-11-22 10:01:03.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:01:03 np0005532048 nova_compute[253661]: 2025-11-22 10:01:03.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:01:03 np0005532048 nova_compute[253661]: 2025-11-22 10:01:03.637 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:03.904+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:01:04 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:01:04 np0005532048 nova_compute[253661]: 2025-11-22 10:01:04.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:01:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 636 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:01:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:04.906+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:01:05 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:01:05 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 636 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:05.865+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:01:06 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:01:06 np0005532048 nova_compute[253661]: 2025-11-22 10:01:06.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:01:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:06.873+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:01:07 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:01:07 np0005532048 nova_compute[253661]: 2025-11-22 10:01:07.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:01:07 np0005532048 nova_compute[253661]: 2025-11-22 10:01:07.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:01:07 np0005532048 nova_compute[253661]: 2025-11-22 10:01:07.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:01:07 np0005532048 nova_compute[253661]: 2025-11-22 10:01:07.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:01:07 np0005532048 nova_compute[253661]: 2025-11-22 10:01:07.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:01:07 np0005532048 nova_compute[253661]: 2025-11-22 10:01:07.255 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:01:07 np0005532048 nova_compute[253661]: 2025-11-22 10:01:07.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:01:07 np0005532048 nova_compute[253661]: 2025-11-22 10:01:07.299 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:01:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1604114902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:01:07 np0005532048 nova_compute[253661]: 2025-11-22 10:01:07.727 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:01:07 np0005532048 nova_compute[253661]: 2025-11-22 10:01:07.894 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:01:07 np0005532048 nova_compute[253661]: 2025-11-22 10:01:07.895 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3547MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:01:07 np0005532048 nova_compute[253661]: 2025-11-22 10:01:07.895 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:01:07 np0005532048 nova_compute[253661]: 2025-11-22 10:01:07.896 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:01:08 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:01:08 np0005532048 nova_compute[253661]: 2025-11-22 10:01:08.063 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:01:08 np0005532048 nova_compute[253661]: 2025-11-22 10:01:08.063 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:01:08 np0005532048 nova_compute[253661]: 2025-11-22 10:01:08.169 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:01:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:01:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3778119789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:01:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:01:08 np0005532048 nova_compute[253661]: 2025-11-22 10:01:08.633 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:01:08 np0005532048 nova_compute[253661]: 2025-11-22 10:01:08.640 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:01:08 np0005532048 nova_compute[253661]: 2025-11-22 10:01:08.644 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:08 np0005532048 nova_compute[253661]: 2025-11-22 10:01:08.659 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:01:08 np0005532048 nova_compute[253661]: 2025-11-22 10:01:08.661 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:01:08 np0005532048 nova_compute[253661]: 2025-11-22 10:01:08.662 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:01:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 646 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:01:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:01:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:01:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:01:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:01:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:01:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:01:09 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 423f8840-60a5-49a9-9229-09a8e3e8058e does not exist
Nov 22 05:01:09 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 20793c4a-8c75-48bb-b226-8bf09083fec8 does not exist
Nov 22 05:01:09 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev b35fe79a-86a6-4607-8ea7-3dacdaf81cbb does not exist
Nov 22 05:01:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:01:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:01:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:01:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:01:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:01:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:01:10 np0005532048 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 646 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:10 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:01:10 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:01:10 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:01:10 np0005532048 podman[419434]: 2025-11-22 10:01:10.172224924 +0000 UTC m=+0.037749290 container create f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:01:10 np0005532048 systemd[1]: Started libpod-conmon-f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26.scope.
Nov 22 05:01:10 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:01:10 np0005532048 podman[419434]: 2025-11-22 10:01:10.154818375 +0000 UTC m=+0.020342781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:01:10 np0005532048 podman[419434]: 2025-11-22 10:01:10.262563408 +0000 UTC m=+0.128087794 container init f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 05:01:10 np0005532048 podman[419434]: 2025-11-22 10:01:10.270729149 +0000 UTC m=+0.136253525 container start f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 05:01:10 np0005532048 podman[419434]: 2025-11-22 10:01:10.275723882 +0000 UTC m=+0.141248248 container attach f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:01:10 np0005532048 systemd[1]: libpod-f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26.scope: Deactivated successfully.
Nov 22 05:01:10 np0005532048 priceless_volhard[419451]: 167 167
Nov 22 05:01:10 np0005532048 conmon[419451]: conmon f11ad7681db75fc2dedd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26.scope/container/memory.events
Nov 22 05:01:10 np0005532048 podman[419434]: 2025-11-22 10:01:10.27887622 +0000 UTC m=+0.144400606 container died f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:01:10 np0005532048 systemd[1]: var-lib-containers-storage-overlay-07e80bef68984375350d58472030ffa4f381a18f11ff3b5066d98bbfa84cc81d-merged.mount: Deactivated successfully.
Nov 22 05:01:10 np0005532048 podman[419434]: 2025-11-22 10:01:10.316275151 +0000 UTC m=+0.181799507 container remove f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_volhard, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:01:10 np0005532048 systemd[1]: libpod-conmon-f11ad7681db75fc2dedd5fd3c18c231060301fc57d1968707bc9667cf929da26.scope: Deactivated successfully.
Nov 22 05:01:10 np0005532048 podman[419474]: 2025-11-22 10:01:10.485160167 +0000 UTC m=+0.044060605 container create 684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shirley, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:01:10 np0005532048 systemd[1]: Started libpod-conmon-684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37.scope.
Nov 22 05:01:10 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:01:10 np0005532048 podman[419474]: 2025-11-22 10:01:10.469016741 +0000 UTC m=+0.027917209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:01:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206a58cccd9795b72c673883edc7ac42e73afc623264d4cbd18db2df4f7a12eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:01:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206a58cccd9795b72c673883edc7ac42e73afc623264d4cbd18db2df4f7a12eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:01:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206a58cccd9795b72c673883edc7ac42e73afc623264d4cbd18db2df4f7a12eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:01:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206a58cccd9795b72c673883edc7ac42e73afc623264d4cbd18db2df4f7a12eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:01:10 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206a58cccd9795b72c673883edc7ac42e73afc623264d4cbd18db2df4f7a12eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:01:10 np0005532048 podman[419474]: 2025-11-22 10:01:10.586352019 +0000 UTC m=+0.145252507 container init 684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:01:10 np0005532048 podman[419474]: 2025-11-22 10:01:10.594710075 +0000 UTC m=+0.153610513 container start 684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shirley, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:01:10 np0005532048 podman[419474]: 2025-11-22 10:01:10.598627761 +0000 UTC m=+0.157528199 container attach 684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:01:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:01:10 np0005532048 nova_compute[253661]: 2025-11-22 10:01:10.663 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:01:11 np0005532048 objective_shirley[419490]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:01:11 np0005532048 objective_shirley[419490]: --> relative data size: 1.0
Nov 22 05:01:11 np0005532048 objective_shirley[419490]: --> All data devices are unavailable
Nov 22 05:01:11 np0005532048 systemd[1]: libpod-684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37.scope: Deactivated successfully.
Nov 22 05:01:11 np0005532048 podman[419474]: 2025-11-22 10:01:11.679838887 +0000 UTC m=+1.238739325 container died 684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shirley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:01:11 np0005532048 systemd[1]: libpod-684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37.scope: Consumed 1.028s CPU time.
Nov 22 05:01:11 np0005532048 systemd[1]: var-lib-containers-storage-overlay-206a58cccd9795b72c673883edc7ac42e73afc623264d4cbd18db2df4f7a12eb-merged.mount: Deactivated successfully.
Nov 22 05:01:11 np0005532048 podman[419474]: 2025-11-22 10:01:11.738216654 +0000 UTC m=+1.297117092 container remove 684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shirley, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 22 05:01:11 np0005532048 systemd[1]: libpod-conmon-684c525846cb2b6ff1159db5020eac61aa6753604f2b30a076d8fe56153bdd37.scope: Deactivated successfully.
Nov 22 05:01:12 np0005532048 nova_compute[253661]: 2025-11-22 10:01:12.302 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:01:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3577879536' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:01:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:01:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3577879536' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:01:12 np0005532048 podman[419672]: 2025-11-22 10:01:12.466661906 +0000 UTC m=+0.055221901 container create 5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:01:12 np0005532048 systemd[1]: Started libpod-conmon-5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84.scope.
Nov 22 05:01:12 np0005532048 podman[419672]: 2025-11-22 10:01:12.437293964 +0000 UTC m=+0.025853999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:01:12 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:01:12 np0005532048 podman[419672]: 2025-11-22 10:01:12.558426235 +0000 UTC m=+0.146986250 container init 5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 05:01:12 np0005532048 podman[419672]: 2025-11-22 10:01:12.565061318 +0000 UTC m=+0.153621303 container start 5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:01:12 np0005532048 podman[419672]: 2025-11-22 10:01:12.568985965 +0000 UTC m=+0.157545950 container attach 5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:01:12 np0005532048 systemd[1]: libpod-5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84.scope: Deactivated successfully.
Nov 22 05:01:12 np0005532048 gifted_nash[419688]: 167 167
Nov 22 05:01:12 np0005532048 conmon[419688]: conmon 5be5866da4bfde19c912 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84.scope/container/memory.events
Nov 22 05:01:12 np0005532048 podman[419672]: 2025-11-22 10:01:12.573275381 +0000 UTC m=+0.161835376 container died 5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:01:12 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7ff346a89dc72178075d54a817eeba0b2d870bb845c69b54fd267055f9b960b1-merged.mount: Deactivated successfully.
Nov 22 05:01:12 np0005532048 podman[419672]: 2025-11-22 10:01:12.618635737 +0000 UTC m=+0.207195722 container remove 5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:01:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:01:12 np0005532048 systemd[1]: libpod-conmon-5be5866da4bfde19c912da3afb1606045d2bc9b1b98bb4bb6cb1f4f35e7adb84.scope: Deactivated successfully.
Nov 22 05:01:12 np0005532048 podman[419712]: 2025-11-22 10:01:12.796192698 +0000 UTC m=+0.048699760 container create 3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:01:12 np0005532048 systemd[1]: Started libpod-conmon-3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878.scope.
Nov 22 05:01:12 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:01:12 np0005532048 podman[419712]: 2025-11-22 10:01:12.779710512 +0000 UTC m=+0.032217594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:01:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b146e4a64a6de22b2525de9cd567811b8a6d5a12d9679194448bca7faa8d06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:01:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b146e4a64a6de22b2525de9cd567811b8a6d5a12d9679194448bca7faa8d06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:01:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b146e4a64a6de22b2525de9cd567811b8a6d5a12d9679194448bca7faa8d06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:01:12 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b146e4a64a6de22b2525de9cd567811b8a6d5a12d9679194448bca7faa8d06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:01:12 np0005532048 podman[419712]: 2025-11-22 10:01:12.89339667 +0000 UTC m=+0.145903762 container init 3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 05:01:12 np0005532048 podman[419712]: 2025-11-22 10:01:12.900521276 +0000 UTC m=+0.153028338 container start 3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 05:01:12 np0005532048 podman[419712]: 2025-11-22 10:01:12.904014322 +0000 UTC m=+0.156521384 container attach 3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Nov 22 05:01:13 np0005532048 nova_compute[253661]: 2025-11-22 10:01:13.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]: {
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:    "0": [
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:        {
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "devices": [
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "/dev/loop3"
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            ],
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "lv_name": "ceph_lv0",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "lv_size": "21470642176",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "name": "ceph_lv0",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "tags": {
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.cluster_name": "ceph",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.crush_device_class": "",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.encrypted": "0",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.osd_id": "0",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.type": "block",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.vdo": "0"
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            },
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "type": "block",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "vg_name": "ceph_vg0"
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:        }
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:    ],
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:    "1": [
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:        {
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "devices": [
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "/dev/loop4"
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            ],
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "lv_name": "ceph_lv1",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "lv_size": "21470642176",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "name": "ceph_lv1",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "tags": {
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.cluster_name": "ceph",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.crush_device_class": "",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.encrypted": "0",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.osd_id": "1",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.type": "block",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.vdo": "0"
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            },
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "type": "block",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "vg_name": "ceph_vg1"
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:        }
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:    ],
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:    "2": [
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:        {
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "devices": [
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "/dev/loop5"
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            ],
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "lv_name": "ceph_lv2",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "lv_size": "21470642176",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "name": "ceph_lv2",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "tags": {
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.cluster_name": "ceph",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.crush_device_class": "",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.encrypted": "0",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.osd_id": "2",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.type": "block",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:                "ceph.vdo": "0"
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            },
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "type": "block",
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:            "vg_name": "ceph_vg2"
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:        }
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]:    ]
Nov 22 05:01:13 np0005532048 distracted_mccarthy[419729]: }
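The JSON emitted by the `distracted_mccarthy` container above has the shape of a `ceph-volume lvm list --format json` report: a map of OSD id to a list of logical volumes, each carrying its backing devices and `ceph.*` tags. A minimal sketch (assuming only that structure, with an abbreviated inline sample) of turning such a report into an OSD-id → device mapping:

```python
import json

# Abbreviated sample shaped like the report logged above; the real output
# carries more tags (cluster_fsid, osdspec_affinity, etc.).
sample = """
{
  "0": [{"devices": ["/dev/loop3"], "lv_path": "/dev/ceph_vg0/ceph_lv0",
         "tags": {"ceph.osd_id": "0"}}],
  "1": [{"devices": ["/dev/loop4"], "lv_path": "/dev/ceph_vg1/ceph_lv1",
         "tags": {"ceph.osd_id": "1"}}]
}
"""

def osd_device_map(report: str) -> dict:
    """Map each OSD id to its LV path and backing physical devices."""
    out = {}
    for osd_id, lvs in json.loads(report).items():
        for lv in lvs:
            out[osd_id] = {"lv_path": lv["lv_path"], "devices": lv["devices"]}
    return out

print(osd_device_map(sample)["0"]["devices"])  # → ['/dev/loop3']
```

This is only an illustration of reading the logged report; cephadm runs the equivalent query inside a short-lived container, which is why each invocation appears here as a create/start/died/remove cycle.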
Nov 22 05:01:13 np0005532048 systemd[1]: libpod-3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878.scope: Deactivated successfully.
Nov 22 05:01:13 np0005532048 podman[419712]: 2025-11-22 10:01:13.686938375 +0000 UTC m=+0.939445437 container died 3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:01:13 np0005532048 systemd[1]: var-lib-containers-storage-overlay-12b146e4a64a6de22b2525de9cd567811b8a6d5a12d9679194448bca7faa8d06-merged.mount: Deactivated successfully.
Nov 22 05:01:13 np0005532048 podman[419712]: 2025-11-22 10:01:13.747478135 +0000 UTC m=+0.999985197 container remove 3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:01:13 np0005532048 systemd[1]: libpod-conmon-3990d7d9b8f967b278ca324121e30912074cdd397cbcb2c7168bb07121909878.scope: Deactivated successfully.
Nov 22 05:01:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:01:14 np0005532048 podman[419889]: 2025-11-22 10:01:14.40533938 +0000 UTC m=+0.043030321 container create 669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tesla, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 05:01:14 np0005532048 systemd[1]: Started libpod-conmon-669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c.scope.
Nov 22 05:01:14 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:01:14 np0005532048 podman[419889]: 2025-11-22 10:01:14.478430529 +0000 UTC m=+0.116121470 container init 669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tesla, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 05:01:14 np0005532048 podman[419889]: 2025-11-22 10:01:14.387670765 +0000 UTC m=+0.025361726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:01:14 np0005532048 podman[419889]: 2025-11-22 10:01:14.486676252 +0000 UTC m=+0.124367203 container start 669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tesla, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 05:01:14 np0005532048 podman[419889]: 2025-11-22 10:01:14.490493366 +0000 UTC m=+0.128184427 container attach 669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tesla, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:01:14 np0005532048 gracious_tesla[419905]: 167 167
Nov 22 05:01:14 np0005532048 systemd[1]: libpod-669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c.scope: Deactivated successfully.
Nov 22 05:01:14 np0005532048 podman[419889]: 2025-11-22 10:01:14.492368892 +0000 UTC m=+0.130059843 container died 669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tesla, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:01:14 np0005532048 systemd[1]: var-lib-containers-storage-overlay-225d00ea74ea5ff9a8b24dd95211489ca179dc2254e3358c7916f09ccfeb1680-merged.mount: Deactivated successfully.
Nov 22 05:01:14 np0005532048 podman[419889]: 2025-11-22 10:01:14.529584739 +0000 UTC m=+0.167275680 container remove 669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:01:14 np0005532048 systemd[1]: libpod-conmon-669c7986b921acf336da590ae554fdccb78c807ddf724ab9ddf2b67b8b97479c.scope: Deactivated successfully.
Nov 22 05:01:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:01:14 np0005532048 podman[419928]: 2025-11-22 10:01:14.67835122 +0000 UTC m=+0.039890192 container create 103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:01:14 np0005532048 systemd[1]: Started libpod-conmon-103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80.scope.
Nov 22 05:01:14 np0005532048 podman[419928]: 2025-11-22 10:01:14.660700787 +0000 UTC m=+0.022239779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:01:14 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:01:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08f14ef735df9378d277389426bf24d9a8dc9bd612f95f16a0f3b989b38f2a55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:01:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08f14ef735df9378d277389426bf24d9a8dc9bd612f95f16a0f3b989b38f2a55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:01:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08f14ef735df9378d277389426bf24d9a8dc9bd612f95f16a0f3b989b38f2a55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:01:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08f14ef735df9378d277389426bf24d9a8dc9bd612f95f16a0f3b989b38f2a55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:01:14 np0005532048 podman[419928]: 2025-11-22 10:01:14.780842303 +0000 UTC m=+0.142381285 container init 103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:01:14 np0005532048 podman[419928]: 2025-11-22 10:01:14.787517798 +0000 UTC m=+0.149056770 container start 103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:01:14 np0005532048 podman[419928]: 2025-11-22 10:01:14.790112432 +0000 UTC m=+0.151651404 container attach 103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 651 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.106923) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805675106957, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 2156, "num_deletes": 251, "total_data_size": 2822160, "memory_usage": 2867696, "flush_reason": "Manual Compaction"}
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805675122278, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 2756079, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61911, "largest_seqno": 64066, "table_properties": {"data_size": 2747251, "index_size": 5065, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22571, "raw_average_key_size": 21, "raw_value_size": 2727553, "raw_average_value_size": 2558, "num_data_blocks": 222, "num_entries": 1066, "num_filter_entries": 1066, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805504, "oldest_key_time": 1763805504, "file_creation_time": 1763805675, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 15409 microseconds, and 5843 cpu microseconds.
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.122327) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 2756079 bytes OK
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.122349) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.124700) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.124713) EVENT_LOG_v1 {"time_micros": 1763805675124709, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.124729) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 2812841, prev total WAL file size 2812841, number of live WAL files 2.
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.125462) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(2691KB)], [146(8230KB)]
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805675125528, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 11184256, "oldest_snapshot_seqno": -1}
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8895 keys, 9739112 bytes, temperature: kUnknown
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805675172603, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 9739112, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9683965, "index_size": 31802, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22277, "raw_key_size": 233885, "raw_average_key_size": 26, "raw_value_size": 9529379, "raw_average_value_size": 1071, "num_data_blocks": 1216, "num_entries": 8895, "num_filter_entries": 8895, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805675, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 651 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.172889) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 9739112 bytes
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.174248) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 237.1 rd, 206.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 8.0 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 9409, records dropped: 514 output_compression: NoCompression
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.174268) EVENT_LOG_v1 {"time_micros": 1763805675174258, "job": 90, "event": "compaction_finished", "compaction_time_micros": 47169, "compaction_time_cpu_micros": 25902, "output_level": 6, "num_output_files": 1, "total_output_size": 9739112, "num_input_records": 9409, "num_output_records": 8895, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805675174901, "job": 90, "event": "table_file_deletion", "file_number": 148}
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805675176268, "job": 90, "event": "table_file_deletion", "file_number": 146}
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.125371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.176433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.176441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.176442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.176444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:15.176446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]: {
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:        "osd_id": 1,
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:        "type": "bluestore"
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:    },
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:        "osd_id": 0,
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:        "type": "bluestore"
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:    },
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:        "osd_id": 2,
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:        "type": "bluestore"
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]:    }
Nov 22 05:01:15 np0005532048 xenodochial_mclean[419944]: }
Nov 22 05:01:15 np0005532048 systemd[1]: libpod-103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80.scope: Deactivated successfully.
Nov 22 05:01:15 np0005532048 podman[419928]: 2025-11-22 10:01:15.763556505 +0000 UTC m=+1.125095477 container died 103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:01:15 np0005532048 systemd[1]: var-lib-containers-storage-overlay-08f14ef735df9378d277389426bf24d9a8dc9bd612f95f16a0f3b989b38f2a55-merged.mount: Deactivated successfully.
Nov 22 05:01:15 np0005532048 podman[419928]: 2025-11-22 10:01:15.821501782 +0000 UTC m=+1.183040764 container remove 103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:01:15 np0005532048 systemd[1]: libpod-conmon-103a225d4b5609ddbef61f91cbeaeb0df9ae858a891bb59e36a771d716799f80.scope: Deactivated successfully.
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:01:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:01:15 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d0c3f236-5029-4b54-a1ee-b570f66c2ba4 does not exist
Nov 22 05:01:15 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 031491c6-905f-4353-aaa8-57cf6eec0774 does not exist
Nov 22 05:01:16 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:01:16 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:01:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:01:17 np0005532048 nova_compute[253661]: 2025-11-22 10:01:17.305 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:01:18 np0005532048 nova_compute[253661]: 2025-11-22 10:01:18.647 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:01:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:22 np0005532048 nova_compute[253661]: 2025-11-22 10:01:22.308 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:01:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:01:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:01:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:01:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:01:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:01:23 np0005532048 nova_compute[253661]: 2025-11-22 10:01:23.650 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 656 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:01:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:25 np0005532048 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 656 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:26 np0005532048 podman[420043]: 2025-11-22 10:01:26.392229139 +0000 UTC m=+0.073526951 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 22 05:01:26 np0005532048 podman[420042]: 2025-11-22 10:01:26.41783259 +0000 UTC m=+0.094895608 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:01:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:27 np0005532048 nova_compute[253661]: 2025-11-22 10:01:27.309 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:01:28.006 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:01:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:01:28.007 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:01:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:01:28.007 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:01:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:28 np0005532048 nova_compute[253661]: 2025-11-22 10:01:28.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 666 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:01:30 np0005532048 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 666 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:31 np0005532048 podman[420081]: 2025-11-22 10:01:31.417786413 +0000 UTC m=+0.106187505 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 22 05:01:32 np0005532048 nova_compute[253661]: 2025-11-22 10:01:32.311 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:33 np0005532048 nova_compute[253661]: 2025-11-22 10:01:33.653 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:01:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:35 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 0 slow ops, oldest one blocked for 671 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:35 np0005532048 ceph-mon[75021]: Health check update: 0 slow ops, oldest one blocked for 671 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:37 np0005532048 nova_compute[253661]: 2025-11-22 10:01:37.311 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:37.874+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:38 np0005532048 nova_compute[253661]: 2025-11-22 10:01:38.656 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:38.888+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:01:39 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:39.868+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:40 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:40.914+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:41 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:41.868+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:42 np0005532048 nova_compute[253661]: 2025-11-22 10:01:42.313 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:42 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:42.892+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:43 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:43 np0005532048 nova_compute[253661]: 2025-11-22 10:01:43.657 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:43.936+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 676 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.266038) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805704266084, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 517, "num_deletes": 255, "total_data_size": 467334, "memory_usage": 477616, "flush_reason": "Manual Compaction"}
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805704271408, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 461638, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64067, "largest_seqno": 64583, "table_properties": {"data_size": 458789, "index_size": 820, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6710, "raw_average_key_size": 18, "raw_value_size": 453056, "raw_average_value_size": 1248, "num_data_blocks": 36, "num_entries": 363, "num_filter_entries": 363, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805675, "oldest_key_time": 1763805675, "file_creation_time": 1763805704, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 5405 microseconds, and 2355 cpu microseconds.
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.271443) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 461638 bytes OK
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.271465) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.273204) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.273219) EVENT_LOG_v1 {"time_micros": 1763805704273214, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.273236) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 464348, prev total WAL file size 464348, number of live WAL files 2.
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.273655) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373630' seq:72057594037927935, type:22 .. '6C6F676D0033303131' seq:0, type:0; will stop at (end)
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(450KB)], [149(9510KB)]
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805704273706, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 10200750, "oldest_snapshot_seqno": -1}
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8738 keys, 10069186 bytes, temperature: kUnknown
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805704321557, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 10069186, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10014249, "index_size": 31976, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21893, "raw_key_size": 231560, "raw_average_key_size": 26, "raw_value_size": 9861606, "raw_average_value_size": 1128, "num_data_blocks": 1222, "num_entries": 8738, "num_filter_entries": 8738, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805704, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.321886) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 10069186 bytes
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.323123) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 212.7 rd, 209.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.3 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(43.9) write-amplify(21.8) OK, records in: 9258, records dropped: 520 output_compression: NoCompression
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.323147) EVENT_LOG_v1 {"time_micros": 1763805704323135, "job": 92, "event": "compaction_finished", "compaction_time_micros": 47969, "compaction_time_cpu_micros": 24521, "output_level": 6, "num_output_files": 1, "total_output_size": 10069186, "num_input_records": 9258, "num_output_records": 8738, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805704323577, "job": 92, "event": "table_file_deletion", "file_number": 151}
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805704326046, "job": 92, "event": "table_file_deletion", "file_number": 149}
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.273559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.326265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.326273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.326274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.326276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:01:44.326284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:44 np0005532048 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 676 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:44.956+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:45 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:45.932+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:46 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:46 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:46.920+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:47 np0005532048 nova_compute[253661]: 2025-11-22 10:01:47.315 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:47 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:47.898+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:48 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:48 np0005532048 nova_compute[253661]: 2025-11-22 10:01:48.659 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:48.869+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 681 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:01:49 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:49 np0005532048 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 681 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:49.908+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:50 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:50.919+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:51 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:51.959+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:52 np0005532048 nova_compute[253661]: 2025-11-22 10:01:52.316 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:01:52
Nov 22 05:01:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:01:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:01:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.log', 'backups', 'default.rgw.control', 'images', '.rgw.root']
Nov 22 05:01:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:01:52 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:01:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:01:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:01:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:01:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:01:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:01:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:52.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:53 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:53 np0005532048 nova_compute[253661]: 2025-11-22 10:01:53.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:53.987+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 686 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:01:54 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:54 np0005532048 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 686 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:54.997+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:55 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:01:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:01:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:01:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:01:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:01:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:55.950+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:56 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:56.933+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:57 np0005532048 nova_compute[253661]: 2025-11-22 10:01:57.351 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:01:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:01:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:57.898+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:01:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:58.932+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:01:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 691 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:01:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:01:59 np0005532048 nova_compute[253661]: 2025-11-22 10:01:59.290 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:01:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:01:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:01:59 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:01:59 np0005532048 podman[420110]: 2025-11-22 10:01:59.364427624 +0000 UTC m=+0.053917228 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 05:01:59 np0005532048 podman[420111]: 2025-11-22 10:01:59.393459878 +0000 UTC m=+0.072530445 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 05:01:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:01:59.948+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:01:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:00 np0005532048 nova_compute[253661]: 2025-11-22 10:02:00.223 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:02:00 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:00 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:00 np0005532048 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 691 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:00.922+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:01 np0005532048 nova_compute[253661]: 2025-11-22 10:02:01.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:02:01 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:01.942+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:02 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:02 np0005532048 nova_compute[253661]: 2025-11-22 10:02:02.353 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:02 np0005532048 podman[420150]: 2025-11-22 10:02:02.423647833 +0000 UTC m=+0.104993766 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:02:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:02.967+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:02:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:02:03 np0005532048 nova_compute[253661]: 2025-11-22 10:02:03.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:02:03 np0005532048 nova_compute[253661]: 2025-11-22 10:02:03.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:02:03 np0005532048 nova_compute[253661]: 2025-11-22 10:02:03.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:02:03 np0005532048 nova_compute[253661]: 2025-11-22 10:02:03.246 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:02:03 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:03.925+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:04 np0005532048 nova_compute[253661]: 2025-11-22 10:02:04.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:02:04 np0005532048 nova_compute[253661]: 2025-11-22 10:02:04.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:02:04 np0005532048 nova_compute[253661]: 2025-11-22 10:02:04.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:02:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 696 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:02:04 np0005532048 nova_compute[253661]: 2025-11-22 10:02:04.323 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:04 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:04 np0005532048 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 696 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:04.878+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:05 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:05.921+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:06 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:06.873+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:07 np0005532048 nova_compute[253661]: 2025-11-22 10:02:07.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:02:07 np0005532048 nova_compute[253661]: 2025-11-22 10:02:07.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:02:07 np0005532048 nova_compute[253661]: 2025-11-22 10:02:07.260 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:02:07 np0005532048 nova_compute[253661]: 2025-11-22 10:02:07.261 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:02:07 np0005532048 nova_compute[253661]: 2025-11-22 10:02:07.261 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:02:07 np0005532048 nova_compute[253661]: 2025-11-22 10:02:07.262 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:02:07 np0005532048 nova_compute[253661]: 2025-11-22 10:02:07.355 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:07 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:02:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1108095252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:02:07 np0005532048 nova_compute[253661]: 2025-11-22 10:02:07.761 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:02:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:07.909+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:07 np0005532048 nova_compute[253661]: 2025-11-22 10:02:07.970 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:02:07 np0005532048 nova_compute[253661]: 2025-11-22 10:02:07.971 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3549MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:02:07 np0005532048 nova_compute[253661]: 2025-11-22 10:02:07.972 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:02:07 np0005532048 nova_compute[253661]: 2025-11-22 10:02:07.972 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:02:08 np0005532048 nova_compute[253661]: 2025-11-22 10:02:08.027 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:02:08 np0005532048 nova_compute[253661]: 2025-11-22 10:02:08.027 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:02:08 np0005532048 nova_compute[253661]: 2025-11-22 10:02:08.090 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 22 05:02:08 np0005532048 nova_compute[253661]: 2025-11-22 10:02:08.112 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 22 05:02:08 np0005532048 nova_compute[253661]: 2025-11-22 10:02:08.113 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 05:02:08 np0005532048 nova_compute[253661]: 2025-11-22 10:02:08.130 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 22 05:02:08 np0005532048 nova_compute[253661]: 2025-11-22 10:02:08.151 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 22 05:02:08 np0005532048 nova_compute[253661]: 2025-11-22 10:02:08.209 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:02:08 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:02:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2391868096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:02:08 np0005532048 nova_compute[253661]: 2025-11-22 10:02:08.722 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:02:08 np0005532048 nova_compute[253661]: 2025-11-22 10:02:08.732 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:02:08 np0005532048 nova_compute[253661]: 2025-11-22 10:02:08.749 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:02:08 np0005532048 nova_compute[253661]: 2025-11-22 10:02:08.752 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:02:08 np0005532048 nova_compute[253661]: 2025-11-22 10:02:08.752 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:02:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:08.892+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 701 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:02:09 np0005532048 nova_compute[253661]: 2025-11-22 10:02:09.325 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:09 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:09 np0005532048 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 701 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:09 np0005532048 nova_compute[253661]: 2025-11-22 10:02:09.754 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:02:09 np0005532048 nova_compute[253661]: 2025-11-22 10:02:09.755 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:02:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:09.930+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:10 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:10.923+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:11 np0005532048 nova_compute[253661]: 2025-11-22 10:02:11.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:02:11 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:11.913+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:12 np0005532048 nova_compute[253661]: 2025-11-22 10:02:12.358 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:12 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:02:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2989815337' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:02:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:02:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2989815337' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:02:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:12.865+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:13 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:13.892+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 706 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:02:14 np0005532048 nova_compute[253661]: 2025-11-22 10:02:14.328 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:14 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:14 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:14 np0005532048 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 706 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:14.925+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:15 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:02:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:15.912+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:16 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:02:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:02:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:02:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:02:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:02:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:16.957+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:02:16 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 4abd0143-4b88-47ac-9225-f334262fe12a does not exist
Nov 22 05:02:16 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 451fe9f6-50b3-4be6-b89b-e9eb88b9243c does not exist
Nov 22 05:02:16 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 3f5872e3-a200-417a-af93-33a1623b0c8b does not exist
Nov 22 05:02:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:02:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:02:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:02:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:02:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:02:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:02:17 np0005532048 nova_compute[253661]: 2025-11-22 10:02:17.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:02:17 np0005532048 nova_compute[253661]: 2025-11-22 10:02:17.360 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:17 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:02:17 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:17 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:02:17 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:02:17 np0005532048 podman[420491]: 2025-11-22 10:02:17.716831705 +0000 UTC m=+0.028264757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:02:17 np0005532048 podman[420491]: 2025-11-22 10:02:17.91810823 +0000 UTC m=+0.229541262 container create 5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:02:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:17.998+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:18 np0005532048 systemd[1]: Started libpod-conmon-5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9.scope.
Nov 22 05:02:18 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:02:18 np0005532048 podman[420491]: 2025-11-22 10:02:18.205224298 +0000 UTC m=+0.516657350 container init 5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mendeleev, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 05:02:18 np0005532048 podman[420491]: 2025-11-22 10:02:18.214882906 +0000 UTC m=+0.526315938 container start 5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mendeleev, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 05:02:18 np0005532048 sweet_mendeleev[420507]: 167 167
Nov 22 05:02:18 np0005532048 systemd[1]: libpod-5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9.scope: Deactivated successfully.
Nov 22 05:02:18 np0005532048 podman[420491]: 2025-11-22 10:02:18.281831504 +0000 UTC m=+0.593264536 container attach 5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mendeleev, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 05:02:18 np0005532048 podman[420491]: 2025-11-22 10:02:18.285513124 +0000 UTC m=+0.596946176 container died 5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mendeleev, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:02:18 np0005532048 systemd[1]: var-lib-containers-storage-overlay-96c6ec78a89bc3720527e7af28bbc3ad13b7ebd7a6c4277cdfb4e0304cc6a7ba-merged.mount: Deactivated successfully.
Nov 22 05:02:18 np0005532048 podman[420491]: 2025-11-22 10:02:18.543587488 +0000 UTC m=+0.855020550 container remove 5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mendeleev, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 05:02:18 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:18 np0005532048 systemd[1]: libpod-conmon-5bd2d34da20302dd621d9899e35938857b3334fa307e1035cb3a4f0f444c20a9.scope: Deactivated successfully.
Nov 22 05:02:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:02:18 np0005532048 podman[420531]: 2025-11-22 10:02:18.746526264 +0000 UTC m=+0.046022674 container create dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 05:02:18 np0005532048 systemd[1]: Started libpod-conmon-dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5.scope.
Nov 22 05:02:18 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:02:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcd009547243398cbdce3c97b0a9c58568941b8a09b8b5a499ad729853b981b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:02:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcd009547243398cbdce3c97b0a9c58568941b8a09b8b5a499ad729853b981b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:02:18 np0005532048 podman[420531]: 2025-11-22 10:02:18.7280943 +0000 UTC m=+0.027590720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:02:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcd009547243398cbdce3c97b0a9c58568941b8a09b8b5a499ad729853b981b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:02:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcd009547243398cbdce3c97b0a9c58568941b8a09b8b5a499ad729853b981b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:02:18 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcd009547243398cbdce3c97b0a9c58568941b8a09b8b5a499ad729853b981b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:02:18 np0005532048 podman[420531]: 2025-11-22 10:02:18.845101711 +0000 UTC m=+0.144598141 container init dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:02:18 np0005532048 podman[420531]: 2025-11-22 10:02:18.853542828 +0000 UTC m=+0.153039228 container start dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:02:18 np0005532048 podman[420531]: 2025-11-22 10:02:18.858366467 +0000 UTC m=+0.157862927 container attach dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:02:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:18.989+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 711 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:02:19 np0005532048 nova_compute[253661]: 2025-11-22 10:02:19.327 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:19 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:19 np0005532048 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 711 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:19.971+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:20 np0005532048 lucid_volhard[420548]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:02:20 np0005532048 lucid_volhard[420548]: --> relative data size: 1.0
Nov 22 05:02:20 np0005532048 lucid_volhard[420548]: --> All data devices are unavailable
Nov 22 05:02:20 np0005532048 systemd[1]: libpod-dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5.scope: Deactivated successfully.
Nov 22 05:02:20 np0005532048 systemd[1]: libpod-dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5.scope: Consumed 1.200s CPU time.
Nov 22 05:02:20 np0005532048 conmon[420548]: conmon dbc66aea6068412ecbce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5.scope/container/memory.events
Nov 22 05:02:20 np0005532048 podman[420531]: 2025-11-22 10:02:20.099131431 +0000 UTC m=+1.398627871 container died dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:02:20 np0005532048 systemd[1]: var-lib-containers-storage-overlay-bdcd009547243398cbdce3c97b0a9c58568941b8a09b8b5a499ad729853b981b-merged.mount: Deactivated successfully.
Nov 22 05:02:20 np0005532048 podman[420531]: 2025-11-22 10:02:20.173110692 +0000 UTC m=+1.472607102 container remove dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:02:20 np0005532048 systemd[1]: libpod-conmon-dbc66aea6068412ecbce3535d8826c330358777c7647e7cd5e34e5ed930ff3b5.scope: Deactivated successfully.
Nov 22 05:02:20 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:02:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:20.952+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:20 np0005532048 podman[420729]: 2025-11-22 10:02:20.970728387 +0000 UTC m=+0.052804731 container create 1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:02:21 np0005532048 systemd[1]: Started libpod-conmon-1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654.scope.
Nov 22 05:02:21 np0005532048 podman[420729]: 2025-11-22 10:02:20.950183321 +0000 UTC m=+0.032259715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:02:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:02:21 np0005532048 podman[420729]: 2025-11-22 10:02:21.067813156 +0000 UTC m=+0.149889520 container init 1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 05:02:21 np0005532048 podman[420729]: 2025-11-22 10:02:21.075833974 +0000 UTC m=+0.157910318 container start 1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 05:02:21 np0005532048 podman[420729]: 2025-11-22 10:02:21.079179977 +0000 UTC m=+0.161256341 container attach 1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 05:02:21 np0005532048 eloquent_fermat[420745]: 167 167
Nov 22 05:02:21 np0005532048 systemd[1]: libpod-1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654.scope: Deactivated successfully.
Nov 22 05:02:21 np0005532048 podman[420729]: 2025-11-22 10:02:21.082643822 +0000 UTC m=+0.164720166 container died 1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:02:21 np0005532048 systemd[1]: var-lib-containers-storage-overlay-466cffd2d80954eb467e1c398a7ff69f3c883dcbf62fdbaaceaf614a18f044e2-merged.mount: Deactivated successfully.
Nov 22 05:02:21 np0005532048 podman[420729]: 2025-11-22 10:02:21.168815324 +0000 UTC m=+0.250891668 container remove 1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 05:02:21 np0005532048 systemd[1]: libpod-conmon-1a7c50c0aae89e8eb717ada864d8e2e7b50e713bd8ecedb48bf11c5e4cfa9654.scope: Deactivated successfully.
Nov 22 05:02:21 np0005532048 podman[420769]: 2025-11-22 10:02:21.380282349 +0000 UTC m=+0.053425427 container create 209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lumiere, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:02:21 np0005532048 systemd[1]: Started libpod-conmon-209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d.scope.
Nov 22 05:02:21 np0005532048 podman[420769]: 2025-11-22 10:02:21.356603606 +0000 UTC m=+0.029746704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:02:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:02:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66bc9d2241ec4e2a44225a2cbc8887a44a82c724c6e0cd35412efba61998671/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:02:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66bc9d2241ec4e2a44225a2cbc8887a44a82c724c6e0cd35412efba61998671/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:02:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66bc9d2241ec4e2a44225a2cbc8887a44a82c724c6e0cd35412efba61998671/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:02:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66bc9d2241ec4e2a44225a2cbc8887a44a82c724c6e0cd35412efba61998671/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:02:21 np0005532048 podman[420769]: 2025-11-22 10:02:21.480597968 +0000 UTC m=+0.153741066 container init 209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:02:21 np0005532048 podman[420769]: 2025-11-22 10:02:21.488164315 +0000 UTC m=+0.161307383 container start 209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lumiere, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:02:21 np0005532048 podman[420769]: 2025-11-22 10:02:21.491801364 +0000 UTC m=+0.164944772 container attach 209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lumiere, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:02:21 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:21.920+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]: {
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:    "0": [
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:        {
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "devices": [
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "/dev/loop3"
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            ],
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "lv_name": "ceph_lv0",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "lv_size": "21470642176",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "name": "ceph_lv0",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "tags": {
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.cluster_name": "ceph",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.crush_device_class": "",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.encrypted": "0",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.osd_id": "0",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.type": "block",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.vdo": "0"
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            },
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "type": "block",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "vg_name": "ceph_vg0"
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:        }
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:    ],
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:    "1": [
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:        {
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "devices": [
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "/dev/loop4"
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            ],
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "lv_name": "ceph_lv1",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "lv_size": "21470642176",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "name": "ceph_lv1",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "tags": {
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.cluster_name": "ceph",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.crush_device_class": "",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.encrypted": "0",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.osd_id": "1",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.type": "block",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.vdo": "0"
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            },
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "type": "block",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "vg_name": "ceph_vg1"
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:        }
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:    ],
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:    "2": [
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:        {
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "devices": [
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "/dev/loop5"
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            ],
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "lv_name": "ceph_lv2",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "lv_size": "21470642176",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "name": "ceph_lv2",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "tags": {
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.cluster_name": "ceph",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.crush_device_class": "",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.encrypted": "0",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.osd_id": "2",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.type": "block",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:                "ceph.vdo": "0"
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            },
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "type": "block",
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:            "vg_name": "ceph_vg2"
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:        }
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]:    ]
Nov 22 05:02:22 np0005532048 eloquent_lumiere[420786]: }
Nov 22 05:02:22 np0005532048 systemd[1]: libpod-209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d.scope: Deactivated successfully.
Nov 22 05:02:22 np0005532048 podman[420769]: 2025-11-22 10:02:22.353710442 +0000 UTC m=+1.026853520 container died 209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lumiere, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:02:22 np0005532048 nova_compute[253661]: 2025-11-22 10:02:22.363 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:22 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a66bc9d2241ec4e2a44225a2cbc8887a44a82c724c6e0cd35412efba61998671-merged.mount: Deactivated successfully.
Nov 22 05:02:22 np0005532048 podman[420769]: 2025-11-22 10:02:22.415953234 +0000 UTC m=+1.089096312 container remove 209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:02:22 np0005532048 systemd[1]: libpod-conmon-209c960eef6af79817be6a0c8d9202426919895c80983ed97c662551a320a04d.scope: Deactivated successfully.
Nov 22 05:02:22 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:02:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:02:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:02:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:02:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:02:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:02:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:02:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:22.962+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:23 np0005532048 podman[420949]: 2025-11-22 10:02:23.169667228 +0000 UTC m=+0.046787053 container create a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_borg, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:02:23 np0005532048 systemd[1]: Started libpod-conmon-a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2.scope.
Nov 22 05:02:23 np0005532048 podman[420949]: 2025-11-22 10:02:23.147960223 +0000 UTC m=+0.025080078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:02:23 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:02:23 np0005532048 podman[420949]: 2025-11-22 10:02:23.264482032 +0000 UTC m=+0.141601867 container init a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_borg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 22 05:02:23 np0005532048 podman[420949]: 2025-11-22 10:02:23.277635846 +0000 UTC m=+0.154755671 container start a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_borg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 22 05:02:23 np0005532048 podman[420949]: 2025-11-22 10:02:23.281718937 +0000 UTC m=+0.158838742 container attach a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:02:23 np0005532048 nervous_borg[420965]: 167 167
Nov 22 05:02:23 np0005532048 systemd[1]: libpod-a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2.scope: Deactivated successfully.
Nov 22 05:02:23 np0005532048 podman[420949]: 2025-11-22 10:02:23.287724564 +0000 UTC m=+0.164844419 container died a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:02:23 np0005532048 systemd[1]: var-lib-containers-storage-overlay-86f1e1db2f7740b89efefc7e59c9d2946c7e1ee365620ece45eef74bdd019df1-merged.mount: Deactivated successfully.
Nov 22 05:02:23 np0005532048 podman[420949]: 2025-11-22 10:02:23.341482197 +0000 UTC m=+0.218602022 container remove a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:02:23 np0005532048 systemd[1]: libpod-conmon-a51200513d25c3f7f884bfe2cea7333a85402afa5273d5a31644db045d05cbc2.scope: Deactivated successfully.
Nov 22 05:02:23 np0005532048 podman[420989]: 2025-11-22 10:02:23.564104437 +0000 UTC m=+0.072963757 container create 642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:02:23 np0005532048 systemd[1]: Started libpod-conmon-642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310.scope.
Nov 22 05:02:23 np0005532048 podman[420989]: 2025-11-22 10:02:23.536732124 +0000 UTC m=+0.045591524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:02:23 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:23 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:02:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d826ded3bc0a5ff07c462fb27a2ac98149a120bf3883c8d68f8a881dd41cf064/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:02:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d826ded3bc0a5ff07c462fb27a2ac98149a120bf3883c8d68f8a881dd41cf064/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:02:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d826ded3bc0a5ff07c462fb27a2ac98149a120bf3883c8d68f8a881dd41cf064/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:02:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d826ded3bc0a5ff07c462fb27a2ac98149a120bf3883c8d68f8a881dd41cf064/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:02:23 np0005532048 podman[420989]: 2025-11-22 10:02:23.672182769 +0000 UTC m=+0.181042089 container init 642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:02:23 np0005532048 podman[420989]: 2025-11-22 10:02:23.686660265 +0000 UTC m=+0.195519625 container start 642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:02:23 np0005532048 podman[420989]: 2025-11-22 10:02:23.691156225 +0000 UTC m=+0.200015585 container attach 642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 05:02:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:24.007+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 716 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:02:24 np0005532048 nova_compute[253661]: 2025-11-22 10:02:24.331 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:02:24 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:24 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 716 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]: {
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:        "osd_id": 1,
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:        "type": "bluestore"
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:    },
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:        "osd_id": 0,
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:        "type": "bluestore"
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:    },
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:        "osd_id": 2,
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:        "type": "bluestore"
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]:    }
Nov 22 05:02:24 np0005532048 laughing_grothendieck[421005]: }
Nov 22 05:02:24 np0005532048 systemd[1]: libpod-642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310.scope: Deactivated successfully.
Nov 22 05:02:24 np0005532048 systemd[1]: libpod-642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310.scope: Consumed 1.131s CPU time.
Nov 22 05:02:24 np0005532048 conmon[421005]: conmon 642ba035e5b6ddbe31b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310.scope/container/memory.events
Nov 22 05:02:24 np0005532048 podman[420989]: 2025-11-22 10:02:24.813543655 +0000 UTC m=+1.322402975 container died 642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:02:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d826ded3bc0a5ff07c462fb27a2ac98149a120bf3883c8d68f8a881dd41cf064-merged.mount: Deactivated successfully.
Nov 22 05:02:24 np0005532048 podman[420989]: 2025-11-22 10:02:24.880205036 +0000 UTC m=+1.389064356 container remove 642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 05:02:24 np0005532048 systemd[1]: libpod-conmon-642ba035e5b6ddbe31b9d990986617d3c6239b14b755fd4f9f697ee6fe094310.scope: Deactivated successfully.
Nov 22 05:02:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:02:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:02:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:02:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:02:24 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 8461952b-e900-4efd-ab4f-9ed7c438ad9f does not exist
Nov 22 05:02:24 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev aca971f1-23e5-4494-9002-a98de3bbf982 does not exist
Nov 22 05:02:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:24.971+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:25 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:02:25 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:02:25 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:25.968+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:02:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:26.942+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:26 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:27 np0005532048 nova_compute[253661]: 2025-11-22 10:02:27.368 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:27 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:27.988+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:02:28.008 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:02:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:02:28.009 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:02:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:02:28.009 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:02:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:02:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:28.939+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:28 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 726 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:02:29 np0005532048 nova_compute[253661]: 2025-11-22 10:02:29.333 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:29.955+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:29 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:29 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 726 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:30 np0005532048 podman[421103]: 2025-11-22 10:02:30.422051778 +0000 UTC m=+0.094246360 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:02:30 np0005532048 podman[421104]: 2025-11-22 10:02:30.457251165 +0000 UTC m=+0.129201802 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 22 05:02:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:30 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:31.002+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:31 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:32.008+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:32 np0005532048 nova_compute[253661]: 2025-11-22 10:02:32.371 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:32.986+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:33 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:33 np0005532048 podman[421141]: 2025-11-22 10:02:33.426177381 +0000 UTC m=+0.112462839 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller)
Nov 22 05:02:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:34.008+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:34 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:02:34 np0005532048 nova_compute[253661]: 2025-11-22 10:02:34.337 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:35 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 731 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:35 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:35.052+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:36 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 731 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:36 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:36.055+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:37.022+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:37 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:37 np0005532048 nova_compute[253661]: 2025-11-22 10:02:37.376 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:37.986+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:38 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:39.007+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:39 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:02:39 np0005532048 nova_compute[253661]: 2025-11-22 10:02:39.339 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:39.978+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:40 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:40.986+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:41 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:41.971+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:42 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:42 np0005532048 nova_compute[253661]: 2025-11-22 10:02:42.380 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:43.002+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:43 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:44.005+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 736 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:02:44 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:44 np0005532048 nova_compute[253661]: 2025-11-22 10:02:44.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:45.002+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:45 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:45 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 736 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:46.027+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:46 np0005532048 ceph-mon[75021]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Nov 22 05:02:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:47.033+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:47 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:47 np0005532048 nova_compute[253661]: 2025-11-22 10:02:47.383 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:48.025+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:48 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:48.983+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 2 slow ops, oldest one blocked for 741 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:02:49 np0005532048 nova_compute[253661]: 2025-11-22 10:02:49.344 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:49 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:49 np0005532048 ceph-mon[75021]: Health check update: 2 slow ops, oldest one blocked for 741 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:50.024+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:50 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:51.036+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:51 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:52.030+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:02:52
Nov 22 05:02:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:02:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:02:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'vms', 'images', '.rgw.root']
Nov 22 05:02:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:02:52 np0005532048 nova_compute[253661]: 2025-11-22 10:02:52.389 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:52 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:02:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:02:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:02:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:02:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:02:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:02:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:53.061+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:53 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:54.045+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 746 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:02:54 np0005532048 nova_compute[253661]: 2025-11-22 10:02:54.345 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:54 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:54 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 746 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:55.024+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:55 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:02:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:02:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:02:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:02:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:02:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:56.072+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:56 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:57.032+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:57 np0005532048 nova_compute[253661]: 2025-11-22 10:02:57.392 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:57 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:57.996+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:58 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:02:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:59.024+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:02:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:02:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:02:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:02:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:02:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 751 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:02:59 np0005532048 nova_compute[253661]: 2025-11-22 10:02:59.347 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:02:59 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:59 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:02:59 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 751 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:02:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:02:59.983+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:02:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:00 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:00.976+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:01 np0005532048 nova_compute[253661]: 2025-11-22 10:03:01.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:03:01 np0005532048 nova_compute[253661]: 2025-11-22 10:03:01.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:03:01 np0005532048 podman[421167]: 2025-11-22 10:03:01.389869653 +0000 UTC m=+0.075023259 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 22 05:03:01 np0005532048 podman[421168]: 2025-11-22 10:03:01.390921239 +0000 UTC m=+0.073666605 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 22 05:03:01 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:02.003+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:02 np0005532048 nova_compute[253661]: 2025-11-22 10:03:02.397 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:03:02 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:03.040+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:03:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:03:03 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:04.057+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:04 np0005532048 nova_compute[253661]: 2025-11-22 10:03:04.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:03:04 np0005532048 nova_compute[253661]: 2025-11-22 10:03:04.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:03:04 np0005532048 nova_compute[253661]: 2025-11-22 10:03:04.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:03:04 np0005532048 nova_compute[253661]: 2025-11-22 10:03:04.246 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:03:04 np0005532048 nova_compute[253661]: 2025-11-22 10:03:04.246 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:03:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 756 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:03:04 np0005532048 nova_compute[253661]: 2025-11-22 10:03:04.349 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:03:04 np0005532048 podman[421205]: 2025-11-22 10:03:04.398753062 +0000 UTC m=+0.089236478 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 05:03:04 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:04 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 756 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:05.033+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:05 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:06.039+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:06 np0005532048 nova_compute[253661]: 2025-11-22 10:03:06.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:03:06 np0005532048 nova_compute[253661]: 2025-11-22 10:03:06.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:03:06 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:07.012+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:07 np0005532048 nova_compute[253661]: 2025-11-22 10:03:07.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:03:07 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:07.990+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:08 np0005532048 nova_compute[253661]: 2025-11-22 10:03:08.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:03:08 np0005532048 nova_compute[253661]: 2025-11-22 10:03:08.275 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:03:08 np0005532048 nova_compute[253661]: 2025-11-22 10:03:08.277 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:03:08 np0005532048 nova_compute[253661]: 2025-11-22 10:03:08.278 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:03:08 np0005532048 nova_compute[253661]: 2025-11-22 10:03:08.278 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:03:08 np0005532048 nova_compute[253661]: 2025-11-22 10:03:08.279 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:03:08 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:08 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:03:08 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3463145236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:03:08 np0005532048 nova_compute[253661]: 2025-11-22 10:03:08.760 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:03:08 np0005532048 nova_compute[253661]: 2025-11-22 10:03:08.902 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:03:08 np0005532048 nova_compute[253661]: 2025-11-22 10:03:08.903 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3530MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:03:08 np0005532048 nova_compute[253661]: 2025-11-22 10:03:08.904 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:03:08 np0005532048 nova_compute[253661]: 2025-11-22 10:03:08.904 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:03:08 np0005532048 nova_compute[253661]: 2025-11-22 10:03:08.973 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:03:08 np0005532048 nova_compute[253661]: 2025-11-22 10:03:08.973 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:03:08 np0005532048 nova_compute[253661]: 2025-11-22 10:03:08.998 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:03:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:09.009+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 761 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:03:09 np0005532048 nova_compute[253661]: 2025-11-22 10:03:09.351 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:03:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:03:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/569418396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:03:09 np0005532048 nova_compute[253661]: 2025-11-22 10:03:09.431 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:03:09 np0005532048 nova_compute[253661]: 2025-11-22 10:03:09.439 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:03:09 np0005532048 nova_compute[253661]: 2025-11-22 10:03:09.452 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:03:09 np0005532048 nova_compute[253661]: 2025-11-22 10:03:09.453 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:03:09 np0005532048 nova_compute[253661]: 2025-11-22 10:03:09.454 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:03:09 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:09 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 761 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:10.020+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:10 np0005532048 nova_compute[253661]: 2025-11-22 10:03:10.454 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:03:10 np0005532048 nova_compute[253661]: 2025-11-22 10:03:10.454 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:03:10 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:10.971+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:11 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:11.942+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:12 np0005532048 nova_compute[253661]: 2025-11-22 10:03:12.404 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:03:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:03:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3132826932' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:03:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:03:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3132826932' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:03:12 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:12.912+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:13 np0005532048 nova_compute[253661]: 2025-11-22 10:03:13.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:03:13 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:13.953+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 766 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:03:14 np0005532048 nova_compute[253661]: 2025-11-22 10:03:14.353 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:03:14 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:14 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 766 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:14.906+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:15 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:15.859+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:16 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:16.904+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:17 np0005532048 nova_compute[253661]: 2025-11-22 10:03:17.409 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:03:17 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:17.918+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:18 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 22 05:03:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:18.924+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 771 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:03:19 np0005532048 nova_compute[253661]: 2025-11-22 10:03:19.356 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:03:19 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:19 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 771 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:19.921+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:20 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 22 05:03:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:20.878+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:21 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:21.898+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:22 np0005532048 nova_compute[253661]: 2025-11-22 10:03:22.412 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:03:22 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 22 05:03:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:03:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:03:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:03:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:03:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:03:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:03:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:22.936+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:23 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:23.956+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 776 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:03:24 np0005532048 nova_compute[253661]: 2025-11-22 10:03:24.359 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:03:24 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:24 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 776 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:03:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:24.956+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:25 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:25.949+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:03:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:03:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:03:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:03:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:03:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:03:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:03:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:03:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:03:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:03:26 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 8a4532a2-fa9b-474a-8abc-c756609af5f2 does not exist
Nov 22 05:03:26 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a98ccef8-3a78-4584-9383-a84e5cd1ddbc does not exist
Nov 22 05:03:26 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev e320c87d-f995-4f0d-8a0e-ad703158ca70 does not exist
Nov 22 05:03:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:03:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:03:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:03:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:03:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:03:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:03:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:03:26 np0005532048 podman[421665]: 2025-11-22 10:03:26.950986936 +0000 UTC m=+0.047346747 container create 643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:03:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:26.977+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:26 np0005532048 systemd[1]: Started libpod-conmon-643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e.scope.
Nov 22 05:03:27 np0005532048 podman[421665]: 2025-11-22 10:03:26.927572209 +0000 UTC m=+0.023932060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:03:27 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:03:27 np0005532048 podman[421665]: 2025-11-22 10:03:27.042205701 +0000 UTC m=+0.138565532 container init 643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:03:27 np0005532048 podman[421665]: 2025-11-22 10:03:27.053799317 +0000 UTC m=+0.150159128 container start 643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:03:27 np0005532048 podman[421665]: 2025-11-22 10:03:27.057218771 +0000 UTC m=+0.153578662 container attach 643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 05:03:27 np0005532048 vigorous_mirzakhani[421681]: 167 167
Nov 22 05:03:27 np0005532048 systemd[1]: libpod-643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e.scope: Deactivated successfully.
Nov 22 05:03:27 np0005532048 conmon[421681]: conmon 643eda9b2ce3a919ef9d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e.scope/container/memory.events
Nov 22 05:03:27 np0005532048 podman[421665]: 2025-11-22 10:03:27.063737412 +0000 UTC m=+0.160097233 container died 643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:03:27 np0005532048 systemd[1]: var-lib-containers-storage-overlay-aea5a8248e87072cfc69595d38f5857c7878cfaac047711735ab80f889f761ef-merged.mount: Deactivated successfully.
Nov 22 05:03:27 np0005532048 podman[421665]: 2025-11-22 10:03:27.107597781 +0000 UTC m=+0.203957592 container remove 643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 05:03:27 np0005532048 systemd[1]: libpod-conmon-643eda9b2ce3a919ef9de4339b8732eb1cef10a67054694cebcfeb401edee78e.scope: Deactivated successfully.
Nov 22 05:03:27 np0005532048 podman[421706]: 2025-11-22 10:03:27.280407935 +0000 UTC m=+0.053740194 container create cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_grothendieck, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:03:27 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:27 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:03:27 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:03:27 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:03:27 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:03:27 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:03:27 np0005532048 systemd[1]: Started libpod-conmon-cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165.scope.
Nov 22 05:03:27 np0005532048 podman[421706]: 2025-11-22 10:03:27.25419489 +0000 UTC m=+0.027527139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:03:27 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:03:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839e1b065b380ae457ee6b024881d1a5100e14d31cbca562a521ca254fc5cfad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:03:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839e1b065b380ae457ee6b024881d1a5100e14d31cbca562a521ca254fc5cfad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:03:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839e1b065b380ae457ee6b024881d1a5100e14d31cbca562a521ca254fc5cfad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:03:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839e1b065b380ae457ee6b024881d1a5100e14d31cbca562a521ca254fc5cfad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:03:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839e1b065b380ae457ee6b024881d1a5100e14d31cbca562a521ca254fc5cfad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:03:27 np0005532048 podman[421706]: 2025-11-22 10:03:27.377099775 +0000 UTC m=+0.150432014 container init cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_grothendieck, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 05:03:27 np0005532048 podman[421706]: 2025-11-22 10:03:27.384542559 +0000 UTC m=+0.157874838 container start cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:03:27 np0005532048 podman[421706]: 2025-11-22 10:03:27.38863517 +0000 UTC m=+0.161967499 container attach cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:03:27 np0005532048 nova_compute[253661]: 2025-11-22 10:03:27.414 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:03:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:27.986+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:03:28.010 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:03:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:03:28.013 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:03:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:03:28.014 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:03:28 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:28 np0005532048 stupefied_grothendieck[421722]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:03:28 np0005532048 stupefied_grothendieck[421722]: --> relative data size: 1.0
Nov 22 05:03:28 np0005532048 stupefied_grothendieck[421722]: --> All data devices are unavailable
Nov 22 05:03:28 np0005532048 systemd[1]: libpod-cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165.scope: Deactivated successfully.
Nov 22 05:03:28 np0005532048 podman[421706]: 2025-11-22 10:03:28.417186169 +0000 UTC m=+1.190518388 container died cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_grothendieck, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:03:28 np0005532048 systemd[1]: var-lib-containers-storage-overlay-839e1b065b380ae457ee6b024881d1a5100e14d31cbca562a521ca254fc5cfad-merged.mount: Deactivated successfully.
Nov 22 05:03:28 np0005532048 podman[421706]: 2025-11-22 10:03:28.486765642 +0000 UTC m=+1.260097861 container remove cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_grothendieck, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:03:28 np0005532048 systemd[1]: libpod-conmon-cf052be93cd1887f5fc9fb4618fa5a2cc45f6463d4dadd59d3e235453a91a165.scope: Deactivated successfully.
Nov 22 05:03:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:03:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:28.944+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:29 np0005532048 podman[421905]: 2025-11-22 10:03:29.295615723 +0000 UTC m=+0.049792437 container create ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 05:03:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 781 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:03:29 np0005532048 systemd[1]: Started libpod-conmon-ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814.scope.
Nov 22 05:03:29 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:29 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 781 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:29 np0005532048 nova_compute[253661]: 2025-11-22 10:03:29.359 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:03:29 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:03:29 np0005532048 podman[421905]: 2025-11-22 10:03:29.275451906 +0000 UTC m=+0.029628630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:03:29 np0005532048 podman[421905]: 2025-11-22 10:03:29.386336486 +0000 UTC m=+0.140513220 container init ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:03:29 np0005532048 podman[421905]: 2025-11-22 10:03:29.394481206 +0000 UTC m=+0.148657920 container start ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 05:03:29 np0005532048 podman[421905]: 2025-11-22 10:03:29.401820178 +0000 UTC m=+0.155996942 container attach ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:03:29 np0005532048 interesting_gates[421922]: 167 167
Nov 22 05:03:29 np0005532048 systemd[1]: libpod-ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814.scope: Deactivated successfully.
Nov 22 05:03:29 np0005532048 podman[421905]: 2025-11-22 10:03:29.404200426 +0000 UTC m=+0.158377180 container died ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 05:03:29 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4d3c362f65c35a352fe8a893e17b5faa2fca1378f6e9bbfd48ac63cbdad8c242-merged.mount: Deactivated successfully.
Nov 22 05:03:29 np0005532048 podman[421905]: 2025-11-22 10:03:29.451130391 +0000 UTC m=+0.205307125 container remove ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 05:03:29 np0005532048 systemd[1]: libpod-conmon-ba1a6c54d87c5809a60097a055446522a4b9d10fb799f5b58b97e3958af51814.scope: Deactivated successfully.
Nov 22 05:03:29 np0005532048 podman[421946]: 2025-11-22 10:03:29.651882463 +0000 UTC m=+0.052295298 container create 456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_elion, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 22 05:03:29 np0005532048 systemd[1]: Started libpod-conmon-456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d.scope.
Nov 22 05:03:29 np0005532048 podman[421946]: 2025-11-22 10:03:29.631373768 +0000 UTC m=+0.031786633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:03:29 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:03:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0563457514a851cc18f84b44d737d65e527de17999975806d9a71135495c7c11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:03:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0563457514a851cc18f84b44d737d65e527de17999975806d9a71135495c7c11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:03:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0563457514a851cc18f84b44d737d65e527de17999975806d9a71135495c7c11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:03:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0563457514a851cc18f84b44d737d65e527de17999975806d9a71135495c7c11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:03:29 np0005532048 podman[421946]: 2025-11-22 10:03:29.773725852 +0000 UTC m=+0.174138707 container init 456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_elion, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 05:03:29 np0005532048 podman[421946]: 2025-11-22 10:03:29.786965229 +0000 UTC m=+0.187378094 container start 456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:03:29 np0005532048 podman[421946]: 2025-11-22 10:03:29.79234365 +0000 UTC m=+0.192756575 container attach 456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 05:03:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:29.971+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:30 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:30 np0005532048 naughty_elion[421964]: {
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:    "0": [
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:        {
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "devices": [
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "/dev/loop3"
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            ],
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "lv_name": "ceph_lv0",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "lv_size": "21470642176",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "name": "ceph_lv0",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "tags": {
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.cluster_name": "ceph",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.crush_device_class": "",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.encrypted": "0",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.osd_id": "0",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.type": "block",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.vdo": "0"
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            },
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "type": "block",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "vg_name": "ceph_vg0"
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:        }
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:    ],
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:    "1": [
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:        {
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "devices": [
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "/dev/loop4"
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            ],
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "lv_name": "ceph_lv1",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "lv_size": "21470642176",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "name": "ceph_lv1",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "tags": {
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.cluster_name": "ceph",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.crush_device_class": "",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.encrypted": "0",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.osd_id": "1",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.type": "block",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.vdo": "0"
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            },
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "type": "block",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "vg_name": "ceph_vg1"
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:        }
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:    ],
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:    "2": [
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:        {
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "devices": [
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "/dev/loop5"
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            ],
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "lv_name": "ceph_lv2",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "lv_size": "21470642176",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "name": "ceph_lv2",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "tags": {
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.cluster_name": "ceph",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.crush_device_class": "",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.encrypted": "0",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.osd_id": "2",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.type": "block",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:                "ceph.vdo": "0"
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            },
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "type": "block",
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:            "vg_name": "ceph_vg2"
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:        }
Nov 22 05:03:30 np0005532048 naughty_elion[421964]:    ]
Nov 22 05:03:30 np0005532048 naughty_elion[421964]: }
Nov 22 05:03:30 np0005532048 systemd[1]: libpod-456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d.scope: Deactivated successfully.
Nov 22 05:03:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 05:03:30 np0005532048 podman[421973]: 2025-11-22 10:03:30.73274077 +0000 UTC m=+0.030767888 container died 456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:03:30 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0563457514a851cc18f84b44d737d65e527de17999975806d9a71135495c7c11-merged.mount: Deactivated successfully.
Nov 22 05:03:30 np0005532048 podman[421973]: 2025-11-22 10:03:30.796833948 +0000 UTC m=+0.094861046 container remove 456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_elion, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:03:30 np0005532048 systemd[1]: libpod-conmon-456cb39cb049a03e5c615d5d37aa0c5350ace837975162df13d874b9efd15f2d.scope: Deactivated successfully.
Nov 22 05:03:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:30.930+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:31 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:31 np0005532048 podman[422127]: 2025-11-22 10:03:31.534952938 +0000 UTC m=+0.044475296 container create 22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 05:03:31 np0005532048 systemd[1]: Started libpod-conmon-22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906.scope.
Nov 22 05:03:31 np0005532048 podman[422127]: 2025-11-22 10:03:31.514607047 +0000 UTC m=+0.024129405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:03:31 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:03:31 np0005532048 podman[422127]: 2025-11-22 10:03:31.636025396 +0000 UTC m=+0.145547764 container init 22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 22 05:03:31 np0005532048 podman[422127]: 2025-11-22 10:03:31.645807467 +0000 UTC m=+0.155329805 container start 22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:03:31 np0005532048 kind_bell[422145]: 167 167
Nov 22 05:03:31 np0005532048 systemd[1]: libpod-22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906.scope: Deactivated successfully.
Nov 22 05:03:31 np0005532048 podman[422127]: 2025-11-22 10:03:31.653330353 +0000 UTC m=+0.162852691 container attach 22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 05:03:31 np0005532048 podman[422127]: 2025-11-22 10:03:31.653831325 +0000 UTC m=+0.163353653 container died 22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 05:03:31 np0005532048 podman[422141]: 2025-11-22 10:03:31.662800535 +0000 UTC m=+0.067167394 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 05:03:31 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4eb6480b8b581856f94dbcfcbe8e8cb29c8ac18356713e74729b5972cae7e3ad-merged.mount: Deactivated successfully.
Nov 22 05:03:31 np0005532048 podman[422127]: 2025-11-22 10:03:31.696934256 +0000 UTC m=+0.206456594 container remove 22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 05:03:31 np0005532048 podman[422144]: 2025-11-22 10:03:31.699132619 +0000 UTC m=+0.103235661 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 05:03:31 np0005532048 systemd[1]: libpod-conmon-22d4269777d7fd61a59ded641b55809b5e4a75254f0462d2bd88125a46b91906.scope: Deactivated successfully.
Nov 22 05:03:31 np0005532048 podman[422207]: 2025-11-22 10:03:31.903618523 +0000 UTC m=+0.068591359 container create e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 22 05:03:31 np0005532048 systemd[1]: Started libpod-conmon-e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828.scope.
Nov 22 05:03:31 np0005532048 podman[422207]: 2025-11-22 10:03:31.879862548 +0000 UTC m=+0.044835504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:03:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:31.975+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:31 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:03:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31196b1e8c2da0af96ba65d777b9d66fdf423da4a1c5136825bfeacecfc8ed0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:03:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31196b1e8c2da0af96ba65d777b9d66fdf423da4a1c5136825bfeacecfc8ed0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:03:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31196b1e8c2da0af96ba65d777b9d66fdf423da4a1c5136825bfeacecfc8ed0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:03:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31196b1e8c2da0af96ba65d777b9d66fdf423da4a1c5136825bfeacecfc8ed0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:03:32 np0005532048 podman[422207]: 2025-11-22 10:03:32.006698181 +0000 UTC m=+0.171671097 container init e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:03:32 np0005532048 podman[422207]: 2025-11-22 10:03:32.018129042 +0000 UTC m=+0.183101878 container start e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:03:32 np0005532048 podman[422207]: 2025-11-22 10:03:32.022459509 +0000 UTC m=+0.187432335 container attach e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 05:03:32 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:32 np0005532048 nova_compute[253661]: 2025-11-22 10:03:32.419 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:03:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 05:03:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:32.941+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:33 np0005532048 blissful_curran[422224]: {
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:        "osd_id": 1,
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:        "type": "bluestore"
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:    },
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:        "osd_id": 0,
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:        "type": "bluestore"
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:    },
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:        "osd_id": 2,
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:        "type": "bluestore"
Nov 22 05:03:33 np0005532048 blissful_curran[422224]:    }
Nov 22 05:03:33 np0005532048 blissful_curran[422224]: }
Nov 22 05:03:33 np0005532048 systemd[1]: libpod-e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828.scope: Deactivated successfully.
Nov 22 05:03:33 np0005532048 systemd[1]: libpod-e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828.scope: Consumed 1.238s CPU time.
Nov 22 05:03:33 np0005532048 podman[422257]: 2025-11-22 10:03:33.321299392 +0000 UTC m=+0.044507346 container died e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 05:03:33 np0005532048 systemd[1]: var-lib-containers-storage-overlay-31196b1e8c2da0af96ba65d777b9d66fdf423da4a1c5136825bfeacecfc8ed0d-merged.mount: Deactivated successfully.
Nov 22 05:03:33 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:33 np0005532048 podman[422257]: 2025-11-22 10:03:33.385270437 +0000 UTC m=+0.108478351 container remove e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 05:03:33 np0005532048 systemd[1]: libpod-conmon-e92b97e86a6ae0f70d89b634932721a37cbadc16babddfb2469a2f8fdeb68828.scope: Deactivated successfully.
Nov 22 05:03:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:03:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:03:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:03:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:03:33 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 9a12728d-ffef-486e-9701-27a8f7f1a31a does not exist
Nov 22 05:03:33 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 70abbe0c-4fd9-4314-9266-d16f03822086 does not exist
Nov 22 05:03:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:33.893+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 786 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:03:34 np0005532048 nova_compute[253661]: 2025-11-22 10:03:34.360 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:03:34 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:03:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:03:34 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 786 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 05:03:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:34.864+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:35 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:35 np0005532048 podman[422322]: 2025-11-22 10:03:35.412885161 +0000 UTC m=+0.098111286 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:03:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:35.874+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:36 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:36.856+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:37 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:37 np0005532048 nova_compute[253661]: 2025-11-22 10:03:37.424 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:03:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:37.860+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:38 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:38.885+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 791 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:03:39 np0005532048 nova_compute[253661]: 2025-11-22 10:03:39.363 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:03:39 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:39 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 791 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:39.848+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:40 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:40.851+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:41 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:41.878+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:42 np0005532048 nova_compute[253661]: 2025-11-22 10:03:42.427 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:03:42 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:42.905+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:43 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:43 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:43.928+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 796 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:03:44 np0005532048 nova_compute[253661]: 2025-11-22 10:03:44.385 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:03:44 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:44 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 796 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:44.946+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:45 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:45.938+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:46 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:46.988+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:47 np0005532048 nova_compute[253661]: 2025-11-22 10:03:47.430 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:03:47 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:03:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:47.986+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:48 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:03:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 45K writes, 185K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s#012Cumulative WAL: 45K writes, 15K syncs, 2.92 writes per sync, written: 0.19 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 311 writes, 649 keys, 311 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s#012Interval WAL: 311 writes, 148 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 05:03:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:48.939+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 801 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:03:49 np0005532048 nova_compute[253661]: 2025-11-22 10:03:49.389 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:03:49 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:49 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 801 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:49.952+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:50 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:50.952+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:51 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:51.977+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:03:52
Nov 22 05:03:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:03:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:03:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'vms', 'volumes', '.mgr', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 22 05:03:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:03:52 np0005532048 nova_compute[253661]: 2025-11-22 10:03:52.437 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:03:52 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:03:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:03:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:03:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:03:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:03:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:03:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:52.976+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:53 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:54.013+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 806 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:03:54 np0005532048 nova_compute[253661]: 2025-11-22 10:03:54.391 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:03:54 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:54 np0005532048 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 806 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:55.021+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:55 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:03:55 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5401.2 total, 600.0 interval
Cumulative writes: 45K writes, 178K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
Cumulative WAL: 45K writes, 15K syncs, 2.83 writes per sync, written: 0.17 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 560 writes, 1092 keys, 560 commit groups, 1.0 writes per commit group, ingest: 0.53 MB, 0.00 MB/s
Interval WAL: 560 writes, 273 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 05:03:55 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:03:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:03:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:03:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:03:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:03:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:56.059+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:56 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:57.056+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:57 np0005532048 nova_compute[253661]: 2025-11-22 10:03:57.440 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:03:57 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:58.042+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:58 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:03:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:59.005+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:03:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:03:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:03:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:03:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:03:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 811 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:03:59 np0005532048 nova_compute[253661]: 2025-11-22 10:03:59.393 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:03:59 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:03:59 np0005532048 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 811 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:03:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:03:59.957+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:03:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:04:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.1 total, 600.0 interval
Cumulative writes: 35K writes, 145K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
Cumulative WAL: 35K writes, 12K syncs, 2.90 writes per sync, written: 0.14 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 417 writes, 910 keys, 417 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s
Interval WAL: 417 writes, 202 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 05:04:00 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:00.987+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:01 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:01.969+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:02 np0005532048 podman[422348]: 2025-11-22 10:04:02.409813909 +0000 UTC m=+0.093023361 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 05:04:02 np0005532048 ceph-mgr[75315]: [devicehealth INFO root] Check health
Nov 22 05:04:02 np0005532048 podman[422349]: 2025-11-22 10:04:02.435751248 +0000 UTC m=+0.117339340 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:04:02 np0005532048 nova_compute[253661]: 2025-11-22 10:04:02.442 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:04:02 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:02.939+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:04:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:04:03 np0005532048 nova_compute[253661]: 2025-11-22 10:04:03.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:04:03 np0005532048 nova_compute[253661]: 2025-11-22 10:04:03.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.636494) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805843636547, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1885, "num_deletes": 251, "total_data_size": 2348220, "memory_usage": 2394816, "flush_reason": "Manual Compaction"}
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805843648467, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 2288878, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64584, "largest_seqno": 66468, "table_properties": {"data_size": 2281141, "index_size": 4230, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 20284, "raw_average_key_size": 21, "raw_value_size": 2263733, "raw_average_value_size": 2360, "num_data_blocks": 186, "num_entries": 959, "num_filter_entries": 959, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805704, "oldest_key_time": 1763805704, "file_creation_time": 1763805843, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 12069 microseconds, and 6250 cpu microseconds.
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.648560) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 2288878 bytes OK
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.648598) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.650606) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.650637) EVENT_LOG_v1 {"time_micros": 1763805843650626, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.650673) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2339910, prev total WAL file size 2339910, number of live WAL files 2.
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.652106) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(2235KB)], [152(9833KB)]
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805843652160, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 12358064, "oldest_snapshot_seqno": -1}
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 9183 keys, 10949903 bytes, temperature: kUnknown
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805843732378, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 10949903, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10891656, "index_size": 34198, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 242417, "raw_average_key_size": 26, "raw_value_size": 10730543, "raw_average_value_size": 1168, "num_data_blocks": 1310, "num_entries": 9183, "num_filter_entries": 9183, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805843, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.732696) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 10949903 bytes
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.734310) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.9 rd, 136.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.6 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(10.2) write-amplify(4.8) OK, records in: 9697, records dropped: 514 output_compression: NoCompression
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.734367) EVENT_LOG_v1 {"time_micros": 1763805843734359, "job": 94, "event": "compaction_finished", "compaction_time_micros": 80314, "compaction_time_cpu_micros": 58069, "output_level": 6, "num_output_files": 1, "total_output_size": 10949903, "num_input_records": 9697, "num_output_records": 9183, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805843734882, "job": 94, "event": "table_file_deletion", "file_number": 154}
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805843736745, "job": 94, "event": "table_file_deletion", "file_number": 152}
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.652003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.736812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.736817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.736819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.736820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:04:03 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:04:03.736822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:04:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:03.943+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 819 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:04:04 np0005532048 nova_compute[253661]: 2025-11-22 10:04:04.395 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:04 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:04 np0005532048 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 819 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:04.987+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:05 np0005532048 nova_compute[253661]: 2025-11-22 10:04:05.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:04:05 np0005532048 nova_compute[253661]: 2025-11-22 10:04:05.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:04:05 np0005532048 nova_compute[253661]: 2025-11-22 10:04:05.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:04:05 np0005532048 nova_compute[253661]: 2025-11-22 10:04:05.248 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:04:05 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:06.010+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:06 np0005532048 nova_compute[253661]: 2025-11-22 10:04:06.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:04:06 np0005532048 podman[422387]: 2025-11-22 10:04:06.471558566 +0000 UTC m=+0.142855138 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:04:06 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:07.029+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:07 np0005532048 nova_compute[253661]: 2025-11-22 10:04:07.445 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:07 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:08.071+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:08 np0005532048 nova_compute[253661]: 2025-11-22 10:04:08.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:04:08 np0005532048 nova_compute[253661]: 2025-11-22 10:04:08.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:04:08 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:09.101+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 824 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:04:09 np0005532048 nova_compute[253661]: 2025-11-22 10:04:09.443 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:09 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:09 np0005532048 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 824 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:10.123+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:10 np0005532048 nova_compute[253661]: 2025-11-22 10:04:10.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:04:10 np0005532048 nova_compute[253661]: 2025-11-22 10:04:10.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:04:10 np0005532048 nova_compute[253661]: 2025-11-22 10:04:10.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:04:10 np0005532048 nova_compute[253661]: 2025-11-22 10:04:10.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:04:10 np0005532048 nova_compute[253661]: 2025-11-22 10:04:10.260 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:04:10 np0005532048 nova_compute[253661]: 2025-11-22 10:04:10.260 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:04:10 np0005532048 nova_compute[253661]: 2025-11-22 10:04:10.260 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:04:10 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:04:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/689085572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:04:10 np0005532048 nova_compute[253661]: 2025-11-22 10:04:10.795 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:04:11 np0005532048 nova_compute[253661]: 2025-11-22 10:04:11.036 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:04:11 np0005532048 nova_compute[253661]: 2025-11-22 10:04:11.039 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3546MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:04:11 np0005532048 nova_compute[253661]: 2025-11-22 10:04:11.039 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:04:11 np0005532048 nova_compute[253661]: 2025-11-22 10:04:11.040 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:04:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:11.110+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:11 np0005532048 nova_compute[253661]: 2025-11-22 10:04:11.130 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:04:11 np0005532048 nova_compute[253661]: 2025-11-22 10:04:11.131 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:04:11 np0005532048 nova_compute[253661]: 2025-11-22 10:04:11.158 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:04:11 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:04:11 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2210598809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:04:11 np0005532048 nova_compute[253661]: 2025-11-22 10:04:11.642 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:04:11 np0005532048 nova_compute[253661]: 2025-11-22 10:04:11.651 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:04:11 np0005532048 nova_compute[253661]: 2025-11-22 10:04:11.670 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:04:11 np0005532048 nova_compute[253661]: 2025-11-22 10:04:11.672 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:04:11 np0005532048 nova_compute[253661]: 2025-11-22 10:04:11.672 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:04:11 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:12.114+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:12 np0005532048 nova_compute[253661]: 2025-11-22 10:04:12.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:04:12 np0005532048 nova_compute[253661]: 2025-11-22 10:04:12.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:04:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:04:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1460149436' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:04:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:04:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1460149436' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:04:12 np0005532048 nova_compute[253661]: 2025-11-22 10:04:12.449 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:12 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:13.129+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:13 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:14.085+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 829 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:04:14 np0005532048 nova_compute[253661]: 2025-11-22 10:04:14.446 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:14 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:14 np0005532048 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 829 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:15.118+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:15 np0005532048 nova_compute[253661]: 2025-11-22 10:04:15.244 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:04:15 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:16.123+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:16 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:17.115+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:17 np0005532048 nova_compute[253661]: 2025-11-22 10:04:17.477 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:17 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:18.102+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:18 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:19.061+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 834 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:04:19 np0005532048 nova_compute[253661]: 2025-11-22 10:04:19.448 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:19 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:19 np0005532048 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 834 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:20.058+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:20 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:04:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:21.099+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:21 np0005532048 nova_compute[253661]: 2025-11-22 10:04:21.223 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:04:21 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:22.073+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:22 np0005532048 nova_compute[253661]: 2025-11-22 10:04:22.481 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:04:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:04:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:04:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:04:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:04:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:04:22 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:23.075+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:23 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:24.067+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 839 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:04:24 np0005532048 nova_compute[253661]: 2025-11-22 10:04:24.450 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:04:24 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:24 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 839 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:25.078+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:25 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:26.108+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:04:26 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:27.106+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:27 np0005532048 nova_compute[253661]: 2025-11-22 10:04:27.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:04:27 np0005532048 nova_compute[253661]: 2025-11-22 10:04:27.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 22 05:04:27 np0005532048 nova_compute[253661]: 2025-11-22 10:04:27.484 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:27 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:04:28.012 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:04:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:04:28.012 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:04:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:04:28.013 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:04:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:28.145+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:04:28 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:29.121+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 844 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:04:29 np0005532048 nova_compute[253661]: 2025-11-22 10:04:29.453 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:29 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:29 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 844 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:30.074+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:04:30 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:31.055+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:31 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:32.007+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:32 np0005532048 nova_compute[253661]: 2025-11-22 10:04:32.488 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:04:32 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:32.971+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:33 np0005532048 podman[422457]: 2025-11-22 10:04:33.381086582 +0000 UTC m=+0.071716946 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 05:04:33 np0005532048 podman[422458]: 2025-11-22 10:04:33.381086592 +0000 UTC m=+0.063384631 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 05:04:33 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:34.019+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 849 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:04:34 np0005532048 nova_compute[253661]: 2025-11-22 10:04:34.455 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:04:34 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 17eb6a2e-09f3-4315-8e62-36770a79d6b9 does not exist
Nov 22 05:04:34 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 20afb727-dfa4-4562-ae27-67f359ce1a11 does not exist
Nov 22 05:04:34 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 8a45c9ae-f1d5-4960-9731-95866f1d9d11 does not exist
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:04:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 849 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:04:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:04:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:34.980+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:35 np0005532048 podman[422767]: 2025-11-22 10:04:35.301586249 +0000 UTC m=+0.059758002 container create e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 22 05:04:35 np0005532048 systemd[1]: Started libpod-conmon-e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995.scope.
Nov 22 05:04:35 np0005532048 podman[422767]: 2025-11-22 10:04:35.273974469 +0000 UTC m=+0.032146322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:04:35 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:04:35 np0005532048 podman[422767]: 2025-11-22 10:04:35.401510349 +0000 UTC m=+0.159682152 container init e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williams, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 05:04:35 np0005532048 podman[422767]: 2025-11-22 10:04:35.415500993 +0000 UTC m=+0.173672766 container start e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williams, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:04:35 np0005532048 podman[422767]: 2025-11-22 10:04:35.419375819 +0000 UTC m=+0.177547622 container attach e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williams, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:04:35 np0005532048 cool_williams[422783]: 167 167
Nov 22 05:04:35 np0005532048 systemd[1]: libpod-e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995.scope: Deactivated successfully.
Nov 22 05:04:35 np0005532048 conmon[422783]: conmon e811f682b76cb0777c96 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995.scope/container/memory.events
Nov 22 05:04:35 np0005532048 podman[422767]: 2025-11-22 10:04:35.425233633 +0000 UTC m=+0.183405386 container died e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 05:04:35 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9ef8774ea0fa5d734d03cc0470e5330f51fa5255c77df897726ea4253bd6a5d5-merged.mount: Deactivated successfully.
Nov 22 05:04:35 np0005532048 podman[422767]: 2025-11-22 10:04:35.469150084 +0000 UTC m=+0.227321857 container remove e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williams, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:04:35 np0005532048 systemd[1]: libpod-conmon-e811f682b76cb0777c96789d5574747447bb4647bdc6ad9e6192fbf60cf11995.scope: Deactivated successfully.
Nov 22 05:04:35 np0005532048 podman[422806]: 2025-11-22 10:04:35.639425275 +0000 UTC m=+0.046425393 container create da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rubin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:04:35 np0005532048 systemd[1]: Started libpod-conmon-da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a.scope.
Nov 22 05:04:35 np0005532048 podman[422806]: 2025-11-22 10:04:35.620605112 +0000 UTC m=+0.027605260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:04:35 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:04:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5117e3535fefd8f28e56a7e8b899cd0d6b793e263a4ed156e5e25c7a621a7ff3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:04:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5117e3535fefd8f28e56a7e8b899cd0d6b793e263a4ed156e5e25c7a621a7ff3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:04:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5117e3535fefd8f28e56a7e8b899cd0d6b793e263a4ed156e5e25c7a621a7ff3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:04:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5117e3535fefd8f28e56a7e8b899cd0d6b793e263a4ed156e5e25c7a621a7ff3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:04:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5117e3535fefd8f28e56a7e8b899cd0d6b793e263a4ed156e5e25c7a621a7ff3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:04:35 np0005532048 podman[422806]: 2025-11-22 10:04:35.756199691 +0000 UTC m=+0.163199889 container init da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rubin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:04:35 np0005532048 podman[422806]: 2025-11-22 10:04:35.770053892 +0000 UTC m=+0.177054050 container start da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 05:04:35 np0005532048 podman[422806]: 2025-11-22 10:04:35.776628483 +0000 UTC m=+0.183628641 container attach da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rubin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:04:35 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:35.993+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:36 np0005532048 nice_rubin[422823]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:04:36 np0005532048 nice_rubin[422823]: --> relative data size: 1.0
Nov 22 05:04:36 np0005532048 nice_rubin[422823]: --> All data devices are unavailable
Nov 22 05:04:36 np0005532048 systemd[1]: libpod-da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a.scope: Deactivated successfully.
Nov 22 05:04:36 np0005532048 podman[422806]: 2025-11-22 10:04:36.907712007 +0000 UTC m=+1.314712135 container died da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 05:04:36 np0005532048 systemd[1]: libpod-da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a.scope: Consumed 1.097s CPU time.
Nov 22 05:04:36 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5117e3535fefd8f28e56a7e8b899cd0d6b793e263a4ed156e5e25c7a621a7ff3-merged.mount: Deactivated successfully.
Nov 22 05:04:36 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:36 np0005532048 podman[422806]: 2025-11-22 10:04:36.97851746 +0000 UTC m=+1.385517568 container remove da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rubin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:04:36 np0005532048 systemd[1]: libpod-conmon-da72c1fbb0f6502c99c807ccf1ff0146a9a80a1a4144dd7bbd2c676fa873004a.scope: Deactivated successfully.
Nov 22 05:04:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:37.000+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:37 np0005532048 podman[422853]: 2025-11-22 10:04:37.064740732 +0000 UTC m=+0.117117823 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 05:04:37 np0005532048 nova_compute[253661]: 2025-11-22 10:04:37.532 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:37 np0005532048 podman[423029]: 2025-11-22 10:04:37.813698999 +0000 UTC m=+0.076801532 container create 94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_allen, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 05:04:37 np0005532048 systemd[1]: Started libpod-conmon-94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa.scope.
Nov 22 05:04:37 np0005532048 podman[423029]: 2025-11-22 10:04:37.783921286 +0000 UTC m=+0.047023679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:04:37 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:04:37 np0005532048 podman[423029]: 2025-11-22 10:04:37.909854006 +0000 UTC m=+0.172956319 container init 94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_allen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:04:37 np0005532048 podman[423029]: 2025-11-22 10:04:37.919649258 +0000 UTC m=+0.182751571 container start 94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:04:37 np0005532048 podman[423029]: 2025-11-22 10:04:37.923740128 +0000 UTC m=+0.186842461 container attach 94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:04:37 np0005532048 quizzical_allen[423045]: 167 167
Nov 22 05:04:37 np0005532048 systemd[1]: libpod-94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa.scope: Deactivated successfully.
Nov 22 05:04:37 np0005532048 podman[423029]: 2025-11-22 10:04:37.926942477 +0000 UTC m=+0.190044810 container died 94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_allen, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 05:04:37 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a804b735849b59e60f48e31111c7d97947bad9c1f5385ff84b0ecea8eb7dbcc2-merged.mount: Deactivated successfully.
Nov 22 05:04:37 np0005532048 podman[423029]: 2025-11-22 10:04:37.967394693 +0000 UTC m=+0.230497006 container remove 94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 05:04:37 np0005532048 systemd[1]: libpod-conmon-94ca7ed53cb5383e909301faa4a894e8000283686ba3ff1ebc2672cedde223aa.scope: Deactivated successfully.
Nov 22 05:04:37 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:38.037+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:38 np0005532048 podman[423068]: 2025-11-22 10:04:38.140385842 +0000 UTC m=+0.048431464 container create 1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:04:38 np0005532048 systemd[1]: Started libpod-conmon-1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462.scope.
Nov 22 05:04:38 np0005532048 podman[423068]: 2025-11-22 10:04:38.118364709 +0000 UTC m=+0.026410361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:04:38 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:04:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f974eaa021523c178a07ba658b949fd557be146ae1386ec04cafc5a8400c095f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:04:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f974eaa021523c178a07ba658b949fd557be146ae1386ec04cafc5a8400c095f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:04:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f974eaa021523c178a07ba658b949fd557be146ae1386ec04cafc5a8400c095f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:04:38 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f974eaa021523c178a07ba658b949fd557be146ae1386ec04cafc5a8400c095f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:04:38 np0005532048 podman[423068]: 2025-11-22 10:04:38.268021673 +0000 UTC m=+0.176067305 container init 1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:04:38 np0005532048 podman[423068]: 2025-11-22 10:04:38.277822235 +0000 UTC m=+0.185867887 container start 1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 05:04:38 np0005532048 podman[423068]: 2025-11-22 10:04:38.311273408 +0000 UTC m=+0.219319070 container attach 1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 05:04:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:39 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:39.068+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:39 np0005532048 amazing_wing[423084]: {
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:    "0": [
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:        {
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "devices": [
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "/dev/loop3"
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            ],
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "lv_name": "ceph_lv0",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "lv_size": "21470642176",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "name": "ceph_lv0",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "tags": {
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.cluster_name": "ceph",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.crush_device_class": "",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.encrypted": "0",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.osd_id": "0",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.type": "block",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.vdo": "0"
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            },
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "type": "block",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "vg_name": "ceph_vg0"
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:        }
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:    ],
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:    "1": [
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:        {
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "devices": [
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "/dev/loop4"
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            ],
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "lv_name": "ceph_lv1",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "lv_size": "21470642176",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "name": "ceph_lv1",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "tags": {
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.cluster_name": "ceph",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.crush_device_class": "",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.encrypted": "0",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.osd_id": "1",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.type": "block",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.vdo": "0"
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            },
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "type": "block",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "vg_name": "ceph_vg1"
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:        }
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:    ],
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:    "2": [
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:        {
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "devices": [
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "/dev/loop5"
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            ],
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "lv_name": "ceph_lv2",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "lv_size": "21470642176",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "name": "ceph_lv2",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "tags": {
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.cluster_name": "ceph",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.crush_device_class": "",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.encrypted": "0",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.osd_id": "2",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.type": "block",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:                "ceph.vdo": "0"
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            },
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "type": "block",
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:            "vg_name": "ceph_vg2"
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:        }
Nov 22 05:04:39 np0005532048 amazing_wing[423084]:    ]
Nov 22 05:04:39 np0005532048 amazing_wing[423084]: }
Nov 22 05:04:39 np0005532048 systemd[1]: libpod-1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462.scope: Deactivated successfully.
Nov 22 05:04:39 np0005532048 conmon[423084]: conmon 1afdb66dd2f5d7aa0f02 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462.scope/container/memory.events
Nov 22 05:04:39 np0005532048 podman[423068]: 2025-11-22 10:04:39.148265922 +0000 UTC m=+1.056311544 container died 1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:04:39 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f974eaa021523c178a07ba658b949fd557be146ae1386ec04cafc5a8400c095f-merged.mount: Deactivated successfully.
Nov 22 05:04:39 np0005532048 podman[423068]: 2025-11-22 10:04:39.226920369 +0000 UTC m=+1.134966001 container remove 1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wing, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:04:39 np0005532048 systemd[1]: libpod-conmon-1afdb66dd2f5d7aa0f0293698329f7b7676b52a047aa0af863ca642a0a3d8462.scope: Deactivated successfully.
Nov 22 05:04:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 854 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:04:39 np0005532048 nova_compute[253661]: 2025-11-22 10:04:39.458 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:39 np0005532048 podman[423246]: 2025-11-22 10:04:39.985933563 +0000 UTC m=+0.047691675 container create 68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_swartz, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 05:04:40 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:40 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 854 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:40 np0005532048 systemd[1]: Started libpod-conmon-68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab.scope.
Nov 22 05:04:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:40.031+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:40 np0005532048 podman[423246]: 2025-11-22 10:04:39.96552959 +0000 UTC m=+0.027287742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:04:40 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:04:40 np0005532048 podman[423246]: 2025-11-22 10:04:40.079413243 +0000 UTC m=+0.141171385 container init 68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_swartz, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:04:40 np0005532048 podman[423246]: 2025-11-22 10:04:40.086844697 +0000 UTC m=+0.148602819 container start 68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_swartz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:04:40 np0005532048 podman[423246]: 2025-11-22 10:04:40.09066609 +0000 UTC m=+0.152424192 container attach 68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 05:04:40 np0005532048 elastic_swartz[423263]: 167 167
Nov 22 05:04:40 np0005532048 systemd[1]: libpod-68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab.scope: Deactivated successfully.
Nov 22 05:04:40 np0005532048 podman[423246]: 2025-11-22 10:04:40.094976657 +0000 UTC m=+0.156734759 container died 68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:04:40 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e0884ddbff43dff30790bc9e701e2948ad498573846f40b04021b8a6c4f01d92-merged.mount: Deactivated successfully.
Nov 22 05:04:40 np0005532048 podman[423246]: 2025-11-22 10:04:40.133748021 +0000 UTC m=+0.195506123 container remove 68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_swartz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 05:04:40 np0005532048 systemd[1]: libpod-conmon-68cfabd70c128b7093616ecfd22781d89948831cedd7531494adce9a15ee5bab.scope: Deactivated successfully.
Nov 22 05:04:40 np0005532048 podman[423286]: 2025-11-22 10:04:40.328798303 +0000 UTC m=+0.059804993 container create 420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:04:40 np0005532048 systemd[1]: Started libpod-conmon-420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33.scope.
Nov 22 05:04:40 np0005532048 podman[423286]: 2025-11-22 10:04:40.296914308 +0000 UTC m=+0.027921108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:04:40 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:04:40 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e899315117740de4ee823546f6d9c83cd6b1d2daf72535c00a9a2f4d54e1597a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:04:40 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e899315117740de4ee823546f6d9c83cd6b1d2daf72535c00a9a2f4d54e1597a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:04:40 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e899315117740de4ee823546f6d9c83cd6b1d2daf72535c00a9a2f4d54e1597a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:04:40 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e899315117740de4ee823546f6d9c83cd6b1d2daf72535c00a9a2f4d54e1597a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:04:40 np0005532048 podman[423286]: 2025-11-22 10:04:40.43634552 +0000 UTC m=+0.167352220 container init 420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 05:04:40 np0005532048 podman[423286]: 2025-11-22 10:04:40.450423476 +0000 UTC m=+0.181430176 container start 420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:04:40 np0005532048 podman[423286]: 2025-11-22 10:04:40.453209775 +0000 UTC m=+0.184216465 container attach 420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 05:04:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:40.998+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:41 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:41 np0005532048 kind_almeida[423302]: {
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:        "osd_id": 1,
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:        "type": "bluestore"
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:    },
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:        "osd_id": 0,
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:        "type": "bluestore"
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:    },
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:        "osd_id": 2,
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:        "type": "bluestore"
Nov 22 05:04:41 np0005532048 kind_almeida[423302]:    }
Nov 22 05:04:41 np0005532048 kind_almeida[423302]: }
Nov 22 05:04:41 np0005532048 systemd[1]: libpod-420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33.scope: Deactivated successfully.
Nov 22 05:04:41 np0005532048 systemd[1]: libpod-420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33.scope: Consumed 1.055s CPU time.
Nov 22 05:04:41 np0005532048 podman[423286]: 2025-11-22 10:04:41.498929957 +0000 UTC m=+1.229936667 container died 420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:04:41 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e899315117740de4ee823546f6d9c83cd6b1d2daf72535c00a9a2f4d54e1597a-merged.mount: Deactivated successfully.
Nov 22 05:04:41 np0005532048 podman[423286]: 2025-11-22 10:04:41.566171513 +0000 UTC m=+1.297178203 container remove 420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_almeida, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 05:04:41 np0005532048 systemd[1]: libpod-conmon-420005b94685bd98f36dca5f7afef8efa377a8c6adc0cd249e5ce21144a0ed33.scope: Deactivated successfully.
Nov 22 05:04:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:04:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:04:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:04:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:04:41 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 2dff942a-1001-44b6-839c-65288f05eeab does not exist
Nov 22 05:04:41 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 13d8cda4-786d-4104-bf70-d582f771ac3d does not exist
Nov 22 05:04:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:42.001+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:42 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:42 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:04:42 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:04:42 np0005532048 nova_compute[253661]: 2025-11-22 10:04:42.537 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:43 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:43.047+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:44 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:44.076+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 859 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:04:44 np0005532048 nova_compute[253661]: 2025-11-22 10:04:44.461 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:45 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:45 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 859 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:45.114+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:46 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:46.152+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:47 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:47.119+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:47 np0005532048 nova_compute[253661]: 2025-11-22 10:04:47.541 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:48.088+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:48 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:49 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:49.121+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 865 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:04:49 np0005532048 nova_compute[253661]: 2025-11-22 10:04:49.462 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:50.108+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:50 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:50 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 865 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:51 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:04:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:51.153+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:04:52 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:04:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:52.190+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:04:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:04:52
Nov 22 05:04:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:04:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:04:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'images', 'default.rgw.control', '.mgr', 'vms', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Nov 22 05:04:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:04:52 np0005532048 nova_compute[253661]: 2025-11-22 10:04:52.544 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:04:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:04:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:04:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:04:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:04:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:04:53 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:04:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:53.224+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:04:54 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:04:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:54.233+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:04:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 870 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:04:54 np0005532048 nova_compute[253661]: 2025-11-22 10:04:54.466 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:55 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:04:55 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 870 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:55.202+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:04:55 np0005532048 nova_compute[253661]: 2025-11-22 10:04:55.339 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:04:55 np0005532048 nova_compute[253661]: 2025-11-22 10:04:55.340 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 22 05:04:55 np0005532048 nova_compute[253661]: 2025-11-22 10:04:55.354 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 22 05:04:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:04:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:04:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:04:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:04:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:04:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:56.155+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:04:56 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:04:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:57.115+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:04:57 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:04:57 np0005532048 nova_compute[253661]: 2025-11-22 10:04:57.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:04:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:58.139+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:04:58 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:04:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:04:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:04:59.116+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:04:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:04:59 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:04:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:04:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:04:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:04:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:04:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:04:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 875 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:04:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:04:59 np0005532048 nova_compute[253661]: 2025-11-22 10:04:59.484 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:00.164+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:00 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:00 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 875 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:01.204+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:01 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:02 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:02.235+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:02 np0005532048 nova_compute[253661]: 2025-11-22 10:05:02.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:03.201+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:05:03 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:05:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:05:03 np0005532048 nova_compute[253661]: 2025-11-22 10:05:03.243 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:05:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:04.197+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:04 np0005532048 nova_compute[253661]: 2025-11-22 10:05:04.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:05:04 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 880 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:05:04 np0005532048 podman[423401]: 2025-11-22 10:05:04.391795119 +0000 UTC m=+0.075707975 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 05:05:04 np0005532048 podman[423402]: 2025-11-22 10:05:04.397828777 +0000 UTC m=+0.082249796 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 05:05:04 np0005532048 nova_compute[253661]: 2025-11-22 10:05:04.516 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:05.239+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:05 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:05 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 880 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:06.221+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:06 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:07 np0005532048 nova_compute[253661]: 2025-11-22 10:05:07.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:05:07 np0005532048 nova_compute[253661]: 2025-11-22 10:05:07.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:05:07 np0005532048 nova_compute[253661]: 2025-11-22 10:05:07.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:05:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:07.237+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:07 np0005532048 nova_compute[253661]: 2025-11-22 10:05:07.242 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:05:07 np0005532048 nova_compute[253661]: 2025-11-22 10:05:07.242 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:05:07 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:07 np0005532048 podman[423441]: 2025-11-22 10:05:07.418155678 +0000 UTC m=+0.110884701 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 05:05:07 np0005532048 nova_compute[253661]: 2025-11-22 10:05:07.553 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:08 np0005532048 nova_compute[253661]: 2025-11-22 10:05:08.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:05:08 np0005532048 nova_compute[253661]: 2025-11-22 10:05:08.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:05:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:08.239+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:08 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:09.263+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:09 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 885 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:05:09 np0005532048 nova_compute[253661]: 2025-11-22 10:05:09.519 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:10 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:10 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 885 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:10.297+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:11 np0005532048 nova_compute[253661]: 2025-11-22 10:05:11.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:05:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:11.262+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:11 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:12 np0005532048 nova_compute[253661]: 2025-11-22 10:05:12.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:05:12 np0005532048 nova_compute[253661]: 2025-11-22 10:05:12.262 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:05:12 np0005532048 nova_compute[253661]: 2025-11-22 10:05:12.262 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:05:12 np0005532048 nova_compute[253661]: 2025-11-22 10:05:12.263 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:05:12 np0005532048 nova_compute[253661]: 2025-11-22 10:05:12.263 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:05:12 np0005532048 nova_compute[253661]: 2025-11-22 10:05:12.264 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:05:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:12.289+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:12 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:05:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/438071525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:05:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:05:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/438071525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:05:12 np0005532048 nova_compute[253661]: 2025-11-22 10:05:12.600 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:05:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1173368490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:05:12 np0005532048 nova_compute[253661]: 2025-11-22 10:05:12.791 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:05:12 np0005532048 nova_compute[253661]: 2025-11-22 10:05:12.941 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:05:12 np0005532048 nova_compute[253661]: 2025-11-22 10:05:12.943 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3547MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:05:12 np0005532048 nova_compute[253661]: 2025-11-22 10:05:12.944 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:05:12 np0005532048 nova_compute[253661]: 2025-11-22 10:05:12.944 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:05:13 np0005532048 nova_compute[253661]: 2025-11-22 10:05:13.003 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:05:13 np0005532048 nova_compute[253661]: 2025-11-22 10:05:13.004 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:05:13 np0005532048 nova_compute[253661]: 2025-11-22 10:05:13.030 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:05:13 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:13.320+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:05:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425683252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:05:13 np0005532048 nova_compute[253661]: 2025-11-22 10:05:13.481 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:05:13 np0005532048 nova_compute[253661]: 2025-11-22 10:05:13.490 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:05:13 np0005532048 nova_compute[253661]: 2025-11-22 10:05:13.513 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:05:13 np0005532048 nova_compute[253661]: 2025-11-22 10:05:13.515 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:05:13 np0005532048 nova_compute[253661]: 2025-11-22 10:05:13.515 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:05:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:14.294+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 890 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:05:14 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:14 np0005532048 nova_compute[253661]: 2025-11-22 10:05:14.516 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:05:14 np0005532048 nova_compute[253661]: 2025-11-22 10:05:14.521 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:15 np0005532048 nova_compute[253661]: 2025-11-22 10:05:15.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:05:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:15.290+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:15 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:15 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 890 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:16.333+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:16 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:17.355+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:17 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:17 np0005532048 nova_compute[253661]: 2025-11-22 10:05:17.603 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:18 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:18.394+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:18 np0005532048 nova_compute[253661]: 2025-11-22 10:05:18.476 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:05:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 895 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:05:19 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:19 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 895 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:19.435+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:19 np0005532048 nova_compute[253661]: 2025-11-22 10:05:19.567 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:20 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:20.415+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:21.379+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:21 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:22.362+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:22 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:22 np0005532048 nova_compute[253661]: 2025-11-22 10:05:22.606 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Nov 22 05:05:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:05:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:05:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:05:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:05:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:05:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:05:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:23.377+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:05:23 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 900 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.334719) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805924334756, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1186, "num_deletes": 252, "total_data_size": 1317340, "memory_usage": 1339896, "flush_reason": "Manual Compaction"}
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805924343580, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 842992, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66469, "largest_seqno": 67654, "table_properties": {"data_size": 838661, "index_size": 1662, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13514, "raw_average_key_size": 21, "raw_value_size": 828378, "raw_average_value_size": 1321, "num_data_blocks": 74, "num_entries": 627, "num_filter_entries": 627, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805844, "oldest_key_time": 1763805844, "file_creation_time": 1763805924, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 8908 microseconds, and 2962 cpu microseconds.
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.343624) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 842992 bytes OK
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.343647) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.344916) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.344934) EVENT_LOG_v1 {"time_micros": 1763805924344928, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.344954) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 1311738, prev total WAL file size 1311738, number of live WAL files 2.
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.345643) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353036' seq:72057594037927935, type:22 .. '6D6772737461740032373539' seq:0, type:0; will stop at (end)
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(823KB)], [155(10MB)]
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805924345668, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 11792895, "oldest_snapshot_seqno": -1}
Nov 22 05:05:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:24.393+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 9328 keys, 8875500 bytes, temperature: kUnknown
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805924395808, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 8875500, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8820452, "index_size": 30615, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23365, "raw_key_size": 246138, "raw_average_key_size": 26, "raw_value_size": 8660814, "raw_average_value_size": 928, "num_data_blocks": 1162, "num_entries": 9328, "num_filter_entries": 9328, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805924, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.396061) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 8875500 bytes
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.397277) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.8 rd, 176.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.4 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(24.5) write-amplify(10.5) OK, records in: 9810, records dropped: 482 output_compression: NoCompression
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.397292) EVENT_LOG_v1 {"time_micros": 1763805924397285, "job": 96, "event": "compaction_finished", "compaction_time_micros": 50231, "compaction_time_cpu_micros": 25337, "output_level": 6, "num_output_files": 1, "total_output_size": 8875500, "num_input_records": 9810, "num_output_records": 9328, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805924397535, "job": 96, "event": "table_file_deletion", "file_number": 157}
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805924399569, "job": 96, "event": "table_file_deletion", "file_number": 155}
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.345579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.399598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.399602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.399604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.399605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:05:24.399607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:05:24 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 900 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:24 np0005532048 nova_compute[253661]: 2025-11-22 10:05:24.569 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Nov 22 05:05:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:25.417+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:25 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:26.371+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:26 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 22 05:05:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:27.365+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:27 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:27 np0005532048 nova_compute[253661]: 2025-11-22 10:05:27.647 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:05:28.013 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:05:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:05:28.014 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:05:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:05:28.014 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:05:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:28.352+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:28 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:05:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 905 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:05:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:29.387+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:29 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:29 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 905 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:29 np0005532048 nova_compute[253661]: 2025-11-22 10:05:29.573 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:30.358+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:30 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:05:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:31.353+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:31 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:32.323+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:32 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:32 np0005532048 nova_compute[253661]: 2025-11-22 10:05:32.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:05:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:33.313+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:33 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:34.288+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 910 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:05:34 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:34 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 910 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:34 np0005532048 nova_compute[253661]: 2025-11-22 10:05:34.616 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Nov 22 05:05:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:35.262+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:35 np0005532048 podman[423511]: 2025-11-22 10:05:35.399394518 +0000 UTC m=+0.078029482 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 05:05:35 np0005532048 podman[423512]: 2025-11-22 10:05:35.418285293 +0000 UTC m=+0.087895465 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:05:35 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:36.293+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:36 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 22 05:05:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:37.314+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:37 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:37 np0005532048 nova_compute[253661]: 2025-11-22 10:05:37.654 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:38.360+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:38 np0005532048 podman[423551]: 2025-11-22 10:05:38.429397917 +0000 UTC m=+0.112151442 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 05:05:38 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Nov 22 05:05:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 915 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:05:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:39.340+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:39 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:39 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 915 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:39 np0005532048 nova_compute[253661]: 2025-11-22 10:05:39.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:05:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:40.390+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:40 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:40 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:41.418+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:41 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:42.432+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:42 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:42 np0005532048 nova_compute[253661]: 2025-11-22 10:05:42.708 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:05:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:43 np0005532048 podman[423849]: 2025-11-22 10:05:43.336743741 +0000 UTC m=+0.045340848 container create 4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_borg, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 22 05:05:43 np0005532048 systemd[1]: Started libpod-conmon-4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8.scope.
Nov 22 05:05:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:43.389+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:43 np0005532048 podman[423849]: 2025-11-22 10:05:43.318146852 +0000 UTC m=+0.026743979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:05:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:05:43 np0005532048 podman[423849]: 2025-11-22 10:05:43.430576121 +0000 UTC m=+0.139173238 container init 4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_borg, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 05:05:43 np0005532048 podman[423849]: 2025-11-22 10:05:43.440814052 +0000 UTC m=+0.149411149 container start 4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_borg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:05:43 np0005532048 podman[423849]: 2025-11-22 10:05:43.447483797 +0000 UTC m=+0.156080904 container attach 4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_borg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:05:43 np0005532048 dazzling_borg[423865]: 167 167
Nov 22 05:05:43 np0005532048 systemd[1]: libpod-4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8.scope: Deactivated successfully.
Nov 22 05:05:43 np0005532048 podman[423849]: 2025-11-22 10:05:43.450978182 +0000 UTC m=+0.159575289 container died 4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_borg, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:05:43 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f62671ae6e76e613a2929e42567dc6956b66c0df16fd477486afb0ea7980705f-merged.mount: Deactivated successfully.
Nov 22 05:05:43 np0005532048 podman[423849]: 2025-11-22 10:05:43.499393684 +0000 UTC m=+0.207990781 container remove 4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_borg, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 05:05:43 np0005532048 systemd[1]: libpod-conmon-4fa23c34669f83c93027e05455a684975971d9b61bcc8f308216c729f38036b8.scope: Deactivated successfully.
Nov 22 05:05:43 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:43 np0005532048 podman[423891]: 2025-11-22 10:05:43.677942229 +0000 UTC m=+0.041198324 container create cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:05:43 np0005532048 systemd[1]: Started libpod-conmon-cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3.scope.
Nov 22 05:05:43 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:05:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe24a107229169f759503bdfb53fa7980451c3f3f56aaee62bd97fddff18e752/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:05:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe24a107229169f759503bdfb53fa7980451c3f3f56aaee62bd97fddff18e752/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:05:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe24a107229169f759503bdfb53fa7980451c3f3f56aaee62bd97fddff18e752/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:05:43 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe24a107229169f759503bdfb53fa7980451c3f3f56aaee62bd97fddff18e752/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:05:43 np0005532048 podman[423891]: 2025-11-22 10:05:43.660617273 +0000 UTC m=+0.023873388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:05:43 np0005532048 podman[423891]: 2025-11-22 10:05:43.76774031 +0000 UTC m=+0.130996435 container init cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:05:43 np0005532048 podman[423891]: 2025-11-22 10:05:43.780820652 +0000 UTC m=+0.144076777 container start cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:05:43 np0005532048 podman[423891]: 2025-11-22 10:05:43.785570009 +0000 UTC m=+0.148826134 container attach cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 05:05:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 920 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:05:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:44.415+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:44 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 920 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:44 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:44 np0005532048 nova_compute[253661]: 2025-11-22 10:05:44.620 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:05:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]: [
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:    {
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:        "available": false,
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:        "ceph_device": false,
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:        "lsm_data": {},
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:        "lvs": [],
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:        "path": "/dev/sr0",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:        "rejected_reasons": [
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "Has a FileSystem",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "Insufficient space (<5GB)"
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:        ],
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:        "sys_api": {
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "actuators": null,
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "device_nodes": "sr0",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "devname": "sr0",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "human_readable_size": "482.00 KB",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "id_bus": "ata",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "model": "QEMU DVD-ROM",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "nr_requests": "2",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "parent": "/dev/sr0",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "partitions": {},
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "path": "/dev/sr0",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "removable": "1",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "rev": "2.5+",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "ro": "0",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "rotational": "1",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "sas_address": "",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "sas_device_handle": "",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "scheduler_mode": "mq-deadline",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "sectors": 0,
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "sectorsize": "2048",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "size": 493568.0,
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "support_discard": "2048",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "type": "disk",
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:            "vendor": "QEMU"
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:        }
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]:    }
Nov 22 05:05:45 np0005532048 pensive_nightingale[423908]: ]
Nov 22 05:05:45 np0005532048 systemd[1]: libpod-cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3.scope: Deactivated successfully.
Nov 22 05:05:45 np0005532048 systemd[1]: libpod-cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3.scope: Consumed 1.537s CPU time.
Nov 22 05:05:45 np0005532048 podman[423891]: 2025-11-22 10:05:45.279214088 +0000 UTC m=+1.642470203 container died cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:05:45 np0005532048 systemd[1]: var-lib-containers-storage-overlay-fe24a107229169f759503bdfb53fa7980451c3f3f56aaee62bd97fddff18e752-merged.mount: Deactivated successfully.
Nov 22 05:05:45 np0005532048 podman[423891]: 2025-11-22 10:05:45.344282779 +0000 UTC m=+1.707538864 container remove cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 05:05:45 np0005532048 systemd[1]: libpod-conmon-cb968c3072142a342997cb6f0d6d372e98ea00e8794523dc9a4a861454d157f3.scope: Deactivated successfully.
Nov 22 05:05:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:05:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:05:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:05:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:05:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:05:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:05:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:05:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:05:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:05:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:05:45 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 6febcea1-14fd-47c1-b3a8-ff7510c17371 does not exist
Nov 22 05:05:45 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 9f3d8736-da6a-4946-addc-7019b02f4685 does not exist
Nov 22 05:05:45 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 291fede7-c742-41d9-b647-58707cc8dd9e does not exist
Nov 22 05:05:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:05:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:05:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:05:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:05:45 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:05:45 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:05:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:45.460+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:46 np0005532048 podman[425886]: 2025-11-22 10:05:46.023480709 +0000 UTC m=+0.042673912 container create 38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:05:46 np0005532048 systemd[1]: Started libpod-conmon-38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774.scope.
Nov 22 05:05:46 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:05:46 np0005532048 podman[425886]: 2025-11-22 10:05:46.005838005 +0000 UTC m=+0.025031238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:05:46 np0005532048 podman[425886]: 2025-11-22 10:05:46.112370947 +0000 UTC m=+0.131564150 container init 38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shtern, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:05:46 np0005532048 podman[425886]: 2025-11-22 10:05:46.119568014 +0000 UTC m=+0.138761217 container start 38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shtern, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:05:46 np0005532048 podman[425886]: 2025-11-22 10:05:46.122917657 +0000 UTC m=+0.142110970 container attach 38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:05:46 np0005532048 laughing_shtern[425902]: 167 167
Nov 22 05:05:46 np0005532048 systemd[1]: libpod-38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774.scope: Deactivated successfully.
Nov 22 05:05:46 np0005532048 podman[425886]: 2025-11-22 10:05:46.126846853 +0000 UTC m=+0.146040046 container died 38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 05:05:46 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5d9a9639fb932e8eaab9391452108b3bf81333a9e2a1827ef3f41dd09b0c24fc-merged.mount: Deactivated successfully.
Nov 22 05:05:46 np0005532048 podman[425886]: 2025-11-22 10:05:46.164030479 +0000 UTC m=+0.183223682 container remove 38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shtern, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 05:05:46 np0005532048 systemd[1]: libpod-conmon-38a63a71a40518fa2704e4762df203735b73d15b78ec0dee988116a3d7dea774.scope: Deactivated successfully.
Nov 22 05:05:46 np0005532048 podman[425924]: 2025-11-22 10:05:46.312720909 +0000 UTC m=+0.038861908 container create d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 05:05:46 np0005532048 systemd[1]: Started libpod-conmon-d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8.scope.
Nov 22 05:05:46 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:05:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d911e6483af10294e164a07507748be482b0029f4bcd2c18f6395684cb8689d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:05:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d911e6483af10294e164a07507748be482b0029f4bcd2c18f6395684cb8689d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:05:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d911e6483af10294e164a07507748be482b0029f4bcd2c18f6395684cb8689d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:05:46 np0005532048 podman[425924]: 2025-11-22 10:05:46.295621618 +0000 UTC m=+0.021762637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:05:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d911e6483af10294e164a07507748be482b0029f4bcd2c18f6395684cb8689d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:05:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d911e6483af10294e164a07507748be482b0029f4bcd2c18f6395684cb8689d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:05:46 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:05:46 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:05:46 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:05:46 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:05:46 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:05:46 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:46 np0005532048 podman[425924]: 2025-11-22 10:05:46.414748451 +0000 UTC m=+0.140889470 container init d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elgamal, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:05:46 np0005532048 podman[425924]: 2025-11-22 10:05:46.424091961 +0000 UTC m=+0.150232960 container start d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 05:05:46 np0005532048 podman[425924]: 2025-11-22 10:05:46.42733145 +0000 UTC m=+0.153472589 container attach d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 05:05:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:46.494+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:47 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:47 np0005532048 gracious_elgamal[425940]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:05:47 np0005532048 gracious_elgamal[425940]: --> relative data size: 1.0
Nov 22 05:05:47 np0005532048 gracious_elgamal[425940]: --> All data devices are unavailable
Nov 22 05:05:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:47.454+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:47 np0005532048 systemd[1]: libpod-d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8.scope: Deactivated successfully.
Nov 22 05:05:47 np0005532048 podman[425924]: 2025-11-22 10:05:47.485831797 +0000 UTC m=+1.211972826 container died d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:05:47 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d911e6483af10294e164a07507748be482b0029f4bcd2c18f6395684cb8689d9-merged.mount: Deactivated successfully.
Nov 22 05:05:47 np0005532048 podman[425924]: 2025-11-22 10:05:47.547403333 +0000 UTC m=+1.273544332 container remove d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elgamal, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:05:47 np0005532048 systemd[1]: libpod-conmon-d9d718ef0634b07243d9cdf03837490ad1822358edb65d44810c2fb71bc521a8.scope: Deactivated successfully.
Nov 22 05:05:47 np0005532048 nova_compute[253661]: 2025-11-22 10:05:47.711 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:48 np0005532048 podman[426120]: 2025-11-22 10:05:48.239009479 +0000 UTC m=+0.086209644 container create 034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:05:48 np0005532048 podman[426120]: 2025-11-22 10:05:48.176400547 +0000 UTC m=+0.023600732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:05:48 np0005532048 systemd[1]: Started libpod-conmon-034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd.scope.
Nov 22 05:05:48 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:05:48 np0005532048 podman[426120]: 2025-11-22 10:05:48.393363558 +0000 UTC m=+0.240563723 container init 034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:05:48 np0005532048 podman[426120]: 2025-11-22 10:05:48.39994474 +0000 UTC m=+0.247144905 container start 034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:05:48 np0005532048 dreamy_lederberg[426137]: 167 167
Nov 22 05:05:48 np0005532048 systemd[1]: libpod-034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd.scope: Deactivated successfully.
Nov 22 05:05:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:48.420+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:48 np0005532048 podman[426120]: 2025-11-22 10:05:48.456942963 +0000 UTC m=+0.304143128 container attach 034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:05:48 np0005532048 podman[426120]: 2025-11-22 10:05:48.457815594 +0000 UTC m=+0.305015759 container died 034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 05:05:48 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:48 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9ac25471a14ebb6cb467bf8b2463e6abde7dce624930efd1f14e1a3b123f4a65-merged.mount: Deactivated successfully.
Nov 22 05:05:48 np0005532048 podman[426120]: 2025-11-22 10:05:48.585738134 +0000 UTC m=+0.432938299 container remove 034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 05:05:48 np0005532048 systemd[1]: libpod-conmon-034ac051a13c844f493cd7c0b728238ea6140dc9043398ff91e2f2e52e50b9bd.scope: Deactivated successfully.
Nov 22 05:05:48 np0005532048 podman[426164]: 2025-11-22 10:05:48.74646121 +0000 UTC m=+0.042063217 container create 52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldwasser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 05:05:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:48 np0005532048 systemd[1]: Started libpod-conmon-52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4.scope.
Nov 22 05:05:48 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:05:48 np0005532048 podman[426164]: 2025-11-22 10:05:48.728357824 +0000 UTC m=+0.023959861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:05:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5e063d7a8330406d32db263188736f43d05425c7d68455a38ab054913cbbe0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:05:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5e063d7a8330406d32db263188736f43d05425c7d68455a38ab054913cbbe0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:05:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5e063d7a8330406d32db263188736f43d05425c7d68455a38ab054913cbbe0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:05:48 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5e063d7a8330406d32db263188736f43d05425c7d68455a38ab054913cbbe0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:05:48 np0005532048 podman[426164]: 2025-11-22 10:05:48.842260318 +0000 UTC m=+0.137862325 container init 52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldwasser, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:05:48 np0005532048 podman[426164]: 2025-11-22 10:05:48.852012348 +0000 UTC m=+0.147614355 container start 52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldwasser, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:05:48 np0005532048 podman[426164]: 2025-11-22 10:05:48.857090503 +0000 UTC m=+0.152692540 container attach 52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldwasser, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 05:05:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 925 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:05:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:49.451+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:49 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:49 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 925 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:49 np0005532048 nova_compute[253661]: 2025-11-22 10:05:49.622 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]: {
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:    "0": [
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:        {
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "devices": [
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "/dev/loop3"
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            ],
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "lv_name": "ceph_lv0",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "lv_size": "21470642176",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "name": "ceph_lv0",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "tags": {
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.cluster_name": "ceph",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.crush_device_class": "",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.encrypted": "0",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.osd_id": "0",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.type": "block",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.vdo": "0"
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            },
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "type": "block",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "vg_name": "ceph_vg0"
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:        }
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:    ],
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:    "1": [
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:        {
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "devices": [
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "/dev/loop4"
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            ],
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "lv_name": "ceph_lv1",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "lv_size": "21470642176",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "name": "ceph_lv1",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "tags": {
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.cluster_name": "ceph",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.crush_device_class": "",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.encrypted": "0",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.osd_id": "1",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.type": "block",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.vdo": "0"
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            },
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "type": "block",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "vg_name": "ceph_vg1"
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:        }
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:    ],
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:    "2": [
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:        {
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "devices": [
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "/dev/loop5"
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            ],
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "lv_name": "ceph_lv2",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "lv_size": "21470642176",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "name": "ceph_lv2",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "tags": {
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.cluster_name": "ceph",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.crush_device_class": "",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.encrypted": "0",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.osd_id": "2",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.type": "block",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:                "ceph.vdo": "0"
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            },
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "type": "block",
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:            "vg_name": "ceph_vg2"
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:        }
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]:    ]
Nov 22 05:05:49 np0005532048 adoring_goldwasser[426181]: }
Nov 22 05:05:49 np0005532048 systemd[1]: libpod-52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4.scope: Deactivated successfully.
Nov 22 05:05:49 np0005532048 podman[426164]: 2025-11-22 10:05:49.655913088 +0000 UTC m=+0.951515095 container died 52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 05:05:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay-3d5e063d7a8330406d32db263188736f43d05425c7d68455a38ab054913cbbe0-merged.mount: Deactivated successfully.
Nov 22 05:05:49 np0005532048 podman[426164]: 2025-11-22 10:05:49.710777658 +0000 UTC m=+1.006379665 container remove 52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:05:49 np0005532048 systemd[1]: libpod-conmon-52c170364822bcf276131575c99a0110da7102745ae7c7cb13185aed9540b0e4.scope: Deactivated successfully.
Nov 22 05:05:50 np0005532048 podman[426341]: 2025-11-22 10:05:50.337929957 +0000 UTC m=+0.043995504 container create b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:05:50 np0005532048 systemd[1]: Started libpod-conmon-b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f.scope.
Nov 22 05:05:50 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:05:50 np0005532048 podman[426341]: 2025-11-22 10:05:50.405358487 +0000 UTC m=+0.111424024 container init b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_grothendieck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 05:05:50 np0005532048 podman[426341]: 2025-11-22 10:05:50.412123003 +0000 UTC m=+0.118188550 container start b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:05:50 np0005532048 podman[426341]: 2025-11-22 10:05:50.317209586 +0000 UTC m=+0.023275153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:05:50 np0005532048 podman[426341]: 2025-11-22 10:05:50.41561272 +0000 UTC m=+0.121678267 container attach b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:05:50 np0005532048 lucid_grothendieck[426358]: 167 167
Nov 22 05:05:50 np0005532048 systemd[1]: libpod-b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f.scope: Deactivated successfully.
Nov 22 05:05:50 np0005532048 podman[426341]: 2025-11-22 10:05:50.417768602 +0000 UTC m=+0.123834149 container died b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_grothendieck, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:05:50 np0005532048 systemd[1]: var-lib-containers-storage-overlay-fa0cd8fb525f658d934742423177e912c87bc989137502fea840d1a3a89415a2-merged.mount: Deactivated successfully.
Nov 22 05:05:50 np0005532048 podman[426341]: 2025-11-22 10:05:50.459905489 +0000 UTC m=+0.165971036 container remove b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_grothendieck, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 05:05:50 np0005532048 systemd[1]: libpod-conmon-b5bf5a5f1e2591b3d685b0dd346c24b1ba44c3bee537368183bc2dc4c088041f.scope: Deactivated successfully.
Nov 22 05:05:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:50.478+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:50 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:50 np0005532048 podman[426381]: 2025-11-22 10:05:50.650976063 +0000 UTC m=+0.044649130 container create a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:05:50 np0005532048 systemd[1]: Started libpod-conmon-a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd.scope.
Nov 22 05:05:50 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:05:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c185d422e1d9711875ba022fe2dcbeb660e92d3fc9ffcdb70532441a569526ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:05:50 np0005532048 podman[426381]: 2025-11-22 10:05:50.631458973 +0000 UTC m=+0.025132060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:05:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c185d422e1d9711875ba022fe2dcbeb660e92d3fc9ffcdb70532441a569526ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:05:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c185d422e1d9711875ba022fe2dcbeb660e92d3fc9ffcdb70532441a569526ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:05:50 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c185d422e1d9711875ba022fe2dcbeb660e92d3fc9ffcdb70532441a569526ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:05:50 np0005532048 podman[426381]: 2025-11-22 10:05:50.740430805 +0000 UTC m=+0.134103902 container init a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:05:50 np0005532048 podman[426381]: 2025-11-22 10:05:50.750743299 +0000 UTC m=+0.144416366 container start a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 05:05:50 np0005532048 podman[426381]: 2025-11-22 10:05:50.754996644 +0000 UTC m=+0.148669731 container attach a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 05:05:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:51.480+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:51 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]: {
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:        "osd_id": 1,
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:        "type": "bluestore"
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:    },
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:        "osd_id": 0,
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:        "type": "bluestore"
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:    },
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:        "osd_id": 2,
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:        "type": "bluestore"
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]:    }
Nov 22 05:05:51 np0005532048 compassionate_kepler[426397]: }
Nov 22 05:05:51 np0005532048 systemd[1]: libpod-a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd.scope: Deactivated successfully.
Nov 22 05:05:51 np0005532048 systemd[1]: libpod-a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd.scope: Consumed 1.099s CPU time.
Nov 22 05:05:51 np0005532048 podman[426381]: 2025-11-22 10:05:51.844199517 +0000 UTC m=+1.237872584 container died a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 22 05:05:51 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c185d422e1d9711875ba022fe2dcbeb660e92d3fc9ffcdb70532441a569526ec-merged.mount: Deactivated successfully.
Nov 22 05:05:51 np0005532048 podman[426381]: 2025-11-22 10:05:51.904249135 +0000 UTC m=+1.297922202 container remove a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:05:51 np0005532048 systemd[1]: libpod-conmon-a6e6e119c59fd2ac22e75908cd167270230738739a9d070e248051bc257eb0dd.scope: Deactivated successfully.
Nov 22 05:05:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:05:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:05:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:05:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:05:51 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev b6f93535-4bbe-438c-89bc-616cf734242e does not exist
Nov 22 05:05:51 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 12555245-e955-4221-9c17-2549d19ac16a does not exist
Nov 22 05:05:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:05:52
Nov 22 05:05:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:05:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:05:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'vms', 'default.rgw.control', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes']
Nov 22 05:05:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:05:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:52.499+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:52 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:52 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:05:52 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:05:52 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:52 np0005532048 nova_compute[253661]: 2025-11-22 10:05:52.714 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:05:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:05:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:05:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:05:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:05:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:05:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:53.522+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:53 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 930 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:05:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:54.508+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:54 np0005532048 nova_compute[253661]: 2025-11-22 10:05:54.625 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:54 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 930 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:54 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:55.548+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:05:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:05:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:05:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:05:55 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:05:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:56.507+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:56 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:56 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:57.487+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:57 np0005532048 nova_compute[253661]: 2025-11-22 10:05:57.718 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:57 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:58.489+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:05:58 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:05:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:05:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:05:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 934 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:05:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:05:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:05:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:05:59.449+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:05:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:05:59 np0005532048 nova_compute[253661]: 2025-11-22 10:05:59.627 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:05:59 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 934 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:05:59 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:00.464+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:00 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:01.464+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:01 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:02.490+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:02 np0005532048 nova_compute[253661]: 2025-11-22 10:06:02.721 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:06:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:02 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:06:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:06:03 np0005532048 nova_compute[253661]: 2025-11-22 10:06:03.255 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:06:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:03.470+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:03 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:04 np0005532048 nova_compute[253661]: 2025-11-22 10:06:04.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 940 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.346220) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805964346264, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 753, "num_deletes": 259, "total_data_size": 733849, "memory_usage": 749176, "flush_reason": "Manual Compaction"}
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805964353743, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 712199, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67655, "largest_seqno": 68407, "table_properties": {"data_size": 708598, "index_size": 1316, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9397, "raw_average_key_size": 19, "raw_value_size": 700787, "raw_average_value_size": 1463, "num_data_blocks": 58, "num_entries": 479, "num_filter_entries": 479, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805924, "oldest_key_time": 1763805924, "file_creation_time": 1763805964, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 7559 microseconds, and 2843 cpu microseconds.
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.353777) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 712199 bytes OK
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.353795) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.356419) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.356431) EVENT_LOG_v1 {"time_micros": 1763805964356426, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.356447) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 729879, prev total WAL file size 729879, number of live WAL files 2.
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.356794) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303130' seq:72057594037927935, type:22 .. '6C6F676D0033323635' seq:0, type:0; will stop at (end)
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(695KB)], [158(8667KB)]
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805964356816, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 9587699, "oldest_snapshot_seqno": -1}
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 9278 keys, 9438621 bytes, temperature: kUnknown
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805964420583, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 9438621, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9383045, "index_size": 31271, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23237, "raw_key_size": 246313, "raw_average_key_size": 26, "raw_value_size": 9223423, "raw_average_value_size": 994, "num_data_blocks": 1186, "num_entries": 9278, "num_filter_entries": 9278, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763805964, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.420953) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 9438621 bytes
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.422875) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.9 rd, 147.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.5 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(26.7) write-amplify(13.3) OK, records in: 9807, records dropped: 529 output_compression: NoCompression
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.422891) EVENT_LOG_v1 {"time_micros": 1763805964422883, "job": 98, "event": "compaction_finished", "compaction_time_micros": 63969, "compaction_time_cpu_micros": 25808, "output_level": 6, "num_output_files": 1, "total_output_size": 9438621, "num_input_records": 9807, "num_output_records": 9278, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805964423487, "job": 98, "event": "table_file_deletion", "file_number": 160}
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763805964425569, "job": 98, "event": "table_file_deletion", "file_number": 158}
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.356749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.425731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.425741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.425743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.425745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:06:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:04.425747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:06:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:04.467+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:04 np0005532048 nova_compute[253661]: 2025-11-22 10:06:04.629 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:06:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:05 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 940 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:05 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:05.456+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:06 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:06 np0005532048 podman[426496]: 2025-11-22 10:06:06.372854826 +0000 UTC m=+0.064376136 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 22 05:06:06 np0005532048 podman[426497]: 2025-11-22 10:06:06.386520653 +0000 UTC m=+0.077301984 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd)
Nov 22 05:06:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:06.445+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:07 np0005532048 nova_compute[253661]: 2025-11-22 10:06:07.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:06:07 np0005532048 nova_compute[253661]: 2025-11-22 10:06:07.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:06:07 np0005532048 nova_compute[253661]: 2025-11-22 10:06:07.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:06:07 np0005532048 nova_compute[253661]: 2025-11-22 10:06:07.239 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:06:07 np0005532048 nova_compute[253661]: 2025-11-22 10:06:07.240 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:06:07 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:07.405+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:07 np0005532048 nova_compute[253661]: 2025-11-22 10:06:07.726 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:06:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:08.365+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:08 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 945 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:06:09 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:09 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 945 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:09.387+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:09 np0005532048 podman[426532]: 2025-11-22 10:06:09.429743897 +0000 UTC m=+0.119987665 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 05:06:09 np0005532048 nova_compute[253661]: 2025-11-22 10:06:09.631 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:06:10 np0005532048 nova_compute[253661]: 2025-11-22 10:06:10.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:06:10 np0005532048 nova_compute[253661]: 2025-11-22 10:06:10.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:06:10 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:10.407+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:11.382+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:11 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:12 np0005532048 nova_compute[253661]: 2025-11-22 10:06:12.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:06:12 np0005532048 nova_compute[253661]: 2025-11-22 10:06:12.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:06:12 np0005532048 nova_compute[253661]: 2025-11-22 10:06:12.251 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:06:12 np0005532048 nova_compute[253661]: 2025-11-22 10:06:12.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:06:12 np0005532048 nova_compute[253661]: 2025-11-22 10:06:12.252 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:06:12 np0005532048 nova_compute[253661]: 2025-11-22 10:06:12.252 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:06:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:12.404+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:12 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:06:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/59297485' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:06:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:06:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/59297485' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:06:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:06:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2952283727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:06:12 np0005532048 nova_compute[253661]: 2025-11-22 10:06:12.729 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:06:12 np0005532048 nova_compute[253661]: 2025-11-22 10:06:12.747 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:06:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:12 np0005532048 nova_compute[253661]: 2025-11-22 10:06:12.887 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:06:12 np0005532048 nova_compute[253661]: 2025-11-22 10:06:12.889 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3547MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:06:12 np0005532048 nova_compute[253661]: 2025-11-22 10:06:12.890 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:06:12 np0005532048 nova_compute[253661]: 2025-11-22 10:06:12.890 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:06:13 np0005532048 nova_compute[253661]: 2025-11-22 10:06:13.090 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:06:13 np0005532048 nova_compute[253661]: 2025-11-22 10:06:13.091 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:06:13 np0005532048 nova_compute[253661]: 2025-11-22 10:06:13.109 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:06:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:13.415+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:13 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:06:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3157286242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:06:13 np0005532048 nova_compute[253661]: 2025-11-22 10:06:13.569 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:06:13 np0005532048 nova_compute[253661]: 2025-11-22 10:06:13.574 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:06:13 np0005532048 nova_compute[253661]: 2025-11-22 10:06:13.587 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:06:13 np0005532048 nova_compute[253661]: 2025-11-22 10:06:13.589 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:06:13 np0005532048 nova_compute[253661]: 2025-11-22 10:06:13.589 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:06:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 950 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:06:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:14.419+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:14 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:14 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 950 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:14 np0005532048 nova_compute[253661]: 2025-11-22 10:06:14.589 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:06:14 np0005532048 nova_compute[253661]: 2025-11-22 10:06:14.590 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:06:14 np0005532048 nova_compute[253661]: 2025-11-22 10:06:14.634 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:06:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:15 np0005532048 nova_compute[253661]: 2025-11-22 10:06:15.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:06:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:15.371+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:15 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:16.325+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:16 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:17.283+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:17 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:17 np0005532048 nova_compute[253661]: 2025-11-22 10:06:17.769 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:06:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:18.275+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:18 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:19.258+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 955 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:06:19 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:19 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 955 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:19 np0005532048 nova_compute[253661]: 2025-11-22 10:06:19.636 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:06:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:20.274+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:20 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:21.240+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:21 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:22.198+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:22 np0005532048 nova_compute[253661]: 2025-11-22 10:06:22.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:06:22 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:22 np0005532048 nova_compute[253661]: 2025-11-22 10:06:22.774 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:06:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:06:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:06:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:06:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:06:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:06:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:06:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:23.190+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:23 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:24.174+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 960 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:06:24 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:24 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 960 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:24 np0005532048 nova_compute[253661]: 2025-11-22 10:06:24.672 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:06:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:25.214+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:25 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:25 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:26.250+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:26 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:27.249+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:27 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:27 np0005532048 nova_compute[253661]: 2025-11-22 10:06:27.777 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:06:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:06:28.014 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:06:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:06:28.015 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:06:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:06:28.015 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:06:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:28.285+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:28 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:29.296+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 965 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:06:29 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:29 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 965 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:29 np0005532048 nova_compute[253661]: 2025-11-22 10:06:29.675 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:06:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:30.261+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:30 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:31.222+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:31 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:32.236+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:32 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:32 np0005532048 nova_compute[253661]: 2025-11-22 10:06:32.817 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:06:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:33.208+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:33 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:34.217+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 970 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:06:34 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:34 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 970 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:34 np0005532048 nova_compute[253661]: 2025-11-22 10:06:34.680 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:06:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:35.201+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:35 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:36.213+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:36 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:37.226+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:37 np0005532048 podman[426603]: 2025-11-22 10:06:37.380150913 +0000 UTC m=+0.067315828 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 22 05:06:37 np0005532048 podman[426604]: 2025-11-22 10:06:37.429887088 +0000 UTC m=+0.107759434 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:06:37 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:37 np0005532048 nova_compute[253661]: 2025-11-22 10:06:37.820 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:06:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:38.252+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:38 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:39.262+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 975 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:06:39 np0005532048 nova_compute[253661]: 2025-11-22 10:06:39.682 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:06:39 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:39 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 975 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:40.223+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:40 np0005532048 podman[426642]: 2025-11-22 10:06:40.444664332 +0000 UTC m=+0.128103965 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:06:40 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:41.241+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:41 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:42.215+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:42 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:42 np0005532048 nova_compute[253661]: 2025-11-22 10:06:42.855 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:06:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:43.199+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:43 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:44.184+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 980 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:06:44 np0005532048 nova_compute[253661]: 2025-11-22 10:06:44.685 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:06:44 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:44 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 980 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:45.231+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.771373) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806005771432, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 727, "num_deletes": 251, "total_data_size": 669735, "memory_usage": 683176, "flush_reason": "Manual Compaction"}
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806005781386, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 658992, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68408, "largest_seqno": 69134, "table_properties": {"data_size": 655471, "index_size": 1236, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9194, "raw_average_key_size": 19, "raw_value_size": 648007, "raw_average_value_size": 1405, "num_data_blocks": 55, "num_entries": 461, "num_filter_entries": 461, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763805964, "oldest_key_time": 1763805964, "file_creation_time": 1763806005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 10070 microseconds, and 3801 cpu microseconds.
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.781436) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 658992 bytes OK
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.781463) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.783792) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.783864) EVENT_LOG_v1 {"time_micros": 1763806005783850, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.783900) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 665907, prev total WAL file size 665907, number of live WAL files 2.
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.784700) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(643KB)], [161(9217KB)]
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806005784739, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 10097613, "oldest_snapshot_seqno": -1}
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 9229 keys, 8678948 bytes, temperature: kUnknown
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806005849076, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 8678948, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8624322, "index_size": 30449, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23109, "raw_key_size": 246264, "raw_average_key_size": 26, "raw_value_size": 8466086, "raw_average_value_size": 917, "num_data_blocks": 1145, "num_entries": 9229, "num_filter_entries": 9229, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.849621) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 8678948 bytes
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.851035) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.6 rd, 134.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.0 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(28.5) write-amplify(13.2) OK, records in: 9739, records dropped: 510 output_compression: NoCompression
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.851064) EVENT_LOG_v1 {"time_micros": 1763806005851050, "job": 100, "event": "compaction_finished", "compaction_time_micros": 64498, "compaction_time_cpu_micros": 25906, "output_level": 6, "num_output_files": 1, "total_output_size": 8678948, "num_input_records": 9739, "num_output_records": 9229, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806005851387, "job": 100, "event": "table_file_deletion", "file_number": 163}
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806005853635, "job": 100, "event": "table_file_deletion", "file_number": 161}
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.784606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.853797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.853806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.853810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.853812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:06:45 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:06:45.853815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:06:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:46.279+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:46 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:47.291+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:47 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:47 np0005532048 nova_compute[253661]: 2025-11-22 10:06:47.858 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:06:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:48.289+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:48 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:49.304+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 985 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:06:49 np0005532048 nova_compute[253661]: 2025-11-22 10:06:49.686 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:06:49 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:49 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 985 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:50.350+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:50 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:51.382+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:51 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:06:52
Nov 22 05:06:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:06:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:06:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'images', 'vms']
Nov 22 05:06:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:06:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:52.350+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:06:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:06:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:06:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:06:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:06:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:06:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:52 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:52 np0005532048 nova_compute[253661]: 2025-11-22 10:06:52.861 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:06:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:06:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:06:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:06:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:06:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:06:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:06:52 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ba4af6e4-fcb0-433a-91cc-f9e723842f42 does not exist
Nov 22 05:06:52 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c0bf4a41-8b64-4aba-b1bc-ed34d2ef7344 does not exist
Nov 22 05:06:52 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 698c475a-5774-4c7b-a865-2823cb7cfd46 does not exist
Nov 22 05:06:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:06:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:06:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:06:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:06:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:06:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:06:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:53.309+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:53 np0005532048 podman[426938]: 2025-11-22 10:06:53.52142981 +0000 UTC m=+0.044465476 container create 28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 05:06:53 np0005532048 systemd[1]: Started libpod-conmon-28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4.scope.
Nov 22 05:06:53 np0005532048 podman[426938]: 2025-11-22 10:06:53.502338059 +0000 UTC m=+0.025373745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:06:53 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:06:53 np0005532048 podman[426938]: 2025-11-22 10:06:53.633916039 +0000 UTC m=+0.156951735 container init 28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:06:53 np0005532048 podman[426938]: 2025-11-22 10:06:53.646840386 +0000 UTC m=+0.169876052 container start 28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 05:06:53 np0005532048 podman[426938]: 2025-11-22 10:06:53.650590589 +0000 UTC m=+0.173626285 container attach 28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:06:53 np0005532048 xenodochial_kilby[426954]: 167 167
Nov 22 05:06:53 np0005532048 systemd[1]: libpod-28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4.scope: Deactivated successfully.
Nov 22 05:06:53 np0005532048 podman[426938]: 2025-11-22 10:06:53.656016963 +0000 UTC m=+0.179052659 container died 28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 05:06:53 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2e39e854f3cdf55342d8b2ac8f14805e7fe864c9172e63bbb91c26c5dab24c4d-merged.mount: Deactivated successfully.
Nov 22 05:06:53 np0005532048 podman[426938]: 2025-11-22 10:06:53.706995267 +0000 UTC m=+0.230030933 container remove 28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kilby, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 05:06:53 np0005532048 systemd[1]: libpod-conmon-28a85d3aa1b0074a5429d8c1a6bd0959570e75bb38cf8b19b5869f53bf05daa4.scope: Deactivated successfully.
Nov 22 05:06:53 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:06:53 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:06:53 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:06:53 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:53 np0005532048 podman[426978]: 2025-11-22 10:06:53.884762473 +0000 UTC m=+0.046172227 container create 6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_keller, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:06:53 np0005532048 systemd[1]: Started libpod-conmon-6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979.scope.
Nov 22 05:06:53 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:06:53 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240b1fc59f4f9dc2e787e7f8cf97b9f850dd74a63738d68bf9c693abf0b122c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:06:53 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240b1fc59f4f9dc2e787e7f8cf97b9f850dd74a63738d68bf9c693abf0b122c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:06:53 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240b1fc59f4f9dc2e787e7f8cf97b9f850dd74a63738d68bf9c693abf0b122c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:06:53 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240b1fc59f4f9dc2e787e7f8cf97b9f850dd74a63738d68bf9c693abf0b122c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:06:53 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240b1fc59f4f9dc2e787e7f8cf97b9f850dd74a63738d68bf9c693abf0b122c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:06:53 np0005532048 podman[426978]: 2025-11-22 10:06:53.864858594 +0000 UTC m=+0.026268518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:06:53 np0005532048 podman[426978]: 2025-11-22 10:06:53.964391804 +0000 UTC m=+0.125801608 container init 6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:06:53 np0005532048 podman[426978]: 2025-11-22 10:06:53.97198149 +0000 UTC m=+0.133391244 container start 6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 05:06:53 np0005532048 podman[426978]: 2025-11-22 10:06:53.975470947 +0000 UTC m=+0.136880771 container attach 6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 05:06:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:54.335+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 990 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:06:54 np0005532048 nova_compute[253661]: 2025-11-22 10:06:54.688 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:06:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:54 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:54 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 990 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:55 np0005532048 practical_keller[426994]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:06:55 np0005532048 practical_keller[426994]: --> relative data size: 1.0
Nov 22 05:06:55 np0005532048 practical_keller[426994]: --> All data devices are unavailable
Nov 22 05:06:55 np0005532048 systemd[1]: libpod-6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979.scope: Deactivated successfully.
Nov 22 05:06:55 np0005532048 systemd[1]: libpod-6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979.scope: Consumed 1.033s CPU time.
Nov 22 05:06:55 np0005532048 podman[426978]: 2025-11-22 10:06:55.067915329 +0000 UTC m=+1.229325093 container died 6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_keller, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 05:06:55 np0005532048 systemd[1]: var-lib-containers-storage-overlay-240b1fc59f4f9dc2e787e7f8cf97b9f850dd74a63738d68bf9c693abf0b122c7-merged.mount: Deactivated successfully.
Nov 22 05:06:55 np0005532048 podman[426978]: 2025-11-22 10:06:55.127590738 +0000 UTC m=+1.289000492 container remove 6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:06:55 np0005532048 systemd[1]: libpod-conmon-6d1cc712e9bd08c8ab5e5f92e88cb025dacaace5ca48b41acd02ed8a3c737979.scope: Deactivated successfully.
Nov 22 05:06:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:55.328+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:56 np0005532048 podman[427179]: 2025-11-22 10:06:55.728773797 +0000 UTC m=+0.025103579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:06:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:06:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:06:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:56.324+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:06:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:06:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:06:56 np0005532048 podman[427179]: 2025-11-22 10:06:56.552425452 +0000 UTC m=+0.848755224 container create 57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:06:56 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:56 np0005532048 systemd[1]: Started libpod-conmon-57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851.scope.
Nov 22 05:06:56 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:06:56 np0005532048 podman[427179]: 2025-11-22 10:06:56.635726733 +0000 UTC m=+0.932056555 container init 57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 05:06:56 np0005532048 podman[427179]: 2025-11-22 10:06:56.64333602 +0000 UTC m=+0.939665782 container start 57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 05:06:56 np0005532048 podman[427179]: 2025-11-22 10:06:56.646072798 +0000 UTC m=+0.942402630 container attach 57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 05:06:56 np0005532048 sweet_archimedes[427195]: 167 167
Nov 22 05:06:56 np0005532048 systemd[1]: libpod-57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851.scope: Deactivated successfully.
Nov 22 05:06:56 np0005532048 podman[427179]: 2025-11-22 10:06:56.647974735 +0000 UTC m=+0.944304507 container died 57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:06:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8034d4d72cd866a75abb1ffe6d72c290c55bde232559887d957310bf88f36539-merged.mount: Deactivated successfully.
Nov 22 05:06:56 np0005532048 podman[427179]: 2025-11-22 10:06:56.687842386 +0000 UTC m=+0.984172148 container remove 57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_archimedes, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:06:56 np0005532048 systemd[1]: libpod-conmon-57f7c3c7df048afec8eb1036ac3d31935bcf20a5dedfea119274edd8a7508851.scope: Deactivated successfully.
Nov 22 05:06:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:56 np0005532048 podman[427218]: 2025-11-22 10:06:56.878590261 +0000 UTC m=+0.050582006 container create f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tesla, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 05:06:56 np0005532048 systemd[1]: Started libpod-conmon-f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e.scope.
Nov 22 05:06:56 np0005532048 podman[427218]: 2025-11-22 10:06:56.859073461 +0000 UTC m=+0.031065206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:06:56 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:06:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dfabf3212b835717b84471388fe0715edc8452da6ee3a5dee74a1e1f5b173b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:06:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dfabf3212b835717b84471388fe0715edc8452da6ee3a5dee74a1e1f5b173b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:06:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dfabf3212b835717b84471388fe0715edc8452da6ee3a5dee74a1e1f5b173b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:06:56 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dfabf3212b835717b84471388fe0715edc8452da6ee3a5dee74a1e1f5b173b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:06:56 np0005532048 podman[427218]: 2025-11-22 10:06:56.977265561 +0000 UTC m=+0.149257306 container init f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tesla, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:06:56 np0005532048 podman[427218]: 2025-11-22 10:06:56.983996767 +0000 UTC m=+0.155988502 container start f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 05:06:56 np0005532048 podman[427218]: 2025-11-22 10:06:56.988420395 +0000 UTC m=+0.160412110 container attach f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:06:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:57.339+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:57 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:57 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]: {
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:    "0": [
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:        {
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "devices": [
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "/dev/loop3"
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            ],
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "lv_name": "ceph_lv0",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "lv_size": "21470642176",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "name": "ceph_lv0",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "tags": {
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.cluster_name": "ceph",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.crush_device_class": "",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.encrypted": "0",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.osd_id": "0",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.type": "block",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.vdo": "0"
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            },
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "type": "block",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "vg_name": "ceph_vg0"
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:        }
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:    ],
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:    "1": [
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:        {
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "devices": [
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "/dev/loop4"
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            ],
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "lv_name": "ceph_lv1",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "lv_size": "21470642176",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "name": "ceph_lv1",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "tags": {
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.cluster_name": "ceph",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.crush_device_class": "",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.encrypted": "0",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.osd_id": "1",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.type": "block",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.vdo": "0"
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            },
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "type": "block",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "vg_name": "ceph_vg1"
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:        }
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:    ],
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:    "2": [
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:        {
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "devices": [
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "/dev/loop5"
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            ],
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "lv_name": "ceph_lv2",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "lv_size": "21470642176",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "name": "ceph_lv2",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "tags": {
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.cluster_name": "ceph",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.crush_device_class": "",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.encrypted": "0",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.osd_id": "2",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.type": "block",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:                "ceph.vdo": "0"
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            },
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "type": "block",
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:            "vg_name": "ceph_vg2"
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:        }
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]:    ]
Nov 22 05:06:57 np0005532048 nostalgic_tesla[427235]: }
Nov 22 05:06:57 np0005532048 systemd[1]: libpod-f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e.scope: Deactivated successfully.
Nov 22 05:06:57 np0005532048 podman[427244]: 2025-11-22 10:06:57.855628133 +0000 UTC m=+0.032508201 container died f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:06:57 np0005532048 nova_compute[253661]: 2025-11-22 10:06:57.864 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:06:57 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9dfabf3212b835717b84471388fe0715edc8452da6ee3a5dee74a1e1f5b173b7-merged.mount: Deactivated successfully.
Nov 22 05:06:57 np0005532048 podman[427244]: 2025-11-22 10:06:57.925429561 +0000 UTC m=+0.102309619 container remove f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tesla, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:06:57 np0005532048 systemd[1]: libpod-conmon-f9f634ba5bef86350a01d1b6c169a78a3dd70f99d1ee0836a6f3f4c3d366ea9e.scope: Deactivated successfully.
Nov 22 05:06:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:58.353+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:58 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:58 np0005532048 podman[427400]: 2025-11-22 10:06:58.692707749 +0000 UTC m=+0.052827371 container create 6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:06:58 np0005532048 systemd[1]: Started libpod-conmon-6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3.scope.
Nov 22 05:06:58 np0005532048 podman[427400]: 2025-11-22 10:06:58.669829457 +0000 UTC m=+0.029949119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:06:58 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:06:58 np0005532048 podman[427400]: 2025-11-22 10:06:58.797234233 +0000 UTC m=+0.157353905 container init 6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_benz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 05:06:58 np0005532048 podman[427400]: 2025-11-22 10:06:58.807935766 +0000 UTC m=+0.168055428 container start 6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 22 05:06:58 np0005532048 podman[427400]: 2025-11-22 10:06:58.812301473 +0000 UTC m=+0.172421135 container attach 6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 05:06:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:06:58 np0005532048 awesome_benz[427416]: 167 167
Nov 22 05:06:58 np0005532048 systemd[1]: libpod-6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3.scope: Deactivated successfully.
Nov 22 05:06:58 np0005532048 podman[427400]: 2025-11-22 10:06:58.815173444 +0000 UTC m=+0.175293106 container died 6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:06:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ae0231d54be356b52de63b794f819bac75da8d75513dbd49852d38405667b6b9-merged.mount: Deactivated successfully.
Nov 22 05:06:58 np0005532048 podman[427400]: 2025-11-22 10:06:58.867203935 +0000 UTC m=+0.227323577 container remove 6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 05:06:58 np0005532048 systemd[1]: libpod-conmon-6a8e5ce87ceee152a4d93b093956ff86f744a3acac1294b40c03b6010646e1c3.scope: Deactivated successfully.
Nov 22 05:06:59 np0005532048 podman[427439]: 2025-11-22 10:06:59.070349946 +0000 UTC m=+0.046070315 container create 87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:06:59 np0005532048 systemd[1]: Started libpod-conmon-87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6.scope.
Nov 22 05:06:59 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:06:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eeae97a8b57069f7faae0152ff33b01fb16a508c9bf2b8852f04dc64941feaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:06:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eeae97a8b57069f7faae0152ff33b01fb16a508c9bf2b8852f04dc64941feaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:06:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eeae97a8b57069f7faae0152ff33b01fb16a508c9bf2b8852f04dc64941feaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:06:59 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eeae97a8b57069f7faae0152ff33b01fb16a508c9bf2b8852f04dc64941feaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:06:59 np0005532048 podman[427439]: 2025-11-22 10:06:59.052797693 +0000 UTC m=+0.028518082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:06:59 np0005532048 podman[427439]: 2025-11-22 10:06:59.167143059 +0000 UTC m=+0.142863458 container init 87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_edison, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 05:06:59 np0005532048 podman[427439]: 2025-11-22 10:06:59.174265884 +0000 UTC m=+0.149986253 container start 87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_edison, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:06:59 np0005532048 podman[427439]: 2025-11-22 10:06:59.177625586 +0000 UTC m=+0.153345955 container attach 87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 05:06:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 995 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:06:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:06:59.368+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:06:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:06:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:06:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:06:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:06:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:06:59 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 995 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:06:59 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:06:59 np0005532048 nova_compute[253661]: 2025-11-22 10:06:59.691 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:00 np0005532048 infallible_edison[427456]: {
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:        "osd_id": 1,
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:        "type": "bluestore"
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:    },
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:        "osd_id": 0,
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:        "type": "bluestore"
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:    },
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:        "osd_id": 2,
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:        "type": "bluestore"
Nov 22 05:07:00 np0005532048 infallible_edison[427456]:    }
Nov 22 05:07:00 np0005532048 infallible_edison[427456]: }
Nov 22 05:07:00 np0005532048 systemd[1]: libpod-87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6.scope: Deactivated successfully.
Nov 22 05:07:00 np0005532048 systemd[1]: libpod-87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6.scope: Consumed 1.036s CPU time.
Nov 22 05:07:00 np0005532048 conmon[427456]: conmon 87fa04487078b713604f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6.scope/container/memory.events
Nov 22 05:07:00 np0005532048 podman[427489]: 2025-11-22 10:07:00.250158238 +0000 UTC m=+0.026086532 container died 87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_edison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:07:00 np0005532048 systemd[1]: var-lib-containers-storage-overlay-0eeae97a8b57069f7faae0152ff33b01fb16a508c9bf2b8852f04dc64941feaa-merged.mount: Deactivated successfully.
Nov 22 05:07:00 np0005532048 podman[427489]: 2025-11-22 10:07:00.304263241 +0000 UTC m=+0.080191515 container remove 87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_edison, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:07:00 np0005532048 systemd[1]: libpod-conmon-87fa04487078b713604f7fa4ac3b1779ddfd24d5851272038553b5d30387d5c6.scope: Deactivated successfully.
Nov 22 05:07:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:00.329+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:07:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:07:00 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:07:00 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:07:00 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c539ce19-c914-4a12-89ef-3756d46bdfbc does not exist
Nov 22 05:07:00 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 3df6099a-63e7-4431-9464-f212f6fc73ce does not exist
Nov 22 05:07:00 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:07:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:07:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:01.339+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:01 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:02.378+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:02 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:02 np0005532048 nova_compute[253661]: 2025-11-22 10:07:02.867 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:07:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:07:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:03.360+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:03 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:04 np0005532048 nova_compute[253661]: 2025-11-22 10:07:04.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:07:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:04.359+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 1000 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:07:04 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:04 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 1000 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:04 np0005532048 nova_compute[253661]: 2025-11-22 10:07:04.694 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:05.387+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:05 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:06 np0005532048 nova_compute[253661]: 2025-11-22 10:07:06.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:07:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:06.371+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:06 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:07 np0005532048 nova_compute[253661]: 2025-11-22 10:07:07.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:07:07 np0005532048 nova_compute[253661]: 2025-11-22 10:07:07.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:07:07 np0005532048 nova_compute[253661]: 2025-11-22 10:07:07.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:07:07 np0005532048 nova_compute[253661]: 2025-11-22 10:07:07.239 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:07:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:07.342+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:07 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:07 np0005532048 nova_compute[253661]: 2025-11-22 10:07:07.872 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:08.314+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:08 np0005532048 podman[427555]: 2025-11-22 10:07:08.38411038 +0000 UTC m=+0.065306699 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 05:07:08 np0005532048 podman[427554]: 2025-11-22 10:07:08.392301152 +0000 UTC m=+0.072435124 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 05:07:08 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:07:09 np0005532048 nova_compute[253661]: 2025-11-22 10:07:09.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:07:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:09.360+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 1005 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:07:09 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:09 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 1005 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:09 np0005532048 nova_compute[253661]: 2025-11-22 10:07:09.697 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:10.315+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:10 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:07:11 np0005532048 nova_compute[253661]: 2025-11-22 10:07:11.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:07:11 np0005532048 nova_compute[253661]: 2025-11-22 10:07:11.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:07:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:11.293+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:11 np0005532048 podman[427588]: 2025-11-22 10:07:11.400302228 +0000 UTC m=+0.093184025 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:07:11 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:12.282+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:07:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4067917069' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:07:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:07:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4067917069' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:07:12 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:07:12 np0005532048 nova_compute[253661]: 2025-11-22 10:07:12.913 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:13 np0005532048 nova_compute[253661]: 2025-11-22 10:07:13.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:07:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:13.249+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:13 np0005532048 nova_compute[253661]: 2025-11-22 10:07:13.259 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:07:13 np0005532048 nova_compute[253661]: 2025-11-22 10:07:13.260 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:07:13 np0005532048 nova_compute[253661]: 2025-11-22 10:07:13.260 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:07:13 np0005532048 nova_compute[253661]: 2025-11-22 10:07:13.260 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:07:13 np0005532048 nova_compute[253661]: 2025-11-22 10:07:13.261 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:07:13 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:07:13 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/313661028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:07:13 np0005532048 nova_compute[253661]: 2025-11-22 10:07:13.727 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:07:13 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:13 np0005532048 nova_compute[253661]: 2025-11-22 10:07:13.917 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:07:13 np0005532048 nova_compute[253661]: 2025-11-22 10:07:13.919 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3566MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:07:13 np0005532048 nova_compute[253661]: 2025-11-22 10:07:13.919 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:07:13 np0005532048 nova_compute[253661]: 2025-11-22 10:07:13.919 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:07:13 np0005532048 nova_compute[253661]: 2025-11-22 10:07:13.992 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:07:13 np0005532048 nova_compute[253661]: 2025-11-22 10:07:13.993 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:07:14 np0005532048 nova_compute[253661]: 2025-11-22 10:07:14.054 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 22 05:07:14 np0005532048 nova_compute[253661]: 2025-11-22 10:07:14.152 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 22 05:07:14 np0005532048 nova_compute[253661]: 2025-11-22 10:07:14.153 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 05:07:14 np0005532048 nova_compute[253661]: 2025-11-22 10:07:14.174 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 22 05:07:14 np0005532048 nova_compute[253661]: 2025-11-22 10:07:14.204 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 22 05:07:14 np0005532048 nova_compute[253661]: 2025-11-22 10:07:14.225 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:07:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:14.298+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 1010 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:07:14 np0005532048 nova_compute[253661]: 2025-11-22 10:07:14.698 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:07:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1793474515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:07:14 np0005532048 nova_compute[253661]: 2025-11-22 10:07:14.731 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:07:14 np0005532048 nova_compute[253661]: 2025-11-22 10:07:14.739 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:07:14 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:14 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 1010 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:14 np0005532048 nova_compute[253661]: 2025-11-22 10:07:14.756 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:07:14 np0005532048 nova_compute[253661]: 2025-11-22 10:07:14.758 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:07:14 np0005532048 nova_compute[253661]: 2025-11-22 10:07:14.758 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:07:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:07:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:15.304+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:15 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:16.337+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:16 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:16 np0005532048 nova_compute[253661]: 2025-11-22 10:07:16.759 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:07:16 np0005532048 nova_compute[253661]: 2025-11-22 10:07:16.760 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:07:16 np0005532048 nova_compute[253661]: 2025-11-22 10:07:16.760 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:07:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:07:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:17.349+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:17 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:17 np0005532048 nova_compute[253661]: 2025-11-22 10:07:17.919 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:18.300+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:18 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:07:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:19.340+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 1015 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:07:19 np0005532048 nova_compute[253661]: 2025-11-22 10:07:19.700 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:19 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:19 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 1015 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:20.310+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:20 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:21.295+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:21 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:22.272+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:07:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:07:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:07:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:07:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:07:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:07:22 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:22 np0005532048 nova_compute[253661]: 2025-11-22 10:07:22.920 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:23.250+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:23 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:24.206+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 1020 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:07:24 np0005532048 nova_compute[253661]: 2025-11-22 10:07:24.702 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:24 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:24 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 1020 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:25.172+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:25 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:26.210+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:26 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:27.247+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:27 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:27 np0005532048 nova_compute[253661]: 2025-11-22 10:07:27.950 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:07:28.015 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:07:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:07:28.016 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:07:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:07:28.016 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:07:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:28.262+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:28 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:29.252+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 1025 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:07:29 np0005532048 nova_compute[253661]: 2025-11-22 10:07:29.703 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:29 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:29 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 1025 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:30.272+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:30 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:31.263+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:31 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:32.219+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:32 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:32 np0005532048 nova_compute[253661]: 2025-11-22 10:07:32.953 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:33.200+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:33 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:34.154+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 1030 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:07:34 np0005532048 nova_compute[253661]: 2025-11-22 10:07:34.706 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:34 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:34 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 1030 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:35.107+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:35 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:36.059+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:36 np0005532048 ceph-mon[75021]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'images' : 6 ])
Nov 22 05:07:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:37.061+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:37 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:37 np0005532048 nova_compute[253661]: 2025-11-22 10:07:37.956 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:38.067+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:38 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:39.047+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 6 slow ops, oldest one blocked for 1034 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:07:39 np0005532048 podman[427660]: 2025-11-22 10:07:39.424820092 +0000 UTC m=+0.098712790 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 05:07:39 np0005532048 podman[427661]: 2025-11-22 10:07:39.435568917 +0000 UTC m=+0.103641292 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 22 05:07:39 np0005532048 nova_compute[253661]: 2025-11-22 10:07:39.708 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:39 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:39 np0005532048 ceph-mon[75021]: Health check update: 6 slow ops, oldest one blocked for 1034 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:40.034+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:40 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:41.042+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:41 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:41.999+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:42 np0005532048 podman[427699]: 2025-11-22 10:07:42.437536625 +0000 UTC m=+0.118784205 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller)
Nov 22 05:07:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:42 np0005532048 nova_compute[253661]: 2025-11-22 10:07:42.958 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:42 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:43.011+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:43 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:44.051+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1039 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:07:44 np0005532048 nova_compute[253661]: 2025-11-22 10:07:44.710 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:45 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:45 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1039 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:45.042+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:45.997+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:46 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:46.980+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:47 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:47 np0005532048 nova_compute[253661]: 2025-11-22 10:07:47.962 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:47.995+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:48 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:49.023+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:49 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1044 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:07:49 np0005532048 nova_compute[253661]: 2025-11-22 10:07:49.713 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:50.023+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:50 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:50 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1044 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:51.062+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:51 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:52.063+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:52 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:07:52
Nov 22 05:07:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:07:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:07:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'backups', 'cephfs.cephfs.data', 'volumes', '.rgw.root']
Nov 22 05:07:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:07:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:07:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:07:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:07:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:07:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:07:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:07:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:52 np0005532048 nova_compute[253661]: 2025-11-22 10:07:52.966 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:53 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:53.079+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:54.075+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:54 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1049 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:07:54 np0005532048 nova_compute[253661]: 2025-11-22 10:07:54.715 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:55 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:55 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1049 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:55.104+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:56.061+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:56 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:07:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:07:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:07:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:07:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:07:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:57.078+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:57 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:57 np0005532048 nova_compute[253661]: 2025-11-22 10:07:57.969 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:07:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:58.092+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:58 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:07:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:07:59.049+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:07:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:59 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:07:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1054 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:07:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:07:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:07:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:07:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:07:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:07:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:07:59 np0005532048 nova_compute[253661]: 2025-11-22 10:07:59.718 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:00.028+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:00 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:00 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1054 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:01.065+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:08:01 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 93cf6347-0f18-4eea-b4d0-75699a267ab8 does not exist
Nov 22 05:08:01 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 571ab854-3a11-4b35-a525-8bf221ecb1ac does not exist
Nov 22 05:08:01 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 58ea6602-25b4-4bc5-b42e-76ab98c848e9 does not exist
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:08:01 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:08:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:02.106+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:02 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:08:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:08:02 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:08:02 np0005532048 podman[428120]: 2025-11-22 10:08:02.57374101 +0000 UTC m=+0.048552397 container create f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:08:02 np0005532048 systemd[1]: Started libpod-conmon-f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460.scope.
Nov 22 05:08:02 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:08:02 np0005532048 podman[428120]: 2025-11-22 10:08:02.553515711 +0000 UTC m=+0.028327138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:08:02 np0005532048 podman[428120]: 2025-11-22 10:08:02.664066803 +0000 UTC m=+0.138878250 container init f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:08:02 np0005532048 podman[428120]: 2025-11-22 10:08:02.672816288 +0000 UTC m=+0.147627705 container start f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:08:02 np0005532048 podman[428120]: 2025-11-22 10:08:02.67655821 +0000 UTC m=+0.151369637 container attach f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:08:02 np0005532048 beautiful_bassi[428136]: 167 167
Nov 22 05:08:02 np0005532048 systemd[1]: libpod-f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460.scope: Deactivated successfully.
Nov 22 05:08:02 np0005532048 podman[428120]: 2025-11-22 10:08:02.680667471 +0000 UTC m=+0.155478868 container died f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:08:02 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d086bb52e3fa894aa31699d0f7026a7148b7efd7fa392a88fd78aac367b8a503-merged.mount: Deactivated successfully.
Nov 22 05:08:02 np0005532048 podman[428120]: 2025-11-22 10:08:02.721951628 +0000 UTC m=+0.196763025 container remove f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 05:08:02 np0005532048 systemd[1]: libpod-conmon-f6fd83c28e61cb24e5fce3d345622671128e4f506dd3e63821fca93622265460.scope: Deactivated successfully.
Nov 22 05:08:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:02 np0005532048 podman[428159]: 2025-11-22 10:08:02.892924307 +0000 UTC m=+0.052236478 container create 03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:08:02 np0005532048 systemd[1]: Started libpod-conmon-03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39.scope.
Nov 22 05:08:02 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:08:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfea5ef1b98f983ed37ad8a8cef8bf43101c259eebd8498a2e10737d39547fa6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:08:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfea5ef1b98f983ed37ad8a8cef8bf43101c259eebd8498a2e10737d39547fa6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:08:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfea5ef1b98f983ed37ad8a8cef8bf43101c259eebd8498a2e10737d39547fa6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:08:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfea5ef1b98f983ed37ad8a8cef8bf43101c259eebd8498a2e10737d39547fa6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:08:02 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfea5ef1b98f983ed37ad8a8cef8bf43101c259eebd8498a2e10737d39547fa6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:08:02 np0005532048 podman[428159]: 2025-11-22 10:08:02.871714984 +0000 UTC m=+0.031027225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:08:02 np0005532048 podman[428159]: 2025-11-22 10:08:02.973767996 +0000 UTC m=+0.133080177 container init 03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:08:02 np0005532048 nova_compute[253661]: 2025-11-22 10:08:02.972 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:02 np0005532048 podman[428159]: 2025-11-22 10:08:02.982469881 +0000 UTC m=+0.141782032 container start 03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 05:08:02 np0005532048 podman[428159]: 2025-11-22 10:08:02.986423518 +0000 UTC m=+0.145735749 container attach 03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 05:08:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:03.137+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:03 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:08:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:08:04 np0005532048 determined_jennings[428176]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:08:04 np0005532048 determined_jennings[428176]: --> relative data size: 1.0
Nov 22 05:08:04 np0005532048 determined_jennings[428176]: --> All data devices are unavailable
Nov 22 05:08:04 np0005532048 systemd[1]: libpod-03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39.scope: Deactivated successfully.
Nov 22 05:08:04 np0005532048 podman[428159]: 2025-11-22 10:08:04.100187716 +0000 UTC m=+1.259499877 container died 03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:08:04 np0005532048 systemd[1]: libpod-03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39.scope: Consumed 1.067s CPU time.
Nov 22 05:08:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:04.118+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay-cfea5ef1b98f983ed37ad8a8cef8bf43101c259eebd8498a2e10737d39547fa6-merged.mount: Deactivated successfully.
Nov 22 05:08:04 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:04 np0005532048 podman[428159]: 2025-11-22 10:08:04.173708095 +0000 UTC m=+1.333020266 container remove 03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:08:04 np0005532048 systemd[1]: libpod-conmon-03dc9fae9f490651390a024d3e57e2b66814c4ad5c75c674b65bb38ffe1a2f39.scope: Deactivated successfully.
Nov 22 05:08:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1059 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:08:04 np0005532048 nova_compute[253661]: 2025-11-22 10:08:04.721 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:04 np0005532048 podman[428354]: 2025-11-22 10:08:04.910109093 +0000 UTC m=+0.119599875 container create d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:08:04 np0005532048 podman[428354]: 2025-11-22 10:08:04.81412116 +0000 UTC m=+0.023611942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:08:04 np0005532048 systemd[1]: Started libpod-conmon-d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00.scope.
Nov 22 05:08:05 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:08:05 np0005532048 podman[428354]: 2025-11-22 10:08:05.083737947 +0000 UTC m=+0.293228759 container init d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hofstadter, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:08:05 np0005532048 podman[428354]: 2025-11-22 10:08:05.092406471 +0000 UTC m=+0.301897243 container start d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hofstadter, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:08:05 np0005532048 nostalgic_hofstadter[428370]: 167 167
Nov 22 05:08:05 np0005532048 systemd[1]: libpod-d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00.scope: Deactivated successfully.
Nov 22 05:08:05 np0005532048 podman[428354]: 2025-11-22 10:08:05.098683925 +0000 UTC m=+0.308174727 container attach d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 05:08:05 np0005532048 podman[428354]: 2025-11-22 10:08:05.099529056 +0000 UTC m=+0.309019838 container died d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hofstadter, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:08:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:05.114+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:05 np0005532048 nova_compute[253661]: 2025-11-22 10:08:05.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:08:05 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:05 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1059 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:05 np0005532048 systemd[1]: var-lib-containers-storage-overlay-83f6e18a69fd230aed3786a645b56df2827ec4a33e25b9e83c32e108ba48b150-merged.mount: Deactivated successfully.
Nov 22 05:08:05 np0005532048 podman[428354]: 2025-11-22 10:08:05.444512219 +0000 UTC m=+0.654003001 container remove d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hofstadter, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:08:05 np0005532048 systemd[1]: libpod-conmon-d77654ba6832239181cca0e6486d90fdc379e3983a7f1746484caaf1d10d8b00.scope: Deactivated successfully.
Nov 22 05:08:05 np0005532048 podman[428395]: 2025-11-22 10:08:05.65381891 +0000 UTC m=+0.045055409 container create 66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:08:05 np0005532048 systemd[1]: Started libpod-conmon-66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792.scope.
Nov 22 05:08:05 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:08:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8f511968aab7cb047a1387af4ac281a399989fa1e9e611a11698a1f4636f3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:08:05 np0005532048 podman[428395]: 2025-11-22 10:08:05.636187287 +0000 UTC m=+0.027423806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:08:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8f511968aab7cb047a1387af4ac281a399989fa1e9e611a11698a1f4636f3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:08:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8f511968aab7cb047a1387af4ac281a399989fa1e9e611a11698a1f4636f3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:08:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e8f511968aab7cb047a1387af4ac281a399989fa1e9e611a11698a1f4636f3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:08:05 np0005532048 podman[428395]: 2025-11-22 10:08:05.753976847 +0000 UTC m=+0.145213426 container init 66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:08:05 np0005532048 podman[428395]: 2025-11-22 10:08:05.761087542 +0000 UTC m=+0.152324031 container start 66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:08:05 np0005532048 podman[428395]: 2025-11-22 10:08:05.764499106 +0000 UTC m=+0.155735615 container attach 66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:08:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:06.153+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:06 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:06 np0005532048 hungry_newton[428412]: {
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:    "0": [
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:        {
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "devices": [
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "/dev/loop3"
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            ],
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "lv_name": "ceph_lv0",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "lv_size": "21470642176",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "name": "ceph_lv0",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "tags": {
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.cluster_name": "ceph",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.crush_device_class": "",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.encrypted": "0",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.osd_id": "0",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.type": "block",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.vdo": "0"
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            },
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "type": "block",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "vg_name": "ceph_vg0"
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:        }
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:    ],
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:    "1": [
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:        {
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "devices": [
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "/dev/loop4"
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            ],
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "lv_name": "ceph_lv1",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "lv_size": "21470642176",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "name": "ceph_lv1",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "tags": {
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.cluster_name": "ceph",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.crush_device_class": "",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.encrypted": "0",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.osd_id": "1",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.type": "block",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.vdo": "0"
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            },
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "type": "block",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "vg_name": "ceph_vg1"
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:        }
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:    ],
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:    "2": [
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:        {
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "devices": [
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "/dev/loop5"
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            ],
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "lv_name": "ceph_lv2",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "lv_size": "21470642176",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "name": "ceph_lv2",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "tags": {
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.cluster_name": "ceph",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.crush_device_class": "",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.encrypted": "0",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.osd_id": "2",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.type": "block",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:                "ceph.vdo": "0"
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            },
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "type": "block",
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:            "vg_name": "ceph_vg2"
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:        }
Nov 22 05:08:06 np0005532048 hungry_newton[428412]:    ]
Nov 22 05:08:06 np0005532048 hungry_newton[428412]: }
Nov 22 05:08:06 np0005532048 systemd[1]: libpod-66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792.scope: Deactivated successfully.
Nov 22 05:08:06 np0005532048 conmon[428412]: conmon 66aced40f85fb29456ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792.scope/container/memory.events
Nov 22 05:08:06 np0005532048 podman[428395]: 2025-11-22 10:08:06.600223998 +0000 UTC m=+0.991460477 container died 66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 05:08:06 np0005532048 systemd[1]: var-lib-containers-storage-overlay-3e8f511968aab7cb047a1387af4ac281a399989fa1e9e611a11698a1f4636f3d-merged.mount: Deactivated successfully.
Nov 22 05:08:06 np0005532048 podman[428395]: 2025-11-22 10:08:06.654344741 +0000 UTC m=+1.045581230 container remove 66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 05:08:06 np0005532048 systemd[1]: libpod-conmon-66aced40f85fb29456edf8b461b5cfe55302176d1537c347231b917a91cd3792.scope: Deactivated successfully.
Nov 22 05:08:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:07.122+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:07 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:07 np0005532048 podman[428577]: 2025-11-22 10:08:07.320420658 +0000 UTC m=+0.042551019 container create bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_germain, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 05:08:07 np0005532048 systemd[1]: Started libpod-conmon-bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa.scope.
Nov 22 05:08:07 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:08:07 np0005532048 podman[428577]: 2025-11-22 10:08:07.301078341 +0000 UTC m=+0.023208712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:08:07 np0005532048 podman[428577]: 2025-11-22 10:08:07.397960896 +0000 UTC m=+0.120091257 container init bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:08:07 np0005532048 podman[428577]: 2025-11-22 10:08:07.405467492 +0000 UTC m=+0.127597853 container start bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_germain, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 05:08:07 np0005532048 competent_germain[428593]: 167 167
Nov 22 05:08:07 np0005532048 systemd[1]: libpod-bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa.scope: Deactivated successfully.
Nov 22 05:08:07 np0005532048 podman[428577]: 2025-11-22 10:08:07.429446382 +0000 UTC m=+0.151576763 container attach bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_germain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:08:07 np0005532048 podman[428577]: 2025-11-22 10:08:07.431083752 +0000 UTC m=+0.153214143 container died bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 05:08:07 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9508e82a3920cac17c415321ddc6013bd6205b78b9605fcf3092c2a07eb33d4f-merged.mount: Deactivated successfully.
Nov 22 05:08:07 np0005532048 podman[428577]: 2025-11-22 10:08:07.619958352 +0000 UTC m=+0.342088723 container remove bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_germain, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:08:07 np0005532048 systemd[1]: libpod-conmon-bbfd7c834684528bd3f532425b11e09f545d4340036f674dc7c3a8d33ed3bbaa.scope: Deactivated successfully.
Nov 22 05:08:07 np0005532048 podman[428617]: 2025-11-22 10:08:07.832954845 +0000 UTC m=+0.085062746 container create 0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:08:07 np0005532048 podman[428617]: 2025-11-22 10:08:07.78807476 +0000 UTC m=+0.040182691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:08:07 np0005532048 systemd[1]: Started libpod-conmon-0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818.scope.
Nov 22 05:08:08 np0005532048 nova_compute[253661]: 2025-11-22 10:08:08.029 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:08 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:08:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe79f2822a72f033ba2642b9d161dba17d74ec321293f9fe1f13dc70e2f0064/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:08:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe79f2822a72f033ba2642b9d161dba17d74ec321293f9fe1f13dc70e2f0064/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:08:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe79f2822a72f033ba2642b9d161dba17d74ec321293f9fe1f13dc70e2f0064/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:08:08 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe79f2822a72f033ba2642b9d161dba17d74ec321293f9fe1f13dc70e2f0064/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:08:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:08.085+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:08 np0005532048 podman[428617]: 2025-11-22 10:08:08.111265006 +0000 UTC m=+0.363372987 container init 0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 05:08:08 np0005532048 podman[428617]: 2025-11-22 10:08:08.117981601 +0000 UTC m=+0.370089542 container start 0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:08:08 np0005532048 podman[428617]: 2025-11-22 10:08:08.155973777 +0000 UTC m=+0.408082098 container attach 0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:08:08 np0005532048 nova_compute[253661]: 2025-11-22 10:08:08.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:08:08 np0005532048 nova_compute[253661]: 2025-11-22 10:08:08.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:08:08 np0005532048 nova_compute[253661]: 2025-11-22 10:08:08.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:08:08 np0005532048 nova_compute[253661]: 2025-11-22 10:08:08.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:08:08 np0005532048 nova_compute[253661]: 2025-11-22 10:08:08.243 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:08:08 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:09.114+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:09 np0005532048 loving_mayer[428636]: {
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:        "osd_id": 1,
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:        "type": "bluestore"
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:    },
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:        "osd_id": 0,
Nov 22 05:08:09 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:08:09 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:        "type": "bluestore"
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:    },
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:        "osd_id": 2,
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:        "type": "bluestore"
Nov 22 05:08:09 np0005532048 loving_mayer[428636]:    }
Nov 22 05:08:09 np0005532048 loving_mayer[428636]: }
Nov 22 05:08:09 np0005532048 systemd[1]: libpod-0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818.scope: Deactivated successfully.
Nov 22 05:08:09 np0005532048 systemd[1]: libpod-0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818.scope: Consumed 1.037s CPU time.
Nov 22 05:08:09 np0005532048 conmon[428636]: conmon 0c0c05a573da376701d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818.scope/container/memory.events
Nov 22 05:08:09 np0005532048 podman[428617]: 2025-11-22 10:08:09.161416878 +0000 UTC m=+1.413524799 container died 0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:08:09 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8fe79f2822a72f033ba2642b9d161dba17d74ec321293f9fe1f13dc70e2f0064-merged.mount: Deactivated successfully.
Nov 22 05:08:09 np0005532048 podman[428617]: 2025-11-22 10:08:09.222993613 +0000 UTC m=+1.475101514 container remove 0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:08:09 np0005532048 systemd[1]: libpod-conmon-0c0c05a573da376701d8c1e7898cb1f34a65d90d9aa3d46e359c8232e1b80818.scope: Deactivated successfully.
Nov 22 05:08:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:08:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:08:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:08:09 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:08:09 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 5646cf36-dfaf-4332-a99f-8bf538161c10 does not exist
Nov 22 05:08:09 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 382699f5-e949-4023-98c8-7fa703d9973c does not exist
Nov 22 05:08:09 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:09 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:08:09 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:08:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1065 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:08:09 np0005532048 nova_compute[253661]: 2025-11-22 10:08:09.722 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:10.144+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:10 np0005532048 nova_compute[253661]: 2025-11-22 10:08:10.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:08:10 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:10 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1065 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:10 np0005532048 podman[428733]: 2025-11-22 10:08:10.373181396 +0000 UTC m=+0.066153859 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 05:08:10 np0005532048 podman[428734]: 2025-11-22 10:08:10.381253695 +0000 UTC m=+0.073973032 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=multipathd)
Nov 22 05:08:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:11.137+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:11 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:12.156+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:12 np0005532048 nova_compute[253661]: 2025-11-22 10:08:12.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:08:12 np0005532048 nova_compute[253661]: 2025-11-22 10:08:12.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:08:12 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:08:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1165278847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:08:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:08:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1165278847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:08:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:13 np0005532048 nova_compute[253661]: 2025-11-22 10:08:13.078 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:13.141+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:13 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:13 np0005532048 podman[428771]: 2025-11-22 10:08:13.395337493 +0000 UTC m=+0.088852759 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 05:08:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:14.162+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:14 np0005532048 nova_compute[253661]: 2025-11-22 10:08:14.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:08:14 np0005532048 nova_compute[253661]: 2025-11-22 10:08:14.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:08:14 np0005532048 nova_compute[253661]: 2025-11-22 10:08:14.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:08:14 np0005532048 nova_compute[253661]: 2025-11-22 10:08:14.254 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:08:14 np0005532048 nova_compute[253661]: 2025-11-22 10:08:14.254 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:08:14 np0005532048 nova_compute[253661]: 2025-11-22 10:08:14.255 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:08:14 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1069 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:08:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:08:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2082360209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:08:14 np0005532048 nova_compute[253661]: 2025-11-22 10:08:14.713 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:08:14 np0005532048 nova_compute[253661]: 2025-11-22 10:08:14.725 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:14 np0005532048 nova_compute[253661]: 2025-11-22 10:08:14.885 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:08:14 np0005532048 nova_compute[253661]: 2025-11-22 10:08:14.886 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3548MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:08:14 np0005532048 nova_compute[253661]: 2025-11-22 10:08:14.886 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:08:14 np0005532048 nova_compute[253661]: 2025-11-22 10:08:14.887 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:08:14 np0005532048 nova_compute[253661]: 2025-11-22 10:08:14.958 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:08:14 np0005532048 nova_compute[253661]: 2025-11-22 10:08:14.958 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:08:14 np0005532048 nova_compute[253661]: 2025-11-22 10:08:14.979 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:08:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:15.120+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:15 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:15 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1069 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:08:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3414982244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:08:15 np0005532048 nova_compute[253661]: 2025-11-22 10:08:15.459 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:08:15 np0005532048 nova_compute[253661]: 2025-11-22 10:08:15.467 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:08:15 np0005532048 nova_compute[253661]: 2025-11-22 10:08:15.485 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:08:15 np0005532048 nova_compute[253661]: 2025-11-22 10:08:15.487 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:08:15 np0005532048 nova_compute[253661]: 2025-11-22 10:08:15.487 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:08:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:16.088+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:16 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:16 np0005532048 nova_compute[253661]: 2025-11-22 10:08:16.488 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:08:16 np0005532048 nova_compute[253661]: 2025-11-22 10:08:16.489 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:08:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:17.091+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:17 np0005532048 nova_compute[253661]: 2025-11-22 10:08:17.232 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:08:17 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:18.056+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:18 np0005532048 nova_compute[253661]: 2025-11-22 10:08:18.082 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:18 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:19.015+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1074 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:08:19 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:19 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1074 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:19 np0005532048 nova_compute[253661]: 2025-11-22 10:08:19.729 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:20.037+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:20 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:21.053+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:21 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:22.039+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:22 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:08:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:08:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:08:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:08:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:08:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:08:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:23.017+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:23 np0005532048 nova_compute[253661]: 2025-11-22 10:08:23.086 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:23 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:24.004+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1079 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:08:24 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:24 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1079 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:24 np0005532048 nova_compute[253661]: 2025-11-22 10:08:24.732 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:25.003+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:25 np0005532048 nova_compute[253661]: 2025-11-22 10:08:25.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:08:25 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:26.021+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:26 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:27.052+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:27 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:08:28.016 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:08:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:08:28.016 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:08:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:08:28.016 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:08:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:28.034+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:28 np0005532048 nova_compute[253661]: 2025-11-22 10:08:28.088 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:28 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:29.003+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1084 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:08:29 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:29 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1084 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:29 np0005532048 nova_compute[253661]: 2025-11-22 10:08:29.735 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:29.978+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:30 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:30.949+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:31 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:31.946+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:32 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:32.983+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:33 np0005532048 nova_compute[253661]: 2025-11-22 10:08:33.092 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:33 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:33.962+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1089 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:08:34 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:34 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1089 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:34 np0005532048 nova_compute[253661]: 2025-11-22 10:08:34.784 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:34.921+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:35 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:35.958+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:36 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:36.928+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:37 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:37.902+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:38 np0005532048 nova_compute[253661]: 2025-11-22 10:08:38.096 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:38 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:38.858+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1094 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:08:39 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:39 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1094 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:39 np0005532048 nova_compute[253661]: 2025-11-22 10:08:39.787 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:39.859+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:40 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:40.845+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:41 np0005532048 podman[428843]: 2025-11-22 10:08:41.378213948 +0000 UTC m=+0.067867411 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:08:41 np0005532048 podman[428844]: 2025-11-22 10:08:41.383512529 +0000 UTC m=+0.071997383 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:08:41 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:41.815+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:42 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:42 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:42.772+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:43 np0005532048 nova_compute[253661]: 2025-11-22 10:08:43.134 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:43 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:43.817+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1099 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:08:44 np0005532048 podman[428882]: 2025-11-22 10:08:44.411539179 +0000 UTC m=+0.098551408 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 05:08:44 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:44 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1099 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:44.787+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:44 np0005532048 nova_compute[253661]: 2025-11-22 10:08:44.789 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:45 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:45.779+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:46 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:46.804+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:47 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:47.768+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:48 np0005532048 nova_compute[253661]: 2025-11-22 10:08:48.136 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:48 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:48.760+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1104 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:08:49 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:49 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1104 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:49.737+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:49 np0005532048 nova_compute[253661]: 2025-11-22 10:08:49.792 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:50 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:50.702+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:51 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:51.708+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:08:52
Nov 22 05:08:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:08:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:08:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'default.rgw.log', 'vms', '.rgw.root', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta']
Nov 22 05:08:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:08:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:52.666+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:52 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:08:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:08:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:08:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:08:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:08:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:08:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:53 np0005532048 nova_compute[253661]: 2025-11-22 10:08:53.141 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:53 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:53.703+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1109 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:08:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:54.682+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:54 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:54 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1109 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:54 np0005532048 nova_compute[253661]: 2025-11-22 10:08:54.794 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:55.680+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:55 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:08:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:08:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:08:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:08:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:08:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:56.676+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:56 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:57.689+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:57 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:58 np0005532048 nova_compute[253661]: 2025-11-22 10:08:58.145 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:08:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:58.676+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:08:59 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1114 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:08:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:08:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:08:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:08:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:08:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:08:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:08:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:08:59.641+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:08:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:08:59 np0005532048 nova_compute[253661]: 2025-11-22 10:08:59.828 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:00 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:00 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1114 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:00.675+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:01 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:01.719+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:02 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:02.695+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:03 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:03 np0005532048 nova_compute[253661]: 2025-11-22 10:09:03.193 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:09:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:09:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:03.713+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:04 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1119 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:09:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:04.687+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:04 np0005532048 nova_compute[253661]: 2025-11-22 10:09:04.832 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:05 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:05 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1119 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:05 np0005532048 nova_compute[253661]: 2025-11-22 10:09:05.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:09:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:05.658+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:06 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:06.684+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:07 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:07.704+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:08 np0005532048 nova_compute[253661]: 2025-11-22 10:09:08.196 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:08 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:08.689+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:09 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:09 np0005532048 nova_compute[253661]: 2025-11-22 10:09:09.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:09:09 np0005532048 nova_compute[253661]: 2025-11-22 10:09:09.233 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:09:09 np0005532048 nova_compute[253661]: 2025-11-22 10:09:09.233 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:09:09 np0005532048 nova_compute[253661]: 2025-11-22 10:09:09.253 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:09:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1124 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:09:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:09.711+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:09 np0005532048 nova_compute[253661]: 2025-11-22 10:09:09.832 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:10 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:10 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1124 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:10 np0005532048 nova_compute[253661]: 2025-11-22 10:09:10.243 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:09:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 05:09:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 05:09:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:09:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:09:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:09:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:09:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:09:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:09:10 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 61f14f6d-a484-4e55-80d2-778b2f64ecd3 does not exist
Nov 22 05:09:10 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev ff0933eb-fa76-48aa-94f1-5019fe6d55bb does not exist
Nov 22 05:09:10 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev f1523180-802f-45be-954c-b3925935dfb4 does not exist
Nov 22 05:09:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:09:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:09:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:09:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:09:10 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:09:10 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:09:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:10.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:11 np0005532048 podman[429180]: 2025-11-22 10:09:11.149213423 +0000 UTC m=+0.059403233 container create 34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:09:11 np0005532048 systemd[1]: Started libpod-conmon-34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4.scope.
Nov 22 05:09:11 np0005532048 podman[429180]: 2025-11-22 10:09:11.122828613 +0000 UTC m=+0.033018403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:09:11 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:09:11 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 05:09:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:09:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:09:11 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:09:11 np0005532048 podman[429180]: 2025-11-22 10:09:11.253180981 +0000 UTC m=+0.163370801 container init 34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 05:09:11 np0005532048 podman[429180]: 2025-11-22 10:09:11.268216771 +0000 UTC m=+0.178406551 container start 34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 05:09:11 np0005532048 podman[429180]: 2025-11-22 10:09:11.273126452 +0000 UTC m=+0.183316232 container attach 34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:09:11 np0005532048 great_kepler[429196]: 167 167
Nov 22 05:09:11 np0005532048 systemd[1]: libpod-34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4.scope: Deactivated successfully.
Nov 22 05:09:11 np0005532048 podman[429180]: 2025-11-22 10:09:11.281410976 +0000 UTC m=+0.191600816 container died 34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:09:11 np0005532048 systemd[1]: var-lib-containers-storage-overlay-41979220069ae723e411bdc927a10b4cc53c018b81dc0c4a82599fa3119e4437-merged.mount: Deactivated successfully.
Nov 22 05:09:11 np0005532048 podman[429180]: 2025-11-22 10:09:11.337717362 +0000 UTC m=+0.247907132 container remove 34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 05:09:11 np0005532048 systemd[1]: libpod-conmon-34f9c163622513ea8a192ab637c6a7624190fa2edc4429cee4e49464be671ea4.scope: Deactivated successfully.
Nov 22 05:09:11 np0005532048 podman[429220]: 2025-11-22 10:09:11.528132697 +0000 UTC m=+0.056931822 container create 704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 05:09:11 np0005532048 systemd[1]: Started libpod-conmon-704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7.scope.
Nov 22 05:09:11 np0005532048 podman[429220]: 2025-11-22 10:09:11.503966973 +0000 UTC m=+0.032766188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:09:11 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:09:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c98796c75a4626f9ce08bb854f32157d1b07fe6e9d54ea2152c8adf5f8b1047/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:09:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c98796c75a4626f9ce08bb854f32157d1b07fe6e9d54ea2152c8adf5f8b1047/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:09:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c98796c75a4626f9ce08bb854f32157d1b07fe6e9d54ea2152c8adf5f8b1047/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:09:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c98796c75a4626f9ce08bb854f32157d1b07fe6e9d54ea2152c8adf5f8b1047/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:09:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c98796c75a4626f9ce08bb854f32157d1b07fe6e9d54ea2152c8adf5f8b1047/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:09:11 np0005532048 podman[429220]: 2025-11-22 10:09:11.659237333 +0000 UTC m=+0.188036518 container init 704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 05:09:11 np0005532048 podman[429220]: 2025-11-22 10:09:11.668565163 +0000 UTC m=+0.197364288 container start 704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 05:09:11 np0005532048 podman[429220]: 2025-11-22 10:09:11.672413487 +0000 UTC m=+0.201212642 container attach 704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:09:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:11.678+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:11 np0005532048 podman[429235]: 2025-11-22 10:09:11.682937036 +0000 UTC m=+0.105344823 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:09:11 np0005532048 podman[429236]: 2025-11-22 10:09:11.688589975 +0000 UTC m=+0.109568187 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true)
Nov 22 05:09:12 np0005532048 nova_compute[253661]: 2025-11-22 10:09:12.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:09:12 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:09:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1367630835' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:09:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:09:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1367630835' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:09:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:12.711+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:12 np0005532048 fervent_bassi[429249]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:09:12 np0005532048 fervent_bassi[429249]: --> relative data size: 1.0
Nov 22 05:09:12 np0005532048 fervent_bassi[429249]: --> All data devices are unavailable
Nov 22 05:09:12 np0005532048 systemd[1]: libpod-704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7.scope: Deactivated successfully.
Nov 22 05:09:12 np0005532048 systemd[1]: libpod-704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7.scope: Consumed 1.108s CPU time.
Nov 22 05:09:12 np0005532048 podman[429220]: 2025-11-22 10:09:12.836824071 +0000 UTC m=+1.365623206 container died 704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:09:12 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1c98796c75a4626f9ce08bb854f32157d1b07fe6e9d54ea2152c8adf5f8b1047-merged.mount: Deactivated successfully.
Nov 22 05:09:12 np0005532048 podman[429220]: 2025-11-22 10:09:12.892742086 +0000 UTC m=+1.421541211 container remove 704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:09:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:12 np0005532048 systemd[1]: libpod-conmon-704c68e858ca5eab703953366422a6e5d71eacbedb11f592692ed53a2e2ec0e7.scope: Deactivated successfully.
Nov 22 05:09:13 np0005532048 nova_compute[253661]: 2025-11-22 10:09:13.240 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:13 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:13 np0005532048 podman[429453]: 2025-11-22 10:09:13.654800119 +0000 UTC m=+0.064740394 container create 782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hodgkin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:09:13 np0005532048 systemd[1]: Started libpod-conmon-782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f.scope.
Nov 22 05:09:13 np0005532048 podman[429453]: 2025-11-22 10:09:13.628774869 +0000 UTC m=+0.038715184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:09:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:13.730+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:13 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:09:13 np0005532048 podman[429453]: 2025-11-22 10:09:13.76617225 +0000 UTC m=+0.176112555 container init 782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 05:09:13 np0005532048 podman[429453]: 2025-11-22 10:09:13.780087083 +0000 UTC m=+0.190027378 container start 782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hodgkin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 05:09:13 np0005532048 podman[429453]: 2025-11-22 10:09:13.784556163 +0000 UTC m=+0.194496548 container attach 782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hodgkin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 05:09:13 np0005532048 eager_hodgkin[429469]: 167 167
Nov 22 05:09:13 np0005532048 systemd[1]: libpod-782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f.scope: Deactivated successfully.
Nov 22 05:09:13 np0005532048 podman[429453]: 2025-11-22 10:09:13.787707961 +0000 UTC m=+0.197648266 container died 782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hodgkin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:09:13 np0005532048 systemd[1]: var-lib-containers-storage-overlay-740dfc7813fb39898479be9c89458311a9e8922889762ccd679705726f2d40b4-merged.mount: Deactivated successfully.
Nov 22 05:09:13 np0005532048 podman[429453]: 2025-11-22 10:09:13.845946103 +0000 UTC m=+0.255886398 container remove 782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hodgkin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 22 05:09:13 np0005532048 systemd[1]: libpod-conmon-782808e57af7f55c48c1de141a63ddc0ace0ef9666199ef99617c05e806fad6f.scope: Deactivated successfully.
Nov 22 05:09:14 np0005532048 podman[429495]: 2025-11-22 10:09:14.068484459 +0000 UTC m=+0.051548499 container create 02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:09:14 np0005532048 systemd[1]: Started libpod-conmon-02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471.scope.
Nov 22 05:09:14 np0005532048 podman[429495]: 2025-11-22 10:09:14.046643122 +0000 UTC m=+0.029707152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:09:14 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:09:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244c01c536338b3ce5a494eddd31ac333c84dcc1e6a80b4c0758b3f626580994/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:09:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244c01c536338b3ce5a494eddd31ac333c84dcc1e6a80b4c0758b3f626580994/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:09:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244c01c536338b3ce5a494eddd31ac333c84dcc1e6a80b4c0758b3f626580994/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:09:14 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/244c01c536338b3ce5a494eddd31ac333c84dcc1e6a80b4c0758b3f626580994/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:09:14 np0005532048 podman[429495]: 2025-11-22 10:09:14.185456728 +0000 UTC m=+0.168520838 container init 02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keller, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:09:14 np0005532048 podman[429495]: 2025-11-22 10:09:14.195344771 +0000 UTC m=+0.178408841 container start 02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:09:14 np0005532048 podman[429495]: 2025-11-22 10:09:14.199720269 +0000 UTC m=+0.182784329 container attach 02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 05:09:14 np0005532048 nova_compute[253661]: 2025-11-22 10:09:14.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:09:14 np0005532048 nova_compute[253661]: 2025-11-22 10:09:14.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:09:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1129 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:09:14 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:14 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1129 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:14.709+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:14 np0005532048 nova_compute[253661]: 2025-11-22 10:09:14.833 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]: {
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:    "0": [
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:        {
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "devices": [
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "/dev/loop3"
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            ],
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "lv_name": "ceph_lv0",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "lv_size": "21470642176",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "name": "ceph_lv0",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "tags": {
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.cluster_name": "ceph",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.crush_device_class": "",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.encrypted": "0",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.osd_id": "0",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.type": "block",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.vdo": "0"
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            },
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "type": "block",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "vg_name": "ceph_vg0"
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:        }
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:    ],
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:    "1": [
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:        {
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "devices": [
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "/dev/loop4"
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            ],
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "lv_name": "ceph_lv1",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "lv_size": "21470642176",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "name": "ceph_lv1",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "tags": {
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.cluster_name": "ceph",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.crush_device_class": "",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.encrypted": "0",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.osd_id": "1",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.type": "block",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.vdo": "0"
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            },
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "type": "block",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "vg_name": "ceph_vg1"
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:        }
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:    ],
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:    "2": [
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:        {
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "devices": [
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "/dev/loop5"
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            ],
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "lv_name": "ceph_lv2",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "lv_size": "21470642176",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "name": "ceph_lv2",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "tags": {
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.cluster_name": "ceph",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.crush_device_class": "",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.encrypted": "0",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.osd_id": "2",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.type": "block",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:                "ceph.vdo": "0"
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            },
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "type": "block",
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:            "vg_name": "ceph_vg2"
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:        }
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]:    ]
Nov 22 05:09:15 np0005532048 vibrant_keller[429512]: }
Nov 22 05:09:15 np0005532048 systemd[1]: libpod-02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471.scope: Deactivated successfully.
Nov 22 05:09:15 np0005532048 podman[429495]: 2025-11-22 10:09:15.04423574 +0000 UTC m=+1.027299780 container died 02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:09:15 np0005532048 systemd[1]: var-lib-containers-storage-overlay-244c01c536338b3ce5a494eddd31ac333c84dcc1e6a80b4c0758b3f626580994-merged.mount: Deactivated successfully.
Nov 22 05:09:15 np0005532048 podman[429495]: 2025-11-22 10:09:15.114461798 +0000 UTC m=+1.097525808 container remove 02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:09:15 np0005532048 systemd[1]: libpod-conmon-02159b1feeeb2c77635f232e2154cedef33cb502ec952428f7ef4ec6678d5471.scope: Deactivated successfully.
Nov 22 05:09:15 np0005532048 podman[429522]: 2025-11-22 10:09:15.202030183 +0000 UTC m=+0.123863669 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 05:09:15 np0005532048 nova_compute[253661]: 2025-11-22 10:09:15.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:09:15 np0005532048 nova_compute[253661]: 2025-11-22 10:09:15.252 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:09:15 np0005532048 nova_compute[253661]: 2025-11-22 10:09:15.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:09:15 np0005532048 nova_compute[253661]: 2025-11-22 10:09:15.253 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:09:15 np0005532048 nova_compute[253661]: 2025-11-22 10:09:15.253 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:09:15 np0005532048 nova_compute[253661]: 2025-11-22 10:09:15.254 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:09:15 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:09:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3728628970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:09:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:15.714+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:15 np0005532048 nova_compute[253661]: 2025-11-22 10:09:15.733 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:09:15 np0005532048 podman[429722]: 2025-11-22 10:09:15.929235387 +0000 UTC m=+0.071184173 container create 84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 05:09:15 np0005532048 nova_compute[253661]: 2025-11-22 10:09:15.978 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:09:15 np0005532048 systemd[1]: Started libpod-conmon-84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3.scope.
Nov 22 05:09:15 np0005532048 nova_compute[253661]: 2025-11-22 10:09:15.980 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3493MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:09:15 np0005532048 nova_compute[253661]: 2025-11-22 10:09:15.980 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:09:15 np0005532048 nova_compute[253661]: 2025-11-22 10:09:15.981 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:09:15 np0005532048 podman[429722]: 2025-11-22 10:09:15.900247484 +0000 UTC m=+0.042196360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:09:16 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:09:16 np0005532048 podman[429722]: 2025-11-22 10:09:16.042492104 +0000 UTC m=+0.184440910 container init 84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galileo, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:09:16 np0005532048 podman[429722]: 2025-11-22 10:09:16.051994618 +0000 UTC m=+0.193943444 container start 84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galileo, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 05:09:16 np0005532048 podman[429722]: 2025-11-22 10:09:16.058481337 +0000 UTC m=+0.200430183 container attach 84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galileo, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:09:16 np0005532048 systemd[1]: libpod-84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3.scope: Deactivated successfully.
Nov 22 05:09:16 np0005532048 focused_galileo[429739]: 167 167
Nov 22 05:09:16 np0005532048 conmon[429739]: conmon 84ee42a81af4bf0517fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3.scope/container/memory.events
Nov 22 05:09:16 np0005532048 podman[429722]: 2025-11-22 10:09:16.06301727 +0000 UTC m=+0.204966066 container died 84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 05:09:16 np0005532048 nova_compute[253661]: 2025-11-22 10:09:16.079 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:09:16 np0005532048 nova_compute[253661]: 2025-11-22 10:09:16.081 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:09:16 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8fbd1cdd32785c1f705ec3285e3215f100dc6f4088ff566f6ba43def69a82683-merged.mount: Deactivated successfully.
Nov 22 05:09:16 np0005532048 nova_compute[253661]: 2025-11-22 10:09:16.105 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:09:16 np0005532048 podman[429722]: 2025-11-22 10:09:16.109221547 +0000 UTC m=+0.251170343 container remove 84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galileo, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 22 05:09:16 np0005532048 systemd[1]: libpod-conmon-84ee42a81af4bf0517feeb4566b5fab71f0bb434024ddb68b1c741112869d4c3.scope: Deactivated successfully.
Nov 22 05:09:16 np0005532048 podman[429762]: 2025-11-22 10:09:16.298691859 +0000 UTC m=+0.050268918 container create 71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 05:09:16 np0005532048 systemd[1]: Started libpod-conmon-71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b.scope.
Nov 22 05:09:16 np0005532048 podman[429762]: 2025-11-22 10:09:16.275930509 +0000 UTC m=+0.027507558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:09:16 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:09:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2894bc786a16f549732876e2ce7b4e0911c3000d4801ff1897f8eca6daeabe70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:09:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2894bc786a16f549732876e2ce7b4e0911c3000d4801ff1897f8eca6daeabe70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:09:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2894bc786a16f549732876e2ce7b4e0911c3000d4801ff1897f8eca6daeabe70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:09:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2894bc786a16f549732876e2ce7b4e0911c3000d4801ff1897f8eca6daeabe70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:09:16 np0005532048 podman[429762]: 2025-11-22 10:09:16.398189637 +0000 UTC m=+0.149766696 container init 71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaum, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 05:09:16 np0005532048 podman[429762]: 2025-11-22 10:09:16.411629288 +0000 UTC m=+0.163206297 container start 71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 05:09:16 np0005532048 podman[429762]: 2025-11-22 10:09:16.415348639 +0000 UTC m=+0.166925698 container attach 71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaum, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 05:09:16 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:09:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4278779866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:09:16 np0005532048 nova_compute[253661]: 2025-11-22 10:09:16.628 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:09:16 np0005532048 nova_compute[253661]: 2025-11-22 10:09:16.636 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:09:16 np0005532048 nova_compute[253661]: 2025-11-22 10:09:16.660 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:09:16 np0005532048 nova_compute[253661]: 2025-11-22 10:09:16.662 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:09:16 np0005532048 nova_compute[253661]: 2025-11-22 10:09:16.662 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:09:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:16.688+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]: {
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:        "osd_id": 1,
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:        "type": "bluestore"
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:    },
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:        "osd_id": 0,
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:        "type": "bluestore"
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:    },
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:        "osd_id": 2,
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:        "type": "bluestore"
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]:    }
Nov 22 05:09:17 np0005532048 relaxed_chaum[429797]: }
Nov 22 05:09:17 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:17 np0005532048 systemd[1]: libpod-71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b.scope: Deactivated successfully.
Nov 22 05:09:17 np0005532048 systemd[1]: libpod-71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b.scope: Consumed 1.117s CPU time.
Nov 22 05:09:17 np0005532048 podman[429762]: 2025-11-22 10:09:17.523735744 +0000 UTC m=+1.275312773 container died 71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaum, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 05:09:17 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2894bc786a16f549732876e2ce7b4e0911c3000d4801ff1897f8eca6daeabe70-merged.mount: Deactivated successfully.
Nov 22 05:09:17 np0005532048 podman[429762]: 2025-11-22 10:09:17.590287441 +0000 UTC m=+1.341864470 container remove 71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chaum, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 05:09:17 np0005532048 systemd[1]: libpod-conmon-71325e4da2d20496bd1dac5146263734450afe09d8a5ff4606041bd527af1b4b.scope: Deactivated successfully.
Nov 22 05:09:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:09:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:09:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:09:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:09:17 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 27cb19db-9765-47ec-bde2-aaa3c1a85c55 does not exist
Nov 22 05:09:17 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a6723859-f3f8-46f4-98fe-b2ee0043da17 does not exist
Nov 22 05:09:17 np0005532048 nova_compute[253661]: 2025-11-22 10:09:17.662 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:09:17 np0005532048 nova_compute[253661]: 2025-11-22 10:09:17.662 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:09:17 np0005532048 nova_compute[253661]: 2025-11-22 10:09:17.662 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:09:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:17.669+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:18 np0005532048 nova_compute[253661]: 2025-11-22 10:09:18.243 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:18 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:18 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:09:18 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:09:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:18.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1134 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:09:19 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:19 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1134 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:19.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:19 np0005532048 nova_compute[253661]: 2025-11-22 10:09:19.835 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:20 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:20.688+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:21 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:21.669+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:22 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:22.647+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:09:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:09:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:09:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:09:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:09:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:09:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:23 np0005532048 nova_compute[253661]: 2025-11-22 10:09:23.247 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:23 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:23.606+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:24 np0005532048 nova_compute[253661]: 2025-11-22 10:09:24.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:09:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1139 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:09:24 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:24 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1139 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:24.636+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:24 np0005532048 nova_compute[253661]: 2025-11-22 10:09:24.838 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:25 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:25.625+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:26 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:26.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.606966) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806167607003, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2168, "num_deletes": 251, "total_data_size": 2744548, "memory_usage": 2796888, "flush_reason": "Manual Compaction"}
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Nov 22 05:09:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:27.609+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806167632438, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 2667871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69135, "largest_seqno": 71302, "table_properties": {"data_size": 2659095, "index_size": 4949, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 22940, "raw_average_key_size": 21, "raw_value_size": 2639311, "raw_average_value_size": 2443, "num_data_blocks": 217, "num_entries": 1080, "num_filter_entries": 1080, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806006, "oldest_key_time": 1763806006, "file_creation_time": 1763806167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 25550 microseconds, and 8784 cpu microseconds.
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.632503) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 2667871 bytes OK
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.632536) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.633998) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.634015) EVENT_LOG_v1 {"time_micros": 1763806167634010, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.634035) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 2735145, prev total WAL file size 2735145, number of live WAL files 2.
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.635545) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(2605KB)], [164(8475KB)]
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806167635613, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 11346819, "oldest_snapshot_seqno": -1}
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 9795 keys, 9918981 bytes, temperature: kUnknown
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806167687274, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 9918981, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9859973, "index_size": 33423, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24517, "raw_key_size": 259777, "raw_average_key_size": 26, "raw_value_size": 9690962, "raw_average_value_size": 989, "num_data_blocks": 1265, "num_entries": 9795, "num_filter_entries": 9795, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.687582) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 9918981 bytes
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.688777) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 219.2 rd, 191.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.3 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 10309, records dropped: 514 output_compression: NoCompression
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.688791) EVENT_LOG_v1 {"time_micros": 1763806167688784, "job": 102, "event": "compaction_finished", "compaction_time_micros": 51776, "compaction_time_cpu_micros": 33425, "output_level": 6, "num_output_files": 1, "total_output_size": 9918981, "num_input_records": 10309, "num_output_records": 9795, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806167689424, "job": 102, "event": "table_file_deletion", "file_number": 166}
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806167691035, "job": 102, "event": "table_file_deletion", "file_number": 164}
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.635437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.691120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.691127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.691129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.691132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:09:27 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:09:27.691135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:09:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:09:28.017 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:09:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:09:28.017 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:09:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:09:28.017 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:09:28 np0005532048 nova_compute[253661]: 2025-11-22 10:09:28.251 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:28 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:28 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:28.613+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1144 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:09:29 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:29 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1144 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:29.625+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:29 np0005532048 nova_compute[253661]: 2025-11-22 10:09:29.882 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:30.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:30 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:30 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:31 np0005532048 nova_compute[253661]: 2025-11-22 10:09:31.242 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:09:31 np0005532048 nova_compute[253661]: 2025-11-22 10:09:31.243 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 22 05:09:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:31.613+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:32.592+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:32 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:33 np0005532048 nova_compute[253661]: 2025-11-22 10:09:33.290 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:33.609+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:33 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1149 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:09:34 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:34 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1149 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:34.652+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:34 np0005532048 nova_compute[253661]: 2025-11-22 10:09:34.884 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:35.613+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:35 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:36.590+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:36 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:36 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:37.594+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:38 np0005532048 nova_compute[253661]: 2025-11-22 10:09:38.294 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:38.600+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:38 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1154 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:09:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:39.589+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:39 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:39 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1154 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:39 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:39 np0005532048 nova_compute[253661]: 2025-11-22 10:09:39.886 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:40.606+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:41.586+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:41 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:41 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:42 np0005532048 podman[429893]: 2025-11-22 10:09:42.419313173 +0000 UTC m=+0.100841252 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:09:42 np0005532048 podman[429894]: 2025-11-22 10:09:42.429509004 +0000 UTC m=+0.110770937 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 22 05:09:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:42.546+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:42 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:43 np0005532048 nova_compute[253661]: 2025-11-22 10:09:43.336 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:43.556+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:43 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1159 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:09:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:44.514+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:44 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1159 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:44 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:44 np0005532048 nova_compute[253661]: 2025-11-22 10:09:44.888 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:45 np0005532048 podman[429929]: 2025-11-22 10:09:45.397409316 +0000 UTC m=+0.090696643 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 22 05:09:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:45.493+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:45 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:46.454+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:46 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:47.481+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:47 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:48 np0005532048 nova_compute[253661]: 2025-11-22 10:09:48.341 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:48.505+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:48 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1165 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:09:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:49.531+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:49 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1165 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:49 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:49 np0005532048 nova_compute[253661]: 2025-11-22 10:09:49.890 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:50.558+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:50 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:51.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:51 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:09:52
Nov 22 05:09:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:09:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:09:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'vms', 'images', 'backups', 'default.rgw.log', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root']
Nov 22 05:09:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:09:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:52.615+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:09:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:09:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:09:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:09:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:09:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:09:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:53 np0005532048 nova_compute[253661]: 2025-11-22 10:09:53.343 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:53.634+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:53 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1169 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:09:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:54.627+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:54 np0005532048 nova_compute[253661]: 2025-11-22 10:09:54.892 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:54 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:54 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1169 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:55.660+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:55 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:09:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:09:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:09:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:09:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:09:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:56.708+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:57 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:57.699+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:58 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:58 np0005532048 nova_compute[253661]: 2025-11-22 10:09:58.244 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:09:58 np0005532048 nova_compute[253661]: 2025-11-22 10:09:58.244 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 22 05:09:58 np0005532048 nova_compute[253661]: 2025-11-22 10:09:58.262 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 22 05:09:58 np0005532048 nova_compute[253661]: 2025-11-22 10:09:58.348 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:09:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:58.691+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:09:59 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1174 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:09:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:09:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:09:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:09:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:09:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:09:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:09:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:09:59.656+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:09:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:09:59 np0005532048 nova_compute[253661]: 2025-11-22 10:09:59.927 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:00 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:00 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1174 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:00.696+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:01 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:01.705+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:02 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:02.748+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:03 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:10:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:10:03 np0005532048 nova_compute[253661]: 2025-11-22 10:10:03.395 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:03.698+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:04 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1179 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:10:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:04.723+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:04 np0005532048 nova_compute[253661]: 2025-11-22 10:10:04.931 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:05 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:05 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1179 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:05.678+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:06 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:06 np0005532048 nova_compute[253661]: 2025-11-22 10:10:06.247 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:10:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:06.644+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:07 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:07.680+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:08 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:08 np0005532048 nova_compute[253661]: 2025-11-22 10:10:08.399 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:08.708+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:09 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1184 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:10:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:09.676+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:09 np0005532048 nova_compute[253661]: 2025-11-22 10:10:09.934 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:10 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:10 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1184 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:10.717+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:11 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:11 np0005532048 nova_compute[253661]: 2025-11-22 10:10:11.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:10:11 np0005532048 nova_compute[253661]: 2025-11-22 10:10:11.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:10:11 np0005532048 nova_compute[253661]: 2025-11-22 10:10:11.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:10:11 np0005532048 nova_compute[253661]: 2025-11-22 10:10:11.243 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:10:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:11.763+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:12 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:12 np0005532048 nova_compute[253661]: 2025-11-22 10:10:12.234 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:10:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:10:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4015370798' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:10:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:10:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4015370798' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:10:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:12.755+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:13 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:13 np0005532048 podman[429956]: 2025-11-22 10:10:13.384664711 +0000 UTC m=+0.072357552 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 05:10:13 np0005532048 nova_compute[253661]: 2025-11-22 10:10:13.403 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:13 np0005532048 podman[429955]: 2025-11-22 10:10:13.41429309 +0000 UTC m=+0.099629732 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 05:10:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:13.792+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:14 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:14 np0005532048 nova_compute[253661]: 2025-11-22 10:10:14.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:10:14 np0005532048 nova_compute[253661]: 2025-11-22 10:10:14.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:10:14 np0005532048 nova_compute[253661]: 2025-11-22 10:10:14.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:10:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1189 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:10:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:14.744+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:14 np0005532048 nova_compute[253661]: 2025-11-22 10:10:14.972 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:15 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:15 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1189 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:15.712+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:16 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:16 np0005532048 nova_compute[253661]: 2025-11-22 10:10:16.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:10:16 np0005532048 nova_compute[253661]: 2025-11-22 10:10:16.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:10:16 np0005532048 nova_compute[253661]: 2025-11-22 10:10:16.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:10:16 np0005532048 nova_compute[253661]: 2025-11-22 10:10:16.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:10:16 np0005532048 nova_compute[253661]: 2025-11-22 10:10:16.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:10:16 np0005532048 nova_compute[253661]: 2025-11-22 10:10:16.257 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:10:16 np0005532048 nova_compute[253661]: 2025-11-22 10:10:16.258 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:10:16 np0005532048 podman[429995]: 2025-11-22 10:10:16.448223068 +0000 UTC m=+0.135462215 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:10:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:16.688+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:16 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:10:16 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2613545980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:10:16 np0005532048 nova_compute[253661]: 2025-11-22 10:10:16.726 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:10:16 np0005532048 nova_compute[253661]: 2025-11-22 10:10:16.871 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:10:16 np0005532048 nova_compute[253661]: 2025-11-22 10:10:16.872 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3566MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:10:16 np0005532048 nova_compute[253661]: 2025-11-22 10:10:16.873 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:10:16 np0005532048 nova_compute[253661]: 2025-11-22 10:10:16.873 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:10:16 np0005532048 nova_compute[253661]: 2025-11-22 10:10:16.931 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:10:16 np0005532048 nova_compute[253661]: 2025-11-22 10:10:16.931 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:10:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:16 np0005532048 nova_compute[253661]: 2025-11-22 10:10:16.945 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:10:17 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:10:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4138327405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:10:17 np0005532048 nova_compute[253661]: 2025-11-22 10:10:17.411 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:10:17 np0005532048 nova_compute[253661]: 2025-11-22 10:10:17.416 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:10:17 np0005532048 nova_compute[253661]: 2025-11-22 10:10:17.427 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:10:17 np0005532048 nova_compute[253661]: 2025-11-22 10:10:17.428 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:10:17 np0005532048 nova_compute[253661]: 2025-11-22 10:10:17.428 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:10:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:17.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:18 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:18 np0005532048 nova_compute[253661]: 2025-11-22 10:10:18.405 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:18 np0005532048 nova_compute[253661]: 2025-11-22 10:10:18.427 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:10:18 np0005532048 nova_compute[253661]: 2025-11-22 10:10:18.428 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:10:18 np0005532048 podman[430239]: 2025-11-22 10:10:18.638596326 +0000 UTC m=+0.071343416 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:10:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:18.664+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:18 np0005532048 podman[430239]: 2025-11-22 10:10:18.74478275 +0000 UTC m=+0.177529870 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 05:10:18 np0005532048 systemd-logind[822]: New session 52 of user zuul.
Nov 22 05:10:18 np0005532048 systemd[1]: Started Session 52 of User zuul.
Nov 22 05:10:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:19 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1194 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:10:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:10:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:10:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:10:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:10:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:19.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:19 np0005532048 nova_compute[253661]: 2025-11-22 10:10:19.974 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:20 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:20 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1194 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:10:20 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:10:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:10:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:10:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:10:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:10:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:10:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:10:20 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 868cfc09-154e-4c25-b7c2-6c5a9ded3464 does not exist
Nov 22 05:10:20 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev fea7bea6-8db4-4f67-bff0-0023af980dd1 does not exist
Nov 22 05:10:20 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 2a7e325e-9e9a-427d-adb4-fd5ef77b5657 does not exist
Nov 22 05:10:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:10:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:10:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:10:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:10:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:10:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:10:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:20.733+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:21 np0005532048 podman[430717]: 2025-11-22 10:10:21.078017734 +0000 UTC m=+0.053224701 container create 6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_napier, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:10:21 np0005532048 systemd[1]: Started libpod-conmon-6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2.scope.
Nov 22 05:10:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:10:21 np0005532048 podman[430717]: 2025-11-22 10:10:21.051208024 +0000 UTC m=+0.026415051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:10:21 np0005532048 podman[430717]: 2025-11-22 10:10:21.160165806 +0000 UTC m=+0.135372763 container init 6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 05:10:21 np0005532048 podman[430717]: 2025-11-22 10:10:21.168222494 +0000 UTC m=+0.143429431 container start 6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_napier, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:10:21 np0005532048 podman[430717]: 2025-11-22 10:10:21.171436133 +0000 UTC m=+0.146643110 container attach 6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_napier, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:10:21 np0005532048 pensive_napier[430734]: 167 167
Nov 22 05:10:21 np0005532048 systemd[1]: libpod-6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2.scope: Deactivated successfully.
Nov 22 05:10:21 np0005532048 podman[430717]: 2025-11-22 10:10:21.176510698 +0000 UTC m=+0.151717645 container died 6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 05:10:21 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:21 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:10:21 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:10:21 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:10:21 np0005532048 systemd[1]: var-lib-containers-storage-overlay-91284a34d2f5545cfd9102731c75de39e6b48697204587062a45be84514372bb-merged.mount: Deactivated successfully.
Nov 22 05:10:21 np0005532048 podman[430717]: 2025-11-22 10:10:21.225030701 +0000 UTC m=+0.200237678 container remove 6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_napier, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 05:10:21 np0005532048 systemd[1]: libpod-conmon-6bc27011a3e956e86b21537d81368a8ed33e960dfbed1c54ce60262ba0a8e0f2.scope: Deactivated successfully.
Nov 22 05:10:21 np0005532048 podman[430757]: 2025-11-22 10:10:21.413427038 +0000 UTC m=+0.049137700 container create 4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:10:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:10:21.431 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 05:10:21 np0005532048 nova_compute[253661]: 2025-11-22 10:10:21.432 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:21 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:10:21.432 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 05:10:21 np0005532048 systemd[1]: Started libpod-conmon-4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce.scope.
Nov 22 05:10:21 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:10:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a11e7fd2e93db85aae56efcd363e8f3018990750b7ff11f42ef3adafb76e4a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:10:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a11e7fd2e93db85aae56efcd363e8f3018990750b7ff11f42ef3adafb76e4a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:10:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a11e7fd2e93db85aae56efcd363e8f3018990750b7ff11f42ef3adafb76e4a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:10:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a11e7fd2e93db85aae56efcd363e8f3018990750b7ff11f42ef3adafb76e4a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:10:21 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a11e7fd2e93db85aae56efcd363e8f3018990750b7ff11f42ef3adafb76e4a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:10:21 np0005532048 podman[430757]: 2025-11-22 10:10:21.393572039 +0000 UTC m=+0.029282741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:10:21 np0005532048 podman[430757]: 2025-11-22 10:10:21.494109882 +0000 UTC m=+0.129820574 container init 4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_franklin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:10:21 np0005532048 podman[430757]: 2025-11-22 10:10:21.500687135 +0000 UTC m=+0.136397797 container start 4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_franklin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 05:10:21 np0005532048 podman[430757]: 2025-11-22 10:10:21.504069798 +0000 UTC m=+0.139780470 container attach 4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 05:10:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:21.715+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:22 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:22 np0005532048 cranky_franklin[430773]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:10:22 np0005532048 cranky_franklin[430773]: --> relative data size: 1.0
Nov 22 05:10:22 np0005532048 cranky_franklin[430773]: --> All data devices are unavailable
Nov 22 05:10:22 np0005532048 systemd[1]: libpod-4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce.scope: Deactivated successfully.
Nov 22 05:10:22 np0005532048 podman[430757]: 2025-11-22 10:10:22.525460982 +0000 UTC m=+1.161171634 container died 4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_franklin, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:10:22 np0005532048 systemd[1]: var-lib-containers-storage-overlay-8a11e7fd2e93db85aae56efcd363e8f3018990750b7ff11f42ef3adafb76e4a3-merged.mount: Deactivated successfully.
Nov 22 05:10:22 np0005532048 podman[430757]: 2025-11-22 10:10:22.58956918 +0000 UTC m=+1.225279832 container remove 4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 05:10:22 np0005532048 systemd[1]: libpod-conmon-4399097daa84b288c948bb84f5d4bd467b3d8c5879a1e5ec1dfc306353bc26ce.scope: Deactivated successfully.
Nov 22 05:10:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:22.676+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:10:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:10:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:10:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:10:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:10:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:10:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:23 np0005532048 podman[431138]: 2025-11-22 10:10:23.129253339 +0000 UTC m=+0.037324848 container create f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 05:10:23 np0005532048 systemd[1]: Started libpod-conmon-f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5.scope.
Nov 22 05:10:23 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:10:23 np0005532048 podman[431138]: 2025-11-22 10:10:23.112167239 +0000 UTC m=+0.020238768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:10:23 np0005532048 podman[431138]: 2025-11-22 10:10:23.208342036 +0000 UTC m=+0.116413565 container init f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shannon, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:10:23 np0005532048 podman[431138]: 2025-11-22 10:10:23.216490957 +0000 UTC m=+0.124562486 container start f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shannon, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:10:23 np0005532048 podman[431138]: 2025-11-22 10:10:23.221202142 +0000 UTC m=+0.129273681 container attach f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 05:10:23 np0005532048 eager_shannon[431154]: 167 167
Nov 22 05:10:23 np0005532048 systemd[1]: libpod-f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5.scope: Deactivated successfully.
Nov 22 05:10:23 np0005532048 podman[431138]: 2025-11-22 10:10:23.223833287 +0000 UTC m=+0.131904796 container died f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shannon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 05:10:23 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d33bfac015c1cd2f5d048bb2beced284730ca087e74123a17d304800a7a69cec-merged.mount: Deactivated successfully.
Nov 22 05:10:23 np0005532048 podman[431138]: 2025-11-22 10:10:23.260085289 +0000 UTC m=+0.168156798 container remove f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shannon, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 22 05:10:23 np0005532048 systemd[1]: libpod-conmon-f83675646dc8b3c70ad4492d8482891baf4cc79c73499d5a58f15675bab44dd5.scope: Deactivated successfully.
Nov 22 05:10:23 np0005532048 nova_compute[253661]: 2025-11-22 10:10:23.408 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:23 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:23 np0005532048 podman[431176]: 2025-11-22 10:10:23.441788111 +0000 UTC m=+0.049886579 container create 56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 05:10:23 np0005532048 systemd[1]: Started libpod-conmon-56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd.scope.
Nov 22 05:10:23 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:10:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b948c2698f2ba6e0f6e193c21bc06c7e5f71084ee3291ec75d533b120f6a05b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:10:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b948c2698f2ba6e0f6e193c21bc06c7e5f71084ee3291ec75d533b120f6a05b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:10:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b948c2698f2ba6e0f6e193c21bc06c7e5f71084ee3291ec75d533b120f6a05b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:10:23 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b948c2698f2ba6e0f6e193c21bc06c7e5f71084ee3291ec75d533b120f6a05b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:10:23 np0005532048 podman[431176]: 2025-11-22 10:10:23.422952217 +0000 UTC m=+0.031050695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:10:23 np0005532048 podman[431176]: 2025-11-22 10:10:23.525638653 +0000 UTC m=+0.133737141 container init 56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:10:23 np0005532048 podman[431176]: 2025-11-22 10:10:23.540442538 +0000 UTC m=+0.148540996 container start 56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:10:23 np0005532048 podman[431176]: 2025-11-22 10:10:23.54379994 +0000 UTC m=+0.151898428 container attach 56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:10:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:23.672+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]: {
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:    "0": [
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:        {
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "devices": [
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "/dev/loop3"
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            ],
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "lv_name": "ceph_lv0",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "lv_size": "21470642176",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "name": "ceph_lv0",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "tags": {
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.cluster_name": "ceph",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.crush_device_class": "",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.encrypted": "0",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.osd_id": "0",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.type": "block",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.vdo": "0"
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            },
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "type": "block",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "vg_name": "ceph_vg0"
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:        }
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:    ],
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:    "1": [
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:        {
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "devices": [
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "/dev/loop4"
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            ],
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "lv_name": "ceph_lv1",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "lv_size": "21470642176",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "name": "ceph_lv1",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "tags": {
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.cluster_name": "ceph",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.crush_device_class": "",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.encrypted": "0",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.osd_id": "1",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.type": "block",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.vdo": "0"
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            },
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "type": "block",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "vg_name": "ceph_vg1"
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:        }
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:    ],
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:    "2": [
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:        {
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "devices": [
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "/dev/loop5"
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            ],
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "lv_name": "ceph_lv2",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "lv_size": "21470642176",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "name": "ceph_lv2",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "tags": {
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.cluster_name": "ceph",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.crush_device_class": "",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.encrypted": "0",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.osd_id": "2",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.type": "block",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:                "ceph.vdo": "0"
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            },
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "type": "block",
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:            "vg_name": "ceph_vg2"
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:        }
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]:    ]
Nov 22 05:10:24 np0005532048 stupefied_buck[431193]: }
Nov 22 05:10:24 np0005532048 systemd[1]: libpod-56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd.scope: Deactivated successfully.
Nov 22 05:10:24 np0005532048 podman[431176]: 2025-11-22 10:10:24.353980137 +0000 UTC m=+0.962078615 container died 56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:10:24 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b948c2698f2ba6e0f6e193c21bc06c7e5f71084ee3291ec75d533b120f6a05b4-merged.mount: Deactivated successfully.
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1199 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.414176) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806224414235, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 899, "num_deletes": 256, "total_data_size": 938323, "memory_usage": 955944, "flush_reason": "Manual Compaction"}
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806224424138, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 924177, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71303, "largest_seqno": 72201, "table_properties": {"data_size": 919967, "index_size": 1733, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10861, "raw_average_key_size": 20, "raw_value_size": 910783, "raw_average_value_size": 1677, "num_data_blocks": 77, "num_entries": 543, "num_filter_entries": 543, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806168, "oldest_key_time": 1763806168, "file_creation_time": 1763806224, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 10040 microseconds, and 4205 cpu microseconds.
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.424213) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 924177 bytes OK
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.424239) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.428500) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.428519) EVENT_LOG_v1 {"time_micros": 1763806224428513, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.428540) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 933790, prev total WAL file size 933790, number of live WAL files 2.
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.429240) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323634' seq:72057594037927935, type:22 .. '6C6F676D0033353137' seq:0, type:0; will stop at (end)
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(902KB)], [167(9686KB)]
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806224429381, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 10843158, "oldest_snapshot_seqno": -1}
Nov 22 05:10:24 np0005532048 podman[431176]: 2025-11-22 10:10:24.444838483 +0000 UTC m=+1.052936941 container remove 56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1199 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:24 np0005532048 systemd[1]: libpod-conmon-56803d3263b136755016b89f72c21ef48c610fa23ebdc196171a9e5fb9dfdfbd.scope: Deactivated successfully.
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 9814 keys, 10708418 bytes, temperature: kUnknown
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806224487164, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 10708418, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10648215, "index_size": 34554, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24581, "raw_key_size": 261504, "raw_average_key_size": 26, "raw_value_size": 10477697, "raw_average_value_size": 1067, "num_data_blocks": 1310, "num_entries": 9814, "num_filter_entries": 9814, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806224, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.487464) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 10708418 bytes
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.488985) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.4 rd, 185.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.5 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(23.3) write-amplify(11.6) OK, records in: 10338, records dropped: 524 output_compression: NoCompression
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.489006) EVENT_LOG_v1 {"time_micros": 1763806224488997, "job": 104, "event": "compaction_finished", "compaction_time_micros": 57847, "compaction_time_cpu_micros": 34740, "output_level": 6, "num_output_files": 1, "total_output_size": 10708418, "num_input_records": 10338, "num_output_records": 9814, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806224489307, "job": 104, "event": "table_file_deletion", "file_number": 169}
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806224491630, "job": 104, "event": "table_file_deletion", "file_number": 167}
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.429137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.491861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.491870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.491873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.491875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:10:24 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:10:24.491877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:10:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:24.720+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:24 np0005532048 systemd[1]: session-52.scope: Deactivated successfully.
Nov 22 05:10:24 np0005532048 systemd-logind[822]: Session 52 logged out. Waiting for processes to exit.
Nov 22 05:10:24 np0005532048 systemd-logind[822]: Removed session 52.
Nov 22 05:10:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:24 np0005532048 nova_compute[253661]: 2025-11-22 10:10:24.976 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:25 np0005532048 podman[431378]: 2025-11-22 10:10:25.396583793 +0000 UTC m=+0.072367702 container create ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 05:10:25 np0005532048 systemd[1]: Started libpod-conmon-ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475.scope.
Nov 22 05:10:25 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:25 np0005532048 podman[431378]: 2025-11-22 10:10:25.367536778 +0000 UTC m=+0.043320787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:10:25 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:10:25 np0005532048 podman[431378]: 2025-11-22 10:10:25.491018786 +0000 UTC m=+0.166802735 container init ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:10:25 np0005532048 podman[431378]: 2025-11-22 10:10:25.498958422 +0000 UTC m=+0.174742331 container start ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:10:25 np0005532048 podman[431378]: 2025-11-22 10:10:25.502901029 +0000 UTC m=+0.178684948 container attach ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:10:25 np0005532048 hardcore_jang[431394]: 167 167
Nov 22 05:10:25 np0005532048 systemd[1]: libpod-ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475.scope: Deactivated successfully.
Nov 22 05:10:25 np0005532048 conmon[431394]: conmon ad5169cb9d81966c1873 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475.scope/container/memory.events
Nov 22 05:10:25 np0005532048 podman[431378]: 2025-11-22 10:10:25.50783505 +0000 UTC m=+0.183618969 container died ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:10:25 np0005532048 systemd[1]: var-lib-containers-storage-overlay-34c78179b90af8b2b74f0aa37214bbdb3fbd34ceac2cdfccdc027d153d3c2349-merged.mount: Deactivated successfully.
Nov 22 05:10:25 np0005532048 podman[431378]: 2025-11-22 10:10:25.554445257 +0000 UTC m=+0.230229176 container remove ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:10:25 np0005532048 systemd[1]: libpod-conmon-ad5169cb9d81966c187399833e03cafa635beb0712fea312b55c2abe30618475.scope: Deactivated successfully.
Nov 22 05:10:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:25.728+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:25 np0005532048 podman[431419]: 2025-11-22 10:10:25.794019983 +0000 UTC m=+0.050841323 container create 6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:10:25 np0005532048 systemd[1]: Started libpod-conmon-6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723.scope.
Nov 22 05:10:25 np0005532048 podman[431419]: 2025-11-22 10:10:25.773488057 +0000 UTC m=+0.030309417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:10:25 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:10:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50311bc414ab185608a23f6065404e033f6661031245bc207a9f082d15377b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:10:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50311bc414ab185608a23f6065404e033f6661031245bc207a9f082d15377b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:10:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50311bc414ab185608a23f6065404e033f6661031245bc207a9f082d15377b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:10:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50311bc414ab185608a23f6065404e033f6661031245bc207a9f082d15377b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:10:25 np0005532048 podman[431419]: 2025-11-22 10:10:25.897635533 +0000 UTC m=+0.154456863 container init 6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 05:10:25 np0005532048 podman[431419]: 2025-11-22 10:10:25.91097077 +0000 UTC m=+0.167792090 container start 6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 05:10:25 np0005532048 podman[431419]: 2025-11-22 10:10:25.913981104 +0000 UTC m=+0.170802424 container attach 6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:10:26 np0005532048 nova_compute[253661]: 2025-11-22 10:10:26.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:10:26 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:26.731+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:26 np0005532048 trusting_wright[431435]: {
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:        "osd_id": 1,
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:        "type": "bluestore"
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:    },
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:        "osd_id": 0,
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:        "type": "bluestore"
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:    },
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:        "osd_id": 2,
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:        "type": "bluestore"
Nov 22 05:10:26 np0005532048 trusting_wright[431435]:    }
Nov 22 05:10:26 np0005532048 trusting_wright[431435]: }
Nov 22 05:10:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:26 np0005532048 systemd[1]: libpod-6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723.scope: Deactivated successfully.
Nov 22 05:10:26 np0005532048 podman[431419]: 2025-11-22 10:10:26.955651377 +0000 UTC m=+1.212472707 container died 6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 05:10:26 np0005532048 systemd[1]: libpod-6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723.scope: Consumed 1.051s CPU time.
Nov 22 05:10:26 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d50311bc414ab185608a23f6065404e033f6661031245bc207a9f082d15377b9-merged.mount: Deactivated successfully.
Nov 22 05:10:27 np0005532048 podman[431419]: 2025-11-22 10:10:27.008366795 +0000 UTC m=+1.265188115 container remove 6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 05:10:27 np0005532048 systemd[1]: libpod-conmon-6f39bc87c05e92fb734f3a307219c1186e5a32b9defe56c55278c82f7c33e723.scope: Deactivated successfully.
Nov 22 05:10:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:10:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:10:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:10:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:10:27 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c220d0e1-6c32-4599-aa80-bac0f6800d99 does not exist
Nov 22 05:10:27 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 70aa17a3-6524-4d42-b861-9272992938a3 does not exist
Nov 22 05:10:27 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:27 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:10:27 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:10:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:27.780+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:10:28.017 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:10:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:10:28.018 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:10:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:10:28.018 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:10:28 np0005532048 nova_compute[253661]: 2025-11-22 10:10:28.411 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:28 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:28.817+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1204 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:10:29 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:10:29.434 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 05:10:29 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:29 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1204 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:29.794+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:30 np0005532048 nova_compute[253661]: 2025-11-22 10:10:30.004 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:30 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:30.843+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:31 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:31.803+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:32 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:32.847+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:33 np0005532048 nova_compute[253661]: 2025-11-22 10:10:33.416 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:33 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:33.834+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1209 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:10:34 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:34 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1209 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:34.848+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:35 np0005532048 nova_compute[253661]: 2025-11-22 10:10:35.007 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:35 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:35.880+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:36 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:36.831+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:36 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:37 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:37.791+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:38 np0005532048 nova_compute[253661]: 2025-11-22 10:10:38.419 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:38 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:38.831+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:38 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1214 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:10:39 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:39 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1214 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:39.870+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:40 np0005532048 nova_compute[253661]: 2025-11-22 10:10:40.008 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:40 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:40.840+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:40 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:41 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:41.793+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:42 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:42.777+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:42 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:43 np0005532048 nova_compute[253661]: 2025-11-22 10:10:43.422 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:10:43 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:43.816+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:44 np0005532048 podman[431531]: 2025-11-22 10:10:44.399086495 +0000 UTC m=+0.072827913 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:10:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1219 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:10:44 np0005532048 podman[431532]: 2025-11-22 10:10:44.423968357 +0000 UTC m=+0.100797592 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:10:44 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:44 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1219 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:44.860+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:44 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:45 np0005532048 nova_compute[253661]: 2025-11-22 10:10:45.049 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:10:45 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:45.877+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:46 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:46.832+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:46 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:47 np0005532048 podman[431569]: 2025-11-22 10:10:47.463709386 +0000 UTC m=+0.145782598 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 05:10:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:47.818+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:47 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:48 np0005532048 nova_compute[253661]: 2025-11-22 10:10:48.425 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:10:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:48.812+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:48 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:48 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1224 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:10:49 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:49 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1224 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:49.851+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:50 np0005532048 nova_compute[253661]: 2025-11-22 10:10:50.051 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:10:50 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:50.874+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:50 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:51 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:51.891+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:10:52
Nov 22 05:10:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:10:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:10:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.meta', '.rgw.root', 'vms']
Nov 22 05:10:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:10:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:10:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:10:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:10:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:10:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:10:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:10:52 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:52.922+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:52 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:53 np0005532048 nova_compute[253661]: 2025-11-22 10:10:53.473 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:10:53 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:53.950+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1229 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:10:54 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:54 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1229 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:54.925+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:54 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:55 np0005532048 nova_compute[253661]: 2025-11-22 10:10:55.053 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:10:55 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:55.939+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:10:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:10:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:10:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:10:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:10:56 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:56.920+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:56 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:10:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.0 total, 600.0 interval
Cumulative writes: 15K writes, 72K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1830 writes, 8975 keys, 1830 commit groups, 1.0 writes per commit group, ingest: 9.55 MB, 0.02 MB/s
Interval WAL: 1830 writes, 1830 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     42.9      1.90              0.30        52    0.037       0      0       0.0       0.0
  L6      1/0   10.21 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.3    103.2     88.5      4.86              1.34        51    0.095    357K    27K       0.0       0.0
 Sum      1/0   10.21 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.3     74.1     75.7      6.76              1.64       103    0.066    357K    27K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9    149.2    153.1      0.56              0.29        16    0.035     78K   4107       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    103.2     88.5      4.86              1.34        51    0.095    357K    27K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     42.9      1.90              0.30        51    0.037       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.0 total, 600.0 interval
Flush(GB): cumulative 0.080, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.50 GB write, 0.09 MB/s write, 0.49 GB read, 0.08 MB/s read, 6.8 seconds
Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.6 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 55.83 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000876 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3609,53.30 MB,17.5342%) FilterBlock(104,1001.55 KB,0.321735%) IndexBlock(104,1.55 MB,0.509779%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 22 05:10:57 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:57.931+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:58 np0005532048 nova_compute[253661]: 2025-11-22 10:10:58.520 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:10:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:58.890+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:58 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:58 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:10:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1234 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:10:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:10:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:10:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:10:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:10:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:10:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:10:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:10:59.856+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:10:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:59 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:10:59 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1234 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:00 np0005532048 nova_compute[253661]: 2025-11-22 10:11:00.056 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:11:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:00.834+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:00 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:00 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:01.818+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:01 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:02.801+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:02 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:02 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:11:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:11:03 np0005532048 nova_compute[253661]: 2025-11-22 10:11:03.523 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:03.805+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:03 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1239 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:11:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:04.840+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:04 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:04 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1239 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:04 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:05 np0005532048 nova_compute[253661]: 2025-11-22 10:11:05.058 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:05.825+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:05 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:06.871+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:06 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:06 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:07 np0005532048 nova_compute[253661]: 2025-11-22 10:11:07.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:11:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:07.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:08 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:08 np0005532048 nova_compute[253661]: 2025-11-22 10:11:08.527 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:08.855+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:08 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:09 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1244 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:11:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:09.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:10 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:10 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1244 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:10 np0005532048 nova_compute[253661]: 2025-11-22 10:11:10.105 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:10.873+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:10 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:11 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:11 np0005532048 nova_compute[253661]: 2025-11-22 10:11:11.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:11:11 np0005532048 nova_compute[253661]: 2025-11-22 10:11:11.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:11:11 np0005532048 nova_compute[253661]: 2025-11-22 10:11:11.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:11:11 np0005532048 nova_compute[253661]: 2025-11-22 10:11:11.244 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:11:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:11.858+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:12 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:11:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3557443404' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:11:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:11:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3557443404' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:11:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:12.836+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:12 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:13 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:13 np0005532048 nova_compute[253661]: 2025-11-22 10:11:13.531 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:13.812+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:14 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:14 np0005532048 nova_compute[253661]: 2025-11-22 10:11:14.236 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:11:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1249 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:11:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:14.811+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:14 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:15 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:15 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1249 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:15 np0005532048 nova_compute[253661]: 2025-11-22 10:11:15.108 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:15 np0005532048 nova_compute[253661]: 2025-11-22 10:11:15.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:11:15 np0005532048 nova_compute[253661]: 2025-11-22 10:11:15.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:11:15 np0005532048 nova_compute[253661]: 2025-11-22 10:11:15.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:11:15 np0005532048 podman[431597]: 2025-11-22 10:11:15.367442507 +0000 UTC m=+0.054943824 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:11:15 np0005532048 podman[431596]: 2025-11-22 10:11:15.36878949 +0000 UTC m=+0.057644130 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 05:11:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:15.799+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:16 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:16.807+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:16 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:17 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:17 np0005532048 nova_compute[253661]: 2025-11-22 10:11:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:11:17 np0005532048 nova_compute[253661]: 2025-11-22 10:11:17.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:11:17 np0005532048 nova_compute[253661]: 2025-11-22 10:11:17.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:11:17 np0005532048 nova_compute[253661]: 2025-11-22 10:11:17.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:11:17 np0005532048 nova_compute[253661]: 2025-11-22 10:11:17.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:11:17 np0005532048 nova_compute[253661]: 2025-11-22 10:11:17.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:11:17 np0005532048 nova_compute[253661]: 2025-11-22 10:11:17.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:11:17 np0005532048 nova_compute[253661]: 2025-11-22 10:11:17.257 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:11:17 np0005532048 nova_compute[253661]: 2025-11-22 10:11:17.258 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:11:17 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:11:17 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936066357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:11:17 np0005532048 nova_compute[253661]: 2025-11-22 10:11:17.694 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:11:17 np0005532048 nova_compute[253661]: 2025-11-22 10:11:17.835 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:11:17 np0005532048 nova_compute[253661]: 2025-11-22 10:11:17.836 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3568MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:11:17 np0005532048 nova_compute[253661]: 2025-11-22 10:11:17.836 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:11:17 np0005532048 nova_compute[253661]: 2025-11-22 10:11:17.837 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:11:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:17.841+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:17 np0005532048 nova_compute[253661]: 2025-11-22 10:11:17.904 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:11:17 np0005532048 nova_compute[253661]: 2025-11-22 10:11:17.905 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:11:17 np0005532048 nova_compute[253661]: 2025-11-22 10:11:17.927 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:11:18 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:11:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2441160920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:11:18 np0005532048 nova_compute[253661]: 2025-11-22 10:11:18.361 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:11:18 np0005532048 nova_compute[253661]: 2025-11-22 10:11:18.368 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:11:18 np0005532048 nova_compute[253661]: 2025-11-22 10:11:18.386 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:11:18 np0005532048 nova_compute[253661]: 2025-11-22 10:11:18.388 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:11:18 np0005532048 nova_compute[253661]: 2025-11-22 10:11:18.388 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:11:18 np0005532048 podman[431676]: 2025-11-22 10:11:18.407076572 +0000 UTC m=+0.098298710 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 05:11:18 np0005532048 nova_compute[253661]: 2025-11-22 10:11:18.532 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:18.876+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:18 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:19 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1254 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:11:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:19.863+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:20 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:20 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1254 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:20 np0005532048 nova_compute[253661]: 2025-11-22 10:11:20.173 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:20.872+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:20 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:21 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:21.891+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:22 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:11:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:11:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:11:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:11:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:11:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:11:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:22.890+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:22 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:23 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:23 np0005532048 nova_compute[253661]: 2025-11-22 10:11:23.537 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:23.924+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:24 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1259 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:11:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:24.941+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:24 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:25 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:25 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1259 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:25 np0005532048 nova_compute[253661]: 2025-11-22 10:11:25.174 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:25.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:26 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:26 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:27.016+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:27 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:11:28.018 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:11:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:11:28.019 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:11:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:11:28.019 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:11:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:28.046+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:11:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:11:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:11:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:11:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:11:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:11:28 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c715f888-1a0e-4336-bffb-0c95fcd6d203 does not exist
Nov 22 05:11:28 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 4cc644c6-b48a-4daf-8c3c-a3908b9af563 does not exist
Nov 22 05:11:28 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev af673936-09b6-4e98-8845-e850d654fde0 does not exist
Nov 22 05:11:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:11:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:11:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:11:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:11:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:11:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:11:28 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:28 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:11:28 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:11:28 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:11:28 np0005532048 nova_compute[253661]: 2025-11-22 10:11:28.545 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:28 np0005532048 podman[431974]: 2025-11-22 10:11:28.724005794 +0000 UTC m=+0.045239424 container create c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:11:28 np0005532048 systemd[1]: Started libpod-conmon-c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f.scope.
Nov 22 05:11:28 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:11:28 np0005532048 podman[431974]: 2025-11-22 10:11:28.70309955 +0000 UTC m=+0.024333210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:11:28 np0005532048 podman[431974]: 2025-11-22 10:11:28.813740512 +0000 UTC m=+0.134974222 container init c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 05:11:28 np0005532048 podman[431974]: 2025-11-22 10:11:28.820536869 +0000 UTC m=+0.141770509 container start c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:11:28 np0005532048 podman[431974]: 2025-11-22 10:11:28.82417998 +0000 UTC m=+0.145413700 container attach c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:11:28 np0005532048 systemd[1]: libpod-c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f.scope: Deactivated successfully.
Nov 22 05:11:28 np0005532048 zealous_robinson[431990]: 167 167
Nov 22 05:11:28 np0005532048 conmon[431990]: conmon c3ef36f53a761aa9bda9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f.scope/container/memory.events
Nov 22 05:11:28 np0005532048 podman[431974]: 2025-11-22 10:11:28.828888105 +0000 UTC m=+0.150121765 container died c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:11:28 np0005532048 systemd[1]: var-lib-containers-storage-overlay-525b428c70fec544b6997dfd1afae871e4ec24820d2d65d04ceaf4150740c750-merged.mount: Deactivated successfully.
Nov 22 05:11:28 np0005532048 podman[431974]: 2025-11-22 10:11:28.880491875 +0000 UTC m=+0.201725535 container remove c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:11:28 np0005532048 systemd[1]: libpod-conmon-c3ef36f53a761aa9bda9dc7b55d0a650b62fdb142c15c54f46e8a083d6bf6f4f.scope: Deactivated successfully.
Nov 22 05:11:28 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:29.076+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:29 np0005532048 podman[432014]: 2025-11-22 10:11:29.097127815 +0000 UTC m=+0.057254849 container create d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 05:11:29 np0005532048 systemd[1]: Started libpod-conmon-d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe.scope.
Nov 22 05:11:29 np0005532048 podman[432014]: 2025-11-22 10:11:29.064103543 +0000 UTC m=+0.024230667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:11:29 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:11:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf1c1db79870d0ac5a69ea037dd018459c44a899c0e13fdfefeb992365b5aaae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:11:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf1c1db79870d0ac5a69ea037dd018459c44a899c0e13fdfefeb992365b5aaae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:11:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf1c1db79870d0ac5a69ea037dd018459c44a899c0e13fdfefeb992365b5aaae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:11:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf1c1db79870d0ac5a69ea037dd018459c44a899c0e13fdfefeb992365b5aaae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:11:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf1c1db79870d0ac5a69ea037dd018459c44a899c0e13fdfefeb992365b5aaae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:11:29 np0005532048 podman[432014]: 2025-11-22 10:11:29.201784941 +0000 UTC m=+0.161912015 container init d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 05:11:29 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:29 np0005532048 podman[432014]: 2025-11-22 10:11:29.214223707 +0000 UTC m=+0.174350731 container start d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:11:29 np0005532048 podman[432014]: 2025-11-22 10:11:29.217154459 +0000 UTC m=+0.177281483 container attach d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 22 05:11:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1264 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:11:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:30.107+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:30 np0005532048 nova_compute[253661]: 2025-11-22 10:11:30.176 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:30 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:30 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1264 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:30 np0005532048 strange_turing[432031]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:11:30 np0005532048 strange_turing[432031]: --> relative data size: 1.0
Nov 22 05:11:30 np0005532048 strange_turing[432031]: --> All data devices are unavailable
Nov 22 05:11:30 np0005532048 systemd[1]: libpod-d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe.scope: Deactivated successfully.
Nov 22 05:11:30 np0005532048 systemd[1]: libpod-d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe.scope: Consumed 1.172s CPU time.
Nov 22 05:11:30 np0005532048 podman[432014]: 2025-11-22 10:11:30.428595319 +0000 UTC m=+1.388722363 container died d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 05:11:30 np0005532048 systemd[1]: var-lib-containers-storage-overlay-bf1c1db79870d0ac5a69ea037dd018459c44a899c0e13fdfefeb992365b5aaae-merged.mount: Deactivated successfully.
Nov 22 05:11:30 np0005532048 podman[432014]: 2025-11-22 10:11:30.486753811 +0000 UTC m=+1.446880835 container remove d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:11:30 np0005532048 systemd[1]: libpod-conmon-d3c127e22184ea84416cb088ab80fc15cae2a3f1a19f7bef86f64277ce771cfe.scope: Deactivated successfully.
Nov 22 05:11:30 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:31.144+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:31 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:31 np0005532048 podman[432214]: 2025-11-22 10:11:31.289295269 +0000 UTC m=+0.033923356 container create 57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 05:11:31 np0005532048 systemd[1]: Started libpod-conmon-57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6.scope.
Nov 22 05:11:31 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:11:31 np0005532048 podman[432214]: 2025-11-22 10:11:31.274287419 +0000 UTC m=+0.018915526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:11:31 np0005532048 podman[432214]: 2025-11-22 10:11:31.37794035 +0000 UTC m=+0.122568507 container init 57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:11:31 np0005532048 podman[432214]: 2025-11-22 10:11:31.386281755 +0000 UTC m=+0.130909872 container start 57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 05:11:31 np0005532048 stupefied_aryabhata[432230]: 167 167
Nov 22 05:11:31 np0005532048 systemd[1]: libpod-57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6.scope: Deactivated successfully.
Nov 22 05:11:31 np0005532048 podman[432214]: 2025-11-22 10:11:31.391032453 +0000 UTC m=+0.135660570 container attach 57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:11:31 np0005532048 podman[432214]: 2025-11-22 10:11:31.395191745 +0000 UTC m=+0.139819902 container died 57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 05:11:31 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f387fbaa6976ca9ac65bc6fd3250da5768bbab6de8b20baeba10d37656ed667e-merged.mount: Deactivated successfully.
Nov 22 05:11:31 np0005532048 podman[432214]: 2025-11-22 10:11:31.456865163 +0000 UTC m=+0.201493250 container remove 57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 05:11:31 np0005532048 systemd[1]: libpod-conmon-57ec86eceeccd5d1d677575e99ee1950900d38db65ffea282490c9e5f06a0ff6.scope: Deactivated successfully.
Nov 22 05:11:31 np0005532048 podman[432254]: 2025-11-22 10:11:31.653954373 +0000 UTC m=+0.069776658 container create ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dirac, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 05:11:31 np0005532048 systemd[1]: Started libpod-conmon-ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847.scope.
Nov 22 05:11:31 np0005532048 podman[432254]: 2025-11-22 10:11:31.62052488 +0000 UTC m=+0.036347215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:11:31 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:11:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f954cc3771c939f85143f7d08745f5a07dc654e6d445dc9b3c5d42ba90e7e9a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:11:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f954cc3771c939f85143f7d08745f5a07dc654e6d445dc9b3c5d42ba90e7e9a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:11:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f954cc3771c939f85143f7d08745f5a07dc654e6d445dc9b3c5d42ba90e7e9a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:11:31 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f954cc3771c939f85143f7d08745f5a07dc654e6d445dc9b3c5d42ba90e7e9a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:11:31 np0005532048 podman[432254]: 2025-11-22 10:11:31.757503141 +0000 UTC m=+0.173325466 container init ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 22 05:11:31 np0005532048 podman[432254]: 2025-11-22 10:11:31.770383808 +0000 UTC m=+0.186206093 container start ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dirac, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:11:31 np0005532048 podman[432254]: 2025-11-22 10:11:31.775537954 +0000 UTC m=+0.191360229 container attach ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:11:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:32.189+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:32 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]: {
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:    "0": [
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:        {
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "devices": [
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "/dev/loop3"
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            ],
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "lv_name": "ceph_lv0",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "lv_size": "21470642176",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "name": "ceph_lv0",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "tags": {
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.cluster_name": "ceph",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.crush_device_class": "",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.encrypted": "0",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.osd_id": "0",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.type": "block",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.vdo": "0"
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            },
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "type": "block",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "vg_name": "ceph_vg0"
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:        }
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:    ],
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:    "1": [
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:        {
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "devices": [
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "/dev/loop4"
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            ],
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "lv_name": "ceph_lv1",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "lv_size": "21470642176",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "name": "ceph_lv1",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "tags": {
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.cluster_name": "ceph",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.crush_device_class": "",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.encrypted": "0",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.osd_id": "1",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.type": "block",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.vdo": "0"
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            },
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "type": "block",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "vg_name": "ceph_vg1"
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:        }
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:    ],
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:    "2": [
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:        {
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "devices": [
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "/dev/loop5"
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            ],
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "lv_name": "ceph_lv2",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "lv_size": "21470642176",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "name": "ceph_lv2",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "tags": {
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.cluster_name": "ceph",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.crush_device_class": "",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.encrypted": "0",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.osd_id": "2",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.type": "block",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:                "ceph.vdo": "0"
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            },
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "type": "block",
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:            "vg_name": "ceph_vg2"
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:        }
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]:    ]
Nov 22 05:11:32 np0005532048 infallible_dirac[432271]: }
Nov 22 05:11:32 np0005532048 systemd[1]: libpod-ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847.scope: Deactivated successfully.
Nov 22 05:11:32 np0005532048 podman[432254]: 2025-11-22 10:11:32.534386137 +0000 UTC m=+0.950208392 container died ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:11:32 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f954cc3771c939f85143f7d08745f5a07dc654e6d445dc9b3c5d42ba90e7e9a0-merged.mount: Deactivated successfully.
Nov 22 05:11:32 np0005532048 podman[432254]: 2025-11-22 10:11:32.590417136 +0000 UTC m=+1.006239381 container remove ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:11:32 np0005532048 systemd[1]: libpod-conmon-ce779c1302a2c1131100bbc72f60d6a246b8df4b8f996ec3b45904d025e75847.scope: Deactivated successfully.
Nov 22 05:11:32 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:33.145+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:33 np0005532048 podman[432434]: 2025-11-22 10:11:33.245502546 +0000 UTC m=+0.057866125 container create 19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:11:33 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:33 np0005532048 systemd[1]: Started libpod-conmon-19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d.scope.
Nov 22 05:11:33 np0005532048 podman[432434]: 2025-11-22 10:11:33.222352427 +0000 UTC m=+0.034716026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:11:33 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:11:33 np0005532048 podman[432434]: 2025-11-22 10:11:33.372135133 +0000 UTC m=+0.184498752 container init 19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:11:33 np0005532048 podman[432434]: 2025-11-22 10:11:33.384100436 +0000 UTC m=+0.196464015 container start 19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 05:11:33 np0005532048 podman[432434]: 2025-11-22 10:11:33.389189112 +0000 UTC m=+0.201552721 container attach 19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:11:33 np0005532048 distracted_fermi[432450]: 167 167
Nov 22 05:11:33 np0005532048 systemd[1]: libpod-19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d.scope: Deactivated successfully.
Nov 22 05:11:33 np0005532048 podman[432434]: 2025-11-22 10:11:33.393016076 +0000 UTC m=+0.205379695 container died 19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 05:11:33 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5bc1b14e2c0fcc27ac34f342671d407a6d5959c51c328a6850c741a9f10e4529-merged.mount: Deactivated successfully.
Nov 22 05:11:33 np0005532048 podman[432434]: 2025-11-22 10:11:33.443750405 +0000 UTC m=+0.256113994 container remove 19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:11:33 np0005532048 systemd[1]: libpod-conmon-19068a60be7a317e147ec5c3d4ec3a8e8d44b72481f33c1e3a581953bf6d131d.scope: Deactivated successfully.
Nov 22 05:11:33 np0005532048 nova_compute[253661]: 2025-11-22 10:11:33.548 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:33 np0005532048 podman[432475]: 2025-11-22 10:11:33.732484319 +0000 UTC m=+0.085478434 container create a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:11:33 np0005532048 podman[432475]: 2025-11-22 10:11:33.695850368 +0000 UTC m=+0.048844533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:11:33 np0005532048 systemd[1]: Started libpod-conmon-a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd.scope.
Nov 22 05:11:33 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:11:33 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1314160e59380cdf33715250f0ab3ab04f5f7cbdd06314efbc534376036d0834/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:11:33 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1314160e59380cdf33715250f0ab3ab04f5f7cbdd06314efbc534376036d0834/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:11:33 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1314160e59380cdf33715250f0ab3ab04f5f7cbdd06314efbc534376036d0834/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:11:33 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1314160e59380cdf33715250f0ab3ab04f5f7cbdd06314efbc534376036d0834/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:11:33 np0005532048 podman[432475]: 2025-11-22 10:11:33.841493662 +0000 UTC m=+0.194487827 container init a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 05:11:33 np0005532048 podman[432475]: 2025-11-22 10:11:33.85401131 +0000 UTC m=+0.207005395 container start a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 05:11:33 np0005532048 podman[432475]: 2025-11-22 10:11:33.858869729 +0000 UTC m=+0.211863894 container attach a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 05:11:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:34.175+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:34 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1270 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:11:34 np0005532048 practical_easley[432491]: {
Nov 22 05:11:34 np0005532048 practical_easley[432491]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:11:34 np0005532048 practical_easley[432491]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:11:34 np0005532048 practical_easley[432491]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:11:34 np0005532048 practical_easley[432491]:        "osd_id": 1,
Nov 22 05:11:34 np0005532048 practical_easley[432491]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:11:34 np0005532048 practical_easley[432491]:        "type": "bluestore"
Nov 22 05:11:34 np0005532048 practical_easley[432491]:    },
Nov 22 05:11:34 np0005532048 practical_easley[432491]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:11:34 np0005532048 practical_easley[432491]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:11:34 np0005532048 practical_easley[432491]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:11:34 np0005532048 practical_easley[432491]:        "osd_id": 0,
Nov 22 05:11:34 np0005532048 practical_easley[432491]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:11:34 np0005532048 practical_easley[432491]:        "type": "bluestore"
Nov 22 05:11:34 np0005532048 practical_easley[432491]:    },
Nov 22 05:11:34 np0005532048 practical_easley[432491]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:11:34 np0005532048 practical_easley[432491]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:11:34 np0005532048 practical_easley[432491]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:11:34 np0005532048 practical_easley[432491]:        "osd_id": 2,
Nov 22 05:11:34 np0005532048 practical_easley[432491]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:11:34 np0005532048 practical_easley[432491]:        "type": "bluestore"
Nov 22 05:11:34 np0005532048 practical_easley[432491]:    }
Nov 22 05:11:34 np0005532048 practical_easley[432491]: }
Nov 22 05:11:34 np0005532048 systemd[1]: libpod-a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd.scope: Deactivated successfully.
Nov 22 05:11:34 np0005532048 systemd[1]: libpod-a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd.scope: Consumed 1.029s CPU time.
Nov 22 05:11:34 np0005532048 podman[432475]: 2025-11-22 10:11:34.878673305 +0000 UTC m=+1.231667390 container died a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_easley, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 05:11:34 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1314160e59380cdf33715250f0ab3ab04f5f7cbdd06314efbc534376036d0834-merged.mount: Deactivated successfully.
Nov 22 05:11:34 np0005532048 podman[432475]: 2025-11-22 10:11:34.943007488 +0000 UTC m=+1.296001583 container remove a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 05:11:34 np0005532048 systemd[1]: libpod-conmon-a6154ed757e35ba0a7124b5fcdfcb0068e0a63ab8ebdf251f615b907f7fafccd.scope: Deactivated successfully.
Nov 22 05:11:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:11:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:11:34 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:11:35 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:11:35 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 7820ef98-6049-46ae-a76f-e78d760ce2c7 does not exist
Nov 22 05:11:35 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c5160e30-e2fc-4d85-a0bc-ac92a0bf6e1b does not exist
Nov 22 05:11:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:35.161+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:35 np0005532048 nova_compute[253661]: 2025-11-22 10:11:35.179 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:35 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:35 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1270 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:35 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:11:35 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:11:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:36.162+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:11:36.181 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 05:11:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:11:36.182 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 05:11:36 np0005532048 nova_compute[253661]: 2025-11-22 10:11:36.182 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:36 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:11:36.451 162862 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:3c:e4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5e:e3:54:1f:a3:7a'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 22 05:11:36 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:11:36.452 162862 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 22 05:11:36 np0005532048 nova_compute[253661]: 2025-11-22 10:11:36.452 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:37.137+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:37 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:38.176+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:38 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:38 np0005532048 nova_compute[253661]: 2025-11-22 10:11:38.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:39.133+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:39 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1275 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:11:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:40.105+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:40 np0005532048 nova_compute[253661]: 2025-11-22 10:11:40.181 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:40 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:40 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1275 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:41.079+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:41 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:42.102+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:42 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:42 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:11:42.454 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 05:11:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:43.065+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:43 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:43 np0005532048 systemd-logind[822]: New session 53 of user zuul.
Nov 22 05:11:43 np0005532048 systemd[1]: Started Session 53 of User zuul.
Nov 22 05:11:43 np0005532048 nova_compute[253661]: 2025-11-22 10:11:43.553 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:44.094+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:44 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1279 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:11:44 np0005532048 systemd[1]: Reloading.
Nov 22 05:11:44 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:11:44 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:11:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:45.051+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:45 np0005532048 nova_compute[253661]: 2025-11-22 10:11:45.183 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:45 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:11:45.184 162862 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=26987bf4-0c95-4db6-9113-da9e4051262c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 22 05:11:45 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:45 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1279 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:45 np0005532048 podman[432785]: 2025-11-22 10:11:45.479108324 +0000 UTC m=+0.066759454 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:11:45 np0005532048 systemd[1]: Reloading.
Nov 22 05:11:45 np0005532048 podman[432784]: 2025-11-22 10:11:45.496565373 +0000 UTC m=+0.089870112 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 05:11:45 np0005532048 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:11:45 np0005532048 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:11:45 np0005532048 systemd[1]: Starting Podman API Socket...
Nov 22 05:11:45 np0005532048 systemd[1]: Listening on Podman API Socket.
Nov 22 05:11:46 np0005532048 dbus-broker-launch[813]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Nov 22 05:11:46 np0005532048 systemd[1]: podman.socket: Deactivated successfully.
Nov 22 05:11:46 np0005532048 systemd[1]: Closed Podman API Socket.
Nov 22 05:11:46 np0005532048 systemd[1]: Stopping Podman API Socket...
Nov 22 05:11:46 np0005532048 systemd[1]: Starting Podman API Socket...
Nov 22 05:11:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:46.044+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:46 np0005532048 systemd[1]: Listening on Podman API Socket.
Nov 22 05:11:46 np0005532048 systemd-logind[822]: New session 54 of user zuul.
Nov 22 05:11:46 np0005532048 systemd[1]: Started Session 54 of User zuul.
Nov 22 05:11:46 np0005532048 systemd[1]: Starting Podman API Service...
Nov 22 05:11:46 np0005532048 systemd[1]: Started Podman API Service.
Nov 22 05:11:46 np0005532048 podman[432884]: time="2025-11-22T10:11:46Z" level=info msg="/usr/bin/podman filtering at log level info"
Nov 22 05:11:46 np0005532048 podman[432884]: time="2025-11-22T10:11:46Z" level=info msg="Setting parallel job count to 25"
Nov 22 05:11:46 np0005532048 podman[432884]: time="2025-11-22T10:11:46Z" level=info msg="Using sqlite as database backend"
Nov 22 05:11:46 np0005532048 podman[432884]: time="2025-11-22T10:11:46Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Nov 22 05:11:46 np0005532048 podman[432884]: time="2025-11-22T10:11:46Z" level=info msg="Using systemd socket activation to determine API endpoint"
Nov 22 05:11:46 np0005532048 podman[432884]: time="2025-11-22T10:11:46Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Nov 22 05:11:46 np0005532048 podman[432884]: @ - - [22/Nov/2025:10:11:46 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Nov 22 05:11:46 np0005532048 podman[432884]: @ - - [22/Nov/2025:10:11:46 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 24897 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Nov 22 05:11:46 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:47.031+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:47 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:47.988+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:48 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:48 np0005532048 nova_compute[253661]: 2025-11-22 10:11:48.556 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:48.954+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:49 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1284 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:11:49 np0005532048 podman[432897]: 2025-11-22 10:11:49.467705981 +0000 UTC m=+0.149328755 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:11:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:49.932+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:50 np0005532048 nova_compute[253661]: 2025-11-22 10:11:50.231 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:50 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:50 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1284 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:50.900+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:51 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:51.904+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:11:52
Nov 22 05:11:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:11:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:11:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.meta', '.mgr', 'vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.log']
Nov 22 05:11:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:11:52 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:11:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:11:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:11:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:11:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:11:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:11:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:52.890+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:53 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:53 np0005532048 nova_compute[253661]: 2025-11-22 10:11:53.558 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:53.860+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:54 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1289 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:11:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:54.886+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:55 np0005532048 nova_compute[253661]: 2025-11-22 10:11:55.233 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:55 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:55 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1289 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:55.916+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:11:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:11:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:11:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:11:56 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:11:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:56.951+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:57 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:57 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:57.982+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:58 np0005532048 nova_compute[253661]: 2025-11-22 10:11:58.595 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:11:58 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:11:58.984+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:11:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:11:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1294 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:11:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:11:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:11:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:11:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:11:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:11:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:11:59 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:11:59 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1294 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:00.011+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:00 np0005532048 nova_compute[253661]: 2025-11-22 10:12:00.235 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:00 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:00.978+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:01 np0005532048 podman[432884]: time="2025-11-22T10:12:01Z" level=info msg="Received shutdown.Stop(), terminating!" PID=432884
Nov 22 05:12:01 np0005532048 systemd[1]: podman.service: Deactivated successfully.
Nov 22 05:12:01 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:01.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:02 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:02.971+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:12:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:12:03 np0005532048 nova_compute[253661]: 2025-11-22 10:12:03.598 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:03 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:03.985+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1299 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:12:04 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:04 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1299 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:04.959+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:05 np0005532048 nova_compute[253661]: 2025-11-22 10:12:05.237 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:05 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:05.967+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:06 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:06.961+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:07 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:07.984+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:08 np0005532048 nova_compute[253661]: 2025-11-22 10:12:08.602 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:08 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:08 np0005532048 systemd[1]: session-53.scope: Deactivated successfully.
Nov 22 05:12:08 np0005532048 systemd[1]: session-53.scope: Consumed 1.393s CPU time.
Nov 22 05:12:08 np0005532048 systemd-logind[822]: Session 53 logged out. Waiting for processes to exit.
Nov 22 05:12:08 np0005532048 systemd-logind[822]: Removed session 53.
Nov 22 05:12:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:09.030+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1304 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:12:09 np0005532048 systemd[1]: session-54.scope: Deactivated successfully.
Nov 22 05:12:09 np0005532048 systemd-logind[822]: Session 54 logged out. Waiting for processes to exit.
Nov 22 05:12:09 np0005532048 systemd-logind[822]: Removed session 54.
Nov 22 05:12:09 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:09 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1304 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:10.007+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:10 np0005532048 nova_compute[253661]: 2025-11-22 10:12:10.238 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:10 np0005532048 nova_compute[253661]: 2025-11-22 10:12:10.388 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:12:10 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:10.987+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:11 np0005532048 nova_compute[253661]: 2025-11-22 10:12:11.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:12:11 np0005532048 nova_compute[253661]: 2025-11-22 10:12:11.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:12:11 np0005532048 nova_compute[253661]: 2025-11-22 10:12:11.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:12:11 np0005532048 nova_compute[253661]: 2025-11-22 10:12:11.248 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.903580) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806331903619, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1515, "num_deletes": 251, "total_data_size": 1794751, "memory_usage": 1829744, "flush_reason": "Manual Compaction"}
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806331916146, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 1754842, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72202, "largest_seqno": 73716, "table_properties": {"data_size": 1748493, "index_size": 3230, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16700, "raw_average_key_size": 20, "raw_value_size": 1734293, "raw_average_value_size": 2173, "num_data_blocks": 142, "num_entries": 798, "num_filter_entries": 798, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806224, "oldest_key_time": 1763806224, "file_creation_time": 1763806331, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 12604 microseconds, and 5428 cpu microseconds.
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.916182) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 1754842 bytes OK
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.916199) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.917697) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.917711) EVENT_LOG_v1 {"time_micros": 1763806331917706, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.917724) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1787903, prev total WAL file size 1787903, number of live WAL files 2.
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.918338) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(1713KB)], [170(10MB)]
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806331918361, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 12463260, "oldest_snapshot_seqno": -1}
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 10098 keys, 11081056 bytes, temperature: kUnknown
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806331979416, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 11081056, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11018922, "index_size": 35781, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25285, "raw_key_size": 268777, "raw_average_key_size": 26, "raw_value_size": 10843245, "raw_average_value_size": 1073, "num_data_blocks": 1356, "num_entries": 10098, "num_filter_entries": 10098, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806331, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.979701) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 11081056 bytes
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.980926) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.8 rd, 181.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.2 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(13.4) write-amplify(6.3) OK, records in: 10612, records dropped: 514 output_compression: NoCompression
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.980948) EVENT_LOG_v1 {"time_micros": 1763806331980937, "job": 106, "event": "compaction_finished", "compaction_time_micros": 61160, "compaction_time_cpu_micros": 34508, "output_level": 6, "num_output_files": 1, "total_output_size": 11081056, "num_input_records": 10612, "num_output_records": 10098, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:12:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:11.979+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806331981608, "job": 106, "event": "table_file_deletion", "file_number": 172}
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806331984081, "job": 106, "event": "table_file_deletion", "file_number": 170}
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.918260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.984146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.984151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.984153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.984155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:12:11 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:12:11.984157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:12:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:12:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3492359398' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:12:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:12:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3492359398' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:12:12 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:13.022+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:13 np0005532048 nova_compute[253661]: 2025-11-22 10:12:13.604 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:13 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:13.977+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1309 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:12:14 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:14 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1309 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:14.979+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:15 np0005532048 nova_compute[253661]: 2025-11-22 10:12:15.239 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:12:15 np0005532048 nova_compute[253661]: 2025-11-22 10:12:15.240 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:15 np0005532048 podman[432974]: 2025-11-22 10:12:15.61317367 +0000 UTC m=+0.058225794 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:12:15 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:15.934+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:16 np0005532048 nova_compute[253661]: 2025-11-22 10:12:16.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:12:16 np0005532048 podman[432993]: 2025-11-22 10:12:16.415138764 +0000 UTC m=+0.094088397 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 22 05:12:16 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:16.941+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:17 np0005532048 nova_compute[253661]: 2025-11-22 10:12:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:12:17 np0005532048 nova_compute[253661]: 2025-11-22 10:12:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:12:17 np0005532048 nova_compute[253661]: 2025-11-22 10:12:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:12:17 np0005532048 nova_compute[253661]: 2025-11-22 10:12:17.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:12:17 np0005532048 nova_compute[253661]: 2025-11-22 10:12:17.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:12:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:17.909+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:17 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:18 np0005532048 nova_compute[253661]: 2025-11-22 10:12:18.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:12:18 np0005532048 nova_compute[253661]: 2025-11-22 10:12:18.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:12:18 np0005532048 nova_compute[253661]: 2025-11-22 10:12:18.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:12:18 np0005532048 nova_compute[253661]: 2025-11-22 10:12:18.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:12:18 np0005532048 nova_compute[253661]: 2025-11-22 10:12:18.256 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:12:18 np0005532048 nova_compute[253661]: 2025-11-22 10:12:18.257 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:12:18 np0005532048 nova_compute[253661]: 2025-11-22 10:12:18.608 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:12:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1450177183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:12:18 np0005532048 nova_compute[253661]: 2025-11-22 10:12:18.751 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:12:18 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:18.958+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:18 np0005532048 nova_compute[253661]: 2025-11-22 10:12:18.985 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:12:18 np0005532048 nova_compute[253661]: 2025-11-22 10:12:18.987 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3564MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:12:18 np0005532048 nova_compute[253661]: 2025-11-22 10:12:18.987 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:12:18 np0005532048 nova_compute[253661]: 2025-11-22 10:12:18.988 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:12:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:19 np0005532048 nova_compute[253661]: 2025-11-22 10:12:19.108 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:12:19 np0005532048 nova_compute[253661]: 2025-11-22 10:12:19.109 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:12:19 np0005532048 nova_compute[253661]: 2025-11-22 10:12:19.195 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 22 05:12:19 np0005532048 nova_compute[253661]: 2025-11-22 10:12:19.258 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 22 05:12:19 np0005532048 nova_compute[253661]: 2025-11-22 10:12:19.259 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 05:12:19 np0005532048 nova_compute[253661]: 2025-11-22 10:12:19.279 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 22 05:12:19 np0005532048 nova_compute[253661]: 2025-11-22 10:12:19.308 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 22 05:12:19 np0005532048 nova_compute[253661]: 2025-11-22 10:12:19.328 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:12:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1314 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:12:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:12:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1263260387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:12:19 np0005532048 nova_compute[253661]: 2025-11-22 10:12:19.851 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:12:19 np0005532048 nova_compute[253661]: 2025-11-22 10:12:19.860 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:12:19 np0005532048 nova_compute[253661]: 2025-11-22 10:12:19.882 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:12:19 np0005532048 nova_compute[253661]: 2025-11-22 10:12:19.885 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:12:19 np0005532048 nova_compute[253661]: 2025-11-22 10:12:19.885 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:12:19 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:19 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1314 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:19.992+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:20 np0005532048 nova_compute[253661]: 2025-11-22 10:12:20.242 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:20 np0005532048 podman[433059]: 2025-11-22 10:12:20.434088969 +0000 UTC m=+0.120862205 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:12:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:20.948+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:20 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:21.983+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:21 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:12:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:12:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:12:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:12:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:12:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:12:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:22.952+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:22 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:23 np0005532048 nova_compute[253661]: 2025-11-22 10:12:23.612 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:23.989+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:23 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1319 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:12:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:24.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:25 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:25 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1319 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:25 np0005532048 nova_compute[253661]: 2025-11-22 10:12:25.243 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:25.942+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:26 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:26.908+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:27 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:27.876+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:12:28.019 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:12:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:12:28.020 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:12:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:12:28.020 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:12:28 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:28 np0005532048 nova_compute[253661]: 2025-11-22 10:12:28.616 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:28.838+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:28 np0005532048 nova_compute[253661]: 2025-11-22 10:12:28.878 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:12:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:29 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1324 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:12:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:29.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:30 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:30 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1324 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:30 np0005532048 nova_compute[253661]: 2025-11-22 10:12:30.244 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:30.917+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:31 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:31.957+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:32 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:32.965+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:33 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:33 np0005532048 nova_compute[253661]: 2025-11-22 10:12:33.655 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:33.932+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:34 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1329 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:12:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:34.976+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:35 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:35 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1329 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:35 np0005532048 nova_compute[253661]: 2025-11-22 10:12:35.246 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:35.951+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:36 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:12:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:12:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:12:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:12:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:12:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:12:36 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 9068e73e-bae2-43d4-a0db-f653d3d8fc97 does not exist
Nov 22 05:12:36 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev f9788f05-4758-4913-b57e-67306c01c5d2 does not exist
Nov 22 05:12:36 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 5479cd4c-f0f8-4dc6-a881-683591428cdc does not exist
Nov 22 05:12:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:12:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:12:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:12:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:12:36 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:12:36 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:12:36 np0005532048 podman[433356]: 2025-11-22 10:12:36.876419363 +0000 UTC m=+0.048912014 container create de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 05:12:36 np0005532048 systemd[1]: Started libpod-conmon-de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b.scope.
Nov 22 05:12:36 np0005532048 podman[433356]: 2025-11-22 10:12:36.856214517 +0000 UTC m=+0.028707128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:12:36 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:12:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:36.983+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:36 np0005532048 podman[433356]: 2025-11-22 10:12:36.988536052 +0000 UTC m=+0.161028683 container init de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:12:37 np0005532048 podman[433356]: 2025-11-22 10:12:37.003981842 +0000 UTC m=+0.176474453 container start de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:12:37 np0005532048 podman[433356]: 2025-11-22 10:12:37.008673858 +0000 UTC m=+0.181166469 container attach de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 05:12:37 np0005532048 reverent_gould[433372]: 167 167
Nov 22 05:12:37 np0005532048 systemd[1]: libpod-de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b.scope: Deactivated successfully.
Nov 22 05:12:37 np0005532048 podman[433356]: 2025-11-22 10:12:37.012598315 +0000 UTC m=+0.185090936 container died de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 22 05:12:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:37 np0005532048 systemd[1]: var-lib-containers-storage-overlay-06cb4860342cf7d6c4d765734a0f131eabc6693c0f3a9c2659a51e89e5e379da-merged.mount: Deactivated successfully.
Nov 22 05:12:37 np0005532048 podman[433356]: 2025-11-22 10:12:37.063982249 +0000 UTC m=+0.236474890 container remove de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:12:37 np0005532048 systemd[1]: libpod-conmon-de923680c09207fb03c56154468b7b4389345e3263b09432e07f928f40349c9b.scope: Deactivated successfully.
Nov 22 05:12:37 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:37 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:12:37 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:12:37 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:12:37 np0005532048 podman[433398]: 2025-11-22 10:12:37.320802759 +0000 UTC m=+0.072811013 container create 81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 05:12:37 np0005532048 systemd[1]: Started libpod-conmon-81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13.scope.
Nov 22 05:12:37 np0005532048 podman[433398]: 2025-11-22 10:12:37.292872422 +0000 UTC m=+0.044880716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:12:37 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:12:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c542fae6334d1eb973c0c1672feb79409a3a00aacf57e5879d4f510ab2259cd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:12:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c542fae6334d1eb973c0c1672feb79409a3a00aacf57e5879d4f510ab2259cd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:12:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c542fae6334d1eb973c0c1672feb79409a3a00aacf57e5879d4f510ab2259cd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:12:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c542fae6334d1eb973c0c1672feb79409a3a00aacf57e5879d4f510ab2259cd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:12:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c542fae6334d1eb973c0c1672feb79409a3a00aacf57e5879d4f510ab2259cd2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:12:37 np0005532048 podman[433398]: 2025-11-22 10:12:37.448614624 +0000 UTC m=+0.200622948 container init 81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:12:37 np0005532048 podman[433398]: 2025-11-22 10:12:37.461124342 +0000 UTC m=+0.213132596 container start 81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_euclid, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 05:12:37 np0005532048 podman[433398]: 2025-11-22 10:12:37.470255347 +0000 UTC m=+0.222263601 container attach 81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 05:12:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:37.948+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:38 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:38 np0005532048 boring_euclid[433415]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:12:38 np0005532048 boring_euclid[433415]: --> relative data size: 1.0
Nov 22 05:12:38 np0005532048 boring_euclid[433415]: --> All data devices are unavailable
Nov 22 05:12:38 np0005532048 systemd[1]: libpod-81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13.scope: Deactivated successfully.
Nov 22 05:12:38 np0005532048 podman[433398]: 2025-11-22 10:12:38.614894833 +0000 UTC m=+1.366903057 container died 81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_euclid, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 05:12:38 np0005532048 systemd[1]: libpod-81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13.scope: Consumed 1.099s CPU time.
Nov 22 05:12:38 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c542fae6334d1eb973c0c1672feb79409a3a00aacf57e5879d4f510ab2259cd2-merged.mount: Deactivated successfully.
Nov 22 05:12:38 np0005532048 nova_compute[253661]: 2025-11-22 10:12:38.658 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:38 np0005532048 podman[433398]: 2025-11-22 10:12:38.675401602 +0000 UTC m=+1.427409816 container remove 81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_euclid, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:12:38 np0005532048 systemd[1]: libpod-conmon-81745483dbfb9ad6aea5407f6f26c295f2de8a16840502e53f3d0d40c09e2d13.scope: Deactivated successfully.
Nov 22 05:12:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:38.933+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:39 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1334 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:12:39 np0005532048 podman[433599]: 2025-11-22 10:12:39.485151588 +0000 UTC m=+0.059151597 container create 81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 05:12:39 np0005532048 systemd[1]: Started libpod-conmon-81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d.scope.
Nov 22 05:12:39 np0005532048 podman[433599]: 2025-11-22 10:12:39.456171575 +0000 UTC m=+0.030171634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:12:39 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:12:39 np0005532048 podman[433599]: 2025-11-22 10:12:39.575161563 +0000 UTC m=+0.149161572 container init 81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 05:12:39 np0005532048 podman[433599]: 2025-11-22 10:12:39.587916847 +0000 UTC m=+0.161916846 container start 81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 05:12:39 np0005532048 podman[433599]: 2025-11-22 10:12:39.59253411 +0000 UTC m=+0.166534159 container attach 81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 22 05:12:39 np0005532048 vigorous_blackwell[433615]: 167 167
Nov 22 05:12:39 np0005532048 systemd[1]: libpod-81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d.scope: Deactivated successfully.
Nov 22 05:12:39 np0005532048 podman[433599]: 2025-11-22 10:12:39.59454228 +0000 UTC m=+0.168542269 container died 81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:12:39 np0005532048 systemd[1]: var-lib-containers-storage-overlay-965f5c9349925a87a657e9498e97c279d372640bfc7ec290bdff3d2d9717181f-merged.mount: Deactivated successfully.
Nov 22 05:12:39 np0005532048 podman[433599]: 2025-11-22 10:12:39.633939839 +0000 UTC m=+0.207939828 container remove 81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:12:39 np0005532048 systemd[1]: libpod-conmon-81246d95770089fccac8aa89d24320bb03e58af8a11daa7a8c75f833b32e169d.scope: Deactivated successfully.
Nov 22 05:12:39 np0005532048 podman[433638]: 2025-11-22 10:12:39.831147582 +0000 UTC m=+0.039601436 container create 42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 05:12:39 np0005532048 systemd[1]: Started libpod-conmon-42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0.scope.
Nov 22 05:12:39 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:12:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:39.904+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32cd46fb6f2251e3364473adec79f68c13e89a1b54bad235d899e5519f676fe6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:12:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32cd46fb6f2251e3364473adec79f68c13e89a1b54bad235d899e5519f676fe6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:12:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32cd46fb6f2251e3364473adec79f68c13e89a1b54bad235d899e5519f676fe6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:12:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32cd46fb6f2251e3364473adec79f68c13e89a1b54bad235d899e5519f676fe6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:12:39 np0005532048 podman[433638]: 2025-11-22 10:12:39.814633526 +0000 UTC m=+0.023087410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:12:39 np0005532048 podman[433638]: 2025-11-22 10:12:39.919703331 +0000 UTC m=+0.128157285 container init 42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 05:12:39 np0005532048 podman[433638]: 2025-11-22 10:12:39.926074937 +0000 UTC m=+0.134528831 container start 42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:12:39 np0005532048 podman[433638]: 2025-11-22 10:12:39.930167178 +0000 UTC m=+0.138621062 container attach 42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 05:12:40 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:40 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1334 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:40 np0005532048 nova_compute[253661]: 2025-11-22 10:12:40.247 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:40 np0005532048 busy_banzai[433654]: {
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:    "0": [
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:        {
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "devices": [
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "/dev/loop3"
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            ],
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "lv_name": "ceph_lv0",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "lv_size": "21470642176",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "name": "ceph_lv0",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "tags": {
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.cluster_name": "ceph",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.crush_device_class": "",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.encrypted": "0",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.osd_id": "0",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.type": "block",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.vdo": "0"
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            },
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "type": "block",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "vg_name": "ceph_vg0"
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:        }
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:    ],
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:    "1": [
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:        {
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "devices": [
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "/dev/loop4"
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            ],
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "lv_name": "ceph_lv1",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "lv_size": "21470642176",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "name": "ceph_lv1",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "tags": {
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.cluster_name": "ceph",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.crush_device_class": "",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.encrypted": "0",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.osd_id": "1",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.type": "block",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.vdo": "0"
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            },
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "type": "block",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "vg_name": "ceph_vg1"
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:        }
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:    ],
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:    "2": [
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:        {
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "devices": [
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "/dev/loop5"
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            ],
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "lv_name": "ceph_lv2",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "lv_size": "21470642176",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "name": "ceph_lv2",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "tags": {
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.cluster_name": "ceph",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.crush_device_class": "",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.encrypted": "0",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.osd_id": "2",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.type": "block",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:                "ceph.vdo": "0"
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            },
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "type": "block",
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:            "vg_name": "ceph_vg2"
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:        }
Nov 22 05:12:40 np0005532048 busy_banzai[433654]:    ]
Nov 22 05:12:40 np0005532048 busy_banzai[433654]: }
Nov 22 05:12:40 np0005532048 systemd[1]: libpod-42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0.scope: Deactivated successfully.
Nov 22 05:12:40 np0005532048 conmon[433654]: conmon 42ed0c6d079d923c8f56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0.scope/container/memory.events
Nov 22 05:12:40 np0005532048 podman[433638]: 2025-11-22 10:12:40.720668791 +0000 UTC m=+0.929122675 container died 42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:12:40 np0005532048 systemd[1]: var-lib-containers-storage-overlay-32cd46fb6f2251e3364473adec79f68c13e89a1b54bad235d899e5519f676fe6-merged.mount: Deactivated successfully.
Nov 22 05:12:40 np0005532048 podman[433638]: 2025-11-22 10:12:40.800216328 +0000 UTC m=+1.008670202 container remove 42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:12:40 np0005532048 systemd[1]: libpod-conmon-42ed0c6d079d923c8f5608d7042f4a11af269ceee4f72e85d16d496f5d2c50f0.scope: Deactivated successfully.
Nov 22 05:12:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:40.861+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:41 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:41 np0005532048 podman[433816]: 2025-11-22 10:12:41.567277414 +0000 UTC m=+0.061614027 container create 9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:12:41 np0005532048 systemd[1]: Started libpod-conmon-9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708.scope.
Nov 22 05:12:41 np0005532048 podman[433816]: 2025-11-22 10:12:41.548383858 +0000 UTC m=+0.042720451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:12:41 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:12:41 np0005532048 podman[433816]: 2025-11-22 10:12:41.661029401 +0000 UTC m=+0.155366024 container init 9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 05:12:41 np0005532048 podman[433816]: 2025-11-22 10:12:41.668015003 +0000 UTC m=+0.162351616 container start 9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:12:41 np0005532048 confident_panini[433833]: 167 167
Nov 22 05:12:41 np0005532048 systemd[1]: libpod-9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708.scope: Deactivated successfully.
Nov 22 05:12:41 np0005532048 podman[433816]: 2025-11-22 10:12:41.672850311 +0000 UTC m=+0.167186934 container attach 9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 05:12:41 np0005532048 conmon[433833]: conmon 9c9d05e00beb573267f3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708.scope/container/memory.events
Nov 22 05:12:41 np0005532048 podman[433816]: 2025-11-22 10:12:41.673830936 +0000 UTC m=+0.168167519 container died 9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:12:41 np0005532048 systemd[1]: var-lib-containers-storage-overlay-47b1d829077b97182da298683884f9fda833ffec312e2cff38efb085361476f2-merged.mount: Deactivated successfully.
Nov 22 05:12:41 np0005532048 podman[433816]: 2025-11-22 10:12:41.721728424 +0000 UTC m=+0.216064997 container remove 9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:12:41 np0005532048 systemd[1]: libpod-conmon-9c9d05e00beb573267f3955c304cf519be677369a28585b376e4245c358aa708.scope: Deactivated successfully.
Nov 22 05:12:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:41.863+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:41 np0005532048 podman[433856]: 2025-11-22 10:12:41.937084544 +0000 UTC m=+0.064941230 container create 0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:12:41 np0005532048 systemd[1]: Started libpod-conmon-0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b.scope.
Nov 22 05:12:42 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:12:42 np0005532048 podman[433856]: 2025-11-22 10:12:41.912118609 +0000 UTC m=+0.039975355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:12:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4cfb51e2ef375f7ed613295d4c45a07b96554ea03f29010819d8a640a63feb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:12:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4cfb51e2ef375f7ed613295d4c45a07b96554ea03f29010819d8a640a63feb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:12:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4cfb51e2ef375f7ed613295d4c45a07b96554ea03f29010819d8a640a63feb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:12:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd4cfb51e2ef375f7ed613295d4c45a07b96554ea03f29010819d8a640a63feb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:12:42 np0005532048 podman[433856]: 2025-11-22 10:12:42.02270046 +0000 UTC m=+0.150557146 container init 0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 05:12:42 np0005532048 podman[433856]: 2025-11-22 10:12:42.029792405 +0000 UTC m=+0.157649091 container start 0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 05:12:42 np0005532048 podman[433856]: 2025-11-22 10:12:42.032980113 +0000 UTC m=+0.160836869 container attach 0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:12:42 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:42.867+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:43 np0005532048 zen_curran[433872]: {
Nov 22 05:12:43 np0005532048 zen_curran[433872]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:12:43 np0005532048 zen_curran[433872]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:12:43 np0005532048 zen_curran[433872]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:12:43 np0005532048 zen_curran[433872]:        "osd_id": 1,
Nov 22 05:12:43 np0005532048 zen_curran[433872]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:12:43 np0005532048 zen_curran[433872]:        "type": "bluestore"
Nov 22 05:12:43 np0005532048 zen_curran[433872]:    },
Nov 22 05:12:43 np0005532048 zen_curran[433872]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:12:43 np0005532048 zen_curran[433872]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:12:43 np0005532048 zen_curran[433872]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:12:43 np0005532048 zen_curran[433872]:        "osd_id": 0,
Nov 22 05:12:43 np0005532048 zen_curran[433872]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:12:43 np0005532048 zen_curran[433872]:        "type": "bluestore"
Nov 22 05:12:43 np0005532048 zen_curran[433872]:    },
Nov 22 05:12:43 np0005532048 zen_curran[433872]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:12:43 np0005532048 zen_curran[433872]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:12:43 np0005532048 zen_curran[433872]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:12:43 np0005532048 zen_curran[433872]:        "osd_id": 2,
Nov 22 05:12:43 np0005532048 zen_curran[433872]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:12:43 np0005532048 zen_curran[433872]:        "type": "bluestore"
Nov 22 05:12:43 np0005532048 zen_curran[433872]:    }
Nov 22 05:12:43 np0005532048 zen_curran[433872]: }
Nov 22 05:12:43 np0005532048 systemd[1]: libpod-0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b.scope: Deactivated successfully.
Nov 22 05:12:43 np0005532048 podman[433856]: 2025-11-22 10:12:43.098697598 +0000 UTC m=+1.226554314 container died 0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curran, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:12:43 np0005532048 systemd[1]: libpod-0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b.scope: Consumed 1.072s CPU time.
Nov 22 05:12:43 np0005532048 systemd[1]: var-lib-containers-storage-overlay-fd4cfb51e2ef375f7ed613295d4c45a07b96554ea03f29010819d8a640a63feb-merged.mount: Deactivated successfully.
Nov 22 05:12:43 np0005532048 podman[433856]: 2025-11-22 10:12:43.163168644 +0000 UTC m=+1.291025360 container remove 0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_curran, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 05:12:43 np0005532048 systemd[1]: libpod-conmon-0d086376b9463c5e9b2829aa616eecb50569053466c8b70a446cb0a82cf5fa1b.scope: Deactivated successfully.
Nov 22 05:12:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:12:43 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:12:43 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:12:43 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:12:43 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 31152732-d2d1-490f-bb7d-5dec00ddb5e2 does not exist
Nov 22 05:12:43 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 9482fd16-014a-4de4-8772-20b46c324a5e does not exist
Nov 22 05:12:43 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:43 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:12:43 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:12:43 np0005532048 nova_compute[253661]: 2025-11-22 10:12:43.662 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:43.846+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:44 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1339 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:12:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:44.825+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:45 np0005532048 nova_compute[253661]: 2025-11-22 10:12:45.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:45 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:45 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1339 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:45.844+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:46 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:46 np0005532048 podman[433968]: 2025-11-22 10:12:46.402653211 +0000 UTC m=+0.081180279 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 05:12:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:46.812+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:47 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:47 np0005532048 podman[433988]: 2025-11-22 10:12:47.386377518 +0000 UTC m=+0.076006732 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 05:12:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:47.773+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:48 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:48 np0005532048 nova_compute[253661]: 2025-11-22 10:12:48.666 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:48.782+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:49 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1344 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:12:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:49.804+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:50 np0005532048 nova_compute[253661]: 2025-11-22 10:12:50.252 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:50 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:50 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1344 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:50.755+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:51 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:51 np0005532048 podman[434009]: 2025-11-22 10:12:51.449336336 +0000 UTC m=+0.135456835 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:12:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:51.763+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:52 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:12:52
Nov 22 05:12:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:12:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:12:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', '.rgw.root', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta']
Nov 22 05:12:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:12:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:12:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:12:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:12:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:12:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:12:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:12:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:52.799+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:53 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:53 np0005532048 nova_compute[253661]: 2025-11-22 10:12:53.670 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:53.822+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:54 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1349 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:12:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:54.860+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:55 np0005532048 nova_compute[253661]: 2025-11-22 10:12:55.253 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:55 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:55 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1349 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:55.905+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:12:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:12:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:12:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:12:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:12:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:56.875+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:56 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:57.841+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:57 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:57 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:58 np0005532048 nova_compute[253661]: 2025-11-22 10:12:58.674 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:12:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:58.869+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:58 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:12:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1354 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:12:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:12:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:12:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:12:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:12:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:12:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:12:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:12:59.829+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:12:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:59 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:12:59 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1354 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:00 np0005532048 nova_compute[253661]: 2025-11-22 10:13:00.280 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:13:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:00.847+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:00 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:01.833+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:01 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:02.793+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:03 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:13:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:13:03 np0005532048 nova_compute[253661]: 2025-11-22 10:13:03.678 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:13:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:03.831+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:04 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1359 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:13:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:04.874+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:05 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:05 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1359 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:05 np0005532048 nova_compute[253661]: 2025-11-22 10:13:05.282 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:13:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:05.846+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:06 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:06.831+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:07 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:07.801+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:08 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:08 np0005532048 nova_compute[253661]: 2025-11-22 10:13:08.682 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:13:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:08.754+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:09 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1364 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:13:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:09.766+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:10 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:10 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1364 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:10 np0005532048 nova_compute[253661]: 2025-11-22 10:13:10.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:13:10 np0005532048 nova_compute[253661]: 2025-11-22 10:13:10.283 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:13:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:10.780+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:11 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:11.798+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:12 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:12 np0005532048 nova_compute[253661]: 2025-11-22 10:13:12.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:13:12 np0005532048 nova_compute[253661]: 2025-11-22 10:13:12.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:13:12 np0005532048 nova_compute[253661]: 2025-11-22 10:13:12.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:13:12 np0005532048 nova_compute[253661]: 2025-11-22 10:13:12.246 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:13:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:13:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/552056989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:13:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:13:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/552056989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:13:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:12.805+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:13 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:13 np0005532048 nova_compute[253661]: 2025-11-22 10:13:13.687 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:13:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:13.771+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:14 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1369 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:13:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:14.773+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:15 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:15 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1369 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:15 np0005532048 nova_compute[253661]: 2025-11-22 10:13:15.284 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:13:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:15.767+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:16 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:16.730+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:17 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:17 np0005532048 nova_compute[253661]: 2025-11-22 10:13:17.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:13:17 np0005532048 nova_compute[253661]: 2025-11-22 10:13:17.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:13:17 np0005532048 podman[434036]: 2025-11-22 10:13:17.379203813 +0000 UTC m=+0.073517289 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 22 05:13:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:17.712+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:18 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:18 np0005532048 nova_compute[253661]: 2025-11-22 10:13:18.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:13:18 np0005532048 nova_compute[253661]: 2025-11-22 10:13:18.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:13:18 np0005532048 nova_compute[253661]: 2025-11-22 10:13:18.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:13:18 np0005532048 nova_compute[253661]: 2025-11-22 10:13:18.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:13:18 np0005532048 nova_compute[253661]: 2025-11-22 10:13:18.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:13:18 np0005532048 nova_compute[253661]: 2025-11-22 10:13:18.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:13:18 np0005532048 nova_compute[253661]: 2025-11-22 10:13:18.261 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:13:18 np0005532048 nova_compute[253661]: 2025-11-22 10:13:18.262 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:13:18 np0005532048 nova_compute[253661]: 2025-11-22 10:13:18.262 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:13:18 np0005532048 nova_compute[253661]: 2025-11-22 10:13:18.263 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:13:18 np0005532048 nova_compute[253661]: 2025-11-22 10:13:18.263 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:13:18 np0005532048 podman[434056]: 2025-11-22 10:13:18.364473338 +0000 UTC m=+0.057902536 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 05:13:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:18.685+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:18 np0005532048 nova_compute[253661]: 2025-11-22 10:13:18.691 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:13:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:13:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1424032644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:13:18 np0005532048 nova_compute[253661]: 2025-11-22 10:13:18.737 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:13:19 np0005532048 nova_compute[253661]: 2025-11-22 10:13:19.010 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:13:19 np0005532048 nova_compute[253661]: 2025-11-22 10:13:19.012 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3569MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:13:19 np0005532048 nova_compute[253661]: 2025-11-22 10:13:19.012 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:13:19 np0005532048 nova_compute[253661]: 2025-11-22 10:13:19.013 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:13:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:19 np0005532048 nova_compute[253661]: 2025-11-22 10:13:19.082 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:13:19 np0005532048 nova_compute[253661]: 2025-11-22 10:13:19.082 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:13:19 np0005532048 nova_compute[253661]: 2025-11-22 10:13:19.100 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:13:19 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1374 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:13:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:13:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3043942726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:13:19 np0005532048 nova_compute[253661]: 2025-11-22 10:13:19.566 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:13:19 np0005532048 nova_compute[253661]: 2025-11-22 10:13:19.573 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:13:19 np0005532048 nova_compute[253661]: 2025-11-22 10:13:19.590 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:13:19 np0005532048 nova_compute[253661]: 2025-11-22 10:13:19.593 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:13:19 np0005532048 nova_compute[253661]: 2025-11-22 10:13:19.593 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:13:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:19.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:20 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:20 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1374 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:20 np0005532048 nova_compute[253661]: 2025-11-22 10:13:20.287 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:13:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:20.662+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:21 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:21.707+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:22 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:22 np0005532048 podman[434117]: 2025-11-22 10:13:22.422859834 +0000 UTC m=+0.117113583 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:13:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:22.714+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:13:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:13:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:13:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:13:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:13:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:13:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:23 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:23.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:23 np0005532048 nova_compute[253661]: 2025-11-22 10:13:23.694 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:13:24 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1379 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:13:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:24.706+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:25 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:25 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1379 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:25 np0005532048 nova_compute[253661]: 2025-11-22 10:13:25.291 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:13:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:25.662+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:26 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:26.695+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:27 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:27.697+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:13:28.021 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:13:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:13:28.021 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:13:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:13:28.021 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:13:28 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:28.680+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:28 np0005532048 nova_compute[253661]: 2025-11-22 10:13:28.699 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:13:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:29 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1384 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:13:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:29.657+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:30 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:30 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1384 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:30 np0005532048 nova_compute[253661]: 2025-11-22 10:13:30.293 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:13:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:30.621+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:31 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:31 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:31.647+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:32 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:32.663+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:33.664+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:33 np0005532048 nova_compute[253661]: 2025-11-22 10:13:33.703 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:13:34 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1389 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:13:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:34.631+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:35 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:35 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1389 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:35 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:35 np0005532048 nova_compute[253661]: 2025-11-22 10:13:35.342 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:13:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:35.652+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:36 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:36.686+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:37.655+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:38 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:38.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:38 np0005532048 nova_compute[253661]: 2025-11-22 10:13:38.707 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:13:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:39 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1394 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:13:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:39.685+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:40 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:40 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1394 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:40 np0005532048 nova_compute[253661]: 2025-11-22 10:13:40.344 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:13:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:40.700+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:41 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:41.717+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:42 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:42.679+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:43 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:43.637+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:43 np0005532048 nova_compute[253661]: 2025-11-22 10:13:43.711 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:13:44 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev e78f120f-5191-4bb2-8090-bb299e4bed44 does not exist
Nov 22 05:13:44 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 460acd00-9db6-4215-b796-0bb402bc0af2 does not exist
Nov 22 05:13:44 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev f4997b02-4213-4855-9246-1870bdba77f7 does not exist
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1399 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.451952) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806424451989, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1322, "num_deletes": 250, "total_data_size": 1511685, "memory_usage": 1546104, "flush_reason": "Manual Compaction"}
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806424459557, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 954256, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73717, "largest_seqno": 75038, "table_properties": {"data_size": 949539, "index_size": 1920, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14793, "raw_average_key_size": 21, "raw_value_size": 938242, "raw_average_value_size": 1375, "num_data_blocks": 85, "num_entries": 682, "num_filter_entries": 682, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806332, "oldest_key_time": 1763806332, "file_creation_time": 1763806424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 7652 microseconds, and 2915 cpu microseconds.
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.459597) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 954256 bytes OK
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.459618) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.461699) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.461716) EVENT_LOG_v1 {"time_micros": 1763806424461711, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.461732) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 1505541, prev total WAL file size 1505541, number of live WAL files 2.
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.462402) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373538' seq:72057594037927935, type:22 .. '6D6772737461740033303039' seq:0, type:0; will stop at (end)
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(931KB)], [173(10MB)]
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806424462439, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 12035312, "oldest_snapshot_seqno": -1}
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 10308 keys, 9222932 bytes, temperature: kUnknown
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806424524629, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 9222932, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9163401, "index_size": 32666, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25797, "raw_key_size": 273977, "raw_average_key_size": 26, "raw_value_size": 8987763, "raw_average_value_size": 871, "num_data_blocks": 1228, "num_entries": 10308, "num_filter_entries": 10308, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.525102) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 9222932 bytes
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.527140) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.6 rd, 147.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.6 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(22.3) write-amplify(9.7) OK, records in: 10780, records dropped: 472 output_compression: NoCompression
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.527166) EVENT_LOG_v1 {"time_micros": 1763806424527154, "job": 108, "event": "compaction_finished", "compaction_time_micros": 62484, "compaction_time_cpu_micros": 39327, "output_level": 6, "num_output_files": 1, "total_output_size": 9222932, "num_input_records": 10780, "num_output_records": 10308, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806424528052, "job": 108, "event": "table_file_deletion", "file_number": 175}
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806424530939, "job": 108, "event": "table_file_deletion", "file_number": 173}
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.462347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.531137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.531143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.531144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.531146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:13:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:13:44.531147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:13:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:44.611+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:44 np0005532048 podman[434415]: 2025-11-22 10:13:44.871386923 +0000 UTC m=+0.038090308 container create 9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:13:44 np0005532048 systemd[1]: Started libpod-conmon-9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b.scope.
Nov 22 05:13:44 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:13:44 np0005532048 podman[434415]: 2025-11-22 10:13:44.854505738 +0000 UTC m=+0.021209153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:13:44 np0005532048 podman[434415]: 2025-11-22 10:13:44.96021727 +0000 UTC m=+0.126920665 container init 9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 05:13:44 np0005532048 podman[434415]: 2025-11-22 10:13:44.968499474 +0000 UTC m=+0.135202849 container start 9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 05:13:44 np0005532048 podman[434415]: 2025-11-22 10:13:44.971517278 +0000 UTC m=+0.138220653 container attach 9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:13:44 np0005532048 adoring_payne[434432]: 167 167
Nov 22 05:13:44 np0005532048 systemd[1]: libpod-9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b.scope: Deactivated successfully.
Nov 22 05:13:44 np0005532048 conmon[434432]: conmon 9834d74f2dbfcf8b4a23 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b.scope/container/memory.events
Nov 22 05:13:44 np0005532048 podman[434415]: 2025-11-22 10:13:44.975127396 +0000 UTC m=+0.141830771 container died 9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:13:45 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e24961de1a563531af831543ac0de4c83edf0fb931256993cc44621c810222ae-merged.mount: Deactivated successfully.
Nov 22 05:13:45 np0005532048 podman[434415]: 2025-11-22 10:13:45.019945839 +0000 UTC m=+0.186649214 container remove 9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_payne, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:13:45 np0005532048 systemd[1]: libpod-conmon-9834d74f2dbfcf8b4a2316b5208c816e57e77d8ffd4cc870e0c1c7a650c7c69b.scope: Deactivated successfully.
Nov 22 05:13:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:45 np0005532048 podman[434456]: 2025-11-22 10:13:45.26299467 +0000 UTC m=+0.077769705 container create 9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 05:13:45 np0005532048 systemd[1]: Started libpod-conmon-9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d.scope.
Nov 22 05:13:45 np0005532048 podman[434456]: 2025-11-22 10:13:45.231757062 +0000 UTC m=+0.046532147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:13:45 np0005532048 nova_compute[253661]: 2025-11-22 10:13:45.346 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:13:45 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:13:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a2c3689177261236954e89c77ae08403cad15035ca5537e63047262c4f7bff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:13:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a2c3689177261236954e89c77ae08403cad15035ca5537e63047262c4f7bff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:13:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a2c3689177261236954e89c77ae08403cad15035ca5537e63047262c4f7bff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:13:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a2c3689177261236954e89c77ae08403cad15035ca5537e63047262c4f7bff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:13:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a2c3689177261236954e89c77ae08403cad15035ca5537e63047262c4f7bff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:13:45 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1399 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:45 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:45 np0005532048 podman[434456]: 2025-11-22 10:13:45.381128757 +0000 UTC m=+0.195903832 container init 9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 22 05:13:45 np0005532048 podman[434456]: 2025-11-22 10:13:45.396534366 +0000 UTC m=+0.211309401 container start 9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:13:45 np0005532048 podman[434456]: 2025-11-22 10:13:45.40236234 +0000 UTC m=+0.217137415 container attach 9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:13:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:45.645+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:46 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:46 np0005532048 agitated_chandrasekhar[434473]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:13:46 np0005532048 agitated_chandrasekhar[434473]: --> relative data size: 1.0
Nov 22 05:13:46 np0005532048 agitated_chandrasekhar[434473]: --> All data devices are unavailable
Nov 22 05:13:46 np0005532048 systemd[1]: libpod-9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d.scope: Deactivated successfully.
Nov 22 05:13:46 np0005532048 systemd[1]: libpod-9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d.scope: Consumed 1.119s CPU time.
Nov 22 05:13:46 np0005532048 podman[434456]: 2025-11-22 10:13:46.564395414 +0000 UTC m=+1.379170439 container died 9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:13:46 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f9a2c3689177261236954e89c77ae08403cad15035ca5537e63047262c4f7bff-merged.mount: Deactivated successfully.
Nov 22 05:13:46 np0005532048 podman[434456]: 2025-11-22 10:13:46.613463352 +0000 UTC m=+1.428238347 container remove 9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chandrasekhar, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 22 05:13:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:46.614+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:46 np0005532048 systemd[1]: libpod-conmon-9855583b4f25c004072635b88203fc91829e06ce0a22d68be9e6fa5ba4a6d49d.scope: Deactivated successfully.
Nov 22 05:13:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:47 np0005532048 podman[434655]: 2025-11-22 10:13:47.233869049 +0000 UTC m=+0.059284001 container create 538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brattain, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 05:13:47 np0005532048 systemd[1]: Started libpod-conmon-538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d.scope.
Nov 22 05:13:47 np0005532048 podman[434655]: 2025-11-22 10:13:47.200845206 +0000 UTC m=+0.026260238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:13:47 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:13:47 np0005532048 podman[434655]: 2025-11-22 10:13:47.327046761 +0000 UTC m=+0.152461723 container init 538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:13:47 np0005532048 podman[434655]: 2025-11-22 10:13:47.334786602 +0000 UTC m=+0.160201564 container start 538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brattain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 05:13:47 np0005532048 podman[434655]: 2025-11-22 10:13:47.338488832 +0000 UTC m=+0.163903804 container attach 538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brattain, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:13:47 np0005532048 nervous_brattain[434671]: 167 167
Nov 22 05:13:47 np0005532048 systemd[1]: libpod-538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d.scope: Deactivated successfully.
Nov 22 05:13:47 np0005532048 conmon[434671]: conmon 538c8e797dcb5ad086e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d.scope/container/memory.events
Nov 22 05:13:47 np0005532048 podman[434655]: 2025-11-22 10:13:47.343167128 +0000 UTC m=+0.168582060 container died 538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:13:47 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e56258037361383fb64f4bfd66dcc2dab993eba206f47d325c9fc4388d3616c9-merged.mount: Deactivated successfully.
Nov 22 05:13:47 np0005532048 podman[434655]: 2025-11-22 10:13:47.397795372 +0000 UTC m=+0.223210344 container remove 538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brattain, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 05:13:47 np0005532048 systemd[1]: libpod-conmon-538c8e797dcb5ad086e6faa4f3686f4fa373ff178535f25849707725ac717f1d.scope: Deactivated successfully.
Nov 22 05:13:47 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:47 np0005532048 podman[434689]: 2025-11-22 10:13:47.513833568 +0000 UTC m=+0.069574734 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 05:13:47 np0005532048 podman[434714]: 2025-11-22 10:13:47.561847399 +0000 UTC m=+0.039492313 container create 48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:13:47 np0005532048 systemd[1]: Started libpod-conmon-48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c.scope.
Nov 22 05:13:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:47.621+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:47 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:13:47 np0005532048 podman[434714]: 2025-11-22 10:13:47.545617379 +0000 UTC m=+0.023262313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:13:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05feb6f71b8de04833a85a633ab29ea57209ffdecb769ccc8cafdab9918bb93b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:13:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05feb6f71b8de04833a85a633ab29ea57209ffdecb769ccc8cafdab9918bb93b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:13:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05feb6f71b8de04833a85a633ab29ea57209ffdecb769ccc8cafdab9918bb93b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:13:47 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05feb6f71b8de04833a85a633ab29ea57209ffdecb769ccc8cafdab9918bb93b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:13:47 np0005532048 podman[434714]: 2025-11-22 10:13:47.658103418 +0000 UTC m=+0.135748332 container init 48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:13:47 np0005532048 podman[434714]: 2025-11-22 10:13:47.667184131 +0000 UTC m=+0.144829045 container start 48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tharp, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 05:13:47 np0005532048 podman[434714]: 2025-11-22 10:13:47.670197915 +0000 UTC m=+0.147842829 container attach 48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]: {
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:    "0": [
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:        {
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "devices": [
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "/dev/loop3"
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            ],
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "lv_name": "ceph_lv0",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "lv_size": "21470642176",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "name": "ceph_lv0",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "tags": {
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.cluster_name": "ceph",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.crush_device_class": "",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.encrypted": "0",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.osd_id": "0",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.type": "block",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.vdo": "0"
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            },
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "type": "block",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "vg_name": "ceph_vg0"
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:        }
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:    ],
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:    "1": [
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:        {
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "devices": [
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "/dev/loop4"
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            ],
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "lv_name": "ceph_lv1",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "lv_size": "21470642176",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "name": "ceph_lv1",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "tags": {
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.cluster_name": "ceph",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.crush_device_class": "",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.encrypted": "0",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.osd_id": "1",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.type": "block",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.vdo": "0"
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            },
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "type": "block",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "vg_name": "ceph_vg1"
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:        }
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:    ],
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:    "2": [
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:        {
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "devices": [
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "/dev/loop5"
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            ],
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "lv_name": "ceph_lv2",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "lv_size": "21470642176",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "name": "ceph_lv2",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "tags": {
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.cluster_name": "ceph",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.crush_device_class": "",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.encrypted": "0",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.osd_id": "2",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.type": "block",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:                "ceph.vdo": "0"
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            },
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "type": "block",
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:            "vg_name": "ceph_vg2"
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:        }
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]:    ]
Nov 22 05:13:48 np0005532048 blissful_tharp[434731]: }
Nov 22 05:13:48 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:48 np0005532048 systemd[1]: libpod-48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c.scope: Deactivated successfully.
Nov 22 05:13:48 np0005532048 podman[434714]: 2025-11-22 10:13:48.496656093 +0000 UTC m=+0.974301057 container died 48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:13:48 np0005532048 systemd[1]: var-lib-containers-storage-overlay-05feb6f71b8de04833a85a633ab29ea57209ffdecb769ccc8cafdab9918bb93b-merged.mount: Deactivated successfully.
Nov 22 05:13:48 np0005532048 podman[434714]: 2025-11-22 10:13:48.566306336 +0000 UTC m=+1.043951250 container remove 48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:13:48 np0005532048 systemd[1]: libpod-conmon-48310fbc3d64434e59a92d170481c32691a79899ab86e611079f45ee67ad8c6c.scope: Deactivated successfully.
Nov 22 05:13:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:48.600+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:48 np0005532048 podman[434741]: 2025-11-22 10:13:48.635088208 +0000 UTC m=+0.100488533 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:13:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:13:48 np0005532048 ceph-osd[88656]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 45K writes, 185K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s#012Cumulative WAL: 45K writes, 15K syncs, 2.91 writes per sync, written: 0.19 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56207e3251f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 mem
Nov 22 05:13:48 np0005532048 nova_compute[253661]: 2025-11-22 10:13:48.775 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:13:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:49 np0005532048 podman[434910]: 2025-11-22 10:13:49.340908417 +0000 UTC m=+0.050146445 container create 31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:13:49 np0005532048 systemd[1]: Started libpod-conmon-31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0.scope.
Nov 22 05:13:49 np0005532048 podman[434910]: 2025-11-22 10:13:49.321214792 +0000 UTC m=+0.030452850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:13:49 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:13:49 np0005532048 podman[434910]: 2025-11-22 10:13:49.436300325 +0000 UTC m=+0.145538433 container init 31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 05:13:49 np0005532048 podman[434910]: 2025-11-22 10:13:49.44747174 +0000 UTC m=+0.156709748 container start 31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 05:13:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1404 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:13:49 np0005532048 podman[434910]: 2025-11-22 10:13:49.451107368 +0000 UTC m=+0.160345456 container attach 31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:13:49 np0005532048 bold_dubinsky[434926]: 167 167
Nov 22 05:13:49 np0005532048 systemd[1]: libpod-31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0.scope: Deactivated successfully.
Nov 22 05:13:49 np0005532048 podman[434910]: 2025-11-22 10:13:49.455282921 +0000 UTC m=+0.164520989 container died 31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:13:49 np0005532048 systemd[1]: var-lib-containers-storage-overlay-9f85c30c4036b57f7569122a474ee51af4c1014c4c42f38104e447dcf3bb300b-merged.mount: Deactivated successfully.
Nov 22 05:13:49 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:49 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1404 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:49 np0005532048 podman[434910]: 2025-11-22 10:13:49.505484207 +0000 UTC m=+0.214722245 container remove 31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:13:49 np0005532048 systemd[1]: libpod-conmon-31c696e9f308b5a5a00d4d47f11bcab9e294b5a7b73ecf9a5a14697720d314d0.scope: Deactivated successfully.
Nov 22 05:13:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:49.647+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:49 np0005532048 podman[434950]: 2025-11-22 10:13:49.700509226 +0000 UTC m=+0.060827168 container create 03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:13:49 np0005532048 systemd[1]: Started libpod-conmon-03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b.scope.
Nov 22 05:13:49 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:13:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66314eaeec1db3084087371296c9bab81620f21d41c97c631447bc9d64541211/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:13:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66314eaeec1db3084087371296c9bab81620f21d41c97c631447bc9d64541211/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:13:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66314eaeec1db3084087371296c9bab81620f21d41c97c631447bc9d64541211/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:13:49 np0005532048 podman[434950]: 2025-11-22 10:13:49.681630971 +0000 UTC m=+0.041948933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:13:49 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66314eaeec1db3084087371296c9bab81620f21d41c97c631447bc9d64541211/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:13:49 np0005532048 podman[434950]: 2025-11-22 10:13:49.785970298 +0000 UTC m=+0.146288280 container init 03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:13:49 np0005532048 podman[434950]: 2025-11-22 10:13:49.79493559 +0000 UTC m=+0.155253532 container start 03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:13:49 np0005532048 podman[434950]: 2025-11-22 10:13:49.798745864 +0000 UTC m=+0.159063806 container attach 03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 05:13:50 np0005532048 nova_compute[253661]: 2025-11-22 10:13:50.347 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:13:50 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:50.610+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]: {
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:        "osd_id": 1,
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:        "type": "bluestore"
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:    },
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:        "osd_id": 0,
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:        "type": "bluestore"
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:    },
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:        "osd_id": 2,
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:        "type": "bluestore"
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]:    }
Nov 22 05:13:50 np0005532048 stoic_herschel[434967]: }
Nov 22 05:13:50 np0005532048 systemd[1]: libpod-03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b.scope: Deactivated successfully.
Nov 22 05:13:50 np0005532048 podman[434950]: 2025-11-22 10:13:50.904951144 +0000 UTC m=+1.265269136 container died 03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:13:50 np0005532048 systemd[1]: libpod-03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b.scope: Consumed 1.117s CPU time.
Nov 22 05:13:50 np0005532048 systemd[1]: var-lib-containers-storage-overlay-66314eaeec1db3084087371296c9bab81620f21d41c97c631447bc9d64541211-merged.mount: Deactivated successfully.
Nov 22 05:13:50 np0005532048 podman[434950]: 2025-11-22 10:13:50.983132097 +0000 UTC m=+1.343450029 container remove 03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:13:50 np0005532048 systemd[1]: libpod-conmon-03d430a6fa251f544e92193a408a6fb06e6c0d8cf4ddbf3ed5accd0f9a063a0b.scope: Deactivated successfully.
Nov 22 05:13:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:13:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:13:51 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:13:51 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:13:51 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev e475411b-83a4-4609-9fb4-1b89c446225d does not exist
Nov 22 05:13:51 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 5ee3d942-9cfc-40b1-8a29-2a8ed3c8e8c8 does not exist
Nov 22 05:13:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:51 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:51 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:13:51 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:13:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:51.615+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:13:52
Nov 22 05:13:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:13:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:13:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'backups', 'volumes']
Nov 22 05:13:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:13:52 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:52.575+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:13:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:13:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:13:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:13:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:13:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:13:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:53 np0005532048 podman[435062]: 2025-11-22 10:13:53.458991002 +0000 UTC m=+0.136820418 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:13:53 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:53.595+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:53 np0005532048 nova_compute[253661]: 2025-11-22 10:13:53.778 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:13:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1409 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:13:54 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:54 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1409 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:54.608+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:55 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:13:55 np0005532048 ceph-osd[89679]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6001.2 total, 600.0 interval#012Cumulative writes: 45K writes, 179K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 45K writes, 16K syncs, 2.83 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 274 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.33              0.00         1    0.328       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6001.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6001.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561381a311f0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6001.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt
Nov 22 05:13:55 np0005532048 nova_compute[253661]: 2025-11-22 10:13:55.350 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:13:55 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:55.630+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:56 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:56.591+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:13:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:13:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:13:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:13:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:13:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:57 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:57.629+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:58 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:58.648+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:58 np0005532048 nova_compute[253661]: 2025-11-22 10:13:58.782 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:13:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:13:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1415 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:13:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:13:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:13:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:13:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:13:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:13:59 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:13:59 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1415 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:13:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:13:59.651+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:13:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:00 np0005532048 nova_compute[253661]: 2025-11-22 10:14:00.351 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:14:00 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 35K writes, 145K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.90 writes per sync, written: 0.14 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 22 05:14:00 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:00.616+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:01 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:01.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:02 np0005532048 ceph-mgr[75315]: [devicehealth INFO root] Check health
Nov 22 05:14:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:02.579+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:02 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:14:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:14:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:03.571+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:03 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:03 np0005532048 nova_compute[253661]: 2025-11-22 10:14:03.786 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1420 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:14:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:04.582+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:04 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:04 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1420 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:05 np0005532048 nova_compute[253661]: 2025-11-22 10:14:05.353 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:05 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:05.616+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.230 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.230 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.232 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.232 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.232 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.233 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.248 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.255 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.256 253665 WARNING nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4#033[00m
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.256 253665 WARNING nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4#033[00m
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.256 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Removable base files: /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4 /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4#033[00m
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.256 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/82db50257fd208421e31241f1b0ae2cc5ee8c9c4#033[00m
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.256 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/1e779d522b09f035efed829c4b66cb9ab2a7bed4#033[00m
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.256 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.256 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.256 253665 DEBUG nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Nov 22 05:14:06 np0005532048 nova_compute[253661]: 2025-11-22 10:14:06.257 253665 INFO nova.virt.libvirt.imagecache [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Nov 22 05:14:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:06.580+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:06 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:07.596+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:07 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:08.602+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:08 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:08 np0005532048 nova_compute[253661]: 2025-11-22 10:14:08.790 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1425 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:14:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:09.614+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:09 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:09 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1425 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:10 np0005532048 nova_compute[253661]: 2025-11-22 10:14:10.355 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:10.640+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:10 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:11 np0005532048 nova_compute[253661]: 2025-11-22 10:14:11.256 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:14:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:11.648+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:11 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:14:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/444475390' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:14:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:14:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/444475390' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:14:12 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:12.683+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:13 np0005532048 nova_compute[253661]: 2025-11-22 10:14:13.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:14:13 np0005532048 nova_compute[253661]: 2025-11-22 10:14:13.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:14:13 np0005532048 nova_compute[253661]: 2025-11-22 10:14:13.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:14:13 np0005532048 nova_compute[253661]: 2025-11-22 10:14:13.240 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:14:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:13.729+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:13 np0005532048 nova_compute[253661]: 2025-11-22 10:14:13.828 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:14:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1429 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:14:14 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:14 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1429 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:14.700+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:15 np0005532048 nova_compute[253661]: 2025-11-22 10:14:15.357 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:14:15 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:15 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:15.711+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:16.667+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:16 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:17 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:17.701+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:18 np0005532048 nova_compute[253661]: 2025-11-22 10:14:18.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:14:18 np0005532048 nova_compute[253661]: 2025-11-22 10:14:18.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:14:18 np0005532048 nova_compute[253661]: 2025-11-22 10:14:18.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:14:18 np0005532048 nova_compute[253661]: 2025-11-22 10:14:18.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:14:18 np0005532048 nova_compute[253661]: 2025-11-22 10:14:18.257 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:14:18 np0005532048 nova_compute[253661]: 2025-11-22 10:14:18.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:14:18 np0005532048 nova_compute[253661]: 2025-11-22 10:14:18.258 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:14:18 np0005532048 nova_compute[253661]: 2025-11-22 10:14:18.258 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 05:14:18 np0005532048 nova_compute[253661]: 2025-11-22 10:14:18.258 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:14:18 np0005532048 podman[435090]: 2025-11-22 10:14:18.427444441 +0000 UTC m=+0.097554392 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:14:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:18.677+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:18 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:18 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:14:18 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1309196864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:14:18 np0005532048 nova_compute[253661]: 2025-11-22 10:14:18.765 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:14:18 np0005532048 nova_compute[253661]: 2025-11-22 10:14:18.831 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:14:19 np0005532048 nova_compute[253661]: 2025-11-22 10:14:19.005 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 05:14:19 np0005532048 nova_compute[253661]: 2025-11-22 10:14:19.007 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3542MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 05:14:19 np0005532048 nova_compute[253661]: 2025-11-22 10:14:19.008 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:14:19 np0005532048 nova_compute[253661]: 2025-11-22 10:14:19.008 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:14:19 np0005532048 nova_compute[253661]: 2025-11-22 10:14:19.075 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:14:19 np0005532048 nova_compute[253661]: 2025-11-22 10:14:19.076 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:14:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:19 np0005532048 nova_compute[253661]: 2025-11-22 10:14:19.099 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:14:19 np0005532048 podman[435150]: 2025-11-22 10:14:19.388396807 +0000 UTC m=+0.076149495 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3)
Nov 22 05:14:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1434 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:14:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:14:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1398911495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:14:19 np0005532048 nova_compute[253661]: 2025-11-22 10:14:19.586 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:14:19 np0005532048 nova_compute[253661]: 2025-11-22 10:14:19.593 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:14:19 np0005532048 nova_compute[253661]: 2025-11-22 10:14:19.604 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:14:19 np0005532048 nova_compute[253661]: 2025-11-22 10:14:19.606 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:14:19 np0005532048 nova_compute[253661]: 2025-11-22 10:14:19.606 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:14:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:19.673+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:19 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:19 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1434 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:20 np0005532048 nova_compute[253661]: 2025-11-22 10:14:20.359 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:14:20 np0005532048 nova_compute[253661]: 2025-11-22 10:14:20.605 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:14:20 np0005532048 nova_compute[253661]: 2025-11-22 10:14:20.605 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:14:20 np0005532048 nova_compute[253661]: 2025-11-22 10:14:20.605 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:14:20 np0005532048 nova_compute[253661]: 2025-11-22 10:14:20.606 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 05:14:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:20.703+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:20 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:21.658+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:21 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:21 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:22.654+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:22 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:14:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:14:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:14:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:14:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:14:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:14:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:23.668+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:23 np0005532048 nova_compute[253661]: 2025-11-22 10:14:23.834 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:24 np0005532048 podman[435174]: 2025-11-22 10:14:24.438132427 +0000 UTC m=+0.133325111 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 22 05:14:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1439 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:14:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:24.655+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:24 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:24 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1439 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:24 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:25 np0005532048 nova_compute[253661]: 2025-11-22 10:14:25.360 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:25.666+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:26.699+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:26 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:27 np0005532048 nova_compute[253661]: 2025-11-22 10:14:27.222 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:14:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:27.663+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:27 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:27 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:14:28.022 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:14:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:14:28.023 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:14:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:14:28.023 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:14:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:28.674+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:28 np0005532048 nova_compute[253661]: 2025-11-22 10:14:28.836 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1444 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:14:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:29.709+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:29 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:29 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1444 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:30 np0005532048 nova_compute[253661]: 2025-11-22 10:14:30.362 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:30.711+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:30 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:31.679+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:31 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:32 np0005532048 nova_compute[253661]: 2025-11-22 10:14:32.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:14:32 np0005532048 nova_compute[253661]: 2025-11-22 10:14:32.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 22 05:14:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:32.637+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:32 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:32 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:33.627+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:33 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:33 np0005532048 nova_compute[253661]: 2025-11-22 10:14:33.840 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1449 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:14:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:34.646+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:34 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1449 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:34 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:35 np0005532048 nova_compute[253661]: 2025-11-22 10:14:35.364 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:35.667+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:36.670+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:36 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:37.638+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:37 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:37 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:38 np0005532048 nova_compute[253661]: 2025-11-22 10:14:38.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:14:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:38.596+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:38 np0005532048 nova_compute[253661]: 2025-11-22 10:14:38.844 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:38 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1454 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:14:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:39.569+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:39 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1454 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:39 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:14:40 np0005532048 nova_compute[253661]: 2025-11-22 10:14:40.367 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:40.593+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:40 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:41.570+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:41 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:42.537+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:42 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 05:14:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:43.500+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:43 np0005532048 nova_compute[253661]: 2025-11-22 10:14:43.847 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:43 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1460 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.466004) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806484466034, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 932, "num_deletes": 257, "total_data_size": 993086, "memory_usage": 1010328, "flush_reason": "Manual Compaction"}
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806484473786, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 978110, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75039, "largest_seqno": 75970, "table_properties": {"data_size": 973776, "index_size": 1857, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11057, "raw_average_key_size": 19, "raw_value_size": 964334, "raw_average_value_size": 1740, "num_data_blocks": 82, "num_entries": 554, "num_filter_entries": 554, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806425, "oldest_key_time": 1763806425, "file_creation_time": 1763806484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 7819 microseconds, and 3708 cpu microseconds.
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.473820) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 978110 bytes OK
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.473838) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.475522) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.475538) EVENT_LOG_v1 {"time_micros": 1763806484475534, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.475555) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 988441, prev total WAL file size 988441, number of live WAL files 2.
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.476092) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353136' seq:72057594037927935, type:22 .. '6C6F676D0033373639' seq:0, type:0; will stop at (end)
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(955KB)], [176(9006KB)]
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806484476160, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 10201042, "oldest_snapshot_seqno": -1}
Nov 22 05:14:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:44.525+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 10337 keys, 10060879 bytes, temperature: kUnknown
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806484526585, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 10060879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10000044, "index_size": 33906, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25861, "raw_key_size": 275907, "raw_average_key_size": 26, "raw_value_size": 9822773, "raw_average_value_size": 950, "num_data_blocks": 1278, "num_entries": 10337, "num_filter_entries": 10337, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.526851) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 10060879 bytes
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.528038) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.0 rd, 199.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.8 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(20.7) write-amplify(10.3) OK, records in: 10862, records dropped: 525 output_compression: NoCompression
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.528053) EVENT_LOG_v1 {"time_micros": 1763806484528046, "job": 110, "event": "compaction_finished", "compaction_time_micros": 50500, "compaction_time_cpu_micros": 27274, "output_level": 6, "num_output_files": 1, "total_output_size": 10060879, "num_input_records": 10862, "num_output_records": 10337, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806484528446, "job": 110, "event": "table_file_deletion", "file_number": 178}
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806484529940, "job": 110, "event": "table_file_deletion", "file_number": 176}
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.475985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.529989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.529992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.529993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.529995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:44.529996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 1460 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:44 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:14:45 np0005532048 nova_compute[253661]: 2025-11-22 10:14:45.369 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:45.521+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:45 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:46.504+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:46 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:14:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:47.540+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:47 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:48.566+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:48 np0005532048 nova_compute[253661]: 2025-11-22 10:14:48.851 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:48 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:14:49 np0005532048 podman[435203]: 2025-11-22 10:14:49.355863776 +0000 UTC m=+0.055155628 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:14:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1465 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:14:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:49.535+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:49 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 1465 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:49 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:50 np0005532048 nova_compute[253661]: 2025-11-22 10:14:50.372 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:50 np0005532048 podman[435223]: 2025-11-22 10:14:50.374190404 +0000 UTC m=+0.067577303 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:14:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:50.537+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:50 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:14:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:51.505+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:51 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:14:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:14:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:14:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:14:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:14:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:14:52 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 784831a1-432e-4828-8479-a492076fc052 does not exist
Nov 22 05:14:52 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 219825e0-58bf-4a96-a0c8-8308240a8647 does not exist
Nov 22 05:14:52 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 8b10c3d3-bddc-4693-932f-ad1bd6ced888 does not exist
Nov 22 05:14:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:14:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:14:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:14:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:14:52 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:14:52 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:14:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:14:52
Nov 22 05:14:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:14:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:14:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'backups', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.control', '.rgw.root', 'images']
Nov 22 05:14:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:14:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:52.511+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:52 np0005532048 podman[435514]: 2025-11-22 10:14:52.650877237 +0000 UTC m=+0.036015006 container create 21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:14:52 np0005532048 systemd[1]: Started libpod-conmon-21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e.scope.
Nov 22 05:14:52 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:14:52 np0005532048 podman[435514]: 2025-11-22 10:14:52.634585847 +0000 UTC m=+0.019723626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:14:52 np0005532048 podman[435514]: 2025-11-22 10:14:52.739100378 +0000 UTC m=+0.124238167 container init 21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:14:52 np0005532048 podman[435514]: 2025-11-22 10:14:52.745267131 +0000 UTC m=+0.130404890 container start 21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 05:14:52 np0005532048 podman[435514]: 2025-11-22 10:14:52.748824258 +0000 UTC m=+0.133962037 container attach 21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:14:52 np0005532048 exciting_jackson[435531]: 167 167
Nov 22 05:14:52 np0005532048 systemd[1]: libpod-21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e.scope: Deactivated successfully.
Nov 22 05:14:52 np0005532048 podman[435514]: 2025-11-22 10:14:52.751548215 +0000 UTC m=+0.136685974 container died 21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:14:52 np0005532048 systemd[1]: var-lib-containers-storage-overlay-76e5fee803d649883e9bb8f0d2222aa05492f7aad9ce02d9726f1dbc9ce366ea-merged.mount: Deactivated successfully.
Nov 22 05:14:52 np0005532048 podman[435514]: 2025-11-22 10:14:52.793911458 +0000 UTC m=+0.179049227 container remove 21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:14:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:14:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:14:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:14:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:14:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:14:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:14:52 np0005532048 systemd[1]: libpod-conmon-21fe9c3f9a755aff2a54e849d7514f645baa81c15776430be23aa4601bd1d57e.scope: Deactivated successfully.
Nov 22 05:14:52 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:14:52 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:14:52 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:14:52 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:52 np0005532048 podman[435553]: 2025-11-22 10:14:52.964748422 +0000 UTC m=+0.040172140 container create 2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ishizaka, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 05:14:52 np0005532048 systemd[1]: Started libpod-conmon-2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a.scope.
Nov 22 05:14:53 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:14:53 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09cdb6ce22970e0381ed40d612982c8cf7dd9c3a7170c3b7b2c03b638ac3990c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:14:53 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09cdb6ce22970e0381ed40d612982c8cf7dd9c3a7170c3b7b2c03b638ac3990c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:14:53 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09cdb6ce22970e0381ed40d612982c8cf7dd9c3a7170c3b7b2c03b638ac3990c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:14:53 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09cdb6ce22970e0381ed40d612982c8cf7dd9c3a7170c3b7b2c03b638ac3990c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:14:53 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09cdb6ce22970e0381ed40d612982c8cf7dd9c3a7170c3b7b2c03b638ac3990c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:14:53 np0005532048 podman[435553]: 2025-11-22 10:14:53.034380314 +0000 UTC m=+0.109804052 container init 2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ishizaka, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:14:53 np0005532048 podman[435553]: 2025-11-22 10:14:52.946073101 +0000 UTC m=+0.021496849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:14:53 np0005532048 podman[435553]: 2025-11-22 10:14:53.044803281 +0000 UTC m=+0.120226999 container start 2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 22 05:14:53 np0005532048 podman[435553]: 2025-11-22 10:14:53.049786843 +0000 UTC m=+0.125210581 container attach 2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:14:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:14:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:53.527+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:53 np0005532048 nova_compute[253661]: 2025-11-22 10:14:53.889 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:53 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:54 np0005532048 agitated_ishizaka[435569]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:14:54 np0005532048 agitated_ishizaka[435569]: --> relative data size: 1.0
Nov 22 05:14:54 np0005532048 agitated_ishizaka[435569]: --> All data devices are unavailable
Nov 22 05:14:54 np0005532048 systemd[1]: libpod-2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a.scope: Deactivated successfully.
Nov 22 05:14:54 np0005532048 podman[435553]: 2025-11-22 10:14:54.113268314 +0000 UTC m=+1.188692042 container died 2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ishizaka, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 05:14:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay-09cdb6ce22970e0381ed40d612982c8cf7dd9c3a7170c3b7b2c03b638ac3990c-merged.mount: Deactivated successfully.
Nov 22 05:14:54 np0005532048 podman[435553]: 2025-11-22 10:14:54.169963029 +0000 UTC m=+1.245386767 container remove 2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:14:54 np0005532048 systemd[1]: libpod-conmon-2de80026f44b11a60282c37dd83bbe5c8cf343d5d07810c955cfdbce6c60dd0a.scope: Deactivated successfully.
Nov 22 05:14:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1470 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:14:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:54.490+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:54 np0005532048 podman[435709]: 2025-11-22 10:14:54.633523595 +0000 UTC m=+0.131934317 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 05:14:54 np0005532048 podman[435779]: 2025-11-22 10:14:54.855709803 +0000 UTC m=+0.039434091 container create 3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sinoussi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:14:54 np0005532048 systemd[1]: Started libpod-conmon-3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d.scope.
Nov 22 05:14:54 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:14:54 np0005532048 podman[435779]: 2025-11-22 10:14:54.837757001 +0000 UTC m=+0.021481329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:14:54 np0005532048 podman[435779]: 2025-11-22 10:14:54.93804582 +0000 UTC m=+0.121770148 container init 3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 05:14:54 np0005532048 podman[435779]: 2025-11-22 10:14:54.946096918 +0000 UTC m=+0.129821206 container start 3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:14:54 np0005532048 podman[435779]: 2025-11-22 10:14:54.949372598 +0000 UTC m=+0.133096936 container attach 3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 05:14:54 np0005532048 vigorous_sinoussi[435794]: 167 167
Nov 22 05:14:54 np0005532048 systemd[1]: libpod-3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d.scope: Deactivated successfully.
Nov 22 05:14:54 np0005532048 podman[435779]: 2025-11-22 10:14:54.951338336 +0000 UTC m=+0.135062644 container died 3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sinoussi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:14:54 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 1470 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:54 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:54 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c76fc2089509c928ced8fff1506ef593ee42fb8ebfe8c5b45f2889103c28a544-merged.mount: Deactivated successfully.
Nov 22 05:14:54 np0005532048 podman[435779]: 2025-11-22 10:14:54.992203163 +0000 UTC m=+0.175927491 container remove 3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 05:14:55 np0005532048 systemd[1]: libpod-conmon-3dbe855569d6bb9c0c4ca15e649dc23bb5c81db9294da5ddd4d13a012d9ed69d.scope: Deactivated successfully.
Nov 22 05:14:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 22 05:14:55 np0005532048 podman[435818]: 2025-11-22 10:14:55.149966894 +0000 UTC m=+0.044116086 container create da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 05:14:55 np0005532048 systemd[1]: Started libpod-conmon-da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046.scope.
Nov 22 05:14:55 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:14:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d11903722171e0f7bb8238934c93bc75fb12a93430db8274aac8bbead147d58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:14:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d11903722171e0f7bb8238934c93bc75fb12a93430db8274aac8bbead147d58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:14:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d11903722171e0f7bb8238934c93bc75fb12a93430db8274aac8bbead147d58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:14:55 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d11903722171e0f7bb8238934c93bc75fb12a93430db8274aac8bbead147d58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:14:55 np0005532048 podman[435818]: 2025-11-22 10:14:55.131280645 +0000 UTC m=+0.025429837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:14:55 np0005532048 podman[435818]: 2025-11-22 10:14:55.229209914 +0000 UTC m=+0.123359076 container init da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 22 05:14:55 np0005532048 podman[435818]: 2025-11-22 10:14:55.236602626 +0000 UTC m=+0.130751788 container start da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:14:55 np0005532048 podman[435818]: 2025-11-22 10:14:55.239659271 +0000 UTC m=+0.133808433 container attach da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:14:55 np0005532048 nova_compute[253661]: 2025-11-22 10:14:55.373 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:55.527+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]: {
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:    "0": [
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:        {
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "devices": [
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "/dev/loop3"
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            ],
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "lv_name": "ceph_lv0",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "lv_size": "21470642176",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "name": "ceph_lv0",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "tags": {
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.cluster_name": "ceph",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.crush_device_class": "",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.encrypted": "0",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.osd_id": "0",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.type": "block",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.vdo": "0"
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            },
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "type": "block",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "vg_name": "ceph_vg0"
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:        }
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:    ],
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:    "1": [
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:        {
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "devices": [
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "/dev/loop4"
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            ],
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "lv_name": "ceph_lv1",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "lv_size": "21470642176",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "name": "ceph_lv1",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "tags": {
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.cluster_name": "ceph",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.crush_device_class": "",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.encrypted": "0",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.osd_id": "1",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.type": "block",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.vdo": "0"
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            },
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "type": "block",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "vg_name": "ceph_vg1"
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:        }
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:    ],
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:    "2": [
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:        {
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "devices": [
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "/dev/loop5"
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            ],
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "lv_name": "ceph_lv2",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "lv_size": "21470642176",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "name": "ceph_lv2",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "tags": {
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.cluster_name": "ceph",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.crush_device_class": "",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.encrypted": "0",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.osd_id": "2",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.type": "block",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:                "ceph.vdo": "0"
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            },
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "type": "block",
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:            "vg_name": "ceph_vg2"
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:        }
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]:    ]
Nov 22 05:14:55 np0005532048 nice_heyrovsky[435834]: }
Nov 22 05:14:55 np0005532048 systemd[1]: libpod-da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046.scope: Deactivated successfully.
Nov 22 05:14:55 np0005532048 podman[435818]: 2025-11-22 10:14:55.977388795 +0000 UTC m=+0.871537977 container died da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:14:55 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-5d11903722171e0f7bb8238934c93bc75fb12a93430db8274aac8bbead147d58-merged.mount: Deactivated successfully.
Nov 22 05:14:56 np0005532048 podman[435818]: 2025-11-22 10:14:56.036463688 +0000 UTC m=+0.930612850 container remove da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 22 05:14:56 np0005532048 systemd[1]: libpod-conmon-da02157f95df7faf74eb60f280b5dea1856d0ac83d92b9ceaae1af0e4a276046.scope: Deactivated successfully.
Nov 22 05:14:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:56.567+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:56 np0005532048 podman[435997]: 2025-11-22 10:14:56.688087344 +0000 UTC m=+0.051577641 container create d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 05:14:56 np0005532048 systemd[1]: Started libpod-conmon-d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688.scope.
Nov 22 05:14:56 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:14:56 np0005532048 podman[435997]: 2025-11-22 10:14:56.764973276 +0000 UTC m=+0.128463573 container init d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ellis, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:14:56 np0005532048 podman[435997]: 2025-11-22 10:14:56.670599133 +0000 UTC m=+0.034089420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:14:56 np0005532048 podman[435997]: 2025-11-22 10:14:56.773225449 +0000 UTC m=+0.136715756 container start d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 05:14:56 np0005532048 podman[435997]: 2025-11-22 10:14:56.777147675 +0000 UTC m=+0.140637992 container attach d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 05:14:56 np0005532048 gallant_ellis[436013]: 167 167
Nov 22 05:14:56 np0005532048 systemd[1]: libpod-d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688.scope: Deactivated successfully.
Nov 22 05:14:56 np0005532048 conmon[436013]: conmon d403b08139431563078b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688.scope/container/memory.events
Nov 22 05:14:56 np0005532048 podman[435997]: 2025-11-22 10:14:56.780984809 +0000 UTC m=+0.144475096 container died d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:14:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:14:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:14:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:14:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:14:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:14:56 np0005532048 systemd[1]: var-lib-containers-storage-overlay-b45814c6baed6f671b5d73afd854a243c743a1392e5dc84d94c0c46c83ff67bf-merged.mount: Deactivated successfully.
Nov 22 05:14:56 np0005532048 podman[435997]: 2025-11-22 10:14:56.819957108 +0000 UTC m=+0.183447375 container remove d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ellis, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:14:56 np0005532048 systemd[1]: libpod-conmon-d403b08139431563078b47b417edcb787e3480c9fd123cfa8ed4f50060589688.scope: Deactivated successfully.
Nov 22 05:14:56 np0005532048 podman[436036]: 2025-11-22 10:14:56.969480908 +0000 UTC m=+0.040022066 container create 14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:14:56 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:57 np0005532048 systemd[1]: Started libpod-conmon-14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98.scope.
Nov 22 05:14:57 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:14:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34ad18c11f4ddbe783ae8e2e94bdedde0c3af65b6c2b2a4ccd81cdcb4dada4f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:14:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34ad18c11f4ddbe783ae8e2e94bdedde0c3af65b6c2b2a4ccd81cdcb4dada4f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:14:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34ad18c11f4ddbe783ae8e2e94bdedde0c3af65b6c2b2a4ccd81cdcb4dada4f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:14:57 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34ad18c11f4ddbe783ae8e2e94bdedde0c3af65b6c2b2a4ccd81cdcb4dada4f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:14:57 np0005532048 podman[436036]: 2025-11-22 10:14:57.043490069 +0000 UTC m=+0.114031277 container init 14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ardinghelli, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:14:57 np0005532048 podman[436036]: 2025-11-22 10:14:56.953698659 +0000 UTC m=+0.024239827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:14:57 np0005532048 podman[436036]: 2025-11-22 10:14:57.049174759 +0000 UTC m=+0.119715917 container start 14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:14:57 np0005532048 podman[436036]: 2025-11-22 10:14:57.052185423 +0000 UTC m=+0.122726641 container attach 14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:14:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:57.556+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]: {
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:        "osd_id": 1,
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:        "type": "bluestore"
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:    },
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:        "osd_id": 0,
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:        "type": "bluestore"
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:    },
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:        "osd_id": 2,
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:        "type": "bluestore"
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]:    }
Nov 22 05:14:58 np0005532048 focused_ardinghelli[436052]: }
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.017913) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806498017939, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 422, "num_deletes": 251, "total_data_size": 237660, "memory_usage": 245320, "flush_reason": "Manual Compaction"}
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806498021083, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 234151, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75971, "largest_seqno": 76392, "table_properties": {"data_size": 231743, "index_size": 443, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6423, "raw_average_key_size": 19, "raw_value_size": 226777, "raw_average_value_size": 678, "num_data_blocks": 20, "num_entries": 334, "num_filter_entries": 334, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806484, "oldest_key_time": 1763806484, "file_creation_time": 1763806498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 3209 microseconds, and 1052 cpu microseconds.
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.021118) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 234151 bytes OK
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.021138) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.022345) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.022362) EVENT_LOG_v1 {"time_micros": 1763806498022356, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.022379) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 234983, prev total WAL file size 234983, number of live WAL files 2.
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.022742) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(228KB)], [179(9825KB)]
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806498022795, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 10295030, "oldest_snapshot_seqno": -1}
Nov 22 05:14:58 np0005532048 systemd[1]: libpod-14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98.scope: Deactivated successfully.
Nov 22 05:14:58 np0005532048 podman[436036]: 2025-11-22 10:14:58.040225286 +0000 UTC m=+1.110766444 container died 14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 10160 keys, 8915210 bytes, temperature: kUnknown
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806498076244, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 8915210, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8856403, "index_size": 32326, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25413, "raw_key_size": 273035, "raw_average_key_size": 26, "raw_value_size": 8682946, "raw_average_value_size": 854, "num_data_blocks": 1205, "num_entries": 10160, "num_filter_entries": 10160, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:14:58 np0005532048 systemd[1]: var-lib-containers-storage-overlay-34ad18c11f4ddbe783ae8e2e94bdedde0c3af65b6c2b2a4ccd81cdcb4dada4f2-merged.mount: Deactivated successfully.
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.076605) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 8915210 bytes
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.081081) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.1 rd, 166.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.6 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(82.0) write-amplify(38.1) OK, records in: 10671, records dropped: 511 output_compression: NoCompression
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.081110) EVENT_LOG_v1 {"time_micros": 1763806498081098, "job": 112, "event": "compaction_finished", "compaction_time_micros": 53596, "compaction_time_cpu_micros": 28771, "output_level": 6, "num_output_files": 1, "total_output_size": 8915210, "num_input_records": 10671, "num_output_records": 10160, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806498081431, "job": 112, "event": "table_file_deletion", "file_number": 181}
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806498083298, "job": 112, "event": "table_file_deletion", "file_number": 179}
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.022688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.083383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.083390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.083392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.083395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:14:58.083396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:14:58 np0005532048 podman[436036]: 2025-11-22 10:14:58.105441471 +0000 UTC m=+1.175982629 container remove 14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:14:58 np0005532048 systemd[1]: libpod-conmon-14372510e5f1c6fe8d824e9cfc2ba905f972615905394d61d8ea7aa8e365df98.scope: Deactivated successfully.
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:14:58 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:14:58 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 704c40b2-59d9-4ab1-b4d9-0c4f843f0ddb does not exist
Nov 22 05:14:58 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 1f63fa22-3fa0-46f5-91a6-d5ffcdc02885 does not exist
Nov 22 05:14:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:58.553+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:58 np0005532048 nova_compute[253661]: 2025-11-22 10:14:58.891 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:14:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:14:59 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:14:59 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:14:59 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:14:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:14:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:14:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:14:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1475 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:14:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:14:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:14:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:14:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:14:59.603+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:14:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:00 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 1475 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:00 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:00 np0005532048 nova_compute[253661]: 2025-11-22 10:15:00.375 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:15:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:00.591+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:15:01 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:01.621+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:02 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:02.573+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:15:03 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:15:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:15:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:03.601+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:03 np0005532048 nova_compute[253661]: 2025-11-22 10:15:03.894 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:15:04 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1479 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:15:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:04.609+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:15:05 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 1479 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:05 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:05 np0005532048 nova_compute[253661]: 2025-11-22 10:15:05.378 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:15:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:05.593+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:06 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:06.627+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:15:07 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:07.605+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:08 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:08.583+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:08 np0005532048 nova_compute[253661]: 2025-11-22 10:15:08.898 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:15:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:15:09 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:09 np0005532048 nova_compute[253661]: 2025-11-22 10:15:09.242 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:15:09 np0005532048 nova_compute[253661]: 2025-11-22 10:15:09.243 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 22 05:15:09 np0005532048 nova_compute[253661]: 2025-11-22 10:15:09.263 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 22 05:15:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 1484 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:15:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:09.551+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:10 np0005532048 ceph-mon[75021]: Health check update: 1 slow ops, oldest one blocked for 1484 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:10 np0005532048 ceph-mon[75021]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Nov 22 05:15:10 np0005532048 nova_compute[253661]: 2025-11-22 10:15:10.382 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:15:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:10.592+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:15:11 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:11.578+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:12 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:15:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2732250456' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:15:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:15:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2732250456' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:15:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:12.572+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:15:13 np0005532048 nova_compute[253661]: 2025-11-22 10:15:13.249 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:15:13 np0005532048 nova_compute[253661]: 2025-11-22 10:15:13.249 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:15:13 np0005532048 nova_compute[253661]: 2025-11-22 10:15:13.249 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:15:13 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:13 np0005532048 nova_compute[253661]: 2025-11-22 10:15:13.298 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:15:13 np0005532048 nova_compute[253661]: 2025-11-22 10:15:13.298 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:15:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:13.605+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:13 np0005532048 nova_compute[253661]: 2025-11-22 10:15:13.901 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:15:14 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 1490 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:15:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:14.619+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:15:15 np0005532048 ceph-mon[75021]: Health check update: 8 slow ops, oldest one blocked for 1490 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:15 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:15 np0005532048 nova_compute[253661]: 2025-11-22 10:15:15.384 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:15:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:15.610+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:16 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:16.570+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:15:17 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:17.610+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:18 np0005532048 nova_compute[253661]: 2025-11-22 10:15:18.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:15:18 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:18.565+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:18 np0005532048 nova_compute[253661]: 2025-11-22 10:15:18.906 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:15:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3409: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:15:19 np0005532048 nova_compute[253661]: 2025-11-22 10:15:19.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:15:19 np0005532048 nova_compute[253661]: 2025-11-22 10:15:19.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:15:19 np0005532048 nova_compute[253661]: 2025-11-22 10:15:19.261 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:15:19 np0005532048 nova_compute[253661]: 2025-11-22 10:15:19.262 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:15:19 np0005532048 nova_compute[253661]: 2025-11-22 10:15:19.262 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:15:19 np0005532048 nova_compute[253661]: 2025-11-22 10:15:19.263 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:15:19 np0005532048 nova_compute[253661]: 2025-11-22 10:15:19.263 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:15:19 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 1495 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:15:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:19.561+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:15:19 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735788013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:15:19 np0005532048 nova_compute[253661]: 2025-11-22 10:15:19.694 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:15:19 np0005532048 nova_compute[253661]: 2025-11-22 10:15:19.838 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:15:19 np0005532048 nova_compute[253661]: 2025-11-22 10:15:19.840 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3549MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:15:19 np0005532048 nova_compute[253661]: 2025-11-22 10:15:19.840 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:15:19 np0005532048 nova_compute[253661]: 2025-11-22 10:15:19.840 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:15:19 np0005532048 nova_compute[253661]: 2025-11-22 10:15:19.896 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:15:19 np0005532048 nova_compute[253661]: 2025-11-22 10:15:19.897 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:15:19 np0005532048 nova_compute[253661]: 2025-11-22 10:15:19.915 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:15:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:15:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1337356490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:15:20 np0005532048 ceph-mon[75021]: Health check update: 8 slow ops, oldest one blocked for 1495 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:20 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:20 np0005532048 nova_compute[253661]: 2025-11-22 10:15:20.339 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:15:20 np0005532048 nova_compute[253661]: 2025-11-22 10:15:20.345 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:15:20 np0005532048 nova_compute[253661]: 2025-11-22 10:15:20.358 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:15:20 np0005532048 nova_compute[253661]: 2025-11-22 10:15:20.360 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:15:20 np0005532048 nova_compute[253661]: 2025-11-22 10:15:20.360 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:15:20 np0005532048 podman[436190]: 2025-11-22 10:15:20.36907643 +0000 UTC m=+0.059785933 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 05:15:20 np0005532048 nova_compute[253661]: 2025-11-22 10:15:20.386 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:15:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:20.606+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:15:21 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:21 np0005532048 nova_compute[253661]: 2025-11-22 10:15:21.352 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:15:21 np0005532048 nova_compute[253661]: 2025-11-22 10:15:21.353 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:15:21 np0005532048 nova_compute[253661]: 2025-11-22 10:15:21.353 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:15:21 np0005532048 nova_compute[253661]: 2025-11-22 10:15:21.353 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:15:21 np0005532048 nova_compute[253661]: 2025-11-22 10:15:21.353 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:15:21 np0005532048 podman[436211]: 2025-11-22 10:15:21.370114873 +0000 UTC m=+0.067043740 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 05:15:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:21.607+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:22 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:22.565+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:15:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:15:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:15:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:15:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:15:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:15:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3411: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Nov 22 05:15:23 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:23.564+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:23 np0005532048 nova_compute[253661]: 2025-11-22 10:15:23.909 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:15:24 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 1500 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:15:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:24.533+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 29 op/s
Nov 22 05:15:25 np0005532048 ceph-mon[75021]: Health check update: 8 slow ops, oldest one blocked for 1500 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:25 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:25 np0005532048 nova_compute[253661]: 2025-11-22 10:15:25.388 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:15:25 np0005532048 podman[436231]: 2025-11-22 10:15:25.468293549 +0000 UTC m=+0.153281992 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 05:15:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:25.551+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:26 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:26.509+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3413: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 22 05:15:27 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:27.536+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:15:28.024 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:15:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:15:28.024 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:15:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:15:28.024 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:15:28 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:28.577+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:28 np0005532048 nova_compute[253661]: 2025-11-22 10:15:28.914 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:15:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:15:29 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 1505 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:15:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:29.598+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:30 np0005532048 nova_compute[253661]: 2025-11-22 10:15:30.391 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:15:30 np0005532048 ceph-mon[75021]: Health check update: 8 slow ops, oldest one blocked for 1505 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:30 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:30.566+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:15:31 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:31.570+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:32 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:32.566+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3416: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:15:33 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:33.583+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:33 np0005532048 nova_compute[253661]: 2025-11-22 10:15:33.919 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:15:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 1510 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:15:34 np0005532048 nova_compute[253661]: 2025-11-22 10:15:34.476 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:15:34 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:34.540+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Nov 22 05:15:35 np0005532048 nova_compute[253661]: 2025-11-22 10:15:35.395 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:15:35 np0005532048 ceph-mon[75021]: Health check update: 8 slow ops, oldest one blocked for 1510 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:35 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:35.568+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:36 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:36.571+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 22 05:15:37 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:37.568+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:38 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:38.566+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:38 np0005532048 nova_compute[253661]: 2025-11-22 10:15:38.939 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:15:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 05:15:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 1515 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:15:39 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:39 np0005532048 ceph-mon[75021]: Health check update: 8 slow ops, oldest one blocked for 1515 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:39.591+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:40 np0005532048 nova_compute[253661]: 2025-11-22 10:15:40.396 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:15:40 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:40.581+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3420: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:15:41 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:41.594+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:42 np0005532048 ceph-mon[75021]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'images' : 8 ])
Nov 22 05:15:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:42.601+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:15:43 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:43.583+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:43 np0005532048 nova_compute[253661]: 2025-11-22 10:15:43.942 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:15:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 8 slow ops, oldest one blocked for 1520 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:15:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:44.551+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:44 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:44 np0005532048 ceph-mon[75021]: Health check update: 8 slow ops, oldest one blocked for 1520 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:15:45 np0005532048 nova_compute[253661]: 2025-11-22 10:15:45.397 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:15:45 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:45.592+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:46 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:46.600+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:15:47 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:47.626+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:48 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:48.607+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:48 np0005532048 nova_compute[253661]: 2025-11-22 10:15:48.948 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:15:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3424: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:15:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1525 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:15:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:49.605+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:49 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:49 np0005532048 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1525 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:50 np0005532048 nova_compute[253661]: 2025-11-22 10:15:50.399 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:15:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:50.585+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:50 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:15:51 np0005532048 podman[436258]: 2025-11-22 10:15:51.412247344 +0000 UTC m=+0.093628255 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:15:51 np0005532048 podman[436278]: 2025-11-22 10:15:51.487331981 +0000 UTC m=+0.070930686 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 05:15:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:51.586+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:51 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:15:52
Nov 22 05:15:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:15:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:15:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'vms', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.data']
Nov 22 05:15:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:15:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:52.574+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:52 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:15:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:15:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:15:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:15:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:15:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:15:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3426: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:15:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:53.608+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:53 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:53 np0005532048 nova_compute[253661]: 2025-11-22 10:15:53.952 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:15:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1530 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:15:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:54.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:54 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:54 np0005532048 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1530 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:15:55 np0005532048 nova_compute[253661]: 2025-11-22 10:15:55.400 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:15:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:55.629+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:55 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:56 np0005532048 podman[436298]: 2025-11-22 10:15:56.480358524 +0000 UTC m=+0.171731416 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 05:15:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:56.667+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:15:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:15:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:15:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:15:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:15:56 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:56 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:15:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:57.671+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:57 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:58.701+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:15:58 np0005532048 nova_compute[253661]: 2025-11-22 10:15:58.956 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:15:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:15:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:15:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:15:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:15:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:15:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:15:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:15:59 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d3d10e2d-d582-420f-a74a-3043b23a8a66 does not exist
Nov 22 05:15:59 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev dbe6789a-f403-4fb1-bb9d-02efa18211d0 does not exist
Nov 22 05:15:59 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 8bed83a2-01a5-43bc-8467-eacf572bd937 does not exist
Nov 22 05:15:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:15:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:15:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:15:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:15:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:15:59 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:15:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:15:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:15:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:15:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:15:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:15:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1534 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:15:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:15:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:15:59.750+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:15:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:16:00 np0005532048 podman[436601]: 2025-11-22 10:16:00.030162017 +0000 UTC m=+0.059202758 container create f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 05:16:00 np0005532048 systemd[1]: Started libpod-conmon-f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655.scope.
Nov 22 05:16:00 np0005532048 podman[436601]: 2025-11-22 10:15:59.999050031 +0000 UTC m=+0.028090812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:16:00 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:16:00 np0005532048 podman[436601]: 2025-11-22 10:16:00.151625705 +0000 UTC m=+0.180666446 container init f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kapitsa, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:16:00 np0005532048 podman[436601]: 2025-11-22 10:16:00.165539638 +0000 UTC m=+0.194580339 container start f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 05:16:00 np0005532048 podman[436601]: 2025-11-22 10:16:00.169904855 +0000 UTC m=+0.198945696 container attach f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 05:16:00 np0005532048 systemd[1]: libpod-f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655.scope: Deactivated successfully.
Nov 22 05:16:00 np0005532048 boring_kapitsa[436617]: 167 167
Nov 22 05:16:00 np0005532048 podman[436601]: 2025-11-22 10:16:00.17740517 +0000 UTC m=+0.206445921 container died f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 05:16:00 np0005532048 conmon[436617]: conmon f09fff5d57da1600c040 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655.scope/container/memory.events
Nov 22 05:16:00 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:16:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:16:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:16:00 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:16:00 np0005532048 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1534 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:00 np0005532048 systemd[1]: var-lib-containers-storage-overlay-bf9e65bd2428ba5846091f943d4f1e7cd24bd56f429de2b54280ac862adc8f38-merged.mount: Deactivated successfully.
Nov 22 05:16:00 np0005532048 podman[436601]: 2025-11-22 10:16:00.226911988 +0000 UTC m=+0.255952689 container remove f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:16:00 np0005532048 systemd[1]: libpod-conmon-f09fff5d57da1600c040d735330a005005a4ea42ae2219d56e9c28982c7ab655.scope: Deactivated successfully.
Nov 22 05:16:00 np0005532048 nova_compute[253661]: 2025-11-22 10:16:00.402 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:00 np0005532048 podman[436641]: 2025-11-22 10:16:00.418866612 +0000 UTC m=+0.064497308 container create f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 05:16:00 np0005532048 systemd[1]: Started libpod-conmon-f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe.scope.
Nov 22 05:16:00 np0005532048 podman[436641]: 2025-11-22 10:16:00.386460914 +0000 UTC m=+0.032091710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:16:00 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:16:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18469bc6827ff6cf415ae91f68c2fe20053503438b2c547114acfbb439deceb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:16:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18469bc6827ff6cf415ae91f68c2fe20053503438b2c547114acfbb439deceb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:16:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18469bc6827ff6cf415ae91f68c2fe20053503438b2c547114acfbb439deceb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:16:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18469bc6827ff6cf415ae91f68c2fe20053503438b2c547114acfbb439deceb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:16:00 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18469bc6827ff6cf415ae91f68c2fe20053503438b2c547114acfbb439deceb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:16:00 np0005532048 podman[436641]: 2025-11-22 10:16:00.530663203 +0000 UTC m=+0.176293989 container init f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:16:00 np0005532048 podman[436641]: 2025-11-22 10:16:00.54071058 +0000 UTC m=+0.186341276 container start f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_meninsky, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:16:00 np0005532048 podman[436641]: 2025-11-22 10:16:00.544806251 +0000 UTC m=+0.190436977 container attach f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 05:16:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:00.764+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:16:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:01 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:16:01 np0005532048 suspicious_meninsky[436658]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:16:01 np0005532048 suspicious_meninsky[436658]: --> relative data size: 1.0
Nov 22 05:16:01 np0005532048 suspicious_meninsky[436658]: --> All data devices are unavailable
Nov 22 05:16:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:01.784+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:16:01 np0005532048 systemd[1]: libpod-f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe.scope: Deactivated successfully.
Nov 22 05:16:01 np0005532048 systemd[1]: libpod-f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe.scope: Consumed 1.205s CPU time.
Nov 22 05:16:01 np0005532048 podman[436687]: 2025-11-22 10:16:01.856868127 +0000 UTC m=+0.032346637 container died f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:16:01 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d18469bc6827ff6cf415ae91f68c2fe20053503438b2c547114acfbb439deceb-merged.mount: Deactivated successfully.
Nov 22 05:16:01 np0005532048 podman[436687]: 2025-11-22 10:16:01.943690833 +0000 UTC m=+0.119169333 container remove f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Nov 22 05:16:01 np0005532048 systemd[1]: libpod-conmon-f90e201b0d07fc1abc13afb6b8c0bbabb9bda30674222f2713f2a47a112467fe.scope: Deactivated successfully.
Nov 22 05:16:02 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:16:02 np0005532048 podman[436842]: 2025-11-22 10:16:02.730857173 +0000 UTC m=+0.047747406 container create 4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 05:16:02 np0005532048 systemd[1]: Started libpod-conmon-4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846.scope.
Nov 22 05:16:02 np0005532048 podman[436842]: 2025-11-22 10:16:02.708117784 +0000 UTC m=+0.025008077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:16:02 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:16:02 np0005532048 podman[436842]: 2025-11-22 10:16:02.824072617 +0000 UTC m=+0.140962900 container init 4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yalow, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:16:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:16:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:02.823+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:02 np0005532048 podman[436842]: 2025-11-22 10:16:02.837721394 +0000 UTC m=+0.154611627 container start 4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yalow, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 05:16:02 np0005532048 podman[436842]: 2025-11-22 10:16:02.841760143 +0000 UTC m=+0.158650416 container attach 4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 05:16:02 np0005532048 strange_yalow[436858]: 167 167
Nov 22 05:16:02 np0005532048 systemd[1]: libpod-4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846.scope: Deactivated successfully.
Nov 22 05:16:02 np0005532048 podman[436842]: 2025-11-22 10:16:02.847037932 +0000 UTC m=+0.163928205 container died 4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:16:02 np0005532048 systemd[1]: var-lib-containers-storage-overlay-d576643373b2f41a77f98ea8c13e2f11aa5f53db7b8791aef791a9b773362fdd-merged.mount: Deactivated successfully.
Nov 22 05:16:02 np0005532048 podman[436842]: 2025-11-22 10:16:02.897849443 +0000 UTC m=+0.214739706 container remove 4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 05:16:02 np0005532048 systemd[1]: libpod-conmon-4a10e12c16a8400b1e493b85868bbf754a75aa2470df627fd690f00f06c17846.scope: Deactivated successfully.
Nov 22 05:16:03 np0005532048 podman[436881]: 2025-11-22 10:16:03.111695055 +0000 UTC m=+0.051688692 container create 484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3431: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:03 np0005532048 systemd[1]: Started libpod-conmon-484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad.scope.
Nov 22 05:16:03 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:16:03 np0005532048 podman[436881]: 2025-11-22 10:16:03.09035495 +0000 UTC m=+0.030348637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:16:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1270e178a3d4e4df5161cce3f3b423c5b2dd3d245f0078e8b597ab67186b99ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:16:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1270e178a3d4e4df5161cce3f3b423c5b2dd3d245f0078e8b597ab67186b99ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:16:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1270e178a3d4e4df5161cce3f3b423c5b2dd3d245f0078e8b597ab67186b99ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:16:03 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1270e178a3d4e4df5161cce3f3b423c5b2dd3d245f0078e8b597ab67186b99ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:16:03 np0005532048 podman[436881]: 2025-11-22 10:16:03.206302393 +0000 UTC m=+0.146296070 container init 484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:16:03 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:16:03 np0005532048 podman[436881]: 2025-11-22 10:16:03.22608857 +0000 UTC m=+0.166082217 container start 484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 05:16:03 np0005532048 podman[436881]: 2025-11-22 10:16:03.230501358 +0000 UTC m=+0.170495045 container attach 484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:16:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:16:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:03.869+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]: {
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:    "0": [
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:        {
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "devices": [
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "/dev/loop3"
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            ],
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "lv_name": "ceph_lv0",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "lv_size": "21470642176",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "name": "ceph_lv0",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "tags": {
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.cluster_name": "ceph",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.crush_device_class": "",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.encrypted": "0",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.osd_id": "0",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.type": "block",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.vdo": "0"
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            },
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "type": "block",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "vg_name": "ceph_vg0"
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:        }
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:    ],
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:    "1": [
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:        {
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "devices": [
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "/dev/loop4"
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            ],
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "lv_name": "ceph_lv1",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "lv_size": "21470642176",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "name": "ceph_lv1",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "tags": {
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.cluster_name": "ceph",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.crush_device_class": "",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.encrypted": "0",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.osd_id": "1",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.type": "block",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.vdo": "0"
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            },
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "type": "block",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "vg_name": "ceph_vg1"
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:        }
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:    ],
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:    "2": [
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:        {
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "devices": [
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "/dev/loop5"
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            ],
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "lv_name": "ceph_lv2",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "lv_size": "21470642176",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "name": "ceph_lv2",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "tags": {
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.cluster_name": "ceph",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.crush_device_class": "",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.encrypted": "0",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.osd_id": "2",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.type": "block",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:                "ceph.vdo": "0"
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            },
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "type": "block",
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:            "vg_name": "ceph_vg2"
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:        }
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]:    ]
Nov 22 05:16:03 np0005532048 practical_hamilton[436897]: }
Nov 22 05:16:03 np0005532048 nova_compute[253661]: 2025-11-22 10:16:03.964 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:04 np0005532048 systemd[1]: libpod-484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad.scope: Deactivated successfully.
Nov 22 05:16:04 np0005532048 podman[436906]: 2025-11-22 10:16:04.082488524 +0000 UTC m=+0.046128236 container died 484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:16:04 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1270e178a3d4e4df5161cce3f3b423c5b2dd3d245f0078e8b597ab67186b99ea-merged.mount: Deactivated successfully.
Nov 22 05:16:04 np0005532048 podman[436906]: 2025-11-22 10:16:04.170543521 +0000 UTC m=+0.134183173 container remove 484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:16:04 np0005532048 systemd[1]: libpod-conmon-484771767bd00bd3031cc2ad55404ede7a44203fcb0544beb1e5e3f2f50aa0ad.scope: Deactivated successfully.
Nov 22 05:16:04 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:16:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1539 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:16:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:04.881+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:16:05 np0005532048 podman[437060]: 2025-11-22 10:16:05.063878173 +0000 UTC m=+0.063506004 container create e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 05:16:05 np0005532048 systemd[1]: Started libpod-conmon-e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e.scope.
Nov 22 05:16:05 np0005532048 podman[437060]: 2025-11-22 10:16:05.03896172 +0000 UTC m=+0.038589561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:16:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:05 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:16:05 np0005532048 podman[437060]: 2025-11-22 10:16:05.169908332 +0000 UTC m=+0.169536213 container init e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 05:16:05 np0005532048 podman[437060]: 2025-11-22 10:16:05.17792786 +0000 UTC m=+0.177555691 container start e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:16:05 np0005532048 podman[437060]: 2025-11-22 10:16:05.18281689 +0000 UTC m=+0.182444721 container attach e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:16:05 np0005532048 keen_northcutt[437076]: 167 167
Nov 22 05:16:05 np0005532048 systemd[1]: libpod-e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e.scope: Deactivated successfully.
Nov 22 05:16:05 np0005532048 conmon[437076]: conmon e572701ba8623c739b96 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e.scope/container/memory.events
Nov 22 05:16:05 np0005532048 podman[437060]: 2025-11-22 10:16:05.187810233 +0000 UTC m=+0.187438094 container died e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:16:05 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4a6a35b430456d49a0869519f473ae0cb7889f0d7cf874b67a6c9eaee4c19612-merged.mount: Deactivated successfully.
Nov 22 05:16:05 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:16:05 np0005532048 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1539 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:05 np0005532048 podman[437060]: 2025-11-22 10:16:05.246367994 +0000 UTC m=+0.245995825 container remove e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 05:16:05 np0005532048 systemd[1]: libpod-conmon-e572701ba8623c739b96cff59bbbfa78c769de063b514f4a9b08b04459237b3e.scope: Deactivated successfully.
Nov 22 05:16:05 np0005532048 nova_compute[253661]: 2025-11-22 10:16:05.404 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:05 np0005532048 podman[437102]: 2025-11-22 10:16:05.491052345 +0000 UTC m=+0.065925753 container create f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 05:16:05 np0005532048 systemd[1]: Started libpod-conmon-f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6.scope.
Nov 22 05:16:05 np0005532048 podman[437102]: 2025-11-22 10:16:05.459432537 +0000 UTC m=+0.034306005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:16:05 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:16:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad75daefa8bfca262676d81246b5089149c796592bf7e13ccb585d6d1817219d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:16:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad75daefa8bfca262676d81246b5089149c796592bf7e13ccb585d6d1817219d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:16:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad75daefa8bfca262676d81246b5089149c796592bf7e13ccb585d6d1817219d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:16:05 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad75daefa8bfca262676d81246b5089149c796592bf7e13ccb585d6d1817219d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:16:05 np0005532048 podman[437102]: 2025-11-22 10:16:05.611261473 +0000 UTC m=+0.186134941 container init f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:16:05 np0005532048 podman[437102]: 2025-11-22 10:16:05.625676908 +0000 UTC m=+0.200550316 container start f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:16:05 np0005532048 podman[437102]: 2025-11-22 10:16:05.631094821 +0000 UTC m=+0.205968329 container attach f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 05:16:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:05.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:16:06 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:16:06 np0005532048 strange_haslett[437118]: {
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:        "osd_id": 1,
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:        "type": "bluestore"
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:    },
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:        "osd_id": 0,
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:        "type": "bluestore"
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:    },
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:        "osd_id": 2,
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:        "type": "bluestore"
Nov 22 05:16:06 np0005532048 strange_haslett[437118]:    }
Nov 22 05:16:06 np0005532048 strange_haslett[437118]: }
Nov 22 05:16:06 np0005532048 systemd[1]: libpod-f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6.scope: Deactivated successfully.
Nov 22 05:16:06 np0005532048 systemd[1]: libpod-f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6.scope: Consumed 1.106s CPU time.
Nov 22 05:16:06 np0005532048 podman[437102]: 2025-11-22 10:16:06.722946119 +0000 UTC m=+1.297819587 container died f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 05:16:06 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ad75daefa8bfca262676d81246b5089149c796592bf7e13ccb585d6d1817219d-merged.mount: Deactivated successfully.
Nov 22 05:16:06 np0005532048 podman[437102]: 2025-11-22 10:16:06.810514733 +0000 UTC m=+1.385388121 container remove f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_haslett, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:16:06 np0005532048 systemd[1]: libpod-conmon-f6e83cbab15658c1e44edbf15b20fb3fb2024d891695814d513687185f5cbeb6.scope: Deactivated successfully.
Nov 22 05:16:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:06.858+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:16:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:16:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:16:06 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:16:06 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:16:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 40e64a35-5f29-4658-b683-e9c266dd9e08 does not exist
Nov 22 05:16:06 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 5c18801e-501b-42c2-9ff4-e4fc82d2b874 does not exist
Nov 22 05:16:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:07 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:16:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:16:07 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:16:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:07.869+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:08 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:16:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:08.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:08 np0005532048 nova_compute[253661]: 2025-11-22 10:16:08.968 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:09 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1544 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:16:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:09.897+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:10 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:10 np0005532048 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1544 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:10 np0005532048 nova_compute[253661]: 2025-11-22 10:16:10.407 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:10.879+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:11 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:11.888+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:12 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:16:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2054248413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:16:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:16:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2054248413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:16:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:12.845+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:13 np0005532048 nova_compute[253661]: 2025-11-22 10:16:13.254 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:16:13 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:13.823+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:13 np0005532048 nova_compute[253661]: 2025-11-22 10:16:13.972 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:14 np0005532048 nova_compute[253661]: 2025-11-22 10:16:14.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:16:14 np0005532048 nova_compute[253661]: 2025-11-22 10:16:14.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:16:14 np0005532048 nova_compute[253661]: 2025-11-22 10:16:14.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:16:14 np0005532048 nova_compute[253661]: 2025-11-22 10:16:14.277 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:16:14 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1549 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:16:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:14.777+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:15 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:15 np0005532048 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 1549 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:15 np0005532048 nova_compute[253661]: 2025-11-22 10:16:15.409 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:15.738+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:16 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:16.750+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:17 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:17.702+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:18 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:18.665+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:18 np0005532048 nova_compute[253661]: 2025-11-22 10:16:18.975 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3439: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:19 np0005532048 nova_compute[253661]: 2025-11-22 10:16:19.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:16:19 np0005532048 nova_compute[253661]: 2025-11-22 10:16:19.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:16:19 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:19 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1554 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:16:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:19.637+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:20 np0005532048 nova_compute[253661]: 2025-11-22 10:16:20.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:16:20 np0005532048 nova_compute[253661]: 2025-11-22 10:16:20.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:16:20 np0005532048 nova_compute[253661]: 2025-11-22 10:16:20.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:16:20 np0005532048 nova_compute[253661]: 2025-11-22 10:16:20.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:16:20 np0005532048 nova_compute[253661]: 2025-11-22 10:16:20.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:16:20 np0005532048 nova_compute[253661]: 2025-11-22 10:16:20.265 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:16:20 np0005532048 nova_compute[253661]: 2025-11-22 10:16:20.265 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:16:20 np0005532048 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 1554 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:20 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:20 np0005532048 nova_compute[253661]: 2025-11-22 10:16:20.410 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:20.634+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:20 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:16:20 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3486489532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:16:20 np0005532048 nova_compute[253661]: 2025-11-22 10:16:20.761 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:16:20 np0005532048 nova_compute[253661]: 2025-11-22 10:16:20.948 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:16:20 np0005532048 nova_compute[253661]: 2025-11-22 10:16:20.949 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3501MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:16:20 np0005532048 nova_compute[253661]: 2025-11-22 10:16:20.950 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:16:20 np0005532048 nova_compute[253661]: 2025-11-22 10:16:20.950 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:16:21 np0005532048 nova_compute[253661]: 2025-11-22 10:16:21.003 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:16:21 np0005532048 nova_compute[253661]: 2025-11-22 10:16:21.003 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:16:21 np0005532048 nova_compute[253661]: 2025-11-22 10:16:21.029 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:16:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:21 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:16:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1469726142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:16:21 np0005532048 nova_compute[253661]: 2025-11-22 10:16:21.504 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:16:21 np0005532048 nova_compute[253661]: 2025-11-22 10:16:21.513 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:16:21 np0005532048 nova_compute[253661]: 2025-11-22 10:16:21.534 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:16:21 np0005532048 nova_compute[253661]: 2025-11-22 10:16:21.537 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:16:21 np0005532048 nova_compute[253661]: 2025-11-22 10:16:21.537 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:16:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:21.655+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:22 np0005532048 podman[437257]: 2025-11-22 10:16:22.357603825 +0000 UTC m=+0.053000866 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:16:22 np0005532048 podman[437258]: 2025-11-22 10:16:22.366210836 +0000 UTC m=+0.059212548 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:16:22 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:22 np0005532048 nova_compute[253661]: 2025-11-22 10:16:22.539 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:16:22 np0005532048 nova_compute[253661]: 2025-11-22 10:16:22.539 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:16:22 np0005532048 nova_compute[253661]: 2025-11-22 10:16:22.540 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:16:22 np0005532048 nova_compute[253661]: 2025-11-22 10:16:22.540 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 05:16:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:22.704+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:16:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:16:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:16:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:16:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:16:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:16:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:23.672+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:23 np0005532048 nova_compute[253661]: 2025-11-22 10:16:23.980 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:16:24 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:24 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1559 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:16:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:24.622+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:25 np0005532048 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 1559 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:25 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:25 np0005532048 nova_compute[253661]: 2025-11-22 10:16:25.412 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:16:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:25.595+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:26 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:26.585+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:27 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:27 np0005532048 podman[437294]: 2025-11-22 10:16:27.419427122 +0000 UTC m=+0.110344376 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 05:16:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:27.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:16:28.025 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:16:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:16:28.026 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:16:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:16:28.026 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:16:28 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:28.551+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:29 np0005532048 nova_compute[253661]: 2025-11-22 10:16:29.026 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:16:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3444: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:29 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1565 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:16:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:29.517+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:30 np0005532048 nova_compute[253661]: 2025-11-22 10:16:30.413 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:16:30 np0005532048 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 1565 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:30 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:30.508+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3445: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:31 np0005532048 nova_compute[253661]: 2025-11-22 10:16:31.222 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:16:31 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:31.500+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:32 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:32.548+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:33 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:33.548+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:34 np0005532048 nova_compute[253661]: 2025-11-22 10:16:34.029 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:16:34 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1570 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:16:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:34.547+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:35 np0005532048 nova_compute[253661]: 2025-11-22 10:16:35.416 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:16:35 np0005532048 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 1570 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:35 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:35.568+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:36 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:36.521+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:37.506+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:37 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:38.488+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:38 np0005532048 ceph-mon[75021]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'images' : 3 ])
Nov 22 05:16:38 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:39 np0005532048 nova_compute[253661]: 2025-11-22 10:16:39.031 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 3 slow ops, oldest one blocked for 1575 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:16:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:39.515+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:39 np0005532048 ceph-mon[75021]: Health check update: 3 slow ops, oldest one blocked for 1575 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:39 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:40 np0005532048 nova_compute[253661]: 2025-11-22 10:16:40.418 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:40.546+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:40 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3450: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:41.544+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:41 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:42.500+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:42 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3451: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:43.480+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:43 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:44 np0005532048 nova_compute[253661]: 2025-11-22 10:16:44.035 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:44.452+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1580 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:16:44 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:44 np0005532048 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1580 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3452: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:16:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:45.412+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:45 np0005532048 nova_compute[253661]: 2025-11-22 10:16:45.419 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:45 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:46.366+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:46 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3453: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:16:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:47.363+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:47 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:48.377+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:48 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:49 np0005532048 nova_compute[253661]: 2025-11-22 10:16:49.087 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3454: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:16:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:49.348+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1585 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:16:49 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:49 np0005532048 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1585 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:50.394+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:50 np0005532048 nova_compute[253661]: 2025-11-22 10:16:50.422 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:50 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3455: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:16:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:51.385+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:51 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:52.375+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:16:52
Nov 22 05:16:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:16:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:16:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'vms', 'images']
Nov 22 05:16:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:16:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:16:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:16:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:16:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:16:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:16:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:16:52 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3456: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:16:53 np0005532048 ceph-mgr[75315]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1636168236
Nov 22 05:16:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:53.346+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:53 np0005532048 podman[437321]: 2025-11-22 10:16:53.359116127 +0000 UTC m=+0.054865241 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 22 05:16:53 np0005532048 podman[437320]: 2025-11-22 10:16:53.365416062 +0000 UTC m=+0.052990146 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:16:53 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:54 np0005532048 nova_compute[253661]: 2025-11-22 10:16:54.091 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:54.341+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1590 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:16:54 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:54 np0005532048 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1590 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3457: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 2 op/s
Nov 22 05:16:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:55.363+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:55 np0005532048 nova_compute[253661]: 2025-11-22 10:16:55.423 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:55 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:56.406+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:16:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:16:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:16:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:16:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:16:56 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3458: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:57.438+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:58 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:58 np0005532048 podman[437358]: 2025-11-22 10:16:58.383468503 +0000 UTC m=+0.078368810 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:16:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:58.446+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:59 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:59 np0005532048 nova_compute[253661]: 2025-11-22 10:16:59.108 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:16:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3459: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:16:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:16:59.464+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:16:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:16:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:16:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:16:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:16:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:16:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:16:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1595 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:16:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:17:00 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:00 np0005532048 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1595 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:00 np0005532048 nova_compute[253661]: 2025-11-22 10:17:00.424 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:17:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:00.459+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:01 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3460: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:01.496+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:02 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:02.486+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:03 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3461: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:17:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:17:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:03.499+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:04 np0005532048 nova_compute[253661]: 2025-11-22 10:17:04.111 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:17:04 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1600 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:17:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:04.499+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3462: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:05 np0005532048 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1600 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:05 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:05 np0005532048 nova_compute[253661]: 2025-11-22 10:17:05.429 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:17:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:05.526+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:06 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:06.527+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3463: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:07 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:07.477+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:17:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:17:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:17:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:17:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:17:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:17:07 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c51991e1-ca15-40e1-a341-2ba96b221f0c does not exist
Nov 22 05:17:07 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 8f13ef03-50c0-497e-bbaa-8a58ee937adc does not exist
Nov 22 05:17:07 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 6fa81e18-7278-449b-bb45-a8280242b9fe does not exist
Nov 22 05:17:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:17:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:17:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:17:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:17:07 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:17:07 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:17:08 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:17:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:17:08 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:17:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:08.474+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:08 np0005532048 podman[437656]: 2025-11-22 10:17:08.625225148 +0000 UTC m=+0.068234911 container create 684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kepler, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:17:08 np0005532048 systemd[1]: Started libpod-conmon-684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd.scope.
Nov 22 05:17:08 np0005532048 podman[437656]: 2025-11-22 10:17:08.597098646 +0000 UTC m=+0.040108459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:17:08 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:17:08 np0005532048 podman[437656]: 2025-11-22 10:17:08.740438983 +0000 UTC m=+0.183448806 container init 684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 05:17:08 np0005532048 podman[437656]: 2025-11-22 10:17:08.751587437 +0000 UTC m=+0.194597170 container start 684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kepler, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:17:08 np0005532048 podman[437656]: 2025-11-22 10:17:08.755465553 +0000 UTC m=+0.198475386 container attach 684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kepler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 05:17:08 np0005532048 angry_kepler[437672]: 167 167
Nov 22 05:17:08 np0005532048 systemd[1]: libpod-684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd.scope: Deactivated successfully.
Nov 22 05:17:08 np0005532048 podman[437677]: 2025-11-22 10:17:08.827341601 +0000 UTC m=+0.043394899 container died 684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:17:08 np0005532048 systemd[1]: var-lib-containers-storage-overlay-254dd9917cb7835b8f223e35f5149ef865489501c3889b6dad6f49a4bf27c3ce-merged.mount: Deactivated successfully.
Nov 22 05:17:08 np0005532048 podman[437677]: 2025-11-22 10:17:08.874528582 +0000 UTC m=+0.090581880 container remove 684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kepler, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:17:08 np0005532048 systemd[1]: libpod-conmon-684dfc311fc6533eea01e275eede10609688ffb6b7477eae0f18eb99392b65dd.scope: Deactivated successfully.
Nov 22 05:17:09 np0005532048 nova_compute[253661]: 2025-11-22 10:17:09.114 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:17:09 np0005532048 podman[437699]: 2025-11-22 10:17:09.136888679 +0000 UTC m=+0.063392082 container create be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:17:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3464: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:09 np0005532048 systemd[1]: Started libpod-conmon-be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76.scope.
Nov 22 05:17:09 np0005532048 podman[437699]: 2025-11-22 10:17:09.116391774 +0000 UTC m=+0.042895207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:17:09 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:17:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e599c91104bc671c5a46ee85db4284186b188c8b9c92bf223a2b58a2c7a2b7eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:17:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e599c91104bc671c5a46ee85db4284186b188c8b9c92bf223a2b58a2c7a2b7eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:17:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e599c91104bc671c5a46ee85db4284186b188c8b9c92bf223a2b58a2c7a2b7eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:17:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e599c91104bc671c5a46ee85db4284186b188c8b9c92bf223a2b58a2c7a2b7eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:17:09 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e599c91104bc671c5a46ee85db4284186b188c8b9c92bf223a2b58a2c7a2b7eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:17:09 np0005532048 podman[437699]: 2025-11-22 10:17:09.22999039 +0000 UTC m=+0.156493823 container init be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 05:17:09 np0005532048 podman[437699]: 2025-11-22 10:17:09.237652889 +0000 UTC m=+0.164156292 container start be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 05:17:09 np0005532048 podman[437699]: 2025-11-22 10:17:09.241070982 +0000 UTC m=+0.167574385 container attach be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:17:09 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:09.463+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1605 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:17:10 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:10 np0005532048 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1605 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:10 np0005532048 blissful_mayer[437715]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:17:10 np0005532048 blissful_mayer[437715]: --> relative data size: 1.0
Nov 22 05:17:10 np0005532048 blissful_mayer[437715]: --> All data devices are unavailable
Nov 22 05:17:10 np0005532048 systemd[1]: libpod-be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76.scope: Deactivated successfully.
Nov 22 05:17:10 np0005532048 podman[437699]: 2025-11-22 10:17:10.323338074 +0000 UTC m=+1.249841477 container died be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:17:10 np0005532048 systemd[1]: libpod-be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76.scope: Consumed 1.021s CPU time.
Nov 22 05:17:10 np0005532048 systemd[1]: var-lib-containers-storage-overlay-e599c91104bc671c5a46ee85db4284186b188c8b9c92bf223a2b58a2c7a2b7eb-merged.mount: Deactivated successfully.
Nov 22 05:17:10 np0005532048 podman[437699]: 2025-11-22 10:17:10.378102732 +0000 UTC m=+1.304606125 container remove be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:17:10 np0005532048 systemd[1]: libpod-conmon-be4a6fc19b1403798b74e01172a28631ed5d539cdfa0eb5646cd43c00b1c0b76.scope: Deactivated successfully.
Nov 22 05:17:10 np0005532048 nova_compute[253661]: 2025-11-22 10:17:10.429 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:17:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:10.430+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:10 np0005532048 podman[437893]: 2025-11-22 10:17:10.996861058 +0000 UTC m=+0.046755982 container create 0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:17:11 np0005532048 systemd[1]: Started libpod-conmon-0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1.scope.
Nov 22 05:17:11 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:17:11 np0005532048 podman[437893]: 2025-11-22 10:17:10.979618584 +0000 UTC m=+0.029513528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:17:11 np0005532048 podman[437893]: 2025-11-22 10:17:11.077885722 +0000 UTC m=+0.127780646 container init 0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 05:17:11 np0005532048 podman[437893]: 2025-11-22 10:17:11.083754606 +0000 UTC m=+0.133649530 container start 0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_taussig, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:17:11 np0005532048 podman[437893]: 2025-11-22 10:17:11.086851102 +0000 UTC m=+0.136746046 container attach 0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_taussig, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 05:17:11 np0005532048 vigorous_taussig[437909]: 167 167
Nov 22 05:17:11 np0005532048 systemd[1]: libpod-0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1.scope: Deactivated successfully.
Nov 22 05:17:11 np0005532048 podman[437893]: 2025-11-22 10:17:11.088749349 +0000 UTC m=+0.138644273 container died 0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 05:17:11 np0005532048 systemd[1]: var-lib-containers-storage-overlay-18a3562cb90adf344a378d5184eaee8dc5ccf71c02d4d7d7ab62db0b5da05ae2-merged.mount: Deactivated successfully.
Nov 22 05:17:11 np0005532048 podman[437893]: 2025-11-22 10:17:11.123961045 +0000 UTC m=+0.173855969 container remove 0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 05:17:11 np0005532048 systemd[1]: libpod-conmon-0ab84c113fbe918055b478807615054414da34802d4f3ada82df56583c2733f1.scope: Deactivated successfully.
Nov 22 05:17:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3465: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:11 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:11 np0005532048 podman[437933]: 2025-11-22 10:17:11.315799436 +0000 UTC m=+0.057420124 container create bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:17:11 np0005532048 systemd[1]: Started libpod-conmon-bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2.scope.
Nov 22 05:17:11 np0005532048 podman[437933]: 2025-11-22 10:17:11.290225066 +0000 UTC m=+0.031845764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:17:11 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:17:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/261a86571dd7fa29f02b3d81cfc91a3d0994b95ed00c0a4a2624610cbb9045fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:17:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/261a86571dd7fa29f02b3d81cfc91a3d0994b95ed00c0a4a2624610cbb9045fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:17:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/261a86571dd7fa29f02b3d81cfc91a3d0994b95ed00c0a4a2624610cbb9045fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:17:11 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/261a86571dd7fa29f02b3d81cfc91a3d0994b95ed00c0a4a2624610cbb9045fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:17:11 np0005532048 podman[437933]: 2025-11-22 10:17:11.432953988 +0000 UTC m=+0.174574686 container init bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:17:11 np0005532048 podman[437933]: 2025-11-22 10:17:11.443947659 +0000 UTC m=+0.185568347 container start bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:17:11 np0005532048 podman[437933]: 2025-11-22 10:17:11.452693895 +0000 UTC m=+0.194314573 container attach bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:17:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:11.461+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]: {
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:    "0": [
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:        {
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "devices": [
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "/dev/loop3"
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            ],
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "lv_name": "ceph_lv0",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "lv_size": "21470642176",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "name": "ceph_lv0",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "tags": {
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.cluster_name": "ceph",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.crush_device_class": "",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.encrypted": "0",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.osd_id": "0",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.type": "block",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.vdo": "0"
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            },
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "type": "block",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "vg_name": "ceph_vg0"
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:        }
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:    ],
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:    "1": [
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:        {
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "devices": [
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "/dev/loop4"
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            ],
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "lv_name": "ceph_lv1",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "lv_size": "21470642176",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "name": "ceph_lv1",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "tags": {
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.cluster_name": "ceph",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.crush_device_class": "",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.encrypted": "0",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.osd_id": "1",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.type": "block",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.vdo": "0"
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            },
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "type": "block",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "vg_name": "ceph_vg1"
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:        }
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:    ],
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:    "2": [
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:        {
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "devices": [
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "/dev/loop5"
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            ],
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "lv_name": "ceph_lv2",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "lv_size": "21470642176",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "name": "ceph_lv2",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "tags": {
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.cluster_name": "ceph",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.crush_device_class": "",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.encrypted": "0",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.osd_id": "2",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.type": "block",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:                "ceph.vdo": "0"
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            },
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "type": "block",
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:            "vg_name": "ceph_vg2"
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:        }
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]:    ]
Nov 22 05:17:12 np0005532048 unruffled_gauss[437949]: }
Nov 22 05:17:12 np0005532048 systemd[1]: libpod-bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2.scope: Deactivated successfully.
Nov 22 05:17:12 np0005532048 podman[437933]: 2025-11-22 10:17:12.196538319 +0000 UTC m=+0.938159007 container died bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 05:17:12 np0005532048 systemd[1]: var-lib-containers-storage-overlay-261a86571dd7fa29f02b3d81cfc91a3d0994b95ed00c0a4a2624610cbb9045fa-merged.mount: Deactivated successfully.
Nov 22 05:17:12 np0005532048 podman[437933]: 2025-11-22 10:17:12.247866271 +0000 UTC m=+0.989486949 container remove bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 05:17:12 np0005532048 systemd[1]: libpod-conmon-bc8395c9b659edd5cdff9d65a2802afbf4c067e2cfc2e7c7a22b9b0ab94180f2.scope: Deactivated successfully.
Nov 22 05:17:12 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:17:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2381018576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:17:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:17:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2381018576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:17:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:12.461+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:12 np0005532048 podman[438115]: 2025-11-22 10:17:12.860934937 +0000 UTC m=+0.047442308 container create 5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 22 05:17:12 np0005532048 systemd[1]: Started libpod-conmon-5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394.scope.
Nov 22 05:17:12 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:17:12 np0005532048 podman[438115]: 2025-11-22 10:17:12.840663979 +0000 UTC m=+0.027171350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:17:12 np0005532048 podman[438115]: 2025-11-22 10:17:12.948947103 +0000 UTC m=+0.135454484 container init 5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:17:12 np0005532048 podman[438115]: 2025-11-22 10:17:12.954545221 +0000 UTC m=+0.141052572 container start 5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:17:12 np0005532048 podman[438115]: 2025-11-22 10:17:12.957578705 +0000 UTC m=+0.144086086 container attach 5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:17:12 np0005532048 vigilant_hamilton[438131]: 167 167
Nov 22 05:17:12 np0005532048 systemd[1]: libpod-5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394.scope: Deactivated successfully.
Nov 22 05:17:12 np0005532048 podman[438115]: 2025-11-22 10:17:12.960942159 +0000 UTC m=+0.147449490 container died 5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 05:17:12 np0005532048 systemd[1]: var-lib-containers-storage-overlay-3a07a0f673e0f15cf6b38eeedf72ffcfc5bd3e3ff1a0b7ebc8c6f120f4558029-merged.mount: Deactivated successfully.
Nov 22 05:17:12 np0005532048 podman[438115]: 2025-11-22 10:17:12.998506233 +0000 UTC m=+0.185013554 container remove 5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:17:13 np0005532048 systemd[1]: libpod-conmon-5588cea8088780378274d40969713b7f64c6b65eeb78e9b9e31d0449f48a8394.scope: Deactivated successfully.
Nov 22 05:17:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3466: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:13 np0005532048 podman[438154]: 2025-11-22 10:17:13.175108719 +0000 UTC m=+0.047450349 container create 13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:17:13 np0005532048 systemd[1]: Started libpod-conmon-13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e.scope.
Nov 22 05:17:13 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:17:13 np0005532048 podman[438154]: 2025-11-22 10:17:13.156228674 +0000 UTC m=+0.028570334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:17:13 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8825354e2c39a9cecf00faf1ab2399427a727ea103012f7f92fd8394658aa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:17:13 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8825354e2c39a9cecf00faf1ab2399427a727ea103012f7f92fd8394658aa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:17:13 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8825354e2c39a9cecf00faf1ab2399427a727ea103012f7f92fd8394658aa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:17:13 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8825354e2c39a9cecf00faf1ab2399427a727ea103012f7f92fd8394658aa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:17:13 np0005532048 podman[438154]: 2025-11-22 10:17:13.2629701 +0000 UTC m=+0.135311750 container init 13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:17:13 np0005532048 podman[438154]: 2025-11-22 10:17:13.271019378 +0000 UTC m=+0.143361048 container start 13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 05:17:13 np0005532048 podman[438154]: 2025-11-22 10:17:13.275078198 +0000 UTC m=+0.147419848 container attach 13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 05:17:13 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:13.413+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:14 np0005532048 nova_compute[253661]: 2025-11-22 10:17:14.117 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]: {
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:        "osd_id": 1,
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:        "type": "bluestore"
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:    },
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:        "osd_id": 0,
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:        "type": "bluestore"
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:    },
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:        "osd_id": 2,
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:        "type": "bluestore"
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]:    }
Nov 22 05:17:14 np0005532048 determined_mccarthy[438171]: }
Nov 22 05:17:14 np0005532048 systemd[1]: libpod-13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e.scope: Deactivated successfully.
Nov 22 05:17:14 np0005532048 podman[438154]: 2025-11-22 10:17:14.313544903 +0000 UTC m=+1.185886553 container died 13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:17:14 np0005532048 systemd[1]: libpod-13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e.scope: Consumed 1.047s CPU time.
Nov 22 05:17:14 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:17:14 np0005532048 systemd[1]: var-lib-containers-storage-overlay-cf8825354e2c39a9cecf00faf1ab2399427a727ea103012f7f92fd8394658aa2-merged.mount: Deactivated successfully.
Nov 22 05:17:14 np0005532048 podman[438154]: 2025-11-22 10:17:14.371096839 +0000 UTC m=+1.243438469 container remove 13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 05:17:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:14.373+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'images' : 11 ])
Nov 22 05:17:14 np0005532048 systemd[1]: libpod-conmon-13d9348556b961c22b0fe1871006acd6b32dec3df0a5bee6968b8c3c0317e22e.scope: Deactivated successfully.
Nov 22 05:17:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:17:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:17:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:17:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:17:14 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 737255a0-a29a-4006-963f-c24642377326 does not exist
Nov 22 05:17:14 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 39ac3acb-4437-4228-a762-05bb21854a72 does not exist
Nov 22 05:17:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1610 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:17:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3467: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:15 np0005532048 nova_compute[253661]: 2025-11-22 10:17:15.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:17:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:15.388+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'images' : 11 ])
Nov 22 05:17:15 np0005532048 ceph-mon[75021]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'images' : 11 ])
Nov 22 05:17:15 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:17:15 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:17:15 np0005532048 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1610 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:15 np0005532048 nova_compute[253661]: 2025-11-22 10:17:15.431 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:17:16 np0005532048 nova_compute[253661]: 2025-11-22 10:17:16.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:17:16 np0005532048 nova_compute[253661]: 2025-11-22 10:17:16.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:17:16 np0005532048 nova_compute[253661]: 2025-11-22 10:17:16.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:17:16 np0005532048 nova_compute[253661]: 2025-11-22 10:17:16.240 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:17:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:16.405+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:16 np0005532048 ceph-mon[75021]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'images' : 11 ])
Nov 22 05:17:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3468: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:17.379+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:17 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:18.415+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:18 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:19 np0005532048 nova_compute[253661]: 2025-11-22 10:17:19.120 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:17:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3469: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:19.398+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:19 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1615 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:17:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:20.414+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:20 np0005532048 nova_compute[253661]: 2025-11-22 10:17:20.433 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:17:20 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:20 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1615 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3470: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:21 np0005532048 nova_compute[253661]: 2025-11-22 10:17:21.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:17:21 np0005532048 nova_compute[253661]: 2025-11-22 10:17:21.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:17:21 np0005532048 nova_compute[253661]: 2025-11-22 10:17:21.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:17:21 np0005532048 nova_compute[253661]: 2025-11-22 10:17:21.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:17:21 np0005532048 nova_compute[253661]: 2025-11-22 10:17:21.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:17:21 np0005532048 nova_compute[253661]: 2025-11-22 10:17:21.308 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:17:21 np0005532048 nova_compute[253661]: 2025-11-22 10:17:21.309 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:17:21 np0005532048 nova_compute[253661]: 2025-11-22 10:17:21.309 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:17:21 np0005532048 nova_compute[253661]: 2025-11-22 10:17:21.310 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:17:21 np0005532048 nova_compute[253661]: 2025-11-22 10:17:21.310 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:17:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:21.390+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:21 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:17:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/636004843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:17:21 np0005532048 nova_compute[253661]: 2025-11-22 10:17:21.817 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:17:22 np0005532048 nova_compute[253661]: 2025-11-22 10:17:22.002 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:17:22 np0005532048 nova_compute[253661]: 2025-11-22 10:17:22.004 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3535MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:17:22 np0005532048 nova_compute[253661]: 2025-11-22 10:17:22.004 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:17:22 np0005532048 nova_compute[253661]: 2025-11-22 10:17:22.004 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:17:22 np0005532048 nova_compute[253661]: 2025-11-22 10:17:22.267 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:17:22 np0005532048 nova_compute[253661]: 2025-11-22 10:17:22.268 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:17:22 np0005532048 nova_compute[253661]: 2025-11-22 10:17:22.367 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 22 05:17:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:22.438+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:22 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:22 np0005532048 nova_compute[253661]: 2025-11-22 10:17:22.524 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 22 05:17:22 np0005532048 nova_compute[253661]: 2025-11-22 10:17:22.525 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 05:17:22 np0005532048 nova_compute[253661]: 2025-11-22 10:17:22.543 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 22 05:17:22 np0005532048 nova_compute[253661]: 2025-11-22 10:17:22.576 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 22 05:17:22 np0005532048 nova_compute[253661]: 2025-11-22 10:17:22.591 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:17:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:17:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:17:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:17:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:17:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:17:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:17:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:17:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1091111089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:17:23 np0005532048 nova_compute[253661]: 2025-11-22 10:17:23.040 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:17:23 np0005532048 nova_compute[253661]: 2025-11-22 10:17:23.049 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:17:23 np0005532048 nova_compute[253661]: 2025-11-22 10:17:23.065 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:17:23 np0005532048 nova_compute[253661]: 2025-11-22 10:17:23.068 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:17:23 np0005532048 nova_compute[253661]: 2025-11-22 10:17:23.068 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:17:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3471: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:23.475+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:23 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:24 np0005532048 nova_compute[253661]: 2025-11-22 10:17:24.062 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:17:24 np0005532048 nova_compute[253661]: 2025-11-22 10:17:24.169 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:17:24 np0005532048 nova_compute[253661]: 2025-11-22 10:17:24.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:17:24 np0005532048 nova_compute[253661]: 2025-11-22 10:17:24.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:17:24 np0005532048 podman[438308]: 2025-11-22 10:17:24.421169934 +0000 UTC m=+0.095745897 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:17:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:24.430+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:24 np0005532048 podman[438309]: 2025-11-22 10:17:24.457193201 +0000 UTC m=+0.131701293 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd)
Nov 22 05:17:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1620 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:17:24 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:24 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1620 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3472: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:25.436+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:25 np0005532048 nova_compute[253661]: 2025-11-22 10:17:25.437 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:17:25 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:26.436+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:26 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3473: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:27.411+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:27 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:17:28.027 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:17:28.028 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:17:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:17:28.028 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:17:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:28.417+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:28 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:29 np0005532048 nova_compute[253661]: 2025-11-22 10:17:29.174 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:17:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3474: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:29.458+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:29 np0005532048 podman[438344]: 2025-11-22 10:17:29.466769373 +0000 UTC m=+0.153709313 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 22 05:17:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1625 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:17:29 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:29 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1625 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:30 np0005532048 nova_compute[253661]: 2025-11-22 10:17:30.438 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:17:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:30.451+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:30 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3475: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:31.471+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:31 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:32.485+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:32 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3476: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:33.489+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:33 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:34 np0005532048 nova_compute[253661]: 2025-11-22 10:17:34.177 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:17:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:34.461+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1630 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:17:34 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:34 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1630 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3477: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:35 np0005532048 nova_compute[253661]: 2025-11-22 10:17:35.440 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:17:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:35.461+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:35 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:36.418+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:36 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3478: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:37.394+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:37 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:37 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:38.386+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:38 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:39 np0005532048 nova_compute[253661]: 2025-11-22 10:17:39.179 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:17:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3479: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:39.395+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1635 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:17:39 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:39 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1635 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:40.415+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:40 np0005532048 nova_compute[253661]: 2025-11-22 10:17:40.442 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:17:40 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3480: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:41.413+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:41 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:42.455+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.786666) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806662786740, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 2164, "num_deletes": 251, "total_data_size": 2729383, "memory_usage": 2788176, "flush_reason": "Manual Compaction"}
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806662807473, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 2663485, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76393, "largest_seqno": 78556, "table_properties": {"data_size": 2654735, "index_size": 4923, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 22917, "raw_average_key_size": 21, "raw_value_size": 2635048, "raw_average_value_size": 2444, "num_data_blocks": 216, "num_entries": 1078, "num_filter_entries": 1078, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806498, "oldest_key_time": 1763806498, "file_creation_time": 1763806662, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 20852 microseconds, and 12504 cpu microseconds.
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.807521) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 2663485 bytes OK
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.807542) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.809670) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.809686) EVENT_LOG_v1 {"time_micros": 1763806662809681, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.809704) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2720008, prev total WAL file size 2720008, number of live WAL files 2.
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.810735) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(2601KB)], [182(8706KB)]
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806662810798, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 11578695, "oldest_snapshot_seqno": -1}
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 10724 keys, 10192213 bytes, temperature: kUnknown
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806662899827, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 10192213, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10128956, "index_size": 35368, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26821, "raw_key_size": 286525, "raw_average_key_size": 26, "raw_value_size": 9944732, "raw_average_value_size": 927, "num_data_blocks": 1327, "num_entries": 10724, "num_filter_entries": 10724, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806662, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.900437) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 10192213 bytes
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.902051) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.8 rd, 114.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.5 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 11238, records dropped: 514 output_compression: NoCompression
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.902084) EVENT_LOG_v1 {"time_micros": 1763806662902067, "job": 114, "event": "compaction_finished", "compaction_time_micros": 89203, "compaction_time_cpu_micros": 51448, "output_level": 6, "num_output_files": 1, "total_output_size": 10192213, "num_input_records": 11238, "num_output_records": 10724, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806662903158, "job": 114, "event": "table_file_deletion", "file_number": 184}
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806662906822, "job": 114, "event": "table_file_deletion", "file_number": 182}
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.810557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.906937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.906947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.906950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.906953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:17:42 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:17:42.906956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:17:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3481: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:43.435+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:43 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:44 np0005532048 nova_compute[253661]: 2025-11-22 10:17:44.182 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:17:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:44.466+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 7 slow ops, oldest one blocked for 1640 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:17:44 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:44 np0005532048 ceph-mon[75021]: Health check update: 7 slow ops, oldest one blocked for 1640 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3482: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:45 np0005532048 nova_compute[253661]: 2025-11-22 10:17:45.445 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:17:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:45.514+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:45 np0005532048 ceph-mon[75021]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:17:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:46.471+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'images' : 12 ])
Nov 22 05:17:46 np0005532048 ceph-mon[75021]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'images' : 12 ])
Nov 22 05:17:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3483: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:47.465+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:47 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:48.436+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:48 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:49 np0005532048 nova_compute[253661]: 2025-11-22 10:17:49.185 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:17:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3484: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:49.444+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 12 slow ops, oldest one blocked for 1645 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:17:49 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:49 np0005532048 ceph-mon[75021]: Health check update: 12 slow ops, oldest one blocked for 1645 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:50.421+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:50 np0005532048 nova_compute[253661]: 2025-11-22 10:17:50.446 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:17:50 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3485: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:51.374+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:51 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:52.348+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:17:52
Nov 22 05:17:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:17:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:17:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta']
Nov 22 05:17:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:17:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:17:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:17:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:17:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:17:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:17:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:17:52 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3486: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:53.368+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:53 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:54 np0005532048 nova_compute[253661]: 2025-11-22 10:17:54.187 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:17:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:54.336+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1650 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:17:54 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:54 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 1650 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3487: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:55.373+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:55 np0005532048 podman[438374]: 2025-11-22 10:17:55.391532825 +0000 UTC m=+0.069876158 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 22 05:17:55 np0005532048 podman[438373]: 2025-11-22 10:17:55.406729959 +0000 UTC m=+0.083664038 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 05:17:55 np0005532048 nova_compute[253661]: 2025-11-22 10:17:55.448 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:17:55 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:56.338+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:17:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:17:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:17:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:17:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:17:56 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3488: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:57.327+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:57 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:58.370+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:58 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:59 np0005532048 nova_compute[253661]: 2025-11-22 10:17:59.192 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:17:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3489: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:17:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:17:59.344+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:17:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:17:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:17:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:17:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:17:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:17:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1655 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:17:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:17:59 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:17:59 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 1655 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:00.392+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:00 np0005532048 podman[438413]: 2025-11-22 10:18:00.437239694 +0000 UTC m=+0.121481137 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:18:00 np0005532048 nova_compute[253661]: 2025-11-22 10:18:00.451 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:18:00 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3490: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:01.369+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:01 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:02.351+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:02 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3491: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:03.304+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:18:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:18:03 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:04 np0005532048 nova_compute[253661]: 2025-11-22 10:18:04.196 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:18:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:04.331+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1660 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:18:04 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:04 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 1660 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3492: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:05.286+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:05 np0005532048 nova_compute[253661]: 2025-11-22 10:18:05.493 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:18:05 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:06.296+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:07 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3493: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:07.339+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:08 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:08.300+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:09 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3494: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:09 np0005532048 nova_compute[253661]: 2025-11-22 10:18:09.246 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:18:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:09.251+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1665 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:18:10 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:10 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 1665 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:10.254+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:10 np0005532048 nova_compute[253661]: 2025-11-22 10:18:10.495 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:18:11 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3495: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:11.257+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:12 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:12.300+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:18:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2215536891' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:18:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:18:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2215536891' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:18:13 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3496: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:13.319+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:14 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:14 np0005532048 nova_compute[253661]: 2025-11-22 10:18:14.250 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:18:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:14.292+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1670 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:18:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3497: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 1670 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:18:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:15.283+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:15 np0005532048 nova_compute[253661]: 2025-11-22 10:18:15.498 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:18:15 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 1912043c-2474-429b-b08f-4e187680c98c does not exist
Nov 22 05:18:15 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 732fe3bb-8fa9-4e6c-8ed0-0abc7f4f2eb5 does not exist
Nov 22 05:18:15 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev eb3563f8-16ab-4fc2-b5f3-da0692a72f15 does not exist
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:18:15 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:18:16 np0005532048 nova_compute[253661]: 2025-11-22 10:18:16.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:18:16 np0005532048 nova_compute[253661]: 2025-11-22 10:18:16.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:18:16 np0005532048 nova_compute[253661]: 2025-11-22 10:18:16.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:18:16 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:16 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:18:16 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:18:16 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:18:16 np0005532048 nova_compute[253661]: 2025-11-22 10:18:16.245 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:18:16 np0005532048 podman[438830]: 2025-11-22 10:18:16.278535347 +0000 UTC m=+0.045185241 container create 2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:18:16 np0005532048 systemd[1]: Started libpod-conmon-2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee.scope.
Nov 22 05:18:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:16.313+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:16 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:18:16 np0005532048 podman[438830]: 2025-11-22 10:18:16.339351231 +0000 UTC m=+0.106001155 container init 2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:18:16 np0005532048 podman[438830]: 2025-11-22 10:18:16.345717298 +0000 UTC m=+0.112367192 container start 2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:18:16 np0005532048 podman[438830]: 2025-11-22 10:18:16.34867909 +0000 UTC m=+0.115329004 container attach 2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 05:18:16 np0005532048 kind_herschel[438846]: 167 167
Nov 22 05:18:16 np0005532048 systemd[1]: libpod-2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee.scope: Deactivated successfully.
Nov 22 05:18:16 np0005532048 podman[438830]: 2025-11-22 10:18:16.35193424 +0000 UTC m=+0.118584144 container died 2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:18:16 np0005532048 podman[438830]: 2025-11-22 10:18:16.259487918 +0000 UTC m=+0.026137842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:18:16 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a2571b72c16eed962fdc716e76ea3a8ff6a0842f3472077787ba677916cc030a-merged.mount: Deactivated successfully.
Nov 22 05:18:16 np0005532048 podman[438830]: 2025-11-22 10:18:16.396721151 +0000 UTC m=+0.163371045 container remove 2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_herschel, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:18:16 np0005532048 systemd[1]: libpod-conmon-2423bd7be98bc9a329955ec290f63a9d1beab44abc4da726f95c44e92c49c0ee.scope: Deactivated successfully.
Nov 22 05:18:16 np0005532048 podman[438870]: 2025-11-22 10:18:16.542668958 +0000 UTC m=+0.037778009 container create d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_satoshi, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 05:18:16 np0005532048 systemd[1]: Started libpod-conmon-d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794.scope.
Nov 22 05:18:16 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:18:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cd421d6089512426846c6a37f57bf300cc5b4c536ecb61cd6907c1510b569e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:18:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cd421d6089512426846c6a37f57bf300cc5b4c536ecb61cd6907c1510b569e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:18:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cd421d6089512426846c6a37f57bf300cc5b4c536ecb61cd6907c1510b569e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:18:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cd421d6089512426846c6a37f57bf300cc5b4c536ecb61cd6907c1510b569e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:18:16 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59cd421d6089512426846c6a37f57bf300cc5b4c536ecb61cd6907c1510b569e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:18:16 np0005532048 podman[438870]: 2025-11-22 10:18:16.615952469 +0000 UTC m=+0.111061540 container init d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 05:18:16 np0005532048 podman[438870]: 2025-11-22 10:18:16.525768533 +0000 UTC m=+0.020877614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:18:16 np0005532048 podman[438870]: 2025-11-22 10:18:16.624914249 +0000 UTC m=+0.120023300 container start d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:18:16 np0005532048 podman[438870]: 2025-11-22 10:18:16.62820947 +0000 UTC m=+0.123318551 container attach d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_satoshi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 05:18:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3498: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:17 np0005532048 nova_compute[253661]: 2025-11-22 10:18:17.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:18:17 np0005532048 ceph-mon[75021]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'images' : 4 ])
Nov 22 05:18:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:17.292+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:17 np0005532048 elastic_satoshi[438888]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:18:17 np0005532048 elastic_satoshi[438888]: --> relative data size: 1.0
Nov 22 05:18:17 np0005532048 elastic_satoshi[438888]: --> All data devices are unavailable
Nov 22 05:18:17 np0005532048 systemd[1]: libpod-d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794.scope: Deactivated successfully.
Nov 22 05:18:17 np0005532048 podman[438870]: 2025-11-22 10:18:17.61617999 +0000 UTC m=+1.111289051 container died d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 05:18:17 np0005532048 systemd[1]: var-lib-containers-storage-overlay-59cd421d6089512426846c6a37f57bf300cc5b4c536ecb61cd6907c1510b569e-merged.mount: Deactivated successfully.
Nov 22 05:18:17 np0005532048 podman[438870]: 2025-11-22 10:18:17.671132271 +0000 UTC m=+1.166241322 container remove d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_satoshi, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 05:18:17 np0005532048 systemd[1]: libpod-conmon-d4c2c11b8174d8a8710ab45eb806d6182cf7a14934a079ff8a163f79f21e4794.scope: Deactivated successfully.
Nov 22 05:18:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:18.290+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:18 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:18 np0005532048 podman[439070]: 2025-11-22 10:18:18.39057772 +0000 UTC m=+0.064789193 container create cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:18:18 np0005532048 podman[439070]: 2025-11-22 10:18:18.349097632 +0000 UTC m=+0.023309155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:18:18 np0005532048 systemd[1]: Started libpod-conmon-cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77.scope.
Nov 22 05:18:18 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:18:18 np0005532048 podman[439070]: 2025-11-22 10:18:18.607729998 +0000 UTC m=+0.281941491 container init cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 05:18:18 np0005532048 podman[439070]: 2025-11-22 10:18:18.61475682 +0000 UTC m=+0.288968313 container start cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 05:18:18 np0005532048 vigorous_keldysh[439086]: 167 167
Nov 22 05:18:18 np0005532048 systemd[1]: libpod-cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77.scope: Deactivated successfully.
Nov 22 05:18:18 np0005532048 conmon[439086]: conmon cc2d6e0cb64f8186064d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77.scope/container/memory.events
Nov 22 05:18:18 np0005532048 podman[439070]: 2025-11-22 10:18:18.810968742 +0000 UTC m=+0.485180235 container attach cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:18:18 np0005532048 podman[439070]: 2025-11-22 10:18:18.811514485 +0000 UTC m=+0.485725958 container died cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:18:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3499: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:19 np0005532048 nova_compute[253661]: 2025-11-22 10:18:19.254 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:18:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:19.270+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:19 np0005532048 systemd[1]: var-lib-containers-storage-overlay-915292745a9a3d6faaf507b193ee66704b606996ad34b6bfbbfc69eddcb8ab3c-merged.mount: Deactivated successfully.
Nov 22 05:18:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 4 slow ops, oldest one blocked for 1675 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:18:19 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:20 np0005532048 podman[439070]: 2025-11-22 10:18:20.052708427 +0000 UTC m=+1.726919930 container remove cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:18:20 np0005532048 systemd[1]: libpod-conmon-cc2d6e0cb64f8186064d21c893a1ff33fa91cf73c98e4f155725c6af6e2d8c77.scope: Deactivated successfully.
Nov 22 05:18:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:20.268+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:20 np0005532048 podman[439111]: 2025-11-22 10:18:20.250773864 +0000 UTC m=+0.034222161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:18:20 np0005532048 podman[439111]: 2025-11-22 10:18:20.454733847 +0000 UTC m=+0.238182134 container create a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 05:18:20 np0005532048 nova_compute[253661]: 2025-11-22 10:18:20.500 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:18:20 np0005532048 systemd[1]: Started libpod-conmon-a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f.scope.
Nov 22 05:18:20 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:18:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a44f09465cd442d1bdace115abb8d6eef80c4a7bf3b274c1bb2f2185ff786584/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:18:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a44f09465cd442d1bdace115abb8d6eef80c4a7bf3b274c1bb2f2185ff786584/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:18:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a44f09465cd442d1bdace115abb8d6eef80c4a7bf3b274c1bb2f2185ff786584/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:18:20 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a44f09465cd442d1bdace115abb8d6eef80c4a7bf3b274c1bb2f2185ff786584/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:18:20 np0005532048 podman[439111]: 2025-11-22 10:18:20.603175975 +0000 UTC m=+0.386624312 container init a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:18:20 np0005532048 podman[439111]: 2025-11-22 10:18:20.617369044 +0000 UTC m=+0.400817331 container start a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:18:20 np0005532048 podman[439111]: 2025-11-22 10:18:20.62247885 +0000 UTC m=+0.405927157 container attach a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 05:18:20 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:20 np0005532048 ceph-mon[75021]: Health check update: 4 slow ops, oldest one blocked for 1675 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:20 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3500: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:21.236+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]: {
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:    "0": [
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:        {
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "devices": [
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "/dev/loop3"
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            ],
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "lv_name": "ceph_lv0",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "lv_size": "21470642176",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "name": "ceph_lv0",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "tags": {
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.cluster_name": "ceph",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.crush_device_class": "",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.encrypted": "0",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.osd_id": "0",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.type": "block",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.vdo": "0"
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            },
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "type": "block",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "vg_name": "ceph_vg0"
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:        }
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:    ],
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:    "1": [
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:        {
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "devices": [
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "/dev/loop4"
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            ],
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "lv_name": "ceph_lv1",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "lv_size": "21470642176",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "name": "ceph_lv1",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "tags": {
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.cluster_name": "ceph",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.crush_device_class": "",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.encrypted": "0",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.osd_id": "1",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.type": "block",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.vdo": "0"
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            },
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "type": "block",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "vg_name": "ceph_vg1"
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:        }
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:    ],
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:    "2": [
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:        {
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "devices": [
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "/dev/loop5"
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            ],
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "lv_name": "ceph_lv2",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "lv_size": "21470642176",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "name": "ceph_lv2",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "tags": {
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.cluster_name": "ceph",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.crush_device_class": "",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.encrypted": "0",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.osd_id": "2",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.type": "block",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:                "ceph.vdo": "0"
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            },
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "type": "block",
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:            "vg_name": "ceph_vg2"
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:        }
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]:    ]
Nov 22 05:18:21 np0005532048 infallible_hertz[439128]: }
Nov 22 05:18:21 np0005532048 systemd[1]: libpod-a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f.scope: Deactivated successfully.
Nov 22 05:18:21 np0005532048 podman[439111]: 2025-11-22 10:18:21.408282591 +0000 UTC m=+1.191730898 container died a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 05:18:21 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a44f09465cd442d1bdace115abb8d6eef80c4a7bf3b274c1bb2f2185ff786584-merged.mount: Deactivated successfully.
Nov 22 05:18:21 np0005532048 podman[439111]: 2025-11-22 10:18:21.496781056 +0000 UTC m=+1.280229313 container remove a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hertz, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:18:21 np0005532048 systemd[1]: libpod-conmon-a2cd7605f3ddeb612932ef63d304d1d0160d4a798524ea3dcaab5f35372cd24f.scope: Deactivated successfully.
Nov 22 05:18:21 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:22 np0005532048 podman[439290]: 2025-11-22 10:18:22.061220397 +0000 UTC m=+0.036175731 container create 20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:18:22 np0005532048 systemd[1]: Started libpod-conmon-20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4.scope.
Nov 22 05:18:22 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:18:22 np0005532048 podman[439290]: 2025-11-22 10:18:22.128081 +0000 UTC m=+0.103036344 container init 20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:18:22 np0005532048 podman[439290]: 2025-11-22 10:18:22.135141993 +0000 UTC m=+0.110097357 container start 20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 05:18:22 np0005532048 eager_booth[439306]: 167 167
Nov 22 05:18:22 np0005532048 systemd[1]: libpod-20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4.scope: Deactivated successfully.
Nov 22 05:18:22 np0005532048 podman[439290]: 2025-11-22 10:18:22.139662704 +0000 UTC m=+0.114618048 container attach 20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 05:18:22 np0005532048 podman[439290]: 2025-11-22 10:18:22.140151606 +0000 UTC m=+0.115106940 container died 20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:18:22 np0005532048 podman[439290]: 2025-11-22 10:18:22.044858654 +0000 UTC m=+0.019814008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:18:22 np0005532048 systemd[1]: var-lib-containers-storage-overlay-eef13438b587f580fd572af637e8fc00c8a4962aa9de99f18bc5ce69cb6cb61b-merged.mount: Deactivated successfully.
Nov 22 05:18:22 np0005532048 podman[439290]: 2025-11-22 10:18:22.171925627 +0000 UTC m=+0.146880961 container remove 20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 05:18:22 np0005532048 systemd[1]: libpod-conmon-20bfc1e011c0cf71279080ffff1a0bd8f77ad55b1d5f7ae154a9400984383bf4.scope: Deactivated successfully.
Nov 22 05:18:22 np0005532048 nova_compute[253661]: 2025-11-22 10:18:22.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:18:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:22.228+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:22 np0005532048 nova_compute[253661]: 2025-11-22 10:18:22.230 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:18:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:22 np0005532048 nova_compute[253661]: 2025-11-22 10:18:22.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:18:22 np0005532048 podman[439331]: 2025-11-22 10:18:22.350359193 +0000 UTC m=+0.050538463 container create 94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:18:22 np0005532048 systemd[1]: Started libpod-conmon-94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e.scope.
Nov 22 05:18:22 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:18:22 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d9d251a453e04885a23de4c221780205fccf3cd906908fbc3f86ac2c4df5d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:18:22 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d9d251a453e04885a23de4c221780205fccf3cd906908fbc3f86ac2c4df5d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:18:22 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d9d251a453e04885a23de4c221780205fccf3cd906908fbc3f86ac2c4df5d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:18:22 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96d9d251a453e04885a23de4c221780205fccf3cd906908fbc3f86ac2c4df5d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:18:22 np0005532048 podman[439331]: 2025-11-22 10:18:22.328791342 +0000 UTC m=+0.028970682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:18:22 np0005532048 podman[439331]: 2025-11-22 10:18:22.433169538 +0000 UTC m=+0.133348848 container init 94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 05:18:22 np0005532048 podman[439331]: 2025-11-22 10:18:22.440676162 +0000 UTC m=+0.140855442 container start 94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:18:22 np0005532048 podman[439331]: 2025-11-22 10:18:22.444588628 +0000 UTC m=+0.144767908 container attach 94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:18:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:18:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:18:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:18:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:18:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:18:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:18:22 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3501: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:23 np0005532048 nova_compute[253661]: 2025-11-22 10:18:23.222 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:18:23 np0005532048 nova_compute[253661]: 2025-11-22 10:18:23.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:18:23 np0005532048 nova_compute[253661]: 2025-11-22 10:18:23.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:18:23 np0005532048 nova_compute[253661]: 2025-11-22 10:18:23.263 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:18:23 np0005532048 nova_compute[253661]: 2025-11-22 10:18:23.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:18:23 np0005532048 nova_compute[253661]: 2025-11-22 10:18:23.264 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:18:23 np0005532048 nova_compute[253661]: 2025-11-22 10:18:23.264 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:18:23 np0005532048 nova_compute[253661]: 2025-11-22 10:18:23.265 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:18:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:23.269+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:23 np0005532048 modest_tesla[439347]: {
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:        "osd_id": 1,
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:        "type": "bluestore"
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:    },
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:        "osd_id": 0,
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:        "type": "bluestore"
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:    },
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:        "osd_id": 2,
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:        "type": "bluestore"
Nov 22 05:18:23 np0005532048 modest_tesla[439347]:    }
Nov 22 05:18:23 np0005532048 modest_tesla[439347]: }
Nov 22 05:18:23 np0005532048 systemd[1]: libpod-94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e.scope: Deactivated successfully.
Nov 22 05:18:23 np0005532048 podman[439331]: 2025-11-22 10:18:23.439853357 +0000 UTC m=+1.140032637 container died 94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 05:18:23 np0005532048 systemd[1]: libpod-94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e.scope: Consumed 1.003s CPU time.
Nov 22 05:18:23 np0005532048 systemd[1]: var-lib-containers-storage-overlay-96d9d251a453e04885a23de4c221780205fccf3cd906908fbc3f86ac2c4df5d3-merged.mount: Deactivated successfully.
Nov 22 05:18:23 np0005532048 podman[439331]: 2025-11-22 10:18:23.505551642 +0000 UTC m=+1.205730922 container remove 94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 05:18:23 np0005532048 systemd[1]: libpod-conmon-94c940a9b5e4d18d13a064795b053fb2289d7b2cbf55189f6c301706917f833e.scope: Deactivated successfully.
Nov 22 05:18:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:18:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:18:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:18:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:18:23 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev b54fe231-2f01-4a0f-867c-9f3bb42d0d51 does not exist
Nov 22 05:18:23 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev bbfe4b76-50e5-4034-912b-21c22dc217e2 does not exist
Nov 22 05:18:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:18:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1813828880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:18:23 np0005532048 nova_compute[253661]: 2025-11-22 10:18:23.715 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:18:23 np0005532048 nova_compute[253661]: 2025-11-22 10:18:23.871 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:18:23 np0005532048 nova_compute[253661]: 2025-11-22 10:18:23.872 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3497MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:18:23 np0005532048 nova_compute[253661]: 2025-11-22 10:18:23.873 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:18:23 np0005532048 nova_compute[253661]: 2025-11-22 10:18:23.873 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:18:23 np0005532048 nova_compute[253661]: 2025-11-22 10:18:23.975 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:18:23 np0005532048 nova_compute[253661]: 2025-11-22 10:18:23.975 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:18:23 np0005532048 nova_compute[253661]: 2025-11-22 10:18:23.998 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:18:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:24.230+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:24 np0005532048 nova_compute[253661]: 2025-11-22 10:18:24.302 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:18:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:18:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2878930382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:18:24 np0005532048 nova_compute[253661]: 2025-11-22 10:18:24.410 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:18:24 np0005532048 nova_compute[253661]: 2025-11-22 10:18:24.414 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:18:24 np0005532048 nova_compute[253661]: 2025-11-22 10:18:24.427 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:18:24 np0005532048 nova_compute[253661]: 2025-11-22 10:18:24.429 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:18:24 np0005532048 nova_compute[253661]: 2025-11-22 10:18:24.429 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:18:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 1680 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:18:24 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:24 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:18:24 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:18:24 np0005532048 ceph-mon[75021]: Health check update: 13 slow ops, oldest one blocked for 1680 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3502: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:25.242+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:25 np0005532048 nova_compute[253661]: 2025-11-22 10:18:25.430 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:18:25 np0005532048 nova_compute[253661]: 2025-11-22 10:18:25.502 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:18:25 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:26 np0005532048 nova_compute[253661]: 2025-11-22 10:18:26.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:18:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:26.254+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:26 np0005532048 podman[439487]: 2025-11-22 10:18:26.402350742 +0000 UTC m=+0.084329634 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 05:18:26 np0005532048 podman[439488]: 2025-11-22 10:18:26.418292673 +0000 UTC m=+0.092530245 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible)
Nov 22 05:18:26 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3503: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:27.213+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:27 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:18:28.028 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:18:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:18:28.028 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:18:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:18:28.029 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:18:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:28.235+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:28 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:29.209+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3504: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:29 np0005532048 nova_compute[253661]: 2025-11-22 10:18:29.306 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:18:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 1685 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:18:29 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:29 np0005532048 ceph-mon[75021]: Health check update: 13 slow ops, oldest one blocked for 1685 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:30.235+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:30 np0005532048 nova_compute[253661]: 2025-11-22 10:18:30.505 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:18:30 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:31.209+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3505: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:31 np0005532048 podman[439527]: 2025-11-22 10:18:31.448704497 +0000 UTC m=+0.128605572 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:18:31 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:32.177+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:32 np0005532048 nova_compute[253661]: 2025-11-22 10:18:32.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:18:32 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:33.212+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3506: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:33 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:34.254+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:34 np0005532048 nova_compute[253661]: 2025-11-22 10:18:34.310 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:18:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 1690 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:18:34 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:34 np0005532048 ceph-mon[75021]: Health check update: 13 slow ops, oldest one blocked for 1690 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3507: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:35.260+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:35 np0005532048 nova_compute[253661]: 2025-11-22 10:18:35.561 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:18:35 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:36.238+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:36 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3508: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:37.265+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:37 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:38.256+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:38 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:39.208+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3509: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:39 np0005532048 nova_compute[253661]: 2025-11-22 10:18:39.355 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:18:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 1695 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:18:39 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:39 np0005532048 ceph-mon[75021]: Health check update: 13 slow ops, oldest one blocked for 1695 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:40.227+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:40 np0005532048 nova_compute[253661]: 2025-11-22 10:18:40.564 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:18:40 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3510: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:41.263+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:41 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:42.310+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:42 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:42 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3511: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:43.300+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:43 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:44.343+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:44 np0005532048 nova_compute[253661]: 2025-11-22 10:18:44.358 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:18:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 1700 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:18:44 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:44 np0005532048 ceph-mon[75021]: Health check update: 13 slow ops, oldest one blocked for 1700 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3512: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:45.380+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:45 np0005532048 nova_compute[253661]: 2025-11-22 10:18:45.568 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:18:45 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:46.411+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:46 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3513: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:47.371+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:47 np0005532048 ceph-mon[75021]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'images' : 13 ])
Nov 22 05:18:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:48.379+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:48 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3514: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:49 np0005532048 nova_compute[253661]: 2025-11-22 10:18:49.362 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:18:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:49.382+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 13 slow ops, oldest one blocked for 1705 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:18:49 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:49 np0005532048 ceph-mon[75021]: Health check update: 13 slow ops, oldest one blocked for 1705 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:50.408+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:50 np0005532048 nova_compute[253661]: 2025-11-22 10:18:50.570 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:18:50 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3515: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:51.411+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:51 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:18:52
Nov 22 05:18:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:18:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:18:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'images', '.rgw.root', 'volumes', 'default.rgw.control', 'vms', '.mgr', 'default.rgw.meta']
Nov 22 05:18:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:18:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:52.440+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:18:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:18:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:18:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:18:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:18:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:18:52 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3516: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:53.418+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:53 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:54 np0005532048 nova_compute[253661]: 2025-11-22 10:18:54.408 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:18:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:54.432+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 1710 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:18:54 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:54 np0005532048 ceph-mon[75021]: Health check update: 14 slow ops, oldest one blocked for 1710 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3517: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:55.398+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:55 np0005532048 nova_compute[253661]: 2025-11-22 10:18:55.601 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:18:55 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:56.357+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:18:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:18:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:18:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:18:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:18:56 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3518: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:57.381+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:57 np0005532048 podman[439553]: 2025-11-22 10:18:57.405779805 +0000 UTC m=+0.082517329 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:18:57 np0005532048 podman[439554]: 2025-11-22 10:18:57.414265374 +0000 UTC m=+0.088433434 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 05:18:57 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:58.335+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:58 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3519: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:18:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:18:59.376+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:18:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:59 np0005532048 nova_compute[253661]: 2025-11-22 10:18:59.412 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:18:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:18:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:18:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:18:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:18:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:18:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 1715 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:18:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:18:59 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:18:59 np0005532048 ceph-mon[75021]: Health check update: 14 slow ops, oldest one blocked for 1715 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:00.343+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:00 np0005532048 nova_compute[253661]: 2025-11-22 10:19:00.602 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:00 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3520: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:01.359+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:01 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:02.390+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:02 np0005532048 podman[439591]: 2025-11-22 10:19:02.478114638 +0000 UTC m=+0.146929042 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:19:02 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3521: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:03.361+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:19:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:19:03 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:04.385+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:04 np0005532048 nova_compute[253661]: 2025-11-22 10:19:04.415 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 1720 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.524511) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806744524625, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1213, "num_deletes": 258, "total_data_size": 1386403, "memory_usage": 1411376, "flush_reason": "Manual Compaction"}
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806744549398, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 1354008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78557, "largest_seqno": 79769, "table_properties": {"data_size": 1348686, "index_size": 2525, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13704, "raw_average_key_size": 20, "raw_value_size": 1336910, "raw_average_value_size": 1983, "num_data_blocks": 111, "num_entries": 674, "num_filter_entries": 674, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806663, "oldest_key_time": 1763806663, "file_creation_time": 1763806744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 25251 microseconds, and 8149 cpu microseconds.
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.549758) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 1354008 bytes OK
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.549807) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.553987) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.554006) EVENT_LOG_v1 {"time_micros": 1763806744554000, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.554035) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 1380678, prev total WAL file size 1380678, number of live WAL files 2.
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.554831) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373638' seq:72057594037927935, type:22 .. '6C6F676D0034303232' seq:0, type:0; will stop at (end)
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(1322KB)], [185(9953KB)]
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806744554868, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 11546221, "oldest_snapshot_seqno": -1}
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 10870 keys, 11404333 bytes, temperature: kUnknown
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806744646831, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 11404333, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11338819, "index_size": 37241, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27205, "raw_key_size": 291017, "raw_average_key_size": 26, "raw_value_size": 11150622, "raw_average_value_size": 1025, "num_data_blocks": 1403, "num_entries": 10870, "num_filter_entries": 10870, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.647212) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 11404333 bytes
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.649564) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.4 rd, 123.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.7 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(17.0) write-amplify(8.4) OK, records in: 11398, records dropped: 528 output_compression: NoCompression
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.649594) EVENT_LOG_v1 {"time_micros": 1763806744649581, "job": 116, "event": "compaction_finished", "compaction_time_micros": 92074, "compaction_time_cpu_micros": 34133, "output_level": 6, "num_output_files": 1, "total_output_size": 11404333, "num_input_records": 11398, "num_output_records": 10870, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806744650155, "job": 116, "event": "table_file_deletion", "file_number": 187}
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806744653799, "job": 116, "event": "table_file_deletion", "file_number": 185}
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.554748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.653929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.653935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.653937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.653938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:19:04.653940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:04 np0005532048 ceph-mon[75021]: Health check update: 14 slow ops, oldest one blocked for 1720 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3522: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:05.354+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:05 np0005532048 nova_compute[253661]: 2025-11-22 10:19:05.606 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:05 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:06.351+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:06 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3523: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:07.323+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:07 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:08.349+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:08 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3524: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:09.365+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:09 np0005532048 nova_compute[253661]: 2025-11-22 10:19:09.419 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 1725 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:19:10 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:10 np0005532048 ceph-mon[75021]: Health check update: 14 slow ops, oldest one blocked for 1725 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:10.381+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:10 np0005532048 nova_compute[253661]: 2025-11-22 10:19:10.610 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:11 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3525: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:11.370+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:12 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:12.344+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:19:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2990060046' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:19:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:19:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2990060046' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:19:13 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3526: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:13.380+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:14 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:14.362+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:14 np0005532048 nova_compute[253661]: 2025-11-22 10:19:14.422 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 1730 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:19:15 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:15 np0005532048 ceph-mon[75021]: Health check update: 14 slow ops, oldest one blocked for 1730 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3527: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:15.357+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:15 np0005532048 nova_compute[253661]: 2025-11-22 10:19:15.610 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:16 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:16 np0005532048 nova_compute[253661]: 2025-11-22 10:19:16.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:19:16 np0005532048 nova_compute[253661]: 2025-11-22 10:19:16.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:19:16 np0005532048 nova_compute[253661]: 2025-11-22 10:19:16.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:19:16 np0005532048 nova_compute[253661]: 2025-11-22 10:19:16.244 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:19:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:16.363+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:17 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3528: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:17.390+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:18 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:18 np0005532048 nova_compute[253661]: 2025-11-22 10:19:18.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:19:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:18.428+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:19 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3529: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:19 np0005532048 nova_compute[253661]: 2025-11-22 10:19:19.425 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:19.431+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 14 slow ops, oldest one blocked for 1735 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:19:20 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:20 np0005532048 ceph-mon[75021]: Health check update: 14 slow ops, oldest one blocked for 1735 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:20.433+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:20 np0005532048 nova_compute[253661]: 2025-11-22 10:19:20.613 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:21 np0005532048 ceph-mon[75021]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'images' : 14 ])
Nov 22 05:19:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3530: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:21.392+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:22 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:22 np0005532048 nova_compute[253661]: 2025-11-22 10:19:22.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:19:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:22.416+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:19:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:19:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:19:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:19:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:19:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:19:23 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:23 np0005532048 nova_compute[253661]: 2025-11-22 10:19:23.220 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:19:23 np0005532048 nova_compute[253661]: 2025-11-22 10:19:23.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:19:23 np0005532048 nova_compute[253661]: 2025-11-22 10:19:23.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:19:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3531: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:23.437+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:24 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:24 np0005532048 nova_compute[253661]: 2025-11-22 10:19:24.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:19:24 np0005532048 nova_compute[253661]: 2025-11-22 10:19:24.430 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:24.452+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 05:19:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 05:19:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:19:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:19:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:19:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:19:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:19:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:19:24 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 078e78d2-9dc0-4142-aeac-4b34f096778a does not exist
Nov 22 05:19:24 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 07943f61-122f-4655-8429-46cf5a4a53ca does not exist
Nov 22 05:19:24 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 1584a471-b370-4e8a-964d-1df8beaf1048 does not exist
Nov 22 05:19:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:19:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:19:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:19:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:19:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:19:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:19:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1740 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:19:25 np0005532048 podman[439890]: 2025-11-22 10:19:25.092364818 +0000 UTC m=+0.045439398 container create 0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:19:25 np0005532048 systemd[1]: Started libpod-conmon-0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78.scope.
Nov 22 05:19:25 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:25 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 05:19:25 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:19:25 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:19:25 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:19:25 np0005532048 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1740 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:25 np0005532048 podman[439890]: 2025-11-22 10:19:25.071042035 +0000 UTC m=+0.024116665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:19:25 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:19:25 np0005532048 podman[439890]: 2025-11-22 10:19:25.184303077 +0000 UTC m=+0.137377667 container init 0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 05:19:25 np0005532048 podman[439890]: 2025-11-22 10:19:25.203479969 +0000 UTC m=+0.156554549 container start 0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:19:25 np0005532048 podman[439890]: 2025-11-22 10:19:25.207692102 +0000 UTC m=+0.160766712 container attach 0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 05:19:25 np0005532048 stoic_kirch[439907]: 167 167
Nov 22 05:19:25 np0005532048 systemd[1]: libpod-0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78.scope: Deactivated successfully.
Nov 22 05:19:25 np0005532048 conmon[439907]: conmon 0ddb2d6e85c20a072a63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78.scope/container/memory.events
Nov 22 05:19:25 np0005532048 podman[439890]: 2025-11-22 10:19:25.212627494 +0000 UTC m=+0.165702084 container died 0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 05:19:25 np0005532048 nova_compute[253661]: 2025-11-22 10:19:25.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:19:25 np0005532048 nova_compute[253661]: 2025-11-22 10:19:25.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:19:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3532: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:25 np0005532048 systemd[1]: var-lib-containers-storage-overlay-3761fd6199af71927a20f44ca6c8dce06c2602ba715048b7a5ae20695adf8ddb-merged.mount: Deactivated successfully.
Nov 22 05:19:25 np0005532048 nova_compute[253661]: 2025-11-22 10:19:25.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:19:25 np0005532048 nova_compute[253661]: 2025-11-22 10:19:25.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:19:25 np0005532048 nova_compute[253661]: 2025-11-22 10:19:25.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:19:25 np0005532048 nova_compute[253661]: 2025-11-22 10:19:25.256 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:19:25 np0005532048 nova_compute[253661]: 2025-11-22 10:19:25.256 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:19:25 np0005532048 podman[439890]: 2025-11-22 10:19:25.259416834 +0000 UTC m=+0.212491414 container remove 0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:19:25 np0005532048 systemd[1]: libpod-conmon-0ddb2d6e85c20a072a639bbd577b47e0a6deaf5049365a7a6691da7918901f78.scope: Deactivated successfully.
Nov 22 05:19:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:25.426+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:25 np0005532048 podman[439941]: 2025-11-22 10:19:25.496865879 +0000 UTC m=+0.078594042 container create e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:19:25 np0005532048 systemd[1]: Started libpod-conmon-e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35.scope.
Nov 22 05:19:25 np0005532048 podman[439941]: 2025-11-22 10:19:25.467364644 +0000 UTC m=+0.049092857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:19:25 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:19:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfbbecd84cee08c0d82fba497e3f2a0476397773c56d8f9fd31298dcf2c877e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:19:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfbbecd84cee08c0d82fba497e3f2a0476397773c56d8f9fd31298dcf2c877e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:19:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfbbecd84cee08c0d82fba497e3f2a0476397773c56d8f9fd31298dcf2c877e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:19:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfbbecd84cee08c0d82fba497e3f2a0476397773c56d8f9fd31298dcf2c877e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:19:25 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfbbecd84cee08c0d82fba497e3f2a0476397773c56d8f9fd31298dcf2c877e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:19:25 np0005532048 podman[439941]: 2025-11-22 10:19:25.604109684 +0000 UTC m=+0.185837867 container init e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:19:25 np0005532048 nova_compute[253661]: 2025-11-22 10:19:25.615 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:25 np0005532048 podman[439941]: 2025-11-22 10:19:25.618817546 +0000 UTC m=+0.200545729 container start e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 05:19:25 np0005532048 podman[439941]: 2025-11-22 10:19:25.623729726 +0000 UTC m=+0.205457899 container attach e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:19:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:19:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/545251252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:19:25 np0005532048 nova_compute[253661]: 2025-11-22 10:19:25.702 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:19:25 np0005532048 nova_compute[253661]: 2025-11-22 10:19:25.858 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:19:25 np0005532048 nova_compute[253661]: 2025-11-22 10:19:25.859 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3512MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:19:25 np0005532048 nova_compute[253661]: 2025-11-22 10:19:25.860 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:19:25 np0005532048 nova_compute[253661]: 2025-11-22 10:19:25.860 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:19:25 np0005532048 nova_compute[253661]: 2025-11-22 10:19:25.958 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:19:25 np0005532048 nova_compute[253661]: 2025-11-22 10:19:25.959 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:19:25 np0005532048 nova_compute[253661]: 2025-11-22 10:19:25.977 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:19:26 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:19:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2104404180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:19:26 np0005532048 nova_compute[253661]: 2025-11-22 10:19:26.389 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:19:26 np0005532048 nova_compute[253661]: 2025-11-22 10:19:26.399 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:19:26 np0005532048 nova_compute[253661]: 2025-11-22 10:19:26.413 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:19:26 np0005532048 nova_compute[253661]: 2025-11-22 10:19:26.416 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:19:26 np0005532048 nova_compute[253661]: 2025-11-22 10:19:26.417 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:19:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:26.463+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:26 np0005532048 admiring_ptolemy[439966]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:19:26 np0005532048 admiring_ptolemy[439966]: --> relative data size: 1.0
Nov 22 05:19:26 np0005532048 admiring_ptolemy[439966]: --> All data devices are unavailable
Nov 22 05:19:26 np0005532048 systemd[1]: libpod-e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35.scope: Deactivated successfully.
Nov 22 05:19:26 np0005532048 podman[439941]: 2025-11-22 10:19:26.656779334 +0000 UTC m=+1.238507467 container died e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:19:26 np0005532048 systemd[1]: var-lib-containers-storage-overlay-fcfbbecd84cee08c0d82fba497e3f2a0476397773c56d8f9fd31298dcf2c877e-merged.mount: Deactivated successfully.
Nov 22 05:19:26 np0005532048 podman[439941]: 2025-11-22 10:19:26.732623438 +0000 UTC m=+1.314351571 container remove e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ptolemy, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:19:26 np0005532048 systemd[1]: libpod-conmon-e918b5fe7ec86ddebc6569d24ac9bb4873082a28a2a66ea2c55e42b1e4fd3d35.scope: Deactivated successfully.
Nov 22 05:19:27 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3533: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:27 np0005532048 podman[440170]: 2025-11-22 10:19:27.377476045 +0000 UTC m=+0.036632551 container create 43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:19:27 np0005532048 systemd[1]: Started libpod-conmon-43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced.scope.
Nov 22 05:19:27 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:19:27 np0005532048 podman[440170]: 2025-11-22 10:19:27.451017342 +0000 UTC m=+0.110173868 container init 43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 05:19:27 np0005532048 podman[440170]: 2025-11-22 10:19:27.360861367 +0000 UTC m=+0.020017893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:19:27 np0005532048 podman[440170]: 2025-11-22 10:19:27.45740714 +0000 UTC m=+0.116563646 container start 43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:19:27 np0005532048 podman[440170]: 2025-11-22 10:19:27.460891695 +0000 UTC m=+0.120048201 container attach 43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:19:27 np0005532048 nice_taussig[440186]: 167 167
Nov 22 05:19:27 np0005532048 systemd[1]: libpod-43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced.scope: Deactivated successfully.
Nov 22 05:19:27 np0005532048 podman[440170]: 2025-11-22 10:19:27.462853894 +0000 UTC m=+0.122010420 container died 43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 22 05:19:27 np0005532048 systemd[1]: var-lib-containers-storage-overlay-15b746bb366cbb6f62504d6cb45c6a12445e78799b268d86d9b5bf180529e52f-merged.mount: Deactivated successfully.
Nov 22 05:19:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:27.503+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:27 np0005532048 podman[440170]: 2025-11-22 10:19:27.507059539 +0000 UTC m=+0.166216055 container remove 43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:19:27 np0005532048 systemd[1]: libpod-conmon-43ff34365f2083838cc49953a647e39841c7c025c7fc51ea64648d1f4a853ced.scope: Deactivated successfully.
Nov 22 05:19:27 np0005532048 podman[440190]: 2025-11-22 10:19:27.517710042 +0000 UTC m=+0.064375433 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:19:27 np0005532048 podman[440189]: 2025-11-22 10:19:27.517184159 +0000 UTC m=+0.064788874 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 22 05:19:27 np0005532048 podman[440245]: 2025-11-22 10:19:27.663036493 +0000 UTC m=+0.045087359 container create 6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 05:19:27 np0005532048 systemd[1]: Started libpod-conmon-6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6.scope.
Nov 22 05:19:27 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:19:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ed044b58d94063f98bb518931eba854ade0a1c735e8dc53cd605ec9b17bbdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:19:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ed044b58d94063f98bb518931eba854ade0a1c735e8dc53cd605ec9b17bbdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:19:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ed044b58d94063f98bb518931eba854ade0a1c735e8dc53cd605ec9b17bbdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:19:27 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ed044b58d94063f98bb518931eba854ade0a1c735e8dc53cd605ec9b17bbdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:19:27 np0005532048 podman[440245]: 2025-11-22 10:19:27.736484478 +0000 UTC m=+0.118535384 container init 6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:19:27 np0005532048 podman[440245]: 2025-11-22 10:19:27.643512523 +0000 UTC m=+0.025563429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:19:27 np0005532048 podman[440245]: 2025-11-22 10:19:27.750707227 +0000 UTC m=+0.132758123 container start 6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:19:27 np0005532048 podman[440245]: 2025-11-22 10:19:27.756380626 +0000 UTC m=+0.138431542 container attach 6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 05:19:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:19:28.029 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:19:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:19:28.030 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:19:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:19:28.030 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:19:28 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:28 np0005532048 nova_compute[253661]: 2025-11-22 10:19:28.418 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]: {
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:    "0": [
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:        {
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "devices": [
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "/dev/loop3"
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            ],
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "lv_name": "ceph_lv0",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "lv_size": "21470642176",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "name": "ceph_lv0",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "tags": {
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.cluster_name": "ceph",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.crush_device_class": "",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.encrypted": "0",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.osd_id": "0",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.type": "block",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.vdo": "0"
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            },
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "type": "block",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "vg_name": "ceph_vg0"
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:        }
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:    ],
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:    "1": [
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:        {
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "devices": [
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "/dev/loop4"
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            ],
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "lv_name": "ceph_lv1",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "lv_size": "21470642176",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "name": "ceph_lv1",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "tags": {
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.cluster_name": "ceph",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.crush_device_class": "",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.encrypted": "0",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.osd_id": "1",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.type": "block",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.vdo": "0"
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            },
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "type": "block",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "vg_name": "ceph_vg1"
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:        }
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:    ],
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:    "2": [
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:        {
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "devices": [
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "/dev/loop5"
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            ],
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "lv_name": "ceph_lv2",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "lv_size": "21470642176",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "name": "ceph_lv2",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "tags": {
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.cluster_name": "ceph",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.crush_device_class": "",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.encrypted": "0",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.osd_id": "2",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.type": "block",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:                "ceph.vdo": "0"
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            },
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "type": "block",
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:            "vg_name": "ceph_vg2"
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:        }
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]:    ]
Nov 22 05:19:28 np0005532048 cranky_mirzakhani[440261]: }
Nov 22 05:19:28 np0005532048 systemd[1]: libpod-6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6.scope: Deactivated successfully.
Nov 22 05:19:28 np0005532048 podman[440245]: 2025-11-22 10:19:28.524701958 +0000 UTC m=+0.906752854 container died 6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:19:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:28.531+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:28 np0005532048 systemd[1]: var-lib-containers-storage-overlay-49ed044b58d94063f98bb518931eba854ade0a1c735e8dc53cd605ec9b17bbdc-merged.mount: Deactivated successfully.
Nov 22 05:19:28 np0005532048 podman[440245]: 2025-11-22 10:19:28.587973763 +0000 UTC m=+0.970024619 container remove 6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:19:28 np0005532048 systemd[1]: libpod-conmon-6e5388c813bb282573da1e050b68fede63882f0cb64bf66d07d47adf22dd40a6.scope: Deactivated successfully.
Nov 22 05:19:29 np0005532048 podman[440422]: 2025-11-22 10:19:29.212610415 +0000 UTC m=+0.076090292 container create ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tharp, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 05:19:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3534: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:29 np0005532048 podman[440422]: 2025-11-22 10:19:29.157056519 +0000 UTC m=+0.020536436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:19:29 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:29 np0005532048 systemd[1]: Started libpod-conmon-ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0.scope.
Nov 22 05:19:29 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:19:29 np0005532048 podman[440422]: 2025-11-22 10:19:29.434172489 +0000 UTC m=+0.297652456 container init ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tharp, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:19:29 np0005532048 nova_compute[253661]: 2025-11-22 10:19:29.433 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:29 np0005532048 podman[440422]: 2025-11-22 10:19:29.445676502 +0000 UTC m=+0.309156429 container start ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 05:19:29 np0005532048 upbeat_tharp[440438]: 167 167
Nov 22 05:19:29 np0005532048 systemd[1]: libpod-ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0.scope: Deactivated successfully.
Nov 22 05:19:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1745 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:19:29 np0005532048 podman[440422]: 2025-11-22 10:19:29.522969231 +0000 UTC m=+0.386449218 container attach ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tharp, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:19:29 np0005532048 podman[440422]: 2025-11-22 10:19:29.524277133 +0000 UTC m=+0.387757060 container died ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tharp, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:19:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:29.535+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:29 np0005532048 systemd[1]: var-lib-containers-storage-overlay-2da2154ab9b0540b708daaaf2a9a8dfb6fc2b8d0047515590b8ef4ee503eaba2-merged.mount: Deactivated successfully.
Nov 22 05:19:29 np0005532048 podman[440422]: 2025-11-22 10:19:29.67917135 +0000 UTC m=+0.542651267 container remove ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_tharp, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 05:19:29 np0005532048 systemd[1]: libpod-conmon-ecff28de0191de8b47c9855351556622ecbdcf8c44268d38ca1832931add24b0.scope: Deactivated successfully.
Nov 22 05:19:29 np0005532048 podman[440464]: 2025-11-22 10:19:29.869114367 +0000 UTC m=+0.044603816 container create 3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:19:29 np0005532048 systemd[1]: Started libpod-conmon-3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b.scope.
Nov 22 05:19:29 np0005532048 podman[440464]: 2025-11-22 10:19:29.852686624 +0000 UTC m=+0.028176093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:19:29 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:19:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2f31a8ebb4890533f36766b51e7d2f17399275d3350fbb9a54cedfcb919871/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:19:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2f31a8ebb4890533f36766b51e7d2f17399275d3350fbb9a54cedfcb919871/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:19:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2f31a8ebb4890533f36766b51e7d2f17399275d3350fbb9a54cedfcb919871/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:19:29 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2f31a8ebb4890533f36766b51e7d2f17399275d3350fbb9a54cedfcb919871/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:19:29 np0005532048 podman[440464]: 2025-11-22 10:19:29.973601546 +0000 UTC m=+0.149091015 container init 3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 05:19:29 np0005532048 podman[440464]: 2025-11-22 10:19:29.992732886 +0000 UTC m=+0.168222335 container start 3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:19:29 np0005532048 podman[440464]: 2025-11-22 10:19:29.996463927 +0000 UTC m=+0.171953376 container attach 3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:19:30 np0005532048 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1745 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:30 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:30.571+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:30 np0005532048 nova_compute[253661]: 2025-11-22 10:19:30.617 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]: {
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:        "osd_id": 1,
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:        "type": "bluestore"
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:    },
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:        "osd_id": 0,
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:        "type": "bluestore"
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:    },
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:        "osd_id": 2,
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:19:30 np0005532048 pensive_brahmagupta[440481]:        "type": "bluestore"
Nov 22 05:19:31 np0005532048 pensive_brahmagupta[440481]:    }
Nov 22 05:19:31 np0005532048 pensive_brahmagupta[440481]: }
Nov 22 05:19:31 np0005532048 systemd[1]: libpod-3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b.scope: Deactivated successfully.
Nov 22 05:19:31 np0005532048 podman[440464]: 2025-11-22 10:19:31.027854504 +0000 UTC m=+1.203343963 container died 3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 05:19:31 np0005532048 systemd[1]: libpod-3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b.scope: Consumed 1.041s CPU time.
Nov 22 05:19:31 np0005532048 systemd[1]: var-lib-containers-storage-overlay-7d2f31a8ebb4890533f36766b51e7d2f17399275d3350fbb9a54cedfcb919871-merged.mount: Deactivated successfully.
Nov 22 05:19:31 np0005532048 podman[440464]: 2025-11-22 10:19:31.09688819 +0000 UTC m=+1.272377649 container remove 3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:19:31 np0005532048 systemd[1]: libpod-conmon-3bdccebd0f6534c7567775c7f4af5428caf24e534aa1bb60d9c25e59deb5b78b.scope: Deactivated successfully.
Nov 22 05:19:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:19:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:19:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:19:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:19:31 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev a1dc3f0d-c208-438e-a9eb-73f79ecdd683 does not exist
Nov 22 05:19:31 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 38361ec1-ac04-4d07-8d4d-5d61e1d2f4ed does not exist
Nov 22 05:19:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3535: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:31 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:31 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:19:31 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:19:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:31.576+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:32 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:32.577+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3536: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:33 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:33 np0005532048 podman[440579]: 2025-11-22 10:19:33.403303472 +0000 UTC m=+0.098240926 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:19:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:33.560+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:34 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:34 np0005532048 nova_compute[253661]: 2025-11-22 10:19:34.437 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1750 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:19:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:34.563+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3537: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:35 np0005532048 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1750 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:35 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:35.559+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:35 np0005532048 nova_compute[253661]: 2025-11-22 10:19:35.618 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:36 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:36.570+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3538: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:37 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:37.617+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:38 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:38.654+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3539: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:39 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:39 np0005532048 nova_compute[253661]: 2025-11-22 10:19:39.440 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1755 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:19:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:39.687+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:40 np0005532048 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1755 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:40 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:40 np0005532048 nova_compute[253661]: 2025-11-22 10:19:40.619 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:40.652+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:41 np0005532048 nova_compute[253661]: 2025-11-22 10:19:41.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:19:41 np0005532048 nova_compute[253661]: 2025-11-22 10:19:41.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 22 05:19:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3540: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:41 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:41.603+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:42 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:42.652+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3541: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:43 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:43.611+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:44 np0005532048 nova_compute[253661]: 2025-11-22 10:19:44.443 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:44 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1760 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:19:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:44.598+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3542: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:45 np0005532048 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1760 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:45 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:45 np0005532048 nova_compute[253661]: 2025-11-22 10:19:45.620 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:45.642+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:46 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:46.603+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3543: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:47 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:47.618+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:48 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:48.621+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3544: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:49 np0005532048 nova_compute[253661]: 2025-11-22 10:19:49.446 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:49 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 5 slow ops, oldest one blocked for 1765 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:19:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:49.652+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.22795.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:50 np0005532048 nova_compute[253661]: 2025-11-22 10:19:50.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:19:50 np0005532048 ceph-mon[75021]: Health check update: 5 slow ops, oldest one blocked for 1765 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:50 np0005532048 ceph-mon[75021]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'images' : 5 ])
Nov 22 05:19:50 np0005532048 nova_compute[253661]: 2025-11-22 10:19:50.623 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:50.635+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3545: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:51 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:51.589+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:19:52
Nov 22 05:19:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:19:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:19:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.meta']
Nov 22 05:19:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:19:52 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:52.560+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:19:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:19:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:19:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:19:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:19:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:19:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3546: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:53.525+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:53 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:54 np0005532048 nova_compute[253661]: 2025-11-22 10:19:54.450 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1770 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:19:54 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:54 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1770 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:54.567+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3547: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:55.565+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:55 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:55 np0005532048 nova_compute[253661]: 2025-11-22 10:19:55.625 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:56.519+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:19:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:19:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:19:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:19:56 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:19:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3548: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:57 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:57.530+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:58 np0005532048 podman[440606]: 2025-11-22 10:19:58.363935236 +0000 UTC m=+0.057421292 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:19:58 np0005532048 podman[440607]: 2025-11-22 10:19:58.372946047 +0000 UTC m=+0.065292735 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 05:19:58 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:58 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:58.520+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3549: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:19:59 np0005532048 nova_compute[253661]: 2025-11-22 10:19:59.453 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:19:59 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:19:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:19:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:19:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:19:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:19:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:19:59.522+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:19:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:19:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1775 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:19:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:20:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:00.495+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:00 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:00 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1775 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:00 np0005532048 nova_compute[253661]: 2025-11-22 10:20:00.627 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:20:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3550: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:01 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:01.542+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:02.516+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:02 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3551: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:20:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:20:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:03.516+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:03 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:04 np0005532048 podman[440639]: 2025-11-22 10:20:04.396540649 +0000 UTC m=+0.094494593 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 22 05:20:04 np0005532048 nova_compute[253661]: 2025-11-22 10:20:04.455 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:20:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:04.509+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1780 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:20:04 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:04 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1780 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3552: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:05.485+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:05 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:05 np0005532048 nova_compute[253661]: 2025-11-22 10:20:05.628 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:20:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:06.534+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:06 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3553: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:07.555+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:07 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:08.538+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:08 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3554: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:09 np0005532048 nova_compute[253661]: 2025-11-22 10:20:09.459 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:20:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1785 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:20:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:09.556+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:09 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:09 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1785 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:10.517+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:10 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:10 np0005532048 nova_compute[253661]: 2025-11-22 10:20:10.631 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:20:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3555: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:11.520+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:11 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:20:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1218516067' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:20:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:20:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1218516067' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:20:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:12.538+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:12 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3556: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:13.536+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:13 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:14 np0005532048 nova_compute[253661]: 2025-11-22 10:20:14.460 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:20:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:14.489+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1790 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:20:14 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:14 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1790 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3557: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:15.468+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:15 np0005532048 nova_compute[253661]: 2025-11-22 10:20:15.634 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:20:15 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:16.451+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:16 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:17 np0005532048 nova_compute[253661]: 2025-11-22 10:20:17.239 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:20:17 np0005532048 nova_compute[253661]: 2025-11-22 10:20:17.239 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:20:17 np0005532048 nova_compute[253661]: 2025-11-22 10:20:17.240 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:20:17 np0005532048 nova_compute[253661]: 2025-11-22 10:20:17.251 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:20:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3558: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:17.492+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:17 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:18 np0005532048 nova_compute[253661]: 2025-11-22 10:20:18.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:20:18 np0005532048 nova_compute[253661]: 2025-11-22 10:20:18.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 22 05:20:18 np0005532048 nova_compute[253661]: 2025-11-22 10:20:18.240 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 22 05:20:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:18.531+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:18 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3559: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:19 np0005532048 nova_compute[253661]: 2025-11-22 10:20:19.465 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:20:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:19.503+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1795 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:20:19 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:19 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1795 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:20 np0005532048 nova_compute[253661]: 2025-11-22 10:20:20.240 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:20:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:20.550+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:20 np0005532048 nova_compute[253661]: 2025-11-22 10:20:20.636 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:20:20 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3560: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:21.509+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:21 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:22 np0005532048 nova_compute[253661]: 2025-11-22 10:20:22.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:20:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:22.529+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:22 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:20:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:20:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:20:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:20:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:20:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:20:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3561: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:23.547+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:23 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:23 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:24 np0005532048 nova_compute[253661]: 2025-11-22 10:20:24.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:20:24 np0005532048 nova_compute[253661]: 2025-11-22 10:20:24.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:20:24 np0005532048 nova_compute[253661]: 2025-11-22 10:20:24.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:20:24 np0005532048 nova_compute[253661]: 2025-11-22 10:20:24.229 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:20:24 np0005532048 nova_compute[253661]: 2025-11-22 10:20:24.469 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:20:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1800 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:20:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:24.561+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:24 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1800 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:24 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3562: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:25.590+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:25 np0005532048 nova_compute[253661]: 2025-11-22 10:20:25.638 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.847400) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806825847474, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 1202, "num_deletes": 251, "total_data_size": 1350348, "memory_usage": 1372656, "flush_reason": "Manual Compaction"}
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806825861510, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 1328764, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79770, "largest_seqno": 80971, "table_properties": {"data_size": 1323460, "index_size": 2507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13861, "raw_average_key_size": 20, "raw_value_size": 1311792, "raw_average_value_size": 1963, "num_data_blocks": 110, "num_entries": 668, "num_filter_entries": 668, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806744, "oldest_key_time": 1763806744, "file_creation_time": 1763806825, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 14168 microseconds, and 8405 cpu microseconds.
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.861570) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 1328764 bytes OK
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.861601) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.863750) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.863776) EVENT_LOG_v1 {"time_micros": 1763806825863768, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.863988) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 1344683, prev total WAL file size 1344683, number of live WAL files 2.
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.865088) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(1297KB)], [188(10MB)]
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806825865152, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 12733097, "oldest_snapshot_seqno": -1}
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 11024 keys, 11340015 bytes, temperature: kUnknown
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806825979355, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 11340015, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11273726, "index_size": 37632, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27589, "raw_key_size": 295451, "raw_average_key_size": 26, "raw_value_size": 11082861, "raw_average_value_size": 1005, "num_data_blocks": 1414, "num_entries": 11024, "num_filter_entries": 11024, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806825, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.979630) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 11340015 bytes
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.981090) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.4 rd, 99.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.9 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(18.1) write-amplify(8.5) OK, records in: 11538, records dropped: 514 output_compression: NoCompression
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.981111) EVENT_LOG_v1 {"time_micros": 1763806825981101, "job": 118, "event": "compaction_finished", "compaction_time_micros": 114291, "compaction_time_cpu_micros": 50219, "output_level": 6, "num_output_files": 1, "total_output_size": 11340015, "num_input_records": 11538, "num_output_records": 11024, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806825981490, "job": 118, "event": "table_file_deletion", "file_number": 190}
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806825983435, "job": 118, "event": "table_file_deletion", "file_number": 188}
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.864953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.983591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.983609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.983612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.983615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:20:25 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:20:25.983618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:20:26 np0005532048 nova_compute[253661]: 2025-11-22 10:20:26.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:20:26 np0005532048 nova_compute[253661]: 2025-11-22 10:20:26.282 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:20:26 np0005532048 nova_compute[253661]: 2025-11-22 10:20:26.283 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:20:26 np0005532048 nova_compute[253661]: 2025-11-22 10:20:26.284 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:20:26 np0005532048 nova_compute[253661]: 2025-11-22 10:20:26.284 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:20:26 np0005532048 nova_compute[253661]: 2025-11-22 10:20:26.284 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:20:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:26.543+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:20:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/78187973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:20:26 np0005532048 nova_compute[253661]: 2025-11-22 10:20:26.777 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:20:26 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:26 np0005532048 nova_compute[253661]: 2025-11-22 10:20:26.977 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:20:26 np0005532048 nova_compute[253661]: 2025-11-22 10:20:26.979 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3547MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 05:20:26 np0005532048 nova_compute[253661]: 2025-11-22 10:20:26.979 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:20:26 np0005532048 nova_compute[253661]: 2025-11-22 10:20:26.980 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:20:27 np0005532048 nova_compute[253661]: 2025-11-22 10:20:27.061 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:20:27 np0005532048 nova_compute[253661]: 2025-11-22 10:20:27.062 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:20:27 np0005532048 nova_compute[253661]: 2025-11-22 10:20:27.081 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:20:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3563: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:20:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3840024796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:20:27 np0005532048 nova_compute[253661]: 2025-11-22 10:20:27.532 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:20:27 np0005532048 nova_compute[253661]: 2025-11-22 10:20:27.539 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:20:27 np0005532048 nova_compute[253661]: 2025-11-22 10:20:27.565 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:20:27 np0005532048 nova_compute[253661]: 2025-11-22 10:20:27.566 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:20:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:27.565+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:27 np0005532048 nova_compute[253661]: 2025-11-22 10:20:27.567 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:20:27 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:20:28.030 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:20:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:20:28.031 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:20:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:20:28.031 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:20:28 np0005532048 nova_compute[253661]: 2025-11-22 10:20:28.567 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:20:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:28.583+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:28 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:29 np0005532048 nova_compute[253661]: 2025-11-22 10:20:29.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:20:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3564: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:29 np0005532048 podman[440709]: 2025-11-22 10:20:29.364746125 +0000 UTC m=+0.046942705 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:20:29 np0005532048 podman[440710]: 2025-11-22 10:20:29.378215457 +0000 UTC m=+0.057056834 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:20:29 np0005532048 nova_compute[253661]: 2025-11-22 10:20:29.473 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:20:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1805 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:20:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:29.591+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:30 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1805 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:30 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:30.580+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:30 np0005532048 nova_compute[253661]: 2025-11-22 10:20:30.639 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:20:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3565: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:31.590+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:31 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:32 np0005532048 podman[440921]: 2025-11-22 10:20:32.291058051 +0000 UTC m=+0.210874224 container exec 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:20:32 np0005532048 podman[440942]: 2025-11-22 10:20:32.534274188 +0000 UTC m=+0.115573892 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 05:20:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:32.567+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:32 np0005532048 podman[440921]: 2025-11-22 10:20:32.586219735 +0000 UTC m=+0.506035658 container exec_died 621c1e8cf040561cddbcd1745ff4b614a1adb34ffb950ed66cab5d41e0b51297 (image=quay.io/ceph/ceph:v18, name=ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 05:20:32 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:20:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3566: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:20:33 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:20:33 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:20:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:33.539+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:33 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:33 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:20:33 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:20:34 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev fdb8d409-e675-4940-a70c-8adfd5918053 does not exist
Nov 22 05:20:34 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev d8bba59b-da59-485a-bb63-bd4f218666df does not exist
Nov 22 05:20:34 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 3db106c2-faf2-4b79-a52a-47022f82c1b9 does not exist
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:20:34 np0005532048 nova_compute[253661]: 2025-11-22 10:20:34.475 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1810 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:20:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:34.541+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:34 np0005532048 podman[441309]: 2025-11-22 10:20:34.56364001 +0000 UTC m=+0.104341295 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:20:34 np0005532048 podman[441375]: 2025-11-22 10:20:34.792765721 +0000 UTC m=+0.044802522 container create a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1810 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:34 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:34 np0005532048 systemd[1]: Started libpod-conmon-a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d.scope.
Nov 22 05:20:34 np0005532048 podman[441375]: 2025-11-22 10:20:34.771670553 +0000 UTC m=+0.023707244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:20:34 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:20:34 np0005532048 podman[441375]: 2025-11-22 10:20:34.894160992 +0000 UTC m=+0.146197753 container init a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:20:34 np0005532048 podman[441375]: 2025-11-22 10:20:34.906019934 +0000 UTC m=+0.158056645 container start a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:20:34 np0005532048 podman[441375]: 2025-11-22 10:20:34.910391152 +0000 UTC m=+0.162427913 container attach a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:20:34 np0005532048 ecstatic_shamir[441391]: 167 167
Nov 22 05:20:34 np0005532048 systemd[1]: libpod-a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d.scope: Deactivated successfully.
Nov 22 05:20:34 np0005532048 conmon[441391]: conmon a89292f7a1e76705d501 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d.scope/container/memory.events
Nov 22 05:20:34 np0005532048 podman[441375]: 2025-11-22 10:20:34.916258626 +0000 UTC m=+0.168295307 container died a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:20:34 np0005532048 systemd[1]: var-lib-containers-storage-overlay-eec9ad2b5109ea0a8a473b5146e6c059757a2a3647555ec51097f654a19e06d5-merged.mount: Deactivated successfully.
Nov 22 05:20:34 np0005532048 podman[441375]: 2025-11-22 10:20:34.965481615 +0000 UTC m=+0.217518326 container remove a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shamir, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:20:34 np0005532048 systemd[1]: libpod-conmon-a89292f7a1e76705d501fe4f439920c6125813714b689d95055124e99b53998d.scope: Deactivated successfully.
Nov 22 05:20:35 np0005532048 podman[441414]: 2025-11-22 10:20:35.169812646 +0000 UTC m=+0.067250293 container create e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:20:35 np0005532048 systemd[1]: Started libpod-conmon-e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62.scope.
Nov 22 05:20:35 np0005532048 podman[441414]: 2025-11-22 10:20:35.148184635 +0000 UTC m=+0.045622312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:20:35 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:20:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e25d2ffb6e0a6e557143453161344ba46c45642fefc393538fcc04cd23a8c37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:20:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e25d2ffb6e0a6e557143453161344ba46c45642fefc393538fcc04cd23a8c37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:20:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e25d2ffb6e0a6e557143453161344ba46c45642fefc393538fcc04cd23a8c37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:20:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e25d2ffb6e0a6e557143453161344ba46c45642fefc393538fcc04cd23a8c37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:20:35 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e25d2ffb6e0a6e557143453161344ba46c45642fefc393538fcc04cd23a8c37/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:20:35 np0005532048 podman[441414]: 2025-11-22 10:20:35.269577588 +0000 UTC m=+0.167015325 container init e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_murdock, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 05:20:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3567: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:35 np0005532048 podman[441414]: 2025-11-22 10:20:35.277928193 +0000 UTC m=+0.175365860 container start e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_murdock, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:20:35 np0005532048 podman[441414]: 2025-11-22 10:20:35.282345422 +0000 UTC m=+0.179783109 container attach e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_murdock, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:20:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:35.577+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:35 np0005532048 nova_compute[253661]: 2025-11-22 10:20:35.640 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:20:35 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:36 np0005532048 nova_compute[253661]: 2025-11-22 10:20:36.221 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:20:36 np0005532048 brave_murdock[441430]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:20:36 np0005532048 brave_murdock[441430]: --> relative data size: 1.0
Nov 22 05:20:36 np0005532048 brave_murdock[441430]: --> All data devices are unavailable
Nov 22 05:20:36 np0005532048 systemd[1]: libpod-e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62.scope: Deactivated successfully.
Nov 22 05:20:36 np0005532048 podman[441459]: 2025-11-22 10:20:36.355856684 +0000 UTC m=+0.026446761 container died e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_murdock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:20:36 np0005532048 systemd[1]: var-lib-containers-storage-overlay-1e25d2ffb6e0a6e557143453161344ba46c45642fefc393538fcc04cd23a8c37-merged.mount: Deactivated successfully.
Nov 22 05:20:36 np0005532048 podman[441459]: 2025-11-22 10:20:36.415586662 +0000 UTC m=+0.086176719 container remove e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 05:20:36 np0005532048 systemd[1]: libpod-conmon-e5c05b81f34eddad9acd00527d9ce369b828feb1fa5d0baf7b88c5b711b9aa62.scope: Deactivated successfully.
Nov 22 05:20:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:36.545+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:36 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:36 np0005532048 podman[441615]: 2025-11-22 10:20:36.990700026 +0000 UTC m=+0.042586408 container create df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shannon, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:20:37 np0005532048 systemd[1]: Started libpod-conmon-df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534.scope.
Nov 22 05:20:37 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:20:37 np0005532048 podman[441615]: 2025-11-22 10:20:36.971914874 +0000 UTC m=+0.023801266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:20:37 np0005532048 podman[441615]: 2025-11-22 10:20:37.071085621 +0000 UTC m=+0.122972083 container init df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shannon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:20:37 np0005532048 podman[441615]: 2025-11-22 10:20:37.078514903 +0000 UTC m=+0.130401275 container start df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shannon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 05:20:37 np0005532048 podman[441615]: 2025-11-22 10:20:37.082481941 +0000 UTC m=+0.134368403 container attach df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shannon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 05:20:37 np0005532048 elated_shannon[441631]: 167 167
Nov 22 05:20:37 np0005532048 systemd[1]: libpod-df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534.scope: Deactivated successfully.
Nov 22 05:20:37 np0005532048 podman[441615]: 2025-11-22 10:20:37.087654008 +0000 UTC m=+0.139540390 container died df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:20:37 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c07bb9e05b0a1c1226fefcfce4a9dce27a4b4ddce535d88418a72d6e14a1284e-merged.mount: Deactivated successfully.
Nov 22 05:20:37 np0005532048 podman[441615]: 2025-11-22 10:20:37.153138178 +0000 UTC m=+0.205024560 container remove df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shannon, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 05:20:37 np0005532048 systemd[1]: libpod-conmon-df8cfc47815a5ed72c9c4cc335ca6a6d7694453c5a81f857e69897f0ffc2f534.scope: Deactivated successfully.
Nov 22 05:20:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3568: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:37 np0005532048 podman[441655]: 2025-11-22 10:20:37.39008425 +0000 UTC m=+0.069981320 container create 84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 22 05:20:37 np0005532048 systemd[1]: Started libpod-conmon-84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a.scope.
Nov 22 05:20:37 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:20:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851b228509afc8d0a5969d4528bc37dc904a14bb605c877a88e96d8beec48f55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:20:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851b228509afc8d0a5969d4528bc37dc904a14bb605c877a88e96d8beec48f55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:20:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851b228509afc8d0a5969d4528bc37dc904a14bb605c877a88e96d8beec48f55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:20:37 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851b228509afc8d0a5969d4528bc37dc904a14bb605c877a88e96d8beec48f55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:20:37 np0005532048 podman[441655]: 2025-11-22 10:20:37.366995843 +0000 UTC m=+0.046892943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:20:37 np0005532048 podman[441655]: 2025-11-22 10:20:37.471231655 +0000 UTC m=+0.151128815 container init 84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 05:20:37 np0005532048 podman[441655]: 2025-11-22 10:20:37.485383992 +0000 UTC m=+0.165281072 container start 84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 05:20:37 np0005532048 podman[441655]: 2025-11-22 10:20:37.490716804 +0000 UTC m=+0.170613944 container attach 84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:20:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:37.525+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:37 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]: {
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:    "0": [
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:        {
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "devices": [
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "/dev/loop3"
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            ],
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "lv_name": "ceph_lv0",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "lv_size": "21470642176",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "name": "ceph_lv0",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "tags": {
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.cluster_name": "ceph",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.crush_device_class": "",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.encrypted": "0",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.osd_id": "0",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.type": "block",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.vdo": "0"
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            },
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "type": "block",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "vg_name": "ceph_vg0"
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:        }
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:    ],
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:    "1": [
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:        {
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "devices": [
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "/dev/loop4"
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            ],
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "lv_name": "ceph_lv1",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "lv_size": "21470642176",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "name": "ceph_lv1",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "tags": {
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.cluster_name": "ceph",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.crush_device_class": "",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.encrypted": "0",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.osd_id": "1",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.type": "block",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.vdo": "0"
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            },
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "type": "block",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "vg_name": "ceph_vg1"
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:        }
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:    ],
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:    "2": [
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:        {
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "devices": [
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "/dev/loop5"
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            ],
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "lv_name": "ceph_lv2",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "lv_size": "21470642176",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "name": "ceph_lv2",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "tags": {
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.cluster_name": "ceph",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.crush_device_class": "",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.encrypted": "0",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.osd_id": "2",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.type": "block",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:                "ceph.vdo": "0"
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            },
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "type": "block",
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:            "vg_name": "ceph_vg2"
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:        }
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]:    ]
Nov 22 05:20:38 np0005532048 hardcore_lichterman[441672]: }
Nov 22 05:20:38 np0005532048 systemd[1]: libpod-84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a.scope: Deactivated successfully.
Nov 22 05:20:38 np0005532048 podman[441655]: 2025-11-22 10:20:38.334371776 +0000 UTC m=+1.014268836 container died 84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:20:38 np0005532048 systemd[1]: var-lib-containers-storage-overlay-851b228509afc8d0a5969d4528bc37dc904a14bb605c877a88e96d8beec48f55-merged.mount: Deactivated successfully.
Nov 22 05:20:38 np0005532048 podman[441655]: 2025-11-22 10:20:38.399734763 +0000 UTC m=+1.079631833 container remove 84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:20:38 np0005532048 systemd[1]: libpod-conmon-84c5a419845b10e4da0d76d081856db09a97723743fbbc466a3f1a9c3bdd896a.scope: Deactivated successfully.
Nov 22 05:20:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:38.574+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:38 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:39 np0005532048 podman[441834]: 2025-11-22 10:20:39.083723031 +0000 UTC m=+0.049309342 container create 44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 05:20:39 np0005532048 systemd[1]: Started libpod-conmon-44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c.scope.
Nov 22 05:20:39 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:20:39 np0005532048 podman[441834]: 2025-11-22 10:20:39.058587243 +0000 UTC m=+0.024173554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:20:39 np0005532048 podman[441834]: 2025-11-22 10:20:39.170744771 +0000 UTC m=+0.136331072 container init 44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 05:20:39 np0005532048 podman[441834]: 2025-11-22 10:20:39.178276015 +0000 UTC m=+0.143862316 container start 44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lehmann, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:20:39 np0005532048 podman[441834]: 2025-11-22 10:20:39.181971976 +0000 UTC m=+0.147558337 container attach 44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 05:20:39 np0005532048 magical_lehmann[441851]: 167 167
Nov 22 05:20:39 np0005532048 podman[441834]: 2025-11-22 10:20:39.186159629 +0000 UTC m=+0.151745910 container died 44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:20:39 np0005532048 systemd[1]: libpod-44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c.scope: Deactivated successfully.
Nov 22 05:20:39 np0005532048 systemd[1]: var-lib-containers-storage-overlay-f838eef0af9a60cc141d8bc078239a9a97443439a3906f35c8e44163a3e7b5e8-merged.mount: Deactivated successfully.
Nov 22 05:20:39 np0005532048 podman[441834]: 2025-11-22 10:20:39.224221485 +0000 UTC m=+0.189807766 container remove 44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 05:20:39 np0005532048 systemd[1]: libpod-conmon-44832010f3dc92d770d6b2f6abcc9d4b33d72c8a4ace79d524f93a0cfa76204c.scope: Deactivated successfully.
Nov 22 05:20:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3569: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:39 np0005532048 podman[441873]: 2025-11-22 10:20:39.431628031 +0000 UTC m=+0.044681299 container create 370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 05:20:39 np0005532048 systemd[1]: Started libpod-conmon-370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a.scope.
Nov 22 05:20:39 np0005532048 nova_compute[253661]: 2025-11-22 10:20:39.478 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:20:39 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:20:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea25133f0ef2c4c286c7d934c8eb4fd415d16cdc1623654d1929ef3f3f47e9a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:20:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea25133f0ef2c4c286c7d934c8eb4fd415d16cdc1623654d1929ef3f3f47e9a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:20:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea25133f0ef2c4c286c7d934c8eb4fd415d16cdc1623654d1929ef3f3f47e9a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:20:39 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea25133f0ef2c4c286c7d934c8eb4fd415d16cdc1623654d1929ef3f3f47e9a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:20:39 np0005532048 podman[441873]: 2025-11-22 10:20:39.414388008 +0000 UTC m=+0.027441296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:20:39 np0005532048 podman[441873]: 2025-11-22 10:20:39.52065447 +0000 UTC m=+0.133707748 container init 370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:20:39 np0005532048 podman[441873]: 2025-11-22 10:20:39.531380843 +0000 UTC m=+0.144434151 container start 370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:20:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1815 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:20:39 np0005532048 podman[441873]: 2025-11-22 10:20:39.537150654 +0000 UTC m=+0.150203932 container attach 370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:20:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:39.550+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:39 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1815 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:39 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]: {
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:        "osd_id": 1,
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:        "type": "bluestore"
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:    },
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:        "osd_id": 0,
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:        "type": "bluestore"
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:    },
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:        "osd_id": 2,
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:        "type": "bluestore"
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]:    }
Nov 22 05:20:40 np0005532048 stupefied_archimedes[441890]: }
Nov 22 05:20:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:40.510+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:40 np0005532048 systemd[1]: libpod-370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a.scope: Deactivated successfully.
Nov 22 05:20:40 np0005532048 systemd[1]: libpod-370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a.scope: Consumed 1.014s CPU time.
Nov 22 05:20:40 np0005532048 podman[441873]: 2025-11-22 10:20:40.539944619 +0000 UTC m=+1.152997897 container died 370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:20:40 np0005532048 systemd[1]: var-lib-containers-storage-overlay-ea25133f0ef2c4c286c7d934c8eb4fd415d16cdc1623654d1929ef3f3f47e9a1-merged.mount: Deactivated successfully.
Nov 22 05:20:40 np0005532048 podman[441873]: 2025-11-22 10:20:40.602171478 +0000 UTC m=+1.215224746 container remove 370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 22 05:20:40 np0005532048 systemd[1]: libpod-conmon-370865a59a8db061f1f553d32e154d0abb0dcc17f28d90e4ed6e8e154df29a3a.scope: Deactivated successfully.
Nov 22 05:20:40 np0005532048 nova_compute[253661]: 2025-11-22 10:20:40.642 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:20:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:20:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:20:40 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:20:40 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:20:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 85d70833-0a74-4e08-a351-20fde5b84a3c does not exist
Nov 22 05:20:40 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev c96460a7-bbde-46da-99ed-8aed1868288d does not exist
Nov 22 05:20:40 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:20:40 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:20:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3570: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:41.530+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:41 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:42.553+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:42 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3571: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:43.597+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:43 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:44 np0005532048 nova_compute[253661]: 2025-11-22 10:20:44.481 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:20:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1820 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:20:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:44.600+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:44 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1820 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:44 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3572: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:45.586+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:45 np0005532048 nova_compute[253661]: 2025-11-22 10:20:45.645 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:20:45 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:46.628+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:46 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3573: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:47.638+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:47 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:48.613+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:48 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3574: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:49 np0005532048 nova_compute[253661]: 2025-11-22 10:20:49.486 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:20:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1825 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:20:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:49.568+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:49 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1825 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:49 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:50.606+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:50 np0005532048 nova_compute[253661]: 2025-11-22 10:20:50.647 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:20:50 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3575: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:51.619+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:51 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:20:52
Nov 22 05:20:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:20:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:20:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', '.rgw.root', 'images', 'volumes', 'default.rgw.meta', 'backups', 'default.rgw.control', 'vms']
Nov 22 05:20:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:20:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:52.649+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:20:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:20:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:20:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:20:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:20:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:20:52 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3576: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:53.628+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:54 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:54 np0005532048 nova_compute[253661]: 2025-11-22 10:20:54.490 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:20:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1830 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:20:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:54.605+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:55 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1830 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:55 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3577: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:55.631+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:55 np0005532048 nova_compute[253661]: 2025-11-22 10:20:55.648 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:20:56 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:56.681+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3578: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:57 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:20:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:20:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:20:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:20:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:20:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:57.666+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:20:57 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.0 total, 600.0 interval#012Cumulative writes: 17K writes, 81K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s#012Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1840 writes, 8776 keys, 1840 commit groups, 1.0 writes per commit group, ingest: 9.55 MB, 0.02 MB/s#012Interval WAL: 1840 writes, 1840 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     45.3      1.99              0.34        59    0.034       0      0       0.0       0.0#012  L6      1/0   10.81 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.5    107.5     92.7      5.38              1.61        58    0.093    434K    30K       0.0       0.0#012 Sum      1/0   10.81 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.5     78.4     79.9      7.37              1.95       117    0.063    434K    30K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.8    125.4    126.4      0.61              0.31        14    0.044     77K   3578       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    107.5     92.7      5.38              1.61        58    0.093    434K    30K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     45.4      1.99              0.34        58    0.034       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6600.0 total, 600.0 interval#012Flush(GB): cumulative 0.088, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.58 GB write, 0.09 MB/s write, 0.56 GB read, 0.09 MB/s read, 7.4 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cbe48f11f0#2 capacity: 304.00 MB usage: 63.55 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000467 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4085,60.54 MB,19.9133%) FilterBlock(118,1.19 MB,0.390218%) IndexBlock(118,1.83 MB,0.600619%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 22 05:20:58 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:58.638+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3579: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:20:59 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:20:59 np0005532048 nova_compute[253661]: 2025-11-22 10:20:59.495 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:20:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:20:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:20:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:20:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:20:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:20:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1835 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:20:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:20:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:20:59.650+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:20:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:00 np0005532048 podman[441987]: 2025-11-22 10:21:00.378111986 +0000 UTC m=+0.059971905 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 22 05:21:00 np0005532048 podman[441988]: 2025-11-22 10:21:00.391915235 +0000 UTC m=+0.073173059 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:21:00 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1835 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:00 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:00 np0005532048 nova_compute[253661]: 2025-11-22 10:21:00.649 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:00.659+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3580: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:01 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:01.642+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:02 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:02.639+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3581: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:21:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:21:03 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:03.644+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:04 np0005532048 nova_compute[253661]: 2025-11-22 10:21:04.496 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:04 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1840 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:21:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:04.650+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3582: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:05 np0005532048 podman[442028]: 2025-11-22 10:21:05.374871601 +0000 UTC m=+0.072621585 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 05:21:05 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1840 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:05 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:05.629+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:05 np0005532048 nova_compute[253661]: 2025-11-22 10:21:05.651 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:06 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:06.677+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3583: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:07 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:07.653+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:08 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:08.617+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3584: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:09 np0005532048 nova_compute[253661]: 2025-11-22 10:21:09.500 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1845 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:21:09 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:09 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1845 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:09.625+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:10 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:10 np0005532048 nova_compute[253661]: 2025-11-22 10:21:10.653 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:10.668+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3585: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:11 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:11.637+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:21:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/94693905' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:21:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:21:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/94693905' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:21:12 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:12.658+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3586: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:13 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:13.701+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:14 np0005532048 nova_compute[253661]: 2025-11-22 10:21:14.503 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1850 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:21:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:14.663+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:14 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:14 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1850 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3587: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:15.627+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:15 np0005532048 nova_compute[253661]: 2025-11-22 10:21:15.656 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:15 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:16.671+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:16 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:16 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3588: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:17.624+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:17 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:18.597+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:18 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:19 np0005532048 nova_compute[253661]: 2025-11-22 10:21:19.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:21:19 np0005532048 nova_compute[253661]: 2025-11-22 10:21:19.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:21:19 np0005532048 nova_compute[253661]: 2025-11-22 10:21:19.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:21:19 np0005532048 nova_compute[253661]: 2025-11-22 10:21:19.245 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:21:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3589: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:19 np0005532048 nova_compute[253661]: 2025-11-22 10:21:19.507 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1855 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:21:19 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:19.608+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:19 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:19 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:19 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1855 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:19 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:20.615+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:20 np0005532048 nova_compute[253661]: 2025-11-22 10:21:20.658 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:20 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:21 np0005532048 nova_compute[253661]: 2025-11-22 10:21:21.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:21:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3590: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:21.625+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:21 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:21 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:21 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:22 np0005532048 nova_compute[253661]: 2025-11-22 10:21:22.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:21:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:22.602+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:21:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:21:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:21:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:21:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:21:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:21:22 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3591: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:23.633+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:23 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:24 np0005532048 nova_compute[253661]: 2025-11-22 10:21:24.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:21:24 np0005532048 nova_compute[253661]: 2025-11-22 10:21:24.511 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1860 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:21:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:24.610+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:24 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:24 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:24 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1860 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:24 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3592: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:25.660+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:25 np0005532048 nova_compute[253661]: 2025-11-22 10:21:25.687 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:25 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:26 np0005532048 nova_compute[253661]: 2025-11-22 10:21:26.223 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:21:26 np0005532048 nova_compute[253661]: 2025-11-22 10:21:26.227 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:21:26 np0005532048 nova_compute[253661]: 2025-11-22 10:21:26.228 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:21:26 np0005532048 nova_compute[253661]: 2025-11-22 10:21:26.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:21:26 np0005532048 nova_compute[253661]: 2025-11-22 10:21:26.261 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:21:26 np0005532048 nova_compute[253661]: 2025-11-22 10:21:26.261 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:21:26 np0005532048 nova_compute[253661]: 2025-11-22 10:21:26.261 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:21:26 np0005532048 nova_compute[253661]: 2025-11-22 10:21:26.262 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:21:26 np0005532048 nova_compute[253661]: 2025-11-22 10:21:26.262 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:21:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:26.706+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:21:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2603466631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:21:26 np0005532048 nova_compute[253661]: 2025-11-22 10:21:26.733 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:21:26 np0005532048 nova_compute[253661]: 2025-11-22 10:21:26.907 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:21:26 np0005532048 nova_compute[253661]: 2025-11-22 10:21:26.908 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3544MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:21:26 np0005532048 nova_compute[253661]: 2025-11-22 10:21:26.908 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:21:26 np0005532048 nova_compute[253661]: 2025-11-22 10:21:26.909 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:21:26 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:27 np0005532048 nova_compute[253661]: 2025-11-22 10:21:27.005 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:21:27 np0005532048 nova_compute[253661]: 2025-11-22 10:21:27.006 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:21:27 np0005532048 nova_compute[253661]: 2025-11-22 10:21:27.039 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:21:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3593: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:21:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/433023368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:21:27 np0005532048 nova_compute[253661]: 2025-11-22 10:21:27.486 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:21:27 np0005532048 nova_compute[253661]: 2025-11-22 10:21:27.494 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:21:27 np0005532048 nova_compute[253661]: 2025-11-22 10:21:27.511 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:21:27 np0005532048 nova_compute[253661]: 2025-11-22 10:21:27.514 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:21:27 np0005532048 nova_compute[253661]: 2025-11-22 10:21:27.515 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:21:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:27.692+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:27 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:21:28.032 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:21:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:21:28.032 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:21:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:21:28.032 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:21:28 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:28.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:28 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:28 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:28 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3594: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:29 np0005532048 nova_compute[253661]: 2025-11-22 10:21:29.513 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1864 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:21:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:29.658+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:29 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1864 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:29 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:30 np0005532048 nova_compute[253661]: 2025-11-22 10:21:30.516 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:21:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:30.614+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:30 np0005532048 nova_compute[253661]: 2025-11-22 10:21:30.689 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:30 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:31 np0005532048 nova_compute[253661]: 2025-11-22 10:21:31.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:21:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3595: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:31 np0005532048 podman[442099]: 2025-11-22 10:21:31.375683357 +0000 UTC m=+0.055934576 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:21:31 np0005532048 podman[442100]: 2025-11-22 10:21:31.45230719 +0000 UTC m=+0.121401905 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 05:21:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:31.631+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:31 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:32 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:32.649+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:32 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:32 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:33 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:33 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3596: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:33 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:33.667+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:33 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:33 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:34 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:34 np0005532048 nova_compute[253661]: 2025-11-22 10:21:34.517 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:34 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1870 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:34 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:21:34 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:34.629+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:34 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:34 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:35 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1870 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:35 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:35 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3597: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:35 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:35.602+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:35 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:35 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:35 np0005532048 nova_compute[253661]: 2025-11-22 10:21:35.690 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:36 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:36 np0005532048 podman[442138]: 2025-11-22 10:21:36.420517535 +0000 UTC m=+0.110576038 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 05:21:36 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:36.646+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:36 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:36 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:37 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:37 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3598: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:37 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:37.668+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:37 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:37 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:38 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:38 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:38.703+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:38 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:38 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:39 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:39 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3599: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:39 np0005532048 nova_compute[253661]: 2025-11-22 10:21:39.522 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:39 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1875 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:39 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:21:39 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:39.734+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:39 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:39 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:40 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1875 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:40 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:40 np0005532048 nova_compute[253661]: 2025-11-22 10:21:40.691 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:40 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:40.712+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:40 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:40 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:41 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:41 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3600: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:21:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:21:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:21:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:21:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:21:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:21:41 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 5da41cd7-8ed4-48b3-80aa-aa4aab63c7f8 does not exist
Nov 22 05:21:41 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 4408186e-0b23-45ee-8858-d3204aef80fe does not exist
Nov 22 05:21:41 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 41d65415-6e42-4530-9875-fe121bd0cb25 does not exist
Nov 22 05:21:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:21:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:21:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:21:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:21:41 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:21:41 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:21:41 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:41 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:41.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:41 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:42 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:21:42 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:21:42 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:21:42 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:42 np0005532048 podman[442438]: 2025-11-22 10:21:42.217925698 +0000 UTC m=+0.053429244 container create 30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:21:42 np0005532048 systemd[1]: Started libpod-conmon-30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c.scope.
Nov 22 05:21:42 np0005532048 podman[442438]: 2025-11-22 10:21:42.191088578 +0000 UTC m=+0.026592194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:21:42 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:21:42 np0005532048 podman[442438]: 2025-11-22 10:21:42.308539914 +0000 UTC m=+0.144043450 container init 30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:21:42 np0005532048 podman[442438]: 2025-11-22 10:21:42.315976997 +0000 UTC m=+0.151480563 container start 30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_greider, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:21:42 np0005532048 podman[442438]: 2025-11-22 10:21:42.319964695 +0000 UTC m=+0.155468231 container attach 30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_greider, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 05:21:42 np0005532048 reverent_greider[442455]: 167 167
Nov 22 05:21:42 np0005532048 systemd[1]: libpod-30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c.scope: Deactivated successfully.
Nov 22 05:21:42 np0005532048 podman[442438]: 2025-11-22 10:21:42.326247709 +0000 UTC m=+0.161751235 container died 30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_greider, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:21:42 np0005532048 systemd[1]: var-lib-containers-storage-overlay-4489e64e7ac81052c16533d6952790d2a675e6f4e691898e76d66efd5ceb67d2-merged.mount: Deactivated successfully.
Nov 22 05:21:42 np0005532048 podman[442438]: 2025-11-22 10:21:42.374815173 +0000 UTC m=+0.210318729 container remove 30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_greider, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:21:42 np0005532048 systemd[1]: libpod-conmon-30ce00f17a8b6c5c730b55bc98e0544d9a14f41c6264041efb76ed375678112c.scope: Deactivated successfully.
Nov 22 05:21:42 np0005532048 podman[442480]: 2025-11-22 10:21:42.577092374 +0000 UTC m=+0.038212510 container create 73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:21:42 np0005532048 systemd[1]: Started libpod-conmon-73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27.scope.
Nov 22 05:21:42 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:21:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ffcbb93dba5cca27e976a00c7a71a151525fbdeb9ac44fae8300f382a31461/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:21:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ffcbb93dba5cca27e976a00c7a71a151525fbdeb9ac44fae8300f382a31461/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:21:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ffcbb93dba5cca27e976a00c7a71a151525fbdeb9ac44fae8300f382a31461/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:21:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ffcbb93dba5cca27e976a00c7a71a151525fbdeb9ac44fae8300f382a31461/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:21:42 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ffcbb93dba5cca27e976a00c7a71a151525fbdeb9ac44fae8300f382a31461/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:21:42 np0005532048 podman[442480]: 2025-11-22 10:21:42.643777953 +0000 UTC m=+0.104898139 container init 73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 05:21:42 np0005532048 podman[442480]: 2025-11-22 10:21:42.654593539 +0000 UTC m=+0.115713675 container start 73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:21:42 np0005532048 podman[442480]: 2025-11-22 10:21:42.561015739 +0000 UTC m=+0.022135885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:21:42 np0005532048 podman[442480]: 2025-11-22 10:21:42.659101859 +0000 UTC m=+0.120222015 container attach 73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 05:21:42 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:42.729+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:42 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:42 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:43 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:43 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3601: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:43 np0005532048 bold_ganguly[442497]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:21:43 np0005532048 bold_ganguly[442497]: --> relative data size: 1.0
Nov 22 05:21:43 np0005532048 bold_ganguly[442497]: --> All data devices are unavailable
Nov 22 05:21:43 np0005532048 systemd[1]: libpod-73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27.scope: Deactivated successfully.
Nov 22 05:21:43 np0005532048 systemd[1]: libpod-73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27.scope: Consumed 1.007s CPU time.
Nov 22 05:21:43 np0005532048 podman[442480]: 2025-11-22 10:21:43.72403579 +0000 UTC m=+1.185155936 container died 73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:21:43 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:43.752+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:43 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:43 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:43 np0005532048 systemd[1]: var-lib-containers-storage-overlay-55ffcbb93dba5cca27e976a00c7a71a151525fbdeb9ac44fae8300f382a31461-merged.mount: Deactivated successfully.
Nov 22 05:21:43 np0005532048 podman[442480]: 2025-11-22 10:21:43.861521669 +0000 UTC m=+1.322641805 container remove 73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ganguly, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:21:43 np0005532048 systemd[1]: libpod-conmon-73ce3f5adacffbc262864e9b5308026aa42cad53f1791a2e4e43928907738b27.scope: Deactivated successfully.
Nov 22 05:21:44 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:44 np0005532048 nova_compute[253661]: 2025-11-22 10:21:44.524 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:44 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1880 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:44 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:21:44 np0005532048 podman[442679]: 2025-11-22 10:21:44.561035889 +0000 UTC m=+0.043406797 container create 43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_merkle, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:21:44 np0005532048 systemd[1]: Started libpod-conmon-43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32.scope.
Nov 22 05:21:44 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:21:44 np0005532048 podman[442679]: 2025-11-22 10:21:44.634185567 +0000 UTC m=+0.116556495 container init 43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:21:44 np0005532048 podman[442679]: 2025-11-22 10:21:44.538835684 +0000 UTC m=+0.021206642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:21:44 np0005532048 podman[442679]: 2025-11-22 10:21:44.641581359 +0000 UTC m=+0.123952267 container start 43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_merkle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 05:21:44 np0005532048 podman[442679]: 2025-11-22 10:21:44.645604458 +0000 UTC m=+0.127975386 container attach 43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 05:21:44 np0005532048 boring_merkle[442695]: 167 167
Nov 22 05:21:44 np0005532048 systemd[1]: libpod-43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32.scope: Deactivated successfully.
Nov 22 05:21:44 np0005532048 conmon[442695]: conmon 43a7c7b8cdbf7f8aaea1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32.scope/container/memory.events
Nov 22 05:21:44 np0005532048 podman[442679]: 2025-11-22 10:21:44.648240713 +0000 UTC m=+0.130611621 container died 43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:21:44 np0005532048 systemd[1]: var-lib-containers-storage-overlay-a4a7ef8a479cbc979bb48582dffb246e089e2ea5a4ecc77efe7270d51c42c71f-merged.mount: Deactivated successfully.
Nov 22 05:21:44 np0005532048 podman[442679]: 2025-11-22 10:21:44.68801872 +0000 UTC m=+0.170389658 container remove 43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_merkle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:21:44 np0005532048 systemd[1]: libpod-conmon-43a7c7b8cdbf7f8aaea10d735482f5648bfdbe94eb4475d26e003539fd197a32.scope: Deactivated successfully.
Nov 22 05:21:44 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:44.796+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:44 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:44 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:44 np0005532048 podman[442718]: 2025-11-22 10:21:44.921229231 +0000 UTC m=+0.062774753 container create f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:21:44 np0005532048 systemd[1]: Started libpod-conmon-f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c.scope.
Nov 22 05:21:44 np0005532048 podman[442718]: 2025-11-22 10:21:44.887698338 +0000 UTC m=+0.029243860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:21:44 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:21:44 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e6b46f0c6e481e9fffee1b5fcfdcb05d64a78e80c853c9685349df24139a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:21:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e6b46f0c6e481e9fffee1b5fcfdcb05d64a78e80c853c9685349df24139a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:21:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e6b46f0c6e481e9fffee1b5fcfdcb05d64a78e80c853c9685349df24139a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:21:45 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e6b46f0c6e481e9fffee1b5fcfdcb05d64a78e80c853c9685349df24139a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:21:45 np0005532048 podman[442718]: 2025-11-22 10:21:45.014579615 +0000 UTC m=+0.156125117 container init f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:21:45 np0005532048 podman[442718]: 2025-11-22 10:21:45.022202713 +0000 UTC m=+0.163748195 container start f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 05:21:45 np0005532048 podman[442718]: 2025-11-22 10:21:45.026513199 +0000 UTC m=+0.168058681 container attach f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 05:21:45 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1880 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:45 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3602: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:45 np0005532048 nova_compute[253661]: 2025-11-22 10:21:45.694 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:45 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:45 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:45.809+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:45 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]: {
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:    "0": [
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:        {
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "devices": [
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "/dev/loop3"
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            ],
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "lv_name": "ceph_lv0",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "lv_size": "21470642176",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4cbd1f75-f268-432e-8433-131a982bebcd,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "lv_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "name": "ceph_lv0",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "tags": {
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.block_uuid": "xm1pLm-Wdsq-pbZM-AGwP-eRsV-OGPk-tXsdzE",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.cluster_name": "ceph",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.crush_device_class": "",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.encrypted": "0",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.osd_fsid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.osd_id": "0",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.type": "block",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.vdo": "0"
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            },
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "type": "block",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "vg_name": "ceph_vg0"
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:        }
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:    ],
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:    "1": [
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:        {
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "devices": [
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "/dev/loop4"
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            ],
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "lv_name": "ceph_lv1",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "lv_size": "21470642176",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=02f6ddd3-1c9e-4a14-b90d-23afe4793555,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "lv_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "name": "ceph_lv1",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "tags": {
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.block_uuid": "EECla6-I3Nv-C9YP-y6vz-MCOd-wzdK-867Ubb",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.cluster_name": "ceph",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.crush_device_class": "",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.encrypted": "0",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.osd_fsid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.osd_id": "1",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.type": "block",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.vdo": "0"
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            },
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "type": "block",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "vg_name": "ceph_vg1"
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:        }
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:    ],
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:    "2": [
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:        {
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "devices": [
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "/dev/loop5"
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            ],
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "lv_name": "ceph_lv2",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "lv_size": "21470642176",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=34829716-a12c-57a6-8915-c1aa615c9d8a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ec419-dbeb-4688-a406-de4eff9337c5,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "lv_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "name": "ceph_lv2",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "tags": {
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.block_uuid": "VX5R35-LiWg-3WHW-KGCc-8XR6-fxd2-y9xDsp",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.cephx_lockbox_secret": "",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.cluster_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.cluster_name": "ceph",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.crush_device_class": "",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.encrypted": "0",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.osd_fsid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.osd_id": "2",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.type": "block",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:                "ceph.vdo": "0"
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            },
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "type": "block",
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:            "vg_name": "ceph_vg2"
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:        }
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]:    ]
Nov 22 05:21:45 np0005532048 gallant_blackwell[442735]: }
Nov 22 05:21:45 np0005532048 systemd[1]: libpod-f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c.scope: Deactivated successfully.
Nov 22 05:21:45 np0005532048 podman[442718]: 2025-11-22 10:21:45.870247883 +0000 UTC m=+1.011793385 container died f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:21:45 np0005532048 systemd[1]: var-lib-containers-storage-overlay-914e6b46f0c6e481e9fffee1b5fcfdcb05d64a78e80c853c9685349df24139a2-merged.mount: Deactivated successfully.
Nov 22 05:21:45 np0005532048 podman[442718]: 2025-11-22 10:21:45.939801843 +0000 UTC m=+1.081347325 container remove f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 22 05:21:45 np0005532048 systemd[1]: libpod-conmon-f3a5717d3015bcd3d13cb32addffeb2e801c56549d7194dd65b334c4e0462c5c.scope: Deactivated successfully.
Nov 22 05:21:46 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:46 np0005532048 podman[442894]: 2025-11-22 10:21:46.542185766 +0000 UTC m=+0.036986870 container create 3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 05:21:46 np0005532048 systemd[1]: Started libpod-conmon-3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91.scope.
Nov 22 05:21:46 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:21:46 np0005532048 podman[442894]: 2025-11-22 10:21:46.52480085 +0000 UTC m=+0.019601953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:21:46 np0005532048 podman[442894]: 2025-11-22 10:21:46.622057259 +0000 UTC m=+0.116858382 container init 3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:21:46 np0005532048 podman[442894]: 2025-11-22 10:21:46.631565993 +0000 UTC m=+0.126367096 container start 3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 05:21:46 np0005532048 podman[442894]: 2025-11-22 10:21:46.635303525 +0000 UTC m=+0.130104688 container attach 3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 05:21:46 np0005532048 cranky_ritchie[442910]: 167 167
Nov 22 05:21:46 np0005532048 systemd[1]: libpod-3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91.scope: Deactivated successfully.
Nov 22 05:21:46 np0005532048 conmon[442910]: conmon 3c992f8f775e8d2dc47c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91.scope/container/memory.events
Nov 22 05:21:46 np0005532048 podman[442894]: 2025-11-22 10:21:46.639724824 +0000 UTC m=+0.134525967 container died 3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:21:46 np0005532048 systemd[1]: var-lib-containers-storage-overlay-c3422d41979479a482f661f2bea7c7c53dcd51778466c6765e890217be093bb4-merged.mount: Deactivated successfully.
Nov 22 05:21:46 np0005532048 podman[442894]: 2025-11-22 10:21:46.696909399 +0000 UTC m=+0.191710502 container remove 3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:21:46 np0005532048 systemd[1]: libpod-conmon-3c992f8f775e8d2dc47c41b00b4f641e4e73cbc5048a16c8363927b2c02fda91.scope: Deactivated successfully.
Nov 22 05:21:46 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:46.772+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:46 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:46 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:46 np0005532048 podman[442933]: 2025-11-22 10:21:46.885989016 +0000 UTC m=+0.061146244 container create 5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_napier, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:21:46 np0005532048 systemd[1]: Started libpod-conmon-5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465.scope.
Nov 22 05:21:46 np0005532048 podman[442933]: 2025-11-22 10:21:46.855549068 +0000 UTC m=+0.030706356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:21:46 np0005532048 systemd[1]: Started libcrun container.
Nov 22 05:21:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331d9ad1f81b5ea3d297317bbf350d7c5133830b82ceb94c3ea5982bc36e1d71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:21:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331d9ad1f81b5ea3d297317bbf350d7c5133830b82ceb94c3ea5982bc36e1d71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:21:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331d9ad1f81b5ea3d297317bbf350d7c5133830b82ceb94c3ea5982bc36e1d71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:21:46 np0005532048 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331d9ad1f81b5ea3d297317bbf350d7c5133830b82ceb94c3ea5982bc36e1d71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:21:46 np0005532048 podman[442933]: 2025-11-22 10:21:46.98867479 +0000 UTC m=+0.163832008 container init 5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_napier, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:21:47 np0005532048 podman[442933]: 2025-11-22 10:21:47.004440397 +0000 UTC m=+0.179597635 container start 5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_napier, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:21:47 np0005532048 podman[442933]: 2025-11-22 10:21:47.008566128 +0000 UTC m=+0.183723356 container attach 5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_napier, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 05:21:47 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:47 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:47 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3603: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:47 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:47.727+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:47 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:47 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:47 np0005532048 agitated_napier[442950]: {
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:    "02f6ddd3-1c9e-4a14-b90d-23afe4793555": {
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:        "osd_id": 1,
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:        "osd_uuid": "02f6ddd3-1c9e-4a14-b90d-23afe4793555",
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:        "type": "bluestore"
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:    },
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:    "4cbd1f75-f268-432e-8433-131a982bebcd": {
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:        "osd_id": 0,
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:        "osd_uuid": "4cbd1f75-f268-432e-8433-131a982bebcd",
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:        "type": "bluestore"
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:    },
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:    "975ec419-dbeb-4688-a406-de4eff9337c5": {
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:        "ceph_fsid": "34829716-a12c-57a6-8915-c1aa615c9d8a",
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:        "osd_id": 2,
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:        "osd_uuid": "975ec419-dbeb-4688-a406-de4eff9337c5",
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:        "type": "bluestore"
Nov 22 05:21:47 np0005532048 agitated_napier[442950]:    }
Nov 22 05:21:47 np0005532048 agitated_napier[442950]: }
Nov 22 05:21:48 np0005532048 systemd[1]: libpod-5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465.scope: Deactivated successfully.
Nov 22 05:21:48 np0005532048 systemd[1]: libpod-5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465.scope: Consumed 1.009s CPU time.
Nov 22 05:21:48 np0005532048 conmon[442950]: conmon 5ce8f48e30bc59fd4b17 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465.scope/container/memory.events
Nov 22 05:21:48 np0005532048 podman[442933]: 2025-11-22 10:21:48.007325624 +0000 UTC m=+1.182482822 container died 5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_napier, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:21:48 np0005532048 systemd[1]: var-lib-containers-storage-overlay-331d9ad1f81b5ea3d297317bbf350d7c5133830b82ceb94c3ea5982bc36e1d71-merged.mount: Deactivated successfully.
Nov 22 05:21:48 np0005532048 podman[442933]: 2025-11-22 10:21:48.069229225 +0000 UTC m=+1.244386443 container remove 5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_napier, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 05:21:48 np0005532048 systemd[1]: libpod-conmon-5ce8f48e30bc59fd4b1781feb66bbcb0f14f48e5ae84cb90323c32a04eb9b465.scope: Deactivated successfully.
Nov 22 05:21:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:21:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:21:48 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:21:48 np0005532048 ceph-mon[75021]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:21:48 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 2c4a32c9-d883-450b-bc39-29b2180fdc33 does not exist
Nov 22 05:21:48 np0005532048 ceph-mgr[75315]: [progress WARNING root] complete: ev 0f09524e-428e-4749-80c2-e805a939c562 does not exist
Nov 22 05:21:48 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:21:48 np0005532048 ceph-mon[75021]: from='mgr.14128 192.168.122.100:0/3313797419' entity='mgr.compute-0.ldbkey' 
Nov 22 05:21:48 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:48 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:48 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:48.776+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:49 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:49 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3604: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:49 np0005532048 nova_compute[253661]: 2025-11-22 10:21:49.526 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:49 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1884 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:49 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:21:49 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:49 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:49 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:49.802+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:50 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1884 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:50 np0005532048 nova_compute[253661]: 2025-11-22 10:21:50.695 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:50 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:50 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:50.794+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:50 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:51 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:51 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3605: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:51 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:51 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:51.844+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:51 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:52 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Optimize plan auto_2025-11-22_10:21:52
Nov 22 05:21:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:21:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] do_upmap
Nov 22 05:21:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'backups', 'default.rgw.log', '.mgr', '.rgw.root', 'default.rgw.control', 'volumes', 'default.rgw.meta']
Nov 22 05:21:52 np0005532048 ceph-mgr[75315]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:21:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:21:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:21:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:21:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:21:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:21:52 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:21:52 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:52 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:52.812+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:52 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:53 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:53 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3606: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:53 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:53 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:53.819+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:53 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:54 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:54 np0005532048 nova_compute[253661]: 2025-11-22 10:21:54.530 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:54 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1889 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:54 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:21:54 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:54 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:54.780+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:54 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:55 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:55 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1889 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:55 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:55 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3607: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:55 np0005532048 nova_compute[253661]: 2025-11-22 10:21:55.696 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:55 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:55 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:55 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:55.810+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:56 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:56 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:56 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:56.830+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:57 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:57 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3608: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:21:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:21:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:21:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:21:57 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:21:57 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:57 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:57 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:57.809+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:58 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:58 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:58 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:58 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:58.826+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:59 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:21:59 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3609: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:21:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:21:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:21:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:21:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:21:59 np0005532048 ceph-mgr[75315]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:21:59 np0005532048 nova_compute[253661]: 2025-11-22 10:21:59.533 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:21:59 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1894 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:21:59 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:21:59 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:59 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:21:59.834+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:21:59 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:22:00 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:22:00 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1894 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:22:00 np0005532048 nova_compute[253661]: 2025-11-22 10:22:00.699 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:22:00 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:00 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:22:00 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:00.838+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:01 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:22:01 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3610: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:22:01 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:01.863+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:01 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:01 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:22:02 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:22:02 np0005532048 podman[443044]: 2025-11-22 10:22:02.393424856 +0000 UTC m=+0.083243787 container health_status b2d641870d1b893a8fb618bd14b406c93c10e30e5ffe3e9ada76911dad68e31c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:22:02 np0005532048 podman[443045]: 2025-11-22 10:22:02.408436845 +0000 UTC m=+0.089294426 container health_status fc6e526ada0e45bb72c721d20598a0e153d6b427b6c098b612826a5d7e1626c8 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:22:02 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:02.878+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:02 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:02 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:22:03 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3611: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007582556000520935 of space, bias 1.0, pg target 0.22747668001562804 quantized to 32 (current 32)
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013885666959692648 of space, bias 1.0, pg target 0.41657000879077943 quantized to 32 (current 32)
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:22:03 np0005532048 ceph-mgr[75315]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:22:03 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:03.917+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:03 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:03 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:22:04 np0005532048 nova_compute[253661]: 2025-11-22 10:22:04.538 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1899 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.559308) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806924559386, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1424, "num_deletes": 250, "total_data_size": 1700418, "memory_usage": 1735272, "flush_reason": "Manual Compaction"}
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806924571700, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 1076226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80972, "largest_seqno": 82395, "table_properties": {"data_size": 1071240, "index_size": 2061, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15535, "raw_average_key_size": 21, "raw_value_size": 1059315, "raw_average_value_size": 1475, "num_data_blocks": 91, "num_entries": 718, "num_filter_entries": 718, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763806826, "oldest_key_time": 1763806826, "file_creation_time": 1763806924, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 12486 microseconds, and 6451 cpu microseconds.
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.571818) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 1076226 bytes OK
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.571847) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.574358) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.574395) EVENT_LOG_v1 {"time_micros": 1763806924574387, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.574427) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 1693896, prev total WAL file size 1693896, number of live WAL files 2.
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.575660) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303038' seq:72057594037927935, type:22 .. '6D6772737461740033323539' seq:0, type:0; will stop at (end)
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(1051KB)], [191(10MB)]
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806924575738, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 12416241, "oldest_snapshot_seqno": -1}
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 11273 keys, 9641834 bytes, temperature: kUnknown
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806924655555, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 9641834, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9577750, "index_size": 34787, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28229, "raw_key_size": 301453, "raw_average_key_size": 26, "raw_value_size": 9386396, "raw_average_value_size": 832, "num_data_blocks": 1296, "num_entries": 11273, "num_filter_entries": 11273, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763800254, "oldest_key_time": 0, "file_creation_time": 1763806924, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b634893-b995-4ffc-939b-a63b09dd2eb8", "db_session_id": "P2L1E0L4EXLHZ7SVTM6H", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.656128) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 9641834 bytes
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.658076) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.2 rd, 120.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.8 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(20.5) write-amplify(9.0) OK, records in: 11742, records dropped: 469 output_compression: NoCompression
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.658112) EVENT_LOG_v1 {"time_micros": 1763806924658095, "job": 120, "event": "compaction_finished", "compaction_time_micros": 79979, "compaction_time_cpu_micros": 39672, "output_level": 6, "num_output_files": 1, "total_output_size": 9641834, "num_input_records": 11742, "num_output_records": 11273, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806924659042, "job": 120, "event": "table_file_deletion", "file_number": 193}
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763806924663720, "job": 120, "event": "table_file_deletion", "file_number": 191}
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.575534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.663791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.663800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.663803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.663806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:22:04 np0005532048 ceph-mon[75021]: rocksdb: (Original Log Time 2025/11/22-10:22:04.663809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:22:04 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:04.906+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:04 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:04 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:22:05 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:22:05 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1899 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:22:05 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3612: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:22:05 np0005532048 nova_compute[253661]: 2025-11-22 10:22:05.702 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:22:05 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:05.929+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:05 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 15 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:05 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:22:06 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:22:06 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:06.943+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:06 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:06 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:07 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3613: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:22:07 np0005532048 ceph-mon[75021]: 15 slow requests (by type [ 'delayed' : 15 ] most affected pool [ 'images' : 15 ])
Nov 22 05:22:07 np0005532048 podman[443082]: 2025-11-22 10:22:07.429611751 +0000 UTC m=+0.117525989 container health_status e277511fcbe556b986fb6e2e2f3e6348452cec0121d2c7731f0a5a78c2e3e478 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:22:07 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:07.921+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:07 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:07 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:08 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:08 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:08.952+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:08 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:08 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:09 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3614: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:22:09 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:09 np0005532048 nova_compute[253661]: 2025-11-22 10:22:09.544 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:22:09 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 15 slow ops, oldest one blocked for 1904 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:22:09 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:22:09 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:09.946+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:09 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:09 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:10 np0005532048 systemd-logind[822]: New session 55 of user zuul.
Nov 22 05:22:10 np0005532048 systemd[1]: Started Session 55 of User zuul.
Nov 22 05:22:10 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:10 np0005532048 ceph-mon[75021]: Health check update: 15 slow ops, oldest one blocked for 1904 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:22:10 np0005532048 nova_compute[253661]: 2025-11-22 10:22:10.704 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:22:10 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:10.946+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:10 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:10 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:11 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3615: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:22:11 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:11 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:11.972+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:11 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:11 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:12 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:22:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2303457913' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:22:12 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:22:12 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2303457913' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:22:12 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:12 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:12.976+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:12 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:13 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3616: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:22:13 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:13 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23001 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:22:13 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:13.943+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:13 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:13 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:14 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23003 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:22:14 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:14 np0005532048 nova_compute[253661]: 2025-11-22 10:22:14.546 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:22:14 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1909 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:22:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:22:14 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 22 05:22:14 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/364009612' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 05:22:14 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:14 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:14 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:14.962+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:15 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3617: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:22:15 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:15 np0005532048 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1909 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:22:15 np0005532048 nova_compute[253661]: 2025-11-22 10:22:15.707 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:22:15 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:15.937+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:15 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:15 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:16 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:16 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:16.961+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:16 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:16 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:17 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3618: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:22:17 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:17 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:17.963+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:17 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:17 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:18 np0005532048 ovs-vsctl[443398]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 22 05:22:18 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:18 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:18 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:18.982+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:18 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:19 np0005532048 virtqemud[254229]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 22 05:22:19 np0005532048 virtqemud[254229]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 22 05:22:19 np0005532048 virtqemud[254229]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 22 05:22:19 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3619: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:22:19 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:19 np0005532048 nova_compute[253661]: 2025-11-22 10:22:19.550 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:22:19 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1914 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:22:19 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:22:19 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: cache status {prefix=cache status} (starting...)
Nov 22 05:22:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:20.015+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:20 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: client ls {prefix=client ls} (starting...)
Nov 22 05:22:20 np0005532048 lvm[443730]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 05:22:20 np0005532048 lvm[443730]: VG ceph_vg1 finished
Nov 22 05:22:20 np0005532048 nova_compute[253661]: 2025-11-22 10:22:20.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:22:20 np0005532048 nova_compute[253661]: 2025-11-22 10:22:20.230 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 22 05:22:20 np0005532048 nova_compute[253661]: 2025-11-22 10:22:20.231 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 22 05:22:20 np0005532048 nova_compute[253661]: 2025-11-22 10:22:20.245 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 22 05:22:20 np0005532048 lvm[443755]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 05:22:20 np0005532048 lvm[443755]: VG ceph_vg2 finished
Nov 22 05:22:20 np0005532048 lvm[443764]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 05:22:20 np0005532048 lvm[443764]: VG ceph_vg0 finished
Nov 22 05:22:20 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23007 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:22:20 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:20 np0005532048 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1914 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:22:20 np0005532048 nova_compute[253661]: 2025-11-22 10:22:20.709 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:22:20 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: damage ls {prefix=damage ls} (starting...)
Nov 22 05:22:20 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23009 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:22:20 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: dump loads {prefix=dump loads} (starting...)
Nov 22 05:22:20 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:20 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:20 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:20.994+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:21 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 22 05:22:21 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 22 05:22:21 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3620: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:22:21 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 22 05:22:21 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 22 05:22:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 22 05:22:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3498881504' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 22 05:22:21 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:21 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23015 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:22:21 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T10:22:21.641+0000 7f9e5f8d3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 05:22:21 np0005532048 ceph-mgr[75315]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 05:22:21 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 22 05:22:21 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:22:21 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890965168' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:22:21 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 22 05:22:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:22.023+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 22 05:22:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2888067176' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 22 05:22:22 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: ops {prefix=ops} (starting...)
Nov 22 05:22:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 22 05:22:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1295135441' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 22 05:22:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 22 05:22:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4195121845' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 05:22:22 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:22:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:22:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:22:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:22:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:22:22 np0005532048 ceph-mgr[75315]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:22:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 22 05:22:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3577998894' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 22 05:22:22 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: session ls {prefix=session ls} (starting...)
Nov 22 05:22:22 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 22 05:22:22 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/316279566' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 05:22:22 np0005532048 ceph-mds[101348]: mds.cephfs.compute-0.myffln asok_command: status {prefix=status} (starting...)
Nov 22 05:22:22 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:22.979+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:22 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:22 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:23 np0005532048 nova_compute[253661]: 2025-11-22 10:22:23.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:22:23 np0005532048 nova_compute[253661]: 2025-11-22 10:22:23.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:22:23 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23029 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:22:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 22 05:22:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4138727374' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 05:22:23 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3621: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:22:23 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:23 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23033 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:22:23 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 05:22:23 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2490473122' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 05:22:23 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:23.965+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:23 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:23 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 22 05:22:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/959812179' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 22 05:22:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 22 05:22:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1228817719' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 22 05:22:24 np0005532048 nova_compute[253661]: 2025-11-22 10:22:24.552 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:22:24 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1919 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:22:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:22:24 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:24 np0005532048 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1919 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:22:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 22 05:22:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1584366632' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 22 05:22:24 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 22 05:22:24 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695941985' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 05:22:24 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23045 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:22:24 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T10:22:24.992+0000 7f9e5f8d3640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 22 05:22:24 np0005532048 ceph-mgr[75315]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 22 05:22:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:25.001+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:25 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23047 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:22:25 np0005532048 nova_compute[253661]: 2025-11-22 10:22:25.229 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:22:25 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3622: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:22:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 22 05:22:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/133801750' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 22 05:22:25 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23051 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:22:25 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:25 np0005532048 nova_compute[253661]: 2025-11-22 10:22:25.710 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:22:25 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23053 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:22:25 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:25.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:25 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:25 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:25 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 22 05:22:25 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1471810433' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 22 05:22:26 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23057 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:22:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 22 05:22:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/507056786' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.532464981s of 30.654958725s, submitted: 23
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cca625a0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ed69f000/0x0/0x4ffc00000, data 0x6697c2/0x81f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ed69f000/0x0/0x4ffc00000, data 0x6697c2/0x81f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902456 data_alloc: 218103808 data_used: 2912256
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ed69f000/0x0/0x4ffc00000, data 0x6697c2/0x81f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ed69f000/0x0/0x4ffc00000, data 0x6697c2/0x81f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902456 data_alloc: 218103808 data_used: 2912256
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 54247424 heap: 334110720 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ed69f000/0x0/0x4ffc00000, data 0x6697c2/0x81f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.075837135s of 11.098571777s, submitted: 6
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbd20800 session 0x55a6cddde000
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cc587000 session 0x55a6ce34ad20
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cca4fe00
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbd20800 session 0x55a6ce39c5a0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cc9eac00 session 0x55a6cdd994a0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 58449920 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 58449920 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 58449920 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2971718 data_alloc: 218103808 data_used: 2912256
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279871488 unmapped: 58441728 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279871488 unmapped: 58441728 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecd47000/0x0/0x4ffc00000, data 0xfc17c2/0x1177000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279871488 unmapped: 58441728 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cd8e3800 session 0x55a6cda81e00
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 279871488 unmapped: 58441728 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281100288 unmapped: 57212928 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cc586800 session 0x55a6cdb83c20
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce540f00
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbd20800 session 0x55a6cca63e00
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cc9eac00 session 0x55a6cdddf2c0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3040700 data_alloc: 234881024 data_used: 12414976
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecd47000/0x0/0x4ffc00000, data 0xfc17c2/0x1177000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 282312704 unmapped: 56000512 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecd46000/0x0/0x4ffc00000, data 0xfc17eb/0x1178000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cd8e3800 session 0x55a6cb88e960
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cdadb000 session 0x55a6ce0f92c0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cdadb000 session 0x55a6cb8b10e0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce0f9e00
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbd20800 session 0x55a6cda80d20
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281272320 unmapped: 57040896 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281272320 unmapped: 57040896 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281272320 unmapped: 57040896 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecaa8000/0x0/0x4ffc00000, data 0x125f824/0x1416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281272320 unmapped: 57040896 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062351 data_alloc: 234881024 data_used: 12414976
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecaa8000/0x0/0x4ffc00000, data 0x125f824/0x1416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281272320 unmapped: 57040896 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecaa8000/0x0/0x4ffc00000, data 0x125f824/0x1416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281272320 unmapped: 57040896 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecaa8000/0x0/0x4ffc00000, data 0x125f824/0x1416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281272320 unmapped: 57040896 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecaa8000/0x0/0x4ffc00000, data 0x125f824/0x1416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecaa8000/0x0/0x4ffc00000, data 0x125f824/0x1416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281272320 unmapped: 57040896 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.506616592s of 17.300605774s, submitted: 33
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cc9eac00 session 0x55a6cca4e960
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 281280512 unmapped: 57032704 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ecaa7000/0x0/0x4ffc00000, data 0x125f847/0x1417000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3147521 data_alloc: 234881024 data_used: 12566528
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 53805056 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 53338112 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 53108736 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ebf21000/0x0/0x4ffc00000, data 0x1de5847/0x1f9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 53108736 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ebf21000/0x0/0x4ffc00000, data 0x1de5847/0x1f9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 53108736 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3170863 data_alloc: 234881024 data_used: 14196736
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 53108736 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 53108736 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285327360 unmapped: 52985856 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ebf00000/0x0/0x4ffc00000, data 0x1e06847/0x1fbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285327360 unmapped: 52985856 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285327360 unmapped: 52985856 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3167159 data_alloc: 234881024 data_used: 14200832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285327360 unmapped: 52985856 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285327360 unmapped: 52985856 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 285327360 unmapped: 52985856 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.227722168s of 13.740644455s, submitted: 87
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6d365ac00 session 0x55a6d1939860
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce06e3c0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbd20800 session 0x55a6ce5412c0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ebf00000/0x0/0x4ffc00000, data 0x1e06847/0x1fbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cc9eac00 session 0x55a6cb89d4a0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cdadb000 session 0x55a6cddbb2c0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 49561600 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 287719424 unmapped: 50593792 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3283031 data_alloc: 234881024 data_used: 14934016
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 287719424 unmapped: 50593792 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 287719424 unmapped: 50593792 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbe2f000 session 0x55a6cddbb0e0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 50585600 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cddbbc20
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4eb221000/0x0/0x4ffc00000, data 0x2ae4847/0x2c9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 50585600 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 50585600 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cbd20800 session 0x55a6cddba780
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cc9eac00 session 0x55a6cddba3c0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4eb221000/0x0/0x4ffc00000, data 0x2ae4847/0x2c9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284443 data_alloc: 234881024 data_used: 14938112
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 50438144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 51093504 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 290750464 unmapped: 47562752 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4eb1fe000/0x0/0x4ffc00000, data 0x2b08847/0x2cc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 heartbeat osd_stat(store_statfs(0x4eb1fe000/0x0/0x4ffc00000, data 0x2b08847/0x2cc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 290750464 unmapped: 47562752 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 ms_handle_reset con 0x55a6cca06c00 session 0x55a6ce34a5a0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 290750464 unmapped: 47562752 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.964159012s of 12.413674355s, submitted: 64
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdaed000 session 0x55a6cbbd45a0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344633 data_alloc: 234881024 data_used: 22663168
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbd20800 session 0x55a6ce1df680
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cca5c960
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 290766848 unmapped: 47546368 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb1fa000/0x0/0x4ffc00000, data 0x2b0a3c4/0x2cc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,3])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298745856 unmapped: 39567360 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 37126144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 37126144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea929000/0x0/0x4ffc00000, data 0x33dc3c4/0x3595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 37126144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436237 data_alloc: 234881024 data_used: 23756800
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 37126144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 37126144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 37126144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea929000/0x0/0x4ffc00000, data 0x33dc3c4/0x3595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,3])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 37109760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 37109760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436765 data_alloc: 234881024 data_used: 23756800
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 37109760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea929000/0x0/0x4ffc00000, data 0x33dc3c4/0x3595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,3])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 37109760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 37109760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301203456 unmapped: 37109760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 6.877428532s
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 6.877429485s
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.751138687s of 13.710234642s, submitted: 8
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.877703667s, txc = 0x55a6cbc8a900
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.878297806s, txc = 0x55a6cc8b7500
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301219840 unmapped: 37093376 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.962179661s, txc = 0x55a6cbd4db00
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421433 data_alloc: 234881024 data_used: 23797760
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 44507136 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea927000/0x0/0x4ffc00000, data 0x33de3c4/0x3597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 44498944 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 44498944 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 44498944 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 42844160 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451777 data_alloc: 234881024 data_used: 23818240
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 41803776 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0x36fc3c4/0x38b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [0,0,0,0,1])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 41730048 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296583168 unmapped: 41730048 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296648704 unmapped: 41664512 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296656896 unmapped: 41656320 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea563000/0x0/0x4ffc00000, data 0x37a23c4/0x395b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.929909229s of 11.050208092s, submitted: 33
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3453977 data_alloc: 234881024 data_used: 23797760
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297820160 unmapped: 40493056 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297828352 unmapped: 40484864 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea4b1000/0x0/0x4ffc00000, data 0x384e3c4/0x3a07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297828352 unmapped: 40484864 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297893888 unmapped: 40419328 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298008576 unmapped: 40304640 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460069 data_alloc: 234881024 data_used: 23945216
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 40239104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 40239104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 40239104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea495000/0x0/0x4ffc00000, data 0x38683c4/0x3a21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 40239104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea495000/0x0/0x4ffc00000, data 0x38683c4/0x3a21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298082304 unmapped: 40230912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467881 data_alloc: 234881024 data_used: 23949312
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298082304 unmapped: 40230912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea495000/0x0/0x4ffc00000, data 0x38683c4/0x3a21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6ce34b860
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cddbbc20
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298082304 unmapped: 40230912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea495000/0x0/0x4ffc00000, data 0x38683c4/0x3a21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.882210732s of 12.113567352s, submitted: 43
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 44367872 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cca06c00 session 0x55a6ce06f2c0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 44367872 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 44367872 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3309917 data_alloc: 234881024 data_used: 16707584
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 44367872 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb0b1000/0x0/0x4ffc00000, data 0x2c543c4/0x2e0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 44367872 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6d1b11c00 session 0x55a6ce22f4a0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbe2f400 session 0x55a6d1938b40
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 44367872 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce34a1e0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb0b1000/0x0/0x4ffc00000, data 0x2c543c4/0x2e0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 44351488 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 44351488 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 44351488 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 44351488 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 44351488 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 44351488 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 44343296 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 44343296 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 44343296 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 44343296 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 44343296 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 44343296 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 44343296 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 44343296 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 44335104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 44335104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 44335104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 44335104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 44335104 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 44326912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 44326912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 44326912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 44326912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 44326912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 44326912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 44326912 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 44318720 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 44318720 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 44318720 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 44318720 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 44318720 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 44318720 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 44318720 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 44318720 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 44310528 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 44310528 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 44310528 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 44310528 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 44302336 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 44302336 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 44302336 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 44302336 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 44302336 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 44302336 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 44302336 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 44294144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 44294144 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 44285952 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 44285952 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 44285952 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 44277760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 44277760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3100872 data_alloc: 218103808 data_used: 11350016
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 44277760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 44277760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec5be000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 44277760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 44277760 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 44269568 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 63.296806335s of 63.591075897s, submitted: 21
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3189342 data_alloc: 218103808 data_used: 11350016
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302628864 unmapped: 35684352 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6d1938d20
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbd20800 session 0x55a6ce34ba40
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce5403c0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 44105728 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cca2fc20
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbe2f400 session 0x55a6cdd99860
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ebd49000/0x0/0x4ffc00000, data 0x1fbc3c4/0x2175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 44105728 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 44040192 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 44040192 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3167422 data_alloc: 218103808 data_used: 11350016
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 44023808 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6d1b11c00 session 0x55a6ce06eb40
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 44023808 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 44023808 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ebd24000/0x0/0x4ffc00000, data 0x1fe03e7/0x219a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236484 data_alloc: 234881024 data_used: 20135936
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ebd24000/0x0/0x4ffc00000, data 0x1fe03e7/0x219a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236484 data_alloc: 234881024 data_used: 20135936
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ebd24000/0x0/0x4ffc00000, data 0x1fe03e7/0x219a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 43638784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.954549789s of 17.778802872s, submitted: 21
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301252608 unmapped: 37060608 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301285376 unmapped: 37027840 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf60000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339572 data_alloc: 234881024 data_used: 22122496
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf60000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302260224 unmapped: 36052992 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302260224 unmapped: 36052992 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302260224 unmapped: 36052992 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf60000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339572 data_alloc: 234881024 data_used: 22122496
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf60000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf60000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339892 data_alloc: 234881024 data_used: 22130688
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf60000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf60000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf60000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339892 data_alloc: 234881024 data_used: 22130688
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6d1b11800 session 0x55a6ce39c5a0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6d1b11800 session 0x55a6cca4fe00
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce34ad20
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cddde000
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.562274933s of 19.042518616s, submitted: 92
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbe2f400 session 0x55a6cca625a0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6d1b11c00 session 0x55a6cb890d20
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce06fe00
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6ce06fa40
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbe2f400 session 0x55a6cb89c000
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eac1e000/0x0/0x4ffc00000, data 0x2cd4459/0x2e90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360913 data_alloc: 234881024 data_used: 22130688
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6d1b11800 session 0x55a6cb8c3e00
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 35954688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdacf400 session 0x55a6cb8b1a40
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eac1e000/0x0/0x4ffc00000, data 0x2cd4459/0x2e90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdacf400 session 0x55a6d1938f00
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 35938304 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cbbd4d20
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 301350912 unmapped: 36962304 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eabf9000/0x0/0x4ffc00000, data 0x2cf847c/0x2eb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eabf9000/0x0/0x4ffc00000, data 0x2cf847c/0x2eb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387642 data_alloc: 234881024 data_used: 23896064
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eabf9000/0x0/0x4ffc00000, data 0x2cf847c/0x2eb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.984582901s of 13.379371643s, submitted: 33
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387946 data_alloc: 234881024 data_used: 23900160
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eabf9000/0x0/0x4ffc00000, data 0x2cf847c/0x2eb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 38002688 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397636 data_alloc: 234881024 data_used: 23916544
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 37519360 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 37511168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 35995648 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea9b0000/0x0/0x4ffc00000, data 0x2f3947c/0x30f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 35995648 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 35995648 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420022 data_alloc: 234881024 data_used: 24059904
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea9b0000/0x0/0x4ffc00000, data 0x2f3947c/0x30f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 35995648 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 35995648 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 35995648 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea9b0000/0x0/0x4ffc00000, data 0x2f3947c/0x30f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.431725502s of 13.091328621s, submitted: 67
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 35864576 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 35864576 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414874 data_alloc: 234881024 data_used: 24064000
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 35864576 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ea997000/0x0/0x4ffc00000, data 0x2f5a47c/0x3117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 35856384 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 35856384 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cca5cd20
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbe2f400 session 0x55a6ce34a780
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbe2e000 session 0x55a6cdb830e0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 38281216 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 38281216 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3341272 data_alloc: 234881024 data_used: 20606976
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 38281216 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 38281216 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eaf48000/0x0/0x4ffc00000, data 0x29863e7/0x2b40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 38281216 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6cbc37680
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.537497520s of 10.164901733s, submitted: 43
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdb5d400 session 0x55a6cbc372c0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 300040192 unmapped: 38273024 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cbc365a0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297721856 unmapped: 40591360 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122242 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297721856 unmapped: 40591360 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297721856 unmapped: 40591360 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297721856 unmapped: 40591360 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297721856 unmapped: 40591360 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 40583168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122242 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 40583168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 40583168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 40583168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 40583168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 40583168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122242 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 40583168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 40583168 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297738240 unmapped: 40574976 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297738240 unmapped: 40574976 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297738240 unmapped: 40574976 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122242 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297738240 unmapped: 40574976 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297746432 unmapped: 40566784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297746432 unmapped: 40566784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297746432 unmapped: 40566784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297746432 unmapped: 40566784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122242 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297746432 unmapped: 40566784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297746432 unmapped: 40566784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297746432 unmapped: 40566784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297746432 unmapped: 40566784 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec1ad000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297754624 unmapped: 40558592 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3122242 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cca5d4a0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbe2f400 session 0x55a6cb89de00
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce0f8780
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6ce06f680
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 297771008 unmapped: 40542208 heap: 338313216 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.186452866s of 27.352705002s, submitted: 34
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb7b8000/0x0/0x4ffc00000, data 0x213c3d4/0x22f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [0,0,0,0,3,0,8])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6ce22e780
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdb5d400 session 0x55a6cb8c3e00
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdacf400 session 0x55a6ce06fe00
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cddde000
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cdd99860
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 43917312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f4000/0x0/0x4ffc00000, data 0x25003d4/0x26ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 43917312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 43917312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 43917312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f4000/0x0/0x4ffc00000, data 0x25003d4/0x26ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3232119 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 43917312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 43917312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298074112 unmapped: 43917312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6ce34ba40
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298008576 unmapped: 43982848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298008576 unmapped: 43982848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324944 data_alloc: 234881024 data_used: 22831104
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f3000/0x0/0x4ffc00000, data 0x25003f7/0x26bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f3000/0x0/0x4ffc00000, data 0x25003f7/0x26bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.318983078s of 12.736421585s, submitted: 36
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdb5d400 session 0x55a6d1938b40
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6ce2ea800 session 0x55a6cddbbc20
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f3000/0x0/0x4ffc00000, data 0x25003f7/0x26bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324812 data_alloc: 234881024 data_used: 22831104
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f3000/0x0/0x4ffc00000, data 0x25003f7/0x26bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324912 data_alloc: 234881024 data_used: 22835200
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.1 total, 600.0 interval
Cumulative writes: 35K writes, 144K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
Cumulative WAL: 35K writes, 12K syncs, 2.92 writes per sync, written: 0.14 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 3530 writes, 14K keys, 3530 commit groups, 1.0 writes per commit group, ingest: 17.14 MB, 0.03 MB/s
Interval WAL: 3530 writes, 1327 syncs, 2.66 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cb8912c0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cda80d20
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f3000/0x0/0x4ffc00000, data 0x25003f7/0x26bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324780 data_alloc: 234881024 data_used: 22835200
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298188800 unmapped: 43802624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.976135254s of 15.380604744s, submitted: 3
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298196992 unmapped: 43794432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324912 data_alloc: 234881024 data_used: 22835200
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f3000/0x0/0x4ffc00000, data 0x25003f7/0x26bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298196992 unmapped: 43794432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4eb3f3000/0x0/0x4ffc00000, data 0x25003f7/0x26bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1214f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298196992 unmapped: 43794432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6ce06f2c0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298196992 unmapped: 43794432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdb5d400 session 0x55a6ce1dfe00
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 298196992 unmapped: 43794432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293765120 unmapped: 48226304 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6ce56f000 session 0x55a6cbbd5680
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293781504 unmapped: 48209920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293797888 unmapped: 48193536 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 48185344 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293806080 unmapped: 48185344 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293814272 unmapped: 48177152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293822464 unmapped: 48168960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293830656 unmapped: 48160768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293830656 unmapped: 48160768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293830656 unmapped: 48160768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293830656 unmapped: 48160768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293830656 unmapped: 48160768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293838848 unmapped: 48152576 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293847040 unmapped: 48144384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293855232 unmapped: 48136192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293855232 unmapped: 48136192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293855232 unmapped: 48136192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293855232 unmapped: 48136192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293855232 unmapped: 48136192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293855232 unmapped: 48136192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293855232 unmapped: 48136192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293855232 unmapped: 48136192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293863424 unmapped: 48128000 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293863424 unmapped: 48128000 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293863424 unmapped: 48128000 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ed000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293871616 unmapped: 48119808 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131954 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 70.865798950s of 72.201339722s, submitted: 33
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293871616 unmapped: 48119808 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cd8e3800 session 0x55a6d19392c0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6ce17ac00 session 0x55a6cca4c1e0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cb88ed20
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293871616 unmapped: 48119808 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293871616 unmapped: 48119808 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,1])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293896192 unmapped: 48095232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 48087040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 48087040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6cddba3c0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 48087040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 48087040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 48087040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293904384 unmapped: 48087040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293912576 unmapped: 48078848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.929806709s of 10.555549622s, submitted: 96
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdb5d400 session 0x55a6cda80000
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293912576 unmapped: 48078848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293912576 unmapped: 48078848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293920768 unmapped: 48070656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293920768 unmapped: 48070656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293920768 unmapped: 48070656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293920768 unmapped: 48070656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdb5d400 session 0x55a6ce34a1e0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293920768 unmapped: 48070656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293920768 unmapped: 48070656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293928960 unmapped: 48062464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293928960 unmapped: 48062464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.821827888s of 10.002042770s, submitted: 4
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cb88e780
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293928960 unmapped: 48062464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293928960 unmapped: 48062464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293928960 unmapped: 48062464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293928960 unmapped: 48062464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293937152 unmapped: 48054272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cd8e3800 session 0x55a6cb88e1e0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293937152 unmapped: 48054272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293937152 unmapped: 48054272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293937152 unmapped: 48054272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 48046080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293945344 unmapped: 48046080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.985437393s of 10.002745628s, submitted: 5
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6cca4f0e0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293953536 unmapped: 48037888 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293961728 unmapped: 48029696 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 48021504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 48021504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 48021504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6ce17ac00 session 0x55a6cca11e00
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 48021504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 48021504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293969920 unmapped: 48021504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 48013312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 48013312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.981091499s of 10.003016472s, submitted: 4
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6ce17ac00 session 0x55a6cdb832c0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 48013312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 48013312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293978112 unmapped: 48013312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 48005120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 48005120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cc6350e0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 48005120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 48005120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293986304 unmapped: 48005120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 47996928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293994496 unmapped: 47996928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.981596947s of 10.001847267s, submitted: 3
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cd8e3800 session 0x55a6ce39c3c0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 47988736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 47988736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 47988736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 47988736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473c4/0x1900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131574 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294002688 unmapped: 47988736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294010880 unmapped: 47980544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6cb8c2d20
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 47972352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdb5d400 session 0x55a6ce1521e0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cbbd5860
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130290 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130290 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130290 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 47947776 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130290 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 47947776 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 47947776 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 47947776 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 47947776 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294043648 unmapped: 47947776 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130290 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294060032 unmapped: 47931392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294060032 unmapped: 47931392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294060032 unmapped: 47931392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130290 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130290 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294084608 unmapped: 47906816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130290 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294084608 unmapped: 47906816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cddba3c0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cd8e3800 session 0x55a6cca4c1e0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6d19392c0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6ce17ac00 session 0x55a6cbbd5680
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 46.754524231s of 47.820213318s, submitted: 29
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ee000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,5])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6ce1dfe00
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cddbbc20
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cd8e3800 session 0x55a6d1938b40
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6ce34ba40
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6ce176c00 session 0x55a6cdd99860
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194964 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec9bf000/0x0/0x4ffc00000, data 0x1f773a1/0x212f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294199296 unmapped: 47792128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294199296 unmapped: 47792128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbba2c00 session 0x55a6cddde000
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec99b000/0x0/0x4ffc00000, data 0x1f9b3a1/0x2153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3252245 data_alloc: 234881024 data_used: 17235968
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cbccfc00 session 0x55a6cb89de00
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cd8e3800 session 0x55a6cdb87680
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ec99b000/0x0/0x4ffc00000, data 0x1f9b3a1/0x2153000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb000 session 0x55a6cc634000
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 47472640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 47390720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 47390720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 47390720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 47611904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 47570944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 47570944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 47570944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 47570944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294486016 unmapped: 47505408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294494208 unmapped: 47497216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 47489024 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294502400 unmapped: 47489024 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 47472640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 47472640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 47390720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 47390720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 47390720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294608896 unmapped: 47382528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294608896 unmapped: 47382528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3136875 data_alloc: 218103808 data_used: 9773056
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294608896 unmapped: 47382528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294633472 unmapped: 47357952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294633472 unmapped: 47357952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294633472 unmapped: 47357952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294633472 unmapped: 47357952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294633472 unmapped: 47357952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294633472 unmapped: 47357952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294682624 unmapped: 47308800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294682624 unmapped: 47308800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294682624 unmapped: 47308800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294682624 unmapped: 47308800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294682624 unmapped: 47308800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294682624 unmapped: 47308800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294682624 unmapped: 47308800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294699008 unmapped: 47292416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294699008 unmapped: 47292416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [3])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294723584 unmapped: 47267840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294723584 unmapped: 47267840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 47972352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294019072 unmapped: 47972352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294027264 unmapped: 47964160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294035456 unmapped: 47955968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294051840 unmapped: 47939584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294068224 unmapped: 47923200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294076416 unmapped: 47915008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294084608 unmapped: 47906816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294092800 unmapped: 47898624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294109184 unmapped: 47882240 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294109184 unmapped: 47882240 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 47874048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 47874048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 47874048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294117376 unmapped: 47874048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 47865856 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 47865856 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 47865856 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 47865856 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294125568 unmapped: 47865856 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 47857664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 47857664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 47857664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 47857664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 47857664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294133760 unmapped: 47857664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294141952 unmapped: 47849472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294141952 unmapped: 47849472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294141952 unmapped: 47849472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294141952 unmapped: 47849472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294141952 unmapped: 47849472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 47841280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294150144 unmapped: 47841280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 47833088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 47833088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 47833088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 47833088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 47833088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 47833088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 47833088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294158336 unmapped: 47833088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 47824896 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 47824896 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 47824896 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 47824896 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 47824896 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 47824896 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294199296 unmapped: 47792128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294199296 unmapped: 47792128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294215680 unmapped: 47775744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294215680 unmapped: 47775744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294223872 unmapped: 47767552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294223872 unmapped: 47767552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294240256 unmapped: 47751168 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294240256 unmapped: 47751168 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294240256 unmapped: 47751168 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47734784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47734784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47734784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47734784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47734784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294256640 unmapped: 47734784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294264832 unmapped: 47726592 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294264832 unmapped: 47726592 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 47710208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 47702016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 47702016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 35K writes, 145K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.90 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 417 writes, 910 keys, 417 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s#012Interval WAL: 417 writes, 202 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 47669248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 47669248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 47669248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294338560 unmapped: 47652864 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 47644672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 47644672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 47644672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 47611904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 47570944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 47570944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 47570944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294420480 unmapped: 47570944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 501.154937744s of 501.844177246s, submitted: 40
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294510592 unmapped: 47480832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 47472640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294518784 unmapped: 47472640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294526976 unmapped: 47464448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294535168 unmapped: 47456256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294543360 unmapped: 47448064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294551552 unmapped: 47439872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294559744 unmapped: 47431680 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294567936 unmapped: 47423488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294576128 unmapped: 47415296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294584320 unmapped: 47407104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294592512 unmapped: 47398912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294600704 unmapped: 47390720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294608896 unmapped: 47382528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294608896 unmapped: 47382528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294608896 unmapped: 47382528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294617088 unmapped: 47374336 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294625280 unmapped: 47366144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294641664 unmapped: 47349760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 47341568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294658048 unmapped: 47333376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294666240 unmapped: 47325184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294674432 unmapped: 47316992 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294682624 unmapped: 47308800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294690816 unmapped: 47300608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294699008 unmapped: 47292416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294699008 unmapped: 47292416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294707200 unmapped: 47284224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294715392 unmapped: 47276032 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294715392 unmapped: 47276032 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294723584 unmapped: 47267840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294723584 unmapped: 47267840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294723584 unmapped: 47267840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294723584 unmapped: 47267840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294723584 unmapped: 47267840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294723584 unmapped: 47267840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294731776 unmapped: 47259648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294731776 unmapped: 47259648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294731776 unmapped: 47259648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294731776 unmapped: 47259648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294731776 unmapped: 47259648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294731776 unmapped: 47259648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294731776 unmapped: 47259648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294739968 unmapped: 47251456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294739968 unmapped: 47251456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294739968 unmapped: 47251456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294748160 unmapped: 47243264 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294748160 unmapped: 47243264 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294748160 unmapped: 47243264 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294748160 unmapped: 47243264 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294748160 unmapped: 47243264 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294748160 unmapped: 47243264 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294756352 unmapped: 47235072 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294764544 unmapped: 47226880 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 47218688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 47218688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 47218688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 47218688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 47218688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294772736 unmapped: 47218688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294780928 unmapped: 47210496 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294780928 unmapped: 47210496 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294780928 unmapped: 47210496 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294789120 unmapped: 47202304 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 47194112 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 47194112 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 47194112 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294797312 unmapped: 47194112 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294805504 unmapped: 47185920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294805504 unmapped: 47185920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 47177728 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 47177728 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 47177728 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294813696 unmapped: 47177728 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 47169536 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 47169536 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 47169536 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294821888 unmapped: 47169536 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294830080 unmapped: 47161344 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 47153152 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294846464 unmapped: 47144960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294846464 unmapped: 47144960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294846464 unmapped: 47144960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294846464 unmapped: 47144960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 47136768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294854656 unmapped: 47136768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 47120384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 47120384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 47120384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 47120384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 47120384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 47120384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 47112192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 47112192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 47112192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 47112192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 47112192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 47112192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 47112192 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 47104000 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 47104000 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 47104000 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294887424 unmapped: 47104000 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294895616 unmapped: 47095808 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 47087616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 47087616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 47087616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 47087616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 47087616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294903808 unmapped: 47087616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294912000 unmapped: 47079424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 47071232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 47071232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 47071232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 47071232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 47071232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294920192 unmapped: 47071232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 47063040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 47054848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 47054848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 47054848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 47054848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 47054848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 47054848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 47054848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 47054848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 47046656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 47046656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294944768 unmapped: 47046656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294952960 unmapped: 47038464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294952960 unmapped: 47038464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294952960 unmapped: 47038464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 47030272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294961152 unmapped: 47030272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294969344 unmapped: 47022080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294969344 unmapped: 47022080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294969344 unmapped: 47022080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 47013888 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294977536 unmapped: 47013888 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 46997504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 46997504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 46997504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294993920 unmapped: 46997504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 46989312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 46989312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 46989312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 46989312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 46989312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 46989312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295002112 unmapped: 46989312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295010304 unmapped: 46981120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295018496 unmapped: 46972928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295018496 unmapped: 46972928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295018496 unmapped: 46972928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295018496 unmapped: 46972928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295026688 unmapped: 46964736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295026688 unmapped: 46964736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295026688 unmapped: 46964736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 46956544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 46956544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 46956544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 46956544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295034880 unmapped: 46956544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 46948352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 46948352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 46948352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 46948352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 46948352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 46948352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 46948352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295043072 unmapped: 46948352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 46931968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295059456 unmapped: 46931968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295067648 unmapped: 46923776 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295084032 unmapped: 46907392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 46899200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295092224 unmapped: 46899200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 46891008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 46891008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295100416 unmapped: 46891008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 46882816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 46882816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 46882816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 46882816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 46882816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 46882816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295108608 unmapped: 46882816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295124992 unmapped: 46866432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295141376 unmapped: 46850048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295141376 unmapped: 46850048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295141376 unmapped: 46850048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295149568 unmapped: 46841856 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295157760 unmapped: 46833664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295157760 unmapped: 46833664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295157760 unmapped: 46833664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295157760 unmapped: 46833664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295157760 unmapped: 46833664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295157760 unmapped: 46833664 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295165952 unmapped: 46825472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295174144 unmapped: 46817280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295174144 unmapped: 46817280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295174144 unmapped: 46817280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295174144 unmapped: 46817280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295174144 unmapped: 46817280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295174144 unmapped: 46817280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295174144 unmapped: 46817280 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295190528 unmapped: 46800896 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 46792704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 46792704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 46792704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 46792704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295198720 unmapped: 46792704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 46784512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295206912 unmapped: 46784512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 46768128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 46768128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 46768128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 46768128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295223296 unmapped: 46768128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295231488 unmapped: 46759936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295231488 unmapped: 46759936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295231488 unmapped: 46759936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295231488 unmapped: 46759936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295247872 unmapped: 46743552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295256064 unmapped: 46735360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295280640 unmapped: 46710784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295280640 unmapped: 46710784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295280640 unmapped: 46710784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 46702592 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 46702592 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295288832 unmapped: 46702592 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 46694400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23061 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 46694400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 46694400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 46694400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 46694400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 46694400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295297024 unmapped: 46694400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 46686208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 46686208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295305216 unmapped: 46686208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295321600 unmapped: 46669824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295329792 unmapped: 46661632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 46645248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 46645248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 46645248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 46645248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295346176 unmapped: 46645248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295354368 unmapped: 46637056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 46620672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 46620672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 46620672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 46620672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295370752 unmapped: 46620672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295378944 unmapped: 46612480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 46604288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 46604288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 46604288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 46604288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295387136 unmapped: 46604288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 46596096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 46596096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295395328 unmapped: 46596096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 46587904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 46587904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 46587904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 46587904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 46587904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295403520 unmapped: 46587904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 46579712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295411712 unmapped: 46579712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 46571520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 46571520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 46571520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 46571520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295419904 unmapped: 46571520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295436288 unmapped: 46555136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 46546944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 46546944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 46546944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 46546944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295444480 unmapped: 46546944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295452672 unmapped: 46538752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 46530560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 46530560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 46530560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 46530560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295460864 unmapped: 46530560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 46522368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 46522368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295469056 unmapped: 46522368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295477248 unmapped: 46514176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295477248 unmapped: 46514176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 46505984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 46505984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 46505984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 46505984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295485440 unmapped: 46505984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 46497792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 46497792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295493632 unmapped: 46497792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 46489600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 46489600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 46489600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 46489600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295501824 unmapped: 46489600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295510016 unmapped: 46481408 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295518208 unmapped: 46473216 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 46465024 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295526400 unmapped: 46465024 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295534592 unmapped: 46456832 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 46448640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 46448640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 46448640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 46448640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295542784 unmapped: 46448640 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295550976 unmapped: 46440448 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 46432256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 46432256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 46432256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 46432256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295559168 unmapped: 46432256 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295567360 unmapped: 46424064 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 46415872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 46415872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 46415872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 46415872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 46415872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 46415872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295575552 unmapped: 46415872 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295591936 unmapped: 46399488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295591936 unmapped: 46399488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295591936 unmapped: 46399488 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 46391296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 46391296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295600128 unmapped: 46391296 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 46383104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 46383104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 46383104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 46383104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295608320 unmapped: 46383104 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295616512 unmapped: 46374912 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 46366720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 46366720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 46366720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295624704 unmapped: 46366720 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 46358528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 46358528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295632896 unmapped: 46358528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 46342144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 46342144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 46342144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 46342144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295649280 unmapped: 46342144 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 46333952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 46333952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 46333952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295657472 unmapped: 46333952 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 46325760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 46325760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 46325760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 46325760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 46325760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295665664 unmapped: 46325760 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295673856 unmapped: 46317568 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 46309376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 46309376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 46309376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 46309376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 35K writes, 145K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.90 writes per sync, written: 0.14 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a6ca2d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295682048 unmapped: 46309376 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 46301184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 46301184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 46301184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 46301184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 46301184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 46301184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295690240 unmapped: 46301184 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295698432 unmapped: 46292992 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295706624 unmapped: 46284800 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295714816 unmapped: 46276608 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 46268416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 46268416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 46268416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 46268416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 46268416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295723008 unmapped: 46268416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 46260224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 46260224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 46260224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 46260224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 46260224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 46260224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 46260224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295731200 unmapped: 46260224 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 46243840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295747584 unmapped: 46243840 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 46235648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295755776 unmapped: 46235648 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 46227456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 46227456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 46227456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 46227456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 46227456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295763968 unmapped: 46227456 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 46219264 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295772160 unmapped: 46219264 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 46211072 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 46211072 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295780352 unmapped: 46211072 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 46202880 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 46202880 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 46202880 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295788544 unmapped: 46202880 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 46194688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 46194688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 46194688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295796736 unmapped: 46194688 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295804928 unmapped: 46186496 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 46178304 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295813120 unmapped: 46178304 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295821312 unmapped: 46170112 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 46161920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 46161920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 46161920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 46161920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295829504 unmapped: 46161920 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 46153728 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 46153728 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 46153728 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295837696 unmapped: 46153728 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 46145536 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 46145536 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295845888 unmapped: 46145536 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 46137344 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 46137344 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 46137344 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295854080 unmapped: 46137344 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 46120960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 46120960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 46120960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 46120960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 46120960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 46120960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295870464 unmapped: 46120960 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 46112768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 46112768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 46112768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 46112768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 46112768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295878656 unmapped: 46112768 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.833068848s of 600.166870117s, submitted: 90
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 46096384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295895040 unmapped: 46096384 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295919616 unmapped: 46071808 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 46063616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 46063616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 46063616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 46063616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 46063616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 46063616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 46063616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295927808 unmapped: 46063616 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 46055424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 46055424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 46055424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 46055424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 46055424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 46055424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 46055424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 46047232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 46047232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 46047232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 46047232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 46047232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 46047232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 46047232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295944192 unmapped: 46047232 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 46039040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 46039040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 46039040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 46039040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 46039040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 46039040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 46039040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295952384 unmapped: 46039040 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 46030848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 46030848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 46030848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 46030848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 46030848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295960576 unmapped: 46030848 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 46022656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 46022656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 46022656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 46022656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 46022656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 46022656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cda86400 session 0x55a6cb2ab4a0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 46022656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295968768 unmapped: 46022656 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 46014464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 46014464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295976960 unmapped: 46014464 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 46006272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 46006272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 46006272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 46006272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 46006272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 46006272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 46006272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295985152 unmapped: 46006272 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 45998080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 45998080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 45998080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 295993344 unmapped: 45998080 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 45981696 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 45981696 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296009728 unmapped: 45981696 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 45973504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296017920 unmapped: 45973504 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 45965312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 45965312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296026112 unmapped: 45965312 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 45957120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 45957120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296034304 unmapped: 45957120 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 45948928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 45948928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 45948928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 45948928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 45948928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296042496 unmapped: 45948928 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 45940736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 45940736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 45940736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 45940736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 45940736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 45940736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296050688 unmapped: 45940736 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 45932544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 45932544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296058880 unmapped: 45932544 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 45924352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 45924352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 45924352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 45924352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296067072 unmapped: 45924352 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 45916160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 45916160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 45916160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296075264 unmapped: 45916160 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 45907968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 45907968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 45907968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 45907968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 45907968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 45907968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296083456 unmapped: 45907968 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 45891584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 45891584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 45891584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 45891584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 45891584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 45891584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 45891584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296099840 unmapped: 45891584 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 45883392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 45883392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 45883392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 45883392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 45883392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 45883392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 45883392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296108032 unmapped: 45883392 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 45875200 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 45867008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 45867008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 45867008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296124416 unmapped: 45867008 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 45858816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 45858816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296132608 unmapped: 45858816 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 45850624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 45850624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296140800 unmapped: 45850624 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 45842432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296148992 unmapped: 45842432 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 45834240 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 45834240 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 45834240 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 45826048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 45826048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 45826048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 45826048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 45826048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 45826048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 45826048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 45826048 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 45817856 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296173568 unmapped: 45817856 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 45801472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 45801472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 45801472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 45801472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 45801472 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296206336 unmapped: 45785088 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 45768704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 45768704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 45768704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 45768704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296222720 unmapped: 45768704 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296230912 unmapped: 45760512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296230912 unmapped: 45760512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296230912 unmapped: 45760512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296230912 unmapped: 45760512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296230912 unmapped: 45760512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296230912 unmapped: 45760512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 45752320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 45752320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 45752320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296239104 unmapped: 45752320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 45744128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296247296 unmapped: 45744128 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 45735936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 45735936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 45735936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296255488 unmapped: 45735936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296263680 unmapped: 45727744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6d144e000 session 0x55a6cb89d680
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296263680 unmapped: 45727744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 45719552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 45719552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 45719552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 45719552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 45719552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 45719552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 45719552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296271872 unmapped: 45719552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 45711360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 45711360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 45711360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 45711360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296280064 unmapped: 45711360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296288256 unmapped: 45703168 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 45694976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296296448 unmapped: 45694976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 45686784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 45686784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 45686784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 45686784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 45686784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296304640 unmapped: 45686784 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 45670400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 45670400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 45670400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296321024 unmapped: 45670400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 45662208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 45662208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 45662208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 45662208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 45662208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296329216 unmapped: 45662208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 45654016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296337408 unmapped: 45654016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296345600 unmapped: 45645824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 45629440 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 45629440 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 45629440 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 45629440 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296361984 unmapped: 45629440 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 45621248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 45621248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 45621248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 45621248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 45621248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 45621248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 45621248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296370176 unmapped: 45621248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296378368 unmapped: 45613056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: mgrc ms_handle_reset ms_handle_reset con 0x55a6ce181000
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1636168236
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1636168236,v1:192.168.122.100:6801/1636168236]
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: mgrc handle_mgr_configure stats_period=5
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 45522944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 45522944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 45522944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 45522944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296468480 unmapped: 45522944 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cef83400 session 0x55a6cca5c960
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 296476672 unmapped: 45514752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294182912 unmapped: 47808512 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294191104 unmapped: 47800320 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294207488 unmapped: 47783936 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294215680 unmapped: 47775744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294215680 unmapped: 47775744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294215680 unmapped: 47775744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294215680 unmapped: 47775744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294215680 unmapped: 47775744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294215680 unmapped: 47775744 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294223872 unmapped: 47767552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294223872 unmapped: 47767552 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294232064 unmapped: 47759360 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cc9eac00 session 0x55a6cca5c1e0
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294248448 unmapped: 47742976 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294264832 unmapped: 47726592 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294273024 unmapped: 47718400 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 47710208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 47710208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 47710208 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 47702016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 47702016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 47702016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 47702016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294289408 unmapped: 47702016 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294297600 unmapped: 47693824 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294305792 unmapped: 47685632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294305792 unmapped: 47685632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294305792 unmapped: 47685632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294305792 unmapped: 47685632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294305792 unmapped: 47685632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294305792 unmapped: 47685632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294305792 unmapped: 47685632 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294313984 unmapped: 47677440 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 47669248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 47669248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 47669248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294322176 unmapped: 47669248 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294330368 unmapped: 47661056 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294338560 unmapped: 47652864 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294338560 unmapped: 47652864 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294338560 unmapped: 47652864 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294338560 unmapped: 47652864 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294338560 unmapped: 47652864 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294338560 unmapped: 47652864 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294338560 unmapped: 47652864 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 47644672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 47644672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 47644672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 47644672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 47644672 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 47636480 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294363136 unmapped: 47628288 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294371328 unmapped: 47620096 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 47611904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 47611904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 47611904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 47611904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 47603712 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 47595520 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294404096 unmapped: 47587328 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294412288 unmapped: 47579136 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294428672 unmapped: 47562752 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294436864 unmapped: 47554560 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294461440 unmapped: 47529984 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 ms_handle_reset con 0x55a6cdadb400 session 0x55a6cca4f860
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294445056 unmapped: 47546368 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294453248 unmapped: 47538176 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294469632 unmapped: 47521792 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ed1ef000/0x0/0x4ffc00000, data 0x17473a1/0x18ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1110f9c6), peers [0,1] op hist [])
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: bluestore.MempoolThread(0x55a6ca3afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137035 data_alloc: 218103808 data_used: 9777152
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294477824 unmapped: 47513600 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 294608896 unmapped: 47382528 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: do_command 'config diff' '{prefix=config diff}'
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: do_command 'config show' '{prefix=config show}'
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293888000 unmapped: 48103424 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293675008 unmapped: 48316416 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: prioritycache tune_memory target: 4294967296 mapped: 293355520 unmapped: 48635904 heap: 341991424 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:26 np0005532048 ceph-osd[90703]: do_command 'log dump' '{prefix=log dump}'
Nov 22 05:22:26 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 22 05:22:26 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/454304081' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 05:22:26 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:26.963+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:26 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:26 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:27 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23065 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:22:27 np0005532048 nova_compute[253661]: 2025-11-22 10:22:27.228 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:22:27 np0005532048 nova_compute[253661]: 2025-11-22 10:22:27.255 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:22:27 np0005532048 nova_compute[253661]: 2025-11-22 10:22:27.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:22:27 np0005532048 nova_compute[253661]: 2025-11-22 10:22:27.256 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:22:27 np0005532048 nova_compute[253661]: 2025-11-22 10:22:27.256 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 22 05:22:27 np0005532048 nova_compute[253661]: 2025-11-22 10:22:27.257 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:22:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 22 05:22:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2457181493' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 05:22:27 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3623: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:22:27 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23069 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:22:27 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:22:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2040159002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:22:27 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 05:22:27 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1787959640' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 05:22:27 np0005532048 nova_compute[253661]: 2025-11-22 10:22:27.750 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:22:27 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23075 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 05:22:27 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:27.961+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:27 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:27 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:27 np0005532048 nova_compute[253661]: 2025-11-22 10:22:27.961 253665 WARNING nova.virt.libvirt.driver [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 22 05:22:27 np0005532048 nova_compute[253661]: 2025-11-22 10:22:27.962 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3293MB free_disk=59.94279479980469GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 22 05:22:27 np0005532048 nova_compute[253661]: 2025-11-22 10:22:27.962 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:22:27 np0005532048 nova_compute[253661]: 2025-11-22 10:22:27.963 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:22:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:22:28.033 162862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 22 05:22:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:22:28.033 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 22 05:22:28 np0005532048 ovn_metadata_agent[162856]: 2025-11-22 10:22:28.033 162862 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:22:28 np0005532048 nova_compute[253661]: 2025-11-22 10:22:28.130 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 22 05:22:28 np0005532048 nova_compute[253661]: 2025-11-22 10:22:28.131 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 22 05:22:28 np0005532048 nova_compute[253661]: 2025-11-22 10:22:28.190 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing inventories for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 22 05:22:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 22 05:22:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2357294411' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 05:22:28 np0005532048 nova_compute[253661]: 2025-11-22 10:22:28.265 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating ProviderTree inventory for provider f0c5987a-d277-4022-aba2-19e7fecb4518 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 22 05:22:28 np0005532048 nova_compute[253661]: 2025-11-22 10:22:28.265 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Updating inventory in ProviderTree for provider f0c5987a-d277-4022-aba2-19e7fecb4518 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 22 05:22:28 np0005532048 nova_compute[253661]: 2025-11-22 10:22:28.286 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing aggregate associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 22 05:22:28 np0005532048 nova_compute[253661]: 2025-11-22 10:22:28.312 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Refreshing trait associations for resource provider f0c5987a-d277-4022-aba2-19e7fecb4518, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 22 05:22:28 np0005532048 nova_compute[253661]: 2025-11-22 10:22:28.328 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 22 05:22:28 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23079 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 05:22:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 22 05:22:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3669056139' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 22 05:22:28 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:28 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:22:28 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1741712470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:22:28 np0005532048 nova_compute[253661]: 2025-11-22 10:22:28.851 253665 DEBUG oslo_concurrency.processutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 22 05:22:28 np0005532048 nova_compute[253661]: 2025-11-22 10:22:28.860 253665 DEBUG nova.compute.provider_tree [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed in ProviderTree for provider: f0c5987a-d277-4022-aba2-19e7fecb4518 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 22 05:22:28 np0005532048 nova_compute[253661]: 2025-11-22 10:22:28.879 253665 DEBUG nova.scheduler.client.report [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Inventory has not changed for provider f0c5987a-d277-4022-aba2-19e7fecb4518 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 22 05:22:28 np0005532048 nova_compute[253661]: 2025-11-22 10:22:28.883 253665 DEBUG nova.compute.resource_tracker [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 22 05:22:28 np0005532048 nova_compute[253661]: 2025-11-22 10:22:28.887 253665 DEBUG oslo_concurrency.lockutils [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 22 05:22:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:28.998+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:29 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:29 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:29 np0005532048 ceph-mgr[75315]: log_channel(audit) log [DBG] : from='client.23089 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 05:22:29 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-mgr-compute-0-ldbkey[75311]: 2025-11-22T10:22:29.334+0000 7f9e5f8d3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 05:22:29 np0005532048 ceph-mgr[75315]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 05:22:29 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3624: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:22:29 np0005532048 nova_compute[253661]: 2025-11-22 10:22:29.556 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:22:29 np0005532048 ceph-mon[75021]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 1924 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:22:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:22:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 22 05:22:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3995914162' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 22 05:22:29 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:29 np0005532048 ceph-mon[75021]: Health check update: 10 slow ops, oldest one blocked for 1924 sec, osd.1 has slow ops (SLOW_OPS)
Nov 22 05:22:29 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 22 05:22:29 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/466885523' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 22 05:22:29 np0005532048 nova_compute[253661]: 2025-11-22 10:22:29.880 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:22:29 np0005532048 nova_compute[253661]: 2025-11-22 10:22:29.881 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:22:29 np0005532048 nova_compute[253661]: 2025-11-22 10:22:29.881 253665 DEBUG oslo_service.periodic_task [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 22 05:22:29 np0005532048 nova_compute[253661]: 2025-11-22 10:22:29.881 253665 DEBUG nova.compute.manager [None req-562bddeb-c8d8-4f99-97de-7d53b4784d15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 22 05:22:30 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:30.000+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:30 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:30 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 22 05:22:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/882830295' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 22 05:22:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 22 05:22:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2813742188' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 22 05:22:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 22 05:22:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/260563121' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 22 05:22:30 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:30 np0005532048 nova_compute[253661]: 2025-11-22 10:22:30.712 253665 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 22 05:22:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 22 05:22:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2817642346' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 22 05:22:30 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 22 05:22:30 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4136342949' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: 2025-11-22T10:22:31.029+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.22769.0:10 5.1a 5:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 22 05:22:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/424716433' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 22 05:22:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 22 05:22:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1811321850' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 22 05:22:31 np0005532048 ceph-mgr[75315]: log_channel(cluster) log [DBG] : pgmap v3625: 305 pgs: 1 active+clean+laggy, 304 active+clean; 196 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Nov 22 05:22:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 22 05:22:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3446227911' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 22 05:22:31 np0005532048 ceph-mon[75021]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'images' : 10 ])
Nov 22 05:22:31 np0005532048 ceph-mon[75021]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 22 05:22:31 np0005532048 ceph-mon[75021]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2160580551' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368566272 unmapped: 48791552 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9994> 2025-11-22T10:08:30.949+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368574464 unmapped: 48783360 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9983> 2025-11-22T10:08:31.946+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,3,1,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 48775168 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9971> 2025-11-22T10:08:32.983+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368590848 unmapped: 48766976 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9957> 2025-11-22T10:08:33.962+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368590848 unmapped: 48766976 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9946> 2025-11-22T10:08:34.921+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368590848 unmapped: 48766976 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9935> 2025-11-22T10:08:35.958+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368590848 unmapped: 48766976 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9924> 2025-11-22T10:08:36.928+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,3,1,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 48758784 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9912> 2025-11-22T10:08:37.902+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,3,1,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 48758784 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9900> 2025-11-22T10:08:38.858+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,3,1,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 48758784 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9885> 2025-11-22T10:08:39.859+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 48758784 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9874> 2025-11-22T10:08:40.845+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9863> 2025-11-22T10:08:41.815+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,3,1,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9850> 2025-11-22T10:08:42.772+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9838> 2025-11-22T10:08:43.817+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9825> 2025-11-22T10:08:44.787+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9813> 2025-11-22T10:08:45.779+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9802> 2025-11-22T10:08:46.804+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9791> 2025-11-22T10:08:47.768+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9780> 2025-11-22T10:08:48.760+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9765> 2025-11-22T10:08:49.737+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9754> 2025-11-22T10:08:50.702+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9743> 2025-11-22T10:08:51.708+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368623616 unmapped: 48734208 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9734> 2025-11-22T10:08:52.666+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 48726016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9721> 2025-11-22T10:08:53.703+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 48726016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9708> 2025-11-22T10:08:54.682+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 48726016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9697> 2025-11-22T10:08:55.680+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 48726016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9686> 2025-11-22T10:08:56.676+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368631808 unmapped: 48726016 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9675> 2025-11-22T10:08:57.689+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 48701440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9663> 2025-11-22T10:08:58.676+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 48701440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9649> 2025-11-22T10:08:59.641+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 48701440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9638> 2025-11-22T10:09:00.675+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 48701440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9626> 2025-11-22T10:09:01.719+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 48701440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9615> 2025-11-22T10:09:02.695+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 48701440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9603> 2025-11-22T10:09:03.713+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 48701440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9589> 2025-11-22T10:09:04.687+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 48701440 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9578> 2025-11-22T10:09:05.658+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368672768 unmapped: 48685056 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9566> 2025-11-22T10:09:06.684+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368672768 unmapped: 48685056 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9555> 2025-11-22T10:09:07.704+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368672768 unmapped: 48685056 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9544> 2025-11-22T10:09:08.689+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368672768 unmapped: 48685056 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9529> 2025-11-22T10:09:09.711+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 48676864 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9518> 2025-11-22T10:09:10.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 48676864 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9507> 2025-11-22T10:09:11.678+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 48676864 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9496> 2025-11-22T10:09:12.711+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 48676864 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9485> 2025-11-22T10:09:13.730+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 48660480 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9470> 2025-11-22T10:09:14.709+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 48660480 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9459> 2025-11-22T10:09:15.714+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 48660480 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9448> 2025-11-22T10:09:16.688+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 48660480 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9436> 2025-11-22T10:09:17.669+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 48660480 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9425> 2025-11-22T10:09:18.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 48660480 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9411> 2025-11-22T10:09:19.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 48660480 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9400> 2025-11-22T10:09:20.688+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 48660480 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9389> 2025-11-22T10:09:21.669+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 48635904 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9377> 2025-11-22T10:09:22.647+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 48627712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9365> 2025-11-22T10:09:23.606+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 48627712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9350> 2025-11-22T10:09:24.636+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 48627712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9339> 2025-11-22T10:09:25.625+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 48627712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9327> 2025-11-22T10:09:26.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 48627712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9316> 2025-11-22T10:09:27.609+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 48627712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9302> 2025-11-22T10:09:28.613+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 48627712 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9288> 2025-11-22T10:09:29.625+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368738304 unmapped: 48619520 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9285> 2025-11-22T10:09:30.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368738304 unmapped: 48619520 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9269> 2025-11-22T10:09:31.613+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 48611328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9259> 2025-11-22T10:09:32.592+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 48611328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9247> 2025-11-22T10:09:33.609+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 48611328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9230> 2025-11-22T10:09:34.652+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 48611328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9221> 2025-11-22T10:09:35.613+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 48611328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9215> 2025-11-22T10:09:36.590+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 48611328 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9199> 2025-11-22T10:09:37.594+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 48594944 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9190> 2025-11-22T10:09:38.600+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 48594944 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9182> 2025-11-22T10:09:39.589+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 48594944 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9166> 2025-11-22T10:09:40.606+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 48594944 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9163> 2025-11-22T10:09:41.586+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 48594944 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9149> 2025-11-22T10:09:42.546+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 48578560 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9138> 2025-11-22T10:09:43.556+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 48578560 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9124> 2025-11-22T10:09:44.514+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 48578560 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9112> 2025-11-22T10:09:45.493+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 48570368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9101> 2025-11-22T10:09:46.454+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 48570368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9089> 2025-11-22T10:09:47.481+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 48570368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9078> 2025-11-22T10:09:48.505+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 48570368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9064> 2025-11-22T10:09:49.531+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 48570368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,4,1,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9052> 2025-11-22T10:09:50.558+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 48570368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9041> 2025-11-22T10:09:51.588+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 48570368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9027> 2025-11-22T10:09:52.615+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 48570368 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9018> 2025-11-22T10:09:53.634+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368803840 unmapped: 48553984 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -9004> 2025-11-22T10:09:54.627+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 48545792 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8993> 2025-11-22T10:09:55.660+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 48545792 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8981> 2025-11-22T10:09:56.708+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 48545792 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8970> 2025-11-22T10:09:57.699+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 48545792 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8959> 2025-11-22T10:09:58.691+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 48545792 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8944> 2025-11-22T10:09:59.656+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 48545792 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8933> 2025-11-22T10:10:00.696+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 48545792 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8921> 2025-11-22T10:10:01.705+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 48529408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8910> 2025-11-22T10:10:02.748+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 48529408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8899> 2025-11-22T10:10:03.698+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 48529408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8884> 2025-11-22T10:10:04.723+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 48529408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8872> 2025-11-22T10:10:05.678+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 48529408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8861> 2025-11-22T10:10:06.644+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 48529408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8850> 2025-11-22T10:10:07.680+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 48529408 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8839> 2025-11-22T10:10:08.708+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368844800 unmapped: 48513024 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8825> 2025-11-22T10:10:09.676+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368844800 unmapped: 48513024 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8813> 2025-11-22T10:10:10.717+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368844800 unmapped: 48513024 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8801> 2025-11-22T10:10:11.763+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368844800 unmapped: 48513024 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8790> 2025-11-22T10:10:12.755+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368844800 unmapped: 48513024 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8779> 2025-11-22T10:10:13.792+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368844800 unmapped: 48513024 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8765> 2025-11-22T10:10:14.744+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368852992 unmapped: 48504832 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8754> 2025-11-22T10:10:15.712+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368852992 unmapped: 48504832 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8743> 2025-11-22T10:10:16.688+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368861184 unmapped: 48496640 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8731> 2025-11-22T10:10:17.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368861184 unmapped: 48496640 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8719> 2025-11-22T10:10:18.664+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368877568 unmapped: 48480256 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8705> 2025-11-22T10:10:19.690+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368877568 unmapped: 48480256 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8694> 2025-11-22T10:10:20.733+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368877568 unmapped: 48480256 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8683> 2025-11-22T10:10:21.715+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368877568 unmapped: 48480256 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8672> 2025-11-22T10:10:22.676+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368877568 unmapped: 48480256 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8659> 2025-11-22T10:10:23.672+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368877568 unmapped: 48480256 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8645> 2025-11-22T10:10:24.720+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368885760 unmapped: 48472064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8633> 2025-11-22T10:10:25.728+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368885760 unmapped: 48472064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8622> 2025-11-22T10:10:26.731+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368885760 unmapped: 48472064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8611> 2025-11-22T10:10:27.780+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368885760 unmapped: 48472064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8600> 2025-11-22T10:10:28.817+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368885760 unmapped: 48472064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8585> 2025-11-22T10:10:29.794+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368885760 unmapped: 48472064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8574> 2025-11-22T10:10:30.843+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368885760 unmapped: 48472064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8563> 2025-11-22T10:10:31.803+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368885760 unmapped: 48472064 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8550> 2025-11-22T10:10:32.847+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368910336 unmapped: 48447488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8539> 2025-11-22T10:10:33.834+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368910336 unmapped: 48447488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8525> 2025-11-22T10:10:34.848+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368910336 unmapped: 48447488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8514> 2025-11-22T10:10:35.880+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368910336 unmapped: 48447488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8503> 2025-11-22T10:10:36.831+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368910336 unmapped: 48447488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8490> 2025-11-22T10:10:37.791+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368910336 unmapped: 48447488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8479> 2025-11-22T10:10:38.831+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368910336 unmapped: 48447488 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8463> 2025-11-22T10:10:39.870+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368918528 unmapped: 48439296 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8452> 2025-11-22T10:10:40.840+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368926720 unmapped: 48431104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8440> 2025-11-22T10:10:41.793+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368926720 unmapped: 48431104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8429> 2025-11-22T10:10:42.777+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368926720 unmapped: 48431104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8416> 2025-11-22T10:10:43.816+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368926720 unmapped: 48431104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8402> 2025-11-22T10:10:44.860+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368926720 unmapped: 48431104 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8390> 2025-11-22T10:10:45.877+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368934912 unmapped: 48422912 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8379> 2025-11-22T10:10:46.832+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368934912 unmapped: 48422912 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8370> 2025-11-22T10:10:47.818+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368943104 unmapped: 48414720 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8359> 2025-11-22T10:10:48.812+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 48406528 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8343> 2025-11-22T10:10:49.851+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 48406528 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8331> 2025-11-22T10:10:50.874+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 48406528 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8320> 2025-11-22T10:10:51.891+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 48406528 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8309> 2025-11-22T10:10:52.922+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 48406528 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8297> 2025-11-22T10:10:53.950+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 48406528 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8283> 2025-11-22T10:10:54.925+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 48406528 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8272> 2025-11-22T10:10:55.939+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368951296 unmapped: 48406528 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8260> 2025-11-22T10:10:56.920+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 48390144 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8249> 2025-11-22T10:10:57.931+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 48390144 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,2,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8239> 2025-11-22T10:10:58.890+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368984064 unmapped: 48373760 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8225> 2025-11-22T10:10:59.856+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368984064 unmapped: 48373760 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8214> 2025-11-22T10:11:00.834+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368984064 unmapped: 48373760 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8203> 2025-11-22T10:11:01.818+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368984064 unmapped: 48373760 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8191> 2025-11-22T10:11:02.801+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368984064 unmapped: 48373760 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8180> 2025-11-22T10:11:03.805+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 368984064 unmapped: 48373760 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8165> 2025-11-22T10:11:04.840+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369000448 unmapped: 48357376 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8154> 2025-11-22T10:11:05.825+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369000448 unmapped: 48357376 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8143> 2025-11-22T10:11:06.871+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369000448 unmapped: 48357376 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8132> 2025-11-22T10:11:07.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369000448 unmapped: 48357376 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8120> 2025-11-22T10:11:08.855+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369008640 unmapped: 48349184 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8106> 2025-11-22T10:11:09.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369008640 unmapped: 48349184 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8095> 2025-11-22T10:11:10.873+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369008640 unmapped: 48349184 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8084> 2025-11-22T10:11:11.858+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369008640 unmapped: 48349184 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8073> 2025-11-22T10:11:12.836+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369016832 unmapped: 48340992 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8062> 2025-11-22T10:11:13.812+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369016832 unmapped: 48340992 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8047> 2025-11-22T10:11:14.811+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369016832 unmapped: 48340992 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8036> 2025-11-22T10:11:15.799+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369016832 unmapped: 48340992 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8025> 2025-11-22T10:11:16.807+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369025024 unmapped: 48332800 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8014> 2025-11-22T10:11:17.841+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369025024 unmapped: 48332800 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -8003> 2025-11-22T10:11:18.876+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369025024 unmapped: 48332800 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7989> 2025-11-22T10:11:19.863+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369025024 unmapped: 48332800 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7977> 2025-11-22T10:11:20.872+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369041408 unmapped: 48316416 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7965> 2025-11-22T10:11:21.891+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369041408 unmapped: 48316416 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7953> 2025-11-22T10:11:22.890+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369041408 unmapped: 48316416 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7942> 2025-11-22T10:11:23.924+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369049600 unmapped: 48308224 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7928> 2025-11-22T10:11:24.941+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369057792 unmapped: 48300032 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7916> 2025-11-22T10:11:25.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369057792 unmapped: 48300032 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7905> 2025-11-22T10:11:27.016+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369057792 unmapped: 48300032 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7894> 2025-11-22T10:11:28.046+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369057792 unmapped: 48300032 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7882> 2025-11-22T10:11:29.076+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369074176 unmapped: 48283648 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7868> 2025-11-22T10:11:30.107+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369074176 unmapped: 48283648 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7857> 2025-11-22T10:11:31.144+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369074176 unmapped: 48283648 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7846> 2025-11-22T10:11:32.189+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369074176 unmapped: 48283648 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7835> 2025-11-22T10:11:33.145+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369074176 unmapped: 48283648 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7823> 2025-11-22T10:11:34.175+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369074176 unmapped: 48283648 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7809> 2025-11-22T10:11:35.161+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369074176 unmapped: 48283648 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7798> 2025-11-22T10:11:36.162+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369074176 unmapped: 48283648 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7787> 2025-11-22T10:11:37.137+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369090560 unmapped: 48267264 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7776> 2025-11-22T10:11:38.176+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369098752 unmapped: 48259072 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7764> 2025-11-22T10:11:39.133+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369098752 unmapped: 48259072 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7750> 2025-11-22T10:11:40.105+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369098752 unmapped: 48259072 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7738> 2025-11-22T10:11:41.079+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369098752 unmapped: 48259072 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7726> 2025-11-22T10:11:42.102+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369098752 unmapped: 48259072 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7715> 2025-11-22T10:11:43.065+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369098752 unmapped: 48259072 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7704> 2025-11-22T10:11:44.094+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369098752 unmapped: 48259072 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7689> 2025-11-22T10:11:45.051+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369106944 unmapped: 48250880 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7677> 2025-11-22T10:11:46.044+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369115136 unmapped: 48242688 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7666> 2025-11-22T10:11:47.031+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369115136 unmapped: 48242688 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7655> 2025-11-22T10:11:47.988+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369123328 unmapped: 48234496 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7643> 2025-11-22T10:11:48.954+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369131520 unmapped: 48226304 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7628> 2025-11-22T10:11:49.932+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369131520 unmapped: 48226304 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7617> 2025-11-22T10:11:50.900+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369131520 unmapped: 48226304 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7605> 2025-11-22T10:11:51.904+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369131520 unmapped: 48226304 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7594> 2025-11-22T10:11:52.890+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 48218112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7582> 2025-11-22T10:11:53.860+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 48218112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7567> 2025-11-22T10:11:54.886+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 48218112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7556> 2025-11-22T10:11:55.916+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 48218112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7544> 2025-11-22T10:11:56.951+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 48218112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7531> 2025-11-22T10:11:57.982+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 48218112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7520> 2025-11-22T10:11:58.984+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 48218112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7505> 2025-11-22T10:12:00.011+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 48218112 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7494> 2025-11-22T10:12:00.978+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369164288 unmapped: 48193536 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7482> 2025-11-22T10:12:01.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369164288 unmapped: 48193536 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7471> 2025-11-22T10:12:02.971+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369164288 unmapped: 48193536 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7460> 2025-11-22T10:12:03.985+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369164288 unmapped: 48193536 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7446> 2025-11-22T10:12:04.959+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369164288 unmapped: 48193536 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7434> 2025-11-22T10:12:05.967+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369164288 unmapped: 48193536 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7423> 2025-11-22T10:12:06.961+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369164288 unmapped: 48193536 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7412> 2025-11-22T10:12:07.984+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369172480 unmapped: 48185344 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7401> 2025-11-22T10:12:09.030+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369188864 unmapped: 48168960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7387> 2025-11-22T10:12:10.007+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369188864 unmapped: 48168960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7376> 2025-11-22T10:12:10.987+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369188864 unmapped: 48168960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7364> 2025-11-22T10:12:11.979+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369188864 unmapped: 48168960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7353> 2025-11-22T10:12:13.022+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369188864 unmapped: 48168960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7342> 2025-11-22T10:12:13.977+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369188864 unmapped: 48168960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7327> 2025-11-22T10:12:14.979+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369188864 unmapped: 48168960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7316> 2025-11-22T10:12:15.934+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369188864 unmapped: 48168960 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7304> 2025-11-22T10:12:16.941+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369197056 unmapped: 48160768 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7295> 2025-11-22T10:12:17.909+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369197056 unmapped: 48160768 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7281> 2025-11-22T10:12:18.958+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369197056 unmapped: 48160768 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7267> 2025-11-22T10:12:19.992+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369197056 unmapped: 48160768 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7258> 2025-11-22T10:12:20.948+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369197056 unmapped: 48160768 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7247> 2025-11-22T10:12:21.983+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369197056 unmapped: 48160768 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7236> 2025-11-22T10:12:22.952+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369197056 unmapped: 48160768 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7225> 2025-11-22T10:12:23.989+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369197056 unmapped: 48160768 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7210> 2025-11-22T10:12:24.973+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369221632 unmapped: 48136192 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7199> 2025-11-22T10:12:25.942+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369221632 unmapped: 48136192 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7188> 2025-11-22T10:12:26.908+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369229824 unmapped: 48128000 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7176> 2025-11-22T10:12:27.876+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369229824 unmapped: 48128000 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7165> 2025-11-22T10:12:28.838+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369229824 unmapped: 48128000 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7151> 2025-11-22T10:12:29.882+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369229824 unmapped: 48128000 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7140> 2025-11-22T10:12:30.917+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369229824 unmapped: 48128000 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7128> 2025-11-22T10:12:31.957+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369229824 unmapped: 48128000 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7117> 2025-11-22T10:12:32.965+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369246208 unmapped: 48111616 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7106> 2025-11-22T10:12:33.932+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369246208 unmapped: 48111616 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7092> 2025-11-22T10:12:34.976+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369246208 unmapped: 48111616 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7080> 2025-11-22T10:12:35.951+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 48103424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7069> 2025-11-22T10:12:36.983+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 48103424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7058> 2025-11-22T10:12:37.948+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 48103424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7047> 2025-11-22T10:12:38.933+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 48103424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7033> 2025-11-22T10:12:39.904+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 48103424 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7021> 2025-11-22T10:12:40.861+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369270784 unmapped: 48087040 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -7009> 2025-11-22T10:12:41.863+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369270784 unmapped: 48087040 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6998> 2025-11-22T10:12:42.867+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369270784 unmapped: 48087040 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6987> 2025-11-22T10:12:43.846+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6977> 2025-11-22T10:12:44.825+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369270784 unmapped: 48087040 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6963> 2025-11-22T10:12:45.844+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369270784 unmapped: 48087040 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6951> 2025-11-22T10:12:46.812+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369270784 unmapped: 48087040 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6940> 2025-11-22T10:12:47.773+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369270784 unmapped: 48087040 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6928> 2025-11-22T10:12:48.782+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369270784 unmapped: 48087040 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6917> 2025-11-22T10:12:49.804+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369287168 unmapped: 48070656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6903> 2025-11-22T10:12:50.755+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369287168 unmapped: 48070656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6892> 2025-11-22T10:12:51.763+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369287168 unmapped: 48070656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6881> 2025-11-22T10:12:52.799+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369287168 unmapped: 48070656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6870> 2025-11-22T10:12:53.822+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369287168 unmapped: 48070656 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6858> 2025-11-22T10:12:54.860+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369295360 unmapped: 48062464 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369295360 unmapped: 48062464 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6843> 2025-11-22T10:12:55.905+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6834> 2025-11-22T10:12:56.875+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369295360 unmapped: 48062464 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6823> 2025-11-22T10:12:57.841+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369303552 unmapped: 48054272 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6810> 2025-11-22T10:12:58.869+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369303552 unmapped: 48054272 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6799> 2025-11-22T10:12:59.829+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369303552 unmapped: 48054272 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6785> 2025-11-22T10:13:00.847+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369303552 unmapped: 48054272 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6774> 2025-11-22T10:13:01.833+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369303552 unmapped: 48054272 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,4,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6762> 2025-11-22T10:13:02.793+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369303552 unmapped: 48054272 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6751> 2025-11-22T10:13:03.831+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369303552 unmapped: 48054272 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6740> 2025-11-22T10:13:04.874+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 48046080 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6726> 2025-11-22T10:13:05.846+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369319936 unmapped: 48037888 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6715> 2025-11-22T10:13:06.831+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369319936 unmapped: 48037888 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6703> 2025-11-22T10:13:07.801+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369328128 unmapped: 48029696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6691> 2025-11-22T10:13:08.754+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369328128 unmapped: 48029696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6680> 2025-11-22T10:13:09.766+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369328128 unmapped: 48029696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6666> 2025-11-22T10:13:10.780+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369328128 unmapped: 48029696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6655> 2025-11-22T10:13:11.798+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369328128 unmapped: 48029696 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6643> 2025-11-22T10:13:12.805+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369336320 unmapped: 48021504 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6632> 2025-11-22T10:13:13.771+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369352704 unmapped: 48005120 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6621> 2025-11-22T10:13:14.773+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369352704 unmapped: 48005120 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6606> 2025-11-22T10:13:15.767+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369352704 unmapped: 48005120 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6595> 2025-11-22T10:13:16.730+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369352704 unmapped: 48005120 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6584> 2025-11-22T10:13:17.712+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369360896 unmapped: 47996928 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6572> 2025-11-22T10:13:18.685+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369360896 unmapped: 47996928 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6561> 2025-11-22T10:13:19.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369360896 unmapped: 47996928 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6546> 2025-11-22T10:13:20.662+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369360896 unmapped: 47996928 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6534> 2025-11-22T10:13:21.707+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369369088 unmapped: 47988736 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6522> 2025-11-22T10:13:22.714+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369369088 unmapped: 47988736 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6510> 2025-11-22T10:13:23.693+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369369088 unmapped: 47988736 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6499> 2025-11-22T10:13:24.706+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369369088 unmapped: 47988736 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: bluestore.MempoolThread(0x561381b0fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112263 data_alloc: 218103808 data_used: 32477184
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6485> 2025-11-22T10:13:25.662+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369369088 unmapped: 47988736 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
Nov 22 05:22:31 np0005532048 ceph-34829716-a12c-57a6-8915-c1aa615c9d8a-osd-1[89675]: -6473> 2025-11-22T10:13:26.695+0000 7fa6980a3640 -1 osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.22503.0:89 5.1a 5:5f9a99fb:::rbd_data.57e59e23bf2c.000000000000000f:head [call rbd.sparse_copyup in=2277688b,set-alloc-hint object_size 4194304 write_size 4194304,write 0~0] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e316)
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'images' : 7 ])
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: prioritycache tune_memory target: 4294967296 mapped: 369369088 unmapped: 47988736 heap: 417357824 old mem: 2845415832 new mem: 2845415832
Nov 22 05:22:31 np0005532048 ceph-osd[89679]: osd.1 316 heartbeat osd_stat(store_statfs(0x4e5a3e000/0x0/0x4ffc00000, data 0x4c45928/0x4b90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,1])
